Data Braider
Data Braider is a No-Code API creation tool for Microsoft Dynamics 365 Business Central. It allows you to define custom API endpoints for reading and writing data.
Key Definitions
When working with Data Braider, it is important to understand the core concepts of REST APIs and calling into the Business Central API.
[link to Business Central API docs]
Within Business Central, each time you call Data Braider, you are calling one of two Business Central API endpoints:
- /read – this endpoint is the path for all READ calls to Braider.
- /write - this endpoint is the path for all WRITE calls to Braider.
Within the API calls, you call an individual ‘endpoint’ within Braider. This is a user-defined name for each one. Each endpoint you define has various options, which will be covered more in the setup sections, but at the core:
- Name
- Which tables (and how they connect)
- Which fields (and what filters you want)
- What type of endpoint
- The JSON formatting of the response
Endpoint Types available are as follows:
- Read - this endpoint is for reading data from BC only.
- Write Per-Record – this endpoint is for putting information into BC one record at a time.
- Write Batch – this endpoint is for putting information into BC as a single transaction.
The difference between Per-Record and Batch is that in Batch mode, all data must be valid for the whole transaction to complete, while Per-Record allows for graceful error handling – a single issue won’t prevent the rest of the information from going through.
Why Braider?
Typically, with Business Central, if you need to have third-party access to your data via API, and there is no existing API for the data you want to share, you must have development assistance to create your own API for your partner.
Data Braider allows any authorized user the ability to grant access to the information without the need for developer intervention. Additionally, small changes are often possible with just a few clicks. In other words, Data Braider facilitates full control over access to your information.
Basic Setup
At a minimum, to start using Braider, the only key step is configuring a Data Braider Endpoint.
Configuring a Read Endpoint
To get started with creating a new Endpoint, go to the Data Braider API Endpoints list. Choose the New action. This will open the API Endpoint Card. You need to start with the General information:
- Code – the unique ID you want to use for this endpoint
- Description – A note field for you to describe the intended purpose
- The JSON formatting of the response
- What type of Endpoint (Read)
1. For the first example, we will make a new endpoint for accessing Vendor information. This will be called the VEND endpoint. For our integration, we will use the Flat style of Output JSON Type. Our resulting endpoint should look like this:
2. Now that the Endpoint is defined, we add which data should be made available in the Endpoint Line section:
3. For each Endpoint, you can specify one or more Tables to pull information from. In this example, we will use only one, the Vendor table. In the Source Table field, click the lookup to locate the Vendor table. This will show how many possible fields (5) we could choose to include in the data:
4. Once the table has been selected, you must choose which fields to include by choosing the Field action (6):
5. This will open the Field list to select which fields to mark as Included (7):
6. We will select the following fields:
- No.
- Name
- Address
- City
- Phone No.
- Currency Code

When you select your fields and close the Fields window, you should see the correct number of Included Fields (8) for the table:
7. Now we have a fully configured basic read endpoint. We can preview what our third party will get as a response by selecting the Preview JSON action (9):
8. This will open a new window showing the results of calling the Endpoint:
Configuring a Write Endpoint
To get started with creating a new Endpoint, go to the Data Braider API Endpoints list. Choose the New action. This will open the API Endpoint Card. You need to start with the General information:
- Code – the unique ID you want to use for this endpoint.
- Description – A note field for you to describe the intended purpose.
- The JSON formatting of the response.
- What type of Endpoint (Write Per Record).
1. For the second example, we will make a new endpoint for accessing Customer information. This will be called the CUST endpoint. For our integration, we will use the Flat style of Output JSON Type. Our resulting endpoint should look like the following:
2. Now that the Endpoint is defined, we add which data should be made available in the Endpoint Line section:
3. For each Endpoint, you can specify one or more Tables to pull information from. In this example, we will use only one, the Customer table. In the Source Table field, click the lookup to locate the Customer table. This will show how many possible fields (5) we could choose to include in the data:
4. Once the table has been selected, you must choose which fields to include by choosing the Field action (6):
5. This will open the Field list to select which fields to mark as Included (7):
6. We will select the following fields:
- No.
- Name
- Address
- City
- Document Sending Profile

When you select your fields, you must also select the Write Fields (8). Please note that if you do not select the Write Fields, access will be Read Only:
7. Then close the Fields window; you should see the correct number of Included Fields (8) for the table:
8. Now we move to the Write-Back Endpoint Settings, where we choose the allowed actions, which include:
- Require Entire Primary Key
- Insert Allowed
- Modify Allowed
- Delete Allowed

These settings define what the user can and cannot do regarding the information displayed. For example, a user may be able to Insert and Modify, but not be allowed to Delete:
9. Now we have a fully configured basic write endpoint. We can preview what our third party will get as a response by selecting the Preview JSON action (9):
10. This will open a new window showing the results of calling the Endpoint:
Consuming a Read Endpoint Using Postman
Postman is an API Platform/Tool, allowing you to call API endpoints. When used with Data Braider, it is used to send and receive data through the API Endpoints you have created.
This guide will assume you understand how Postman works and know how to set up the baseline structure to be able to communicate with Data Braider. (A basic setup would include setting up the correct Environment, supplying it the Company ID, and the API URL to your Business Central Server that has Data Braider installed.)
The Company ID can be retrieved through an API call to the standard companies endpoint. Here is an example of such a call to a Business Central Cloud server (this URL includes the tenant ID, e6530ea3-74f0-4147-9b06-0554afd9ae53):
https://api.businesscentral.dynamics.com/v2.0/e6530ea3-74f0-4147-9b06-0554afd9ae53/APIBook/api/v2.0/companies
This example API call will give you information regarding your company, including the Company ID you require to set up your Postman Environment to start using it with Data Braider.
The response you get back will look like the following:
{
  "@odata.context": "https://api.businesscentral.dynamics.com/v2.0/e6530ea3-74f0-4147-9b06-0554afd9ae53/APIBook/api/v2.0/$metadata#companies",
  "value": [
    {
      "id": "86e811fb-de3a-ed11-bbae-6045bd8e554a",
      "systemVersion": "21.0.46256.46853",
      "timestamp": 4283,
      "name": "CRONUS USA, Inc.",
      "displayName": "",
      "businessProfileId": "",
      "systemCreatedAt": "2022-09-23T01:28:11.793Z",
      "systemCreatedBy": "00000000-0000-0000-0000-000000000001",
      "systemModifiedAt": "2022-09-23T01:28:11.793Z",
      "systemModifiedBy": "00000000-0000-0000-0000-000000000001"
    }
  ]
}
As you can see above, your response data will give you the Company ID you require, along with the name of the company, so you are able to identify the correct company, in case you have multiple companies set up in your Business Central Environment.
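Picking the right Company ID can also be scripted. The sketch below is a minimal example, assuming the response shape shown above (the body is trimmed to the two fields used here):

```python
import json

# Sample body from the companies call above, trimmed to the fields used here.
response_body = '{"value": [{"id": "86e811fb-de3a-ed11-bbae-6045bd8e554a", "name": "CRONUS USA, Inc."}]}'

companies = json.loads(response_body)["value"]
# Select the company by name, in case the tenant has multiple companies.
company_id = next(c["id"] for c in companies if c["name"] == "CRONUS USA, Inc.")
print(company_id)  # 86e811fb-de3a-ed11-bbae-6045bd8e554a
```

In a real integration, the body would come from the HTTP response instead of a literal string.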
Once you have fetched the Company ID you require, your API calls to Data Braider will look like the following example: https://api.businesscentral.dynamics.com/v2.0/e6530ea3-74f0-4147-9b06-0554afd9ae53/APIBook/api/v2.0/companies(86e811fb-de3a-ed11-bbae-6045bd8e554a)/read
As you can see, the Company ID has been supplied within the URL used to call an endpoint. Make note of the '/read' at the end; this reflects the use of a READ endpoint in Data Braider, used to receive information from your Business Central Environment.
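The URL above decomposes into a few fixed parts. As a sketch, it can be assembled like this (using the sample tenant ID, environment name, and Company ID from this guide):

```python
# Compose the Data Braider read URL from its parts.
# These are the sample values used throughout this guide.
base_url = "https://api.businesscentral.dynamics.com/v2.0"
tenant_id = "e6530ea3-74f0-4147-9b06-0554afd9ae53"
environment = "APIBook"
company_id = "86e811fb-de3a-ed11-bbae-6045bd8e554a"

read_url = f"{base_url}/{tenant_id}/{environment}/api/v2.0/companies({company_id})/read"
print(read_url)
```

Substitute your own tenant ID, environment name, and Company ID to build the URL for your instance.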
The READ endpoint is quite straightforward, as it will fetch whatever data you have supplied in your Data Braider API Endpoint.
The READ type endpoint can be called with both GET and POST.
Basic GET Calls to the Read Endpoint
Performing a GET call requires you to extend your URL a little further so it knows which endpoint is being called. The example below shows how you extend the URL to reach a specific endpoint, CUSTOMERS in this instance: https://api.businesscentral.dynamics.com/v2.0/e6530ea3-74f0-4147-9b06-0554afd9ae53/APIBook/api/v2.0/companies(86e811fb-de3a-ed11-bbae-6045bd8e554a)/read('CUSTOMERS')
When performing this GET call to your Read Endpoint in Data Braider, you will receive a response like the following:
{
"@odata.context": "https://api.businesscentral.dynamics.com/v2.0/e6530ea3-74f0-4147-9b06-0554afd9ae53/APIBook/api/v2.0/$metadata#companies(86e811fb-de3a-ed11-bbae-6045bd8e554a)/read/$entity",
"@odata.etag": "W/\"JzIwOzEyMjA2NTUxMDg1Mzg0NDQ1MDA1MTswMDsn\"",
"code": "CUSTOMERS",
"description": "",
"endpointType": "Read Only",
"outputJSONType": "Hierarchy",
"jsonResult": "[{\"_delta\":\"\",\"level\":10000,\"sourceTableNumber\":18,\"sourceTableName\":\"Customer\",\"pkString\":\"No.=CONST(10000)\",\"sourceSystemId\":\"86E811FB-DE3A-ED11-BBAE-6045BD8E554A\",\"data\":{\"No\":\"10000\",\"Name\":\"Sailing Essentials\",\"timestamp\":1699021023}},{\"_delta\":\"\",\"level\":10000,\"sourceTableNumber\":18,\"sourceTableName\":\"Customer\",\"pkString\":\"No.=CONST(20000)\",\"sourceSystemId\":\"7685C3EF-C740-EE11-BE71-6045BDC89DF0\",\"data\":{\"No\":\"20000\",\"Name\":\"Camping Tools\",\"timestamp\":1699020983}},{\"_delta\":\"\",\"level\":10000,\"sourceTableNumber\":18,\"sourceTableName\":\"Customer\",\"pkString\":\"No.=CONST(30000)\",\"sourceSystemId\":\"7A85C3EF-C740-EE11-BE71-6045BDC89DF0\",\"data\":{\"No\":\"30000\",\"Name\":\"School of Fine Art\",\"timestamp\":1692697413}},{\"_delta\":\"\",\"level\":10000,\"sourceTableNumber\":18,\"sourceTableName\":\"Customer\",\"pkString\":\"No.=CONST(40000)\",\"sourceSystemId\":\"7E85C3EF-C740-EE11-BE71-6045BDC89DF0\",\"data\":{\"No\":\"40000\",\"Name\":\"Alpine Ski House\",\"timestamp\":1692697413}},{\"_delta\":\"\",\"level\":10000,\"sourceTableNumber\":18,\"sourceTableName\":\"Customer\",\"pkString\":\"No.=CONST(50000)\",\"sourceSystemId\":\"8285C3EF-C740-EE11-BE71-6045BDC89DF0\",\"data\":{\"No\":\"50000\",\"Name\":\"Relecloud\",\"timestamp\":1692697413}}]",
"filterJson": "",
"pageStart": 0,
"pageSize": 0
}
Looking at a response like this, it is quite difficult to make out what is what, but there are tools that can make it more readable (see Utilities). The key understanding is that jsonResult contains a stringified JSON result. Use your preferred manner of converting this string back to JSON.
Here is an example of the first item in the above response, after making it more readable:
{
"_delta": "",
"level": 10000,
"sourceTableNumber": 18,
"sourceTableName": "Customer",
"pkString": "No.=CONST(10000)",
"sourceSystemId": "86E811FB-DE3A-ED11-BBAE-6045BD8E554A",
"data": {
"No": "10000",
"Name": "Sailing Essentials",
"timestamp": 1699021023
  }
}
In this example, we have set up an Endpoint in Data Braider to give the “No.” and “Name” of all customers in our Business Central environment. This will include the timestamp (last datetime it was changed) automatically.
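Decoding the stringified jsonResult can be done in any language with a JSON library. A minimal Python sketch, using a trimmed version of the response above:

```python
import json

# A trimmed response: the payload is a JSON array serialized into the
# "jsonResult" string, so it must be decoded a second time.
response = {
    "code": "CUSTOMERS",
    "jsonResult": '[{"pkString": "No.=CONST(10000)", "data": {"No": "10000", "Name": "Sailing Essentials"}}]',
}

records = json.loads(response["jsonResult"])
names = [record["data"]["Name"] for record in records]
print(names)  # ['Sailing Essentials']
```

Each element of the decoded array carries its source table metadata (pkString, sourceSystemId, etc.) alongside the selected fields under "data".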
Reading Data with Filtering via POST calls
To extend this functionality a little further, we can use Postman's POST function instead of GET; this allows us to supply a filter on our Read Endpoints in Data Braider.
To set up a filter, you will need to collect some additional information: the Table Name (or Number), as well as the Field Name (or Number) we wish to filter on, out of the fields that we have made available in the Data Braider Endpoint we have created.
In our example, we only have Customer "No." and "Name" available; therefore we will put a filter on the Customer table's "No." field.
In Postman, we will go over to the “Body” section and supply the necessary JSON entry required to perform a Read with Filters.
A Filter body entry requires some specific information to function properly:
- A Code entry line to reflect the Endpoint Call Code
- a FilterJson with our Json Filter code
For our example, we are targeting our Endpoint called "CUSTOMERS"; the Table connected on the endpoint, "Customer"; the Field on the Table we wish to filter on, "No." (note that this field name ends with a period, which is required for the filter to work); and the Filter we wish to supply, "20000":
{
"code": "CUSTOMERS",
"filterJson":"[{\"table\":\"Customer\", \"field\":\"No.\", \"filter\":\"20000\"}]"
}
The FilterJson entry can also be adjusted to filter on the number for the Customer Table (18), as well as the Field Number for “No.” (1), and still give the same result, as in this example:
{
"code": "CUSTOMERS",
"filterJson": "[{\"table\":\"18\",\"field\":\"1\",\"filter\":\"20000\"}]"
}
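Because filterJson is itself a JSON array serialized into a string, the easiest way to build a body like the ones above is to serialize the filters separately and then embed the resulting string. A minimal sketch:

```python
import json

# filterJson is a serialized JSON array, so build the filters as normal
# objects, serialize them once, then embed that string in the request body.
filters = [{"table": "Customer", "field": "No.", "filter": "20000"}]
body = {"code": "CUSTOMERS", "filterJson": json.dumps(filters)}

print(json.dumps(body, indent=2))
```

This avoids hand-escaping the inner quotes, which is the most common source of malformed FilterJson strings.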
These can be found in our Data Braider API Endpoint within our Business Central instance. Open up the Endpoint in question and locate the Endpoint Line section:
- The Table Number, which can be used to substitute "Customer" in the FilterJson code line.
- A clickable FlowField; click on this to open a list with all Fields included in our selected Endpoint, to locate the Field Number(s).
![Alt text](/path/to/img.jpg)
- The Field Number, this can be used to substitute “No.”
![Alt text](/path/to/img.jpg)
A filter is supplied in the Body of the Postman API Call. In addition, when using a filter, we have also lifted the Endpoint Call Code (('CUSTOMERS') in this example) out of the URL and inserted it into the body as "CUSTOMERS".
As you can see in the example below, we have lifted the ('CUSTOMERS') out of our URL and instead put it in the code line in the body, as well as supplied a FilterJson line with the filter parameters we require:
![Alt text](/path/to/img.jpg)
Note that in this example we are making use of Variables to supply the correct URL, shown below as {{baseurl}} and {{companyid}}; this is frequently used by developers working in multiple Business Central instances. It can be substituted with the complete URL instead.
When we click “Send” we will receive a filtered Read view of our Customers, showing us only entries that contain the given filter (20000):
{
"@odata.context": "http://docker-bc22-w1-default:7048/BC/api/sparebrained/databraider/v2.0/$metadata#companies(86e811fb-de3a-ed11-bbae-6045bd8e554a)/read/$entity",
"@odata.etag": "W/\"JzE5Ozg2ODIwODIyMDg0ODgwOTgxMTcxOzAwOyc=\"",
"code": "CUSTOMERS",
"description": "",
"endpointType": "Read Only",
"outputJSONType": "Hierarchy",
"jsonResult": "[{\"_delta\":\"\",\"level\":10000,\"sourceTableNumber\":18,\"sourceTableName\":\"Customer\",\"pkString\":\"No.=CONST(20000)\",\"sourceSystemId\":\"86E811FB-DE3A-ED11-BBAE-6045BD8E554A\",\"data\":{\"No\":\"20000\",\"Name\":\"Camping Tools\",\"timestamp\":1699020983}}]",
"filterJson": "[{\"table\":\"Customer\",\"field\":\"No.\",\"filter\":\"20000\"}]",
"pageStart": 0,
"pageSize": 0
}
The filtering used within the endpoint reflects a filter that you would supply within your Business Central instance, i.e. you can write the Filter Text containing (<, >, =, *, |, &, ?, etc.) as you would inside Business Central. Here are some examples of valid filter entries:

Anything starting with 20:
{
"code": "CUSTOMERS",
"filterJson":"[{\"table\":\"Customer\",\"field\":\"No.\",\"filter\":\"20*\"}]"
}
20000 OR 30000:
{
"code": "CUSTOMERS",
"filterJson":"[{\"table\":\"Customer\",\"field\":\"No.\",\"filter\":\"20000|30000\"}]"
}
Anything greater than 20000:
{
"code": "CUSTOMERS",
"filterJson":"[{\"table\":\"Customer\",\"field\":\"No.\",\"filter\":\">20000\"}]"
}
Anything Not Equal to 20000:
{
"code": "CUSTOMERS",
"filterJson":"[{\"table\":\"Customer\",\"field\":\"No.\",\"filter\":\"<>20000\"}]"
}
Anything smaller than or equal to 20000:
{
"code": "CUSTOMERS",
"filterJson":"[{\"table\":\"Customer\",\"field\":\"No.\",\"filter\":\"<=20000\"}]"
}
In addition to this, you are also able to insert multiple filters at once, as in this example, we are supplying a filter on both the No. field, as well as the Name field:
{
"code": "CUSTOMERS",
"filterJson": "[{\"table\":\"Customer\",\"field\":\"No.\",\"filter\":\">20000\"},{\"table\":\"Customer\",\"field\":\"Name\",\"filter\":\"School*\"}]"
}
Pay attention to how the filters are split up: each entry is contained within braces ({ and }), and the entries are divided by a comma (,).
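A multi-filter body like the one above can also be generated with a small helper; the function below is illustrative, not part of Data Braider:

```python
import json

def make_filter_body(endpoint_code, *filters):
    """Build a Read request body from (table, field, filter) tuples.

    Illustrative helper, not part of Data Braider itself.
    """
    filter_list = [{"table": t, "field": f, "filter": v} for t, f, v in filters]
    return {"code": endpoint_code, "filterJson": json.dumps(filter_list)}

body = make_filter_body(
    "CUSTOMERS",
    ("Customer", "No.", ">20000"),
    ("Customer", "Name", "School*"),
)
print(body["filterJson"])
```

Building the list programmatically keeps the braces and commas balanced no matter how many filters you combine.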
If you would happen to write an incorrect FilterJson string, you would get the following useful error as a response:
{
"@odata.context": "http://docker-bc22-w1-default:7048/BC/api/sparebrained/databraider/v2.0/$metadata#companies(b14698e2-c740-ee11-be71-6045bdc89df0)/read/$entity",
"@odata.etag": "W/\"JzE5OzM0Mjc2MTQ5NzQwNjM5OTAyNDAxOzAwOyc=\"",
"code": "CUSTOMERS",
"description": "",
"endpointType": "Read Only",
"outputJSONType": "Hierarchy",
"jsonResult": "[{\"Row\":0,\"Column\":1,\"Error\":true,\"Detail\":\"There were issues with the filters provided:\"},{\"Row\":1,\"Column\":2,\"Error\":true,\"Detail\":\"Field No not found\"}]",
"filterJson": "[{\"table\":\"Customer\",\"field\":\"No\",\"filter\":\">20000\"},{\"table\":\"Customer\",\"field\":\"Name\",\"filter\":\"School*\"}]",
"pageStart": 0,
"pageSize": 0
}
Note that the only thing wrong with the sent FilterJson string was the Field name, written as "No" instead of "No.", missing the required period (.).
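Since these error rows arrive inside jsonResult rather than as an HTTP error, a client can check for them before using the data. A minimal sketch, assuming the error shape shown in the response above:

```python
import json

# Filter problems come back inside jsonResult as rows flagged "Error": true
# (shape as in the response above), so check for them before using the data.
response = {
    "jsonResult": '[{"Row": 0, "Column": 1, "Error": true, "Detail": "There were issues with the filters provided:"}, {"Row": 1, "Column": 2, "Error": true, "Detail": "Field No not found"}]'
}

rows = json.loads(response["jsonResult"])
errors = [row["Detail"] for row in rows if row.get("Error")]
print(errors)
```

If the errors list is empty, the rows are ordinary data records and can be processed as in the earlier examples.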
Sending New Records into BC using Postman
When working on an endpoint in Data Braider, you do not need an external tool to view the results for a Read-Only endpoint, as Data Braider allows you to view the JSON results by clicking a simple action button called “Results JSON”. This will show exactly what you’ll get in Postman.
However, when you create Write endpoints to send information into BC, you need Postman to test the JSON results. For example, when you have a Write type of endpoint, opening “Edit Endpoint Fields” shows more information on the field definition than there is in Read Only mode.
[screenshot]
You can see the “Write Enabled” field that allows you to choose which fields are writable and which are not. There are also the “Default Value” and “Mandatory” fields.
- The “Default Value” allows you to pre-specify information on an endpoint.
- The “Write Enabled” allows you to define whether the pre-specified information is writable or not. For example, you can have the Credit Value of a company included, with a pre-specified value that cannot be changed.
[screenshot]
The “Mandatory” field specifies which fields are required. This ability to have “Write Enabled”, “Default Values”, and “Mandatory” checking on a per-field basis is specific to Data Braider, which saves time and effort on each endpoint for people who are not developers. It also saves time for developers, who no longer have to spend hours hand-coding to achieve results that Data Braider provides with just a few clicks.
Updating Existing Records in BC using Postman
To edit the API of an item in BC, you do the following:
- Configure an endpoint called Item, for example.
- In the Edit Endpoint Field, add the information you want to be visible in the API. For example, you can add the shelf the item is on in the warehouse, the ship-to information, etc.
The same path allows you to hide the information you do not want to be visible in the API.
[Screenshot]
Instead of relying on developers to re-write the entire API to add the information you want, Braider allows non-developers to add and remove information easily. This allows you to easily generate and update Read-Only JSON endpoints.
[Screenshot]
Deleting Records in BC using Postman
Generate Postman Collection:
Since Postman is a common tool for API testing, Data Braider makes it even easier. As the name of the feature suggests, it allows you to generate Postman collections. Once you have selected all the endpoints you want, you simply click a button added by Data Braider called “Generate Postman Collection”.
[Screenshot]
Once you click the button, the following will happen:
- It will download a file which you can import into Postman.
- It will create examples of all the different endpoints.
- It will provide examples of all the different types, inserts, and updates.
- It pre-fills all the information in all the examples mentioned above.
Fast Generation of Power BI Calls
When people are building Power BI dashboards against BC, they are pulling information out of BC and putting it into an analysis tool. Analysis tools allow you to display different information.
However, Data Braider provides new functionality that allows you to copy and paste the information you want into Power BI directly, thus displaying the information in detail.
Read Only – BC Side filtering and per-request filtering:
FlowField Logic:
The standard API for items on BC only shows the total inventory, without any reference to details such as variants or locations, for instance. This is where Data Braider comes in. Data Braider allows you to know what variants of an item are available, and each variant in which location.
In the Item variant level, you can define what FlowField data you want included, and it is a simple procedure in Data Braider.
- From the Item table (Parent Table), you apply a filter on the Variant Filter using the Variant Code.
- This will bring down the Item Inventory FlowField into the Item variant level.
The same process is applicable to the location level.
[Screenshot]
The Data Braider engine facilitates this process and allows you to combine complex information in a manner that suits your needs. A process that is essential for any company that wants to present stock-level information for different items, but that is difficult and lengthy to implement in a standard API endpoint, is made easier and accessible with Data Braider, not only for developers but also for non-developers.
Per-request Filtering
With a standard API endpoint, you can pre-configure a filter on the BC side. However, you might want to be able to get a list of all the items and choose a filter yourself.
Data Braider offers you two different levels of filtering:
- The endpoint filters that always apply.
- The specific filters you can request through Data Braider.
Additional Information
Detailed Audit Logging and Clear Error Messages:
Error Messages
A clear error message can go a long way in facilitating your work. For instance, if you do something incorrect with filtering in a BC API, the standard error messages you get are ambiguous and unclear; they do not specify the problem or what you did wrong.
With Data Braider, however, the error messages are clear, precise, and to the point. They provide an accurate diagnosis of the problem, saving you time in figuring out the issue and allowing you to resolve it promptly.
Audit Logging
From the API endpoint configuration, you can enable Logging. This feature allows you to choose how the endpoint log self-clears, providing the following options:
- You can specify the maximum number of log entries to keep for the selected endpoint.
- You can specify that log entries older than a specific age are cleared.
The “Data Braider Endpoint Logs” displays all the calls made to the selected endpoint. Furthermore, one of the major highlights of this feature is when opening an entry, you can see the exact JSON Input – a feature specific to Data Braider and not available with the standard API endpoints.
This feature also allows you to view the exact Output of the endpoint in response to the Input, providing an advantage for troubleshooting, as it displays all the information of the transaction. Moreover, this feature allows you to see whether the call was successful or not, and clicking on the unsuccessful entries will show you the malformed information that led to the failure of the call. The importance of this feature emerges from its ability to provide a detailed diagnosis of each transaction, a capability that is Data Braider-specific.
Another advantage of this feature is that it allows you to enable logging for specific endpoints. As such, you can enable logging for only the problematic endpoints, allowing you to keep track of the calls made in said endpoint, and then create a diagnostic report to figure out where the problem is, and eventually solve the problem.
Import/Export Configurations for Easy Re-use:
This process is one that requires hours of work with a standard API endpoint. However, Data Braider allows you to complete it with a few clicks.
The “Export” button added by Data Braider exports all the selected endpoints.
You can also Import the information into a different scenario on BC.
Another advantage of this feature is that you can edit the information even after you import it into the desired location.
Duplicate Information
Another Data Braider-specific feature is that it allows you to duplicate information without breaking the existing integration. The process is as follows:
- Create a new customer, for instance.
- From the “Related” action button, you can “Copy From”.
- Then you choose the endpoint from which you want to copy.
This simple, straightforward procedure is a time-saver that packs a few hours of work into a few clicks.
Privacy Information
Using Data Braider, you are still accessing Business Central via the standard Business Central APIs. This means that you gain all the authorization safety and security of the Business Central API itself. Additionally, you have the same level of permission and data security options that the Business Central API provides.
This means that you have full control over how any third party can access your Business Central data using their account information (User) and choosing what permissions you grant access to (Permissions).
[Link to Business Central Docs on security]
Additionally, Data Braider allows you to define what types of Write operations are allowed for a given endpoint. So, you can easily configure a third party to have permission to Update a customer, but not create or delete any.
Using the Filtering technology on the Endpoint, you can also restrict what data the endpoint is allowing access to.
Between all these security layers, you should be able to fully control access and privacy control of your critical business data.