Consuming End-User Defined Custom Data Mappings in Integrations with Tray Embedded
Yep. That’s the title of this blog. Feel free to submit it to the Guinness Book of World Records for the biggest mouthful of a blog title on LinkedIn. That being said, let’s jump right into a problem you need to solve with integrations at scale.
The core problem in integrations is determining what data needs to be mapped between any two services. It’s easy enough to hard-code that a Salesforce Account ID should map to a specific column in a Google Sheet. But what do you do when you don’t know which custom fields an end-user has created in, say, Salesforce? Sure, you can ask each end-user what that custom field is and hard-code it into their integration for them. That’s easy. But if you had to do this for every single end-user, the only thing you would have time for in your day job is duplicating and hard-coding these data mappings. That’s no good.
Enter Tray Embedded! With Tray Embedded, you can present options for users to define exactly what those custom fields are and what they should map to in other end systems. This blog post will walk you through setting up a Salesforce to Google Sheets integration where a user can select the specific Salesforce fields they want to map to certain columns in Google Sheets. It will also then show how to consume those user inputs in your workflows.
The Workflow
The workflow will be webhook-triggered. When it is triggered, it will retrieve Salesforce records from the end-user’s account. It will then perform some data manipulation to put them in a format that Google Sheets can digest.
The Google Sheet aka the Pseudo-Database
We will have the end-user determine which Salesforce fields map to the Google Sheets columns My_ID and My_Name. We can also think of this Google Sheets service as our pseudo-database: if you are collecting data from any given service and want to map it into a format your database can consume, this Google Sheet acts as a stand-in database.
The Config Wizard End Game
The end-user will supply their Salesforce authentication. Then they will determine which Salesforce object they want to see field properties for; those properties are used in the mapping section beneath the object field.* Finally, the user will choose which field properties map to the two uneditable columns in the Google Sheet.
*NOTE: You can choose to hide this Salesforce object selection from the user. For example, if you always want to only show fields that exist on a Salesforce Account object, you can move this field to the Hide from Users section and give it a config default of “Account”.
How to Expose These Fields in the Config Wizard
Upon inserting the Salesforce connector into your workflow, the authentication needed to power that connector will be exposed as a configurable property. To make the Salesforce object configurable, simply go to the operation you are performing that requires a record type as input, and convert that record type field to a config. Below is an example of doing that on the Find Records operation.
All that is left now is exposing a config property that is of type array. Since we will be using the Script connector to perform the data mapping later, we can go ahead and create an input property on that connector that will house this array.
Creating the Data Mapping Options in the Config Wizard
Clicking on the mappings config property in the Solution Editor and then clicking on Use Data Mapping will reveal the below modal.
There are many different ways you can retrieve the data you want to use in this component. For more details on flexible data mapping in Tray Embedded, go read the documentation here!
In this example, we are going to use Dynamic lookup because we want to mimic the API calls used in the Salesforce connector. These same API calls are used to present the options you configure on any given operation in the connector. We are going to mimic the Find records operation because it can return the properties on the returned records.
We are then going to tie the end-user’s unique authentication to this API call in the auth slot field. This will ensure that the custom properties that exist in this user’s Salesforce can be surfaced. We will then choose private_list_object_fields to present the returned object’s fields as options. Finally, we will attach the selected Salesforce object to show the properties for that object in the config slot field.
Since we know what our pseudo-database, the Google Sheet, is expecting, we can hard-code a list of the properties that exist there. We could also make the options that appear here dynamic, based on an API call, just like we did with the Salesforce example above!
On the final screen of this modal, we will want to specify the two Google Sheets properties that are required to be mapped to from some Salesforce fields. We can do this by using a combination of the Mark all right column fields as read only and Do not allow end users to add mappings checkboxes.
Consuming the Custom Data Mappings
The easiest way to determine how the end-user’s choices are stored in the mappings config property is to make a sample solution instance. After doing so, we can see how that data is structured as input into the Script connector like the below image:
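Although the exact shape depends on how you configured the mapping component, the stored mappings amount to a dictionary pairing each left-side Salesforce field with its right-side Google Sheets column. A hypothetical sketch (the field names here are assumptions for illustration; inspect a sample instance to confirm your own shape):

```javascript
// Hypothetical value of the `my_mappings` config property after an
// end-user maps the Salesforce Id and Name fields in the config wizard.
// Keys are left-side Salesforce field API names; values are the
// right-side Google Sheets column names.
var my_mappings = {
  "Id": "My_ID",
  "Name": "My_Name"
};
```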
To map the Salesforce Account ID and Account Name to My_ID and My_Name, we will need to update the Script connector.
We will include the Salesforce record as an added input variable. Then we will add this JavaScript into the connector:
exports.step = function (input) {
  // The new object we will use to create an entity in Google Sheets:
  var newEntity = {};
  // For each field in the mapping, add a field to `newEntity` where the
  // name is the key from the right side of the mapping and the value comes
  // from the Salesforce object, using the key from the left side of the mapping.
  for (var field in input.my_mappings) {
    newEntity[input.my_mappings[field]] = input.salesforceRecord[field];
  }
  return newEntity;
};
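To sanity-check this mapping logic outside of Tray, here is the same step exercised with hypothetical inputs (the record values and mapping keys are made up for illustration):

```javascript
// Standalone copy of the Script connector step's mapping logic.
function mapFields(input) {
  var newEntity = {};
  for (var field in input.my_mappings) {
    newEntity[input.my_mappings[field]] = input.salesforceRecord[field];
  }
  return newEntity;
}

// Hypothetical inputs: a user mapped Salesforce Id -> My_ID and Name -> My_Name.
var result = mapFields({
  my_mappings: { Id: "My_ID", Name: "My_Name" },
  salesforceRecord: { Id: "001xx000003DGb2AAG", Name: "Acme Corp" }
});
// result is { My_ID: "001xx000003DGb2AAG", My_Name: "Acme Corp" }
```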
Prepping the Mapped Data for Consumption in Google Sheets
In order to add this mapped data as a row in our Google Sheet, we need to do some data manipulation to construct objects containing column_heading and value properties. We can discover that this is the required data structure by testing a sample row insert with hard-coded data in the Google Sheets connector. You can do the same for any service you are moving data into.
To do this data prep, we will include another Script connector step to create these objects.* The JavaScript to perform this prep is the following:
exports.step = function (input, fileInput) {
  let headings = ["A", "B"];
  let heading_counter = 0;
  let formatted_for_google_sheets_data = [];
  for (let field in input.data) {
    formatted_for_google_sheets_data.push({
      "column_heading": headings[heading_counter],
      "value": input.data[field]
    });
    heading_counter++;
  }
  return formatted_for_google_sheets_data;
};
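As a quick check of the prep step, here is the same logic run against the hypothetical mapped object from earlier. It produces one column_heading/value pair per field (the column letters A and B are assumed to match the sheet's layout):

```javascript
// Standalone copy of the prep step's logic.
function prepForSheets(input) {
  let headings = ["A", "B"];
  let heading_counter = 0;
  let formatted = [];
  for (let field in input.data) {
    formatted.push({
      column_heading: headings[heading_counter],
      value: input.data[field]
    });
    heading_counter++;
  }
  return formatted;
}

// Hypothetical mapped object from the previous Script step:
var rows = prepForSheets({
  data: { My_ID: "001xx000003DGb2AAG", My_Name: "Acme Corp" }
});
// rows: [ { column_heading: "A", value: "001xx000003DGb2AAG" },
//         { column_heading: "B", value: "Acme Corp" } ]
```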
After prepping this data, we can now send it off to add a new row of data in the Google Sheets connector.
*NOTE: You can combine the two Script connector steps into a single JavaScript snippet. For the sake of explaining the mapping and the prep separately, I kept them as two steps.
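If you do combine them, one possible sketch looks like the following. In the Script connector this function body would be assigned to exports.step; the input names `my_mappings` and `salesforceRecord` and the column letters are assumptions that must match your own workflow:

```javascript
// One possible combined Script step: map and prep in a single pass.
function mapAndPrep(input) {
  var headings = ["A", "B"];
  var rows = [];
  var i = 0;
  // The right-side column names are not needed here: the position of each
  // mapping determines the sheet column, and the left-side Salesforce
  // field name selects the value from the record.
  for (var field in input.my_mappings) {
    rows.push({
      column_heading: headings[i],
      value: input.salesforceRecord[field]
    });
    i++;
  }
  return rows;
}
```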
Conclusion
At a high level, these are the steps you need to follow for getting user-defined data mappings to manipulate data when connecting two services:
1. Expose the selectable properties in the config wizard by using the Use Data Mapping option on a config property.
2. Perform the data mapping in a Script connector step on the incoming data from some service.
3. Prep the newly mapped data for consumption by the service the data is going to.
4. Send the prepped data to the outbound service.