Design Patterns for Success Factors Integration Process using BOOMI


Introduction

Integration Design is the key to data movement in an Organization. With the advent of Cloud-based applications, the need for Integration has become vital. Generally, these Integrations range from Simple to Complex, depending on the Integration need.

There are many Integration design patterns out there, such as the Canonical Data Model, Façade, Migration, Broadcast, and Messaging patterns. These design patterns serve as formulas for the Integration Specialist. It is expected that your Integration Specialist has a clear understanding of these patterns and applies them during development.

A Design Pattern is a repeatable solution that serves as a Template and can be applied in a predictable manner.

Why should you use Patterns?

Using Patterns simplifies many common issues that one might face during design. When you have a set of Patterns established for your use case, it is very helpful in distributed development scenarios. Using patterns avoids costly mistakes upfront during the design, development, and Implementation phases of a Project. Some of the benefits include ease of development, cost savings, time reduction, error rate reduction, lower maintenance cost, and lower downtime.

If the Interface is not developed correctly, you will see increased Maintenance costs down the line. It becomes a nightmare to support, and finally the blame tends to fall on the Middleware itself. On the other hand, if it is developed correctly in the first place, you will see a good ROI much sooner.

Objective of this Article

I would like to discuss a couple of design Patterns and a few use cases where I have applied them successfully, and why they saved us time and money. I will keep the discussion within the perspective of SuccessFactors Integration using BOOMI as the Integration tool. I will walk you through examples as necessary and even compare an Integration built with no Design Pattern against one built with a Design Pattern.

The Design Patterns are Tool agnostic. Anyone can use the right Patterns for the right use cases in any Integration Process with any Integration tool. For example, you can use a Canonical Pattern for a Salesforce-to-Database Integration using Hana Cloud Integration (HCI), BOOMI, or WebSphere as your Integration Platform. The basic principles remain the same. Hence a Pattern ties closely to the Integration Process, not the Tool.

Design is based on the Use Case, experience, and Imagination to a certain degree, and finally on passing all the Test Cases. No one solution is perfect; the Solution that works is the Perfect one.

1. Payroll Integration - Multiple Countries

The Canonical Model is nothing but a standard, unified representation of the Interface data. Consider a requirement where Payroll information needs to be sent from SuccessFactors to multiple Trading Partners across different countries. As SuccessFactors is one central portal for Global Employees, this is a very common scenario.

Here is the requirement. Company A wants to Integrate all Employees' Payroll Information with their respective 3rd Party Payroll Management Companies. For simplicity, we will identify these 3rd Party Payroll companies by their Country names suffixed with P. For example, IndiaP processes payroll for India Company Code Employees and ChileP for Chile Company Code Employees.

Approach I is a No-Pattern approach.

By this I mean that the Developer does not use any Design Pattern that could have simplified the process. This could also be due to the fact that each of these Interfaces is built by a different Party, and this is how the work was separated.

The drawbacks of this approach are tighter coupling, higher development and Maintenance cost, and lower ROI.

Approach II uses the Façade and Content-Based Routing Patterns.

As there is one and only one Data Source, when a change happens to Employee Records in Company A, the Interface pulls the Delta Changes based on a Date Time Stamp via the Compound Employee API. All Payroll and other relevant changes are realized in BOOMI irrespective of the Destination. The message is then Routed to the respective country companies based on the Message content. This is called Content-Based Routing: here, the content is routed based on the Company Code of the Employee.

If there is further Processing specific to each Country, then a Sub Process would encompass that Country-Specific Processing. Thus all Messages from the Source first go through the Core Processing, which is one and only one. The Message gets Split to invoke the Country-Specific Sub Process and then reaches the Destination.
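The routing step above can be sketched in a few lines of Python. This is only an illustration of the pattern, not Boomi itself: in BOOMI this logic lives in a Route shape, and the company codes and handler functions below are hypothetical examples.

```python
# Content-Based Routing sketch: route each employee change message to a
# country-specific sub-process based on its company code.

def process_india(message: dict) -> str:
    # Country-specific sub-process for IndiaP (placeholder).
    return f"IndiaP <- {message['employee_id']}"

def process_chile(message: dict) -> str:
    # Country-specific sub-process for ChileP (placeholder).
    return f"ChileP <- {message['employee_id']}"

# One core process, one routing table: adding a country is one entry here,
# not a whole new interface.
ROUTES = {
    "IN01": process_india,
    "CL01": process_chile,
}

def route(message: dict) -> str:
    handler = ROUTES.get(message["company_code"])
    if handler is None:
        raise ValueError(f"No route for company code {message['company_code']}")
    return handler(message)

print(route({"employee_id": "E100", "company_code": "IN01"}))
```

Note that the core process never changes when a new country is onboarded; only the routing table and the new sub-process do.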

The Country-Specific Sub Process can include a Façade Design Pattern to map only the fields required by the destination.

If you compare this Approach with Approach I: in Approach I the BOOMI process is repeated for each Country, with one Process per Country. More Code, More Maintenance, and thereby More Cost.

Approach III with Canonical, Facade and Routing Patterns

Company A will build a Canonical data structure based on the data requirements. This is a unified approach. Building such a Canonical Data Model is a challenge for an Enterprise, as one should know all the Data Requirements from all the Employee Countries upfront. But once it is built, the ROI is definitely realized much sooner than with Approaches I and II.

BrazilP might require only Personal Info and Payroll Info changes to be sent; it does not require Address Info. IndiaP, however, requires Personal, Payroll, and Address changes.

Now you will build the Canonical model to reflect Personal, Payroll, and Address. The subscriber would then retrieve this Information via the Canonical Interface and take only what it needs.

BrazilP would get the Info from the Canonical Interface, disregard the Address Info, and use only the Personal and Payroll related changes. Ideally, the BrazilP subscriber process would reject a record and not recognize the change when there is only a change in the Address.

One can use another Mapping to implement a Façade Design Pattern that extracts the specific fields for BrazilP or IndiaP. A mapping between the Canonical Data Model and the Country-Specific Data Model would expose only the data required by that Company Code.
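A minimal sketch of such a Façade over a canonical record follows. The field names and subscriber entitlements are illustrative assumptions, not the actual Compound Employee schema; in BOOMI this projection would be a Map shape per Country.

```python
# Facade over a canonical record: each country mapping exposes only the
# sections that subscriber requires. Field names are hypothetical.

CANONICAL_RECORD = {
    "person_id_external": "E100",
    "personal": {"first_name": "Asha", "last_name": "Rao"},
    "payroll": {"pay_group": "IN-M", "currency": "INR"},
    "address": {"city": "Chennai", "country": "IN"},
}

# Which canonical sections each subscriber is entitled to.
SUBSCRIBER_FIELDS = {
    "BrazilP": ["personal", "payroll"],            # no address
    "IndiaP": ["personal", "payroll", "address"],  # everything
}

def facade_map(canonical: dict, subscriber: str) -> dict:
    """Project the canonical record down to a country-specific view."""
    view = {"person_id_external": canonical["person_id_external"]}
    for section in SUBSCRIBER_FIELDS[subscriber]:
        view[section] = canonical[section]
    return view

brazil_view = facade_map(CANONICAL_RECORD, "BrazilP")
print(sorted(brazil_view))  # address is filtered out
```

The key point of the pattern: the canonical record never changes per subscriber; only the thin projection does.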

The above eases development challenges, saves time, enables modularization, and minimizes downtime due to Interface changes.

2. Restart Interfaces - Commit and Rollback

One important challenge during the construction of an Interface is knowing when to Roll Back, when to save the Restore Point, and when to resend Records. How do you know when and how to fail an Interface?

Let me give an Example. I retrieve 3 records from SuccessFactors for posting to a SAP CATSDB Time Sheet Custom Interface. The first record posted Successfully; the second one failed. As a result, the third record did not process. What is my Recovery Point? How do I set my Logical Unit of Work (LUW) for the Interface?

The Compound Employee API is a critical common API used for all Data Transfer Operations in SuccessFactors. When the API is called, a Snapshot of the Employee is retrieved as of that Query Time. A Date Time Stamp comes with each Query as part of the Payload, called execution_timestamp. This is the exact time of retrieval of the Information from SuccessFactors.

Design should be Simple, Easy to Understand, and Easy to Maintain. Care should be taken not to Over-Design or Over-Simplify the Process. It should address the Use Case and pass all possible Test Case scenarios.

That being said, in order to arrive at a Restore point, you should first define your LUW. You then save the Execution Time Stamp at the LUW level.

A simple formula is to define the LUW at the Interface Level. Interface-Level LUW means: if my Interface fails for any Record, I Roll Back the whole set as if the Interface never ran. This means that even if 1 out of 3 records posted successfully, I will re-run all 3. This is a much easier pattern.

Interface Level LUW (Start Over from the beginning upon Failure)

I will define 2 Process Properties called "Last Execution Time" and "Last Execution Time - Temp". "Last Execution Time" is the only one that I choose to "Persist across Subsequent Executions" in BOOMI.

When the Process Starts, I save the last run time into "Last Execution Time - Temp". In the Map, I save the execution_timestamp of the Compound Employee response into "Last Execution Time".

This way, if the Process Succeeds, the most recent Time Stamp is saved in the persisted Process Property automatically. The Compound Employee API retrieves data in Ascending Order, so the latest Time Stamp is the one saved for future runs.

If the Process Fails, I Roll Back the time from "Last Execution Time - Temp": I copy "Last Execution Time - Temp" back into "Last Execution Time". This is the reason I saved it in the first place.
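The stash-advance-restore cycle above can be sketched as follows. This is a plain-Python illustration under stated assumptions: the in-memory dict stands in for Boomi's persisted Process Properties, and the failure is simulated with an exception.

```python
# Interface-level LUW sketch: advance the timestamp watermark only when the
# whole run succeeds; otherwise restore it from the temp copy.

properties = {
    "Last Execution Time": "2024-01-01T00:00:00Z",  # persisted across runs
    "Last Execution Time - Temp": None,
}

def run_interface(records, fail=False):
    # Step 1: stash the previous watermark before doing anything.
    properties["Last Execution Time - Temp"] = properties["Last Execution Time"]
    try:
        for record in records:
            # The Compound Employee API returns records in ascending order,
            # so the last record carries the latest execution_timestamp.
            properties["Last Execution Time"] = record["execution_timestamp"]
            if fail:
                raise RuntimeError("posting failed")
    except RuntimeError:
        # Step 2: roll back the watermark as if the interface never ran.
        properties["Last Execution Time"] = properties["Last Execution Time - Temp"]

records = [
    {"execution_timestamp": "2024-02-01T10:00:00Z"},
    {"execution_timestamp": "2024-02-01T11:00:00Z"},
]

run_interface(records)                      # success: watermark advances
print(properties["Last Execution Time"])    # 2024-02-01T11:00:00Z
run_interface(records, fail=True)           # failure: watermark restored
print(properties["Last Execution Time"])    # still 2024-02-01T11:00:00Z
```

On the next scheduled run, the restored watermark makes the delta query pick up all 3 records again, which is exactly the "start over from the beginning" behavior.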

Use Case for the above would be the following:

  1. Smaller and Frequent Update Interfaces
  2. Multiple Updates allowed without Issues (not suitable for One-Time Payment Posting)
  3. Other Record Postings can wait until the Error is handled for the failed record.

Employee Level LUW (Re-run only Failed Records)

If your LUW is at the Individual Employee level, you may need to apply a Data Split to separate the Record Set at the Employee level. A set of Processing is done for each Employee, and all failed update records are collected in a Cache for reporting.

Using the same CATSDB example above: as we process record by record, we know which ones failed to Update from the response returned by the SAP Function Module. When an error is detected, we branch out to cache the Error Details as well as the Record set itself. We do not fail the Interface at this point, but continue processing the next Employee Record.

Repeating the above process, we finally end up with a set of Successful Transactions and a set of Failed Transactions, all at the Employee level.
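The per-employee loop with an error cache can be sketched as below. The posting call and its failure condition are hypothetical stand-ins for the real SAP Function Module; in BOOMI the equivalents are a Data Process split, a Try/Catch branch, and a Document Cache.

```python
# Employee-level LUW sketch: split the batch per employee, post each one
# independently, and cache failures instead of failing the interface.

def post_to_sap(record: dict) -> None:
    # Hypothetical posting call; raises on a bad record.
    if record.get("hours", 0) < 0:
        raise ValueError("negative hours not allowed")

def process_batch(records):
    error_cache = []   # failed records + reasons, for the alert email
    succeeded = []
    for record in records:          # data split: one LUW per employee
        try:
            post_to_sap(record)
            succeeded.append(record["employee_id"])
        except ValueError as exc:
            # Branch: cache the error and the record, keep processing.
            error_cache.append({"record": record, "reason": str(exc)})
    return succeeded, error_cache

records = [
    {"employee_id": "E1", "hours": 8},
    {"employee_id": "E2", "hours": -1},   # will fail
    {"employee_id": "E3", "hours": 7},
]
ok, errors = process_batch(records)
print(ok)           # ['E1', 'E3'] -- E3 still processed despite E2 failing
print(len(errors))  # 1
```

Contrast this with the Interface-Level LUW: here a single bad record no longer blocks the records behind it.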

Finally, we read the Error Cache and send the Information via Email Alerts to the Support group. The Support group can then rerun the Interface after correcting the Data or the reason for the Error.

In this Use Case, we save and move the Last Execution Time Stamp forward normally, as if the Interface was fully Successful, even though a few records might have failed to update.

Full Interface restarts are not necessary, but the design should allow Maintenance to run the Interface for only a set of Employees, On-Demand. This means you allow the Process to query the Compound Employee API for only certain Employees when a Process Property Indicator is set to run for Specific Employees. The Process Property should allow saving a set of Person_Id_External values for the Re-runs.
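The on-demand re-run switch can be sketched as a small query builder. This is an assumption-laden illustration: `build_query` is a hypothetical helper, and the field names mimic (but are not) the actual Compound Employee query syntax.

```python
# On-demand re-run sketch: when a "rerun" process property holds a list of
# person_id_external values, select only those employees; otherwise run the
# normal delta query against the last execution timestamp.

def build_query(last_run: str, rerun_ids=None) -> str:
    if rerun_ids:
        # Targeted re-run: ignore the delta window, select specific people.
        ids = ",".join(f"'{pid}'" for pid in rerun_ids)
        return f"person_id_external IN ({ids})"
    # Normal run: delta changes since the last execution timestamp.
    return f"last_modified_on > to_datetime('{last_run}')"

print(build_query("2024-02-01T11:00:00Z"))                    # normal delta run
print(build_query("2024-02-01T11:00:00Z", rerun_ids=["E2"]))  # targeted re-run
```

The Support group only has to populate the re-run Process Property with the failed Person_Id_External values and trigger the process.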

Conclusion

Interface Design has to be kept as simple as possible. By utilizing reusable design patterns, the process becomes more Predictable and Repeatable.

Records are frequently lost or never processed due to faulty Interface design, which results in laborious Reconciliation efforts. By utilizing Predictable and Repeatable Design Patterns, ROI can be realized on the Interfaces much sooner.

Designing Interfaces is a complex problem that should be handled by an Architect, working with the Enterprise Architect as applicable. This makes it possible to visualize and set the design to easily adapt to the future needs of the Enterprise.

Once an Interface has been designed, it is difficult for an Enterprise to re-visit and re-design it with little effort. Hence care should be taken to design Interfaces correctly upfront. Middleware exists to solve Data Exchange needs, and a Faulty Interface Design is capable of raising questions about the usage of the Middleware itself.





     
