
Introduction


This post continues my earlier blog post on customer expectations and challenges with master data integration in a distributed landscape.

In this blog post, I would like to focus on ways in which these challenges can be addressed centrally, thereby acting as a value driver for customers.


 

Challenge: Support for the complete S/4HANA data model


As described in the previous blog posts, SAP customers have landscapes with multiple instances of S/4HANA or ERP systems, and these instances frequently need to exchange master data. System A and system B (from the diagram above) could therefore both be instances of S/4HANA. This leads to the following requirements:

  • For customers, the aligned domain model should intrinsically cover all SAP S/4HANA attributes. This would enable the integration of multiple S/4HANA (or ERP) systems in a customer landscape.



  • Internally within SAP, the aligned domain model should give applications the flexibility to define attributes outside the scope of the core attributes. This should take care of the challenge of customers needing to integrate multiple instances of S/4HANA (or ERP) applications. It does, however, require some discretion on the part of applications, since what is true for S/4HANA might not be true for others; applied without discretion, it could 'balloon' the domain model and defeat the whole purpose of the exercise. An alternative point of view is therefore to allow this only on a case-by-case basis.



  • Ownership of the core attributes should still lie centrally with the aligned domain model. All applications should be able to communicate via the core attributes of a master data object without the need for explicit mapping; in other words, the core attributes are semantically compatible across applications.



  • Individual applications should be able to request the inclusion of attributes in the core attributes as per business requirements. This should be followed by central alignment of the requested attributes across applications, eventually resulting in their inclusion in the core model.



  • Since this is going to be an incremental exercise, it has to be handled through versioning of the data model (see the sketch after this list).
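
To make these requirements more tangible, here is a minimal sketch of how such a versioned, aligned domain model could be structured. The object, its attribute names, and the "extensions" area are illustrative assumptions only, not an actual SAP data model definition.

```python
from dataclasses import dataclass, field
from typing import Any, Dict

# Hypothetical sketch of an aligned domain model entry for a business partner.
# Core attributes are owned centrally, so every application can rely on their
# semantics without explicit mapping. Application-specific attributes live in a
# separate extensions area keyed by the owning application, which keeps them
# from 'ballooning' the core model.
@dataclass
class BusinessPartnerCore:
    partner_id: str           # centrally owned key
    name: str
    address: Dict[str, str]   # e.g. {"street": "...", "city": "...", "country": "..."}

@dataclass
class BusinessPartnerRecord:
    model_version: str        # incremented as new attributes are aligned into the core
    core: BusinessPartnerCore
    extensions: Dict[str, Dict[str, Any]] = field(default_factory=dict)
    # e.g. extensions["S4HANA_FIN"] = {"payment_block": "A"}
```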


Once the domain model is aligned, integration services should enable publishing events based on it (with core attributes and application-specific attributes) across multiple applications. Subscribing applications should be able to discern what they want to consume.
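
The sketch below illustrates how such an event could carry core and application-specific attributes while a subscriber discerns what to consume. The event type and field names are assumptions made purely for illustration, not an actual SAP integration API.

```python
# Hypothetical change event built from the aligned domain model: core attributes
# plus application-specific extensions, published to all subscribing applications.
event = {
    "type": "businesspartner.changed.v1",   # illustrative event type
    "modelVersion": "1.2.0",
    "data": {
        "core": {"partnerId": "BP-4711", "name": "ACME Corp",
                 "address": {"street": "Main St 1", "city": "Walldorf"}},
        "extensions": {"S4HANA_FIN": {"paymentBlock": "A"}},
    },
}

def consume(event: dict, wanted_extensions: set) -> dict:
    """A subscriber keeps the core attributes and only the extensions it understands."""
    data = event["data"]
    return {
        "core": data["core"],
        "extensions": {k: v for k, v in data.get("extensions", {}).items()
                       if k in wanted_extensions},
    }

# An application that only cares about core data simply ignores all extensions.
print(consume(event, wanted_extensions=set()))
```

This brings us to the next challenge: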

 

Challenge: Manage data consistency during integration


Customers want small atomic data updates (also called patches) rather than transacting with the full scope each and every time. This would take care of the challenges mentioned in the earlier blog posts. Let me try to illustrate with the examples below (please refer to the earlier blog post for details on the scenario):

  • In the first example, application C should be able to send only the relevant updates for object 1, e.g. the address. These updates should not impact other data (like bank information) in the master data integration layer. These atomic, patch-based updates should apply either to the entire body or to individual fields within the structure (e.g. the street within the address) of the master data object.



  • Application C should also be able to state explicitly the 'nature' of the update, i.e.


    • remove - removes an existing field.




    • replace - replaces the value of an existing field.



    • upsert - adds a new field if it does not exist, otherwise replaces it.




Thus it should be possible to differentiate between not sending data and purposefully deleting it.
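
The distinction is easiest to see in a patch payload. The sketch below uses a JSON-Patch-style structure (operation, path, value) purely for illustration; the concrete patch format of the integration service may well differ.

```python
# Illustrative patch from application C for object 1: only the address is touched.
# Bank information is not mentioned at all and therefore remains untouched.
patch = [
    {"op": "replace", "path": "/address/street", "value": "New Street 5"},  # field-level update
    {"op": "upsert",  "path": "/address/district", "value": "North"},       # add or replace
    {"op": "remove",  "path": "/address/postOfficeBox"},                    # explicit deletion
]
# Omitting a path (e.g. /bankDetails) means "no statement about this data",
# while the explicit "remove" operation means "delete this value".
```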

In addition to enabling applications to publish patch-based updates, subscribing applications should also be able to consume them. This would require some adoption effort on the part of the consuming applications.
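
On the consuming side, that adoption effort essentially means being able to apply such operations to a local copy of the object. The following is a simplified sketch under the same assumed patch structure, not a reference implementation.

```python
def apply_patch(obj: dict, patch: list) -> dict:
    """Apply remove/replace/upsert operations to a nested dict (illustrative only)."""
    for op in patch:
        *parents, leaf = [p for p in op["path"].split("/") if p]
        node = obj
        for key in parents:
            node = node.setdefault(key, {})
        if op["op"] == "remove":
            node.pop(leaf, None)
        else:  # "replace" and "upsert"; a stricter version would reject "replace" on a missing field
            node[leaf] = op["value"]
    return obj

# Local copy of object 1 held by a subscribing application.
local_copy = {"address": {"street": "Old Street 1", "postOfficeBox": "12"},
              "bankDetails": {"iban": "DE89..."}}

apply_patch(local_copy, [
    {"op": "replace", "path": "/address/street", "value": "New Street 5"},
    {"op": "remove",  "path": "/address/postOfficeBox"},
])
# bankDetails stays untouched because the patch never mentions it.
```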

Value proposition of patch-based updates:

  • Avoid inadvertent overwriting of changes by writer applications outside of the targeted patches.



  • Reduce the overall event size, thereby improving performance and scalability of operations.


 

Conclusion


I would like to conclude this blog post here. I hope I have been able to propose some answers to the business challenges mentioned in my first blog post. There is no doubt in my mind that although these are complex challenges, resolving them could bring significant business value to our customers.