
The motivation for data monitoring


This article presents the use of data monitoring in the context of integrating SAP FPSL (Financial Products Subledger) with Apache Kafka.


A separate article covering the main challenges and considerations of data integration between Apache Kafka and SAP systems (e.g., FPSL) was published recently.


The architecture of the SAP system consists of layers in which data are stored, and of processes and methods that operate on these layers.


For integrating any non-SAP system with SAP FPSL, SAP delivers standard functions to write data into the SAP layers. These functions perform mainly technical checks, so standard ETL solutions do not automatically provide evidence of data consistency.

This can lead to missing temporal integrity, especially when the delivery systems are located in different time zones and deliver data at different times.

Furthermore, referential integrity is not covered. Here, more than one layer needs to be considered, and checks on keys and values have to be performed. These checks have to be done before the data are stored in SAP, so that only a complete and consistent set of data is evaluated within SAP.
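As an illustration only (the record types, identifiers, and check logic below are hypothetical, not part of any SAP or Kafka API), such a referential integrity check could look like this in Java:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

// Hypothetical sketch: verify that every financial transaction references
// a business partner that was actually delivered, before loading into SAP.
public class ReferentialIntegrityCheck {

    record FinancialTransaction(String id, String businessPartnerId) {}

    // Returns the transactions whose partner reference cannot be resolved.
    static List<FinancialTransaction> findOrphans(
            List<FinancialTransaction> transactions,
            Set<String> deliveredPartnerIds) {
        List<FinancialTransaction> orphans = new ArrayList<>();
        for (FinancialTransaction tx : transactions) {
            if (!deliveredPartnerIds.contains(tx.businessPartnerId())) {
                orphans.add(tx);
            }
        }
        return orphans;
    }

    public static void main(String[] args) {
        Set<String> partners = Set.of("BP-1000", "BP-1001");
        List<FinancialTransaction> txs = List.of(
                new FinancialTransaction("FT-1", "BP-1000"),
                new FinancialTransaction("FT-2", "BP-9999")); // unknown partner
        // FT-2 would be rejected (or routed to an error topic) before the load.
        findOrphans(txs, partners).forEach(tx ->
                System.out.println("Missing partner for " + tx.id()));
    }
}
```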

Finally, a semantic check is important as well, to ensure that the attribute values of characteristics are correct and that the processes and methods of the SAP systems can run without errors.

To fill these gaps, the Apache Kafka-to-SAP adapter Xeotek SAAPIX was released. It can be used together with the data monitoring tool KaDeck to provide full traceability of the data pipeline.

 

How can KaDeck help?


KaDeck by Xeotek is a data-centric monitoring solution that enables the user to view and analyze data and processes in Apache Kafka. Using KaDeck's topic browser, data streams can be viewed, analyzed, and, if necessary, filtered.

KaDeck fully integrates with Xeotek SAAPIX, which connects Apache Kafka to SAP FPSL.

Because the data streams are retained in topics, data can be extracted from the delivery systems at any time and sent to SAP. As soon as the other dependent systems have delivered their data as well, the relevant data are validated with respect to referential and semantic integrity.

If records violate the referential or semantic integrity rules of the SAP data model, they can either be withheld from the SAP system or be sent to an error topic.
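A minimal sketch of this error-topic pattern with the plain Kafka Java client (the topic names and the validation predicate are hypothetical examples, not the SAAPIX implementation) could look as follows:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import java.util.Properties;

// Hypothetical sketch: route valid records to the SAP inbound topic
// and invalid ones to a dedicated error topic.
public class ErrorTopicRouter {

    private final KafkaProducer<String, String> producer;

    public ErrorTopicRouter(String bootstrapServers) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrapServers);
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        this.producer = new KafkaProducer<>(props);
    }

    // Stand-in for the referential and semantic integrity checks.
    boolean isValid(String json) {
        return json.contains("\"businessPartnerId\"");
    }

    public void route(String key, String json) {
        String target = isValid(json) ? "fpsl-inbound" : "fpsl-errors";
        producer.send(new ProducerRecord<>(target, key, json));
    }
}
```

Keeping the rejected records in a separate topic means they stay inspectable in KaDeck instead of being silently dropped.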

 

Experience with KaDeck


The objective of this example was to send data via Kafka to an SAP back-end system (SAP Financial Products Subledger or SAP Bank Analyzer). In particular, financial transactions and business transactions were loaded.

The delivery system was simulated with JSON files, which were loaded into SAP via Kafka. The sample data were created according to the SAP data model, so no mapping was necessary.
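To give an idea of how such a simulation could be wired up (the directory, file layout, and topic name below are hypothetical, not the setup used in this project), a small Java producer might read the JSON files and publish them:

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

// Hypothetical sketch: publish one JSON file per record to a Kafka topic,
// simulating a delivery system.
public class JsonFileProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props);
             var files = Files.list(Path.of("sample-data"))) {
            files.filter(p -> p.toString().endsWith(".json"))
                 .forEach(p -> {
                     try {
                         String json = Files.readString(p);
                         // The file name serves as the record key here.
                         producer.send(new ProducerRecord<>(
                                 "financial-transactions",
                                 p.getFileName().toString(), json));
                     } catch (Exception e) {
                         throw new RuntimeException(e);
                     }
                 });
        }
    }
}
```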

The Kafka UI KaDeck was used to inspect the delivered data and to identify erroneous records.

Records whose attribute values did not fulfill the recognition rules of the SAP data model were identified as errors.

These records were easily identified using the various filters and preprocessors (advanced filters written in JavaScript or Java) that come with KaDeck.
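KaDeck's actual preprocessor API is not reproduced here; purely as an illustration of the kind of predicate such a filter implements (the field name and the set of accepted values are made up), consider:

```java
import java.util.Set;

// Hypothetical sketch of filter logic similar to what a KaDeck
// preprocessor might apply: flag records whose transaction type
// is not recognized by the SAP data model.
public class RecognitionRuleFilter {

    // Illustrative set of transaction types the SAP model accepts.
    private static final Set<String> KNOWN_TYPES =
            Set.of("PAYMENT", "FEE", "INTEREST");

    static boolean isError(String recordJson) {
        // Naive string check for the sketch; a real filter would parse the JSON.
        return KNOWN_TYPES.stream().noneMatch(recordJson::contains);
    }

    public static void main(String[] args) {
        System.out.println(isError("{\"type\":\"PAYMENT\"}")); // false: recognized
        System.out.println(isError("{\"type\":\"REFUND\"}"));  // true: unrecognized
    }
}
```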

 



Fig. 1 Filtering and analyzing data inside topics with KaDeck. 

 

What is planned?


So far, KaDeck has been used to inspect data in a single topic in order to identify semantic inconsistencies.

Another aspect is analyzing data spread across multiple topics in order to verify referential integrity. This will be possible in a future version of KaDeck. Then, for example, referential integrity between business partners, business partner roles, and financial transactions can be checked across topics, as sketched below.
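Independently of KaDeck, the same cross-topic check can be expressed with the Kafka Streams API. The topic names and the assumption that transactions are keyed by business partner ID are hypothetical (a real pipeline keyed by transaction ID would first need a re-keying step):

```java
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.KTable;
import java.util.Properties;

// Hypothetical sketch: join financial transactions (assumed keyed by
// business partner ID) against the business partner topic; transactions
// without a matching partner are routed to an error topic.
public class CrossTopicIntegrityCheck {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        KTable<String, String> partners = builder.table("business-partners");
        KStream<String, String> transactions =
                builder.stream("financial-transactions");

        // Left join keeps the transaction only when no partner was found.
        transactions.leftJoin(partners,
                        (tx, partner) -> partner == null ? tx : null)
                .filter((partnerId, tx) -> tx != null)
                .to("integrity-errors");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "integrity-check");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG,
                Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG,
                Serdes.String().getClass());

        new KafkaStreams(builder.build(), props).start();
    }
}
```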