
Data Monitoring with Kadeck in Apache Kafka for Integrating the SAP Financial Products Subledger

Motivation for data monitoring

In the following, the use of data monitoring is presented in the context of integrating an SAP back-end system for financial services, such as SAP FPSL (Financial Products Subledger). The architecture of the SAP system consists of layers in which data are stored, and of processes and methods that operate on these layers.

To integrate non-SAP systems with SAP FPSL, SAP delivers standard functions for writing data into the SAP layers. These functions perform mainly technical checks, so standard ETL solutions do not automatically provide evidence of the consistency of the delivered data.

This can lead to missing temporal integrity, especially when the delivery systems are located in different time zones and deliver their data at different times.

Furthermore, referential integrity is not covered. Here, more than one layer needs to be considered, and checks on key values have to be performed. These checks have to be done before the data are stored in SAP, so that a complete and consistent set of data can be evaluated within SAP.
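As a minimal sketch of such a referential-integrity check, the following Python snippet verifies that every delivered financial transaction references a business partner delivered in the same load. The record layouts and key names (partner_id, deal_id) are illustrative assumptions, not the actual SAP FPSL data model.

```python
# Hypothetical referential-integrity check before loading into SAP:
# every financial transaction must reference a known business partner.
# Field names are illustrative, not the real SAP FPSL model.

def check_referential_integrity(partners, transactions):
    """Return the transactions whose partner key has no matching partner."""
    known_keys = {p["partner_id"] for p in partners}
    return [t for t in transactions if t["partner_id"] not in known_keys]

partners = [{"partner_id": "BP001"}, {"partner_id": "BP002"}]
transactions = [
    {"deal_id": "FT100", "partner_id": "BP001"},
    {"deal_id": "FT101", "partner_id": "BP999"},  # dangling reference
]

orphans = check_referential_integrity(partners, transactions)
```

Only when this check yields no orphaned records can the complete set of data be evaluated safely within SAP.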

Finally, semantic validation is important as well, so that the values of the characteristics are correct and the processes and methods of the SAP system can run without errors.
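A semantic check of this kind can be sketched as a set of value-domain rules per record. The field names and permitted value sets below are assumptions for illustration; in practice they would come from the SAP data model.

```python
# Hypothetical semantic check: attribute values must come from the value
# domains the SAP processes expect. Domains and field names are illustrative.

VALID_CURRENCIES = {"EUR", "USD", "GBP"}
VALID_TXN_TYPES = {"100", "200"}  # e.g. disbursement, repayment

def semantic_errors(record):
    """Collect human-readable error messages for one delivered record."""
    errors = []
    if record.get("currency") not in VALID_CURRENCIES:
        errors.append(f"unknown currency: {record.get('currency')}")
    if record.get("txn_type") not in VALID_TXN_TYPES:
        errors.append(f"unknown transaction type: {record.get('txn_type')}")
    return errors
```

A record passes the semantic check only if its error list is empty.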

 

How can Kadeck help?

Kadeck by Xeotek is a data-centric monitoring solution that enables the user to view and analyze data and processes in Apache Kafka. Using Kadeck’s topic browser, data streams can be viewed, analyzed, and, where necessary, filtered.

Because the data streams are kept in topics, data can be extracted from the delivery systems at any time and sent to SAP. As soon as the other dependent systems have delivered their data as well, the relevant data can be validated for referential and semantic integrity.

If the user detects violations of the referential or semantic integrity rules of the SAP data model, the affected data can either be withheld from loading into the SAP system or be sent to an error topic.
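The routing decision described above can be sketched as a simple partitioning step. In a real pipeline the two outputs would be produced to Kafka topics; here they are plain lists, and the validation rule is passed in as a placeholder predicate.

```python
# Simplified sketch of load/error routing. A real implementation would
# produce the two groups to Kafka topics (load topic vs. error topic).

def route(records, is_valid):
    """Partition records into those to load into SAP and those for the error topic."""
    to_sap, error_topic = [], []
    for rec in records:
        (to_sap if is_valid(rec) else error_topic).append(rec)
    return to_sap, error_topic

records = [{"deal_id": "FT100", "ok": True}, {"deal_id": "FT101", "ok": False}]
to_sap, errors = route(records, lambda r: r["ok"])
```

Keeping rejected records in an error topic preserves them for inspection and later redelivery instead of silently dropping them.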

 

Experience with Kadeck?

The objective of this example was to send data via Kafka to an SAP back-end system (SAP Financial Products Subledger or SAP Bank Analyzer). In particular, financial transactions and business transactions were loaded.

The delivery system was simulated with JSON files, which were loaded into SAP via Kafka. The sample data were created according to the SAP data model, so no mapping was necessary.

The Kafka UI Kadeck was used to inspect the delivered data and to identify erroneous records.

Records whose attribute values did not fulfill the recognition rules of the SAP data model were identified as errors.

These records were easily identified using the various filters and preprocessors (advanced filters with JavaScript or Java) that come with KaDeck.
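KaDeck's advanced filters themselves are written in JavaScript or Java; the Python sketch below only mirrors the kind of predicate used to surface error records in the topic browser. The JSON field names and the currency domain are assumptions for illustration.

```python
# Mirrors the logic of an advanced filter over raw JSON record values:
# flag records whose currency is outside the expected value domain.
# Field names and the currency set are illustrative assumptions.
import json

def matches_error_filter(raw_value):
    """True for records that violate the (assumed) recognition rules."""
    rec = json.loads(raw_value)
    return rec.get("currency") not in {"EUR", "USD", "GBP"}

sample = [
    '{"deal_id": "FT100", "currency": "EUR"}',
    '{"deal_id": "FT101", "currency": "???"}',
]
flagged = [s for s in sample if matches_error_filter(s)]
```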

 

Fig. 1 Filtering and analyzing data inside topics with KaDeck. 

 

What is planned?

So far, Kadeck has been used to inspect data in a single topic, so that semantic inconsistencies could be identified.

Another aspect is analyzing data spread across multiple topics to verify referential integrity. This will be possible in a future version of KaDeck. Then, for example, referential integrity between business partners, business partner roles, and financial transactions can be checked across topics.

A separate module that allows orchestration of the data to be loaded will also be available soon.
