ABAP Integration – Replicating tables into SAP Data Intelligence via SAP LT Replication Server
In this blog post I will show you how you can make use of the “SLT Connector” operator to consume up-to-date business data within SAP Data Hub and SAP Data Intelligence.
Remark: For the purpose of this scenario, SAP Data Hub and SAP Data Intelligence can be treated exactly the same. For simplicity, I will only mention SAP Data Intelligence; if you would like to run this scenario on an SAP Data Hub system, the procedure is identical.
SAP Data Intelligence offers built-in integration with SAP Landscape Transformation Replication Server (SLT), SAP’s real-time replication technology positioned for data replication out of SAP systems. The pre-delivered SLT Connector operator within SAP Data Intelligence handles the communication with the remote SLT component on the source system and allows delta replication of tables into SAP Data Intelligence based on SLT technology.
This functionality is part of the ABAP Integration within SAP Data Intelligence. If you are not familiar with the overall concept of the ABAP Integration, please have a look at the overview blog post for ABAP Integration.
For any SAP S/4HANA system of release 1610 or higher, you are good to start: the remote SLT component is included in the core of your SAP S/4HANA system.
If, however, you run this scenario against an SAP Business Suite source system, you need to make sure that the non-modifying add-on DMIS 2018 SP02 (or DMIS 2011 SP17) is installed on that system.
In addition, you need to be able to establish an RFC connection from your SAP Data Intelligence system to the SAP system. Ideally, you have already created this connection via the SAP Data Intelligence Connection Management. For more details on the connectivity, have a look at the following note: 2835207 – SAP Data Hub – ABAP connection type for SAP Data Hub / SAP Data Intelligence.
We have received many requests from customers and internal stakeholders with a use case pretty similar to the one you see in the picture below.
There is flight data stored in a custom table (ZSFLIGHT) on an SAP Business Suite system that we would like to store on an S3 file system. It is important for our use case that the data in the S3 file is always up to date, reflecting any changes to the flight data in the source system.
The SAP Business Suite system has DMIS 2018 SP02 installed. This add-on includes SLT functionality such as SLT’s read engine and the built-in change data capture mechanism that allows fetching deltas.
To provision the data to the S3 bucket, we will use an SAP Data Intelligence pipeline that reads the data via SLT into SAP Data Intelligence, transforms it into a compatible format, and finally writes it to S3.
Prepare the source system (ABAP system)
First of all, we log on to the SAP Business Suite system to prepare SLT. Before we can communicate from our SAP Data Intelligence pipeline with SLT, we need an SLT Configuration in place (which you can think of as a project entity inside SLT, basically representing a combination of a source system connection and a target system connection).
- To do so, go to the SLT Cockpit by entering transaction code LTRC in the command field. Within this environment you can find details on existing SLT data replications, and you can also create, monitor and execute additional ones.
- Click “New” to create a new SLT Configuration.
- Provide an SLT Configuration name, for instance “SLT_DEMO”, and click “Next”.
- Specify the source system connection; in our case the RFC connection is “None” (as we want to load data out of the same system that SLT is running on). Click “Next”.
- Specify the target system connection to SAP Data Hub or SAP Data Intelligence. To do so, choose the option “Others” and specify “SAP Data Hub / SAP Data Intelligence”.
- Define the SLT job settings. If you plan just a simple test of replicating a single table to SAP Data Intelligence, it is fine to provide one job each for “Data Transfer Jobs” and “Calculation Jobs”.
- Click “Next” and then “Create”.
- Note down the Mass Transfer ID that has been generated. This ID uniquely identifies the SLT Configuration and is required later for the configuration of the SLT Connector operator.
Implement the data pipeline (SAP Data Intelligence)
Having created the SLT configuration, we are good to start building our pipeline in SAP Data Intelligence.
- Open your SAP Data Intelligence Modeler and click on the “+” to create a new pipeline.
- Make sure that all categories are selected in the operator repository (we especially need the category of ABAP operators).
- Drag and drop the SLT Connector operator to your workspace. If you can’t find it, use the search functionality.
- Now we need to configure the SLT Connector operator: provide the Mass Transfer ID of our SLT Configuration, the table that we would like to replicate, and the connection to the ABAP system. Ideally, the connection has already been created in the central Connection Management; if so, we can simply reuse it. If not, we can also specify the connection manually.
- Drag and drop the ABAP Converter operator to the workspace. This operator is required to transform the table records coming from the SLT Connector operator into a standard string format (based on JSON, CSV or XML).
- To configure the ABAP Converter, we specify the same ABAP connection as before and define the format we would like to use. In our case, this is CSV.
- Now drag and drop the Write File operator. This operator writes the records of table SFLIGHT to the S3 file system.
- The Write File operator needs the following configuration values.
- Connect the three operators into a pipeline and save it. Note that the SLT Connector currently offers two outports, “outRecord” and “outTable”. The outRecord outport passes the data record by record, whereas the outTable outport hands the data over in bulks (one RFC call takes a portion of records in one go). Typically we use the outTable outport, as this is faster.
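To make the difference between the two outports tangible, here is a minimal Python sketch with hypothetical record payloads (the actual wire format of the SLT Connector outports is defined by the operator and may differ; the field values below are invented for illustration):

```python
# Hypothetical payloads: outRecord delivers one record per event,
# outTable delivers a whole portion of records per RFC call.
single_record = ["AA", "0017", "20200101", "421.53"]   # one outRecord event
bulk = [                                               # one outTable event
    ["AA", "0017", "20200101", "421.53"],
    ["LH", "0400", "20200102", "669.00"],
]

def handle_record(record):
    """Called once per record (outRecord style)."""
    return ",".join(record)

def handle_bulk(records):
    """Called once per RFC portion (outTable style) - fewer calls, same data."""
    return [",".join(r) for r in records]

print(handle_record(single_record))
print(handle_bulk(bulk))
```

With the same amount of data, the bulk variant triggers far fewer downstream events, which is why outTable is usually the faster choice.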
Execute the data transfer
- Before starting the actual pipeline, let’s take a look at our SFLIGHT table in the ABAP system. We can check the data via transaction SE16: we see 13 records. As we did not implement any kind of filtering on the way to SAP Data Intelligence, we expect the very same data records at the end in our S3 file.
- We will also take a look at our S3 bucket. To browse the bucket for our file, we use the MinIO Browser. At the moment it looks like this:
- A file sflight.csv for our flight data has not been created yet (the other file, rating.csv, can be ignored).
- Now start the execution of the pipeline.
- Once the pipeline is running, we can see within the SLT Cockpit that the table replication is being scheduled.
- Taking a look at the MinIO Browser, we see that the file has been created right away.
- Let’s download the file to verify the result.
- As we are also interested in delta data, the pipeline runs constantly. If the source table is changed, the delta immediately arrives in the S3 file. Now we will provoke changes to the source data to verify the delta replication as well: we delete one record and update another via SE16 in the ABAP system.
- Checking the file again via the MinIO Browser, we can see that the file has already changed, as indicated by the timestamp in the “Last Modified” column.
- Now let’s open the file. We can see that two additional records have been appended, one for the delete and one for the update operation. At the end of each record, we can also track whether the delta record results from an insert, update or delete operation (see the D and U at the end). This is pretty cool, as it allows us to react to the operations differently. We might face scenarios where we are not interested in replicating deletes, but only updates and inserts. For such scenarios we could easily extend the pipeline with an additional operator that filters out certain records.
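As an illustration, such a filter could be sketched in plain Python (the column layout and the position of the operation flag in the last column are assumptions for this example; in a real pipeline this logic would live in a custom Python operator between the ABAP Converter and the Write File operator):

```python
import csv
import io

def filter_deletes(csv_text):
    """Drop delta records whose operation flag (assumed to be the last
    CSV column: 'I' insert, 'U' update, 'D' delete) marks a delete."""
    out = io.StringIO()
    writer = csv.writer(out, lineterminator="\n")
    for row in csv.reader(io.StringIO(csv_text)):
        if row and row[-1] != "D":
            writer.writerow(row)
    return out.getvalue()

# Invented sample delta records in the assumed layout.
sample = (
    "AA,0017,20200101,I\n"
    "LH,0400,20200102,D\n"
    "AA,0017,20200101,U\n"
)
print(filter_deletes(sample))  # keeps only the insert and the update record
```

The same pattern could keep only deletes, or route inserts, updates and deletes to different targets.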
Thank you for reading this blog post. Feel free to try it out on your own and share your feedback with us.
What would be the approach to connect S/4HANA Cloud via SLT to SAP Data Hub or SAP Data Intelligence? From the slides I got the impression this is feasible, but the explanation in this blog (and anywhere else) only refers to the S/4HANA on-premise case.
Unfortunately, SLT is not allowed to call into S/4HANA Cloud systems to request data based on tables. So this scenario indeed only works for S/4HANA on-premise and Business Suite systems.
What you can do, however, is extract data based on CDS views via the CDS View Reader operator in SAP Data Hub / SAP Data Intelligence. This operator also supports initial load and delta (like SLT). You might want to check out this blog post for details on CDS view replication to SAP Data Hub / SAP Data Intelligence: https://blogs.sap.com/2019/10/29/abap-integration-replicate-abap-cds-views-via-sap-data-hub/
Very useful blog Britta! Thanks a ton:-)
I would like to know how change data capture worked in the above scenario, as only tables were used as source, not the ODP context tables.
Can you point me to any blog post on replicating real-time S/4 data via ODP/ODQ SLT to Data Services, with Data Services connecting to AWS S3 as target?
SAP LT Replication Server (SLT) comes with a CDC mechanism on table level.
I would suggest searching for SAP LT Replication Server (SLT) or going to help.sap.com. I am pretty sure you will find a ton of material.
Great post ! I am new to Data Intelligence, your team's blogs are helpful to gain insights into different operators.
You mentioned the pipeline is running constantly to get delta data. What happens if it has to be stopped?
When using SLT, we can suspend replication (e.g. for a maintenance event) and resume it without data loss. Is a similar feature available in Data Intelligence?
I am trying to read deltas using the ABAP ODP Reader. All works as expected as long as the pipeline is constantly running. However, once it is stopped and started, a new delta reinitialization happens. I would like to reconnect to the old ODP subscription and pull data from the point when it was stopped.
I added the 'Subscription ID' in the configuration; the graph still creates a new subscription when it starts.
We are on SAP DI 3.0 connected to a Suite on HANA.
Any guidance on this is greatly appreciated.
I am experiencing the same problem with the ABAP ODP reader. The parameter "subscription ID" has no effect. Have you been able to resolve this?
After I submitted my analysis (see the posts below for the root cause), SAP released new notes. Please install note 2979473 to fix the issue.
Thanks very much for investigating and for fixing the issue
Unfortunately not yet, I am still trying. The ODP extractor works perfectly fine for subscriptions from BW; I am at a loss on what else to try in the DI operator settings.
Can you please update here if you find a solution?
I found the root cause of the issue. In our case it is a 'Subscription ID' config-name mismatch between what is defined in the DI graph and what the ABAP code is expecting.
In DMIS 2011 SP18, the 'ODP Reader' operator code expects the name to be 'subscriptionID', but the DI operator has the config name "Subscription ID". So no matter what the value is in DI, the ABAP code always treats it as if there were no subscription ID and starts a new delta init. I changed the config name in the JSON script version of the DI graph and delta mode worked as expected. I am working with SAP to figure out corrective notes to fix it.
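To illustrate the workaround, this is roughly what the renamed key looks like in the graph's JSON (the surrounding structure and the subscription value are simplified, hypothetical placeholders; the point is the key name 'subscriptionID' instead of "Subscription ID"):

```json
{
  "operator": "ODP Reader",
  "config": {
    "subscriptionID": "MY_DELTA_SUBSCRIPTION"
  }
}
```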
Hope this helps.
Really nice post. I am fairly new to this subject of Data Intelligence. Now I am building some pipelines and see that the SLT Connector you are using is deprecated.
Using the new one is different, as you cannot specify the way you want to unload the data (full load, replication or delta load).
In the documentation of the new SLT Connector I cannot find the unload methodology. I do see the following:
"Currently, the only supported sequence for replicating the table in the mass transfer is to first do an initial load and then start the actual replication process."
But I am not sure how to do this.
Can you please help a bit further with this?
A nice feature of the new SLT Connector is that it also sends the metadata, which is good news.
For the new SLT Connector operator there is a new way of configuring it. First, you need to define and select your connection. Afterwards, you select the version you want to use for the operator. Depending on the version you select, you may not need the ABAP Converter operator in the pipeline, as for the V2 operator the output is of type message. After selecting the version, additional fields to customize are displayed, among them the entry Transfer Mode. In the dropdown menu you can select between the options Initial Load, Replication and Delta Load: the initial load transfers all data to your pipeline, the replication includes the initial load and then transfers the subsequent changes, and the delta load only transfers new data without an initial load.
Out of these options you can select what suits your use case best. In addition to the different settings, you now have the possibility to define a subscription to pick up your current state, also from different graphs. To access the SAP Data Intelligence documentation, you can use the following link. The sentence you referred to is from the SAP Data Hub documentation, which differs in some aspects.
I hope this answered your question.
Thanks for your quick reply.
I am using SAP Data Intelligence Cloud, version 2010.29.22, and only V1 of the SLT reader is available.
Can you confirm that this might be due to having SP17 of DMIS2011_1_731?
The V2 operator can be used when SP19 is installed for the DMIS system. The details can be found in this note.
We are on an S/4HANA 1809, where the ABAP CDS Reader with DI is not supported, so the SLT Connector is the way to go until we upgrade. Does DI include the license for SLT in this particular use case?
Hi David, yes, the SLT Runtime License is included in DI for this use case. Please also check out the SLT Licensing Note.
I am facing a problem when doing the initial load with a huge data volume (say, table VBAK): it sends only a few records to AWS and then runs into a timeout error.
AWS receives the data as packets, but on the SAP side (BAdI BADI_IUUC_REPL_OLO_EXIT) the initial load has some 700K records.
When calling method PUT_OBJECT in class /LNKAWS/CL_AWS_S3_BUCKET, it raises the exception C_STATUS_400_BAD_REQUEST. Any suggestions on how to handle this situation?
I highly appreciate your inputs.
Thank you for raising this. Unfortunately, from the description I can't support you with a root cause analysis. I suggest you create a ticket so that we can look at it in detail.
Have you got this issue resolved? I am interested to know the approach suggested by SAP for this.
Hi Britta, very useful blog, thanks a ton.
I want to replicate tables into SAP Data Intelligence via SAP LT Replication Server from an on-premise ERP ECC system with a HANA database.
I already installed the SLT plugin; however, I have problems creating the RFC connection. In the step "Specify target system" I do not have the option for the scenario "SAP Data Hub / SAP Data Intelligence" 🙁
as seen in the image.
Does someone know what this is and how I can solve it? Thanks.
Can you confirm, following the prerequisites for the above use case?
DHP is the correct option for SAP Data Intelligence.
I don't know why there is no description text for the keys in your system.
Thank you for your answer! What I found is that, since I was logging on to the SAP GUI in a language other than English, I was not getting the correct translation.
I want to connect SAP tables into SAP Data Intelligence via SAP LT Replication Server, from an on-premise ERP ECC system.
But for me, the "Others" option is not visible among the target radio buttons.
Can you provide the necessary steps?
I am using SLT to replicate data from SAP ECC to DI. Is there any way to identify whether a record I receive from the replication corresponds to an INSERT or UPDATE in the source, or to obtain in some way the timestamp at which the change occurred in the source system, in this case SAP ECC?
Thanks a lot.
Yes. Once you enter the delta replication phase for a table in replication, the delta record that is transferred to SAP DI contains a "D", "U" or "I", depending on whether the change results from a delete, update or insert operation.
Regarding the timestamp information, you may want to add a custom timestamp field to your target structure. You will need to implement an SLT rule to populate this field. I have written a guide on that topic you might want to have a look at.
Hi Britta, thanks for your answer. I added the timestamp field.
To give some context: we are using SAP DI and the SLT operator to replicate data from ERP ECC; the ERP database is HANA (DMIS 2011 SP20).
At the moment we are doing the initial load. The pipeline starts to run, but after a while it ends with status "finished", while the ERP generates the error "Error for BAdI method WRITE_DATA_FOR_INITIAL_LOAD; error code 1 for table".
Has anyone encountered this error, and how can I fix it? I appreciate in advance any help you can give me. Below are images of the configuration in LTRS and the error generated.
Hi Cesar Sanchez Gutierrez ,
There is SAP note 3084684 to tackle this issue. Did you try implementing it?
There is a BTP service called "Object Store" that generates a local S3 on BTP (see
Can SAP DI access this S3 directly, or via the "Object Store" API?
Is there a way to generate a delta CSV file in batch mode while using ODP Reader V2 or SLT Connector V2, given that the lastBatch flag does not come as true? If the upstream process needs a new CSV file with a timestamp in its name created from DI, streaming a delta would not fit the need. Do you have any recommendation for meeting this scenario?
Great blog and very helpful indeed. I was able to follow along and get it working. I have DI 3.2 (on-premise) and SLT DMIS 2011 SP21. I also wanted to explore the option of replicating the table once and sending it to many targets. I understood that I can achieve this with ODQ (I have ODQ set up in SLT). So from ECC to the SLT ODQ, and then I need DI to pull from the SLT ODQ to multiple targets. Can this be achieved?
This would allow the table to be replicated once but transferred to many targets, the idea being that we can add additional targets later too.
I really appreciate your time in responding to this.
G'day! Could you please check the question below and revert?
Thank you in advance! I appreciate your valuable input on the above question.