
ABAP Integration – Replicating tables into SAP Data Hub via SAP LT Replication Server



In this blog post I will show you how you can use the “SLT Connector” operator to consume up-to-date business data within SAP Data Hub and SAP Data Intelligence.

Remark: For the purpose of this scenario, SAP Data Hub and SAP Data Intelligence can be treated exactly the same. For simplicity, I will mention SAP Data Hub only. If you would like to run this scenario with an SAP Data Intelligence system, the procedure is identical.

SAP Data Hub offers a built-in integration with SAP Landscape Transformation Replication Server (SLT), SAP’s real-time replication technology positioned for data replication out of SAP systems. The pre-delivered SLT Connector operator within SAP Data Hub handles the communication with the remote SLT component on the source system and enables delta replication of tables into SAP Data Hub based on SLT technology.

This functionality is part of the ABAP Integration within SAP Data Hub. If you are not familiar with the overall concept of the ABAP Integration, please have a look at the overview blog post for ABAP Integration.



For any SAP S/4HANA system on release 1610 or higher, you are good to start: the remote SLT component is included in the core of your SAP S/4HANA system.

If you run this scenario with an SAP Business Suite source system, however, you need to make sure that the non-modifying add-on DMIS 2018 SP02 (or DMIS 2011 SP17) is installed on that system.

Besides, you need to be able to establish an RFC connection from your SAP Data Hub system to the SAP system. Ideally, you have already created this connection via the SAP Data Hub Connection Management. For more details on the connectivity, have a look at the following note: 2835207 – SAP Data Hub – ABAP connection type for SAP Data Hub / SAP Data Intelligence

Use Case


We received many requests from customers and internal stakeholders with a use case quite similar to the one in the picture below.


There is flight data stored in a custom table (ZSFLIGHT) on an SAP Business Suite system that we would like to store on an S3 file system. It is important for our use case that the data in the S3 file is always up to date, reflecting any changes to the flight data in the source system.

The SAP Business Suite system has DMIS 2018 SP02 installed. It includes SLT functionality such as SLT’s read engine and the built-in change data capture mechanism that allows fetching deltas.

To provision the data to the S3 bucket, we will use an SAP Data Hub pipeline that reads the data via SLT into SAP Data Hub, transforms the data into a compatible format, and finally writes it to S3.
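Conceptually, the three pipeline stages (read via SLT, convert to CSV, write the file) can be sketched as a local Python analogue. The record fields and file name below are illustrative stand-ins, not the actual operator APIs:

```python
import csv
import io

# Illustrative flight records, standing in for what the SLT Connector
# would deliver from table ZSFLIGHT (field names are hypothetical).
records = [
    {"CARRID": "LH", "CONNID": "0400", "FLDATE": "20240101", "PRICE": "666.00"},
    {"CARRID": "AA", "CONNID": "0017", "FLDATE": "20240102", "PRICE": "422.94"},
]

def to_csv(rows):
    """Convert record dicts to CSV text, like the ABAP Converter step."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

def write_file(path, text):
    """Persist the converted payload, like the Write File operator targeting S3."""
    with open(path, "w", newline="") as f:
        f.write(text)

write_file("sflight.csv", to_csv(records))
```

The real pipeline performs the same three steps, only with the SLT Connector as source and an S3 bucket as sink.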



Prepare the source system (ABAP system)


First of all, we log on to the SAP Business Suite system to prepare SLT. Before our SAP Data Hub pipeline can communicate with SLT, we need an SLT configuration in place (think of it as a project entity inside SLT, essentially representing a combination of a source system connection and a target system connection).

  1. Go to the SLT Cockpit by entering transaction code LTRC in the command field. Within this environment you can find details of existing SLT data replications, and you can also create, monitor, and execute additional ones.
  2. Click “New” to create a new SLT configuration.
  3. Provide an SLT configuration name, for instance “SLT_DEMO”, and click “Next”.
  4. Specify the source system connection; in our case the RFC connection is “None”, as we want to load data out of the same system that SLT is running on. Click “Next”.
  5. Specify the target system connection to SAP Data Hub or SAP Data Intelligence: choose the option “Others” and select “SAP Data Hub / SAP Data Intelligence”.
  6. Define the SLT job settings. If you only plan a simple test of replicating a single table to SAP Data Hub, one job each for “Data Transfer Jobs” and “Calculation Jobs” is sufficient.
  7. Click “Next” and then “Create”.
  8. Note down the generated Mass Transfer ID. It uniquely identifies the SLT configuration and is required later when configuring the SLT Connector operator.


Implement the data pipeline (SAP Data Hub)

Having created the SLT configuration, we are ready to start building our pipeline in SAP Data Hub.

  1. Open the SAP Data Hub Modeler and click “+” to create a new pipeline.
  2. Make sure that all categories are selected for the operator repository (in particular, we need the category of ABAP operators).
  3. Drag and drop the SLT Connector operator onto your workspace. If you can’t find it, use the search functionality.
  4. Configure the SLT Connector operator: provide the Mass Transfer ID of our SLT configuration, the table that we would like to replicate, and the connection to the ABAP system. Ideally, the connection has already been created in the central Connection Management; if so, we can simply reuse it. If not, we can also specify the connection manually.
  5. Drag and drop the ABAP Converter operator onto the workspace. This operator is required to transform the table records coming from the SLT Connector operator into a standard string format (based on JSON, CSV, or XML).
  6. To configure the ABAP Converter, specify the same ABAP connection as before and define the format that we would like to use. In our case we will use CSV.
  7. Drag and drop the Write File operator. This operator will write the records of table SFLIGHT down to the S3 file system.
  8. Configure the Write File operator with the connection to the S3 storage and the target file path (in our case, sflight.csv).
  9. Connect the three operators into a pipeline and save it. Note that the SLT Connector currently offers two outports, “outRecord” and “outTable”. The outRecord outport passes the data record by record, whereas the outTable outport hands the data over in batches (one RFC call transfers a portion of records at once). Typically we will use the outTable outport, as this is faster.
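The difference between the two outports can be illustrated with a small sketch (illustrative only, not the actual operator protocol): sending records one by one means one call per record, while batching amortizes the per-call overhead, which is why outTable is typically faster.

```python
def send_record_by_record(records, send):
    """One call per record, like the outRecord outport."""
    for record in records:
        send([record])

def send_in_batches(records, send, batch_size=100):
    """A portion of records per call, like the outTable outport."""
    for i in range(0, len(records), batch_size):
        send(records[i:i + batch_size])

# 250 records arrive in 3 batched calls instead of 250 individual ones.
calls = []
records = list(range(250))
send_in_batches(records, calls.append, batch_size=100)
```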

Execute the data transfer

  1. Before starting the actual pipeline, let’s take a look at our SFLIGHT table in the ABAP system. We can check the data via transaction SE16: we see 13 records. As we did not implement any kind of filtering on the way to SAP Data Hub, we expect the very same data records at the end in our S3 file.
  2. We will also take a look at our S3 bucket. To browse the bucket for our file, we use the MinIO Browser. At the moment it looks like this:
  3. A file sflight.csv for our flight data has not yet been created (the other file, rating.csv, can be ignored).
  4. Now start the execution of the pipeline.
  5. Once the pipeline is running, we will see within the SLT Cockpit that the table replication is being scheduled.
  6. Having a look at the MinIO Browser, we see that the file has been created right away.
  7. Let’s download the file to verify the result.
  8. As we are also interested in delta data, the pipeline runs constantly. If the source table is changed, the delta immediately arrives in the S3 file. Now we will provoke changes to the source data to verify the delta replication as well: we delete and update a record via SE16 in the ABAP system.
  9. Checking the file again via the MinIO Browser, we can see from the timestamp in the “Last Modified” column that the file has already changed.
  10. Now let’s open the file. We can see that two additional records have been appended, one for the delete and one for the update operation. At the end of each record, we can also track whether the delta record results from an insert, update, or delete operation (see the D and U at the end). This is pretty cool, as it allows us to react to the operations differently. We might face scenarios where we are not interested in replicating deletes, but only updates and inserts. For such scenarios we could easily extend the pipeline with an additional operator that filters out certain records.
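Such a filter could look like the following Python sketch. The layout assumed here (operation flag in the last CSV column) mirrors the downloaded file above, but the exact column layout of your file may differ:

```python
def filter_deletes(csv_lines):
    """Drop delta records flagged as deletes (operation flag 'D' in the
    last column); keep inserts ('I') and updates ('U')."""
    kept = []
    for line in csv_lines:
        fields = line.rstrip("\n").split(",")
        if fields[-1] != "D":
            kept.append(line)
    return kept

# Illustrative delta records: the last column carries the operation flag.
lines = [
    "AA,0017,20240102,422.94,U",
    "LH,0400,20240101,666.00,D",
]
print(filter_deletes(lines))
```

In the pipeline this logic would live in an additional scripting operator between the converter and the Write File operator.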


Thank you for reading this blog post. Feel free to try it out on your own and share your feedback with us.


  • What would be the approach to connect S/4HANA Cloud over SLT to SAP Data Hub or SAP Data Intelligence? From the slides I got the impression this is feasible, but the explanation in this blog (and anywhere else) only refers to the S/4HANA on-premise case.

    • Hi Mark,

      unfortunately SLT is not allowed to call into S/4HANA Cloud systems to request data based on tables. So this scenario indeed only works for S/4HANA on-premise and Business Suite systems.

      What you can do, however, is extract data based on CDS Views via the CDS View Reader operator in SAP Data Hub / SAP Data Intelligence. This operator also supports initial load and delta (like SLT). You might want to check out this blog post for details on CDS View replication to SAP Data Hub / SAP Data Intelligence:



  • Hi Britta,

    I would like to know how change data capture worked in the above scenario, as you used only tables as source, not the ODP context.

    Can you point me to a blog post about replicating real-time S/4 data via ODP/ODQ and SLT to Data Services, with Data Services connecting to AWS S3 as target?

  • Hi,

    SAP LT Replication Server (SLT) comes with a CDC (change data capture) mechanism on table level.

    I would suggest searching for SAP LT Replication Server (SLT) or go to . I am pretty sure you will find a ton of material.


    Best, Tobias

  • Hi Britta

    Great post ! I am new to Data Intelligence, your team's blogs are helpful to gain insights into different operators.

    You mentioned the pipeline is running constantly to get delta data. What happens if it has to be stopped?

    When using SLT, we can suspend replication (e.g. for a maintenance event) and resume it without data loss. Is a similar feature available in Data Intelligence?

    I am trying to read delta data using the ABAP ODP Reader; all works as expected as long as the pipeline is constantly running. However, once it is stopped and started, a new delta reinitialization happens. I would like to reconnect to the old ODP subscription and pull data from the point where it was stopped.

    I added the 'subscription id' in the configuration, but the graph still creates a new subscription when it starts.

    We are on SAP DI 3.0 connected to a Suite on HANA.

    Any guidance on this is greatly appreciated.





  • Julius

    Unfortunately not yet, I am still trying. The ODP extractor works perfectly fine for subscriptions from BW; I am at a loss on what else to try in the DI operator settings.

    Can you please update here if you find a solution?





  • Julius

    I found the root cause of the issue. In our case it is a ‘Subscription ID’ config name mismatch between what is defined in the DI graph and what the ABAP code is expecting.

    In DMIS 2011 SP18, the ‘ODP Reader’ operator code expects the name to be ‘subscriptionID’, but the DI operator has the config name "Subscription ID". So no matter what the value is in DI, the ABAP code always treats it as if there were no subscription ID and starts a new delta init. I changed the config name in the JSON script version of the DI graph and delta mode worked as expected. I am working with SAP to figure out corrective notes to fix it.
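The rename in the JSON version of the graph can be sketched as follows; the graph structure shown is a simplified, hypothetical stand-in (real DI graph exports contain much more), only the key rename matters:

```python
import json

# Simplified stand-in for a DI graph export.
graph_json = '''
{
  "processes": {
    "odpreader1": {
      "config": {"Subscription ID": "MY_SUB_01"}
    }
  }
}
'''

graph = json.loads(graph_json)
for process in graph["processes"].values():
    config = process.get("config", {})
    if "Subscription ID" in config:
        # Rename to the key the DMIS 2011 SP18 ABAP code expects.
        config["subscriptionID"] = config.pop("Subscription ID")

print(json.dumps(graph))
```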


    Hope this helps.



  • Hello Britta,

    Really nice post. I am fairly new to this subject of Data Intelligence. Now I am building some pipelines and see that the SLT Connector you are using is deprecated.
    Using the new one is different, as you cannot specify the way you want to load the data (full load, replication or delta load).
    In the documentation of the new SLT Connector I cannot find the load methodology. I do see the following:
    "Currently, the only supported sequence for replicating the table in the mass transfer is to first do an initial load and then start the actual replication process."
    But here I am not sure how to do this.

    Can you please help a bit further on this?

    A nice feature of the new SLT Connector is that it also sends the metadata, which is good news.

    Regards, Raymond

    • Hello Raymond,

      for the new SLT Connector operator there is a new way to configure it. First, you define and select your connection. Afterwards, you select the version you want to use for the operator. Depending on the version you select, you may not need the ABAP Converter operator in the pipeline, as for the V2 operator the output is of type message. After selecting the version, additional fields to customize are displayed, among them the entry Transfer Mode. In the dropdown menu you can select between the different options: full load, replication, and delta load. The initial load transfers all data to your pipeline, the replication includes the initial load and then transfers the subsequent changes, and the delta load only transfers new data without an initial load.

      Out of these different options you can select what suits your use case best. In addition to the different settings, you now have the possibility to define a subscription to pick up your current state also from different graphs. To get access to the Data Intelligence documentation, you can use the following link. The sentence you referred to is from the SAP Data Hub documentation, which differs in some aspects.

      I hope this answered your question.

      Regards, Martin

      • Hello Martin,

        Thanks for your quick reply.

        Now I am using SAP Data Intelligence Cloud, version 2010.29.22, and only V1 of the SLT reader is available.

        Can you confirm that this might be due to having SP17 of DMIS2011_1_731?

        Regards, Raymond

  • Hi,

    We are on S/4 1809, where the ABAP CDS Reader with DI is not supported, so the SLT Connector is the way to go until we upgrade. Does DI include the license for SLT in this particular use case?



  • Hi Britta,


    I am facing a problem while doing the initial load with huge data (say VBAK): it sends only a few records to AWS and then a timeout error occurs.

    Anyhow, AWS is receiving the data as packets, but on the SAP side (BAdI BADI_IUUC_REPL_OLO_EXIT) the initial load has some 700K records.

    When calling method PUT_OBJECT of class /LNKAWS/CL_AWS_S3_BUCKET, it raises the exception C_STATUS_400_BAD_REQUEST. Any suggestions to handle this situation?

    Highly appreciate your inputs.

    • Hi Bala,

      thank you for raising this. Unfortunately, from the description alone I can't support with the root cause analysis. I suggest you create a ticket so that we can look at it in detail.

      Kind regards