
Hana Smart Data Integration – Overview

This post is part of a series:

Hana Smart Data Integration – Overview

Prior to Hana SP9, SAP suggested using different tools to get data into Hana: Data Services (DS), System Landscape Transformation (SLT), Smart Data Access (SDA), Sybase Replication Server (SRS), Hana Cloud Integration – DS (HCI-DS), ... to name the most important ones. You used Data Services for batch transformations of virtually any source, SLT for realtime replication of a few supported databases with little to no transformation, HCI-DS when it came to copying database tables into the cloud, etc.
With the Hana Smart Data Integration feature you get all of this in one package, plus any combination of these capabilities, when loading a single Hana instance.

The user, however, has very simple requirements when it comes to data movement these days:

  • Support batch and realtime for all sources
  • Allow transformations on batch and realtime data
  • There should be no difference between loading local on-premise data and loading over the Internet into a cloud target other than the protocol being used
  • Provide one connectivity that supports all
  • Provide one UI that supports all

The individual tools like Data Services still make sense for all cases where the requirement matches the tool's sweet spot. For example, a customer not running Hana, or for whom Hana is just another database, will always prefer a best-of-breed standalone product like Data Services. Customers who need to merge two SAP ERP company codes will use SLT, which is built for exactly this use case. All of these tools will continue to be enhanced as standalone products; in fact, this is the larger and hence more important market. But when the goal is to get data into Hana and to use the Hana options, it becomes hard to argue why multiple external tools should be used, each with its own connectivity and capabilities.

In addition, the Hana SDI feature tries to bring the entire user experience and effectiveness to the next level, or at least lays the groundwork for that.

Designing Transformations

Let’s start with a very simple dataflow: I want to read news from CNN, check whether the text “SAP” is part of the news description, and put the result into a target table. Using Hana Studio, I create a new Flowgraph Model repo object and drag in the source, a first simple transformation, and the target table. Then everything is configured and can be executed. So far nothing special; you would do the same with any other ETL tool.


But now I want to deal with changes. With any ETL tool on the market today, I would need to build another dataflow handling changes for the source table, possibly even multiple dataflows in case deletes have to be processed differently. And how do I identify the changed data in the first place?


With Smart Data Integration, all I do in the above dataflow is check the realtime flag; everything else happens automatically.

How are changes detected? They are sent in realtime by the adapter.

What logic needs to be applied on the change data in order to get it merged into the target table? The same way as the initial load did, considering the change type (insert/update/delete) and its impact on the target.
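As a rough sketch, the effect of a single change row on a simple table copy can be expressed in plain SQL like this (table and column names are illustrative, not SDI internals; SDI derives the equivalent logic per transformation automatically):

```sql
-- Illustrative only: how one change row maps onto the target table,
-- depending on its change type.
UPSERT T_NEWS (ID, DESCRIPTION, HAS_SAP)
       VALUES (?, ?, ?) WITH PRIMARY KEY;  -- change type insert/update
DELETE FROM T_NEWS WHERE ID = ?;           -- change type delete
```

For a plain 1:1 copy this mapping is simple; the interesting part is deriving the equivalent logic through filters, joins, and aggregations.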

The latter is very complex of course, but when looking at the kinds of dataflows users have designed for such cases, we were able to come up with algorithms for each transformation.

The complexity of what happens under the covers is significant, but that is the point. Why should I build that for each table when it can be automated for most cases? Even if it works for only 70% of the cases, that is already a huge time saver.

Ain’t that smart?

The one thing we have not been able to implement in SP9 is joins, but that was just a matter of development time. The algorithms exist already and will be implemented next.


How does Hana get the news information from CNN? Via a Java adapter. That is the second major enhancement we built for SP9: every Java developer can now extend Hana by writing new adapters with a few lines of code. The foundation of this feature is Hana Smart Data Access, which lets you create virtual tables, views on top of remote source tables, and read data from there.

For safety reasons these adapters do not run inside Hana but are hosted on one or more external computers running the Hana Data Provisioning Agent and the adapters. This agent is a very small download from the Service Marketplace and can be installed on any Windows/Linux computer. Since the agent talks to Hana via either TCP or https, it can even be installed inside the company network and load into a Hana cloud instance!

Using that agent and its hosted adapters, Hana can browse all available source tables (well, in the case of an RSS feed there is just a single table per RSS provider) and a virtual table can be created based on that table structure.

Now that is a table just like any other: I can select from it using SQL, calculation views, or anything else, and will see the data as provided by the adapter. The user cannot see any difference from a native Hana table, other than that reading remote data will be slower than reading data stored in Hana.
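As a sketch, creating and querying such a virtual table could look like this (the remote source and table names are illustrative, not the actual CNN feed structure; the exact four-part name depends on the adapter):

```sql
-- Create a virtual table on top of a remote source table exposed by an adapter:
CREATE VIRTUAL TABLE VT_CNN_NEWS
  AT "RSS_SOURCE"."<NULL>"."<NULL>"."CNN_TOP_STORIES";

-- From here on it behaves like any other table:
SELECT TITLE, DESCRIPTION
  FROM VT_CNN_NEWS
 WHERE DESCRIPTION LIKE '%SAP%';
```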

That covers the batch case and the initial load.

For realtime, Hana was extended with a new SQL command: “create remote subscription <name> using (<select from virtual table>) target <desired target>”. As soon as such a remote subscription is activated, the adapter is asked to listen for changes in the source and send them as change rows to Hana for processing. For RSS, changes are received by querying the URL frequently and pushing all found rows into Hana. Other sources might support streaming data directly, but that is up to the adapter developer. As seen from Hana, the adapter provides change information in realtime; how the adapter produces it is not Hana’s concern.
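Following the command pattern quoted above, the full lifecycle could look roughly like this (a hedged sketch; subscription, source, and target names are illustrative and the exact syntax may vary by revision):

```sql
-- Subscribe to changes on the virtual table and apply them to a target table:
CREATE REMOTE SUBSCRIPTION SUB_CNN_NEWS
  AS (SELECT * FROM VT_CNN_NEWS)
  TARGET TABLE T_CNN_NEWS;

-- Activation is a two-step process: QUEUE starts capturing changes so the
-- initial load can run without losing rows, DISTRIBUTE starts applying them.
ALTER REMOTE SUBSCRIPTION SUB_CNN_NEWS QUEUE;
-- ... run the initial load here ...
ALTER REMOTE SUBSCRIPTION SUB_CNN_NEWS DISTRIBUTE;
```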

This concludes a first overview about Hana Smart Data Integration. In subsequent posts I will talk about the use cases this opens up, details of each component and the internals.

There is also a video from the Hana Academy on youtube:

  • Excellent blog. It looks like we no longer need SLT to replicate data in realtime into the HANA sidecar. Instead, it can be done natively with HANA using Smart Data Integration capabilities. Correct?

    • Hi Abhishek,

      Not exactly. This realtime replication is like the realtime jobs we have in the Data Services stack. The realtime option is there to load the delta into the target. However, we still have SLT in SAP to replicate data into the HANA database.

      Kind Regards,


      • I'd like to be more precise here:

        SLT is one option to identify changes in SAP. It is used by the SLT frontend and can write data into Hana directly. But it can be used as source by Data Services or by SDI (once the SLT adapter for SDI is ready).

        Hana SDI is the framework to receive realtime changes, transform the data on the fly and load into Hana (among other things). So way more than a simple replication SLT can do by itself. Plus SDI has all the UIs for setup, creation, administration, monitoring in Hana, reusing the Hana features for that. These are not UIs that have been added to Hana Studio, we are using the architecture and the features Hana provides for this. Easy to do since SDI is a Hana native feature.

        If you use SLT for all, all the monitoring, setup etc is done in the SLT SAPGUI application (although some Integration is done in Hana Studio).

        Now it depends on what you need. If all you are doing is SAP to Hana 1:1 replication and ABAP is your way to go, all the nice features SDI provides, all the integration might not be worth that much, hence SLT by itself is fine. But if you want to do just a little bit more, I would use a SLT (or similar) adapter to provide the changes and Hana transformations to consume the changes.

        • One part of HANA SDI is the framework to receive real time changes. Second part of HANA SDI is DP agent with different adapters whose purpose is to identify changes and send them to HANA. These adapters are based on smart data access technology.

          So my understanding is that these adapters (SDA-based) and the SLT approach to identifying changes are completely different (from a technology point of view).

          Now there are ECC Adapters for oracle, DB2 et al. These can identify the changes.

          Of course, when SLT is used, it is an ABAP system with RFC to the source ABAP system (I'm not considering SLT for non-ABAP systems here).

          Isn't this based on two different technologies (SLT with ABAP and SDI adapters based on Java)?

          So basically this sounds to me like competing options. Of course the choice could be made depending on the criteria you mentioned.

          For the purpose of pure replication (without any of the fancy SDI options), can either option be used?

          The main driver for this is to understand why, for the HANA Live sidecar scenario, SAP is pushing SLT when SDI can be used instead. Why cause confusion and not keep it simple?

          Werner, I would appreciate your opinion.

          • I guess the answer is history.

            In the sidecar scenario you do not want transformations; it is 1:1 replication only. SLT is an established product for that, and it was the only product available until last November.

            So in that sense, there is no compelling event to change something.

            From my perspective, customers will still find the options SDI provides interesting. In the sidecar scenario they might like the fact that SDI does not require the ERP to be brought into a read-only mode during the initial load, and they might prefer the higher speed of the initial load. And maybe they want to do some transformations later on.

            So the next step will be to configure SLT to send the change rows not via a secondary database connection to Hana, but instead by calling an RFC server which would be the SLT Adapter for SDI. Then you have the advantages of both worlds. On the SAP system you stick to SLT but you can control all via Hana thanks to SDI and you get the full advantages of SDI.

            Later you might, or might not, find that another adapter provides you with the same ERP changes but has even more advantages: transactional consistency, less performance impact on the source, lower latency, easier maintenance on the SAP side. If that is the case, you simply swap out the SLT adapter for the new adapter.

            Or in short: Yes, there should be just one solution, but SLT is too mature and often all you need, hence we are not there yet.

          • Hello Werner,

            Typically SLT is installed on a separate ABAP server, and this is an additional cost (especially hardware). Since SDI is installed directly on HANA, there is some cost saving. For demo scenarios involving the HANA sidecar model, do you see any issues if SDI is used for replication instead of SLT? The SAP ECC system runs on DB2, and currently an SDI adapter is available for this.



  • How do we promote HANA EIM objects such as data flows from dev to QA to prod? Do we use HANA Application Lifecycle Manager, Solution Manager or some other tool?

    • They are all Hana Repo objects like all others as well.

      One exception is the remote source, that should not be transportable as it is created just once and with different settings for dev/prod.

      A virtual table is currently not covered by CDS, due to an oversight; this needs to be corrected asap.

      But hdbtaskflow, hdbrepflow,... are repo objects.

  • Hello Experts,

    It is an excellent article and we've started using flowgraphs for an integration project.

    We are trying to do data transformation using SDI flowgraphs and we are looking for some inputs to handle some use cases in the right way.

    1. How do we handle conditional transformations? If a column in the source table has value 'a' then the value in the target table should be 1, and if the value is 'b' the target value should be 2, etc. Basically we are not sure how to handle transformations based on if-else or case statements.

    2. If we have to join multiple tables, is there any difference between using multiple join nodes and adding multiple tables to a single join node? Will both be optimized by Hana, or will there be a performance difference? What happens behind the scenes in each scenario?

    3. How do we handle record-level error logging? For example, if I have 10 records and 2 of them fail due to a constraint violation in the target table, I would like the 8 records to go to the target table and the 2 failing records to be logged in an error table with their errors, instead of the entire task stopping with no record moving to the target table.

    4. What is the best way to debug issues in flowgraphs? I  couldn't find any material in this regard.

    I did post these questions as a separate thread ("Hana SDI Flowgraph related questions") and am waiting for a response. I thought of posting them here as well since they relate to the article. Please remove this if I violated any norm by posting it here again.

    Appreciate your response and direction.



    • Regarding the Join question here is a bit more detail. If you express the multi-table join as a single join or multiple 2 way joins with no other operations in between them, then the two *should* likely be optimized similarly. The multiple joins should be rolled up into a single join and both end up being executed by the SQL engine as well.

      Assuming that the two situations that you're comparing are as follows, I'd expect performance to be similar between the two.



      Join1 -> Join2 -> Join3
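In SQL terms, the equivalence being described is between that chain of two-way join nodes and one statement containing all the joins, which the optimizer can then reorder and parallelize freely. A sketch with illustrative table names:

```sql
-- Join1 (A with B), Join2 (result with C), Join3 (result with D)
-- chained in the flowgraph should collapse into a single statement:
SELECT *
  FROM A
  JOIN B ON A.ID   = B.A_ID
  JOIN C ON B.ID   = C.B_ID
  JOIN D ON C.ID   = D.C_ID;
```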

      • Hi Mitch,

        so you are saying that the data transformation actually understands that two join nodes are directly connected and therefore creates a single SQL command for the full join set?

        As SQL defines joins as operations between at most two tables, any higher number of joins in a statement will internally be evaluated like a sequential execution of the joins. With HANA, however, the execution will not necessarily be sequential but parallel (where possible).

        This only works on a single statement level. As soon as the join nodes execute their own SQL commands separately and hand over the result set of the first node to the next, this will perform much worse than doing the same in a single statement.

        - Lars

        • Hello Lars,

          It's not the data transformation itself which understands two+ join nodes being directly connected, but rather the various optimizers in play (CE and SQL) which analyze and may combine these operations. You're correct that if the individual operations do not get combined, they will perform worse than the single statement would. However, in general, if there's nothing preventing the optimizers from combining the operations they should be combined into a single statement rather than being executed as individual statements.


          • Yes Lars, that's absolutely correct. The .hdbflowgraph ends up being transformed in its entirety into a calculation scenario and will be executed in the calculation engine, including any of the DQ operations. This is where the CE optimizer (and also the SQL optimizer) play a role in combining the various operations.

            The main difference between these and other calculation scenarios is that with the execution of these scenarios you also get information about which operations have run, how many records were processed, what the current execution status is, etc.

  • Hi Werner,

    I'm assuming all the SDI features explained in this article, like flowgraphs, SDA, etc., are part of the Hana base and available on all the different kinds of Hana offerings like SAP Cloud, on-premise, HanaOne, etc. Is this a valid assumption?



    • Hi Chandru,

      Let's make a distinction between "technically available" and "licensed".

      Since SDI is part of the core HANA installation, it's technically available wherever you have HANA SP09 (or higher): flowgraphs, replication tasks, the DP server, etc. are all part of your core HANA install, HANA Studio, and the HANA Web IDE. Only the Data Provisioning Agent and the monitoring DU are separate downloads.

      But from a license point of view, SDI is NOT part of every HANA package. Below is the current state (July 2015) - keep in mind that licensing and packaging can change over time:

      - SDI is part of the high end "HANA Enterprise Edition".

      - For other HANA onPremise editions, you can buy the "HANA EIM" option which gives you SDI + SDQ.

      - For the HANA Cloud Platform, there is currently no package available yet that includes SDI, but target is to have this ready later this year.

      Hope this helps.


      Product manager SAP HANA Cloud Integration for data services and HANA EIM (smart data integration).

      • Hello Ben

        I was reading the article and got kind of confused with the comments. In the beginning of the article, Werner Daehn  says that

        "...HCI-DS when it comes to copying database tables into the cloud etc.

        With the Hana Smart Data Integration feature you get all in one package plus any combination."

        I have a scenario where I have to replicate data from an on premise SQL Server to a database in Hana Cloud Platform. According to Werner, I'd believe that SDI would be the best option. But, in your comment, you say that

        "- For the HANA Cloud Platform, there is currently no package available yet that includes SDI, but target is to have this ready later this year."

        So it's not clear to me whether SDI would be an option for this scenario, or whether I'd still have to use HCI-DS or some other option. The variety of tools gets me a little confused. Could you explain further and advise on the best option for that specific scenario?


        Luis Becker

        • Luis,

          it's a matter of timing. Today (end of August), there is no package you can buy from SAP that enables SDI on your HANA instance on HCP. So today, HCI-DS would be your only option.

          This is not a technical limitation (I have done demos using SDI on HCP without any issues, even specifically loading from SQL Server); it's the commercial/pricing/legal stuff that just takes time. If you ask again in a few months, the recommendation will definitely be to use SDI for loading/extracting data to/from HCP.


          • Hello, Ben

            Could you please share the current state of availability of SDI on HANA instances on HCP?

            If SDI is still not available for HANA in the cloud, could you point me to a roadmap or something similar that shows when SDI will be launched for HCP?


            Best, Igor

          • Igor, we are still working on the commercialization for SDI on HCP.

            Technically it is already available (it's part of ANY HANA SPS09 or SPS10 instance), but we need to add it into HCP license packages so that customers and partners can buy it and get access to the mandatory agent download package on SMP.

            Target is now Q1 next year. I'm keeping my fingers crossed...

  • Trying out the SDI agents, we cannot get either the Windows or the Linux agent to register in cloud mode (non-cloud mode is fine); we get the error

    ADMIN_SET_CONFIG_PROPERTIES Http error Internal Server Error

    Can't see anything about this error anywhere; has anyone else seen it?

      • Development said:

        For this ADMIN_SET_CONFIG_PROPERTIES Http error Internal Server Error

        The actual error message should be in the agent logs.

        • Thanks Werner,

          From the log I see the entries below; am I maybe missing an authorisation on my user? We'll also raise a ticket in the SAP Support Portal.

          2015-07-31 14:28:33,734 [DEBUG] SocketConnector.write  - >>PING_AGENT:SDA:1:null:null:CloudTest3:null:

          2015-07-31 14:28:33,734 [DEBUG]  - <<SUCCESS_MSG:SDA:1:::CloudTest3::

          2015-07-31 14:28:33,734 [DEBUG] SocketConnector.write  - >>ADMIN_REGISTER_ADAPTER:SDA:1:TwitterAdapter:null:CloudTest3:null:

          2015-07-31 14:28:33,750 [DEBUG]  - <<ERROR_MSG:SDA:1:TwitterAdapter::CloudTest3::

          2015-07-31 14:28:33,750 [ERROR]  - Http Error::Forbidden

          2015-07-31 14:30:03,937 [ERROR] View.handleRegisterAdapter  - Failed to register adapter.

          Adapter 'TwitterAdapter' could not be registered with HANA server. Request Failed for ADMIN_REGISTER_ADAPTER Context: Http Error::Forbidden

          • Are there any log entries before this indicating that the agent was able to successfully register with the server?

            Usually REGISTER_AGENT message should appear at the very beginning.

            If possible please send me the entire log file.

          • I got a bit further; here's the latest error (exception 151044):

            2015-08-20 11:33:04,125 [INFO ][255]    - TwitterAdapter Version: 1.1.2 with SDKVersion: 2 registered.

            2015-08-20 11:33:05,282 [INFO ][138]    - Communication to XS Sever is successful at http://<redacted>.com:8000/sap/hana/im/dp/admin/dpadmin.xsjs

            2015-08-20 11:33:05,297 [INFO ][138]    - Communication to XS Sever is successful at http://<redacted>.com:8000/sap/hana/im/dp/admin/dpadmin.xsjs

            2015-08-20 11:33:08,313 [INFO ][138]    - Communication to XS Sever is successful at http://<redacted>.com:8000/sap/hana/im/dp/admin/dpadmin.xsjs

            2015-08-20 11:33:08,360 [ERROR][247]    - Request failed for GET_REGISTERED_ADAPTERS Context: No Data Get

            2015-08-20 11:33:08,438 [INFO ][138]    - Communication to XS Sever is successful at http://<redacted>.com:8000/sap/hana/im/dp/admin/dpadmin.xsjs

            2015-08-20 11:33:08,469 [INFO ][138]    - Communication to XS Sever is successful at http://<redacted>.com:8000/sap/hana/im/dp/admin/dpadmin.xsjs

            2015-08-20 11:33:11,235 [INFO ][138]    - Communication to XS Sever is successful at http://<redacted>.com:8000/sap/hana/im/dp/admin/dpadmin.xsjs

            2015-08-20 11:33:16,094 [ERROR][159]    - Sever Returned :{"api":"CREATE_ADAPTER","message":"dberror(PreparedStatement.execute): 403 - internal error: Cannot get adapter capabilities: exception 151044: Agent \"Cloud-20-8-2015\" is not available.\n","responseCode":1}

            2015-08-20 11:33:16,094 [ERROR][245]    - Exception while processing requrest "SDA Request for TwitterAdapter for request type ADMIN_REGISTER_ADAPTER"

            2015-08-20 11:33:16,094 [ERROR][246]    - Http Error::Internal Server Error

          • Fixed the error now; it was a combination of two things:

            1. The Hana users for the configuration tool and the agent can't be the same user.
            2. In cloud mode the service user needs write access to the agent config file dpagentconfig.ini. Making the service user an admin fixed this (a better solution would be to set the correct permissions on the file, but this is a temporary test system).
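            The file-permission alternative mentioned in point 2 could look roughly like this on Linux (the directory is a stand-in; on a real system it would be the DP agent install directory):

```shell
# Illustrative only: give the service user write access to dpagentconfig.ini
# instead of making it an administrator. Using a demo directory here.
AGENT_DIR=./dpagent-demo
mkdir -p "$AGENT_DIR"
touch "$AGENT_DIR/dpagentconfig.ini"
# Owner keeps read/write, nobody else may write:
chmod u+rw,go-w "$AGENT_DIR/dpagentconfig.ini"
```

            On a real installation you would additionally `chown` the file to the account the agent service runs under.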
          • I had the same issue when trying to debug a custom adapter in Eclipse. Unfortunately your fix did not work in my case, but I was able to solve it and want to share my solution in case others have the same problem:

            I had to add an additional VM argument in the debug configuration which points to the agent configuration:


            Afterwards, everything worked fine.

  • Very nice blogs.

    So, do I understand correctly that SPS9 basically:

    1. discards the former SDA approach: unixODBC along with a dedicated ODBC client (one per DBMS source), running on the same Hana server

    2. sets up a more generic adapter approach, with the Data Provisioning Agent (JRE) running on a separate (Windows/Linux) server from Hana, the latter having a new dpserver process for syncing

    3. with the following advantages :

    - scalability

    - delta capturing via true realtime replication

    - OData, RSS, and Twitter adapters

    - SDK to build custom adapters to any other sources

    • That is all correct, except maybe on point 1.

      There are certain databases we have full control over and want a very deep integration with, no compromises; say, Sybase IQ.

      For these we will continue to have the ODBC-based adapters.

  • Nice blog. It is great to know SDI, one package covering entire EIM needs.

    How does it work to replicate data from a HANA DB to a non-HANA DB that supports only row-based tables? Can I use SDI to replicate a HANA columnar table to a non-HANA row-based table in realtime?

    • Since this is a generic tool, you do not have to worry about row/columnar storage. Your request has other requirements however:

      • "replicate from Hana" -> means having an adapter for Hana which supports realtime. Available in the upcoming SP11 release.
      • "replicate to non-Hana" -> means writing into the virtual table. We do support that somewhat in SP10, but it is still a far fetch from having 100%.

      Hence my answer would be: although technically possible, and although SDI will certainly grow into that area, for the time being use SDI only for getting data into Hana. Consider SAP Data Services or Sybase Replication Server for your particular use case instead.
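      For completeness, "writing into the virtual table" means plain DML against it, which SDI pushes through the adapter to the remote system. A hedged sketch with illustrative names, assuming an adapter with write support:

```sql
-- Supported only partially as of SPS10: rows inserted into the virtual
-- table are written to the remote, non-HANA system by the adapter.
INSERT INTO VT_REMOTE_ORDERS (ID, STATUS)
  SELECT ID, STATUS FROM LOCAL_ORDERS;
```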

    • Hi Laks, as Werner stated above, it is technically possible in SPS10 and SAP supports it (I got confirmation from the solutions team). Here are the adapter settings for SQL Server 2012; DML provides read/write support.


    • Does anyone already know whether a migration tool from Data Services to HANA EIM (SDI/SDQ) exists? If not, do you have an idea whether there is a roadmap to release one?

  • One of our customers, already on BW 7.4 on HANA, requires data from non-SAP sources (Oracle) to be brought into HANA, with an additional need for transformations and realtime. Since the SDI/SDQ option provides this, my assumption is we can explore it instead of going for a dedicated tool like SLT or Data Services. With SDI we would do away with one tier and the separate server cost, and since the integration/replication/transformation would happen on HANA with direct connectivity to the source, performance would be fast.

    My question is whether the SDI/SDQ option can be enabled on BW on HANA. If yes, what are the prerequisites? I understand this is possible with additional license cost, but can someone confirm the prerequisites and point me to a relevant link with the detailed information?

    Thanks in advance.

    • Regarding license I can't help.

      But BW on Hana can utilize all Hana objects and does so already. It can read from Hana tables, Hana virtual tables, Hana Calcviews, Hana.... Therefore it can use SDI as well, which provides access to new sources via Hana tables, Hana virtual tables, Hana Calcviews,... 🙂

      Therefore you can expect an even tighter integration between BW and SDI, e.g. BW being able to create the virtual tables instead of you having to create them in Hana Studio, or BW utilizing the SDI FlowGraph UI to perform complex transformations.

  • /
        • Can you make sure the machine is reachable from the HANA machine? You can log on to the HANA machine and then ping the agent's address.

          Secondly, is your agent behind a firewall? Please also make sure the port is open.

          If you are using the Windows agent, say on a laptop, you will need to open the listener port 5050 in the firewall.



          • Hi,

            I have exactly the same error. My agent is not reachable from the HANA server because the agent is behind a firewall. The HANA server is reachable from the agent via http or https. I have not been successful in setting up the http connection. I am trying to set up an adapter to a MSSQL IDES server.

            - I have started the dpserver on my HANA server (SPS10)

            - I have installed SSL certificates from the server on the agent, and certificates from the agent on the server

            - I have installed the HANA_IMP_DP_DU

            - I have provided authorizations to the user in the agent to connect to the Hana server:






            Nevertheless, I still get the error:

            2016-01-15 17:04:57,254 [DEBUG] - <<SUCCESS_MSG:SDA:1:::IDES::

            2016-01-15 17:04:57,276 [DEBUG] SocketConnector.write - >>ADMIN_SET_CONFIG_PROPERTIES:SDA:1:null:null:IDES:null:

            2016-01-15 17:04:57,642 [DEBUG] - <<ERROR_MSG:SDA:1:::IDES::

            2016-01-15 17:04:57,642 [ERROR] - Execute statement create agent "IDES" protocol 'TCP' host '' port 5050 enable ssl failed with SAP DBTech JDBC: [403]: internal error: Cannot connect to agent: IDES at Context: Failed to connect to agent at, exception 2110004: Error invalid address: getaddrinfo, rc=-2: Name or service not known

            Do you have an idea, or do you see anything I am missing?


          • Hi David,

            You are trying to set up a cloud connection from the agent to HANA, but you use TCP to register.

            If you are doing it from the config tool, when you click "Connect to HANA" there is an option to indicate that this is an HTTP connection. Try that and try to register the agent again.



          • You're right!

            The description "HANA on Cloud" misled me.
            Checking the "Hana on cloud" box when you are on premise ... too SAP for me 😉

            My connection is working, but it uses HTTP and I would like HTTPS.

            Do I need to set the HTTPS port 4300 in this screen?
            In the "Configure SSL" dialog, what is the difference between:

            - Is SSL enabled on HANA server?

            - Enable SSL for Agent?

            Is the last one for TCP?



  • Hi,

    Will the SDI/SDQ functionality be made available through the HANA on-demand trial? It would be good to build a few demos to understand how and where to use it before paying/installing elsewhere.



  • Hi

    I was evaluating SDI for specific use cases by comparing it against BusinessObjects Data Services. I wanted to validate whether it is possible to load a flat file using SDI and, if so, how. Currently the remote sources available are all ODBC adapters and there seems to be no flat file adapter available.

    Also, are variables available in SDI, just like variables in BODS?

    Please confirm



  • Hi,

    While creating the virtual table I am getting the following error:

    Could not execute 'CREATE VIRTUAL TABLE VT_TABLE AT "RemoteSource"."<NULL>"."<NULL>"."ABAPTABLES.TABLENAME"' in 2.417 seconds .

    SAP DBTech JDBC: [476]: invalid remote object name: Unsupported datatype  (VARBINARY): line 0 col 0 (at pos 0)

    How should I proceed? Please help.



    • This bug was fixed in the next version. I sent Harshit the internal SAP pointer. If anybody outside SAP encounters the issue, I suppose the most I'm allowed to say is: please contact your SAP rep.

    • Hi ,

      This occurred for me when I deleted a virtual table and created a new one with the same name.

      So the solution is: change the name of the new virtual table and do not reuse the old name. I do not know whether this is a universal solution, but it worked for me.


  • Hi,

    I have read through the blog and the notes above and cannot see an answer; if I missed it, my apologies.

    Can you please tell me whether SAP SDI can be used as a standard ETL tool to load legacy data into SAP ECC? Do we need an ECC adapter? In other words, is SAP HANA SDI a replacement for Data Services?



    • Hi Rogan, in short: no, it is not a replacement, and it's not SDI's sweet spot of functionality.

      But to answer in detail, we first need to define what "loading into ECC" means. We could simply argue that your ECC system runs on Hana and consists of tables, so use SDI to load these tables. But that would be a very dangerous approach. A new sales order, for example, is much more than adding a few rows to VBAK and VBAP. It is much better to call the corresponding ECC functions, which do everything that is needed when creating a sales order.

      Technically speaking that is either a BAPI or an IDOC call.

      Yes, calling BAPIs via SDI is the next logical step. Today we have virtual tables, virtual procedures is the next obvious step to support. And then it is just a matter of time to enhance the ABAP adapter to present all BAPIs as virtual procedures.

      One main problem with BAPIs is that they are nested: sales order header, line items, business partner table, ... and Hana does not support nested data types of that complexity. So we can abstract that as XML strings and such, but not natively as Data Services does.

      I would simply stick to my initial assessment: Use SDI when realtime and Hana is the sole target. Else there are better options and Data Services being one of them. In your case ECC is the target, not Hana.


      • Hello Werner,

        Thank you for the response and for the clarification on this. We are running SAP ECC on a Hana database, not using S/4 Hana yet. So the master data would be ETL'd from an existing JDE system to SAP ECC.

        Regards Rogan

        • Hmm, another idea: You create virtual tables for the JDE table in your Hana system. With that you have solved the data transport/access problems. Now it is just a matter of making these tables known in ABAP and writing ABAP programs loading the data.

          Above will have the problem of identifying changes, but that you can solve by rather replicating the JDE data into Hana using SDI realtime and the CHANGE_DATE, CHANGE_TYPE upsert loader option. Then you would have the JDE data in Hana tables, not Hana virtual tables, plus the information on changes.
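
          A minimal sketch of that setup in Hana SQL, assuming a virtual table VT_JDE_ORDERS already exists for one of the JDE tables (all object names are placeholders):

              -- materialize the JDE data in a real Hana table
              CREATE TABLE JDE_ORDERS LIKE VT_JDE_ORDERS;

              -- subscribe to changes on the virtual table
              CREATE REMOTE SUBSCRIPTION SUB_JDE_ORDERS
                ON VT_JDE_ORDERS TARGET TABLE JDE_ORDERS;

              ALTER REMOTE SUBSCRIPTION SUB_JDE_ORDERS QUEUE;      -- start capturing changes
              INSERT INTO JDE_ORDERS SELECT * FROM VT_JDE_ORDERS;  -- initial load
              ALTER REMOTE SUBSCRIPTION SUB_JDE_ORDERS DISTRIBUTE; -- apply queued changes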

          Best would obviously be to utilize the SDI flowgraph editor for the data transformation instead of manual coding of ABAP programs. But that's where we are a bit behind due to Hana limitations (nested schema support), as said.

          Or you use Data Services.

          Quite a range of options. Can you pick the one or a combination that suits you best?

          • Hello Werner,

            In what way is it different from SAP HCI-DS, since both Smart Data Integration and HCI-DS fall under the HCP umbrella?

            Is it something going to replace HCI-DS...?


            Sriprasad Shivaram Bhat

          • From a technology point of view HCI-DS is SAP Data Services based, SDI is Hana based. And since both are about getting data into Hana HCP, both can be used.

            Feature-wise SDI is way more powerful, with the exception of workflows and ABAP dataflows. So once those two features exist in SDI, that will be the time to replace the execution engine. If you don't need those two features and would rather use SQLScript to run the initial load and use e.g. realtime for the delta, then I would think SDI is the better starting point.

            Please note that this is the technical part of the answer. I have no insight into timelines and priorities at the moment. That would be something to ask Ben Hofmans as the lead product manager.


    • In general: YES, SDI can do bi-directional data loads, from external sources into HANA, and also from HANA to external targets.

      However, you might need to check the details for the target you want to write to. Not all adapters support bi-directional data loads (e.g. Hadoop, Twitter, but also the ABAP adapter are currently source-only). You can check the product availability matrix (PAM) on the Service Market Place for details.

      Also keep in mind that the main use case for SDI is loading into HANA, the reverse is possible as well, but might be less optimized.



  • I understand using SDI we can connect to ECC tables and replicate those tables in HANA. Does SDI provide any capability to fetch data from Standard Extractors from BI content?


    • Yes, the ABAP adapter in SDI also supports the business content extractors that are leveraged in BW. These extractors can be used as a source to load data into HANA on HCP.

      • Hi Ben,

        We are trying to extract data from an SAP BW system and used the SAP BW adapter that came with the Data Provisioning Agent. We are able to see the InfoCubes reflected on the cloud platform but are not able to create a virtual table.

        Could you please advise or point to some material on extracting data from SAP BW using the Data Provisioning Agent (SDI)?

        With Best Regards,



  • Hi Werner,

    I am trying to get data from an S4Hana system to an XAP system. When I create the virtual tables, there is some repetition of some of the records. Do you have any idea what could be the reason?



  • Hello,
    We are trying to implement real-time using the ABAP adapter (SP11) against our ECC system (on MSSQL). We are using ABAP and not the LogReader, as the security team wants to restrict DB access.
    Q1) Is the ABAP adapter (SP11) real-time now? Or can it be configured that way?
    Q2) The batch run works fine, but when I make the flowgraph realtime, I get an error on the statement "EXEC 'ALTER REMOTE SUBSCRIPTION "BJUNEJA"."kgm.ftp.MMIM.ZMB52::VBAK_TEST4_RS" QUEUE';"
    (The Reset works fine but it fails on Queue.)

    Bharat Juneja

  • Just want to add a note of clarification to the text highlighted in yellow above: while SDI includes the integration styles covered by the products mentioned, the SDI license does not include licenses for those other products. I mention this because an AE was confused by the wording and thought licensing SDI gave rights to SLT, DS, etc., which is not the case.



  • Hi,

    Could anyone please provide resolution for this error:

    While creating a flowgraph we are using a data source (virtual function) that contains SOAP URLs. While getting data from these URLs, all String data types automatically get converted to NVARCHAR(5000) in the template table. In this scenario we were able to successfully load up to 1,000 records. But we want to load a large amount of data (like transactional data), and in this situation we got stuck with the error below.


    (dberror) 2048 – column store error: task framework: [2620] executor: plan operation failed;Cannot execute remote function call: ::fetchResponse : Exception while processing request “SDA Request for SOAPAdapter of type FEDERATION_GET_LOB”.

  • Hello Werner & Ben,

    We will load selected data (20+ tables) from ERP into a HANA data mart in real-time via the OracleECCAdapter. This data can then be modified by our B2C customers.

    I would very much appreciate your expert opinion on the best mechanism to get these changes back into the ERP system. I can see from the PAM that there may be an opportunity to use the SDI OData adapter (as a target). Ideally, I want to use the HANA platform rather than involving any external integration component (i.e. PI/PO, BizTalk, etc.).

    Can I use this combination of adapters to achieve the objective? If not, can you advise an alternative I can explore?

    Would love to hear your thoughts and thank you in advance for your consideration.

    // Will .

  • Team, I created and loaded a virtual table from a spreadsheet using the Excel adapter, and I created and loaded a real table from the virtual table using a flowgraph and a template table. My issue is that there is no primary key on either table. As a workaround I ran an ALTER TABLE in the SQL console to add a constraint, but I want a more automated approach. I tried using a procedure, but the read-only aspect of the procedure generated an error. It seems like I am missing something simple. Any suggestions would be greatly appreciated.
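
    For reference, the manual step was roughly the following (schema, table and key column names are placeholders):

        -- add the missing primary key after the load
        ALTER TABLE "MYSCHEMA"."EXCEL_TARGET"
          ADD CONSTRAINT "PK_EXCEL_TARGET" PRIMARY KEY ("ID");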

  • Team, I found the solution to my previous issue. I used the right-click shortcut to create my virtual table from the remote source. This shortcut does not offer the option of changing the metadata. However I instead used a replication task and was able to identify the primary key fields for the virtual table.

  • Hello Experts,

    I have an ECC-Hana connection using SDI. My goal is to bring data from ECC in real-time with a few transformations on the fly.

    For this, I have created a flowgraph where I am taking VBAK (virtual table) as the data source, added a filter node to bring in data only for the last 4 years based on ERDAT, and finally pushing the data into a data sink (template table). I have checked the realtime flag in the container as well as in the data source node's properties tab to get the delta in realtime, however I am getting an error stating 'column not allowed: MANDT in select clause'.

    Appreciate inputs to achieve the requirement and overcome mentioned error.


  • Hi Werner,

    I am working on a POC to pull data from the Workfront APIs; below is the link to the documentation.

    Looks like the APIs are pure HTTPS GET/POST, and the REST adapter that comes with HCI does not seem to support this type of service. So to make it work we have created a REST service for these Workfront services, exported the WADL files and hosted them on a public domain. Using this custom-created WADL works fine in HCI.

    Unfortunately, our HCI and HCP are in different data centers and we cannot connect HCP to HCI. SAP has recommended that we use SDI. We started working on SDI but could not find a REST adapter that we can use to consume these services. Can you please let me know if there is any alternative to this.



  • It's understood that SDI can be used to read data from multiple sources like Hadoop (hive or Impala) for ingestion into HANA. Can it be used for writing data to a Hadoop environment?

    As per note 2469516 (BW4SL - Open Hub Destinations), "3.x or 'Third Party Tool' Open Hub Destinations must be deleted or replaced by features available in SAP BW/4HANA like SAP HANA smart data integration".

    "Third Party Tool" Open Hub data can be inserted from BW into target databases through BO Data Services, so if SDI is replacing a third-party tool Open Hub, is "write" functionality available in SDI for connected sources/targets?



  • Hi all,

    I am a bit confused about the ability of SDI / SDQ to support data migration into S/4HANA:

    • An L1 sales presentation from SAP about SDI/SDQ states that "data migration is not a supported scenario" (slide 32)
    • while the same presentation and this blog highlight the ETL, transformation and data quality features of SDI/SDQ

    Will Data Services (including Rapid Data Migration Solution) and S/4HANA Migration Cockpit remain as tools of choice for data migration into S/4HANA or will they be replaced by SDI/SDQ?



  • Hello everyone! Very useful guide, Werner! Thank you!

    I have an issue, however. I don't see the Data Provisioning tab in the Eclipse application developer version, but I can see this tab in the HANA web developer version.

    Does anyone have any idea what could be the reason?

    • There is no such thing as a FlowGraph editor in Eclipse. What you found here is the old Predictive Editor, used for AFL functions exclusively.

      WebIDE is the only existing solution.

      • Hello Werner,

        I have the same issue (the "Data Provisioning" tab is not available in Eclipse), but saw that the screenshot in your guide also shows a flowgraph in Eclipse instead of in the Web IDE. Is there any way to activate the functionality in Eclipse? We cannot use the Web IDE yet due to certain requirements.

        Kind regards,



        • The flowgraph editor in Eclipse is outdated. Only the WebIDE should be used for that.

          The Data Provisioning node should always be present in Hana Studio, even if you do not have the permissions to do anything with it, if I recall correctly. It would be empty, but the node itself should be present. Are we talking about the same thing?


          Edit: Ah, I see. You mean the screen with the Lookup etc. As said, the WebIDE is your only option. Even if you used the Hana Studio editor to modify hdbflowgraph files, it would not be compatible.

          • Hi Werner,

            Thank you so much for your quick response! I do mean the screen with Lookup, Table Comparison, Pivot, etc. As I said, we are not able to use the Web IDE, as it can only connect to databases on the Cloud Foundry platform, and this connection cannot be used in SAP Analytics Cloud.

            Even though it's a deprecated approach, we will not be able to use the Web IDE for quite some time (so we cannot concern ourselves with compatibility yet), so we would like to add the "Data Provisioning" tab to our Eclipse work environment. Is there a toolset that we can download for this? (maybe from the hanaondemand tools?)

            Kind regards,



    • Executing a flowgraph is nothing other than a SQL command: START TASK or CALL <stored procedure name>;

      So whatever can be used to call multiple SQL commands in sequence can be used to fulfill your requirement. And that is

      • another stored procedure you create by hand
      • a script in a scheduler
      • Hana process chains (Hana native implementation of BW process chains)
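
      As a sketch of the first option, a hand-written wrapper procedure can chain the calls in sequence; the task and procedure names below are placeholders:

          CREATE PROCEDURE RUN_NIGHTLY_LOADS AS
          BEGIN
            -- a flowgraph is exposed either as a task or as a stored procedure
            START TASK "MYSCHEMA"."LOAD_CUSTOMERS_FG";
            CALL "MYSCHEMA"."LOAD_ORDERS_FG_SP";
          END;
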
  • Hello Werner Daehn,

    I have a basic question for clarification: we are integrating a non-SAP 3rd-party Oracle system using SDI (on BW/4HANA 2.0 SP03). We have the DPAgent installed on the HANA system.

    Question: do we need to install the DPAgent on the source system (i.e. the non-SAP 3rd-party Oracle system)?

    Is there any step-by-step doc that details the complete setup procedure?

    Appreciate your time.


    • If the DPAgent can reach the Oracle system, then you can install the agent on Hana as well. Not typical, but possible.


      In more detail: The DPAgent runs the adapter and the adapter acts as the bridge between Hana and Oracle. Hence it needs to be reachable from Hana and needs to be able to connect to Oracle. The connection between DPAgent/Adapter to Hana is simple as you can either use TCP/IP or https (aka “cloud”).


      Example 1: Hana runs in the cloud, Oracle in your IT center. Of course a DPAgent somewhere in the cloud cannot talk to your Oracle instance via SQL*Net. Network does not allow that. Hence you install the DPAgent in your network.

      Example 2: Hana and Oracle are next to each other, in the same network. Then you can install DPAgent anywhere. On the server running Oracle, on the Hana server, on a separate computer.


      The reason why you typically install the DPAgent not on Hana but closer to the source is a) the network and b) to avoid installing additional Java software and the Oracle JDBC driver on the Hana server. For some adapters it is required to install them on the source, e.g. the File Adapter: it can read local files but not remote files (unless the files are made to appear local via network shares).

  • How to log and debug a custom adapter...?


    I have created an Eclipse plugin development application cloned from the MysqlAdapter in this git repository,

    Link :

    using OSGi with Equinox, and tried to log using log4j, but nothing gets logged.

    I placed the log4j.properties file in the src/ directory.

    # Root logger option
    log4j.rootLogger=DEBUG, file, stdout
    # configuration to print into file (file path is an example)
    log4j.appender.file=org.apache.log4j.FileAppender
    log4j.appender.file.File=log/adapter.log
    log4j.appender.file.layout=org.apache.log4j.PatternLayout
    log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
    # configuration to print on console
    log4j.appender.stdout=org.apache.log4j.ConsoleAppender
    log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
    log4j.appender.stdout.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n

    I have placed log4j.jar inside the lib/ directory.

    I have added these to MANIFEST.MF

    Require-Bundle: org.apache.log4j;bundle-version="1.2.15"
    Bundle-ClassPath: ., lib/log4j.jar

    I exported the plugin as a JAR and deployed it; the plugin runs fine. But the logs are not generated at the file path configured in the properties file.

    What is the correct procedure to debug/log the exported plugin (JAR)?

    Thanks in advance,

    ~ Praz