By Denys van Kempen

Real-Time Replication between SAP HANA Cloud and SAP HANA on-premises | Hands-on Video Tutorials


With this blog series we provide an update with the latest information on getting started with SAP HANA Cloud on the SAP Cloud Platform.

  1. About SAP HANA Cloud
  2. SAP HANA Cloud Getting Started
  3. SAP HANA Cloud and SAP Business Application Studio
  4. HDI with SAP HANA Cloud
  5. SAP Analysis for Microsoft Office and SAP HANA Cloud
  6. Cloud Foundry Advanced
  7. SAP HANA Cloud and SAP BTP Trust
  8. Data masking and data anonymization
  9. Predictive Analysis Library (PAL) and Automated Predictive Library (APL)
  10. Remote data sources and virtual tables
  11. OData with SAP HANA Cloud
  12. SAP HANA Cloud Graph
  13. Role Attributes
  14. SAP HANA Cloud and Smart Data Integration <=

For more information about the free trial, see

For the new features overview posts, see




Hands-On Video Tutorials

Smart Data Integration Master, Tahir Hussain Babar (a.k.a. Bob), just added a new series to the SAP HANA Cloud playlist on the SAP HANA Academy YouTube channel with nine new videos showing how we can configure real-time data replication between an SAP HANA platform system (on-premises) and SAP HANA Cloud.

In this blog, you will find the videos embedded with some additional information and resources. You can watch the nine video tutorials in a bit over an hour. What you will learn:

  • How to set up the SAP Cloud Platform environment and create an SAP HANA Cloud instance
  • How to configure the SAP Web IDE development environment to work with SAP HANA Cloud
  • How to install and configure an SAP HANA smart data integration (SDI) Data Provisioning Agent (DPA) and register the SAP HANA Adapter
  • How to create a remote source and virtual table (runtime)
  • How to create an HDI container and access external objects
  • How to create a virtual table (design time)
  • How to create and execute a Flowgraph
  • How to create a replication task for real-time replication with an SAP HANA platform system on-premises as source and SAP HANA Cloud as target

To bookmark the playlist on YouTube, go to > SAP HANA Cloud.


About SAP HANA Smart Data Integration 


For the documentation see the SAP HANA Cloud Getting Started Guide and the SAP HANA Administration Guide about how to configure remote data sources and virtual objects. See the SAP HANA Smart Data Integration guide(s) for information about how to use SAP Web IDE to develop the artifacts for real-time replication.

SAP Community Blog Posts

Highly recommended as well are the blog posts from Takao Haruki on this topic:

Learning Track

In this learning track, you will find a series of tutorials on how to incorporate SAP HANA Cloud into your existing data landscape based on SAP HANA on-premises.


1. Series Overview

The first video introduces the series with two short demos showing SAP HANA Smart Data Access (SDA) and virtual tables and SAP HANA Smart Data Integration (SDI) with flowgraphs for batch mode or real-time replication. An architecture discussion explains the systems, technologies, and activities.

For this demo, AWS is used as cloud provider and the following resources have been created:

  • SAP HANA Cloud (SAP Cloud Platform, Cloud Foundry environment)
  • SAP HANA, express edition: simulating on-premises system
  • Windows system with Data Provisioning Agent (DPA)

In the subsequent videos, Bob explains how to set this up.

[0:00] – Introduction, about SAP HANA Cloud, documentation

[3:00] – Demo showing Smart Data Access virtual table

[4:00] – Demo showing Smart Data Integration with flowgraphs in SAP Web IDE for an ETL job, for execution in batch mode or for real-time replication.

[5:20] – Architecture explanation for DPA, Remote Source, HDI Containers, user-provided service, virtual tables, flowgraphs, and replication tasks


2. Create SAP HANA Cloud Instance

In the second video tutorial, we learn how to create an SAP HANA Cloud instance.

The API endpoints for the Cloud Foundry Org and for the SAP HANA Cloud instance service are needed for later reference.

For the documentation, see

[0:00] – Connect to the SAP Cloud Platform

[1:00] – Create SAP Cloud Platform subaccount (AWS)

[2:00] – Enable Cloud Foundry and create space. API endpoint for Cloud Foundry org

[3:30] – Configure Entitlements: add service plans: hana, hdi-shared, MEMORY 4GB

[6:00] – Create instance

[7:30] – API endpoint for SAP HANA Cloud service

[8:00] – Connect to SAP HANA cockpit


3. Security and SAP Web IDE

In this video a development user account is created in the SAP HANA Cloud database. We also enable SAP Web IDE as SAP Cloud Platform service and configure the tool for our project.

For the documentation, see

[00:00] – Introduction

[01:00] – Create a new database user in the SAP HANA database explorer SQL console, using a script from the SAP HANA Academy GitHub repository. System privileges CREATE REMOTE SOURCE, ADAPTER ADMIN, and AGENT ADMIN are granted.

[02:00] – Connect as devuser.

[03:20] – Create new subaccount using SAP Cloud Platform Cockpit (HANA Tools) in Neo environment.

[04:10] – Enable service SAP Web IDE Full-Stack

[04:40] – Configure Service: add user as DIDeveloper

[05:10] – Configure Security > Trust Management: Local Service Provider: Principal propagation = Enabled

[06:00] – Launch SAP Web IDE

[06:30] – Workspace Preferences > Extensions: SAP EIM Smart Data Integration Editors and SAP HANA Database Development Tools

[07:00] – Workspace Preferences > Cloud Foundry: API Endpoint of Cloud Foundry subaccount hosting the SAP HANA Cloud environment (video #2 [02:00]).

[08:45] – Launch database explorer embedded in SAP Web IDE.

[09:30] – Change schema to devuser
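
The user setup above can be sketched in SQL. This is a minimal illustration, not the exact script from the SAP HANA Academy repository; the user name and password are placeholders.

```sql
-- Run as an administrator (e.g. DBADMIN) in the SQL console.
-- Placeholder name and password; adjust to your own password policy.
CREATE USER DEVUSER PASSWORD "Initial-Password1" NO FORCE_FIRST_PASSWORD_CHANGE;

-- Privileges needed to manage SDI agents, adapters, and remote sources.
GRANT CREATE REMOTE SOURCE TO DEVUSER;
GRANT ADAPTER ADMIN TO DEVUSER;
GRANT AGENT ADMIN TO DEVUSER;
```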


4. Data Provisioning Agent Installation and Configuration

In this video, Bob shows how to install the SDI Data Provisioning Agent (DPA) on a Windows computer and register the agent and the HanaAdapter with the SAP HANA Cloud database.

For the documentation, see

[00:00] – Introduction

[01:00] – Download SDI Data Provisioning Agent for your platform from SAP Development Tools

[01:30] – Install DPA and specify: Agent unique name, service account, default ports 5050 and 5051

[04:00] – Launch DPA. Specify HANA hostname [SAP HANA Cloud endpoint], port [443], agent admin user [DEVUSER]

[05:00] – Connect and register agent with agent name [sha_agent] and hostname [IP Public address]

[06:20] – Verify registration in the catalog browser of the database explorer (SAP Web IDE)

[06:40] – Register HanaAdapter
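
For reference, the registration performed by the DPA configuration tool can also be expressed in SQL. This is a sketch with illustrative names; check the SDI administration documentation for the exact statements for your release.

```sql
-- Register the agent; with SAP HANA Cloud the agent connects outbound,
-- so the HTTP/websocket protocol variant is used rather than TCP.
CREATE AGENT "sha_agent" PROTOCOL 'HTTP';

-- Register the HanaAdapter shipped with the agent.
CREATE ADAPTER "HanaAdapter" AT LOCATION AGENT "sha_agent";

-- Verify the registration, as shown in the database explorer.
SELECT * FROM AGENTS;
SELECT * FROM ADAPTERS;
```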



5. Creating Remote Sources

In this video tutorial, we learn how to create a remote source and virtual table in the SAP HANA Cloud database connecting to SAP HANA, express edition.

For the documentation, see

[00:00] – Introduction and recap

[01:00] – Showing SALES source table in SAP HANA, express edition

[02:20] – Create remote source in SAP HANA Cloud providing name, adapter, host, port number, database name, schema, and credentials. Highlighting CDC (Change Data Capture) settings.

[05:00] – Connect to remote source and create virtual object for the remote SALES table.

[06:00] – Validate access to remote object and open data.

[07:00] – Validate update change from source to target.

[08:00] – Validate update change from target to source.
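
A SQL sketch of the same steps, with hypothetical names and an abbreviated CONFIGURATION property string (see the HanaAdapter documentation for the full property format):

```sql
-- Remote source pointing at the SAP HANA, express edition system.
CREATE REMOTE SOURCE "HXE_REMOTE" ADAPTER "HanaAdapter"
  AT LOCATION AGENT "sha_agent"
  CONFIGURATION '<?xml version="1.0" encoding="UTF-8"?>
    <ConnectionProperties name="configurations">
      <PropertyEntry name="host">hxehost</PropertyEntry>
      <PropertyEntry name="port">39015</PropertyEntry>
    </ConnectionProperties>'
  WITH CREDENTIAL TYPE 'PASSWORD'
  USING '<CredentialEntry name="credential">
      <user>SYSTEM</user>
      <password>********</password>
    </CredentialEntry>';

-- Runtime virtual table over the remote SALES table, then validate access.
CREATE VIRTUAL TABLE "DEVUSER"."V_SALES"
  AT "HXE_REMOTE"."<NULL>"."MYSCHEMA"."SALES";
SELECT COUNT(*) FROM "DEVUSER"."V_SALES";
```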


6. HDI Containers

SAP HANA HDI uses containers to store design-time artifacts and the corresponding deployed run-time (catalog) objects. The design-time container is used to store the design-time representations of the catalog objects that you want to create during the deployment process. The database run-time container (RTC) stores the deployed objects built according to the specification stored in the corresponding design-time artifacts.

In this video tutorial, we learn how to create an HDI container in SAP Web IDE.

For the documentation, see

[00:00] – Introduction and recap

[02:15] – Build new project in SAP Web IDE: Cloud Foundry > SAP HANA Database Application for SAP HANA Cloud

[03:30] – Build project

[04:00] – Build database artifact

[04:30] – Show hdi-shared service instance and application

[05:15] – Add database to database explorer for HDI container. Remote sources are not available for HDI containers so we will need to create a user-provided service.


7. HDI to Classic Schema Access

In this video tutorial, we learn how to access objects stored in an SAP HANA classic schema from an HDI container. User-provided services, mta.yaml files, and “grants” files are discussed.

For the documentation, see

[00:00] – Introduction and recap

[01:00] – Create user-provided service with the credentials to the schema in SAP HANA Cloud where the remote table resides.

[02:00] – Update the configuration file for the multitarget application (mta.yaml) created in the previous video, adding the user-provided service to the modules and as a resource

[03:45] – Rebuild database module

[04:00] – Create hdbgrants artifact with code snippet providing object owner rights and application user rights and build.
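
A minimal .hdbgrants sketch with hypothetical service and schema names; the top-level key must match the name of the user-provided service defined in the mta.yaml, and the two sections grant rights to the container's object owner and application user respectively:

```json
{
  "ups_cross_schema": {
    "object_owner": {
      "schema_privileges": [
        { "schema": "DEVUSER", "privileges_with_grant_option": [ "SELECT" ] }
      ]
    },
    "application_user": {
      "schema_privileges": [
        { "schema": "DEVUSER", "privileges": [ "SELECT" ] }
      ]
    }
  }
}
```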


8. Virtual Tables and Flowgraphs

In video 5 we created a virtual table as a catalog object (runtime) to validate our configuration. In this video, we perform the same task, but this time as a development artifact.

For the documentation, see

[00:00] – Introduction and recap

[02:00] – Create new virtual table in the project providing virtual table name, remote source, database, schema, and object.

[04:00] – Remote table will be visible in the HDI container as table

[05:30] – Create a Flowgraph in the project, add data source and select virtual table.

[07:00] – Add data target and build the artifact.

[08:00] – Execute the Flowgraph.

[08:30] – Target table and task (procedure) have been created.
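
The design-time counterpart of the runtime virtual table from video 5 is a small .hdbvirtualtable artifact. A sketch, reusing the hypothetical names from earlier; in an HDI container the remote source is resolved through the user-provided service and grants configuration:

```sql
-- V_SALES.hdbvirtualtable
VIRTUAL TABLE "V_SALES" AT "HXE_REMOTE"."<NULL>"."MYSCHEMA"."SALES"
```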


9. Replication Tasks

In the last video tutorial, Bob introduces creating real-time replication tasks in an HDI container when using Smart Data Integration (SDI) in SAP HANA Cloud.

The SAP HANA Administration Guide documents the equivalent SQL statements and system views:

[00:00] – Introduction and recap

[01:30] – Create a new replication task in the project

[02:00] – Select remote source and object (dialog similar to video #5), and specify replication behavior.

[04:50] – Build the object

[06:00] – SOURCE_SALES and TARGET_SALES virtual tables are created in the target system (SAP HANA Cloud).

[06:45] – Remote subscription, replication procedure, and task are created in the target

[07:30] – Call procedure.

[08:00] – CDC (Change data capture) tables and triggers are created in the source system (SAP HANA, express edition).

[08:45] – Confirm replication works with UPDATE and INSERT statement.
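
As a rough guide to what the deployed replication task performs under the hood, here are the equivalent SQL statements with illustrative names; see CREATE REMOTE SUBSCRIPTION in the SAP HANA Administration Guide for the full syntax:

```sql
-- Subscribe to changes on the virtual table and apply them to the target.
CREATE REMOTE SUBSCRIPTION "SUB_SALES"
  ON "DEVUSER"."V_SALES"
  TARGET TABLE "DEVUSER"."TARGET_SALES";

ALTER REMOTE SUBSCRIPTION "SUB_SALES" QUEUE;       -- start capturing changes
-- (initial load of TARGET_SALES runs here)
ALTER REMOTE SUBSCRIPTION "SUB_SALES" DISTRIBUTE;  -- apply queued changes

-- Monitor subscription state.
SELECT * FROM M_REMOTE_SUBSCRIPTIONS;
```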


Share and Connect 

Questions? Post as comment.

Useful? Give us a like and share on social media. Thanks!

If you would like to receive updates, connect with me on

For the author page of SAP Press, visit

For the SAP HANA Cloud e-bite, see

Comments
      Ivan Mirisola

      Great blog - right when I needed to share this information!

      Much appreciated.

      Enio Terra

      Amazing!! Thanks Denys...


      Sunny Makker

Hello everyone,

currently I face an issue which I can't solve myself. Maybe I will find a solution by posting my challenge here.


      • DB2 is connected via DB2LogReader Adapter to XSA using Smart Data Integration
      • Created a replication task which loads one single table
• Load mode is "initial + realtime" and preserve all

      Note: The scenario described above worked pretty fine for about two hours. We captured every single data change coming from the DB2 source system. At one point we truncated the table we had in use for testing directly within DB2. Since then I am no longer able to get a replication task mode "initial + realtime" running although "initial only" works as fine as before.

So the moment I execute the replication task in "initial + realtime" I receive the following error message:

      Could not execute 'CALL "DELTA_XXX"."DataWareHouse.Database.Replication ...'
      Error: (dberror) [686]: start task error: "DELTA_XXX"."DataWareHouse.Database.Replication Tasks::XXX_DELTA.START_REPLICATION": line 27 col 6 (at pos 1110): [140038] Error executing SQL command in task.;Error executing ALTER_REMOTE_SUBSCRIPTION_QUEUE with command: 'ALTER REMOTE SUBSCRIPTION "DELTA_XXX"."SUB_XXX" QUEUE'. ,exception 71000129: SQLException
      exception 71000256: QUEUE: SUB_XXX: Failed to add subscription for remote subscription SUB_XXX[id = 16579466] in remote source XXX_LogReader[id = 14918173]. Error: exception 151050: CDC add subscription failed: RS[XXX_LogReader]: Failed to add the first subscription. Error: Failed to subscribe table ["SCHEMA"."XXX"]. Error: Initialization for replication of database <XXX> has not been done.

      Maybe you guys have a working solution for me. Thanks in advance.

      Denys van Kempen
      Blog Post Author

      Hi Sunny,

      Would you mind posting this comment as a question on the forum: I have a few ideas but the Answers section of the Community is more suitable for an exchange. Thanks

      Sunny Makker

Hi Denys,

      thanks for the advice. You can check on the issue HERE.

      Vamshi Madhavaram

Hi Denys,

This blog shows how to establish real-time replication between SAP HANA on-premise and SAP HANA Cloud.

      We are planning to establish a real time replication between HANA Service to HANA Cloud. Is it possible to set up a real time replication connection between SAP HANA Service to SAP HANA Cloud?

      How can we achieve that?

      Denys van Kempen
      Blog Post Author

Hi Vamshi,

      According to the SAP HANA SDI PAM (page 7), batch processing is supported but not real-time replication

      varun bhargav

      Hi Denys van Kempen


      Thanks for the blog!

When I try to create the remote subscription, I am getting an error:
"invalid adapter name: HANAODBC"

I am using SDA to create the remote source and the adapter I have chosen is hanaodbc; here is the query I used to create the remote source, as mentioned here

      And I am able to create virtual tables and query them.


      your advice would be really helpful




      Denys van Kempen
      Blog Post Author

      Hi Varun,

      Suggest to post the comment as question to the forum  >

      Tags: #saphana #saphanacloud

      Those that follow the tag are notified (experts from SAP, partners, customers).

      This might help to resolve the issue faster

      varun bhargav

Thanks for the response Denys van Kempen.

      I have posted it here

      Andrea Botto

      Great Blog Denys!


      One question, is "SAP HANA Smart Data Integration" independent from "SAP Integration Suite"?

      So if I don't have the integration suite can I use SDI?


      Thank you

      Denys van Kempen
      Blog Post Author

      Hi Andrea,


      Yes and yes.


SAP HANA smart data integration (SDI) is a "version/edition" of the generic SAP Data Services data integration tool, specific to SAP HANA, released around 2014. The technology originates from ActaWorks (late 1990s) via BusinessObjects.

SAP Integration Suite is a portfolio (box) of integration services gathered around Cloud Integration, previously marketed as SAP Cloud Platform Integration (CPI), SAP HANA Cloud Platform integration service, and HANA Cloud Integration (HCI), released 2013. The technology originates from the same data integration tool, complemented with process integration/orchestration, a web UI, an Eclipse plug-in, etc.

      Werner Dähn

I am not happy with the hint "SDI is a version/edition of the generic SAP Data Services". The two share neither the technology, nor the use case, nor the capabilities.

      Denys van Kempen
      Blog Post Author

      Hi Werner,

Thanks for your input. You would know 😉


      Would you mind sharing how SAP (BusinessObjects) Data Services technology relates to HCI, SDI, SDA, etc. (if at all)?


      Werner Dähn

      Denys van Kempen I can try.

      Use case:

      • BODS is for getting any data into any system in batch.
      • SDA is for blending data from remote systems into Hana - federation.
      • SDI is an addon to SDA to cover more sources the SDA way and also supports realtime replication from remote to Hana. From Hana to remote is not(!) a use case and also nobody would install a full Hana instance just to get data from e.g. Oracle to SQL Server. So it cannot replace BODS.


• BODS was written in C++ for extreme speed. It has its own data types, its own readers and loaders, its own language ("ATL"), its own UI, and lots of transforms to enable pretty much every type of data integration requirement easily. It has some basic realtime capabilities but is not meant for replication or realtime CDC. It has a C++ based Windows-only UI.
      • SDI is built on the foundations of SDA, so the query optimizer of Hana. It utilizes Hana SQL, Hana calcengine and exposes the SDA adapter API to everybody in order to enable more sources. It has/had a WebIDE based UI.

      Relationship between SDI and BODS:

      When we designed SDI, the thought was that BODS will be used as ETL tool for hybrid use cases and for cases where Hana is the center of the world, better have something with a better fit of Hana's realtime story. Hence SDI is using Hana data types, SDI supports federation&batch&realtime, does focus on transactional consistencies, etc. (As said before, BODS is batch only. Certainly no federation, almost no realtime and no real option to load data transactional.)

      To enable that, SDI/Hana needs functionalities to ...

      • to connect to source systems: The SDI Adapters and SDK, which are widely used and work well.
      • to support batch data integration: The WebIDE's Flowgraph editor, which never got the chance to get built properly in my opinion.
      • to support realtime replication: The WebIDE's Realtime Replication editor, which again did not get the funding to be built properly.
• to support realtime transformations: The Flowgraph editor has traces of this functionality, but most transforms do not allow setting the realtime flag, not even a join.

      So the only possible relationship between BODS and SDI is the Flowgraph editor.

      Because of the ease of use of BODS transforms to solve any data integration problem (and some more reasons), the Flowgraph transforms look mostly the same. Some transforms are missing in the Flowgraph editor even today, though, and no plans to add them as far as I am aware.

      If you concur with my opinion that the Flowgraph editor lacks too much for being used in enterprise projects, no commonality between the two remains.

      As of today I recommend to my customers:

• BODS as a batch data integration tool with extreme performance, for any-to-any use cases; it allows you to solve easy and complex data integration problems in a matter of minutes.
      • SDI to bring more sources into Hana, to build your own Hana adapters and to use those in SQL statements including realtime replication.


      HCI is a different beast. Its origins are in the Enterprise Application Integration space, where webservice calls are chained together in order to implement a simpler API, e.g. mark an employee as retired in SuccessFactors and at the same time remove the credentials in the Active Directory to prevent further login with this account. Very low level, where you manually set http headers, parse responses, etc.

      You can obviously ask why there are different tools to call webservices but SAP is not special here. Almost all companies have a hard split between Data Integration tools and EAI tools despite the huge overlap. That is mostly for historical reasons but today we have the technologies to provide a single tool for both and the first combined products start to appear.

      Denys van Kempen
      Blog Post Author

      Thanks for taking the time for this extensive comment. Much appreciated.


      Ratish R Nair

      Hello All,


We have a central cloud HANA database, and from there data is sent to multiple other smaller application-related databases. Is it possible to achieve real-time data transfer within the cloud, cloud to cloud?


      If yes can anyone share the readings related to the same?






      Denys van Kempen
      Blog Post Author

      Hi Ratish,

      Best to post this comment as a question on the community forum


      This allows for a better reach and more timely response.

      (copy/paste is fine)

      Veronika Bajus

      Hello Denys,

may I ask what the next steps are after data replication into tables?

The table will be part of an HDI container. Is it possible to have these tables permanently stored in the SAP HANA Cloud database under a specified schema instead of the schema of the HDI container?

      Many thanks for your answer!

      Best Regards

      Veronika Bajus