By David Pugh



I was in two minds about where to put this short blog: the HANA area of SCN or Enterprise Information Management. I decided on the EIM area, as there are already a number of articles in the HANA space and I wanted to introduce some new capabilities within the more traditional EIM space.

SAP HANA SP09 introduced a whole host of new capabilities. In this blog I’m going to cover two of them, Smart Data Integration (SDI) and Smart Data Quality (SDQ), which fall under the umbrella of SAP HANA Enterprise Information Management.

SDI and SDQ can source, replicate, transform and cleanse data into SAP HANA, in batch or real time, in on-premise or cloud environments. This provides a simplified landscape in which we can provision and consume data.

I’m not going to go into detail about the architecture here, but more information can be found in the official SAP HANA EIM documentation.

For those familiar with SAP Data Services, the design concepts for SDI / SDQ are similar: we have an HDBFlowGraph (dataflow), sources, transforms and targets.

Transforms are split into two main categories: General and Data Provisioning.


General contains the standard capabilities:

  • Data Source – source table.
  • Data Sink – target table.
  • Data Sink (Template Table) – creates a table based on the previous transform's data structure.
  • Aggregation – creates an aggregated result set based on the specified aggregation method, such as SUM or COUNT.
  • Filter – filters the incoming result set based on an expression.
  • Join – combines data from two input tables by using values common to each.
  • Sort – orders the incoming result set based on the specified columns.
  • Union – produces a result set from two tables with the same schema.
  • Procedure – calls a stored procedure.
  • AFL Function – accesses functions of the Application Function Library.
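For readers who think in SQL, the core General transforms behave much like their SQL counterparts. As a sketch (the table and column names below are illustrative, not taken from any real flowgraph), a Filter, Join and Aggregation chained together correspond to:

```sql
-- Illustrative only: ORDERS/CUSTOMERS and their columns are assumed names.
SELECT c."REGION",
       SUM(o."AMOUNT") AS "TOTAL_AMOUNT"                      -- Aggregation (SUM)
  FROM "ORDERS"    AS o
  JOIN "CUSTOMERS" AS c ON o."CUSTOMER_ID" = c."CUSTOMER_ID"  -- Join
 WHERE o."STATUS" = 'OPEN'                                    -- Filter expression
 GROUP BY c."REGION";
```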


Data Provisioning contains the more advanced transforms:

  • Date Generation – generates a series of dates.
  • Row Generation – creates a result set based on a user-defined number of rows.
  • Case Node – routes records based on their values.
  • Pivot – transforms rows into columns.
  • Unpivot – transforms columns into rows.
  • Lookup – retrieves column value(s) from a lookup table that matches an expression.
  • Cleanse – parses, standardises, corrects and enriches person, firm and address information.
  • Geocode – enriches address data with latitude / longitude information.
  • Table Comparison – compares two tables and produces the difference between them, flagged as insert, update or delete.
  • Map Operation – changes the operation codes, for example turning an update into an insert.
  • History Preserving – produces a new row in the target table rather than updating an existing row.
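To give a feel for what Table Comparison does, its result is conceptually the following delta query (a sketch only; the real transform emits rows tagged with operation codes rather than running separate statements, and the SOURCE/TARGET names and ID/VALUE columns here are assumed for illustration):

```sql
-- Rows in source but not target are flagged as inserts ('I');
-- matching keys with changed values are flagged as updates ('U').
SELECT s.*, 'I' AS "OP"
  FROM "SOURCE" AS s
  LEFT JOIN "TARGET" AS t ON s."ID" = t."ID"
 WHERE t."ID" IS NULL
UNION ALL
SELECT s.*, 'U' AS "OP"
  FROM "SOURCE" AS s
  JOIN "TARGET" AS t ON s."ID" = t."ID"
 WHERE s."VALUE" <> t."VALUE";
```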

To create a flowgraph, we drag a combination of the required transforms onto the canvas and join them together. In the example below I’m joining three source tables in SAP ASE: ASE_Orders, ASE_Order_Details and ASE_Customers. The customer data is then passed through the Cleanse transform, where we parse and cleanse name and address information, before we load the result set into a template table in HANA.
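The join step in that flowgraph is roughly equivalent to the SQL below (a sketch: the key and payload column names are assumptions, not taken from the actual example):

```sql
-- Illustrative three-way join of the ASE source tables;
-- ORDER_ID and CUSTOMER_ID are assumed key columns.
SELECT o."ORDER_ID",
       d."PRODUCT_ID", d."QUANTITY",
       c."CUSTOMER_NAME", c."ADDRESS"
  FROM "ASE_Orders"        AS o
  JOIN "ASE_Order_Details" AS d ON o."ORDER_ID"    = d."ORDER_ID"
  JOIN "ASE_Customers"     AS c ON o."CUSTOMER_ID" = c."CUSTOMER_ID";
```

The Cleanse transform would then operate on the customer name and address columns of this result set before it reaches the template table.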


This is just a brief overview of the new capabilities SAP HANA EIM brings in SP09.

More detailed information and demonstrations can be found in the SAP HANA EIM overview.

      Dirk Venken

You can actually join more than two tables in a single Join transform. Right-click in the join, select Add extra input from the pop-up menu and connect the dots.

      Former Member

Thanks David. We recently introduced BODS as our mainstream ETL tool, but after looking at the above capabilities of SDI in HANA, it seems like SDI is a replacement for BODS for new SAP customers.

Do you have any insight into whether an organization should actually have two SAP ETL tools, i.e. both SDI in HANA and BODS?

Our BODS scenarios are more about data integration across multiple sources and then sending transformed data to targets (internal or external).

      Paul Medaille

Hello Basant,

SDI/SDQ should not be seen as a replacement for Data Services. They are separate products with separate roadmaps, though of course they have overlapping functionality.

      I think, at least for now, most organizations will perhaps use one or another, while other organizations will find use cases for both.  One of the great advantages of SDI/SDQ, aside from the speed and power they offer, is the ability to simplify the landscape.  As these products mature and customers become more HANA-centric, we may see customers decide to replace Data Services with SDI/SDQ, but that does not make them a replacement product for Data Services.

      I think the key is for any customer to evaluate their integration needs and use cases and then choose the right product based on those factors.

      Former Member

Thanks Paul. We used the same reasoning to position BODS in our landscape, with the HANA platform taking centre stage. With the considered use cases, we see BODS in a more contained state going forward once we start using SDI in HANA EIM itself.

I understand that SDI handles the transformation piece for loading tables in HANA, but I wasn't quite sure whether SDI in HANA can also be used for transformation logic when we need to send data outside HANA. I mean, can SDI be used if I need to write logic to send a feed from a HANA table to some other third-party system that has ODBC connectivity with the HANA platform?

      Paul Medaille

      Hello Basant -

      As of SP10, SDI can move data bi-directionally, so yes, you can move into non-HANA targets.



      Former Member


So we can use SDI in HANA EIM to read data from and send data to non-HANA systems.

But what if I need to load data from a flat file or write data to a flat file? The flowgraph in HANA EIM can only accept tables/views as data sources or data sinks.

Also, would there be any concept of variables and parameters in HANA, as there is in BODS?

If I think of replacing BODS completely, then I have to be very sure that HANA EIM is as flexible as BODS.

      Kiran Shenvi

      Hi All,

      I am new to flowgraphs and need your help!
      Suppose we have two columns in the Data Source and want to concatenate them and map them to a single column in the target table. Which is the correct type of node to use? Is this possible to achieve?
      I couldn't find any relevant example online, or documentation covering this use case. Kindly guide me on the required steps.


      David Pugh (Blog Post Author)

      Hi Kiran,

You can use the Filter transform to change the mapping of columns, including derived values. To concatenate, you can use either the CONCAT function, CONCAT(stringArg1, stringArg2), or the double-pipe operator, stringArg1 || stringArg2.
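As a sketch of what such a mapping expression evaluates to (the FIRST_NAME/LAST_NAME columns and CUSTOMERS table are assumed names, purely for illustration):

```sql
-- Both forms concatenate; || lets you interleave literals such as a space.
SELECT CONCAT("FIRST_NAME", "LAST_NAME")    AS "FULL_NAME",
       "FIRST_NAME" || ' ' || "LAST_NAME"   AS "FULL_NAME_SPACED"
  FROM "CUSTOMERS";
```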



      Kiran Shenvi

Thanks David – it worked!

      Former Member

      Hello experts,


How can we handle dependencies between flowgraphs? For example, I have created Flowgraph1 and Flowgraph2, and I want to start Flowgraph2 only when Flowgraph1 has completed successfully. I appreciate your help!



      Former Member

      Hello All,

How do you use an if-then-else statement in a flowgraph in SAP HANA? Could you please help me with this, as I am facing a syntax error while running the flowgraph?

      Thanks in Advance!!