Technical Articles
Bruno Beghini

Exploring New SAP Datasphere Features: Capturing Delta Changes in Your Local Table and the Transformation Flow


On September 20th, 2023, SAP released new features for SAP Datasphere. In this blog I am focusing on “Capturing Delta Changes in Your Local Table”, which can be very useful in the many scenarios where changes on the source need to be captured. I will also perform a quick walk-through of the new Transformation Flow, which can load those delta changes from a source table to a target table.

For further details please check:


Capturing Delta Changes in Your local table


How does it work?


When creating a new local table you are able to switch the “Delta Capture” toggle on:


When this is done, a delta capture table named <table_name>_Delta is automatically created and two additional fields are added to the table definition:


  • You can change both the business name and the technical name, but you can’t change the data type (read-only).
  • The delta capture columns can’t be set as key columns.
  • They can’t be deleted while the toggle is switched on.
  • Once the table is deployed, the toggle can’t be switched off.
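Conceptually, delta capture works by stamping every row with its latest change type and change date alongside the business data, while exposing an “active records” object that hides deleted rows. The following toy Python model is only a conceptual sketch of that behavior, not SAP’s implementation; the class, its methods, and the generic I/U/D change types are illustrative assumptions (as shown later in this post, file uploads in Datasphere always stamp “I”):

```python
from datetime import datetime, timezone

class DeltaCaptureTable:
    """Toy model of a local table with delta capture enabled."""

    def __init__(self, key_fields):
        self.key_fields = key_fields
        self.rows = {}  # key tuple -> record incl. Change_Type / Change_Date

    def _stamp(self, record, change_type):
        record["Change_Type"] = change_type
        record["Change_Date"] = datetime.now(timezone.utc)
        return record

    def upsert(self, record):
        key = tuple(record[f] for f in self.key_fields)
        change_type = "U" if key in self.rows else "I"
        self.rows[key] = self._stamp(dict(record), change_type)

    def delete(self, key):
        # Deleted rows stay in the delta table flagged 'D' so that
        # downstream flows can propagate the deletion.
        self.rows[key] = self._stamp(dict(self.rows[key]), "D")

    def active_records(self):
        # The "active records" object hides deleted rows.
        return [r for r in self.rows.values() if r["Change_Type"] != "D"]

    def delta_records(self):
        # The "_Delta" object exposes every change, including deletions.
        return list(self.rows.values())
```

This mirrors what we will see later in the Data Editor example: inserts stamped “I”, updates “U”, and deletions kept as “D” rows in the delta table while disappearing from the active records.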


Here we can see the difference between a local table with delta capture disabled (left) and a local table with delta capture enabled (right):

As you can see, there are two objects (the active records table and the delta capture table) that are responsible for serving data appropriately when consumed by the following SAP Datasphere apps:

  • Table Editor – Data Preview: active records table not used; table with delta capture columns used as source object. Data preview is available once deployment is completed and shows only the table with delta capture columns. See Viewing or Previewing Data in Data Builder Objects.
  • Table Editor – Data Maintenance: active records table not used; table with delta capture columns used as source and target object. You can perform table maintenance once deployment is completed. See Maintain Local Table Data.
  • Table Editor – File Upload: active records table not used; table with delta capture columns used as source and target object. You can update a table with delta capture by uploading a new CSV file after deployment is completed. See Load or Delete Local Table Data.
  • Transformation Flow: active records table used as source object; table with delta capture columns used as source and target object.
  • Replication Flow: active records table not used; table with delta capture columns used as target object. See Creating a Replication Flow.
  • View Builder: active records table used as source object; table with delta capture columns not used. Only the active records table is visible in the Data Builder landing page, the repository tree (as shown in View Builder, Transformation Flow, Data Flow, Replication Flow and ER Modeler), the association targets list, and the graphical view calculated column currency conversion tables list. See Creating a Graphical View.
  • ER Model: active records table used as source object; table with delta capture columns not used. Only the active records table is visible in the repository tree, but you can create local tables with delta capture. See Create a Table in an E/R Model.
  • Data Flow: active records table used as source object; table with delta capture columns not used. Only the active records table is visible in the repository tree. See Add a Source.
  • Business Builder: active records table used as source object; table with delta capture columns not used. The table with delta capture columns is hidden when searching for entities; only the active records table can be selected.




Below I will showcase the difference between creating and loading a local table with and without the delta capture option enabled:

  • Local Table without delta capture (Delta Capture Disabled)
    1. Create table based on file


Table creation based on file

Note: Date, Sales Rep, Location and Product were set as Key fields

Table created successfully with the flat file data.

  2. Loading flat file data

Now let’s imagine a scenario where you need to change some data in the flat file and upload the same record again (a record with the same key). E.g. in the file below, the Plan value of 20230101; Sales RepA; Location A; Product 01 has changed from 7100 to 7000.


Duplicated Key Error


After trying to upload the file, the system shows a “unique constraint violated” error. This means the file contains duplicated records considering the defined key, or that the file contains a record that is already in the table (which is the case here).

This behavior is by design: SAP Datasphere only uses “insert” to add records to a local table via the file upload functionality. Alternatively, you can enable the flag “Delete Existing Data Before Upload”; that way the system deletes all existing entries before reloading. For this you must have the complete file, since the local table data will be truncated before the new data is uploaded. Note: this is not required if the flat file contains only new data (data with different keys).

Using the “Delete Existing Data Before Upload” option:

Using Data Editor:

Updates in the file content can also be done using Data Editor, this can be very handy for small adjustments on the data:
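The insert-only upload behavior and the “Delete Existing Data Before Upload” workaround described above can be reproduced with any database that enforces a primary key. Below is a sketch using SQLite purely as a stand-in (Datasphere itself runs on SAP HANA; the table and column names are invented for the example):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE sales_plan (
    sales_date TEXT, sales_rep TEXT, location TEXT, product TEXT,
    plan INTEGER,
    PRIMARY KEY (sales_date, sales_rep, location, product))""")

# First upload: plain inserts succeed.
first_upload = [("20230101", "Sales RepA", "Location A", "Product 01", 7100)]
conn.executemany("INSERT INTO sales_plan VALUES (?,?,?,?,?)", first_upload)

# Re-uploading the same key with a changed plan value fails,
# because the upload only ever issues inserts.
changed = [("20230101", "Sales RepA", "Location A", "Product 01", 7000)]
try:
    conn.executemany("INSERT INTO sales_plan VALUES (?,?,?,?,?)", changed)
except sqlite3.IntegrityError as err:
    print("Upload rejected:", err)  # unique constraint violated

# "Delete Existing Data Before Upload": truncate, then load the full file.
conn.execute("DELETE FROM sales_plan")
conn.executemany("INSERT INTO sales_plan VALUES (?,?,?,?,?)", changed)
print(conn.execute("SELECT plan FROM sales_plan").fetchone()[0])  # 7000
```

This also illustrates why the complete file is needed with that flag: the truncate step removes every existing row, not just the ones being corrected.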

  • Local Table with delta capture (Delta Capture Enabled)
    1. Create table using Delta Capture

Now, using the same table as a template, I will create a new table with the Delta Capture option activated. Note: currently it is not possible to change this flag after the table has been deployed.

For simplicity and to save time, I am using the “Save As” option to create a copy of the existing table, but it can also be created from scratch if needed.

Now we are able to switch the delta capture on:

Keep in mind that after deployment it is not possible to switch this functionality off.

As you can see in the image below two additional fields were added (Change Type and Change Date):

These fields are used to control the delta of the table content.

Uploading initial data

Now we upload the initial data using the same file as before:

This is how the data looks after the first load:


Uploading same file with changes

As a next step, we change the plan value of the first record in the file and upload it again:

Then after loading the file we can see the new values with the new change date:

Note: as mentioned in the documentation, the Change Type for flat files is always “I”. In case you need to delete specific records, you need to use the Data Editor, or upload a file containing only the necessary records and use the deletion flag, as demonstrated below:

Deleting data:

The first record is deleted from the flat file:

And the file was uploaded again using the flag:

As you can see, the first record got the deletion flag:

In this case, when the deletion flag is active, the system eliminates the record from the active table but keeps it in the delta table with change type ‘D’, in order to capture the deletion in the delta mechanism.




Note: if you delete the table content using the “Delete Data From Table” option, both the delta and active records tables will be empty:


Modifying the data using data editor

In the example below the following procedures were performed:

  • Row 1: New inserted record
  • Row 2: Deleted record
  • Row 3: Updated record
  • Row 4: Existing record

As displayed above, the change type and change date work accordingly.


Transformation Flow

Now I am going to show how this works together with the brand-new Transformation Flow:

  • First you need to create a new transformation flow under Data Builder:

  • Then you can click on the edit icon on the left side of the view transformation:

  • View transformation: here is where you can perform all the necessary joins, calculations, aggregations and filters you might need. First, just drag and drop the table you want to use as a source:

  • Then you can add all the necessary transformations. In this example I will add a calculated field that indicates whether the actual is equal to or higher than the plan (Target Achieved):



  • You can go back to the previous screen and add a new target; in this case we are going to use the default Target_1:

Note: since we are using a delta capture enabled local table as the source, the delta capture option on the new table (Target_1) is already enabled by default.


  • Deploy the transformation flow:


  • Using the Data Preview option we can see the data at any step of the transformation. Here is the content of the source after calculating the “Target Achieved” column:

  • In the left pane of the transformation flow we can choose the load type (“Initial” or “Initial and Delta”):

  • And in the top area we can run the transformation flow, or add a schedule via the calendar icon in the left pane:

  • After running the transformation flow you can check the status on the left pane or on the Transformation Flow monitor:



  • Now I will reload the flat file, then perform some changes (including deletions) and trigger the transformation flow execution:

Data loaded to the Target Table before changes:

Source Table after changes (Deletions, Updates and Inserts as you can see in the Change type column):


In the DIM (Data Integration Monitor) you can check the run details:

Additional details can be seen in the messages. For example, when you click “view detail” on the “Reading data from the ‘View Transform’ operation and writing it to the target table” step, the system will display the operation used to write the data to the target table.

Notice that the “Change Date” is used here to control which deltas were already loaded during my previous tests, allowing only the new data to be loaded.
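Under the hood this is the classic watermark pattern: the flow persists the highest Change Date it has processed and, on the next run, applies only rows stamped after it. Below is a hypothetical Python sketch, not Datasphere internals; `load_delta`, its parameters and the `key` field are invented for illustration:

```python
def load_delta(source_rows, target, last_watermark):
    """Apply only rows changed since the previous run.

    source_rows: iterable of dicts carrying Change_Type / Change_Date.
    target: dict keyed by business key, standing in for the target table.
    Returns the new watermark to persist for the next run.
    """
    new_watermark = last_watermark
    for row in source_rows:
        if row["Change_Date"] <= last_watermark:
            continue  # already loaded in a previous run
        if row["Change_Type"] == "D":
            target.pop(row["key"], None)  # propagate the deletion
        else:                             # 'I' or 'U': upsert into the target
            target[row["key"]] = row
        new_watermark = max(new_watermark, row["Change_Date"])
    return new_watermark
```

Running the same flow twice over an unchanged source is therefore a no-op: every row falls at or below the stored watermark and is skipped.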

  • This is how the target looks after the transformation flow run:



In my opinion these new features are great improvements that bring enormous benefits to the business in terms of agility and scalability, and for sure there is much more to come.

For more information about upcoming improvements to SAP Datasphere, please check its roadmap here:

Also, in case you see room for improvement, please submit your idea here:

      Marianne Loenen

      Thank you for the nice write-up of this new feature Bruno!

      Martin Kreitlein

      Hello Bruno,

      thanks for that really great blog.

      Unfortunately many of your screenshots are very small 🙁

      And if you zoom into, then they will be blurred, since the original size is so small.

      Maybe you can upload the big ones in higher resolution?

      BR, Martin

      Bruno Beghini
      Blog Post Author

      Hello Martin,

      Sorry about that, there were probably some issues while I was copying the images from one place to another.

      It should be fine now.

      Thanks a lot for letting me know,

      Bruno Beghini

      Anshul Sharma

      Amazing article, and nicely explained, Bruno

      Nageswara Reddy Mallidi

      Hi Bruno,

      Thanks for the nice blog on the new feature in DSP.

      I have a couple of questions.

      Does Transformation flow support only Local tables as source table?

      Can we not use Remote table as source table?

      Generally the local tables are loaded from flat file/excel sheets.
      Can we not create a Transformation flow using a Remote Table as the source table, which is loaded directly from ECC/S4 systems?

      Thanks & Regards,

      Bruno Beghini
      Blog Post Author

      Hi Nages,


      Thank you for your questions,


      As far as I know, only local tables are accepted there. The idea of a remote table is to consume the data directly from the source system using federation, or you also have the possibility of data replication in the remote table itself. If you need to perform some transformations you can do it using a graphical or SQL view, or, when replicating the data into a local table, you can use "Data Flows".

      To consume data from SAP ECC/S4 you can use federation as mentioned above, or consume the data using a data flow instead of a remote table and perform any necessary transformation within that new Data Flow.

      Another option would be to use a replication flow to replicate multiple tables of your source system to Datasphere, with the possibility of using deltas. Then, in case new transformations and persistency are required, you can use a transformation flow on top of the tables loaded by the replication flow.

      In summary, I would suggest that you investigate these possibilities further (Remote Table, Data Flow and Replication Flow) in order to decide the best way forward based on your specific scenario.


      Best regards,


      Bruno Beghini

      Alberto Simeoni

      Hello Bruno,

      A thing that really hurts my heart is that existing Graphical / SQL Views can not be used inside replication flows!

      So to optimize existing logic, a person needs to redo it from the beginning?

      This, and the fact that CDC tables need to be brand new tables rather than already-deployed ones...


      I hope in the future "the music changes".

      Because it is absurd to introduce new features that are totally standalone and not integrated with the existing modelling.

      Bruno Beghini
      Blog Post Author

      Hello Alberto,

      Thank you for your comments please check my points below,

      Currently you can create a Graphical/SQL view on top of the target tables of the replication flows, so there is no need to recreate those.

      The solution is evolving a lot and with that also improving its integration aspects.

      Best regards,

      Bruno Beghini

      Alberto Simeoni

      for example this is an existing ETL flow:

      1) Load from SAP S/4HANA the MATDOC table (with replication flow)

      2) Apply some logics (add standard costs from MBEW) and group by fiscal year / period (to reduce row count). These logics are applied in a Graphical View

      3) Use a dataflow to read from the graphical view and save into a table (we do not want to read 10 years of material movements from the beginning, only the last 2 months).

      4) before step 3 you need to delete from target the last 2 months.


      New ETL Flow:

      1) Load from SAP S/4HANA the MATDOC table (with replication flow)

      2) Apply some logics (add standard costs from MBEW) and group by fiscal year / period (to reduce row count). These logics are applied in a Transformation Flow with CDC delta.


      To change from Old ETL Flow to the new one I need to re-implement the Graphical View into a "Transformation Flow base view".


      Bruno Beghini
      Blog Post Author

      Hi Alberto,

      Thank you for the detailed explanation, I see your point now. Yes, currently it is not possible to reuse an existing graphical view in a transformation flow.

      The alternative I see would be to apply all possible filters in the graphical view itself, in order to optimize the reading time and try to avoid the second data replication, but I guess that was already considered...

      Best regards,

      Bruno Beghini


      Oddmar Lid

      Hi Bruno and Alberto,


      We have a Datasphere solution on top of our SAP S/4HANA system.

      Currently we are using Remote Table functionality towards CDS Views in SAP S/4HANA system to replicate data into Datasphere.

      Realtime replication for transaction data and snapshot replication for master data.

      One of the features Datasphere is missing is to remove leading zeroes from the main reporting dimensions (Embedded Analytics on top of SAP S/4HANA and SAP BW has this) during reporting.

      And to have leading zeroes on the main reporting dimensions in a SQL data warehouse like Datasphere, is a nightmare while writing SQL code where you then need to know for each dimension how many leading zeroes you should add while writing filters, etc.

      Our goal is to remove all leading zeroes on the source data coming into Datasphere from our SAP S/4HANA system and this should preferably be done during the Realtime/Snapshot replication of data.

      We have tried building calculation views directly on top of remote tables, where we use LTRIM to remove leading zeroes, and using these as sources for our business data models.

      But when comparing the performance of business data models directly on top of remote tables vs. a view layer to remove leading zeroes, there is unfortunately quite a noticeable time difference.

      We have tried with Replication flow, but there is no such ETL functionality.

      I was hoping that with Transformation flows we could incorporate such ETL functionality together with a Replication flow, but as far as I can see, I can only use the Target table from a Replication flow as source in a Transformation Flow.

      If we then use Transformation Flows for this purpose to load Delta Changes from a Replication Flow target table into a new internal table, we will:

      • Duplicate the Transaction and Master data
      • Lose the realtime capability (even though replication flows are not realtime anyway, since they can only be scheduled to run once per hour)

      Do you know a way that Transformation flows, together with Replication Flows can help us in this scenario without losing RealTime and duplicating Transaction/Master data?

      Thanks for any insight into this question / scenario!!

      Best regards

      Oddmar Lid

      Bruno Beghini
      Blog Post Author

      Hello Oddmar,

      Thanks a lot for sharing your scenario with us,

      Regarding your question, as far as I'm aware the only real-time replication currently available is the way you are doing it (remote tables).

      To optimize the reporting performance of the approach where you use a view to remove the leading zeros, you can check whether it is possible to add more filters to restrict the data before applying LTRIM(), or try the persistency/partitioning functionalities to see if they help. You can find more details here:

      Also the updated Best Practice document might be useful:

      Thank you and best regards,

      Bruno Beghini

      Jonathan Haun

      It appears that a delta-enabled table cannot be set as a target table in a data flow, and quite frankly this makes the delta table feature useless. What is the point of a delta table if I can't define a data flow to load it?

      Bruno Beghini
      Blog Post Author

      Hello Jonathan,

      Thank you for your message,

      For that purpose the transformation flow was created; there you can apply the necessary rules, filters, joins and so on. You can find more details here:

      Creating a Transformation Flow | SAP Help Portal

      Best regards,

      Bruno Beghini

      Jani Uusihannu

      Thanks Bruno,

      A good overview! We have a case in which we get only the delta data (based on a changed-on date at header level) into a remote table in Datasphere from CPI, and from there it's pushed to a persisted table.

      The initial load brings following rows:

      KEY1 KEY2 Value
      A 1 100
      A 2 150
      B 1 300
      B 2 200

      In the source system the row 2 with keys 'A-2'  is deleted, and the delta data will be like:

      KEY1 KEY2 Value
      A 1 100

      So it will bring all records based on the key except the deleted one. Can the transformation flow handle this kind of case?

      Bruno Beghini
      Blog Post Author

      Hi Jani,

      Thank you for your comment,

      Regarding your question: since the source system does not send the deleted records nor delete them from the local table, I don't believe the transformation flow will handle the case you described. I believe the alternative would be to use the "DELETE" option of the data flow to delete the delta records being loaded into the system based on KEY1, and then reload.

      Best regards,

      Bruno Beghini