
BW on HANA: Working of In-Memory DSO

Hello SCN,

In this blog we will discuss the HANA architecture with BW, how it improves BW performance, and everything we need to know about the In-Memory DSO.

How does today’s Standard DSO work?

DataStore Objects (DSOs) are used for staging and also for operational reporting in SAP BW. A Standard DSO consists of three tables: the Active table, the Activation Queue, and the Change Log table.

/wp-content/uploads/2012/01/1_93101.jpg

Activation of a DSO means calculating the delta, updating the current data snapshot in the Active table, and writing the history of changes to the Change Log table.

We can see a detailed flow of the same in the following diagram:

Let us take an example scenario:

We have an ATM1 with an initial balance of 400 and a delta load with balance 100.

/wp-content/uploads/2012/01/2_93102.jpg

How does request activation take place?

The request activation takes place on the BW side and saves the changes to the database, as depicted in the figure below:

/wp-content/uploads/2012/01/3_93103.jpg
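To make the activation step concrete, here is a minimal Python sketch of the three tables and the delta calculation. This is not actual BW code; it assumes the key figure is set to overwrite, so the incoming delta record carries the new balance 500, and it omits the request IDs and record modes that real change log records carry.

def activate_request(activation_queue, active_table, change_log):
    """Calculate the delta against the active table, write the history of
    changes to the change log, and overwrite the current snapshot."""
    for key, new_value in activation_queue:
        old_value = active_table.get(key)
        if old_value is not None:
            # Before image: the old value with a reversed sign, so that
            # downstream targets can consume the change as a delta.
            change_log.append((key, -old_value))
        change_log.append((key, new_value))     # after image
        active_table[key] = new_value           # current data snapshot
    activation_queue.clear()                    # the queue is emptied after activation

active_table, change_log = {}, []

activate_request([("ATM1", 400)], active_table, change_log)   # initial load: balance 400
activate_request([("ATM1", 500)], active_table, change_log)   # delta load: new balance 500

print(active_table)   # {'ATM1': 500}
print(change_log)     # [('ATM1', 400), ('ATM1', -400), ('ATM1', 500)]

The change log keeps the before image with a reversed sign together with the after image, which is what allows the delta to be propagated to further data targets.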

Where is the today’s DSO performance critical now?

As discussed above, the DSO uses three tables for calculating the delta. The application server performs the delta calculation and executes every SQL statement in the database. For every calculation, the application server therefore needs to communicate with the DBMS, which creates a heavy load on the RDBMS server. These round trips make the activation time very high; activation can account for more than 80% of the runtime.

Given HANA's computational power, SAP decided to move the calculation part into the SAP HANA database to optimize the activation time.

/wp-content/uploads/2012/01/5_93105.jpg
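The following toy Python sketch illustrates the difference; the latency value and function names are assumptions, purely for illustration. An application-server-driven activation pays a round trip per SQL statement, whereas a pushed-down activation processes the whole set in a single database call.

DB_ROUND_TRIP_MS = 1.0   # assumed cost per application-server <-> database call

def activate_on_app_server(queue, active_table):
    """Application-server-driven activation: SQL is issued per record."""
    round_trips = 0
    for key, value in queue:
        _old = active_table.get(key)    # lookup sent to the database
        round_trips += 1
        active_table[key] = value       # update/insert sent to the database
        round_trips += 1
    return round_trips * DB_ROUND_TRIP_MS

def activate_in_database(queue, active_table):
    """Pushed-down activation: the whole set is handled in one call."""
    active_table.update(dict(queue))    # single set-based statement inside the DB
    return 1 * DB_ROUND_TRIP_MS

queue = [(f"ATM{i}", 100 * i) for i in range(1, 10001)]
print(activate_on_app_server(queue, {}))   # 20000.0 ms spent on round trips
print(activate_in_database(queue, {}))     # 1.0 ms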

What is the IMCE (HANA), and what do you need to know about the column store?

Main Storage:

It contains the main part of the data, which is not only highly compressed but also read-optimized.

History Storage:

It supports the versioning of data (history).

Delta Storage:

It exists only in main memory. Change operations are stored here and are used for delta management.

/wp-content/uploads/2012/01/4_93104.jpg

What is the Delta Merge operation?

  1. It moves data from the Delta Storage to the Main Storage, because the Main Storage is read-optimized.
  2. It uses MVCC to maintain consistency in the database for read and write operations. MVCC is discussed in detail in a later section.

Write Operation:

  1. Writes go only to the Delta Storage, as it is write-optimized.
  2. The database is updated by inserting a new entry into the Delta Storage.

Read Operations:

  1. A read accesses both the Main and Delta Storage and merges the results.
  2. The IMCE uses MVCC (Multi-Version Concurrency Control) to ensure consistent read operations. A minimal sketch of these storage operations follows below.
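Putting the operations above together, here is a toy Python sketch of the idea. It is an illustration only, not the real IMCE data structures; the history storage and the compression of the main storage are omitted. Writes go only to the delta storage, reads merge main and delta storage, and the delta merge folds the accumulated changes into the read-optimized main storage.

class ColumnStoreTable:
    def __init__(self):
        self.main = {}     # read-optimized snapshot (compressed in the real engine)
        self.delta = []    # write-optimized, append-only change buffer in memory

    def write(self, key, value):
        # Writes only touch the delta storage: a new entry is appended.
        self.delta.append((key, value))

    def read(self, key):
        # Reads look at the main storage and the delta storage and merge
        # the result; the newest delta entry for the key wins.
        value = self.main.get(key)
        for k, v in self.delta:
            if k == key:
                value = v
        return value

    def delta_merge(self):
        # Moves the accumulated changes from the delta storage into the
        # main storage, because the main storage is read-optimized.
        for k, v in self.delta:
            self.main[k] = v
        self.delta.clear()

t = ColumnStoreTable()
t.write("ATM1", 400)
print(t.read("ATM1"))   # 400 (served from the delta storage)
t.delta_merge()
t.write("ATM1", 500)
print(t.read("ATM1"))   # 500 (main says 400, the delta entry overrides it)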

How does this architecture with BW improve the activation times?

The request activation, along with the updating of the database, takes place on the SAP HANA DB side, as depicted in the figure below:

/wp-content/uploads/2012/01/5_93105.jpg

This means there are no more round trips to the database for saving the updated information, as the activation takes place in the database itself. This reduces the activation time.

Note: A DSO can be made an In-Memory DSO only if it is of the Standard type.

Only a DSO of type "Standard" can be converted to an In-Memory DSO. You can see this in the screenshots below:

Standard DSO:

/wp-content/uploads/2012/01/6_93107.jpg

You can see that the same option is not available for the other types of DSOs.

Direct Update:

/wp-content/uploads/2012/01/7_93108.jpg

Write-Optimized DSO:

/wp-content/uploads/2012/01/8_93109.jpg

What is MVCC?

Multi-Version Concurrency Control (MVCC) ensures consistent concurrent access to the database.

The purpose of MVCC is:

  1. To ensure database consistency.
  2. To resolve read-write and write-write conflicts.
  3. To prioritize transactions and avoid deadlocks.

How does MVCC work?

We have a table in the database which looks like this:

Object | Value
A      | 10

Let us denote a read operation as R(A) and a write operation as W(A). The registered timestamps Ti, where i = 1, 2, 3, 4, 5 (with T1 < T2 < T3 < T4 < T5), are used by MVCC to ensure consistency.

The order of the transactions is as shown in the diagram below:

/wp-content/uploads/2012/01/10_93111.jpg

In the above manner, MVCC ensures consistent read operations.
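To make the timestamp idea concrete, here is a small Python sketch of MVCC on the table above; the class and method names are illustrative assumptions, not the IMCE implementation. Every write W(A) creates a new version tagged with its commit timestamp, and a read R(A) at timestamp Ti sees the newest version committed at or before Ti, so readers are never blocked by writers.

class MVCCStore:
    def __init__(self):
        self.versions = {}   # key -> list of (commit_timestamp, value)

    def write(self, key, value, commit_ts):
        # W(A): append a new version instead of overwriting the old one.
        self.versions.setdefault(key, []).append((commit_ts, value))

    def read(self, key, read_ts):
        # R(A): return the latest version visible at read_ts.
        visible = [v for ts, v in self.versions.get(key, []) if ts <= read_ts]
        return visible[-1] if visible else None

store = MVCCStore()
store.write("A", 10, commit_ts=1)   # initial value committed at T1
store.write("A", 20, commit_ts=3)   # a writer commits a new version at T3

print(store.read("A", read_ts=2))   # 10 -> a reader at T2 still sees the old version
print(store.read("A", read_ts=4))   # 20 -> a reader at T4 sees the new version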

How does the In-Memory DSO work?

With the IMCE, the request activation takes place on the IMCE side, as shown below, and the BW server has views on these tables in the SAP HANA DB.

The In-Memory DSO consists of the same tables, i.e. the Activation Queue, the Change Log, and the Active Data table, along with the control tables /BIC/A<ODSNAME>70 and /BIC/A<ODSNAME>80. The Change Log and Active Data tables contain the additional field IMO_INT_KEY.

Request Activation in the In-Memory DSO:

The figures below depict the process of request activation in the In-Memory DSO.

/wp-content/uploads/2012/01/11_93112.jpg


Now let us take the same sample transaction that was discussed earlier for the Standard DSO.

Example:

We have an ATM1 in which we have an initial balance of 400 and a delta load of balance 100.

Step 1:

/wp-content/uploads/2012/01/12_93116.jpg

Step 2:

13f.PNG

Hope you understood the working of the In-Memory DSO. Now let us discuss the concepts for migrating a DSO to an In-Memory DSO. There are two options available for migration (a small sketch contrasting them follows the list):

1. Simple Migration:

Only the active data gets converted and the change log history gets deleted. The conversion is faster, but requests cannot be rolled back after migration.

2. Full Migration:

The active data gets converted along with the change log history. The conversion is slower, but there is no restriction on the rollback of requests.
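As a rough illustration of the trade-off, here is a hypothetical Python sketch reusing the toy tables from the earlier sketches; the real migration is performed by the system, not by custom code. The two options differ only in whether the change log history is carried over.

def migrate_dso(active_table, change_log, mode="simple"):
    """Toy contrast of the two migration options (illustrative only)."""
    in_memory_active = dict(active_table)          # active data is always converted
    if mode == "full":
        in_memory_change_log = list(change_log)    # history kept: rollback still possible
    else:
        in_memory_change_log = []                  # history dropped: faster, no rollback
    return in_memory_active, in_memory_change_log

active = {"ATM1": 500}
log = [("ATM1", 400), ("ATM1", -400), ("ATM1", 500)]
print(migrate_dso(active, log, mode="simple"))  # ({'ATM1': 500}, [])
print(migrate_dso(active, log, mode="full"))    # ({'ATM1': 500}, [('ATM1', 400), ('ATM1', -400), ('ATM1', 500)])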

This covers the HANA architecture with BW, how it improves performance with SAP BW, and the working of the In-Memory DSO.

We will discuss the working of the In-Memory InfoCube in my next blog:

BW on HANA: Working of In-Memory Info Cube

Related content:

Please find below the links which helped me in writing this blog:

SAP High-Performance Analytic Appliance 1.0 (SAP HANA)

HANA and BW 7.30 – Part 2

      Comments

      TANKA RAVICHANDRA
      Hi,
      In most of the books I have read, whenever we do activation the data in the activation queue moves to the active table and the change log table in parallel, but in your diagram the flow is different. I think your data flow is correct; can you please explain whether the data is loaded in parallel into the active and change log tables, or sequentially from the activation queue to the change log and then into the active table?
      Also, can you please explain a little bit more about HANA: what is the main use of HANA and why should we use it?

      Thanx & Regards,
      RaviChandra

      Former Member (Blog Post Author)
      Hello RaviChandra,

      In my example diagram, you can see Request 1 (ATM1 400) in the active table, which later gets replaced by the new value "500".

      This means that when the activation starts, the data from the activation queue is moved in parallel into the active table and the change log table, and the activation queue is then truncated. After the delta calculation is done, the value in the active table gets "replaced" (depending on the DSO settings) with the new value, and the change log holds the trace of the transaction. For this purpose I have shown the arrow from the change log to the active table, to show how the result is determined in the active table and also the significance of these calculations, which in HANA are now done by a calculation view. Hope I answered your question.

      TANKA RAVICHANDRA
      Hi

      Thanks for the reply. Can you give an overview of HANA? What is the main use of HANA, etc.?

      Thanx & Regards,
      RaviChandra

      Former Member (Blog Post Author)
      Hello Ravi,

      Regarding the HANA overview, I have posted a link on the architecture in the related content of this blog. That document discusses the architecture of HANA. Regarding the importance of HANA, do you want me to describe its importance related to BW, or its importance as a new tool delivered by SAP?
      When it comes to its importance for BW, HANA solves some problematic areas in existing BW, like data loading performance and reporting performance. It does not change any existing features; it only improves the functionality.

      Former Member

      Hi all,

      Don't forget about problems!

      For example, DSO and cube partitioning.

      https://cw.sdn.sap.com/cw/ideas/8452

      Anand Tiragati

      Hi,

      Good job, Krishna...

      I couldn't understand it completely; anyway, I will go through it again.

      Please confirm: in the example, it should be DELTA ATM1 500 instead of ATM1 100, right?

      And,

      Different options available for migration:

      Full Migration and Full Migration??

      Regards,

      Anand.

      Former Member (Blog Post Author)

      Hello Anand,

      Thank you for reading this blog. Yes, you are correct; I have corrected the images, and it is Simple Migration and Full Migration.

      Thanks,

      Krishna

      Former Member

      Hello Krishna,

      Thanks for sharing the document.

      I have a few questions:

      1. What is the significance of the control tables /BIC/A<dso>70 and /BIC/A<dso>80, and of the additional field IMO_INT_KEY?

      2. How will the history storage be updated with the data from the delta storage?

      3. Can you explain the process of how the delta record ATM1 100 gets updated in the different tables?

      4. How will the history table hold the data? Will it hold the init data (400) and the delta data (100)?

      Thanks,

      Prasath

      Former Member

      Hi Krishna,

      I have one doubt about the statement below:

      Application Server performs delta calculation and executes every SQL statement in the database. Thus for every calculation Application server needs to communicate with the DBMS, thereby creating heavy load on RDBMS server.

      Here the data is actually present in the PSA, so how is the burden created on the RDBMS server? Can you explain a little bit more about this?

      tnx,

      praveen kodam.

      Rama Shankar

      Good blog - thanks!

      Kamal Mehta

      Nice one.

      I am not sure about the control tables. Can you please elaborate on the same?

      Thanks

      Kamal

      Vivek Singh Bhoj

      Nice Article

      Regards,

      Vivek

      Joseph Gonzales

      Hi:

      In reading SAP Note 1849497: with SP10, SAP will no longer be using a Calculation View for the Change Log.

      Regards,

      Joe G.

      Kodanda Pani KV

      Hi,

      Nice explanation, thanks for sharing.

      Thanks,

      phani.

      Krishna Tangudu

      Thanks, guys, for all your comments. I will reply to them soon; I didn't see these comments for a while, as the blog is under another account.

      Regards,

      Krishna Tangudu

      Viren Pravinchandra Devi

      Hello,

      Thanks for sharing the blog.

      I think the delta merge option is available in the DTP as well (correct me if wrong). My question: given that the flag is ticked, the delta merge is supposed to happen as soon as the request is loaded. So I would like to know whether the delta merge happens while loading to the activation queue or while performing the activation?

      Regards,

      Viren

      Geeta Sharma

      Hello,

      Data modifications are initially saved in a delta storage that is optimised for write access. However, most of the data is saved in a highly compressed format in the main storage, which is optimised in terms of required memory space and read performance.


      After activation, an automatic check is run to see whether a delta merge can be performed. This also applies to DataStore objects that belong to a semantically partitioned object.


      Make sure that the DTP or process type always triggers a delta merge. If no delta merge takes place, the data remains in delta storage. Over time this will result in storage problems and have a negative impact on read performance.


      Check this help library link.


      https://help.sap.com/saphelp_nw73/helpdata/en/62/9d41934c0744ef9a8f21fa4c70baa3/frameset.htm


      Thanks

      Viren Pravinchandra Devi

      Thanks, Geeta. So the Delta Merge option in the DTP is only for cubes and write-optimised DSOs. This means it happens only for the active table, which makes sense. For a standard DSO the option in the DTP is not useful; it would be automatic.

      Correct if wrong.

      Regards,

      Viren

      Geeta Sharma

      Yes, you are right.

      The merge operation differs depending on which SAP NetWeaver BW object the data is loaded into. The delta merge is either performed automatically by the application or must be triggered manually within the application. This depends on the relevant object type:

      Persistent Staging Area (PSA)

      The delta merge is automatically performed after the system writes to the PSA, using the smartmerge.

      Master Data

      The system uses automerge.

      Standard/SAP HANA Optimized DataStore Object

      The delta merge is automatically performed in the activation process. (This also applies to DataStore objects that belong to a semantically partitioned object.)

      Write-Optimized DataStore Object

      The delta merge is not performed automatically. (This also applies to objects that belong to a semantically partitioned object.)

       The Data Transfer Process (DTP) has an Update tab that contains the Trigger Database Merge checkbox. This controls the delta merge, once the DTP request has been successfully processed. By default, this checkbox is selected. (see Figure "DTP Configuration")

       There is a Process Type, NewDB Merge, that can be used within a Process Chain to execute the "smartmerge".

      Standard/SAP HANA Optimized InfoCube

      The delta merge is not performed automatically. (This also applies to objects that belong to a semantically partitioned object.)

       The DTP has an Update tab that contains the Trigger Database Merge checkbox. This controls the delta merge, once the DTP request has been successfully processed. By default, this checkbox is selected.(see Figure "DTP Configuration")

       There is a Process Type, NewDB Merge, that can be used within a Process Chain to execute the merge.

      Thanks,

      Geeta