
During SAP Sapphire in Orlando I got the opportunity to attend strategy sessions related to SAP HANA technology. I must say that I really appreciate that SAP is interested in getting feedback on their products.

Therefore I decided to try a new kind of experiment – to write a “brainstorming blog”. Below are some ideas on how SLT replication could be improved. Feel free to criticize them if you disagree, or to append your own ideas. Maybe the SLT team will find these ideas inspiring and we will influence their direction. Let’s see.

All suggestions are divided into two main areas based on the associated technology.

SAP HANA Studio

1.) SLT heartbeat detection

As you might know, SAP HANA Studio plays only a passive role in SLT replication. All information you can find on the Data Provisioning screen is taken from local SAP HANA tables.

For example, the list of replicated systems is stored in table RS_REPLICATION_COMPONENTS in schema SYS_REPL, and the replication status for each table is stored in table RS_STATUS in the appropriate schema. Likewise, actions performed by users are not executed directly but only stored in table RS_ORDER (or table RS_ORDER_EXT).

The SLT system monitors these command tables and in turn updates the current activity in the status tables.

This passive approach leaves quite a lot of room for error. If SLT is not working properly, or not at all, there is no way to see this on the Data Provisioning screen, where everything appears to be fine.

The potential solution is quite simple. SLT could update a time-stamp in a dedicated SAP HANA table at regular intervals, and the Data Provisioning cockpit could interpret this value. If the time-stamp has not been updated for a certain period, there is a very high chance that SLT is in trouble.
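A minimal sketch of how the cockpit side of such a heartbeat check could look (the dedicated table and the tolerance value are assumptions, not an existing SLT feature):

```python
from datetime import datetime, timedelta

# Assumed tolerance before the cockpit flags SLT as unhealthy.
HEARTBEAT_TIMEOUT = timedelta(minutes=5)

def slt_is_alive(last_heartbeat: datetime, now: datetime) -> bool:
    """Return True when the last heartbeat written by SLT into the
    dedicated SAP HANA table is recent enough to be considered healthy."""
    return now - last_heartbeat <= HEARTBEAT_TIMEOUT
```

A stale heartbeat would then turn into a visible warning in the Data Provisioning screen instead of silently showing "everything fine".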

2.) Easy resolution of replication errors

When a table replication is in error status, there is nothing you can do to resolve this state from SAP HANA Studio. The only way is to use the advanced monitoring workbench in the SLT system, where very specific knowledge is required.

Customers should be able to run some kind of “auto-repair” function directly from the Data Provisioning screen. This function would attempt a consistency check, a clean-up and, if required and after user confirmation, a new provisioning of the given table. No advanced knowledge should be required.

The customer should also be offered a simple explanation of the nature of the error, whom to call and where to continue the investigation.

3.) Overall replication progress bar, table load progress

When you start replication of multiple tables, all you can do is wait passively. You might guess the overall progress based on the changing status of the tables, but no progress information is displayed.

One solution would be to adjust the Data Provisioning screen to contain a simple progress bar and to display the overall replication status as text (for example in a similar way to how the R3load status is presented during migration: “Replication status: running 3, waiting 8, completed 13, failed 1, total 25”).
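Building such a summary line is trivial once per-table statuses are available; a sketch (the status names are illustrative, not the actual RS_STATUS values):

```python
from collections import Counter

def replication_summary(table_statuses):
    """Build an R3load-style one-line summary from per-table statuses.
    Counter returns 0 for statuses that do not occur."""
    counts = Counter(table_statuses)
    return ("Replication status: running %d, waiting %d, completed %d, "
            "failed %d, total %d" % (counts["running"], counts["waiting"],
                                     counts["completed"], counts["failed"],
                                     len(table_statuses)))
```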

Another inconvenience is the missing replication progress for each table – especially while the initial load is running. For big tables the initial load (or a load operation in general) can take dozens of minutes or even a few hours.

You can still manually check the number of rows in the source table (either by querying database statistics or by querying ABAP statistics from table DBSTATTORA). Then you can check the number of rows already loaded into the SAP HANA database (using the Show Definition function). Comparing these two values gives you a hint about the progress of the load operation.

However this is a tedious manual process that could easily be automated.
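The automated version of that manual comparison amounts to a single calculation; a sketch (the source row count may be slightly stale, so the result is only a hint):

```python
def load_progress(source_rows, loaded_rows):
    """Estimate initial-load progress in percent by comparing the row
    count of the source table (e.g. from DBSTATTORA) with the rows
    already loaded into SAP HANA. Capped at 100% because statistics
    can lag behind the actual table content."""
    if source_rows <= 0:
        return 100.0
    return min(100.0, 100.0 * loaded_rows / source_rows)
```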

4.) Initial load estimation / Re-provisioning estimation

When you are asked to provision a table, the question that usually follows is “How long will that take?” Currently there is no way of predicting this, especially when you are doing the first replication on new hardware. You might have a very rough estimate based on the size of the tables, but this can be very inaccurate.

Again the solution can be relatively simple. All that is required is for SLT to collect various statistics which (if allowed by the customer) can then be sent to SAP for analysis.

The following information should be collected:

  • hardware configuration where SLT is running – this can be used to calculate a first variable representing the “power” of the machine (HW_POWER)
  • table name (and corresponding structure) – this can be used to estimate the complexity of the table, or to directly assign a complexity to well-known SAP tables (TABLE_COMPLEXITY)
  • number of records in the table and size of the table – this can represent the size factor of the replication (TABLE_SIZE)
  • replication duration – how much time the initial load took (REPLICATION_TIME)

These values can then be used in the following formula to find proper generic values for the variables:

     REPLICATION_TIME = TABLE_SIZE * TABLE_COMPLEXITY / HW_POWER

Of course historical values collected by SLT can also be reused when a table needs to be provisioned again.

The Data Provisioning screen in SAP HANA Studio should show details about the table or tables selected for provisioning, including the time estimate.
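Expressed as code, the formula above and its calibration from one measured load might look like this (a sketch; real TABLE_COMPLEXITY and HW_POWER values would come from the collected statistics):

```python
def estimate_replication_time(table_size, table_complexity, hw_power):
    """REPLICATION_TIME = TABLE_SIZE * TABLE_COMPLEXITY / HW_POWER"""
    return table_size * table_complexity / hw_power

def calibrate_hw_power(table_size, table_complexity, measured_time):
    """Solve the same formula for HW_POWER using one historical load,
    so later estimates on the same hardware become more accurate."""
    return table_size * table_complexity / measured_time
```

One measured initial load is enough to calibrate the machine, after which estimates for other tables on the same hardware only need their size and complexity factors.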

SLT system

1.) Consistency check and Clean-up functions

I really love SLT replication – it is my favourite way of replicating data into SAP HANA. However I must say that things are not working as they should. Although the replication principle is very simple, the implementation is so abstract that there is huge room for errors. And errors are happening more often than can be considered normal.

I have no constructive ideas for preventing errors. However I do have some ideas for error troubleshooting.

A definitely useful function would be the possibility to run a consistency check for given objects. It happened to me multiple times that the status in SAP HANA (table RS_STATUS, fields ACTION and STATUS) was different from the status in SLT (table IUUC_RS_STATUS, fields ACTION and STATUS). This error is quite obvious, yet there is no way to fix it without running an update query at database level in SLT, SAP HANA or both systems.

A similar problem can be observed with the RS_ORDER tables. Sometimes these tables also have multiple “last” entries for the same replicated table. It can also happen that when a table is de-provisioned, it is removed from the table list in transaction IUUC_SYNC_MON but still exists in the Mass Transfer definition, and there is no way to get rid of it.

A fantastic function would be a consistency check where all these objects would be validated against each other and all inconsistencies removed. In case of an unclear state the user could be asked for a decision.

Also, “orphaned” entries should be automatically identified and removed during SLT start, to keep the system clean and tidy.
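At its core such a check compares the (ACTION, STATUS) pairs both sides hold for each table; a simplified sketch (a real implementation would read RS_STATUS and IUUC_RS_STATUS, here modeled as plain dictionaries):

```python
def find_inconsistencies(hana_status, slt_status):
    """Compare per-table (ACTION, STATUS) pairs as seen by SAP HANA
    (RS_STATUS) and by SLT (IUUC_RS_STATUS).

    Returns tables whose status differs, plus tables known to only one
    side ("orphaned" entries that a clean-up could remove)."""
    mismatched = sorted(t for t in hana_status.keys() & slt_status.keys()
                        if hana_status[t] != slt_status[t])
    orphaned = sorted(hana_status.keys() ^ slt_status.keys())
    return mismatched, orphaned
```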

2.) Purge functions

With the following variants:

  • Purge of whole SLT   
  • Purge of specific Mass Transfer ID
  • Purge of specific table

Another nice function would be the possibility to purge the configuration: to remove EVERYTHING from SLT regarding a specific table – as if it had never been replicated by SLT for this particular Mass Transfer. This function would remove all entries related to the given table in the given Mass Transfer, including possible inconsistencies, without impacting other tables replicated by SLT. The table could then be safely provisioned again without risking collisions with obsolete entries.

The same function should be available at Mass Transfer level (to clean up everything in a given Mass Transfer definition) and also at the level of the whole SLT system (to restore it to a freshly installed state, including removal of all obsolete Mass Transfer IDs).

Of course corresponding purge actions should also be executed in the source systems.

3.) Replication Statistics

Detailed statistics about the replication process should be available:

  • how many records were replicated during the last period (for example on an hourly basis)
  • how much time was spent in replication activities
  • how much time was spent reading from the source system and how much time writing to SAP HANA (to determine where the replication time is spent)
  • what the minimum, average and maximum utilization of background jobs was, suggesting whether more background jobs should be allocated

All these statistics would provide additional insight into the replication process, offering the possibility to understand if and how the SLT system should be adjusted.
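For the background-job statistic in particular, the aggregation is simple; a sketch (samples are assumed to be periodic utilization readings between 0 and 1):

```python
def job_utilization_stats(samples):
    """Minimum, average and maximum background-job utilization from
    periodic samples (busy jobs / configured jobs). A maximum pinned
    at 1.0 suggests that more background jobs should be allocated."""
    return min(samples), sum(samples) / len(samples), max(samples)
```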

4.) Visualization of replication process

Every activity in SLT is composed of a series of steps. For example the replication process consists of an initial load followed by ongoing replication. The initial load can be broken down further into activities like table deletion in the source system, table creation in the source system, table creation in the SLT system, creation of the logging table, generation of runtime objects, calculation of the access plan, trigger creation in the source system, etc.

It is not very clear which activities are performed and, in case of an issue, where exactly the replication was interrupted. It would help to have for each table a tree of steps with semaphore lights, with the possibility to watch gray lights turn green – or, in case of trouble, red, pinpointing the step where the error occurred.

Such a feature would allow everyone to better understand the steps being performed and would also enable more effective problem determination.
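The semaphore tree could be driven by a very simple state mapping (step names taken from the initial-load activities above; the colour scheme is an assumption):

```python
STEPS = ["table creation in source system", "creation of logging table",
         "generation of runtime objects", "calculation of access plan",
         "trigger creation in source system"]

def step_lights(steps_completed, failed_step=None):
    """Map each step to gray (not yet reached), green (done) or
    red (the step where the error occurred)."""
    lights = {}
    for i, step in enumerate(STEPS):
        if step == failed_step:
            lights[step] = "red"
        elif i < steps_completed:
            lights[step] = "green"
        else:
            lights[step] = "gray"
    return lights
```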

5.) Troubleshooting wizard

Once an error in the replication process is discovered (either by the consistency check or from the visualization of the replication process), a troubleshooting wizard should be started, leading the user through the problem determination and guiding them to the problem area.

A nice example of such a wizard can be seen when resolving data load problems in BW (in transaction RSA1).

6.) Dialog for replication adjustment (currently possible only by ABAP adjustment)

SLT offers the possibility to adjust the replication process. Features like row filtering based on defined criteria, removing columns, adding new calculated columns or changing a column’s data type are possible with SLT.

However you need to develop new objects in the ABAP language and register them in SLT tables. SLT then automatically calls these objects to perform the conversions mentioned above.

I believe that SAP should currently focus on stabilizing the product to avoid issues rather than adding new features – however the possibility to adjust data types should be made accessible. A very simple dialog performing the code generation and registration, designed only for changing the data type of a particular table, would do the job. The justification for this need is explained in the next point.

7.) Data-type consistency with BO Data Services

This is a very important point. I must admit that I did not test with the latest versions, however I would be surprised to see a change.

There is a big inconsistency in data types between the BO Data Services and SLT replication technologies. SLT replicates data types in the same format as ABAP – which is often a serialized string representing the value. The best example here is a date field, which is stored as a YYYYMMDD-formatted string in ABAP and is replicated the same way by SLT.

Everything is fine as long as you do not need to use multiple replication technologies.

The problem arises when you start using BusinessObjects Data Services. BO Data Services is designed to translate data between various systems. To allow this, BO Data Services always interprets source data into an internal format and then translates it into the format used by the target system. In other words, a date field stored as a serialized string in ABAP will be interpreted as a date value and then stored with data type “Date”.

Again, everything is fine as long as you are using only BO Data Services as the replication technology.

The core of the trouble is that you cannot easily join tables storing dates as serialized strings with tables storing dates as date values. You might achieve the functionality by using formulas, however this approach will lead to serious performance problems and long query execution times.
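To illustrate the mismatch, the conversion that currently has to happen somewhere at query time is essentially this (a sketch in Python; in practice the conversion runs in SQL formulas on every joined row, which is exactly the performance problem):

```python
from datetime import date

def abap_date_to_date(yyyymmdd):
    """Convert an ABAP-style serialized date ('YYYYMMDD'), as replicated
    by SLT, into a real date value as stored by BO Data Services."""
    return date(int(yyyymmdd[:4]), int(yyyymmdd[4:6]), int(yyyymmdd[6:8]))
```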

If you need to combine these two technologies, you have to make adjustments in one of the replication tools – either change BO Data Services to use the data types of SLT replication, or adjust SLT to convert to the data types used by BO Data Services.

The ideal situation would be if this adjustment could be done at the click of a button – some kind of “compatibility mode” that can be easily activated in BO Data Services and/or in SLT.

8.) Documentation

Last but not least – SLT needs documentation. SLT is currently designed as a black box where the admin does not need to know the internal mechanics. This is fine as long as SLT is working as expected. However this is not the daily reality – SLT can run into problems, and then the admin is left without any guidance on how to solve the situation…


18 Comments


  1. Christian Schäfer

    All your points are really true, and about 10 months have passed.

    Your ideas are still not implemented 🙁 .

    Info: NVARCHAR(2000) will be converted to NCLOB in HANA 😕

    This is the reason for a very slow replication 🙁

      1. Christian Schäfer

        It is nearly the same problem as you described in point 7.

        SLT uses only ABAP types to create the tables in HANA.

        If you have a column with type Datetime or Timestamp or something like that, you will get a string type in HANA.

        You can use a workaround by adding a calculated column, or try to change the data type in HANA manually. But this can’t be a solution.

        Furthermore, if you have a string in MSSQL or Oracle with more than about 1024 characters it will become NCLOB in HANA.

        I don’t know the real reason, but this slows down your replication extremely.

        The internal ABAP type used for that is not listed in the list of types in IUUC_REPL_TAB_DV.

          1. Michael Harding

            Tomas –

            Great points. Like previous posts, it’s a bit disappointing that we are not seeing these improvement opportunities addressed 10 months later.

            One of the areas I’d also like to see some clarity on is the SLM strategy.  A couple points here: 

            1) There really should be a unified patching strategy across what I would refer to as the HANA ‘ecosystem’ in a sidecar-type implementation: HANA DB, HANA clients & shared libraries (on source systems & for developers), DMIS patches (on source systems), and the SLT system. Of course SAP’s suggestion is ‘apply the latest patch’, but the patch levels across these components are not in sync and updates are coming seemingly weekly. Stabilization across this ecosystem is an uphill battle, so trying to keep up with the patch levels in a Production environment is difficult. Furthermore, many of the replication issues you see in the SLT space are not observed during testing because transactional volume is much lower in non-Prod systems.

            2) Alignment with TDMS. The DMIS engine in source systems supports both TDMS and SLT, yet each of these replication products seems to be running independent SLM cycles; it’s as if the developers of each product are not communicating. Our implementation hit a scenario where TDMS actually required a higher DMIS patch level than what HANA was supporting at the time.

            Thanks,
            Mike

  2. Raj K

    Hi Thomas,

    One thing I am really concerned about with SLT is complex transformation rules.

    Through t-code IUUC_REPL_CONTENT (IUUC *** RUl MAP) we can specify mapping rules for replication settings.

    In the Insert Line of Code option there is a restriction of 72 characters for each line of code.

    With Insert Include Name we can perform complex transformations, but ABAP coding skills are required 🙁 .

    My point here is to have a drag-and-drop facility to perform complex transformations, instead of writing ABAP logic, as we have in the ETL tool BODS.

    Such a development would make the life of a HANA modeler easy 🙂

    Regards

    Raj

    1. Tomas Krojzl Post author

      Actually this is quite an interesting idea… A potential solution could be to use the SAP HANA Studio modeling features, where you could model a data transformation that would then be “saved” into SLT as ABAP code performing the designed adjustments during transformation.

      Of course this “SLT modeling” would be limited to the features provided by SLT – so no complex features would be possible unless SLT itself were extended…

      The advantage would be no need to know ABAP, the ability to define the replication without leaving SAP HANA Studio, the possibility to package and export the model (reusability), etc.

      The disadvantage would be the risk of de-synchronization between SAP HANA and SLT in case of backup/restore, etc.

      Anyway, a very nice idea…

  3. Bastiaan Lascaris

    Thanks Tomas, these are good ideas. I ran into a few issues that would have been easier to solve if some of the points in this blog had been addressed by SAP by now – especially purging and resolving replication errors. I hope that they will tackle this soon.

  4. Gregory Misiorek

    Hi Thomas,

    Time check for January 2015 – so how is the implementation going? Any luck in having those features in the commercially available product?

    Thx,

    greg

  5. Tomas Krojzl Post author

    Hello Greg,

    to be honest, I am no longer in a position where I work hands-on with SLT, so I cannot comment on its current status… Now I am more on the infrastructure side of SAP HANA (dealing with architecture, HA, DR, operations, monitoring, backups, etc.)

    I think the best person to say whether, and how, the points above were covered is Tobias Koebler.

    Tomas

    1. Gregory Misiorek

      Tomas,

      sorry to ‘lose’ you to more rewarding pursuits, but even though it is recommended by SAP, SLT is still competing with other ETL products, and if it doesn’t simplify or ease the process it runs the risk of being neglected or ignored by the wider ecosystem.

      I understand this is not necessarily your concern, and thanks for responding.

      greg

  6. Shanaka Chandrasekera

    Dear Tomas,

    Thank you very much for the information shared.

    Can we change the configuration data of a Mass Transfer ID in SAP HANA SLT, like changing the RFC or changing the job schedule (real-time to a specific time period)?

    Thanks.

    Shanaka.

    1. Tomas Krojzl Post author

      Hello,

      I am afraid I do not know – as written in the comments above, I stopped working with SLT a few years ago when I changed my job role, and I have not yet had the opportunity to revisit this subject.

      Tomas

