
Hi,

The load from archives is available as of DMIS 2010 SP7 or DMIS 2011 SP2.

A prerequisite for an archive load is that the tables whose data should be transferred from the archive into HANA have already been added to the replication. This is therefore the very first step.

The example archive object SD_VBAK involves a large number of tables. For simplicity, only a small subset of them is selected here. This is done, as usual, in the data provisioning UI provided by the HANA Studio:

[Screenshot: table selection in the HANA Studio data provisioning UI] /wp-content/uploads/2013/07/1_239711.png

Only when those tables are part of the normal initial load and replication processing is it possible to use report IUUC_CREATE_ARCHIVE_OBJECT to define a migration object that loads data from the archive.

Usage of report IUUC_CREATE_ARCHIVE_OBJECT is explained in its online documentation. In our example, the parameters would be supplied as shown below:

[Screenshot: selection screen of report IUUC_CREATE_ARCHIVE_OBJECT] /wp-content/uploads/2013/07/2_239712.png

The mass transfer ID to be specified here is the one assigned to the respective HANA replication configuration; it can be seen in the Web Dynpro application started with transaction LTR.

Transfer behavior 1 means that we want to transfer data to HANA. If the data should be deleted, the transfer behavior can be set to 8.

Executing the report then brings up a selection screen listing all tables that are both being replicated and part of the archive object. The user can select some or all of those tables to be loaded from the archive:

[Screenshot: selection of the tables to be loaded from the archive] /wp-content/uploads/2013/07/3_239713.png

After choosing the relevant tables, press ENTER to have the corresponding migration object created and its runtime objects generated. You will then see the new object in the "Data Transfer Monitor" tab of transaction LTRC; unless specified otherwise, its name is exactly the name of the archive object.

[Screenshot: new archive object in the Data Transfer Monitor tab of transaction LTRC] /wp-content/uploads/2013/07/4_239714.png

Currently it is still necessary to invoke the step "Calculate Access Plan" manually from the "Steps" tab of transaction MWBMON. Only the two highlighted fields need to be filled manually:

[Screenshot: "Calculate Access Plan" step in transaction MWBMON] /wp-content/uploads/2013/07/5_239717.png

After that, the data from the archive will be automatically transferred.
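For background: the archive files are read on the sender system, presumably via the Archive Development Kit (ADK), which is also why the additional authorization described in the next section is needed. The following stand-alone sketch is my own illustration of the typical ADK read sequence for VBAK records of archiving object SD_VBAK; it is not SLT's actual coding, and the report name is made up:

    REPORT zadk_read_sketch. " hypothetical report name - illustration only

    DATA: lv_handle TYPE sy-tabix,                  " ADK archive handle
          lt_vbak   TYPE STANDARD TABLE OF vbak,    " records of one archived object
          lv_total  TYPE i.

    " Open the archive files of archiving object SD_VBAK for reading.
    CALL FUNCTION 'ARCHIVE_OPEN_FOR_READ'
      EXPORTING
        object         = 'SD_VBAK'
      IMPORTING
        archive_handle = lv_handle
      EXCEPTIONS
        OTHERS         = 1.
    IF sy-subrc <> 0.
      WRITE / 'Could not open archive files for SD_VBAK.'.
      RETURN.
    ENDIF.

    " Loop over the archived business objects (one sales document each).
    DO.
      CALL FUNCTION 'ARCHIVE_GET_NEXT_OBJECT'
        EXPORTING
          archive_handle = lv_handle
        EXCEPTIONS
          end_of_file    = 1
          OTHERS         = 2.
      IF sy-subrc <> 0.
        EXIT.
      ENDIF.

      " Extract the VBAK records of the current archived object.
      CALL FUNCTION 'ARCHIVE_GET_TABLE'
        EXPORTING
          archive_handle        = lv_handle
          record_structure      = 'VBAK'
          all_records_of_object = 'X'
        TABLES
          table                 = lt_vbak
        EXCEPTIONS
          OTHERS                = 1.
      IF sy-subrc = 0.
        lv_total = lv_total + lines( lt_vbak ).
      ENDIF.
    ENDDO.

    CALL FUNCTION 'ARCHIVE_CLOSE_FILE'
      EXPORTING
        archive_handle = lv_handle.

    WRITE: / 'Archived VBAK records read:', lv_total.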

Necessary authorizations

Please note that the RFC user assigned to the RFC destination pointing from the SLT system to the sender system requires an additional authorization that is currently not part of role SAP_IUUC_REPL_REMOTE. The authorization check is:

    AUTHORITY-CHECK OBJECT 'S_ARCHIVE'
      ID 'ACTVT'    FIELD '03'
      ID 'APPLIC'   FIELD l_applic
      ID 'ARCH_OBJ' FIELD object.

That means, for authorization object S_ARCHIVE, we need an authorization with activity 03 (display), the respective archive object, and the application to which the archive object belongs.
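To verify up front that the RFC user has this authorization, a small test report like the following sketch could be run on the sender system under that user. The report name and the hard-coded values 'SD' and 'SD_VBAK' are assumptions for this example; check the archiving object definition (transaction AOBJ) for the actual application value:

    REPORT zcheck_s_archive_auth. " hypothetical report name - illustration only

    " Check display authorization for archiving object SD_VBAK.
    " Application 'SD' is assumed here; see the archiving object
    " definition in transaction AOBJ for the actual value.
    AUTHORITY-CHECK OBJECT 'S_ARCHIVE'
      ID 'ACTVT'    FIELD '03'
      ID 'APPLIC'   FIELD 'SD'
      ID 'ARCH_OBJ' FIELD 'SD_VBAK'.

    IF sy-subrc = 0.
      WRITE / 'S_ARCHIVE display authorization for SD_VBAK is available.'.
    ELSE.
      WRITE / 'Authorization missing - extend the role assigned to the RFC user.'.
    ENDIF.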

Filter or mapping rules not available

Note that the automatic assignment of filter or mapping rules, which is done for other migration objects (initial load and replication) by means of table IUUC_ASS_RUL_MAP, does not work for archive load objects. If any filtering and/or mapping is required for such objects, the corresponding rules need to be assigned manually to the migration object.
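For orientation, such a rule is essentially a small piece of ABAP executed per transferred record. A minimal sketch of what a hypothetical filter rule for VBAK might look like is shown below; the field symbol name <wa_s_vbak> and the skip_record statement follow the usual SLT rule conventions, but both are assumptions here and not taken from this blog:

    " Hypothetical event-related rule for table VBAK:
    " transfer only sales orders (document category 'C'), skip everything else.
    IF <wa_s_vbak>-vbtyp <> 'C'.
      skip_record.
    ENDIF.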

Relevant Notes

I hope this helps you to understand the load from archives much better. Special thanks to Günter for these details.

Best,

Tobias


17 Comments


  1. Janakiram Mandadi

Good blog Tobias. Our experience with pulling archived data into the HANA DB is the same. Just a few more points to add here:

    1. We can add the tables directly in SLT; there is no need to Load/Replicate from HANA Studio. Here are the steps: in transaction LTRC on the SLT server, after selecting the mass transfer ID, you will see an option 'Add Tables'. Here you can add the relevant tables.

    2. The IUUC_CREATE_ARCHIVE_OBJECT parameters Archived Date From and Archived Date To are based on the 'Date of Archiving' (not the transaction date). Although this information is in the help, it can easily be missed.

    3. The Calculate Access Plan step/program might take anywhere from a few minutes to a few hours, depending on the size and number of tables in the archiving object.

     

    — Janakiram Mandadi

      1. Prathish Thomas

Hello Tobias,

        Your blog is very useful. A few questions:

        1. Will it drop the tables and load the data from the archive under the same schema?

        2. When should we specify the report parameters? Is it a manual execution for every archiving date?

        3. Any advantage over the stop job / reset trigger approach from your other blog?

        Thanks in advance

        Thomas

        1. Tobias Koebler Post author

          Hi,

1.) It just loads the records; no tables are dropped.

          2.) The report and its parameters have to be executed manually, after archiving.

          3.) It is a trade-off between time and safety. If you can ensure that no data changes while the triggers are inactive, go for the other approach. If you cannot, this is the only way to ensure that all data is available.

           

          Best,

          Tobias

    1. Justin Molenaur

Hi Tobias, I am at the same point as Keerthan. You state that any mapping rules that should also apply to the archive load need to be assigned to it.

       

In the reference links I don't see any information on how to do that; can you provide this level of detail as well? I know that customers that already have customizations in place for the replicated target data will also need to apply the same to the archived data.

       

      Regards,

      Justin

      1. Tobias Koebler Post author

Hi, you are right – the rule is not assigned for an archive load. We have this option on our roadmap. So far there is no approach (not even a manual one) that can ensure 100% that the rule is assigned.

         

        Best, Tobias

        1. Justin Molenaur

Ok, that is good to know. That could be a problem in cases where the normal replicated table has rules assigned.

           

          I guess the only option would be to maybe load to a different schema (configuration), then use SQL or another method to perform the transformation and load into the matching base table. This way, you could apply the logic after the archive DB data has been loaded into HANA.

           

          See any other options?

           

          Regards,

          Justin

  2. Shubhrajit Chowdhury

    Hi Tobias,

     

Thank you for your valuable blog. We have followed every step as you described, but the archived data is still not being reloaded into the HANA DB.

     

I have checked the status in transaction LTRC. There it shows 'loaded' as false and the status as 'failed'.

     

[Screenshot: slt.JPG]

Can you please let us know what the possible cause could be?

     

    Regards,

    Shubhrajit

    1. Justin Molenaur

Hi Shubhrajit – my client is having the exact same issue, so I currently have an open incident (438593/2015) with SAP development on this. I will be sure to report back what I hear.

       

      Regards,

      Justin

  3. Shubhrajit Chowdhury

    Hi,


Is there any way to identify archived data in the HANA DB? Is there any available indicator for archived data?

     

Any help on this would be greatly appreciated.

     

    Regards,

    Shubhrajit

  4. Bart Lisk

I see this report reads the list of archiving objects from table ARCH_OBJ. Can you explain how we will see the available archiving objects in SLT? Also, to be clear, is the data coming from the archive replicated to the same tables as the main replication or to separate tables? For instance, does the archived data for VBAK load into the same VBAK table as the normal replication?

  5. Christian Harrington

Hi, we have a similar question to Bart's…

    We execute report IUUC_CREATE_ARCHIVE_OBJECT in the SLT system, but the Archiving Object parameter in the selection screen reads the archives from the SLT system, whereas we obviously need to read from the source system, in our case ECC.

    It seems we may be missing some basic configuration, but it is not explained anywhere. Can someone explain it to us, please? Do we need to transfer some configuration/files from ECC to SLT manually?

    We do have an LTRC configuration defined for source system ECC, and many tables are being replicated without issues.

    Thanks,

    Christian

  6. Christian Harrington

    Hi

Answering myself and others who might have had the same issues. I find the online documentation minimal and not clear enough to help customers.

    - The report IUUC_CREATE_ARCHIVE_OBJECT is to be run on the SLT system, NOT on the source system. This is not obvious, as the report also exists on the source system.

    - The drop-down for the Archiving Object in that report is not useful, since it does not show the source system's archiving objects, but you can type them in directly (e.g. SD_VBAK) and it will work. Once you know that, it is much easier.

    - Load behavior: I found that for an archiving object, if there are many archive files stored for the selected dates, it reads all of the files. It also extracts data only for the tables listed under the Table Overview in LTRC, as mentioned by Janakiram Mandadi. It is very useful to know that you can add a table definition with the 'Data Provisioning' button (start recording, start replication without load, etc.; for some reason 'Create table' does not work) in case you don't want to load a big table just to be able to load the archived data for that table.

    - Also, if the archive object contains, say, 4 tables and you loaded only one, but later on you want to load the 3 others, you cannot re-execute the report for that object. You have to delete the archive object table first; then you can re-execute the report with all the tables you need.

    Otherwise it works fine!

    We had to apply notes 2011760 and 2081827 on BOTH sides, source system and SLT. We have DMIS 2011 SP09 on SLT and the source systems; ERP is on NW 7.01 and SLT on NW 7.4.

    Christian

    1. G. Cheung

      Hi Christian, Tobias Koebler

We're currently extracting archived MKPF/MSEG data and everything works fine. However, we experience very long processing times, primarily caused by long reading times: packages of around 500 records take 3-10 seconds to read, so on average we have a throughput of 300K records/hour. We have 200 million records to load, adding up to about 25 days 🙁

      Would you know of any parameters we could tweak to speed up the process?

      Many thanks!
      Glenn

