Technology Blogs by SAP
Learn how to extend and personalize SAP applications. Follow the SAP technology blog for insights into SAP BTP, ABAP, SAP Analytics Cloud, SAP HANA, and more.
BerndSieger
Advisor

Introduction

In this blog I would like to present new functionality for viewing and restoring data of the queues used for extracting logistics data to BI. The functionality has been made available via SAP Note 1008250 as well as via support packages.

From a high-level point of view, the note gives you two new things:

  1. A new table that stores the data of the queues for the logistics BI extraction.

  2. A new transaction (LBWR) which can be used either to view the data of this table or to rebuild queue data from it.


So why should you care about it? Well, you should if you ever wanted to

  • view data sitting in one of the MCEX queues or

  • restore data that has been lost on its way to BI without doing a new init/setup.


For example, suppose you are enhancing your extractor with new fields, but for some unknown reason the fields' contents get lost during the delta process. Now you have one more point in the processing chain where you can easily check the data, without debugging.

Another example would be data loss due to RFC problems in one of your queues: with this functionality you can rebuild the missing LUWs without doing a setup of data (init).

 
Questions and Explanations

For detailed technical documentation you might want to refer to the documentation of report RMBWV3RE. In future releases this documentation will also be available via the SAP Reference IMG.
Guessing at what might become frequently asked questions, I will try to answer some here:

  1. What is needed to get this started?

    1. Find out the name of the queue you are interested in (usually MCEXnn, where nn is the application number from the LO Cockpit).

    2. Start transaction LBWR. You will need the proper authorizations for this, though (see SAP Note 1008250).

    3. Enter the queue name in the first input field and press Enter to refresh the screen.

    4. Make sure the Processing Mode "Customizing Changes Only" is selected.


    5. Scroll down to the Customizing block of the screen. Here you will see two parameters:

      • No.Coll Processing and

      • No. of Days with Backup Data


      Check the F1 help of these fields for the exact definitions. For a first test (preferably in a non-productive system ;-)) you could simply enter 0 for the first and 2 for the second parameter.


    6. Now press the Execute button (F8) and acknowledge the info popup.

    7. Congratulations! You are done. From now on the backup table will store all the data of the queue for the specified amount of time (with the second parameter set to 2, the data will be kept for two days).



  2. How do I view data stored in the backup table?
    Start transaction LBWR, enter the name of the related queue, choose processing mode "Display Data of the Backup Table" and hit Enter to refresh the screen.
    Now you will see the timestamps of the stored records in the block Status of the Backup Table. In the next block you can enter a selection on these timestamps.
    You can also specify the number of lines to be displayed (this is not an exact cap, though) and the specific table/document level you would like to see (e.g. header-level data).
    Next hit F8. If your selection matches data in the backup table, you will see an ALV list of the stored data.


  3. How does reconstructing queue data via LBWR work?
    The system selects all the specified data from the backup table (selection either via timestamps or via the number of collective runs). This data is posted to a new queue named MCEXnn_BACK. With the next collective run, this new queue is automatically processed just before the standard queue MCEXnn.
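    The reconstruction described above relies on qRFC queue naming. As a purely illustrative sketch (the queue name MCEX11_BACK, the update function module MCEX_UPDATE_11 for application 11, and all parameters are assumptions of mine; the actual posting is performed internally by report RMBWV3RE), an LUW destined for the backup queue could be registered roughly like this:

```abap
* Hypothetical sketch: register an LUW in the backup queue MCEX11_BACK.
* Because the collective run processes MCEXnn_BACK before MCEXnn, the
* rebuilt LUWs are extracted ahead of the regular delta records.
DATA lv_queue TYPE trfcqnam VALUE 'MCEX11_BACK'.

* Assign the qRFC queue name for the following LUW.
CALL FUNCTION 'TRFC_SET_QUEUE_NAME'
  EXPORTING
    qname = lv_queue.

* Assumed update module for application 11 (SD sales); the real call
* passes the reconstructed document data as parameters.
CALL FUNCTION 'MCEX_UPDATE_11' IN BACKGROUND TASK
  DESTINATION 'NONE'.

COMMIT WORK.
```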


  4. Can I disrupt my statistics with rebuilding queue data? What precautions should I take?
    Yes, you can absolutely cause harm to your statistics in BI if you are not careful. Specifically, you will cause double (triple, ...) figures if you do not make sure that the data you plan to rebuild no longer exists in any of the data targets subsequent to the extraction queue.
    So you should always check RSA7 (BI delta queue), the PSA, and all subsequent data targets in BI for the data you plan to rebuild. If corrupted data still exists in subsequent targets, you first have to delete it before you can rebuild the data.
    Another issue can occur if a newer change of a document already exists in BI and you are trying to rebuild and extract an older one. In this case you likely have a serialization problem in one of the BI data targets. It can be solved by deleting all data related to the document before rebuilding all of its changes.
    You see, rebuilding queue data via LBWR places a great deal of responsibility on the user. This also means that the authorizations for this action (authorization object M_QU_RE with activity '01') should be limited to a small group of data extraction experts. Authorizations for viewing backup/queue data can be granted less restrictively, although the importance/secrecy of the specific queue data should be taken into account, too.

  5. What is the potential performance/DB size impact of using this in our productive system?
    It depends. 😉
    Well, just after creating the ordinary queue entry, the new functionality basically performs two possibly time-consuming operations:

    1. It draws a stamp (counter) from the enqueue server. In our tests this has always been a non-issue, but if you already have problems with the enqueue server in your productive system, this might need some attention.

    2. It writes the complete data needed to reconstruct the queue entry to the new cluster table MCEX_DELTA_BACK (via EXPORT TO DATABASE). The performance of this operation depends mainly on the size of that table. Fortunately, you can limit the size via the Customizing (see question 1 above).
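    A rough idea of what such a write looks like, as a hedged sketch (the internal table type, the cluster area "mc", and the key layout are assumptions for illustration only; the real coding is part of the LO extraction framework):

```abap
* Hypothetical sketch: back up queue data to the cluster table
* MCEX_DELTA_BACK via EXPORT TO DATABASE. Cluster tables store the
* payload as a compressed binary string, so write time grows with the
* amount of data already kept in the table.
DATA: lt_queue_data TYPE STANDARD TABLE OF mc11va0hdr, " assumed structure
      lv_key        TYPE c LENGTH 22.                  " assumed key layout

CONCATENATE 'MCEX11' sy-datum sy-uzeit INTO lv_key.

EXPORT queue_data = lt_queue_data
  TO DATABASE mcex_delta_back(mc)
  ID lv_key.
```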


    Now, to start with the backup table, I would recommend keeping just the data of one collective run (parameter "No.Coll Processing") as a first test in your productive system. If that works fine, you can always increase one or both of the two parameters. No special procedure is required for changing the parameters, which reside in table TMCEXUPD (although make sure you are not unintentionally transporting a change of the update mode together with the parameter changes ;-)).
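    If you just want to verify which parameter values are currently active, a plain read of TMCEXUPD is enough (the field selection below is deliberately generic, since the exact field names should be checked in SE11):

```abap
* Hypothetical sketch: count the customizing entries in TMCEXUPD.
DATA lt_upd TYPE STANDARD TABLE OF tmcexupd.

SELECT * FROM tmcexupd INTO TABLE lt_upd.
WRITE: / 'Entries in TMCEXUPD:', sy-dbcnt.
```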

