
A Safety Belt for Logistics Extraction Queues


In this blog I would like to present new functionality for viewing and restoring the data of the queues used to extract logistics data to BI. The functionality has been made available via note 1008250, as well as via support packages.

From a high-level point of view you get two new things with the note:

  1. A new table that stores the data of the queues for the logistics BI extraction.
  2. A new transaction (LBWR) that can be used either to view the data of this table or to rebuild queue data from it.

So why should you care about it? Well, you should if you ever wanted to

  • view data sitting in one of the MCEX queues or
  • restore data which has been lost for your BI without doing a new init/setup.

For example, suppose you are enhancing your extractor with new fields, but for some unknown reason the fields’ contents get lost during the delta process. Now you have one more point in the processing chain where you can easily check the data, without debugging.

Another example would be data loss due to RFC problems in one of your queues: With this functionality you can rebuild the missing LUWs without doing a setup of data (init).


Questions and Explanations

For detailed technical documentation you might want to refer to the documentation of report RMBWV3RE. In future releases this documentation will also be available via the SAP Reference IMG.
Guessing at what might become frequently asked questions, I will try to answer some here:

  1. What is needed to get this started?
    1. Find out the name of the queue you are interested in (usually MCEXnn, where nn is the application number from the LO Cockpit).
    2. Start transaction LBWR. You will need the proper authorizations for this, though (see note 1008250).
    3. Enter the queue name in the first input field and press Enter to refresh the screen.
    4. Make sure the Processing Mode “Customizing Changes Only” is selected.


    5. Scroll down for the Customizing block of the screen. Here you will see two parameters:
      • No.Coll Processing and
      • No. of Days with Backup Data

      Check the F1 help of those fields for the exact definition. For a first test (preferably in a non-productive system ;-)) you could just enter 0 for the first and 2 for the second parameter.


    6. Now press the Execute button (F8) and acknowledge the info popup.
    7. Congratulations! You are done. From now on the backup table will store all the data of the queue for the specified amount of time (with the second parameter set to 2 it will be kept for two days).
  2. How do I view data stored in the backup table?
    Start transaction LBWR, enter the name of the related queue, choose processing mode “Display Data of the Backup Table” and hit enter to refresh the screen.
    Now you will see the timestamps of the stored records in the block Status of the Backup Table. In the next block you can enter a selection regarding the timestamps.
    You can also specify the number of lines being displayed (this is not an exact cap, though) and the specific table/document level you would like to see (e.g. header level data).
    Next hit F8. If your selection hits some data in the backup table you will get to see an ALV list of the stored data.


  3. How does reconstructing queue data via LBWR work?
    The system selects all the specified data from the backup table (selection either via timestamps or via the number of collective runs). This data is posted to a new queue named MCEXnn_BACK. With the next collective run this new queue is automatically processed just before the standard queue MCEXnn.


  4. Can I disrupt my statistics with rebuilding queue data? What precautions should I take?
    Yes, you can absolutely cause harm to your statistics in BI if you are not careful. Specifically, you will cause doubled (tripled, …) figures if you do not make sure that the data you plan to rebuild does not already exist in any data target subsequent to the extraction queue.
    So you should always check RSA7 (the BI delta queue), the PSA and all subsequent data targets in BI for the data you plan to rebuild. If corrupted data already exists in subsequent targets, you first have to delete it before you can rebuild the data.
    Another issue can occur if you have a newer change of a document in BI and you are trying to rebuild and extract an older one. In this case you likely have a serialization problem in one of the BI data targets. This can be solved by deleting all data related to the document before rebuilding all those changes.
    You see, rebuilding queue data via LBWR places a great deal of responsibility on the user. This also means the authorizations for this action (authorization object M_QU_RE with activity ’01’) should be limited to a small group of data extraction experts. Authorizations for viewing backup/queue data can be granted less restrictively, although the importance/confidentiality of the specific queue data should be taken into account, too.
  5. What is the potential performance/DB size impact of using this in our productive system?
    It depends. 😉
    Well, just after creating the ordinary queue entry the new functionality will basically perform two possibly time-consuming operations:

    1. It will draw a stamp (counter) from the enqueue server. In our tests this has always been a non-issue, but if you already have problems with the enqueue server in your productive system, this might need some attention.
    2. It will write the complete data needed to reconstruct the queue entry to the new cluster table MCEX_DELTA_BACK (via EXPORT TO DATABASE). The performance of this operation mainly depends on the size of that table. Fortunately you can limit the size via the customizing (see question number 1 above).

    Now, as a first test in your productive system, I would recommend keeping just the data of one collective run (parameter “No.Coll Processing”). If that works fine you can always increase one or both of the two parameters. There is no special procedure required for changing the parameters, which reside in table TMCEXUPD (although make sure you are not unintentionally transporting a change of the update mode along with the parameter changes ;-)).
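To illustrate the storage mechanism from question 5: the backup writes each queue entry as a cluster record via EXPORT TO DATABASE. The following sketch shows the general ABAP pattern, using the standard demo table INDX as a stand-in, since the actual area and key layout of MCEX_DELTA_BACK is SAP-internal:

```abap
* Illustrative sketch only: the general ABAP pattern for writing an
* internal table as one cluster record via EXPORT TO DATABASE.
* The standard demo table INDX is used as a stand-in, because the
* actual area/key layout of MCEX_DELTA_BACK is SAP-internal.
DATA: lt_delta TYPE STANDARD TABLE OF mara,   " any flat structure
      ls_indx  TYPE indx.

ls_indx-aedat = sy-datum.
ls_indx-usera = sy-uname.

* Store the table as one cluster record under area 'XY', key 'DEMO'
EXPORT delta = lt_delta
  TO DATABASE indx(xy)
  FROM ls_indx
  ID 'DEMO'.

* Reading it back is, conceptually, what a display or rebuild does
IMPORT delta = lt_delta
  FROM DATABASE indx(xy)
  ID 'DEMO'.
```

The performance point above follows from this pattern: each queue entry costs one additional cluster write, so the size of the cluster table is what drives the overhead.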

  • Hi Bernd,

    This is a welcome development! A few months ago I had to reconstruct logistics data and was forced to run an init/setup. If this tool had been available then, it would have saved a couple of hours of downtime. I was left wondering why SAP did not support such a feature. I’m happy that it now does.



  • Hello Bernd

    This is great. It will make the LBWQ queues a lot more reliable. Thanks a lot.

    Do you have any good documentation that can help us understand qRFC queues and how to manage them? It would be of great help. This is the second delicate piece of technology supporting the integration of ECC and BI.

    Pankaj Gupta

    • Thanks, I am glad you like it.

      I am not aware of any general documentation regarding the handling of qRFC queues. I guess this might be an issue because the handling differs from one application to the next.

      In LO Extraction you need to take care of
      1) scheduling the necessary collective runs,
      2) monitoring for unusual growth and irregularities (inconsistencies, blocked queues), and
      3) processing and clearing the queue before any relevant DDIC changes hit the system (transports, SPs, etc.).

      Hope that helps,

      • Hello Bernd

        I would like to ask a totally different question. We used to get release information per plug-in, such as which new DataSources or extractors were coming with each plug-in. I was able to access all that info via a URL that referenced the plug-in release notes.

        Now the plug-ins are all part of the main software, and I have not found the proper place to look for the DataSource release notes. Suppose I want to find the new DataSources that came after PI 2004.1, by application component; I do not have a clue where to start.

        If you can point me there, that would be great. If this is not an appropriate question, feel free to request its removal from the blog.

        Pankaj Gupta

  • Thanks Bernd,

    This new enhancement should also reinforce the importance of modeling the BI DSO / ODS object fields to update with ‘Overwrite’. Coupled with the new transaction, correcting data downstream in BW should be fairly straightforward now.

  • As a beginner in BW, I wonder how this wonderful technique can benefit BI 7.0. As far as I know, the reconstruction technique is not used in the current BW version, is it?
    • Hi,

      this functionality is not dependent on your BI version, but rather on the SAP_APPL/PI version in your application system. It is available for PI 2004_1_470, 2004_1_500 and ECC 600.

      So you can combine it with any BI/BW release which works with those application releases.

      Best regards,
      Bernd Sieger

  • Hi Bernd,

    Is there a possibility to selectively delete data from the backup table?
    In many instances we might have user exits that populate the enhanced fields based on business logic, and the values might not be updated for all transactional records. Selective deletion would give us the ability to load records from the backup table based on document types etc.


    • Hi Suresh,

      I am afraid that is not an option, since the backup table is a cluster table and does not contain application-specific fields as key fields.

      So for a rebuild run you can select either via timestamp (a key field in the backup table) or via the number of the collective run (the timestamps of the collective runs are stored in a separate table).

      Selecting via any application specific field would mean reading all the records into an internal table and evaluating them. This would be very slow.

      By design, deletion should not be an option anyway (except for records being deleted automatically when they become too old), since a backup table should provide a full backup (for the time specified), not a partial one.

      Best regards,
      Bernd Sieger

  • Hi

    That’s great news. A good blog that befits the wonderful functionality! What is the support package level? Have any performance or table sizing issues been reported?


    • Hi Shyam,

      you can find the support package entries in note 1008250: SAPKIPZI5F, SAPKIPZI6G and SAPKH60009 for PI 2004_1_470, 2004_1_500 and SAP_APPL 600 respectively.

      So far there is no feedback about performance or table sizing issues.

      Best regards,
      Bernd Sieger

  • Hi,

    The blog is excellent and gave us the remedy for the issue we are facing of missing data in the LO cockpit extraction. With the help of Basis we applied OSS note 1008250 to our sandbox, which has the following specifications:
    SAP_BASIS   620         0060  SAPKB62060
    SAP_ABA     620         0060  SAPKA62060
    SAP_APPL    470         0027  SAPKH47027   Logistics and Accg
    SAP_HR      470         0010  SAPKE47010   Human Resources
    EA-IPPE     200         0023  SAPKGPIB23   SAP_iPPE
    PI          2004_1_470  0013  SAPKIPZI5D   R/3 Plug-In (PI) 2004.1
    PI_BASIS    2005_1_620  0010  SAPKIPYJ5A   PI_BASIS 2005_1_620
    ST-PI       2005_1_620  0005  SAPKITLQG5   SAP Solution Tools Plug-In
    WP-PI       600_620     0000  –            WP-PI_600_620_Supplement
    EA-APPL     110         0023  SAPKGPAA23   SAP R/3 Enterprise PLM, SCM, Financials
    EA-FINSERV  110         0023  SAPKGPFA23   SAP R/3 Enterprise Financial Services
    EA-GLTRADE  110         0023  SAPKGPGA23   SAP R/3 Enterprise Global Trade
    EA-HR       110         0010  SAPKGPHA10   SAP R/3 Enterprise HR Extension
    EA-PS       110         0023  SAPKGPPA23   SAP R/3 Enterprise Public Services
    EA-RETAIL   110         0023  SAPKGPRA23   SAP R/3 Enterprise Retail
    DIMP        471         0011  SAPKIPME11   DIMP 471: Add-On Supplement
    ST-A/PI     01I_R3_470  0000  –            Application Servicetools for R/3 470
    The issues are:
    1) I started transaction LBWR and entered the queue name with the application number, but the Processing Mode dropdown box is not showing the entry “Customizing Changes Only”. Instead, only the letters D and X are showing up. I tried to execute with these letters and it fails.

    2) The program language is German. We would like to see it in English.

    Can you please help us resolve these two issues? Thanks in advance.

    • Hi,

      yes, we have recently been made aware of the fact that the English translation is currently missing from the transports attached to the note. It seems this happened during the downport of the functionality.

      Currently we are trying to clarify with our translation team how to provide an English translation. This may take some more time. I will post an update here as soon as this has been fixed.

      Regarding the “Processing Mode” box: there should be three possible values, even if the descriptions are missing in your logon language, namely ‘ ’ (blank), ‘D’ and ‘X’. Try selecting ‘ ’ (blank) to change the customizing.

      Bernd Sieger

  • As suggested, I attempted to work with the blank value in the “Processing Mode” dropdown box, but after filling in “No.Coll Processing” and “No. of Days with Backup Data” the transaction still fails. Any suggestions, please?
    • Did you hit F8 (Execute) after filling the fields?

      If you tried this and it did not work for you please create a customer message on BW-BCT-LO-LIS. SAP can not fully support you via SDN. 😉

      Best regards,
      Bernd Sieger

  • Thanks for the immediate response. I tried exactly what you suggested, but unfortunately it failed again. As advised, I am taking this up with SAP. Meanwhile, if you have any updates on this OSS note, please let me know.
    • Hi Veerabhadra and Bernd, I get exactly the same result. I have checked all corrections and the authorizations. I am also raising a message for it, but wondered if you have already solved it.
  • Hi Bernd,

    Based on your clear-cut instructions we tested the LBWR transaction in QA and it works very well. This development of yours will go a long way in helping us correct corrupted data. Thanks to the excellent support received from you, we were able to overcome many issues with authorization, translation etc. Thanks a lot for the support you gave, even at very odd hours.
    I have some basic questions on LO extraction (e.g. why not create a table in the source system and extract the data with a generic extractor); can I contact you further on this blog? I did go through all the blogs of Roberto Negro and other documents, but I am not finding answers.

    • I am glad I was able to help you. 🙂

      I would prefer to keep this blog very close to the topic. Feel free to send me other questions regarding LO extraction via e-mail. But please be aware that I will only be able to answer them as time permits. 😉

      For everyone: English translations for this new functionality are now available as attachment of the latest version (version 9) of note 1008250. Finally this got resolved!

      Best regards,
      Bernd Sieger

    • Hi Venkat,

      you will need at least PI 2004_1_500 on top of your SAP_APPL 500. If you have a lower PI version the coding might still work, but I can not guarantee that (since I did not test it with lower versions).

      Best regards,
      Bernd Sieger

  • Hi,
    I know we need to clear the delta queue before any relevant DDIC changes hit the system (transports, SPs, etc.). What about this delta backup table? Do we need to clear the backup tables as well?

    Thanks for your efforts!!


    • Vijay,

      there is not really a technical reason to clear out the backup tables prior to importing relevant DDIC changes. The old data in the backup table will be invalidated (if the change is relevant), but that is no worse than deletion. You might get dumps when trying to use LBWR to display the old data, but there should be no problem with new data being collected.

      In general it would be the “cleaner” approach to delete the old data in the backup tables first, but if it does not fit in your schedule for the upgrade/SP implementation it is acceptable to omit this step, in my opinion.

      Best regards,

  • Hi,

    I tried the backup customizing with applications 02 and 13 with immediate success;
    that is: after the next collective run and new postings, the backup table content could be seen.
    In application 45, which is delta-queued in the same way and was customized with the same parameters, no effect can be seen (table TMCEXUPD was checked; the delta was updated in BW).
    Do you have an idea what the problem could be?

    Another question: is there any possibility to get backups in a similar way in application 03 if the update mode is set to “unserialized V3 update”?

    Thanks for your efforts
    Best regards
    Udo Koepsell

    • Hi,

      this functionality has been implemented for application 45 as well. I just checked and the required coding is present (function module MCEX_UPDATE_45_V1, search for “mcex_delta_back”).

      So if it does not work for that application only we have to consider a bug. Please feel free to create a support message in that case, on component BW-BCT-ISR-AB.

      Regarding your second question: No, that is currently not possible or intended. Doing this would require several modifications.

      If you really want to use this functionality with application 03 you will have to use the update mode “Delta Queued” for it, I am afraid.

      Best regards,
      Bernd Sieger

      • Hi,

        thanks for your quick answer and the useful hints.
        Maybe I found the bug in MCEX_UPDATE_45_V1:

        The imported table is is_tmcexupd (line 7); it is used in line 52 for the “normal” functionality.

        There is a local declaration ls_tmcexupd (line 45) with no data assigned to it further on; it is then used in line 111:
        “if ls_tmcexupd_45-backup_runs <> 0 or ls_tmcexupd_45-backup_days <> 0.”

        Best Regards
        Udo Koepsell

        • That looks just like the reason for the issue. If you create a message with that hint, a correction should come along very quickly.

          Thanks for the feedback,
          best regards,
          Bernd Sieger

  • We have looked at the white paper on activating LBWR. It is working fine for 2LIS_03_BF, but when we activate LBWR for the Global Trade Management extractor 2LIS_46_SCL it does not work. I checked table MCEX_DELTA_BACK and no records are being written for 2LIS_46_SCL. I went into debug in GTM change mode and it is executing FM MCEX_UPDATE_46_V1.

    Any Ideas on why it is not working?


    Mike Wagner

    • Hello Mike,

      unfortunately this feature has not been implemented for application 46 yet. I have just called the responsible developer for that area, and there does not seem to be a technical obstacle.

      So maybe all that is needed is requesting the feature in a customer message or development request on component LO-GT-TC?

      Best regards,
      Bernd Sieger

  • Bernd,

    We posted a message to OSS and they created a note to implement extractor 2LIS_46_SCL for LBWR.

    Now we are getting a runtime error in program RMBWV3RE when we try to “display the data in the backup table” for 2LIS_46_SCL.

    We have enhanced the extract structure several times since we did our initial LBWR customizing. Could it be that there is backup data still in the LBWR queue that is in the old format? Can we delete the backup queue?


    Mike Wagner

    • Mike,

      correct: if you change one of the extract structures of the application, neither the queue nor the backup table can be read anymore.

      For this reason you are usually emptying the queues via the collective run before importing the change.

      A similar procedure can be used to clear the backup table:

      1) Set the customizing parameters BACKUP_RUNS and BACKUP_DAYS to zero (can be done in tr. LBWR).

      2) Execute the collective run for the application.
         -> The backup table should be deleted.

      3) Import your changes.

      4) Set the parameters to their former values.
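
      Step 1 can be double-checked directly on the database before triggering the collective run. A minimal ABAP sketch (the field names BACKUP_RUNS and BACKUP_DAYS come from this discussion; the rest of the TMCEXUPD layout is not shown and the check is only illustrative):

      ```abap
      * Sketch: verify that backup collection is switched off for all
      * applications before running the collective run (step 2 above).
      DATA lt_upd TYPE STANDARD TABLE OF tmcexupd.

      SELECT * FROM tmcexupd INTO TABLE lt_upd
        WHERE backup_runs <> 0
           OR backup_days <> 0.

      IF lt_upd IS INITIAL.
        WRITE / 'Backup collection is off for all applications.'.
      ELSE.
        WRITE / 'Backup parameters still set somewhere - check LBWR.'.
      ENDIF.
      ```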

      Best regards,
      Bernd Sieger

      • Bernd,

        I tried your four-step process several times, but the backup queue just wouldn’t delete. So I took the easy way out and wrote a quick ABAP program to selectively delete from the backup table MCEX_DELTA_BACK. It worked great…

        Thanks for your help.


        Mike Wagner
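
        A quick deletion program of the kind described might look like the sketch below. It uses Open SQL on the transparent key layer of the export/import table; the INDX-style key field SRTFD is an assumption for MCEX_DELTA_BACK, so check the actual table definition in SE11 first. Note that partial deletion contradicts the full-backup design Bernd describes above, so this is strictly a last resort:

        ```abap
        * Sketch only: selectively delete old cluster records from the
        * backup table by key range. The key field SRTFD (INDX-style,
        * timestamp-like) is an assumption for MCEX_DELTA_BACK.
        DATA lv_cutoff(22) TYPE c.

        lv_cutoff = '20070801000000'.     " assumed timestamp-style key

        DELETE FROM mcex_delta_back
          WHERE srtfd < lv_cutoff.

        IF sy-subrc = 0.
          COMMIT WORK.
        ENDIF.
        ```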

        • Mike,

          that is very strange! I have tested the process in our test systems and it was working fine. Did you double-check that fields TMCEXUPD-BACKUP_RUNS and TMCEXUPD-BACKUP_DAYS were both zero before starting the collective run?

          However, I am glad you found a solution. If you feel like investigating why the process I have described before did not work for you, please create a customer message on component BW-BCT-LO-LIS. Then we can analyze the issue in detail.

          Best regards,
          Bernd Sieger