
Here are some pointers to help you re-initialize a source system and SAP NetWeaver BW when you already have deltas established and need to start over.

 

  1. If you have V3 or other LO extractor delta jobs scheduled, stop them in the source system via tcode LBWE.
  2. Delete all initializations in your BW environment to stop the generation of deltas.
  3. Delete all of the delta entries in tcode RSA7 in the source system.
  4. Delete all setup tables, using tcode LBWG, for each application that you’re using in the LO extractors.
  5. Delete data in your InfoCubes and DSOs, in your BW environment, by right-clicking on the object and selecting Delete Data. This deletes all requests that have been loaded to the object and additionally deletes any Change Logs that have been created.
  6. Delete PSAs, in your SAP BW environment, by either manually managing the PSAs or creating a PSA Delete Process Chain that handles it in an automated fashion.
  7. Begin executing your setups using the OLI*BW tcodes, depending on what application you’re loading, in the source system. If you have a large number of SD documents to extract and load, you can determine, or have a business analyst provide, a list of document number ranges by year or year/month and execute multiple, concurrent setups for those applications. When extracting this data into your BW environment, you can create and execute multiple InfoPackages, as Full Repair extractions, with selection on the same document number ranges used to create the setup tables (see the sketch after this list).
  8. In most cases, the initialization of deltas should be done as an Init w/o Data Transfer, followed by a Full Repair extraction of the data.
  9. Schedule the V3, or other LO extractor delta jobs, in the source system.
  10. Execute Full Repair extractions to your BW environment.
  11. Schedule delta loads via Process Chains.
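
To make step 7 concrete, here is a minimal sketch, in Python, of how a large document number interval could be carved into equal, non-overlapping ranges to use as selections for the concurrent setup runs and the matching Full Repair InfoPackages. The interval, range count and output format are purely illustrative assumptions, not any SAP API.

```python
# Hypothetical helper: split one large document number interval into
# contiguous, non-overlapping ranges so that setup runs (OLI*BW) and the
# matching Full Repair InfoPackages can run concurrently with identical
# selections. The document numbers below are made up for illustration.

def split_ranges(lo: int, hi: int, parts: int) -> list[tuple[int, int]]:
    """Split the inclusive interval [lo, hi] into `parts` contiguous ranges."""
    size = (hi - lo + 1) // parts
    ranges = []
    start = lo
    for i in range(parts):
        end = hi if i == parts - 1 else start + size - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

# Four concurrent setups/InfoPackages over one year's worth of documents:
for lo_doc, hi_doc in split_ranges(1_000_000_000, 1_000_399_999, 4):
    print(f"Selection: document number {lo_doc} to {hi_doc}")
```

The same partitioning idea applies to date-based selections for extractors that support them.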

These steps can be used whether or not you have “quiet time”. As some of us have encountered, “quiet time” isn’t always available, which forces us to be somewhat creative. I realize this may not be the only way to accomplish this, but it’s the process we’ve used when we have to “start from scratch”.


13 Comments


  1. Witalij Rudnicki
    Question to pt 8: “In most cases, the initialization of deltas should be done as an Init w/o Data Transfer, followed by a Full Repair extraction of the data.”

    What’s the benefit of this compared to a regular Init?

    Thanks,
    -Vitaliy

    1. Dennis Scoville Post author
      Doing an Init w/o Data Transfer is most beneficial when you’re initializing the delta for something that has a significant amount of data. It allows you to run multiple concurrent Full Repair extraction InfoPackages, based on whatever selection criteria you choose. It also ensures that you’re capturing deltas while you’re extracting data, reducing the risk of missing records.
      1. Kishore Madireddy
        Hi Dennis,
        Let me be the first to say kudos to you!
        I have a question: if you’re doing an Init w/o Data Transfer and then a Full Repair, does it pick up all the data without considering the quiet time of the source system?
        Suppose I performed a Full Repair today and, while it was running, a few transactions were going on, with statuses changing over time. How should I capture those?
        Do we need to execute the Repair Full twice, or run restricted Full Repair loads, i.e. selective Repair Full loads?
        Your clarification would be greatly appreciated.
        Cheers
        K M R
        1. Dennis Scoville Post author
          Whether or not you have quiet time, this process will pick up all of the data. You may even get some duplicate data if your selection criteria for the setup include documents that were created after the initialization was done. This isn’t a problem as long as your first-level data targets are set to overwrite.

          To reduce the risk of duplicates, you can execute your setups LIFO (last in, first out). As a side note, LIFO processing of the extract and load processes is also preferable when you have huge amounts of data to extract and load, so that the most up-to-date data is extracted, loaded and activated first. The only areas where this won’t necessarily work are some FI areas that require inception-to-date data to calculate balances.

          1. Kishore Madireddy
            Thank you, Dennis.

            Suppose I have an additive key figure in the first-level target (a key figure with the Addition property). In such a case, what would be the better approach? Will your earlier method still work, or do I need to execute some additional steps?
            Appreciate your clarification on this.
            Kind Regards
            K M R

            1. Dennis Scoville Post author
              If your first-level target has one or more Key Figures with Summation properties, your choices are more limited. If you have the “quiet time”, going through the steps I suggested will work just fine, because your transactional system won’t be creating deltas. However, if you don’t have “quiet time” available, you’re going to have to do a manual workaround: analyze your first delta PSA and compare it against your active first-level target. If you find any duplicate keys when comparing the PSA to the active first-level target, manually delete those records from the PSA and then load. Unfortunately, this can be a time-consuming and laborious process.
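
              A minimal sketch of that comparison, assuming the PSA and active-table contents have already been pulled into memory; the key fields DOC_NUMBER and S_ORD_ITEM and the record layout are hypothetical, not an actual PSA structure:

              ```python
              # Hypothetical sketch of the manual workaround: find PSA records
              # whose semantic keys already exist in the active first-level
              # target, so they can be deleted from the PSA before loading
              # (avoids double-counting when key figures use Summation).
              # Field names are illustrative only.

              KEY_FIELDS = ("DOC_NUMBER", "S_ORD_ITEM")  # assumed semantic key

              def duplicates(psa_records, active_records):
                  """Return PSA records whose keys are already in the active table."""
                  active_keys = {tuple(r[f] for f in KEY_FIELDS) for r in active_records}
                  return [r for r in psa_records
                          if tuple(r[f] for f in KEY_FIELDS) in active_keys]

              psa = [{"DOC_NUMBER": "4711", "S_ORD_ITEM": "10", "QTY": 5}]
              active = [{"DOC_NUMBER": "4711", "S_ORD_ITEM": "10", "QTY": 5}]
              print(duplicates(psa, active))  # -> the records to delete from the PSA
              ```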
    2. venkat s
      Hi,

      It was a really nice blog.

      I have a question, as I’m not clear on the exact difference: I know the performance of an Init w/o Data Transfer plus a Repair Full request is better than an Init with Data Transfer, but why should I do it this way if I don’t want to use any selections in the InfoPackage? How is the performance better for an Init w/o Data Transfer plus a Repair Full request?

      1. Dennis Scoville Post author
        The Init w/o Data Transfer and Repair Full extraction process doesn’t provide any benefit for system performance if the InfoPackage doesn’t contain any filters (selections). What it does, however, is reduce or mitigate risk by ensuring that you’re capturing deltas while you’re doing your initial setups.
  2. new bw
    Hi,

    Thanks for such a nice presentation and for sharing your knowledge with others.
    Thanks a lot!

    However, I can’t resist asking questions (please don’t mind).

    You have said we can fill the setup table using the LIFO method (last in, first out).
    I have tried to find more details on this on SDN, but with no luck.
    Can you please shed some light on how we can fill the setup table using the LIFO method?
    Thanks in advance.

    Regards,
    Yash.

    1. Dennis Scoville Post author
      If the LO extractor has the capability of building the setup tables using dates (e.g. document date on the Purchasing, Application 02, extractors), then you can extract by a set of date ranges working backward. For example, if your data volumes are small enough to pull by half-year, you could execute your setups with the following dates as selection criteria:

      Setup 1: 1-Jan-2009 through 25-Jun-2009 (current date)
      Setup 2: 1-Jul-2008 through 31-Dec-2008
      Setup 3: 1-Jan-2008 through 30-Jun-2008

      For LO extractors that only give the option to extract by document number (e.g. SD extractors), you will have to know the document number ranges for each document type, so that you can pull the latest documents first, then the next-most-recent, and so on.

      Then, when you’re extracting the data into your SAP BW environment, you can use the same criteria in your Full Repair InfoPackages so that the most recent data is extracted and loaded into SAP BW first, and then load backward from there to whatever date you want as the inception of the data.
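
      As a rough sketch of the date logic above, assuming nothing SAP-specific, here is one way to generate half-year ranges working backward from a given “current” date (the dates match the example; the helper name is made up):

      ```python
      # Hypothetical sketch: produce half-year date ranges newest-first (LIFO),
      # starting from a "current" date and walking back to an inception year.
      from datetime import date

      def lifo_half_years(current: date, inception_year: int):
          ranges = []
          if current.month <= 6:  # currently in the first half of the year
              ranges.append((date(current.year, 1, 1), current))
              year, second_half = current.year - 1, True
          else:                   # currently in the second half of the year
              ranges.append((date(current.year, 7, 1), current))
              year, second_half = current.year, False
          while year >= inception_year:
              if second_half:
                  ranges.append((date(year, 7, 1), date(year, 12, 31)))
              else:
                  ranges.append((date(year, 1, 1), date(year, 6, 30)))
                  year -= 1
              second_half = not second_half
          return ranges

      for i, (lo, hi) in enumerate(lifo_half_years(date(2009, 6, 25), 2008), 1):
          print(f"Setup {i}: {lo:%d-%b-%Y} through {hi:%d-%b-%Y}")
      ```

      The resulting ranges would then drive both the setup runs and the Full Repair InfoPackage selections, newest first.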

    2. venkat s
      Sorry for bouncing back; I’m still not clear.

      Why should I do an Init w/o Data Transfer plus a Repair Full request if I don’t want to use any selections in the InfoPackage? How is the performance better that way?

      1. Dennis Scoville Post author
        The Init w/o Data Transfer and Repair Full extraction process doesn’t provide any benefit for system performance if the InfoPackage doesn’t contain any filters (selections). What it does, however, is reduce or mitigate risk by ensuring that you’re capturing deltas while you’re doing your initial setups.

        It’s the selection piece that provides the performance efficiency because multiple, concurrent executions of InfoPackages with different selection criteria will yield better throughput than just one InfoPackage with no selection criteria.

        While each of the multiple, concurrent InfoPackages will most likely have lower throughput when analyzed individually, the aggregate will yield more throughput, assuming you don’t overload the system with too many concurrent processes, in which case you run into “the law of diminishing returns”.

        Say, for example, you executed a single InfoPackage that extracts at a rate of 1 million records per hour and is going to extract 4 million records. That one InfoPackage will take 4 hours to complete. Now, assume you could break that up into four InfoPackages by using selections on each InfoPackage and run all four concurrently. Even if the individual throughput degraded to 50% (or 500K records per hour), it would only take 2 hours to extract all 4 million records, yielding a savings of 2 hours.
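
        The same arithmetic as a quick sketch; the figures are the ones from the example above:

        ```python
        # Worked version of the example: even if per-process throughput halves,
        # four concurrent InfoPackages still finish in half the single-run time.
        total_records = 4_000_000
        single_rate = 1_000_000          # records/hour for one InfoPackage
        packages, degradation = 4, 0.5   # 50% of single throughput when concurrent

        single_hours = total_records / single_rate                                  # 4.0
        concurrent_hours = total_records / (packages * single_rate * degradation)   # 2.0
        print(f"single: {single_hours}h, concurrent: {concurrent_hours}h")
        ```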

