Converting logical system names (BDLS) in less time and in parallel – Intermediate

During a homogeneous system copy from the PROD to the QAS environment, one of the most important post-processing steps is converting logical system (LOGSYS) names with transaction code BDLS. For large database tables this conversion usually takes a long time. The steps in this weblog will help you complete BDLS in less time: following them, we run a single logical system conversion on a 12 TB database in 6 hours, where it used to run for 18 hours.

Step 1 – Create BDLS indexes on the LOGSYS columns of the bigger tables:

Step 1.1: Find the largest tables in your production system that take a long time in BDLS conversion, and list them together with their logical system (LOGSYS) fields. It is easy to identify the LOGSYS fields for large tables if you have logs from an old BDLS run: run SLG1, enter object "CALE", and you will see them as in the following screenshot – for table BKPF, for example, the logical system field is AWSYS.

[Screenshot: SLG1 log for object CALE showing the logical system field for each table]

The following shows a few large tables with their logical system field details.

Table name    Columns for index
BKPF          MANDT, AWSYS
CE11000       MANDT, COPA_AWSYS
COBK          MANDT, LOGSYSTEM, AWSYS
COEP          MANDT, LOGSYSO, LOGSYSP
COES          MANDT, AWSYS
COFIS         RCLNT, LOGSYS, RLOGSYS, SLOGSYS
GLPCA         RCLNT, AWSYS, LOGSYS
GLPCP         RCLNT, AWSYS, LOGSYS
GLPCT         RCLNT, LOGSYS
MKPF          MANDT, AWSYS
SRRELROLES    CLIENT, LOGSYS
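The column list above can be turned into DDL with a short helper script. A minimal Python sketch follows; the SAPR3 schema, the ~Z1 index suffix, and the PSAPBDLSI tablespace are taken from this article's Oracle example and should be adapted to your own system:

```python
# Generate Oracle CREATE INDEX statements for the BDLS helper indexes
# listed in the table above. Schema, index suffix and tablespace follow
# the article's example; adjust them for your landscape.
BDLS_INDEX_COLUMNS = {
    "BKPF": ["MANDT", "AWSYS"],
    "CE11000": ["MANDT", "COPA_AWSYS"],
    "COBK": ["MANDT", "LOGSYSTEM", "AWSYS"],
    "COEP": ["MANDT", "LOGSYSO", "LOGSYSP"],
    "COES": ["MANDT", "AWSYS"],
    "COFIS": ["RCLNT", "LOGSYS", "RLOGSYS", "SLOGSYS"],
    "GLPCA": ["RCLNT", "AWSYS", "LOGSYS"],
    "GLPCP": ["RCLNT", "AWSYS", "LOGSYS"],
    "GLPCT": ["RCLNT", "LOGSYS"],
    "MKPF": ["MANDT", "AWSYS"],
    "SRRELROLES": ["CLIENT", "LOGSYS"],
}

def create_index_ddl(schema="SAPR3", suffix="~Z1", tablespace="PSAPBDLSI"):
    """Yield one CREATE INDEX statement per table in the list above."""
    for table, columns in BDLS_INDEX_COLUMNS.items():
        yield (f'CREATE INDEX {schema}."{table}{suffix}" '
               f'ON {schema}.{table} ({", ".join(columns)}) '
               f'NOLOGGING TABLESPACE {tablespace}')

for stmt in create_index_ddl():
    print(stmt)
```

The generated statements can be pasted into a sqlplus script, one per line followed by `/`, as shown in step 1.2.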

Step 1.2: Create custom B-tree (default) indexes on the large tables and columns described above. How large will these special indexes be? Not very large: for table CE11000 at 230 GB, the special index is close to 20 GB.

Example SQL (Oracle database) to create the index for table CE11000:

CREATE INDEX SAPR3."CE11000~Z1" ON SAPR3.CE11000 (MANDT, COPA_AWSYS) NOLOGGING TABLESPACE PSAPBDLSI
/
ALTER INDEX SAPR3."CE11000~Z1" NOPARALLEL
/
ANALYZE INDEX SAPR3."CE11000~Z1" ESTIMATE STATISTICS SAMPLE 2 PERCENT
/
exit;

Notes: 1) You might need additional temporary database space to support the special indexes. 2) Even though most of these are transactional tables, the standard BDLS program goes through the entire table to check for conversions, so the indexes help reduce the total run time.

Step 2 – Run BDLS with a high commit number and run conversions in parallel: The following describes how to split BDLS across table groups and run the groups in parallel. For any long-running BDLS during post-processing of a system copy, this method reduces the amount of downtime.

The documentation is developed for the following selections:
BDLS running in system : TST and Client : 300
Source Logical System : SAPPRD300
Target Logical System : SAPTST300

After the BDLS index creation is done, run the BDLS conversions in parallel by leading character: 26 runs for the letters A* to Z*, plus 1 run for tables NOT EQUAL to A* to Z* – 27 runs in total. Start the BDLS runs for all 27 combinations and use the table in step 2.4 as a checklist. Make sure 27 to 30 BTC (background) work processes are available for this; if not, increase the background work processes in RZ04 and trigger/activate the change in RZ03.
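The 27 table-name selections described above can be enumerated with a few lines of Python. This is only a sketch of the run plan; the final "not equal" entry is shown here as a label, while in BDLS itself it is entered as an exclusion range on the selection screen (see step 2.3):

```python
import string

def bdls_table_selections():
    """Return the 27 'Tables to be converted' selections: one run per
    leading letter A*..Z*, plus one run excluding A*..Z*, which picks up
    the "/" namespace tables and everything else."""
    selections = [f"{letter}*" for letter in string.ascii_uppercase]
    selections.append("NOT EQUAL to A* .. Z*")  # entered as an exclusion range in BDLS
    return selections

runs = bdls_table_selections()
print(len(runs))  # 27 background jobs -> keep at least 27 BTC work processes free
```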


Step 2.1: Start transaction code BDLS, enter the required old/new logical system names, and deselect the options "Test Run" and "Existence check on new names in tables". Enter A* as the tables to be converted and execute the BDLS run in the background.

Number of entries per commit (1,000,000 is the default): enter the value 10,000,000. Note: the value in the field Number of Entries per Commit is only relevant for the actual conversion, not for the test run. To improve performance, choose as high a value as possible, provided the database rollback/undo area is large enough.
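To see why the larger commit size matters, here is a quick back-of-the-envelope calculation (the row count below is an invented example, not a figure from this blog): fewer, larger commits mean less per-commit overhead during the conversion, at the cost of more undo space held per transaction.

```python
def commit_count(rows, entries_per_commit):
    """Number of commits BDLS issues while converting a table of `rows` rows."""
    return -(-rows // entries_per_commit)  # ceiling division

# Illustrative row count only (not from the blog):
rows = 100_000_000
print(commit_count(rows, 1_000_000))    # default commit size  -> 100 commits
print(commit_count(rows, 10_000_000))   # recommended setting  -> 10 commits
```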

[Screenshot: BDLS selection screen with logical system names, options, and Number of Entries per Commit]

Please execute transaction SM37 to make sure the job RBDLS300 (the program name differs on WAS 6.40) has started for the A* variant.

[Screenshot: SM37 job overview showing job RBDLS300 for the A* run]

Step 2.2: Please repeat the above BDLS run for all the other table groups – B*, C*, … through Z* – in the background, in parallel.

[Screenshot: BDLS background submissions for the remaining table groups]

Step 2.3: After you have triggered all 26 jobs for the A* to Z* tables, run the last table set for the NOT EQUAL to A* to Z* ranges, which covers all "/" (namespace) tables and the rest.

Please select the tables to be converted and enter the following:

[Screenshot: Tables to be converted, with exclusion ranges A* to Z*]

Press Enter and verify the old/new logical systems and the other options.

[Screenshot: BDLS selection screen for the exclusion run, showing old/new logical systems and options]

Now trigger the run in the background.

Step 2.4: Verify the jobs and the SLG1 (object CALE) logs: make sure a total of 27 jobs (transaction SM37) are running or completed for the BDLS report.

[Screenshot: SM37 overview of the 27 BDLS jobs]

Please verify the SLG1 object CALE log for each run and make sure the tables in the log were converted to the new logical system name.

In the following example, the D* tables are converted and the others are marked with "<<" because no mapping was found.

[Screenshot: SLG1 log showing converted D* tables and "<<" entries for unmapped tables]

While monitoring the other runs you may need to refresh the SLG1 screen to see newly created logs: run /nSLG1 – CALE – Execute. Use the reference numbers to keep track of the last log you analyzed.

[Screenshot: SLG1 log list for object CALE]

Serial no.  Tables to be converted   Status      Job status   SLG1 status
1           A*                       Completed   Successful   No errors
2           B*                       Started     Running      In process
3           C*
4           D*
5           E*
6           F*
7           G*
8           H*
9           I*
10          J*
11          K*
12          L*
13          M*
14          N*
15          O*
16          P*
17          Q*
18          R*
19          S*
20          T*
21          U*
22          V*
23          W*
24          X*
25          Y*
26          Z*
27          Not equal to A* to Z*

Step 2.5: Attached are the SQL traces/costs for the BDLS conversion of table CE11000 (a 230 GB table) with and without the index. Without the index the conventional BDLS run completed in 7 hours; with the index it completed in 2 minutes.

BDLS run without the index for table CE11000:
[Screenshot: SQL trace/cost without the index]

BDLS run with index CE11000~Z1 for table CE11000:
[Screenshot: SQL trace/cost with the index]

Step 3 – Drop special indexes:

After successful conversion of the logical systems, you can drop the special indexes created in step 1.

Example SQL commands:
DROP INDEX SAPR3."CE11000~Z1"
/
DROP INDEX SAPR3."CE41000~Z1"
/

Notes:

  1. For tables like X* and Y* you will not see any conversion, but a log exists with no tables marked. These tables are skipped in the log because no X* or Y* tables have fields like LOGSYS configured, so no conversion is required.
  2. Sometimes, depending on your system landscape, you may need to run a few additional BDLS conversions in the QA system for BW, Event Manager, or SRM systems. Repeat the above steps for each additional conversion. Example: an R/3 TST system refreshed from R/3 PRD: the primary R/3 conversion is from SAPPRD300 to SAPTST300, and the second conversion, for Event Manager, is from SAPEVP110 to SAPEVQ110.

In summary:
The steps above are described in detail. Steps 2 and 3 take very little time; step 1 may take longer, depending on your SAP database size. Altogether, BDLS completes in much less time than with the standard procedure: a standard BDLS run takes about 16 to 18 hours on a VLDB (very large database) system of 13 TB, while with the above procedure you can complete it in 5 to 6 hours, including the index build time. In short: find the biggest tables in your system and create the BDLS indexes before you start BDLS, run BDLS for the table groups in parallel with a higher commit number, verify the logs for successful conversion of all table runs, and drop the BDLS indexes created in step 1.

I hope this helps you complete your BDLS runs in less time. I used to run 4 different BDLS conversion runs for a single system in 70 to 72 hours; with the above 3 steps it now takes only 8 to 10 hours on a VLDB system of 12 TB.


25 Comments


  1. Doreen Anderson

    We found this posting and thought it looked great. Trying it, though: after the first A* submittal of BDLS, any subsequent attempt to run BDLS gives an error. Ideas?

    1. Hari Peruri
      Anderson,

      Thank you for your comments!!

      The message you are getting is not an error; it is a warning that the logical system is already assigned to the current client where you are running the BDLS conversion. Please hit Enter to proceed with the submission when you get this warning.

      We get this warning for most of the character submissions, which is OK. The BDLS program is designed to check the logical system assignment on every run.

      Hope this helps,
      Thanks,
      Hari Peruri

  2. MOhammad Farooq
    Hi Hari,

    When I scheduled all 27 jobs, only 1 job ran and the rest got canceled at the same time. I was not able to reschedule these jobs because it says the logical system has already changed, so in order to schedule the remaining jobs I had to remove the logical name from SCC4 and SALE. I am still not able to run more than one job.

    Please tell me what I am doing wrong here. I have already increased the batch processes to 28, but I am not able to run this process, although I followed this procedure step by step.

    Please give me your expert advice.

    Thanks
    Mohammad
    f131f@hotmail.com

    1. Hari Peruri
      Hello Mohammad,

      You cannot copy these batch jobs. You need to submit them from BDLS for every character run to execute in the background. You will get a popup warning saying the logical system is already assigned or created; just hit Enter and continue to submit in the background. The reported error is only a warning; continue submitting the parallel BDLS conversions.

      Hope this helps.

      Thanks,
      Hari

  3. Loukas Rougkalas

    Hi Hari,

    Unfortunately, although I was optimistic about the procedure you have described, I have not been able to run the conversion in parallel. I followed all the steps and created the indexes, but when I started running the conversions one by one in the background for each table group separately (A, B), I saw all the jobs in SM37 getting cancelled immediately within seconds. Only the first one ran for ~2000 seconds, and all the rest which I started in parallel (B, C, etc.) were cancelled (failing) with the job log: "The new logical system name is assigned to the client XYZ. Another process is running for conversion." Loukas Rougkalas

    1. Hari Peruri Post author
      Loukas,

      When you run the first job with A*, it creates the target logical system and assigns it to the client in the SCC4 settings automatically.

      The subsequent character submissions will get a warning saying the logical system is already assigned. This is an informational message while submitting the job. You need to hit Enter and submit the further B*, C*, … runs in the background. I wonder if you submitted A* and B* at the same time; one would fail because A* had already assigned the logical system to the client settings.

      Here is the quick feedback:
      1) Submit A* in the background.
      2) In the next 5 minutes submit B* and the rest, accepting the warning pop-up saying the logical system is already assigned. (You don't need to delay the further characters; only the first one needs time to start assigning the client settings.)

      This should resolve the problems. We have been using this process for a number of years now.

      Without this process, we could not run BDLS for 5 sets on a 15 TB system in 7 hours; it usually takes around 80 hours without indexes. 🙂

      hope this helps.

      Hari

      1. Loukas Rougkalas
        Thank you Hari for your quick response.
        Basically, our problem is not the warning message. If I recall correctly, we were getting this message in the bottom bar of the BDLS console. Then I pressed Enter, it gave me the option for printing, I selected LOCP, and then it completed. I opened a second session in SM37 to watch the job monitor, and the job was already red (Cancelled). We suspected that the job is complaining because the name was already changed in SCC4 by the first table run, but how are we supposed to see when the BTC job will complete for each and every table, when it has already failed in the job monitor?

        I hope you get my point.

        Kind regards,
        Loukas

  4. C. Mallens
    Hi,

    With a database counting 2TB and some very large tables this sounds like a logical and excellent way to speed up the conversion process.

    From tests we also concluded the following:
    – Large tables with a LOGSYS (or similar) field but no logical system names in them also need the index (in our case, for example, table COEP).

    – BDLS is an intelligent transaction: it only calculates and converts tables which actually contain the LOGSYS/AWSYS (or similar) fields.

    – Even though a table might contain 0 AWSYS and 1,000,000 LOGSYS entries, it does need a combined RCLNT, AWSYS and LOGSYS index to perform correctly.

    Our test results on GLPCA: 6.5 seconds with the index, 4.75 hours without.

    Regards, Niels.

    1. Raja Poranki
      Unfortunately, as of WAS 620 SAP changed the BDLS functionality by replacing the RBDLSXXX program with RSBDLSMAP, with which parallel processing of several tables is no longer possible. I see a lot of people responding to this blog saying their subsequent jobs errored out after the first one; it is a known issue.

      The only workaround is to use the old program RBDLS2LS. It will generate the RBDLSXXX program, which can be used for parallel runs.

      Raja

  5. Mylene Euridice Dorias
    Hi,

    I'd like to add a part to this blog containing the specifications for a parallel run of BDLS on a NW 640 system with a MaxDB database. Also, it would be nice to update this blog a bit, since I saw comments in the forums referring to it which considered it out of date. Along with these updates we could mention the functionality of the 'new' BDLS (non-parallel) but point out how to still get the job done using the 'old' BDLS.

    If you are interested, please contact me.

    Thanks in advance,
    best regards,
    mylène

    1. Hari Peruri
      You should be able to do a re-org or run stats after the conversion is done.

      The database then behaves normally, so you can perform these tasks as usual.

      Hari

  6. Eric Brunelle
    I have a 1TB ECC 6.0 system.

    BDLS was running for over 18 hours and it was still trying to process GLPCA.

    I added about 15 indexes after investigation.

    Now BDLS runs within 1 hour, even with a single BDLS run (no parallel processing).

    I would like to thank you for the helpful hints you provided in this very interesting blog.

  7. Derek Barrett
    We just used this method on BDLS and reduced the runtime by 80%, even without parallel runs.

    My question is, say I have a very large table, for example CE1OC01, around 200GB. We indexed this table on the COPA_AWSYS column, however, after viewing the BDLS log, no rows were updated in this table.

    I am thinking, even though the program did not update the table, that the index was still helpful in scanning the column, to make sure there were no client entries?

    So the question would be, would having an index on a table which does not get updated, still be helpful in reducing the runtime of BDLS? Doesn’t BDLS still need to scan the column on the table to make sure there are no versions of PRDCLNT212 ?

    1. Hari Peruri
      Hello Barrett,

      Yes, you are right. The index helps the scan of the table, making the sequential reads faster when checking the values for conversion.

      We have tables like GLPCA, 700 GB in size, that go through the scan without a single record to convert. This usually takes hours, and with the index it finishes in minutes.

      Hope this helps.
      Hari

      1. Derek Barrett
        Hari,

        Thanks for the confirmation, and again thanks for your original article. This article has been extremely helpful and has greatly reduced our BDLS runtime to a manageable time frame.

        Derek

  8. Anonymous
    Hi,

    First I would like to thank you for this fantastic article.

    We tried it on a 25 TB system; it completed within 12 hours, where before it took more than 5 days to complete.

    Once again thank you for this article.

    Thanks
    Dhanapathy

  9. Josef Schmid
    Hi,
    first: Thanks for your detailed explanations!

    We have a 1 TB database (on MSSQL 2005 – does it matter?) and I tried your proposal on a single table, SWW_CONTOB (the largest one for BDLS, with about 25,000,000 rows). There was nearly no difference with or without the index; the actual SQL statement is:
    UPDATE "SWW_CONTOB" SET "LOGSYS" = @P1 WHERE "CLIENT" = @P2 AND "ELEMENT" = @P3 AND "TAB_INDEX" = @P4 AND "WI_ID" = @P5
    Thus MSSQL doesn't use my newly created index on (CLIENT, LOGSYS) but still uses the primary index, which consists of the columns CLIENT, WI_ID, ELEMENT, TAB_INDEX.

    Do you have any suggestions?

    Thanks
    Josef Schmid

    1. Sri M
      Use explain plan in ST04 and check the optimizer path.
      You might have to update stats or in worst case scenario rebuild index.

      -SM

  10. Sri M
    Hello Hari,
    Million thanks for posting this blog, but I'm wondering: how long does it take to create an index on 250 GB tables, with a count of 17 such tables in my landscape?

    Can you please let me know how long it took for you to create special indexes on the above mentioned tables before starting the actual BDLS run.

    Thanks
    Sri

  11. Steven Foo
    From our BDLS log, I notice we have a large conversion taking place

    TABLE: /SAPHT/DRMDETL     
    Columns:
            LS_EC_SO     1     31228
         LS_PPCLMDOC     1     96462
         LS_PPRVLDOC     0     0
         LS_REF_DOC_NO     1     2436922
         LS_SD_AGR     1     1235323

    I wanted to create index for it. For this case do I create:

    1) one index, e.g. /SAPHT/DRMDETL~999 on (LS_EC_SO, LS_PPCLMDOC, LS_PPRVLDOC, LS_REF_DOC_NO, LS_SD_AGR)

    or

    2) two indexes for the two large columns: /SAPHT/DRMDETL~888 (LS_REF_DOC_NO) and /SAPHT/DRMDETL~777 (LS_SD_AGR)

    Please advise.

  12. Anish John

    Hi there ,

    Very Nice post

    I see you are creating these indexes in a new tablespace. Any specific advantage of doing so? Was it done just to manage them easily?

    What if I create it in PSAPSR3?

    CREATE INDEX SAPR3."CE11000~Z1" ON SAPR3.CE11000 (MANDT, COPA_AWSYS) NOLOGGING TABLESPACE PSAPBDLSI

    Regards

    A

