
Hi!

It’s quite difficult to decide precisely where this blog belongs: SXDA and LSMW are NetWeaver components and work across solutions. But since I’m specialized in CRM and the example uses the One Order object, I believe this is the best spot. I just hope it can also be useful to other SAP professionals who work in a different area.

This blog is about improving the mass data transfer (insert, update, delete) experience using LSMW and SXDA, and it focuses on the definition of the whole project. Please don’t skip the second part of the blog, which covers the execution and references: Unleashing SXDA and LSMW Part 2 – Execution

What? Someone asked what LSMW is? This is an old topic and I believe it is covered well enough throughout SCN, but I think a short definition will be good: the Legacy System Migration Workbench is an old framework for managing data transfers into an SAP system (creating sales orders, customers, etc.; it supports a large number of objects).

Consultants:

  • Are you tired of depending fully on developers?
  • Are you tired of hearing “A custom report? We can’t support you”?

Developers:

  • Are you tired of dealing manually with mass processing and parallelization?
  • Do you suffer from itching each time you have to develop a report to upload a file from the local PC?
  • Do you still suffer PTSD after dealing with complex data mapping in LSMW (events)?

If most of your answers are a YES, I believe this blog can really be useful. Otherwise, there’s a lot of other stuff going on at SCN 😉

The Scenario:

Update more than 5,000 sales orders that have a wrong posting date.

Step 1: Create a project

According to SAP, “A project is used to group Business Objects together for a transfer”. In our scenario we only have one Business Object: Sales Order. Grouping doesn’t make much sense in our basic scenario, but imagine you want to migrate various legacy systems to your SAP solution: you can create a project for each legacy system, and inside each project you will have all the Business Objects required (Customers, Relationships, Products, etc.). Or even better, each legacy system owns its own Business Objects: Legacy system 1 has Customers and Relationships, and Legacy system 2 has Products. This is a very nice way to organize your data transfer, right?

/wp-content/uploads/2014/10/1_558863.jpg

/wp-content/uploads/2014/10/2_558864.jpg

Step 2: Create a subproject, run definitions and tasks via Wizard

The subproject corresponds to the Business Object itself, in our case the One Order object.

/wp-content/uploads/2014/10/3_558865.jpg

Step 3: Choose the type of data transfer object (Business Object)

Depending on the Business Object type selected, the wizard will provide the corresponding load method (LOA). The load methods supported are:

  • BAPI (you can create/use your own BAPIs; in that case, the BAPIs must be generated through the BAPI-ALE interface. I’m not going further into this, as the main topic is wide enough)
  • Batch Input
  • Direct Input
  • IDoc

The relationship between Business Object and load method is already delivered by SAP, and it depends on which modules are “installed” in your NetWeaver.

You can check which objects/load programs/interfaces are available through SXDA->Goto->DX Program Library

/wp-content/uploads/2014/10/4_558884.jpg

Step 4: Review task parameters

A task is, as the name says, each step that will be performed sequentially in each run definition. Oh, I almost forgot: what’s a run definition? For that I need another “complex” scenario. Imagine you have to create customers and there is more than one source file: you can create a run definition for each file in the same subproject (Business Object). Why? For example, to run the data transfer in parallel, or because each run needs different tasks: the first run validates its file, while the second run doesn’t because its file doesn’t need it.

Step 4.1: Task – Convert data

This task calls LSMW as the mapping tool to convert the file from the external format to the internal format (BAPI, Batch Input, etc.). You don’t need to change anything here, but it gives you the details to build the filename for step 4.2. I believe it’s good to know that the Project, Subproject and Object fields of the second screenshot correspond to the LSMW project that will be generated once the wizard finishes.

/wp-content/uploads/2014/10/5_558887.jpg

/wp-content/uploads/2014/10/6_558886.jpg

A small parenthesis: sometimes LSMW needs very complex input files and the customer doesn’t have the means (or doesn’t want, or doesn’t know how) to perform those conversions on the legacy system. Here’s where another major advantage of SXDA comes into play: the possibility to define additional steps. In this case I’m briefly talking about the task types Extract data (EXT) and Clear data (PUR). You can define a custom function module or custom program for both task types in order to read the source file and format it nicely into an LSMW-friendly file, especially the hierarchical ones; people who have dealt with that in LSMW know exactly what I’m talking about, right? No more dirty stuff in the LSMW mapping events! Keep the mapping as simple as you can. If you are more interested in this topic I can write a specific blog about it, but first let’s see the acceptance of this one 😉
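To make the idea concrete, here is a minimal, non-SAP sketch in Python of the kind of flattening an EXT pre-processing program would do: turning a hierarchical source file into one row per item, with the header key repeated on every row. The file layout and field names are invented purely for illustration; your custom function module or program would of course do this in ABAP against the real legacy format.

```python
# Hypothetical EXT-style pre-processing: flatten a hierarchical
# source file (header lines start with "H;", item lines with "I;")
# into LSMW-friendly rows where every item carries its header key.
# Record layout and field names are invented for this example.

def flatten_hierarchical(lines):
    flat = []
    current_header = None
    for line in lines:
        parts = line.strip().split(";")
        if parts[0] == "H":      # header record: remember its key
            current_header = parts[1]
        elif parts[0] == "I":    # item record: prepend the header key
            flat.append([current_header] + parts[1:])
    return flat

sample = [
    "H;0005000123",
    "I;10;MAT-001;5",
    "I;20;MAT-002;1",
    "H;0005000124",
    "I;10;MAT-003;2",
]
rows = flatten_hierarchical(sample)
# rows[0] == ["0005000123", "10", "MAT-001", "5"]
```

With a flat file like this, the LSMW mapping stays a plain one-to-one field assignment instead of event gymnastics.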

You can define additional tasks via SXDA -> Goto -> DX Program Library -> Create registration. Once the task is registered, the wizard will suggest using it (whether you create the run from scratch, or the run already exists and you want to add the new task manually).

Step 4.2: Task – Check file

This task checks whether the file has a correct IDoc format; it doesn’t check the data itself, only the structure. The important part here is the name of the input file: the filename is the result of the mapping conversions performed by LSMW (converting a flat or non-flat file to the IDoc format that can be consumed by the standard BAPI). The naming convention is always the same: SXDAproject(uppercase)_subprojectID_objectID.lsmw.conv (you can check this in the previous screenshots of step 4.1).
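As a quick illustration of the naming convention above, this is a one-line helper sketched in Python (the project, subproject and object IDs are made up for the example):

```python
def conv_filename(project, subproject, obj):
    # SXDA project name in uppercase, then the subproject and object
    # IDs, joined by underscores, with the fixed ".lsmw.conv" suffix.
    return f"{project.upper()}_{subproject}_{obj}.lsmw.conv"

# Hypothetical IDs, just to show the shape of the result:
conv_filename("zfix", "ORDERS", "ONEORDER")
# → "ZFIX_ORDERS_ONEORDER.lsmw.conv"
```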

You can perform this step once everything is finished, even the whole LSMW part, but it’s better to do it at the very beginning to avoid the “Oh, I forgot”.

/wp-content/uploads/2014/10/7_558888.jpg

/wp-content/uploads/2014/10/8_558889.jpg

Step 4.3: Task – File split

In this step we split the converted file into small chunks: the total number of IDocs generated is divided by the number of files defined. This is quite useful; for example, checking a particular block in a 100-record file is easier than doing the same in a file with 500k records (for further information, check the section Fix the errors). I also like to keep the files: sometimes the data transfer completes successfully but some data got lost, so this is a good way to trace small mapping errors, loss of data, etc.
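The division described above can be sketched quickly; this is only an analogy in Python of how a total record count spreads across a chosen number of files (SXDA does this internally, so the function name and chunking details here are my own illustration):

```python
def split_into_files(records, n_files):
    # Distribute records across n_files chunks as evenly as possible,
    # mirroring the idea of dividing the total IDoc count by the
    # number of files defined in the File Split task.
    size, rest = divmod(len(records), n_files)
    chunks, start = [], 0
    for i in range(n_files):
        end = start + size + (1 if i < rest else 0)
        chunks.append(records[start:end])
        start = end
    return chunks

chunks = split_into_files(list(range(10)), 3)
# → [[0, 1, 2, 3], [4, 5, 6], [7, 8, 9]]
```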


/wp-content/uploads/2014/10/9_558890.jpg

/wp-content/uploads/2014/10/10_558891.jpg

Step 4.4: Task – Load data

This is the most important step and one of the most awesome features: you can choose the number of transactions per commit, and whether to perform the data transfer sequentially or in parallel using a server group. The latter can increase performance dramatically, but you must check/configure the server group with the Basis guys so you don’t use up all the DIA work processes. Good tuning between parallel processing and block size is one of the cornerstones of an optimal and healthy data transfer.

I only log the error messages, because I don’t want to generate more log entries than necessary, and I also check “Write IDoc to error files”, because it allows me to fix records with errors; I will explain this later on.

As we split the file into 5, we need one error file for each to store the erroneous records. The more files, the easier it is to find an error inside a file, but we will also see this in the following steps.
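To picture the “transactions per commit” setting, here is a rough, non-SAP analogy in Python: records are posted in blocks of a chosen size, with one commit per block. The function names are mine; in the real run, SXDA drives the posting and the COMMIT WORK for you.

```python
def load_in_blocks(idocs, block_size, post_block):
    # Post IDocs in blocks of `block_size`, committing once per block.
    # `post_block` stands in for the actual posting + COMMIT WORK;
    # a smaller block size means more frequent, cheaper commits.
    commits = 0
    for start in range(0, len(idocs), block_size):
        post_block(idocs[start:start + block_size])
        commits += 1
    return commits

load_in_blocks(list(range(250)), 100, lambda block: None)
# → 3 commits (blocks of 100, 100 and 50 records)
```

Balancing this block size against the degree of parallelism is exactly the tuning exercise mentioned above: bigger blocks mean fewer commits but longer-held resources per work process.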

/wp-content/uploads/2014/10/11_558892.jpg

/wp-content/uploads/2014/10/12_558893.jpg

Step 5: Choose message type

In our scenario I’m dealing with XIF interfaces (BAPI-IDoc); this can differ depending on your object type and data transfer method.

/wp-content/uploads/2014/10/13_558894.jpg

Step 6: Perform the mapping using LSMW

The Convert data step doesn’t have any input/output file yet; these will be filled in automatically once the LSMW tasks are completed.

/wp-content/uploads/2014/10/14_558895.jpg

As I pointed out in step 4.1, I already have a Project, Subproject and Object (generated by the SXDA wizard). Don’t forget to configure the Inbound IDoc Processing from the LSMW perspective (Settings); you can find more info about this and LSMW at the SAP Help Portal.

/wp-content/uploads/2014/10/15_558896.jpg

Let’s assume I finished all the mandatory steps (everybody who is used to LSMW knows what I’m talking about; if you are not, there are very nice step-by-step blogs on SCN which cover those steps, from 2 to 6).


Now it’s time to specify the files and assign them to the input structures (remember, until these steps are done, the SXDA Convert data step will be incomplete).

/wp-content/uploads/2014/10/16_558897.jpg


As you can see, LSMW automatically assigned the IDs for the imported and converted data; these must match in the SXDA steps.

/wp-content/uploads/2014/10/17_558899.jpg

Finally, we go back to SXDA and see the input/output files filled in (if this doesn’t happen, two possible causes are: the SXDA screen needs a refresh, so /NSXDA, or some LSMW step is missing).

/wp-content/uploads/2014/10/18_558900.jpg

End of part 1:

I hope you are starting to see the advantages of combining both tools and looking forward to your feedback 🙂

Cheers!

Luis


