Part I: Back Story, a developer’s suffering

In most cases, a company runs just one material data migration project at a time; in others, as the company grows, there may be an occasional project to integrate another company’s material data. I have a customer which is a fast-growing company, and I can’t recall a year without a migration project. In fact, during the last years there were three or more migration projects per year, and there is a queue of migrations waiting to be processed.

For privacy reasons, and because SCN is not a pillory, the customer’s name won’t (and shouldn’t) be mentioned here. It’s just an example of problems which can likewise appear in many projects at many customers.

Before I joined their SAP Competence Center (as an external, freelancing developer), they worked with single-use reports to migrate the new companies’ data. In the past, they had tried to use LSMW, but since several external developers had failed to migrate material master data with LSMW, I was not allowed to use it! In these single-use reports it was hard-coded how fields were to be filled, depending on their material type and its display-only/mandatory customizing, as well as which standard values were to be used by default if a field was undefined or empty in the source system. Hard-coded inserts of MVER records, additional MARC/MARD records, MLGN/MLGT records, etc. Some flags appeared from nowhere, and there was no way to separate the generally usable coding from the project-specific code (with the result that the whole program was project-specific, so they had to code a new one from scratch for each project). This coding was called “pragmatic”.

I had to obey – knowing that I would take great risks if I tried other ways. So I did as I was told and used – under protest – hard-coded single-use reports. As we were pressed for time, no discussion arose about it. And – I must admit – my last material data migration project lay 15 years back. For the sake of peace and quiet, I did as I was advised.

And guess what: this project was a mess – for my nerves and my health. Instead of being proud of my work, I hated my coding. After I had made all requested changes, it was impossible to tell by whom they had been required. Of course, at go-live, all data had been migrated in time and correctly (hey, that’s what I am paid for!), but you don’t want to know how much money they had to pay. I won’t quote what I said to them after finishing the project (it wasn’t very friendly, but honest), but I said that I wouldn’t do another migration project in a similar way – I wanted to go my own way.

Because the next migration project was already announced, I knew I had to find a solution for this, and the most important items were easy to identify:

  • Separation

between frontend and backend features; the single-run programs used in the past were designed to be started by the developer and no one else. I wanted an application which can be started by everyone after a short briefing. And I don’t want to test the whole migration stuff just because the frontend changes (S/4HANA is just around the corner, even for this customer!).

  • Exception handling

Of course, I want to work with Exception Classes….

  • Documentation

I hate undocumented development objects, and even though most of SAP’s are not documented, I prefer to do it (if the customer does not want to pay for the documentation, I even do it in my spare time). So each class, each interface, each component, each program, table and data element has to be accompanied by a documentation. The expectation was high: for an experienced ABAP OO developer, a single workday of eight hours has to be enough to perform the full maintenance of the program.

  • Testing

Mostly, testing works like this: try a few different cases (just one in most cases), and if they don’t produce a dump, the app is working fine per definition. I love test classes, and I want to have a minimum of test effort. A test class is written once, and a well-defined bunch of test cases (growing and growing, because each issue from the productive system has to be simulated as well) can be processed with a single click. The result is that no working feature can be broken by developer mistakes.

  • Separation of concerns

It would have to have a reusable and a project-specific part. In each project, there are some essentials which have to be developed only once to be used in every migration project. On the other hand, there is always project-specific code which cannot be handled in the reusable part. On closer inspection, a third layer appears between these two, which bundles similar projects. We’ll get deeper into that later. In particular, the „you need multiple MARAs when you want to create multiple MARCs/MARDs/MLGNs/….“ thing (more info about it below) I wanted to code only once!

  • Field status determination

Like the FM MATERIAL_MAINTAIN_DARK does, I want to read the customizing to determine input/output/mandatory attributes – not just to send a simple error message and abort (like the FM does), but to have the chance to fix the problem automatically. It turned out that the customer was wrong: reading the customizing was much faster and easier to implement than collecting the filling rules from all functional consultants! In addition, I want to determine the views I have to create from the customizing as well.

  • Protocol

Each callback of the kind “why does field X in material no. Y have value Z?” has to be answered by a protocol, which can be inspected by the functional consultants, so there is no need to bother the developer. To get this, all FM messages and all data manipulations have to be accompanied by a protocol entry.

The problem was to sell this solution to my customer. So I needed two things: a good, advertising-effective name and a calculation showing that my solution is cheaper than the single-run programs used in the past. For the name, I had to exaggerate a bit and chose „Material Data Migration Framework“ – you can call a box of cigarettes a ‘smoking framework’ and every CIO will buy it! – and changed its abbreviation from MDMF to MAMF to make it speakable like a word.

The calculation was simple: I made a bet, stating that I would cut my bill if the costs were higher than those of that last project. To make a long story short: the costs were much lower (and the migration much faster, as well!), and since most of the coding is reusable, the costs of the following projects will be FAR lower. They never had such a smooth migration project.

Part II – Fundamentals


  • In the text below, I use $ as a variable for the customer’s namespace, in most cases Z or Y, in some cases something like ‘/…./’.
  • The migration tables’ dependencies, explained first, will be called the “object hierarchy”, which must not be mixed up with the “class hierarchy”, which will be explained later.
  • I won’t post any coding here – because the customer paid for this coding, so they own it.

At first, we need a package to collect all related development objects: $MAMF.

For material master data migration, we won’t stop using FM MATERIAL_MAINTAIN_DARK, which works in logical transactions; more details are explained in its documentation. The most important fact is that the migration tables’ records are related to other records of the same material master data set (material number). One example: to post a material master data set with multiple MARC records, each with multiple MARD records, there have to be multiple MARA records (in the single-use programs, this problem was solved by inserting the multiple entries directly).

This was the decisive factor for the decision to develop object-oriented. I realized that I would have to interpret each record of each migration table of FM MATERIAL_MAINTAIN_DARK as an object, because an object has a constructor (and, via a free method, a destruction service). This means that a MARD record can check at construction whether or not there is a MARC record related to the same plant. If not, it fires the MARC record’s constructor to generate one, and this constructor checks whether there is a MARA record using the same transaction number TRANC. This results in an object hierarchy.
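Since no customer code is posted here, the following is a generic, hypothetical sketch of that constructor chain (class names, method names and the constructor signature are my assumptions, using Z as the namespace placeholder):

```abap
* Hypothetical sketch – not the customer's code. A MARD object's
* constructor makes sure the superior MARC object (same material/plant)
* exists; the MARC constructor, in turn, does the same for MARA.
METHOD constructor.   " of zcl_mamf_mmd_mard, IMPORTING iv_matnr, iv_werks
  super->constructor( ).
  " Ensure the superior MARC object exists for this plant; if not,
  " get_instance( ) fires its constructor, which in turn ensures a
  " MARA record with the same transaction number TRANC.
  mo_marc = zcl_mamf_mmd_marc=>get_instance( iv_matnr = iv_matnr
                                             iv_werks = iv_werks ).
  " register this MARD record in the object hierarchy
  mo_marc->add_mard( me ).
ENDMETHOD.
```

This way, the caller only ever creates the object it actually needs; the superior objects materialize on demand.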

So I need a class inheritance hierarchy, which differs – as mentioned above – from the object hierarchy: a basic class $CL_MAMF_MMD, the same for all material master data migration projects, and a subclass $CL_MAMF_MMD_nnnn for each migration project, dealing with the project-specific steps (nnnn is a migration project ID).

Looking ahead: we’re going to get some other basic classes, e.g. $CL_MAMF_PIR… for purchasing inforecords, $CL_MAMF_BOM, etc., which results in a „higher level“ (root) class $CL_MAMF for all migration projects. But for now, this is irrelevant.

We need this hierarchy for all migration table types: one for MARA_UEB, one for MARC_UEB, another one for MARD_UEB, etc. For LTX1_UEB, we are going to do something special: a separate class for each long text, with name = text ID: BEST, GRUN, PRUE and IVER. For the sales text (text ID 0002), we take the text object MVKE for better identification of the class and (because there already is an MVKE_UEB table) change it to MVKT. All these classes inherit (as $CL_MAMF_MMD does) from $CL_MAMF, which means they are on the same level as $CL_MAMF_MMD. To repeat it: the object hierarchy must not be mixed up with the class hierarchy!

The root and the basic classes’ instance generation is set to “abstract”, the project-specific classes are set to “private”, and they are always final, to avoid project-to-project dependencies.

$CL_MAMF                root class

$CL_MAMF_MMD_xxxx       basic class

$CL_MAMF_MMD_xxxx_nnnn  project specific class

     xxxx = table / long text (MARA, MARC, …, BEST, …)    (not applicable for the migration process controlling class)

     nnnn = migration ID


For each migration project, we just have to create a new subclass of each of the basic classes (except for the data we don’t want to migrate – we won’t need a BEST_xxxx class in a migration project which is not supposed to migrate purchase order texts).

The controlling class (…MMD) has to have a class constructor to fetch some customizing tables (particularly the field status of the MM01 screen fields). This class will also have a method post, which posts the whole material data set.

All classes have a protected constructor, because we adopt a modified Singleton design pattern (a so-called Multiton) to administer the instances in a static internal table MY_INSTANCES, containing all key columns of the related migration table and a column for the object related to these key columns.
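A minimal Multiton sketch might look like this (again a hypothetical illustration, not the customer’s code; in the real framework the basic classes are abstract, so a factory would instantiate the project-specific subclass instead of the plain CREATE OBJECT shown here):

```abap
* Hypothetical Multiton sketch for the MARC level: one instance per
* key combination, administered in the static table my_instances.
CLASS zcl_mamf_mmd_marc DEFINITION CREATE PROTECTED.
  PUBLIC SECTION.
    CLASS-METHODS get_instance
      IMPORTING iv_matnr        TYPE matnr
                iv_werks        TYPE werks_d
      RETURNING VALUE(ro_marc)  TYPE REF TO zcl_mamf_mmd_marc.
  PROTECTED SECTION.
    TYPES: BEGIN OF ty_instance,
             matnr TYPE matnr,
             werks TYPE werks_d,
             o_obj TYPE REF TO zcl_mamf_mmd_marc,
           END OF ty_instance.
    CLASS-DATA my_instances TYPE SORTED TABLE OF ty_instance
                            WITH UNIQUE KEY matnr werks.
ENDCLASS.

CLASS zcl_mamf_mmd_marc IMPLEMENTATION.
  METHOD get_instance.
    READ TABLE my_instances INTO DATA(ls_inst)
         WITH TABLE KEY matnr = iv_matnr werks = iv_werks.
    IF sy-subrc = 0.
      ro_marc = ls_inst-o_obj.          " already known: reuse it
    ELSE.
      CREATE OBJECT ro_marc.            " allowed: we are inside the class
      INSERT VALUE #( matnr = iv_matnr
                      werks = iv_werks
                      o_obj = ro_marc ) INTO TABLE my_instances.
    ENDIF.
  ENDMETHOD.
ENDCLASS.
```

The protected constructor guarantees that get_instance( ) is the only way in, so two callers asking for the same plant always get the same object.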

The next step I did not implement, but it seems to be a good idea for the future: the following methods could be bundled in an interface $IF_MAMF_LT, implemented by all basic classes and inherited by all project-specific classes.

Because ABAP does not support overloading, we have to abstract the importing parameters, which has to be explained: we store a data reference in the class, written by a set method and read by a get method, so we can be sure that every data object can be stored. We cannot use typed interface parameters for that, because each migration table has its own structure.
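A sketch of what that untyped interface could look like (the interface is explicitly described above as not yet implemented, so everything here is an assumption):

```abap
* Hypothetical sketch: one untyped data reference replaces the
* overloaded, structure-specific parameters ABAP cannot express.
INTERFACE zif_mamf_lt.
  METHODS set_data IMPORTING ir_data        TYPE REF TO data.
  METHODS get_data RETURNING VALUE(rr_data) TYPE REF TO data.
ENDINTERFACE.

* Caller side, e.g. for a MARC_UEB record:
DATA ls_marc_ueb TYPE marc_ueb.
lo_marc->zif_mamf_lt~set_data( REF #( ls_marc_ueb ) ).

* Reader side: cast the reference back to the concrete structure.
FIELD-SYMBOLS <ls_marc> TYPE marc_ueb.
DATA(lr_data) = lo_marc->zif_mamf_lt~get_data( ).
ASSIGN lr_data->* TO <ls_marc>.
```

The price for the generic signature is that each subclass must know which structure to dereference – which it does, since each class is bound to exactly one migration table.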

A free method provides a destruction service, including the automatic destruction of all subordinate objects.

A factory method builds the class name by concatenating the basic class’s name and the migration ID, to return an instance of a basic class’s subtype.

An instance creator method get_instance, which checks the existence of the superior object – if this check fails, the constructor of the superior object’s class is called – and then calls the constructor of its own class to return a unique instance.
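The factory idea boils down to dynamic instantiation, for which a hedged sketch could be (names assumed; the instantiation rights of the private project classes would have to permit this, e.g. via a FRIENDS relationship):

```abap
* Hypothetical factory sketch in a basic class, e.g. zcl_mamf_mmd_mara:
* build the subclass name from the own class name plus the migration ID,
* then instantiate it dynamically.
METHOD factory.   " IMPORTING iv_migration_id, RETURNING ro_instance
  DATA lv_classname TYPE seoclsname.
  lv_classname = |ZCL_MAMF_MMD_MARA_{ iv_migration_id }|.
  " dynamic instantiation of the project-specific subclass;
  " ro_instance is typed TO REF TO the basic class (the subtype fits)
  CREATE OBJECT ro_instance TYPE (lv_classname).
ENDMETHOD.
```

This is the spot where the class hierarchy pays off: the basic class never needs to know the project-specific classes at compile time.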

The result of this concept is that the dependencies between the migration tables have to be coded only once (in the basic classes) but are used in each migration project. No developer of a migration project has to care about this stuff; he just creates the objects he needs, and the objects themselves take care of the technical dependencies MATERIAL_MAINTAIN_DARK requires.

And, as explained earlier, we don’t want to hard-code field contents in the migration program, so we have to read the fields’ status from the customizing. MATERIAL_MAINTAIN_DARK does that, too, but only to fire an error message and abort. This has two consequences: on the one hand, we can copy the coding instead of re-inventing it, and on the other hand, we can avoid the abortion.

The method get_field_status returns an indicator for obligatory and output-only fields, and in combination with the field’s content we can find filled output-only fields (which have to be cleared) and empty obligatory fields. For these fields, we need a get_(fieldname) method, which returns a default value – implemented in the basic class for all projects, or project-specifically in the final class. These methods (there will be hundreds of them) shall be created automatically, and in most cases they will be empty (meaning: take the data from the source). The same goes for set methods, which manipulate the saving process for each field. An example of a manipulated field content is MARA-BISMT, which contains the material number of the former system. My customer has multiple old material numbers, because (for example) company A has been migrated to plant 1000 and company B to plant 2000. For this reason, they defined a table to store the BISMT per plant. The easiest way to do that in MAMF is to implement it in the method $CL_MAMF_MMD_MARA_nnnn->set_bismt( ), which stores the relation between the former and the current material number in that table for each migration project (meaning: for each plant).
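Combining the field status with the field content could look roughly like this (a hypothetical sketch – method names, constants and the attribute ms_mara_ueb are my assumptions):

```abap
* Hypothetical sketch: evaluate the customizing-based field status
* together with the field's content, here for MARA-BISMT.
METHOD check_field_bismt.
  CASE get_field_status( 'MARA-BISMT' ).
    WHEN mc_status_output_only.
      " a display-only field must not arrive filled – clear it
      " (and write a protocol entry, see Part IV)
      CLEAR ms_mara_ueb-bismt.
    WHEN mc_status_mandatory.
      IF ms_mara_ueb-bismt IS INITIAL.
        " empty obligatory field: take the default from the generated
        " getter – basic class or project-specific redefinition
        ms_mara_ueb-bismt = get_bismt( ).
      ENDIF.
  ENDCASE.
ENDMETHOD.
```

The hundreds of generated get_/set_ methods all follow this one pattern, which is exactly why they can be generated.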

Part III – The Migration Cockpit

I’ve always been of the opinion that using an app has to be fun for the user, not just a way of performing his work duties. So the user’s view on each migration project is very important: the Migration Cockpit application, which will be a report with the transaction code $MAMF, follows the current design rules of SAP: besides the selection screen, the report itself won’t contain any coding, only method calls. The coding will be placed in local classes: lcl_cockpit for the main coding and lcl_cockpit_evthdl for the event handler, because I prefer to handle report PAI by raising events – i.e. when the user presses F4, an event is raised, and the value help is implemented in the event handler class.

The selection screen is split into three areas:

  1. The header line with migration ID and it’s description
  2. A bunch of tabstrips, one for each migration object. For now, we only need a tab for material master data, but we want to keep the option of adding more, to have a single point of migration work for all material data.
  3. A docking container, displaying a short briefing, what to do to migrate the migration object from the foreground / active tab.
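The delegation pattern described above – no coding in the report itself, only method calls – could be sketched like this (all names are assumptions in the spirit of the text):

```abap
* Hypothetical sketch of the cockpit report skeleton: the report only
* delegates; all logic lives in the local classes lcl_cockpit and
* lcl_cockpit_evthdl (definitions omitted here).
AT SELECTION-SCREEN ON VALUE-REQUEST FOR p_migid.
  " no F4 logic here: raise an event, which lcl_cockpit_evthdl
  " has registered a handler method for
  lcl_cockpit=>get_instance( )->raise_f4( 'P_MIGID' ).

AT SELECTION-SCREEN.
  lcl_cockpit=>get_instance( )->handle_pai( sy-ucomm ).

START-OF-SELECTION.
  lcl_cockpit=>get_instance( )->run( ).
```

This keeps the frontend testable and replaceable without touching the migration backend – the separation demanded in Part I.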

To define a migration project, we need a customizing table with the columns migration_id, description (I don’t want to maintain this text in multiple languages, because it will be the new company’s name, so no language field is needed) and a flag for each migration object, with a generated maintenance screen. The cockpit will read this table to show the data and to disable the tabs of all migration objects we don’t want to migrate. A button in the cockpit’s toolbar will open a pop-up for table maintenance.

The cockpit will have three modes:

  1. Post online,
  2. Post in background job, which has to be started immediately (after a single click) and
  3. Post in a job, we plan to run later.

In both background modes, we need an additional step that sends an SAP express mail to inform the user that the migration run has finished. All run modes can be processed as a test run or a productive run. And we have to put some protocol-related buttons onto the screen.

Now we come to a special feature: buffering migration data! In the messy migration project I talked about earlier, we had to migrate about 70,000 materials, loaded from an Excel file and enriched with additional data directly from the source system via RFC. This takes hours, and a simple network problem can disconnect the migrating person’s SAPGUI from the application server, causing an interrupted migration. To make it possible to upload the migration data from the client without giving up background processing, and to speed up the migration run, we have to buffer the data on the application server. To avoid creating application server files from the source system’s data, we will save all data in a cluster table, INDX in this case. Advantage: we can store ready-to-migrate SAP tables. And the flexible storage in cluster tables allows us to save not only the data from SAP tables but the Excel file as well, and the selection screen can show who buffered when. Besides, maintaining a cluster table is much easier than managing files on the application server.
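Buffering in INDX relies on the standard EXPORT/IMPORT TO DATABASE mechanism; a sketch under my own assumptions (the area ‘ZM’ and using the migration ID as cluster key are made up for illustration):

```abap
* Hypothetical sketch: buffer the ready-to-migrate tables in the
* cluster table INDX. Area 'ZM' and the ID layout are assumptions.
DATA ls_indx TYPE indx.
ls_indx-aedat = sy-datum.   " shown later on screen: who buffered when
ls_indx-usera = sy-uname.

EXPORT mara_ueb = lt_mara_ueb
       marc_ueb = lt_marc_ueb
       mard_ueb = lt_mard_ueb
       TO DATABASE indx(zm)
       FROM ls_indx
       ID lv_migration_id.

* The migration run (online or in a background job) reads it back:
IMPORT mara_ueb = lt_mara_ueb
       marc_ueb = lt_marc_ueb
       mard_ueb = lt_mard_ueb
       FROM DATABASE indx(zm)
       ID lv_migration_id.
```

Because the cluster stores serialized data objects of any type, the raw Excel file can be exported under the same ID just as easily.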

The class hierarchy may look like this:

$MAMF_BUF           Basic class

$MAMF_BUF_RFC       RFC specific class

$MAMF_BUF_RFC_nnnn  project specific class for a migration via RFC

$MAMF_BUF_XLS       XLS-based specific class

$MAMF_BUF_XLS_nnnn  project specific class for a migration, based on XLS-files

So, the migration process will have two steps: buffering the source system’s data and migrating the buffered data. For multiple test runs, you buffer once for all of them, which is a nice time saver. And now we can see the third layer between the migration project and the migration object: the migration process. All RFC-based data collections are similar to each other, just as all Excel-based migration projects are similar to each other, and so on. This differentiation only applies to the buffering process; after that, we have a standard situation for all migration projects: the data can be found in INDX, sorted into SAP structures MARA, MARC, etc., so we don’t need this third layer in the migration classes described earlier.

Of course, the brief instruction in the docking container has to be fed by a translatable SAPscript text, and it takes only a handful of steps to implement. Besides, the cockpit will have an extensive documentation explaining each step in detail.

Part IV – Saving Protocols and a Look Ahead

A migration protocol should support two ways of analysis in particular: on the one hand, we have to analyze what errors occurred during the migration run in order to fix these problems. On the other hand, some functional consultants may ask the developer “Why does field X of material no. Y have value Z?” – and one may ask why the developer has to figure that out. To avoid overloading the developer with questions like this, we should write all data manipulations into the protocol, so that each difference between the source data and the migration data we send to MATERIAL_MAINTAIN_DARK can be read in the protocol. All undocumented differences between this data and the posted material data were made by the SAP system.

First of all: the application log is not appropriate for this, because it cannot be filtered properly. I tried it that way, and it was a mess. So we’ll define a transparent table in the data dictionary to store the protocol in. Each insert has to be committed immediately, because a rollback caused by a program abortion (the worst-case scenario) would send all protocol entries into Nirvana. This table $MAMF_LOG_MMD needs the following columns: migration ID, number of the migration run (we’re going to need a few test runs, I’m afraid), test-run/productive indicator, material number, message text, and person in charge. By filtering an SALV-based list, the functional consultant himself can retrace the “migration story” of each material number of each migration run, and he can do that years after the migration if he wants to. And he is able to filter the list for the messages which are relevant just for him: if a field content, e.g. from MBEW, causes any trouble, the name of the FI consultant has to be placed in this column.
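One way to get the immediate commit without committing the migration LUW itself is a secondary (service) database connection – this is my own suggestion, not necessarily how the framework does it, and all table and field names below are assumptions based on the column list above:

```abap
* Hypothetical sketch: write each protocol entry on a separate service
* database connection, so a rollback of the migration LUW cannot take
* the already-written log entries with it.
METHOD add_protocol_entry.
  CONSTANTS lc_con TYPE dbcon_name VALUE 'R/3*MAMF_LOG'.
  DATA ls_log TYPE zmamf_log_mmd.

  ls_log-migration_id = mv_migration_id.
  ls_log-run_no       = mv_run_no.
  ls_log-testrun      = mv_testrun.
  ls_log-matnr        = iv_matnr.
  ls_log-message      = iv_message.
  ls_log-in_charge    = iv_in_charge.

  " insert and commit on the service connection only – the main LUW
  " of the migration run remains untouched
  INSERT zmamf_log_mmd CONNECTION (lc_con) FROM ls_log.
  COMMIT CONNECTION (lc_con).
ENDMETHOD.
```

Whatever the commit technique, the point stays the same: a protocol entry must survive the crash it is supposed to explain.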

The Migration Cockpit needs a button on the material master data tab which reads the table and filters the list to the last run (the most relevant one in most cases), but as said before, the consultant is able to adjust these filter rules to meet his individual requirements.

What’s next? There is more material data to be migrated, so – as mentioned before – there will be more basic classes besides $CL_MAMF_MMD, e.g. $CL_MAMF_PIR for purchasing inforecords, $CL_MAMF_BOM for bills of material and $CL_MAMF_STK for the stocks migration. Although the migration processes will be quite different, we get the chance to migrate all material data with one Migration Cockpit. For this reason, we need a root class $CL_MAMF to make the migration framework extensible (the magic word is “dynamic binding”) without changing the root class’s coding.
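The dynamic-binding idea from the previous paragraph, sketched with assumed names (post( ) in the root class and a plain dynamic CREATE OBJECT stand in for whatever factory the real framework uses):

```abap
* Hypothetical sketch: the cockpit resolves the migration object's
* controlling class at runtime, so new migration objects (PIR, BOM,
* STK, ...) plug in without changing any existing coding.
METHOD start_migration.   " IMPORTING iv_object TYPE string, e.g. 'MMD'
  DATA lo_migration TYPE REF TO zcl_mamf.   " reference typed to the root
  DATA(lv_classname) = |ZCL_MAMF_{ iv_object }|.
  " dynamic instantiation – in practice, a factory in the root class
  " would do this, since the subclasses' instantiation is restricted
  CREATE OBJECT lo_migration TYPE (lv_classname).
  lo_migration->post( ).   " post( ) assumed to be declared in the root
ENDMETHOD.
```

The cockpit only knows $CL_MAMF; which concrete migration runs is decided by the customizing flags, not by the code.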

In conclusion, we have an application that separates the UI layer from the business logic and the reusable from the project-specific coding, is easy to use even for non-developers, and is extensible. With this application, I have a lot of fun and no frustration in my migration projects, and I learned much about OO concepts and design patterns (even if they are not all described here, I did of course use them). The customer is thrilled by how easily and fast we can migrate material data (which is important, because without material master data there are no orders, no purchases, no stocks, etc.).

Discussing the efforts

Yes, one may ask why I put so much effort into such a simple thing as the migration of material data. Well, it isn’t as simple as it seems, and the quality of this data is underrated – I often saw duplicates, missing weights, etc. And – we shouldn’t forget this important fact – this was really fun software development, kind of a playground, because I had the chance to work alone: I defined the requirements, wrote the concept, developed the data model, wrote the code and tested it, and afterwards I could say: this is completely MY application. Nobody else laid a hand on it, and I never had to hurry, because the rough concept in my head was finished before the migration project started. And in each following migration project I was a little bit proud, because now we have a standard process for this, and we are able to do a migration within 2-3 days without being in a hurry.

I hope you had fun reading this, and perhaps you even learned a bit 😉 If you have any questions, suggestions for improvement, comments or anything else, feel free to leave a comment.

Disclaimer: English ain’t my mother tongue – although I do my very best, some things may be unclear, mistakable or ambiguous by accident. In this case, I am open to improving my English by getting suggestions 😉




  1. Christian Punz

    Hi Ralf,

    Though I understand your intention to solve these types of problems, I have to say that I did my (customer’s ;-) migrations with LSMW. Complex objects like production orders were handled by Z-programs. Yes, there are some drawbacks to LSMW, but nobody (including me) wanted to think about a different way of migrating the objects. We took LSMW objects from former projects and honed them to fit the new requirements.

    Anyway, very interesting article!



    1. Ralf Wenzel Post author

      I offered a few ways, including eCATT and LSMW – and they did migrate customer data via LSMW – but the head of material master data wasn’t flexible in his opinion. He wanted an ABAP, and now he got an ABAP, but a reusable one.

      Anyway, he is very happy with this solution, and I could show that my concept works very well 😉

  2. Alejandro Jacard Plubins

    Since you have so many migration projects, have you considered evaluating SAP Data Services or the SAP Landscape Transformation products?

    The first is an ETL tool, but it has many default connectors for SAP master data using IDocs; the second is an advanced tool initially used only by SAP internal consultants when dealing with SAP server mergers or company mergers.

    In both cases, you can have people specialized in data cleansing review the quality of the information before loading it into the destination system.

  3. Ralf Wenzel Post author


    Recently, I improved the framework so that I no longer need project-specific logic, but only customizing entries (bundled in a well-documented view cluster). The first customizing-only migration worked very well!

