Whether you are rolling out a new SAP deployment, or performing a Continuous Improvement Review of an existing solution that has been up and running for a few years, there are three areas that can make or break your Super Users over the lifespan of the solution. It can be difficult to decide how to configure SAP QM, especially during new deployments. That is why it can be helpful to use a quantitative matrix to rank configuration options based on their long-term usability.
SAP QM is a complicated beast, and certainly there are many aspects of it that will not be covered here. However, the three areas discussed below will influence the majority of ongoing maintenance work that must be carried out by your Super Users. As you examine the three steps below, consider that a site using the least efficient configurations – assuming a high level of new materials and specification changes – could spend up to 150 man-days annually per plant to maintain the solution, whereas sites with the most efficient solution could drop to an estimated 20 annual man-days globally.
1) Inspection Plan Setups
If you have decided to use Inspection by Task List, you will realize there are several options for how to direct your Super Users to set up the data. The options range from the very specific (every material gets a unique inspection plan in each plant) to the broad-sweeping (use Reference Operation Sets as templates for common test scenarios). The option you choose will have effects that stretch far beyond Go Live Day, so it is important to consider how users will maintain the data in the future.
When deciding how to set up Master Data, consider all of your options and business requirements, and then rank them in terms of long-term usability. For example, in the matrix below I have presented four common SAP QM options for setting up Inspection Plans and scored them against three criteria. As in golf, the low score wins.
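To make the ranking exercise concrete, it can be sketched as a simple scoring table. The option names, criteria, and scores below are hypothetical placeholders, not the values from my matrix – substitute your own options and weightings.

```python
# Illustrative scoring matrix for Inspection Plan setup options.
# All option names and scores are hypothetical; as in golf, the
# lowest total wins.
OPTIONS = {
    "Unique plan per material per plant":  {"setup_effort": 4, "maintenance": 4, "change_impact": 4},
    "One plan per material, all plants":   {"setup_effort": 3, "maintenance": 3, "change_impact": 3},
    "Group plan for similar materials":    {"setup_effort": 2, "maintenance": 2, "change_impact": 2},
    "Reference Operation Set as template": {"setup_effort": 1, "maintenance": 1, "change_impact": 2},
}

def rank_options(options):
    """Return (name, total_score) pairs sorted lowest (best) first."""
    totals = {name: sum(scores.values()) for name, scores in options.items()}
    return sorted(totals.items(), key=lambda item: item[1])

for name, total in rank_options(OPTIONS):
    print(f"{total:2d}  {name}")
```

With these invented scores, the Reference Operation Set template ranks best and the fully unique plans rank worst, matching the general direction of the argument; your own business constraints may reorder the middle of the table.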
I have assigned scores to each of these, but your own rankings may differ based on your particular business needs. For example, assigning a single inspection plan to one material across different plants may not work if the plants use different units of measure. Likewise, Reference Operation Sets could be very useful if certain tests are performed across material types (for example, all Finished Goods have Sensory Testing) and can also reduce the number of Super Users involved by allowing for centralized control. But they may not be very beneficial where there is little commonality across the scope of materials undergoing Quality testing. For most businesses, complexity will require a mix of all of these options, incorporating the most efficient option first and the less efficient only when necessary. As you can see, the bottom line is: when you increase the uniqueness of inspection plans, you also increase the amount of time Super Users have to spend maintaining them.
2) Master Inspection Characteristic Setups
The next step is to figure out how to set up Master Inspection Characteristics (MICs). You can set up all data in the MIC in advance, or allow users to enter data later in the Inspection Plans. Each option has pros and cons. If the MICs are set up as Reference Characteristics, they are effectively locked down and cannot be edited in the Inspection Plan. This gains efficiency when setting up plans, because users simply insert the MIC and move on without entering any further data. However, it also means that any time a test specification changes, a new MIC must be created. If you are also linking these to Class Characteristics, this is an added task – the user must not only create an MIC, but also a Class Characteristic, and assign the material and the characteristic to a Batch Class. This is where you lose efficiency.
I have set out below a matrix on MIC setups, broken into four categories. As always, your scores may differ based on specific business requirements.
A careful look at the types of tests to be performed will yield the best efficiency. Breaking these tests into buckets can help identify which tests are common enough to be set up as either Reference Characteristics or incorporated into a Reference Set. Other tests that are common but have slight variations may be best as a Complete Copy Model. These MICs have specifications and control indicators present, but the user can change them in the individual Inspection Plans, and changes to the MIC master data do not cascade to the plans. The Incomplete Copy Model is the most flexible, but also requires the most work at creation, as users must enter all specifications every time they use it. In an ideal world, the person creating the MICs would determine which bucket a test falls into and make the most appropriate type of MIC. This is why educating your Super Users on system functionality can pay dividends in the long run.
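The bucketing logic above can be sketched as a small decision function. The two questions and the rules here are my own simplification for illustration, not a definitive SAP rule – real decisions will also weigh classification needs and change frequency.

```python
# Hypothetical decision helper mirroring the three buckets described
# above: fully standard tests become Reference Characteristics,
# common-but-varying tests become Complete Copy Models, and
# everything else is an Incomplete Copy Model.
def choose_mic_type(test_is_common: bool, spec_varies_by_material: bool) -> str:
    if test_is_common and not spec_varies_by_material:
        # Locked down: spec lives in the MIC, not the plan.
        return "Reference Characteristic"
    if test_is_common and spec_varies_by_material:
        # Defaults present, but editable in each Inspection Plan.
        return "Complete Copy Model"
    # User enters all specifications every time the MIC is used.
    return "Incomplete Copy Model"
```

For example, a Sensory Test run identically on all Finished Goods would land in the first bucket, while a pH test with material-specific ranges would land in the second.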
One point often considered when deciding which type of MIC to set up is how easily a Mass Replace could be carried out in the event of a specification change. For this reason, some may be inclined to use Reference Characteristics, because they are the only ones that can be mass replaced via QS27. However, I have rarely found a Mass Replace to be viable in real life. Often a specification changes for one material only. For example, just because the pH range changes for Vanilla Flavor, this doesn't mean the pH has changed for Mango Flavor. Yet these two materials may be using the same MIC. If Vanilla Flavor is the only change, then changing the Quantitative Data in the Inspection Plan actually takes less effort than creating a new MIC and replacing it. In fact, Mass Replace should not be a consideration at all unless the business plans on using one MIC in many plans (20+) and changes to that MIC will apply to all materials equally. Even in this case, it may be more efficient to incorporate a Reference Operation Set instead. Changes to the Reference Set act like a Mass Replace, but also offer more flexibility in that tests can be added or removed, and work centers can be changed.
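The break-even reasoning can be made explicit with back-of-the-envelope arithmetic. All effort figures below are invented for illustration; measure your own users before drawing the line, and remember the threshold only matters at all if the change genuinely applies to every material using the MIC.

```python
# Back-of-the-envelope comparison: edit each Inspection Plan directly
# versus create a new MIC and mass replace it (e.g. via QS27).
# All effort figures (in minutes) are hypothetical placeholders.
EDIT_ONE_PLAN = 5        # change Quantitative Data in a single plan
CREATE_NEW_MIC = 30      # create the replacement MIC
RUN_MASS_REPLACE = 15    # execute the mass replace itself

def direct_edit_cost(num_plans: int) -> int:
    """Total effort to edit every affected plan by hand."""
    return num_plans * EDIT_ONE_PLAN

def mass_replace_cost(num_plans: int) -> int:
    """Roughly flat cost regardless of how many plans are touched."""
    return CREATE_NEW_MIC + RUN_MASS_REPLACE

def break_even_plans() -> int:
    """Smallest plan count where mass replace beats direct edits."""
    n = 1
    while direct_edit_cost(n) <= mass_replace_cost(n):
        n += 1
    return n
```

With these invented numbers the break-even sits around ten plans; slower plan edits or faster MIC creation would move it, which is why the one-material Vanilla Flavor change is almost always cheaper to make directly in the plan.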
Of course, there are scenarios where it may be necessary to set up a Reference Characteristic because it is linked to a Class Characteristic. This typically happens when another area outside of QM requires access to test results, which can then be stored as Batch Data. But setting MICs up this way should not be the norm, as it creates a lot of unnecessary complications for users down the line. It also increases the number of Super Users involved, as the volume of new MICs being created will make centralized control implausible.
3) Certificate Profile Setups
There are only two considerations I will discuss around Certificate Profiles: whether to use MICs or Batch Characteristics. The COA can pull data from either the MIC (inspection lot) or the Batch Characteristic (batch data). Using the MIC is clearly the most efficient, because tests can be added by simply copying from an Inspection Plan. Batch Characteristics, by contrast, must all be entered into the Certificate Profile manually, which can be very time consuming. However, there are a couple of reasons why users decide to use Batch Characteristics to report test results:
- When the COA is printed, it will look for data in the plant that is sending the delivery. If the delivery is being sent from a plant code for a third-party warehouse or a distribution center, SAP will not find an inspection record. By having the inspection data stored in the batch data you can print the COA no matter which plant is sending the delivery to the customer. However, if you look back to our scores on Reference Characteristics linked to Class Characteristics, you can see these are incredibly inefficient and will result in lifelong suffering for your Super Users. It may be better to achieve this aim through one-time ABAP customization.
- Users can edit the results printed on the COA by changing them in the batch data, whereas MIC test results cannot be edited once the Usage Decision is made on an inspection lot. You may see this as either a plus or a minus. From my point of view, it is more of a liability, as the official inspection record results would not match the COA issued to the customer. However, for some purposes it could conceivably be an advantage.
I have included three considerations in the matrix below:
Bear in mind, you can use a mix of these options. As with MICs in the previous section, it is best for users to understand the appropriate place for each option. If you want to print a particular result from the batch data, set up that characteristic to pull from Batch Characteristics. This does not prevent all the other characteristics in the Certificate Profile from linking to MICs.
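The mixed profile can be pictured as a per-characteristic data-source flag. This is a conceptual sketch only – the characteristic names and the two-value source field are my own shorthand, not SAP field names.

```python
# Conceptual sketch of a Certificate Profile that mixes data sources:
# each characteristic declares where the COA pulls its result from.
# Names and the source labels are illustrative, not SAP fields.
from dataclasses import dataclass

@dataclass
class CoaCharacteristic:
    name: str
    source: str  # "MIC" (inspection lot) or "BATCH" (batch data)

profile = [
    CoaCharacteristic("pH", "MIC"),         # copied from the Inspection Plan
    CoaCharacteristic("Viscosity", "MIC"),
    CoaCharacteristic("Moisture", "BATCH"), # printable from any shipping plant
]

batch_sourced = [c.name for c in profile if c.source == "BATCH"]
```

Only the characteristics you genuinely need from batch data carry the heavier Batch Characteristic setup cost; the rest stay on the efficient MIC path.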
I hope this guide assists in understanding the three basic building blocks used when setting up inspections in SAP QM. We have considered only Inspection by Task List here, but it may also be worth investigating the use of Material Specifications, which is beyond the scope of this article. If your existing solution scores very high on each of the above matrices, I hope you will take some time to quantify the efficiencies that could be gained by changing your practices. The one-time pain of performing a Continuous Improvement Review, and possibly customizing a program, could save many man-hours of work, and you will have much happier Super Users as a result.
Cross-posted on LinkedIn