DDMRP in S/4HANA was introduced with the 1709 release. With the 1809 release, the apps have been further enhanced with analytical functionality. In this blog I will cover the Demand-Driven Replenishment setup in SAP S/4HANA 1809, covering all five demand-driven components.
Components of DDMRP
S/4HANA Demand-Driven Replenishment is built on the underlying demand-driven components suggested by the Demand Driven Institute (DDI), including Embedded Analytics functionality.
To know more about the DDI components, please refer to my blog: https://www.linkedin.com/pulse/ddmrp-supply-chain-strategy-new-era-venkadesh-seetharaman/
Demand-Driven Replenishment supports inventory optimization across all three procurement types: Make (manufactured), Buy (purchased), and Transfer (stock transfer). We will see simple scenarios explaining Demand-Driven Replenishment in S/4HANA with these three procurement types in my series of blogs.
Demand-Driven Replenishment functionality in S/4HANA is managed through Fiori apps.
I. GUI Configuration - Prerequisite
MRP type ‘D1’ must be assigned to DDR-relevant materials in the material master.
Along with the D1 MRP type, lot-sizing procedure ‘H1’ (replenish to maximum stock level) is mandatory, so that stocks at the decoupling points are buffered up to the maximum level. With this combination, safety stock and reorder point become mandatory fields. Refer to the buffer chart for their significance.
2. Buffer Stock Positioning
As a prerequisite, it is mandatory to maintain the buffer profile maintenance table (PPH_BF_PRFLDET_V). The corresponding values from this table are fetched based on the output of the classification and fed as input for the buffer profile calculation. Multiple buffer profiles can be defined for the same plant with different values. The variability factor (based on demand variation) and the lead time factor (based on decoupled lead time) are crucial and depend on the business case.
From the 1809 version of S/4HANA, this can also be managed through the app “Buffer Profile Maintenance”.
3. Buffer Profile Assignment and Spike Maintenance
Multiple buffer profiles can be assigned to a plant with spike horizon parameters, as shown below in GUI table PPH_BF_PRFLASG_V.
Spike Horizon Constant
A constant, usually measured in days, which helps calculate the Order Spike Horizon when summed with the product of the Spike Horizon DLT Multiplier (SHM) and the Decoupled Lead Time (DLT).
Order Spike Horizon = (SHM x DLT) + SHC
Spike Horizon DLT Multiplier
A multiplicative factor that, when multiplied by the Decoupled Lead Time (DLT) and summed with the Spike Horizon Constant (SHC), helps calculate the Order Spike Horizon, which is used to identify order spikes.
Order Spike Horizon = (SHM x DLT) + SHC
Order Spike Threshold
A selected quantity that, in combination with the Order Spike Horizon, qualifies an order as an order spike.
Initially, the decoupled lead time is not available for D1 materials. The DLT becomes available only after the lead time classification has been run. Hence, in MD04 we initially see a spike horizon equal to the spike horizon constant alone.
We will see how the spike horizon is determined at the end of the lead time classification.
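To make the formula above concrete, here is a minimal sketch of the spike horizon calculation and spike qualification in Python. The function and parameter names (`shm`, `dlt`, `shc`, `spike_threshold`) are illustrative, not actual SAP field names, and the spike check is a simplified reading of the definitions above.

```python
# Sketch of the Order Spike Horizon formula and spike qualification.
# Names are illustrative, not SAP field names.

def order_spike_horizon(shm: float, dlt: float, shc: float) -> float:
    """Order Spike Horizon = (SHM x DLT) + SHC, in days."""
    return shm * dlt + shc

def is_order_spike(order_qty: float, days_out: float,
                   horizon: float, spike_threshold: float) -> bool:
    """An order qualifies as a spike when it falls within the spike
    horizon and its quantity exceeds the order spike threshold."""
    return days_out <= horizon and order_qty > spike_threshold

# Before lead time classification the DLT is unknown (treated as 0),
# so the horizon equals the spike horizon constant alone:
print(order_spike_horizon(shm=1, dlt=0, shc=1))   # 1 day
# After classification, with DLT = 40 days as in the example later on:
print(order_spike_horizon(shm=1, dlt=40, shc=1))  # 41 days
```

This matches the behaviour described above: until the DLT is classified, MD04 shows only the spike horizon constant.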
II. Configuration Apps
1. Schedule Product Classification
The significance of product classification lies in correctly identifying strategic inventory positions. The classification helps you identify appropriate settings for each selected planning object (material/plant) based on its characteristics. Materials/plants in DDMRP are classified based on goods issue value (ABC), BOM usage (PQR), and goods issue variability (XYZ).
ABC classification works on the Pareto principle: the products in inventory are segmented into three categories based on their goods issue value.
From the above graph we can see:
Category ‘A’ contains a small number of products that drive a significant share of the total inventory value,
Category ‘B’ contains a moderate number of products with a moderate share of the total inventory value,
Category ‘C’ contains a large number of products with a small share of the total inventory value.
Product classification with DDMRP in S/4HANA can be run at location level; the classification run checks past consumption against the thresholds for each product at the defined location.
Mandatory selection criteria in the app include the plant, the number of days in the past (depends on the business case), and the ABC threshold values in % (depend on the business case).
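As a rough illustration of the Pareto-based segmentation described above, the sketch below classifies products by cumulative goods issue value. The threshold percentages and sample data are illustrative assumptions; in S/4HANA they come from the app's ABC threshold parameters.

```python
# Sketch: ABC classification by cumulative goods issue value (Pareto).
# Thresholds and data are illustrative, not S/4HANA defaults.

def abc_classify(gi_values: dict, a_pct: float = 80.0, b_pct: float = 95.0) -> dict:
    """Rank products by goods issue value; products covering the first
    a_pct of cumulative value are 'A', up to b_pct are 'B', the rest 'C'."""
    total = sum(gi_values.values())
    ranked = sorted(gi_values.items(), key=lambda kv: kv[1], reverse=True)
    result, cumulative = {}, 0.0
    for product, value in ranked:
        cumulative += value
        share = 100.0 * cumulative / total
        result[product] = 'A' if share <= a_pct else ('B' if share <= b_pct else 'C')
    return result

print(abc_classify({'FG1': 700, 'FG2': 200, 'SFG1': 60, 'PACK1': 40}))
# {'FG1': 'A', 'FG2': 'B', 'SFG1': 'C', 'PACK1': 'C'}
```

The PQR and XYZ classifications apply the same threshold idea to BOM usage counts and demand variability, respectively.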
PQR classification is a technique used to segment the products in a location based on their BOM usage.
From the above graph we can interpret,
Category ‘P’: the product(s) with a high number of BOM usages in a location,
Category ‘Q’: the product(s) with a moderate number of BOM usages in a location,
Category ‘R’: the product(s) with a low number of BOM usages in a location.
The assignment of products to each category is based on threshold values defined in absolute numbers (depends on the business case).
Like ABC classification, XYZ classification uses past data from the defined evaluation period to infer future behaviour. The main aim of XYZ classification is to determine the spread (variability) in the demand (sales orders) for a product.
Category ‘X’: products with very low variation in demand are normally identified in region ‘X’,
Category ‘Y’: products with moderate variation in demand are normally identified in region ‘Y’,
Category ‘Z’: products with large variation in demand are identified in region ‘Z’.
With XYZ analysis, products are statistically evaluated to obtain the coefficient of variation of actual demand (sales orders, back orders, qualified order spikes). The products are then classified by coefficient of variation using the thresholds set for variability classification in the Fiori app.
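The coefficient-of-variation logic can be sketched as follows. The CV thresholds (0.25 / 0.75) are illustrative assumptions; in the Fiori app they are maintained as the variability classification thresholds.

```python
# Sketch: XYZ classification by coefficient of variation (CV) of past
# demand. Thresholds are illustrative, not S/4HANA defaults.
import statistics

def xyz_classify(demand_history: list, x_max: float = 0.25, y_max: float = 0.75) -> str:
    """Return 'X' (low), 'Y' (moderate) or 'Z' (high variability)
    based on CV = standard deviation / mean of historical demand."""
    mean = statistics.mean(demand_history)
    cv = statistics.pstdev(demand_history) / mean
    return 'X' if cv <= x_max else ('Y' if cv <= y_max else 'Z')

print(xyz_classify([100, 102, 98, 101]))  # steady demand  -> 'X'
print(xyz_classify([10, 200, 5, 150]))    # erratic demand -> 'Z'
```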
The Logic behind Classification
The main intuition behind running product classification is to segregate the products into one of the combinations based on demand variability by goods issue, goods issue value, and BOM usage.
Based on the above inputs, the SAP DDMRP analytics engine determines one of the below categories for the Make, Buy, and Transfer combinations.
2. Mass Maintenance of Products (DD)
The output of the above classification can be viewed directly in the Fiori tile “Mass Maintenance of Products (DD)”.
All the products in my BOM structure are displayed with a classification indicator, including those with MRP type ‘PD’. However, only the ‘D1’ materials are classified, since we ran the classification app for them earlier.
In this app we can mass-change values for products, including the MRP type, based on the classification indicator.
For example, based on the planner’s decision, a material with MRP type ‘PD’ can be changed to ‘D1’. The change in the app updates the material master.
3. Schedule Lead Time Classification of Products (DD)
Schedule lead time classification is the fourth classification for demand-driven products. This app determines the DLT (decoupled lead time) of a planning object. In this app, the lead time is classified based on thresholds for products of the categories Make, Buy, and Transfer.
The decoupled lead time is determined only after the ABC, XYZ, and PQR classifications have been run using the product classification app.
|Note: This is the reason why Schedule lead time classification is managed separately in a different app.|
Decoupled lead time is important for determining buffer positioning. The lead time is classified/compressed based on past process orders/purchase orders; if there is no past data, the DLT is determined from the lead time maintained in the material master.
The actual decoupled lead time is updated in the material master MRP 3 view as the total replenishment lead time after running this classification app.
In the above example, the decoupled lead time for material SFG1 is 40 days (10 days of SFG1 lead time + 30 days of PACK1 lead time).
Similarly, the lead time for SFG2 is 32 days (20 days of SFG2 lead time + 10 days of PACK2 + 2 days of GR time).
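The roll-up behind these numbers can be sketched as a recursion over the BOM: a decoupled item's DLT is its own lead time plus the longest chain of unbuffered components below it, and buffered (decoupled) components break the chain. The data below mirrors the blog's example; the structure and field names are illustrative assumptions, not the actual S/4HANA implementation.

```python
# Sketch: decoupled lead time (DLT) roll-up over a BOM.
# Buffered (decoupled) components break the lead-time chain.

def decoupled_lead_time(item, lead_times, bom, buffered) -> int:
    """DLT = own lead time + longest unbuffered component chain below."""
    children = [c for c in bom.get(item, []) if c not in buffered]
    longest = max((decoupled_lead_time(c, lead_times, bom, buffered)
                   for c in children), default=0)
    return lead_times[item] + longest

# SFG2's 22 days = 20 days production + 2 days GR time, per the example.
lead_times = {'SFG1': 10, 'PACK1': 30, 'SFG2': 22, 'PACK2': 10}
bom = {'SFG1': ['PACK1'], 'SFG2': ['PACK2']}
buffered = set()  # no decoupling points below these items yet

print(decoupled_lead_time('SFG1', lead_times, bom, buffered))  # 40 days
print(decoupled_lead_time('SFG2', lead_times, bom, buffered))  # 32 days
```

If PACK1 were itself buffered, SFG1's DLT would collapse to its own 10 days, which is the lead-time compression effect of placing a decoupling point.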
4. Spike Horizon after Classification Run
Order Spike Horizon = (SHM x DLT) + SHC
= (1 x 40) + 1 = 41 days
The end of the spike horizon is now 41 days out (i.e. 29.Apr.2019). Initially it was only 1 day, as the DLT was not known.
So, with this blog I would like to conclude by noting that the output of the above classification techniques is fed as input for the buffer proposal calculation, which will be covered in my next blog.