Forecast Volatility Measurement with SAP IBP
This Forecast Volatility topic is aimed at Demand Planners and IBP practitioners. I will showcase how organizations can capture forecast changes cycle over cycle and explain why this matters.
Forecast Quality is typically measured by calculating Forecast Error and Bias. I recommend that enterprises also measure forecast volatility: how the forecast for a given month evolved across historical planning cycles.
Best practice is to capture planning notes whenever a significant change is made to consensus demand and to automatically alert the downstream stakeholders: supply planners or Finance. Such planning notes should record both the reason for the large change and why it could not be anticipated in earlier planning cycles.
The Finance review step should include a Cost to Serve impact analysis for all significant forecast changes that fall within lead time. For example, we might be forced to buy materials at premium prices or incur expedited shipping charges.
Let’s quickly revisit the concept of lag snapshots. In the January 2021 planning cycle, we will have forecast values from January 2021 onwards. Lag is defined as the difference between the month being forecasted and the month of the planning cycle. So, for the forecasts generated in the January 2021 cycle, the January 2021 forecast is lag 0, the February forecast is lag 1, the March forecast is lag 2, and so on. Keep in mind that we will also have a forecast for February in the February cycle, which will then be lag 0, as opposed to lag 1 here.
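The lag arithmetic above can be sketched in a few lines of Python. This is purely illustrative (in SAP IBP, lag snapshots are configured as snapshot key figures, not computed in code); months are represented here as `datetime.date` values:

```python
from datetime import date

def forecast_lag(cycle_month: date, forecast_month: date) -> int:
    """Lag = number of months between the planning cycle and the month being forecasted."""
    return (forecast_month.year - cycle_month.year) * 12 + (forecast_month.month - cycle_month.month)

# January 2021 cycle forecasting January 2021 -> lag 0
print(forecast_lag(date(2021, 1, 1), date(2021, 1, 1)))  # 0
# January 2021 cycle forecasting March 2021 -> lag 2
print(forecast_lag(date(2021, 1, 1), date(2021, 3, 1)))  # 2
# February 2021 cycle forecasting February 2021 -> back to lag 0
print(forecast_lag(date(2021, 2, 1), date(2021, 2, 1)))  # 0
```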
Let’s define forecast volatility with an example. We have a monthly sales & operations planning process with a 2-year forecast horizon and monthly buckets. You can see the forecast values across cycles: we showed the January cycle to explain lags. You can also see the February cycle, March cycle, and so on. We have forecast data from all 12 planning cycles in 2021: this helps us evaluate how the forecasts have evolved for a given month, say December 2021. You can also see the corresponding monthly lag snapshots for each of these cycles, in line with the concept of lag snapshots explained earlier.
Forecast Volatility quantifies changes in the forecast across planning cycles. For example, we could calculate the % change in the current cycle (lag 0) vs. the previous cycle (lag 1). We calculate Forecast Volatility by finding the absolute difference between the consensus demand snapshots for lag 0 and lag 1. Then, we divide this absolute difference by the consensus demand snapshot for lag 1 to calculate Forecast Volatility %. We only calculate Forecast Volatility when we have values in both lag 0 and lag 1; otherwise, we leave it blank. You can see that we have an 89% change in forecast for the month of March 2021, comparing the forecast from the March cycle (lag 0) vs. the February cycle (lag 1). Also note that we did not calculate Forecast Volatility for the month of January 2021, as we do not have a value for the lag 1 consensus demand snapshot in this case.
We can also calculate Forecast Volatility for the current cycle (lag 0) vs. 2 cycles earlier (lag 2). In this case, we use the consensus demand snapshot for lag 2 instead of lag 1. You can see that we have a 41% change in forecast for the month of March 2021, comparing the forecast from the March cycle (lag 0) vs. the January cycle (lag 2).
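The calculation described above can be sketched as a small helper that works for any pair of lag snapshots. The numeric values below are illustrative assumptions, not figures from an actual IBP dataset:

```python
def forecast_volatility_pct(recent_snapshot, earlier_snapshot):
    """% change between two consensus demand lag snapshots.

    Returns None when either snapshot is missing (or the earlier one is zero),
    mirroring the rule of leaving the key figure blank in that case."""
    if recent_snapshot is None or earlier_snapshot is None or earlier_snapshot == 0:
        return None
    return abs(recent_snapshot - earlier_snapshot) / earlier_snapshot * 100

# Illustrative: a lag 1 snapshot of 100 units revised to 189 units at lag 0
print(forecast_volatility_pct(189, 100))   # 89.0
# No earlier snapshot available (e.g., the first planning cycle) -> left blank
print(forecast_volatility_pct(150, None))  # None
```

The same function serves lag 0 vs. lag 1, lag 0 vs. lag 2, or lag 2 vs. lag 4 comparisons; only the snapshots passed in change.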
Our design is flexible enough to compare any lag to any other lag. The choice of lag values should be driven by the average procurement, production, and transportation lead times for a given enterprise. Let’s say we take 2 months on average to procure raw materials, 1 month to produce finished goods, and 1 month to ship product from Plant to DC. We will need to trigger procurement in April and production in June to service demand in August. Let’s also assume we have a working capital constraint and cannot hold much in terms of inventory buffers for raw materials or finished goods. We then trigger production based on lag 2 demand, but buy raw materials based on lag 4 demand. If we substantially increase demand at lag 2 compared to lag 4, we may have production capacity, but will likely run into raw material availability issues. This could result in buying raw materials at premium prices and/or expediting shipping. There is an impact on Cost to Serve, as we scramble to crunch the 4-month lead time into 2 months to support the demand upside at lag 2.
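Working backwards through the lead times in this example is simple arithmetic. The month numbers and lead times below come from the scenario above; the helper function itself is a hypothetical sketch:

```python
def trigger_months(demand_month, ship_lt_months, prod_lt_months, proc_lt_months):
    """Work backwards from the demand month (1-12) through each lead time (in months)."""
    production_start = demand_month - ship_lt_months - prod_lt_months
    procurement_start = production_start - proc_lt_months
    return procurement_start, production_start

# Demand in August (month 8): 1 month shipping, 1 month production, 2 months procurement
proc, prod = trigger_months(8, ship_lt_months=1, prod_lt_months=1, proc_lt_months=2)
print(proc, prod)  # 4 6 -> procure in April, produce in June
# Equivalently: production commits at lag 2 (1+1) and procurement at lag 4 (1+1+2)
```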
See the Forecast Volatility trends for lag 2 vs. lag 4 across 2021. The average volatility is 20%, which is manageable, but we have two outliers: 53% in May and 35% in October. Quite clearly, we need to drill down into the root causes of such large changes in demand. It is acceptable to have significant changes if the external demand environment has undergone an unforeseeable shock. However, we need to conduct proper root cause analysis and make sure we do not have high forecast volatility due to issues in the sales forecasting process, such as the sales team logging opportunities late in CRM. We could also have other challenges, such as Machine Learning Gradient Boosting models overfitting the forecast due to an overly large maximum tree depth parameter. We have also included lag 0 vs. lag 1 and lag 0 vs. lag 2 for reference here. Forecast Volatility can be averaged up to Business Unit and Region level, just like forecast error or bias.
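Rolling product-level volatility up to Business Unit level could, for instance, be done as a demand-weighted average, analogous to how forecast error is often aggregated. The weighting scheme, product names, and figures below are assumptions for illustration:

```python
# Hypothetical product-level lag 2 vs. lag 4 volatility within one Business Unit
records = [
    {"product": "PROD-A", "volatility_pct": 10.0, "consensus_demand": 300},
    {"product": "PROD-B", "volatility_pct": 50.0, "consensus_demand": 100},
    {"product": "PROD-C", "volatility_pct": 20.0, "consensus_demand": 100},
]

# Weight each product's volatility by its consensus demand volume
total_demand = sum(r["consensus_demand"] for r in records)
bu_volatility = sum(r["volatility_pct"] * r["consensus_demand"] for r in records) / total_demand
print(round(bu_volatility, 1))  # 20.0
```

A simple (unweighted) mean is an alternative; weighting by volume keeps low-volume, high-volatility items from dominating the Business Unit figure.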
We recommend setting up alerts to capture large changes in the Sales Forecast or Statistical Forecast. Large changes in Consensus Demand should be documented with Planning Notes in SAP IBP: what changed and why we could not see this change coming. We recommend tracking Forecast Volatility at key lags, based on how far in advance of firm demand we need to commit to raw material procurement or finished goods production. Track large changes as supply chain cases and understand the impact on cost to serve.
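The alerting recommendation above amounts to a threshold filter over the volatility key figure. In practice this would be configured as a custom alert in SAP IBP; the 30% threshold and the monthly values below are illustrative assumptions:

```python
def volatility_alerts(volatility_by_month, threshold_pct=30.0):
    """Return the months whose Forecast Volatility % exceeds the threshold.

    Months with a blank (None) volatility are skipped, consistent with
    leaving the key figure blank when a lag snapshot is missing."""
    return [month for month, vol in volatility_by_month.items()
            if vol is not None and vol > threshold_pct]

# Illustrative lag 2 vs. lag 4 volatility for a few 2021 months
vol_2021 = {"Apr": 12.0, "May": 53.0, "Jun": 14.0, "Oct": 35.0, "Nov": None}
print(volatility_alerts(vol_2021))  # ['May', 'Oct']
```

Each flagged month would then get a Planning Note and a cost-to-serve review, per the process described above.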
As always, good luck with your endeavors in delivering exceptional customer success and value through your SAP IBP or S/4HANA powered supply chain transformation engagements.