
Forecast error calculations in APO DP

This document describes the forecast error calculations in APO DP and the internal calculations behind them.

Applies to: industries that have implemented SAP SCM Demand Planning (release versions from 4.1…).

Authors: G.K. Radhakrishnan & Pravin Ramchandran

Author Bios


G.K. Radhakrishnan (APICS CPIM) is working as an SAP APO-DP consultant at Accenture. He has more than three years of consulting experience and seven years of domain experience in supply chain.


Pravin Ramchandran is working as an SAP APO-DP consultant at Accenture. He has more than five years of consulting experience and seven years of domain experience in supply chain.

Please have a look at the SAP help link below, which covers the formulas for the forecast errors.

http://help.sap.com/saphelp_46c/helpdata/en/a5/6320e843a211d189410000e829fbbd/frameset.htm

Introduction to error definitions:

MAD: The mean absolute deviation is the mean absolute difference between the forecast value and the historical value in the ex-post forecast.

MPE: The mean percentage error between the forecast value and the historical value in the ex-post forecast.

MAPE: The mean absolute percentage error between the forecast value and the historical value in the ex-post forecast.

MSE: The mean square error between the forecast value and the historical value in the ex-post forecast.

RMSE: The root mean square error between the forecast value and the historical value in the ex-post forecast.
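For reference, here is a minimal Python sketch of these measures over an ex-post horizon (an illustration, not SAP code; note that standard textbook formulas divide by N, while, as shown later in this document, APO divides by N+1):

```python
import math

def forecast_errors(actual, expost, divisor=None):
    """MAD, MPE, MAPE, MSE and RMSE for an ex-post forecast.

    divisor defaults to N (the textbook formula); pass len(actual) + 1
    to mimic the N+1 behaviour of APO shown later in this document.
    """
    n = divisor if divisor is not None else len(actual)
    diffs = [a - f for a, f in zip(actual, expost)]
    pct = [100.0 * d / a for d, a in zip(diffs, actual)]  # % error vs. actuals

    mad = sum(abs(d) for d in diffs) / n
    mpe = sum(pct) / n                       # signed: + and - errors net off
    mape = sum(abs(p) for p in pct) / n
    mse = sum(d * d for d in diffs) / n
    rmse = math.sqrt(mse)
    return {"MAD": mad, "MPE": mpe, "MAPE": mape, "MSE": mse, "RMSE": rmse}
```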

Selecting forecast errors

MAD is used for a low-volume / sporadic demand pattern, whereas MAPE is for a high-volume / fairly consistent and regular demand pattern.
MAPE is a relative measure, so if the volume of a product is very low, even minor errors in the forecast will show up as a huge percentage error. This may mislead the user.

MPE numbers may mislead planners because the measure keeps the sign (+/-): positive and negative errors net off against each other when they are added up, resulting in a smaller number.
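A two-line illustration of this netting effect (hypothetical numbers):

```python
pct_errors = [20.0, -20.0, 10.0, -10.0]                    # % errors with mixed signs
mpe = sum(pct_errors) / len(pct_errors)                    # 0.0 -> looks perfect
mape = sum(abs(p) for p in pct_errors) / len(pct_errors)   # 15.0 -> the real error size
print(mpe, mape)
```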

Mean square error (MSE) is a highly sensitive number (as shown in the Excel calculations) due to the squaring effect. Even a small increase in the error leads to a large increase in MSE. It can be used for premium products, where inventory and stock-out costs may be very high (A-class items in an ABC classification).
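A small illustrative sketch of the squaring effect: two error series with the same total absolute error, where one large deviation inflates MSE while MAD stays unchanged:

```python
errors_even = [10, 10, 10, 10]   # total absolute error 40, evenly spread
errors_spike = [1, 1, 1, 37]     # total absolute error 40, one large deviation

for errs in (errors_even, errors_spike):
    mad = sum(abs(e) for e in errs) / len(errs)
    mse = sum(e * e for e in errs) / len(errs)
    print(f"MAD={mad:.0f}  MSE={mse:.0f}")
# MAD is 10 in both cases, but MSE jumps from 100 to 343.
```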

Sample calculations  

The screen below was taken in October 2012, so we have history up to October 2012, and the statistical forecast is calculated from November 2012.

[Screenshot: planning book with history up to Oct 2012 and statistical forecast from Nov 2012]

The horizons for the same are shown below.

[Screenshot: forecast and history horizons]

As a seasonal model was used, we have an initialization period of 12 months.


The corresponding calculations are:

[Screenshot: Excel calculation of the forecast errors]

Error Total = sum of (differences of Actual and Ex-Post)
            = 539 (minor differences are due to rounding, which is explained later)

MPE = sum of (% differences of Actual and Ex-Post relative to Actuals) / (N+1)
    = (-24.90) / 13
    = -1.91554817731288

The function module takes N+1, as shown in the debug screen later.

MAPE = sum of absolute values of (% differences of Actual and Ex-Post relative to Actuals) / (N+1)
     = 124.902126305067 / 13
     = 9.60785587

MSE = sum of squares of (differences of Actual and Ex-Post) / (N+1)
    = 1093163 / 13
    = 84089.4615384615 (minor differences are due to rounding, which is explained later)

RMSE = SQRT(MSE)
     = SQRT(84089.4615384615)
     = 289.981829669484 (minor differences are due to rounding, which is explained later)
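The same arithmetic reproduced in Python, using the summary values from the calculations above (12 ex-post periods, divisor N+1 = 13):

```python
import math

n_plus_1 = 13  # 12 ex-post periods, but APO divides by N + 1

mpe = -24.90 / n_plus_1             # approx. -1.9154; the document's -1.91554...
                                    # comes from the unrounded internal inputs
mape = 124.902126305067 / n_plus_1  # 9.60785587
mse = 1093163 / n_plus_1            # 84089.4615384615
rmse = math.sqrt(mse)               # 289.981829669484
print(mpe, mape, mse, rmse)
```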

Internal calculations of the function module

The steps below are needed to understand the internal calculations of the forecast errors.

1. Load the selection in the planning book.

2. Go to SE37 and put a breakpoint in function module /SAPAPO/FCST_CALCULATE_ERRORS.

3. Type /h in the command window.

4. Click on Generate Univariate Forecast.

[Screenshot: Generate Univariate Forecast button in the planning book]

5. This will take you to the debugging screen.

Function module /SAPAPO/FCST_CALCULATE_ERRORS is used to calculate all the errors except MAD.

[Screenshot: debugger stopped in /SAPAPO/FCST_CALCULATE_ERRORS]

The function module takes N+1 as the number of periods. In this example we have an ex-post forecast for 12 periods, but all the error calculations are based on 13 periods.

The internally calculated values are shown below.

[Screenshot: internally calculated error values in the debugger]

Internal calculations from I_FCSTVIEW

[Screenshot: internal calculations from I_FCSTVIEW]

As shown in the above screen, the internal numbers are not rounded, for greater accuracy, but the ex-post forecast values seen in the planning book are rounded.
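A small sketch of why this rounding causes the minor differences noted above: recomputing the error total from the rounded planning-book values does not give exactly the same result as the internal unrounded calculation (illustrative numbers):

```python
actual = [110, 110, 110]
expost_internal = [101.4, 99.6, 100.4]   # unrounded internal ex-post values

exact = sum(a - f for a, f in zip(actual, expost_internal))             # 28.6 (up to float precision)
from_book = sum(a - round(f) for a, f in zip(actual, expost_internal))  # 29
print(exact, from_book)
```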

The relevant notes to correct the forecast error calculations are:

SAP Note 1616763 – Incorrect forecast error calculation

SAP Note 1746920 – MAPE error calculation incorrect


16 Comments
Former Member

      Hi,

Just to clarify for myself... this division by (N+1), is it OK? Or is it a bug that is corrected by the OSS notes you quote at the end of your text? I am missing whether you are documenting the bug or showing how the calculation works.

      regards,

      J.

Radhakrishnan G K (Blog Post Author)

Hi James, sorry for the delay in replying.

If we compare with the standard formulas available in statistics textbooks, then it is N, but APO takes N+1.

Notes 1616763 and 1746920 are other bug fixes in the error calculations.

The basic objective of the document was to help other DP consultants understand how these errors are calculated (especially the data rounding part) and how to debug the planning book.

As there are many settings in the univariate forecast tab, and many types of forecast models, settings and parameters, the results can vary and may need further debugging of the code.

      Hope this helps

      RK

Former Member

Maybe you should raise an OSS note about this. From a statistics point of view, one sometimes argues whether one should use 1/N or 1/(N-1), but I agree that 1/(N+1) is most probably wrong.

Even model ranking will go wrong most of the time, as different models have different ex-post horizons...
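A small sketch of that ranking distortion: with the same per-period error, the N+1 divisor flatters the model with the shorter ex-post horizon (illustrative numbers):

```python
# Both "models" have the same 10% absolute error in every ex-post period,
# but different ex-post horizon lengths:
for n in (6, 12):
    abs_pct_errors = [10.0] * n
    mape_apo = sum(abs_pct_errors) / (n + 1)   # APO's N+1 divisor
    print(n, round(mape_apo, 2))               # 6 -> 8.57, 12 -> 9.23
```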

      Thanks for sharing,

      J.

Former Member

The N-1 denominator is used in sample statistics 🙂 A hard-to-explain concept, this comes from estimation theory: a sample statistic like the mean or standard deviation, when divided by N-1 (and passing the hypothesis test), is a better estimate of the population statistic.

      N+1 may have similar connotations in Deutsch 🙂

      GK / Pravin: Thanks for this nice document.

I was trying to evaluate a method to calculate a "post facto" forecast error rather than an ex-post error. What is the best way to do it? I did it using macros in the data view, but this is inefficient and slow, and also not a consistent approach.

I want to measure the forecast of period P created in period P-n against the actual sales of P, where this time difference is variable based on material lead times (from the material master). This means that for some materials that I am forecasting (directly or through disaggregation, e.g. from a material attribute forecast), I would like to measure the "final" demand plan (the one being released to SNP) created in period P-5 against sales in period P, but for other materials I would like to measure the error of the forecast created at the start of P-1 against sales in P. In other words, I want to respect the lead times of the materials when considering the "firm" horizon of a forecast key figure (any one key figure that represents the final demand plan, however it is derived, with or without a model).

This is for management reporting and for measuring the sales and demand planners' efficiency and wisdom, not the model's accuracy per se.

      Appreciate a tip or two.

Radhakrishnan G K (Blog Post Author)

      Hi Borat

      One possible solution for your issue can be as follows:

1. Extract the forecast every week from the APO DP planning area to BW and store it in an InfoCube.

2. Add a "release week" (or "extracted in week") field and a "lag number = calendar week - release week" field in the data flow, so that we know when the forecast was extracted and the lag number.

3. Create a MultiProvider and add the forecast cube and the sales cube.

4. Create a BEx report on this MultiProvider.

5. The BEx report should have two inputs:

a) Release week or lag number

b) Calendar week

This way it will filter the forecast data based on lead time (lag number), or simply based on release week, pull sales from the sales cube, and use a few calculated key figures for the errors. A sketch of the lag logic follows below.
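A minimal pandas sketch of that lag logic (hypothetical table and column names; in the real solution this lives in the BW data flow and the BEx report):

```python
import pandas as pd

# Weekly forecast snapshots as they would be stored in the InfoCube
forecast = pd.DataFrame({
    "product":      ["A", "A", "A"],
    "calweek":      [10, 10, 10],        # week being forecast (simplified week index)
    "release_week": [5, 8, 9],           # week the snapshot was extracted
    "forecast":     [120.0, 110.0, 105.0],
})
forecast["lag"] = forecast["calweek"] - forecast["release_week"]

sales = pd.DataFrame({"product": ["A"], "calweek": [10], "sales": [100.0]})

# Filter on the desired lag (e.g. the material's lead time), then join the sales
lag_filter = 5
merged = forecast[forecast["lag"] == lag_filter].merge(sales, on=["product", "calweek"])
merged["abs_pct_error"] = 100 * (merged["forecast"] - merged["sales"]).abs() / merged["sales"]
print(merged)  # note: real calendar-week arithmetic needs care at year boundaries
```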

Former Member

      Thanks GK, James,

I actually proposed the same verbatim to the client this morning 🙂 though I wasn't entirely sure myself, especially because of the variable lead time. The lag idea, though approximate, is a good one. I will add a third dimension of lead time from the material master too. Let me see how this shapes up; I will post my outcome here.

I much appreciate this reassurance.

      Thanks once again

      Borat

Radhakrishnan G K (Blog Post Author)

      Hi Borat

One option for the variable lead time can be to use an InfoSet. This is another BW virtual provider, just like the MultiProvider.

Assumptions for the solution below: the forecast is extracted on a weekly basis to BW, and the lead time is converted into weeks and stored in an InfoObject.

       

      Steps:

1. Create an InfoObject which holds the product number and the lead time in weeks. The lead time needs to be converted by ABAP code in the data modeling.

2. Create an InfoSet on the forecast cube in BW (from the earlier steps) and the InfoObject from step 1. Link the lead time with the lag number in the InfoSet.

Now the InfoSet will filter the data to only those entries where lag = lead time.

3. Create a MultiProvider on this InfoSet plus the sales cube.

4. Create a BEx report which has only the calendar week as input, as the InfoSet will filter out all the extra data. A sketch of this join follows below.

Known issues: if the data volume is very large, an InfoSet-based report may cause performance issues (long query execution).
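The same idea with a variable lead time per product, mirroring the lag = lead time join that the InfoSet performs (again a hypothetical pandas sketch):

```python
import pandas as pd

# Forecast snapshots with a precomputed lag (calendar week minus release week)
forecast = pd.DataFrame({
    "product":  ["A", "A", "B", "B"],
    "calweek":  [10, 10, 10, 10],
    "lag":      [5, 2, 5, 2],
    "forecast": [120.0, 105.0, 80.0, 95.0],
})

# InfoObject equivalent: lead time in weeks per product
lead_time = pd.DataFrame({"product": ["A", "B"], "lead_weeks": [5, 2]})

# The InfoSet join: keep only the snapshot whose lag equals the product's lead time
joined = forecast.merge(lead_time, on="product")
firm = joined[joined["lag"] == joined["lead_weeks"]]
print(firm)  # product A is measured at lag 5, product B at lag 2
```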

Former Member

Fantastic. Let me plagiarize this and sell it for 5000 dollars 🙂 I'll part with some when we meet. This will definitely work; I can visualize it.

Former Member

Check the new functionality of DP Customer Forecast Analysis (EHP1 or 2, I don't remember); I think it is more or less the idea behind the "waterfall analysis" (not sure, however; the description is a bit obscure).

Expert Radhgk shows the usual approach to do this. BW is the best choice, as with macros etc. you would need to store your whole forecast at least as many times as there are periods in your lead time, requiring too much space in liveCache.

      regards,

      J.

Former Member

      As someone once said, when the difference between 1/N and 1/(N-1) matters, then you are no longer truly doing statistics (too few data points)...

Ada Lv

      A nice blog post~

Former Member

I have a situation where I am running automatic model selection with the Parameters tab set to MAD as the error measure, and in the Messages tab the system identifies that the season test is positive and the trend test is negative. But the system still selects the constant model. Why does this occur? Has anyone observed this behavior?

If I change the error measure to MAPE, then the automatic selection selects the seasonal model. The MAPE error is smaller than the MAD error.

Former Member

I haven't tested it, but here is the science as I understand it.

MAD and MAPE are completely different measures.

I am pasting a statement from a document written by a learned man outside of the SAP world. Here is what he says comparing RMSE to MAPE; you can apply the same intuition to MAD versus MAPE.

In general, the standard deviation of the noise terms grows as the square root of the number of periods being aggregated. Forecast RMSE grows as the square root of the number of periods being aggregated, and the MAPE falls as the inverse of the square root of the number of periods being aggregated.

As a rough rule of thumb, for example, if we compared weekly forecasts to monthly forecasts, we would expect the monthly results to have twice the RMSE and half the MAPE.
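A quick simulation of this rule of thumb, under the simplifying assumption of independent noise around a constant level (a sketch, not the quoted author's method):

```python
import numpy as np

rng = np.random.default_rng(42)
base, sigma, n_weeks = 1000.0, 100.0, 4 * 50_000  # long horizon to stabilize the averages

actual_w = base + rng.normal(0.0, sigma, n_weeks)  # weekly actuals = level + iid noise
fcst_w = np.full(n_weeks, base)                    # weekly forecast = the level

# Aggregate 4 weeks into one "month"
actual_m = actual_w.reshape(-1, 4).sum(axis=1)
fcst_m = fcst_w.reshape(-1, 4).sum(axis=1)

def rmse(a, f):
    return float(np.sqrt(np.mean((a - f) ** 2)))

def mape(a, f):
    return float(np.mean(np.abs((a - f) / a))) * 100

print(rmse(actual_m, fcst_m) / rmse(actual_w, fcst_w))  # ~2.0  (= sqrt(4))
print(mape(actual_m, fcst_m) / mape(actual_w, fcst_w))  # ~0.5  (= 1/sqrt(4))
```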

Satish Waghmare

Thanks a lot for such valuable information, G.K. Radhakrishnan & Pravin Ramchandran. Keep it up 🙂

      Thank you

      Satish Waghmare

Christopher Smith

      Great article guys, I keep referring to this time and again... thanks for the gift that keeps giving.

Christopher Smith

      Great blog post!