
Here you can find the results of an investigation I have done on the SAP BPC write-back parameters that can be maintained as model parameters for Planning and Consolidation in transaction SPRO (a general description can be found in the BPC admin guide).

RECLEVEL_NR specifies the maximum number of records to be written for which record-based locking is always applied.


This is quite important, because a wrong setting of this parameter can affect performance and overload the enqueue server by generating a huge number of locking selection ranges.

If the number of submitted records in a BPC input form or in the result of a Data Manager package exceeds the value of RECLEVEL_NR, we get two different behaviours:

  1. NON-SPARSE records
  2. SPARSE records

In theory, the choice between these two scenarios should depend on the parameter SPARSITY_COEF, but in the implementation I checked, the sparse behaviour was never possible, no matter what value SPARSITY_COEF has.

[Screenshot: cl_ujo_writeback check_concurrency_lock.JPG]

This screenshot is taken from BPC release 801, SP level 12. As a result of these two scenarios we can have:

  • record-based locking (the same we have when RECLEVEL_NR is greater than or equal to the number of submitted records)
  • a BETWEEN range locking: instead of adding a record for every member to lock in a given dimension, we get a single record defining an interval from the lowest to the highest member. This means we avoid overloading the server with a huge locking selection, but on the other hand we risk locking too many members, especially in a sparse situation.
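The branching described above can be sketched in a few lines of Python. This is an illustrative model only, not the actual ABAP implementation, and the function and return values are my own names:

```python
def locking_mode(num_submitted_records, reclevel_nr=10):
    """Illustrative model of the write-back locking decision.

    If the number of submitted records stays within RECLEVEL_NR,
    record-based locking is applied; otherwise the system falls back
    to BETWEEN range locking (assuming the non-sparse scenario).
    """
    if num_submitted_records <= reclevel_nr:
        return "record-based"
    return "between-range"

# A small input form stays record-based; a large Data Manager
# result set switches to range locking.
print(locking_mode(5))      # -> record-based
print(locking_mode(40000))  # -> between-range
```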


NON-SPARSE records

The locking selection is created using the BPC parameter INTERVAL_NR. From the BPC admin guide we know that:


“In the situation where record level locking is not being implemented and the data set being saved is NOT sparse, any dimensions with less than this number of distinct member values in the dataset will be locked using their single values. If the dimension has more than this number of records, the range between the low to high values will be locked.”


So if the number of distinct members of a dimension is less than or equal to INTERVAL_NR, record-based locking for that dimension will be implemented; otherwise the locking will be done using an interval bounded by the lowest and highest member.

E.g.

If I am going to write 3 members for Time (2016.01, 2016.02 and 2016.12) and INTERVAL_NR is 2, the interval will lock every Time member from 2016.01 to 2016.12. As a consequence:

  • I have generated just 1 record instead of 3 in the locking table (a positive thing)
  • I have locked 12 months instead of the 3 affected by the change (a negative thing)
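The Time example can be reproduced with a small Python sketch of the per-dimension selection logic (again my own simplified model, not the actual ABAP code):

```python
def dimension_lock_selection(members, interval_nr):
    """Build the lock selection for one dimension (simplified model).

    With at most INTERVAL_NR distinct members, each member is locked
    individually (one 'EQ' record per member); otherwise a single
    'BT' (BETWEEN) record spans the lowest to the highest member.
    """
    distinct = sorted(set(members))
    if len(distinct) <= interval_nr:
        return [("EQ", m) for m in distinct]
    return [("BT", distinct[0], distinct[-1])]

# 3 Time members with INTERVAL_NR = 2: only one locking record is
# generated, but the whole range 2016.01..2016.12 gets locked.
print(dimension_lock_selection(["2016.01", "2016.02", "2016.12"], 2))
# -> [('BT', '2016.01', '2016.12')]
```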


SPARSE records (actually, in my current version, this is never implemented).

The locking selection is created using the BPC parameter MULTIPLY_COEF.


“In the situation where record-level locking is not being implemented and a sparse data set is being saved, this value specifies the maximum number of members for which you can implement record level locking (that is, when to swap to using a BETWEEN range in the lock table).”


In this case the range selection is applied by considering the number of members in every dimension.


E.g. 

See the example below, where the model has 4 dimensions and MULTIPLY_COEF is equal to 15. Please note that the table is sorted by the number of members in each dimension: Category (1 member) comes first and Account (7 members) comes last.

[Screenshot: img2.JPG]
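Here is how I read the admin-guide description, as a speculative Python sketch. The cumulative-product interpretation is my own assumption, and as noted, this sparse branch is never reached in the release I checked:

```python
def sparse_lock_plan(dim_members, multiply_coef):
    """Speculative model of the sparse-case decision (MULTIPLY_COEF).

    Dimensions are sorted ascending by distinct member count. While
    the running product of member counts stays within MULTIPLY_COEF,
    the dimension keeps single-value locks; once the product exceeds
    it, the remaining dimensions switch to BETWEEN ranges.
    """
    plan = {}
    product = 1
    for dim, members in sorted(dim_members.items(), key=lambda kv: len(set(kv[1]))):
        product *= len(set(members))
        plan[dim] = "single-values" if product <= multiply_coef else "between-range"
    return plan

# 4 dimensions, MULTIPLY_COEF = 15, sorted CATEGORY (1) .. ACCOUNT (7):
# running products 1, 3 and 15 stay within 15, while 105 exceeds it,
# so only ACCOUNT falls back to a BETWEEN range.
model = {
    "CATEGORY": ["Actual"],
    "TIME": ["2016.01", "2016.02", "2016.03"],
    "ENTITY": ["E1", "E2", "E3", "E4", "E5"],
    "ACCOUNT": ["A1", "A2", "A3", "A4", "A5", "A6", "A7"],
}
print(sparse_lock_plan(model, 15))
```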

Conclusion

I would advise keeping the parameter RECLEVEL_NR at its default value of 10 and not increasing it.

The type of locking implemented will then depend on the parameter INTERVAL_NR (I assume that we are always in a NON-SPARSE scenario).

Impact on parallelisation

When a parallel BPC process (RUNLOGIC_PH or the BPC parallel framework) is implemented, it is highly recommended to use a low value of RECLEVEL_NR.

RUNLOGIC_PH with a RECLEVEL_NR equal to PACKAGE_SIZE usually has some processes that fail because of an overload of the enqueue server.

Below you can see the results of some tests I have done on our BPC system:

  • RECLEVEL_NR = 40000 and INTERVAL_NR = 10 -> some processes failed. It took ~14 minutes.

  • RECLEVEL_NR = 10 and INTERVAL_NR = 10 -> succeeded. It took ~8 min 30 s.

  • RECLEVEL_NR = 10 and INTERVAL_NR = 1000 -> succeeded. It took ~8 min 40 s.


4 Comments


  1. Gersh Voldman

    Hi Alessio,

    I completely agree with your recommendation of keeping both parameters to a minimum when running parallel processing. Unfortunately, this can lead to some “false” locking conflicts. For example, if you run parallel by quarters, then one process can lock FEB – MAR while another locks APR – MAY.

    My recommendation would be to define this minimum by estimating how many members can end up in one parallel process for the dimension on which you split the data. Increase that minimum by 20% for safety, or find out how fast that dimension can grow.

    Regards,

    Gersh

    1. Jef Baeyens

      In my opinion, this is all way too error prone for a financial system.

      BPC end-users should be able to keep full flexibility in data selections and still be able to benefit from performance enhancements using parallel processing without any locking conflicts or other errors.

      On one occasion, end users could trigger a package with only 1 cost center / 1 time member.
      On the next, end users might trigger a package with 200.000 cost centers and 48 time members.

      Conclusion: a BPC developer still cannot easily make script packages performant and at the same time failure-proof.

      1. Gersh Voldman

        You have to set it up for the worst-case scenario. You just have to know what it is.

        It doesn’t matter how many Cost centers or Time members the user selects. What matters is how many of them will be in each of the processes, and that depends on how you set up parallelisation.

        Conclusion: a BPC developer has to know what s/he is configuring. I think this blog helps in understanding those configuration issues.

  2. Christian Schoenbein

    Thanks for the blog post. That was quite helpful.

    Maybe two remarks:

    1. Since SAP Note 1923357, the selection is always set to non-sparse (see CL_UJO_WRITE_BACK=>CHECK_CONCURRENT_LOCK, line 81). That’s probably what you mean by “never implemented”.

    2. Since SAP Note 1739875, the default value for RECLEVEL_NR was increased to 40.000. So, for the scenarios described here, it will have to be lowered again.

