
Utilizing HANA Native Calculations with BPC 10.0 NW

As all of you probably know, BPC NW on HANA has been available for about two years now. Initially, the focus of BPC on HANA was on accelerating the reporting, which yielded some impressive results when compared to BPC NW on a classic database.

It was, however, always clear that the calculations would need to be accelerated as well in order to deliver on the vision of truly agile planning & closing processes, enabling customers to react faster to changing business conditions. This becomes even more important as Big Data enters the space, promising great benefits to our existing planning processes. Consequently, recent BPC service packs have brought significant enhancements with the acceleration of allocations and dimension formulas through HANA. In every project, however, there will still be at least one complex, sophisticated calculation that is at the heart of the process and can only be implemented with custom logic.

Customer Situation

At a customer project we had exactly that situation: several complex calculations that needed to run in default logic. One example was the detailed labor calculation, covering a large number of job families with different rates and overtime factors, and a large number of drivers such as shifts per day, coverage per operation, etc. To complicate things further, drivers and rates were planned at different levels in the hierarchy (e.g. some for each individual job family, others for groups of job families).

Since there were still uncertainties about the functional details of the calculation, we decided on an agile approach, going through several iterations while working closely with the end users for validation. For this, we chose Script Logic, since it was fast to write and easy to adapt throughout the iterations. We always anticipated that the Script Logic would ultimately struggle to meet the performance requirements, but once we had a calculation in place whose results matched the users' expectations, the script would still serve as a very detailed specification for the subsequent optimization work.

Once we got to this point, the most common approach would have been to re-implement the calculation logic in a custom BAdI in ABAP, which would read the data from the cube, loop over it, query additional reference data, calculate the result and save it back to the cube in the end.

However, since we were using BPC on HANA, we wondered whether we could achieve the same by implementing and executing the calculation directly in HANA, thereby leveraging HANA's massive parallelization and optimization while avoiding the round trip of bringing the data up to the ABAP layer and the result back down into the database.


Custom Native HANA Calculations with BPC

The good news is that since BPC 10.0 SP11 there has been an option to create a native HANA data model (SAP Note 1902743), originally meant to improve write-back performance and used internally, for example, for the allocations. As a side effect, the native HANA model also gives us the ability to interact directly with the BPC data model in the HANA database:

  • Queries are possible on the BPC-generated OLAP views for each cube ($BPC$V$OLAP_*)
  • Write-back is possible using a BPC-generated table for each cube that is periodically merged back into the OLAP cube. $BPC$P$* is the table, while $BPC$V$OLAP_*P provides an OLAP view on top. The write-back data is stored as a delta to the main cube.

These two resources allowed us to come up with an approach to implement our calculations using native HANA SQLScript that is triggered through an ABAP BAdI, which in turn is (as usual) triggered via Script Logic. The ABAP BAdI is also able to pass the calculation context (e.g. Entity, Category, Time) to the HANA procedure in order to limit the calculation’s data range.
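As a rough sketch (all object and parameter names here are hypothetical, not taken from the actual project), the interface of such a procedure could look like this, with the BAdI passing the calculation context as parameters:

```sql
-- Hypothetical SQLScript procedure skeleton. The ABAP BAdI calls it and
-- passes the calculation context so that only the relevant data range
-- is touched. All names are illustrative.
CREATE PROCEDURE "ZBPC_CALC_LABOR" (
    IN iv_entity   NVARCHAR(32),   -- current Entity
    IN iv_category NVARCHAR(32),   -- current Category
    IN iv_time     NVARCHAR(32)    -- current Time period
)
LANGUAGE SQLSCRIPT SQL SECURITY INVOKER AS
BEGIN
    -- 1. Query the cube data (OLAP view + un-merged write-back delta)
    -- 2. Perform the calculation in SQL / procedural SQLScript
    -- 3. Write the result delta back into the $BPC$P$* table
    SELECT :iv_entity, :iv_category, :iv_time FROM DUMMY; -- placeholder body
END;
```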

The HANA procedure queries the data through a calculation view that unions data from the BPC cube’s OLAP View as well as the Delta-Table, which includes the write-back data that hasn’t been merged back into the main table yet. The calculation is then performed using SQLScript relying either directly on SQL statements or the procedural features, such as loops.
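Reading the current state of the cube then boils down to a union of the two generated views. A sketch, with "PLANNING" standing in for the cube's actual technical name and a deliberately reduced column list:

```sql
-- Current cube state = main OLAP view plus the not-yet-merged delta.
-- Cube name and column list are illustrative.
lt_data = SELECT "ENTITY", "CATEGORY", "TIME", "ACCOUNT",
                 SUM("SIGNEDDATA") AS "SIGNEDDATA"
          FROM (
                SELECT "ENTITY", "CATEGORY", "TIME", "ACCOUNT", "SIGNEDDATA"
                FROM "$BPC$V$OLAP_PLANNING"     -- main cube
                UNION ALL
                SELECT "ENTITY", "CATEGORY", "TIME", "ACCOUNT", "SIGNEDDATA"
                FROM "$BPC$V$OLAP_PLANNINGP"    -- write-back delta view
               )
          WHERE "ENTITY"   = :iv_entity
            AND "CATEGORY" = :iv_category
            AND "TIME"     = :iv_time
          GROUP BY "ENTITY", "CATEGORY", "TIME", "ACCOUNT";
```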

At the end, the procedure calculates a delta between the result data and the current cube data and writes it back into the BPC-generated write-back table.
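In SQLScript terms, this final step could look roughly like the following sketch, where :lt_result would hold the calculated values and :lt_data the current cube state. All names are illustrative; note in particular that the generated write-back table actually keys the dimensions by SID rather than by member ID.

```sql
-- The $BPC$P$* table stores deltas, so write only the difference
-- between the calculated result and the current cube state.
-- (The real table uses dimension member SIDs, simplified here.)
lt_delta = SELECT r."ENTITY", r."CATEGORY", r."TIME", r."ACCOUNT",
                  r."SIGNEDDATA" - COALESCE(d."SIGNEDDATA", 0) AS "SIGNEDDATA"
           FROM :lt_result AS r
           LEFT JOIN :lt_data AS d
             ON  r."ENTITY"   = d."ENTITY"
             AND r."CATEGORY" = d."CATEGORY"
             AND r."TIME"     = d."TIME"
             AND r."ACCOUNT"  = d."ACCOUNT"
           WHERE r."SIGNEDDATA" <> COALESCE(d."SIGNEDDATA", 0);

INSERT INTO "$BPC$P$PLANNING"
    ("KEY", "ENTITY", "CATEGORY", "TIME", "ACCOUNT", "SIGNEDDATA")
    SELECT 0, "ENTITY", "CATEGORY", "TIME", "ACCOUNT", "SIGNEDDATA"
    FROM :lt_delta;
```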


Of course, relying on the BPC-generated data model in HANA has downsides: it is sensitive to technical name changes, and it effectively circumvents the BPC write-back security. The first point is addressed in the design by using the calculation views as an additional abstraction layer, and could be solved completely by enabling technical name stabilization for the BPC model. With regard to security, note that the calculation (including write-back) is restricted to the context the current user has been working on and is authorized for. If this is still a concern, it would always be possible not to write back the data directly, but to return it to the ABAP BAdI, which could then write it through the proper BPC processes, including security checks.

Benefits & Conclusion

By leveraging native HANA features for calculations, the approach provides additional performance benefits over the use of ABAP. We’ve used HANA as our primary calculation tool and have implemented different kinds of calculations, such as:

  • Data transformation and data movements from one cube to another
  • Driver-based calculations, as in the labor case mentioned above
  • Carry-forward calculations, where the opening balance of one month is determined by the closing balance of the previous month
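The carry-forward case, for example, maps nicely onto a window function instead of a month-by-month loop. A sketch with illustrative names:

```sql
-- Opening balance of each month = closing balance of the previous month.
-- LAG over the time series replaces the period-by-period loop that
-- Script Logic or ABAP would typically need. Names are illustrative.
lt_opening = SELECT "ENTITY", "ACCOUNT", "TIME",
                    LAG("SIGNEDDATA") OVER (
                        PARTITION BY "ENTITY", "ACCOUNT"
                        ORDER BY "TIME"
                    ) AS "SIGNEDDATA"
             FROM :lt_closing;
```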

None of our calculations ran for more than 1-2 seconds for query, calculation and write-back of one Entity, generating hundreds of thousands of records. And that is before any additional optimization of the SQL statements. Of course the Script Logic and ABAP layers add a small overhead, but that would apply equally if the logic were implemented in ABAP. Compared with our initial Script Logic implementation, the performance benefits were tremendous: one batch job ran in 5 seconds instead of 6 minutes, including all overhead.

Beyond performance, an additional benefit is that it allows leveraging advanced HANA functionality, such as predictive algorithms or Smart Data Access to other data sources, within our BPC planning processes. Combined with the performance improvements, this will be especially interesting for Big Data scenarios. And even though the approach can be used to speed up any long-running, complex calculation, it positions BPC on HANA as the planning tool of choice for these kinds of projects.

I’m planning to write a follow-up to this to include more technical information and also show a few code snippets we used. In the meantime, I’d be really interested to hear your opinion on this!


Obviously, nothing we do here is officially supported by SAP. Hence, just like most ABAP functionality that we commonly use to interact with BPC, it has to be re-checked as part of regression testing for every BPC update.

  • Really good information focuses on faster calculation capabilities in BPC on top of HANA..!! Will wait for your next technical article..Thanks. 🙂

  • Hi Philip,

    I am thrilled to read your post! This is something we at SanDisk are very eager to implement and have been looking for. We recently went live with BPC on HANA and are very heavy users of custom BADIs for our calculations. Our models are also very large, in the hundreds of millions of records.

    With HANA we are seeing tremendous performance improvements in our custom BADIs, but want to take the next step to push the calculations down to the HANA layer and get as close to real time as possible.

    We were thinking about approaching this with Calculation views built on top of the BPC HANA model (using F tables and dimension tables), and querying these views from the custom BADIs. But we were missing the mark on two fronts. First I had not realized that the un-merged records would have to be fetched separately. Secondly, we were still thinking of relying on the standard BPC write-back process with the BADI handing data back.

    I am very intrigued by your approach and would like to learn more.

    I will be at SAPPHIRE in June speaking about our experience with BPC on HANA and I would love to meet up with you if you are planning to be there as well. SanDisk would love to pursue this approach you have described here.


    Nitin Goel

  • Hi Philip,

    Thanks for sharing.

    I have been looking for speeding up custom BPC BADI on BPC on HANA. Really waiting for your next technical article!!!


    Karan Arora

  • Hi Philip,

    Did you get a chance to make further progress on this? I am very interested in seeing if someone was able to get this approach to a productive environment.



    • Hi Nitin,

      I’m not aware of any productive usage so far. We might leverage these techniques at a large implementation project at the same customer, but this is still in implementation.

      I’ll keep you updated.

      Did you actually try the approach? I’d be very interested in your experiences.



      • Hi. We have tried this approach but we rejected it, due to the fact that the views you mention in the blog are created by activating the parameters ENABLE_HANA_MDX & ENABLE_NATIVE_HANA_MODEL. For the moment this brings some huge disadvantages, as it is incompatible with the use of some important BAdIs and with reporting on several hierarchies. Have you found some kind of workaround? Thanks, Jorge

        • Hi Jorge,

          you’re right that there are some limitations to activating the two options, mostly:

          – You cannot use the writeback BAdI

          – You cannot have a report on members from multiple hierarchies in a single dimension

          From my experience, I wouldn’t see any of those as show-stoppers per se, but of course that depends on your specific requirements.

          So far I have no workaround and I don’t know if there are any plans to address these limitations from the product side. Depending on what you use the writeback BAdI for, maybe it’s worth investigating the possibility of using the native disaggregation functionality that comes with BPC on HANA.



          • Thanks for the answer.

            For the moment we’re just using it to bring some calculated data back to the AS, but we’re not writing it directly in the DB layer.

            I’ve just read in note 1904344 that at least the problem with reporting on members from multiple hierarchies is solved in revision 83 of the HANA DB.

            No news about incompatibility with BAdI.

            Regards Jorge.

  • Hi Philip,

    Many thanks for the information above – very interesting to see how this approach would improve performance.

    Are you in a position to share the example code that you used as you noted in your first post?

    I am creating a Proof of Concept in SAP BPC 10.1 on HANA and plan to use HANA SQLScript to benchmark performance gains. Similar to SanDisk, we will be implementing a high number of custom BAdIs to facilitate planning for 180,000+ workforce members.

    Many thanks,


    • Hi Nick,

      unfortunately, I haven’t had the chance to complete the deep dive document / blog post yet.

      I’m also not sure how far the sample code would actually get you, since most of the magic happens in the SQLScript, which highly depends on the kind of problem you’re trying to solve.

      In comparison, the ABAP layer is more reusable, and almost trivial to use. Below is what I shared with Nitin on one occasion:


      Let me know if you need more information. I would also be very interested in the results of your proof of concept.



      • Philip,

        Can you give me some tips on reading and writing data to dimensions/models within the HANA DB? Currently we have BAdIs. I am trying to implement all the logic, reading and writing, at the DB level.

  • Hi Philip,

    I saw your latest benchmarks between the BAdI and the HANA calculation, and they are very impressive. It confirms that this approach is very relevant when dealing with huge amounts of data.

    We have a customer who expects to run a calculation in the default logic when users save data. The calculation uses thousands of records (not such a huge volume) to calculate currency translation and intercompany eliminations, but it will be executed by many users at the same time (we may have about 500 concurrent users). Do you think that using SQLScript is still relevant in this case?



  • Hi Philip,

    Thanks for the great info provided here! I am working on creating a POC for one of our clients, to showcase the native HANA capabilities by moving their complex calculations to the DB layer.

    I have just successfully created a sample calculation, and it’s working fine. I have a doubt about the KEY column in the $BPC$P$* table. I am pushing NULL values into that column, and I can still see the data in my reports (so I assumed it’s fine), but is it the correct way to do it?



    • Hi Sanuj,

      I think in my cases I used 0 for all records. From my experience, the KEY column doesn’t really matter.

      What are your experiences so far? It would be great if you could share some benchmark figures.



      • Hi Sanuj,

        I am also trying to implement a simple calculation, but it’s giving me an error message. Are we supposed to write an insert/upsert to populate the data in the $BPC$P$* table? Can you please provide the steps or the SQL query that you’ve written?



        • Hi Vijay, yes, you have to insert into the $BPC$P$* table, and it takes the SIDs of the dimension members. You have to write the SQL to do the calculation and push the calculated values back with the relevant SIDs. Check the table structure so that you are inserting into the correct columns. One more thing: I didn’t have access to write into the table with my ID, so I ran the procedure from DM package -> Script Logic -> BAdI -> Procedure to do the insert.


      • Hey Philip, I did some testing on the logic and have some comparison figures between the ABAP BAdI and the HANA procedure. I implemented an initialization logic for my planning application. Basically, it has to remove the previous cycle’s data, copy the current cycle over to the previous cycle, and delete some of the data for computed data sources. I chose it because it’s quite data-intensive.

        The original BAdI was taking around 30 minutes to process some 4 million records (we also had to increase application server memory to prevent dumps), and the HANA procedure did it in 45 seconds!

        It’s looking good; barring the drawback of the write-back BAdIs, I think this approach is solid.