
It all started as usual when a colleague entered my office with a simple question: what is the maximum volume we can process in an SAP G/L? This is a question that can only be answered by a concrete benchmark with customer data. I could have used the classic SAP consultant response, "it depends", but knowing the guy (a former consultant) I knew I could not get out of it so easily. So I used the second consulting trick: "please explain your context a bit more", a nice way, when you are a consultant, to buy time to think about the answer, because the question was perfectly clear in the first place.

So he started to explain that a customer wanted to post about 8 million journal line entries per day and he (the customer) was not sure about the performance. I let him finish his explanation, and then it was time to answer.

So let me think…

First of all, you have to split the problem into three parts:
– the loading of data
– the retrieving (of course, if you enter data, one day or another you will want to look at it)
– the removing (also called archiving)

The loading of data
Most of the time, aggregation takes place before hitting the G/L, thus reducing the volume considerably. However, most customers require keeping the contract information in the G/L, not at balance level but as analytical information in the posting document. In that case the volume is not really reduced. The solution for that is probably to implement Bank Analyzer in its pure accounting subledger form, thus storing the details in a very high-performance layer (the Result Data Layer). Implementation is simple and fast, and the benefits can be high.

Even with a subledger scenario you can end up with a high volume. But thanks to the SAP infrastructure we can achieve good performance. One benchmark done in 2002 for a financial institution showed that, with good tuning and not a "Star Trek" type machine, we could achieve a throughput of 210 to 416 posting lines per second, depending on the number of special ledgers posted. As usual these numbers reflect the stress test conditions of that time, but they give a very good idea of the potential.
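
To put these figures in perspective against the customer's requirement, here is a minimal back-of-the-envelope check (in Python) of the average throughput needed to load 8 million lines per day; the posting windows below are my own assumptions, not part of the original benchmark:

# Back-of-the-envelope check: is 8 million lines per day within the 2002 benchmark range?
# The posting windows below are illustrative assumptions.

LINES_PER_DAY = 8_000_000                 # customer volume from the question
BENCH_MIN, BENCH_MAX = 210, 416           # posting lines per second from the 2002 benchmark

def required_rate(lines_per_day, window_hours):
    """Average posting lines per second needed to load a day's volume in the given window."""
    return lines_per_day / (window_hours * 3600)

for hours in (24, 12, 8):
    rate = required_rate(LINES_PER_DAY, hours)
    verdict = "within benchmark" if rate <= BENCH_MAX else "above benchmark"
    print(f"{hours}h window: {rate:.0f} lines/s needed ({verdict}, measured {BENCH_MIN}-{BENCH_MAX}/s)")

Even with a fairly tight 8-hour posting window, the required average (about 278 lines per second) stays inside the measured range, which is why the benchmark figures are reassuring.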

The retrieving
The next issue is the granularity level at which you want to keep balances. The finer the granularity, the higher the risk of encountering performance problems. Here again I can only stress that the subledger scenario will provide all the possible aggregations, and the G/L should only keep what is needed from a business point of view. From a pure G/L point of view, one must refer to OSS note 820495 and related notes, which give very clear indications.

The removing
Archiving is often a forgotten subject until… it becomes a problem. Once again, our benchmark showed a performance of 4 million line items archived in 30 minutes and deleted in 2 hours, so again something manageable and scalable.

As a conclusion, I could reply to my colleague: yes, if I refer to the figures produced 7 years ago, our customer can process 8 million postings per day directly in the G/L. But why not also prepare for the future and implement the Bank Analyzer subledger with the simple Imported Subledger Document scenario?
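
To back that conclusion with numbers, here is a minimal sketch (in Python) that extrapolates the archiving benchmark to the customer's daily volume; the linear scaling is my assumption, not something the benchmark measured:

# Rough extrapolation of the archiving benchmark (4 million items: archived in 30 minutes,
# deleted in 2 hours) to the customer's 8 million postings per day.
# Assumption: throughput scales linearly with volume.

BENCH_ITEMS = 4_000_000
ARCHIVE_SECONDS = 30 * 60                 # measured archiving time
DELETE_SECONDS = 2 * 60 * 60              # measured deletion time
DAILY_ITEMS = 8_000_000                   # customer volume from the question

archive_rate = BENCH_ITEMS / ARCHIVE_SECONDS   # about 2,200 items per second
delete_rate = BENCH_ITEMS / DELETE_SECONDS     # about 550 items per second

print(f"Archiving one day's volume: about {DAILY_ITEMS / archive_rate / 60:.0f} minutes")
print(f"Deleting one day's volume: about {DAILY_ITEMS / delete_rate / 3600:.1f} hours")

At those rates, a full day of postings would archive in roughly an hour and delete in roughly four, so the housekeeping side also looks manageable.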

High Volume G/L, YES WE CAN.
