A lot of developers are afraid of mass data scenarios. And indeed they can be frightening:
- You give up control: you start a report and it runs for hours and hours. Is the report still alive, or is it looping endlessly? Perhaps the report parallelizes its work using asynchronous RFCs, and then the situation gets even more difficult to understand.
- Logging is complicated. Of course you can log everything, perhaps using LOG-POINT, but will you be able to evaluate such a huge amount of data?
But you can cope with it:
- If your application does a COMMIT WORK after a certain number of business objects has been processed, you can verify that the report doesn't loop endlessly.
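Such a packaged commit can be sketched as follows; this is a minimal sketch, and names like lt_objects, process_object( ) and the package size are assumptions, not from any concrete application:

```abap
" Process business objects in packages and commit after each package,
" so that progress becomes visible on the database.
CONSTANTS gc_package_size TYPE i VALUE 1000.

DATA(lv_counter) = 0.
LOOP AT lt_objects ASSIGNING FIELD-SYMBOL(<ls_object>).
  process_object( <ls_object> ).          " hypothetical processing routine
  lv_counter = lv_counter + 1.
  IF lv_counter MOD gc_package_size = 0.
    " Committed records can be counted on the database while the
    " report is still running, proving it does not loop endlessly.
    COMMIT WORK.
  ENDIF.
ENDLOOP.
COMMIT WORK. " commit the final, possibly incomplete package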
- You can trace the report and detect problems such as function modules that need too much time to do their work, slow database SELECTs, INSERTs and so on.
Nevertheless there is still one big uncertainty: how will your application deal with mass data in a productive environment? In the ideal case, your application features linear scaling. An application scales linearly when the amount of data to be processed (the problem size) and the resource requirements (time, processors, main memory) grow at the same rate.
Nonlinear Scaling and Falling Throughput
Unfortunately, in most cases you won't achieve linear scaling. Even with parallelization, your application will often run very fast at the beginning and slow down later. There are many possible reasons for this: SAP standard modules like SAP Business Partner have their own caching mechanisms in internal tables of function groups and won't release their data, so your report needs more and more memory, which can slow down the whole server. And of course it's possible that your own application uses too much memory, too. But it gets even worse: when you nest LOOPs over large internal tables, the result can be quadratic scaling, which is a nightmare.
ABAP developers have good reasons to be afraid of non-linear scaling. On your development system everything is OK: you have checked the database indexes and your traces look good. But in the productive environment, even a good administrator who does a thorough performance analysis can't speed up your application.
So I suggest that you test mass data scenarios as early as possible in a realistic environment. If your application scales well in mass data scenarios then you can optimize your application on your development system without further mass tests.
How to Analyse Scaling Issues
Let's assume that your mass data scenario is not inherently non-linear in nature (which is possible, but unlikely for most cases of business programming), yet the number of business objects processed per hour decreases during runtime.
In this case the first thing I do is check the memory. The Memory Inspector has an API, CL_ABAP_MEMORY_UTILITIES. You can use the method WRITE_MEMORY_CONSUMPTION_FILE( ), which creates a memory snapshot. Unfortunately these snapshots can be very huge, so I use a trick to reduce their number:
- I create a class with a timestamp as static attribute that holds the time of the last memory snapshot. If a certain interval (say, one hour) has passed, I take another snapshot. You only have to call this method often enough (perhaps every time a business object is processed) and you'll get a reasonable number of snapshots that you can analyze to detect memory leaks.
- Usually I call this method as a functional method in an ASSERT ID statement belonging to a special checkpoint group, so that I can switch this behaviour on and off with transaction SAAB, even in a productive environment.
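The two points above can be sketched like this; the class name zcl_memory_snapshot is a hypothetical example:

```abap
CLASS zcl_memory_snapshot DEFINITION.
  PUBLIC SECTION.
    CLASS-METHODS snapshot_if_due
      IMPORTING iv_interval_sec TYPE i DEFAULT 3600
      RETURNING VALUE(rv_ok)    TYPE abap_bool.
  PRIVATE SECTION.
    CLASS-DATA gv_last_snapshot TYPE timestamp.
ENDCLASS.

CLASS zcl_memory_snapshot IMPLEMENTATION.
  METHOD snapshot_if_due.
    DATA lv_now TYPE timestamp.
    GET TIME STAMP FIELD lv_now.
    IF gv_last_snapshot IS INITIAL
       OR cl_abap_tstmp=>subtract( tstmp1 = lv_now
                                   tstmp2 = gv_last_snapshot ) >= iv_interval_sec.
      " Write a Memory Inspector snapshot, at most once per interval
      cl_abap_memory_utilities=>write_memory_consumption_file( ).
      gv_last_snapshot = lv_now.
    ENDIF.
    rv_ok = abap_true. " always true, so the surrounding ASSERT never fails
  ENDMETHOD.
ENDCLASS.
```

The call site is then a single line such as `ASSERT ID zmem_snapshot CONDITION zcl_memory_snapshot=>snapshot_if_due( ) = abap_true.` (the checkpoint group name is again hypothetical); while the group is inactive in SAAB, the condition, and with it the method, is not evaluated at all.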
Memory leaks can give you hints about bad scaling behaviour, but you'll also need information about the runtime behaviour of function modules and methods. You can use the same trick as described above and write the measurements out with a LOG-POINT once every hour. You can use the Enhancement Framework to define pre- and post-methods that perform the measurement.
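The measurement itself might look like the following fragment. The object lo_partner, its method check_partner( ) and the checkpoint group zruntime_log are illustrative assumptions; in a real setup the two GET RUN TIME statements would sit in the pre- and post-method of an enhancement around the call:

```abap
DATA: lv_t0 TYPE i,
      lv_t1 TYPE i.

GET RUN TIME FIELD lv_t0.        " pre-method: start of measurement
lo_partner->check_partner( ).    " the call under observation (hypothetical)
GET RUN TIME FIELD lv_t1.        " post-method: end of measurement

DATA(lv_microsec) = lv_t1 - lv_t0.
" Write the duration to a checkpoint group; activate and evaluate
" the log entries later in transaction SAAB.
LOG-POINT ID zruntime_log
  SUBKEY 'CHECK_PARTNER'
  FIELDS sy-datum sy-uzeit lv_microsec.
```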
Debugging Batch Jobs
But how do you know which function modules and methods are critical? Sometimes I use the following trick: within a functional method called from an ASSERT ID statement, I let the program run into an endless loop after a certain time (say, two hours). After switching the corresponding checkpoint group on with transaction SAAB, I start the program and catch it using SM50. Then I get out of the endless loop using the debugger and do a little bit of debugging. If your application suffers from nonlinear behaviour, you will be able to find the "slow" methods within a short time and debug them.
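A sketch of such a debugger trap, again with hypothetical names:

```abap
CLASS zcl_debug_trap DEFINITION.
  PUBLIC SECTION.
    CLASS-METHODS trap_after
      IMPORTING iv_seconds   TYPE i DEFAULT 7200
      RETURNING VALUE(rv_ok) TYPE abap_bool.
  PRIVATE SECTION.
    CLASS-DATA gv_start TYPE timestamp.
ENDCLASS.

CLASS zcl_debug_trap IMPLEMENTATION.
  METHOD trap_after.
    DATA lv_now TYPE timestamp.
    IF gv_start IS INITIAL.
      GET TIME STAMP FIELD gv_start.   " remember the first call
    ENDIF.
    GET TIME STAMP FIELD lv_now.
    IF cl_abap_tstmp=>subtract( tstmp1 = lv_now
                                tstmp2 = gv_start ) >= iv_seconds.
      DATA(lv_exit) = abap_false.
      WHILE lv_exit = abap_false.
        " Deliberate endless loop: catch the work process in SM50,
        " then set lv_exit to abap_true in the debugger and step on.
        GET TIME STAMP FIELD lv_now.
      ENDWHILE.
    ENDIF.
    rv_ok = abap_true. " always true, so the ASSERT never fails
  ENDMETHOD.
ENDCLASS.
```

Called as `ASSERT ID zdebug_trap CONDITION zcl_debug_trap=>trap_after( ) = abap_true.`, the trap costs almost nothing while the checkpoint group is inactive, and after two hours of runtime it freezes the job exactly where the slow code is executing.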
Don’t Work Overtime – Work Effectively!
If your application suffers from nonlinearity, then working 20 hours a day won't help you much. You have to be creative and make the right decisions. So better go to bed early and spend time with your family to gather strength and a clear mind, even in times when management wants you to work 60 hours a day. The reason is simple: either there is a simple problem or a serious (perhaps architectural) problem with your application; in both cases you need a clear mind to find and fix it. Without enough sleep you will likely make errors and the situation will get worse.
Sometimes changing the parallelization strategy will help you. If an SAP framework has an inappropriate caching strategy that doesn't release allocated memory, then your application will likely become so slow after a certain time that it no longer makes sense to keep using that internal mode. In this case a parallelization strategy with a lot of small packages may help you.
Within your own application it should be easy to avoid memory leaks. Perhaps you can improve your caching strategy, or you can use weak references, i.e. the class CL_ABAP_WEAK_REFERENCE.
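A weakly referenced cache entry can be sketched like this; the expensive factory zcl_partner=>load( ) and the key '4711' are assumptions for illustration:

```abap
" Keep the cached object only via a weak reference, so the garbage
" collector may reclaim it under memory pressure; rebuild on demand.
DATA go_weak TYPE REF TO cl_abap_weak_reference.

DATA lo_partner TYPE REF TO object.
IF go_weak IS BOUND.
  lo_partner = go_weak->get( ).    " initial if already garbage-collected
ENDIF.
IF lo_partner IS NOT BOUND.
  " Object was never loaded or has been collected: load it again
  lo_partner = zcl_partner=>load( '4711' ).   " hypothetical factory
  go_weak = NEW cl_abap_weak_reference( lo_partner ).
ENDIF.
```

The trade-off is that a weakly cached object may have to be loaded again at any time, so this pattern fits caches that are purely a performance optimization, not ones that hold state.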
Sometimes you will have to make bigger changes. In this case you are lucky if your application has lots of unit tests, so that you can change the code without introducing bugs.