This document describes one of the major memory-bottleneck issues we faced in our HANA landscape, where BODS, BW, and the HANA DB all run on the same server. The server's capacity is quite high, with around 512 GB of memory distributed among the different systems. To see in detail how much RAM each user is using, run the following script:







#!/bin/bash
# Sum resident memory (RSS) per user, then report the grand total.
ou=""          # user seen on the previous line
tempsum=0      # running total for the current user, in KB
totalmem=0     # grand total across all users, in KB

IFS=$'\n'      # iterate line by line, not word by word
for m in $(ps -eo user,rss --sort user | sed -e 's/  */ /g' | tail -n +2); do
  nu=$(echo "$m" | cut -d" " -f1)
  nm=$(echo "$m" | cut -d" " -f2)
  # echo "$nu $nm $ou"
  if [ "$nu" != "$ou" ] && echo "$nm" | grep -qE "^[0-9]+$"; then
    # New user: flush the previous user's total, then start a fresh one.
    if [ "$tempsum" -ne 0 ]; then echo "Printing total mem for $ou: $tempsum KB"; fi
    ou=$nu
    tempsum=$nm
    let "totalmem += $nm"
  else
    # Same user: keep accumulating.
    let "tempsum += $nm"
    let "totalmem += $nm"
  fi
done
# Flush the last user's total.
if [ "$tempsum" -ne 0 ]; then echo "Printing total mem for $ou: $tempsum KB"; fi

echo "Total Memory in Use: $totalmem KB / $(free | grep Mem: | awk '{print $2}') KB"



Note: Save the above script in a file and give it execute permission before running it at the command prompt.

The output of the above script is as follows:


From the above screen it is clear that just two users (p1badm and p1wadm) are utilizing almost 98% of the total memory. Since the HANA system is also installed on the same server, it is always good to know both the peak memory utilization and the day-to-day memory utilization of the HANA DB.
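As a cross-check, the same per-user totals can be computed in a single awk pass, expressed as a share of the total RSS (a minimal sketch; the output format here is my own, and summed RSS somewhat overstates real usage when processes share memory):

```shell
# Sum RSS per user (in KB) and print each user's share of the total,
# sorted with the heaviest consumers first.
ps -eo user,rss --no-headers \
  | awk '{used[$1] += $2; total += $2}
         END {for (u in used)
                printf "%-12s %10d KB  (%.1f%%)\n", u, used[u], 100*used[u]/total}' \
  | sort -k2 -rn
```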

One can see the peak memory utilization of the HANA DB in the license details through HANA Studio.


Also, for details on memory utilization in HANA, use the SQL command given in the attached file "HANA_MemoryOverView.txt", which needs to be executed in HANA Studio. The output of the command is as follows:


From all the above analysis it is now clear that either the BOBI (user ID p1badm) or the BW (user ID p1wadm) system is the culprit. This is how we can narrow down our analysis to a specific system.

After analyzing the SAP BW and BOBI systems, we could not find any batch job or user consuming a high amount of memory at a single point in time. We also found that the summed memory utilization for specific users is high, but that it does not occur in one single run. (One can analyze memory utilization in the SAP system using transactions ST03 and ST06.)

Through "free -m" one can see the memory details in MB.


The above figure shows that only 8.4 GB of memory is available out of the 512 GB memory pool, which is alarming.
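To pull just the headline numbers out of "free -m", the Mem: line can be parsed directly (a minimal sketch; the exact columns of "free" vary slightly between procps versions, but total/used/free are the first three values on that line):

```shell
# Print total, used and free memory in MB from the Mem: line of `free -m`.
free -m | awk '/^Mem:/ {printf "total=%d MB  used=%d MB  free=%d MB\n", $2, $3, $4}'
```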

Since the current utilization is under control, the issue lies in memory release at the OS level. To identify obsolete memory sitting idle at the OS level, one can use the "/proc/meminfo" file, where details on "Active" and "Inactive" memory are given starting from line 6. The output of the file is as follows:
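The relevant lines can also be extracted directly instead of counting lines by eye (a minimal sketch, assuming the standard Linux /proc/meminfo layout, where values are reported in KB):

```shell
# Print the Active and Inactive memory totals from /proc/meminfo in GB.
# The anchored regex matches only the summary lines, not Active(anon) etc.
awk '/^(Active|Inactive):/ {printf "%-10s %7.1f GB\n", $1, $2/1024/1024}' /proc/meminfo
```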


Note: Inactive memory is memory that is obsolete and has no active pointer inside the system. This means it can be reclaimed without any impact on the currently active business transactions, and indeed without any impact on the system as a whole.

From the above figure it is clear that around 169 GB of space is held by the system as Inactive memory, which is quite huge.

The best way to reclaim this inactive memory is to restart the application or reboot the server, but that would impact the business. Instead, execute the following command to reclaim the "Inactive" memory:

free && sync && echo 3 > /proc/sys/vm/drop_caches && echo "" && free

Note: This command shows the current state of the memory according to "free", then clears the memory buffers/cache, and then shows the new details as reported by "free". Keep in mind, though, that if you still see high usage after running this command, the memory is probably actually in use by a process and cannot be cleared without killing the process or restarting the server.
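For reference, the kernel's drop_caches interface accepts three values, all of which require root. Writing to it only discards clean, reclaimable caches, never dirty data, which is why the "sync" beforehand matters (it flushes dirty pages so more of the cache becomes clean and droppable):

```shell
sync                                # flush dirty pages to disk first
echo 1 > /proc/sys/vm/drop_caches   # free the page cache only
echo 2 > /proc/sys/vm/drop_caches   # free dentry and inode caches
echo 3 > /proc/sys/vm/drop_caches   # free page cache plus dentries and inodes
```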

The output of the above command when run as the root user is as follows:


From the above screen it is clear that around 109 GB of Inactive memory has been released. To get more detail on the current memory availability, execute "free -m".


Hence one can see that a significant amount of memory has been added to the free pool, which was earlier only 8.4 GB and has now increased to 267 GB.
