By Rajarajeswari Kaliyaperumal
HANA DB Memory Usage vs. GLOBAL Garbage Collection Block: Column Store Version Consolidation

HANA Garbage Collection block=>Column Store version consolidation.

If you are a HANA DBA facing system performance issues, garbage collection is one of the areas to check for a GLOBAL block. Another type of garbage collection block also exists, the TABLE-level block. TABLE-level blocks are less serious than GLOBAL blocks as long as their START_TIME is recent.

In HANA, garbage collection is responsible for removing unwanted or deleted data from memory. For example, when we delete records from a column table, the memory is not cleaned up immediately; the records are only invalidated, and these rows continue to consume memory, reported as RAW_RECORD_COUNT, in both main and delta storage. (Check the RAW_RECORD_* columns in M_CS_TABLES for more information.)
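As a quick illustration, the invalidated versions show up as the gap between RECORD_COUNT and the RAW_RECORD_* columns. A minimal sketch, where 'MYSCHEMA' and 'MYTABLE' are placeholder names:

```sql
-- RAW_RECORD_* columns include invalidated (deleted) row versions
-- that garbage collection has not yet cleaned up.
SELECT table_name,
       record_count,                -- currently valid rows
       raw_record_count_in_main,    -- all row versions in main storage
       raw_record_count_in_delta    -- all row versions in delta storage
FROM m_cs_tables
WHERE schema_name = 'MYSCHEMA'
  AND table_name  = 'MYTABLE';
```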

Garbage collection in HANA is a generic term and can refer to row store version consolidation, column store version consolidation, memory garbage collection, persistence garbage collection, LOB garbage collection, etc.

Here , I will concentrate on Column Store version consolidation.

How to find out if your HANA DB is affected by this?

Execute the query below in your system. If the raw record count in main is significantly higher than the current record count, one possible reason is blocked garbage collection, although other DML operations running on the respective table can also contribute.

```sql
SELECT TOP 50
       table_name,
       record_count,
       raw_record_count_in_main,
       memory_size_in_total,
       raw_record_count_in_main - record_count AS diff
FROM m_cs_tables
ORDER BY diff DESC;
```

To double-check, execute the SQL statement "HANA_GarbageCollection_Blockers_Current" from OSS note 1969700 to see whether garbage collection is currently blocked. Any garbage collection that is blocked for more than 5 or 6 hours can cause severe performance problems in the system.
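The script from note 1969700 is the authoritative check. As a rough, simplified stand-in, long-idle connections holding an open transaction are typical garbage collection blockers; a sketch only (column availability may vary by HANA revision):

```sql
-- Simplified sketch; use HANA_GarbageCollection_Blockers_Current
-- from OSS note 1969700 for the real check.
SELECT connection_id, user_name, connection_status, start_time, idle_time
FROM m_connections
WHERE connection_status = 'IDLE'
  AND transaction_id <> 0      -- open transaction on an idle connection
ORDER BY idle_time DESC;
```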

The same can also be checked via Studio -> System Information -> type "Garbage" in the filter. We will get three views; I normally prefer the middle one.

Here, check the START_TIME and whether it is older than a specific threshold. (In our systems there is almost always some block, so we act only on blocks older than six hours; however, SAP states that there should never be a garbage collection block in the system, and if one exists it has to be released.)

So, how do we unblock garbage collection if the START_TIME is older than normal?

Here we need to find out the connection ID that is blocking garbage collection and, after approval, cancel the involved session/connection ID so that the block is removed.

I.e., in the screenshot above we can see that CONNECTION_ID 562,952 is blocking garbage collection, and hence we can cancel this session to remove the block.
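Assuming the blocking connection ID from the screenshot, the session can be cancelled (and, if necessary, disconnected) with the standard HANA commands. Run these only after approval:

```sql
-- Cancel the currently running statement of the blocking connection
ALTER SYSTEM CANCEL SESSION '562952';

-- If the block persists, forcibly disconnect the session
ALTER SYSTEM DISCONNECT SESSION '562952';
```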



So, what if the garbage collection block's START_TIME is almost current (less than an hour or so)? How do we handle this?

In this case we need to find out which table is affected and force a compression optimization. Once this is done, HANA will consider the table in the next garbage collection run after the optimization completes. (Do not check the raw record count immediately after running the compression optimization; in our systems it takes around 30 minutes for the records to be cleaned up by garbage collection.)
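A forced compression optimization can be triggered with the UPDATE ... WITH PARAMETERS syntax; 'MYSCHEMA' and 'MYTABLE' are placeholders for the affected table:

```sql
-- Force a compression optimization so the table is considered
-- in the next garbage collection run
UPDATE "MYSCHEMA"."MYTABLE" WITH PARAMETERS ('OPTIMIZE_COMPRESSION' = 'FORCE');
```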


In Job Progress we can view the progress of the compression optimization.
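The same can be checked on the SQL level via the M_JOB_PROGRESS monitoring view; a sketch (the exact JOB_NAME value may differ by revision):

```sql
-- Running compression optimization jobs and their progress
SELECT host, object_name, current_progress, max_progress, start_time
FROM m_job_progress
WHERE job_name = 'Optimize Compression';
```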

The optimize compression above ensures that garbage collection cleans up the unwanted raw records during its next run.
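To verify the cleanup afterwards (allowing the ~30 minutes mentioned above), re-run the record-count comparison for the placeholder table; the gap should shrink:

```sql
SELECT record_count,
       raw_record_count_in_main,
       raw_record_count_in_main - record_count AS remaining_diff
FROM m_cs_tables
WHERE schema_name = 'MYSCHEMA'
  AND table_name  = 'MYTABLE';
```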

OSS notes:

Refer to OSS note 2169283 – FAQ: SAP HANA Garbage Collection for more information.


Thanks for reading!
Follow for more such posts by clicking on FOLLOW =>
Like and leave a comment or suggestion if any!


