Jens Gleichmann

HANA Partitioning – 2 billion rows limitation – Part I: BWoH / BW/4HANA

Some of you may already know the limitation of 2 billion entries in a single table / partition.
Another hint regarding this limitation is alert ID 17 ‘&lt;table&gt; contains xxx records’. The default threshold of this alert is 300,000,000 rows. The limitation applies to BSoH / S/4HANA as well as BWoH / BW/4HANA => it is a general HANA limitation. But each has its own distribution rules, so I will split this blog into two parts: BWoH and BSoH.

First of all, you have to know that manual steps are required to create new partitions and distribute the data! I have heard very often that customers think this is an automatic task.
There is one exception: dynamic range partitioning. But even this must be implemented manually.
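As a hedged sketch only: on newer HANA revisions, dynamic range partitioning is enabled by defining a rest partition with a dynamic threshold. The table and column names below are hypothetical, and the exact syntax depends on your revision (see SAP note 2044468):

```sql
-- Hypothetical ADSO inbound table and partitioning column;
-- DYNAMIC THRESHOLD syntax as documented for newer HANA revisions
ALTER TABLE "SAPABAP1"."/BIC/AZSALES1"
  PARTITION BY RANGE ("RECORD")
  ((PARTITION 0 <= VALUES < 2000000000,
    PARTITION OTHERS DYNAMIC THRESHOLD 100000000));
```

With such a definition, HANA splits off a new range partition from the OTHERS partition once the threshold is exceeded.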

Test Environment:

VMware 6.0
11xCPU: Intel(R) Xeon(R) CPU E7-8880 v3 @ 2.30GHz
SLES for SAP 12 SP1
HANA 1.0 SPS12 Rev. 122.11

Type SAP Note
BWoH 1908075 – BW on SAP HANA: Table placement and landscape redistribution
BWoH 2334091 – BW/4HANA: Table Placement and Landscape Redistribution
BWoH 2019973 – Handling Very Large Data Volumes in SAP BW on SAP HANA
BSoH 1899817 – Business Suite on SAP HANA database: table placement

If you face such an issue in a BW system, you first have to check whether you can solve it in the application layer. So check if you can use the partitioning options in the backend system described in SAP note 2019973.
If this is not possible anymore, you have to perform the following manual steps.

1. Check table distribution rules / table placement

Follow the steps in SAP note 1908075 for BWoH / 2334091 for BW/4HANA.
Download the attachment and choose your script regarding your HANA revision and topology.
In this example I have chosen the most spread variant: HANA 1.0 SPS12 Scale-up
Normally this step is already performed at the time of installation or migration, but some thresholds may have changed over time. So first check the current parameters and thresholds:
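To see the currently active values, you can query the standard monitoring view M_INIFILE_CONTENTS (the section name follows SAP note 1908075):

```sql
-- Show the active table placement parameters from global.ini
SELECT FILE_NAME, LAYER_NAME, SECTION, KEY, VALUE
  FROM "SYS"."M_INIFILE_CONTENTS"
 WHERE FILE_NAME = 'global.ini'
   AND SECTION = 'table_placement';
```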

ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('table_placement', 'same_num_partitions') = 'true' WITH RECONFIGURE;
ALTER SYSTEM ALTER CONFIGURATION ('global.ini', 'SYSTEM')
SET ('table_placement', 'max_rows_per_partition') = '1500000000' WITH RECONFIGURE;

If all the values match the values from the downloaded SQL script, there is nothing to do for you.
Otherwise, replace $$PLACEHOLDER with the SAP schema and execute the script with a user with sufficient privileges.

The current rules are also explained in the attached PDF located in the same folder as your SQL script:

• InfoCube fact tables are ROUNDROBIN partitioned on first level.
• DSO tables are HASH partitioned on first level. They have 1 partition on first level regardless of the number of records in the tables – except for tables with more than 1.5 billion records, see remarks below. Tables for InfoCubes and DSOs are located on the master node.
• InfoObject tables are not partitioned, except for InfoObjects with high cardinality; those tables are HASH partitioned on first level. All other InfoObject tables do not have a partitioning specification. InfoObject tables are located on the master node.
• PSA tables and errorstack tables are HASH partitioned on first level. They have 1 partition on first level regardless of the number of records in the tables – except for tables with more than 1.5 billion records, see remarks below. PSA and errorstack tables are located on the master node.
• SAP HANA DataSources can have an insert and an upsert table. The insert table is dynamic range partitioned and the upsert table is HASH partitioned on first level.
• Temporary BW tables and OpenHub tables are not partitioned, i.e. they do not have a partitioning specification. They are located on the master node.

The number of partitions on first level according to the rules above is only set when the table is initially created; the number of partitions on first level is not adapted dynamically.
The number of first level partitions of a table does not exceed the number of nodes that are potential valid locations for that table. This rule is disregarded if a higher number of first level partitions is required to avoid first level partitions with more than 1.5 billion records (global.ini, section [table_placement], max_rows_per_partition = 1500000000).

=> this means a split will not be performed before the table has reached 1.5 billion rows.
=> in a scale-up system all tables are created without partitioning, unless the table was already above 1.5 billion records at the time of the migration – SUM will take care of this.


2. Check table grouping

If you have an old release or your migration was not completed correctly, some tables may have a missing table group type.

select * from "SYS"."TABLE_GROUPS" where TABLE_NAME like '%<TABLE_NAME>%';

If the group type is missing, you have to implement the latest notes for report RSDU_TABLE_CONSISTENCY (currently: 2264053 – RSHDB: SAP HANA Tools 08 for BW) and execute it to classify the tables.
Usage: 2569097 – How to use report RSDU_TABLE_CONSISTENCY [VIDEO]
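As a hedged example, you can narrow the check down to entries without a group type like this (the schema name is a placeholder for your SAP schema):

```sql
-- Tables that still lack a group type after migration
SELECT SCHEMA_NAME, TABLE_NAME, GROUP_TYPE, GROUP_NAME
  FROM "SYS"."TABLE_GROUPS"
 WHERE SCHEMA_NAME = 'SAPABAP1'   -- placeholder: your SAP schema
   AND GROUP_TYPE IS NULL;
```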

3. Repartitioning plan

With the SQL SELECT statement above you checked the table group type. In its output you will also find the group name.
This is the name we have to use in the reorg plan => the steps can be found at the end of SAP note 2019973.
Note: In case of a scale-out system you can skip this step and execute a table redistribution.

Make sure that you execute these steps in one SQL session as the schema owner or a user with sufficient privileges on the tables and content!

1. Generate a reorg plan

call reorg_generate(6,'GROUP_NAME=><your-BW-group-name>');

2. Check the generated plan

select * from reorg_plan;

If there is no output, you have to check whether your table is above the defined threshold (max_rows_per_partition). I have adjusted the value so that my plan results in 8 partitions (currently: 1).
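To verify the row counts per partition against the threshold, a query on the monitoring view M_CS_TABLES can help (the table name is a hypothetical example):

```sql
-- Records per partition of a suspect table (PART_ID 0 = unpartitioned)
SELECT SCHEMA_NAME, TABLE_NAME, PART_ID, RECORD_COUNT,
       ROUND(MEMORY_SIZE_IN_TOTAL/1024/1024/1024, 2) AS SIZE_GB
  FROM "SYS"."M_CS_TABLES"
 WHERE TABLE_NAME = '/BIC/FZSALESC1'   -- hypothetical InfoCube fact table
 ORDER BY PART_ID;
```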

3. Execute the plan

call reorg_execute(?);

Check the free resources (data + log disk space, CPU, memory) of the system and the configured parallelism.

Adjust the parameters or execute it only in a time frame without high business load.
indexserver.ini => partitioning => split_threads => 16 (default)
indexserver.ini => partitioning => bulk_load_threads => 4 (default)
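If you need to lower the parallelism for a run during business hours, the parameters can be changed as sketched below (the values are examples; revert afterwards):

```sql
-- Example: reduce split parallelism temporarily
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('partitioning', 'split_threads') = '8' WITH RECONFIGURE;

-- Revert to the default afterwards
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  UNSET ('partitioning', 'split_threads') WITH RECONFIGURE;
```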

4. Check the status of the plan execution

select * from reorg_steps
where reorg_id = <value returned by reorg_execute before>;

-- example:
select * from reorg_steps where reorg_id = 1;

Normally the reorg_id should be 1. Now you can monitor the process via this statement and the running sessions.
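The worker threads of the split can be watched, for example, via the monitoring view M_SERVICE_THREADS (a sketch; the filter on the thread type is an assumption about how the job threads are named on your revision):

```sql
-- Active job worker threads during the split operation
SELECT HOST, PORT, THREAD_TYPE, THREAD_METHOD, THREAD_STATE, DURATION
  FROM "SYS"."M_SERVICE_THREADS"
 WHERE THREAD_TYPE = 'JobWorker';
```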

You will see the splitting running in parallel per column, as configured via split_threads. The number of ‘ZipResultJob’ threads correlates with the value of parameter bulk_load_threads.
In my example the splitting operation (steps 1-8) took 50 min for 1.8 billion rows (23 GB of data).

Note: Be aware of this limitation and of the tasks needed to split the table. If you don't do this, no more inserts are possible once you reach the limit, and you will get a lot of errors in the backend.

Hemanth Kondeti

      Hi Jens,

      Very good Information.

Can you please help us with how to calculate the values of split_threads and bulk_load_threads?

I didn't find much info on these parameters.

      Also please share SAP Notes related to these Parameters.

Jens Gleichmann
Blog Post Author

      Hi Hemanth,

      for sure, here are the related notes:

      • 2044468 - FAQ: SAP HANA Partitioning
      • 2222250 - FAQ: SAP HANA Workload Management
      • 2600030 - Parameter Recommendations in SAP HANA Environments
      • 2036111 - Configuration parameters for the SAP HANA system

      split_threads should not be set higher than the available amount of logical CPUs known to the system.

      bulk_load_threads should be set to a value which correlates with 25-30% of the value of split_threads.





Hemanth Kondeti

      Hi Jens,


Thank you for the quick reply.

      Waiting for the part 2 of your blog.

Paras M

      Hi Jens,

      How to make a plan for such kind of partitions?

My cube records come from inventory data sources, and the cube holds 10 years of data. Every day some millions of records are updated in the cube. The partitioning plans are not clear to me – can you please elaborate or give some examples? That would be a great help for my scenario, thank you.



Jens Gleichmann
Blog Post Author

      Hi Paras,

what do you mean by "make a plan"? There are default rules the system follows when you execute such a repartitioning (see distribution rules / table placement). These rules and configs can be adjusted to your needs. On this basis the plan is created automatically.

      Please give some details on your scenario:

• Is your cube already partitioned?
• How many records does it contain?
• Why do you want to partition it?

There is no partitioning plan / statement which you have to create on your own like on an ERP system.



paras mali

Thank you Jens.

My cube was already partitioned in March 2017; the method used earlier was ROUNDROBIN with 6 partitions.

The cube has 6 partitions and each partition holds 1.73 billion (1,733,897,826) records.

Now 2 billion is the limit per partition for the cube, so I feel I need to repartition again before each partition fills up, else my data loading may fail.

I understand from the Basis team that the system will automatically place the records in each partition, and that I have to check cube consistency in RSDU_TABLE_CONSISTENCY when Basis does the partitioning at DB level.

Jens Gleichmann
Blog Post Author
1. Check if the partitioning / table placement rules are up-to-date for your scenario
2. Check the object via RSDU_TABLE_CONSISTENCY (update it with the latest note fix)
3. Check the HANA parameters for partitioning as named in the blog
4. Follow the blog starting with ‘3. Repartitioning plan’
5. Generate the plan and check if more than 6 partitions are created
6. Execute the reorg in a low-load time frame

You can also trigger a complete landscape reorg, which will solve the issue if your system is set up correctly. Please read all the notes named in the blog and you will resolve your issue.

      If you are still struggling with it, just open a question (Q&A section) with detailed information about your issue and formulate your questions. Currently all your questions are covered by the blog or the named notes.

Kishore Kumar Challa

      Hi Jens,

      would like to ask your suggestion here,

I have set the dynamic range threshold to 100 million records on the inbound table of the cube-like ADSO.

I see the threshold set to 100 million in “m_cs_partitions” for the inbound table. I have executed the table placement rules valid for our system so that the range setting would remain permanent.

Also, ours is a single node system.

Now when I activate the ADSO, the dynamic range threshold in m_cs_partitions of the inbound table gets reset to zero. This happens only when the inbound table has zero records and I activate the ADSO.

Please let me know if this is expected behavior. I always activate the data requests and ensure that the inbound table has zero records. I have tried replacing the table placement placeholder with the schema, however the behaviour is still the same.


      Best Wishes,

      Kishore Kumar Challa