
In general, nearly everyone knows that new installations based on NW 7.50 no longer contain cluster tables. When you migrate to HANA, one prerequisite is that all cluster and pool tables have to be declustered/depooled.
But is this really correct? In my last projects there were a lot of issues regarding cluster and pool tables: SUM and SWPM did not work as expected, so some data was not migrated / depooled / declustered automatically because of bugs.
So I want to shed some light on this topic to help you understand the usage, the procedure and the checks.

  1. Clarification Cluster and Pool tables
  2. Check Pool and Cluster tables in the system
  3. Procedure of depooling and declustering using R3load
  4. Splitting behaviour
  5. ABAP dictionary adaption and checks
  6. HANA checks
  7. Technical takeaways
  8. SAP notes

 

1) Clarification Cluster and Pool tables

First of all, what are cluster and pool tables in detail? (SAP details regarding releases in note 1892354)

  • Cluster tables combine information from several tables that logically belong together. They allow efficient access to a whole application object without incurring joins on database level. This also means they can only be read logically through the DBSL; a plain native SQL statement won’t return correct data by default.
  • Pool tables combine a large number of individual small tables into one database table, which addressed problems with large numbers of individual database objects. These tables can be read by native SQL by default without any trouble.
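To make the difference concrete, here is a purely illustrative Python sketch (NOT SAP code, and the row values are made up): it mimics how a cluster table packs several logical rows of one application object into a single compressed binary record, which is why native SQL only sees an opaque blob.

```python
import zlib

# Hypothetical illustration: a cluster table stores several logical rows
# belonging to one application object as a single compressed binary record
# (the VARDATA column). Native SQL sees only this blob, which is why cluster
# tables must be read logically through the DBSL / ABAP layer.

logical_rows = [
    ("CDPOS", "0000000001", "FNAME", "old", "new"),
    ("CDPOS", "0000000001", "FNAME2", "a", "b"),
]

# "Write": serialize and compress all logical rows into one blob.
vardata = zlib.compress("\n".join("|".join(r) for r in logical_rows).encode())

# "Read": the kernel decodes the blob back into the logical rows.
decoded = [tuple(line.split("|"))
           for line in zlib.decompress(vardata).decode().split("\n")]
assert decoded == logical_rows
```

Declustering essentially removes this indirection: each logical row becomes a plain row in a transparent table.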

Do you know how many cluster/pool tables exist in an ERP EhP8 system? Have a look into table DD06L. In my system I count 162:
Cluster: 53
Pool: 109


2) Check Pool and Cluster tables in the system

First of all, it is not true that HANA can’t run with cluster and pool tables. HANA can handle them, but it is not wise to keep them, for technical reasons.
There are even still some cluster / pool tables left after the migration by default!
Check out your systems and the note 1849890:
“There are pooled tables that cannot be converted into transparent tables for technical reasons. Keep these tables as pooled tables. This might apply to the following tables:
GLP1, GLP2, GLPPC, GLS1, GLS2, GLSPC, GLT1, GLT2, JVS1, T157T. It might also apply to all pooled tables with the prefix M_.”

Check it out by yourself for your systems (DBACOCKPIT->Diagnostics->SQL Editor or HANA Studio or any other SQL Client):

SELECT * FROM M_TABLES where TABLE_NAME in (SELECT TABNAME FROM DD02L WHERE TABCLASS = 'CLUSTER') ORDER BY record_count DESC;
SELECT * FROM M_TABLES where TABLE_NAME in (SELECT TABNAME FROM DD02L WHERE TABCLASS = 'POOL') ORDER BY record_count DESC;

#All in one:
SELECT * FROM M_TABLES where TABLE_NAME in (SELECT TABNAME FROM DD02L WHERE TABCLASS = 'POOL' OR TABCLASS = 'CLUSTER') ORDER BY record_count DESC;

So which tables should be declustered / depooled after the migration, and what exactly happens to the data and the ABAP dictionary?
OK, let’s start with the tables with such a classification that currently exist in the system. This can be answered via table DD06L, which exists on anyDB _AND_ HANA (the surrounding M_TABLES query below is HANA-specific):

 

SELECT * FROM M_TABLES where TABLE_NAME in (SELECT SQLTAB FROM DD06L WHERE SQLCLASS = 'POOL') ORDER BY record_count DESC;
SELECT * FROM M_TABLES where TABLE_NAME in (SELECT SQLTAB FROM DD06L WHERE SQLCLASS = 'CLUSTER') ORDER BY record_count DESC;

#All in one:
SELECT * FROM M_TABLES where TABLE_NAME in (SELECT SQLTAB FROM DD06L WHERE SQLCLASS = 'CLUSTER' OR SQLCLASS = 'POOL') ORDER BY record_count DESC;


3) Procedure of depooling and declustering using R3load

The second step, after we have identified the filled tables, is to understand what happens during declustering/depooling.
You can find out which transparent table belongs to which cluster/pool table by selecting from DD02L:

SELECT TABNAME, TABCLASS, SQLTAB FROM DD02L WHERE TABCLASS = 'POOL' OR TABCLASS = 'CLUSTER';

Here TABNAME is the transparent table and SQLTAB is the cluster/pool table.
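To illustrate the shape of this mapping, here is a small sketch using SQLite as a stand-in for the real dictionary table (the table and column names follow DD02L; the data rows are just examples, and a real system has many more entries):

```python
import sqlite3

# Miniature DD02L stand-in to demonstrate the mapping query.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE DD02L (TABNAME TEXT, TABCLASS TEXT, SQLTAB TEXT)")
con.executemany("INSERT INTO DD02L VALUES (?, ?, ?)", [
    ("CDPOS",  "CLUSTER", "CDCLS"),  # logical table -> physical cluster
    ("PCDPOS", "CLUSTER", "CDCLS"),
    ("A004",   "POOL",    "KAPOL"),  # logical table -> physical pool
    ("MARA",   "TRANSP",  ""),       # transparent, filtered out below
])
rows = con.execute(
    "SELECT TABNAME, TABCLASS, SQLTAB FROM DD02L "
    "WHERE TABCLASS = 'POOL' OR TABCLASS = 'CLUSTER'"
).fetchall()
for tabname, tabclass, sqltab in rows:
    print(f"{tabname:7} ({tabclass}) is stored in {sqltab}")
```

Note that several logical tables can point to the same physical container, as CDPOS and PCDPOS both do for CDCLS.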

Let me take an example with a well-known cluster called CDCLS. It is normally one of the biggest tables in an ERP system.
SE11 layout:

This table will be split into two transparent tables called CDPOS and PCDPOS. This can be done in the ABAP layer as described in note 2227432, or via SWPM or SUM DMO as part of a migration procedure => it can even be done after a failed migration with note 2054699.

For small tables you can use the ABAP approach. For bigger clusters like CDCLS I strongly recommend the R3load procedure because of the size and duration. R3load is called with the option decluster=true.

This results in different handling:

  • for pool tables it is easy to select the data, so there is no difference to other tables
  • for cluster tables a logical structure mapping will be created.

I won’t go too deep here, because most migrations today happen via SUM, where everything is done automatically, but a few words to understand the high level:

#Declustering will happen with this option in the properties file
export_monitor_cmd.properties:
decluster=true
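As a quick sanity check before the export, one could verify this flag programmatically. A minimal sketch, assuming a simple key=value properties format; the parsing below is my own illustration, not part of the SAP toolchain:

```python
from io import StringIO

# Stand-in for reading export_monitor_cmd.properties; on a real system you
# would open the actual file instead of this embedded sample.
sample = StringIO("""\
# export_monitor_cmd.properties (excerpt)
decluster=true
""")

props = {}
for line in sample:
    line = line.strip()
    # skip blank lines and comments, keep key=value pairs
    if line and not line.startswith("#") and "=" in line:
        key, _, value = line.partition("=")
        props[key.strip()] = value.strip()

assert props.get("decluster") == "true", "declustering is NOT enabled"
print("decluster =", props["decluster"])
```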

Packages (depending on procedure SWPM/SUM):

  • SAPCLUST.STR => Cluster table structures
  • SAPCDCLS.STR.logical => logical structure of CDCLS

 

tab: PCDPOS
att: SDOCU 2 ?N Tc all PCDPOS~0 SDOCU 2
ref: CDCLS
fld: MANDANT CLNT 3 0 0 not_null 1
fld: OBJECTCLAS CHAR 15 0 0 not_null 2
fld: OBJECTID CHAR 90 0 0 not_null 3
fld: CHANGENR CHAR 10 0 0 not_null 4
fld: TABNAME CHAR 30 0 0 not_null 5
fld: TABKEY CHAR 70 0 0 not_null 6
fld: FNAME CHAR 30 0 0 not_null 7
fld: CHNGIND CHAR 1 0 0 not_null 8
fld: TEXT_CASE CHAR 1 0 0 not_null 0
fld: UNIT_OLD UNIT 3 0 0 not_null 0
fld: UNIT_NEW UNIT 3 0 0 not_null 0
fld: CUKY_OLD CUKY 5 0 0 not_null 0
fld: CUKY_NEW CUKY 5 0 0 not_null 0
fld: VALUE_NEW CHAR 254 0 0 not_null 0
fld: VALUE_OLD CHAR 254 0 0 not_null 0

tab: CDPOS
att: SDOCU 6 ?N Tc all CDPOS~0 SDOCU 6
ref: CDCLS
fld: MANDANT CLNT 3 0 0 not_null 1
fld: OBJECTCLAS CHAR 15 0 0 not_null 2
fld: OBJECTID CHAR 90 0 0 not_null 3
fld: CHANGENR CHAR 10 0 0 not_null 4
fld: TABNAME CHAR 30 0 0 not_null 5
fld: TABKEY CHAR 70 0 0 not_null 6
fld: FNAME CHAR 30 0 0 not_null 7
fld: CHNGIND CHAR 1 0 0 not_null 8
fld: TEXT_CASE CHAR 1 0 0 not_null 0
fld: UNIT_OLD UNIT 3 0 0 not_null 0
fld: UNIT_NEW UNIT 3 0 0 not_null 0
fld: CUKY_OLD CUKY 5 0 0 not_null 0
fld: CUKY_NEW CUKY 5 0 0 not_null 0
fld: VALUE_NEW CHAR 254 0 0 not_null 0
fld: VALUE_OLD CHAR 254 0 0 not_null 0
fld: _DATAAGING DATS 8 0 0 not_null 0
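The logical STR files above are plain text and easy to inspect programmatically. The sketch below parses the "tab:" / "ref:" / "fld:" lines into a small structure; it only assumes that the first two tokens of a fld: line are field name and type and that the last token is the key position (> 0 for key fields), which matches the listings shown here. The exact meaning of the remaining columns is not documented in this post.

```python
# Minimal parser for the logical STR excerpt shown above.
sample = """\
tab: CDPOS
ref: CDCLS
fld: MANDANT CLNT 3 0 0 not_null 1
fld: OBJECTCLAS CHAR 15 0 0 not_null 2
fld: VALUE_NEW CHAR 254 0 0 not_null 0
"""

tables = {}
current = None
for line in sample.splitlines():
    tag, _, rest = line.partition(": ")
    if tag == "tab":
        current = {"ref": None, "fields": [], "keys": []}
        tables[rest] = current
    elif tag == "ref" and current is not None:
        current["ref"] = rest            # physical cluster this table maps to
    elif tag == "fld" and current is not None:
        parts = rest.split()
        name, ftype, keypos = parts[0], parts[1], int(parts[-1])
        current["fields"].append((name, ftype))
        if keypos > 0:
            current["keys"].append(name)

print(tables["CDPOS"]["ref"])    # CDCLS
print(tables["CDPOS"]["keys"])   # ['MANDANT', 'OBJECTCLAS']
```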

 


4) Splitting behaviour

Be careful if you have split your cluster tables with SWPM and R3ta. This results in a behaviour which differs from transparent tables:
CDCLS-1*
SAPCDCLS-1*
SAPCDCLS.STR.logical

The SAPCDCLS* files are the correct files to search for errors. They are needed and automatically created by the procedure for the logical mapping. Don’t get irritated by the import/export logs or by things like the migration time stats from migtime.

5) ABAP dictionary adaption and checks

OK, now we know that declustering and depooling happen during the export. But as you know, the ABAP dictionary also has to be adjusted; otherwise the new tables are not known and can’t be used by the ABAP stack.
For this the following reports exist:

  • RUTCSADAPT => adjust cluster dictionary structures
  • RUTPOADAPT => adjust pool dictionary structures

The reports are called automatically by SWPM (if you check declustering/depooling in the dialog phase) and by SUM. They only adjust the dictionary; they won’t migrate or decluster any data!

This can only happen if the cluster tables are empty, which should be the result of the declustering/depooling, because all data has been imported into the transparent tables.
To check whether all data was transferred successfully, execute report SHDB_MIGRATION_CHECK (note 1785060). This report should always be part of your cutover procedure as a post-processing task.

So if this report finishes without errors AND warnings you should be fine, shouldn’t you? Unfortunately not, because the report won’t check all cluster / pool tables. Some of them are excluded for a reason (see note 1849890), some are just not checked, for example pool tables like KAPOL, KBPOL and UTAB. There is no official documentation for their existence as pool tables on HANA.
They are depooled but not checked by the report. Maybe SAP will adjust the documentation and the check report in the future.
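The idea behind such a check can be sketched in a few lines: the old containers should be empty and their transparent targets filled. The row counts below are made-up examples (on a real system you would fill the dict from M_TABLES), and the KAPOL -> A004 mapping is just one typical example pair I chose for illustration:

```python
# Sketch of an extra cutover post-check, complementing SHDB_MIGRATION_CHECK.
row_counts = {
    "CDCLS": 0, "CDPOS": 1_500_000, "PCDPOS": 12_000,  # declustered OK
    "KAPOL": 250, "A004": 250,                         # pool still filled -> suspicious
}

# old container -> its transparent target tables
containers = {"CDCLS": ["CDPOS", "PCDPOS"], "KAPOL": ["A004"]}

problems = []
for container, targets in containers.items():
    if row_counts.get(container, 0) > 0:
        problems.append(f"{container} still contains {row_counts[container]} records")
    for t in targets:
        if row_counts.get(t, 0) == 0:
            problems.append(f"target table {t} is empty")

for p in problems:
    print("WARNING:", p)
```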


6) HANA checks

There is another check on HANA, using the SQL statements from the SQL collection attached to note 1969700.
You can use the Minichecks (HANA_Configuration_MiniChecks*) or HANA_Tables_SpecialTables.
The statements check whether there are still any records in the tables CDCLS, EDI40, KAPOL, KOCLU and RFBLG.
Other ones like KBPOL or UTAB are not checked in the current version.
I currently have several customers with different pool tables which were correctly split and filled into transparent tables, but some entries exist in the new tables AND in the old original pool table. The dictionary structures are correct, the transparent tables are in use, and there were no migration errors. OSS messages about why this happened are still in processing… I assume that the procedure is buggy or that there are technical reasons which are not officially documented.
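Until the statement collection covers the missing tables, one could assemble an extended residual-record check oneself. The sketch below only builds the SQL string against the HANA M_TABLES monitoring view (TABLE_NAME and RECORD_COUNT are its real columns); run the result with your preferred HANA client.

```python
# Tables covered by the note 1969700 statements, per this post.
checked = ["CDCLS", "EDI40", "KAPOL", "KOCLU", "RFBLG"]
# Additional pool tables not covered in the current version.
extra = ["KBPOL", "UTAB"]

in_list = ", ".join(f"'{t}'" for t in checked + extra)
sql = (
    "SELECT TABLE_NAME, RECORD_COUNT FROM M_TABLES "
    f"WHERE TABLE_NAME IN ({in_list}) AND RECORD_COUNT > 0"
)
print(sql)
```

Any row returned by this statement is a container that still holds records after the migration and deserves a closer look.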


7) Technical takeaways

  • difference between cluster and pool tables
  • design cluster and pool tables on HANA
  • depooling and declustering procedure
  • ABAP adaption with reports RUTPOADAPT and RUTCSADAPT
  • check report SHDB_MIGRATION_CHECK
  • HANA statements for checking cluster/pool tables: HANA_Configuration_MiniChecks* and HANA_Tables_SpecialTables

8) SAP notes

1849890 – Changing tables from pooled to transparent

1785060 – Recommendations for performing the migration to SAP HANA

1892354 – SAP Strategy for Cluster and Pool Tables

2054699 – Subsequent declustering of an SAP system on SAP HANA using R3load

2227432 – How to: Declustering and depooling with NW 7.4 SP03 and higher on databases other than SAP HANA

1784377 – Checking pool tables and cluster tables

2634739 – How to get a list about the existing pool and cluster tables in the system?

1920165 – Downport: Find access to physical pool/cluster tables

1897665 – R3load: declustering support for tables in / namespace


4 Comments


  1. Douglas Hoover

     

    Thanks for the detailed description here…

    I am assuming this is only an issue in SoH and not S4H or are there still clustered or pooled tables in S4?

    Thanks

    Doug

    1. Jens Gleichmann Post author

       

      Hi Doug,

      a legitimate question. Let’s find out. I’m using an S/4HANA 1610 system for this test:

      DD02L:

      => 0 rows => no filled pool or cluster tables currently exist according to DD02L

      => in my system there are still some entries for pool tables, but they are not filled


      So, according to DD02L, there are no filled pool or cluster tables left in an S/4HANA system, but the dictionary still contains information about some pool tables, which in my case are not filled. As a consequence, tables like T157T were also created as transparent tables:

      But you can also see via DD06L that a pool table called ATAB still exists, with entries. So yes, it is still possible that pool tables exist in an S/4HANA system. Cluster tables also exist, but they are declustered and not filled. This was a freshly installed demo21 S/4 system directly from SAP, so there are no errors from a migration / transformation or anything like that. I’m not sure whether the entries in ATAB should be there… It might not be representative for other systems, which may have transformation errors.

       

       

      Regards,

      Jens

  2. Samuli Kaski

    Thanks Jens. Can you maybe share what kind of problems, issues or regressions you have seen due to declustering and/or depooling?

    1. Jens Gleichmann Post author

      Yes, of course. Some bugs in SUM, SWPM and R3load led to missing data in the target system. In some cases the data wasn’t migrated due to missing SGN files (R3load signal files for the parallel export/import procedure), and in another case the reports for the dictionary adjustment weren’t executed, which led to filled transparent tables while the system still worked with the cluster/pool tables, which were empty at this point in time.

      My lessons learned from these errors:

      1. check existence of transparent tables after migration / transformation
      2. check row count of old cluster / pool tables
      3. check report SHDB_MIGRATION_CHECK
      4. use table checker / SUM table comparison tool
      5. Do not change your working set of tools between the sandbox and the production system unless there are real bugs that affect your scenario; do not switch just because there might be a newer, better, cooler version of the tools. This can lead to new bugs.

       

      Regards,

      Jens

