     A bit of background on this endeavor: I'm currently running a huge migration project from Oracle to HANA; by huge I mean the productive Oracle database is 80 TB. We're using PCA and Delta Q cloning alongside it, as a regular migration would not fit into a long-weekend downtime. That being said, anyone with extended migration experience will agree that this poses challenges from a logistics perspective. Table splitting, for example, even if executed during uptime, cannot be done too far in advance, as the split options might no longer be relevant; if done too close to the downtime window, it might not finish in time. Hence we need ways to automate as much as possible. In this post I want to show a way to automate getting the WHR files for the top <N> partitioned tables without executing SWPM or R3ta. The reason for not using SWPM/R3ta is that splitting partitioned tables does not work as expected (I will come back to this in another post).

Why splitting partitioned tables by partitioning column?


     Assumption: It is usually better to split a partitioned table by its partitioning key (column) rather than letting R3ta find the most representative column within the primary key index. When filtering on the partitioning column, Oracle takes advantage of a mechanism called partition pruning to select rows instead of going via an index; in a nutshell, Oracle goes directly to the partition(s) needed to get the rows.
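You can verify that pruning kicks in by checking the execution plan of a query that filters on the partitioning column. A minimal sketch (PARTITIONED_TAB and PART_COL are placeholder names for a partitioned table and its partitioning key, not objects from this system):


$ sqlplus / as sysdba

SQL> explain plan for select count(*) from SAPSR3.PARTITIONED_TAB where PART_COL = '001';

SQL> select * from table(dbms_xplan.display);


A pruned plan shows an operation such as PARTITION RANGE SINGLE (or PARTITION RANGE ITERATOR) instead of a scan across all partitions.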


How to proceed, in a nutshell ...


1. Extract the top partitioned tables from the SAPSR3 schema, then:

     1.1. Generate the <TABLE>.STR files for each table

     1.2. Generate the <TABLE>.WHR files for each table

2. Use each <TABLE>.WHR to generate <TABLE>-<N>.WHR files, where <N> runs from 1 to the number of splits.


The procedure


#1. Get the top partitioned tables from SAPSR3 schema.


a. Get the top <N> partitioned tables, extract their partitioning key, then create the R3ta_hints.txt file.


Note: the value of <N> for the top partitioned tables is the consultant's choice. I play with the value until the smallest table in the list is about 20 GB, and split all tables into 5 GB slices.

Note: throughout this exercise I'll create a couple of locations to store files. You can use your own as long as you don't lose track of them.

Note: all scripts must be created and executed as <sid>adm.


$ mkdir -p /tmp/marius

$ cd /tmp/marius

$ sqlplus / as sysdba


SQL> set linesize 300;

SQL> set pagesize 1000;

SQL> spool get_partitioning_key.sql


SQL> select * from

(

with

table_segments as (

  select

   s.owner, s.segment_name table_name, sum(s.bytes) total_size

  from

   dba_segments s

  where

   s.segment_name in (select table_name from dba_tables where owner='SAPSR3' and partitioned='YES') and

   s.segment_type like 'TABLE%'

  group by

   s.owner,s.segment_name

),

lob_segments as (

  select

   l.owner, l.table_name, sum(s.bytes) total_size

  from

   dba_lobs l join dba_segments s on

    l.owner=s.owner and

    l.segment_name=s.segment_name

  group by

   l.owner,l.table_name

)

select

'select name || '','' || column_name from dba_part_key_columns where name='''||tab.table_name||''';'

from

table_segments tab left join lob_segments lob on

  tab.owner=lob.owner and

  tab.table_name=lob.table_name

  where tab.owner='SAPSR3'

order by round((tab.total_size+nvl(lob.total_size,0))/1024/1024/1024,3) desc

)

where rownum<=300;


Adjust:

     - schema from SAPSR3 to the actual one;

     - 300 to whatever number of top tables is needed.


The output of this SQL statement is a file called get_partitioning_key.sql, which we need to execute. Before we can execute this file we need to clean it up, as the SQL*Plus spool command adds some additional unwanted text to it. So edit the file and remove everything except lines like:


select name || ',' || column_name from dba_part_key_columns where name='<TABLE_NAME>';
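If you prefer not to edit the file by hand, the same cleanup can be scripted (a sketch, assuming every generated statement starts with 'select name'):


$ grep '^select name' get_partitioning_key.sql > get_partitioning_key.tmp

$ mv get_partitioning_key.tmp get_partitioning_key.sql
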

Now, execute get_partitioning_key.sql as follows:


$ sqlplus / as sysdba

SQL> set heading off;

SQL> spool R3ta_hints.txt

SQL> @get_partitioning_key.sql

SQL> exit


And again clean up the file by deleting empty lines. You can do this with a vi command, e.g.:


$ vi R3ta_hints.txt

:g/^$/d
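
The equivalent non-interactive cleanup (assuming GNU sed, so that -i edits in place):


$ sed -i '/^$/d' R3ta_hints.txt
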


Checkpoint 1: /tmp/marius/R3ta_hints.txt exists.


b. Get the top <N> partitioned tables, compute the number of split slices per table, then create the tables.txt file.


$ cd /tmp/marius

$ sqlplus / as sysdba


SQL> set linesize 300

SQL> set pagesize 1000

SQL> spool tables.txt


SQL> select * from

(

with

table_segments as (

  select

   s.owner, s.segment_name table_name, sum(s.bytes) total_size

  from

   dba_segments s

  where

   s.segment_name in (select table_name from dba_tables where owner='SAPSR3' and partitioned='YES') and -- get top partitioned tables

   s.segment_type like 'TABLE%'

  group by

   s.owner,s.segment_name

),

lob_segments as (

  select

   l.owner, l.table_name, sum(s.bytes) total_size

  from

   dba_lobs l join dba_segments s on

    l.owner=s.owner and

    l.segment_name=s.segment_name

  group by

   l.owner,l.table_name

)

select

tab.table_name||'%'||ceil(round((tab.total_size+nvl(lob.total_size,0))/1024/1024/1024,3)/5)

from

table_segments tab left join lob_segments lob on

  tab.owner=lob.owner and

  tab.table_name=lob.table_name

  where tab.owner='SAPSR3'

order by round((tab.total_size+nvl(lob.total_size,0))/1024/1024/1024,3) desc

)

where rownum<=300;


Adjust:

     - schema from SAPSR3 to the actual one;

     - 300 to whatever number of top tables is needed.

     - .../1024/1024/1024,3)/5 - change 5 to whatever split size is needed, e.g. 6 for 6 GB slices.


The output of this SQL statement is a file called tables.txt that contains entries of the form <TABLE_NAME>%<No.Of.Slices>; e.g. a 23 GB table split into 5 GB slices gives ceil(23/5) = 5, so its entry reads <TABLE_NAME>%5. Edit the file and clean it up just like we did for get_partitioning_key.sql earlier.


Checkpoint 2:

     /tmp/marius/R3ta_hints.txt exists.

     /tmp/marius/tables.txt exists.


#1.1 Generate <TABLE>.STR for each table.


We will use the tables.txt file and a couple of short scripts to generate the <TABLE>.STR files. See below:


$ cd /tmp/marius

$ vi script1.sh


#!/bin/bash
# Read a file with <TABLE>%<No.Of.Slices> entries and print each table
# name prefixed with "T " - the table-list format R3ldctl expects.
table_list=${1}
x="T "
for line in `cat $table_list`; do
  table=`echo $line | cut -f1 -d"%"`   # strip the %<slices> suffix
  echo "$x$table"
done


$ chmod +x script1.sh

$ ./script1.sh tables.txt >> tables_for_str.txt
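
tables_for_str.txt now contains one entry per table, e.g. (using DBTABLOG as an illustration):


T DBTABLOG
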

Now we generate <NAMETAB>.STR:


$ mkdir -p /tmp/marius/STR

$ R3ldctl -i tables_for_str.txt -p STR -l R3ldctlExport.log
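
It's worth checking the R3ldctl log for errors before continuing (a simple sketch):


$ grep -i error R3ldctlExport.log
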

$ vi script2.sh

#!/bin/bash
# Same as script1.sh but without the "T " prefix: print the bare table
# names for the package splitter's -tableFile option.
table_list=${1}
for line in `cat $table_list`; do
  echo $line | cut -f1 -d"%"
done


$ chmod +x script2.sh

$ ./script2.sh tables.txt >> tables_for_str_2.txt


Now we extract individual <TABLE>.STR files from <NAMETAB>.STR:

     - download SPLIT_4-20009115.SAR from SMP

     - stage and extract the archive (e.g. with SAPCAR) to /tmp/marius/SPLIT


$ setenv JAVA_HOME <Java6 location>/jre

$ $JAVA_HOME/bin/java -classpath /tmp/marius/SPLIT/split.jar com.sap.inst.split.PackageSplitter -strDirs /tmp/marius/STR -tableFile /tmp/marius/tables_for_str_2.txt -str -outputDir /tmp/marius/STR
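
As a quick sanity check, compare the number of STR files against the table list; the counts should roughly match (the STR directory also still holds the nametab STR generated by R3ldctl):


$ ls /tmp/marius/STR/*.STR | wc -l

$ wc -l < /tmp/marius/tables_for_str_2.txt
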


Checkpoint 3:

     /tmp/marius/R3ta_hints.txt exists.

     /tmp/marius/tables.txt exists.

     /tmp/marius/STR/<TABLE>.STR files exist.


1.2. Generate the <TABLE>.WHR files for each table


To generate the <TABLE>.WHR files we're going to use another piece of great SAP technology: SAPuptool, which ships with SUM (Software Update Manager), the tool behind the Database Migration Option (DMO). SUM is available on SMP as part of SL TOOLSET 1.0 (any available version will do).


Prereqs:

     - download SUM from SL TOOLSET 1.0 available on SMP

     - stage and extract the SAR archive to /tmp/marius/; this creates the /tmp/marius/SUM folder


a. generate each <TABLE>.WHR individually


$ mkdir -p /tmp/SCRIPTS

$ cd /tmp/SCRIPTS

$ vi whr_split.sh


#!/bin/bash
# For each <TABLE>,<COLUMN> pair in the hints file, look up the table's
# number of slices in the split file (<TABLE>%<slices> entries) and start
# SAPuptool in the background to compute the WHERE conditions.
table_list=${1}      # R3ta_hints.txt
split_file=${2}      # tables.txt
where_directory=${3} # output directory for the .WHR and .LOG files
for line in `cat $table_list`; do
  table=`echo $line | cut -d ',' -f 1`
  column=`echo $line | cut -d ',' -f 2`
  # anchor the pattern so that e.g. BSIS does not also match BSIS_BCK
  no_of_splits=`grep "^$table%" $split_file | cut -d '%' -f 2`
  # replace "/" in table names (e.g. /BIC/ tables) to get a valid file name
  log_file=`echo $table | sed 's/\//_/g'`
  # segment size is the fraction of the table per slice, e.g. 5 slices -> 0.2000
  segment_size=`echo "scale=4; 1/$no_of_splits" | bc -l`
  segment_size="0$segment_size"
  nohup /tmp/marius/SUM/abap/bin/SAPuptool splittable table=$table keyfields=$column segmentsize=$segment_size count=estimate checkbounds wherefile=$where_directory/$log_file.WHR >> $where_directory/$log_file.LOG &
done

$ mkdir -p /tmp/marius/WHR

$ chmod +x whr_split.sh

$ ./whr_split.sh /tmp/marius/R3ta_hints.txt /tmp/marius/tables.txt /tmp/marius/WHR

Note! Executing this script triggers background executions of SAPuptool, which generate the WHR file for each partitioned table, split by its partitioning column. The script was tested with 300 parallel SAPuptool processes; the increase in CPU utilization was merely about 10%.

Check that all SAPuptool processes have finished before moving to the next step. Use "ps -ef | grep SAPup | wc -l" or any other way to do it.
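
A simple way to block until all of them are done (a sketch using pgrep; assumes no unrelated SAPuptool processes run on this host):


$ while pgrep -f SAPuptool > /dev/null; do sleep 60; done
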

b. Clean up each *.WHR file to match the output of R3ta.

$ cd /tmp/SCRIPTS

$ vi rm_cnt.sh

#!/bin/bash
# Remove the "cnt: ..." lines from every WHR file in the given
# directory so that the files match plain R3ta output.
workdir=${1}
for file in `ls $workdir/*.WHR`; do
  tmpfile="$file.temp"
  # print every line except those starting with "cnt: "
  sed -n '/^cnt: .*$/!p' $file > $tmpfile
  mv $tmpfile $file
done


$ chmod +x rm_cnt.sh

$ ./rm_cnt.sh /tmp/marius/WHR
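
To verify the cleanup, the following should print nothing:


$ grep -l '^cnt:' /tmp/marius/WHR/*.WHR
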


2. Use each <TABLE>.WHR to generate the <TABLE>-<N>.WHR files, where <N> runs from 1 to the number of splits (the estimate-based bounds can yield one extra slice, as the example below shows).


e.g. for the table entry DBTABLOG%5 we would get:


DBTABLOG-1.WHR

DBTABLOG-2.WHR

DBTABLOG-3.WHR

DBTABLOG-4.WHR

DBTABLOG-5.WHR

DBTABLOG-6.WHR


generated from DBTABLOG.WHR.


$ setenv JAVA_HOME <Java6 location>/jre

$ setenv MIG_HOME /tmp/marius/SPLIT

$ cd /tmp/marius/SPLIT

$ whr_split.sh /tmp/marius/R3ta_hints.txt /tmp/marius/tables.txt /tmp/marius/WHR
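
A quick way to confirm that the per-slice files were created is to count them (a sketch; the pattern matches the <TABLE>-<N>.WHR naming shown above):


$ ls /tmp/marius/WHR/*-[0-9]*.WHR | wc -l
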


Checkpoint 4:

     /tmp/marius/R3ta_hints.txt exists.

     /tmp/marius/tables.txt exists.

     /tmp/marius/STR/<TABLE>.STR files exist.

     /tmp/marius/WHR/<TABLE>-<N>.WHR files exist.


Last but not least, we have to use the files we've generated so far, which means copying *.STR and *.WHR to the export location:


- suppose our export location is /export


$ mkdir -p /export/ABAP/DATA

$ cd /tmp/marius/STR

$ cp *.STR /export/ABAP/DATA

$ cd /tmp/marius/WHR

$ cp *.WHR /export/ABAP/DATA


Nothing else needs to be done for the top partitioned tables.


Note: Make sure to include these files in the whr.txt file, and then supply that file on the SWPM screen before starting the export.
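One way to build the table list for that file (a sketch; double-check the exact format your SWPM version expects, here assumed to be one table name per line, and stage the file wherever you point SWPM to):


$ cut -d '%' -f1 /tmp/marius/tables.txt > whr.txt
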

Note: Please *test* at least one complete export/import using this procedure before production use.
