Please find below a description of, and the reasons for, compressing selected periods of data in the planning cubes with audit mode turned ON.
Summary of the issue:
We have encountered issues with planning functions (mainly currency translation) processing too many records. The initial idea was to compress the cube with zero elimination (i.e. removal of records that sum up to 0).
The planning cubes run in audit mode, which means the system adds key values with an exact timestamp, source, etc. for every new tuple. At the same time, the audit and changelog data must be kept for at least the last six months. These two points combined made the compression impossible.
The remedy was an automated process that temporarily duplicates the cube's data older than 6 months based on a dynamic selection (data set DS1) and duplicates it once more, this time with static audit information (data set DS2).
Once that is done and the compression runs, the duplicated data from the first data set (DS1) is removed and the second data set (DS2) is compressed. Data from the last six months stays as is until the next run.
The process is scheduled to run once every 3 months, on a Saturday morning after the daily chains finish (event-based).
This process enables compression of the data in the planning cubes that is older than 183 days. Normally this would not be possible, as the tuples are reported at the Audit Dimension level (Time Stamp/User/Audit Mode/Source of Data), so every transaction/tuple is stored separately and is not available for compression.
E.g. the table /BIC/F* currently stores 96 MLN records (2.53 GB of data); after the compression it should be trimmed by about half, which will reduce the number of records processed in planning sequences.
Process Description & Development reason
The process contains two main steps, made up of two transformations from each cube to itself.
- The first transformation duplicates the data older than 183 days that has not yet been compressed, with the Audit Dimension filled statically (with constant values).
- The second transformation reverses (zeroes out) the Key Figures, and thereby all transactions older than 183 days that have the Audit Dimension filled dynamically (with different values).
For example, a tuple of +100 recorded with a dynamic timestamp ends up in the cube three times: the original (+100, dynamic audit), a static copy (+100, constant audit) and a reversal (-100, dynamic audit). Compression with zero elimination then removes the original together with its reversal, leaving only the compressible static copy.
Please find the new data flow visualization below.
The main reason for developing the process was to limit the data processed by the daily planning sequences; the currency translation sequences were failing on both COPA cubes.
The process is scheduled to run automatically once a quarter, on a Saturday morning, via a process chain that contains all the implemented logic executed in the proper order.
The process chain waits until it receives the information (an event) that the daily/weekly chains have ended.
Logic – Process chain
The process chain consists of two modules.
- The first (meta) module creates a backup of the cubes and executes the data loading via the 1st transformation (clearing audit) and the 2nd transformation (reversing key figures).
Please note that these data loads usually have to be divided based on FISCPER selections, as they process a lot of data; about 3 to 5 MLN records per DTP is advised.
- The second (sub) module updates the corresponding data targets (so that the new requests become compressible) and compresses the cube.
Logic – Data flow
Transformations enabling compression are carried out through InfoSources, two for each InfoCube.
- Loading through InfoSource1 duplicates the data older than 183 days with a statically filled Audit Dimension, as in the snapshots below.
- InfoSource1 (transformation) – used for clearing the audit
- InfoSource2 (transformation) – used for zeroing out the Key Figures
- Loading through InfoSource2 duplicates the original data older than 183 days that has the Audit Dimension filled dynamically, with reversed Key Figure values (zeroing it out). The reversal happens in the transformation EndRoutine, manually for each Key Figure field, like below:
LOOP AT RESULT_PACKAGE ASSIGNING <result_fields>.
  " one such assignment per Key Figure of the cube
  <result_fields>-/bic/… = <result_fields>-/bic/… * -1.
ENDLOOP.
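For the first transformation (clearing audit), the static fill of the Audit Dimension could look like the sketch below. All audit field names and constant values here are hypothetical, not the actual ones used in the cubes:

```abap
* Sketch only: audit field names (ZAUDTSTMP, ZAUDUSER, ZAUDMODE,
* ZAUDSRC) and the constant values are assumptions. The loop runs
* inside the transformation EndRoutine, overwriting the dynamic
* audit values with constants so the records become compressible.
LOOP AT RESULT_PACKAGE ASSIGNING <result_fields>.
  <result_fields>-/bic/zaudtstmp = '20000101000000'.  " static time stamp
  <result_fields>-/bic/zauduser  = 'COMPRESSION'.     " static user
  <result_fields>-/bic/zaudmode  = 'OFF'.             " static audit mode
  <result_fields>-/bic/zaudsrc   = 'COMPR'.           " static source of data
ENDLOOP.
```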
Data older than 183 days is selected in each DTP by a filter routine on the "Audit Time Stamp" time characteristic, as in the snapshot below.
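A minimal sketch of such a DTP filter routine, assuming the generated routine frame (which supplies l_s_range/l_t_range) and a hypothetical audit time stamp field ZAUDTSTMP:

```abap
* Sketch only: the field name ZAUDTSTMP is an assumption; the
* surrounding FORM and the l_t_range table come from the
* generated DTP filter routine frame.
DATA lv_cutoff TYPE sy-datum.

* Cut-off date: 183 days before today.
lv_cutoff = sy-datum - 183.

* Select everything older than the cut-off.
CLEAR l_s_range.
l_s_range-fieldname = 'ZAUDTSTMP'.
l_s_range-sign      = 'I'.
l_s_range-option    = 'LT'.
* Low value of the time stamp = cut-off date at midnight.
CONCATENATE lv_cutoff '000000' INTO l_s_range-low.
APPEND l_s_range TO l_t_range.
```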
Data selection for the reverse Key Figure load is based on a filter on the "Audit Mode" characteristic with value not equal to "OFF", as in the snapshot below.
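In the same routine frame, the "Audit Mode not equal OFF" restriction could be sketched as below (the field name ZAUDMODE is an assumption):

```abap
* Sketch only: exclude the records whose Audit Mode equals 'OFF',
* i.e. keep only records with Audit Mode <> 'OFF'.
CLEAR l_s_range.
l_s_range-fieldname = 'ZAUDMODE'.
l_s_range-sign      = 'E'.   " exclude...
l_s_range-option    = 'EQ'.  " ...the exact value
l_s_range-low       = 'OFF'.
APPEND l_s_range TO l_t_range.
```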
At the end of the last daily process chain the event is triggered, but only on Saturdays.
The decision is made with a formula like the one below:
The event is raised via the BTC_EVENT_RAISE program. The event itself has to be created beforehand in the SM62 t-code.
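The Saturday check and the event raise could be sketched as below; the event name Z_COMPR_START is an assumption, and the BP_EVENT_RAISE call does the equivalent of running the report BTC_EVENT_RAISE:

```abap
* Sketch only: Z_COMPR_START is a hypothetical event name; it
* would have to exist in SM62 before it can be raised.
DATA lv_day TYPE scal-indicator.

* DATE_COMPUTE_DAY returns '1' (Monday) .. '7' (Sunday).
CALL FUNCTION 'DATE_COMPUTE_DAY'
  EXPORTING
    date = sy-datum
  IMPORTING
    day  = lv_day.

IF lv_day = '6'.  " Saturday
* Same effect as executing the report BTC_EVENT_RAISE.
  CALL FUNCTION 'BP_EVENT_RAISE'
    EXPORTING
      eventid = 'Z_COMPR_START'.
ENDIF.
```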
From the earliest possible end time of the last nightly chain, the compression chain waits for the event from the previous step via an Interrupt process:
The process lowered the number of entries and the amount of data as shown below (the statistics can be checked via the DB02 or DBACOCKPIT t-codes).
- Estimated Maximum Memory Consumption (GB): lowered by 61% (Δ 1.91 GB)
- Current Memory Consumption in Main Storage (GB): lowered by 64% (Δ 1.85 GB)
- Current Total Memory Consumption (GB): lowered by 67% (Δ 2.11 GB)
- Size on Disk (GB): lowered by 9% (Δ 0.42 GB)
- Number of Entries: lowered by 52% (Δ 58 488 369)
Since the compression, the planning sequences have been running several times faster.
If anything is unclear, please let me know.