Replicating APO Planning Data from liveCache to HANA for Analysis
Applies to:
SAP SCM 7.2 Support Pack 8 and HANA 1.00.74.390550 (NewDB100_REL)
Summary
This document describes how to configure the replication of planning data (transaction data) from SAP APO liveCache to an SAP HANA system through a secondary database connection, and how to download and install the HANA content model for business analysis.
Master data is replicated using SAP Landscape Transformation (SLT), but that is not possible for planning data, because it is not stored in database tables but in SAP liveCache.
We run the replication reports, or schedule the delta replication, to push the planning data from Demand Planning (DP) and Supply Network Planning (SNP) to the HANA system, where it is available for business analysis.
Author: Santanu Bose
Company: Wipro Ltd.
Created on: 26 July 2014
Table of Contents
Step 1: Install HANA DB Client in APO system
Step 2: Setup Secondary DB Connection with HANA
Step 3: Set up a replication model and activate it
Step 4: Run Full Replication
Step 5: Run Delta Replication and Scheduling
Step 6: HANA Content Model installation
Step 7: Run Business Analysis on data in HANA
REFERENCE
Disclaimer and Liability Notice
Step 1: Install HANA DB Client in APO system.
The SAP HANA database client must be installed on the SCM (APO) system.
Step 2: Setup Secondary DB Connection with HANA
Create Secondary DB connection (/ndbco)
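The secondary connection maintained in DBCO corresponds to an entry in table DBCON. As a minimal sketch (in Python, purely for illustration): the connection name, user, host, and port below are placeholder assumptions, not values from this setup.

```python
# Sketch of the fields behind a DBCO secondary connection entry (table DBCON).
# CON_NAME, user, host, and port are illustrative placeholders.

def build_dbcon_entry(con_name: str, user: str, host: str, port: int) -> dict:
    """Assemble the parameters DBCO asks for when defining a HANA connection."""
    if not 1 <= port <= 65535:
        raise ValueError("port out of range")
    return {
        "CON_NAME": con_name,         # logical name referenced by the replication model
        "DBMS": "HDB",                # database type: SAP HANA
        "USER_NAME": user,            # should match the SLT target schema user
                                      # (see the discussion in the comment thread)
        "CON_ENV": f"{host}:{port}",  # HANA host and SQL port
    }

entry = build_dbcon_entry("HANA_APO", "APO_HDB", "hanahost", 30015)
print(entry["CON_ENV"])
```

The user name is the key field here: as discussed in the comments below the article, it determines the target schema in HANA.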
Step 3: Set up a replication model and activate it.
Create Replication model (/SAPAPO/REPL_MOD_VIE)
Define the Key Figures:
Activate the replication model (/n/SAPAPO/REPL_MOD_MNG)
Upon activation, the tables should be created in the HANA database.
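One way to verify that activation actually created the tables is to query HANA's catalog. The sketch below uses SAP's hdbcli Python driver; the host, port, credentials, and schema name are placeholder assumptions, not values from this setup.

```python
# Check which tables exist in the replication target schema after activation.
# Host, port, user, password, and schema are illustrative placeholders.

def tables_query(schema: str) -> str:
    """SQL that lists all tables in the given schema from the HANA catalog."""
    return ("SELECT TABLE_NAME FROM SYS.TABLES "
            f"WHERE SCHEMA_NAME = '{schema}' ORDER BY TABLE_NAME")

def list_replicated_tables(host, port, user, password, schema):
    """Connect with SAP's hdbcli driver and fetch the table names."""
    from hdbcli import dbapi  # pip install hdbcli
    conn = dbapi.connect(address=host, port=port, user=user, password=password)
    try:
        cur = conn.cursor()
        cur.execute(tables_query(schema))
        return [name for (name,) in cur.fetchall()]
    finally:
        conn.close()

print(tables_query("APO_HDB"))
```

After activation, calling `list_replicated_tables` against the target schema should show the generated key-figure tables.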
Step 4: Run Full Replication.
Replication of Planning Data to SAP HANA (/SAPAPO/REPL)
Step 5: Run Delta Replication and Scheduling
Schedule delta replication (/SAPAPO/REPL_DELTA )
The tables’ data should now be available in the HANA database.
Step 6: HANA Content Model installation
Download the HANA content model (/SAPAPO/REPL_REPLICATION_MODE)
Install the HANA content model using the import function in HANA Studio.
Step 7: Run Business Analysis on data in HANA
Run analysis on the data using the calculation views that were installed.
REFERENCE
Disclaimer and Liability Notice
This document may discuss sample coding or other information that does not include SAP official interfaces and therefore is not
supported by SAP. Changes made based on this information are not supported and can be overwritten during an upgrade.
SAP will not be held liable for any damages caused by using or misusing the information, code or methods suggested in this document,
and anyone using these methods does so at his/her own risk.
SAP offers no guarantees and assumes no responsibility or liability of any type with respect to the content of this technical article or
code sample, including any liability resulting from incompatibility between the content within this document and the materials and
services offered by SAP. You agree that you will not hold, or seek to hold, SAP responsible or liable with respect to the content of thisdocument.
Hi Santanu - great writeup of the work we had done. I'm glad I could help you resolve some of these hurdles and get it up and running.
Happy HANA,
Justin
Also to add on here: it's important that your secondary database connection has the same name as the SLT connection to HANA. The reason is that the target schema for SLT will be the same as the one APO writes to, i.e. the liveCache data will be dropped into tables within the same SLT replication schema.
In addition, as a prerequisite there are a ton of OSS notes to consider applying in SCM, depending on what version you are on. These will affect both the replication functions and the Supply Chain InfoCenter, if that is part of your solution.
1784479 - relevant if you are on SCM SP06
1863896 - relevant if you are on SCM SP08
1904500 - relevant if you are on SCM SP08
Happy HANA,
Justin
Thanks, Justin, for sharing your thoughts and knowledge of the OSS messages.
But can we get some more clarity on the same naming for the secondary database connection in SCM and SLT to HANA? I think the user name parameter of the database connection is the driving factor for connecting to the target schema in HANA, and yes, that has to remain the same in both SLT and APO to get the master and transaction (planning) data under the same SLT replication schema.
Thanks,
Santanu
Thanks Santanu & Justin for helpful information.
@Justin: Also to add on here, it's important that your secondary database connection has the same name as the SLT connection to HANA. The reason being is that the target schema for SLT will be the same as the APO, ie the livecache data will be dropped into tables within the same SLT replication schema.
How is this possible? Could you please elaborate? I mean, we will create the SLT schema and it gets created in HANA Studio. But what would be the password for that schema, as I believe we never log in with the schema user ID? Please help me identify the password for the SLT schema.
Hey Sonil, the actual user that is used in the secondary DB connection in APO needs to be the same user as the HANA schema owned by SLT.
You're right, by default you won't get the password for the SLT user defined from the initial SLT configuration, a few extra steps are required. In the example above the dbco doesn't show a realistic user.
1) Set up SLT connection from APO->HANA, you should now have a schema like "APO_HDB" created in HANA and also the same named user created.
2) Ensure that the SLT master job is stopped
3) As the SYSTEM user, or a similarly provisioned security admin, change the password for the user APO_HDB in HANA.
4) In SLT, in dbco, update credentials for the HANA connection with the new password.
5) For the dbco configuration step in APO (as laid out by Santanu), use the same APO_HDB user to connect.
6) In SLT, restart the master job and the new credentials should work.
7) APO replication config can continue
In this way, the liveCache replication will drop the data into the same schema that SLT uses for replicating actual tables from APO. This is not super clear in the reference material; it is only lightly referred to, but the above steps should make it clearer.
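For reference, step 3 above boils down to a single SQL statement run by a security admin. A hedged sketch follows; the user name and password are placeholders, and actually executing the statement would again need a driver such as hdbcli.

```python
# Build the ALTER USER statement a security admin runs in step 3.
# User name and password below are illustrative placeholders.

def alter_password_sql(user: str, new_password: str) -> str:
    """SQL to reset the SLT schema user's password in HANA."""
    return f'ALTER USER {user} PASSWORD "{new_password}"'

print(alter_password_sql("APO_HDB", "NewSecret1"))
```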
Happy HANA,
Justin
Thanks Justin. This is indeed very helpful insights.
Well, I have a quick question: this liveCache method takes care of ONLY time-series planning data, but is there any way to load order-based planning data into HANA? Just curious if there is any way we can do so.
Once again much appreciated !
I used the SYSTEM user, but I am getting an error:
Could not open connection <CONNECTIONAME>.
DBI error 16 occurred.
Please note, I have used the same connection name as the APO server connection created in SLT.
Hi Santanu,
Nice blog on how to replicate planning data.
Below document from ASUG will also be very helpful:
http://events.asug.com/2013AC/Supply%20Chain%20Management/1201%20New%20Operational%20Analytics%20for%20SAP%20Advanced%20…
Below SAP notes will also help:
1784479 - Collective note for APO Planning Data Replication to HANA
1863896 - Data replication from SCM APO to SAP HANA Live
Regards,
Vivek
The reports mentioned in the PPT are out of the box; how can I see/execute these reports?
Can you please help?
Hi Shantanu,
Thanks.
Can you please shed some light on how to transfer this replication model configuration from Dev to other environments such as Quality and Production?
This would help a lot .
Regards
Kamal
Kamal,
I created an OSS message to ask SAP this very question. According to SAP OSS the replication model needs to be created locally.
Thanks,
Matt
Thanks Matt,
You mean this configuration has to be created manually in all environments, i.e. Quality, Prod, etc.?
Correct.
Regards
Kamal
Yes - this is an unfortunate fact - none of the replication models are transportable. I have taken 3-4 models to production at my current client and in each system it must be created manually. This means you must take care in the model creation or risk missing a measure.
Regards,
Justin
Hi Shantanu,
I have followed your instructions, and I am sending data out to HANA from liveCache. One gap that we find is that we cannot see the initial bucket in the HANA DB. Is it possible to send this to the HANA DB?
Thanks,
Matt
You don't have any control over the time buckets that are replicated. This is all going to be set in APO based on the time horizons for the specific planning data view (I assume). In the 4 models I have worked on (3 SNP), there was never an initial bucket available in any of them.
I'm not sure how an INITIAL bucket would be replicated, since the C_MONTH is part of the characteristics "key".
It may be worth opening an OSS message on this. There are only a handful of processors that specialize on this solution and they are usually quite fast and knowledgeable.
Regards,
Justin
Justin -
Thanks much for your reply really appreciate it.
I have not had access to SAP source systems for the last year, so I could not check on this 🙁, as I have been working with Oracle ERP as the source.
Thanks,
Santanu
Hi Santanu,
Does this model support real-time reporting from APO liveCache? We have a requirement for reporting on APO liveCache data and would like to know if that is possible.
Thanks,
Balaji Venugopal
The best you can get is near real-time, by setting the replication job frequency in APO appropriately. I don't see a reason you couldn't set this to run every minute if you really want to.
Regards,
Justin
Justin, by any chance do you know how to see/execute the out-of-the-box reports mentioned in the ASUG PDF in one of the comments?
Thanks
Kiru