Migration from POSDM to POSDTA
In my first blog I wrote about the key differences between POSDM and POSDTA (if you haven't read it yet, here is the link: https://blogs.sap.com/2019/03/03/posdm-vs-posdta/).
In my second blog I talked about the journey from POSDM to POSDTA and the various approaches; here is the link: https://blogs.sap.com/2019/03/04/journey-from-posdm-to-posdtacar/
In this blog, let's focus on the three key elements of migration from POSDM to POSDTA (CAR).
1. Configuration
a) The current configuration needs to be recorded into a new transport.
b) The configuration TR needs to be released and imported into CAR (POSDTA) with the "Ignore component version" flag.
c) Configuration should only be transported after the code.
d) Additional configuration will have to be in place to accommodate new features in POSDTA such as order channel, data status determination, loyalty distribution, etc. (depending on the customer's business requirements).
e) SLT is enabled and master data is being replicated into CAR.
2. Custom code
a) PIPE (POS Inbound Processing Engine) related custom BAdIs within POSDM can be transported across to POSDTA/CAR.
b) The required activation of the BAdIs needs to occur after the import (step a above).
c) Standard tables such as /bi0/pplant, /bi0/pmaterial, /bi0/prpa_mean and other such tables should be replaced with the new ABAP tables (HANA views replicated from ECC into CAR as part of SLT).
d) BW objects that were part of POSDM need to be replaced with the new ABAP objects.
3. Transactional data (TLOG data)
a) This is generally done using the POS Transaction Data Migration Report (transaction code /POSDW/MIGR) from POSDM to POSDTA (CAR).
b) Ranges of stores and business days can be used as filter criteria to migrate transactional data from POSDM to POSDTA (CAR).
c) The relevant sizing activity for the POSDTA (CAR) data storage is done considering the current volume/size of the transactional data in POSDM.
d) Data in POSDTA is stored in an uncompressed flat format (in /POSDW/TLOGF), compared to /POSDW/TLOGS in POSDM. There are also some additional standard fields introduced in POSDTA compared to POSDM, so due diligence is required before importing the transactional data into POSDTA.
A few additional points to note:
1. Any custom extension segments that are part of the POSDM structure will need to be analysed, and it should be agreed whether they are to be stored in the /POSDW/TLOGF table or the /POSDW/TLOGF_EXT tables, depending on the usage of these fields and the configuration in POSDTA.
2. HANA sizing and SLT sizing are key activities that need to occur prior to the data migration activities.
3. BW transports are not relevant for CAR, hence there is no need to transport/import them across.
4. Any SAP Notes and Basis transports specific to POSDM do not need to be imported/transported across to POSDTA/CAR.
5. When it comes to historical data migration, any data that is already archived in POSDM cannot be migrated into POSDTA/CAR. The only way is to resend the TLOGs to POSDM and then migrate.
6. Make sure any tasks that are set to "immediate processing" are turned off during the migration of transactional data, to avoid unnecessary processing of data within POSDTA. These tasks can be turned back on later, when POSDTA is ready to send the data to downstream systems.
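As an illustration of the store/business-day selection in point b) above, the filter amounts to a simple range check per transaction. This is only a rough Python sketch of the idea; the field names are hypothetical and not the actual /POSDW/MIGR selection parameters:

```python
# Illustrative sketch of range-based selection of transactions, as used when
# migrating TLOG data by store range and business-day range.
# Field names ("store", "business_day") are made up for this example.
from datetime import date

def in_scope(txn, store_from, store_to, day_from, day_to):
    """Return True if the transaction falls inside both selection ranges."""
    return (store_from <= txn["store"] <= store_to
            and day_from <= txn["business_day"] <= day_to)

transactions = [
    {"store": "R100", "business_day": date(2019, 3, 1)},
    {"store": "R205", "business_day": date(2019, 3, 2)},
    {"store": "R900", "business_day": date(2019, 3, 2)},  # outside store range
]

selected = [t for t in transactions
            if in_scope(t, "R100", "R500", date(2019, 3, 1), date(2019, 3, 31))]
# 'selected' keeps R100 and R205; R900 falls outside the store range.
```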
Thanks for this.
We are currently just using POSDTA on HANA with a possibility of implementing other parts of CAR in the future.
The upgrade from legacy POSDM was rather seamless, and none of our custom code needed changing since we used the standard PIPE functions to read and process TLOGs.
While it is quicker and thus very tempting to directly query TLOGF now that it is a standard flat uncompressed table, looking at the current architecture (rowkey/parentkey and so many repeated columns) it seems a bit messy to me, and it won't surprise me if they change it going forward.
Even though TLOGs are uncompressed, the table can still hold duplicate transactions, so a direct query on TLOGF is not recommended. If you use a join with the other dependent tables, it might work though. 🙂
Could you please let us know the dependent tables that are to be used in join conditions with TLOGF to restrict duplicate transactions.
TLOGs are stored in the table irrespective of validation. So when you use them directly for a query, you need to either put custom code in place to restrict duplicates (i.e. delete duplicates from the internal table considering key fields) or enable the data status in the TLOG to ensure it is good to use, which should be updated by your tasks.
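The "delete duplicates considering key fields" approach mentioned here can be sketched roughly as follows. This is an illustrative Python sketch of the logic (mirroring what ABAP's SORT plus DELETE ADJACENT DUPLICATES would do on an internal table), not actual SAP code, and the chosen key fields are an assumption:

```python
def dedupe(transactions, key_fields=("store", "business_day", "txn_index")):
    """Keep only the first occurrence of each transaction key.
    Key fields here (store, business day, transaction index) are assumed
    for illustration; use whatever uniquely identifies a TLOG transaction."""
    seen = set()
    unique = []
    for txn in transactions:
        key = tuple(txn[f] for f in key_fields)
        if key not in seen:
            seen.add(key)
            unique.append(txn)
    return unique

rows = [
    {"store": "R100", "business_day": "20190301", "txn_index": 1, "amount": 10.0},
    {"store": "R100", "business_day": "20190301", "txn_index": 1, "amount": 10.0},  # duplicate
    {"store": "R100", "business_day": "20190301", "txn_index": 2, "amount": 5.0},
]
clean = dedupe(rows)
# 'clean' holds two rows; the duplicate txn_index 1 entry is dropped.
```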
Thank you so much for the reply.
We are facing an issue even with the data status field enabled; I am not sure why it is not identifying the duplicate or error transactions. Even duplicate transactions are marked with DATASTATUS as 2.
There is one more field, TRANSSTATUS, in the /POSDW/TLOGF table, but it is always blank. Is there a way to populate this field with a status by implementing a BAdI?
We are using SLT to create HANA views and BW data models to extract data from the /POSDW/TLOGF table.
Could you please provide details on how to check and delete duplicates and restrict them from reporting.
Also, would like to know what the use of /POSDW/TSTAT is. Can we use this table, applying some logic, to identify duplicates in /POSDW/TLOGF and pass indicators?
I am not sure why DATASTATUS is not getting updated correctly. /POSDW/TSTAT stores the task status of individual transactions at task level, which you can use as a condition to filter out the duplicates based on the status of the duplicate check task.
Thank you for your reply.
Would like to know the relation between /POSDW/TLOGF and /POSDW/TSTAT for identifying error/duplicate records for sales, finance and movement type transactions, so as to restrict them from flowing to BW.
It is good to hear your migration/upgrade went quite smoothly. Inventory visibility would definitely be a good feature to enable, and if you have Lumira you can view some of the standard reports; alternatively, you can make use of the HANA VDMs to view current stock (processed and unprocessed sales) and cross-company stock in transit.
Personally, I don't foresee any changes to the TLOGF table structure going forward, as it is referred to by various other components within CAR as well, especially for analytics/reporting 🙂 Having said that, we can only wait and watch what comes our way 🙂
I'd suggest first creating a separate post for your query, as the discussion happening here is not relevant to the blog posted by Aram. A wider group of consultants can help you with your query on a relevant post.
To answer your query one last time in this post (as it is not relevant to the title of the post): look at the transaction index to join these two tables.
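Roughly, the join described above looks like the sketch below: keep only the TLOGF rows whose duplicate check task in TSTAT completed cleanly, joining on the transaction index. This is a simplified Python illustration only; the field names, task name, and status value are assumptions, and the real tables carry more key fields (store, business day, etc.):

```python
def filter_by_task_status(tlog_rows, task_rows,
                          task="DUPLICATE_CHECK", ok_status="C"):
    """Keep only TLOG rows whose given task finished with the given status,
    joining the two row sets on the transaction index.
    Task name and status code are illustrative, not real SAP values."""
    ok_indexes = {t["txn_index"] for t in task_rows
                  if t["task"] == task and t["status"] == ok_status}
    return [r for r in tlog_rows if r["txn_index"] in ok_indexes]

# Simplified stand-ins for /POSDW/TLOGF and /POSDW/TSTAT rows:
tlogf = [
    {"txn_index": 1, "amount": 10.0},
    {"txn_index": 2, "amount": 5.0},   # flagged by the duplicate check
]
tstat = [
    {"txn_index": 1, "task": "DUPLICATE_CHECK", "status": "C"},
    {"txn_index": 2, "task": "DUPLICATE_CHECK", "status": "E"},
]
good = filter_by_task_status(tlogf, tstat)
# Only txn_index 1 survives; the errored transaction is filtered out.
```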