Some days ago, Mark Finnern wrote to me to offer his congratulations on my ascending the throne of best forum contributor on SDN (really?!), and he asked: “Have you ever thought of writing a Weblog? I think people would really love to read some summary posts to often asked questions as they do in the BSP area…”
…OK, LET'S START!
Just a little introduction…
For extracting logistic transactional data from R/3, a new generation of datasources and extractors, no longer based on LIS (Logistic Information System) information structures, was developed starting from BW Release 2.0B and PI 2000.1 or PI-A 2000.1 (valid from R/3-Release 4.0B ).
The tools for the logistics extract structures can be found in the IMG for BW (transaction SBIW): in your OLTP system, choose Customer-Defined DataSources -> Logistics -> Managing Extract Structures.
The Logistics Extract Structures Customizing Cockpit (which you can access directly via transaction LBWE) represents the central tool for the administration of extract structures.
OK…but, in other words, what is this Logistic Cockpit (LC)?
We can say that it's a new technique to extract logistics information, consisting of a series of standard extract structures (that is, from a more BW-oriented perspective, standard datasources), delivered in the Business Content thanks to a given plug-in.
But what is the logic behind these datasources that allows managing the logistic flows towards BW in delta mode after an initial load of the historical data (done with update mode Delta Initialization, by retrieving data from the setup tables, which are assigned to each extract structure and are filled using special setup transactions in OLTP)?
Following the many questions posted so far in the SDN BW Forums, in this weblog we will focus only on the delta mechanism of the LC and not on the other tasks we can manage inside it, like the necessary steps for activating and carrying out a successful data extraction or the maintenance of extract structures and datasources (but don't worry, a summary weblog dedicated to these important procedures will also arrive in the next days!).
The V3 Update
Unlike the LIS update (well, I know you are asking “but how does this old LIS update work???”…my dear, another weblog will arrive very soon for this topic too…sorry for the wait, but I have a lot of things to do!), data is transferred from the LIS communication structure, using extract structures (e.g. MC02M_0HDR for the header purchase documents), into a central delta management area.
This transfer takes place thanks to the V3 update with a specific (scheduled) job and is therefore temporally detached from the daily application operations; the main point is that the delta management area acts as a buffer (independent of the application business) containing data that can be requested from BW via an infopackage with update mode delta.
The following picture shows (at a high level) the interaction between the LIS communication structure and the V3 extraction technology.
We said that for the extraction of transactional data from the different logistics applications (MM, PP, SD and so on), the technology for collective updates (V3 updates) is used (until PI 2003.1).
This means that the data is collected in the R/3 update tables before the transfer to the interface: the data is retrieved from there by means of a periodic update process that needs to be started in order to transfer the delta records to the BW system delta queue.
During this V3 collective run (which you can start and schedule from LBWE for each application component), the data is transferred to the BW delta queue (which you can see via transaction RSA7 (see the picture below) or LBWQ), from which it is retrieved by means of (delta) requests from the BW system.
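To make the flow easier to picture, here is a minimal Python sketch of the idea (purely conceptual, not ABAP: all names and record fields are invented for illustration): documents are first buffered in the update tables, a scheduled collective run moves them to the delta queue, and a BW delta request drains the queue.

```python
# Conceptual toy model of the V3 collective update flow (invented names,
# not real SAP code): update tables -> V3 collective run -> delta queue -> BW.

update_tables = []   # delta records written by the application, waiting for V3
delta_queue = []     # the BW delta queue (the one you inspect via RSA7)

def post_document(doc):
    """The application posts a document; its delta record waits in the update tables."""
    update_tables.append(doc)

def v3_collective_run():
    """The scheduled job (started from LBWE): moves buffered records to the delta queue."""
    delta_queue.extend(update_tables)
    update_tables.clear()

def bw_delta_request():
    """A BW infopackage with update mode 'delta' drains the queue."""
    extracted = list(delta_queue)
    delta_queue.clear()
    return extracted

post_document({"ebeln": "4500000001", "status": "open"})
post_document({"ebeln": "4500000002", "status": "open"})
v3_collective_run()            # the scheduled job runs
print(bw_delta_request())      # both records arrive in BW
print(bw_delta_request())      # nothing left until the next collective run
```

Note how BW sees nothing between two collective runs: this is exactly why the delta queue acts as a buffer detached from the daily application operations.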
V1, V2, V3…
When to schedule what
Normally in R/3 there are three types of update available:
- Synchronous update (V1 update)
- The statistics update is carried out at the same time (synchronously) as the document update (in the application tables).
- Asynchronous update (V2 update)
- Document update and the statistics update take place in different tasks.
So, V1 and V2 updates don't require any scheduling activity.
- Collective update (V3 update)
- As in the previous case (V2), the document update is decoupled from the statistics update, but, unlike the V2 update, the V3 collective update must be scheduled as a job (via LBWE).
Remember that the V3 update only processes the update data that has already been successfully processed with the V2 update.
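This last rule can be sketched in a couple of lines (again a toy Python model with invented names, not how it looks inside R/3): the collective run simply skips any record whose V2 update has not yet completed.

```python
# Toy model (invented names) of the rule: "the V3 update only processes
# the update data that was successfully processed with the V2 update".

records = [
    {"doc": "A", "v2_status": "executed"},
    {"doc": "B", "v2_status": "failed"},     # V2 not yet successful
    {"doc": "C", "v2_status": "executed"},
]

def v3_collective_run(update_data):
    """Pick up only the records whose (asynchronous) V2 update succeeded."""
    return [r["doc"] for r in update_data if r["v2_status"] == "executed"]

print(v3_collective_run(records))  # ['A', 'C'] -- 'B' stays behind until V2 succeeds
```

This is also why checking SM13 for outstanding V2/V3 update records (as described below) matters: a failed V2 update silently holds its record back from the delta flow.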
This is a key task for the proper working of the BW logistic flows.
In fact, the timing of this scheduled process is very important, and it should be based on
1) the amount of activity on a particular OLTP system and on
2) the particular requirements related to the updating needs of data displayed in BW reports.
For example (relating to the first point), a development system with a relatively low/medium volume of new/modified/deleted documents may only need to run the V3 update on a weekly or daily basis.
Instead, a full production environment, with many thousands of transactions every day, may have to be updated hourly, otherwise postings will queue up and can heavily affect performance.
About the second point: if, for example, the reporting refers to a monthly periodic view, successfully scheduling the V3 update on a monthly basis will ensure that all the necessary information structures are properly updated when new or existing documents are processed in the meantime.
Finally, the right choice will be the result of all these considerations; by doing so, the information structures in BW will be current and overall performance will be improved.
It's possible to verify that all V3 updates have been successfully completed via transaction SM13.
The SM13 transaction will take you to the “Update Records: Main Menu” screen:
On this screen, enter an asterisk as the user (for all users), select the “V2 executed” radio button, choose a date range and hit Enter.
Any outstanding V3 updates will be listed.
At this point, it's clear that, considering the V3 update mechanism, the main requirement is that the delta information has to be transferred to the BW system in the same sequence in which it occurred in the OLTP system.
Just a consideration…if we had to load our delta records only into a cube, there would be no problem: everything goes in append mode and, in the end, we'll find the final situation correctly displayed thanks to the OLAP processor!
But since updating ODS objects is permitted in the logistics extraction for almost all DataSources, we have to consider any effects that can derive from the overwrite update mode (specific to the ODS object).
For example, the consistent storage of a status field (e.g. the delivery status) in ODS objects can be ensured only with a correct (serialized) delta sequence: if the record with the open delivery status (created as the first record in R/3) arrives later than the record with the closed delivery status (created as the second one in R/3), we would have a false representation of reality.
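A tiny Python simulation makes the danger obvious (a toy model, with invented field names, of the ODS overwrite behavior: one row per key, last delta record wins):

```python
# Why serialization matters with ODS overwrite mode (toy model, invented
# fields): the ODS keeps one row per key, and each delta record overwrites it.

def apply_delta(ods, delta_records):
    """Apply delta records in the order received; the last record per key wins."""
    for rec in delta_records:
        ods[rec["delivery"]] = rec["status"]
    return ods

# Correct (serialized) order: 'open' first, then 'closed'.
print(apply_delta({}, [
    {"delivery": "80000001", "status": "open"},
    {"delivery": "80000001", "status": "closed"},
]))  # {'80000001': 'closed'} -- the real situation

# Wrong order: the 'open' record arrives after the 'closed' one.
print(apply_delta({}, [
    {"delivery": "80000001", "status": "closed"},
    {"delivery": "80000001", "status": "open"},
]))  # {'80000001': 'open'} -- a false representation of reality
```

With a cube the same two records would both be appended and the OLAP processor could still reconstruct the truth; with overwrite mode, the arrival order IS the truth.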
Considering that, the sequence of the existing data records must be recognized and taken into account both when reading and processing the update data (step A of the picture) and when transferring the data to the BW system (step B).
Since the normally existing update methods do not support the serialized processing of update data, the Serialized V3 Update function was created (also thanks to several subsequent corrections in SAP Basis) in order to be able to serialize step A.
In the next episode…
…we'll analyze the limitations of the serialized V3 update that led to the new delta methods (from PI 2002.1) and to the final retirement of our serializer starting from PI 2003.1.