
Extraction job in R/3 is taking too long to load data in BI

former_member189845
Active Participant

Hi All,

We are facing an issue in production where one of the extraction jobs from R/3 to BW is taking more than 7 hours.

The DataSource is a customized one, and the extractor structure is based on a view joining 9 tables. The problem is that the same job used to take only 15 minutes for about 458,000 records, but since a data center migration (the system was moved from one hosting vendor to another) the job takes 7 hours for a similar number of records.

We checked the same scenario in the quality system, where there is no issue: the job takes only 15 to 20 minutes.

As a first stage of analysis we traced the long-running job in R/3. According to the trace, the job spends most of its time on the database side: 82.1% of the total execution time is in the DB and 17.9% in ABAP.

Most of the time is spent in two exits:

- In exit EXIT_SAPXXX_002, the time goes to the DB while selecting data from two tables; these SELECT statements account for 31.3% + 10.5% of the total execution time.

- In function Z_BWEX_DS_MD_ZBW_XXX, the FETCH statement accounts for 40.3% of the total execution time.

I tried analyzing with a subset of the data, i.e. limiting the volume being processed, but unfortunately the DataSource has no selection fields available.

We also checked the system performance at the time of the run, and free background work processes were available.

We don't know what exactly caused the issue after the migration, and we have raised an OSS message for it. I am also trying to understand the purpose of the first exit: what are the two expensive SELECT statements on those two tables meant for?
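For context, a customized extractor exit of this kind typically enriches each extracted data package with lookups against other tables. Below is a minimal sketch of the usual pattern; all structure, field and variable names (c_t_data, zbw_extract_row, mvgr1) are invented for illustration, since the real exit code is not shown here:

```abap
* Hypothetical sketch of a BW extractor user exit: enrich each data
* package with attributes from another table in one mass lookup.
* Names are assumed for illustration; the real exit will differ.
DATA: lt_mvke TYPE STANDARD TABLE OF mvke,
      ls_mvke TYPE mvke.

FIELD-SYMBOLS: <ls_data> TYPE zbw_extract_row.  " hypothetical row type

IF c_t_data[] IS NOT INITIAL.
* One SELECT per data package (FOR ALL ENTRIES) instead of one per
* row; the driving table must be checked for emptiness first.
  SELECT * FROM mvke
    INTO TABLE lt_mvke
    FOR ALL ENTRIES IN c_t_data
    WHERE matnr = c_t_data-matnr.
  SORT lt_mvke BY matnr.
ENDIF.

LOOP AT c_t_data ASSIGNING <ls_data>.
  READ TABLE lt_mvke INTO ls_mvke
       WITH KEY matnr = <ls_data>-matnr BINARY SEARCH.
  IF sy-subrc = 0.
    <ls_data>-mvgr1 = ls_mvke-mvgr1.  " copy looked-up attribute
  ENDIF.
ENDLOOP.
```

If the real exit instead issues a SELECT inside the LOOP (one database round trip per record), that alone would explain a large sequential-read share and would be very sensitive to the network latency between the application server and the database after a data center move.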

Note: our BW system runs on a HANA database, and our R/3 system runs on Oracle.


Can you help me with this issue?


Thanks in advance for your help

Siva

former_member450139
Discoverer

Hi Siva,

Since the issue started after the data center migration, I would assume there has been no application-level change (code, data volume, job variant, or job schedule) and no application-level resource shortage, and that most of the time is spent on the database side in sequential reads. Under those assumptions, from the application server's point of view this is a database-related performance issue. It could be an Oracle CBO (cost-based optimizer) issue, a network traffic/latency issue, an I/O contention issue, or an Oracle server capacity issue. You need to validate those areas one by one.
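One concrete post-migration suspect is missing or stale Oracle optimizer statistics on the copied database, which can make the CBO choose poor plans only in the migrated system. A quick check on the Oracle side might look like this (illustrative only: the schema owner SAPR3 and the table MVKE are assumptions, and on an SAP system you would normally refresh statistics via brconnect -f stats rather than calling DBMS_STATS directly):

```sql
-- When were optimizer statistics last gathered for a suspect table?
SELECT table_name, num_rows, last_analyzed
  FROM dba_tables
 WHERE table_name = 'MVKE';

-- Refresh them if missing or stale (SQL*Plus syntax; owner is assumed)
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'SAPR3', tabname => 'MVKE')
```

Comparing num_rows and last_analyzed between the production and quality databases would quickly show whether the migrated system is working with outdated statistics.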

Many of these assumptions can be validated via the job runtime statistics, which show where the job spends its time and can point you in the right direction for the investigation. For example, if the statistics show the job spends a lot of time in "waiting for work process", then a shortage of work processes is a contributor to the issue; that seems unlikely based on what you have shared so far. If table-level statistics collection is enabled in your system, you can also find the most expensive tables the job accessed during execution.

You can use ST05 to trace the job and find the most expensive SQL statements and their execution plans. You can also use information from STAD to identify the expensive SQL in transaction ST04; from there, analyze the execution plan to see whether a plan switch is a concern. ST04 can show where a SQL statement spends its time, which normally tells you the next tuning step if the plan is correct.
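Once ST05 gives you the expensive statement, you can also compare its execution plan directly on the Oracle side in production versus quality. The statement below is purely illustrative; substitute the actual SQL and bind variables from your trace:

```sql
-- Compare this plan between the production and quality databases;
-- the SELECT shown here is a placeholder for the traced statement.
EXPLAIN PLAN FOR
  SELECT * FROM "MVKE" WHERE "MANDT" = :A0 AND "MATNR" = :A1;

SELECT * FROM TABLE(dbms_xplan.display);
```

If the plan differs between the two systems (for example a full table scan in production versus an index range scan in quality), that points to an optimizer statistics or parameter difference introduced by the migration.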

I know you already did a trace, but its duration was only a few minutes while the job runs for 7 hours, so I am not sure how representative the trace is.

Hope this helps. If you need more information on SQL tuning, ST04, ST05 and STAD, the following blog might help you: http://www.sap-perf.ca/

Eric

Clemenss
Active Contributor

Also check the table size category - once it is exceeded, access slows down!

Regards, Clemens

former_member189845
Active Participant

Hi Clemens,

Can you please be more specific about the table size? I see that table MVKE has size category 3 and stores billions of records.

Thanks,


siva