Technology Blogs by Members
Former Member

This article covers two SAP BODS performance-tuning options, Rows per Commit and Array Fetch Size, which can noticeably reduce job execution time in practical scenarios.

Rows per Commit:

For best performance, BODS recommends setting Rows per Commit between 500 and 2000; the default for regular loading is 1000. The optimal value depends on the number of columns in the target table. When adjusting Rows per Commit, keep in mind that the maximum allowed value is 5000 and the value must be a positive integer. Rows per Commit is a target-side performance-tuning technique.

The formula for calculating a starting Rows per Commit value is: max_IO_size / row size (in bytes)
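As a sketch, the formula above can be turned into a small helper that also clamps the result to the valid BODS range. This is illustrative arithmetic, not a BODS API; the function name and the sample sizes are assumptions for the example.

```python
def rows_per_commit(max_io_size_bytes, row_size_bytes):
    """Suggest a Rows per Commit value from max_IO_size / row size,
    clamped to the valid range [1, 5000]."""
    if row_size_bytes <= 0:
        raise ValueError("row size must be positive")
    rpc = max_io_size_bytes // row_size_bytes
    return max(1, min(rpc, 5000))

# Example: a 64 KB I/O size with 128-byte rows suggests 512,
# which falls inside the recommended 500-2000 band.
print(rows_per_commit(64 * 1024, 128))  # → 512
```

Note how very small rows against a large I/O size would exceed 5000, so the clamp enforces the documented maximum.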

The following scenarios show the impact of Rows per Commit on a BODS job; see the screenshots below for details:


In the first scenario, Rows per Commit is 200 and the source row count is 18,116,224 records. The job takes 229 seconds (3 min 49 s) to load the data into the target table.

In the second scenario, Rows per Commit is 2000 and the source row count is the same as above (18,116,224 records). The job takes 136 seconds (2 min 16 s) to load the data into the target table.

Hence, in the second scenario the job execution time is reduced because the Rows per Commit value is set appropriately for the data load.
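The two runs above can be compared in throughput terms; this quick arithmetic sketch (plain Python, no BODS involved) makes the speedup concrete:

```python
# Throughput (rows per second) for each run reported above.
rows = 18_116_224
runs = {200: 229, 2000: 136}  # Rows per Commit -> elapsed seconds

for rpc, seconds in runs.items():
    print(f"RPC={rpc}: {rows / seconds:,.0f} rows/s")

# Relative speedup of the second run over the first.
print(f"speedup: {runs[200] / runs[2000]:.2f}x")  # → roughly 1.68x
```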

Note: Performance can actually get worse if Rows per Commit is set too high for your environment. Also, if an overflow file is used or the initial transaction fails, Rows per Commit behaves as if it were set to 1.


Array Fetch Size:

Array Fetch Size indicates the number of rows returned in a single fetch call from a source table; the default value is 1000. A larger value reduces the number of round trips to the database, which improves table-read performance. Array Fetch Size is a source-side performance-tuning technique.

For example, if you enter 50 and your query retrieves 500 rows, the connection executes 10 fetches to retrieve the 500 rows.
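The example above is simple ceiling division; a minimal sketch (the function name is an assumption, not a BODS API) shows the general case, including when the last fetch is only partially filled:

```python
import math

def fetch_calls(total_rows, array_fetch_size):
    """Number of fetch calls needed to retrieve total_rows
    when each call returns at most array_fetch_size rows."""
    return math.ceil(total_rows / array_fetch_size)

print(fetch_calls(500, 50))   # → 10, matching the example above
print(fetch_calls(501, 50))   # → 11: one extra, partially filled fetch
```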

Drawbacks:

  • If you enter 1 for Array Fetch Size, data is retrieved row by row; array fetching is effectively deactivated and server performance degrades.
  • If Array Fetch Size is set too high, it can cause a "fetch out of sequence" error, in which a packet received at the destination is corrupted and must be re-transmitted.


The following scenarios show the impact of Array Fetch Size on a BODS job; see the screenshots below for details:


In the first scenario, Array Fetch Size is 100 and the source row count is 18,116,224 records. The job takes 292 seconds (4 min 52 s) to load the data into the target table.

In the second scenario, Array Fetch Size is 1000 and the source row count is the same as above (18,116,224 records). The job takes 259 seconds (4 min 19 s) to load the data into the target.

If the Array Fetch Size is set appropriately for a large source data load in a complex ETL job, performance improves even further in terms of job execution time.

Hence, the second scenario shows that with the larger Array Fetch Size (1000), the job execution time is reduced.


I hope this document helps those who are new to BODS. In future posts I will discuss more BODS performance-tuning techniques. Thanks for your time.
