Frank-Martin
Product and Topic Expert
I have good news for Db2 LUW database administrators: We have optimized the local client copy by using the SAPTOOLS.BULK_INSERT procedure.

Background


A local client copy in an ABAP system loops over all client-dependent tables and copies all rows of a given source client to a target client. In the default local client copy algorithm, the data of the source client is selected and inserted into the target client using ABAP SQL. This means that all data flows through the ABAP application server on which the client copy was started, and the overall runtime of the client copy is strongly affected by the network latency between database server and application server. That's why we have always recommended running the standard client copy on an SAP application server with low network latency to the database server.
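For illustration, here is a simplified sketch of the SQL statements that are effectively executed per client-dependent table in the default algorithm, assuming a table "TAB_CLIENT" with client column "CLIENT" (the example used throughout this post); the actual statements are generated from ABAP SQL:

SELECT * FROM "TAB_CLIENT" WHERE "CLIENT" = 'SRC'
-- all result rows travel over the network to the application server

INSERT INTO "TAB_CLIENT" VALUES ( 'TRG', <other column values> )
-- executed per fetched row (or array of rows), sending the data back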

Room for Improvement


Obviously, there was room for improvement, since the client copy does not actually need to route the data through the SAP application server. Instead, the data flow can be pushed down to the database server. This blog post describes how we did exactly that for Db2 LUW.

Prerequisites


To make use of this optimization, your system must be running on Db2 version 11.5 Mod Pack 8 or higher.
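If you are not sure which level your database is running on, one way to check from SQL is the SYSIBMADM.ENV_INST_INFO administrative view (running db2level on the database server gives the same information):

SELECT SERVICE_LEVEL, FIXPACK_NUM FROM SYSIBMADM.ENV_INST_INFO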

What About INSERT … SELECT?


By far the fastest way to copy the data of a client-dependent table ("TAB_CLIENT" with client column "CLIENT") from a source client ('SRC') to a target client ('TRG') is a single atomic INSERT … SELECT statement like this:
INSERT INTO "TAB_CLIENT" 
SELECT 'TRG' AS "CLIENT", <other columns> FROM "TAB_CLIENT"
WHERE "CLIENT" = 'SRC'

In this way, no data needs to be transferred to the SAP application server, and the application server executes only one SQL statement per table on the database server. Network latencies between application server and database server are therefore negligible.

You can find this approach in SAP Note 2761821 for SAP HANA and Oracle. The same approach would work fine on Db2 LUW, too, but only for small and medium-sized client-dependent tables.

Limitations


However, such atomic INSERT … SELECT statements can cause problems for very large client-dependent tables, where a single INSERT … SELECT statement generates a huge log volume in one database unit of work. Such a log volume may cause a log-full condition, or the application may be forced off when the configured MAX_LOG limit is reached. At the same time, such an atomic statement requires a huge number of row locks and is likely to cause a lock escalation.
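To get a rough sense of the scale (with illustrative numbers, not a measurement): copying a client-dependent table with 100 million rows of 500 bytes each in one atomic statement produces on the order of 100,000,000 * 500 bytes ≈ 50 GB of log volume in a single unit of work.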

There is no good workaround for these problems with large tables. For example, using ALTER TABLE … ACTIVATE NOT LOGGED INITIALLY could avoid the log volume problem, but it is not possible in HADR environments and would prevent rollforward recoverability until a fresh backup is taken. Similarly, LOCK TABLE could prevent the lock escalation, but it would also lock the table for applications working in other SAP clients while the client copy is active.
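In SQL terms, the rejected workarounds would look like this (a sketch only, not recommended for the reasons just given):

ALTER TABLE "TAB_CLIENT" ACTIVATE NOT LOGGED INITIALLY
-- avoids the log volume, but not possible with HADR; no rollforward
-- recoverability until a fresh backup is taken

LOCK TABLE "TAB_CLIENT" IN EXCLUSIVE MODE
-- avoids the lock escalation, but blocks applications in all other clients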

It's also not easy to split the single atomic INSERT … SELECT statement into multiple statements using WHERE clauses, because generating suitable split predicates for such WHERE clauses is itself time-consuming.
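For illustration, such a split over a hypothetical key column "KEYCOL" might look as follows; determining a boundary value such as 'M' that produces evenly sized chunks is exactly the time-consuming part:

INSERT INTO "TAB_CLIENT"
SELECT 'TRG' AS "CLIENT", <other columns> FROM "TAB_CLIENT"
WHERE "CLIENT" = 'SRC' AND "KEYCOL" < 'M'

INSERT INTO "TAB_CLIENT"
SELECT 'TRG' AS "CLIENT", <other columns> FROM "TAB_CLIENT"
WHERE "CLIENT" = 'SRC' AND "KEYCOL" >= 'M'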

Therefore, we decided on a different pushdown technique for Db2 LUW, described in the following.

The SAPTOOLS.BULK_INSERT Procedure


To optimize the local client copy, you need an SQL statement that performs the same operation as the INSERT … SELECT statement mentioned above, but without the logging and locking problems. As of Db2 11.5 Mod Pack 8, the SAPTOOLS schema contains a BULK_INSERT procedure that fills this gap. The SAPTOOLS schema is automatically updated in the post-installation steps of a Db2 Mod Pack or Fix Pack upgrade by the db6_update_db script.

BULK_INSERT can be called as follows:
CALL SAPTOOLS.BULK_INSERT ( <tabschema>, <tabname>, <select statement>, <options> )

where <tabschema> and <tabname> describe the target table of the bulk insert.

The <select statement> specifies the query that selects the data to be inserted into the target table. The result set of the query must provide exactly the same column names as the target table.

BULK_INSERT inserts the data provided by the query into the target table and automatically performs intermediate COMMIT operations to release log space and locks. If the BULK_INSERT procedure fails with an unexpected SQL error, part of the data may already have been inserted into the target table, and the caller is responsible for cleaning it up.
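In the client copy scenario, such a cleanup can simply remove all rows of the target client again before the copy is restarted (a sketch, assuming the partially filled target client can be discarded completely):

DELETE FROM "TAB_CLIENT" WHERE "CLIENT" = 'TRG'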

With <options>, you can pass additional options to the procedure as a comma-separated string (an example call follows further below). The <options> string may, for example, include the following:

  • ‘TRACE’ (enables a trace that is written to DIAGPATH)

  • ‘COMMIT_COUNT=<commit count>’ (interval between intermediate COMMIT operations; the default is 128*128 = 16,384)

  • ‘IGNDUP’ (the INSERT ignores duplicate key errors, similar to the INSERT … IGNORE DUPLICATES clause)


So, the good news is that the INSERT … SELECT statement above can be expressed as the following BULK_INSERT call:
CALL SAPTOOLS.BULK_INSERT (
CURRENT SCHEMA,
'TAB_CLIENT',
'SELECT ''TRG'' AS "CLIENT", <other columns> FROM "TAB_CLIENT"
WHERE "CLIENT" = ''SRC'' ',
'' )

Please note that the AS "CLIENT" column alias in the <select statement> is mandatory. If you pass the <select statement> as a string literal, embedded single quotes must be escaped by doubling them, as shown above.
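For example, a call that enables tracing, sets a smaller commit interval, and ignores duplicate keys could look like this (the option values are purely illustrative):

CALL SAPTOOLS.BULK_INSERT (
CURRENT SCHEMA,
'TAB_CLIENT',
'SELECT ''TRG'' AS "CLIENT", <other columns> FROM "TAB_CLIENT"
WHERE "CLIENT" = ''SRC'' ',
'TRACE,COMMIT_COUNT=10000,IGNDUP' )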

If the BULK_INSERT statement finishes without errors, it will have copied all rows from the source client to the target client just like the INSERT … SELECT statement would have done. It will return a small result set reporting the number of rows that have been inserted.

BULK_INSERT is a C procedure that uses CLI calls locally on the database server to open a cursor for the given <select statement> and insert the resulting rows into the target table. BULK_INSERT is not as fast as INSERT … SELECT, but it requires no network round trips between application server and database server while it executes; data is only moved locally within the database server. We also tested SQL procedure approaches, but the C procedure turned out to be faster.

Optimized Client Copy


To make use of the local client copy optimization on Db2 LUW, you need to upgrade your Db2 database to version 11.5 Mod Pack 8 or higher and implement the ABAP code corrections from SAP Note 2761821. After that, a local client copy uses the BULK_INSERT procedure to optimize the client copy runtime.

Tests on a local SAP application server have shown a client copy runtime that is about 30% shorter with BULK_INSERT than with the standard client copy algorithm. In addition, and this may be the most important advantage, the client copy runtime no longer depends on the SAP application server on which the client copy is started, because no table data is transported over the network from the database server to the application server and back while the client copy is running.

Conclusion


I recommend that you give the new optimized client copy on Db2 LUW a try. Please let me know how it works on your system. Maybe you can even think of other use cases for the BULK_INSERT procedure, even outside the SAP environment; for example, you could use BULK_INSERT to clone a complete table.

As always, any feedback is welcome. Feel free to start a discussion here or contact me directly.

Related Information


SAP Note 2761821