
In this blog, I describe the Impact Analysis feature, which is part of Software Update Manager 2.0.

Abstract: The Software Update Manager (SUM) offers various downtime-optimization approaches such as nZDM for ABAP and ZDO. Using such an approach can have certain impacts on database tables.

To prevent unexpected occurrences of such impacts during the maintenance event on your production system, you can identify them in advance using the Impact Analysis tool that is part of Software Update Manager 2.0.

When to use the Impact Analysis?

You want to minimize the downtime of an update or upgrade by using one of the downtime-optimization approaches offered by SAP. For ABAP-based systems, you can reduce the downtime by using near-Zero Downtime Maintenance or the Zero Downtime Option of SUM. These downtime-optimization approaches can have certain impacts on database tables and may require additional database space.

 

In case of nZDM, the following impacts are possible:

  • Additional daily DB growth due to change recording
  • Database triggers might have to be removed from certain tables
  • Additional DB space requirements due to table cloning

In case of ZDO, the following impacts are possible:

  • Read-only restrictions for end users on the bridge instance
  • Database triggers might have to be removed from certain tables
  • Additional DB space requirements due to table cloning

 

To prevent unexpected occurrences of such impacts during the maintenance event on your production system, you should identify them in advance. This can be achieved by exporting table statistics from your production system and providing them to the SUM running on your sandbox system.


 

How to use the Impact Analysis in general and how to export the data?

Before the Impact Analysis can be started, you need to get statistical data from your production system. Note that SAP Note 2187612 must be implemented in your production system.

It’s important that the statistical data is exported from the production system itself, because it is the impact of an update on your production system that should be checked.

 

For this, the statistical data should be representative of the time when the upgrade is running. Let’s assume you plan to start the SUM tool on Monday and the cutover should happen on Saturday. The ideal dataset then captures all activities and business processes of a comparable time of the week, that is, the period during which SUM will be performing the update. SAP recommends not including timeframes in which transports were imported into the system, as these would lead to false-positive results later in the Impact Analysis.

Now, get ready to export the statistical data from your production system. Start by implementing the export report ZRSUPG_IMPACT_ANALYSIS_EXPORT attached to SAP Note 2402270. All relevant steps and prerequisites are described in this SAP Note.

Once the report has been implemented and transported into production, you can call the report ZRSUPG_IMPACT_ANALYSIS_EXPORT using transaction SE38/SA38.

 

 

Fill in all fields and select a proper time frame. After this, just run the report, and the file ZDIMPANA.ZIP will be exported to your local client. Now you can upload the file to the save directory of Software Update Manager (<DIR_PUT>/save/ZDIMPANA.ZIP). Make sure that the file name and extension ZDIMPANA.ZIP are spelled in capital letters.
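If you script this hand-over, a minimal sketch in Python could check the capital-letters requirement and copy the file into the save directory. Both paths are assumptions for illustration; the actual <DIR_PUT> depends on your installation:

import shutil
from pathlib import Path

export_file = Path("ZDIMPANA.ZIP")                 # file exported by the report
sum_save_dir = Path("/usr/sap/SID/SUM/abap/save")  # hypothetical <DIR_PUT>/save

# SUM expects the file name and extension in capital letters
if export_file.name != export_file.name.upper():
    raise ValueError(f"File name must be all capital letters: {export_file.name}")
if not export_file.is_file():
    raise FileNotFoundError(f"Export file not found: {export_file}")

shutil.copy2(export_file, sum_save_dir / export_file.name)
print(f"Copied {export_file.name} to {sum_save_dir}")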

To get more insights into specific use cases, jump down in the blog to the downtime-optimization approach you’re running.

 


 

How to use it for nZDM (near-Zero Downtime Maintenance)?

The near-Zero Downtime Maintenance (nZDM) approach uses the Change Recording and Replication (CRR) framework, a record-and-replay technique for business transactions based on database trigger technology. With CRR, database changes in tables on the production instance are captured during uptime. This trigger-based change recording allows importing new content into the shadow instance and adjusting table structures to the new release while all users are still able to work on the production system. The recording of data changes is started automatically by the SUM, and the recorded changes are transferred to the shadow instance iteratively after the table structure adjustment.

Recording changes means that business transactions are captured in temporary tables, which requires temporary additional database space. Also, in order to enter the technical downtime, at least 75% of all recorded changes must have been replayed. However, SAP strongly recommends having a replication ratio of > 95%. For additional information on this, see SAP Note 2351880.
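To make these thresholds concrete, here is a small sketch with purely hypothetical numbers that computes the replication ratio and checks it against the 75% minimum and the recommended 95%:

# Hypothetical numbers only: compute the CRR replication ratio and check
# it against the thresholds mentioned above.
recorded_changes = 4_800_000   # changes captured in the CRR logging tables
replayed_changes = 4_650_000   # changes already replayed to the shadow instance

replication_ratio = replayed_changes / recorded_changes
print(f"Replication ratio: {replication_ratio:.1%}")

if replication_ratio < 0.75:
    print("Technical downtime cannot start yet (minimum is 75%).")
elif replication_ratio < 0.95:
    print("Downtime could start, but SAP recommends a ratio of > 95%.")
else:
    print("Recommended replication ratio reached.")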

Now the Impact Analysis comes into play. You may have observed in the past that it was not possible to predict the amount of additional database space needed for the logging tables. The number of records captured during the upgrade is equally hard to predict.

With the Impact Analysis for nZDM, you get answers for the following key figures:

  • Additional daily DB growth due to change recording
  • Database triggers might have to be removed from certain tables
  • Additional DB space requirements due to table cloning

 

Here’s the three-step approach for activating the Impact Analysis for nZDM. Note that once you have passed the phase RUN_IMPACT_ANALYSIS_UPG, you cannot run it again.

 

To make use of the Impact Analysis, you have to export the statistical data from your production system first. The file must then be uploaded to the save directory of Software Update Manager.

The idea is rather simple: SUM knows which tables are touched by the update and, based on this information, which tables nZDM must switch and clone. Using the statistical data of your production system, it can then be checked whether any of the switch tables have database triggers or will be changed very frequently. As mentioned above, both can have a severe business impact.
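As an illustration of this comparison, here is a minimal sketch; the table names, statistics, and the warning threshold are assumptions, not SUM internals:

# Illustrative sketch: flag switch tables that carry database triggers
# or show a high change frequency on production.
switch_tables = {"PATCHHIST", "STXL", "ARFCSDATA"}

production_stats = {  # assumed per-table figures from the statistics export
    "PATCHHIST": {"triggers": 2, "changes_per_day": 4_867_971},
    "STXL":      {"triggers": 0, "changes_per_day": 243_543_412},
    "ARFCSDATA": {"triggers": 1, "changes_per_day": 4_599_429},
}

HIGH_CHANGE_THRESHOLD = 1_000_000  # assumed warning threshold (changes/day)

for table in sorted(switch_tables):
    stats = production_stats.get(table, {"triggers": 0, "changes_per_day": 0})
    if stats["triggers"] > 0:
        print(f"{table}: {stats['triggers']} trigger(s) must be dropped before the switch")
    if stats["changes_per_day"] > HIGH_CHANGE_THRESHOLD:
        print(f"{table}: changed very frequently, CRR log table may grow fast")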

The next picture shows you how the Impact Analysis works:


 

You can see in this example that Table-A gets switched by the update. Hence the database trigger (e.g. an SLT trigger) must be dropped. After the update, the trigger can be re-created.

Table-C will be switched as well and is changed very frequently. Consequently, the CRR logging table may grow very fast. Additionally, the planned cutover window might be endangered, since the required replication ratio of > 75% will not be reached in time. With the Impact Analysis for nZDM, it becomes easier to predict how long it will take until the technical downtime can start.

Besides the described business impacts, the Impact Analysis result provides some more useful key figures:

  • Estimation of additional database space for clone tables
  • Estimation of additional database space for CRR logging tables

With the latest version of SUM 2.0, the Impact Analysis for nZDM is triggered once by SUM in the background, and the results are written to the following log file:

<SUM>/abap/log/IMPANAUPG.<SID>

 

Sample output:

[...]
A4 ESUPG 301 Report name ...: "RSUPG_RUN_IMPACT_ANALYSIS"
A4 ESUPG 302 Log name: "/usr/sap/SID/SUM/abap/log/IMPANAUPG.SID"
A4 ESUPG 304 Start time.....: "08.03.2018" "14:07:03"
A4 ESUPG 002 " "
A4 ESUPG 001 -------------------------------------------------------------------------
A4 ESUPG_IMPANA 007 Report "RSUPG_RUN_IMPACT_ANALYSIS" started
A4 ESUPG_IMPANA 001 -------------------------------------------------------------------------
A4 ESUPG_IMPANA 121 Header data:  "source_system_id = 'PRD'".
A4 ESUPG_IMPANA 121 Header data:  "source_license_number = '0012345678'".
A4 ESUPG_IMPANA 121 Header data:  "source_db_platform = 'AnyDB'".
A4 ESUPG_IMPANA 121 Header data:  "contains_local_objects = 'true'".
A4 ESUPG_IMPANA 121 Header data:  "contains_imports = 'false'".
A4 ESUPG_IMPANA 121 Header data:  "export_timestamp = '20170717123918'".
A4 ESUPG_IMPANA 121 Header data:  "export_version = '1.13'".
A4 ESUPG_IMPANA 121 Header data:  "export_header_file = 'header.xml'".
A4 ESUPG_IMPANA 121 Header data:  "evaluated_periods_file = 'evaluated_periods.xml'".
A4 ESUPG_IMPANA 121 Header data:  "relevant_tables_file = 'relevant_tables.xml'".
A4 ESUPG_IMPANA 121 Header data:  "irrelevant_tables_file = 'irrelevant_tables.xml'".
A4 ESUPG_IMPANA 121 Header data:  "number_relevant_tables = '2423'".
A4 ESUPG_IMPANA 121 Header data:  "number_irrelevant_tables = '117764'".
[...]
A3WESUPG_IMPANA 204 Change recording for "TABL PATCHHIST" will produce up to "4867971" log records per day.
A3WESUPG_IMPANA 205 Log table for "TABL PATCHHIST" is estimated to grow by up to "1.271" GB per day.
A3WESUPG_IMPANA 201 "TABL PATCHHIST" must be untriggered, but has "1" SLT trigger(s) on ref. system "PRD".
A3WESUPG_IMPANA 202 "TABL PATCHHIST" must be untriggered, but has "1" non-SLT trigger(s) on ref. system "PRD".
A4 ESUPG_IMPANA 130 Meta data for "TABL PATCHHIST":  "tabclass = 'TRANSP'"  "sqltab = ''"  "contflag = 'L'".
A4 ESUPG_IMPANA 130 Meta data for "TABL PATCHHIST":  "package = 'SBAC'"  "component = 'SAP_BASIS'"  "version = '740'".
A4 ESUPG_IMPANA 130 Meta data for "TABL PATCHHIST":  "sp_level = '0009'"  "switch = 'X'"  "changerec = 'X'".
A4 ESUPG_IMPANA 130 Meta data for "TABL PATCHHIST":  "britype = ''"  "readonly = ''"  "updates_per_day = '221410.0'".
A4 ESUPG_IMPANA 130 Meta data for "TABL PATCHHIST":  "deletes_per_day = '123110.1'"  "inserts_per_day = '4523450.8'"  "table_size_in_gb = '0.000'".
A4 ESUPG_IMPANA 130 Meta data for "TABL PATCHHIST":  "index_size_in_gb = '0.000'"  "slt_triggers = '1'"  "non_slt_triggers = '1'".
A4 ESUPG_IMPANA 001 -------------------------------------------------------------------------
A3WESUPG_IMPANA 204 Change recording for "TABL STXL" will produce up to "243543412" log records per day.
A3WESUPG_IMPANA 205 Log table for "TABL STXL" is estimated to grow by up to "72.332" GB per day.
A4 ESUPG_IMPANA 130 Meta data for "TABL STXL":  "tabclass = 'TRANSP'"  "sqltab = ''"  "contflag = 'W'".
A4 ESUPG_IMPANA 130 Meta data for "TABL STXL":  "package = 'STXD'"  "component = 'SAP_BASIS'"  "version = '740'".
A4 ESUPG_IMPANA 130 Meta data for "TABL STXL":  "sp_level = '0009'"  "switch = 'X'"  "changerec = 'X'".
A4 ESUPG_IMPANA 130 Meta data for "TABL STXL":  "britype = ''"  "readonly = ''"  "updates_per_day = '31231213.9'".
A4 ESUPG_IMPANA 130 Meta data for "TABL STXL":  "deletes_per_day = '62.4'"  "inserts_per_day = '212312136.1'"  "table_size_in_gb = '181.681'".
A4 ESUPG_IMPANA 130 Meta data for "TABL STXL":  "index_size_in_gb = '66.063'"  "slt_triggers = '0'"  "non_slt_triggers = '0'".
A4 ESUPG_IMPANA 001 -------------------------------------------------------------------------
A3WESUPG_IMPANA 204 Change recording for "TABL ARFCSDATA" will produce up to "4599429" log records per day.
A3WESUPG_IMPANA 205 Log table for "TABL ARFCSDATA" is estimated to grow by up to "1.012" GB per day.
A3WESUPG_IMPANA 201 "TABL ARFCSDATA" must be untriggered, but has "1" SLT trigger(s) on ref. system "PRD".
A4 ESUPG_IMPANA 130 Meta data for "TABL ARFCSDATA":  "tabclass = 'TRANSP'"  "sqltab = ''"  "contflag = 'L'".
A4 ESUPG_IMPANA 130 Meta data for "TABL ARFCSDATA":  "package = 'SRFC'"  "component = 'SAP_BASIS'"  "version = '740'".
A4 ESUPG_IMPANA 130 Meta data for "TABL ARFCSDATA":  "sp_level = '0009'"  "switch = 'X'"  "changerec = 'X'".
A4 ESUPG_IMPANA 130 Meta data for "TABL ARFCSDATA":  "britype = ''"  "readonly = ''"  "updates_per_day = '1123121.0'".
A4 ESUPG_IMPANA 130 Meta data for "TABL ARFCSDATA":  "deletes_per_day = '3151814.9'"  "inserts_per_day = '324493.4'"  "table_size_in_gb = '450.033'".
A4 ESUPG_IMPANA 130 Meta data for "TABL ARFCSDATA":  "index_size_in_gb = '79.554'"  "slt_triggers = '1'"  "non_slt_triggers = '0'".
A4 ESUPG_IMPANA 001 -------------------------------------------------------------------------
A3WESUPG_IMPANA 204 Change recording for "TABL ARFCSSTATE" will produce up to "5788720" log records per day.
A3WESUPG_IMPANA 205 Log table for "TABL ARFCSSTATE" is estimated to grow by up to "1.250" GB per day.
A3WESUPG_IMPANA 201 "TABL ARFCSSTATE" must be untriggered, but has "1" SLT trigger(s) on ref. system "PRD".
A4 ESUPG_IMPANA 130 Meta data for "TABL ARFCSSTATE":  "tabclass = 'TRANSP'"  "sqltab = ''"  "contflag = 'L'".
A4 ESUPG_IMPANA 130 Meta data for "TABL ARFCSSTATE":  "package = 'SRFC'"  "component = 'SAP_BASIS'"  "version = '740'".
A4 ESUPG_IMPANA 130 Meta data for "TABL ARFCSSTATE":  "sp_level = '0009'"  "switch = 'X'"  "changerec = 'X'".
A4 ESUPG_IMPANA 130 Meta data for "TABL ARFCSSTATE":  "britype = ''"  "readonly = ''"  "updates_per_day = '4311212.0'".
A4 ESUPG_IMPANA 130 Meta data for "TABL ARFCSSTATE":  "deletes_per_day = '244114.6'"  "inserts_per_day = '1233393.4'"  "table_size_in_gb = '235.006'".
A4 ESUPG_IMPANA 130 Meta data for "TABL ARFCSSTATE":  "index_size_in_gb = '118.000'"  "slt_triggers = '1'"  "non_slt_triggers = '0'".
A4 ESUPG_IMPANA 001 -------------------------------------------------------------------------
A4 ESUPG_IMPANA 301 Cloned tables on ref. system "PRD" require add. DB space of "1130.337" GB.
A3WESUPG_IMPANA 302 Log tables on ref. system "PRD" will grow by up to "75.865" GB/day.
A4 ESUPG_IMPANA 311 "4" table(s) will be cloned:      "0" error(s), "0" warning(s), "0" info(s).
A3WESUPG_IMPANA 312 "4" table(s) will be recorded:    "0" error(s), "4" warning(s), "0" info(s).
A3WESUPG_IMPANA 313 "4" table(s) must be untriggered: "0" error(s), "3" warning(s), "0" info(s).
A4 ESUPG_IMPANA 001 -------------------------------------------------------------------------
A3WESUPG_IMPANA 008 Report "RSUPG_RUN_IMPACT_ANALYSIS" successfully finished
A4 ESUPG 001 -------------------------------------------------------------------------
A4 ESUPG 002 " "
A4 ESUPG 301 Report name ...: "RSUPG_RUN_IMPACT_ANALYSIS"
A4 ESUPG 304 Start time.....: "19.09.2017" "14:07:03"
A4 ESUPG 305 End time ......: "19.09.2017" "14:11:12"
A4 ESUPG 002 " "
A4 ESUPG 001 -------------------------------------------------------------------------
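Note how the per-table figures add up to the summary at the end of the log: the estimated log records per day correspond to the sum of updates, deletes, and inserts per day, and the additional clone space to the sum of table and index sizes of the cloned tables. The following sketch reproduces the summary figures from the per-table metadata above; the aggregation rules are inferred from the sample numbers and are meant as an illustration only:

# Per-table values copied from the sample log above. The aggregation rules
# (records/day = updates + deletes + inserts; clone space = table + index
# size) are inferred from the sample numbers.
tables = {
    "PATCHHIST":  {"upd": 221410.0,   "dels": 123110.1,  "ins": 4523450.8,
                   "tab_gb": 0.000,   "idx_gb": 0.000,   "log_gb": 1.271},
    "STXL":       {"upd": 31231213.9, "dels": 62.4,      "ins": 212312136.1,
                   "tab_gb": 181.681, "idx_gb": 66.063,  "log_gb": 72.332},
    "ARFCSDATA":  {"upd": 1123121.0,  "dels": 3151814.9, "ins": 324493.4,
                   "tab_gb": 450.033, "idx_gb": 79.554,  "log_gb": 1.012},
    "ARFCSSTATE": {"upd": 4311212.0,  "dels": 244114.6,  "ins": 1233393.4,
                   "tab_gb": 235.006, "idx_gb": 118.000, "log_gb": 1.250},
}

for name, t in tables.items():
    records = t["upd"] + t["dels"] + t["ins"]
    print(f"{name}: up to {records:,.0f} log records per day")

clone_gb = sum(t["tab_gb"] + t["idx_gb"] for t in tables.values())
log_gb = sum(t["log_gb"] for t in tables.values())
print(f"Cloned tables require add. DB space of {clone_gb:.3f} GB")  # 1130.337
print(f"Log tables will grow by up to {log_gb:.3f} GB/day")         # 75.865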

 

How to use it for ZDO (Zero Downtime Option)?

The Zero Downtime Option is currently “available on request”, as described in the SAP Community blog “Zero Downtime Option of SUM (ZDO) is ‘available on request’”. If you want to use ZDO, please follow the process described in SAP Note 2163060 – Prerequisites and Restrictions of Zero Downtime Option of SUM.

A technical system downtime during an update can be expensive. For this reason, the ideal solution would be to run an update without any technical system downtime. The idea of ZDO is to have a bridge subsystem in parallel with the upgrade subsystem.
During the maintenance event, users can continue their work on the bridge subsystem. The bridge subsystem contains all data of the production system that users need to continue their work. It’s important that all data which comes along with the update is hidden from the business users. Therefore, all database tables touched by the upgrade (e.g. by importing new table content or changing the structure of a database table) must be cloned. This prevents the bridge subsystem, which runs on the source release, from seeing data that belongs to the target release.

With the Impact Analysis for ZDO, you get answers for the following key figures:

  • Read-only restrictions for end users on the bridge instance
  • Database triggers might have to be removed from certain tables
  • Additional DB space requirements due to table cloning
  • New with SP03: Tables that will be smart-switched but have a high number of changes

 

Here’s the three-step approach for how the Impact Analysis works for ZDO. If the statistical data is not provided in the right format, SUM will stop with an error in phase RUN_IMPACT_ANALYSIS_ZDO.

 

To make use of the Impact Analysis, you have to export the statistical data from your production system first. The file must then be uploaded to the save directory of Software Update Manager.

The idea is rather simple: SUM knows which tables are touched by the update. Based on this information, all tables (SAP-owned and customer tables) are classified. The most important table classifications are:

  • Share [upgrade does not touch the table]
  • Clone [e.g. upgrade delivers table content]
  • Clone read-only [e.g. upgrade delivers a complex structural change]

Now, using the statistical data of your production system, it can be checked whether any of these tables have database triggers, are very large, or will be set to read-only. As mentioned above, all of these cases can have a severe business impact.
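Here is a minimal sketch of this check with assumed classifications and statistics: tables classified as clone read-only are matched against write activity from the export, and cloned tables against their trigger count:

# Illustrative sketch: classifications and statistics are assumptions.
classification = {    # assumed outcome of SUM's table classification
    "TABLE_A": "share",
    "TABLE_B": "clone-readonly",
    "TABLE_D": "clone",
}
production_stats = {  # assumed figures from the statistics export
    "TABLE_B": {"writes_per_day": 12_500, "triggers": 0},
    "TABLE_D": {"writes_per_day": 830,    "triggers": 1},
}

for table, kind in classification.items():
    stats = production_stats.get(table, {"writes_per_day": 0, "triggers": 0})
    if kind == "clone-readonly" and stats["writes_per_day"] > 0:
        print(f"{table}: read-only on the bridge but written to "
              f"{stats['writes_per_day']:,} times/day -> check the business process")
    if kind.startswith("clone") and stats["triggers"] > 0:
        print(f"{table}: {stats['triggers']} database trigger(s) must be dropped")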

The next picture shows you how the Impact Analysis works:


 

You can see in this example that Table-B will be cloned and set to read-only for the bridge [business users might be affected]. Read-only tables cannot be written to by the bridge subsystem. If the statistics file provided to SUM shows write access to Table-B, a warning or an error will be displayed. The result needs to be interpreted to see whether the bridge really needs to write into the table: try to figure out which business process writes into it. Once the process is identified, check with the responsible administrators and key users whether the impact would really be critical if it occurred on the production system.

Table-D will also be cloned, but remains fully available for read and write access from the bridge. Hence, there’s no read-only conflict for Table-D. However, as Table-D will be cloned and has a database trigger, this trigger needs to be dropped. Dropping database triggers may also have an impact on the business.

Besides the described business impacts, the Impact Analysis result provides some more useful key figures:

  • Estimation of additional database space for clone tables
  • Number of large tables to be cloned

With the latest version of SUM 2.0, the Impact Analysis for ZDO is triggered by SUM in the background, and the results are written to the following log file:

<SUM>/abap/log/IMPANAUPG.<SID>

If you would like to repeat the Impact Analysis with a different statistics file or experiment with the threshold parameters, you can use the dialog report RSUPG_RUN_IMPACT_ANALYSIS_DIA (up to SUM 2.0 SP01).


 

Additional information

Export of table statistics for the Impact Analysis (all use-cases)

  • SAP Note 2402270 – Export of Table Statistics for SUM Impact Analysis

nZDM: near-Zero Downtime Maintenance

ZDO: Zero Downtime Option of SUM

 

 

Jens Fieger

Product Management SAP SE, Software Logistics
