DMO: technical background
DMO updates your SAP system and migrates it to the SAP HANA database. Before starting DMO, we need:
- Source database with application data and (productive) repository on source release
- Primary Application Server (PAS; formerly known as the central instance) with a kernel on the SAP source release
- SAP HANA DB as target database on a separate host (as an appliance)
- Software Update Manager (SUM) on the PAS host (SAPup covers the ABAP part)
- SAP Host Agent on the PAS host (updated to enable DMO communication)
- Browser on the frontend to display the user interface
In the web browser, we open the respective URL, which sends an HTTP request to the SAP Host Agent. After we have provided credentials, the request is forwarded to SAPup, which is started on the PAS host (the browser request is not shown).
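To make this concrete, here is a minimal sketch of what the browser side amounts to: building the SUM UI URL served via SAP Host Agent and a Basic Auth header for the credentials prompt. The port 1129 (Host Agent HTTPS) and the `/lmsl/sumabap/<SID>/doc/sluigui` path are assumptions based on common SUM setups; check the SUM guide for your release.

```python
from base64 import b64encode

def sum_ui_url(host: str, sid: str) -> str:
    """Build the SUM UI URL served via SAP Host Agent.

    Port 1129 (HTTPS) and the /lmsl/sumabap/<SID>/doc/sluigui path
    are assumptions; verify them against your SUM guide.
    """
    return f"https://{host}:1129/lmsl/sumabap/{sid.upper()}/doc/sluigui"

def basic_auth_header(user: str, password: str) -> dict:
    """HTTP Basic Auth header; Host Agent checks the OS credentials (e.g. <sid>adm)."""
    token = b64encode(f"{user}:{password}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# 'pashost' and SID 'PRD' are placeholder values
print(sum_ui_url("pashost", "prd"))
```

Opening this URL in the browser triggers the HTTP request to the Host Agent described above; the Host Agent then forwards the authenticated request to SAPup.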
Uptime processing means the SAP system is still running and end users can work productively with it. Meanwhile, SUM prepares the system update by creating a shadow system: a shadow instance on the PAS host, and a shadow repository (on the new SAP release). The shadow instance is based on the shadow kernel, i.e. the kernel of the new SAP release running against the old (source) database.
(Note the legend indicating the color used for the parts that are on the target release.)
After the shadow repository has been created on the source database (without affecting the productive use of the system), it is copied to the SAP HANA database. For this, the tool R3load is used, which is part of the kernel. The shadow instance is then no longer required. The copy on the SAP HANA DB constitutes the target repository: it is already on the new SAP release and resides on the target database.
Now the system is shut down, and downtime processing starts. During this phase, the application data is migrated from the source to the target database. As in a classical migration, the tool R3load is used: R3load pairs perform the export and import. The first R3load (part of the shadow kernel) exports the data, and the second R3load (part of the target kernel) imports the data into the SAP HANA DB.
Both R3loads run in parallel on the same host. No export files (dump files) are created, because the data transfer between the R3load pair happens through the main memory of the host. This R3load option is called memory pipes.
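The memory-pipe idea can be sketched as follows. This is a toy illustration, not the actual R3load implementation: an "exporter" writes table rows into an in-memory pipe while an "importer" reads them concurrently, so no dump file ever touches the disk.

```python
import os
import threading

# Placeholder data standing in for exported table rows
rows = [b"row-0001\n", b"row-0002\n", b"row-0003\n"]

read_fd, write_fd = os.pipe()  # kernel pipe buffer lives in main memory

def exporter(fd):
    # Stands in for the first R3load (shadow kernel) exporting the data
    with os.fdopen(fd, "wb") as pipe_out:
        for row in rows:
            pipe_out.write(row)
    # Closing the write end signals EOF to the importer

def importer(fd, sink):
    # Stands in for the second R3load (target kernel) importing into the target DB
    with os.fdopen(fd, "rb") as pipe_in:
        for line in pipe_in:
            sink.append(line)

imported = []
t_exp = threading.Thread(target=exporter, args=(write_fd,))
t_imp = threading.Thread(target=importer, args=(read_fd, imported))
t_exp.start(); t_imp.start()
t_exp.join(); t_imp.join()

print(imported == rows)  # True: all rows arrived without any export file
```

The design point this mirrors is that export and import overlap in time and share the host's memory, which is why DMO needs no staging disk space for dump files.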
ℹ Note that this procedure requires two additional kernel sets: the shadow kernel (new release for the source DB) and the target kernel (new release for the target DB). They have to be selected manually when using the Maintenance Optimizer (MOpz).
After the migration of the application data is done, SUM provides the target kernel for the productive instance (kernel switch). The application data is still on the source release. The system is started, but it cannot yet be used by end users, as the procedure is not finished (technical downtime).
Then the application tables are updated (e.g. by XPRAS), and once the procedure is finished, the system is available, running on the SAP HANA DB and on the new SAP release.
ℹ Note that during the complete procedure, the source database continues to run and is not changed. If for any reason you need to return to the source database, a simple reset procedure offered by SUM can be used, which restores the state before the system shutdown (without the need for a manual database restore). SUM deletes the data from the SAP HANA DB, restores the old kernel, and deletes the shadow repository.
(Blog was updated on March 28th with more precise pictures.)