
SAP HANA System Replication on SLES for SAP Applications

Would you like to start right away?

If you would like to know how to implement the solution with SUSE Linux Enterprise for SAP Applications, please read our setup guide, available at:

https://www.suse.com/products/sles-for-sap/resource-library/sap-best-practices/

What is this solution about?

The solution created by SUSE automates the takeover in SAP HANA system replication setups.

The basic idea is that synchronizing the data to a second SAP HANA instance alone is not enough, as this only solves the problem of getting the data shipped to a second instance. To increase availability you also need a cluster solution that controls the takeover to the second instance and provides the service address for client access to the database.
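As a rough sketch of what this looks like on the cluster side, the following crm shell configuration combines the SAPHanaTopology and SAPHana resource agents (shipped in the SAPHanaSR package) with a virtual service IP for client access. It is only an illustration: the SID “PR1” is taken from the example below, while the instance number 00, the resource names, the IP address and the timeouts are placeholders you would adapt to your own landscape.

  # illustrative example - adapt SID, instance number, names, IP and timeouts
  primitive rsc_SAPHanaTopology_PR1_HDB00 ocf:suse:SAPHanaTopology \
    params SID=PR1 InstanceNumber=00 \
    op monitor interval=10 timeout=300
  clone cln_SAPHanaTopology_PR1_HDB00 rsc_SAPHanaTopology_PR1_HDB00 \
    meta clone-node-max=1 interleave=true
  primitive rsc_SAPHana_PR1_HDB00 ocf:suse:SAPHana \
    params SID=PR1 InstanceNumber=00 PREFER_SITE_TAKEOVER=true \
      DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=false \
    op monitor interval=60 role=Master timeout=700 \
    op monitor interval=61 role=Slave timeout=700
  ms msl_SAPHana_PR1_HDB00 rsc_SAPHana_PR1_HDB00 \
    meta notify=true clone-max=2 clone-node-max=1 interleave=true
  primitive rsc_ip_PR1_HDB00 ocf:heartbeat:IPaddr2 \
    params ip=192.168.1.20
  colocation col_ip_with_primary_PR1 2000: rsc_ip_PR1_HDB00:Started msl_SAPHana_PR1_HDB00:Master
  order ord_topology_first_PR1 Optional: cln_SAPHanaTopology_PR1_HDB00 msl_SAPHana_PR1_HDB00

The colocation constraint keeps the service address on whichever node currently runs the primary (promoted) SAP HANA instance, which is exactly the behaviour described above.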

Let’s take a bird’s-eye view of how the takeover automation works.

Step 1

In the first step, an SAP HANA system “PR1” is running in a system replication setup. The left node (node1) hosts the primary SAP HANA instance, which means this is the instance clients should access for read/write operations.
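The system replication between the two instances is configured with SAP HANA’s own tools before the cluster takes over control. As a minimal sketch, run as the <sid>adm user (the host names, site names and instance number 00 are just examples here, and the exact option names differ slightly between SAP HANA revisions):

  # on node1 (primary): enable system replication
  hdbnsutil -sr_enable --name=SITE_A

  # on node2 (secondary): stop the instance, register it against the primary, start it again
  HDB stop
  hdbnsutil -sr_register --remoteHost=node1 --remoteInstance=00 \
    --replicationMode=sync --name=SITE_B
  HDB start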

Step 2

The second step shows what happens when either node1 or the SAP HANA instance on that node fails. The setup now has a “broken” SAP HANA primary, and of course the synchronization to the second node stops as well.
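On the surviving node you can observe this state with the usual tools, for example (the exact output depends on your SAP HANA revision and SAPHanaSR version):

  crm_mon -r             # cluster view of resources and node states
  SAPHanaSR-showAttr     # replication attributes as tracked by the resource agents
  hdbnsutil -sr_state    # as <sid>adm on node2: local system replication state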

 

Step 3

Step 3 shows the cluster’s first reaction: the secondary is promoted to primary, and this new primary is configured as the new source of the system replication. Because either the complete node1 or just its SAP HANA instance is still down, the synchronization is not yet in “active” mode.
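The takeover the cluster performs here roughly corresponds to the command an administrator would otherwise run manually on the secondary as the <sid>adm user:

  # manual equivalent of the takeover on node2
  hdbnsutil -sr_takeover

The difference is that the cluster only triggers it after its monitor operations have confirmed the failure, and parameters such as PREFER_SITE_TAKEOVER influence whether it prefers a takeover over a local restart of the primary.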

Step 4

Step 4 illustrates the situation when node1 (or its SAP HANA instance) comes back. Depending on the resource parameters, the cluster registers the former primary as the new secondary, and the system replication starts working again.

 

If you do not want the cluster to perform an automated registration of the former primary, you can change the resource parameters so that the cluster keeps the “broken” former primary shut down. This can make sense if administrators first want to analyze in detail what happened on that instance, or for other operational reasons.
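In the SAPHana resource agent this is controlled by the AUTOMATED_REGISTER parameter, which is already set to false in the configuration sketch above; it can also be changed later, for example with the crm shell:

  # false: the cluster never registers the former primary on its own
  # true:  the cluster registers the former primary as new secondary automatically
  crm resource param rsc_SAPHana_PR1_HDB00 set AUTOMATED_REGISTER false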

When automated registration is switched off, the administrator can register the former primary manually at any time. The cluster resource agent will detect the new status during the next monitor action.
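A manual registration could then look like this sketch (host names, the site name and the instance number are again placeholders, and the option names depend on the SAP HANA revision):

  # as <sid>adm on node1, the former primary
  hdbnsutil -sr_register --remoteHost=node2 --remoteInstance=00 \
    --replicationMode=sync --name=SITE_A

  # afterwards clear the old failure state so the cluster starts the instance as secondary
  crm resource cleanup rsc_SAPHana_PR1_HDB00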
