Fabian Herschel

Automate SAP HANA System Replication with SLES for SAP Applications

SAP HANA System Replication on SLES for SAP Applications

Do you want to get started right away?

If you want to know how to implement the solution with SUSE Linux Enterprise Server for SAP Applications, please read our setup guide, available at:

https://www.suse.com/products/sles-for-sap/resource-library/sap-best-practices/

What is this solution about?

The solution created by SUSE automates the takeover in SAP HANA system replication setups.

The basic idea is that merely synchronizing the data to a second SAP HANA instance is not enough, as this only solves the problem of having the data shipped to a second instance. To increase availability you also need a cluster solution that controls the takeover by the second instance and provides the service address clients use to access the database.
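To make this more concrete, the following is a minimal sketch of what such a cluster configuration can look like in the crm shell on SLES for SAP Applications. It assumes SID "PR1", instance number 00 and the service IP 192.168.1.20; the resource names, the IP address and the timings are illustrative only, the complete and supported configuration is described in the setup guide linked above.

# Illustrative crm shell snippet (example values, see the setup guide for the full configuration)
primitive rsc_SAPHanaTopology_PR1_HDB00 ocf:suse:SAPHanaTopology \
    op monitor interval="10" timeout="600" \
    params SID="PR1" InstanceNumber="00"
clone cln_SAPHanaTopology_PR1_HDB00 rsc_SAPHanaTopology_PR1_HDB00 \
    meta clone-node-max="1" interleave="true"
primitive rsc_SAPHana_PR1_HDB00 ocf:suse:SAPHana \
    op monitor interval="60" role="Master" timeout="700" \
    op monitor interval="61" role="Slave" timeout="700" \
    params SID="PR1" InstanceNumber="00" PREFER_SITE_TAKEOVER="true" \
        DUPLICATE_PRIMARY_TIMEOUT="7200" AUTOMATED_REGISTER="false"
ms msl_SAPHana_PR1_HDB00 rsc_SAPHana_PR1_HDB00 \
    meta clone-max="2" clone-node-max="1" interleave="true"
# Service address for client access to the primary database
primitive rsc_ip_PR1_HDB00 ocf:heartbeat:IPaddr2 \
    params ip="192.168.1.20"
colocation col_saphana_ip_PR1_HDB00 2000: rsc_ip_PR1_HDB00:Started msl_SAPHana_PR1_HDB00:Master
order ord_SAPHanaTopology_SAPHana_PR1_HDB00 Optional: cln_SAPHanaTopology_PR1_HDB00 msl_SAPHana_PR1_HDB00

The IPaddr2 resource provides the service address mentioned above; the colocation constraint keeps it on whichever node currently runs the promoted (primary) SAP HANA instance.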

Let's take a bird's-eye view of how the takeover automation works.

Step 1

In the first step, an SAP HANA system "PR1" is running in a system replication setup. The left node hosts the primary SAP HANA instance, which means this is the instance clients should access for read/write operations.
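For orientation, setting up such a system replication pair is done with hdbnsutil as the <sid>adm user. The following is only a sketch with example site names and hosts; the exact option names depend on the SAP HANA revision, so check the SAP HANA documentation for your release.

# On node1 (as pr1adm): enable system replication on the primary, example site name "WDF"
hdbnsutil -sr_enable --name=WDF

# On node2 (as pr1adm): register the secondary against the primary, example site name "ROT"
hdbnsutil -sr_register --remoteHost=node1 --remoteInstance=00 --mode=sync --name=ROT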

Step 2

The second step shows what happens first when either node1 or the SAP HANA instance on that node fails. The setup now has a "broken" SAP HANA primary, and of course the synchronization to the second node has stopped as well.
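In this state you can inspect the situation from both the cluster and the database side, for example with the commands below; the exact output differs between versions, so treat this only as a pointer.

# Show the overall cluster and resource status
crm_mon -r

# Show the system replication attributes maintained by the resource agents (SAPHanaSR package)
SAPHanaSR-showAttr

# On the surviving node (as pr1adm): show the replication state from the SAP HANA side
hdbnsutil -sr_state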

 

Step 3

Step 3 shows the cluster's first reaction: the secondary is promoted to primary and, in addition, this new primary is configured as the new source of the system replication. Because node1 (or at least its SAP HANA instance) is still down, the synchronization is not in "active" mode.
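Under the hood this promotion corresponds to a takeover on the SAP HANA side. The resource agent triggers it for you; only if you were running without the cluster would you execute it manually as the <sid>adm user on node2, roughly like this:

# Manual equivalent of the takeover the cluster performs on node2
# (do not run this yourself while the cluster manages the resource)
hdbnsutil -sr_takeover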

Step 4

Step 4 illustrates the situation when node1 (or its SAP HANA instance) comes back. Depending on the resource parameters, the cluster registers the former primary as the new secondary, and the system replication starts working again.

 

If you do not want the cluster to perform an automated registration of the former primary, you can change the resource parameters so that the cluster keeps the "broken" former primary shut down. This can make sense if administrators first want to analyze in detail what happened on that instance, or for other operational reasons.
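The resource parameter controlling this behavior is AUTOMATED_REGISTER of the SAPHana resource agent. A minimal sketch, reusing the example resource name from above:

# AUTOMATED_REGISTER="false" keeps the former primary down until an administrator registers it manually;
# "true" lets the cluster register it as the new secondary automatically.
primitive rsc_SAPHana_PR1_HDB00 ocf:suse:SAPHana \
    params SID="PR1" InstanceNumber="00" PREFER_SITE_TAKEOVER="true" \
        DUPLICATE_PRIMARY_TIMEOUT="7200" AUTOMATED_REGISTER="false"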

When automated registration is switched off, the administrator can register the former primary at any time. The cluster resource agent will detect the new status during the next monitor action.
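The manual registration itself is again an hdbnsutil call on the former primary (node1 in our example), typically followed by a resource cleanup so the cluster starts the instance as the new secondary. Host names, options and the resource name are examples only.

# On node1 (as pr1adm): register the former primary as the new secondary against node2
hdbnsutil -sr_register --remoteHost=node2 --remoteInstance=00 --mode=sync --name=WDF

# Clear the failure state so the cluster starts SAP HANA on node1 again
crm resource cleanup rsc_SAPHana_PR1_HDB00 node1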


      4 Comments
      Former Member

      Dear Fabian,

      Did you test the scenario where there are 2 or more non-prod HANA instances on the secondary node?

      Could all of the non-prod HANA instances be stopped automatically, and then the secondary PRD instance be promoted successfully?

       

      Fabian Herschel (Blog Post Author)

      Hi Arthur,

      in the LinuxLab we have not tested your scenario so far, which would be something like A => B; Q1, Q2.
      We have a customer who has implemented that. They just needed to drastically decrease the stickiness for the non-prod systems to something between 100 and 200 (scoring).
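      A rough sketch of what that could look like for a non-prod instance controlled by the cluster (resource name and values are only examples, not a tested configuration):

      # Hypothetical non-prod HANA database resource with drastically lowered stickiness
      primitive rsc_SAPDatabase_QAS ocf:heartbeat:SAPDatabase \
          params SID="QAS" DBTYPE="HDB" \
          meta resource-stickiness="150"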

      Regards
      Fabian

       

       

      Former Member

      Hi Fabian,

      Nice document. Please help: what steps do we need to take care of when the hosts are in different data center areas, and what is the built-in mechanism for automatic takeover when clustering is enabled?

       

      Regards

      DM

      Former Member

      Hi There,
      Just the information I was looking for. Thanks for the www.asha24.com detailed instructions. I haven’t used it yet but I guess the time has come.

      With HANA MDC, can I restore only one DB? For example, if I have a container named FR1 that includes the tenants dev, qas and sandbox, can I give dev more resources than the other two? And can I restore only the one DB "sandbox" without restoring the whole container?

      Anyways great write up, your efforts are much appreciated.
      Many Thanks,

      kevin