SAP ASCS High Availability using ERS explained

I have read and written many blogs on SAP High Availability, but people still seem to struggle to understand the inner workings of the mechanism that makes the SAP central services instance highly available.

The SAP standard approach for making the ASCS instance highly available is ERS.

What is ERS?

ERS stands for Enqueue Replication Server, and its job is to keep an up-to-date replica of the lock table, so that if something tragic were to happen to the ASCS instance, the state of the table locks is safeguarded.
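If you want to peek at the enqueue side of this yourself, sapstartsrv exposes enqueue statistics through sapcontrol. A minimal sketch, assuming instance number 00 (a placeholder) and the <sid>adm user; the EnqGetStatistic web method reports, among other counters, the replication state, though the exact fields vary by kernel release:

# Query the standalone enqueue server statistics, including the replication state
sapcontrol -nr 00 -function EnqGetStatistic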

That’s it?... Well, yeah... it’s not a magic box!... or is it?... On its own it does not guarantee the availability of the system; it just does what is stated above. To deliver the desired high availability, its capabilities need to be combined with the features of a cluster with an automatic failover mechanism. That way, when (or if) the ASCS instance crashes, it is brought back on a different host/node, where it will use the replication table to create a new lock table so the system can resume operation.

What is the basic architecture of a highly available central instance?

At its leanest expression, you need at least two nodes and a shared file system. For the purpose of this blog I’m just going to focus on the ASCS/ERS instances, and the assumption is that the rest of the components are distributed across other nodes.
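Just to make the "shared file system" part concrete, here is a sketch of what the shared mounts could look like; the NFS server name, SID (S4H) and instance numbers (00/10) are placeholders, and in a real cluster the instance directories are usually cluster-managed filesystem resources rather than static fstab entries:

# /sapmnt holds profiles and kernel binaries and must be visible on both nodes
nfs.example.com:/export/sapmnt/S4H  /sapmnt/S4H          nfs  defaults  0 0
# Instance directories, shared so either node can run ASCS or ERS
nfs.example.com:/export/S4H/ASCS00  /usr/sap/S4H/ASCS00  nfs  defaults  0 0
nfs.example.com:/export/S4H/ERS10   /usr/sap/S4H/ERS10   nfs  defaults  0 0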

You also need a cluster provider with an automatic failover mechanism. Again, I’m not going to focus on a particular provider; I’ll keep this as generic as possible so it applies to most scenarios.

ASCS / ERS installation

In order for the ASCS and ERS instances to be able to move from one node to the other, they need to be installed on a shared filesystem and using virtual hostnames. Why?... because, together with the virtual IP, they will be added to the cluster resource group so they can all switch as one logical unit.
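As a minimal sketch (the IP addresses are made-up placeholders), both nodes would resolve the virtual hostnames, for example via /etc/hosts:

# Virtual IP / hostname pairs for the clustered instances - same entries on both nodes
10.0.0.10  sapascs
10.0.0.11  sapers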

A few high-level tips for the installations:

The installation executable should point to that virtual host:

./sapinst SAPINST_USE_HOSTNAME=<virtual hostname>

For this exercise, the ASCS instance will be installed on sapnode1 using the virtual hostname sapascs, and the ERS instance will be installed on sapnode2 with the virtual hostname sapers.
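Applied to this example, the two calls would be (the virtual hostnames must already resolve, and their virtual IPs must be active on the node you are installing on):

# On sapnode1 - install the ASCS instance against its virtual hostname
./sapinst SAPINST_USE_HOSTNAME=sapascs

# On sapnode2 - install the ERS instance against its virtual hostname
./sapinst SAPINST_USE_HOSTNAME=sapers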

Also, post installation, you need to make sure you have created mount points for both ASCS and ERS on their counterpart hosts (/usr/sap/<SID>/ASCSXX and /usr/sap/<SID>/ERSXX).
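For example, with placeholder instance numbers (ASCS00 / ERS10):

# On sapnode2 - empty mount point so the ASCS instance directory can fail over here
mkdir -p /usr/sap/<SID>/ASCS00

# On sapnode1 - counterpart mount point for the ERS instance directory
mkdir -p /usr/sap/<SID>/ERS10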

There are a number of other installation-specific steps that are required, but for the sake of keeping this generic I'll leave those aside (I have included a few links at the bottom of this blog where you can check some of those).

Below is a representation of the basic requirements to get the basic cluster configuration going, including the inactive (grey) instance requirements.


Once your cluster configuration is complete and your ASCS/ERS instances (and the rest of your system components) are operational, your system will look as below:
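To make that less abstract: this blog stays provider-agnostic, but purely as an illustration, an ASCS resource group in a Pacemaker-based cluster (pcs syntax) could be assembled roughly like this. The SID (S4H), instance number (00), IP and NFS paths are placeholders, and a real setup should follow the vendor guides linked at the bottom:

# Virtual IP that travels with the ASCS instance
pcs resource create vip_ascs ocf:heartbeat:IPaddr2 ip=10.0.0.10 --group grp_ascs

# Instance filesystem, mounted on whichever node the group runs
pcs resource create fs_ascs ocf:heartbeat:Filesystem \
    device="nfs.example.com:/export/S4H/ASCS00" \
    directory="/usr/sap/S4H/ASCS00" fstype="nfs" --group grp_ascs

# The ASCS instance itself, managed by the SAPInstance resource agent
pcs resource create ascs00 ocf:heartbeat:SAPInstance \
    InstanceName="S4H_ASCS00_sapascs" \
    START_PROFILE="/sapmnt/S4H/profile/S4H_ASCS00_sapascs" \
    AUTOMATIC_RECOVER=false --group grp_ascs

An analogous grp_ers group (virtual IP, filesystem and a SAPInstance resource for ERS10) completes the picture.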


So, what happens if sapnode2 crashes?

Well, the system will continue to operate as normal, because ASCS availability was unaffected. ERS will be brought back once sapnode2 is back online.

What happens when sapnode1 fails?

The heartbeat monitor will trigger a cluster resource failover, and the ASCS instance will be spun up on sapnode2 together with ERS (this is part of the cluster colocation configuration). There it will use the replication table to create a new lock table and resume operations. At the same time, ERS will be shut down (again, also part of the colocation rules) and will be shifted and brought back on sapnode1 once the host is back online.
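Sticking with the Pacemaker illustration from above, the colocation and ordering behaviour just described could be expressed roughly like this (the -5000 score and the Optional ordering follow the pattern in the Red Hat/SUSE guides linked below; treat them as a sketch, not a drop-in config):

# Prefer to keep ERS away from the ASCS node, but allow them to share a node during failover
# (a negative, non-INFINITY score makes the colocation a preference, not a hard rule)
pcs constraint colocation add grp_ers with grp_ascs -5000

# Once ASCS is up (and has rebuilt its lock table from the replica), stop ERS so it can move
pcs constraint order start grp_ascs then stop grp_ers kind=Optional symmetrical=false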

Just to be clear: for a small period of time both ASCS and ERS will be running in parallel on the same node. This is necessary because the replication table is kept in memory on the node where ERS is running, and only once the ASCS has finished reading it and recreating the lock table will the ERS instance be stopped, waiting to be moved back to the other node once it is back online.


Ultimately, once sapnode1 is back online, the ERS instance will be started there, will create a new lock replication table, and the ASCS will once more be highly available.
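A simple way to sanity-check the end state is to list the processes of each instance with sapcontrol (instance numbers 00/10 are placeholders; on ENSA1 systems the ASCS runs msg_server and enserver, while the ERS runs enrepserver):

# On the node currently running ASCS - expect msg_server and enserver in GREEN
sapcontrol -nr 00 -function GetProcessList

# On the node currently running ERS - expect enrepserver in GREEN
sapcontrol -nr 10 -function GetProcessList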


I hope this paints a picture of how the ASCS/ERS instances work and how, together with the cluster, they guarantee the availability of the ASCS instance and hence the SAP system's uptime and business continuity.

Last but not least, I would like to quote here some documentation and white papers I found very helpful:

High Availability with the Standalone Enqueue Server

Red Hat configuration guide for ASCS/ERS

SUSE SAP NetWeaver Enqueue Replication with High Availability Cluster - Setup Guide for SAP NetWeave...

I'd love to hear your comments.

Regards, JP