SAP BW/4HANA High Availability on AWS
BW/4HANA HA
ARCHITECTURE
ASCS HA Auto Failover – Clustering
ASCS node: ascsserver – 10.0.1.106, VIP – 10.2.0.1, virtual hostname – haascs
ERS node: ersserver – 10.0.1.229, VIP – 10.2.0.1, virtual hostname – haers
HANA HA Auto Failover – Clustering
HANA node 1: hanaha1 – 10.0.1.78
HANA node 2: hanaha2 – 10.0.1.225
VIP – 10.2.0.3, virtual hostname – hanaha
Amazon EC2 – Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic.
Using the SUSE Linux community AMI, launch five instances: PAS, ASCS, ERS, and two HANA instances.
suse-sles-sap-12-sp3-byos-v20180706-hvm-ssd-x86_64 – ami-2c693f54
SUSE Linux Enterprise Server for SAP Applications 12 SP3 for BYOS (HVM, 64-bit, SSD-Backed)
Creation of EC2 instances
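As a minimal sketch, the instances can also be launched with the AWS CLI; the instance type, key pair, subnet, and security group below are placeholders and should be replaced with your own values (repeat per server, adjusting sizing as needed):

aws ec2 run-instances \
  --image-id ami-2c693f54 \
  --instance-type r4.2xlarge \
  --key-name my-keypair \
  --subnet-id subnet-xxxxxxxx \
  --security-group-ids sg-xxxxxxxx \
  --count 1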
Amazon EFS – Amazon Elastic File System (Amazon EFS) provides simple, scalable file storage for use with Amazon EC2 instances in the AWS Cloud. Amazon EFS is easy to use and offers a simple interface that allows you to create and configure file systems quickly and easily. With Amazon EFS, storage capacity is elastic, growing and shrinking automatically as you add and remove files, so your applications have the storage they need, when they need it. Because EFS is file storage, it can be mounted across several EC2 instances.
Creation of Elastic file system
Mounting EFS to EC2 Instances
Create Directory on the EC2 instances
Create /etc/fstab entry
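A minimal sketch of both steps, assuming an EFS file system ID of fs-xxxxxxxx in us-east-1 and a mount point of /sapmnt (adjust the mount point and EFS path per instance and file system):

# Create the mount point
sudo mkdir -p /sapmnt

# /etc/fstab entry for the EFS file system (NFSv4.1)
fs-xxxxxxxx.efs.us-east-1.amazonaws.com:/ /sapmnt nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0

# Mount everything from fstab
sudo mount -a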
ASCS/ERS filesystem
PAS filesystem
HANAHA1 filesystem
HANAHA2 filesystem
Change the hostname of the EC2 Instances
ASCS
sudo hostnamectl set-hostname ascsserver
ERS
sudo hostnamectl set-hostname ersserver
HANAHA1
sudo hostnamectl set-hostname hanaha1
HANAHA2
sudo hostnamectl set-hostname hanaha2
PAS
sudo hostnamectl set-hostname passerver
Tagging the EC2 instances
Amazon Web Services (AWS) allows customers to assign metadata to their AWS resources in the form of tags. Each tag is a simple label consisting of a customer-defined key and an optional value that can make it easier to manage, search for, and filter resources.
The EC2 instances are launched with automatically generated host names. The SLES cluster agents must be able to identify the EC2 instances correctly, so tag each instance with a key (here: pacemaker) whose value is the instance's host name.
ASCS
Do the same for ERS, PAS, HANAHA1, and HANAHA2.
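For example, the tag can also be created with the AWS CLI; the instance ID below is a placeholder, and the tag key pacemaker matches the key referenced later by the STONITH resource:

aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=pacemaker,Value=ascsserver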
Creating an AWS CLI Profile on EC2 Instances
The SLES agents use the AWS Command Line Interface (CLI). They use an AWS CLI profile, which needs to be created for the root user on all cluster instances. The SUSE resources require a profile that produces output in text format. The name of the profile is arbitrary.
ASCS
Do the same for ERS, PAS, HANAHA1, and HANAHA2.
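A minimal sketch, assuming the profile is named cluster and the region is us-east-1 (both placeholders); the important part is that the default output format is text:

# As root, create the profile interactively...
aws configure --profile cluster
# ...or place it directly in /root/.aws/config:
[profile cluster]
region = us-east-1
output = text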
Disable the Source/Destination Check for the Cluster Instances
**Do this on all the servers**
The following command needs to be executed once for each EC2 instance that is supposed to receive traffic through an overlay IP address.
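For example, assuming the AWS CLI profile created above is named cluster and using a placeholder instance ID:

aws ec2 modify-instance-attribute --profile cluster --instance-id i-0123456789abcdef0 --no-source-dest-check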
Overlay IP Addresses
Add the VIP to the route table
Add the virtual IP to the network interface of ASCS, ERS, and HANAHA1
Add the Service IP Address for your ASCS Service
Add the overlay IP to ASCS, ERS, and HANAHA1
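A sketch of both steps, using the ASCS overlay IP 10.2.0.1 as an example and placeholder route table and instance IDs (repeat for the ERS and HANA overlay IPs):

# Add a route for the overlay IP pointing to the instance that currently hosts the service
aws ec2 create-route --profile cluster --route-table-id rtb-xxxxxxxx --destination-cidr-block 10.2.0.1/32 --instance-id i-0123456789abcdef0

# On the instance itself, add the overlay IP to the network interface
sudo ip address add 10.2.0.1/32 dev eth0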
Configure HTTP Proxies
**Do this on all the servers**
export http_proxy=10.*.*.*:80
export https_proxy=10.*.*.*:443
export NO_PROXY=169.254.169.254
Permit Root Login in SSH and Change Root Password
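A minimal sketch of these two changes (adjust to your own security requirements):

# Allow root login over SSH
sudo sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin yes/' /etc/ssh/sshd_config
sudo systemctl restart sshd

# Set a root password
sudo passwd root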
SAP Installation
ASCS INSTALLATION
./sapinst SAPINST_USE_HOSTNAME=haascs IS_HOST_LOCAL_USING_STRING_COMPARE=true
The ASCS installation is complete.
ERS INSTALLATION
./sapinst SAPINST_USE_HOSTNAME=haers IS_HOST_LOCAL_USING_STRING_COMPARE=true
The ERS installation is complete.
Post-installation steps for ASCS and ERS
Stopping ASCS and ERS
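For example, as the b4hadm user, the instances can be stopped with sapcontrol; the instance numbers 10 (ASCS) and 20 (ERS) follow from the profile names used below:

su - b4hadm
sapcontrol -nr 10 -function Stop          # stop ASCS
sapcontrol -nr 10 -function StopService
sapcontrol -nr 20 -function Stop          # stop ERS
sapcontrol -nr 20 -function StopService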
Maintaining sapservices
Ensure that the file /usr/sap/sapservices holds both entries (ASCS+ERS) on both cluster nodes.
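The entries typically look like the following sketch of the standard sapstartsrv format (paths must match your installation):

LD_LIBRARY_PATH=/usr/sap/B4H/ASCS10/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/B4H/ASCS10/exe/sapstartsrv pf=/usr/sap/B4H/SYS/profile/B4H_ASCS10_haascs -D -u b4hadm
LD_LIBRARY_PATH=/usr/sap/B4H/ERS20/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/B4H/ERS20/exe/sapstartsrv pf=/usr/sap/B4H/SYS/profile/B4H_ERS20_haers -D -u b4hadm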
Integrating the Cluster Framework using the sap_suse_cluster_connector Package
For the ASCS and ERS instances, edit the instance profile files B4H_ASCS10_haascs and B4H_ERS20_haers in the profile directory /usr/sap/B4H/SYS/profile/.
ASCS profile
ERS profile
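A sketch of the typical changes, following the standard SUSE ENSA1 guidance; the exact Restart_Program_xx numbers and variable names depend on the installed release, so adjust the existing lines rather than copying these verbatim:

# B4H_ASCS10_haascs: let the cluster, not sapstart, restart the enqueue server
# Restart_Program_01 = local $(_EN) pf=$(_PF)
Start_Program_01 = local $(_EN) pf=$(_PF)

# B4H_ERS20_haers: same change for the enqueue replication server
# Restart_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)
Start_Program_00 = local $(_ER) pf=$(_PFL) NR=$(SCSID)

# Add to both profiles: integrate the sap_suse_cluster_connector
service/halib = $(DIR_CT_RUN)/saphascriptco.so
service/halib_cluster_connector = /usr/bin/sap_suse_cluster_connector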
Start ASCS and ERS
Add the user b4hadm to the Unix user group haclient
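For example:

sudo usermod -a -G haclient b4hadm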
DB INSTANCE INSTALLATION
INSTALL DB INSTANCE ON HANAHA1
./sapinst SAPINST_USE_HOSTNAME=hanaha IS_HOST_LOCAL_USING_STRING_COMPARE=true
DB Instance Installation Completed.
PAS INSTALLATION
./sapinst
PAS installation completed.
ASCS/ERS CLUSTER CREATION
ASCS
ERS
ssh ersserver from ascsserver
ssh ascsserver from ersserver
ASCS
ha-cluster-init
ERS
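On the ERS node, the second node is typically joined to the cluster that was just initialized on the ASCS node, for example:

ha-cluster-join -c ascsserver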
Preparing the Cluster for adding the Resources
Put the cluster into maintenance mode
As user root
crm configure property maintenance-mode="true"
**NOTE**
Stop all SAP instances
Remove the (manually added) IP addresses on the cluster nodes
Unmount the file systems which will be controlled by the cluster
Configure AWS-specific Settings
Configuration of the AWS-specific STONITH Resource
The tag parameter of the STONITH resource (here: pacemaker) needs to match the tag key chosen for the EC2 instances. The value of this tag contains the host name.
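A sketch of the STONITH primitive using the SUSE EC2 fencing agent; the tag key pacemaker matches the tag created earlier and cluster is the assumed AWS CLI profile name:

crm configure primitive res_AWS_STONITH stonith:external/ec2 \
  op start interval=0 timeout=180 \
  op stop interval=0 timeout=180 \
  op monitor interval=120 timeout=60 \
  meta target-role=Started \
  params tag=pacemaker profile=cluster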
Configure the Resources for the ASCS
Filesystem Primitives
VIP Primitives
SAP instance
The name of the AWS CLI profile will have to match the previously configured AWS profile.
Group the resources for ASCS
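A hedged sketch of the ASCS resource definitions in crm shell syntax (for example, saved to a file and loaded with crm configure load update <file>). The EFS device, routing table ID, and timeouts are placeholders; the SID B4H, instance ASCS10, virtual hostname haascs, and overlay IP 10.2.0.1 come from the values above. Depending on the aws-vpc-move-ip agent version, the IP parameter may be named address or ip:

primitive rsc_fs_B4H_ASCS10 ocf:heartbeat:Filesystem \
  params device="fs-xxxxxxxx.efs.us-east-1.amazonaws.com:/ASCS" directory="/usr/sap/B4H/ASCS10" fstype="nfs4" \
  op start timeout=60s interval=0 \
  op stop timeout=60s interval=0 \
  op monitor interval=20s timeout=40s
primitive rsc_ip_B4H_ASCS10 ocf:suse:aws-vpc-move-ip \
  params address=10.2.0.1 routing_table=rtb-xxxxxxxx interface=eth0 profile=cluster \
  op start interval=0 timeout=180 \
  op stop interval=0 timeout=180 \
  op monitor interval=60 timeout=60
primitive rsc_sap_B4H_ASCS10 SAPInstance \
  op monitor interval=11 timeout=60 on-fail=restart \
  params InstanceName=B4H_ASCS10_haascs \
    START_PROFILE="/usr/sap/B4H/SYS/profile/B4H_ASCS10_haascs" \
    AUTOMATIC_RECOVER=false \
  meta resource-stickiness=5000 failure-timeout=60 migration-threshold=1 priority=10
group grp_B4H_ASCS10 rsc_fs_B4H_ASCS10 rsc_ip_B4H_ASCS10 rsc_sap_B4H_ASCS10 \
  meta resource-stickiness=3000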
Configure the Resources for the ERS
Filesystem Primitives
VIP Primitives
SAP Instance
The name of the AWS CLI profile will have to match the previously configured AWS profile.
Group the resources for ERS
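An analogous sketch for the ERS instance; the ERS overlay IP is left as a placeholder (the overview above lists 10.2.0.1 for both ASCS and ERS), and IS_ERS=true marks the resource as the enqueue replication server:

primitive rsc_fs_B4H_ERS20 ocf:heartbeat:Filesystem \
  params device="fs-xxxxxxxx.efs.us-east-1.amazonaws.com:/ERS" directory="/usr/sap/B4H/ERS20" fstype="nfs4" \
  op start timeout=60s interval=0 \
  op stop timeout=60s interval=0 \
  op monitor interval=20s timeout=40s
primitive rsc_ip_B4H_ERS20 ocf:suse:aws-vpc-move-ip \
  params address=<ERS-overlay-IP> routing_table=rtb-xxxxxxxx interface=eth0 profile=cluster \
  op start interval=0 timeout=180 \
  op stop interval=0 timeout=180 \
  op monitor interval=60 timeout=60
primitive rsc_sap_B4H_ERS20 SAPInstance \
  op monitor interval=11 timeout=60 on-fail=restart \
  params InstanceName=B4H_ERS20_haers \
    START_PROFILE="/usr/sap/B4H/SYS/profile/B4H_ERS20_haers" \
    AUTOMATIC_RECOVER=false IS_ERS=true
group grp_B4H_ERS20 rsc_fs_B4H_ERS20 rsc_ip_B4H_ERS20 rsc_sap_B4H_ERS20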
Configure the Colocation Constraints between ASCS and ERS
The constraints between the ASCS and ERS instances are needed to ensure that, after a failure, the ASCS instance starts on the cluster node that is running the ERS instance.
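A sketch of the constraints, following the usual SUSE ENSA1 pattern with the resource and group names assumed above:

colocation col_sap_B4H_no_both -5000: grp_B4H_ERS20 grp_B4H_ASCS10
location loc_sap_B4H_failover_to_ers rsc_sap_B4H_ASCS10 rule 2000: runs_ers_B4H eq 1
order ord_sap_B4H_first_start_ascs Optional: rsc_sap_B4H_ASCS10:start rsc_sap_B4H_ERS20:stop symmetrical=false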
crm configure property maintenance-mode="false"
SAP HANA SYSTEM REPLICATION CONFIGURATION
HANAHA1 (primary), HANAHA2 (secondary)
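A minimal sketch of the replication setup, assuming HANA instance number 00 and site names matching the host names (both assumptions); the primary must already have an initial data backup before replication can be enabled:

# On hanaha1 (primary), as the HANA <sid>adm user, after an initial data backup
hdbnsutil -sr_enable --name=HANAHA1

# On hanaha2 (secondary), with HANA stopped
hdbnsutil -sr_register --remoteHost=hanaha1 --remoteInstance=00 \
  --replicationMode=sync --operationMode=logreplay --name=HANAHA2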
HANA System Replication is configured.
HANA CLUSTER CONFIGURATION
HANAHA1
HANAHA2
ssh hanaha2 from hanaha1
ssh hanaha1 from hanaha2
HANAHA1
ha-cluster-init -u
HANAHA2
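On HANAHA2, the node is typically joined to the cluster initialized on HANAHA1, for example:

ha-cluster-join -c hanaha1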
Preparing the Cluster for adding the Resources
Put the cluster into maintenance mode
As user root
crm configure property maintenance-mode="true"
Configure AWS-specific Settings
Configuration of the AWS-specific STONITH Resource
The tag parameter of the STONITH resource (here: pacemaker) needs to match the tag key chosen for the EC2 instances. The value of this tag contains the host name.
The name of the AWS CLI profile will have to match the previously configured AWS profile.
Configure the Resources for HANA
VIP Primitives
HANA instance
HANA topology
Configure the Colocation Constraints for HANA
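A hedged sketch of the HANA cluster resources in crm shell syntax, following the usual SAPHanaSR pattern. The SID B4H and instance number 00 are assumptions (use the values from your HANA installation); the overlay IP 10.2.0.3, the AWS CLI profile, and the routing table placeholder follow the earlier sections:

primitive rsc_SAPHanaTopology_B4H_HDB00 ocf:suse:SAPHanaTopology \
  op monitor interval=10 timeout=600 \
  op start interval=0 timeout=600 \
  op stop interval=0 timeout=300 \
  params SID=B4H InstanceNumber=00
clone cln_SAPHanaTopology_B4H_HDB00 rsc_SAPHanaTopology_B4H_HDB00 \
  meta clone-node-max=1 interleave=true
primitive rsc_SAPHana_B4H_HDB00 ocf:suse:SAPHana \
  op start interval=0 timeout=3600 \
  op stop interval=0 timeout=3600 \
  op promote interval=0 timeout=3600 \
  op monitor interval=60 role=Master timeout=700 \
  op monitor interval=61 role=Slave timeout=700 \
  params SID=B4H InstanceNumber=00 PREFER_SITE_TAKEOVER=true \
    DUPLICATE_PRIMARY_TIMEOUT=7200 AUTOMATED_REGISTER=false
ms msl_SAPHana_B4H_HDB00 rsc_SAPHana_B4H_HDB00 \
  meta clone-max=2 clone-node-max=1 interleave=true
primitive rsc_ip_B4H_HDB00 ocf:suse:aws-vpc-move-ip \
  params address=10.2.0.3 routing_table=rtb-xxxxxxxx interface=eth0 profile=cluster \
  op start interval=0 timeout=180 \
  op stop interval=0 timeout=180 \
  op monitor interval=60 timeout=60
colocation col_saphana_ip_B4H_HDB00 2000: rsc_ip_B4H_HDB00:Started msl_SAPHana_B4H_HDB00:Master
order ord_SAPHana_B4H_HDB00 Optional: cln_SAPHanaTopology_B4H_HDB00 msl_SAPHana_B4H_HDB00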
crm configure property maintenance-mode="false"
This completes the BW/4HANA setup, in which ASCS/ERS is highly available and the HANA database is also highly available with HANA System Replication configured.