
Configuring HA for a resource (the resource can be an IP, a DB, a web server, etc.)

In reference to the openSAP course “Say Goodbye to Downtime with SUSE Linux Enterprise Server”, the following lab environment was set up; it could be useful for anyone who wishes to create everything from scratch.

A good overview of managing any single resource (be it the HANA DB, ASCS, SCS, etc.) will clarify our understanding of managing any kind of resource.

A special thanks to @Richard Mayne for explaining the concepts so well!

What are we going to experiment here?

High availability for an IP address, so that it remains reachable even if one VM fails.

What do we need?

Requirements for the basic setup: 3 VMs

  • Hostname: nodea (SLES 15) (VM1), Hostname: nodeb (SLES 15) (VM2) – to host the resources

  • Hostname: nasvm (TrueNAS) (VM3) – to create the iSCSI device and NFS shares


What media do we need?

VirtualBox - https://www.virtualbox.org/wiki/Downloads

SLES Image - https://www.suse.com/download/sle-sap/

TrueNAS - https://www.truenas.com/download-truenas-core/ (to provision the SBD device and NFS shares – basically, shared storage)

 

NOTE: For the lab setup I have used a minimal configuration; based on your RTO, this has to be adjusted to eliminate all single points of failure.

 

Execution at a glance

  • Install VirtualBox

  • Configure the networks before you install the VMs

  • Install SLES on one VM

  • Clone the VM created above to construct the second VM

  • Install TrueNAS

  • Create an SBD device in the TrueNAS VM and attach it to the SLES VMs

  • Install the ha_sles pattern

  • Configure the cluster

  • Create an IP resource

  • Test HA of the IP


 

NOTE: The installation of VirtualBox, of SLES on VirtualBox, and of TrueNAS on VirtualBox is self-explanatory, as plenty of content is already available on the web; these steps are therefore not detailed here.

Configure Networks

Once VirtualBox is installed, and before you deploy the VMs, create a “VirtualBox Host-Only Ethernet Adapter” as below:

File -- Host Network Manager -- Create



Choose “Configure Adapter Manually” and disable the DHCP server in the other tab.

Create multiple adapters if you wish to configure “bond” networks (combining two or more interfaces to avoid a single point of failure).

Bond0 (just a name, like eth0; nothing to be confused about) – as an example, I created bond0 here by combining eth1 and eth4 from yast2 (yast2 is the GUI for SLES configuration and settings). For this lab, however, I have not used bonding of network interfaces; the screen below is just for understanding.
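
For reference, the same bond can also be described directly in a wicked config file instead of through yast2. A minimal sketch of /etc/sysconfig/network/ifcfg-bond0 on SLES follows; the member interfaces and the address are illustrative assumptions for this lab, not settings I actually used:

BOOTPROTO='static'
IPADDR='192.168.56.99/24'                            # example address only
BONDING_MASTER='yes'
BONDING_SLAVE0='eth1'                                # first member interface
BONDING_SLAVE1='eth4'                                # second member interface
BONDING_MODULE_OPTS='mode=active-backup miimon=100'  # failover mode, link checked every 100 ms
STARTMODE='auto'

Activate it with “wicked ifup bond0”.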


After the above step, build / install SLES with the .iso file you downloaded earlier; installing a VM in VirtualBox is self-explanatory.

In a similar way, install TrueNAS from the .iso you downloaded earlier. The distribution type to choose in VirtualBox while you install is “bsd”.

Once installed, you will be able to access the TrueNAS web UI via the IP assigned automatically per your network interface; the same IP will be displayed on the console.


From a browser, access the URL with the user / password you provided during the TrueNAS installation.

 

Create SBD in TrueNAS


Assign a static IP to em0.

Activate iSCSI and NFS (if you wish to use it) from the Services section.


Once done, power off the VM “nasvm”, attach a new 1 GB disk (VHD) to “nasvm” from VirtualBox, and start the VM.

“nasvm” will automatically recognize the newly attached disk, and you will now be able to use it as an iSCSI device. From the “Sharing” section, select block shares and provide the basic information in the wizard. My settings are as below.


I have set up my network interfaces / hosts as below:


About Network Adapters used in VirtualBox

NAT adapter – this is required for the VMs hosted inside VirtualBox to have internet access.

The other adapters (eth1, eth3) can be “VirtualBox Host-Only Ethernet Adapters”, etc.

Static IPs can be assigned to the VM interfaces with the help of “yast2” – “Network Settings”.

Once the IPs are assigned, you will have to enable “ssh” in the “public” section of “Firewall Settings”, which can be accessed from “yast2”.
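
Equivalently, since SLES 15 ships with firewalld by default, ssh can be opened from the shell as well (assuming your interfaces are in the “public” zone):

firewall-cmd --zone=public --permanent --add-service=ssh   # allow ssh in the public zone
firewall-cmd --reload                                      # apply the permanent change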

So, by now you should be able to access the VMs hosted inside VirtualBox via PuTTY installed on your local machine. Set the hostnames with “hostnamectl set-hostname nodea”.

Adapt the /etc/hosts files across all VMs.
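
For this lab, the entries would look roughly like this (hypothetical addresses on the host-only network – substitute the ones you assigned):

192.168.56.101   nodea
192.168.56.102   nodeb
192.168.56.103   nasvm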

In both SLES VMs, access “iSCSI Initiator” from “yast2” and provide the IP of “nasvm”; your iSCSI device will be auto-detected.


 


Click on connect, choose “automatic”, and you should now see the iSCSI device in your “Connected Targets”.
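
If you prefer the command line over the yast2 wizard, open-iscsi can do the same discovery and login; a sketch, assuming nasvm is reachable at the example address below:

iscsiadm -m discovery -t sendtargets -p 192.168.56.103   # discover targets offered by nasvm
iscsiadm -m node --login                                 # log in to the discovered target
systemctl enable iscsid                                  # reconnect automatically after reboot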



You should also be able to see this under /dev/disk/by-id.
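
For example (the ID contains “TrueNAS”, as seen in my setup):

ls -l /dev/disk/by-id/ | grep -i truenas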


So, our SBD (STONITH Block Device) is ready.

Install the ha_sles pattern to enable high availability on all SLES nodes:

zypper install -t pattern ha_sles


Create a softdog timer on all nodes
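
The standard way to do this is to load the softdog kernel module persistently (we fall back to softdog because the lab VMs have no hardware watchdog):

echo softdog > /etc/modules-load.d/watchdog.conf   # load softdog on every boot
modprobe softdog                                   # load it immediately
lsmod | grep -e dog                                # verify the module is active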



Enable the chronyd service:

systemctl status chronyd.service

systemctl start chronyd.service

systemctl enable chronyd.service
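
To verify that time synchronization actually works:

chronyc sources   # lists the NTP sources chronyd is currently using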


 

ha-cluster-init -u -I eth0 (the screenshot below is from another VM, so do not compare the IPs with the matrix above)

As input, we go with -u (the unicast option) and one SBD device (/dev/disk/by-id/scsi-1TrueNAS_iSCSI_Disk_080027f7e434000 in my case), and we provide a virtual IP.
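
Once initialization completes, the SBD device can be inspected with the sbd tool itself (using the device path from above):

sbd -d /dev/disk/by-id/scsi-1TrueNAS_iSCSI_Disk_080027f7e434000 dump   # show the SBD header and timeouts
sbd -d /dev/disk/by-id/scsi-1TrueNAS_iSCSI_Disk_080027f7e434000 list   # show the per-node messaging slots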


 


So, we have deployed one cluster node. (Make sure the /etc/hosts files are updated with the IP addresses across all VMs.) You should now be able to access “hawk” (the web UI for managing the cluster, typically at https://<node IP>:7630).


Join another node with ha-cluster-join -i eth0 (I have used the -i option to specify the interface the cluster has to bind to).


Change the parameter “SBD_STARTMODE=always” in /etc/sysconfig/sbd to “SBD_STARTMODE=clean”.

(This will prevent a fenced node from rejoining the cluster automatically.)

So, a basic cluster setup is now ready, and the two nodes can be seen in the hawk UI or with the crm status command.


To test whether the cluster is working, a node can be fenced from the UI; the fenced node will be rebooted automatically.
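
The same test can be triggered from the command line instead of the UI; run it from the node that should survive, since the fenced node gets rebooted:

crm node fence nodeb   # fence (reboot) nodeb via the SBD device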

Configuring an IP address (e.g. 192.168.56.110) as a resource, to be highly available.

crm configure edit

primitive p-IP_110 IPaddr2 \
    params ip=192.168.56.110 cidr_netmask=24 \
    op start timeout=20s interval=0 \
    op stop timeout=20s interval=0 \
    op monitor timeout=20s interval=10s \
    meta target-role=Started
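
After saving the edit, you can confirm that the resource started and that the address is actually plumbed on the active node:

crm status                   # p-IP_110 should be shown as Started on one node
ip a | grep 192.168.56.110   # on that node, the IP appears on an interface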


You can see the same status in hawk as well.


 

The IP is running on the “nodeb” VM.

If you “poweroff” nodeb, CRM will automatically detect the VM failure and migrate the IP resource to nodea.


 

The IP was automatically moved to nodea without any intervention.
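
A simple way to watch this failover from your local machine is to keep a continuous ping running against the virtual IP while powering off the active node; only a few packets should be lost while the cluster moves the address:

ping 192.168.56.110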

 


 

Following a similar approach, HA can be configured for HANA as a resource.

Please feel free to suggest / correct / recommend anything that could help.

I am open to your feedback – do let me know if this was useful.

 