
This blog describes the minimal setup required for SAP HANA high availability (host auto-failover): how to add a standby host, how to perform a failover simulation, and what the services, hosts, and volumes look like before and after failover.

For host auto-failover (high availability), a distributed HANA (scale-out) setup is required.

The minimal setup for a scale-out is 2 servers (one worker, one standby).

When an active (worker) host fails, a standby host automatically takes its place.

For that, the standby host needs shared access to the database volumes.

Note: standby hosts do not contain any data and do not accept requests or queries.

Host 1 (first node):

Host role = worker

Host name = hanasl12

SID = HIA

Instance number = 00

IP address = 192.168.1.149

Host 2 (second node):

Host role = standby

Host name = hanadb4 (alias hanadb2)

SID = HIA

Instance number = 00

IP address = 192.168.1.172

Failover group = default

NFS is used here to share the file systems (/hana/shared, /hana/data and /hana/log).

Export /hana from the first node via /etc/exports.
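A minimal /etc/exports entry on the first node could look like the following sketch (the IP address is the standby host from this setup; the export options are illustrative and should be adapted to your environment):

```
# /etc/exports on hanasl12 (first node)
# no_root_squash lets root/<sid>adm on the standby manage files on the share
/hana   192.168.1.172(rw,sync,no_root_squash,no_subtree_check)
```

After editing the file, apply the export with `exportfs -a`.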

On the second node: maintain /etc/fstab as shown below and mount the file systems.
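Assuming /hana is exported as a whole from the first node, the corresponding /etc/fstab entry on the second node could look like this (mount options are illustrative):

```
# /etc/fstab on hanadb4 (second node)
hanasl12:/hana   /hana   nfs   defaults,rw,hard   0 0
```

Then mount everything with `mount -a` and verify with `df -h /hana`.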

Install SID = HIA on the master node (first node) using the installation media's HDBLCM (the screen below shows the services before adding the standby node).
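For reference, the installation can be started from the media roughly like this (the media path is an example; hdblcm prompts interactively for any values not supplied on the command line):

```shell
# run hdblcm from the extracted installation media (path is an example)
cd /media/SAP_HANA_DATABASE
./hdblcm --action=install --sid=HIA --number=00
```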

Hosts:

On the master node: execute the action configure_internal_network (using the resident HDBLCM).

Then, on the second node: run the resident HDBLCM with the add_hosts action.

Select the host role "standby".
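The two steps above can also be sketched on the command line using the resident hdblcm, which lives inside the installation (paths and parameters are illustrative; hdblcm prompts for anything not supplied):

```shell
# configure the inter-node network on the existing installation
cd /hana/shared/HIA/hdblcm
./hdblcm --action=configure_internal_network

# add the standby host; the host:role syntax assigns the standby role directly
./hdblcm --action=add_hosts --addhosts=hanadb4:role=standby
```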

The screen below shows the services after adding the standby node.

Hosts – after adding standby node:

Volumes (before failover): attached to the active (worker) host – the first node

To perform the failover (simulation), I killed the daemon process.
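On a standard installation, killing the daemon can be done as the `<sid>adm` user, for example (process name and commands are from a typical HANA setup; adjust for your SID and instance):

```shell
# as hiaadm on the first node: kill the instance's daemon process hard
pkill -9 hdbdaemon

# alternatively, HDB kill-9 stops all processes of the local instance at once
HDB kill-9
```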

The screen below shows the first node stopped; the second node now has the master name server (actual role) and the master index server (actual role).

Volumes (after failover): attached to the second node

Start the instance on the first node again; it takes the standby role (as its actual role).
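The configured and actual roles of each host can be checked at any time with the landscape host configuration script shipped with HANA (run as `<sid>adm`; paths follow the standard layout for SID HIA, instance 00):

```shell
# as hiaadm: show host status plus configured vs. actual name/index server roles
cd /usr/sap/HIA/HDB00/exe/python_support
python landscapeHostConfiguration.py
```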


9 Comments


    1. Former Member

      Thank you, Maruthi. You are right. Host auto-failover is a local fault recovery solution that can be used in addition or as an alternative measure to system replication.

  1. Former Member

    Hi Sathish,

    Thanks for the blog, which was helpful.

    Can you please share the basic requirements on the network side (network speed for both Linux boxes) and the specs of the servers?

    Thanks,

    Meghanth.S

     

     

    1. Former Member

      Hi Meghanth,

      My apologies for the late reply; it has been quite a while since I last checked the comments here.

      For a production setup, a 10 Gbps network is required.

      For minimal setup, we used 1 Gbps network.

      Linux servers with 30 GB RAM and 4 CPUs (2 sockets, 2 cores per socket, 1.8 GHz) were used.

      Thank you

      Satish Kumar

  2. Former Member

    Don’t quite understand this part:

    “NFS is used here to share the file systems (/hana/shared, /hana/data and /hana/log); export /hana from the first node via /etc/exports.”

    If the first node becomes unavailable, for example due to a power failure or the OS being unable to start, how is the NFS share available to the 2nd node?

    1. Former Member

      Hi Kent Peh,

      Valid point.
      The above work was done in a lab environment; my primary focus was to show a host auto-failover simulation.
      For a production/non-production setup, there are multiple options for /hana/shared, /hana/data and /hana/log that address your point:

      1. Non-shared SAN storage attached via Fibre Channel for the HANA data (/hana/data) and log (/hana/log) volumes, and shared NFS/NAS storage for the HANA binaries (/hana/shared).

      2. Shared storage infrastructure using NAS: provides a shared-everything architecture where the data volumes (/hana/data), log volumes (/hana/log) and SAP HANA binaries (/hana/shared) can all be accessed via NFS.

      3. Shared Storage Infrastructure using Cluster File System with local disks: A cluster file system, such as GPFS, spans the local SAP HANA server node disks, making the data and log volumes as well as the SAP HANA binaries available to the whole scaled-out landscape. In the event of a host auto-failover, the cluster file system provides a standby node with the required data set.

      Thank you

      Satish Kumar

  3. Former Member

    Hi,

    Please also help me with the design below:

    Node 1 – own file system

    Node 2 – own file system

    Both nodes have the same hardware configuration; how do we configure HA mode in this case?

