Dennis Padia

HANA Scale-Up HA with System Replication & Automated Failover using SUSE HAE on SLES 12 SP 3 – Part 3

Big thanks to Bernd Schubert from SUSE for proofreading the blog.

Part 1: HANA Scale-Up HA with System Replication & Automated Failover using SUSE HAE on SLES 12 SP 3 – Part 1

Part 2: HANA Scale-Up HA with System Replication & Automated Failover using SUSE HAE on SLES 12 SP 3 – Part 2

This blog describes how to install and configure SUSE HAE to automate the failover process in SAP HANA system replication. SUSE HAE is part of SUSE Linux Enterprise Server for SAP Applications and provides the SAP HANA database integration.

Procedure (High Level Steps)

  • Install the High Availability pattern and the SAPHanaSR Resource Agents
  • Basic Cluster Configuration.
  • Configure Cluster Properties and Resources.

If you have a separate OS team, have a round-table discussion with them on the cluster configuration and setup, as they know how to perform it. But if you are a pure Basis resource with little OS knowledge, I recommend reading more on cluster setup first to get more insight.

Installation of SLES High Availability Extension

Download the installation media and mount it. Install SUSE HAE on both nodes using the command:

# zypper in -t pattern ha_sles

This installs several rpm packages required for SUSE HAE. The installation must be performed on both primary and secondary HANA servers.

Create STONITH Device

STONITH (Shoot The Other Node In The Head) is the way fencing is implemented in SUSE HAE. If a cluster member is not behaving normally, it must be removed from the cluster; this is referred to as fencing. A cluster without a STONITH mechanism is not supported by SUSE. There are multiple ways to implement STONITH, but in this blog, STONITH Block Devices (SBD) are used.

Create a small LUN (1 MB) on the storage array that is shared between the cluster members. Map this LUN to both primary and secondary HANA servers through storage ports. Make note of the SCSI identifier of this LUN (the SCSI identifier should be the same on both primary and secondary HANA servers). It is possible to add more than one SBD device in a cluster for redundancy. If the two HANA nodes are installed on separate storage arrays, an alternate method such as IPMI can be used for implementing STONITH.
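To confirm the identifier, one simple check (a minimal sketch; the exact device naming can differ, for example with multipathing) is to list the persistent device links on both nodes and verify that the scsi-… entry for the shared LUN is identical:

# ls -l /dev/disk/by-id/ | grep scsi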

Refer to the SUSE Linux Enterprise High Availability Extension SLE HA Guide for best practices for implementing STONITH. The validation of this reference architecture has been performed using shared storage and SBD for STONITH implementation.

# sbd -d <shared lun> dump
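The dump prints the SBD header and its timeout values; the output typically resembles the following (device path, UUID and slot count are illustrative, and the timeout values shown are the SBD defaults):

==Dumping header on disk /dev/disk/by-id/scsi-360000970000197700209533031354139
Header version     : 2.1
UUID               : <generated UUID>
Number of slots    : 255
Sector size        : 512
Timeout (watchdog) : 5
Timeout (allocate) : 2
Timeout (loop)     : 1
Timeout (msgwait)  : 10
==Header on disk /dev/disk/by-id/scsi-360000970000197700209533031354139 is dumped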

All of the above timeout parameters are defaults. You can change them, but it is advisable to do so only if you encounter issues in the cluster or are guided by SAP/SUSE.

Now the resource agents for controlling SAP HANA system replication need to be installed on both cluster nodes.

# zypper in SAPHanaSR SAPHanaSR-doc

Configure SUSE HAE on Primary HANA Server

These steps cover the basic configuration of SUSE HAE on the primary HANA server. Start the configuration by running the command:

# sleha-init
  • /root/.ssh/id_rsa already exists – overwrite? [y/N]: Type N
  • Network Address to bind: Provide the subnet of the replication network
  • Multicast Address: Type the multicast address or leave the default value if using unicast
  • Multicast Port: Leave the default value or type the port that you want to use
  • Do you wish to use SBD? [y/N]: Type y
  • Path to storage device: Type the SCSI identifier of the SBD device created in the step Create STONITH Device (/dev/disk/by-id/scsi-360000970000197700209533031354139)
  • Are you sure you want to use this device [y/N]: Type y

Add Secondary HANA Server to the Cluster

To add the secondary HANA server to the cluster configured on the primary HANA server, run the following command on the secondary HANA server as root user.

# sleha-join
  • /root/.ssh/id_rsa already exists – overwrite? [y/N]: Type N
  • IP address or hostname of existing node: Enter the primary node replication IP address

The output of the above command looks like the figure below.

This completes the basic cluster configuration on the primary and secondary HANA servers.
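Before moving on, you can also verify the basic cluster from the command line on either node; a quick check (the output will show your own node names) is:

# crm status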

After all of the previous steps are finished, log in to Hawk (HA Web Konsole) using the URL ‘https://<hostname of primary or secondary server>:7630’ with the user ID ‘hacluster’ and password ‘linux’.

The default password can be changed later. You should see the cluster members ‘Server1’ and ‘Server2’ online.

NOTE: Sometimes, due to a firewall, port 7630 might not be open from your desktop or RDP server. In that case, open the port, or forward it to your localhost through PuTTY while logging in to the server (for example, as shown below).
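If you prefer plain SSH over PuTTY, an equivalent local port forward looks like the line below (the hostname is a placeholder); Hawk is then reachable at https://localhost:7630.

# ssh -L 7630:localhost:7630 root@<primary or secondary HANA server>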

NOTE: In the above screen you can see “admin-ip”, which is the virtual IP service configured to manage the cluster. It might not be present yet at the point when you configure it.

After setting up the cluster on SLES 11, you need to set the parameter below to make the cluster work without issues.

no-quorum-policy = ignore (obsolete, only applicable to SLES 11)

IMPORTANT STEP: For SLES 12, the no-quorum-policy setting is obsolete, but you have to make sure that the following values are set in the /etc/corosync/corosync.conf file:

# Please read the corosync.conf.5 manual page
totem {
    version: 2
    token: 5000
    consensus: 7500
    token_retransmits_before_loss_const: 6
    secauth: on
    crypto_hash: sha1
    crypto_cipher: aes256
    clear_node_high_bit: yes
    interface {
        ringnumber: 0
        bindnetaddr: **IP-address-for-heart-beating-for-the-current-server**
        mcastport: 5405
        ttl: 1
    }
    transport: udpu
}
logging {
    fileline: off
    to_logfile: yes
    to_syslog: yes
    logfile: /var/log/cluster/corosync.log
    debug: off
    timestamp: on
    logger_subsys {
        subsys: QUORUM
        debug: off
    }
}
nodelist {
    node {
        ring0_addr: **ip-node-1**
        nodeid: 1
    }
    node {
        ring0_addr: **ip-node-2**
        nodeid: 2
    }
}
quorum {
    # Enable and configure quorum subsystem (default: off)
    # see also corosync.conf.5 and votequorum.5
    provider: corosync_votequorum
    expected_votes: 2
    two_node: 1
}

These changes work like the no-quorum-policy=ignore option did on SLES 11.
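Corosync only re-reads its configuration when the cluster stack is restarted, so after editing the file on both nodes restart the stack one node at a time and verify the quorum settings afterwards. A minimal sketch, assuming the standard SLES 12 systemd units and corosync tooling:

# systemctl stop pacemaker
# systemctl restart corosync
# systemctl start pacemaker
# corosync-quorumtool -s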

SAPHanaSR Configuration

The SAPHanaSR package can be configured using the Hawk wizard. Follow the procedure in SLES for SAP Applications for the configuration steps. The Hawk wizard requires the following parameters:

  • SAP SID: SAP System Identifier. The SAP SID is always a 3-character alphanumeric string.
  • SAP Instance Number: The instance number must be a two-digit number including a leading zero.
  • Virtual IP Address: The Virtual IP Address will be configured on the host where the primary database is running.

Navigate to Wizards > SAP > SAP HANA SR Scale-Up Performance Optimized.

The virtual IP address is not the client IP of the HANA server. Make sure you provide a free IP address from your landscape, as this IP address will later be registered in DNS with a virtual hostname.

This virtual hostname will be used by your SAP application servers to connect to the HANA database. The advantage of connecting the SAP application via the virtual hostname (virtual IP) is that on failover of the HANA database, the virtual IP also migrates to the secondary node, which automatically reconnects your SAP application to the HANA database.
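As an illustration (the key name DEFAULT, port, user and password are placeholders), the hdbuserstore entry on each application server would then point to the virtual hostname instead of a physical host:

sidadm> hdbuserstore SET DEFAULT <virtual hostname>:<tenant SQL port> <username> <password>
sidadm> hdbuserstore LIST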

The parameters below play a vital role depending on the scenario you are deploying.

Parameter                   Performance Optimized   Cost Optimized   Multi-Tier
PREFER_SITE_TAKEOVER        True                    False            True / False
AUTOMATED_REGISTER          False / True            False / True     False
DUPLICATE_PRIMARY_TIMEOUT   7200                    7200             7200

 

PREFER_SITE_TAKEOVER – Defines whether the resource agent should prefer to take over to the secondary instance instead of restarting the failed primary locally.

AUTOMATED_REGISTER – Defines whether a former primary should be automatically registered as secondary of the new primary. With this parameter you can adapt the level of system replication automation. If set to false, the former primary must be registered manually; the cluster will not start this SAP HANA RDBMS until it is registered, to avoid dual-primary situations.

DUPLICATE_PRIMARY_TIMEOUT – The time difference needed between two primary timestamps if a dual-primary situation occurs. If the time difference is less than this gap, the cluster holds one or both instances in a “WAITING” status. This gives an administrator the chance to react to a failover. If the complete node of the former primary crashed, the former primary is registered after the time difference has passed. If “only” the SAP HANA RDBMS crashed, the former primary is registered immediately; after this registration to the new primary, all data will be overwritten by system replication.
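For reference, the Hawk wizard creates cluster resources roughly equivalent to the following crm configuration (a sketch using SID SLH and instance 00 as in this blog; the clone name, timeouts and scores follow the defaults from the SAPHanaSR setup guide and may differ slightly from the wizard output):

primitive rsc_SAPHanaTopology_SLH_HDB00 ocf:suse:SAPHanaTopology \
  op monitor interval="10" timeout="600" \
  op start interval="0" timeout="600" \
  op stop interval="0" timeout="300" \
  params SID="SLH" InstanceNumber="00"

clone cln_SAPHanaTopology_SLH_HDB00 rsc_SAPHanaTopology_SLH_HDB00 \
  meta clone-node-max="1" interleave="true"

primitive rsc_SAPHana_SLH_HDB00 ocf:suse:SAPHana \
  op start interval="0" timeout="3600" \
  op stop interval="0" timeout="3600" \
  op promote interval="0" timeout="3600" \
  op monitor interval="60" role="Master" timeout="700" \
  op monitor interval="61" role="Slave" timeout="700" \
  params SID="SLH" InstanceNumber="00" PREFER_SITE_TAKEOVER="true" \
    DUPLICATE_PRIMARY_TIMEOUT="7200" AUTOMATED_REGISTER="false"

ms msl_SAPHana_SLH_HDB00 rsc_SAPHana_SLH_HDB00 \
  meta clone-max="2" clone-node-max="1" interleave="true"

primitive rsc_ip_SLH_HDB00 ocf:heartbeat:IPaddr2 \
  op monitor interval="10s" timeout="20s" \
  params ip="<virtual IP>"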

 

Verify cluster resources and click on “Apply”

In the status screen, you will see that the resources are registered and that the virtual IP is now assigned to the primary server, i.e. the 4021 host.

After configuring performance-optimized system replication between the two nodes in the cluster, the graphical representation of the configuration looks as below.

rsc_SAPHanaTopology_SLH_HDB00 – Analyzes the SAP HANA system replication topology. This resource agent (RA) analyzes the SAP HANA topology and “sends” all findings via node status attributes to all nodes in the cluster. These attributes are taken by the SAPHana RA to control the SAP HANA databases. In addition, it starts and monitors the local saphostagent.

rsc_SAPHana_SLH_HDB00 – Manages the two HANA databases in system replication. In our case, these are the HANA databases residing on the XXXXXXXXX4021 and YYYYYYYY4022 servers.

rsc_ip_SLH_HDB00 – This Linux-specific resource manages virtual (alias) IP addresses. When the resource is created, the virtual IP is attached to the primary site; in case of failover, it moves to the secondary site.

Constraints

As you can see in the graphical representation of the resource configuration in the cluster, some resource constraints have been defined. They specify:

  • on which cluster nodes resources can run
  • in which order resources will be loaded
  • what other resources a specific resource depends on

Below are the two constraints that are generated automatically when the resources are registered. If they are not generated, you can define them manually.

Colocation Constraints – col_saphana_ip_SLH_HDB00

A colocation constraint tells the cluster which resources may or may not run together on a node.

To create a colocation constraint, specify an ID, select the resources between which to define the constraint, and add a score. The score determines the location relationship between the resources.

  • Positive values: The resources should run on the same node.
  • Negative values: The resources should not run on the same node.
  • Score of INFINITY: The resources have to run on the same node.
  • Score of -INFINITY: The resources must not run on the same node.

An example for use of a colocation constraint is a Web service that depends on an IP address. Configure individual resources for the IP address and the Web service, then add a colocation constraint with a score of INFINITY. It defines that the Web service must run on the same node as the IP address. This also means that if the IP address is not running on any node, the Web service will not be permitted to run.

Here, your msl_SAPHana_SLH_HDB00 and rsc_ip_SLH_HDB00 resources should run together. So in case of failover, this constraint checks where the master resource is running, and the virtual IP resource runs along with it.
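In crm syntax, the constraint generated for this setup looks roughly like the line below (the score of 2000 is the value used in the SAPHanaSR setup guide):

colocation col_saphana_ip_SLH_HDB00 2000: rsc_ip_SLH_HDB00:Started msl_SAPHana_SLH_HDB00:Master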

Order Constraints – ord_SAPHana_SLH_HDB00

Ordering constraints define the order in which resources are started and stopped.

To create an order constraint, specify an ID, select the resources between which to define the constraint, and add a score. The score determines the location relationship between the resources: The constraint is mandatory if the score is greater than zero, otherwise it is only a suggestion. The default value is INFINITY. Keeping the option Symmetrical set to Yes (default) defines that the resources are stopped in reverse order.

An example for use of an order constraint is a Web service (e.g. Apache) that depends on a certain IP address. Configure resources for the IP address and the Web service, then add an order constraint that defines that the IP address is started before Apache is started.
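For this setup, the order constraint starts the SAPHanaTopology clone before the SAPHana master/slave resource. In crm syntax it looks roughly like the line below (the clone name assumes the naming from the sketch shown earlier):

order ord_SAPHana_SLH_HDB00 Optional: cln_SAPHanaTopology_SLH_HDB00 msl_SAPHana_SLH_HDB00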

Do’s and Don’ts

In your project you should,

  • Define STONITH before adding other resources to the cluster
  • Do intensive testing
  • Tune the timeouts of the operations of SAPHana and SAPHanaTopology
  • Start with PREFER_SITE_TAKEOVER=”true”, AUTOMATED_REGISTER=”false” and DUPLICATE_PRIMARY_TIMEOUT=”7200”

In your project, avoid:

  • Rapidly changing/changing back cluster configuration, such as: Setting nodes to standby and online again or stopping/starting the master/slave resource.
  • Creating a cluster without proper time synchronization or unstable name resolutions for hosts, users and groups
  • Adding location rules for the clone, master/slave or IP resource. Only location rules mentioned in this setup guide are allowed.
  • “Migrating” or “moving” resources in crm shell, Hawk, or other tools, as this would add client-prefer location rules; this activity is completely forbidden.

Regards,

Dennis Padia.

25 Comments

      Vivin Andrews

      Hi Dennis,

       

Thanks for such a good article on Linux clustering. It gave an idea of how it works. Actually, I need some help.

      Our implementation of Linux cluster and Hana replication between the primary db and secondary db was working fine.

A day ago we faced an issue with the primary DB OS and the OS got rebooted; the secondary server took over, but the application servers did not get connected to the secondary DB.

So we restarted the DB on the primary again and the application got connected to the primary DB. Later we reconfigured the replication to the secondary DB. So we understood that the application is pointed only to the primary DB server and not to the virtual IP configured in the cluster.

      Please help in identifying the issue and  provide us a solution.

      Eg:

      HANADB - 192.168.0.1

      HANADBDR - 192.168.0.2

      Virtual IP: 192.168.0.3

Currently our application is pointed to HANADB (192.168.0.1) in hdbuserstore.

When we run hdbuserstore list, it shows the primary DB hostname.

How can we remove that and assign the virtual IP, so that when the failover happens the application does not stop?

      We have 4 application servers. Please help !

       

      Regards,

      Vivin.

      Dennis Padia
      Blog Post Author

      Hello Vivin,

      You need to first register your virtual IP (192.168.0.3) in DNS with virtual hostname. After that, you maintain that virtual hostname in all your application servers.

Your application servers connect to the HANA database using the information stored in hdbuserstore. To view the current details, you can execute the below command on the application server:

      sidadm> hdbuserstore list

Currently, it will have the server details of the primary HANA database. To change this entry and use the virtual hostname to connect to the HANA database, you need to execute the below command:

      sidadm> hdbuserstore set DEFAULT <virtual hostname>:<tenant SQL port> <username> <password>

      Link: https://help.sap.com/viewer/b3ee5778bc2e4a089d3299b82ec762a7/2.0.04/en-US/ddbdd66b632d4fe7b3c2e0e6e341e222.html

      Above command needs to be executed in all application servers. After that your application servers will be connected to HANA database using virtual hostname.

      NOTE: Make sure in cluster, virtual IP service is running on primary database and is green. On failover, this virtual IP service will move to secondary and your application servers will automatically be connected to secondary database.

      Regards,

      Dennis Padia

      Vivin Andrews

      Hi Dennis,

       

Thanks for the support, I was able to successfully map the virtual IP from all the application servers.

       

      Regards,

      Vivin.

      Jatinder Sati

      Hi Sir,

       

I need your help. Could I call you, please?

      Vivin Andrews

      Hi,

hdbuserstore list was run on the primary DB server. How do I check on the application side where it is pointed?

       

      Regards,

      Vivin

      dongsil jang

      Hello.

      I have a question.

- I configured a SAP HANA cluster on SUSE 12 SP3. For a failover test after completing the deployment, I ran ‘systemctl stop pacemaker’ on node1, which was the active node.

- As a result, the cluster and the SAP HANA DB were stopped on both node1 and node2.

      -As a result of the log check, I checked the following log.
      ‘Nov 09 22:18:36 [7238] esscmhana2 pengine: warning: cluster_status: Fencing and resource management disabled due to lack of quorum’

      -‘crm configure property no-quorum-policy = ignore’ was applied to check normal failover behavior.

      -As a result of setting check, ‘/etc/corosync/corosync.conf’ setting is as follows.
      /etc/corosync/corosync.conf
      quorum {
      #votequorum requires an expected_votes value to function
      expected_votes: 1
      #Enables two node cluster operations
      two_node: 0
      #Enable and configure quorum subsystem
provider: corosync_votequorum
}

      -I was wondering if I can run it in ‘suse12sp3’ as it is now.
      (Does it matter if I set ‘no-quorum-policy = ignore’ to suse12?)

      Dennis Padia
      Blog Post Author

      Hello,

      As mentioned "no-quorum-policy" is obsolete in SLES 12, so I wouldn't suggest you to use it. Instead in corosync.conf file you can update the configuration as below -

      quorum {
      # Enable and configure quorum subsystem (default: off)
      # see also corosync.conf.5 and votequorum.5
      provider: corosync_votequorum
      expected_votes: 2
      two_node: 1
      }

      Regards,

      Dennis Padia.

      Igor Parkhomenko

      Hello Dennis,

      Above You explained for Vivin how to setup a virtual hostname on the application server side using hdbuserstore utility to connect HANA server.

But it's not quite clear what we should do on the HANA server side in this case. What should we configure so that the HANA server can accept and "listen" for app connections on the single virtual hostname? Initially, and according to your article, the primary and secondary nodes are installed with their own, different hostnames.

      Dennis Padia
      Blog Post Author

      Hello Igor,

      As mentioned in the section SAPHanaSR Configuration, while creating configuration for HANA System Replication in cluster, you need to provide Virtual IP and this Virtual IP will be tagged to your Primary site. In case of failover, it will move to secondary site.

      Now this Virtual IP won't be resolved from other servers until and unless you register it in DNS. So you need to register Virtual IP in DNS with virtual host name and this virtual host name will be used in hdbuserstore for SAP application server to connect to HANA database. So during failover, your virtual IP will move to secondary without impacting SAP Application server connection to database.

      Regards,

      Dennis Padia.

      Vivin Andrews

      Hi Dennis,

we faced a real failover in our production database last week. As discussed above, I had configured hdbuserstore pointing to the virtual IP (192.168.0.3); we had not registered the IP with a virtual hostname in DNS.

      HANADB – 192.168.0.1

      HANADBDR – 192.168.0.2

      Virtual IP: 192.168.0.3

At the time of failover, the primary DB (HANADB – 192.168.0.1) got rebooted and the virtual IP (192.168.0.3) shifted to the secondary DB (HANADBDR – 192.168.0.2). I could see the secondary DB HANA services getting started.

But at this stage, none of the application servers were able to connect to the DB; all four application systems were stuck.

By that time the primary OS came back after the reboot, and I started the primary DB to establish the application connection, and the application was back online.

But from the VMware console we understood that the virtual IP was still pointing to the secondary server, and this IP is not reflected anywhere in the application. Later I shut down the secondary server to move the virtual IP back to the primary server.

So I guess the cluster is working fine and the DB is failing over, including the virtual IP, but the application is not able to follow the failover.

       

      please help to sort the issue to avoid future fail over switching.

      Thanks

      Vivin

      Dennis Padia
      Blog Post Author

      Hello Vivin,

      You have to make sure that your virtual IP is resolvable from SAP Application server, which means when you perform nslookup <virtual IP> it should provide you the hostname.

      You have to provide virtual hostname to your virtual IP in order for it to be maintained in hdbuserstore of SAP Application server. Because your application server will try to resolve virtual IP entry maintained in hdbuserstore.

If you have not registered the virtual IP with a hostname in DNS, then you can maintain the entry in the /etc/hosts file on all application servers. But my recommendation would be to register the virtual IP in DNS with a virtual hostname and maintain the same in hdbuserstore.

      Regards,

      Dennis Padia.

       

       

      Igor Parkhomenko

      Hello Dennis,

      thank you very much for your explanation above. But I will venture to ask one more question.

      You say "need to provide Virtual IP and this Virtual IP will be tagged to your Primary site"  and "...unless you register it in DNS". It's all about the network configuration of two Hana hosts. It's quite clear. I understand how to move it between two hosts.

But my initial question was what hostnames will be set up for the two HANA nodes during their installation. We cannot use the same virtual hostname and virtual IP that you mentioned in your explanation for both HANA nodes during installation, can we? The mentioned virtual hostname and virtual IP will be used for the seamless takeover of the client's application. So different hostnames must be assigned to the two HANA nodes during HANA installation (not replication). As a result, these nodes will know nothing about the virtual hostname and will not accept any external connections using it.

And my question is how to force the HANA node to "listen" on this virtual hostname and accept the client's connections. What HANA parameter is responsible for it?

      Thank you very much in advance and sorry for trouble.

       

      Dennis Padia
      Blog Post Author

      Hello Igor,

      But my initial question was what hostnames will be setup for two HANA nodes during its installations?

You use two different hostnames (they can be the physical hostnames) to install HANA on two separate servers. The virtual IP only comes into the picture when you are creating the HANA cluster resource between the two cluster nodes (using the wizard).

      And my question is how to force HANA node “listen” this virtual hostname and accept the client’s connection in further. What HANA parameter is responsible for it?

As described earlier, the virtual IP is assigned during HANA cluster resource creation, and you can register this virtual IP with a virtual hostname in DNS. After doing that, change the hdbuserstore entry on all application servers to point to the HANA database using the virtual hostname. So on failover, with the virtual IP being moved to the secondary host, the application server connection to the HANA database will automatically follow to the secondary host.

      We don't have to maintain any parameters on HANA end as it is taken care by HANA resources in the cluster.

      Regards,

      Dennis Padia

      Igor Parkhomenko

      Hello Dennis,

      We have setup the same HA HANA configuration as described in your article and assigned the virtual hostname and ip to the active HANA node. Then we "changed the entry of hdbuserstore in all application servers, pointing to HANA database using the virtual hostname" as you recommended. Didn't forget to write this virtual hostname/ip to /etc/hosts files on app. servers. Ping is successful.

      Then we tried to check the connection from app. server by hdbsql utility. We get the expected error:

      * -10709: Connection failed (RTE:[89006] System call 'connect' failed, rc=111:Connection refused

      R3trans utility from the app. server get the same error. We went further and added the entry in hdbuserstore on HANA server locally. We get the same result as well.

      What's wrong? It seems to me HANA database knows nothing about the virtual hostname/ip and drops the connections.

      Dennis Padia
      Blog Post Author

      Hello Igor,

Can you please confirm the command you used to register your virtual hostname on the application servers? Also, have you checked the below SAP Note?

      2668492 - After changing the hostname of the application server, connections to the HANA database fail with "Connect to database failed" and "rc=111:Connection refused"

      Regards,
      Dennis Padia

      Igor Parkhomenko

      Hello Dennis,

      thank you for your attention and patience.

      Yes, we checked the note 2668492 and the content of hdbuserstore of the app. server. It looks normal. There is a key "DEFAULT" with the virtual hostname of hana server. I suppose it's not enough just to write the entry in hdbuserstore of app. server and setup virtual name/ip on Linux server where HANA DB installed without notifying HANA system. It seems to me we have to "explain" HANA that the virtual hostname is valid and trustful and that it can accept the connections from app. side.

      What we have done:

      1. Setup an IP address alias on a network card

      sap-suse124:~ # more /etc/sysconfig/network/ifcfg-eth0
      BOOTPROTO='static'
      BROADCAST=''
      ETHTOOL_OPTIONS=''
      IPADDR='10.1.120.31/24'
      MTU=''
      NAME='82545EM Gigabit Ethernet Controller (Copper)'
      NETWORK=''
      REMOTE_IPADDR=''
      STARTMODE='auto'
      IPADDR_0='10.1.120.60/24'
      LABEL_0='VirtualIP'

      2. Add the entry on server side in /etc/hosts

      sap-suse124:~ # more /etc/hosts

      127.0.0.1 localhost
      #Real physical hostname/ip
      10.1.120.31 sap-suse124

      #Virtual hostname/ip
      10.1.120.60 vsap-suse124

      # Application server hostname/ip
      10.1.120.30 sap-app124

      3. Add virtual hostname/ip in etc/hosts file on App. side

      sap-app124:~ # more /etc/hosts :
      #
      127.0.0.1 localhost
      10.1.120.31 sap-suse124
      10.1.120.30 sap-app124
      # Virtual hostname/ip of HANA server
      10.1.120.60 vsap-suse124

      4. Ping the virtual name from app. server:

      sap-app124:~ # ping vsap-suse124
      PING vsap-suse124 (10.1.120.60) 56(84) bytes of data.
      64 bytes from vsap-suse124 (10.1.120.60): icmp_seq=1 ttl=128 time=0.769 ms
      64 bytes from vsap-suse124 (10.1.120.60): icmp_seq=2 ttl=128 time=0.390 ms
      64 bytes from vsap-suse124 (10.1.120.60): icmp_seq=3 ttl=128 time=0.386 ms

      5. Add the entry to hdbuserstore on App. server side:

      sap-app124:bw4adm 58> hdbuserstore list

      sap-app124:bw4adm 69> hdbuserstore list
      DATA FILE : /home/bw4adm/.hdb/sap-app124/SSFS_HDB.DAT
      KEY FILE : /home/bw4adm/.hdb/sap-app124/SSFS_HDB.KEY

      KEY DEFAULT
      ENV : vsap-suse124:30013
      USER: SAPHANADB

      6. Finally, check the connection:

      sap-app124:bw4adm 71> hdbsql -n vsap-suse124:30013 -u SAPHANADB -p PassWord123
      * -10709: Connection failed (RTE:[89006] System call 'connect' failed, rc=111:Connection refused {10.1.120.60:30013} (vsap-suse124:30013))

      sap-app124:bw4adm 72> R3trans -d
      This is R3trans version 6.26 (release 773 - 02.05.19 - 20:19:01).
      unicode enabled version
      2EETW169 no connect possible: "DBMS = HDB --- SERVER = '' PORT = ''"
      R3trans finished (0012).

      As you can see it's quite simple and clear. Nevertheless, we get the error.

      It will be surprising if HANA server just accepts this "violent" connection from "violent" hosts without additional configuring in HANA system.

      Dennis Padia
      Blog Post Author

      Hello Igor,

      One thing I have noticed in your Point 5 is that you are connecting to 30013 port which is usually SYSTEMDB Nameserver SQL port. Your schema SAPHANADB resides on tenant DB. So kindly run below query in HANA Studio from SYSTEMDB

      SELECT * FROM SYS_DATABASES.M_SERVICES; #This Query will work only from SYSTEMDB

      The result of above query will provide you the SQL Port of your tenant DB. Now you can either use below two ways to connect to your tenant DB where SAPHANADB resides

      hdbuserstore SET <KEY> <HANA hostname:systemdb_sqlport@tenant_database> <USERNAME> <PASSWORD>

      hdbuserstore SET <KEY> <HANA hostname:tenantdb_sqlport> <USERNAME> <PASSWORD>

NOTE: You can use the SYSTEMDB nameserver SQL port if you specify the database to which you want to connect (shown in the first command). Otherwise, you have to connect directly to the SQL port of your tenant database (shown in the second command).

In your case, you have used the SYSTEMDB nameserver SQL port but you have not specified the tenant DB, as it cannot be seen in your output of "hdbuserstore list". Kindly refer to the below SAP Note.

      2853601 - Why is Nameserver Port Used in HDBUSERSTORE for SAP Application Installation

      Let me know if this resolves your issue.

      Regards,

      Dennis Padia

      Igor Parkhomenko

      Hello Dennis,

      Thank you very much for the Note. I've corrected the entries in hdbuserstore of the app. server. It doesn't help:

      sap-app124:bw4adm 58> hdbuserstore list
      DATA FILE : /home/bw4adm/.hdb/sap-app124/SSFS_HDB.DAT
      KEY FILE : /home/bw4adm/.hdb/sap-app124/SSFS_HDB.KEY

      KEY DEFAULT
      ENV : vsap-suse124:3013
      USER: SAPHANADB
      DATABASE: DB1

      sap-app124:bw4adm 59> hdbsql -n vsap-suse124 -u SAPHANADB -p PassWord123
      * -10709: Connection failed (RTE:[89006] System call 'connect' failed,rc=111:Connection refused {10.1.120.60:30015} (vsap-suse124:30015))

      Yes, now the app. server connection was redirected to Tenant DB port 30015, but it changes nothing.

       

       

      Dennis Padia
      Blog Post Author

      Hello Igor,

You are so close to resolving your issue, but you need to understand that a connection to any HANA database works only when you pass the correct port with it. While using hdbsql, I could not see that you passed a port along with the hostname, which results in the error you have mentioned.

Kindly check the example below: in the first command I passed <hostname>:<tenant port> and got connected to my tenant database, but when I passed just <hostname> I got the same error you are getting right now.
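Illustratively (hostname, tenant SQL port and password are placeholders):

hdbsql -n <virtual hostname>:<tenant SQL port> -u SAPHANADB -p <password>   (connects to the tenant database)
hdbsql -n <virtual hostname> -u SAPHANADB -p <password>                     (fails with "Connection refused")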

Also, I can see that the port mentioned in hdbuserstore is wrong.

      Regards,

      Dennis Padia

      Vivin Andrews

      Hi Dennis,

Thanks for answering my query related to the virtual hostname. My IT team has registered it in DNS, and I will be updating it during the downtime this Sunday.

Meanwhile, my IT team asked me to shut down the secondary DB server for hardware maintenance, and I was surprised to see that it affected the primary DB server as well. Please note the steps I took:

      1. Put the secondary node to maintenance mode.
      2. Disable system replication in primary server
      3. Stop the secondary db system using Hana studio ( but it triggered the primary db also to shutdown- which should not have happened)
      4. So i restarted the primary db and the application got connected back.
      5. then powered off the secondary system at OS level and again we found the primary getting shutdown.
      6. Primary db got down two times and the third restart of the primary db as of now is stable.
      7. Now all the 4 application servers are connected to  primary db.

      Can you help me, to find out why it happened or how to analyse the issue.

      Are there any mandatory steps to be followed to  take the down time of any of the servers?

Also, we are confused by the standby button (we see it is enabled on both the nodes).

      Regards,

      Vivin.

      Dennis Padia
      Blog Post Author

      Hello Vivin,

If you want to shut down your secondary HANA database, you have to put both servers in maintenance mode. If you put only one server in maintenance mode while the other is active, you can encounter behavior like yours.

So when you are performing any maintenance activity where you don't want your HANA database to fail over, you have to switch both servers to maintenance mode.

      Regards,

      Dennis Padia

      Vivin Andrews

      Hi Dennis,

      Thanks for the update. i will try this method and update you.

      Meanwhile i was able to update the hdbuserstore with the virtual hostname (SAPERPHANA)

       

nslookup 192.168.0.3 is able to resolve the hostname SAPERPHANA.

The application got successfully connected, but when I log on to the system and check the connected DB, it shows the primary DB hostname (HANADB). So I doubt the application will connect to the secondary DB during failover.

I also updated the DBHOST in the default profile to SAPERPHANA.

But it still shows the primary DB hostname in the SAP status screen.

       

      Regards,

      Vivin

      Dennis Padia
      Blog Post Author

The database hostname you see in the SAP application will be the hostname on which you installed the HANA database. The virtual hostname acts just like a friendly URL we use for the Web Dispatcher, for example.

The database hostname information is not fetched from DEFAULT.PFL or hdbuserstore. You are good if you see your primary database hostname in your application; after failover, you will see the secondary database hostname in your application.

I feel this is actually good, because that way I know which database host my application is connected to (primary or secondary).

      Regards,
      Dennis Padia

      Rony TBC

Hi Dennis, do you have some document to implement a 2-node SUSE failover cluster with SAP HANA without shared storage (SAN)? It can be done with fencing and Pacemaker.

      JITENDRA KUMAR SATI

      Hi Sir,

       

Could I call you? I need help on a HANA cluster.

       

I have one question: when we do an SAP installation on a cluster, we put the ASCS mount point on NFS so that the ASCS automatically moves to the secondary node when the primary node goes down. My question is, in HANA replication on cluster nodes, which service moves automatically to the standby node? And which filesystems do we need to put on NFS? Please help.