Introduction


This blog describes the procedure for setting up an SAP NetWeaver system on the Azure platform, with the application layer on SUSE Linux 12 and a highly available DB2 11.1 database on SUSE Linux 12 SP3.
It can be used as a reference architecture for setting up an HA environment and will require adaptation to customer-specific network, system sizing and performance requirements.

Purpose


This blog provides a step-by-step procedure to set up HADR for a DB2 database in an SAP environment, to integrate it with a SUSE Linux Pacemaker cluster, and to perform failover and failback of the DB cluster. The focus is on eliminating the single point of failure at the database layer of the SAP environment. The blog does not cover the high availability setup for ASCS/ERS; separate documents/guides need to be followed for that. This guide is also not intended for performance optimization of the SAP application and database layers.

 

SAP System Design


Below is the high-level design of the SAP environment with HA at the DB layer. An Azure Internal Load Balancer provides the virtual IP for the DB cluster. A jump server (with a public IP) is used as the RDP entry point to log on to the SAP VMs, which have only private IPs.



Following are the hostname details for the setup described in this blog.

Hostname       Role                              File Systems
azsuascst01    PAS & ASCS VM                     /usr/sap, /sapmnt/T01
azsudbudbt01   DB2 Database Node 1               /db2/db2t01, /db2/T01, /db2/T01/log_dir, /db2/T01/logarch, /db2/T01/backup
azsudbhdrt01   DB2 Database Node 2               /db2/db2t01, /db2/T01, /db2/T01/log_dir
azsudbhat01    Virtual hostname for DB cluster   -
azsusdbt01     iSCSI VM                          -


  • Each subnet has a separate network security group (firewall) to control the allowed traffic.

  • Additional application servers can be added in the subnet of the PAS/ASCS.


 Preparations



  • Create/determine a resource group for the setup.

  • Define the VNet and subnets.

  • Set up the VMs as per the sizing requirements.

  • Update the /etc/hosts file with the IPs/hostnames of all the VMs and the virtual IP.

  • Add data disks as per the filesystem layout. (Premium disks must be used in a production environment.)

    • SAPDATA and log files should be on separate disks.

    • Use LVM to define the database filesystems and stripe the disks (64 KB stripe size or above) for optimal throughput; see the sketch after this list.



  • Refer to the SAP installation guide for the latest steps.

  • SWAP space setup for VMs.
    sudo vi /etc/waagent.conf
    # Set the property ResourceDisk.EnableSwap to y
    ResourceDisk.EnableSwap=y
    # Size of the swapfile.
    ResourceDisk.SwapSizeMB=4000
    Restart the agent to activate the change
    sudo service waagent restart
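
As an illustration of the disk striping recommendation above, here is a minimal LVM sketch for one database filesystem, assuming two data disks attached as /dev/sdc and /dev/sdd and a hypothetical volume group name (adjust device names, sizes and filesystem type to your landscape):

    # Create physical volumes and a volume group across both data disks
    sudo pvcreate /dev/sdc /dev/sdd
    sudo vgcreate vg_db2data /dev/sdc /dev/sdd
    # Striped logical volume: 2 stripes, 64 KiB stripe size, using all free extents
    sudo lvcreate -n lv_db2data -i 2 -I 64 -l 100%FREE vg_db2data
    sudo mkfs.xfs /dev/vg_db2data/lv_db2data
    sudo mkdir -p /db2/T01
    sudo mount /dev/vg_db2data/lv_db2data /db2/T01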


 Installation of ASCS Instance 



  • Start the installation using the ‘sapinst’ tool from the SWPM DVD.


sudo <swpm>/sapinst SAPINST_REMOTE_ACCESS_USER=<OS admin user>





  • Define the SAP SID.

  • Master Password for all users.

  • OS user password details.

  • Select location for SAPEXE.CAR & SAPHOSTAGENT.

  • OS user sapadm details.

  • Enter ASCS Hostname and instance number

  • Message server ports.

  • Additional components in ASCS. Do not select anything. Press next.

  • Summary Page. Press next.

  • Installation started for ASCS

  • ASCS Installation is complete.


Installation of Primary Database Instance 



  • NFS-mount the ‘/sapmnt/T01’ directory from the ASCS instance.


Steps at ASCS VM (Source System)






    • Install the NFS server using YaST

    • Add Entry to /etc/exports
      /sapmnt/T01 azsudbudbt01(rw,fsid=0,sync,no_root_squash,no_subtree_check)

    • Re-export the NFS shares to apply the change
      exportfs -r




Steps at DB VM (Target System)






    • Create the folder /sapmnt/T01

    • Mount the NFS filesystem
      mount azsuascst01:/sapmnt/T01 /sapmnt/T01

    • Add an entry in /etc/fstab for permanent mountpoint
      azsuascst01:/sapmnt/T01 /sapmnt/T01 nfs defaults 0 2
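
A quick check on the DB VM that the export is visible and the mount is active (assuming the NFS client tools are installed):

      showmount -e azsuascst01
      df -h /sapmnt/T01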



  • Start the installation using the ‘sapinst’ tool from the SWPM DVD

  • Enter the profile directory which is mounted from ASCS

  • Enter the ASCS message server port

  • Enter the master password for all users

  • Enter the DB SID (can be same as SAP SID)

  • Schema name for DB2 Database

  • Enter Media location for Installation Export & RDBMS

  • Do Not select DB2 pureScale and IBM Tivoli System Automation

  • Media location for RDBMS Client

  • Enter DB2 Setup parameters like Instance Memory, DB Compression, Tablespace Storage Management, Use of Tablespace Pool & its Size and Tablespace Layout

  • DB2 Installation Initiated

  • DB2 DB Installation Completed


Installation of Primary Application Server (PAS) Instance 



  • Start the installation using the ‘sapinst’ tool from the SWPM DVD

  • Enter the profile directory

  • PAS Instance hostname & instance number

  • Media path for Installation Export DVD

  • PAS Installation is completed.





  • Once the HADR setup is complete, update the SAP profile parameters with the virtual hostname of the DB2 database

    • Update /sapmnt/T01/profile/DEFAULT.PFL
      SAPDBHOST = azsudbhat01
      j2ee/dbhost = azsudbhat01

    • Update /sapmnt/T01/global/db6/db2cli.ini
      Hostname=azsudbhat01
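
Once the profile and db2cli.ini have been updated, the connection from the application server to the database via the virtual hostname can be verified, for example with R3trans run as the <sid>adm user (t01adm in this setup):

      su - t01adm
      R3trans -d
      # "R3trans finished (0000)" in trans.log indicates a working DB connection via azsudbhat01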




DB2 DB for SAP system : HADR setup


DB2 high availability can be achieved using several methodologies. DB2 HADR is the preferred approach in Azure, where two independent databases are clustered without sharing file systems.

http://www.linux-ha.org/wiki/Db2_(resource_agent)#DB2_Cluster_with_HADR_.28new_with_release_1.0.5.29

  • Setup Internal Load balancer for virtual IP of DB Cluster


(Make sure to enable Floating IP and increase the idle timeout to 30 minutes for each load-balancing rule.)
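
A minimal Azure CLI sketch of such an internal load balancer; the resource group, VNet/subnet and resource names (MyResourceGroup, MyVnet, db-subnet, lb-dbt01, ...) are placeholders, while the frontend IP 10.0.1.140 and probe port 62500 match the Pacemaker configuration later in this blog. An HA-ports rule (Standard SKU) is used here for simplicity; the two DB VM NICs still have to be added to the backend pool.

az network lb create --resource-group MyResourceGroup --name lb-dbt01 --sku Standard \
  --vnet-name MyVnet --subnet db-subnet \
  --frontend-ip-name fe-dbt01 --private-ip-address 10.0.1.140 \
  --backend-pool-name be-dbt01
# Health probe on the port answered by the socat listener on the active DB node
az network lb probe create --resource-group MyResourceGroup --lb-name lb-dbt01 \
  --name hp-dbt01 --protocol tcp --port 62500
# HA-ports rule with Floating IP enabled and 30 minutes idle timeout
az network lb rule create --resource-group MyResourceGroup --lb-name lb-dbt01 \
  --name rule-dbt01 --protocol All --frontend-port 0 --backend-port 0 \
  --frontend-ip-name fe-dbt01 --backend-pool-name be-dbt01 --probe-name hp-dbt01 \
  --floating-ip true --idle-timeout 30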




  • Create the standby DB VM and perform all the preparations (hosts file update, filesystem creation, SWAP space, ‘/sapmnt/T01’ NFS mount from ASCS)

  • Record the UID & GID of the following users from the primary DB server
    uid=1001(t01adm) gid=1001(sapsys) groups=1001(sapsys),1000(sapinst),1003(dbt01ctl)
    uid=1002(db2t01) gid=1002(dbt01adm) groups=1002(dbt01adm),1000(sapinst)
    uid=1003(sapt01) gid=1005(dbt01mon) groups=1005(dbt01mon)

  • Start the installation of standby DB(on node 2 VM) using SWPM

  • Choose “Custom” installation option

  • Enter the profile parameter directory (make sure it is NFS-mounted from the ASCS host)

  • Message Server Port

  • Make sure the UIDs of the users are the same as for the primary DB users

  • Select ‘Unicode’ System

  • Select the Copy method as Backup/restore

  • Enter DB ID

  • DB connect user ID

  • DB Install location

  • Enter the UID of the DB2 DB OS user (the UID must be the same as on the primary DB)

  • Enter the GIDs for the OS user groups (the GIDs must be the same as on the primary DB)

  • Software media location

  • HADR cluster type (No selection required)

  • Data communication ports (values same as in primary DB)

  • Define Instance Memory

  • When SWPM reaches the database restore step, click ‘Cancel’; the backup is restored manually in the next steps.

  • Copy the recent backup from primary DB to standby DB

  • Restore the backup

  • Check the database status

  • Check the log archiving location

  • Check the HADR service port entries (T01_HADR_1 / T01_HADR_2 in /etc/services) on both nodes; they must match on both nodes (see the example entries after the HADR configuration below).

  • Configure HADR


In Primary DB :
db2 UPDATE DB CFG FOR T01 USING HADR_LOCAL_HOST AZSUDBUDBT01
db2 UPDATE DB CFG FOR T01 USING HADR_LOCAL_SVC T01_HADR_1
db2 UPDATE DB CFG FOR T01 USING HADR_REMOTE_HOST AZSUDBHDRT01
db2 UPDATE DB CFG FOR T01 USING HADR_REMOTE_SVC T01_HADR_2
db2 UPDATE DB CFG FOR T01 USING HADR_REMOTE_INST db2t01
db2 UPDATE DB CFG FOR T01 USING HADR_TIMEOUT 60
db2 UPDATE DB CFG FOR T01 USING HADR_SYNCMODE NEARSYNC
db2 UPDATE DB CFG FOR T01 USING HADR_SPOOL_LIMIT 1000
db2 UPDATE DB CFG FOR T01 USING HADR_PEER_WINDOW 300
db2 UPDATE DB CFG FOR T01 USING indexrec RESTART logindexbuild ON


In Standby DB :
db2 UPDATE DB CFG FOR T01 USING HADR_LOCAL_HOST AZSUDBHDRT01
db2 UPDATE DB CFG FOR T01 USING HADR_LOCAL_SVC T01_HADR_2
db2 UPDATE DB CFG FOR T01 USING HADR_REMOTE_HOST AZSUDBUDBT01
db2 UPDATE DB CFG FOR T01 USING HADR_REMOTE_SVC T01_HADR_1
db2 UPDATE DB CFG FOR T01 USING HADR_REMOTE_INST db2t01
db2 UPDATE DB CFG FOR T01 USING HADR_TIMEOUT 60
db2 UPDATE DB CFG FOR T01 USING HADR_SYNCMODE NEARSYNC
db2 UPDATE DB CFG FOR T01 USING HADR_SPOOL_LIMIT 1000
db2 UPDATE DB CFG FOR T01 USING HADR_PEER_WINDOW 300
db2 UPDATE DB CFG FOR T01 USING indexrec RESTART logindexbuild ON


Note: If you are using the Azure fencing agent, use the following parameter value instead:


HADR_PEER_WINDOW 900
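
The HADR service names used above (T01_HADR_1, T01_HADR_2) must resolve to the same TCP ports on both nodes. A sketch of the corresponding /etc/services entries, with example port numbers that have to be replaced by the ports chosen for your landscape:

# /etc/services (identical on both DB nodes; port numbers below are examples only)
T01_HADR_1   51012/tcp   # HADR port of azsudbudbt01
T01_HADR_2   51013/tcp   # HADR port of azsudbhdrt01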




  • Start HADR by executing the following.
    On the standby DB:
    db2 deactivate db t01
    db2 start hadr on db t01 as standby
    On the primary DB:
    db2 deactivate db t01
    db2 start hadr on db t01 as primary

  • Once the HADR setup has completed successfully, check the HADR status


azsudbudbt01:db2t01 57> db2pd -d T01 -hadr


Database Member 0 -- Database T01 -- Active -- Up 0 days 00:11:15 -- Date 2018-11-14-02.25.34.585178


HADR_ROLE = PRIMARY
REPLAY_TYPE = PHYSICAL
HADR_SYNCMODE = NEARSYNC
STANDBY_ID = 1
LOG_STREAM_ID = 0
HADR_STATE = PEER
HADR_FLAGS = TCP_PROTOCOL
PRIMARY_MEMBER_HOST = AZSUDBUDBT01
PRIMARY_INSTANCE = db2t01
PRIMARY_MEMBER = 0
STANDBY_MEMBER_HOST = AZSUDBHDRT01
STANDBY_INSTANCE = db2t01
STANDBY_MEMBER = 0
HADR_CONNECT_STATUS = CONNECTED
HADR_CONNECT_STATUS_TIME = 11/14/2018 02:14:36.164342 (1542161676)
HEARTBEAT_INTERVAL(seconds) = 30
HEARTBEAT_MISSED = 0
HEARTBEAT_EXPECTED = 15
HADR_TIMEOUT(seconds) = 120
TIME_SINCE_LAST_RECV(seconds) = 0
PEER_WAIT_LIMIT(seconds) = 0
LOG_HADR_WAIT_CUR(seconds) = 0.000
LOG_HADR_WAIT_RECENT_AVG(seconds) = 0.004486
LOG_HADR_WAIT_ACCUMULATED(seconds) = 12.180
LOG_HADR_WAIT_COUNT = 6904
SOCK_SEND_BUF_REQUESTED,ACTUAL(bytes) = 0, 46080
SOCK_RECV_BUF_REQUESTED,ACTUAL(bytes) = 0, 374400
PRIMARY_LOG_FILE,PAGE,POS = S0000011.LOG, 1065, 15289342814
STANDBY_LOG_FILE,PAGE,POS = S0000010.LOG, 19445, 15282741896
HADR_LOG_GAP(bytes) = 69432348
STANDBY_REPLAY_LOG_FILE,PAGE,POS = S0000010.LOG, 19436, 15282702198
STANDBY_RECV_REPLAY_GAP(bytes) = 4357875
PRIMARY_LOG_TIME = 11/14/2018 02:25:34.000000 (1542162334)
STANDBY_LOG_TIME = 11/14/2018 02:25:16.000000 (1542162316)
STANDBY_REPLAY_LOG_TIME = 11/14/2018 02:25:16.000000 (1542162316)
STANDBY_RECV_BUF_SIZE(pages) = 2048
STANDBY_RECV_BUF_PERCENT = 0
STANDBY_SPOOL_LIMIT(pages) = 1000
STANDBY_SPOOL_PERCENT = 0
STANDBY_ERROR_TIME = NULL
PEER_WINDOW(seconds) = 240
PEER_WINDOW_END = 11/14/2018 02:29:32.000000 (1542162572)
READS_ON_STANDBY_ENABLED = N



Pacemaker Cluster Setup For highly available DB2 Database


A Pacemaker cluster is required to automate the failover of the database from the primary to the secondary node if the primary DB becomes unavailable. During a failover, the virtual hostname and IP are moved to the new primary node, and the SAP application's DB connection is automatically redirected to it.

Change the user shell



  • [A] Stop Db2 database

  • [A] Change the shell of the db2<sid> user from /bin/csh to /bin/ksh. It is recommended to do this with the SUSE Linux YaST tool.

  • [A] Download the latest version of the db2 resource agent from GitHub (see the sketch below the link)


https://github.com/ClusterLabs/resource-agents/blob/master/heartbeat/db2
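
A minimal sketch of replacing the resource agent shipped with the distribution by the latest upstream version, assuming the standard OCF resource agent location on SLES:

sudo cp /usr/lib/ocf/resource.d/heartbeat/db2 /usr/lib/ocf/resource.d/heartbeat/db2.orig
sudo wget -O /usr/lib/ocf/resource.d/heartbeat/db2 https://raw.githubusercontent.com/ClusterLabs/resource-agents/master/heartbeat/db2
sudo chmod 755 /usr/lib/ocf/resource.d/heartbeat/db2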




  • [A] Start Db2 database


Setup iSCSI target server


• Deploy a new VM with SUSE Linux 12 SP3 as the OS. Make sure to use premium storage disks. (For a production cluster environment, it is recommended to use 3 iSCSI target servers.)
• Update SLES
sudo zypper update
• Install iSCSI target packages
sudo zypper install targetcli-fb dbus-1-python
• Enable the iSCSI target service
sudo systemctl enable targetcli
sudo systemctl start targetcli
• Create the root folder for SBD devices
sudo mkdir /sbd
• Create the SBD device for the database cluster of SAP System T01
sudo targetcli backstores/fileio create sbddbt01 /sbd/sbddbt01 50M write_back=false
sudo targetcli iscsi/ create iqn.2006-04.dbt01.local:dbt01
sudo targetcli iscsi/iqn.2006-04.dbt01.local:dbt01/tpg1/luns/ create /backstores/fileio/sbddbt01
sudo targetcli iscsi/iqn.2006-04.dbt01.local:dbt01/tpg1/acls/ create iqn.2006-04.azsudbudbt01.local:azsudbudbt01
sudo targetcli iscsi/iqn.2006-04.dbt01.local:dbt01/tpg1/acls/ create iqn.2006-04.azsudbhdrt01.local:azsudbhdrt01
• Save the targetcli changes
sudo targetcli saveconfig
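
To verify the target configuration, list the targetcli tree; the sbddbt01 LUN and both initiator ACLs should appear:

sudo targetcli ls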



Setup SBD device for Pacemaker Cluster


The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1 or [2] - only applicable to node 2.

• [A] Enable the iSCSI & SBD services
sudo systemctl enable iscsid
sudo systemctl enable iscsi
sudo systemctl enable sbd
• [1] Change the initiator name on the first node
sudo vi /etc/iscsi/initiatorname.iscsi


Change the content of the file to match the ACLs you used when creating the iSCSI device on the iSCSI target server.
InitiatorName=iqn.2006-04.azsudbudbt01.local:azsudbudbt01


• [2] Change the initiator name on the second node
sudo vi /etc/iscsi/initiatorname.iscsi


Change the content of the file to match the ACLs you used when creating the iSCSI device on the iSCSI target server.
InitiatorName=iqn.2006-04.azsudbhdrt01.local:azsudbhdrt01


• [A] Restart the iSCSI service
sudo systemctl restart iscsid
sudo systemctl restart iscsi
• [A] Connect the iSCSI devices
sudo iscsiadm -m discovery --type=st --portal=10.0.1.136:3260
sudo iscsiadm -m node -T iqn.2006-04.dbt01.local:dbt01 --login --portal=10.0.1.136:3260
sudo iscsiadm -m node -p 10.0.1.136:3260 --op=update --name=node.startup --value=automatic
• [A] Make sure that the iSCSI devices are available and note down the device name
azsudbudbt01:/db2/T01 # lsscsi
[2:0:0:0] disk Msft Virtual Disk 1.0 /dev/sda
[3:0:1:0] disk Msft Virtual Disk 1.0 /dev/sdb
[5:0:0:0] disk Msft Virtual Disk 1.0 /dev/sdc
[6:0:0:0] disk LIO-ORG sbddbt01 4.0 /dev/sdd
• [A] Now, retrieve the IDs of the iSCSI devices.
ls -l /dev/disk/by-id/scsi-* | grep sdd


lrwxrwxrwx 1 root root 9 Nov 14 11:43 /dev/disk/by-id/scsi-1LIO-ORG_sbddbt01:75af6b58-fa6f-4c41-8fb4-d33c7c3ec533 -> ../../sdd
lrwxrwxrwx 1 root root 9 Nov 14 11:43 /dev/disk/by-id/scsi-3600140575af6b58fa6f4c418fb4d33c7 -> ../../sdd
lrwxrwxrwx 1 root root 9 Nov 14 11:43 /dev/disk/by-id/scsi-SLIO-ORG_sbddbt01_75af6b58-fa6f-4c41-8fb4-d33c7c3ec533 -> ../../sdd
• [1] Use the device ID of the iSCSI devices to create the new SBD devices on the first cluster node
sudo sbd -d /dev/disk/by-id/scsi-3600140575af6b58fa6f4c418fb4d33c7 -1 60 -4 120 create
• [A] Adapt the SBD config
sudo vi /etc/sysconfig/sbd
[...]
SBD_DEVICE="/dev/disk/by-id/scsi-3600140575af6b58fa6f4c418fb4d33c7"
[...]
SBD_PACEMAKER="yes"
[...]
SBD_STARTMODE="always"
[...]
SBD_WATCHDOG="yes"
• [A] Create the softdog configuration file
echo softdog | sudo tee /etc/modules-load.d/softdog.conf


And Load the module.
sudo modprobe -v softdog
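
A quick verification that the watchdog module is loaded and that the SBD device header was written correctly (device ID taken from the steps above):

lsmod | grep softdog
sudo sbd -d /dev/disk/by-id/scsi-3600140575af6b58fa6f4c418fb4d33c7 dump
sudo sbd -d /dev/disk/by-id/scsi-3600140575af6b58fa6f4c418fb4d33c7 list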



Pacemaker Cluster Installation


The following items are prefixed with either [A] - applicable to all nodes, [1] - only applicable to node 1 or [2] - only applicable to node 2.

• [A] Update SLES
sudo zypper update
• [A] Configure Operating System
# Edit the configuration file
sudo vi /etc/systemd/system.conf


# Change the DefaultTasksMax
#DefaultTasksMax=512
DefaultTasksMax=4096


#and to activate this setting
sudo systemctl daemon-reload


# test if the change was successful
sudo systemctl --no-pager show | grep DefaultTasksMax


Reduce the size of Dirty cache
sudo vi /etc/sysctl.conf


# Change/set the following settings
vm.dirty_bytes = 629145600
vm.dirty_background_bytes = 314572800
• [A] Enable ssh access between the cluster nodes
sudo ssh-keygen


Enter file in which to save the key (/root/.ssh/id_rsa): -> Press ENTER
Enter passphrase (empty for no passphrase): -> Press ENTER
Enter same passphrase again: -> Press ENTER


Display the public Key
sudo cat /root/.ssh/id_rsa.pub


Insert each other's public key, i.e. add the public key of node 1 to the file below on node 2 and vice versa
sudo vi /root/.ssh/authorized_keys
• [A] Install Fence Agent
sudo zypper install fence-agents
• [1] Install Cluster
sudo ha-cluster-init


(For the NTP warning, continue with ‘y’. Do not overwrite the ssh key or the fence agent device. For everything else, press ‘Enter’.)
• [2] Add node to cluster
sudo ha-cluster-join


(For the NTP warning, continue with ‘y’. Enter the IP address of node 1, which is already part of the cluster. Do not overwrite the ssh key or the fence agent device. For everything else, press ‘Enter’.)


• [A] Change the ‘hacluster’ user password
sudo passwd hacluster


(Keep the password the same on both cluster nodes)
• [A] Update Configuration of corosync.
sudo vi /etc/corosync/corosync.conf
[...]
token: 30000
token_retransmits_before_loss_const: 10
join: 60
consensus: 36000
max_messages: 20


interface {
[...]
}
transport: udpu
# remove parameter mcastaddr
# mcastaddr: IP
}
nodelist {
node {
# IP address of azsudbudbt01
ring0_addr:10.0.1.134
}
node {
# IP address of azsudbhdrt01
ring0_addr:10.0.1.135
}
}
logging {
[...]
}
quorum {
# Enable and configure quorum subsystem (default: off)
# see also corosync.conf.5 and votequorum.5
provider: corosync_votequorum
expected_votes: 2
two_node: 1
}


Restart the corosync service
sudo service corosync restart
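
After the restart, the cluster membership can be checked on either node; both nodes should show as online:

sudo corosync-cfgtool -s
sudo crm status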



Create DB2 DB Cluster Resources


• Put the cluster into maintenance mode
sudo crm configure property maintenance-mode="true"


• Configure the virtual IP and health probe resources as defined in the Internal Load Balancer
sudo crm configure primitive ip_azsudbhat01 ocf:heartbeat:IPaddr2 \
op monitor interval="10s" timeout="20s" \
params ip="10.0.1.140"


sudo crm configure primitive nc_azsudbhat01 anything \
params binfile="/usr/bin/socat" cmdline_options="-U TCP-LISTEN:62500,backlog=10,fork,reuseaddr /dev/null" \
op monitor timeout=20s interval=10 depth=0


sudo crm configure group g_ip_T01_azsudbhat01 ip_azsudbhat01 nc_azsudbhat01


• Configure the DB2 DB resources and define the constraints
sudo crm configure primitive db_db2t01 ocf:heartbeat:db2 \
params instance="db2t01" dblist="t01" \
op start interval="0" timeout="130" \
op stop interval="0" timeout="120" \
op promote interval="0" timeout="120" \
op demote interval="0" timeout="120" \
op monitor interval="30" timeout="60" \
op monitor interval="45" role="Master" timeout="60"


sudo crm configure ms ms_db2_t01 db_db2t01 \
meta target-role="Started" notify="true"


sudo crm configure colocation ip_db_with_master inf: g_ip_T01_azsudbhat01:Started ms_db2_t01:Master


sudo crm configure order ip_db_after_master inf: ms_db2_t01:promote g_ip_T01_azsudbhat01:start
sudo crm configure rsc_defaults resource-stickiness=1000
sudo crm configure rsc_defaults migration-threshold=5000


• Take the cluster out of maintenance mode
sudo crm configure property maintenance-mode="false"




  • Check the cluster status; all resources should be started and running (see the commands after this list).

  • Test a manual failover of the cluster from node 1 to node 2 by executing the command
    crm resource migrate ms_db2_t01 azsudbhdrt01
    Afterwards, make sure to clear the migration constraints using the following commands
    crm resource unmigrate ms_db2_t01
    crm resource cleanup ms_db2_t01
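
The following commands can be used to watch the cluster and the HADR role during the failover test (db2pd is executed as the db2t01 instance owner):

    sudo crm status
    sudo crm_mon -r -1
    db2pd -d T01 -hadr | grep -E 'HADR_ROLE|HADR_STATE|HADR_CONNECT_STATUS'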


 

Conclusion


We have now completed the DB2 HA setup for the SAP NetWeaver environment (including the ASCS & PAS installation) on SUSE Linux in the Azure cloud, and the system is ready. All post-installation steps must still be performed as per the SAP NetWeaver installation guide.