Introduction

In this hands-on we will build, step by step, a virtual appliance for SAP HANA on AWS. We will walk through the AWS services involved and the steps required to install and configure SAP HANA on AWS. I hope the information in this hands-on gives you a basic idea of what the relevant AWS services are and how to set up SAP HANA in an AWS environment.

The hands-on is composed of the following posts:

  1. Prior knowledge

  2. EC2 instance creation

  3. EC2 instance configuration

  4. SAP HANA installation for the master

  5. Adding worker to SAP HANA landscape


This post handles the last part of the hands-on.

We will do most steps on the command line (awscli), but note that every task in this hands-on can also be done from the AWS console.

5. Adding worker


By operating several servers we can build a scale-out landscape.

In this chapter we will add one worker to the existing SAP HANA landscape. Most tasks are the same as in the previous posts ("2. EC2 instance creation" to "3. EC2 instance configuration"), but additional NFS settings are required.

5.1. Common steps as in SAP HANA Master installation


Adding a worker, as another EC2 instance, requires almost the same steps as the SAP HANA master installation. There are two things to keep in mind.

  • Some file systems don’t need to be created on the worker because they will be shared from the master.

  • We will reuse the access key ID/secret access key, security group, image ID, etc. that have already been created or decided.


The following commands are run from your Linux terminal.

We need a new block device mapping file for the worker. The devices specified in this file will be used for the “/hana/data”, “/hana/log”, “/” and “/usr/sap” file systems on the worker. The other two file systems, “/hana/shared” and “/backup”, will be shared from the master.
yourLinux:~ # cat /tmp/ebs2.json
[
{"DeviceName":"/dev/sda1","Ebs":{"VolumeSize":100,"VolumeType":"gp2","DeleteOnTermination":true}},
{"DeviceName":"/dev/sdf","Ebs":{"VolumeSize":667,"VolumeType":"gp2","DeleteOnTermination":true}},
{"DeviceName":"/dev/sdg","Ebs":{"VolumeSize":667,"VolumeType":"gp2","DeleteOnTermination":true}},
{"DeviceName":"/dev/sdh","Ebs":{"VolumeSize":667,"VolumeType":"gp2","DeleteOnTermination":true}},
{"DeviceName":"/dev/sdi","Ebs":{"VolumeSize":667,"VolumeType":"gp2","DeleteOnTermination":true}},
{"DeviceName":"/dev/sdj","Ebs":{"VolumeSize":50,"VolumeType":"gp2","DeleteOnTermination":true}}
]
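
Before passing this file to awscli, it is worth a quick syntax check. One possible way, assuming Python is available on your Linux machine, is the built-in json.tool module, which pretty-prints the file or reports a parse error:
yourLinux:~ # python -m json.tool /tmp/ebs2.json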

Then we can create a new EC2 instance for the worker. Most parameters are the same as the master’s, but the private-ip-address, the block-device-mappings file name, and the tag-specifications differ.
yourLinux:~ # aws ec2 run-instances  \
--image-id ami-e22b898c \
--count 1 \
--instance-type r4.2xlarge \
--ebs-optimized \
--private-ip-address 172.31.128.22 \
--key-name=KeyPair \
--security-group-ids sg-07d8b7d9bc71e0e5d \
--subnet-id subnet-0ec93994701de0193 \
--placement AvailabilityZone=ap-northeast-2c,GroupName=myplsgrp \
--instance-initiated-shutdown-behavior stop \
--block-device-mappings file:///tmp/ebs2.json \
--tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value=SAP HANA Worker}]'

{
"Instances": [
{
"Monitoring": {
"State": "disabled"
},
"PublicDnsName": "",
"StateReason": {
"Message": "pending",
"Code": "pending"
},
"State": {
"Code": 0,
"Name": "pending"
},
"EbsOptimized": true,
"LaunchTime": "2018-06-22T07:09:04.000Z",
"PrivateIpAddress": "172.31.128.22",
"ProductCodes": [],
"VpcId": "vpc-b49ab4dc",
"CpuOptions": {
"CoreCount": 4,
"ThreadsPerCore": 2
},
"StateTransitionReason": "",
"InstanceId": "i-0d4bb4677f5a80c28",
"ImageId": "ami-e22b898c",
"PrivateDnsName": "ip-172-31-128-22.ap-northeast-2.compute.internal",
"KeyName": "KeyPair",
"SecurityGroups": [
{
"GroupName": "SecGrp",
"GroupId": "sg-07d8b7d9bc71e0e5d"
}
],

In the output you will find the new instance ID, i-0d4bb4677f5a80c28, of the newly created worker EC2 instance. You can check the status of the new worker instance:
yourLinux:~ # aws ec2 describe-instance-status --instance-id i-0d4bb4677f5a80c28
{
"InstanceStatuses": [
{
"InstanceId": "i-0d4bb4677f5a80c28",
"InstanceState": {
"Code": 16,
"Name": "running"
},
"AvailabilityZone": "ap-northeast-2c",
"SystemStatus": {
"Status": "ok",
"Details": [
{
"Status": "passed",
"Name": "reachability"
}
]
},
"InstanceStatus": {
"Status": "ok",
"Details": [
{
"Status": "passed",
"Name": "reachability"
}
]
}
}
]
}

In a few minutes, the instance state changes to “running” and both the system and instance reachability checks report “passed”.
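
Instead of polling describe-instance-status in a loop, awscli also provides waiters that block until a condition is met, for example:
yourLinux:~ # aws ec2 wait instance-status-ok --instance-ids i-0d4bb4677f5a80c28

Once the checks pass, you can allocate and associate another Elastic IP address for the newly created worker instance.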
yourLinux:~ # aws ec2 allocate-address
{
"PublicIp": "13.209.86.198",
"Domain": "vpc",
"AllocationId": "eipalloc-035d6f2968843e907"
}

yourLinux:~ # aws ec2 associate-address --instance-id i-0d4bb4677f5a80c28 --allocation-id eipalloc-035d6f2968843e907
{
"AssociationId": "eipassoc-00caee5e05fb50531"
}

Once the Elastic IP address is associated with the worker, you can connect to the new EC2 instance.

You could use a different key pair PEM file from the master’s, but we’ll reuse the master’s. Remember that we already created the KeyPair.pem file in "2.7. Create a key pair".
yourLinux:~ # ssh -i KeyPair.pem ec2-user@13.209.86.198
SUSE Linux Enterprise Server 12 SP3 x86_64 (64-bit)

As "root" (sudo or sudo -i) use the:
- zypper command for package management
- yast command for configuration management

Management and Config: https://www.suse.com/suse-in-the-cloud-basics
Documentation: https://www.suse.com/documentation/sles-12/
Forum: https://forums.suse.com/forumdisplay.php?93-SUSE-Public-Cloud

Have a lot of fun...
ec2-user@ip-172-31-128-22:~> sudo su -
ip-172-31-128-22:~ #

We will repeat the tasks from the sections "3.2. Change hostname", "3.3. Install prerequisite software packages" and "3.4. EC2 instance (Host) configuration" in https://blogs.sap.com/2018/07/04/hands-on-configure-sap-hana-on-aws-part3/

We’re now on the worker EC2 instance.
ip-172-31-128-22:~ # hostname imdbworker
ip-172-31-128-22:~ # echo "imdbworker" > /etc/HOSTNAME
ip-172-31-128-22:~ # cp /etc/hosts /etc/hosts.bak
ip-172-31-128-22:~ # echo "172.31.128.22 imdbworker imdbworker.local" >> /etc/hosts
ip-172-31-128-22:~ # sed -i '/preserve_hostname/ c\preserve_hostname: true' /etc/cloud/cloud.cfg
ip-172-31-128-22:~ # cp /etc/defaultdomain /etc/defaultdomain.bak
ip-172-31-128-22:~ # echo "local" >> /etc/defaultdomain

In addition, /etc/hosts on imdbmaster (master) and on imdbworker (worker) needs to be adjusted so that both files contain both hostname entries. /etc/hosts on the imdbmaster and the imdbworker will then contain the lines below:

172.31.128.22 imdbworker imdbworker.local

172.31.128.21 imdbmaster imdbmaster.local
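
You can verify that name resolution now works in both directions with a quick ping from each side (assuming ICMP is allowed between the two instances by your security group):

imdbmaster:~ # ping -c 1 imdbworker.local
imdbworker:~ # ping -c 1 imdbmaster.local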

We restart the worker EC2 instance:
yourLinux:~ # aws ec2 stop-instances --instance-ids i-0d4bb4677f5a80c28
yourLinux:~ # # wait till the instance is successfully stopped:
yourLinux:~ # aws ec2 start-instances --instance-ids i-0d4bb4677f5a80c28
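
Instead of waiting manually between the two commands, you can use the matching awscli waiter, which blocks until the instance is fully stopped:
yourLinux:~ # aws ec2 wait instance-stopped --instance-ids i-0d4bb4677f5a80c28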

The next step is installing the prerequisite software packages and configuring the EC2 instance on the worker. Revisit the sections "3.3. Install prerequisite software packages" and "3.4. EC2 instance (Host) configuration" in https://blogs.sap.com/2018/07/04/hands-on-configure-sap-hana-on-aws-part3/

5.2. Volume creation


We’ll create three file systems on the worker: “/hana/data”, “/hana/log” and “/usr/sap”.
imdbworker:~ # lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 100G 0 disk
└─xvda1 202:1 0 100G 0 part /
xvdf 202:80 0 667G 0 disk
xvdg 202:96 0 667G 0 disk
xvdh 202:112 0 667G 0 disk
xvdi 202:128 0 667G 0 disk
xvdj 202:144 0 50G 0 disk

Two sets of commands are required for the file system creation.

The first set is for the HANA database file systems, “/hana/data” and “/hana/log”. We’ll create a volume group from four physical volumes. For this, we execute the following command sequence on the EC2 instance:

  • Create physical volumes from the EBS volumes.



imdbworker:~ # pvcreate /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi



  • Set the I/O scheduler to NOOP for the physical volumes.



imdbworker:~ # echo "noop" > /sys/block/xvdf/queue/scheduler
imdbworker:~ # echo "noop" > /sys/block/xvdg/queue/scheduler
imdbworker:~ # echo "noop" > /sys/block/xvdh/queue/scheduler
imdbworker:~ # echo "noop" > /sys/block/xvdi/queue/scheduler
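
Note that values written under /sys do not survive a reboot. One way to make the setting persistent on SLES (an assumption on my side, not part of the original AMI setup) is to repeat the commands from a boot script such as /etc/init.d/boot.local:
imdbworker:~ # cat >> /etc/init.d/boot.local <<'EOF'
echo noop > /sys/block/xvdf/queue/scheduler
echo noop > /sys/block/xvdg/queue/scheduler
echo noop > /sys/block/xvdh/queue/scheduler
echo noop > /sys/block/xvdi/queue/scheduler
EOF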

  • Create a volume group.
imdbworker:~ # vgcreate vghana /dev/xvdf /dev/xvdg /dev/xvdh /dev/xvdi



  • Create logical volumes.



imdbworker:~ # lvcreate -n lvhanalog -i 4 -I 256 -L 200G vghana
imdbworker:~ # lvcreate -n lvhanadata -i 4 -I 256 -L 800G vghana



  • Create file systems.



imdbworker:~ # mkfs.xfs /dev/mapper/vghana-lvhanalog
imdbworker:~ # mkfs.xfs /dev/mapper/vghana-lvhanadata



  • Create directories for file system mount points.



imdbworker:~ # mkdir /hana /hana/data /hana/log 
imdbworker:~ # mkdir /hana/data/<SID> /hana/log/<SID>
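
At this point you can sanity-check the LVM layout; pvs, vgs and lvs should list the four physical volumes, the vghana volume group and the two logical volumes:
imdbworker:~ # pvs
imdbworker:~ # vgs
imdbworker:~ # lvs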


The second set is for the “/usr/sap” file system, which holds the SAP HANA executables and libraries. We’ll create this file system from a single block device with the following commands:

  • Create the file system.



imdbworker:~ # mkfs.xfs -f /dev/xvdj



  • Create a directory for the file system mount point.



imdbworker:~ # mkdir /usr/sap 


To have the new file systems mounted automatically when the EC2 instance restarts, the /etc/fstab file needs to be updated. Add the lines below to /etc/fstab:
/dev/xvdj /usr/sap xfs nobarrier,noatime,nodiratime,logbsize=256k 0 0
/dev/mapper/vghana-lvhanadata /hana/data xfs nobarrier,noatime,nodiratime,logbsize=256k 0 0
/dev/mapper/vghana-lvhanalog /hana/log xfs nobarrier,noatime,nodiratime,logbsize=256k 0 0

Memo.

In the case of SLES 11 SP4, /etc/fstab should look like below instead (the delaylog option is added):
/dev/xvdj /usr/sap xfs nobarrier,noatime,nodiratime,logbsize=256k,delaylog 0 0
/dev/mapper/vghana-lvhanadata /hana/data xfs nobarrier,noatime,nodiratime,logbsize=256k,delaylog 0 0
/dev/mapper/vghana-lvhanalog /hana/log xfs nobarrier,noatime,nodiratime,logbsize=256k,delaylog 0 0

“mount -a” mounts all file systems listed in the /etc/fstab file. You can check that all file systems show up in the “df -h” output with the correct sizes.
imdbworker:~ # mount -a
imdbworker:~ # df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 30G 8.0K 30G 1% /dev
tmpfs 30G 0 30G 0% /dev/shm
tmpfs 30G 23M 30G 1% /run
tmpfs 30G 0 30G 0% /sys/fs/cgroup
/dev/xvda1 99G 1.7G 93G 2% /
tmpfs 6.0G 0 6.0G 0% /run/user/1000
/dev/xvdj 50G 33M 50G 1% /usr/sap
/dev/mapper/vghana-lvhanadata 800G 34M 800G 1% /hana/data
/dev/mapper/vghana-lvhanalog 200G 33M 200G 1% /hana/log

5.3. NFS configuration for shared file systems.


In "5.2. Volume creation", we created file systems which are for exclusive use of the worker. We have to configure shared NFS for “/hana/shared” and “/backup” file systems. Those file systems will be shared between the master and the worker. The master will act as a NFS server and the worker NFS client.

The following commands are run on the master, as the NFS server:

  • Install and configure NFS server.



imdbmaster:~ # zypper -n install nfs-kernel-server
imdbmaster:~ # chkconfig nfsserver on
imdbmaster:~ # service nfsserver start
imdbmaster:~ # service nfsserver status



  • Adjust the following lines in the /etc/sysconfig/nfs file.



STATD_PORT="4000"
LOCKD_TCPPORT="4001"
LOCKD_UDPPORT="4001"
MOUNTD_PORT="4002"
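
Pinning these ports matters on AWS because the security group between master and worker must allow them, together with the standard NFS ports 111 and 2049 (TCP and UDP). If your security group does not already allow all traffic between the two instances, rules can be added as sketched below; the security group ID is the one from the earlier posts, and 172.31.0.0/16 is an assumed VPC CIDR, so adjust both to your setup (and repeat for UDP and for port 111):
yourLinux:~ # aws ec2 authorize-security-group-ingress --group-id sg-07d8b7d9bc71e0e5d --protocol tcp --port 2049 --cidr 172.31.0.0/16
yourLinux:~ # aws ec2 authorize-security-group-ingress --group-id sg-07d8b7d9bc71e0e5d --protocol tcp --port 4000-4002 --cidr 172.31.0.0/16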



  • Edit /etc/exports.



#Share global HANA shares 
/hana/shared imdbworker(rw,no_root_squash,no_subtree_check)
/backup imdbworker(rw,no_root_squash,no_subtree_check)



  • Maintain the table of exported NFS file systems.



imdbmaster:~ # exportfs -a

Memo.

The “exportfs -a” command produces the errors below (“Function not implemented”) when nfsserver has not been started yet.
exportfs: imdbworker:/backup: Function not implemented
exportfs: imdbworker:/hana/shared: Function not implemented



  • Check the mount information for the NFS server.



imdbmaster:~ # showmount -e
Export list for imdbmaster:
/backup imdbworker
/hana/shared imdbworker

Memo.

The “showmount -e” command produces the error below when nfsserver has not been started yet.
clnt_create: RPC: Port mapper failure - Unable to receive: errno 111 (Connection refused)


The following commands are run on the worker, as the NFS client.

  • Create required directories for file system mounts.



imdbworker:~ # mkdir /hana/shared /backup



  • Configure the “autofs” service.


Edit /etc/auto.master (comment out the line containing +auto.master):
#+auto.master
/- auto.direct

Edit /etc/auto.direct:
/hana/shared  -rw,rsize=32768,wsize=32768,timeo=14,intr imdbmaster.local:/hana/shared
/backup -rw,rsize=32768,wsize=32768,timeo=14,intr imdbmaster.local:/backup
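
Before switching to autofs, you can optionally verify the export with a one-off manual mount against a scratch mount point (here /mnt, only for the test):
imdbworker:~ # mount -t nfs imdbmaster.local:/hana/shared /mnt
imdbworker:~ # ls /mnt
imdbworker:~ # umount /mnt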

Restart the autofs service.
imdbworker:~ # chkconfig autofs on
imdbworker:~ # service autofs restart
imdbworker:~ # df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 30G 8.0K 30G 1% /dev
tmpfs 30G 0 30G 0% /dev/shm
tmpfs 30G 22M 30G 1% /run
tmpfs 30G 0 30G 0% /sys/fs/cgroup
/dev/xvda1 99G 1.7G 93G 2% /
/dev/xvdj 50G 226M 50G 1% /usr/sap
tmpfs 6.0G 0 6.0G 0% /run/user/1000
/dev/mapper/vghana-lvhanadata 800G 34M 800G 1% /hana/data
/dev/mapper/vghana-lvhanalog 200G 33M 200G 1% /hana/log
imdbmaster.local:/hana/shared 200G 9.3G 191G 5% /hana/shared
imdbmaster.local:/backup 1.2T 34M 1.2T 1% /backup


On the worker (the NFS client), the shared file systems are mounted via the autofs service. The mount may occasionally hang while autofs restarts. In that case, check that all communication between master and worker is working; in particular, verify that none of the UDP ports needed for NFS are still blocked on the master. “service autofs status” gives another clue about NFS issues, for example errors that prevent the service from starting.
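
A quick check from the worker is rpcinfo, which lists the RPC services (and their ports) that the master actually exposes; the nfs, mountd, status and nlockmgr entries should appear with the ports configured earlier:
imdbworker:~ # rpcinfo -p imdbmaster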

5.4. Add a worker to landscape - hdblcm.


You can add a host (SAP HANA worker) with the hdblcm command from the master. I will show the batch installation in this hands-on for reference; you can also install the worker interactively by simply running “./hdblcm”.

To add a worker, run /hana/shared/<SID>/hdblcm/hdblcm on the master.
imdbmaster:~ # cd /hana/shared/<SID>/hdblcm
imdbmaster:~ # ./hdblcm --action=add_hosts \
--addhosts=imdbworker:role=worker:group=default:workergroup=default \
--password=<hana adm password> \
--sapadm_password=<sapadm password> \
--sid=<SID> \
--batch
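
After hdblcm finishes, you can verify the new landscape as the <sid>adm user. The landscapeHostConfiguration.py script is shipped with SAP HANA and should now list both imdbmaster and imdbworker together with their roles (replace <SID>, <sid>adm and the instance number <nr> with your values):
imdbmaster:~ # su - <sid>adm -c "python /usr/sap/<SID>/HDB<nr>/exe/python_support/landscapeHostConfiguration.py"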

Memo.

For secure communication between master and worker, an extra step is needed on the master and the worker when you run into either of the errors below while adding the host:

  • “Authorization failed (user name=’root’), password authorization on host ……. Failed: -18; LIBSSH2 ERROR PUBLICKEY UNRECOGNIZED, Authentication failed (keyboard-interactive)”.

  • “Mandatory parameter 'root_password' (RootPassword) is missing or invalid”


On the master, run # ssh-keygen -t rsa

This creates or modifies id_rsa and id_rsa.pub under the ~/.ssh directory.

On the worker, append the content of the master's id_rsa.pub file to the ~/.ssh/authorized_keys file, for example as sketched below.
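
One simple way, assuming you paste the key by hand, is to print it on the master and append it on the worker (the placeholder stands for the actual key line):
imdbmaster:~ # cat ~/.ssh/id_rsa.pub
imdbworker:~ # echo "<content of the master's id_rsa.pub>" >> ~/.ssh/authorized_keys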

5.5. Post-installation.


Refer to "4.5. Post-installation" in https://blogs.sap.com/2018/07/04/hands-on-configure-sap-hana-on-aws-part4/

6. Scheduling EC2 Instance


Sometimes it’s necessary to stop and start an EC2 instance periodically. For example, an EC2 instance may not need to be online during the weekend. Since AWS services are billed on demand, you can reduce costs by stopping the EC2 instance over the weekend.

We use an AWS Lambda function and a CloudWatch event for such scheduling tasks:

  • The AWS Lambda function is the actual function body to be executed; it is created via the AWS Lambda console.

  • The AWS CloudWatch event triggers the Lambda function at a scheduled time (cron based) or at a fixed interval. It can be defined via the Amazon CloudWatch console.


The details of scheduling are beyond the scope of this hands-on; refer to https://aws.amazon.com/premiumsupport/knowledge-center/start-stop-lambda-cloudwatch/ for more information. Still, the CloudWatch side is sketched briefly below.
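
Purely as a hedged sketch (the function name stopHanaWorker and the ARN placeholders are assumptions, not part of this hands-on): once a suitable Lambda function exists, the CloudWatch event rule can be wired up with awscli. The rule below would fire every Friday at 15:00 UTC:
yourLinux:~ # aws events put-rule --name stop-hana-weekend --schedule-expression 'cron(0 15 ? * FRI *)'
yourLinux:~ # aws events put-targets --rule stop-hana-weekend --targets 'Id=1,Arn=arn:aws:lambda:<region>:<account-id>:function:stopHanaWorker'
yourLinux:~ # aws lambda add-permission --function-name stopHanaWorker --statement-id cw-stop --action lambda:InvokeFunction --principal events.amazonaws.com --source-arn arn:aws:events:<region>:<account-id>:rule/stop-hana-weekend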

It's the end of my hands-on 🙂