SAP-as-a-Service with OpenStack Cloud – Part 2: HEAT Template for HANA Deployment
In the previous blog entry, we looked at building a HEAT template for SAP instances.
In part 2 of the series SAP-as-a-Service with OpenStack Cloud, we will build on the previous HEAT template to create a HANA instance.
Note that running HANA on top of KVM is not certified for production use; running a development or quality-assurance instance carries no such restriction.
The HANA HEAT Template should automate the following setup.
- Use a SUSE for SAP image
- Take the inputs for the HANA system (instance size (S, M, L, XL), instance SID, passwords to be set)
- Create a server based on the chosen instance size
- Attach block volumes of different volume types, with sizes again varying by instance size
- Create the mount point for HANA data: /hana/<SID>/global/hdb/data
- Create the mount point for HANA log: /hana/<SID>/global/hdb/log
- Create the mount point for the HANA installer
- Download the HANA installer from a repo (this repository needs to be created beforehand)
- Do an unattended installation of the HANA server
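Before the stack can download anything, the installer repo mentioned above has to exist. Below is a minimal sketch of staging the HANA server media for the template's wget/tar steps; REPO_ROOT, SRC_DIR and their defaults are assumptions to adapt to your web server layout.

```shell
#!/bin/sh
# Sketch: stage the HANA server installer so the instance can fetch it.
# REPO_ROOT and SRC_DIR are assumptions -- point REPO_ROOT at your web server's
# document root (served e.g. as http://<repo-host>/saprepo/HANAPLATFORM/).
set -e
REPO_ROOT=${REPO_ROOT:-$PWD/saprepo/HANAPLATFORM}
SRC_DIR=${SRC_DIR:-$PWD/hana_media}

# SRC_DIR would hold the HDB_SERVER_LINUX_X86_64 directory from the SAP media.
mkdir -p "${SRC_DIR}/HDB_SERVER_LINUX_X86_64" "${REPO_ROOT}"

# Package it exactly as the instance later expects to download and extract it.
tar -czf "${REPO_ROOT}/HDB_SERVER_LINUX_X86_64.tar.gz" -C "${SRC_DIR}" HDB_SERVER_LINUX_X86_64
ls -l "${REPO_ROOT}"
```

Any plain HTTP server pointing at the directory above REPO_ROOT is sufficient; the template only needs a reachable URL for the tarball.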
HEAT Stack Inputs
HANA Instance Type: Small = 24 GB, Medium = 32 GB, Large = 48 GB, ExtraLarge = 64 GB
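In the template, this single input drives both the Nova flavor and the data and log volume sizes via Fn::Select. The mapping can be sketched as a small shell helper; the flavor names and GB values mirror the template and would need adjusting to your cloud.

```shell
#!/bin/sh
# Instance type -> "nova_flavor data_volume_gb log_volume_gb"; the log volume
# is sized to match the instance RAM, mirroring the template's Fn::Select maps.
hana_sizing() {
  case "$1" in
    Small)      echo "h1.small 40 24" ;;
    Medium)     echo "h1.medium 60 32" ;;
    Large)      echo "h1.large 90 48" ;;
    ExtraLarge) echo "h1.xlarge 120 64" ;;
    *)          echo "unknown HANA instance type: $1" >&2; return 1 ;;
  esac
}

hana_sizing Medium   # prints: h1.medium 60 32
```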
Once you launch the stack, Heat starts orchestrating the necessary operations; we have to wait until the stack is completely created. The complete stack operation should finish within about 10 minutes.
The HEAT stack automates the complete setup of the HANA system, including the setup of the data volume and log volume locations and their attachment to the respective block storage. After the stack is complete, you can use your existing HANA Studio to connect to the HANA system.
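The stack can be launched from Horizon or from the CLI. A sketch of the CLI call is below; the template file name hana.yaml and every parameter value are example assumptions. The snippet only prints the command, so drop the echo to actually create the stack.

```shell
#!/bin/sh
# Sketch: creating the stack from the CLI. 'hana.yaml' and all parameter
# values here are example assumptions; --wait blocks until the stack completes.
stack_cmd="openstack stack create hana-demo -t hana.yaml \
  --parameter hanainstance_type=Small \
  --parameter hanainstance_sid=HDB \
  --parameter hanasystem_password=Ch4ngeMe8 \
  --parameter hanasapadm_password=Ch4ngeMe8 \
  --parameter hanasidadm_password=Ch4ngeMe8 \
  --wait"

# Print the command instead of running it; progress can later be followed
# with 'openstack stack event list hana-demo'.
echo "${stack_cmd}"
```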
The following HEAT template was used to achieve this:
heat_template_version: 2015-04-30

description: >
  A template showing how to create a Nova instance, a Cinder volume and attach
  the volume to the instance. The template uses only Heat OpenStack native
  resource types.

parameters:
  hanainstance_type:
    type: string
    label: HANA Instance Type
    description: HANA Instance Size (Small, Medium, Large, ExtraLarge)
    constraints:
      - allowed_values: [ 'Small', 'Medium', 'Large', 'ExtraLarge' ]
  hanainstance_sid:
    type: string
    label: HANA Instance SID
    description: HANA Instance SID
    constraints:
      - length: { min: 3, max: 3 }
        description: SID should be of 3 characters
      - allowed_pattern: "[A-Z][A-Z0-9][A-Z0-9]"
  hanasystem_password:
    type: string
    label: SYSTEM user Password
    description: The password for the SYSTEM user
    constraints:
      - length: { min: 8 }
        description: Password should be a minimum of 8 characters
  hanasapadm_password:
    type: string
    label: SAP Host Agent User Password
    description: The password for the SAP Host Agent user (sapadm)
    constraints:
      - length: { min: 8 }
        description: Password should be a minimum of 8 characters
  hanasidadm_password:
    type: string
    label: HANA System Administrator Password
    description: The password for the HANA system administrator (<sid>adm)
    constraints:
      - length: { min: 8 }
        description: Password should be a minimum of 8 characters

resources:
  floating_ip:
    type: OS::Nova::FloatingIP
    properties:
      pool: floating

  wait_condition:
    type: OS::Heat::WaitCondition
    properties:
      handle: { get_resource: wait_handle }
      count: 1
      timeout: 720

  wait_handle:
    type: OS::Heat::WaitConditionHandle

  nova_instance:
    type: OS::Nova::Server
    properties:
      image: 'SuseforSAP12SP1'
      flavor: { "Fn::Select": [ { get_param: hanainstance_type }, { "Small": 'h1.small', "Medium": 'h1.medium', "Large": 'h1.large', "ExtraLarge": 'h1.xlarge' } ] }
      key_name: 'phani-laptop'
      name: 'hanaserver'
      networks:
        - network: 'fixed'
      security_groups:
        - { get_resource: security_group }
      user_data_format: RAW
      user_data:
        str_replace:
          template: |
            #!/bin/bash
            disk1_id='%voldata1_id%'
            disk2_id='%voldata2_id%'
            disk3_id='%voldata3_id%'
            disk1_size='%voldata1_size%'
            disk2_size='%voldata2_size%'
            disk3_size='%voldata3_size%'
            hanasystem_pwd='%hanasystem_password%'
            hanasapadm_pwd='%hanasapadm_password%'
            hanasidadm_pwd='%hanasidadm_password%'
            disk1_size_mb=$(( $disk1_size * 1024 ))
            disk2_size_mb=$(( $disk2_size * 1024 ))
            disk3_size_mb=$(( $disk3_size * 1024 ))
            db_mount_point='%dbmountpoint%'
            # The Cinder volume UUID shows up as the virtio disk serial,
            # truncated to 20 characters, under /dev/disk/by-id/
            voldata1_dev="/dev/disk/by-id/virtio-$(echo ${disk1_id} | cut -c -20)"
            voldata2_dev="/dev/disk/by-id/virtio-$(echo ${disk2_id} | cut -c -20)"
            voldata3_dev="/dev/disk/by-id/virtio-$(echo ${disk3_id} | cut -c -20)"
            hanainstance_sid='%hanainst_sid%'
            # Wait until all three volumes are attached and visible
            while [ ! -e ${voldata1_dev} ]; do echo Waiting for volume to attach; sleep 1; done
            while [ ! -e ${voldata2_dev} ]; do echo Waiting for volume to attach; sleep 1; done
            while [ ! -e ${voldata3_dev} ]; do echo Waiting for volume to attach; sleep 1; done
            # Partition each disk and mark the partition for LVM use
            parted -s ${voldata1_dev} mklabel msdos
            parted -s ${voldata1_dev} mkpart primary ext3 1 ${disk1_size_mb}
            parted -s ${voldata1_dev} set 1 lvm on
            parted -s ${voldata2_dev} mklabel msdos
            parted -s ${voldata2_dev} mkpart primary ext3 1 ${disk2_size_mb}
            parted -s ${voldata2_dev} set 1 lvm on
            parted -s ${voldata3_dev} mklabel msdos
            parted -s ${voldata3_dev} mkpart primary ext3 1 ${disk3_size_mb}
            parted -s ${voldata3_dev} set 1 lvm on
            partprobe
            # One volume group per disk: software dump, data and log
            vgcreate hanasoft ${voldata1_dev}-part1
            vgcreate hanadata ${voldata2_dev}-part1
            vgcreate hanalog ${voldata3_dev}-part1
            # Create the logical volumes. Very simple right now; we will make it more sophisticated as we go.
            lvcreate -l +100%FREE -n dump hanasoft
            lvcreate -l +100%FREE -n data hanadata
            lvcreate -l +100%FREE -n log hanalog
            #lvcreate -L+20G -n sapdump sapdata
            # Creation of the logical volume dbdata. We simply subtract 5G from the disk2 size (to allow for LVM extents). Proper calculations are still needed here.
            #lvcreate -L+$(( $disk2_size - 5 ))G -n ${db_mount_point} dbdata
            mkfs.ext4 /dev/hanasoft/dump
            mkfs.ext4 /dev/hanadata/data
            mkfs.ext4 /dev/hanalog/log
            # Create the HANA data, log and installer mount points
            mkdir -p /hana/$( echo ${hanainstance_sid} )/global/hdb/data
            mkdir -p /hana/$( echo ${hanainstance_sid} )/global/hdb/log
            #mkdir -p /sapmnt
            mkdir -p /sapdump
            #mkdir -p /${db_mount_point}
            echo "/dev/hanasoft/dump /sapdump ext4 defaults 0 0" >> /etc/fstab
            echo "/dev/hanadata/data /hana/$( echo ${hanainstance_sid} )/global/hdb/data ext4 defaults 0 0" >> /etc/fstab
            echo "/dev/hanalog/log /hana/$( echo ${hanainstance_sid} )/global/hdb/log ext4 defaults 0 0" >> /etc/fstab
            mount -av
            IPADDR=$(ifconfig | awk -F" +|:" '/inet addr/ && $4 != "127.0.0.1" {print $4}')
            echo "$IPADDR $HOSTNAME" >>/etc/hosts
            # Fetch and unpack the HANA server installer from the repo
            cd /sapdump
            wget http://w.x.y.z/saprepo/HANAPLATFORM/HDB_SERVER_LINUX_X86_64.tar.gz
            tar -zxvf HDB_SERVER_LINUX_X86_64.tar.gz
            cd HDB_SERVER_LINUX_X86_64
            # Build the password file for the unattended installation
            cat << EOF > hdb_pwd.xml
            <?xml version="1.0" encoding="UTF-8"?>
            <Passwords>
            <password><![CDATA[$( echo ${hanasidadm_pwd} )]]></password>
            <sapadm_password><![CDATA[$( echo ${hanasapadm_pwd} )]]></sapadm_password>
            <system_user_password><![CDATA[$( echo ${hanasystem_pwd} )]]></system_user_password>
            </Passwords>
            EOF
            cat hdb_pwd.xml | ./hdbinst -b --number=00 --hostname=hanaserver --sapmnt=/hana -s $( echo ${hanainstance_sid} ) --db_mode=single_container --system_usage=custom --read_password_from_stdin=xml > hdb_inst.log
            # Signal the wait condition so Heat marks the stack complete
            wc_notify --data-binary '{"status": "SUCCESS"}'
          params:
            "%voldata1_id%": { get_resource: cinder_volume1 }
            "%voldata2_id%": { get_resource: cinder_volume2 }
            "%voldata3_id%": { get_resource: cinder_volume3 }
            "%voldata1_size%": 20
            "%voldata2_size%": { "Fn::Select": [ { get_param: hanainstance_type }, { "Small": 40, "Medium": 60, "Large": 90, "ExtraLarge": 120 } ] }
            "%voldata3_size%": { "Fn::Select": [ { get_param: hanainstance_type }, { "Small": 24, "Medium": 32, "Large": 48, "ExtraLarge": 64 } ] }
            "%hanainst_sid%": { get_param: hanainstance_sid }
            "%hanasystem_password%": { get_param: hanasystem_password }
            "%hanasapadm_password%": { get_param: hanasapadm_password }
            "%hanasidadm_password%": { get_param: hanasidadm_password }
            wc_notify: { get_attr: ['wait_handle', 'curl_cli'] }

  cinder_volume1:
    type: OS::Cinder::Volume
    properties:
      size: 20
      volume_type: 'SUSE-Enterprise-Storage'

  cinder_volume2:
    type: OS::Cinder::Volume
    properties:
      size: { "Fn::Select": [ { get_param: hanainstance_type }, { "Small": 40, "Medium": 60, "Large": 90, "ExtraLarge": 120 } ] }
      volume_type: 'NetApp'

  cinder_volume3:
    type: OS::Cinder::Volume
    properties:
      size: { "Fn::Select": [ { get_param: hanainstance_type }, { "Small": 24, "Medium": 32, "Large": 48, "ExtraLarge": 64 } ] }
      volume_type: 'NetApp'

  volume_attachment1:
    type: OS::Cinder::VolumeAttachment
    properties:
      volume_id: { get_resource: cinder_volume1 }
      instance_uuid: { get_resource: nova_instance }
      mountpoint: /dev/vdc

  volume_attachment2:
    type: OS::Cinder::VolumeAttachment
    properties:
      volume_id: { get_resource: cinder_volume2 }
      instance_uuid: { get_resource: nova_instance }
      mountpoint: /dev/vdd

  volume_attachment3:
    type: OS::Cinder::VolumeAttachment
    properties:
      volume_id: { get_resource: cinder_volume3 }
      instance_uuid: { get_resource: nova_instance }
      mountpoint: /dev/vde

  association:
    type: OS::Nova::FloatingIPAssociation
    properties:
      floating_ip: { get_resource: floating_ip }
      server_id: { get_resource: nova_instance }

  security_group:
    type: OS::Neutron::SecurityGroup
    properties:
      name: hanasg
      description: Ping and SSH
      rules:
        - protocol: icmp
        - protocol: tcp
          port_range_min: 22
          port_range_max: 22
        - protocol: tcp
          port_range_min: 30015
          port_range_max: 30015
        - protocol: tcp
          port_range_min: 50013
          port_range_max: 50013

outputs:
  instance_ip:
    description: Public IP address of the newly created Nova instance.
    value: { get_attr: [nova_instance, first_address] }
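One detail of the user_data script worth calling out: KVM exposes the Cinder volume UUID as the virtio disk serial, truncated to 20 characters, which is why the script can derive a stable device path from the volume ID that Heat substitutes in. A quick sketch with an assumed example UUID:

```shell
#!/bin/sh
# The first 20 characters of the Cinder volume UUID appear under
# /dev/disk/by-id/ as the virtio serial, giving a stable device path
# regardless of the /dev/vdX ordering at boot.
volume_id="f5f796a6-f0c4-44d3-9b09-83f0a7d4ba21"   # example UUID (assumed)
voldata_dev="/dev/disk/by-id/virtio-$(echo ${volume_id} | cut -c -20)"
echo "${voldata_dev}"   # prints: /dev/disk/by-id/virtio-f5f796a6-f0c4-44d3-9
```

This is why the script polls for the by-id path rather than for /dev/vdc, /dev/vdd or /dev/vde directly.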