Automating an SAP HANA Scale-Out System with Non-shared Storage
This document demonstrates the steps required to automate the installation of an SAP HANA scale-out system with non-shared storage. Each participating host mounts the mandatory file systems (XFS) locally, such as /usr/sap, /hana/data, and /hana/log.
For this demonstration, two nodes are used, but the design can be scaled out as needed. The only file system shared between the hosts is /hana/shared.
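The storage layout described above can be sketched as /etc/fstab entries per host; the device names and the NFS export below are illustrative assumptions, not values from the original setup:

```
# Local XFS file systems, mounted on every host individually
/dev/xvdb   /usr/sap       xfs   defaults,nofail   0 2
/dev/xvdc   /hana/data     xfs   defaults,nofail   0 2
/dev/xvdd   /hana/log      xfs   defaults,nofail   0 2
# The only file system shared between the hosts (e.g. an NFS export)
nfsserver:/hana/shared   /hana/shared   nfs   defaults   0 0
```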
The generated template can be extended with other automation scripts (see the section: Extension) to provision the systems automatically using Terraform in AWS.
Generating Template: The template can be generated using the following command:
./hdblcm --action=update --dump_configfile_template=/tmp/installmulti.rsp
Customizing Parameters: Maintain the mandatory parameters of the template as shown below, and update the other parameters according to your requirements.
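As an illustration, the mandatory parameters in /tmp/installmulti.rsp could look like the following excerpt. The parameter names come from the hdblcm template; the values (SID, instance number, hostnames, roles) are assumptions for this two-node example, not defaults:

```ini
# Illustrative excerpt of /tmp/installmulti.rsp (values are examples)
sid=HDB
number=00
hostname=hananode1
# Additional hosts joining the scale-out system (role is illustrative)
addhosts=hananode2:role=worker
basepath_shared=no
custom_cfg=/tmp/custom_cfg
```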
custom_cfg: The parameter custom_cfg needs particular attention: it points to a directory containing custom configuration (*.ini) files. For a scale-out system, the template defaults to basepath_shared="yes", which assumes shared storage; leaving this default in place will cause the installation to fail on non-shared storage. Since this blog demonstrates a non-shared storage installation, the parameter basepath_shared="no" has been set.
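For illustration, a custom configuration file placed in the custom_cfg directory might look like the sketch below. The [communication] section and listeninterface value are a common scale-out setting, but treat the file name and contents as assumptions to adapt to your landscape:

```ini
# Illustrative /tmp/custom_cfg/global.ini picked up via custom_cfg
[communication]
# Listen on all interfaces; adjust for your network design
listeninterface = .global
```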
Executing Installation: ./hdblcm --configfile=/tmp/installmulti.rsp
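Before running the installation unattended, it can help to fail fast if the response file is missing the non-shared-storage settings. The sketch below is a hypothetical pre-flight check, not part of hdblcm; the file contents it writes are an illustrative excerpt for demonstration only:

```shell
#!/bin/sh
# Hedged pre-flight sketch: verify the response file carries the
# non-shared-storage settings before invoking hdblcm.
RSP="${RSP:-/tmp/installmulti.rsp}"

# Illustrative excerpt only; in practice hdblcm generates the full file.
cat > "$RSP" <<'EOF'
basepath_shared=no
custom_cfg=/tmp/custom_cfg
EOF

require_param() {
  # Fail if the expected key=value line is absent from the response file.
  grep -q "^$1" "$RSP" || { echo "missing parameter: $1" >&2; exit 1; }
}

require_param "basepath_shared=no"
require_param "custom_cfg="
echo "response file OK: $RSP"

# With the checks passed, the installation itself would follow:
#   ./hdblcm --configfile="$RSP"
```

This keeps a wrong basepath_shared value from surfacing only midway through a multi-host installation.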
The generated template, together with the other scripts found in the blogs listed below, can be used to provision a multi-node system using Terraform in AWS.
- Automating SAP HANA Installation in Minutes (AWS) – Part 1 https://blogs.sap.com/2022/11/17/automating-sap-hana-installation-in-minutes-aws-part-1/
- Automating SAP HANA Installation in Minutes (AWS) – Part 2 https://blogs.sap.com/2022/11/18/automating-sap-hana-installation-in-minutes-aws-part-2/