
Reinstalling SAP HANA SPS 8

This is a step-by-step tutorial that shows you how to install HANA SPS 8 on the Amazon Cloud. I have tried to reconstruct the entire installation process from scratch, without skipping any command or step, and have included screenshots from the console so you can see how the process unfolds. Since some of these images are quite large, the text in them may be hard to read – in this case, double-clicking on any image will open it at full size.

Launching an instance

First, we will launch a SUSE Linux Enterprise Server 11 Service Pack 3 64-bit instance, sized r3.8xlarge.




Choose your corresponding Availability Zone. Activating the option Protect against accidental termination makes it impossible to accidentally delete this instance.


Now create 5 volumes (adding up to 1847 GB). You can adjust your storage options according to your needs.


Name your instance, e.g. “SAP HANA SPS 8”.


If you already have an AWS instance, you can reuse its existing Security Group ID. Otherwise, create a new one now:


After reviewing your data one more time, you are finally ready to launch your instance.


To access the operating system via SSH, you can keep using your existing key pair. If you do not have one yet, create one now:



Your instance has now launched successfully and is available under a public IP address via SSH.



Preparing an instance

Please log in as root user and create the following directories:

mkdir /hana

mkdir /hana/shared

mkdir /usr/sap

mkdir /hana/data

mkdir /hana/data/HDB

mkdir /hana/log

mkdir /hana/log/HDB
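The directory creation above can also be collapsed into a single mkdir -p call, which creates any missing parent directories and is safe to re-run. A minimal sketch, demonstrated here against a scratch prefix so it can be tried without root (on the server, run the same call as root against / directly):

```shell
# BASE is a scratch stand-in for / so this demo needs no root rights;
# on the real server, create the directories directly under / as root.
BASE="$(mktemp -d)"

# One call replaces the six mkdir commands: -p creates parents as
# needed and does not complain if a directory already exists.
mkdir -p "$BASE/hana/shared" "$BASE/hana/data/HDB" \
         "$BASE/hana/log/HDB" "$BASE/usr/sap"
```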


With the following commands, we will format our 5 volumes with the ext3 file system (proceed with “y”):

fdisk -l (shows you the list of devices on your machine)

mkfs -t ext3 /dev/xvdb


Continue with the following commands:

mkfs -t ext3 /dev/xvdc

mkfs -t ext3 /dev/xvdd

mkfs -t ext3 /dev/xvde

mkfs -t ext3 /dev/xvdf
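The five mkfs calls can also be written as a loop. The sketch below only echoes each command so that nothing is formatted by accident; drop the echo to actually run it (destructive – it erases any existing data on the volumes):

```shell
# Print (not run) the format command for each of the five volumes.
# Remove "echo" to really format them -- this destroys existing data.
for dev in xvdb xvdc xvdd xvde xvdf; do
  echo mkfs -t ext3 "/dev/$dev"
done
```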

Next, we will edit the file /etc/fstab (e.g. with vi) and add the following entries, so that our volumes are mounted automatically every time the system restarts. Thus, they will always be available.

/dev/xvdb  /usr/sap       ext3    defaults        0 0

/dev/xvdc  /hana/data     ext3    defaults        0 0

/dev/xvdd  /hana/log      ext3    defaults        0 0

/dev/xvde  /hana/shared   ext3    defaults        0 0

/dev/xvdf  /hanadata      ext3    defaults        0 0
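A typo in these entries is easy to make; one quick sanity check is that every fstab record has exactly six fields. A small awk sketch – shown here against a here-document copy of the entries, but on the server you would point it at /etc/fstab itself:

```shell
# Every non-empty fstab record needs exactly six fields:
# device, mount point, fstype, options, dump, pass.
awk 'NF > 0 && NF != 6 { print "suspicious line:", $0; bad = 1 }
     END { exit bad }' <<'EOF'
/dev/xvdb  /usr/sap       ext3    defaults        0 0
/dev/xvdc  /hana/data     ext3    defaults        0 0
/dev/xvdd  /hana/log      ext3    defaults        0 0
/dev/xvde  /hana/shared   ext3    defaults        0 0
EOF
echo "exit status: $?"   # -> exit status: 0
```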


With the command mount -a, all volumes listed in fstab are mounted automatically. You can check this with the command df -h:


The installation also requires Java, which can easily be installed on your Linux machine with the following commands:


mv FILE_NAME java-linux-x64.rpm (replace FILE_NAME with the name of the downloaded Java RPM)


rpm -ivh java-linux-x64.rpm


It is up to you whether you download and decompress the installation files on a Windows machine or a Linux machine. Keep in mind, though, that the decompressed files must end up on your SUSE Linux machine.

From the Marketplace, we download the 3 RAR files – for instance on a Windows machine – and decompress them. For this, you will need a decompression tool such as WinRAR or 7-Zip.


The decompressed files can now be transferred to our SUSE server e.g. via SFTP:


Switch to the *DATA_UNITS/HDB_SERVER_LINUX_X86_64 directory and make the installer files executable with the following commands:

find -name hdbinst   -exec chmod 744 {} +
find -name hdbsetup  -exec chmod 744 {} +
find -name hdbuninst -exec chmod 744 {} +
find -name sdbrun    -exec chmod 744 {} +
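The four find calls can be merged into a single pass over the directory. Sketched here in a scratch directory with empty stand-in files so it is safe to try anywhere; on the server, run the find from DATA_UNITS/HDB_SERVER_LINUX_X86_64 against the real installer files:

```shell
# Scratch directory with stand-ins for the four installer binaries,
# so the demo does not need the real installation media.
DIR="$(mktemp -d)"
touch "$DIR/hdbinst" "$DIR/hdbsetup" "$DIR/hdbuninst" "$DIR/sdbrun"

# One find instead of four: mark all four files rwxr--r-- in one pass.
find "$DIR" \( -name hdbinst -o -name hdbsetup \
            -o -name hdbuninst -o -name sdbrun \) -exec chmod 744 {} +
```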


To prevent the installation from aborting due to the hardware check, set the following environment variable:

export IDSPISPOPD="1"


You can verify from a Python prompt that the variable is set:

>>> import os

>>> 'IDSPISPOPD' in os.environ.keys()


>>> quit()
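The same check can be done directly in the shell, without starting Python; a minimal sketch:

```shell
export IDSPISPOPD="1"

# Warn loudly if the variable is missing or has the wrong value.
if [ "${IDSPISPOPD}" = "1" ]; then
  echo "hardware check override is set"
else
  echo "IDSPISPOPD is NOT set -- the installer will run its hardware check" >&2
fi
```

Note that the variable only affects the shell session it was exported in, so run the installer from the same session.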


In one of my installations, the process stopped because a file was not executable. To be safe, I suggest running this command as well:

chmod 744 /mnt/xvdu/HANA_SPS8_ALLIN/DATA_UNITS/HDB_AFL/LINUX_X86_64/hdbinst

You should also make the file hdblcm within the subdirectory /HDB_LCM_LINUX_X86_64 executable:

chmod 744 hdblcm


Begin installation:


Proceed, entering "y" whenever prompted.


After the process finishes, a log file is created. Please check it for any warnings or errors.

Now, we will rename our instance. For this, we will have to edit the hosts file.


Add the following line (preceded by your instance's IP address):     imdbhdb
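As a sketch of what the finished entry looks like: 10.0.0.5 below is a made-up placeholder, so substitute your instance's actual private IP, and on the server edit /etc/hosts itself rather than a temp file as used in this demo:

```shell
# Stand-in for /etc/hosts so the demo is safe to run anywhere;
# on the server, append to /etc/hosts directly (as root).
HOSTS="$(mktemp)"

# 10.0.0.5 is a placeholder IP -- use the instance's real private IP.
echo "10.0.0.5   imdbhdb" >> "$HOSTS"

grep imdbhdb "$HOSTS"
```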


Switch to the hdbadm user:

sudo -su hdbadm


Now we will carry out the rename itself. For this, switch to the following directory and run these commands:

cd /usr/sap/HDB/SYS/global/hdb/install/bin

./hdbrename -hostmap ALTERNATE=imdbhdb



Now, your HANA system is ready to go.

Should you have any questions or comments, I would be glad to assist you.

Comments
      Former Member


Wolfgang, amazing blog!! Really helpful.

Is it possible to have a video clip of the installation? 🙂 We plan to install it on AWS and then migrate our BW on top of it for demo purposes.

I also have a few questions to ask:

1. Does the installation ask for a hardware check?

2. Can I launch a 60 GB RAM instance for the installation?

3. Is the installation different from the AWS-provided CloudFormation?

I would appreciate it if you could help us with these questions!! 🙂


      Pradeep Veepuri

      Former Member

      Hi Pradeep Veepuri,

      this is only a SAP HANA DB installation without SAP BW on top of it.

Here are my answers to your questions:

1. Yes, but you can disable the hardware check with the Python command (see above).

2. Yes, you can install a 68 GB version -> see my blog about SPS 7: Installing SAP HANA SPS 7 on AWS.

3. Yes, it is different, because it is an installation for testing, not for production. You can also use the SAP HANA Enterprise Cloud / Cloud Appliance Library installation method, where you should have the possibility to use a complete SAP BW on HANA system out of the box.

      Best regards


      Former Member

      Hi Wolfgang Muhlhofer

Thanks for taking the time to answer my questions! 🙂

I have one last query 😉 please help if possible.

As mentioned earlier, we started launching a HANA instance with a CloudFormation template, and we will later try this option too.

Now my intention is to perform a migration POC (OS/DB), which is a system copy of an existing (demo) BW 7.4 to an AWS instance.

The approach I have in mind:

• Take an export dump (BW)
• Launch an instance for BW on AWS, typically medium
• Start the BW installation and provide the export
• When prompted, connect to the HANA DB already installed on AWS
• I would like to do it in the same VPC that was created for the HANA instance

Hope you understood my requirement.


Will it be a challenge to connect my BW to the HANA instance & perform I/O inside AWS?



      Pradeep veepuri

      Former Member

      Hi Pradeep Veepuri,

Sorry, I cannot answer your questions. Maybe someone else can help you.



      nilesh khorgade


      Thanks for this useful information, keep the good work going.

      Former Member

In my experience, Google Cloud offers more benefits, and handling instances using the gcloud console or command-line tools was relatively easier than on AWS.

Rather than an hourly rate, they offer a 10-minute rate, which wastes less; overall, I wish GCE would increase the maximum RAM from 104 GB to terabytes...

      Former Member

      Hi Wolfgang,

No worries, but the information in the blog is really helpful 😉


      Pradeep veepuri

      Kenichi Haga

      Hi Wolfgang.

      It's very helpful information.

      Thank you.