c_baker
Employee
In the last blog, I showed how to install three nodes for HADR (primary, companion, and DR) using setup.bin response files for the primary and companion, and then how to add the DR node using a setuphadr response file.

That is fine for a new installation or testing.

But what do you do if you want to add HADR to an existing system?

If you want to start from an existing 'primary' (active) node, there are additional steps to perform, as you now need to:

  • add a companion node

  • create and materialize the database on the companion

  • add a DR node

  • create and materialize the database on the DR node


In the previous blog, the ASE software was installed, the server instances created, and the primary and companion HADR nodes configured, all using a response file for 'setup.bin'.  This also installed the SAPHostAgent on the two nodes.  The DR node was configured by creating the DR instance using a 'srvbuildres' resource file and then adding it to the HADR cluster using a response file for setuphadr.

This time, we will first install the binaries, then create the primary and companion servers, and then configure the HADR cluster, as separate steps.

(For this blog I will be using ASE 16.0 SP04PL04).

Installing the binaries


First, let's install the ASE binaries on the primary and companion host nodes.  To do this, we will use a setup.bin response file:

Sample binary installation response file:
###################hadr_response_primary.txt.org#############################
# HADR sample response file for SAP Adaptive Server Enterprise 16.0 SP04.
#
# This sample response file installs software only.
# saphostagent must be installed separately on primary and companion
# server instances must be built using srvbuildres/sqllocres
#
# use to install software only on primary, companion and dr
# separate response file for FM
#
# Prerequisite:
# ASE distribution software extracted for installation
#
#
##############################################################################

#Validate Response File
#----------------------
#
RUN_SILENT=true

#Choose Install Folder
#---------------------
# USER_INSTALL_DIR=<Destination directory in absolute path>
#
USER_INSTALL_DIR=/opt/sybase/ASE

#Install older version
#---------------------
# INSTALL_OLDER_VERSION=<true|false>.
# This determines whether installer can overwrite newer version.
#
INSTALL_OLDER_VERSION=false

#Choose Update Installation
#--------------------------
# DO_UPDATE_INSTALL=<true|false>.
# This determines if the installer selects and applies
# updates to the installed product/features.
# DO_UPDATE_INSTALL_HADR_COMPONENT=<ASE|DM|ALL>
# Which component to update for ASE HADR.
# This is only valid if DO_UPDATE_INSTALL=true and
# installed directory has ASE HADR.
# Valid values are:
# -----------------
# ASE --> Update only the SAP ASE components in rolling upgrade
# DM --> Update only the Data Movement component in rolling upgrade
# ALL --> Update all components
#
DO_UPDATE_INSTALL=false
DO_UPDATE_INSTALL_HADR_COMPONENT=DM

#Choose Install Set
#------------------
# CHOSEN_INSTALL_SET=<Typical|TypicalASEHADR|Full|Custom>
# CHOSEN_FEATURE_LIST=<Features you want to install>
# Valid values are:
# -----------------
# fase_srv --> SAP Adaptive Server Enterprise
# fase_add_lm --> Additional SAP ASE Language Modules
# fase_amc --> Administration and Management Console
# fase_hadr --> SAP ASE Data Movement for HADR
# Available on:
# Itanium/HP-UX 64-bit
# IBM/AIX 64-bit
# x86-64/Linux 64-bit
# SPARC/Solaris 64-bit
# x86-64/Windows 64-bit
# fopen_client --> Open Client
# fdblib --> DB-Library
# fesql_c_lang --> Embedded SQL/C
# fesql_cobol_lang --> Embedded SQL/Cobol
# fxa --> XA Interface Library for SAP ASE Distributed Transaction Manager
# Available on:
# Itanium/HP-UX 64-bit
# IBM/AIX 64-bit
# x86-64/Linux 64-bit
# SPARC/Solaris 64-bit
# x86-64/Solaris 64-bit
# x86-64/Windows 64-bit
# fconn_add_lm --> Additional Connectivity Language Modules
# fjconnect160 --> jConnect 16.0 for JDBC
# fodbcl --> SAP ASE ODBC Driver
# fodata_ase --> OData Server for SAP ASE
# Available on:
# x86-64/Linux 64-bit
# x86-64/Windows 64-bit
# fdbisql --> Interactive SQL
# fqptune --> QPTune
# fsysam_util --> SySAM License Utilities
# fsysam_server --> SySAM License Server
# fscc_server --> Cockpit
# fasecmap --> SAP ASE Cockpit
# fase_cagent --> Remote Command and Control Agent for SAP ASE
# fconn_python --> SAP ASE extension module for Python
# fconn_perl --> SAP ASE database driver for PERL
# Available on:
# x86-64/Linux 64-bit
# x86-64/Windows 64-bit
# fconn_php --> SAP ASE extension module for PHP
#
# Notes:
# - If DO_UPDATE_INSTALL=true, CHOSEN_INSTALL_SET and CHOSEN_FEATURE_LIST
# are ignored.
# - If CHOSEN_INSTALL_SET is set to "Typical", "TypicalASEHADR", or "Full", do not set
# CHOSEN_FEATURE_LIST.
#
# CHOSEN_FEATURE_LIST=fase_srv,fopen_client,fdblib,fconn_python,fconn_perl,fconn_php,fjconnect160,fodbcl,fdbisql,fqptune,fsysam_util,fscc_server,fasecmap,fase_cagent,fase_hadr
#CHOSEN_INSTALL_SET=TypicalASEHADR
CHOSEN_INSTALL_SET=Custom
CHOSEN_FEATURE_LIST=fase_srv,fopen_client,fdblib,fjconnect160,fase_hadr,fase_amc

#SAP Host Agent
#--------------
# Install SAP Host Agent for ASE HADR.
#
# You need root permission to install
# SAP Host Agent. Enter your password for
# installer to execute "sudo" command to
# install SAP Host Agent. If you do not
# have "sudo" permission, set
# INSTALL_SAP_HOST_AGENT=FALSE and ask
# your system administrator to manually
# install SAP Host Agent at later time.
#
# Notes:
# - You also can set SUDO_PASSWORD property value
# through SUDO_PASSWORD environment variable.
#
INSTALL_SAP_HOST_AGENT=FALSE
SUDO_PASSWORD=

#Choose Product License Type
#---------------------------
# SYBASE_PRODUCT_LICENSE_TYPE=<license|evaluate|express>
# This is the End User License Agreement (EULA) you agreed to when running the
# installer with "-DAGREE_TO_SAP_LICENSE=true" argument.
#
# Note:
# - 'evaluate' and 'express' only available on some platforms.
#
SYBASE_PRODUCT_LICENSE_TYPE=license

#Choose Sybase Software Asset Management License
#-----------------------------------------------
# SYSAM_LICENSE_SOURCE=<license_file|existing_license_server|proceed_without_license>
# SYSAM_LICENSE_FILE_PATHNAME=<license key file path>
# Required when SYSAM_LICENSE_SOURCE is set to 'license_file'.
# SYSAM_EXISTING_LICENSE_SERVER_HOSTNAME=<license key server name>
# Required when SYSAM_LICENSE_SOURCE is set to 'existing_license_server'.
# SYSAM_EXISTING_LICENSE_SERVER_PORTNUMBER=<license key server port number>
# Set this to null for default port number.
#
SYSAM_LICENSE_SOURCE=existing_license_server
SYSAM_LICENSE_FILE_PATHNAME=
SYSAM_EXISTING_LICENSE_SERVER_HOSTNAME=<license key server name>
SYSAM_EXISTING_LICENSE_SERVER_PORTNUMBER=<license key server port number>

#Choose SYSAM Product Edition and License Type
#--------------------------------------------
# SYSAM_PRODUCT_EDITION=<Enterprise Edition|Small Business Edition|Unknown>
# SYSAM_LICENSE_TYPE=<License type>
# Valid SYSAM_LICENSE_TYPE value for SYSAM_PRODUCT_EDITION='Enterprise Edition':
# CP : CPU License
# SF : Standby CPU License
# SR : Server License
# SV : Standby Server License
# DT : Development and Test License
# EV : Evaluation License
# OT : Other License
# SS : Standalone Seat License
# DV : Developer License
# NA : Not Applicable or Other License
# AC : OEM Application Deployment CPU License
# BC : OEM Application Deployment Standby CPU License
# AR : OEM Application Deployment Server License
# BR : OEM Application Deployment Standby Server License
# AO : OEM Application Deployment Other License
# LP : Application Specific CPU License
# LF : Application Specific Standby CPU License
# LR : Application Specific Server License
# LV : Application Specific Standby Server License
# Unknown
# Valid SYSAM_LICENSE_TYPE value for SYSAM_PRODUCT_EDITION='Small Business Edition':
# CP : CPU License
# SF : Standby CPU License
# SR : Server License
# SV : Standby Server License
# DT : Development and Test License
# EV : Evaluation License
# OT : Other License
# SS : Standalone Seat License
# DV : Developer License
# NA : Not Applicable or Other License
# AC : OEM Application Deployment CPU License
# BC : OEM Application Deployment Standby CPU License
# AR : OEM Application Deployment Server License
# BR : OEM Application Deployment Standby Server License
# AO : OEM Application Deployment Other License
# LP : Application Specific CPU License
# LF : Application Specific Standby CPU License
# LR : Application Specific Server License
# LV : Application Specific Standby Server License
# DH : Development and Testing Chip License
# CH : Chip License
# SH : Standby Chip License
# AH : Application Deployment Chip License
# BH : Application Deployment Standby Chip License
# LH : Application Specific Chip License
# LI : Application Specific Standby Chip License
# Unknown
# Valid SYSAM_LICENSE_TYPE value for SYSAM_PRODUCT_EDITION=Unknown
# None
#
SYSAM_PRODUCT_EDITION=Enterprise Edition
SYSAM_LICENSE_TYPE=CP : CPU License

#Software Asset Management Notification Setting
#----------------------------------------------
# SYSAM_NOTIFICATION_ENABLE=<true|false>
# Enable SySAM email notification
# SYSAM_NOTIFICATION_SMTP_HOSTNAME=<SMTP server host name>
# Required if SYSAM_NOTIFICATION_ENABLE=true
# SYSAM_NOTIFICATION_SMTP_PORTNUMBER=<SMTP server port number>
# Required if SYSAM_NOTIFICATION_ENABLE=true
# SYSAM_NOTIFICATION_SENDER_EMAIL=<Sender email>
# Required if SYSAM_NOTIFICATION_ENABLE=true
# SYSAM_NOTIFICATION_RECIPIENT_EMAIL=<Recipient emails>
# Required if SYSAM_NOTIFICATION_ENABLE=true
# SYSAM_NOTIFICATION_EMAIL_SEVERITY=<INFORMATIONAL|WARNING|ERROR>
# Required if SYSAM_NOTIFICATION_ENABLE=true
#
SYSAM_NOTIFICATION_ENABLE=false
SYSAM_NOTIFICATION_SMTP_HOSTNAME=smtp
SYSAM_NOTIFICATION_SMTP_PORTNUMBER=25
SYSAM_NOTIFICATION_SENDER_EMAIL=sybase
SYSAM_NOTIFICATION_RECIPIENT_EMAIL=c.baker@sap.com
SYSAM_NOTIFICATION_EMAIL_SEVERITY=NONE

#Choose Update SAP ASE
#-----------------------------
# DO_UPDATE_ASE_SERVER=<true|false>
# This property determines whether to update the existing SAP ASE.
# It is only valid if DO_UPDATE_INSTALL=true.
# UPDATE_ASE_SERVER_NAME_[n]=<SAP ASE name to update>
# UPDATE_ASE_PASSWORD_[n]=<SAP ASE SA password>
#
# Notes:
# - You also can set UPDATE_ASE_SERVER_NAME_[n] and UPDATE_ASE_PASSWORD_[n]
# property values through the environment variables UPDATE_ASE_SERVER_NAME_[n]
# and UPDATE_ASE_PASSWORD_[n], respectively.
# - If the ASE password is null, set UPDATE_ASE_SERVER_NAME_[n] value to "NA".
#
DO_UPDATE_ASE_SERVER=false

#Configure New Servers
#---------------------
# SY_CONFIG_ASE_SERVER=<true|false>
# This property determines whether to configure SAP ASE.
# SY_CONFIG_HADR_SERVER=<true|false>
# This property determines whether to setup ASE HADR.
# Available on:
# Itanium/HP-UX 64-bit
# Power/AIX 64-bit
# x86-64/Linux 64-bit
# SPARC/Solaris 64-bit
# SY_CONFIG_BS_SERVER=<true|false>
# This property determines whether to configure Backup Server.
# SY_CONFIG_XP_SERVER=<true|false>
# This property determines whether to configure XP Server.
# SY_CONFIG_JS_SERVER=<true|false>
# This property determines whether to configure Job Scheduler Agent.
# SY_CONFIG_SM_SERVER=<true|false>
# This property determines whether to enable Self Management.
# SY_CONFIG_SCC_SERVER=<true|false>
# This property determines whether to configure Cockpit.
#
# Notes:
# - These properties are ignored if you set DO_UPDATE_INSTALL=true.
# See above for updating the existing ASE servers.
# - If SY_CONFIG_BS_SERVER, SY_CONFIG_XP_SERVER, and/or SY_CONFIG_JS_SERVER
# are set to "true", SY_CONFIG_ASE_SERVER must also be set to "true".
# - If SY_CONFIG_HADR_SERVER=true, SY_CONFIG_ASE_SERVER and SY_CONFIG_BS_SERVER
# must also be set to "true".
# - If SY_CONFIG_SM_SERVER=true, SY_CONFIG_JS_SERVER must also be set to "true".
#
SY_CONFIG_ASE_SERVER=false
SY_CONFIG_HADR_SERVER=false
SY_CONFIG_BS_SERVER=false
SY_CONFIG_XP_SERVER=false
SY_CONFIG_JS_SERVER=false
SY_CONFIG_SM_SERVER=false
SY_CONFIG_WS_SERVER=false
SY_CONFIG_SCC_SERVER=false
SY_CONFIG_TXT_SERVER=false

#Configure Servers with Different User Account
#---------------------------------------------
# If SY_CFG_USER_ACCOUNT_CHANGE=yes, below properties are required:
#
# SY_CFG_USER_ACCOUNT_NAME=<user name>
# SY_CFG_USER_ACCOUNT_PASSWORD=<user password>
#
SY_CFG_USER_ACCOUNT_CHANGE=no
SY_CFG_USER_ACCOUNT_NAME=
SY_CFG_USER_ACCOUNT_PASSWORD=

#User Configuration Data Directory
#---------------------------------
#SY_CFG_USER_DATA_DIRECTORY=/data/ASE
SY_CFG_USER_DATA_DIRECTORY=/opt/sybase/ASE

The software can then be installed by calling setup.bin from the directory holding the extracted installation files, e.g.
> ./setup.bin -f ~/hadr/hadr_response_software.txt -i silent -DAGREE_TO_SAP_LICENSE=true -DRUN_SILENT=true

from the host shell.

This can be run on the primary host, the companion host (and the DR host).
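Since the same response file applies to all three hosts, the installation can also be scripted; a minimal sketch, assuming passwordless ssh and that the installation files were extracted to ~/install on each host (the host names are placeholders):

for h in primarynode companionnode drnode
do
    scp ~/hadr/hadr_response_software.txt ${h}:~/hadr/
    ssh ${h} "cd ~/install && ./setup.bin -f ~/hadr/hadr_response_software.txt -i silent -DAGREE_TO_SAP_LICENSE=true -DRUN_SILENT=true"
done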

Creating the Server Instances


Once the software is installed, the server instances can be built using srvbuildres/sqllocres and resource files.  Examples of these were provided in the previous blog.  The same resource files were used for this blog, with one caveat: to maintain consistency with the previous style of HADR build, the master database and master database device were increased to 500 MB and 1000 MB respectively, from the 110 MB and 300 MB defined in the resource file examples.

Server instances for PRIMARY_ASE, COMPANION_ASE and DR_ASE can and should now be built and localized.  Remember: character set, sort order, and master database size should all match.  (The examples will use server instances having the utf8/utf8bin character set and sort order.)
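For reference, a minimal sketch of the build-and-localize step (the resource file names here are placeholders; use the resource files from the previous blog):

> $SYBASE/$SYBASE_ASE/bin/srvbuildres -r ~/hadr/primary_ase.rs
> $SYBASE/$SYBASE_ASE/bin/sqllocres -r ~/hadr/primary_ase_loc.rs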

Also, to simplify the eventual setuphadr process, we can configure the ASEs now by running the following on each instance using isql:
-- for all instances
sp_configure "optimizer level", 0, "ase_current"
go
sp_configure "max network packet size", 16384
go
sp_configure "additional network memory", 1024000
go
-- only really needed on PRIMARY_ASE and COMPANION_ASE (for eventual Fault Manager use), but harmless on DR_ASE
sp_configure "enable monitoring", 1
go
sp_configure "errorlog pipe active",1
go
sp_configure "errorlog pipe max messages", 1024
go

and restarting the instances as necessary.
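(For example, a restart can be done by issuing 'shutdown' in isql and then re-running the RUN file with startserver; the RUN file name varies per instance:)

1> shutdown
2> go

> cd $SYBASE/$SYBASE_ASE/install
> ./startserver -f RUN_PRIMARY_ASE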

Adding a User Database


In the previous blog, the user database (tpcc) was added after building the two HADR cluster nodes (PRIMARY_ASE and COMPANION_ASE) and the DR instance, DR_ASE, starting in Part 2.  For this example, I will first add a database to the primary node, to better emulate an existing ASE server instance that now needs to be enabled for HADR; the companion and DR instances, however, must be new instances.

The creation of the database (tpcc in my example) is the same as in Part 2, except that the database devices, database, login, and db options are created only on the primary ASE instance.  In addition, there is no DR_maint user created yet for eventual aliasing as the dbo of the tpcc database.
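For illustration, a minimal sketch of that primary-only creation (device names, paths, and sizes here are placeholders; see Part 2 for the actual script):

1> disk init name = 'tpcc_data_dev', physname = '/data/ASE/data/tpcc_data.dat', size = '2G'
2> go
1> disk init name = 'tpcc_log_dev', physname = '/data/ASE/data/tpcc_log.dat', size = '500M'
2> go
1> create database tpcc on tpcc_data_dev = '2G' log on tpcc_log_dev = '500M'
2> go
1> create login tpcc with password '<password>'
2> go
1> sp_dboption tpcc, 'trunc log on chkpt', true
2> go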

Once created and populated, the tables in the tpcc database have counts similar to:
 tablename            records
 -------------------- --------------------
 ITEM                               100000
 DISTRICT                               30
 WAREHOUSE                               3
 STOCK                              300000
 CUSTOMER                            90000
 ORDERS                              90020
 ORDER_LINE                         900204
 HISTORY                             90025
 NEW_ORDER                           27020

Now we have an existing server (PRIMARY_ASE) and existing user database (tpcc).

Preparing for HADR


As HADR will add the DR_admin and DR_maint logins to the server instances involved (PRIMARY_ASE, COMPANION_ASE and DR_ASE), we need to ensure that the logins currently defined on the primary also exist on the companion and DR instances, so that SUID numbering stays consistent when the DR_admin and DR_maint logins are eventually created.

(In the previous blog, the companion and DR instances were already part of the HADR cluster, so the login only needed to be created on the primary; the creation was replicated to the other instances.)  At present, however, there is no HADR cluster defined.

Currently on the PRIMARY_ASE we have:
1> select suid, name from syslogins
2> go
 suid        name
 ----------- ------------------------------
           1 sa
           2 probe
           3 jstask
           4 tpcc

(4 rows affected)

...but on the new COMPANION_ASE and DR_ASE instances, we have:
1> select suid, name from syslogins
2> go
 suid        name
 ----------- ------------------------------
           1 sa
           2 probe
           3 jstask

(3 rows affected)

Since the COMPANION_ASE and DR_ASE must be new, empty instances, we can take advantage of the creation of the HADR cluster and replication of the master database to have the necessary logins created for us.

Creating the HADR Cluster


In the first blog, the instances were created entirely from scratch: the software was installed, the instances created, and the HADR cluster configured, all using a setup.bin response file.  In this case, we are starting with already-created instances and will then create an HADR cluster from them.  To do this, we will use response files for $SYBASE/$SYBASE_ASE/bin/setuphadr to create an HADR cluster from the PRIMARY_ASE and COMPANION_ASE (and DR) instances, instead of setup.bin response files.

The primary (active) node must always be configured first.  The response file used will be:
###############################################################################
# Setup HADR sample response file for non-BS + 3rd node
#
# This sample response file sets up ASE HADR on
# hosts "host1" (primary) and "host2" (companion) and "host3" (DR).
#
# Prerequisite :
# - New SAP ASE and Backup servers set up and started on "host1" and "host2".
# See HADR User Guide for requirements on SAP ASE servers.
# - Replication Management Agent (RMA) started on "host1" and "host2".
#
# Usage :
# 1. On host1 (primary), run:
# $SYBASE/$SYBASE_ASE/bin/setuphadr <this_responses_file>
#
# 2. Change this response file properties:
# setup_site=site2
# is_secondary_site_setup=true
#
# 3. On host2 (companion), run
# $SYBASE/$SYBASE_ASE/bin/setuphadr <response_file_from_step_2>
#
# 4. Change this response file properties:
# setup_site=DR
# is_secondary_site_setup=true
#
###############################################################################

# ID that identifies this cluster
#
# Value must be unique,
# begin with a letter and
# 3 characters in length.
cluster_id=DEM

# Which site being configured
#
# Note:
# You need to set "<setup_site_value>.*"
# properties in this response file.
setup_site=PRIM

# Set installation_mode
#
# Valid values: true, false
#
# If set to true, installation_mode will be set to "BS"
# If set to false, installation_mode will be set to "nonBS"
setup_bs=false

# This is for BusS only
# if set to true, DR admin user will be added to secure store
add_user_to_secure_store=false
# Adding user action will be executed by following user
#sid_admin_user=DEM_adm
#sid_admin_password=

# true OR false
enable_ssl=false
# common name, take SYBASE for example
ssl_common_name=SYBASE
# Whether this ASE server already has SSL enabled; if set to "true", ssl_private_key_file and ssl_public_key_file will be ignored
ase_ssl_enabled=true
# Whether to enable SSL for Backup Server connections
enable_ssl_for_bs=true
# private key file
ssl_private_key_file=/tmp/hadr.key
# public key file
ssl_public_key_file=/tmp/hadr.crt
# root CA cert
# NOTE: if you're using self-signed cert, put your public key file here
ssl_ca_cert_file=/tmp/rootCA.pem
# ssl password
ssl_password=Sybase

# Has the secondary site been prepared for ASE HADR?
#
# Valid values: true, false
#
# If set to true, "<secondary_setup_site_value>.*"
# properties must be set in this response file.
is_secondary_site_setup=false

# How data is replicated
#
# Valid values: sync, async
synchronization_mode=sync

# SAP ASE system administrator user/password
#
# setuphadr will prompt from standard input if not specified
ase_sa_user=sa
ase_sa_password=<password>

# BACKUP server system administrator user/password
#
bs_admin_user=sa
bs_admin_password=<password>

# ASE HADR maintenance user/password
#
# Password must have at least 6 characters
# setuphadr will prompt from standard input if not specified
hadr_maintenance_user=DR_maint
hadr_maintenance_password=<password>

# Replication Management Agent administrator user/password
#
# Password must have at least 6 characters
# setuphadr will prompt from standard input if not specified
rma_admin_user=DR_admin
rma_admin_password=<password>

# Whether XA replication is enabled
#
# Valid values: true, false
xa_replication=false

# Whether to configure and start the Replication Management Agent
#
# Valid values: true, false
config_start_rma=true

# Whether to create a Replication Management Agent Windows service
# Only affects Windows
#
# Valid values: true, false
# If set to true, rma_service_user and rma_service_password will be used
create_rma_windows_service=false

# Replication Management Agent Service user/password
#
rma_service_user=admin
rma_service_password=<password>

# Whether to disable referential constraints on HADR setups
#
# Valid values: true, false
disable_referential_constraints=false

# Databases that will participate in replication
# and "auto" materialize.
#
# If database doesn't exist in the SAP ASE, you need
# to specify <site>.ase_data_device_create_[x]_[y] and
# <site>.ase_log_device_create_[x]_[y] properties.
# See below.
#
# ASE HADR requires SAP ASE to have a database
# with cluster ID name (see "cluster_id" above).
# If you have not created this database, you can
# enter it here to have it created.

# cluster ID database
participating_database_1=DEM
materialize_participating_database_1=true

# user database
#participating_database_2=tpcc
#materialize_participating_database_2=true

###############################################################################
# Site "PRIM" on host primarynode with primary role
###############################################################################

# Host name where SAP ASE runs
#
# Enter fully qualified domain name (FQDN)
# if your sites are on different subnet.
PRIM.ase_host_name=primarynode

# We don't support ASE and SRS on different hosts yet
# This is virtual host name for SRS/RMA
# Optional property
#
# Enter fully qualified domain name (FQDN)
# if your sites are on different subnet.
PRIM.rma_host_name=primarynode

# Site name
#
# Enter value that identifies this site,
# like a geographical location.
# Value must be unique.
PRIM.site_name=Toronto

# Site role
#
# Enter the role of this site.
# Valid values: primary, companion
PRIM.site_role=primary

# directory where SAP ASE installed
PRIM.ase_release_directory=/opt/sybase/ASE

# User defined dm data dir for eRSSD
#PRIM.dm_database_file_directory=/data/SRS/data
#PRIM.dm_translog_file_directory=/data/SRS/data
#PRIM.dm_log_file_directory=/data/SRS/data
#PRIM.dm_config_file_directory=/opt/sybase/ASE/DM
#PRIM.dm_backup_file_directory_for_database=/data/ASE/dump

# Public IP for host
#PRIM.host_public_ip=

# Directory that stores SAP ASE user data files
# (interfaces, RUN_<server>, error log, etc. files).
# Do not set value if your user data files are in
# SAP ASE installed directory (ase_release_directory).
PRIM.ase_user_data_directory=/opt/sybase/ASE

PRIM.ase_server_name=PRIMARY_ASE
PRIM.ase_server_port=5000

PRIM.backup_server_name=PRIMARY_ASE_BS
PRIM.backup_server_port=5001

# Directory to store database dumps
# during materialization
#
# Backup server must be able to access this directory
PRIM.backup_server_dump_directory=/data/ASE/dump

# Data & log devices to create the databases specified
# in "participating_database_[x]" properties. You do
# not need to specify these properties if the database(s)
# already exist in the SAP ASE server.
#
# ase_data_device_create_[x]_[y] - property to create data device
# ase_log_device_create_[x]_[y] - property to create log device
# where
# x is the number in the "participating_database_[x]" property
# y is the device number to create
#
# Format: <logical_device_name>, <physical_device_path>, <size_in_MB>
#
# NOTE: Database sizes on primary and companion
# SAP ASE must be the same.

# Device for cluster ID database "DEM" (See "participating_database_1" property)
# Database size = 25MB
# data device "DEM_data_dev" = 25MB
PRIM.ase_data_device_create_1_1=DEM_data_dev, /data/ASE/data/DEM_dev1.dat, 25

# Devices for database "userdb1" (See "participating_database_2" property)
# Database Size = 100MB
# data device 1 "db1_data_dev1" = 25MB
# data device 2 "db1_data_dev2" = 25MB
# data device 3 "db1_data_dev3" = 25MB
# log device 1 "db1_log_dev1" = 25MB
#PRIM.ase_data_device_create_2_1=db1_data_dev1, /host1_eng/ase/data/db1_dev1.dat, 25
#PRIM.ase_data_device_create_2_2=db1_data_dev2, /host1_eng/ase/data/db1_dev2.dat, 25
#PRIM.ase_data_device_create_2_3=db1_data_dev3, /host1_eng/ase/data/db1_dev3.dat, 25
#PRIM.ase_log_device_create_2_1=db1_log_dev1, /host1_eng/ase/data/db1_dev1.log, 25

# Port numbers for Replication Server and Replication Management Agent on host1
#
# In remote topology, these are the companion Replication Server and
# Replication Management Agent.
#
# See "rsge.bootstrap.tds.port.number" properties in
# <SAP ASE installed directory>/DM/RMA-16_0/instances/AgentContainer/config/bootstrap.prop
# for value
PRIM.rma_rmi_port=7000
PRIM.rma_tds_port=7001
#
# Starting port number to use when setup Replication Server.
# Make sure next two ports (+1 and +2) are also available for use.
PRIM.srs_port=5005

# Device buffer for Replication Server on host1
# Recommend size = 128 * N
# where N is the number of databases to replicate,
# including the master and cluster ID databases.
#
PRIM.device_buffer_dir=/data/SRS/data
PRIM.device_buffer_size=8192

# Persistent queue directory for Replication Server running on host1
#
# For synchronous replication (synchronization_mode=sync),
# enter directory to an SSD (solid state drive) or other
# type of fast read/write storage device
PRIM.simple_persistent_queue_dir=/data/SRS/ssd
PRIM.simple_persistent_queue_size=8000

###############################################################################
# Site "COMP" on host companionnode with companion role
###############################################################################

# Host name where SAP ASE runs
#
# Enter fully qualified domain name (FQDN)
# if your sites are on different subnet.
COMP.ase_host_name=companionnode

# We don't support ASE and SRS on different hosts yet
# This is virtual host name for SRS/RMA
# Optional property
#
# Enter fully qualified domain name (FQDN)
# if your sites are on different subnet.
COMP.rma_host_name=companionnode

# Site name
#
# Enter value that identifies this site,
# like a geographical location.
# Value must be unique.
COMP.site_name=London

# Site role
#
# Enter the role of this site.
# Valid values: primary, companion
COMP.site_role=companion

# directory where SAP ASE installed
COMP.ase_release_directory=/opt/sybase/ASE

# User defined dm data dir for eRSSD
#COMP.dm_database_file_directory=
#COMP.dm_translog_file_directory=
#COMP.dm_log_file_directory=
#COMP.dm_config_file_directory=
#COMP.dm_backup_file_directory_for_database=

# Public IP for host
#COMP.host_public_ip=

# Directory that stores SAP ASE user data files
# (interfaces, RUN_<server>, error log, etc. files).
# Do not set value if your user data files are in
# SAP ASE installed directory (ase_release_directory).
COMP.ase_user_data_directory=/opt/sybase/ASE

COMP.ase_server_name=COMPANION_ASE
COMP.ase_server_port=5000

COMP.backup_server_name=COMPANION_ASE_BS
COMP.backup_server_port=5001

# Directory to store database dumps
# during materialization
#
# Backup server must be able to access this directory
COMP.backup_server_dump_directory=/data/ASE/dump

# Data & log devices to create the databases specified
# in "participating_database_[x]" properties. You do
# not need to specify these properties if the database(s)
# already exist in the SAP ASE server.
#
# ase_data_device_create_[x]_[y] - property to create data device
# ase_log_device_create_[x]_[y] - property to create log device
# where
# x is the number in the "participating_database_[x]" property
# y is the device number to create
#
# Format: <logical_device_name>, <physical_device_path>, <size_in_MB>
#
# NOTE: Database sizes on primary and companion
# SAP ASE must be the same.

# Devices for database "DEM" (See "participating_database_1" property)
# Database size = 25MB
# data device "le_data_dev" = 25MB
COMP.ase_data_device_create_1_1=DEM_data_dev, /data/ASE/data/dem_dev1.dat, 25

# Devices for database "userdb1" (See "participating_database_2" property)
# Database Size = 100MB
# data device 1 "db1_data_dev1" = 25MB
# data device 2 "db1_data_dev2" = 25MB
# data device 3 "db1_data_dev3" = 25MB
# log device 1 "db1_log_dev1" = 25MB
#site2.ase_data_device_create_2_1=db1_data_dev1, /host2_eng/ase/data/db1_dev1.dat, 25
#site2.ase_data_device_create_2_2=db1_data_dev2, /host2_eng/ase/data/db1_dev2.dat, 25
#site2.ase_data_device_create_2_3=db1_data_dev3, /host2_eng/ase/data/db1_dev3.dat, 25
#site2.ase_log_device_create_2_1=db1_log_dev1, /host2_eng/ase/data/db1_dev1.log, 25

# Devices for database "userdb2" (See "participating_database_3" property)
# Database Size = 100MB
# data device 1 "db2_data_dev1" = 25MB
# data device 2 "db2_data_dev2" = 25MB
# log device 1 "db2_log_dev1" = 25MB
# log device 2 "db2_log_dev2" = 25MB
#site2.ase_data_device_create_3_1=db2_data_dev1, /host2_eng/ase/data/db2_dev1.dat, 25
#site2.ase_data_device_create_3_2=db2_data_dev2, /host2_eng/ase/data/db2_dev2.dat, 25
#site2.ase_log_device_create_3_1=db2_log_dev1, /host2_eng/ase/data/db2_dev1.log, 25
#site2.ase_log_device_create_3_2=db2_log_dev2, /host2_eng/ase/data/db2_dev2.log, 25

# Port numbers for Replication Server and Replication Management Agent on host2
#
# In remote topology, these are the companion Replication Server and
# Replication Management Agent.
#
# See "rsge.bootstrap.tds.port.number" properties in
# <SAP ASE installed directory>/DM/RMA-16_0/instances/AgentContainer/config/bootstrap.prop
# for value
COMP.rma_rmi_port=7000
COMP.rma_tds_port=7001
#
# Starting port number to use when setup Replication Server.
# Make sure next two ports (+1 and +2) are also available for use.
COMP.srs_port=5005

# Device buffer for Replication Server on host2
# Recommend size = 128 * N
# where N is the number of databases to replicate,
# including the master and cluster ID databases.
#
COMP.device_buffer_dir=/data/SRS/data
COMP.device_buffer_size=8192

# Persistent queue directory for Replication Server running on host2
#
# For synchronous replication (synchronization_mode=sync),
# enter directory to an SSD (solid state drive) or other
# type of fast read/write storage device
COMP.simple_persistent_queue_dir=/data/SRS/ssd
COMP.simple_persistent_queue_size=8000

###############################################################################
# Site "DR" on host drnode with dr role
###############################################################################

# Host name where SAP ASE runs
#
# Enter fully qualified domain name (FQDN)
# if your sites are on different subnet.
DR.ase_host_name=drnode.openstack.na-ca-1.cloud.sap

# We don't support ASE and SRS on different hosts yet
# This is virtual host name for SRS/RMA
# Optional property
#
# Enter fully qualified domain name (FQDN)
# if your sites are on different subnet.
DR.rma_host_name=drnode

# Site name
#
# Enter value that identifies this site,
# like a geographical location.
# Value must be unique.
DR.site_name=Offsite

# Site role
#
# Enter the role of this site.
# Valid values: primary, companion, dr
DR.site_role=dr

# directory where SAP ASE installed
DR.ase_release_directory=/opt/sybase/ASE

# Directory that stores SAP ASE user data files
# (interfaces, RUN_<server>, error log, etc. files).
# Do not set value if your user data files are in
# SAP ASE installed directory (ase_release_directory).
DR.ase_user_data_directory=/opt/sybase/ASE

DR.ase_server_name=DR_ASE
DR.ase_server_port=5000

DR.backup_server_name=DR_ASE_BS
DR.backup_server_port=5001

# added to support demo
# Devices for database "DEM" (See "participating_database_1" property)
# Database size = 25MB
# data device "DEM" = 25MB
DR.ase_data_device_create_1_1=DEM_data_dev, /data/ASE/data/DEM_dev1.dat, 25

# Devices for database "userdb1" (See "participating_database_2" property)
# Database Size = 100MB
# data device 1 "db1_data_dev1" = 25MB
# data device 2 "db1_data_dev2" = 25MB
# data device 3 "db1_data_dev3" = 25MB
# log device 1 "db1_log_dev1" = 25MB
#DR.ase_data_device_create_2_1=db1_data_dev1, /host2_eng/ase/data/db1_dev1.dat, 25
#DR.ase_data_device_create_2_2=db1_data_dev2, /host2_eng/ase/data/db1_dev2.dat, 25
#DR.ase_data_device_create_2_3=db1_data_dev3, /host2_eng/ase/data/db1_dev3.dat, 25
#DR.ase_log_device_create_2_1=db1_log_dev1, /host2_eng/ase/data/db1_dev1.log, 25

# Directory to store database dumps
# during materialization
#
# Backup server must be able to access this directory
DR.backup_server_dump_directory=/data/ASE/dump

# Port numbers for Replication Server and Replication Management Agent on host3
#
# In remote topology, these are the DR Replication Server and
# Replication Management Agent.
#
# See "rsge.bootstrap.tds.port.number" properties in
# <SAP ASE installed directory>/DM/RMA-16_0/instances/AgentContainer/config/bootstrap.prop
# for value
DR.rma_rmi_port=7000
# RMA RMI occupies five consecutive ports, with the configured port occupying the highest number.
DR.rma_tds_port=7001
#
# Starting port number to use when setup Replication Server.
# Make sure next two ports (+1 and +2) are also available for use.
DR.srs_port=5005

# Device buffer for Replication Server on host3
# Recommend size = 128 * N
# where N is the number of databases to replicate,
# including the master and cluster ID databases.
#
# Note: For HADR on SAP Business Suite Installations use SID database logsize * 1.5
DR.device_buffer_dir=/data/SRS/data
DR.device_buffer_size=8192

# Persistent queue directory for Replication Server running on host3
#
# For synchronous replication (synchronization_mode=sync),
# enter directory to an SSD (solid state drive) or other
# type of fast read/write storage device
# Note: For HADR on SAP Business Suite Installations use SID database logsize * 1.5
DR.simple_persistent_queue_dir=/data/SRS/ssd
DR.simple_persistent_queue_size=8000

and is executed using:
> $SYBASE/$SYBASE_ASE/bin/setuphadr <response file>

The output will be similar to:
Clean up environment.
Environment cleaned up.
Setup user databases
Create user database DEM...
Setup user databases...Success
Setup ASE HADR maintenance user
Create maintenance login "DR_maint"...
Grant "sa_role" role to "DR_maint"...
Grant "replication_role" role to "DR_maint"...
Grant "replication_maint_role_gp" role to "DR_maint"...
Create "sap_maint_user_role" role...
Grant set session authorization to "sap_maint_user_role"...
Grant "sap_maint_user_role" role to "DR_maint"...
Grant "sybase_ts_role" role to "DR_maint"...
Add auto activated roles "sap_maint_user_role" to user "DR_maint"...
Allow "DR_maint" to be known as dbo in "master" database...
Allow "DR_maint" to be known as dbo in "DEM" database...
Setup ASE HADR maintenance user...Success
Setup administrator user
Create administrator login "DR_admin"...
Grant "sa_role" role to "DR_admin"...
Grant "sso_role" role to "DR_admin"...
Grant "replication_role" role to "DR_admin"...
Grant "hadr_admin_role_gp" role to "DR_admin"...
Grant "sybase_ts_role" role to "DR_admin"...
Add user "DR_admin" to DB "sybsystemprocs".
Setup administrator user...Success
Setup Backup server allow hosts
Backup server on "PRIM" site: Add host "companionnode" to allow dump and load...
Setup Backup server allow hosts...Success

Setup complete on "PRIM" site. Please run Setup HADR on "COMP" site to complete the setup.
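Rather than editing a second copy of the response file by hand, the companion version can be derived from the primary one following the steps in the file header, e.g. (file names are placeholders):

> sed -e 's/^setup_site=PRIM/setup_site=COMP/' \
      -e 's/^is_secondary_site_setup=false/is_secondary_site_setup=true/' \
      hadr_setuphadr_prim.txt > hadr_setuphadr_comp.txt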

The companion version of the response file is:
###############################################################################
# Setup HADR sample response file for non-BS + 3rd node
#
# This sample response file sets up ASE HADR on
# hosts "host1" (primary) and "host2" (companion) and "host3" (DR).
#
# Prerequisite :
# - New SAP ASE and Backup servers set up and started on "host1" and "host2".
# See HADR User Guide for requirements on SAP ASE servers.
# - Replication Management Agent (RMA) started on "host1" and "host2".
#
# Usage :
# 1. On host1 (primary), run:
# $SYBASE/$SYBASE_ASE/bin/setuphadr <this_responses_file>
#
# 2. Change this response file properties:
# setup_site=site2
# is_secondary_site_setup=true
#
# 3. On host2 (companion), run
# $SYBASE/$SYBASE_ASE/bin/setuphadr <response_file_from_step_2>
#
# 4. Change this response file properties:
# setup_site=DR
# is_secondary_site_setup=true
#
###############################################################################

# ID that identifies this cluster
#
# Value must be unique,
# begin with a letter and
# 3 characters in length.
cluster_id=DEM

# Which site being configured
#
# Note:
# You need to set "<setup_site_value>.*"
# properties in this response file.
setup_site=COMP

# Set installation_mode
#
# Valid values: true, false
#
# If set to true, installation_mode will be set to "BS"
# If set to false, installation_mode will be set to "nonBS"
setup_bs=false

# This is for BusS only
# if set to true, DR admin user will be added to secure store
add_user_to_secure_store=false
# Adding user action will be executed by following user
#sid_admin_user=DEM_adm
#sid_admin_password=

# true OR false
enable_ssl=false
# common name, take SYBASE for example
ssl_common_name=SYBASE
# Whether this ASE server already has SSL enabled; if set to "true", ssl_private_key_file and ssl_public_key_file will be ignored
ase_ssl_enabled=true
# Whether to enable SSL for Backup Server connections
enable_ssl_for_bs=true
# private key file
ssl_private_key_file=/tmp/hadr.key
# public key file
ssl_public_key_file=/tmp/hadr.crt
# root CA cert
# NOTE: if you're using self-signed cert, put your public key file here
ssl_ca_cert_file=/tmp/rootCA.pem
# ssl password
ssl_password=Sybase

# Has the secondary site been prepared for ASE HADR?
#
# Valid values: true, false
#
# If set to true, "<secondary_setup_site_value>.*"
# properties must be set in this response file.
is_secondary_site_setup=true

# How data is replicated
#
# Valid values: sync, async
synchronization_mode=sync

# SAP ASE system administrator user/password
#
# setuphadr will prompt from standard input if not specified
ase_sa_user=sa
ase_sa_password=<password>

# BACKUP server system administrator user/password
#
bs_admin_user=sa
bs_admin_password=<password>

# ASE HADR maintenance user/password
#
# Password must have at least 6 characters
# setuphadr will prompt from standard input if not specified
hadr_maintenance_user=DR_maint
hadr_maintenance_password=<password>

# Replication Management Agent administrator user/password
#
# Password must have at least 6 characters
# setuphadr will prompt from standard input if not specified
rma_admin_user=DR_admin
rma_admin_password=<password>

# Whether XA replication is enabled
#
# Valid values: true, false
xa_replication=false

# Whether to configure and start the Replication Management Agent
#
# Valid values: true, false
config_start_rma=true

# Whether to create a Replication Management Agent Windows service
# Only affects Windows
#
# Valid values: true, false
# If set to true, rma_service_user and rma_service_password will be used
create_rma_windows_service=false

# Replication Management Agent Service user/password
#
rma_service_user=admin
rma_service_password=<password>

# Whether to disable referential constraints on HADR setups
#
# Valid values: true, false
disable_referential_constraints=false

# Databases that will participate in replication
# and "auto" materialize.
#
# If database doesn't exist in the SAP ASE, you need
# to specify <site>.ase_data_device_create_[x]_[y] and
# <site>.ase_log_device_create_[x]_[y] properties.
# See below.
#
# ASE HADR requires SAP ASE to have a database
# with cluster ID name (see "cluster_id" above).
# If you have not created this database, you can
# enter it here to have it created.

# cluster ID database
participating_database_1=DEM
materialize_participating_database_1=true

# user database
#participating_database_2=tpcc
#materialize_participating_database_2=true

###############################################################################
# Site "PRIM" on host primarynode with primary role
###############################################################################

# Host name where SAP ASE runs
#
# Enter fully qualified domain name (FQDN)
# if your sites are on different subnet.
PRIM.ase_host_name=primarynode

# We don't support ASE and SRS on different hosts yet
# This is virtual host name for SRS/RMA
# Optional property
#
# Enter fully qualified domain name (FQDN)
# if your sites are on different subnet.
PRIM.rma_host_name=primarynode

# Site name
#
# Enter value that identifies this site,
# like a geographical location.
# Value must be unique.
PRIM.site_name=Toronto

# Site role
#
# Enter the role of this site.
# Valid values: primary, companion
PRIM.site_role=primary

# directory where SAP ASE installed
PRIM.ase_release_directory=/opt/sybase/ASE

# User defined dm data dir for eRSSD
#PRIM.dm_database_file_directory=/data/SRS/data
#PRIM.dm_translog_file_directory=/data/SRS/data
#PRIM.dm_log_file_directory=/data/SRS/data
#PRIM.dm_config_file_directory=/opt/sybase/ASE/DM
#PRIM.dm_backup_file_directory_for_database=/data/ASE/dump

# Public IP for host
#PRIM.host_public_ip=

# Directory that stores SAP ASE user data files
# (interfaces, RUN_<server>, error log, etc. files).
# Do not set value if your user data files are in
# SAP ASE installed directory (ase_release_directory).
PRIM.ase_user_data_directory=/opt/sybase/ASE

PRIM.ase_server_name=PRIMARY_ASE
PRIM.ase_server_port=5000

PRIM.backup_server_name=PRIMARY_ASE_BS
PRIM.backup_server_port=5001

# Directory to store database dumps
# during materialization
#
# Backup server must be able to access this directory
PRIM.backup_server_dump_directory=/data/ASE/dump

# Data & log devices to create the databases specified
# in "participating_database_[x]" properties. You do
# not need to specify these properties if the database(s)
# already exist in the SAP ASE server.
#
# ase_data_device_create_[x]_[y] - property to create data device
# ase_log_device_create_[x]_[y] - property to create log device
# where
# x is the number in the "participating_database_[x]" property
# y is the device number to create
#
# Format: <logical_device_name>, <physical_device_path>, <size_in_MB>
#
# NOTE: Database sizes on primary and companion
# SAP ASE must be the same.

# Device for cluster ID database "DEM" (See "participating_database_1" property)
# Database size = 25MB
# data device "le_data_dev" = 25MB
PRIM.ase_data_device_create_1_1=DEM_data_dev, /data/ASE/data/DEM_dev1.dat, 25

# Devices for database "userdb1" (See "participating_database_2" property)
# Database Size = 100MB
# data device 1 "db1_data_dev1" = 25MB
# data device 2 "db1_data_dev2" = 25MB
# data device 3 "db1_data_dev3" = 25MB
# log device 1 "db1_log_dev1" = 25MB
#PRIM.ase_data_device_create_2_1=db1_data_dev1, /host1_eng/ase/data/db1_dev1.dat, 25
#PRIM.ase_data_device_create_2_2=db1_data_dev2, /host1_eng/ase/data/db1_dev2.dat, 25
#PRIM.ase_data_device_create_2_3=db1_data_dev3, /host1_eng/ase/data/db1_dev3.dat, 25
#PRIM.ase_log_device_create_2_1=db1_log_dev1, /host1_eng/ase/data/db1_dev1.log, 25

# Port numbers for Replication Server and Replication Management Agent on host1
#
# In remote topology, these are the companion Replication Server and
# Replication Management Agent.
#
# See "rsge.bootstrap.tds.port.number" properties in
# <SAP ASE installed directory>/DM/RMA-16_0/instances/AgentContainer/config/bootstrap.prop
# for value
PRIM.rma_rmi_port=7000
PRIM.rma_tds_port=7001
#
# Starting port number to use when setup Replication Server.
# Make sure next two ports (+1 and +2) are also available for use.
PRIM.srs_port=5005

# Device buffer for Replication Server on host1
# Recommend size = 128 * N
# where N is the number of databases to replicate,
# including the master and cluster ID databases.
#
PRIM.device_buffer_dir=/data/SRS/data
PRIM.device_buffer_size=8192

# Persistent queue directory for Replication Server running on host1
#
# For synchronous replication (synchronization_mode=sync),
# enter directory to an SSD (solid state drive) or other
# type of fast read/write storage device
PRIM.simple_persistent_queue_dir=/data/SRS/ssd
PRIM.simple_persistent_queue_size=8000

###############################################################################
# Site "COMP" on host companionnode with companion role
###############################################################################

# Host name where SAP ASE runs
#
# Enter fully qualified domain name (FQDN)
# if your sites are on different subnet.
COMP.ase_host_name=companionnode

# We don't support ASE and SRS on different hosts yet
# This is virtual host name for SRS/RMA
# Optional property
#
# Enter fully qualified domain name (FQDN)
# if your sites are on different subnet.
COMP.rma_host_name=companionnode

# Site name
#
# Enter value that identifies this site,
# like a geographical location.
# Value must be unique.
COMP.site_name=London

# Site role
#
# Enter the role of this site.
# Valid values: primary, companion
COMP.site_role=companion

# directory where SAP ASE installed
COMP.ase_release_directory=/opt/sybase/ASE

# User defined dm data dir for eRSSD
#COMP.dm_database_file_directory=
#COMP.dm_translog_file_directory=
#COMP.dm_log_file_directory=
#COMP.dm_config_file_directory=
#COMP.dm_backup_file_directory_for_database=

# Public IP for host
#COMP.host_public_ip=

# Directory that stores SAP ASE user data files
# (interfaces, RUN_<server>, error log, etc. files).
# Do not set value if your user data files are in
# SAP ASE installed directory (ase_release_directory).
COMP.ase_user_data_directory=/opt/sybase/ASE

COMP.ase_server_name=COMPANION_ASE
COMP.ase_server_port=5000

COMP.backup_server_name=COMPANION_ASE_BS
COMP.backup_server_port=5001

# Directory to store database dumps
# during materialization
#
# Backup server must be able to access this directory
COMP.backup_server_dump_directory=/data/ASE/dump

# Data & log devices to create the databases specified
# in "participating_database_[x]" properties. You do
# not need to specify these properties if the database(s)
# already exist in the SAP ASE server.
#
# ase_data_device_create_[x]_[y] - property to create data device
# ase_log_device_create_[x]_[y] - property to create log device
# where
# x is the number in the "participating_database_[x]" property
# y is the device number to create
#
# Format: <logical_device_name>, <physical_device_path>, <size_in_MB>
#
# NOTE: Database sizes on primary and companion
# SAP ASE must be the same.

# Devices for database "LE1" (See "participating_database_1" property)
# Database size = 25MB
# data device "le_data_dev" = 25MB
COMP.ase_data_device_create_1_1=DEM_data_dev, /data/ASE/data/dem_dev1.dat, 25

# Devices for database "userdb1" (See "participating_database_2" property)
# Database Size = 100MB
# data device 1 "db1_data_dev1" = 25MB
# data device 2 "db1_data_dev2" = 25MB
# data device 3 "db1_data_dev3" = 25MB
# log device 1 "db1_log_dev1" = 25MB
#site2.ase_data_device_create_2_1=db1_data_dev1, /host2_eng/ase/data/db1_dev1.dat, 25
#site2.ase_data_device_create_2_2=db1_data_dev2, /host2_eng/ase/data/db1_dev2.dat, 25
#site2.ase_data_device_create_2_3=db1_data_dev3, /host2_eng/ase/data/db1_dev3.dat, 25
#site2.ase_log_device_create_2_1=db1_log_dev1, /host2_eng/ase/data/db1_dev1.log, 25

# Devices for database "userdb2" (See "participating_database_3" property)
# Database Size = 100MB
# data device 1 "db2_data_dev1" = 25MB
# data device 2 "db2_data_dev2" = 25MB
# log device 1 "db2_log_dev1" = 25MB
# log device 2 "db2_log_dev2" = 25MB
#site2.ase_data_device_create_3_1=db2_data_dev1, /host2_eng/ase/data/db2_dev1.dat, 25
#site2.ase_data_device_create_3_2=db2_data_dev2, /host2_eng/ase/data/db2_dev2.dat, 25
#site2.ase_log_device_create_3_1=db2_log_dev1, /host2_eng/ase/data/db2_dev1.log, 25
#site2.ase_log_device_create_3_2=db2_log_dev2, /host2_eng/ase/data/db2_dev2.log, 25

# Port numbers for Replication Server and Replication Management Agent on host2
#
# In remote topology, these are the companion Replication Server and
# Replication Management Agent.
#
# See "rsge.bootstrap.tds.port.number" properties in
# <SAP ASE installed directory>/DM/RMA-16_0/instances/AgentContainer/config/bootstrap.prop
# for value
COMP.rma_rmi_port=7000
COMP.rma_tds_port=7001
#
# Starting port number to use when setup Replication Server.
# Make sure next two ports (+1 and +2) are also available for use.
COMP.srs_port=5005

# Device buffer for Replication Server on host2
# Recommend size = 128 * N
# where N is the number of databases to replicate,
# including the master and cluster ID databases.
#
COMP.device_buffer_dir=/data/SRS/data
COMP.device_buffer_size=8192

# Persistent queue directory for Replication Server running on host2
#
# For synchronous replication (synchronization_mode=sync),
# enter directory to an SSD (solid state drive) or other
# type of fast read/write storage device
COMP.simple_persistent_queue_dir=/data/SRS/ssd
COMP.simple_persistent_queue_size=8000

###############################################################################
# Site "DR" on host drnode with dr role
###############################################################################

# Host name where SAP ASE runs
#
# Enter fully qualified domain name (FQDN)
# if your sites are on different subnet.
DR.ase_host_name=drnode.openstack.na-ca-1.cloud.sap

# We don't support ASE and SRS on different hosts yet
# This is virtual host name for SRS/RMA
# Optional property
#
# Enter fully qualified domain name (FQDN)
# if your sites are on different subnet.
DR.rma_host_name=drnode.openstack.na-ca-1.cloud.sap

# Site name
#
# Enter value that identifies this site,
# like a geographical location.
# Value must be unique.
DR.site_name=Offsite

# Site role
#
# Enter the role of this site.
# Valid values: primary, companion, dr
DR.site_role=dr

# directory where SAP ASE installed
DR.ase_release_directory=/opt/sybase/ASE

# Directory that stores SAP ASE user data files
# (interfaces, RUN_<server>, error log, etc. files).
# Do not set value if your user data files are in
# SAP ASE installed directory (ase_release_directory).
DR.ase_user_data_directory=/opt/sybase/ASE

DR.ase_server_name=DR_ASE
DR.ase_server_port=5000

DR.backup_server_name=DR_ASE_BS
DR.backup_server_port=5001

# added to support demo
# Devices for database "DEM" (See "participating_database_1" property)
# Database size = 200MB
# data device "DEM" = 200MB
#DR.ase_data_device_create_1_1=le_data_dev, /host2_eng/ase/data/le1_dev1.dat, 25

# Devices for database "userdb1" (See "participating_database_2" property)
# Database Size = 100MB
# data device 1 "db1_data_dev1" = 25MB
# data device 2 "db1_data_dev2" = 25MB
# data device 3 "db1_data_dev3" = 25MB
# log device 1 "db1_log_dev1" = 25MB
#DR.ase_data_device_create_2_1=db1_data_dev1, /host2_eng/ase/data/db1_dev1.dat, 25
#DR.ase_data_device_create_2_2=db1_data_dev2, /host2_eng/ase/data/db1_dev2.dat, 25
#DR.ase_data_device_create_2_3=db1_data_dev3, /host2_eng/ase/data/db1_dev3.dat, 25
#DR.ase_log_device_create_2_1=db1_log_dev1, /host2_eng/ase/data/db1_dev1.log, 25

# Directory to store database dumps
# during materialization
#
# Backup server must be able to access this directory
DR.backup_server_dump_directory=/data/ASE/dump

# Port numbers for Replication Server and Replication Management Agent on host3
#
# In remote topology, these are the DR Replication Server and
# Replication Management Agent.
#
# See "rsge.bootstrap.tds.port.number" properties in
# <SAP ASE installed directory>/DM/RMA-16_0/instances/AgentContainer/config/bootstrap.prop
# for value
DR.rma_rmi_port=7000
# RMA RMI occupies five consecutive ports, with the configured port occupying the highest number.
DR.rma_tds_port=4909
#
# Starting port number to use when setup Replication Server.
# Make sure next two ports (+1 and +2) are also available for use.
DR.srs_port=5005

# Device buffer for Replication Server on host3
# Recommend size = 128 * N
# where N is the number of databases to replicate,
# including the master and cluster ID databases.
#
# Note: For HADR on SAP Business Suite Installations use SID database logsize * 1.5
DR.device_buffer_dir=/data/SRS/data
DR.device_buffer_size=8192

# Persistent queue directory for Replication Server running on host3
#
# For synchronous replication (synchronization_mode=sync),
# enter directory to an SSD (solid state drive) or other
# type of fast read/write storage device
# Note: For HADR on SAP Business Suite Installations use SID database logsize * 1.5
DR.simple_persistent_queue_dir=/data/SRS/ssd
DR.simple_persistent_queue_size=8000

Executing $SYBASE/$SYBASE_ASE/bin/setuphadr <response file> gives:
Clean up environment.
Environment cleaned up.
Setup ASE server configurations
Set server configuration "max network packet size" to "16384"...
Reboot SAP ASE "COMPANION_ASE"...
Setup ASE server configurations...Success
Setup user databases
Create user database DEM...
Setup user databases...Success
Setup ASE HADR maintenance user
Create maintenance login "DR_maint"...
Grant "sa_role" role to "DR_maint"...
Grant "replication_role" role to "DR_maint"...
Grant "replication_maint_role_gp" role to "DR_maint"...
Create "sap_maint_user_role" role...
Grant set session authorization to "sap_maint_user_role"...
Grant "sap_maint_user_role" role to "DR_maint"...
Grant "sybase_ts_role" role to "DR_maint"...
Add auto activated roles "sap_maint_user_role" to user "DR_maint"...
Allow "DR_maint" to be known as dbo in "master" database...
Allow "DR_maint" to be known as dbo in "DEM" database...
Setup ASE HADR maintenance user...Success
Setup administrator user
Create administrator login "DR_admin"...
Grant "sa_role" role to "DR_admin"...
Grant "sso_role" role to "DR_admin"...
Grant "replication_role" role to "DR_admin"...
Grant "hadr_admin_role_gp" role to "DR_admin"...
Grant "sybase_ts_role" role to "DR_admin"...
Add user "DR_admin" to DB "sybsystemprocs".
Setup administrator user...Success
Setup Backup server allow hosts
Backup server on "COMP" site: Add host "primarynode" to allow dump and load...
Backup server on "PRIM" site: Add host "companionnode" to allow dump and load...
Setup Backup server allow hosts...Success
Setup RMA
Set SAP ID to "DEM"...
Set installation mode to "nonBS"...
Set maintenance user to "DR_maint"...
Set site name "Toronto" with SAP ASE host:port to "primarynode:5000" and Replication Server host:port to "primarynode:5005"...
Set site name "London" with SAP ASE host:port to "companionnode:5000" and Replication Server host:port to "companionnode:5005"...
Set site name "Toronto" with Backup server port to "5001"...
Set site name "London" with Backup server port to "5001"...
Set site name "Toronto" databases dump directory to "/data/ASE/dump"...
Set site name "London" databases dump directory to "/data/ASE/dump"...
Set site name "Toronto" synchronization mode to "sync"...
Set site name "London" synchronization mode to "sync"...
Set site name "Toronto" distribution mode to "remote"...
Set site name "London" distribution mode to "remote"...
Set site name "Toronto" distribution target to site name "London"...
Set site name "London" distribution target to site name "Toronto"...
Set site name "Toronto" device buffer directory to "/data/SRS/data"...
Set site name "London" device buffer directory to "/data/SRS/data"...
Set site name "Toronto" device buffer size to "8192"...
Set site name "London" device buffer size to "8192"...
Set site name "Toronto" simple persistent queue directory to "/data/SRS/ssd"...
Set site name "London" simple persistent queue directory to "/data/SRS/ssd"...
Set site name "Toronto" simple persistent queue size to "8000"...
Set site name "London" simple persistent queue size to "8000"...
Set master, DEM databases to participate in replication...
Setup RMA...Success
Setup Replication
Setup replication from "Toronto" to "London"...
Configuring remote replication server..............................
Configuring local replication server......................................
Setting up replication on 'standby' host for local database 'master'...................
Setting up replication on 'standby' host for local database 'DEM'.....................
Setup Replication...Success
Materialize Databases
Materialize database "master"...
Starting materialization of the master database from source 'Toronto' to target 'London'...
Waiting 10 seconds: Before checking if Replication Connection 'DEM_London.master' is suspended......
Materialize database "DEM"...
Materializing database 'DEM' automatically from source 'Toronto' to target 'London'..
Executing ASE dump and load task for database 'DEM'....
Successfully verified materialization on database 'DEM'..
Stop the Replication Agent for database 'master' on host 'primarynode:5000' and data server 'DEM_Toronto'..
Stop the Replication Agent for database 'DEM' on host 'primarynode:5000' and data server 'DEM_Toronto'..
Configuring Replication Server: set 'hide_maintuser_pwd' to 'on'..
Waiting 10 seconds: Before checking if Replication Connection 'DEM_London.DEM' is suspended with dump marker...
Waiting 10 seconds: Before checking if Replication Connection 'DEM_London.DEM' is suspended........
Materialize Databases...Success

Connecting to an RMA with isql and issuing 'sap_status path' shows:
1> sap_status path
2> go
PATH NAME VALUE INFO
--------------------- ------------------------- ----------------------- ------------------------------------------------------------------------------------
Start Time 2023-05-09 17:56:13.653 Time command started executing.
Elapsed Time 00:00:00 Command execution time.
London Hostname companionnode Logical host name.
London HADR Status Standby : Inactive Identify the primary and standby sites.
London Synchronization Mode Synchronous The configured Synchronization Mode value.
London Synchronization State Inactive Synchronization Mode in which replication is currently operating.
London Distribution Mode Remote Configured value for the distribution_mode replication model property.
London Replication Server Status Active The status of Replication Server.
Toronto Hostname primarynode Logical host name.
Toronto HADR Status Primary : Active Identify the primary and standby sites.
Toronto Synchronization Mode Synchronous The configured Synchronization Mode value.
Toronto Synchronization State Synchronous Synchronization Mode in which replication is currently operating.
Toronto Distribution Mode Remote Configured value for the distribution_mode replication model property.
Toronto Replication Server Status Active The status of Replication Server.
London.Toronto.DEM State Suspended Path is suspended (Replication Agent Thread). Transactions are not being replicated.
London.Toronto.DEM Latency Time Unknown No latency information for database 'DEM'.
London.Toronto.DEM Latency Unknown No latency information for database 'DEM'.
London.Toronto.DEM Commit Time Unknown No last commit time for the database 'DEM'.
London.Toronto.DEM Distribution Path Toronto The path of Replication Server through which transactions travel.
London.Toronto.DEM Drain Status Unknown The drain status of the transaction logs of the primary database server.
London.Toronto.master State Suspended Path is suspended (Replication Agent Thread). Transactions are not being replicated.
London.Toronto.master Latency Time Unknown No latency information for database 'master'.
London.Toronto.master Latency Unknown No latency information for database 'master'.
London.Toronto.master Commit Time Unknown No last commit time for the database 'master'.
London.Toronto.master Distribution Path Toronto The path of Replication Server through which transactions travel.
London.Toronto.master Drain Status Unknown The drain status of the transaction logs of the primary database server.
Toronto.London.DEM State Active Path is active and replication can occur.
Toronto.London.DEM Latency Time 2023-05-09 17:51:33.046 Time latency last calculated
Toronto.London.DEM Latency 547 Latency (ms)
Toronto.London.DEM Commit Time 2023-05-09 17:51:33.046 Time last commit replicated
Toronto.London.DEM Distribution Path London The path of Replication Server through which transactions travel.
Toronto.London.DEM Drain Status Not Applicable The drain status of the transaction logs of the primary database server.
Toronto.London.master State Active Path is active and replication can occur.
Toronto.London.master Latency Time 2023-05-09 17:50:08.278 Time latency last calculated
Toronto.London.master Latency 426 Latency (ms)
Toronto.London.master Commit Time 2023-05-09 17:50:08.286 Time last commit replicated
Toronto.London.master Distribution Path London The path of Replication Server through which transactions travel.
Toronto.London.master Drain Status Not Applicable The drain status of the transaction logs of the primary database server.

(38 rows affected)

and checking the logins now shows the same output for both PRIMARY_ASE and COMPANION_ASE:
1> select suid, name from syslogins
2> order by suid
3> go
suid name
----------- ------------------------------
1 sa
2 probe
3 jstask
4 tpcc
5 DR_maint
6 DR_admin

(6 rows affected)

Adding a User Database to the Companion


We will now add the tpcc user database to the COMPANION_ASE and configure it for high availability.  The tpcc login is already defined on the companion (from the replication of the master database).  However, we still need to perform the following steps:

  • add the database to the companion.

  • change the ownership of the database on the companion.

  • alias DR_maint for the dbo on the companion and on the primary.

  • add the database to the HADR environment.

  • materialize the database.


We can add the empty database to the companion, change its ownership, and set any options by connecting to the companion and issuing the following commands:
disk init name="tpccdata",physname="/data/ASE/data/tpccdata.dat",size="2048M"
go
disk init name="tpcclog",physname="/data/ASE/data/tpcclog.dat",size="2048M"
go
create database tpcc on tpccdata = "2048M" log on tpcclog = "2048M"
go
sp_dboption tpcc, 'trunc. log on chkpt.', true
go
use tpcc
go
sp_changedbowner tpcc, true
go

Since replication runs from the active side to the standby, and either side can become active after a failover, we must alias DR_maint as dbo on both sides.  Currently, running 'sp_helpuser dbo' on PRIMARY_ASE and COMPANION_ASE shows:
1> sp_helpuser dbo
2> go
Users_name ID_in_db Group_name Login_name
---------- -------- ---------- ----------
dbo 1 public tpcc

(1 row affected)
(return status = 0)

On both PRIMARY_ASE and COMPANION_ASE, issue the following commands:
use tpcc
go
sp_addalias DR_maint, dbo
go

Now 'sp_helpuser dbo' shows:
1> sp_helpuser dbo
2> go
Users_name ID_in_db Group_name Login_name
---------- -------- ---------- ----------
dbo 1 public tpcc

(1 row affected)
Users aliased to user.
Login_name
----------
DR_maint
(return status = 0)

To add the database to HADR, connect to the primary or companion RMA and issue the following commands:
sap_update_replication add_db, tpcc
go

Run 'sap_status task' in the RMA session until 'Update replication request to add database 'tpcc' completed successfully.' is seen.  Then run:
sap_materialize auto, Toronto, London, tpcc
go

Once again run 'sap_status task' in the RMA session until 'Completed automatic materialization of database 'tpcc' from source 'Toronto' to target 'London'.' is seen.
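If you are not already in an RMA session, you can open one with isql and poll from there.  This is a minimal sketch; it assumes the DR_admin login and the RMA TDS port (7001) from the response files, and that your isql accepts host:port notation for -S (otherwise, add an entry for the RMA to the interfaces file):
isql -UDR_admin -P<password> -Sprimarynode:7001
1> sap_status task
2> go

Repeat 'sap_status task' until the completion message appears.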

We can check the status of the replication by executing 'sap_status path' and should see:
1> sap_status path
2> go
PATH NAME VALUE INFO
--------------------- ------------------------- ----------------------- ------------------------------------------------------------------------------------
Start Time 2023-05-09 19:13:56.183 Time command started executing.
Elapsed Time 00:00:00 Command execution time.
London Hostname companionnode Logical host name.
London HADR Status Standby : Inactive Identify the primary and standby sites.
London Synchronization Mode Synchronous The configured Synchronization Mode value.
London Synchronization State Inactive Synchronization Mode in which replication is currently operating.
London Distribution Mode Remote Configured value for the distribution_mode replication model property.
London Replication Server Status Active The status of Replication Server.
Toronto Hostname primarynode Logical host name.
Toronto HADR Status Primary : Active Identify the primary and standby sites.
Toronto Synchronization Mode Synchronous The configured Synchronization Mode value.
Toronto Synchronization State Synchronous Synchronization Mode in which replication is currently operating.
Toronto Distribution Mode Remote Configured value for the distribution_mode replication model property.
Toronto Replication Server Status Active The status of Replication Server.
London.Toronto.DEM State Suspended Path is suspended (Replication Agent Thread). Transactions are not being replicated.
London.Toronto.DEM Latency Time Unknown No latency information for database 'DEM'.
London.Toronto.DEM Latency Unknown No latency information for database 'DEM'.
London.Toronto.DEM Commit Time Unknown No last commit time for the database 'DEM'.
London.Toronto.DEM Distribution Path Toronto The path of Replication Server through which transactions travel.
London.Toronto.DEM Drain Status Unknown The drain status of the transaction logs of the primary database server.
London.Toronto.master State Suspended Path is suspended (Replication Agent Thread). Transactions are not being replicated.
London.Toronto.master Latency Time Unknown No latency information for database 'master'.
London.Toronto.master Latency Unknown No latency information for database 'master'.
London.Toronto.master Commit Time Unknown No last commit time for the database 'master'.
London.Toronto.master Distribution Path Toronto The path of Replication Server through which transactions travel.
London.Toronto.master Drain Status Unknown The drain status of the transaction logs of the primary database server.
London.Toronto.tpcc State Suspended Path is suspended (Replication Agent Thread). Transactions are not being replicated.
London.Toronto.tpcc Latency Time Unknown No latency information for database 'tpcc'.
London.Toronto.tpcc Latency Unknown No latency information for database 'tpcc'.
London.Toronto.tpcc Commit Time Unknown No last commit time for the database 'tpcc'.
London.Toronto.tpcc Distribution Path Toronto The path of Replication Server through which transactions travel.
London.Toronto.tpcc Drain Status Unknown The drain status of the transaction logs of the primary database server.
Toronto.London.DEM State Active Path is active and replication can occur.
Toronto.London.DEM Latency Time 2023-05-09 17:51:33.040 Time latency last calculated
Toronto.London.DEM Latency 544 Latency (ms)
Toronto.London.DEM Commit Time 2023-05-09 17:51:33.040 Time last commit replicated
Toronto.London.DEM Distribution Path London The path of Replication Server through which transactions travel.
Toronto.London.DEM Drain Status Not Applicable The drain status of the transaction logs of the primary database server.
Toronto.London.master State Active Path is active and replication can occur.
Toronto.London.master Latency Time 2023-05-09 17:50:08.272 Time latency last calculated
Toronto.London.master Latency 423 Latency (ms)
Toronto.London.master Commit Time 2023-05-09 17:50:08.280 Time last commit replicated
Toronto.London.master Distribution Path London The path of Replication Server through which transactions travel.
Toronto.London.master Drain Status Not Applicable The drain status of the transaction logs of the primary database server.
Toronto.London.tpcc State Active Path is active and replication can occur.
Toronto.London.tpcc Latency Time 2023-05-09 19:09:11.092 Time latency last calculated
Toronto.London.tpcc Latency 346 Latency (ms)
Toronto.London.tpcc Commit Time 2023-05-09 19:09:11.092 Time last commit replicated
Toronto.London.tpcc Distribution Path London The path of Replication Server through which transactions travel.
Toronto.London.tpcc Drain Status Not Applicable The drain status of the transaction logs of the primary database server.

(50 rows affected)

Testing the Database Replication


Running the application against the primary (active) instance shows that the row counts match between the primary and the companion (standby):
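The row counts below were collected with a simple query against the tpcc database on each instance.  A minimal sketch (the query is illustrative and not part of the HADR setup), using the built-in row_count() function:
use tpcc
go
-- report the estimated row count for every user table
select tablename = name, records = row_count(db_id(), id)
from sysobjects
where type = 'U'
go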

Before application run:
Primary
tablename records
-------------------- --------------------
ITEM 100000
DISTRICT 30
WAREHOUSE 3
STOCK 300000
CUSTOMER 90000
ORDERS 90000
ORDER_LINE 899496
HISTORY 90000
NEW_ORDER 27000

Companion
tablename records
-------------------- --------------------
ITEM 100000
DISTRICT 30
WAREHOUSE 3
STOCK 300000
CUSTOMER 90000
ORDERS 90000
ORDER_LINE 899496
HISTORY 90000
NEW_ORDER 27000


After:
Primary
tablename records
-------------------- --------------------
ITEM 100000
DISTRICT 30
WAREHOUSE 3
STOCK 300000
CUSTOMER 90000
ORDERS 90021
ORDER_LINE 899693
HISTORY 90022
NEW_ORDER 27021

Companion
tablename records
-------------------- --------------------
ITEM 100000
DISTRICT 30
WAREHOUSE 3
STOCK 300000
CUSTOMER 90000
ORDERS 90021
ORDER_LINE 899693
HISTORY 90022
NEW_ORDER 27021

Adding the DR node


Adding the DR ASE is similar to adding the companion but, because we are adding the DR node after the tpcc database has joined the HADR environment, a little more preparation is needed: unlike the companion, where the tpcc login arrived automatically when the master database was replicated, the DR node needs the login and database in place before it joins the cluster.

Since the tpcc user database is already participating in HADR, it must be created and materialized as part of the addition of the DR node to the HADR cluster. This is done by listing the database as a 'participating' database in the setuphadr response file and adding the directives to create its devices as well.

Add the node by logging on to the DR node and running $SYBASE/$SYBASE_ASE/bin/setuphadr with a response file, for example (the response file name here is illustrative):
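$SYBASE/$SYBASE_ASE/bin/setuphadr /tmp/setuphadr_dr.rs

The response file used for the DR node: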
###############################################################################
# Setup HADR sample response file for non-BS + 3rd node
# where one user database is already available and must be added to the existing setup
#
# This sample responses file sets up ASE HADR on
# hosts "host1" (primary), "host2" (companion) and "host3" (DR).
#
# Prerequisite :
# - New SAP ASE and Backup servers setup and started on "host1" and "host2".
# See HADR User Guide for requirements on SAP ASE servers.
# - Replication Management Agent (RMA) started on "host1" and "host2".
#
# Usage :
# 1. On host1 (primary), run:
# $SYBASE/$SYBASE_ASE/bin/setuphadr <this_responses_file>
#
# 2. Change this response file properties:
# setup_site=COMP
# is_secondary_site_setup=true
#
# 3. On host2 (companion), run
# $SYBASE/$SYBASE_ASE/bin/setuphadr <response_file_from_step_2>
#
# 4. Change this response file properties:
# setup_site=DR
# is_secondary_site_setup=true
#
# 5. On host3 (DR), run
# $SYBASE/$SYBASE_ASE/bin/setuphadr <response_file_from_step_4>
#
###############################################################################

# ID that identifies this cluster
#
# Value must be unique,
# begin with a letter and
# 3 characters in length.
cluster_id=DEM

# Which site being configured
#
# Note:
# You need to set "<setup_site_value>.*"
# properties in this responses file.
setup_site=DR

# Set installation_mode
#
# Valid values: true, false
#
# If set to true, installation_mode will be set to "BS"
# If set to false, installation_mode will be set to "nonBS"
setup_bs=false

# This is for BusS only
# if set to true, DR admin user will be added to secure store
add_user_to_secure_store=false
# Adding user action will be executed by following user
#sid_admin_user=DEM_adm
#sid_admin_password=<password>

# true OR false
enable_ssl=false
# common name, take SYBASE for example
ssl_common_name=SYBASE
# If this ASE server already has SSL enabled, set this to "true"; ssl_private_key_file and ssl_public_key_file will then be ignored
ase_ssl_enabled=true
# Whether to enable SSL for Backup Server connections
enable_ssl_for_bs=true
# private key file
ssl_private_key_file=/tmp/hadr.key
# public key file
ssl_public_key_file=/tmp/hadr.crt
# root CA cert
# NOTE: if you're using self-signed cert, put your public key file here
ssl_ca_cert_file=/tmp/rootCA.pem
# ssl password
ssl_password=Sybase

# Has the secondary site prepared for ASE HADR
#
# Valid values: true, false
#
# If set to true, "<secondary_setup_site_value>.*"
# properties must be set in this responses file.
is_secondary_site_setup=true

# How data is replicated
#
# Valid values: sync, async
synchronization_mode=async

# SAP ASE system administrator user/password
#
# setuphadr will prompt from standard input if not specified
ase_sa_user=sa
ase_sa_password=<password>

# BACKUP server system administrator user/password
#
bs_admin_user=sa
bs_admin_password=<password>

# ASE HADR maintenance user/password
#
# Password must have at least 6 characters
# setuphadr will prompt from standard input if not specified
hadr_maintenance_user=DR_maint
hadr_maintenance_password=<password>

# Replication Management Agent administrator user/password
#
# Password must have at least 6 characters
# setuphadr will prompt from standard input if not specified
rma_admin_user=DR_admin
rma_admin_password=<password>

# Whether XA replication is enabled
#
# Valid values: true, false
xa_replication=false

# Whether to configure and start the Replication Management Agent
#
# Valid values: true, false
config_start_rma=true

# Whether to create the Replication Management Agent Windows service
# Only affects Windows
#
# Valid values: true, false
# If set to true, rma_service_user and rma_service_password will be used
create_rma_windows_service=false

# Replication Management Agent Service user/password
#
rma_service_user=admin
rma_service_password=<password>

# Whether to disable referential constraints on HADR setups
#
# Valid values: true, false
disable_referential_constraints=false

# Databases that will participate in replication
# and "auto" materialize.
#
# If database doesn't exist in the SAP ASE, you need
# to specify <site>.ase_data_device_create_[x]_[y] and
# <site>.ase_log_device_create_[x]_[y] properties.
# See below.
#
# ASE HADR requires SAP ASE to have a database
# with cluster ID name (see "cluster_id" above).
# If you have not created this database, you can
# enter it here to have it created.

# cluster ID database
participating_database_1=DEM
materialize_participating_database_1=true

# user database
participating_database_2=tpcc
materialize_participating_database_2=true

###############################################################################
# Site "PRIM" on host primarynode with primary role
###############################################################################

# Host name where SAP ASE runs
#
# Enter fully qualified domain name (FQDN)
# if your sites are on different subnet.
PRIM.ase_host_name=primarynode

# We don't support ASE and SRS on different hosts yet
# This is virtual host name for SRS/RMA
# Optional property
#
# Enter fully qualified domain name (FQDN)
# if your sites are on different subnet.
PRIM.rma_host_name=primarynode

# Site name
#
# Enter value that identifies this site,
# like a geographical location.
# Value must be unique.
PRIM.site_name=Toronto

# Site role
#
# Enter the role of this site.
# Valid values: primary, companion
PRIM.site_role=primary

# directory where SAP ASE installed
PRIM.ase_release_directory=/opt/sybase/ASE

# User defined dm data dir for eRSSD
#PRIM.dm_database_file_directory=/data/SRS/data
#PRIM.dm_translog_file_directory=/data/SRS/data
#PRIM.dm_log_file_directory=/data/SRS/data
#PRIM.dm_config_file_directory=/opt/sybase/ASE/DM
#PRIM.dm_backup_file_directory_for_database=/data/ASE/dump

# Public IP for host
#PRIM.host_public_ip=<primary host public ip>

# Directory that stores SAP ASE user data files
# (interfaces, RUN_<server>, error log, etc. files).
# Do not set value if your user data files are in
# SAP ASE installed directory (ase_release_directory).
PRIM.ase_user_data_directory=/opt/sybase/ASE

PRIM.ase_server_name=PRIMARY_ASE
PRIM.ase_server_port=5000

PRIM.backup_server_name=PRIMARY_ASE_BS
PRIM.backup_server_port=5001

# Directory to store database dumps
# during materialization
#
# Backup server must be able to access this directory
PRIM.backup_server_dump_directory=/data/ASE/dump

# Data & log devices to create the databases specified
# in "participating_database_[x]" properties. You do
# not need to specify these properties if the database(s)
# already exist in the SAP ASE server.
#
# ase_data_device_create_[x]_[y] - property to create data device
# ase_log_device_create_[x]_[y] - property to create log device
# where
# x is number in "participating_database_[x]" property
# y is number device to create
#
# Format: <logical_device_name>, <physical_device_path>, <size_in_MB>
#
# NOTE: Databases sizes on primary and companion
# SAP ASE must be the same.

# Device for cluster ID database "DEM" (See "participating_database_1" property)
# Database size = 100MB
# data device "DEM_data_dev" = 100MB
PRIM.ase_data_device_create_1_1=DEM_data_dev, /data/ASE/data/DEM_dev1.dat, 100

# Devices for database "userdb1" (See "participating_database_2" property)
# Database Size = 100MB
# data device 1 "db1_data_dev1" = 25MB
# data device 2 "db1_data_dev2" = 25MB
# data device 3 "db1_data_dev3" = 25MB
# log device 1 "db1_log_dev1" = 25MB
#PRIM.ase_data_device_create_2_1=db1_data_dev1, /host1_eng/ase/data/db1_dev1.dat, 25
#PRIM.ase_data_device_create_2_2=db1_data_dev2, /host1_eng/ase/data/db1_dev2.dat, 25
#PRIM.ase_data_device_create_2_3=db1_data_dev3, /host1_eng/ase/data/db1_dev3.dat, 25
#PRIM.ase_log_device_create_2_1=db1_log_dev1, /host1_eng/ase/data/db1_dev1.log, 25

# Port numbers for Replication Server and Replication Management Agent on host1
#
# In remote topology, these are the companion Replication Server and
# Replication Management Agent.
#
# See "rsge.bootstrap.tds.port.number" properties in
# <SAP ASE installed directory>/DM/RMA-16_0/instances/AgentContainer/config/bootstrap.prop
# for value
PRIM.rma_rmi_port=7000
PRIM.rma_tds_port=7001
#
# Starting port number to use when setup Replication Server.
# Make sure next two ports (+1 and +2) are also available for use.
PRIM.srs_port=5005

# Device buffer for Replication Server on host1
# Recommend size = 128 * N
# where N is the number of databases to replicate,
# including the master and cluster ID databases.
#
PRIM.device_buffer_dir=/data/SRS/data
PRIM.device_buffer_size=8192

# Persistent queue directory for Replication Server running on host1
#
# For synchronous replication (synchronization_mode=sync),
# enter directory to an SSD (solid state drive) or other
# type of fast read/write storage device
PRIM.simple_persistent_queue_dir=/data/SRS/ssd
PRIM.simple_persistent_queue_size=8000

###############################################################################
# Site "COMP" on host companionnode with companion role
###############################################################################

# Host name where SAP ASE runs
#
# Enter fully qualified domain name (FQDN)
# if your sites are on different subnet.
COMP.ase_host_name=companionnode

# We don't support ASE and SRS on different hosts yet
# This is virtual host name for SRS/RMA
# Optional property
#
# Enter fully qualified domain name (FQDN)
# if your sites are on different subnet.
COMP.rma_host_name=companionnode

# Site name
#
# Enter value that identifies this site,
# like a geographical location.
# Value must be unique.
COMP.site_name=London

# Site role
#
# Enter the role of this site.
# Valid values: primary, companion
COMP.site_role=companion

# directory where SAP ASE installed
COMP.ase_release_directory=/opt/sybase/ASE

# User defined dm data dir for eRSSD
#COMP.dm_database_file_directory=
#COMP.dm_translog_file_directory=
#COMP.dm_log_file_directory=
#COMP.dm_config_file_directory=
#COMP.dm_backup_file_directory_for_database=

# Public IP for host
#COMP.host_public_ip=<companion host public ip>

# Directory that stores SAP ASE user data files
# (interfaces, RUN_<server>, error log, etc. files).
# Do not set value if your user data files are in
# SAP ASE installed directory (ase_release_directory).
COMP.ase_user_data_directory=/opt/sybase/ASE

COMP.ase_server_name=COMPANION_ASE
COMP.ase_server_port=5000

COMP.backup_server_name=COMPANION_ASE_BS
COMP.backup_server_port=5001

# Directory to store database dumps
# during materialization
#
# Backup server must be able to access this directory
COMP.backup_server_dump_directory=/data/ASE/dump

# Data & log devices to create the databases specified
# in "participating_database_[x]" properties. You do
# no need to specify these properties if the database(s)
# already exist in the SAP ASE server.
#
# ase_data_device_create_[x]_[y] - property to create data device
# ase_log_device_create_[x]_[y] - property to create log device
# where
# x is number in "participating_database_[x]" property
# y is number device to create
#
# Format: <logical_device_name>, <physical_device_path>, <size_in_MB>
#
# NOTE: Databases sizes on primary and companion
# SAP ASE must be the same.

# Devices for database "DEM" (See "participating_database_1" property)
# Database size = 100MB
# data device "le_data_dev" = 25MB
COMP.ase_data_device_create_1_1=DEM_data_dev, /data/ASE/data/dem_dev1.dat, 100

# Devices for database "userdb1" (See "participating_database_2" property)
# Database Size = 100MB
# data device 1 "db1_data_dev1" = 25MB
# data device 2 "db1_data_dev2" = 25MB
# data device 3 "db1_data_dev3" = 25MB
# log device 1 "db1_log_dev1" = 25MB
#COMP.ase_data_device_create_2_1=db1_data_dev1, /host2_eng/ase/data/db1_dev1.dat, 25
#COMP.ase_data_device_create_2_2=db1_data_dev2, /host2_eng/ase/data/db1_dev2.dat, 25
#COMP.ase_data_device_create_2_3=db1_data_dev3, /host2_eng/ase/data/db1_dev3.dat, 25
#COMP.ase_log_device_create_2_1=db1_log_dev1, /host2_eng/ase/data/db1_dev1.log, 25

# Devices for database "userdb2" (See "participating_database_3" property)
# Database Size = 100MB
# data device 1 "db2_data_dev1" = 25MB
# data device 2 "db2_data_dev2" = 25MB
# log device 1 "db2_log_dev1" = 25MB
# log device 2 "db2_log_dev2" = 25MB
#COMP.ase_data_device_create_3_1=db2_data_dev1, /host2_eng/ase/data/db2_dev1.dat, 25
#COMP.ase_data_device_create_3_2=db2_data_dev2, /host2_eng/ase/data/db2_dev2.dat, 25
#COMP.ase_log_device_create_3_1=db2_log_dev1, /host2_eng/ase/data/db2_dev1.log, 25
#COMP.ase_log_device_create_3_2=db2_log_dev2, /host2_eng/ase/data/db2_dev2.log, 25

# Port numbers for Replication Server and Replication Management Agent on host2
#
# In remote topology, these are the companion Replication Server and
# Replication Management Agent.
#
# See "rsge.bootstrap.tds.port.number" properties in
# <SAP ASE installed directory>/DM/RMA-16_0/instances/AgentContainer/config/bootstrap.prop
# for value
COMP.rma_rmi_port=7000
COMP.rma_tds_port=7001
#
# Starting port number to use when setup Replication Server.
# Make sure next two ports (+1 and +2) are also available for use.
COMP.srs_port=5005

# Device buffer for Replication Server on host2
# Recommend size = 128 * N
# where N is the number of databases to replicate,
# including the master and cluster ID databases.
#
COMP.device_buffer_dir=/data/SRS/data
COMP.device_buffer_size=8192

# Persistent queue directory for Replication Server running on host2
#
# For synchronous replication (synchronization_mode=sync),
# enter directory to an SSD (solid state drive) or other
# type of fast read/write storage device
COMP.simple_persistent_queue_dir=/data/SRS/ssd
COMP.simple_persistent_queue_size=8000

###############################################################################
# Site "DR" on host drnode with dr role
###############################################################################

# default false
# <sitename>.skip_env_clean_up=true #introduced in SP03PL13

# Host name where SAP ASE runs
#
# Enter fully qualified domain name (FQDN)
# if your sites are on different subnet.
DR.ase_host_name=drnode

# We don't support ASE and SRS on different hosts yet
# This is virtual host name for SRS/RMA
# Optional property
#
# Enter fully qualified domain name (FQDN)
# if your sites are on different subnet.
DR.rma_host_name=drnode

# Site name
#
# Enter value that identifies this site,
# like a geographical location.
# Value must be unique.
DR.site_name=Offsite

# Site role
#
# Enter the role of this site.
# Valid values: primary, companion, dr
DR.site_role=dr

# directory where SAP ASE installed
DR.ase_release_directory=/opt/sybase/ASE

# Directory that stores SAP ASE user data files
# (interfaces, RUN_<server>, error log, etc. files).
# Do not set value if your user data files are in
# SAP ASE installed directory (ase_release_directory).
DR.ase_user_data_directory=/opt/sybase/ASE

DR.ase_server_name=DR_ASE
DR.ase_server_port=5000

DR.backup_server_name=DR_ASE_BS
DR.backup_server_port=5001

# added to support demo
# Devices for database "DEM" (See "participating_database_1" property)
# Database size = 100MB
# data device "DEM_data_dev" = 100MB
DR.ase_data_device_create_1_1=DEM_data_dev, /data/ASE/data/DEM_dev1.dat, 100

# Devices for database "tpcc" (See "participating_database_2" property)
# Database Size = 4096MB
# data device 1 "tpccdata" = 2048MB
# log device 1 "tpcclog" = 2048MB
DR.ase_data_device_create_2_1=tpccdata, /data/ASE/data/tpccdata.dat, 2048
DR.ase_log_device_create_2_1=tpcclog, /data/ASE/data/tpcclog.dat, 2048

# Directory to store database dumps
# during materialization
#
# Backup server must be able to access this directory
DR.backup_server_dump_directory=/data/ASE/dump

# Port numbers for Replication Server and Replication Management Agent on host3
#
# In remote topology, these are the DR Replication Server and
# Replication Management Agent.
#
# See "rsge.bootstrap.tds.port.number" properties in
# <SAP ASE installed directory>/DM/RMA-16_0/instances/AgentContainer/config/bootstrap.prop
# for value
DR.rma_rmi_port=7000
# RMA RMI occupies five consecutive ports, with the configured port occupying the highest number.
DR.rma_tds_port=7001
#
# Starting port number to use when setup Replication Server.
# Make sure next two ports (+1 and +2) are also available for use.
DR.srs_port=5005

# Device buffer for Replication Server on host3
# Recommend size = 128 * N
# where N is the number of databases to replicate,
# including the master and cluster ID databases.
#
# Note: For HADR on SAP Business Suite Installations use SID database logsize * 1.5
DR.device_buffer_dir=/data/SRS/data
DR.device_buffer_size=8192

# Persistent queue directory for Replication Server running on host3
#
# For synchronous replication (synchronization_mode=sync),
# enter directory to an SSD (solid state drive) or other
# type of fast read/write storage device
# Note: For HADR on SAP Business Suite Installations use SID database logsize * 1.5
DR.simple_persistent_queue_dir=/data/SRS/ssd
DR.simple_persistent_queue_size=8000

 

Since the user database tpcc has already been added to the replication environment, we not only need to add the tpcc login (to keep SUIDs consistent with the primary) but must also create the database and set its options, similar to what was already done for the companion, before adding the DR ASE instance into the HADR cluster.

First, add the tpcc database to the new DR ASE instance:
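The commands mirror those used for the companion; a sketch, assuming the same device paths and sizes given in the response file (alternatively, the DR.ase_*_device_create_2_* directives let setuphadr create the database for you):
disk init name="tpccdata",physname="/data/ASE/data/tpccdata.dat",size="2048M"
go
disk init name="tpcclog",physname="/data/ASE/data/tpcclog.dat",size="2048M"
go
create database tpcc on tpccdata = "2048M" log on tpcclog = "2048M"
go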

Now add the tpcc, DR_maint and DR_admin logins and set dbo:
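A sketch of the commands (the passwords are placeholders); adding the logins in this order reproduces the SUIDs from the primary, where tpcc=4, DR_maint=5 and DR_admin=6:
-- create the logins in SUID order to match the primary
sp_addlogin tpcc, <password>
go
sp_addlogin DR_maint, <password>
go
sp_addlogin DR_admin, <password>
go
-- make tpcc the owner of its database
use tpcc
go
sp_changedbowner tpcc, true
go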

The output from setuphadr should be similar to:
Clean up environment.
Environment cleaned up.
Setup ASE server configurations
Set server configuration "max network packet size" to "16384"...
Reboot SAP ASE "DR_ASE"...
Setup ASE server configurations...Success
Setup user databases
Create user database DEM...
Create user database tpcc...
Setup user databases...Success
Setup ASE HADR maintenance user
Create maintenance login "DR_maint"...
Grant "sa_role" role to "DR_maint"...
Grant "replication_role" role to "DR_maint"...
Grant "replication_maint_role_gp" role to "DR_maint"...
Create "sap_maint_user_role" role...
Grant set session authorization to "sap_maint_user_role"...
Grant "sap_maint_user_role" role to "DR_maint"...
Grant "sybase_ts_role" role to "DR_maint"...
Add auto activated roles "sap_maint_user_role" to user "DR_maint"...
Allow "DR_maint" to be known as dbo in "master" database...
Allow "DR_maint" to be known as dbo in "DEM" database...
Allow "DR_maint" to be known as dbo in "tpcc" database...
Setup ASE HADR maintenance user...Success
Setup administrator user
Create administrator login "DR_admin"...
Grant "sa_role" role to "DR_admin"...
Grant "sso_role" role to "DR_admin"...
Grant "replication_role" role to "DR_admin"...
Grant "hadr_admin_role_gp" role to "DR_admin"...
Grant "sybase_ts_role" role to "DR_admin"...
Add user "DR_admin" to DB "sybsystemprocs".
Setup administrator user...Success
Setup Backup server allow hosts
Backup server on "DR" site: Add host "primarynode" to allow dump and load...
Backup server on "PRIM" site: Add host "drnode" to allow dump and load...
Setup Backup server allow hosts...Success
Setup DR Site
Set maintenance user to "DR_maint"...
Set maintenance user to "DR_maint"...
Set site name "Offsite" with SAP ASE host:port to "drnode:5000" and Replication Server host:port to "drnode:5005"...
Set site "Offsite" as DR node.
Set site name "Offsite" with Backup server port to "5001"...
Set site name "Offsite" databases dump directory to "/data/ASE/dump"...
Set site name "Offsite" device buffer directory to "/data/SRS/data"...
Set site name "Offsite" device buffer directory to "/data/SRS/data"...
Set site name "Offsite" device buffer size to "8192"...
Set site name "Offsite" device buffer size to "8192"...
Set site name "Offsite" simple persistent queue directory to "/data/SRS/ssd"...
Set site name "Offsite" simple persistent queue directory to "/data/SRS/ssd"...
Set site name "Offsite" simple persistent queue size to "8000"...
Set site name "Offsite" simple persistent queue size to "8000"...
Setup DR Site...Success
Setup Replication
Add DR site "Offsite"...
Configuring local replication server.......
Configuring remote replication server................................................
Setting up replication on 'standby' host for local database 'master'..
Setting up replication on 'standby' host for local database 'DEM'...
Setting up replication on 'standby' host for local database 'tpcc'...
Setup Replication...Success
Materialize Databases
Materialize database "master"...
Insuring the Replication Agent for database master is disabled at target Offsite..
Waiting 10 seconds: Before checking if Replication Connection 'DEM_Offsite.master' is suspended......
Materialize database "tpcc"...
Executing ASE dump and load task for database 'tpcc'......
Waiting 10 seconds: Before checking if Replication Connection 'DEM_Offsite.tpcc' is suspended with dump marker....
Waiting 10 seconds: Before checking if Replication Connection 'DEM_Offsite.tpcc' is suspended........
Materialize database "DEM"...
Executing ASE dump and load task for database 'DEM'....
Insuring the Replication Agent for database DEM is disabled at target Offsite..
Waiting 10 seconds: Before checking if Replication Connection 'DEM_Offsite.DEM' is suspended with dump marker...
Waiting 10 seconds: Before checking if Replication Connection 'DEM_Offsite.DEM' is suspended........
Materialize Databases...Success

This will create and materialize the user database.

Connecting to any RMA and running 'sap_status path' should show information similar to:
1> sap_status path
2> go
PATH NAME VALUE INFO
---------------------- ------------------------- ----------------------- ------------------------------------------------------------------------------------
Start Time 2023-05-30 19:16:05.294 Time command started executing.
Elapsed Time 00:00:01 Command execution time.
Toronto Hostname primarynode Logical host name.
Toronto HADR Status Primary : Active Identify the primary and standby sites.
Toronto Synchronization Mode Synchronous The configured Synchronization Mode value.
Toronto Synchronization State Synchronous Synchronization Mode in which replication is currently operating.
Toronto Distribution Mode Remote Configured value for the distribution_mode replication model property.
Toronto Replication Server Status Active The status of Replication Server.
Offsite Hostname drnode Logical host name.
Offsite HADR Status DR Standby : Inactive Identify the primary and standby sites.
Offsite Synchronization Mode Asynchronous The configured Synchronization Mode value.
Offsite Synchronization State Inactive Synchronization Mode in which replication is currently operating.
Offsite Distribution Mode Local Configured value for the distribution_mode replication model property.
Offsite Replication Server Status Active The status of Replication Server.
London Hostname companionnode Logical host name.
London HADR Status Standby : Inactive Identify the primary and standby sites.
London Synchronization Mode Synchronous The configured Synchronization Mode value.
London Synchronization State Inactive Synchronization Mode in which replication is currently operating.
London Distribution Mode Remote Configured value for the distribution_mode replication model property.
London Replication Server Status Active The status of Replication Server.
London.Offsite.DEM State Suspended Path is suspended (Replication Agent Thread). Transactions are not being replicated.
London.Offsite.DEM Latency Time Unknown No latency information for database 'DEM'.
London.Offsite.DEM Latency Unknown No latency information for database 'DEM'.
London.Offsite.DEM Commit Time Unknown No last commit time for the database 'DEM'.
London.Offsite.DEM Distribution Path Toronto The path of Replication Server through which transactions travel.
London.Offsite.DEM Drain Status Unknown The drain status of the transaction logs of the primary database server.
London.Offsite.master State Suspended Path is suspended (Replication Agent Thread). Transactions are not being replicated.
London.Offsite.master Latency Time Unknown No latency information for database 'master'.
London.Offsite.master Latency Unknown No latency information for database 'master'.
London.Offsite.master Commit Time Unknown No last commit time for the database 'master'.
London.Offsite.master Distribution Path Toronto The path of Replication Server through which transactions travel.
London.Offsite.master Drain Status Unknown The drain status of the transaction logs of the primary database server.
London.Offsite.tpcc State Suspended Path is suspended (Replication Agent Thread). Transactions are not being replicated.
London.Offsite.tpcc Latency Time Unknown No latency information for database 'tpcc'.
London.Offsite.tpcc Latency Unknown No latency information for database 'tpcc'.
London.Offsite.tpcc Commit Time Unknown No last commit time for the database 'tpcc'.
London.Offsite.tpcc Distribution Path Toronto The path of Replication Server through which transactions travel.
London.Offsite.tpcc Drain Status Unknown The drain status of the transaction logs of the primary database server.
London.Toronto.DEM State Suspended Path is suspended (Replication Agent Thread). Transactions are not being replicated.
London.Toronto.DEM Latency Time Unknown No latency information for database 'DEM'.
London.Toronto.DEM Latency Unknown No latency information for database 'DEM'.
London.Toronto.DEM Commit Time Unknown No last commit time for the database 'DEM'.
London.Toronto.DEM Distribution Path Toronto The path of Replication Server through which transactions travel.
London.Toronto.DEM Drain Status Unknown The drain status of the transaction logs of the primary database server.
London.Toronto.master State Suspended Path is suspended (Replication Agent Thread). Transactions are not being replicated.
London.Toronto.master Latency Time Unknown No latency information for database 'master'.
London.Toronto.master Latency Unknown No latency information for database 'master'.
London.Toronto.master Commit Time Unknown No last commit time for the database 'master'.
London.Toronto.master Distribution Path Toronto The path of Replication Server through which transactions travel.
London.Toronto.master Drain Status Unknown The drain status of the transaction logs of the primary database server.
London.Toronto.tpcc State Suspended Path is suspended (Replication Agent Thread). Transactions are not being replicated.
London.Toronto.tpcc Latency Time Unknown No latency information for database 'tpcc'.
London.Toronto.tpcc Latency Unknown No latency information for database 'tpcc'.
London.Toronto.tpcc Commit Time Unknown No last commit time for the database 'tpcc'.
London.Toronto.tpcc Distribution Path Toronto The path of Replication Server through which transactions travel.
London.Toronto.tpcc Drain Status Unknown The drain status of the transaction logs of the primary database server.
Toronto.London.DEM State Active Path is active and replication can occur.
Toronto.London.DEM Latency Time 2023-05-30 19:12:00.166 Time latency last calculated
Toronto.London.DEM Latency 333 Latency (ms)
Toronto.London.DEM Commit Time 2023-05-30 19:12:00.166 Time last commit replicated
Toronto.London.DEM Distribution Path London The path of Replication Server through which transactions travel.
Toronto.London.DEM Drain Status Not Applicable The drain status of the transaction logs of the primary database server.
Toronto.London.master State Active Path is active and replication can occur.
Toronto.London.master Latency Time 2023-05-30 19:09:33.612 Time latency last calculated
Toronto.London.master Latency 356 Latency (ms)
Toronto.London.master Commit Time 2023-05-30 19:09:33.612 Time last commit replicated
Toronto.London.master Distribution Path London The path of Replication Server through which transactions travel.
Toronto.London.master Drain Status Not Applicable The drain status of the transaction logs of the primary database server.
Toronto.London.tpcc State Active Path is active and replication can occur.
Toronto.London.tpcc Latency Time 2023-05-30 19:10:51.700 Time latency last calculated
Toronto.London.tpcc Latency 384 Latency (ms)
Toronto.London.tpcc Commit Time 2023-05-30 19:10:51.700 Time last commit replicated
Toronto.London.tpcc Distribution Path London The path of Replication Server through which transactions travel.
Toronto.London.tpcc Drain Status Not Applicable The drain status of the transaction logs of the primary database server.
Toronto.Offsite.DEM State Active Path is active and replication can occur.
Toronto.Offsite.DEM Latency Time 2023-05-30 19:12:00.666 Time latency last calculated
Toronto.Offsite.DEM Latency 583 Latency (ms)
Toronto.Offsite.DEM Commit Time 2023-05-30 19:12:00.672 Time last commit replicated
Toronto.Offsite.DEM Distribution Path London The path of Replication Server through which transactions travel.
Toronto.Offsite.DEM Drain Status Not Applicable The drain status of the transaction logs of the primary database server.
Toronto.Offsite.master State Active Path is active and replication can occur.
Toronto.Offsite.master Latency Time 2023-05-30 19:09:34.006 Time latency last calculated
Toronto.Offsite.master Latency 553 Latency (ms)
Toronto.Offsite.master Commit Time 2023-05-30 19:09:34.012 Time last commit replicated
Toronto.Offsite.master Distribution Path London The path of Replication Server through which transactions travel.
Toronto.Offsite.master Drain Status Not Applicable The drain status of the transaction logs of the primary database server.
Toronto.Offsite.tpcc State Active Path is active and replication can occur.
Toronto.Offsite.tpcc Latency Time 2023-05-30 19:10:52.272 Time latency last calculated
Toronto.Offsite.tpcc Latency 670 Latency (ms)
Toronto.Offsite.tpcc Commit Time 2023-05-30 19:10:52.272 Time last commit replicated
Toronto.Offsite.tpcc Distribution Path London The path of Replication Server through which transactions travel.
Toronto.Offsite.tpcc Drain Status Not Applicable The drain status of the transaction logs of the primary database server.

(92 rows affected)

All we have done is replicate and materialize the tpcc database.  To complete the process, we must:

  • set any db_options

  • set the owner


DR_maint was already aliased as dbo when DR_ASE was added to the HADR cluster and the tpcc database was created and materialized, so only the database options and ownership remain to be set on DR_ASE:
1> use master
2> go
1> sp_dboption tpcc,'trunc. log on chkpt.', true
2> go
1> use tpcc
2> go
1> sp_changedbowner tpcc, true
2> go
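
As a quick check, running 'sp_helpuser dbo' in the tpcc database on DR_ASE should now show the same output seen earlier on the primary and companion, with DR_maint listed under 'Users aliased to user.':
use tpcc
go
sp_helpuser dbo
go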

Transactions applied at the primary now replicate to both the companion and the DR node:
PRIMARY_ASE
tablename records
-------------------- --------------------
ITEM 100000
DISTRICT 30
WAREHOUSE 3
STOCK 300000
CUSTOMER 90000
ORDERS 90075
ORDER_LINE 901215
HISTORY 90063
NEW_ORDER 27075
STOCK_PHOTO 0

COMPANION_ASE
tablename records
-------------------- --------------------
ITEM 100000
DISTRICT 30
WAREHOUSE 3
STOCK 300000
CUSTOMER 90000
ORDERS 90075
ORDER_LINE 901215
HISTORY 90063
NEW_ORDER 27075
STOCK_PHOTO 0

DR_ASE
tablename records
-------------------- --------------------
ITEM 100000
DISTRICT 30
WAREHOUSE 3
STOCK 300000
CUSTOMER 90000
ORDERS 90075
ORDER_LINE 901215
HISTORY 90063
NEW_ORDER 27075
STOCK_PHOTO 0

Now on to SAP Host Agent, the Fault Manager, HA failover and making an application HA-aware.