Using file shares to host the system mount directory was introduced a couple of months ago, and since then the implementation has become significantly easier. No more manual editing of profile parameter files – now everything is reflected during system provisioning. Recently I went through the process again and I think the changes are big enough to be worth revisiting my past post. As there are a few improvements on the Azure side as well, I decided to write a brand-new guide and include additional information about protecting the SQL Server using AlwaysOn availability groups. The file share is still provisioned using a combination of the Storage Spaces Direct and Scale-Out File Server functionalities in Windows Server 2016. To simplify the setup and avoid building a large and expensive landscape, I deployed all required components to just a two-node Failover Cluster.







Using a two-node cluster to host all SAP components is not recommended by SAP in production environments. Please read the SAP NetWeaver Master Guide as well as the Installation Guide and make sure you understand all risks that come with such an architecture.
If you'd prefer to distribute the SAP components, you can still follow this guide – only the number of servers will be higher.


RESOURCE PROVISIONING IN MICROSOFT AZURE

The first improvement can be seen in the process of provisioning the virtual machines. Previously I explained the concept of Availability Sets to protect the system against unexpected hardware failure. Currently, selected Azure regions consist of more than one datacentre – for example the West Europe region, which I use in this guide, is divided into three physical locations with separate power supply, cooling and networking. When VMs are deployed into separate Availability Zones the probability of a failure affecting both is much lower – Microsoft offers a 99.99% uptime SLA. The higher availability comes with a slightly higher network latency between servers – especially if there is a long distance between zones. Always check whether that will have a negative impact on your environment – I recommend running the database and the application server in the same zone and failing them over together.



(source: Microsoft.com)

My cluster is based on two DS4_v2 VMs deployed in the West Europe region in two separate Availability Zones. I already have a Windows domain in my network, and I won't describe the steps needed to configure Active Directory – you can use almost any guide from the internet.

Each VM has three data disks attached. Storage Spaces Direct needs at least two disks, and the third one will be used to store application and data files.
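For reference, a zonal VM with the three data disks can also be deployed from PowerShell. The sketch below uses the Az module; the resource group, virtual network and subnet names are just placeholders and not taken from my actual setup.

# Sketch only (Az module): deploy the first node into Availability Zone 1 with three data disks.
# Resource group, VNet and subnet names are placeholders.
$cred = Get-Credential    # local administrator account for the new VM
New-AzVm -ResourceGroupName "bjd-ha-rg" -Name "bjd-ha-0" -Location "westeurope" `
    -Zone "1" -Size "Standard_DS4_v2" -Credential $cred `
    -VirtualNetworkName "bjd-vnet" -SubnetName "sap-subnet" `
    -DataDiskSizeInGb 128,128,256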




























VM IP Size Data disk 1 Data disk 2 Data disk 3
Bjd-ha-0 10.1.1.13 DS4_v2 128 GB 128 GB 256 GB
Bjd-ha-1 10.1.1.14 DS4_v2 128 GB 128 GB 256 GB




Each component deployed to the failover cluster uses a separate virtual IP for communication. Such an approach requires an Internal Load Balancer with three Frontend IPs to direct traffic to the correct address. Note that only a Standard Load Balancer can distribute traffic to VMs deployed across Availability Zones.































Name                   IP         Backend Pool   Health probe
LoadBalancer_Cluster   10.1.1.20  bjd-vm         HealthProbe_Cluster (62500)
LoadBalancer_SQL       10.1.1.21  bjd-vm         HealthProbe_SQL (62501)
LoadBalancer_ASCS      10.1.1.22  bjd-vm         HealthProbe_ASCS (62502)


The two cluster nodes form the backend pool:



I created three health probes to monitor the availability and distribution of services.
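For reference, the Standard internal Load Balancer with one of the frontend IPs, the backend pool and a health probe can also be created with the Az module. This is only a sketch – the resource group, VNet and load balancer names are placeholders:

$vnet   = Get-AzVirtualNetwork -ResourceGroupName "bjd-ha-rg" -Name "bjd-vnet"
$subnet = Get-AzVirtualNetworkSubnetConfig -VirtualNetwork $vnet -Name "sap-subnet"

# Frontend IP, backend pool and health probe for the cluster IP (repeat for SQL and ASCS)
$fe    = New-AzLoadBalancerFrontendIpConfig -Name "LoadBalancer_Cluster" -PrivateIpAddress "10.1.1.20" -Subnet $subnet
$pool  = New-AzLoadBalancerBackendAddressPoolConfig -Name "bjd-vm"
$probe = New-AzLoadBalancerProbeConfig -Name "HealthProbe_Cluster" -Protocol Tcp -Port 62500 -IntervalInSeconds 5 -ProbeCount 2

New-AzLoadBalancer -ResourceGroupName "bjd-ha-rg" -Name "bjd-ilb" -Location "westeurope" -Sku Standard `
    -FrontendIpConfiguration $fe -BackendAddressPool $pool -Probe $probe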



The last resource required is a Storage Account that I use as a Cloud Witness:
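A sketch of the same step in PowerShell – the account name below is just an example and has to be globally unique:

# Sketch (Az module): a storage account to be used as Cloud Witness
New-AzStorageAccount -ResourceGroupName "bjd-ha-rg" -Name "bjdhawitness" -Location "westeurope" -SkuName Standard_LRS -Kind StorageV2
# The access key is needed later for Set-ClusterQuorum
(Get-AzStorageAccountKey -ResourceGroupName "bjd-ha-rg" -Name "bjdhawitness")[0].Value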



FAILOVER CLUSTER CONFIGURATION

Storage Spaces Direct uses all unallocated disks to form a storage pool, which then hosts the SAP mount directory. Application and database files should reside on a separate disk that won't be part of the storage pool, so before enabling the S2D functionality I created a partition on the selected disk.
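A minimal sketch of that preparation, assuming the 256 GB disk is the one that should stay out of the pool (verify the disk numbers on your own VMs first):

# Initialize and format only the 256 GB disk so it is excluded from the S2D storage pool
Get-Disk | Where-Object { $_.PartitionStyle -eq 'RAW' -and $_.Size -eq 256GB } |
    Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -AssignDriveLetter -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "Data" -Confirm:$false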



Next, I’m using a PowerShell script to add computers to the domain and install required features:
$domain = "myDomain"
$user = "username"
$password = "myPassword!" | ConvertTo-SecureString -asPlainText -Force
$username = "$domain\$user"
$credential = New-Object System.Management.Automation.PSCredential($username,$password)
Add-Computer -DomainName $domain -Credential $credential

Install-WindowsFeature -Name "Failover-Clustering", "Data-Center-Bridging", "RSAT-Clustering-PowerShell", "FS-FileServer" -IncludeManagementTools

(source: Internet)




I decided to check the configuration before forming a cluster:
Test-Cluster -Node <Node1>, <Node2> -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"



The outcome is a detailed report that shows which drives will be used for Storage Spaces Direct. The partition I created on one of the disks excluded it from the storage pool.
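You can also quickly check which disks are still eligible for the pool before enabling S2D:

# Disks with CanPool = True will be claimed by Storage Spaces Direct
Get-PhysicalDisk | Where-Object CanPool -eq $true | Select-Object FriendlyName, SerialNumber, Size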



There were no errors in the report so I’m ready to build the cluster. Use the Load Balancer IP address as the cluster IP. Later we will configure load balancing rules to direct the traffic to the active node.
New-Cluster -Name <ClusterName> -Node <Node1>, <Node2> -StaticAddress <LoadBalancerClusterFrontEndIP> -NoStorage



I use the previously created storage account as a Cloud Witness:
Set-ClusterQuorum -CloudWitness -AccountName <StorageAccountName> -AccessKey <StorageAccountAccessKey>



When the cluster is operational, I can enable the Storage Spaces Direct functionality:
Enable-ClusterStorageSpacesDirect -CimSession <ClusterName>

Four disks (two from each VM) formed a storage pool. The following command creates a new volume and mounts it under C:\ClusterStorage\:
New-Volume -FriendlyName "VolumeName" -FileSystem CSVFS_ReFS -StoragePoolFriendlyName S2D* -UseMaximumSize



Details about the storage pool and the volume are also visible in the Server Manager:



The last PowerShell command used today enables the Scale-Out File Server role on the cluster:
Add-ClusterScaleOutFileServerRole -Cluster <ClusterName> -Name <SOFSHostname>



I created the SAP mount directory on the Storage Spaces Direct volume:



A file share can be created in the Failover Cluster Manager:
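Alternatively, the share can be created from PowerShell on the node that currently owns the Scale-Out File Server role. The volume, directory and group names below are only examples – adjust them and the NTFS permissions to your environment:

# Sketch: create the sapmnt share on the SOFS role instead of using Failover Cluster Manager
New-Item -Path "C:\ClusterStorage\VolumeName\sapmnt" -ItemType Directory
New-SmbShare -Name "sapmnt" -Path "C:\ClusterStorage\VolumeName\sapmnt" `
    -ScopeName "<SOFSHostname>" -FullAccess "myDomain\SAP_Admins" -ContinuouslyAvailable $true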





The highly available file share is deployed and can be used to install the Central Services instance.

SQL SERVER ALWAYS ON AVAILABILITY GROUPS

The following chapter shows how to deploy Microsoft SQL Server and enable the AlwaysOn functionality. I included only the important installation steps for the first node, but the process has to be repeated on the second VM.

As a preparation, I created two users in the Active Directory to run the database processes:



Then, during the software installation, I assigned them to the Database Engine and SQL Server Agent services:



I almost forgot to change the Collation!



In the database engine configuration, I chose a Mixed Mode authentication and configured data directories:



Installation doesn’t take too much time:



Once the installation is completed, we have two standalone SQL Servers installed on the two nodes of the cluster. As the next step we need to enable the AlwaysOn functionality and configure the replication. In SQL Server Configuration Manager select "Enable AlwaysOn Availability Groups".



The system account needs additional permissions to manage the availability group:
USE [master]
GO
GRANT ALTER ANY AVAILABILITY GROUP TO [NT AUTHORITY\SYSTEM]
GO
GRANT CONNECT SQL TO [NT AUTHORITY\SYSTEM]
GO
GRANT VIEW SERVER STATE TO [NT AUTHORITY\SYSTEM]
GO

(source: Microsoft.com)




Before I continue with the AlwaysOn activation I need to create an empty database and execute a Full Backup:
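If you prefer to script it, a minimal sketch using the SqlServer PowerShell module is shown below; the database name and backup path are examples only:

# Create an empty database and take the full backup required before it can join an Availability Group
Invoke-Sqlcmd -ServerInstance "bjd-ha-0" -Query "CREATE DATABASE [BJD]"
Invoke-Sqlcmd -ServerInstance "bjd-ha-0" -Query "BACKUP DATABASE [BJD] TO DISK = N'E:\Backup\BJD.bak' WITH INIT"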



You can follow the wizard to create the Availability Group:



In the first window I’m asked to provide the Availability Group name:



Then I selected the database to replicate:



The Specify Replicas step allows you to change the replication settings and select the hosts that should be included in the Availability Group. Synchronous replication minimizes the amount of lost data in case of node failure, but it comes with a negative impact on system performance. In most cases you should go with asynchronous replication.

I haven’t created a Listener at this point.



I use Automatic Seeding to perform initial data synchronization.



When the configuration is completed, we receive a summary:



The database listener is a combination of a virtual hostname and a port that will be used for communication with the database. You can use SQL Server Management Studio to define a new listener:



The listener IP should match the Load Balancer SQL Frontend IP. Saving the settings will also create an additional DNS entry in the domain.
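For reference, the same listener can be created with T-SQL (wrapped here in Invoke-Sqlcmd). The Availability Group and listener names are examples; the IP matches the LoadBalancer_SQL frontend:

# Sketch: add a listener with a static IP equal to the Load Balancer SQL Frontend IP
$tsql = @"
ALTER AVAILABILITY GROUP [BJD_AG]
ADD LISTENER N'bjd-sql' (WITH IP ((N'10.1.1.21', N'255.255.255.0')), PORT = 1433);
"@
Invoke-Sqlcmd -ServerInstance "bjd-ha-0" -Query $tsql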



The AlwaysOn configuration is quick and straightforward. The current cluster status can be monitored from the Failover Cluster Manager, but the failover can only be triggered from the AlwaysOn dashboard.



The Azure Load Balancer routes each request addressed to the listener to the correct host. But at this moment the Load Balancer is not aware of the node on which the SQL service is running. A health probe is the solution – every few seconds the Azure service tries to establish a connection to both nodes, but the probe port is open only on the host running the SQL process. This way the Load Balancer can identify the active node and direct the network traffic accordingly.

Use the following PowerShell script to create a health probe:
$ClusterNetworkName = "<ClusterNetworkName>" 
$IPResourceName = "<IPResourceName>"
$ListenerILBIP = "<n.n.n.n>"
[int]$ListenerProbePort = <nnnnn>

Import-Module FailoverClusters

Get-ClusterResource $IPResourceName | Set-ClusterParameter -Multiple @{"Address"="$ListenerILBIP";"ProbePort"=$ListenerProbePort;"SubnetMask"="255.255.255.255";"Network"="$ClusterNetworkName";"EnableDhcp"=0}

(source: Microsoft.com)


How to get the input parameters?

Cluster Network Name – can be retrieved using the command Get-ClusterNetwork

IP Resource Name – can be retrieved from the IP Address properties in the Failover Cluster Manager or using the command Get-ClusterResource | ? ResourceType -eq "IP Address"

Listener ILB IP – the IP address assigned to the Load Balancer Frontend IP

Probe Port – the port defined in the Load Balancer health probe
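The two lookup commands in one place:

# Look up the values used in the script above
Get-ClusterNetwork | Select-Object Name, Address
Get-ClusterResource | Where-Object ResourceType -eq "IP Address" | Select-Object Name, OwnerGroup, State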



Now I can define a rule on the Load Balancer to route the requests to the correct host.
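As a sketch, the SQL rule can also be added with the Az module. Floating IP (Direct Server Return) is enabled so that the listener port reaches the node unchanged; the rule and resource names are placeholders:

$lb    = Get-AzLoadBalancer -ResourceGroupName "bjd-ha-rg" -Name "bjd-ilb"
$fe    = Get-AzLoadBalancerFrontendIpConfig -LoadBalancer $lb -Name "LoadBalancer_SQL"
$pool  = Get-AzLoadBalancerBackendAddressPoolConfig -LoadBalancer $lb -Name "bjd-vm"
$probe = Get-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name "HealthProbe_SQL"

# Route the default SQL port 1433 to the node that answers on the health probe
$lb | Add-AzLoadBalancerRuleConfig -Name "Rule_SQL" -Protocol Tcp -FrontendPort 1433 -BackendPort 1433 `
    -FrontendIpConfiguration $fe -BackendAddressPool $pool -Probe $probe `
    -EnableFloatingIP -IdleTimeoutInMinutes 30 | Set-AzLoadBalancer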



You can verify the configuration using sqlcmd.
sqlcmd -S <listenerName> -E



CENTRAL SERVICES INSTANCE – FIRST CLUSTER NODE

To enable high availability of SAP NetWeaver, the Central Services instance has to be installed on both nodes of the cluster. Start the Software Provisioning Manager and select the installation on the First Cluster Node.



In my initial blog post about using a file share as the SAP mount directory, most of the configuration had to be performed manually. Fortunately, this deployment option is now available directly in the Software Provisioning Manager.



In the SAP System Cluster Parameters step, type the desired system ID. The network name is the hostname that the Central Services instance uses for communication. It has to be associated with the ASCS Load Balancer Frontend IP – the DNS entry should be created before clicking Next. In the File Share Host Name field enter the Scale-Out File Server hostname that you chose during the initial cluster configuration.



Carefully choose the instance numbers. If you use different instance numbers than below, some of the load balancer rules will require an update:



When the installation is completed, I can create a health probe assigned to the ASCS Cluster IP resource.



The Load Balancer configuration has to be enhanced to include the Central Services instance ports: 3210, 3310, 3610, 3910, 51013, 51016 and 8110.



The status of Central Services instance is displayed in SAP Management Console:



DATABASE INSTANCE

The database is running and the central services instance is deployed to the first node so it’s time to start the database instance installation.



At the beginning the Software Provisioning Manager asks for the path to the profile directory. Of course, we should provide the file share created using the Scale-Out File Server:



The installer also asks for the database connection. Please use the database listener and not the node hostname! Otherwise the profile parameter will contain incorrect information and the communication won't go through the Load Balancer but directly to a single SQL Server node. If you can't see the listener name, revisit the SQL Server configuration – most probably the listener is missing or there is a problem with the DNS entry.



Select the previously created database which is replicated to the secondary node:



The host has multiple components installed, so the standard memory configuration won't apply.



The database instance installation takes the most time, as it reads the installation data and imports it into the database during the Import Proper step.



SECONDARY ASCS NODE

Install the Central Services instance on the secondary node. The process is basically the same as for the primary node, so I don't think it requires additional explanation.



SECONDARY SQL NODE

SQL Server AlwaysOn works a bit differently from standard high-availability solutions. Instead of using shared storage, it uses a replication mechanism to propagate the data from the primary to the secondary node. Such an approach has a lot of benefits, but it also requires additional work from us.

Not all SQL objects are created within the database – some belong to the SQL Server instance configuration. For example, logins do not belong to the database and therefore won't be replicated to the secondary node; they have to be re-created there. Fortunately, SAP has a ready solution and we don't have to copy the objects manually.

This step requires that the database is active on the node that is currently passive, so before starting the Software Provisioning Manager it is required to perform a failover. That's also a good test to check whether SQL AlwaysOn is working correctly. Open the dashboard and follow the wizard to fail over the database:
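If you prefer not to use the dashboard, the planned failover can also be triggered with T-SQL executed against the node that should become the new primary; the Availability Group name is an example:

# Run on the secondary replica that should take over the primary role
Invoke-Sqlcmd -ServerInstance "bjd-ha-1" -Query "ALTER AVAILABILITY GROUP [BJD_AG] FAILOVER"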





Now you can start the Software Provisioning Manager and configure the SQL Cluster node:



Provide the details about the current installation. Don’t forget to update the domain name:



Unlike during the Central Services installation, this time it's OK to provide the node details:



The installer asks pretty much the same questions as during the database install.



When the process is finished you can go ahead to install the Dialog Instances.

APPLICATION INSTANCE

A highly available SAP NetWeaver system requires at least two instances of the application server – one instance per cluster node. Open the SWPM and select the Primary Application Server installation on one of the hosts; then, when the installation is completed, start the deployment of the Additional Application Server.



The process is again very similar to what we’ve done before, so I won’t go screen by screen. Remember to point the system to the correct file share.



You need to provide the instance number. I decided to go for 00 on both nodes:



Provide the details of the Message Server port and the file share to store the transport files:



A few moments later the installation is completed:

Congratulations! You have completed the setup of SAP NetWeaver running on two node failover cluster!

HOW TO MANAGE THE CLUSTER

The easiest way to manage the cluster is to use the tools delivered by SAP and Microsoft. To start and stop the SAP Central Services instance and check where the processes are currently running, I recommend using the Failover Cluster Manager:



To see the details about database replication I use the AlwaysOn dashboard available in SQL Management Studio.



Starting and stopping SAP Dialog Instances can be done using SAP MMC. Don’t use it to manage the central services instance.
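If you prefer the command line, the failover cluster cmdlets give a quick overview as well; the SAP group name below is only an example:

# Check which node currently owns each clustered role
Get-ClusterGroup | Select-Object Name, OwnerNode, State
# Move a role (for example the ASCS group) to the other node
Move-ClusterGroup -Name "SAP BJD" -Node "bjd-ha-1"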

Wow! That's quite a long post, but I hope such an optimised scenario lets you practice managing highly available SAP solutions without spending a fortune on Azure VMs!