
This article is a continuation of my previous post about SAP ASE Cluster Edition 15.7. I will now detail the installation setup, administration tools, and some advanced features of this Business Continuity solution.

Installation.
In contrast to other High Availability solutions I previously described in this blog, SAP ASE Cluster Edition does not require additional clusterware to operate. You do not need to purchase failover cluster software from other vendors. The main requirements for setting up ASE Cluster Edition are listed below.

Requirements.

ASE Cluster Edition is certified on the main UNIX platforms (AIX, HP-UX, Linux, Solaris). Before planning an installation, check the certification details for O/S versions and minimal patch levels (see http://certification.sybase.com).

  • Retrieve and install the license files
  • Minimal RAM requirement: 1 GB
  • Shared file system for software distribution:
    • Minimal file system space requirement: 2 GB
    • Network File System (NFS) or a clustered file system (CFS or GFS), for instance GFS 6.1 on RHEL or OCFS2 on SuSE 11

  • Shared storage for data and logs:
    • SAN storage
    • Minimal space requirement for system databases: 2 GB
    • RAW devices only
    • Multipathing enabled at the O/S level. Multipathing provides connection fault tolerance, failover, redundancy, high availability, load balancing, and increased bandwidth and throughput across the active connections.
    • I/O fencing: data integrity is not guaranteed unless you enable I/O fencing. SAP ASE CE supports the SCSI-3 Persistent Group Reservation (PGR) feature of SCSI-3 devices to provide I/O fencing. PGR is the SCSI-3 standard for managing disk access in an environment where a single disk is shared by multiple hosts for read and write access.
  • Networks:
    • 2 private interconnect networks (a primary and a secondary):
    • Private interconnect networks are used for internode communication. The Cluster Edition supports the current standards for interconnects; 1 Gb bandwidth is recommended. The Cluster Edition supports InfiniBand in IP over IB (Internet Protocol over InfiniBand) mode.

    • 1 SAN network

    • 1 public network, for client applications

  • Homogeneous physical nodes
    The same O/S and architecture are required. ASE CE is certified from 2 up to 32 physical nodes.
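The shared-storage requirements above (multipathing and SCSI-3 PGR support for I/O fencing) can be spot-checked from the shell on each Linux node. The sketch below is illustrative only: the LUN path /dev/mapper/mpatha is a hypothetical placeholder, and it assumes the device-mapper-multipath and sg3_utils packages are installed.

```shell
#!/bin/sh
# Sketch: spot-check shared-storage prerequisites on a Linux node.
# The LUN path is a placeholder -- substitute your own shared device.
check_storage() {
    dev="${1:-/dev/mapper/mpatha}"

    echo "== Multipath topology for $dev =="
    if command -v multipath >/dev/null 2>&1; then
        multipath -ll "$dev"          # shows active/passive paths per LUN
    else
        echo "multipath (device-mapper-multipath) not installed"
    fi

    echo "== SCSI-3 persistent reservation capabilities for $dev =="
    if command -v sg_persist >/dev/null 2>&1; then
        sg_persist --in --report-capabilities "$dev"   # PGR support needed for I/O fencing
    else
        echo "sg_persist (sg3_utils) not installed"
    fi
}

check_storage "$@"
```

If the PGR capability report comes back empty or unsupported, the device cannot provide I/O fencing and data integrity is not guaranteed.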

The diagram below depicts a typical network topology for ASE Cluster Edition with 4 nodes.

[Figure: typical network topology for a 4-node ASE Cluster Edition]

Installation planning.

  1. Check if all requirements are met.
  2. Define your cluster architecture.
    Essential cluster information must be properly defined before creating a cluster (the list of physical nodes, the list of RAW devices, the shared file system). The Installation Guide of ASE Cluster Edition provides a worksheet to collect this information.

Installation process.

  1. Prepare the shared file system and the RAW devices. I/O fencing must be enabled.
  2. On UNIX, create a sybase account with administrative privileges, for consistency and security.
  3. Log in as the sybase UNIX account.
  4. Install the ASE Cluster Edition software distribution on the shared file system.
  5. Run the Sybase Control Center agent ($SYBASE/SCC-3_2/bin/scc.sh). Sybase Control Center is known as SCC.
  6. Run the sybcluster utility to create a new cluster using the worksheet information.
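Steps 3–5 might look like the following on a Linux node. This is only a sketch under assumptions: the /opt/sybase installation directory and the SCC agent path are taken from the examples later in this post, and the generated environment file and scc.sh behavior may differ by version and site.

```shell
#!/bin/sh
# Sketch of installation steps 3-5, run as the sybase UNIX account.
# Paths come from the examples in this post; adjust for your site.
SYBASE=/opt/sybase            # shared software distribution

start_scc_agent() {
    # Load the environment file generated by the installer, if present.
    [ -f "$SYBASE/SYBASE.sh" ] && . "$SYBASE/SYBASE.sh"

    # Start the SCC agent in the background, keeping its console output.
    if [ -x "$SYBASE/SCC-3_2/bin/scc.sh" ]; then
        nohup "$SYBASE/SCC-3_2/bin/scc.sh" > "$HOME/scc_agent.log" 2>&1 &
        echo "SCC agent starting (pid $!)"
    else
        echo "scc.sh not found under $SYBASE/SCC-3_2/bin"
    fi
}

start_scc_agent
```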

An example sybcluster session creating a new cluster is given below.

Managing a shared-disk cluster.

Sybcluster utility.
The sybcluster utility is a multi-purpose tool for administering SAP ASE Cluster Edition. It interacts with the cluster via the Sybase Control Center agent, so the SCC agent must be running. sybcluster can create a cluster, start or stop the entire cluster or individual instances, and retrieve the configuration and status.

Create a shared-disk cluster.

You can use sybcluster to create a cluster interactively or non-interactively with an XML input file. Below is an example of cluster creation with an XML input file (mycluster.xml). In this example, a shared-disk cluster named ‘mycluster’ is created with two instances (ASE1 and ASE2). A single master device is created for both instances, whereas two local temporary devices are created, one per instance.

[sybase@asece sybase]$ sybcluster -U uafadmin -P sybase -C mycluster -F "node1,node2"

> create cluster mycluster file /opt/sybase/mycluster.xml
Enter the ASE sa user password:     
Re-enter the ASE sa user password:     
INFO  – Creating the Cluster Agent plugin on host address asece using agent: asece:9999
For instance ASE1, enter the path to the Interfaces file on asece:  [ /opt/sybase ]
For instance ASE2, enter the path to the Interfaces file on asece:  [ /opt/sybase ]
Would you like to check whether this device supports IO fencing capability (Y/N)?  [ Y ] N
INFO  – Cluster “mycluster” creation in progress.
INFO  – Choosing the first instance to be created using the connected agent…
INFO  – The Sybase home directory is /opt/sybase.
INFO  – The ASE home directory is /opt/sybase/ASE-15_0.
INFO  – Retrieving environment variables from /opt/sybase/SYBASE.sh.
INFO  – The first instance created will be ASE1.
INFO  – Warning: You have selected ‘4k’ as the logical page size for the Adaptive
INFO  – Server. If you plan to load dump from another database, make sure this logical
INFO  – page size matches the size of the source database. The default logical page
INFO  – size in previous Adaptive Server versions was 2KB.
INFO  – Building Adaptive Server ‘ASE1’:
INFO  – Writing entry into directory services…
INFO  – Directory services entry complete.
INFO  – Building master device…
INFO  – Master device complete.
INFO  – Starting server…
INFO  – Server started.
INFO  – Set SA password…
INFO  – SA password is set.
INFO  – Building sysprocs device and sybsystemprocs database…
INFO  – sysprocs device and sybsystemprocs database created.
INFO  – Running installmaster script to install system storedprocedures…
INFO  – installmaster: 10% complete.
INFO  – installmaster: 20% complete.
INFO  – installmaster: 30% complete.
[ … ]
INFO  – Server ‘ASE1’ was successfully created.
INFO  – Connecting to the dataserver using the host and query port asece:10001.
INFO  – Creating the Local System Temporary device ASE1_LST at /opt/sybase/data/mycluster.ASE1.LST of size 100M.
INFO  – Creating the Local System Temporary device ASE2_LST at /opt/sybase/data/mycluster.ASE2.LST of size 100M.
INFO  – The cluster is now configured. Shutting down this first instance.
The cluster mycluster was successfully created

Once the cluster is created, you can use sybcluster to administer it. A useful sub-command of sybcluster is ‘help’: it displays the complete list of commands and sub-commands you can issue.

Below are examples of various commands: retrieving the cluster status, starting the cluster, stopping the cluster, and starting an individual instance. Before running an actual command with sybcluster, you first have to connect to the shared-disk cluster with the ‘connect’ command.

[sybase@asece ~]$ sybcluster -U uafadmin -P sybase -C mycluster -F "node1,node2"
> connect

Check the status of a shared-disk cluster.

The ‘show cluster status’ command displays the status of the cluster and its instances. Here, the shared-disk cluster is down.

mycluster> show cluster status
INFO  – Listening for the cluster heartbeat. This may take a minute. Please wait… (mycluster::AseProbe:434)

      Id  Name  Node  State  Heartbeat
      —  —-  —–  —–  ———
      1  ASE1  asece  Down      No 
      2  ASE2  asece  Down      No 
      —  —-  —–  —–  ———
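For monitoring scripts, the status table above can be parsed mechanically. The sketch below is a minimal example, assuming the tabular format shown above (Id, Name, Node, State, Heartbeat columns); it does not invoke sybcluster itself, it only counts instances reported as Down.

```shell
#!/bin/sh
# Sketch: count Down instances in 'show cluster status' output.
# Assumes the column layout shown above (State is the 4th field).
count_down() {
    awk '$4 == "Down" { n++ } END { print n+0 }'
}

# Sample rows taken from the session above:
status='      1  ASE1  asece  Down      No
      2  ASE2  asece  Down      No'

printf '%s\n' "$status" | count_down    # prints 2
```

A cron job could alert whenever the count is non-zero.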

Start the shared-disk cluster.

The following command starts the entire shared-disk cluster; the instances are started individually. The errorlog of the shared-disk cluster is displayed. Error messages of all the instances are logged in the same errorlog. The first number in an errorlog line is the instance number (01 for the first instance, ASE1, and 02 for the second instance, ASE2).

mycluster> start cluster
INFO  – Starting the cluster mycluster instance ASE1 using the operating system command:
/opt/sybase/ASE-15_0/bin/dataserver --quorum_dev=/opt/sybase/data/mycluster.quorum --instance_name=ASE1

[ … ]
INFO  – 01:0002:00000:00001:2013/04/18 11:30:55.04 server  Database ‘sybsystemprocs’ is now online.
INFO  – 01:0002:00000:00089:2013/04/18 11:30:55.05 kernel  network name asece, interface IPv6, address ::ffff:127.0.0.2, type tcp, port 10001, filter NONE
INFO  – 01:0002:00000:00001:2013/04/18 11:30:55.06 server  Skipping recovery of local temporary database ‘mycluster_tdb_2’ (dbid 5) because it is not owned by the booting cluster instance.
INFO  – 01:0002:00000:00001:2013/04/18 11:30:55.06 server  Recovery started cleaning up lock cache.
INFO  – 01:0002:00000:00001:2013/04/18 11:30:55.06 server  Recovery cleaned up 0 unused locks.
INFO  – 01:0002:00000:00001:2013/04/18 11:30:55.08 server  Recovery complete.
INFO  – 01:0002:00000:00001:2013/04/18 11:30:55.08 kernel  recovery event handler task 3604508 is started.
INFO  – 01:0002:00000:00001:2013/04/18 11:30:55.08 server  ASE’s default unicode sort order is ‘binary’.
INFO  – 01:0002:00000:00001:2013/04/18 11:30:55.08 server  ASE’s default sort order is:
INFO  – 01:0002:00000:00001:2013/04/18 11:30:55.08 server        ‘bin_iso_1’ (ID = 50)
INFO  – 01:0002:00000:00001:2013/04/18 11:30:55.08 server  on top of default character set:
INFO  – 01:0002:00000:00001:2013/04/18 11:30:55.08 server        ‘iso_1’ (ID = 1).
INFO  – 01:0002:00000:00001:2013/04/18 11:30:55.08 server  Master device size: 60 megabytes, or 30720 virtual pages. (A virtual page is 2048 bytes.)
INFO  – 01:0002:00000:00001:2013/04/18 11:30:55.09 kernel  Warning: Cannot set console to nonblocking mode, switching to blocking mode.
INFO  – 01:0002:00000:00001:2013/04/18 11:30:55.09 kernel  Console logging is disabled. This is controlled via the ‘enable console logging’ configuration parameter.

[ … ]

INFO  – Starting the cluster mycluster instance ASE2 using the operating system command:
/opt/sybase/ASE-15_0/bin/dataserver --quorum_dev=/opt/sybase/data/mycluster.quorum --instance_name=ASE2
[ … ]

mycluster>
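Because all instances write to the same errorlog, it can be handy to filter messages for a single instance by that leading instance number. A minimal sketch, assuming the raw errorlog line format (without the INFO prefix sybcluster adds); the 02 sample line below is hypothetical, made up in the same format for illustration:

```shell
#!/bin/sh
# Sketch: keep only errorlog lines for a given instance number (e.g. 01).
# Assumes lines start with 'NN:' as in the transcript above.
filter_instance() {
    awk -F: -v id="$1" '$1 == id'
}

# First line is from the transcript above; the 02 line is a hypothetical sample.
log='01:0002:00000:00001:2013/04/18 11:30:55.08 server  Recovery complete.
02:0002:00000:00001:2013/04/18 11:31:02.11 server  (hypothetical instance 02 message)'

printf '%s\n' "$log" | filter_instance 01    # prints only the 01 line
```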

Instances can be started individually as well:

mycluster> start instance ASE1
INFO  – Starting the cluster mycluster instance ASE1 using the operating system command:
/opt/sybase/ASE-15_0/bin/dataserver --quorum_dev=/opt/sybase/data/mycluster.quorum --instance_name=ASE1
[ … ]

Shutdown the shared-disk cluster.

The following command stops the entire shared-disk cluster; all the instances are stopped.

mycluster> shutdown cluster
Are you sure you want to shutdown the cluster? (Y or N):  [ N ] Y
INFO  – Shutdown of cluster mycluster has completed successfully.

mycluster> show cluster status
INFO  – Listening for the cluster heartbeat. This may take a minute. Please wait… (mycluster::AseProbe:434)

      Id  Name  Node  State  Heartbeat
      —  —-  —–  —–  ———
      1  ASE1  asece  Down      No 
      2  ASE2  asece  Down      No 
      —  —-  —–  —–  ———

Instances can be stopped individually with the ‘shutdown instance’ command.

Administration GUIs.
Two graphical administration tools are available: Sybase Central and Sybase Control Center (SCC). Like the sybcluster utility, both require the SCC agent to be running on the shared-disk cluster. Sybase Central is the older tool; SCC is the current administration tool. Both tools offer the following features:

  • Display cluster properties
  • Start/Shutdown a cluster
  • Display the status of a cluster
  • Add/Drop an instance to/from a cluster
  • Display instance property
  • Start/stop an instance
  • Manage logical clusters
  • Select or configure the load profiles that the system uses
  • Monitor instances in the cluster and the workload on each instance

[Figure: Sybase Central screenshot]

Logical clusters and application partitioning.

Description.
A logical cluster is an abstract representation of one or more instances in a physical shared-disk cluster. Each logical cluster has a set of instances it runs on and can have a set of instances to which it fails over. Routing rules direct incoming connections to specific logical clusters based on an application, user login, or server alias supplied by the client. Applications and clients are associated with logical clusters within the shared-disk cluster, as depicted below:

[Figure: applications mapped to logical clusters within the shared-disk cluster]

Purpose.
Defining logical clusters is a way to split a shared-disk cluster into subsets of instances and associate them with applications and logins. The main idea behind logical clusters is to specialize and dedicate certain instances to certain applications. This is called application partitioning with shared-disk clusters. Partitioning applications avoids unnecessary cache and cluster information exchanges between the cluster nodes (due to distributed buffer and lock management).
Basically, the best method for scaling OLTP workloads on a shared-disk cluster is to partition the applications and data into mutually exclusive sets (that is, to separate the data into different databases), so that processing is not coordinated across server instances and the data for an application is accessed from a single instance. Because of this, you must carefully consider how you partition data at the database level to eliminate log and data contention across the participating instances.
Partitioning applications is a best practice for shared-disk clusters, as it ensures that physical resources are properly and optimally used.

Setting up logical clusters.

In the previous graphic, the “Sales LC” logical cluster was defined to handle applications and logins from the Sales department. Let’s see how to create this logical cluster, define which instances form it, and associate it with two applications and a login:

exec sp_cluster logical, “create”, SalesLC

exec sp_cluster logical, “add”, SalesLC, instance, ASE1
exec sp_cluster logical, “add”, SalesLC, instance, ASE2

exec sp_cluster logical, “add”, SalesLC, route, application, “field_sales; sales_reports”
exec sp_cluster logical, “add”, SalesLC, route, login, sales_web_user

The last two statements cause the login name “sales_web_user” and the applications “field_sales” and “sales_reports” to be routed to the ASE1 or ASE2 instances. The same can be achieved with the graphical administration tools.

Conclusion.
I have described the basic features of ASE Cluster Edition. There are other interesting shared-disk cluster features to cover, such as workload management and connection redirection. ASE Cluster Edition is a rich and powerful solution. Nevertheless, cluster solutions like ASE Cluster Edition or failover clusters do not protect against data disasters; they are aimed at protecting against hardware failures such as CPU, memory, or network failures.
My next post will cover another Business Continuity solution: Replication Server, which has been used for two decades to protect data.

References.

[1] “SAP Sybase Adaptive Server Enterprise Getting Started with the Sybase Database and the SAP System” http://www.sdn.sap.com/irj/scn/go/portal/prtroot/docs/library/uuid/0040e969-b4a1-2f10-998d-e0eeec6fb284?QuickLink=index&overridelayout=true&55963423912057

[2] SAP Note 1650511 SYB: High Availability Offerings with SAP ASE

[3] “Clusters Users Guide Adaptive Server® Enterprise 15.7” – DOCUMENT ID: DC00768-01-1570-01 LAST REVISED: February 2012

http://infocenter.sybase.com/help/topic/com.sybase.infocenter.dc00768.1570/pdf/ase_ce_ug.pdf

[4] “jConnect for JDBC 7.0 Programmers Reference > Programming Information > Working with databases > Implementing high availability failover support” – Chapter 2: Programming Information / Implementing failover in jConnect http://infocenter.sybase.com/help/index.jsp?topic=/com.sybase.infocenter.dc39001.0700/html/prjdbc0700/x39002.htm

