Part I: Introduction (Creating SAP system clones using Solaris 10 virtualization concepts (Part 1))
Part II: Creating the shadow systems (Creating SAP system clones using Solaris 10 virtualization concepts (Part 2))

This is the third part of a blog series describing how to easily create runnable shadow copies of productive systems using the native OS virtualization features of Sun Solaris 10 (Solaris Zones). These shadow systems can be used for applying updates, patches or similar tasks which would otherwise result in a downtime of the productive system. After the desired task has been performed successfully, the shadow systems can be switched with the productive systems. This way the planned downtime of the productive system landscape can be reduced dramatically. This part describes the basic steps for switching the zones.

h2. Switching the Systems

+Caution: Do not restart the zones during the following steps! Make sure that the shadow systems are started before performing the steps described below.+

h3. Optional: Promoting the clone (replacing the file system with the clone)

Before starting to promote the clone, make sure that the shadow systems are started and running smoothly. In this step the ZFS clone becomes the new "master" volume:

zfs promote pool/shadow_volume

The old volume implicitly becomes the clone. This step is required if you want to delete the old file system afterwards, but it can also be performed later.
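If you want to follow up on the promotion, you can check which dataset is now the clone; a quick, optional sanity check, assuming the pool and volume names used above:

zfs list -o name,origin -r pool   (shows each dataset and the snapshot it originates from)

Before the promotion the shadow volume lists a snapshot of the original volume as its origin; after the promotion the relationship is reversed and the old volume appears as the clone.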
h3. Adapting the zone configuration files

1. When switching the zones, the shadow systems become the productive systems. Therefore they (sap_zone_2, db_zone_2) should acquire the resources which were used by the productive zones (sap_zone_1, db_zone_1) before. To do so, change the pool configuration within the zone configuration files:

zonecfg -z sap_zone_1
set pool=sap_shadow_pool
verify
commit
exit

zonecfg -z sap_zone_2
set pool=sap_prod_pool
verify
commit
exit

Repeat these steps for configuring the DB zone using the related pools (see the sketch at the end of this section).

2. Alter the IP addresses, switching the public address from the old system to the new one. These changes will only take effect when the zone is restarted.

First adapt the configuration of the productive zones (sap_zone_1, db_zone_1):

zonecfg -z sap_zone_1
select net address=10.17.70.105/22   (= public address available via DNS)
set address=192.168.1.10/24
end
verify
commit
exit

After this step the public IP address 10.17.70.105 is no longer bound to the productive zone and is replaced by a private address.

Now adapt the configuration of the shadow zones:

zonecfg -z sap_zone_2
select net address=192.168.2.10/24
set address=10.17.70.105/22
end
verify
commit
exit

This binds the public address to the dummy interface created while copying the zones.

If you configured a public network for the database, repeat these steps for the database, too.
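As an illustration of the "repeat for the DB zone" step above, the pool change for the database zones might look as follows; the pool names db_prod_pool and db_shadow_pool are assumptions and have to be replaced with the pool names actually used in your landscape:

zonecfg -z db_zone_1
set pool=db_shadow_pool   (= assumed name of the pool currently used by the shadow DB zone)
verify
commit
exit

zonecfg -z db_zone_2
set pool=db_prod_pool     (= assumed name of the pool currently used by the productive DB zone)
verify
commit
exit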
h3. Optional: Switching some DIs in advance

To apply the update from the primary application server (= Central Instance) to the additional application servers (= Dialog Instances) via SAPCPE, it is required to restart the application servers after switching the systems (1). This is also required because the enqueue locking tables differ. Therefore it can make sense to switch some of the DIs in advance. To do so, further considerations regarding the network, /sapmnt and so on are required.

+Hint: It is absolutely required to ensure that the already switched DIs are separated from the public network as long as the systems are not switched. Otherwise you may face situations of mixed system landscapes running on different releases, which may cause unpredictable errors.+
h3. Switching Resource Pools
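Since the zones must not be restarted, the pool assignment changed in the zone configuration above does not yet apply to the running zones. One possible way to rebind a running zone to its new resource pool without a reboot is poolbind; a minimal sketch, assuming the pool and zone names used above and that dynamic resource pools are enabled:

poolbind -p sap_prod_pool -i zoneid `zoneadm list -p | grep ':sap_zone_2:' | cut -d: -f1`   (bind the running shadow zone to the productive pool; the zone ID is taken from the first field of zoneadm list -p)

Repeat this analogously for sap_zone_1 and the DB zones with their respective pools.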
h3. Shutting down / Starting up interfaces

+Hint: After performing this step the systems are switched. To avoid unexpected behavior you should first shut down all DIs connected to the productive zone (sap_zone_1); they need to be restarted to apply the changes anyway.+

1. Optional: If you want to follow the switch of the system, open the SAP Management Console and take a look at the process IDs of the different systems.

+Figure 1: The process ID of JControl before switching the systems (rigsunvirtual01 is the public virtual hostname registered via DNS)+

+Hint: rigsun02 is currently linked to rigsunvirtual01, i.e. the process IDs are the same.+

2. Shut down the interfaces of the productive hosts (within the global zone):

ifconfig ce0:1 down   (= sap_zone_1 public interface)

3. Optional: If you have configured a public interface for the DB instance, shut down this interface as well:

ifconfig ce0:2 down   (= db_zone_1 public interface)

4. Start up the interface for the new zones with the specified public net address of the virtual SAP host (i.e. rigsunvirtual01). The interface used for this is the "wooden leg" described in Part II (Creating SAP system clones using Solaris 10 virtualization concepts (Part 2)):

ifconfig ce0:3 10.18.70.25 netmask 255.255.252.0 up   (= the new public interface of sap_zone_2)

To avoid confusion you may start up the sap_zone_1 interface with the dummy address beforehand:

ifconfig ce0:1 192.168.253.1 netmask 255.255.255.0 up

5. Optional: If you have configured a public interface for the DB instance, start up this interface, too:

ifconfig ce0:4 10.18.70.26 netmask 255.255.252.0 up   (= the new public interface of db_zone_2)

6. Optional: If you want to follow the switch of the system, take a look at the SAP MC again:
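Independent of the SAP MC, the switch can also be verified on OS level; a small, optional check, assuming the interface and virtual hostname used above:

ifconfig ce0:3   (the public address should now be plumbed on this interface and flagged UP)
ping rigsunvirtual01   (the virtual hostname should now be answered by the former shadow system)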