Clustering
Installation Manual
Version 1.0
This manual describes the steps for configuring a server cluster.
This chapter describes the software, hardware, and network requirements for
configuring the cluster.
• Storage cables to attach the shared storage device to all computers. Refer to the
manufacturer's instructions for configuring storage devices. See the appendix that
accompanies this article for additional information on specific configuration needs
when using SCSI or Fibre Channel.
• All hardware should be identical, slot for slot, card for card, including BIOS and
firmware revisions, for all nodes. This makes configuration easier and
eliminates compatibility problems.
Note: Server Clustering does not support the use of IP addresses assigned from
Dynamic Host Configuration Protocol (DHCP) servers.
• Each node must have at least two network adapters—one for connection to the
client public network and the other for the node-to-node private cluster network.
A dedicated private network adapter is required for HCL certification.
• All nodes must have two physically independent LANs or virtual LANs for public
and private communication.
• If you are using fault-tolerant network cards or network adapter teaming, verify
that you are using the most recent firmware and drivers. Check with your network
adapter manufacturer for cluster compatibility.
• All shared disks, including the quorum disk, must be physically attached to a
shared bus.
• Shared disks must be on a different controller than the one used by the system
drive.
• Creating multiple logical drives at the hardware level in the RAID configuration is
recommended rather than using a single logical disk that is then divided into
multiple partitions at the operating system level. This is different from the
configuration commonly used for stand-alone servers. However, it enables you to
have multiple disk resources and to do Active/Active configurations and manual
load balancing across the nodes in the cluster.
• Verify that disks attached to the shared bus can be seen from all nodes. This can
be checked at the host adapter setup level. Refer to the manufacturer’s
documentation for adapter-specific instructions.
• SCSI devices must be assigned unique SCSI identification numbers and properly
terminated according to the manufacturer's instructions. See the appendix with
this article for information on installing and terminating SCSI devices.
• All shared disks must be configured as master boot record (MBR) disks on
systems running the 64-bit versions of Windows Server 2003
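The partition style of a shared disk can be checked from the command line with
diskpart. The transcript below is illustrative only (the disk number is
hypothetical, and converting a disk destroys any existing partitions on it):

```shell
:: Illustrative diskpart transcript; not a script to run blindly.
diskpart
DISKPART> list disk
:: Disks marked with * in the Gpt column use the GPT style. To convert an
:: empty disk to MBR (this destroys any existing partitions on it):
DISKPART> select disk 1
DISKPART> clean
DISKPART> convert mbr
```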
5. Repeat steps 1 through 3, and then rename the public network adapter as Public.
6. An illustration is displayed as shown below.
3. In the Connections box, make sure that your bindings are in the following order, and
then click OK:
a. Public
b. Private
c. Remote Access Connections
2. On the General tab, make sure that only the Internet Protocol (TCP/IP) check box is
selected, as shown in Figure 3 below. Click to clear the check boxes for all other
clients, services, and protocols.
6. On the General tab, verify that you have selected a static IP address that is not on
the same subnet or network as any other public network adapter. It is recommended
that you put the private network adapter in one of the following private network
ranges: 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16 (the RFC 1918 private ranges).
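As a quick sanity check, the RFC 1918 private ranges can be matched by prefix with
a small shell sketch (the is_private helper is our own illustration, not part of
any Windows tool):

```shell
# Illustrative helper (our own): classify an IPv4 address as RFC 1918
# private or public by prefix matching.
is_private() {
  case "$1" in
    10.*) echo private ;;
    192.168.*) echo private ;;
    172.1[6-9].*|172.2[0-9].*|172.3[0-1].*) echo private ;;
    *) echo public ;;
  esac
}

is_private 10.10.10.1   # prints: private
is_private 172.16.0.5   # prints: private
is_private 131.107.0.1  # prints: public
```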
10. On the DNS tab, verify that no values are defined. Make sure that the Register this
connection's addresses in DNS and Use this connection's DNS suffix in DNS
registration check boxes are cleared.
11. On the WINS tab, verify that there are no values defined. Click Disable NetBIOS
over TCP/IP as shown in Figure below.
13. When you close the dialog box, you may receive the following prompt: “This
connection has an empty primary WINS address. Do you want to continue?” If you
receive this prompt, click Yes.
15. If you have a network adapter that is capable of transmitting at multiple speeds, you
should manually specify a speed and duplex mode. Do not use an auto-select
setting for speed, because some adapters may drop packets while determining the
speed. The speed for the network adapters must be hard set (manually set) to be
the same on all nodes according to the card manufacturer's specification. If you are
not sure of the supported speed of your card and connecting devices, Microsoft
recommends you set all devices on that path to 10 megabits per second (Mbps)
and half duplex, as shown in Figure below.
Note: For the public adapter on Node 2, enter the IP address of Node 1 in the
Preferred DNS server field and the IP address of Node 2 in the Alternate DNS
server field.
Figure 4-1: Server clustering: Add or remove programs short cut menu
2. Select Add or Remove Components and this displays the Windows Components
Wizard as shown below.
6. The system now prompts you to insert the Operating System CD in the CD drive as
shown below.
9. Click Next.
15. Now right-click the Reverse Lookup zone to create a new zone as shown below.
27. Enter the domain name in the Primary DNS suffix text field as shown above.
30. After the system boots up, follow the steps below to configure Active
Directory.
4. Click OK.
5. This displays the welcome screen of the Active Directory Installation Wizard as
shown below.
Figure 4-26: Server Clustering: Active directory installation wizard welcome screen
6. Click Next.
8. Select ‘Domain Controller for a new domain’ radio button as shown below.
14. Enter the NetBIOS domain name as shown in the illustration below.
18. Select the SYSVOL folder location as shown in the illustration below.
28. This displays the Windows Set Up installing the DNS server as shown below.
31. This displays the active directory wizard installation finish screen as shown below.
33. This displays a message for restarting the system as shown below.
4. Click OK.
5. This displays the welcome screen of the Active Directory Installation Wizard as
shown below.
Figure 4-56: Server Clustering: Active directory installation wizard welcome screen
6. Click Next.
8. Select ‘Domain Controller for a new domain’ radio button as shown below.
14. Enter the NetBIOS domain name as shown in the illustration below.
18. Select the SYSVOL folder location as shown in the illustration below.
Note: If you configure the domain controller as mentioned in Section 4.1.2, the
screen (Figure 2-29) is displayed prompting you to configure the DNS. If you
configure the domain controller as mentioned in Section 4.1.1, the screen (Figure
2-30) is displayed.
20. Select ‘Install and configure DNS server on this computer’ radio button as shown
below.
23. Select ‘Permissions compatible with only Windows 2000’ radio button as shown
below.
27. This displays the summary of the selections made so far as shown below.
30. A dialog box is displayed prompting you to insert the disk as shown below.
32. This displays the Windows Set Up installing the DNS server as shown below.
35. This displays the active directory wizard installation finish screen as shown below.
37. This displays a message for restarting the system as shown below.
39. If you choose ‘Restart Now’ button this shuts down the system as shown below.
5. Enter the computer name and domain as shown in the illustration below.
If one cluster node in a two-node cluster is a domain controller, the other node must be a
domain controller as well.
If the cluster nodes are the only domain controllers, then each must be a DNS server as
well. They should point to each other for primary DNS resolution and to themselves for
secondary resolution.
The first domain controller in the forest/domain will take on all Operations Master Roles.
You can redistribute these roles to any node. However, if a node fails, the Operations
Master Roles assumed by that node will be unavailable. Therefore, it is recommended
that you do not run Operations Master Roles on any cluster node. This includes the Schema
Master, Domain Naming Master, Relative ID Master, PDC Emulator, and Infrastructure
Master. These functions cannot be clustered for high availability with failover.
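On Windows Server 2003 with the Windows Support Tools installed, the current role
holders can be listed from a command window; this is a hedged example (the exact
output format varies by version):

```shell
:: List the current Operations Master (FSMO) role holders for the domain.
:: Requires the Windows Support Tools on Windows Server 2003.
netdom query fsmo
```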
Enter the lookup command (for example, nslookup) in the terminal; this displays the
configured system as shown below.
Several steps must be taken before configuring the Cluster service software. These steps
are:
• Setting up disks.
Perform these steps on each cluster node before proceeding with the installation of
cluster service on the first node.
To configure the cluster service, you must be logged on with an account that has
administrative permissions to all nodes. Each node must be a member of the same
domain. If you choose to make one of the nodes a domain controller, have another
domain controller available on the same subnet to eliminate a single point of failure and
enable maintenance on that node.
• Connect the SCSI cables (A2, B2) of MSA to PCI slots of Node 2.
1. Power ON MSA. Wait until the MSA front panel LCD screen displays ‘MSA start up
complete’.
3. Select Start > Programs > HP Array Configuration Utility as shown below.
6. This displays the configuration available for the selected controller as shown in the
figure below.
10. Select the physical drives for the new array as shown below.
15. Select ‘Disable’ for Max Boot and ‘Enable’ for Array Accelerator as shown below.
25. Select Disk Management and this displays the disk wizard welcome screen as
shown below.
28. This displays the success of completing the initialization of disk as shown below.
30. Now select ‘Disk Management’ in the explorer window as shown below.
32. Select Disk 1 which is unallocated, right click and select ‘Convert to Basic Disk’ as
shown below.
34. Now select the disk, right click and select New Partition as shown below.
39. Select the partition size as 996 MB as shown below and click Next.
Note: At OnMobile, the convention followed for naming the quorum disk is Z instead
of Q.
45. The window displays the disk and the selected configurations as shown below.
47. This displays the assigned letters for the drives as shown below.
49. Switch OFF Node 1 and Switch ON Node 2 by keeping MSA On.
50. Perform the same steps from 1 to 48 as mentioned above for Node 2.
51. Create a folder with any name in the MSA drive and switch OFF Node 2.
52. Now switch ON Node 1 and ensure that the created folder is available in the same
MSA drive.
8. Right-click Cluster in the left pane of the Active Directory Users and Computers
snap-in, and then click Properties on the shortcut menu.
11. This displays the ‘Select Groups’ dialog box as shown below.
As seen in the flow chart, the Form (Create a new Cluster) and the Join (Add nodes)
operations take a couple of different paths, but they have a few of the same pages.
Namely, Credential Login, Analyze, and Re-Analyze and Start Service are the same.
There are minor differences in the following pages: Welcome, Select Computer, and
Cluster Service Account. In the next two sections of this lesson, you will step
through the wizard pages presented on each of these configuration paths. In the
third section, after you follow the step-through sections, this manual describes in
detail the Analyze, Re-Analyze and Start Service pages, and what the information
provided in these screens means.
Note: During Cluster service configuration on Node 1, you must turn off all other
nodes. All shared storage devices should be turned on.
1. Select Start > Programs > Administrative Tools > Cluster Administrator as shown
below.
5. Click Next.
Figure 5-43: Server Clustering: Enter cluster name and domain name
8. Click Next.
11. This action makes the wizard determine whether there are any existing clusters,
as shown below.
14. Enter the user name and password and select the domain under which the cluster
will be configured as shown below.
24. This displays the success message of completing the configuration as shown below.
d. Cluster Name
e. Cluster IP Address
f. Disk Q
g. Disk Z
28. Select Active Groups under Node 1 and this displays the configured node for the
cluster as shown below.
32. Enter the computer name which you wish to be the member of the domain as shown
below.
36. This displays the summary for the added node as shown below.
2. In the left pane, click Cluster Configuration, click Networks, right-click Private, and
then click Properties.
3. Click Internal cluster communications only (private network), as shown in Figure
below.
5. Right-click Public and then click Properties (as shown in Figure below).
6. Click to select the Enable this network for cluster use check box.
7. Click the All communications (mixed network) option, and then click OK.
5. Click OK.
Note: By default, all disks not residing on the same bus as the system disk will have
Physical Disk Resources created for them, and will be clustered. Therefore, if
the node has multiple buses, some disks may be listed that will not be used as
shared storage, for example, an internal SCSI drive. Such disks should be
removed from the cluster configuration. If you plan to implement volume mount
points for some disks, you may want to delete the current disk resources for
those disks, delete the drive letters, and then create a new disk resource
without a drive letter assignment.
2. Right-click the cluster name in the upper-left corner and then Click Properties.
3. Click the Quorum tab.
4. In the Quorum resource list box, select a different disk resource. In the Figure below,
Disk Q is selected in the Quorum resource list box.
2. In Node 1 click Start, click Programs, click Administrative Tools, and then click
Cluster Administrator, as shown in Figure 26 below.
3. Right-click the Disk Group 1 group, and then click Move Group. The group and all its
resources will be moved to another node. After a short period of time, the Disk F: G:
will be brought online on the second node. Watch the window to see this shift. Quit
Cluster Administrator.
4. Congratulations! You have completed the configuration of the cluster service on all
nodes. The server cluster is fully operational. You are now ready to install cluster
resources such as file shares, printer spoolers, cluster aware services like
Distributed Transaction Coordinator, DHCP, WINS, or cluster-aware programs such
as Exchange Server or SQL Server.
6. Click Next.
9. Select ‘To all domain controllers in the Active Directory domain’ as shown below.
16. This displays the successful completion of the new zone as shown below.
18. In the DNS management window, select the added IP address for reverse lookup;
this displays the records as shown below.
19. Select Forward Lookup Zones; this displays the configured zones as shown
below.
3. Click OK.
4. Select Start > Administrative Tools > Cluster Administrator as shown below.
2. Power ON MSA.
3. Power ON Node1.
13. Select Start > Programs > Administrative Tools > Cluster Administrator.
16. This displays the form for creating New Resource as shown below.
17. Enter a name and description for the Resource and select the ‘Resource Type’ and
Group and click on Next.
18. This displays the form selecting the possible owners as shown below.
23. Execute the Move Group on Cluster Group and SAPDB TECH and ensure that all
the resources are online for Node 2.
8.1 Requirements
8.1.1 Servers
• 2 servers (identical servers recommended), each with 2 network
interfaces. Servers must have identical SCSI RAID cards (2x2 cards).
• 1 dual-channel-capable shared storage array (fully redundant). (Note: the HP MSA
500 in a fully redundant configuration requires 2 MSA G2 controllers and a 4-port
dual SCSI connector.)
8.1.2 Software
• Windows 2000 Advanced Server or Windows Server 2003 Enterprise Edition
• SAP DB 7.3.0.48
8.1.3 IP addresses
• 2 IP addresses for the private network between the two servers.
• 4 IPs on the public network: 1 for each server, 1 for the cluster, and 1 for the
SAPDB clustered database.
2. Create a new domain with node1 as domain controller. (Required for cluster)
3. Set up RAID 0+1 or better for the disks on the storage array. Create two logical
RAID disks: the quorum disk (500 MB to 1 GB) and the disk for the database instance.
Using Disk Management, write signatures to both logical drives, then format and
create the new disks (Y: 1 GB, Z: the rest).
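The same layout can be sketched from the command line with diskpart on Windows
Server 2003 (the disk numbers below are hypothetical, and the Disk Management GUI
path described above is equally valid):

```shell
:: Illustrative diskpart transcript: quorum disk Y: and database disk Z:.
:: Disk numbers are hypothetical; each logical RAID disk gets one partition.
diskpart
DISKPART> select disk 1
DISKPART> create partition primary
DISKPART> assign letter=Y
DISKPART> select disk 2
DISKPART> create partition primary
DISKPART> assign letter=Z
DISKPART> exit
:: Format from cmd afterwards:
format Y: /FS:NTFS /Q
format Z: /FS:NTFS /Q
```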
5. On node 1, create a new cluster, specifying the domain created earlier along with a
cluster IP address (on "Local Area Connection") and the quorum disk (Y:).
6. Make node 2 join the existing cluster. Before doing this, make sure that the domain
controller machine is a member of the Administrators group in the domain.
8.4.1 Assumptions
Cluster IP 192.168.21.180
Quorum Disk Y:
8.4.2 Node 1
Note: Make sure that Node 1 owns Z: and the cluster resources at this time.
1. Install the SAPDB server to Z:\sapdb. You should now have these three folders:
Z:\sapdb\data, Z:\sapdb\depend, Z:\sapdb\prog.
For the Ozone installation, create the MMP database with data files created in
Z:\sapdb\data. Create the OZONE database with data files created in Z:\sapdb\data.
3. Stop the SAPDB MMP, SAPDB OZONE, and XServer services, and make sure they are
set to manual startup.
4. Copy the cluster files from Z:\sapdb\depend\cluster to the correct locations (rename
the files already present):
Z:\sapdb\depend\cluster\dbmsrv_clu.exe to Z:\sapdb\depend\pgm\dbmsrv.exe
Z:\sapdb\depend\cluster\serv_clu.exe to Z:\sapdb\prog\pgm\serv.exe
Z:\sapdb\depend\cluster\service_clu.exe to Z:\sapdb\depend\pgm\service.exe
Z:\sapdb\depend\cluster\stp_clu.exe to Z:\sapdb\depend\pgm\stp.exe
Z:\sapdb\depend\cluster\strt_clu.exe to Z:\sapdb\depend\pgm\strt.exe
Z:\sapdb\depend\cluster\*.dll to C:\WINNT\cluster
Z:\sapdb\depend\cluster\SAPDBMSCSMan.exe to C:\WINNT\cluster
C:\WINNT\cluster\SAPDBMSCSMan.exe -C
7. Add the resource for the network. In addition to the variables below, the %network%
variable must hold the name of the cluster network to use (in this manual the public
network is named Public):
set ipaddress=192.168.21.90
set netmask=255.255.255.0
C:\WINNT\cluster\SAPDBMSCSMan.exe -B
"%network%,%ipaddress%,%netmask%,Disk Z:"
8. Add the resource for the shared disks used by the MMP database.
C:\WINNT\cluster\SAPDBMSCSMan.exe -a MMP
C:\WINNT\cluster\SAPDBMSCSMan.exe -a OZONE
10. Open the Microsoft Cluster Administrator. In the properties of the database
MMP/Ozone instance, select "Run this resource in a separate resource monitor".
The resources for the physical disk drives, the IP address, the X server and the
database instance are displayed under the "SAP DBTech" group. Also change the
failover times for all resources (values of 30 seconds for all resources are suitable;
the default displayed is 900).
11. Move the instance-independent SAPDB software to the local hard drive: copy
Z:\sapdb\prog and all its contents to C:\sapdb\prog.
x_server remove
x_server install
irconf -s
irconf -i -p C:/sapdb/prog/runtime/7240
irconf -r -p Z:/sapdb/prog/runtime/7240
irconf -i -p C:/sapdb/prog/runtime/7250
irconf -r -p Z:/sapdb/prog/runtime/7250
irconf -i -p C:/sapdb/prog/runtime/7300
irconf -r -p Z:/sapdb/prog/runtime/7300
irconf -i -p C:/sapdb/prog/runtime/7301
irconf -r -p Z:/sapdb/prog/runtime/7301
15. Change the sapdb entries in the PATH environment variable to point to C:\sapdb\prog
instead of Z:\sapdb\prog.
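The substitution itself can be sketched in POSIX shell (illustrative only; on
Windows you would edit PATH under System Properties > Environment Variables, and
the sample PATH value below is hypothetical):

```shell
# Illustrative only: rewrite sapdb PATH entries from the shared Z: drive
# to the local C: drive. The sample PATH value is hypothetical.
OLD_PATH='C:\WINNT;Z:\sapdb\prog\bin;Z:\sapdb\prog\pgm'
NEW_PATH=$(printf '%s' "$OLD_PATH" | sed 's|Z:\\sapdb\\prog|C:\\sapdb\\prog|g')
printf '%s\n' "$NEW_PATH"   # prints: C:\WINNT;C:\sapdb\prog\bin;C:\sapdb\prog\pgm
```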
8.4.3 Node 2
Note: You must NOT run the SAPDB server installer on Node 2.
x_server install
irconf -i -p C:/sapdb/prog/runtime/7250
irconf -i -p C:/sapdb/prog/runtime/7300
irconf -i -p C:/sapdb/prog/runtime/7301
6. Make Node 2 the owner of the "SAPDBTech" cluster resource group. To do this,
right-click Node 1 in Cluster Administrator and select "Pause Node". Then right-click
"SAP DBTech" and select "Move Group".
x_server start
8. Open Cluster Administrator. Delete any resources in the "SAPDBTech" group
starting with _ (underscore). Bring all resources in the SAPDBTech group online on
Node 2. Use Cluster Administrator to resume Node 1.
9. With the database running on Node 2, run this command from a command window
(this registers the DCOM libraries):
xregcomp Z:\sapdb\depend\pgm\dbpinstall
10. Open Cluster Administrator, right-click the SAP DBTech group, and add a new
resource of type "Network Name" with the name "Sap DBTech-HostName". Click Next; on
the page for possible owners, set both nodes (NODE1 & NODE2) as owners; on the
dependencies page, add the resource dependency "SAP DBTechIP-Address". Specify a
name such as "dbserver". Bring this resource online.
11. On both DB servers and all clients, in the hosts file
(C:\WINNT\system32\drivers\etc\hosts) add an entry mapping the SapdbTech hostname
to the SapdbTech IP address (substituting appropriate values for both). In this
example, the SapdbTech hostname is 'dbserver' and the SapdbTech IP address is
'192.168.21.90'.
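The resulting hosts entry can be sketched as below; a temporary file stands in for
the real Windows hosts path, and the name and address are the example values above:

```shell
# Illustrative sketch: write the clustered DB host entry and verify it.
# A temp file stands in for C:\WINNT\system32\drivers\etc\hosts.
HOSTS_FILE=$(mktemp)
printf '%s\t%s\n' '192.168.21.90' 'dbserver' >> "$HOSTS_FILE"
grep -w 'dbserver' "$HOSTS_FILE"   # shows the entry just written
```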
• All SQL clients must use the clustered database IP for connections (in the example
above, 192.168.21.90).
• The other node will automatically take over and bring up the database in case of
hardware/software failures on the database node.
• From the perspective of clients, any switchover appears as if the database has
gone down (normally or abnormally) and has come back up.
• Switchover takes 30 seconds to 1 minute.
2. All O3 environment variables must use the floating DB hostname (i.e., the
"sapdbtech-hostname" of the SapdbTech cluster group).
6. Install the "Host Management Agent" service on both database servers. (To do this,
you have to move the SAPDBTech group between DB servers using Cluster
Administrator and run z:\O3HostMgr -i on each server.)
7. On both servers, set the "Host Management Agent" service to manual
mode. (The Cluster service will take care of starting it.)
10. On all servers (the two DB servers and all telephony servers), in the hosts file
(C:\WINNT\system32\drivers\etc\hosts) make entries for all other servers. Make an
additional entry for the sapdbtech hostname with the sapdbtech IP address.
11. All dburls in the system should be Ozone-style dburls (mmp or mmpapps).
12. The config file for the clustered servers will be called "sapdbtech-hostname".xml.
(Substitute the value of "sapdbtech-hostname".)
13. For Windows 2003, the SNMP agent needs to be manually configured. Go to the
Services panel, right-click "SNMP Service", and go to Properties. On the Security
tab, add the accepted community name "public" (READ ONLY) and select "Accept
SNMP packets from any host". This is required by the Ozone O3Sysmon process. (Do
this on both servers.)
14. Internet Explorer is deployed in a high-security mode on Windows 2003, and the
Ozone GUI may not display or work correctly. It is recommended that Firefox be
installed and used instead.
2. Right-click the SapdbTech group and add a new resource of type "Ozone Host
Management Agent". Specify the name "Host Management Agent".
• Select "Run this resource in a separate resource monitor" and click Next. Click
Next on the next window also (i.e., leave both servers as possible resource
owners).
• In the dependencies, select the "Disk Z:" resource and the "SapdbTech IP
Address" resource.
12. In the dependencies, select the "Disk Z:" resource. Click Next.
13. Set the share name as "xdrive" and the path as "z:\onmobile\xdrive".
14. On all servers, map \\sapdbtech-ip\xdrive as X: (bring the xdrive resource online
before mapping).
15. Go to the parameters for the resource and grant full permissions.
16. Set up IIS FTP servers on both DB servers, with z:\onmobile\xdrive as the FTP home
(make sure that they have the SAME username/password).
17. Right-click the SapdbTech group and add a new resource of type "IIS" (set up for
FTP, and additionally for Web if required).
Note: This does NOT work on Windows 2003. These are the steps for 2003:
• Right-click the cluster group and select New, Resource.
• Enter the name "IIS" and a description for the resource.
• For the resource type, select Generic Script. Click Next.
• The screen will display a list of possible owners. Click Next.
• Next, configure the dependencies for the IIS resource: Sapdb Tech-IP and
Disk Z:. Click Next.
18. The final SapdbTech group should look like the one in the image below.
• The "Host Management Agent" service is integrated with Microsoft Cluster Server
to be started in the case of server startup or failover. It can also be taken offline
(along with all Ozone processes) from Cluster Administrator.
• The Ozone GUI will show a virtual hostname for the cluster servers, irrespective of
which DB server the Ozone processes are actually up on.
• Stopping the "Host Management Agent" service from the Services control panel is OK
for maintenance/management purposes. However, start it manually only if you
stopped it manually or used the "Stop all" task from the Ozone GUI.
• The "Reboot Server"/"Shutdown Server" tasks on the Ozone GUI will result in the
SapdbTech group of resources moving to the cluster standby server (if
List of Figures
Figure 4-1: Server Clustering: Add or Remove Programs shortcut menu
Figure 4-2: Server Clustering: Windows Components Wizard
Figure 4-41: Server Clustering: Installing DNS server progress bar
Figure 4-42: Server Clustering: MS® Server 2003 welcome screen
Figure 4-61: Server Clustering: Enter the DNS name illustration
Figure 4-62: Server Clustering: Enter NetBIOS name
Figure 4-63: Server Clustering: Select database and log folders
Figure 4-64: Server Clustering: Select shared system volume
Figure 4-74: Server Clustering: Active Directory Installation Wizard message
Figure 4-75: Server Clustering: System shut down screen
Figure 5-11: Server Clustering: Save logical drive config settings
Figure 5-12: Server Clustering: Controller alert message
Figure 5-13: Server Clustering: Desktop screen
Figure 5-14: Server Clustering: Manage window
Figure 5-43: Server Clustering: Enter cluster name and domain name
Figure 5-44: Server Clustering: Enter node name
Figure 5-45: Server Clustering: Analyze configuration
Figure 5-46: Server Clustering: Enter IP address
Figure 5-52: Server Clustering: Cluster configuration completion screen
Figure 5-53: Server Clustering: Completion of cluster configuration
Figure 5-60: Server Clustering: Add node summary information
Figure 5-61: Server Clustering: Adding nodes to cluster
Figure 5-62: Server Clustering: Private Properties dialog box
Figure 6-25: Server Clustering: Move Group SAPDB Tech
Figure 7-1: Server Clustering: Add a new instance
Figure 9-2: Server Clustering: Adding dependencies for the resource
Figure 9-3: Server Clustering: Adding OzoneCfg
Figure 9-4: Server Clustering: Adding dependencies