
Installing and Configuring ACI on MSCS

0 List of Figures and Tables


1 LMA F 150 Introduction.............................................................................................5

1.1 Definition of requirement.....................................................................................6

2 Product Architecture.................................................................................................8
3 Installation Procedure.............................................................................................10

3.1 Installing and Configuring Cluster Service.....................................................11


3.1.1 Installing Cluster Service................................................................................... 11
3.1.2 Using the Cluster Administration Utilities........................................................20
3.1.3 Configuring Resources, Resource Groups, and Virtual Servers...................24

3.2 Creating an ACI Group and basic resources...................................................33


3.2.1 Creating a Group................................................................................................ 33
3.2.2 Creating a Physical Disk Resource..................................................................33
3.2.3 Creating an IP Address Resource.....................................................................36
3.2.4 Creating a Network Name Resource.................................................................36

3.3 Installing Versant on Microsoft Cluster Server...............................................38


3.3.1 Generic................................................................................................................ 38
3.3.2 Configuration instructions................................................................................ 38
3.3.2.1 Installation overview...................................................................................... 38
3.3.2.2 Installing Versant on local nodes..................................................................38
3.3.3 Using Cluster Administrator.............................................................................. 40
3.3.3.1 Viewing the Default Groups........................................................................... 40
3.3.3.2 Creating a New Versant Group......................................................................40
3.3.4 Insert Versant on Cluster................................................................................... 41
3.3.5 All Resources in Cluster Administrator............................................................43
3.3.6 Set Versant environment variables...................................................................44
3.3.7 Changes in the Registry.................................................................................... 44
3.3.8 Shared Data........................................................................................................ 44
3.3.9 Verify of Installation and Configuration............................................................45

3.4 Installing Orbix/Corba on Microsoft Cluster Server.......................................46


3.4.1 Generic................................................................................................................ 46
3.4.2 Configuration instructions................................................................................ 46
3.4.2.1 Basic Installing Corba on one computer......................................................46
3.4.2.1.1 Configuration of ORBIX for ACI...............................................................46
3.4.2.1.2 Step 1 - Installation of Orbix from distribution medium.........................48
3.4.2.1.3 Step 2 - Creation of a configuration domain ........................................48
3.4.2.2 Installing Corba on both computers.............................................................57
3.4.2.3 Change Corba Services Properties on local nodes.....................................57
3.4.3 Using Cluster Administrator.............................................................................. 63
3.4.3.1 Introduction..................................................................................................... 63
3.4.3.2 Viewing the Default Groups........................................................................... 63
3.4.3.3 Creating a New Corba Group......................................................................... 64
3.4.3.4 Generic Service............................................................................................... 64
3.4.4 Insert Corba on Cluster..................................................................................... 65
3.4.4.1 Change Corba Services Properties on Cluster............................................65
3.4.5 All Resources in Cluster Administrator............................................................67
3.4.6 Set Corba environment variables.....................................................................68
3.4.7 Changes in the Registry.................................................................................... 68
3.4.8 Shared Data........................................................................................................ 68
3.4.9 Verify of Installation and Configuration............................................................69

3.5 Installing ACI Process on Microsoft Cluster Server.......................................70


3.5.1 Generic................................................................................................................ 70
3.5.2 Configuration instructions................................................................................ 71
3.5.2.1 Installing ACI process on both computers...................................................71
3.5.2.2 Change ACI Services Properties on local nodes.........................................71
3.5.3 Using Cluster Administrator.............................................................................. 73
3.5.3.1 Introduction..................................................................................................... 73
3.5.3.2 Viewing the Default Groups........................................................................... 73
3.5.3.3 Creating or use a Group for ACI process.....................................................74
3.5.3.4 Generic Service............................................................................................... 74

3.5.4 Insert ACI on Cluster.......................................................................................... 75


3.5.4.1 Change ACI Services Properties on Cluster.................................................75
3.5.5 All Resources in Cluster Administrator............................................................77
3.5.6 Set ACI environment variables.......................................................................... 78
3.5.7 Changes in the Registry.................................................................................... 78
3.5.8 Shared Data........................................................................................................ 78
3.5.9 Verify of Installation and Configuration............................................................78

3.6 Hard/Software Requirements and Customer Supply......................................80


3.6.1 Requirements Section (for Windows)...............................................................80
3.6.2 Customer Supply Section.................................................................................. 85
3.6.3 Order Packages.................................................................................................. 85


0 List of Figures and Tables


Figure 1 2-Node MSCS Cluster...................................................................................................8
Figure 2 ACI Configuration in a Cluster.......................................................................................9
Figure 3 The Select An Account page........................................................................................15
Figure 4 The Add Or Remove Managed Disks page..................................................................15
Figure 5 The Cluster File Storage page.....................................................................................16
Figure 6 The Configure Cluster Networks page.........................................................................16
Figure 7 The Network Connections page...................................................................................17
Figure 8 The Internal Cluster Communication page...................................................................18
Figure 9 The Cluster IP Address page.......................................................................................18
Figure 10 Cluster Administrator.................................................................................................20
Figure 11 Cluster Administrator..................................................................................................20
Figure 12 Cluster Administrator's Open Connection To Cluster dialog box................................21
Figure 13 The Node Offline icon................................................................................................22
Figure 14 The Failback tab of the Cluster Group Properties window.........................................30
Figure 15 The Advanced tab of the Cluster IP Address Properties window................................32
Figure 16 Versant in a Cluster with shared RAID System..........................................................38
Figure 17 Versantd properties, General, (local Computer).........................................................39
Figure 18 Versantd properties, LogOn, (local Computer).........................................40
Figure 19 Versantd Properties, General....................................................................42
Figure 20 Versantd Properties, Dependencies..........................................................................42
Figure 21 Versantd Properties, Parameters...............................................................................43
Figure 22 Cluster Administrator, Database Group.....................................................................43
Figure 23 Corba in a Cluster......................................................................................................46
Figure 24 Orbix Environment with Centralized Configuration....................................................48
Figure 25 Corba Services configuration.....................................................................................57
Figure 26 IT activator default-domain Properties.......................................................................58
Figure 27 IT config_rep cfr-ACI_Network Properties.................................................................59
Figure 28 IT activator default-domain Properties.......................................................................60
Figure 29 IT activator default-domain Properties.......................................................................61
Figure 30 IT activator default-domain Properties.......................................................................62
Figure 31 IT activator default-domain Properties.......................................................................63
Figure 32 IT activator default-domain General properties..........................................................65
Figure 33 IT activator default-domain Dependencies properties................................................66
Figure 34 IT activator default-domain Advanced properties.......................................................66
Figure 35 IT activator default-domain Parameter properties......................................................67
Figure 36 IT activator default-domain Registry properties.........................................................67
Figure 37 Cluster Administrator, Group including corba services...............................................68
Figure 38 ACI in a Cluster..........................................................................................................70
Figure 39 ACI Service configuration..........................................................................................71
Figure 40 Domain manager Properties......................................................................................72
Figure 41 General properties.....................................................................................................75
Figure 42 Dependencies properties...........................................................................................76
Figure 43 Advanced properties..................................................................................................76
Figure 44 Parameter properties.................................................................................................77
Figure 45 Registry properties....................................................................................................77
Figure 46 Cluster Administrator shows Group including online services....................................78

1 LMA F 150 Introduction


In certain industries, downtime has always been unacceptable (e.g., communications/telephony,
finance and banking, reservation systems). Today, given the realities of global competition,
difficulties in product differentiation, low operating margins and the like, many industries must
be up and running for their customers whenever the customer sees fit to call.
Achieving this starts with an audit of your system. When you audit network risk, you identify the possible failures that can
interrupt access to network resources. A single point of failure is any component in your environment that would block
data or applications if it failed. Single points of failure can be hardware, software, or external dependencies, such as a
power supply. The following are some of the possible failure points:
Network hub
Power Supply
Server connection
Disk
Other server hardware, such as CPU or memory
Server software, such as the operating system or specific applications
WAN links, such as routers and dedicated lines
Solutions exist at varying levels, from a UPS for the power supply to a RAID (Redundant Array of Inexpensive
Disks) for disk replication.
In general, you provide maximum reliability when you:
Minimise the number of single points of failure in your environment.
Provide mechanisms that maintain service when a failure occurs.
The impact of downtime becomes clear when you look at small slippages in uptime: a slippage of just one-tenth of one
percent translates into minutes, if not hours, of server outages over the course of a year.
For example, an availability level of 99.999% on a round-the-clock basis would mean that an organisation would
experience at least five minutes of unscheduled downtime during a year. A level of 99.99% would mean 52 minutes of
downtime. A level of 99.9% would translate to 8.7 hours of downtime. A level of 99% would equal 3.7 days of downtime
throughout the course of a year.
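These figures follow directly from the fraction of the year the system is unavailable. As a quick check, assuming a
year of 8,760 hours:

    downtime per year = (1 - availability) x 8,760 h

For example, (1 - 0.999) x 8,760 h = 8.76 h, which matches the 8.7 hours quoted above for 99.9% availability.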
Users of business applications may be able to cope with a few seconds, or even minutes, of downtime during a business
day. But many minutes of downtime would cause productivity and business losses that most companies would find
unacceptable.
By taking advantage of high availability solutions, such as clustering, organisations can improve availability, reduce
downtime and reduce user disruption for unplanned outages and planned maintenance. For example, without clustering, a
disk drive failure might require the user to sign on to the backup system, restore data from backup files, restart the
application and re-enter one or more transactions. Clustering support will make possible a planned switchover for
scheduled backups with only a slight delay at the user's workstation.
For a system to be declared highly available, it must achieve an uptime of 99.9% to 99.999%, corresponding to a
downtime of between roughly 8.7 hours and 5 minutes a year.
The scope of this LMA is limited to ensuring a high level of availability of:
the server connections, both to the ACI clients and the ACI servers.
server hardware (CPU, memory) and
server software such as the operating system or the ACI server applications.

1.1 Definition of requirement

The following is a possible listing of requirements, which would need to be satisfied to ensure ACI as a highly available
system.
1. Ensure the availability of the ACI server processes to the ACI clients (or the corresponding server that connects
to them, e.g. the subNMS to the EMS), with minimum downtime, in the case of either a failure of a process or of
the host on which it runs.
In the current ACI scenario the existing process monitor is the only source of recovery for the system and suffers
from the following problems:
It is itself a possible single point of failure for the system. If the process monitor crashes for some reason, it has
no recovery mechanism of its own and hence can no longer monitor and restart the ACI server resources.
If the host on which it runs suffers a power failure, or a malfunction of any of the OS resources makes the host
unavailable, the process monitor is totally inadequate and can do nothing in such a situation.
2. Limit learning of the network by the ACI server processes to a first-time start (cold start) only.
Currently ACI learns parts of the network even on warm starts (i.e. subsequent starts), which itself can take hours.
The solution to requirement 1 should be able to negate this behaviour. In other words, ACI should be able to assume
that it is always connected to the network and that the database always reflects the true state of the network. In case of
a loss of connection to the network for a short period of, say, 10-30 seconds, it should still be possible to assume a
synchronised state between the MIB and the network, although a forced resynchronisation can still be carried out
through a consistency check.
3. Clients and ACI server processes should not proceed to shut down when they recognise a failure in the
connection to their corresponding server processes.
Currently within ACI, clients shut down when they sense a loss of connection to the server processes,
and additionally the server processes themselves need to be restarted when they lose
connection to the server they have been connected to. For example, if a DCN server
crashes, the Classic server or any EMS connected to it would need to be restarted. This
behaviour would need to be changed. If a client loses its connection, it should
inform the operator that there has been a loss of connection and that it is trying to re-establish one. All
objects related to the current operation should be cleaned up. A server should behave in a similar way,
i.e. simply wait for its corresponding server to be made available again.
The requirements listed above are necessary for the current ACI versions. Additional requirements, which may be
necessary, are:
4. Provide high availability of ACI as an add-on option for customers who want it, which would imply selling it
as a separate feature.
This implies the following:
The existing installation program would not need to be modified to make any changes that may be needed (such
as registry entries, or copies of software on different drives) for high availability. A separate installation
program would need to be written that makes the necessary changes on the ACI server machines or nodes.
The behaviour defined in requirement 2 would not be satisfied in this case (i.e. with no high availability), and
hence ACI should be able to detect that it is not running in a high-availability environment and should
learn/force synchronisation with the network.
5. Provide a way to maintain the ACI server machines without interrupting ACI services.
Currently, if the machine on which the ACI server processes are running needs to be upgraded to a new service
pack, or the customer would like to add new utility software that requires a reboot of the system, or even upgrade
the system with a new hard disk or memory, the ACI server processes are disrupted and need to be restarted. This
causes a loss of service to ACI. The customer should be able to do the above without such a loss.

2 Product Architecture
Server clusters are configurations of two or more machines that share some disks. A failure in one machine causes the data
services to automatically relocate to the backup machine. If the shared disk is itself a RAID, then the configuration admits
no single point of failure.
The following figure illustrates components of a two-node server cluster that may be composed of servers running either
Windows 2000 Advanced Server or Windows NT Server 4.0, Enterprise Edition with shared storage device connections
using SCSI or SCSI over Fiber Channel.

Figure 1 2-Node MSCS Cluster

There are a number of configurations for server clusters. Here the two nodes share a disk array. The inactive node
monitors the primary to determine if it has failed. When it detects a failure, it immediately starts a script that restarts all
the failed services on the secondary node and reconnects all the defined groups and their resources from the failed node:
resources like the shared disk, the global IP address, the global hostname, Versant, Corba and ACI.

[Figure 2 shows the high-availability configuration for the Access Integrator server: a DM client (GUI, application
logic, API) connects to the ACI Group on the active node, which runs the NM, DM, EM and SNMP server processes
together with Brass, Corba and Versant; a redundant node runs the same set of server processes (NM, EM, SNMP,
Brass, Corba, Versant); both nodes attach to a shared RAID system holding the database files and the ACI logs, to
which operation switches over on failure.]

Figure 2 ACI Configuration in a Cluster


This architecture is needed to reach a downtime of at most about 8 hours and as little as 5 minutes a year. It
corresponds to an availability level of 99.9% to 99.999%, which is enough to declare a system highly available.

3 Installation Procedure
All processes are to be installed on both nodes on the boot disk. Installing executables on the shared disk is not
recommended, for the following reasons (from LMA 150 and the HA definitions):
Maximum reliability
- Minimize the number of single points of failure in your environment.
Comment: An installation on the shared disk is itself a single point of failure. If the ACI executable files
are accidentally overwritten, a restart from the other node is no longer possible and high availability is lost.
(This is independent of a hardware failure of the disk: an accidental overwrite by another process, or by
ACI itself, is always possible.)
- Provide mechanisms that maintain service when a failure occurs.
Comment: In general, a version upgrade is not possible on the shared disk without stopping high
availability, because the files are locked. The high-availability definition allows a maximum of 8 hours per
year out of service, which is not enough time for maintenance and upgrades.
LMA 150 part:

Provide a way to maintain the ACI server machines without interrupting ACI services.
- Currently, if the machine on which the ACI server processes are running needs to be upgraded to a
new service pack, or the customer would like to add new utility software that requires a reboot of the
system, or even upgrade the system with a new hard disk or memory, the ACI server processes are disrupted
and need to be restarted. This causes a loss of service to ACI. The customer should be able to do the
above without such a loss.
Databases, logs, os_backup and Corba variables are to be installed and configured on the shared disk.

3.1 Installing and Configuring Cluster Service

3.1.1 Installing Cluster Service

Installation Options
Three options are available for installing Cluster Service:
Installation with a fresh installation of Windows 2000 Advanced Server
Installation on an existing installation of Windows 2000 Advanced Server
Unattended Cluster Service installation
Regardless of the installation type you select, you will use the Cluscfg.exe application. Cluscfg.exe runs as a standard
Windows 2000 wizard unless you automate the installation of Cluster Service. When you run Cluscfg.exe from the
command line, it supports the command line options listed in Table 3.1.
Table 3.1 Cluscfg.exe Command-Line Options

Parameter                                Description
ACC[OUNT] <accountname>                  Specifies the domain service account used for Cluster Service.
ACT[ION] {FORM | JOIN}                   Specifies whether to form a new cluster or join an existing cluster.
D[OMAIN] <domainname>                    Specifies the domain used by Cluster Service.
EXC[LUDEDRIVE] <drive list>              Specifies which drives should not be used by Cluster Service as shared disks.
I[PADDR] <xxx.xxx.xxx.xxx>               Specifies the IP address for the cluster.
L[OCALQUORUM]                            Specifies a disk on a nonshared SCSI bus that should be used as the quorum
                                         device.
NA[ME] <clustername>                     Specifies the name of the cluster.
NE[TWORK] <connectionname>               Specifies how the network connection specified should be used by Cluster
  {INTERNAL | CLIENT | ALL} [priority]   Service.
P[ASSWORD] <password>                    Specifies the password for the domain service account used for Cluster
                                         Service.
Q[UORUM] <x:>                            Specifies the drive letter to use for the quorum device.
S[UBNET] <xxx.xxx.xxx.xxx>               Specifies the subnet to use for the private network.
U[NATTEND] [<path to answer file>]       Suppresses the user interface in order to perform an unattended installation.
                                         Also specifies an optional external answer file.
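As an illustration only (the table does not show the option prefix, so the leading hyphen below is an assumption, as
are all of the example values), forming a cluster directly from the command line might look like this:

    Cluscfg.exe -ACTION FORM -NAME MyCluster -ACCOUNT ClusterAdmin -PASSWORD MyPassword
        -DOMAIN Reskit -IPADDR 192.168.0.3 -SUBNET 255.255.255.0 -QUORUM Q:

The bracketed notation in the table (for example ACC[OUNT]) indicates the accepted short form of each option.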

Installation with Windows 2000 Advanced Server


The most common method for installing Cluster Service is to install it as an option during the operating system setup
process. When you install Windows 2000 Advanced Server, you can select Cluster Service in the Windows Components
Setup dialog box. This causes the Cluster Configuration Wizard to start at the end of the Windows setup process. When
you install Cluster Service as part of a new setup, you must manually enter the appropriate information for Cluster Service.
An alternative to this is the unattended installation option described later in this section. You must have administrative
rights in order to install Cluster Service.

Installation on Existing Windows 2000 Advanced Server


You can also install Cluster Service on a computer that is already running Windows 2000 Advanced Server. When you do
this, you can take some of the additional preparation steps outlined in Chapter 2, such as verifying that each server has read
and write access to the shared device. Although this is not required, it will help ensure a successful installation of Cluster
Service.
To install Cluster Service on an existing installation of Windows 2000 Advanced Server, you can use the Configure Your
Server dialog box or Add/Remove Programs in Control Panel.
You must take the following steps before performing a manual Cluster Service installation:
You must be logged on as an administrator.
Because the installation steps vary slightly on the second node, you must decide which node will be installed first.
During the installation, the second node must be powered off in order to ensure that the shared device does not
become corrupt due to multiple computers accessing the drive.
You should be prepared for a possible power outage during the installation. If both servers come online (when
power is restored) before Cluster Service is fully installed, data on the shared device could be corrupted. One
solution is to configure the second server, through Boot.ini, for a delayed start longer than the default 30 seconds,
as shown in the example below.
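A minimal Boot.ini sketch with a raised boot menu timeout (the ARC path and OS description are illustrative; keep
the entries already present on your server and change only the timeout value):

    [boot loader]
    timeout=90
    default=multi(0)disk(0)rdisk(0)partition(1)\WINNT
    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINNT="Microsoft Windows 2000 Advanced Server" /fastdetect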
The complete setup steps for existing Windows 2000 Advanced Server installations will be presented later in this lesson.


Unattended Installation
If you are installing and configuring a number of clusters, you can elect to automate the setup process for Cluster Service
as part of a new Windows 2000 Advanced Server installation or as an installation on an existing server. In either case,
you will use the Cluscfg.exe application with an associated answer file. When you use Cluscfg.exe to automate an
installation, the answers it requires can come from the answer file used by Sysprep or from an external answer file that you
create.
NOTE
Sysprep is used to install only new instances of Windows 2000 Advanced Server. To automate the installation process of
Cluster Service on an existing Windows 2000 Advanced Server, you must supply an external answer file.

Automating Cluster Service Setup on a New Server


You can automate the installation of Windows 2000 Advanced Server and Cluster Service using an answer file. This file is
a script used by the setup program that provides answers to dialog boxes, resulting in an unattended installation. Since
answer files are standard ASCII files, they can be created quickly using the Setup Manager utility or manually using any
text editor. Even if you use Setup Manager to create an answer file, you can still edit the file with any text editor and create
custom installations.
For even greater efficiency, you can use an image file in addition to an answer file. Image files are created with the
Sysprep utility from an existing Windows 2000 configuration. Sysprep is used to install the same image onto multiple
servers, resulting in identical installations. To further automate this process, you can use Setup Manager to create a
Sysprep-specific answer file, called Sysprep.inf. The Sysprep.inf answer file can be customized as required.
To automate Cluster Service installation, you must specify the following settings to the standard answer file created by the
Setup Manager or the Sysprep.inf answer file:
[Components]
Cluster = On
[GuiRunOnce]
%windir%\cluster\cluscfg.exe -u
[Cluster]
Name = MyCluster
Action = Form
Account = ClusterAdmin
Password = MyPassword
Domain = Reskit
IPAddr = 192.168.0.3
Subnet = 255.255.255.0
Network = Public,ALL
Network = Private,INTERNAL
The [Components] section contains requirements for installing components of Windows 2000. A value of On installs the
component, and a value of Off prevents the component from being installed.
Cluster = On installs Windows Clustering and Administration components on Windows 2000 Advanced Server.
The [GuiRunOnce] section specifies applications that will run automatically after the setup program reboots the server.
You must provide the path and filename information for Cluscfg.exe. You can optionally include the -u command-line
option to run Cluscfg.exe in unattended mode. The answers required for Cluscfg.exe will come from the [Cluster] section.
The [Cluster] section is described later in this lesson. Regardless of which technique you use to automate the installation
process, you can add Cluster Service-specific information to the appropriate answer file to create an unattended
installation of Cluster Service.
NOTE
In order for Cluscfg.exe to run in an unattended manner, the setup process must first complete the installation of Windows
2000, reboot the server, and have you log in with administrative rights.

Automating Cluster Service Setup on an Existing Server


If you want to perform an unattended installation of Cluster Service on an existing installation of Windows 2000 Advanced
Server, you can manually run the Cluscfg.exe utility with its own answer file. The following command line can be used to
run Cluscfg.exe with the CSAnswer.inf answer file:
Cluscfg.exe -u c:\CSAnswer.inf
The contents of the answer file you provide are similar to the contents of the answer file used when you install Cluster
Service with a new installation of Windows 2000. This answer file needs to contain only a [Cluster] section. This section
contains several keys, which are presented here with specific examples:
Account

Account = <account name>


This key specifies the name of the account under which Cluster Service runs. This key is required only if Action = Form.
(See below.)
Example:
Account = adminname
Action
Action = <Form | Join>
This key specifies whether a cluster is to be formed or joined.
Form specifies that the cluster is to be created. If this is the first node in a cluster, you are creating a new cluster. When you
specify Form, you must specify the Account and Domain keys.
Join specifies that your machine is to join an existing cluster. If at least one other node already exists, you are joining a
cluster. When you specify Join, you should not specify the Account and Domain keys.
Example:
Action = Form
Domain
Domain = <domain name>
This key specifies the domain to which the cluster belongs. It is required only if Action = Form.
Example:
Domain = domainname
ExcludeDrive
ExcludeDrive = <drive letter>[, <drive letter> [, . . . ]]
This optional key specifies a drive to be excluded from the list of possible quorum devices.
Example:
ExcludeDrive = q, r
IPAddr
IPAddr = <IP address>
This key specifies the IP address of the cluster.
Example:
IPAddr = 193.1.1.95
LocalQuorum
LocalQuorum = Yes | No
This optional key specifies that a system drive should be used as the quorum device. (Normally, only a disk that is on a
shared SCSI bus not used by the system disk can be selected as the quorum device.)
Example:
LocalQuorum = Yes
NOTE
This parameter should be used only for demo, testing, and development purposes. The local quorum resource cannot fail
over.
Name
Name = <cluster name>
This key specifies the name of the cluster. The value can contain a maximum of 15 characters.
Example:
Name = MyCluster
Network
Network = <connection name string>, <role>[, <priority>]
This key specifies the connection name associated with a network adapter and the role that adapter is to fulfill in the
cluster. The first two parameters, <connection name string> and <role>, are required. The third parameter, <priority>,
should be supplied only for network connections configured for internal communications.
The <role> parameter specifies the type of cluster communication for the network connection. Valid parameters are All,
Internal, and Client. To use the network connections for communication with clients and between the nodes, specify All.
To use the network connections only for internal communication between the nodes, specify Internal. To use the network
connections only for communication with clients, specify Client.
The <priority> parameter specifies the order in which the network connections are used for internal communication.
Example:
Network="Local Area Connection 2", INTERNAL, 1
Password
Password = <password>
This key specifies the password of the account under which Cluster Service runs.
Example:
Password = MyPassword
NOTE

Some security risks are associated with using the Password key because the password is stored as plain text within the
answer file. However, the Password key is deleted after the upgrade.
Quorum
Quorum = <drive letter>
This key specifies the drive to be used as the quorum device.
Example:
Quorum = Q:
Subnet
Subnet = <IP subnet mask>
This key specifies the IP subnet mask of the cluster.
Example:
Subnet = 255.255.0.0
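
Putting these keys together, a complete CSAnswer.inf for forming a cluster might look like the following sketch (all
names, addresses, and connection names are illustrative):

    [Cluster]
    Action = Form
    Name = MyCluster
    Account = ClusterAdmin
    Password = MyPassword
    Domain = Reskit
    IPAddr = 192.168.0.3
    Subnet = 255.255.255.0
    Quorum = Q:
    Network = "Public Network Connection", ALL
    Network = "Private Cluster Connection", INTERNAL, 1

A node joining this cluster would instead specify Action = Join and, as described above, omit the Account and
Domain keys.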

The Cluster Service Configuration Wizard


To install Cluster Service, you use the Cluscfg.exe application, which is called the Cluster Service Configuration Wizard.
When you run this wizard manually, you must respond to each dialog box; alternatively, you can run the wizard as an
automated process using an answer file (as described earlier in this lesson). You can start the wizard manually through
Add/Remove Programs in Control Panel. Complete steps for installing Cluster Service are included later in this lesson.
The following sections describe some important pages of the wizard and the configuration options available when you
install Cluster Service.

Hardware Configuration
Although Cluster Service can be implemented on a variety of hardware configurations, Microsoft supports only Cluster
Service installations performed on configurations listed on the Cluster Service Hardware Compatibility List (HCL). The
wizard's Hardware Configuration page, shown in Figure 3.1, reviews this policy. For more information about these
configurations, see Chapter 2, Lesson 1.

Figure 3.1 The Cluster Service Configuration Wizard's Hardware Configuration page
To continue with the installation of Cluster Service you must confirm that you understand Microsoft's support policy by
clicking I Understand.

Create Or Join A Cluster


If a cluster does not already exist or if you do not want to join an existing cluster, you must create a new cluster. Otherwise,
you will join an existing cluster. The Create Or Join A Cluster page, shown in Figure 3.2, provides these options. If you are
joining an existing cluster, the cluster-specific information, such as the location of the quorum, will be provided for you
automatically.

Select An Account
Before running the wizard, you must first create a domain user account for the cluster. This account must be a Domain
Administrator or have local administrative rights on each node, plus the following permissions:
Lock pages in memory
Log on as a service
Act as part of the operating system
Back up files and directories
Increase quotas
Increase scheduling priority
Load and unload device drivers
Restore files and directories
The wizard requires you to enter this account information on the Select An Account page, shown in Figure 3.3.
After you enter the information for the Cluster Service account, the wizard validates the user account and password. If the
node on which you are installing Cluster Service is a member server, you will be prompted to add this account to the local
Administrators group.

Figure 3 The Select An Account page

Add Or Remove Managed Disks


The Add Or Remove Managed Disks page, shown in Figure 3.4, provides a list of SCSI disks available to this server for
use with Cluster Service. However, not all of the disks residing on the shared SCSI bus will necessarily appear in the list.
Also, if the server has more than one SCSI bus, such as one used by internal hard drives, disks on those buses might
appear but should not be made part of the cluster. You should remove these disks from the list.

Figure 4 The Add Or Remove Managed Disks page

Cluster File Storage


The Cluster File Storage page, shown in Figure 3.5, lets you designate which shared partition or drive will store the cluster
log files and checkpoint information. Although the minimum partition size is only 50 MB, it is best to use a partition with
at least 100 MB of free space to ensure that you do not run out of space. For best results, this partition should not hold
any user applications.


Figure 5 The Cluster File Storage page

Configure Cluster Networks


The Configure Cluster Networks page, shown in Figure 3.6, presents some basic recommendations that you should
consider for your deployment of Cluster Service. An instance of this page is presented for each network adapter installed in
your server. If you have only one adapter installed, the wizard will notify you that this configuration is not recommended
because it introduces a single point of failure for communication between nodes.

Figure 6 The Configure Cluster Networks page

Network Connections
The Network Connections page, shown in Figure 3.7, allows you to configure each network connection so that the
cluster can communicate properly.

Figure 7 The Network Connections page


This page contains the following properties:
Network Name: In the Network Name text box, enter the name of the connection. This should match the name
used for the private network connection. By naming your connections appropriately based on their use, it will be
easier to manage and maintain the cluster. This is especially important when multiple administrators are managing
the cluster.
Device: The Device text box is populated automatically with the name of the network adapter currently being
configured. Your server should have more than one adapter, so you should make sure to apply the private network
settings and public network settings to the appropriate adapter.
IP Address: In the IP Address text box, enter the IP address that the cluster will use to communicate with the other
nodes in the cluster.
Enable This Network For Cluster Use: If you select this option, Cluster Service will use this network adapter by
default.
Client Access Only (Public Network): When this option is selected, the public network adapter will be used by
the cluster only for communication with clients. No node-to-node communication will occur on this adapter. You
should select this option only if you have another adapter that can act as a backup if the primary private adapter
becomes unavailable.
Internal Cluster Communications Only (Private Network): If you select this option, Cluster Service will not
use this adapter for any client communication. This adapter will be used only for internal node-to-node
communication within the cluster. You should configure the second adapter in each node to act as a backup for
this adapter in the event of a failure.
All Communications (Mixed Network): By default, this option is selected for the adapter card you are
configuring. It specifies that Cluster Service can use this card for client communication as well as private,
node-to-node communication. You should select this option if you have only two adapters and the other adapter is
being used exclusively for node-to-node communication. If the other adapter fails, this adapter will assume
responsibility for all cluster communication.

Internal Cluster Communication


The Internal Cluster Communication page, shown in Figure 3.8, allows you to change the order of the adapters for use by
Cluster Service.
Cluster Service attempts to use the first adapter listed for node-to-node communication and will move to the next card if
the first fails. Therefore, because Private Cluster Connection is used for private cluster communication between nodes, it
should be placed at the top of the list. The second and all remaining adapters should be listed next, in order of their
configuration for private communication. In the event of a failure, Cluster Service will automatically try the next adapter in
the list.


Figure 8 The Internal Cluster Communication page

Cluster IP Address
The Cluster IP Address page, shown in Figure 3.9, requires you to enter the public IP address assigned to the cluster.

Figure 9 The Cluster IP Address page


This address is used for accessing the cluster for remote management by an administrator. You must use a unique static IP
address that is available and accessible from your corporate network. You must also enter the appropriate subnet mask for
this IP address. If you enter an invalid IP address or subnet mask, the wizard will not allow you to proceed.

Practice: Installing Cluster Service


In this practice, you will install Cluster Service on an existing Windows 2000 Advanced Server. You should already have
Windows 2000 Advanced Server installed and configured on both nodes (and the shared device). (For information on how
to do this, see the practices in Chapter 2.)
To see a demonstration of this practice, run the Cluster Install demonstration located in the Media folder on the companion
CD.

Creating a Cluster User Account


1. On the first server, from the Windows 2000 Start menu, point to Programs, point to Administrative Tools, and
then click Active Directory Users and Computers.
2. If reskit.com is not already expanded, click the plus sign to expand it.
3. Click Users.
4. Right-click Users, point to New, and click User.
5. In the First Name text box, type ClusterAdmin.
6. In the Last Name text box, type Account.
7. In the User Logon Name text box, type clusteradmin.
8. Click Next.
9. In the Password text box, type a password.
10. In the Confirm Password text box, type the same password.
11. Select the User Cannot Change Password and Password Never Expires check boxes.
12. Click Next.
13. Click Finish.
14. In the right pane of the Active Directory Users And Computers snap-in, right-click the ClusterAdmin
Account and click Add Members To Group.
15. Click Administrators, and then click OK.
16. Close the Active Directory Users And Computers window.

Configuring the First Node


You must have the Windows 2000 Advanced Server CD-ROM in order to do this exercise. The second server to be
included in this cluster must be turned off before you start.
1. On the first server, from the Windows 2000 Start menu, point to Settings, and click Control Panel.
2. Double-click Add/Remove Programs.
3. Double-click Add/Remove Windows Components.
4. Select Cluster Service.
5. Click Next.
6. The setup program will start copying files, and the Cluster Service Configuration Wizard will open. Click Next.
7. On the Hardware Configuration page, click I Understand and click Next.
8. You must now create the cluster. Select the This Is The First Node In The Cluster option, and then click Next.
9. On the Cluster Name page, enter the name MyCluster in the text box and click Next.
10. On the Select An Account page, enter the user name and password you created in the Creating A Cluster User
Account exercise. Click Next.
11. On the Add Or Remove Managed Disks page, which displays the SCSI disks that reside on the shared SCSI bus,
add or remove disks as necessary. Click Next.
12. On the Cluster File Storage page, designate the disk drive on which to store the cluster's checkpoint and log files.
Click Next.
13. On the Configure Cluster Networks page, click Next.
14. The Network Connections page will appear for each network adapter installed on your server. If this page is
associated with the public adapter, skip to steps 19 through 23. Then return to this step to configure the private
adapter.
15. Verify that the network name and IP address correctly identify the private network.
16. Verify that the Enable This Network For Cluster Use check box is selected.
17. Select the Internal Cluster Communications Only (Private Network) option.
18. Click Next.
19. The Network Connections page will appear again, this time for the public adapter.
20. Verify that the network name and IP address correctly identify the public network.
21. Verify that the Enable This Network For Cluster Use check box is selected.
22. Select the All Communications (Mixed Network) option.
23. Click Next.
24. The Internal Cluster Communication page, which appears next, lets you change the order of how the networks are
used. Private Cluster Connection is a direct connection between nodes, so you should move it to the top of the list,
if necessary, by selecting it and clicking the Up button on the right. This connection is used for normal
communication between the nodes. However, in the case of a failure, Cluster Service will automatically attempt to
use the next network in the list. Click Next.
NOTE
Be sure that Private Cluster Connection is always first in the list.
25. On the Cluster IP Address page, enter the cluster's unique IP address (192.168.0.3) and the subnet mask
(255.255.255.0). If you are connecting this cluster to your corporate LAN, be sure to enter an appropriate IP
address and subnet mask.
26. Select Public Cluster Connection from the Network list. (Note: If you did not complete the practice in Chapter 2
or did not rename your network connection, Public Network Connection will not appear in this list.)
27. Click Next.
28. Click Finish to complete the Cluster Service installation.
29. The installation will complete and Cluster Service will start. Click OK when prompted.
30. Click Finish to close the wizard.

Verifying the Cluster Service Installation on the First Node


1. On the first server, from the Windows 2000 Start menu, point to Programs, point to Administrative Tools, and
click Cluster Administrator.
2. Cluster Administrator should look like Figure 3.10. If you cannot open Cluster Administrator or if you see errors
in it, the Cluster Service installation did not complete successfully or your node is incorrectly configured.

Figure 10 Cluster Administrator


3. Close Cluster Administrator.

Configuring the Second Node


For this exercise, the first node must be running with Cluster Service started.
1. Boot the second server and log in with an account that has administrative permissions.
2. From the Windows 2000 Start menu, point to Settings, and click Control Panel.
3. Double-click Add/Remove Programs.
4. Double-click Add/Remove Windows Components.
5. Select Cluster Service.
6. Click Next.
7. The setup program will start copying files, and the Cluster Service Configuration Wizard will open. Click Next.
8. On the Hardware Configuration page, click I Understand, and then click Next.
9. You already created the cluster in the first exercise, so you want to join an existing cluster. Select the Second Or
Next Node In The Cluster option. (If at least one other node already exists, you are joining a cluster.)
10. Click Next.
11. On the Cluster Name page, type the name of the cluster you created when you configured the first node
(MyCluster).
12. Leave the Connect To Cluster As check box cleared. If the check box is selected, clear it.
13. Click Next.
14. On the Select An Account page, enter the password for the account listed and click Next.
15. Click Finish on the Completing The Cluster Service Configuration Wizard page.
16. The wizard will now copy the files needed for Cluster Service. Once the files are copied, the setup
program will try to start Cluster Service. A message will appear stating that Cluster Service has been started. Click
OK.
17. Click Finish on the Completing The Windows Components Wizard page.

3.1.2 Using the Cluster Administration Utilities

Using Cluster Administrator


Cluster Administrator (Cluadmin.exe) is the primary tool for cluster management, maintenance, and troubleshooting. On
each node in the cluster, Cluster Administrator (shown in Figure 4.1) is installed automatically and placed on the
Administrative Tools menu.


Figure 11 Cluster Administrator


In addition to using Cluster Administrator from any node in order to manage the cluster, you can install it on nonclustered
computers for remote administration. The cluster domain must be able to authenticate the account you use on the remote
system. To install it on a computer that is not part of the cluster, you can use the Adminpak.msi installer file included with
Windows 2000.
Cluster Administrator supports the following operating systems:
Windows NT 4 Server Enterprise Edition, Service Pack 3
Windows 2000 Server
Windows 2000 Advanced Server
Windows 2000 Datacenter Server
To install Cluster Administrator manually on a computer that isn't a node of a cluster, follow these steps:
1. From the Windows 2000 Start menu, click Run.
2. Type Adminpak.msi and press Enter.
3. Follow the instructions on the screen.
After the installation is complete, Cluster Administrator will be listed on the Windows 2000 Administrative Tools menu.
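If you prefer the command line, the same installation can be started directly with Windows Installer; a sketch,
assuming Adminpak.msi is in the current directory:

    msiexec /i Adminpak.msi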

Starting Cluster Administrator


When you open Cluster Administrator for the first time, you will be prompted to enter the cluster name, or individual node,
that you want to connect to. Once Cluster Administrator is connected to a cluster, you can manage any node within that
cluster.
If you are not sure of the cluster name, you can use the Browse command, as shown in Figure 4.2, to obtain a list of all the
clusters in your current domain. If you are opening Cluster Administrator on a node within a cluster, you can enter a single
period (.) to specify that it should connect to the current cluster. You can also connect to more than one cluster
simultaneously, although this might slow the performance of Cluster Administrator.

Figure 12 Cluster Administrator's Open Connection To Cluster dialog box


You should use only a given node's name if you cannot connect to the cluster directly. (This might be the case if the cluster
is configured incorrectly or there is a problem with Cluster Service.)
When you start Cluster Administrator, any previously opened connections will be restored automatically. You will be asked
only to specify a cluster name or node name if there are no previous connections to restore.

Common Cluster Administrator Tasks


The following list describes some of the most common tasks that Cluster Administrator is used for.
Creating new resources and resource groups: Cluster Administrator includes wizards to help you add a new
resource or create a new resource group in the cluster. This includes assigning resource dependencies. You cannot
configure all resource and group properties using the wizard. Once a resource or group has been created, you must
use the object's property sheet to configure a group's failover and failback policy settings or a resource's Restart,
LooksAlive, and Pending Timeout settings.
Renaming resources and resource groups: You can use Cluster Administrator to rename groups and edit their
properties.
Removing resources and resource groups: Using Cluster Administrator, you can delete resources and groups.
When a group is deleted, all the resources that were members of the group are also deleted. A resource cannot be
deleted until all resources that depend on it are deleted.
Viewing default groups: Every new cluster includes two default groups: the Cluster Group and the Disk Group.
These groups contain default cluster settings and general information about failover policies for the cluster.
The default Cluster Group includes the IP Address and Network Name resources for the cluster. The resource information
presented in this group was entered when you configured the new cluster using the Cluster Service Configuration Wizard.
This group is required for administration of the cluster and should not be renamed.
The Disk Group is also created when you initially install Cluster Service. Each disk on the shared storage device will
receive its own disk group that includes a Physical Disk resource.
When you create new groups, you should implement them as modifications to the disk groups. You can then rename each
group to something meaningful. For example, you might add resources to Disk Group 1 that will be used by your Web
server. Once the resources have been added, you can rename Disk Group 1 to Web Group. The Cluster Group name does
not change.


Modifying the state of groups and resources You can use Cluster Administrator to bring resources and groups
online or take them offline. If you change the state of a group, all the resources within that group will be updated
automatically. These resources have their state changed in the order of their dependencies.
Changing ownership Using Cluster Administrator, you can specify the ownership of a resource or an entire
group. Resources are owned by groups, and groups, in turn, are owned by a node. You can transfer resources
between groups to satisfy dependencies and application requirements. You can also transfer group ownership,
using the Move Group command, to assign groups to other nodes in the cluster.
You typically transfer group ownership when you need to bring down a node for maintenance or upgrades. When a group's
ownership is moved to another node, all resources in that group are taken offline, the group is then transferred, and the
resources are brought back online. As a result, you must carefully plan when to move groups because clients might be
affected temporarily as the resources are shut down and then restarted.
Once the resource ownership has changed, the resource will be automatically brought online. However, when a resource is
moved between groups on the same node, it will not be taken offline.
Changing the maximum Quorum log size By default, the Quorum log file size is set to 64 KB. Depending on
the number of shares supported on the cluster and the number of transactions managed by the cluster, this might
be too small. In this case, you will receive a notification in the Event Viewer. When the Quorum log reaches the
specified size, Cluster Service will save the database and reset the log file. If you change the Quorum size on one
node, it will automatically take effect on the other.
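For example, the following command raises the maximum Quorum log size to 128 KB from the command line (a minimal sketch, assuming a cluster named MYCLUSTER):
cluster MYCLUSTER /QUORUM /MAXLOGSIZE:128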
Initiating a failure In order to help you test your failover policies, Cluster Administrator can initiate a failure.
This feature also allows you to test the restart settings on individual resources.
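For example, the following command initiates a failure of a single resource so that you can observe its configured restart behavior (a sketch, assuming a resource named Test Share A exists on a cluster named MYCLUSTER):
cluster MYCLUSTER resource "Test Share A" /FAIL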
Identifying failovers In addition to configuring and managing a cluster, Cluster Administrator can also quickly
provide information on the health of the cluster. This is accomplished with indicators such as the Node Offline
icon shown in Figure 13.

Figure 13 The Node Offline icon


In this case, the node has been taken down and all resources that can be failed over have been failed over to the other node
in the cluster. You should check the status of the nodes frequently to make sure the cluster is up and healthy. You can also
use third-party software to help monitor your cluster. The overall performance of the cluster can be hurt if all the resources
are running on a single node due to a failure. And if the second node fails, the cluster will be completely offline. You
should periodically check the ownership of the various resource groups to verify that failover and failback policies are
meeting expectations.
Renaming the cluster In some cases, you might need to rename the cluster. Because the name must be unique on
the network, a conflict might occur that requires you to rename the cluster. Using Cluster Administrator, you can
change the cluster's name, but the change will not take effect until you take the Cluster Name resource offline and
bring it back online.
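A minimal command-line sketch of this procedure, run on a cluster node and assuming the current name MYCLUSTER and the new name NEWCLUSTER:
cluster MYCLUSTER /RENAME:NEWCLUSTER
cluster resource "Cluster Name" /OFFLINE
cluster resource "Cluster Name" /ONLINE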
Deploying clustered applications Cluster Administrator includes a wizard to lead you through the process of
adding an application to the cluster. Before starting the Cluster Application Wizard, you should have all the
required information ready, such as a virtual server name, an IP address for the server, and your failover policy for
the application. Table 4.1 describes the Cluster Application Wizard's required information.
Table 4.1 Required Cluster Application Wizard Information
Information: Description
IP address: If the application will run on a new virtual server, you need a unique IP address. You do not need an IP address if your application will run on an existing virtual server.
Virtual server: Even though the application resides on the cluster, clients access the application using a standard computer name. If you do not want to run the application on an existing virtual server, the wizard will prompt you for the new virtual server's name and a unique IP address. The wizard will then create the appropriate resource group and implement the virtual server.
Resource type for your application: The wizard lets you create a resource to manage your application. You must select the appropriate resource type for your needs.
Application resource name: If you use the wizard to create a resource for your application, you must name the resource.
Application resource dependencies: If the application requires other resources in order to run, you can create resource dependencies using the wizard.
Changing the Quorum resource location You can use Cluster Administrator to configure the location of the
Quorum resource after you install Cluster Service. The cluster's Property page includes a Quorum tab with a
number of settings, including the location of the Quorum resource. You can edit and reconfigure the Quorum
resource settings as needed.

Using Cluster.exe
In addition to managing your cluster using the GUI-based Cluster Administrator, you can also execute administrative tasks
from the command line. For example, you might need to configure a property on more than one cluster. Using Cluster.exe,
you can set properties through a single command execution. You can also execute command line tasks from within a script
to automate the configuration of many clusters, nodes, resources, and resource groups.
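As a sketch of such a script, assuming two clusters named CLUSTER1 and CLUSTER2, the following lines of a batch file apply one setting to both clusters:
rem Apply the same maximum Quorum log size to both clusters
for %%C in (CLUSTER1 CLUSTER2) do cluster %%C /QUORUM /MAXLOGSIZE:128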
Cluster.exe is automatically installed with Cluster Service on each node. You can also run Cluster.exe in Windows NT 4
Server Enterprise Edition with Service Pack 3 or later.
NOTE
Unlike Cluster Administrator, Cluster.exe does not automatically restore previous connections when you use it to
administer a cluster.
Table 4.2 describes the primary arguments supported by Cluster.exe. For a complete listing of the properties and options
supported by each command, see Appendix B.
All but the first two entries listed in Table 4.2 are options used with the /CLUSTER argument. If these options are used alone, Cluster.exe will attempt to connect to the cluster on the node that is running Cluster.exe and apply the command-line option to this cluster.
Table 4.2 Cluster.exe Command-Line Arguments
Argument: Description
/LIST[:domain-name]  Displays a list of clusters in the specified domain. If no domain is specified, the domain that the computer belongs to is used. Do not use the cluster name with this option.
[[/CLUSTER:]cluster-name] <options>  If you do not specify the cluster name, Cluster.exe will attempt to connect to the cluster running on the node that is running Cluster.exe. If the name of your cluster is also a cluster command or its abbreviation, such as cluster or c, use /cluster: to explicitly specify the cluster name.
/PROP[ERTIES] [<prop-list>]  Displays or sets the cluster's common properties. See Appendix B for more information on common properties.
/PRIV[PROPERTIES] [<prop-list>]  Displays or sets the cluster's private properties. See Appendix B for more information on private properties.
/REN[AME]:cluster-name  Renames the cluster to the specified name.
/VER[SION]  Displays the Cluster Service version number.
/QUORUM[RESOURCE][:resource-name] [/PATH:path] [/MAXLOGSIZE:max-size-kbytes]  Changes the name or location of the Quorum resource or the size of the Quorum log.
/REG[ADMIN]EXT:admin-extension-dll[,admin-extension-dll...]  Registers a Cluster Administrator extension DLL with the cluster.
/UNREG[ADMIN]EXT:admin-extension-dll[,admin-extension-dll...]  Unregisters a Cluster Administrator extension DLL from the cluster.
NODE [node-name] node-command  A node-specific cluster command. See Appendix B for a list of available commands.
GROUP [group-name] group-command  A group-specific cluster command. See Appendix B for a list of available commands.
RES[OURCE] [resource-name] resource-command  A resource-specific cluster command. See Appendix B for a list of available commands.
{RESOURCETYPE|RESTYPE} [resourcetype-name] resourcetype-command  A resource type-specific cluster command. See Appendix B for a list of available commands.
NET[WORK] [network-name] network-command  A network-specific cluster command. See Appendix B for a list of available commands.
NETINT[ERFACE] [interface-name] interface-command  A network interface-specific cluster command. See Appendix B for a list of available commands.
/? or /HELP  Displays cluster command line options and syntax.

Practice: Administering a Cluster Using Cluster Administrator


Opening Cluster Administrator

1. From either node in a cluster, open the Windows 2000 Start menu, point to Programs, point to Administrative Tools, and click Cluster Administrator.
2. Type the cluster's name, Mycluster, or a single period (.) to specify the current cluster, and then click OK. (Using the period notation for the cluster name is supported only when Cluster Administrator is running on a node in the cluster.) Cluster Administrator will open and connect to the current cluster.

Moving a Group to Another Node


1. From the Windows 2000 Start menu, point to Programs, point to Administrative Tools, and click Cluster Administrator.
2. If NodeA is not already expanded, click + to expand it.
3. Click Active Groups. Verify that the default Cluster Group is listed.
4. In the left pane, double-click Groups.
5. In the left pane, right-click Cluster Group and click Move Group. The group will be moved from NodeA to NodeB. Verify that the Owner column has been updated in Cluster Administrator and that the group now appears under NodeB's Active Groups.
6. Repeat the process to return the Cluster Group to NodeA.

Changing the Size of the Quorum Resource


1. In the left pane, right-click the cluster's name, MYCLUSTER, and click Properties.
2. Click the Quorum tab.
3. In the Reset Quorum Log field, type 128 and click OK.

Practice: Administering a Cluster Using Cluster.exe


In this practice, you will use Cluster.exe to perform simple administrative tasks such as renaming the cluster, moving a
resource group to another node, setting the maximum size of the Quorum log, pausing a node, and resuming a node.

Verifying That All Groups and Resources Are on NodeA


1. From the Windows 2000 Start menu, point to Programs, point to Administrative Tools, and click Cluster Administrator.
2. Click on the Groups folder in the tree. You will see a list of all the cluster groups in the right pane. Verify that the owner for each group is NodeA. At this point, the only group present is the Cluster Group.
3. If any group is owned by NodeB, right-click on the group and click Move Group to move the group to NodeA. Repeat this process until all groups are on NodeA.
4. Leave Cluster Administrator open for the remainder of this practice.

Moving a Resource Group to Another Node


1. Switch to the Windows 2000 command prompt window, type cluster GROUP "Cluster Group" /MOVE:NODEB at the command prompt, and press Enter. You should see the following message displayed:
Moving resource group `Cluster Group'...
Group            Node     Status
---------------- -------- --------
Cluster Group    NODEB    Pending
2. Switch to Cluster Administrator and click on the plus sign to expand the Groups folder. Notice that the owner of Cluster Group is now NodeB.

Modifying the Maximum Quorum Log Size


1. Switch to the Windows 2000 command prompt window, type cluster /QUORUM /MAXLOGSIZE:256 at the command prompt, and press Enter. This command will change the maximum allowed size of the Quorum log from the current value of 128 KB to 256 KB.
2. Switch to Cluster Administrator, right-click on MYCLUSTER, and click Properties.
3. In the MYCLUSTER Properties dialog box, click the Quorum tab. The Reset Quorum Log At field should show 256 KB.
4. Type 128 in the Reset Quorum Log At field and click OK. This will return the maximum Quorum log size to its original value.

Pausing and Resuming a Node


1. Switch to the Windows 2000 command prompt window, type cluster NODE NODEA /PAUSE at the command prompt, and press Enter. A message will indicate that NodeA has been paused.
2. Switch to Cluster Administrator. You should see an icon with an exclamation point in a yellow triangle on NodeA, indicating that it has been paused.
3. Switch to the Windows 2000 command prompt window, type cluster NODE NODEA /RESUME at the command prompt, and press Enter. A message will indicate that NodeA has resumed.
4. Switch to Cluster Administrator. The icon indicating that NodeA was paused should no longer be present.

3.1.3 Configuring Resources, Resource Groups, and Virtual Servers

Cluster Resource Types


Resources are categorized by type. Several default types are provided with Cluster Service. These are associated with
resource DLLs. If you want to deploy a cluster-unaware application, the two generic resource types might meet the needs
of your application. However, if your application cannot interact with the cluster using these resource types, you must
create a custom resource DLL for the application. This will effectively make a cluster-unaware application cluster-aware.
Not all applications can support a custom resource DLL. But if you can develop a resource DLL for an application, you
will have a new resource type specific to that application. (The procedure for creating custom resource DLLs is outside the
scope of this training kit.)

Standard Resource Types


A number of standard resource types are available for implementing applications and services. In addition, you can use
specific types when you deploy certain applications and services such as Dynamic Host Configuration Protocol (DHCP)
and Windows Internet Name Service (WINS).
The following are standard resource types that you can use when you configure applications on a cluster:
Physical Disk This resource type is for managing shared drives on your cluster. Because data corruption can
occur if more than one node has control of the drive, the Physical Disk type allows you to configure which node
has control of the resource at a given time.
DHCP and WINS Cluster Service provides direct support for both the DHCP and WINS services. Using the
DHCP resource type, you can implement the DHCP service on your cluster. Likewise, using the WINS resource
type, you can install WINS for client use. In both cases, you can install the databases associated with these
services on a shared drive to support failover from one node to another. For complete information about
implementing these services, see Chapter 6.
Print Spooler Using this resource type, you can configure your cluster to support network printers. Only network
printers are supported on the cluster because a locally connected printer will not fail over. You can implement
multiple print spoolers on a cluster, but no more than one can appear in a given resource group. Clients can access
the clustered network printer using standard network names or IP addresses.
If a Print Spooler resource fails over, the document that was being printed at the time will start over on the other node. If a
Print Spooler resource is taken offline or is manually moved, Cluster Service will attempt to first finish any queued print
jobs. Any documents that are currently spooling will be discarded. They must be resubmitted once the resource group
finishes failing over to the other node.
In addition to creating a Print Spooler resource, you must ensure that each node in the cluster has the appropriate ports and
drivers configured for the printer. A complete practice illustrating how to implement clustered print spoolers is provided
later in this chapter.
File Share Using this resource type, you can cluster drive access points if the cluster is acting as a file server.
There are three File Share resource types:
Basic A basic File Share resource provides high availability to a single folder using a standard network share name.
Share subdirectories The File Share resource can be used to cluster a number of folders and their subfolders as shared subdirectories. This provides an easy way to increase availability for large numbers of folders on a clustered file server.
DFS root The File Share resource can also be used to cluster a distributed file system root folder (DFS
root). Like the other File Share resource types, a DFS root can be accessed by using a network name and
the associated IP address. Therefore, dependent Network Name and IP Address resources must be
associated with the clustered DFS root. Note, however, that you cannot implement fault tolerant DFS
roots on a cluster.
IP Address A number of cluster implementations require IP addresses. The IP Address resource type is used for
this purpose. Typically, the IP Address resource is used with a Network Name resource in order to create a virtual
server.
Network Name This resource type is used to assign a name to a resource on the cluster. This is typically
associated with an IP Address resource type in order to create a virtual server. Many applications and services that
you might want to cluster require a virtual server.
Generic Application When you implement an application that is not cluster-aware, you can use this resource type
to provide basic clustering capabilities. If the application qualifies to be clustered and can support being
terminated and restarted as the result of a failover, the Generic Application type might be all that is required to
increase the application's availability. If the application is not compatible with the generic type, you might
need to implement a cluster resource DLL in order for the application to support Cluster Service.
When you implement a cluster-unaware application using the Generic Application resource type, you must verify that the
application can run from both nodes in the cluster. This includes installing copies of the application on each node. If you
want the application to support failing over, you must configure it to use a shared disk for data storage. In this way, if the
application fails over to the other node, it can still access the required data.
An alternative to installing the application on each node is to install the application to a shared disk. While this
implementation offers the benefit of using less drive space (because it does not have to be installed twice), it does not
support rolling upgrades. If you intend to perform rolling upgrades to the application, the application will need to be
installed locally on each node.
Generic Service This is similar to the Generic Application resource type. If you intend to support a cluster-unaware service, you can use the Generic Service resource type for basic cluster functionality. This resource type will provide only the most fundamental level of clustering services. If your service requires advanced clustering support, you must develop and use a custom resource DLL for the service.
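Because the two generic resource types come up repeatedly in this lesson, here is a hedged Cluster.exe sketch that creates one resource of each type; the resource and group names and the application path are placeholders, and the private property names (CommandLine, CurrentDirectory, ServiceName) correspond to the resource-specific properties listed later in Table 4.4:
cluster MYCLUSTER resource "My App" /create /group:"Web Group" /type:"Generic Application"
cluster MYCLUSTER resource "My App" /priv CommandLine="W:\App\myapp.exe" CurrentDirectory="W:\App"
cluster MYCLUSTER resource "My Service" /create /group:"Web Group" /type:"Generic Service"
cluster MYCLUSTER resource "My Service" /priv ServiceName=MyService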

Cluster Resource Dependencies


When you configure a cluster resource, you often need to implement dependencies for the resource. A resource with
dependencies requires additional resources in order to operate correctly. All the resources associated with the application or
service must be configured in the same resource group. Cluster Service brings resources online, or takes them offline,
based on the dependencies you specify. For example, if an application is dependent on Network Name and IP Address
resources, Cluster Service ensures that these resources are available before the application starts. Table 4.3 lists common
resources and their dependencies.
Table 4.3 Common Resource Dependencies
Resource: Dependencies (Required or Recommended)
DHCP Service: Physical Disk or other storage class device (on which the files are located); IP Address (for client access to the DHCP server); Network Name
Distributed Transaction Coordinator: Physical Disk or other storage class device; Network Name
File Share: Network Name (for a file share that is configured as a DFS root). A file share that is not DFS has no required dependencies.
Generic Application: None
Generic Service: None
IIS Server Instance: IP Address (that corresponds to the virtual root)
IP Address: None
Message Queuing: Physical Disk or other storage class device; Network Name (so that remote clients can access it)
Network Name: IP Address (that corresponds to the name)
Physical Disk: None
Print Spooler: Physical Disk or other storage class device; Network Name (so that remote clients can access it)
WINS Service: Physical Disk or other storage class device; IP Address (for client access to the WINS server); Network Name

Resource-Specific Properties
In addition to the standard properties that each resource type includes, such as name and description, specific properties
might need to be configured. Table 4.4 lists resource-specific properties.
Table 4.4 Resource-Specific Properties
Resource: Property
DHCP Service: DHCP database file path; DHCP database files backup path; Audit log file location
Distributed Transaction Coordinator: None
File Share: Access permissions; Simultaneous user limit; Share name and comment; Path
Generic Application: Command line; Current directory; Use network name for computer name; Whether the application can interact with the desktop
Generic Service: Service name; Startup parameters; Use network name for computer name
IIS Server Instance: Service for this instance (FTP or WWW); Alias used by the virtual root
IP Address: IP address; Subnet mask; Network parameters; NetBIOS option
MSMQ Server: None
Network Name: Computer name
Physical Disk: Drive to be managed (cannot change once the resource has been configured)
Print Spooler: Path for the print spooler folder; Job completion time-out
WINS Service: Path to WINS database; Path to WINS backup database

Cluster Resource Groups


Resources must be organized into groups, called resource groups, which are managed by Cluster Service. In addition to the
general properties such as name, description, and preferred owner, groups also have failover and failback properties.
Together, these properties control how the resource group and the associated application or service respond when a node is taken offline.

Failover Policy
The failover policy for a group is set using the Failover tab of the group's property sheet. You can set the Failover
Threshold and Failover Period properties based on your needs. The Failover Threshold specifies the number of times the
group can fail within the number of hours specified by the Failover Period property. If the group fails more than the
threshold value, Cluster Service will leave the affected resource within the group offline. For example, if a group Failover
Threshold is set to 3 and its Failover Period is set to 8, Cluster Service will fail over the group up to three times within an
eight-hour period. The fourth time a resource in the group fails, Cluster Service will leave the resource in the offline state
instead of failing over the group. All other resources in the group will be unaffected.
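The same policy can also be set from the command line with Cluster.exe; a minimal sketch matching the example above, assuming a group named Web Group:
cluster MYCLUSTER group "Web Group" /prop FailoverThreshold=3
cluster MYCLUSTER group "Web Group" /prop FailoverPeriod=8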

Failback Policy
By default, resource groups are not configured to fail back to the original node. Instead, after a failover, the group remains
on the second node until you manually move the group to the appropriate node. If you want a group to run on a preferred
node and return to that node after a failover, you must implement a failback policy for the group. You can specify whether
the group should fail back immediately after the original node comes back online or at a specified time during the day. For
example, you might want to fail back a group only during non-business hours to minimize the impact on clients. In order
for a group to fail back to a specific node, you must set the Preferred Owners property of the group.

Practice: Creating a File Share Resource


In this practice, you will configure a File Share resource and associated group and then manually bring the group online.

Creating a Group
1. Open Cluster Administrator. The Open Connection To Cluster dialog box appears.
2. Type the name of the cluster (in this case, MYCLUSTER) and click Open.
3. Right-click Groups, point to New, and click Group. The New Group Wizard will start.
4. Type Cluster Printer in the Name box.
5. Type Group For Printer Resources in the Description box.
6. Click Next.
7. In the Preferred Owners dialog box, add both nodes to the Preferred Owners list.
8. Click Finish. A message box will appear stating that the group was created successfully.
9. Click OK.

Transferring a Resource
1. Open the Resources folder.
2. Right-click Disk W:, and from the popup menu that appears, click Change Group. A listing with all of the available groups in the cluster will appear.
3. Click the Cluster Printer group. A message box will appear asking if you are sure you want to change the group.
4. Click Yes. The Disk W: resource will be displayed as part of the Cluster Printer group. Having the disk resource as part of the group will allow you to add resources to the Cluster Printer group that have a dependency on a disk resource.

Creating a File Share Resource


Before you begin this practice, create a Test folder on drive W.
1. Click the Groups folder.
2. Right-click Cluster Printer, point to New, and click Resource.
3. In the New Resource dialog box, type in the information below:
   Name: Test Share A
   Description: Test file share
   Resource Type: Choose File Share
   Group: Choose Cluster Printer
4. Click Next.
5. Both nodes should appear in the Possible Owners list. If they do not, add them to the list.
6. Click Next.
7. In the Dependencies dialog box, add the Disk W: resource to the Resource Dependencies list and click Next.
8. In the File Share Parameters dialog box, type in the following information:
   Share Name: TestShareA
   Path: W:\Test
   Comment: Test share for the cluster
   Specifying the path will not automatically create the folder, so the Test folder must already exist on the W: drive before this step can be successful.
9. Click Finish. A message box will appear stating that the file share was created successfully.
10. Click OK.

Bringing the Resources Online


1. Right-click Cluster Printer, and click Bring Online.
2. Close Cluster Administrator.

Virtual Server Overview


When an application is installed on a cluster, clients access the application as they would any normal server on the
network. However, because the physical server itself can potentially change as the result of a failover, Cluster Service
implements virtual servers for client access. A virtual server consists of a resource group that includes a dedicated, and
unique, network name and IP address. Each virtual server therefore has its own failover and failback policy (as described
earlier in this lesson). The virtual server also consists of one or more resources associated with the application being
hosted. As a result, in the event of a failover, all the resources associated with the virtual server are moved to the other
node in the cluster. Cluster Service reassigns the network name and IP address to the surviving node, and client requests
continue to be sent to this virtual server. The client itself never needs to know which node is currently hosting the
application.
A virtual server has the same basic characteristics as a physical server on the network, including the following:
It allows clients access to network resources, such as file and print shares.
It appears on the network as a normal, physical server.
It has both a network name and IP address for client access.

Practice: Creating a Virtual Server


Creating a Group
1. Open Cluster Administrator.
2. In the Open Connection To Cluster dialog box, type the name of the cluster (in this case, MYCLUSTER) and click Open.
3. Right-click Groups, point to New, and click Group.
4. Type Virtual Server in the Name box.
5. Type Group for virtual server in the Description box.
6. Click Next.
7. In the Preferred Owners dialog box, add NodeA to the Preferred Owners list.
8. Click Finish. A message box will appear stating that the group was created successfully.
9. Click OK.

Creating an IP Address Resource


1. In Cluster Administrator, click the Groups folder.
2. Right-click Virtual Server, point to New, and click Resource.
3. In the New Resource dialog box, type in the information below:
   Name: Server IP Address
   Description: IP address of virtual server
   Resource Type: Choose IP Address
   Group: Choose Virtual Server
4. Click Next.
5. Both nodes should appear in the Possible Owners list. If they do not, add them to the list.
6. Click Next.
7. In the Dependencies dialog box, the Resource Dependencies list should be blank. Click Next.
8. In the TCP/IP Address Parameters dialog box, type in the following information:
   Address: 192.168.0.2
   Subnet Mask: 255.255.255.0
   Network: Public Cluster Connection
   Do not select the Run This Resource In A Separate Resource Monitor check box. You can leave Enable NetBIOS For This Address selected. By default, NetBIOS calls can be made over the TCP/IP connection.
9. Click Finish. A message box will appear stating that the IP Address resource was created successfully.
10. Click OK.

Creating a Network Name Resource


1. Click the Groups folder.
2. Right-click Virtual Server, point to New, and click Resource.
3. In the New Resource dialog box, type in the information below:
   Name: Network Name
   Description: Network name for virtual server
   Resource Type: Choose Network Name
   Group: Choose Virtual Server
   Do not select the Run This Resource In A Separate Resource Monitor check box.
4. Click Next.
5. NodeA should appear in the list of Possible Owners. If it does not, add it to the list.
6. Click Next.
7. In the Dependencies dialog box, add the Server IP Address resource to the Resource Dependencies list and click Next.
8. In the Network Name Parameters dialog box, type CLUSTERSVR.
9. Click Finish. A message box will appear stating that the Network Name resource was created successfully.
10. Click OK.

Practice: Cluster Service Properties


Several properties for resources and groups determine the actions that occur during a failover or failback. To set these
properties, you can use Cluster Administrator or the Cluster.exe command-line utility. This practice will introduce several
of the properties that are important in configuring and monitoring the failover and failback processes.

Viewing Properties for a Cluster Service Group


1. From the Windows 2000 Start menu, point to Programs, point to Administrative Tools, and then click Cluster Administrator.
2. If needed, expand the My Cluster tree in the left pane.
3. Expand the Groups folder and then click on the Cluster Group in that folder.
4. Verify that the Cluster Group and all of its resources are online on the node you are using for this practice.
5. Right-click on Cluster Group and click Properties. View the settings on the Failover and Failback tabs.
6. From the Windows 2000 Start menu, click Run, and then type command in the Open field. Click OK to open a command window.
7. At the command prompt, type cd\winnt\cluster, and then press Enter.
8. At the command prompt, type cluster group "Cluster Group" /prop, and then press Enter. You should see a result similar to the following:
Listing properties for `cluster group':
T   Resource Group   Name                  Value
--- ---------------- --------------------- -----------------------
SR  Cluster Group    Name                  Cluster Group
S   Cluster Group    Description
D   Cluster Group    PersistentState       1 (0x1)
D   Cluster Group    FailoverThreshold     10 (0xa)
D   Cluster Group    FailoverPeriod        6 (0x6)
D   Cluster Group    AutoFailbackType      0 (0x0)
D   Cluster Group    FailbackWindowStart   4294967295 (0xffffffff)
D   Cluster Group    FailbackWindowEnd     4294967295 (0xffffffff)
D   Cluster Group    LoadBalState          1 (0x1)
The following are definitions for some of these group properties. (See Appendix B for more information on resource group
properties and settings.)
PersistentState This property holds the last known state of a group or resource. When it is set to True, the group
or resource is online. When it is set to False, the group or resource is offline.
FailoverThreshold This property specifies the number of times that Cluster Service will attempt to fail over a
group before it decides that the group cannot be brought online anywhere in the cluster.
FailoverPeriod This property specifies the interval, in hours, during which Cluster Service will attempt to fail
over a group.
AutoFailbackType This property specifies whether a Cluster Group is allowed to fail back. The
ClusterGroupPreventFailback (0) setting prevents failback, and the ClusterGroupAllowFailback (1) setting allows
failback.
FailbackWindowStart This property specifies the start time, on a 24-hour clock, for a group to fail back to its
preferred node. You can set values from 0 (midnight) to 23 (11:00 P.M.) in local time for the cluster. For
immediate failback, you must set both FailbackWindowStart and FailbackWindowEnd to -1.
FailbackWindowEnd This property specifies the end time, on a 24-hour clock, for a group to fail back to its
preferred node. You can set values from 0 (midnight) to 23 (11:00 P.M.) in local time for the cluster. For
immediate failback, you must set both FailbackWindowStart and FailbackWindowEnd to -1.

Setting Resource Group Failback Properties


1. At the command prompt, type cluster group "Cluster Group" /prop AutoFailbackType = 1, and then press Enter. This will set the Cluster Group to allow failback after a failover occurs.
2. At the command prompt, enter the following commands to set the Cluster Group to fail back only between 8 A.M. and 6 P.M.:
cluster group "Cluster Group" /Prop FailbackWindowStart = "8"
cluster group "Cluster Group" /Prop FailbackWindowEnd = "18"
3. In Cluster Administrator, right-click on Cluster Group, and then click Properties. On the Failback tab, you should see that failback is allowed between the 8th and 18th hours, as seen in Figure 14. Click OK to close the Cluster Group Properties dialog box.
4. At the command prompt, enter the following commands to set the Cluster Group to fail back immediately:
cluster group "Cluster Group" /Prop FailbackWindowStart = "-1"
cluster group "Cluster Group" /Prop FailbackWindowEnd = "-1"
5. In Cluster Administrator, right-click on Cluster Group, and then click Properties. On the Failback tab, you should see that failback is set to occur immediately.

Figure 14 The Failback tab of the Cluster Group Properties window.


6. Click the option to prevent failback and then click OK to close the Cluster Group Properties dialog box.

Viewing Properties for a Cluster Service Resource


At the command prompt, type cluster resource "Cluster IP Address" /prop, and then press Enter. You should see a result
similar to the following:
Listing properties for `Cluster IP Address':
T   Resource             Name                      Value
--- -------------------- ------------------------- -----------------------
SR  Cluster IP Address   Name                      Cluster IP Address
S   Cluster IP Address   Type                      IP Address
S   Cluster IP Address   Description
S   Cluster IP Address   DebugPrefix
D   Cluster IP Address   SeparateMonitor           0 (0x0)
D   Cluster IP Address   PersistentState           0 (0x0)
D   Cluster IP Address   LooksAlivePollInterval    5000 (0x1388)
D   Cluster IP Address   IsAlivePollInterval       60000 (0xea60)
D   Cluster IP Address   RestartAction             2 (0x2)
D   Cluster IP Address   RestartThreshold          3 (0x3)
D   Cluster IP Address   RestartPeriod             900000 (0xdbba0)
D   Cluster IP Address   RetryPeriodOnFailure      4294967295 (0xffffffff)
D   Cluster IP Address   PendingTimeout            180000 (0x2bf20)
D   Cluster IP Address   LoadBalStartupInterval    300000 (0x493e0)
D   Cluster IP Address   LoadBalSampleInterval     10000 (0x2710)
D   Cluster IP Address   LoadBalAnalysisInterval   300000 (0x493e0)
D   Cluster IP Address   LoadBalMinProcessorUnits  0 (0x0)
D   Cluster IP Address   LoadBalMinMemoryUnits     0 (0x0)
The following are definitions for some of these resource properties. (See Appendix B for more information on resource
properties and settings.)
LooksAlivePollInterval This property specifies how often, in milliseconds, Cluster Service should poll a resource to determine whether it appears operational. If this property is not set or is set to -1 for a resource, the default LooksAlivePollInterval property for the resource type associated with the resource is used. If the property is set to 0, the resource will not be polled to see whether it is operational.
IsAlivePollInterval This property specifies how often, in milliseconds, Cluster Service will poll a resource to determine whether it is operational. If this property is not set or is set to -1 for a resource, the default IsAlivePollInterval property for the resource type associated with the resource is used. This property cannot be set to 0.
RestartAction This property specifies the action to perform if a resource fails. You can use one of the following
settings:
ClusterResourceDontRestart (0) Do not restart after a failure.
ClusterResourceRestartNoNotify (1) Attempt to restart the resource after a failure. If the restart threshold is
exceeded by the resource within its restart period, Cluster Service will not attempt to fail over the group to
another node in the cluster.
ClusterResourceRestartNotify (2) Attempt to restart the resource after a failure. If the restart threshold is
exceeded by the resource within its restart period, Cluster Service will attempt to fail over the group to another
node in the cluster. This is the default setting.
Unless the RestartAction property is set to ClusterResourceDontRestart, Cluster Service will attempt to restart a failed
resource.
RestartThreshold This property specifies the number of restart attempts that will be made on a resource before
Cluster Service initiates the action specified by the RestartAction property. These restart attempts must also be
made within the time interval specified by the RestartPeriod property. Both the RestartPeriod and the
RestartThreshold properties are used to limit restart attempts.
RestartPeriod This property specifies the amount of time, in milliseconds, during which restart attempts will be
made on a resource. The number of attempts allowed within a RestartPeriod is determined by the
RestartThreshold setting. Both the RestartPeriod and the RestartThreshold properties are used to limit restart
attempts. The RestartPeriod property is reset to 0 once the interval setting is exceeded. If no value is specified for
RestartPeriod, the default value of 900000 is used.
PendingTimeout This property specifies the amount of time, in milliseconds, within which a resource in a Pending Online or Pending Offline state must resolve its status before Cluster Service fails the resource or puts it offline. The default value is 180000 (three minutes).
PendingTimeout has the following relationship with RestartPeriod and RestartThreshold:
RestartPeriod >= RestartThreshold x PendingTimeout
For example, with the default RestartThreshold of 3 and the default PendingTimeout of 180000, RestartPeriod should be at least 3 x 180000 = 540000 milliseconds; the default RestartPeriod of 900000 satisfies this.
RetryPeriodOnFailure This property specifies the amount of time, in milliseconds, that a resource will remain in
a failed state before Cluster Service attempts to restart it. Until an attempt is made to locate and restart a failed
resource, the resource will remain in a failed state by default. Setting the RetryPeriodOnFailure property allows a
resource to automatically recover from a failure.

Setting Resource Failover Properties


1. Enter the following commands at the command prompt to change the restart properties for the IP Address resource:
cluster resource "Cluster IP Address" /Prop RestartThreshold = 5
cluster resource "Cluster IP Address" /Prop RestartPeriod = 1000
cluster resource "Cluster IP Address" /Prop PendingTimeout = 50
2. In Cluster Administrator, right-click on the Cluster IP Address resource located in the Cluster Group, and then click Properties. On the Advanced tab, you should see the RestartThreshold, RestartPeriod, and PendingTimeout properties set to the new values you entered, as shown in Figure 15.

Figure 15 The Advanced tab of the Cluster IP Address Properties window.


3. Make sure that the resource is set to restart after a failure, and set the Restart Threshold back to its original value of 3.
4. Set the Restart Period back to its original value of 900000 milliseconds.
5. Set the Pending Timeout field back to its original value of 180000 milliseconds.
6. Close the Cluster IP Address Properties dialog box to save the new settings.

3.2 Creating an ACI Group and basic resources

Inserting the ACI components and the required applications, such as Corba and Versant, involves two basic steps:
creating an ACI group
creating the basic resources in the cluster
Resource: Dependencies (Required)
Services for ACI: Physical Disk or other storage class device (on which the files are located); IP Address (for client access to the ACI server); Network Name
Table 4.3 Common Resource Dependencies

Note

Physical Disk This resource type is for managing shared drives on your cluster. Because data corruption can occur if
more than one node has control of the drive, the Physical Disk type allows you to configure which node has control of
the resource at a given time.
IP Address A number of cluster implementations require IP addresses. The IP Address resource type is used for this
purpose. Typically, the IP Address resource is used with a Network Name resource in order to create a virtual server.
Network Name This resource type is used to assign a name to a resource on the cluster. This is typically associated
with an IP Address resource type in order to create a virtual server. Many applications and services that you might
want to cluster require a virtual server.

3.2.1 Creating a Group

1. Open Cluster Administrator.
2. In the Open Connection To Cluster dialog box, type the name of the cluster (in this case, MYCLUSTER) and click Open.
3. Right-click Groups, point to New, and click Group.
4. Type Virtual Server in the Name box.
5. Type Group for virtual server in the Description box.
6. Click Next.
7. In the Preferred Owners dialog box, add NodeA to the Preferred Owners list.
8. Click Finish. A message box will appear stating that the group was created successfully.
9. Click OK.
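As an alternative to the wizard, the group can be created from the command line; a minimal sketch, assuming the cluster name MYCLUSTER used above:
cluster MYCLUSTER group "Virtual Server" /create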

3.2.2 Creating a Physical Disk Resource


A Physical Disk resource is required for applications to access a shared cluster disk. You must create a Physical Disk
resource before a shared disk will be usable in the cluster. At least one Physical Disk resource is created when you first
install and configure the cluster using the Cluster Service Configuration Wizard. This disk resource, called the Quorum
resource, includes the Quorum disk. You should create additional Physical Disk resources only for disks that are accessible
by all cluster nodes and not associated with existing Physical Disk resources.
Physical Disk resources can be created only for entire physical drives, not for partitions or logical drives. Only an entire
physical drive can be failed over because you cannot independently fail over drive partitions on the same disk. Thus, a
Physical Disk resource created on a partitioned drive will be associated with the entire disk, regardless of how many
partitions and drive letters are formed on it. You can use Cluster Administrator or Cluster.exe to create a new Physical Disk
resource.

Creating a Physical Disk Resource Using Cluster Administrator


You can create a new Physical Disk resource using Cluster Administrator if you have added or replaced one or more disks
in your cluster. Since only one Physical Disk resource can be created for each physical disk in your cluster, the resource
can designate multiple logical drives if they are partitions of the same physical disk. Also, because the Physical Disk
resource is the only default resource type that can operate as a Quorum resource for the cluster, many other resource types
require a Physical Disk resource as a dependency.
Table 5.2 describes the properties of the Physical Disk resource in Cluster Administrator.
Table 5.2 Physical Disk Resource Properties
Property: Description
Name: Required. Specifies the name of the resource.
Description: Optional. Describes the resource.
Possible Owners: Required. Specifies which nodes own the resource. If the Quorum resource resides on the disk, all nodes must be owners.
Required Dependencies: None.
Disk: Required. Specifies the drive letter or letters for the Physical Disk resource.
Once created, the Physical Disk resource can be brought online, used as a dependency, or otherwise controlled using the
Resource Management functions. It will appear as a Cluster resource in Cluster Administrator and in Cluster.exe. You can
view and set the Physical Disk resource properties by using the Properties page, shown in Figure 5.1.

Figure 5.1 Physical Disk resource properties page

Creating a Physical Disk Resource Using Cluster.exe


Creating a new Physical Disk resource using Cluster.exe is more tedious than doing so using Cluster Administrator. The
only situation in which you might want to do this is for a script that will automatically create the resource at a remote site.
To create a new disk resource using Cluster.exe, you use the /Create option. You must specify all required parameters for
the resource before it can be brought online. The /Create option allows you to create resources that will be in an incomplete
state. You must set additional disk resource properties as appropriate using subsequent commands.
For example, here's the command sequence for adding a Physical Disk resource. (Note that the log entry lines shown here
have been wrapped because of space constraints in this book. The lines do not normally wrap.)
CLUSTER mycluster RESOURCE mydisk /Create /Group:mygroup /Type:"Physical Disk"
CLUSTER mycluster RESOURCE mydisk /PrivProp Drive="W:"
You must include the group name and resource type when you create the resource. Once the resource is created with valid
parameters, it can be brought online.
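Continuing the example, and assuming the commands above succeeded, the resource can then be brought online from the command line:
CLUSTER mycluster RESOURCE mydisk /Online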

Physical Disk Resource Private Properties


Physical Disk resources have private properties that determine settings and status for the disk on which they're created.
These private properties are useful for monitoring and troubleshooting the resource, and are described below.

Drive
The Drive property specifies the drive letter for the Physical Disk resource. If you're using the Drive property and multiple
drive letters are associated with the disk, you must set the Drive property to include all of the drive letters. You must also

make sure that the assigned drive letter does not conflict with existing drive letters anywhere in the cluster, including each
node's local drives.

Signature
The Signature property specifies an identifier for the disk. It is a DWORD value with a range from 0 to 0xFFFFFFFF.
When you create a new disk resource using Cluster.exe, you set the Drive or Signature private property to the drive or
signature of the disk. You must set one of these two properties, but you cannot set both. Neither property can be changed
once the assignment is made and the resource is created. When you create a new disk resource using Cluster Administrator,
you're not required to provide one of these properties. Instead, a list of available disks is displayed for you to choose from.

SkipChkdsk
The SkipChkdsk property determines whether the operating system runs chkdsk on a physical disk before attempting to
mount the disk. A TRUE setting causes the operating system to mount the disk without running chkdsk. A FALSE setting
causes the operating system to run chkdsk first and, if errors are found, take action based on the ConditionalMount
property. However, if both the SkipChkdsk and ConditionalMount values are 0 (FALSE), chkdsk will not run and the disk
will be left offline. Table 5.3 summarizes the interaction between SkipChkdsk and ConditionalMount.
Table 5.3 SkipChkdsk and ConditionalMount Interaction
SkipChkdsk Setting   ConditionalMount Setting   Chkdsk Runs?   Disk Mounted?
FALSE                TRUE                       Yes            If chkdsk reports errors, no. Otherwise, yes.
FALSE                FALSE                      No             No
TRUE                 TRUE                       No             Yes
TRUE                 FALSE                      No             Yes
Because forcing a disk to mount when chkdsk reports errors can result in data loss, you should exercise caution when changing these properties.

ConditionalMount
The ConditionalMount property determines whether a physical disk is mounted, depending on the results of chkdsk. A
TRUE setting prevents the operating system from mounting the disk if chkdsk reports errors. A FALSE setting causes the
operating system to attempt to mount the disk regardless of chkdsk failures. The default is TRUE. Note that if chkdsk has
not run, it will not produce errors, so the operating system will attempt to mount the disk regardless of the
ConditionalMount setting.

MountVolumeInfo
The MountVolumeInfo property stores information used by the Windows 2000 Disk Manager. Cluster Service updates the
property data stored in MountVolumeInfo whenever a disk resource is brought online. Cluster Service also updates
MountVolumeInfo when the drive letter of a disk resource is changed using Disk Manager.
MountVolumeInfo data consists of a byte array organized as follows:
A 16-byte "header" consisting of the disk signature (first 8 bytes) and the number of volumes (second 8 bytes).
One or more 48-byte descriptive entries. (See Table 5.4.)
Table 5.4 MountVolumeInfo Data
Position          Data
First 16 bytes    Starting offset
Second 16 bytes   Partition length
Next 8 bytes      Volume number
Next 2 bytes      Disk type
Next 2 bytes      Drive letter
Last 4 bytes      Padding

Displaying Private Properties


You can display the Physical Disk resource private properties by using Cluster.exe. These properties can help
administrators determine when to run chkdsk against a cluster disk. You can use the following command to display disk
resource private properties:
cluster <clustername> resource "Disk Q:" /priv
Here's an example of the output for a disk resource named Disk Q:
Listing private properties for `Disk Q:':
T  Resource   Name              Value
-- ---------- ----------------- ------------------------
D  Disk Q:    Signature         1415371731 (0x545cdbd3)
D  Disk Q:    SkipChkdsk        0 (0x0)
D  Disk Q:    ConditionalMount  1 (0x1)
B  Disk Q:    DiskInfo          03 00 00 00 ... (264 bytes)
B  Disk Q:    MountVolumeInfo   D3 DB 5C 54 ... (104 bytes)
The values assigned to SkipChkdsk and ConditionalMount determine the behavior of chkdsk. If the MSCS folder on the
Quorum drive is inaccessible or if the disk is found to be corrupt (via checking of the dirty bit), chkdsk will behave as
follows:
If SkipChkdsk = 1 (which means TRUE), Cluster Service will not run chkdsk against the dirty drive and will
mount the disk for immediate use. (Note that SkipChkdsk = 1 overrides the ConditionalMount setting and that
Cluster Service performs the same no matter what the ConditionalMount property is set to.)
If SkipChkdsk = 0 (which means FALSE) and ConditionalMount = 0, Cluster Service fails the disk resource and
leaves it offline.
If SkipChkdsk = 0 and ConditionalMount = 1, Cluster Service runs chkdsk /f against the volume found to be dirty
and then mounts it. This is the current default behavior for Windows 2000 clusters and is the only behavior for
Windows NT 4 clusters.
You can use the following commands to modify these resource private properties (setting each value to 0 or 1 as needed):
cluster clustername res "Disk Q:" /priv SkipChkdsk=0
cluster clustername res "Disk Q:" /priv ConditionalMount=1
You can track disk management changes using the fixed-length values returned by the MountVolumeInfo property.
MountVolumeInfo replaces DiskInfo in Windows 2000.
Here's a sample MountVolumeInfo entry:
D3DB5C540400000000020000000000000000400600000000010000000746000000024
0060000000000FE3F060000000002000000074B00000000800C000000000000400600
00000003000000074C00000040C0120000000000C03F06000000000400000007490000
The signature is D3DB5C54, and the number of volumes is 04000000. The table below describes how to interpret the rest
of the information.
Offset              Partition Length    Volume Number   Disk Type   Drive Letter   Padding
00020000.00000000   00004006.00000000   01000000        07          46             0000
00024006.00000000   00FE3F06.00000000   02000000        07          4B             0000
0000800C.00000000   00004006.00000000   03000000        07          4C             0000
0040C012.00000000   00C03F06.00000000   04000000        07          49             0000
For compatibility in a mixed-node cluster where one node is running Windows 2000 and the other is running Windows NT
4, DiskInfo is retained in the properties of the disk resource.
Whenever a disk resource is brought online, Cluster Service checks the physical disk configuration and updates the
information in MountVolumeInfo and DiskInfo. Corrections are made to the physical disk configuration registry entries as
needed. When changes are made using Disk Manager, any values related to drive letters are updated dynamically.

3.2.3 Creating an IP Address Resource


1. In Cluster Administrator, click the Groups folder.
2. Right-click Virtual Server, point to New, and click Resource.
3. In the New Resource dialog box, type in the information below:
   Name: Server IP Address
   Description: IP address of virtual server
   Resource Type: Choose IP Address
   Group: Choose Virtual Server
4. Click Next.
5. Both nodes should appear in the Possible Owners list. If they do not, add them to the list.
6. Click Next.
7. In the Dependencies dialog box, the Resource Dependencies list should be blank. Click Next.
8. In the TCP/IP Address Parameters dialog box, type in the following information:
   Address: 192.168.0.2
   Subnet Mask: 255.255.255.0
   Network: Public Cluster Connection
   Do not select the Run This Resource In A Separate Resource Monitor check box. You can leave Enable NetBIOS For This Address selected. By default, NetBIOS calls can be made over the TCP/IP connection.
9. Click Finish. A message box will appear stating that the IP Address resource was created successfully.
10. Click OK.
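The equivalent configuration can also be scripted with Cluster.exe; a hedged sketch assuming the names used above, where Address, SubnetMask, and Network are the IP Address resource's private properties:
cluster MYCLUSTER resource "Server IP Address" /create /group:"Virtual Server" /type:"IP Address"
cluster MYCLUSTER resource "Server IP Address" /priv Address=192.168.0.2 SubnetMask=255.255.255.0
cluster MYCLUSTER resource "Server IP Address" /priv Network="Public Cluster Connection"
cluster MYCLUSTER resource "Server IP Address" /online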

3.2.4 Creating a Network Name Resource


1. Click the Groups folder.
2. Right-click Virtual Server, point to New, and click Resource.
3. In the New Resource dialog box, type in the information below:
   Name: Network Name
   Description: Network name for virtual server
   Resource Type: Choose Network Name
   Group: Choose Virtual Server
   Do not select the Run This Resource In A Separate Resource Monitor check box.
4. Click Next.
5. NodeA should appear in the list of Possible Owners. If it does not, add it to the list.
6. Click Next.
7. In the Dependencies dialog box, add the Server IP Address resource to the Resource Dependencies list and click Next.
8. In the Network Name Parameters dialog box, type CLUSTERSVR.
9. Click Finish. A message box will appear stating that the Network Name resource was created successfully.
10. Click OK.
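A matching Cluster.exe sketch for this resource, assuming the names used above; Name is the Network Name resource's private property, and /adddep establishes the dependency on the IP Address resource:
cluster MYCLUSTER resource "Network Name" /create /group:"Virtual Server" /type:"Network Name"
cluster MYCLUSTER resource "Network Name" /adddep:"Server IP Address"
cluster MYCLUSTER resource "Network Name" /priv Name=CLUSTERSVR
cluster MYCLUSTER resource "Network Name" /online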

3.3 Installing Versant on Microsoft Cluster Server

3.3.1 Generic
Overview
Server clusters are configurations of two or more machines that share some disks. A failure in one machine causes the data
services to automatically relocate to the backup machine. If the shared disk is itself a RAID, then the configuration admits
no single point of failure.
Configuration
There are a number of configurations for server clusters. Here the two nodes share a disk array. The secondary node
monitors the primary to determine if it has failed. When it detects a failure, it immediately starts a script that restarts all
the failed services on the secondary node and changes the IP address of the backup node to that of the failed node.

Figure 16 Versant in a Cluster with shared RAID System
(Diagram: a Versant client; two nodes, the Versant server and a redundant node, each with GUI, application logic, the Versant API, and a Versant server process; the database files reside on a shared RAID system, with switchover between the nodes.)


Reliability
This configuration provides the same level of reliability as is available on a RAID configuration.
Availability
The duration of an outage will depend primarily on the time required to restart all the databases, the message oriented
middleware and the services.
Failback
Failback time is expected to be symmetric with failover.

3.3.2 Configuration instructions


3.3.2.1 Installation overview

Install the Versant software on both nodes on a local disk and change the properties of the local Versantd service.
Create a new cluster group with the following resources: shared Physical Disk, Network Name, IP Address, and Generic Service (Versantd).
In this document, the two nodes are called Node1 and Node2. The shared network name is GlobalHostName and the shared drive is called R.

3.3.2.2 Installing Versant on local nodes


Install Versant on both nodes on a local disk drive. Let the installation pick the default directories for binaries, and select the shared disk for the databases. Preconditions are an active connection of the shared disk to the current node and a global IP address and host name.

Change Versantd Service on local nodes


After installing Versant, and before rebooting the machines, the Startup Type for the Versantd service needs to be changed from Automatic to Manual. To change this setting, bring up the Services dialog box from the Control Panel, double-click Versantd, and change the setting as shown below.

Figure 17 Versantd properties, General, (local Computer)

Figure 18 Versantd properties, LogOn, (local Computer)


The service runs under the local System account, and MSCS should be allowed to interact with this service. The new Generic Service resource in MSCS will then start and stop the local Versantd service.

3.3.3 Using Cluster Administrator


Cluster Administrator shows you information about the groups and resources on all of your clusters and specific
information about the clusters themselves. A copy of Cluster Administrator is automatically installed on both cluster nodes
when you install MSCS. For remote administration, you can install separate copies of Cluster Administrator on other
computers on your network. The remote and local copies of Cluster Administrator are identical.

3.3.3.1 Viewing the Default Groups


Every new MSCS cluster includes two types of default resource groups: Cluster Group and Disk Group. These groups
contain the settings for the default cluster and some typical resources that provide generic information and failover
policies.
Cluster Group
The default Cluster Group contains an IP Address resource, a Cluster Name resource, and a Time Service
resource. (This group is essential for connectivity to the cluster).
Disk Group
One Disk Group is created for each disk resource on the shared SCSI bus. A Physical Disk resource is
included in each group.
Do not delete or rename the Cluster Group. Instead, model your new groups on the Cluster Group by modifying the Disk
Groups or creating a new group as we will see in the following sections.

Note: Each cluster needs only one Time Service resource. You do not need, and should not create, a Time Service resource in
each group.

3.3.3.2 Creating a New Versant Group


Use the New Group wizard in Cluster Administrator to add a new group to your cluster. To start the New Group wizard, on
the File menu, click New, and then click Group. For step-by-step instructions, see Cluster Administrator Help.
When you add a new group, the New Group wizard guides you through the two-step process. Before running the New
Group wizard, make sure you have all the information you need to complete the wizard. Use the following table to prepare
to run the wizard.
Information required to run the New Group wizard
The name you will assign to the group
The name that you give the group is used only for administrative purposes. It is not the same as the Network Name, which allows users to access resources through Virtual Servers.
The text you will use to describe the group
The group description appears in the right pane of Cluster Administrator when you select the Groups folder in the left pane.
The name of the node that will be the preferred owner of the new group
The preferred owner is the node on which you prefer each group to run.

For a Versant database group you will need the following resources to be part of this new group.
Information required when adding resources (Resource Type: specific information you must supply)
Physical Disk: Drive letter
IP Address: The IP address (x.x.x.x) and subnet mask (x.x.x.x)
Network Name: The computer name you want to create
Generic Service: Name of the service (Versantd)

3.3.4 Insert Versant on Cluster


This generic service will control the local Versantd services.
When you add a service to the MSCS clustered environment, click Generic Service as the resource type.
After you configure the possible owners and the dependencies, you must supply the name of the service. For example, the
figure below shows the Versantd service name: Versantd. You must type the exact name of the service, because services
are maintained by name. The service name is not case sensitive.


Figure 19 Versantd Properties, General


The Generic Service depends on some other resources; here these are the Physical Disk and the Network Name.

Figure 20 Versantd Properties, Dependencies



Then, the startup parameters for the Versantd service should be as follows:

VERSANT_HOST_NAME=GlobalHostName
VERSANT_IP=10.233.3.33
HOMEDRIVE=R:
HOMEPATH=\db\
VERSANT_DB=R:\DB
VERSANT_DBID=R:\DB
VERSANT_DBID_NODE=GlobalHostName

Important note: Windows NT 4.0 Cluster Server had a bug in the way the startup parameters are passed on to the service;
as a workaround, add an extra parameter at the start and at the end of the parameter list (for example, junk=123). With
the Windows 2000 Advanced Server cluster service this bug is fixed.
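The same parameters can be set from the command line; a sketch, assuming the Generic Service resource is named Versantd (StartupParameters is the standard private property of the Generic Service resource type):

REM Sketch: set the startup parameters on the Generic Service resource
cluster res "Versantd" /priv StartupParameters="VERSANT_HOST_NAME=GlobalHostName VERSANT_IP=10.233.3.33 HOMEDRIVE=R: HOMEPATH=\db\ VERSANT_DB=R:\DB VERSANT_DBID=R:\DB VERSANT_DBID_NODE=GlobalHostName"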

Figure 21 Versantd Properties, Parameters

3.3.5 All Resources in Cluster Administrator


By default, the resource is offline, so you must bring the service online before it successfully operates as an MSCS
resource.
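Bringing the resources online can also be done from the command line; a minimal sketch, assuming the group is called "Database Group" as in Figure 22:

REM Sketch: bring the whole group (or a single resource) online
cluster group "Database Group" /online
cluster res "Versantd" /online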


Figure 22 Cluster Administrator, Database Group


Host file changes
Add a line to the hosts file (C:\WINNT\system32\drivers\etc\hosts) on both nodes, as shown below:
10.233.3.33 GlobalHostName (fictitious address)
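As a sketch, the same line can be appended from a command prompt on each node (back up the file first):

echo 10.233.3.33 GlobalHostName>>C:\WINNT\system32\drivers\etc\hosts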

3.3.6 Set Versant environment variables


The Versant installation sets the following variables and writes the following files:
VERSANT_DBID_NODE in the following files on both nodes:
C:\versant\NT\6_0_1\lib\sysinfo
C:\WINNT\vr060001.ini
VERSANT_HOST_NAME on both nodes:
VERSANT_HOST_NAME=GlobalHostName
This point is to be checked by the cluster administrator.

Creating a new osc-dbid file


You need to create a new osc-dbid file on the shared disk before using the configuration. To create this file, run dbid -N on
the MSCS node that currently owns the physical disk.

3.3.7 Changes in the Registry


To run Versant on MSCS you don't have to change any parameters in the Registry.

3.3.8 Shared Data


During installation and configuration it is not necessary to share any data here.


3.3.9 Verify of Installation and Configuration


A simple way to check the functionality of Versant on the cluster is the following:
Precondition
Versant is installed and configured on the cluster.
One node is active.
A third computer (not a cluster node) belongs to the same domain and has the same version of Versant installed.
From a command prompt on the third computer (logged in as domain user x), run the commands to create a test
database:
makedb -g dbname@globalhostname
createdb -i dbname@globalhostname
Normal flow
From a command prompt on the third computer (logged in as domain user x), run the command
db2tty -d dbname@globalhostname
dbname@globalhostname is a placeholder for your extended database name (with the cluster hostname).
If there is no error output, the installation on this node is OK. Then move the Versant group to the other node and repeat the test.
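Moving the group can be done in Cluster Administrator or, as a sketch with cluster.exe (again assuming the group name "Database Group" from Figure 22):

REM Sketch: move the Versant group to the other node before repeating the test
cluster group "Database Group" /moveto:Node2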
Sample
C:\>makedb -g test1@lion2
VERSANT Utility MAKEDB Version 6.0.0.2.0
Copyright (c) 1989-2000 VERSANT Corporation
C:\>createdb -i test1@lion2
VERSANT Utility CREATEDB Version 6.0.0.2.0
Copyright (c) 1989-2000 VERSANT Corporation
C:\>db2tty -d test1@lion2
C:\>


3.4 Installing Orbix/Corba on Microsoft Cluster Server

3.4.1 Generic
Server clusters are configurations of two or more machines that share some disks. A failure in one machine causes the data
services to automatically relocate to the backup machine. If the shared disk is itself a RAID, then the configuration admits
no single point of failure.
There are a number of configurations for server clusters. Here the two nodes share a disk array. The inactive node
monitors the primary to determine if it has failed. When it detects a failure, it immediately starts a script that restarts all
the failed services on the secondary node and reconnects the shared disk, the global IP address, and the global hostname to
those of the failed node.

[Figure: two-node cluster diagram — Corba clients (GUI, application logic, API) connected to a Corba server process on each node, with switchover to the redundant node and the database files on a shared RAID system]

Figure 23 Corba in a Cluster


Reliability
This configuration provides the same level of reliability as is available on a RAID configuration, because Corba does not
need to save persistent data.

Availability
The duration of an outage of Corba will depend primarily on the time required to restart all Corba services and clients.

Failback and Failover


Failback time is expected to be symmetric with failover.

3.4.2 Configuration instructions


3.4.2.1 Basic Installing Corba on one computer
3.4.2.1.1 Configuration of ORBIX for ACI
In order to allow seamless interaction between multiple ACI CORBA servers and CORBA clients
installed on different machines, their Orbix environment must use a centralized place to store its
configuration. In other words, all the hosts where the ACI servers and clients are installed share
Orbix's configuration domain information. This also greatly reduces the administrative tasks that need
to be performed: changes to the configuration domain can be carried out from any host in the
configuration domain and are visible to all the machines that are configured to use it.
Orbix 2000 offers two types of domain configuration:
- File Based - stores configuration information in a file on the local machine.
- Repository Based - stores configuration information in a centralized Configuration Repository,
which can be easily accessed from multiple machines.
Both of them allow sharing a configuration domain between a number of hosts. However, the File
Based type implies creating configuration files and then making them available to other machines,
either by copying them from the host where the domain was initially created or through a shared
network file system (e.g. Windows Networking). The second type is more flexible and thus preferred:
configuration information is created on one highly reliable and always accessible host, and the Orbix
environments of the other hosts link to the created configuration domain. The configuration
repository approach allows a centralized store of configuration information (such as loaded plug-ins
and initial object references) for all machines running ACI servers. This model is depicted in the
figure below. The configuration repository itself is an NT service that runs on a dedicated host. This
host would also run domain-wide services such as the Naming Service. It can be either a stand-alone
machine or one running ACI servers. A particularly good candidate is the one running the Network
Manager (NM) server, as this server most frequently contacts the Naming Service, where the Domain
Manager (TDM) servers are registered.

[Figure: ACI hosts linked over CORBA transport to the centralized Configuration Repository]

Figure 24 Orbix Environment with Centralized Configuration

Preparing the Orbix 2000 environment for ACI usage is a three-step process. The first step is
executed on every machine destined to run any ACI server, as well as on a dedicated host that runs
the configuration repository and other Orbix services. The second step is performed only on the host
destined to run the configuration repository and other Orbix services. The last step applies to all
machines except the one hosting the configuration repository and Orbix services. These steps include:

3.4.2.1.2 Step 1 - Installation of Orbix from distribution medium


(Follow the Orbix 2000 installer instructions.) If a previous installation exists, proceed as follows:
1. Stop all running Orbix services whose names start with "IT" (Control Panel | Services)
2. Uninstall Orbix 2000
3. Manually delete the previous Orbix 2000 directory (after uninstallation)
4. Use Regedit.exe to remove all branches whose names start with "IT" under
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Control\Service]
...
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet003\Control\Service]
5. Restart the computer
6. Install Orbix 2000

48

3.4.2.1.3 Step 2 - Creation of a configuration domain

This step is to be performed only once.

Open a command prompt window.
Change directory to the following location:
%Orbix-installed-dir%\orbix_art\1.2\bin\
Run the configure.exe utility:
%Orbix-installed-dir%\orbix_art\1.2\bin\configure

The following information appears:

Orbix 2000 Configuration
-----------------------------------------------
(c) Copyright 1993-2000 IONA Technologies PLC. All rights reserved.
Welcome to the Orbix 2000 Configuration application.
This application will walk you through the process of configuring
the following Orbix 2000 components:
Configuration Domain
Locator Daemon
Activator Daemon
Orbix 2000 Services
Configuration Scripts
You can enter input interactively or from a pre-configured file.
Execute 'configure -?' for more details.
Press any key to begin. [continue]:

After confirmation, check the Orbix installation path


Is your IONA product directory E:\Program Files\IONA [yes]:

This will show a selection menu:

Configuration Activities
-----------------------------------------------
Orbix 2000 organizes configuration information into domains that can be
shared across multiple machines. This utility allows you to create a
new domain or build a link to an existing domain running on another
machine.
Do you want to:
[1] Create a new Orbix 2000 domain
[2] Create a link to an existing Orbix 2000 domain
Enter [1]:

Choose to create a new domain.


What should this domain be called [orbix2000]:

Provide ACI_Network for the domain name


Where should configuration files for this domain be placed [E:\Program
Files\IONA\etc\domains]:

Accept the default path provided


Choose Domain Type
-----------------------------------------------
Orbix 2000 currently supports two types of configuration domain:
[1] Configuration Repository Based. This type of domain stores
configuration information in a centralized Configuration
Repository, which can be easily accessed from multiple machines.
Select this type of domain if you expect to share configuration
information across multiple systems.
[2] File Based. This type of domain stores configuration information
in a file on the local machine. It can be useful for smaller
systems that cannot afford the overhead of additional services.
Enter [1]:

Choose Configuration Repository Based for the domain type


Do you want to install the selected services as NT services? [yes]:

Confirm the question to install the selected services as NT services


Configuration Repository Settings
-----------------------------------------------
The Configuration Repository is accessed through a CORBA service,
which makes it available to multiple machines. This service must
listen on a fixed port so that applications can access it at a
well-known location. You may also want to specify the hostname used
to advertise the Configuration Repository. If you do not specify a
hostname, the Configuration Repository will use the system's current
hostname.
What hostname should the Configuration Repository advertise (leave
blank for default):

Leave blank to use the local host's name


What port should the Configuration Repository use [3076]:

Confirm the proposed port number


Where do you want to place databases and logfiles for this domain [E:\Program
Files\IONA\var\ACI_Network]:

Use the default path provided


Use as Default Domain
-----------------------------------------------
Orbix 2000 can designate one domain as being the default domain for
this machine. This domain will be used for applications that are
started without an explicit domain.
Do you wish to designate this domain as the default domain [yes]:

Accept yes to designate this domain as the default domain


Deploy Locator
-----------------------------------------------
The Orbix 2000 Locator service manages persistent servers. You must
deploy this service if you intend to develop servers using PERSISTENT
POAs, or if you intend to use other Orbix 2000 services such as the
Naming Service or Interface Repository.
Do you want to deploy a Locator into this domain [yes]:

Accept yes to deploy a Locator


To ensure interoperability among services of the same type across
domains, a unique name must be provided for the locator.
Locator name: [rts120]:

Use the default (the local host's name).


The Orbix 2000 Locator must listen on a fixed port so that applications
can access it at a well-known location. You may also want to specify
the hostname used to advertise the Locator. If you do not specify a
hostname, the Locator will use the system's current hostname.
What hostname should the Locator advertise (leave blank for default):

Leave blank for default hostname


What port should the Locator use [3075]:

Accept the default port number


Deploy Activator
-----------------------------------------------
The Orbix 2000 Activator service starts applications on a particular
machine. You must deploy an Activator on every machine on which you
intend to use Orbix 2000's automatic server activation feature.
Do you want to deploy an Activator into this domain [yes]:

Choose yes to deploy an Activator


Orbix 2000 Services
-----------------------------------------------
Orbix 2000 offers the following standard CORBA services. Using a
comma-separated list, please select the set of services you wish to
deploy in this domain. (For example: 1,3):
[1] Naming Service
[2] Interface Repository
[3] Event Service
[4] All additional services
[5] No additional services
Services [4]:

Choose 1 to deploy the Naming Service only

52

Client Preparation File
-----------------------------------------------
Orbix 2000 can generate a preparation file that allows you to easily
configure other machines with access to this domain.

Would you like to generate client and service preparation files [yes]:

Accept the preparation files generation


Please enter the name of the client preparation file [E:\Program
Files\IONA\var\ACI_Network\client.prep]:

Use the default path


creating configuration files........................................done.
building Configuration Domain...........done.
deploying Locator daemon.......done.
deploying Activator daemon.....done.
deploying Naming Service.........done.
done
To use this domain, setup your environment as follows:
Run "E:\Program Files\IONA\bin\ACI_Network_env.bat"
to setup your environment for this domain
Run "E:\Program Files\IONA\bin\ACI_Network_java_env.bat"
to setup your java environment for this domain
Run "E:\Program Files\IONA\bin\start_ACI_Network_services.bat"
to start the services associated with the domain
Run "E:\Program Files\IONA\bin\stop_ACI_Network_services.bat"
to stop the services associated with the domain

Instead of using the above batch files (they would have to be run every time the system is restarted),
open the Control Panel | System Properties dialog and set the following system variables (adjust
the path to the Orbix installation directory if necessary):
IT_PRODUCT_DIR=E:\Program Files\IONA (this should already be set by the Orbix installation; please check only)
IT_CONFIG_DOMAINS_DIR=E:\Program Files\IONA\etc\domains
IT_DOMAIN_NAME=ACI_Network
The creation of the Orbix configuration domain ACI_Network is now done. The computer must be
restarted to allow the changes to take effect. If all the ACI servers are intended to run on this machine
(hosting the configuration repository at the same time), this is the only step needed; otherwise
continue with step 3 for the other machines.
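A quick sketch to verify the variables from a new command prompt after the restart (each echo should print the value set above, not the raw %...% string):

echo %IT_PRODUCT_DIR%
echo %IT_CONFIG_DOMAINS_DIR%
echo %IT_DOMAIN_NAME%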

Step 3 - Linking an installed Orbix environment to an existing configuration domain (created in Step 2)

This step is to be repeated on all the machines where the ACI servers are to run. If the given
ACI server is intended to run on the same host as the one where the configuration domain
has been created (in the previous step), omit this step.
Precondition: The machine hosting the configuration repository must be running while
executing this step.
Open a command prompt window.
Change directory to the following location:
%Orbix-installed-dir%\orbix_art\1.2\bin\
Run the configure.exe utility:
%Orbix-installed-dir%\orbix_art\1.2\bin\configure

The following information appears:

Orbix 2000 Configuration
-----------------------------------------------
(c) Copyright 1993-2000 IONA Technologies PLC. All rights reserved.
Welcome to the Orbix 2000 Configuration application.
This application will walk you through the process of configuring
the following Orbix 2000 components:
Configuration Domain
Locator Daemon
Activator Daemon
Orbix 2000 Services
Configuration Scripts
You can enter input interactively or from a pre-configured file.
Execute 'configure -?' for more details.
Press any key to begin. [continue]:

54

After confirmation, check the Orbix installation path


Is your IONA product directory E:\Program Files\IONA [yes]:

This will show a selection menu:

Configuration Activities
-----------------------------------------------
Orbix 2000 organizes configuration information into domains that can be
shared across multiple machines. This utility allows you to create a
new domain or build a link to an existing domain running on another
machine.
Do you want to:
[1] Create a new Orbix 2000 domain
[2] Create a link to an existing Orbix 2000 domain
Enter [1]:

Choose 2 to create a link to an existing Orbix 2000 domain.


What should this domain be called [orbix2000]:

Enter ACI_Network for the domain name


Where should configuration files for this domain be placed [E:\Program
Files\IONA\etc\domains]:

Accept the default path provided


Remote Configuration Repository Settings
-----------------------------------------------
In order to share a Configuration Repository domain, Orbix 2000 must
generate a minimal configuration file that tells applications running
on this machine where the remote Configuration Repository service is
located.
On what host is the remote Configuration Repository running [new_aci_server]:

Enter the hostname where the configuration domain has been created (the host used in step 2).
On what port is the remote Configuration Repository listening [3076]:

Accept the default port setting

55

Where do you want to place databases and logfiles for this domain [E:\Program
Files\IONA\var\ACI_Network]:

Confirm the proposed path


Use as Default Domain
-----------------------------------------------
Orbix 2000 can designate one domain as being the default domain for
this machine. This domain will be used for applications that are
started without an explicit domain.


Do you wish to designate this domain as the default domain [yes]:

Select this domain as the default domain


Deploy Locator
-----------------------------------------------
The Orbix 2000 Locator service manages persistent servers. You must
deploy this service if you intend to develop servers using PERSISTENT
POAs, or if you intend to use other Orbix 2000 services such as the
Naming Service or Interface Repository.
Do you want to deploy a Locator into this domain [yes]:

Enter no; a Locator needs to run only on the host where the configuration domain has been
created.
Deploy Activator
-----------------------------------------------
The Orbix 2000 Activator service starts applications on a particular
machine. You must deploy an Activator on every machine on which you
intend to use Orbix 2000's automatic server activation feature.
Do you want to deploy an Activator into this domain [yes]:

Enter no; an Activator needs to run only on the host where the configuration domain has been
created.
This will produce the following output
creating configuration files........done.
creating link to Configuration Domain....done.
To use this domain, setup your environment as follows:

56

Run "E:\Program Files\IONA\bin\ACI_Network_env.bat"


to setup your environment for this domain
Run "E:\Program Files\IONA\bin\ACI_Network_java_env.bat"
to setup your java environment for this domain

Instead of using the above batch files (they would have to be run every time the system is restarted),
open the Control Panel | System Properties dialog and set the following system variables (adjust
the path to the Orbix installation directory if necessary):
IT_PRODUCT_DIR=E:\Program Files\IONA (this should already be set by the Orbix installation; please check only)
IT_CONFIG_DOMAINS_DIR=E:\Program Files\IONA\etc\domains
IT_DOMAIN_NAME=ACI_Network
The configuration of this machine to use the ACI_Network configuration domain is now complete. The
computer must be restarted to allow the changes to take effect.

3.4.2.2 Installing Corba on both computers


Install the Corba software on both nodes on a local disk (C:). Let the installation pick the default directories.
Each node shall provide its own Corba domain, not a link to a remote one. The names of both domains shall be equal, and all
services will be installed. This part of the installation and configuration is independent of the cluster configuration.
For the configuration details, use the document "Orbix configuration for ACI.doc".
The installed Corba services are:
1. IT config_rep cfr-CorbaDomainName
2. IT locator default-domain
3. IT naming default-domain
4. IT activator default-domain
5. IT ifr default-domain
6. IT event default-domain
Next Step:
For the cluster software you have to change the properties of all local Corba services on both nodes with the Computer
Management tool:

Figure 25 Corba Services configuration


3.4.2.3 Change Corba Services Properties on local nodes


After installing Corba and before rebooting the machines, the Startup Type of the Corba services needs to be changed
from Automatic to Manual. To change this setting, open the Services dialog box from the Control Panel, double-click
each Corba service, and change the setting as shown below.
The services run under the "local System account", and MSCS should be able to interact with them. From now on, the new
Generic Services of the MSCS will start and stop these local Corba services.
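As a quick local smoke test (outside MSCS), the services can be started and stopped manually in dependency order; a sketch using the service names from this section:

REM Start the core Corba services by hand on one node, in dependency order
net start "IT config_rep cfr-ACI_Network"
net start "IT locator default-domain"
net start "IT naming default-domain"
REM Stop them again before handing control over to MSCS
net stop "IT naming default-domain"
net stop "IT locator default-domain"
net stop "IT config_rep cfr-ACI_Network"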
Figure 26 IT activator default-domain Properties.


Figure 27 IT config_rep cfr-ACI_Network Properties.

ACI_Network is a placeholder for your corba domain name


Figure 28 IT activator default-domain Properties.


Figure 29 IT activator default-domain Properties.


Figure 30 IT activator default-domain Properties.


Figure 31 IT activator default-domain Properties.

Note: If the first installation fails, manually deleting the registry entries for the IT services is recommended.

3.4.3 Using Cluster Administrator


3.4.3.1 Introduction
Cluster Administrator shows you information about the groups and resources on all of your clusters and specific
information about the clusters themselves. A copy of Cluster Administrator is automatically installed on both cluster nodes
when you install MSCS. For remote administration, you can install separate copies of Cluster Administrator on other
computers on your network. The remote and local copies of Cluster Administrator are identical.

3.4.3.2 Viewing the Default Groups


Every new MSCS cluster includes two types of default resource groups: Cluster Group and Disk Group. These groups
contain the settings for the default cluster and some typical resources that provide generic information and failover
policies.
Cluster Group
The default Cluster Group contains an IP Address resource, a Cluster Name resource, and a Time Service
resource. (This group is essential for connectivity to the cluster).
Disk Group
One Disk Group is created for each disk resource on the shared SCSI bus. A Physical Disk resource is
included in each group.
Do not delete or rename the Cluster Group. Instead, model your new groups on the Cluster Group by modifying the Disk
Groups or creating a new group, as we will see in the following sections.
Note: Each cluster needs only one Time Service resource. You do not need, and should not create, a Time Service resource in
each group.

3.4.3.3 Creating a New Corba Group


Use the New Group wizard in Cluster Administrator to add a new group to your cluster. To start the New Group wizard, on
the File menu, click New, and then click Group. For step-by-step instructions, see Cluster Administrator Help.
When you add a new group, the New Group wizard guides you through the two-step process. Before running the New
Group wizard, make sure you have all the information you need to complete the wizard. Use the following table to prepare
to run the wizard.
Information required to run the New Group wizard
The name you will assign to the group
The name that you give the group is used only for administrative purposes. It is not the same as the Network Name, which allows users to access resources through Virtual Servers.
The text you will use to describe the group
The group description appears in the right pane of Cluster Administrator when you select the Groups folder in the left pane.
The name of the node that will be the preferred owner of the new group
The preferred owner is the node on which you prefer each group to run.

For a Corba group you will need the following resources to be part of this new group.
Information required when adding resources (Resource Type: specific information you must supply)
Physical Disk: Drive letter
IP Address: The IP address (x.x.x.x) and subnet mask (x.x.x.x)
Network Name: The computer name you want to create
Generic Service: Name of the Corba services

3.4.3.4 Generic Service


This generic service will control the local Corba services.
When you add a service to the MSCS clustered environment, click Generic Service as the resource type.
After you configure the possible owners and the dependencies, you must supply the name of the service. For example, the
figure below shows the IT config_rep cfr-ACI_Network service name: IT config_rep cfr-ACI_Network. You must type the
exact name of the service, because services are maintained by name. The service name is not case sensitive.
The startup order of the Corba services under a cluster configuration follows the real internal dependencies of these
services.
The Corba services are started in the following order:
1. IT config_rep cfr-ACI_Network

2. IT locator default-domain
3. IT naming default-domain
4. IT activator default-domain
5. IT ifr default-domain
6. IT event default-domain
ACI_Network is a placeholder for your corba domain name

3.4.4 Insert Corba on Cluster


Use or create a new Cluster Group with the following resources: Shared Physical Disk, Network Name, IP Address.
Add a Generic Service resource for each Corba service.
In this document both nodes are called Node1 and Node2. The shared network name is GlobalHostName and the
shared drive is called R.
Each Generic Service depends on some other resources; here these are the Physical Disk and the Network Name.

3.4.4.1 Change Corba Services Properties on Cluster


The following pictures show the properties of the IT activator default-domain service. This example can be used as a sample
for the other Corba services.

Figure 32 IT activator default-domain General properties


On the next page, heed the startup order of the dependencies.
The Corba services are started in the following order:
1. IT config_rep cfr-ACI_Network
2. IT locator default-domain
3. IT naming default-domain
4. IT activator default-domain
5. IT ifr default-domain
6. IT event default-domain

E.g.: the IT activator service should be started after the IT naming service (see the sketch below).
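A sketch of expressing this chain with cluster.exe (resource names as listed above; each /adddep call adds one dependency):

REM Sketch: chain the Corba Generic Service resources so they come online in dependency order
cluster res "IT locator default-domain" /adddep:"IT config_rep cfr-ACI_Network"
cluster res "IT naming default-domain" /adddep:"IT locator default-domain"
cluster res "IT activator default-domain" /adddep:"IT naming default-domain"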


Figure 33 IT activator default-domain Dependencies properties

Figure 34 IT activator default-domain Advanced properties


Figure 35 IT activator default-domain Parameter properties

Figure 36 IT activator default-domain Registry properties

3.4.5 All Resources in Cluster Administrator


By default, the resource is offline, so you must bring the service online before it successfully operates as an MSCS
resource.


Figure 37 Cluster Administrator, Group including corba services

3.4.6 Set Corba environment variables


Before you run Corba on MSCS, check the following environment variables.
IT_CONFIG_DIR = c:\Program Files\IONA\etc
IT_CONFIG_DOMAINS_DIR = C:\Program Files\IONA\etc\domains
IT_DOMAIN_NAME = ACI_Network (Your corba domain name)

3.4.7 Changes in the Registry


To run Corba on MSCS you don't have to change any parameters in the Registry.

3.4.8 Shared Data


The folders VAR and DOMAINS are to be placed on the shared disk during the installation.


3.4.9 Verify of Installation and Configuration


A simple way to check the functionality of Corba on the cluster is the following:
Precondition
Corba is installed and configured on the cluster.
One node is active.
A third computer (not a cluster node) belongs to the same domain, has the same version of Corba installed, and
links to the same Corba domain.
Normal flow
From a command prompt on the third computer (logged in as domain user x), start the command
itadmin
then enter
locator show
If the output uses the GlobalHostName, the installation and configuration on this node are OK:
Locator Name: globalhostname/it_locator
Domain Name: globalhostname
Host Name: globalhostname
Then move the Corba group to the other node and repeat the test.
Sample
Microsoft Windows 2000 [Version 5.00.2195]
(C) Copyright 1985-2000 Microsoft Corp.
C:\>itadmin
% locator show
Locator Name: GlobalHostName /it_locator
Domain Name: GlobalHostName
Host Name: GlobalHostName
%


3.5 Installing ACI Process on Microsoft Cluster Server

3.5.1 Generic
Server clusters are configurations of two or more machines that share some disks. A failure in one machine causes the data
services to automatically relocate to the backup machine. If the shared disk is itself a RAID, then the configuration admits
no single point of failure.
There are a number of configurations for server clusters. Here the two nodes share a disk array. The inactive node
monitors the primary to determine if it has failed. When it detects a failure, it immediately starts a script that restarts all
the failed services on the secondary node and reconnects the shared disk, the global IP address, and the global hostname to
those of the failed node.

[Figure: two-node cluster diagram — ACI clients (GUI, application logic, API) connected to an ACI server process on each node, with switchover to the redundant node and the database files on a shared RAID system]

Figure 38 ACI in a Cluster


ACI is a placeholder for NM, DM or EM servers.

Reliability
This configuration provides the same level of reliability as is available on a RAID configuration, because all needed
processes are already configured for MSCS. There is no single point of failure here.

Availability
The duration of an outage of the application will depend primarily on the time required to restart all resources of its group
and the clients (server, client, snmpserver, brass).

Failback and Failover


Failback time is expected to be symmetric with failover.

3.5.2 Configuration instructions


3.5.2.1 Installing ACI process on both computers
Install the ACI software (NM, DM or EM) on both nodes on a local disk (C:). Let the installation pick the default directories.
Each node shall provide its own ACI service for each needed process (NM server, DM server, EM server,
SNMPserver, QD2server, Brass). This part of the ACI installation and configuration is independent of the cluster
configuration, but the node resources (disk, IP address, hostname, Corba and Versant) must be activated first, because they
are needed for the installation. During the installation (NM, DM or EM), the shared data are to be placed on the shared disk.
Shared data are all databases, all logs, and BACKGROUND. The installation of the database is the most important
point here.
After the installation it is recommended to review the files hosts and services under C:\WINNT\system32\drivers\etc.
E.g. for hosts:

127.0.0.1       localhost
218.1.17.83     VERSANDHOST
218.1.17.80     Node1
218.1.17.81     Node2

E.g. for services:

acis-nmtdm       50195/tcp    # ACI NM-TDM server service
acis-dm          50196/tcp    # ACI DM server service (SubNM)
acis-dmtdm       50190/tcp    # ACI DM-TDM server service (or V8.2 DM)
acid-qd2-dmtdm   50191/tcp    # ACI DM-TDM Qd2 DCN server service
acid-snmp-dmtdm  50192/tcp    # ACI DM-TDM SNMP server service
acis-emgen       50193/tcp    # ACI GenEM server service
acid-snmp-emgen  50194/tcp    # ACI GenEM SNMP server service
acis-emxl        50200/tcp    # ACI EM XL server service
acid-snmp-emxl   50201/tcp    # ACI EM XL SNMP server service
acis-emamgw      50202/tcp    # ACI EM AMGW server service
acid-snmp-emamgw 50203/tcp    # ACI EM AMGW SNMP server service
acis-emacc       50204/tcp    # ACI EM ACC server service
acid-snmp-emacc  50205/tcp    # ACI EM ACC SNMP server service
Next Step:
For the cluster software you have to change the properties of the local ACI services on both nodes with the Computer
Management tool:

Figure 39 ACI Service configuration


3.5.2.2 Change ACI Services Properties on local nodes


After installing the ACI application and before rebooting the machines, the Startup Type of the ACI services needs to be
changed from Automatic to Manual. To change this setting, open the Services dialog box from the Control Panel,
double-click each ACI service, and change the setting as shown below.
The services run under the "local System account", and MSCS should be able to interact with them. From now on, the new
Generic Services of the MSCS will start and stop these local ACI services.
Figures of DM as a sample of this step:
Figure 40 Domain manager Properties.


3.5.3 Using Cluster Administrator

3.5.3.1 Introduction
Cluster Administrator shows you information about the groups and resources on all of your clusters and specific
information about the clusters themselves. A copy of Cluster Administrator is automatically installed on both cluster nodes
when you install MSCS. For remote administration, you can install separate copies of Cluster Administrator on other
computers on your network. The remote and local copies of Cluster Administrator are identical.

3.5.3.2 Viewing the Default Groups


Every new MSCS cluster includes two types of default resource groups: Cluster Group and Disk Group. These groups
contain the settings for the default cluster and some typical resources that provide generic information and failover
policies.
Cluster Group
The default Cluster Group contains an IP Address resource, a Cluster Name resource, and a Time Service
resource. (This group is essential for connectivity to the cluster).
Disk Group
One Disk Group is created for each disk resource on the shared SCSI bus. A Physical Disk resource is
included in each group.
Do not delete or rename the Cluster Group. Instead, model your new groups on the Cluster Group by modifying the Disk
Groups or creating a new group, as we will see in the following sections.
Note: Each cluster needs only one Time Service resource. You do not need, and should not create, a Time Service resource in
each group.

3.5.3.3 Creating or Using a Group for the ACI Process


Use the New Group wizard in Cluster Administrator to add a new group to your cluster. To start the New Group wizard, on
the File menu, click New, and then click Group. For step-by-step instructions, see Cluster Administrator Help.
When you add a new group, the New Group wizard guides you through the two-step process. Before running the New
Group wizard, make sure you have all the information you need to complete the wizard. Use the following table to prepare
to run the wizard.
Information required to run the New Group wizard
The name you will assign to the group
The name that you give the group is used only for administrative purposes. It is not the same as the Network Name, which allows users to access resources through Virtual Servers.
The text you will use to describe the group
The group description appears in the right pane of Cluster Administrator when you select the Groups folder in the left pane.
The name of the node that will be the preferred owner of the new group
The preferred owner is the node on which you prefer each group to run.

For a database group you will need the following resources to be part of this new group.
Information required when adding resources (Resource Type: specific information you must supply)
Physical Disk: Drive letter
IP Address: The IP address (x.x.x.x) and subnet mask (x.x.x.x)
Network Name: The computer name you want to create
Generic Service: Name of the ACI service

3.5.3.4 Generic Service


This generic service will control the local ACI service.
When you add a service to the MSCS clustered environment, click Generic Service as the resource type.
After you configure the possible owners and the dependencies, you must supply the name of the service. You must type the
exact name of the service, because services are maintained by name. The service name is not case sensitive.
The startup order of the ACI services under a cluster configuration follows the real internal dependencies of these services.
The ACI service should be started after the following resources:
Disk
IP address
Hostname
Corba
Versant
Important: The startup order of these resources is flexible. For example, it is possible to start Corba after Versant,
or the reverse, because both resources are truly independent of each other.
Then it is possible to start the ACI resources in the following order (a cluster.exe sketch follows the list):
For NM
NM server
For DM
QD2 server (only when needed)
DM server
For EM
Brass
Snmp server
EM server
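As a sketch, the DM case can be wired up with cluster.exe; the resource names ("DM server", "Versantd") are examples based on this chapter and may differ in your cluster:

REM Sketch: bring the DM server online only after the last Corba service and after Versant
cluster res "DM server" /adddep:"IT event default-domain"
cluster res "DM server" /adddep:"Versantd"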

3.5.4 Insert ACI on Cluster


Use or create a new Cluster Group with the following resources: Shared Physical Disk, Network Name, IP Address.
Add a Generic Service resource for the ACI service.
In this document both nodes are called Node1 and Node2. The shared network name is GlobalHostName and the
shared drive is called R.
The Generic Service depends on some other resources; here these are the Physical Disk, the Network Name, and the IP Address.

3.5.4.1 Change ACI Services Properties on Cluster


The following pictures show the properties of the DM service as a sample for this step.

Figure 41 General properties


DM is started after the last Corba service.

Figure 42 Dependencies properties

Figure 43 Advanced properties


Figure 44 Parameter properties

Figure 45 Registry properties

3.5.5 All Resources in Cluster Administrator


By default, the resource is offline, so you must bring the service online before it successfully operates as an MSCS
resource.
Sample: NM, DM, EMXL Resources


Figure 46 Cluster Administrator shows Group including online services

3.5.6 Set ACI environment variables


This point is resolved by the installation.

3.5.7 Changes in the Registry


To run ACI servers on MSCS you first have to change and share the Registry parameters on both nodes. In this case use
the installation or the *.reg files of the version to install.
The second step is the configuration, under Cluster Administrator, of the registry key properties of the resource for this process.
Add the key HKEY_LOCAL_MACHINE\SOFTWARE\Siemens AG
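A sketch of importing such parameters silently on a node; aci.reg is a hypothetical file name standing in for the *.reg file of the version to install:

REM Import the shared Registry parameters on each node (hypothetical file name and path)
regedit /s C:\install\aci.reg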

3.5.8 Shared Data


The databases for all ACI processes are to be placed on the shared disk. This step is to be configured during the installation
of the ACI product.

3.5.9 Verify of Installation and Configuration


A way to check the functionality of the NM, DM or EM server on the cluster is the following:
Precondition
All needed processes for NM, DM or EM are installed and configured on the cluster.
One node is active.
A third computer (not a cluster node) belongs to the same domain and has the same version of NM, DM
and EM installed (only the clients are needed).
Normal flow
From the third computer (logged in as domain user x), start the clients with the command parameter
SERVER=GlobalHostName
If the connection to the server works, the installation on this node is OK. Then move the group to the other node and
repeat the test.


3.6 Hard/Software Requirements and Customer Supply

3.6.1 Requirements Section (for Windows)


Please note: all values in this section are minimum requirements for sufficient operating performance. The recommended values are as defined in the Customer Supply Section.
Operation with less powerful servers is possible, but will decrease operating performance. It is advised to meet at least the memory requirement.

ACI-Server / ACI-Single User


General
Single User systems (i.e. systems with both ACI server and client installed) can be used for networks up to 10.000 subscribers (depending on the customer's operational and maintenance
organization). For bigger networks, a system with a separate server (ACI-Server PC) and multiple clients (ACI-Client PCs) has to be installed.
The requirements for both ACI-Server and ACI-Single User are almost identical. One difference is the size of the monitor used (as the ACI-Single User also acts as a workplace for the operator):
for Single User systems a 21" monitor is strongly recommended for proper ergonomics. The other difference is the necessity of an audio controller for the Single User
system.

Hardware Requirements
Component               Type                        Remark
General                 Tower                       Interfaces (recommendation): 2 x RS232C, 1 x Centronics, 1 x Keyboard (PS/2), 1 x Mouse (PS/2)
CPU                     Pentium III 700 MHz
RAM                     512 MB                      >10.000 subscribers: additional 512 MB recommended
Hard Disk               6 GB (Ultra-SCSI)           max. DB size = 500 MB
                        9 GB (Ultra-SCSI)           max. DB size = 2 GB
Floppy                  1.44 MB
CD-ROM                  32x (SCSI)
DAT-Streamer            12 GB (SCSI)                optional, for backup
LAN-Adapter             10/100 Mbit/s
Disk-Controller         SCSI
Keyboard                PS/2 (recommended)
Mouse                   PS/2 (recommended)
Graphics Adapter Card   SVGA, 4 MB VRAM
Audio Controller        any WinNT compatible        typically needed only for Single User configuration
Monitor Server          Color, 17"
Monitor Single User     Color, 21"

Software Requirements
Component           Type                      Remark
Operating System    Windows NT 4.0 Server     Installation of Service Pack (valid version stored on ACI-CD) is mandatory
                    Windows 2000 Server       Will be supported from ACI Version 8.2 onwards

ACI-Client
General
An ACI-Client shall be planned to handle 6.000 up to 10.000 subscribers (depending on the customer's operational and maintenance organization).

Hardware Requirements
Component               Type/Value                  Remark
General                 Mini-Tower/Desktop          Interfaces (recommendation): 2 x RS232C, 1 x Centronics, 1 x Keyboard (PS/2), 1 x Mouse (PS/2)
CPU                     Pentium III 600 MHz
Cache                   512 kB Sec. Level Cache
RAM                     128 MB
Hard Disk               3,2 GB (Fast-IDE)
Floppy                  1.44 MB
CD-ROM                  32x IDE
Disk Controller         FAST-IDE
LAN-Adapter             10/100 Mbit/s
Keyboard                PS/2 (recommended)
Mouse                   PS/2 (recommended)
Graphics Adapter Card   SVGA, 4 MB VRAM
Audio Controller        Any WinNT/2000 compatible
Monitor                 Color, 21"

Software Requirements
Component           Type                          Remark
Operating System    Windows NT 4.0 WS             Installation of Service Pack (valid version stored on ACI-CD) is mandatory
                    Windows 2000 Professional     Will be supported from ACI Version 8.2 onwards
                    Windows NT 4.0 Server         Installation of Service Pack (valid version stored on ACI-CD) is mandatory
                    Windows 2000 Server           Will be supported from ACI Version 8.2 onwards


ACI/-TRIAL/-MINI
General
This PC can be used as an ACI-Trial (small configuration, max. 120 subscribers) or as an ACI-Mini, a configuration defined to provide a low-cost
system for small networks up to 1.000 subscribers. It is used as a single user system.

Hardware Requirements
Component               Type/Value                  Remark
General                 Mini-Tower/Desktop          Interfaces (recommendation): 2 x RS232C, 1 x Centronics, 1 x Keyboard (PS/2), 1 x Mouse (PS/2)
CPU                     Pentium III 600 MHz
Cache                   512 kB Sec. Level Cache
RAM                     128 MB
Hard Disk               6 GB (Fast-IDE)             max. DB size = 500 MB
Floppy                  1.44 MB
DAT-Streamer            4/8 GB (SCSI)               optional, for backup (only for ACI-Mini)
CD-ROM                  32x IDE
Disk Controller         FAST-IDE
LAN-Adapter             10/100 Mbit/s
Keyboard                PS/2 (recommended)
Mouse                   PS/2 (recommended)
Graphics Adapter Card   SVGA, 4 MB VRAM
Audio Controller        Any WinNT/2000 compatible
Monitor                 Color, 17"

Software Requirements
Component           Type                      Remark
Operating System    Windows NT 4.0 Server     Installation of Service Pack (valid version stored on ACI-CD) is mandatory
                    Windows 2000 Server       Will be supported from ACI Version 8.2 onwards


ACI-High-Availability-Server-Cluster
General
The High-Availability-Server-Cluster solutions will only be offered together with SNI server clusters with 99,99% HW availability, as mentioned in chapter 2.
LCT
Hardware Requirements
Component          Type/Value                   Remark
General            Notebook/Laptop              with optional accumulator power supply; Interfaces (needed): 1 x RS232C; additional interfaces (recommendation): 1 x VGA, 1 x Centronics, 1 x Keyboard (PS/2), 1 x Mouse (PS/2)
CPU                Pentium 300 MHz
Cache              256 kB Sec. Level Cache
RAM                128 MB
Hard Disk          2.1 GB                       (DB size = 100 MB)
Floppy             1.44 MB
CD-ROM             24x
Disk-Controller    IDE
Graphics Adapter   SVGA, 2 MB VRAM
LAN Adapter        10 Mbit/s, Twisted Pair
LCD-Display        Color, 11.3", Res. 800x600

Software Requirements
Component           Type               Remark
Operating System    Windows NT 4.0     Installation of Service Pack (valid version stored on ACI-CD) is mandatory
                    Windows 2000       Will be supported from ACI Version 8.2 onwards


3.6.2 Customer Supply Section


This section shows the hard/software which is currently shipped to the customer. It can also be seen as the definition of the recommended hard- and software.

ACI-SERVER / ACI-Single User


General
The standard ACI-Single User system may also be called the BASIC server system. It is equipped with a single processor and a single harddisk.

Configuration BASIC
Component

Type/Value

Order Number

Remark

General
CPU
Floppy
SCSI-Controller
Fast-IDE-Controller
LAN-Adapter
Graphics Adapter Card
Flexy Bay Option FD

PRIMERGY F200 GE FS
Pentium III 1.26 GHz/512kB
3.5 /1.44 MB
1xU160,int/ex, SE
2x2 IDE
Intel 10/100 on-board
PCI-Graphic ATI 8MB on bord

S26361-K643-V103

Floorstand
Slots: 6 x PCI (2 x 33MHz/32-bit; 4 x 66MHz/64-bit)

Power-Supply
Power-Supply-Modul
Fan-Unit
RAM
CD-ROM
Hard Disk
DAT-Streamer
Keyboard
Country-Kit

400W Upgrade (hot-plug)


400W (hot-plug)
Upgrade-Kit hot-plug redundant

Operating System
Audio Controller
Speaker
Monitor 21

512 MB SDRAM PC133 ECC


ATAPI/IDE
18GB, 10k, U160, Hot Plug, 1
DDS-3, 12 GB intern, SE-SCSI
KBPC P2 Light Basic

S26361-F2575-E1
S26113-F453-E1
S26113-F453-E10
S26361-F2544-E1
S26361-F2306-E523

SNP:SY-F2240E1-A
SNP:SY-F2336E118-A
S26361-F1730-E2

S26381-K297-V122
T26139-Y1740-E10
2x Power-Cable grey 1,8m
T26139-Y1744-L10
Windows NT Server (US) V4.0+10CL S26361-F2565-E305

Creativlabs CT5808
Active
MCM 21P3

Package #
1

German(D)/International(Int)
German (D)
English (UK)/Ireland(IR)

2
3
4

S26361-F2560-L1
S26361-F2300-L1
S26361-K618-V150

3.6.3 Order Packages


Package # Supplier
1
Fujitsu-SIEMENS

Order Number
S42022-D1801-V101

Content
ACI - US/International

Remark


Fujitsu-SIEMENS

Fujitsu-SIEMENS

Fujitsu-SIEMENS

S42022-D1801-V102
S42022-D1801-V103
S42022-D1801-V104

ACI - English UK
ACI - France
ACI - German
Speaker

Only on request
Only on request
Only on request

S26361-F2560-L1

Audio Controller

Creativlabs CT5808


Monitor 21

ACI-Server
General Comment
The ACI-Server can be a MIDDLE or HIGH system. The MIDDLE variant is equipped with a second processor in addition to the configuration of the BASIC system (ACI
Single User).
The HIGH variant also includes two CPUs, and in addition it has a RAID controller and a redundant power supply inside. Furthermore it is equipped with three harddisks.
The two additional harddisks should be used to store and mirror the ACI-Database.

Configuration MIDDLE
Component
General
CPU
Floppy
SCSI-Controller
LAN-Adapter
Graphics Adapter Card
Flexy Bay Option FD
2nd Processor
Power-Supply
Power-Supply-Modul
Fan-Unit
RAM

Type/Value
PRIMERGY F200 GE FS
Pentium III 1,13 GHz/512kB
3.5 /1.44 MB
1xU160,int/ex, SE
Intel 10/100 on-board
PCI-Graphic ATI 8MB on bord
PentiumIII 1,13GHz/512kB
400W Upgrade (hot-plug)
400W (hot-plug)
Upgrade-Kit hot-plug redundant

1 GB SDRAM PC133 ECC

Order Number
S26361-K643-V102

Remark
Floorstand
Slots: 6 x PCI (2 x 33MHz/32-bit; 4 x 66MHz/64-bit)
power-supply1 (+1) x 400 W hot-plug / redundant (optional)

Package #
1

S2636-F2399-E1
S26361-F2575-E1
S26361-F2599-E113
S26113-F453-E1
S26113-F453-E10
S26361-F2544-E1
S26361-F2306-E524


CD-ROM
Hard Disk
DAT-Streamer
Keyboard
Country-Kit
Operating System
Audio Controller
Speaker
Monitor 17

ATAPI / IDE
18GB, 10k, U160, Hot Plug, 1
DDS-3, 12 GB intern
KBPC S2

SNP:SY-F2240E1-A
SNP:SY-F2336E118-P
S26361-F1730-E2
S26381-K297-V122

T26139-Y1740-E10
2x Power-Cable grey 1,8m
T26139-Y1744-L10
Windows NT Server (US) V4.0+10CL S26361-F2565-E305

Creativlabs CT5808

S26361-F2560-L1

MCM 17P3

S26361-K707-V150

used for NT, ACI and DB


German (D)/(INT)
German (D)
English (UK)/Ireland(IR)

2
3
4

optional
optional


Order Packages MIDDLE


Package # Supplier
1
Fujitsu-SIEMENS

Fujitsu-SIEMENS

Fujitsu-SIEMENS

Fujitsu-SIEMENS

Order Number
S42022-D1801-V201
S42022-D1801-V202
S42022-D1801-V203
S42022-D1801-V204
S26361-F2560-L1

Content
ACI-MIDDLE - US/International
ACI-MIDDLE - English UK
ACI-MIDDLE France
ACI-MIDDLE German
Audio Controller


Speaker

S26361-K707-V150

Monitor 17

Remark
Only on request
Only on request
Only on request
Creativlabs CT5808


Configuration HIGH
Component
General
CPU
Floppy
SCSI-Controller
LAN-Adapter
Graphics Adapter Card
Flexy Bay Option FD
2nd Processor
Power-Supply
Power-Supply-Modul
Fan-Unit
CD-ROM
RAM
Hard Disks
DAT-Streamer
RAID-Controller
Keyboard
Country-Kit
Operating System
Audio Controller
Speaker
Monitor 17

Type/Value
PRIMERGY F200 GE FS PIII
Pentium III 1,13 GHz
3.5 /1.44 MB
2-Channel SCSI Controller on-board
Intel 10/100 on-board
ATI 4MB Graphic on-board

Order Number
S26361-K643-V102

Pentium III 1,13 GHz/512kB

S26361-F2575-E1
S26361-F2599-E113

400W Upgrade (hot-plug)


400W (hot-plug)
Upgrade-Kit hot-plug redundant

S26113-F453-E1
S26113-F453-E10
S26361-F2544-E1

ATAPI, IDE
1 GB SDRAM 133 MHz
18GB, 10k, U160, Hot Plug, 1
18GB, 10k, U160, Hot Plug, 1
18GB, 10k, U160, Hot Plug, 1
DDS-3, 12 GB intern,
Adaptec, 1xU160 int/ext, 32MB
KBPC S2

SNP:SY-F2240E1-A

MCM 17P3

Package #
1

S26361-F2306-E524

SNP:SY-F2336E118-P
SNP:SY-F2336E118-P
SNP:SY-F2336E118-P
S26361-F1730-E2
S26361-F2405-E32
S26381-K297-V122

T26139-Y1740-E10
2x Power-Cable grey 1,8m
T26139-Y1744-L10
Windows NT Server (US) V4.0+10CL S26361-F2565-E305

Creativlabs CT5808

Remark
Floorstand
Slots: 6 x PCI (2 x 33MHz/32-bit; 4 x 66MHz/64-bit)
power-supply1 (+1) x 400 W hot-plug / redundant (optional)

S26361-F2560-L1
S26361-K707-V150

HD #1 used for NT, ACI without DB


HD #2 used for DB
HD #3 used for mirrored DB (RAID 1)

German (D)/(INT)
German (D)
English (UK)/Ireland(IR)

4
2
3

optional
optional


Order Packages HIGH

Package # Supplier
1
Fujitsu-SIEMENS

Fujitsu-SIEMENS

Fujitsu-SIEMENS

Fujitsu-SIEMENS

Order Number
S42022-D1801-V301
S42022-D1801-V302
S42022-D1801-V303
S42022-D1801-V304
S26361-F2560-L1

Content
ACI-HIGH - US/International
ACI-HIGH - English UK
ACI-HIGH - France
ACI-HIGH German
Audio Controller

Remark
Only on request
Only on request
Only on request
Creativlabs CT5808

Speaker

S26361-K707-V150

Monitor 17


ACI/-CLIENT
Configuration
Component
General
Floppy
Disk-Controller
Audio-Controller
CPU, Cache
RAM
Graphics Adapter Card
Hard Disk
CD-ROM
LAN-Adapter
Keyboard

Type/Value
SCENIC L, i815e, PS, LAN
3.5 /1.44 MB
Ultra DMA-100 Cont. on-board
on-board
PIII 1,0 GHz / 133, 256 KB
128 MB SDRAM PC 133
Matrox Millenium G450 16 MB, DH
HDD 10 GB, Ultra DMA-100, 5.4k
48x ATAPI
on bord
KBPC P2 Light Basic

Country-Kit

SCENIC L

Operating System

Dual Install NT/2000

Speaker
Monitor 21

Active
MCM 21P3

Order Number
S26361-F2424-E220

Remark
Slots: 5 x PCI , 1 x AGP

Package #
1

S26361-F2271-E270
S26361-F2272-E2
S26361-F2421-E18
S26361-F2413-E10
S26361-F2273-E51
S26381-K240-E122
S26381-K240-E165
S26381-K240-E140
S26381-K240-V120
S26361-F2285-E502
S26361-F2285-E505
S26361-F2285-E503
S26361-F2285-E501
S26361-F1818-E722
S26361-F1818-E721
S26361-F1818-E722
S26361-K618-V150

US/International (INT)
English (UK)
France (F)
German (D)
US/International (INT)
English (UK)
France (F)
German (D)
English (INT/UK/F)
German (D)
2


Order Packages

Package # Supplier
1
Fujitsu-SIEMENS

Fujitsu-SIEMENS

Order Number
S42022-D1802-V101
S42022-D1802-V102
S42022-D1802-V103
S42022-D1802-V104
S26361-K618-V150

Content
ACI US/International
ACI English UK
ACI France
ACI German
Monitor 21

Remark
Only on request
Only on request
Only on request


ACI- High-Availability-Cluster-Server
General
The High-Availability-Server-Cluster solutions will only be offered together with SNI server clusters with 99,99% HW availability.
Configuration general (with storage)
Component
General

Storage

Type/Value

Order Number

1x DataCenter Rack 38U


2x blindplate 3U
3x blindplate 5U
1x flexirailpair for Rack-Components
2x Keyb.-mon.-mouse
Servercableset customized
1x Consolswitch 4x (ES4+) 1U +
built-in, UPS
1x Rack Console for TFT monitor +
keyboard, UPS
1x UPS APC grey COM signal-cable
for built-in + NT
1x UPS APC COM-Port additional +
built-in
1x UPS APC 3000 VA, 3U
3x Rack-built-in
1x carrier-angle 2U
2x cable FC Cu HSSDC-HSSDC 3m
+ built-in
1x PRIMERGY S60 RH storage,
UPS saved
Supsystem S60 FC RAID Ctrl64MB
BBU
Fibre Channel GBIC Cu
3x hard disk 36GB,10k, Hot plug,1
1x built-in-kit 19" DC-Rack S30/S60

SNP:SY-K614V101-P
SNP:SY-F1609E3-P
SNP:SY-F1609E5-P
SNP:SY-F1331E51-P
SNP:SY-F2293E500-P

Type/Value

Order Number

1x PRIMERGY F200 GE RS PIII

S26361-K643-V303

Remark

SNP:SY-F2293E40-P
SNP:SY-F1806E12-P
S26113-F231-E1

S26113-F81-E1
SNP:PS-E421E1-P
SNP:SY-F1647E301-P
SNP:SY-F2262E15-P
SNP:SY-F1828E3-P

S26231-K714-V210
S26361-F2436-E1
SNP:SY-F1832E1-P
S26361-F2435-E136
SNP:SY-F2261E8-P

Configuration Server 1
Component
Generals

Remark


CPU/Cache
RAM
DAT_STREAMER
CD-ROM
Hard Disk
Raid Controller
FC Controller
LAN-Adapter
Power-Supply
Power-Supply-Modul
Fan-Unit
Built-In-Kit

Software

1,26GHz/512kB UPS-saved
PIII 1,26GHz 512kB
1GB SDRAM PC133 ECC
Flexy Bay Option FD
DDS4 20GB, 3MB/s, intern
DVD-ROM, ATAPI
2x 18GB,10k,U160, hotplug,1"
U160 int/ext,16MB, Mylex
66MHz, Cu Interface
Fast Ether-Express-Pro/100+ Server
400W Upgrade (hot-plug)
400W (hot-plug)
Upgrade-Kit hot-plug redundant
19" DC-Rack P6xx/Hxxx/F2xx
Windows 2000 Adv.SRV + 25 CL,18Proz. US

S26361-F2599-E126
S26361-F2306-E524
S26361-F2575-E1
S26361-F2233-E3
SNP:SY-F2234E1-A
SNP:SY-F2336E118-P
S26361-F2406-E16
SNP:SY-F2244E1-A
SNP:SY-F2071E1-A
S26113-F453-E1
S26113-F453-E10
S26361-F2544-E1
SNP:SY-F2261E31-A
S26361-F2565-E706

Type/Value

Order Number

1x PRIMERGY F200 GE RS PIII


1,26GHz/512kB UPS-saved
PIII 1,26GHz 512kB
1GB SDRAM PC133 ECC
Flexy Bay Option FD

S26361-K643-V303

PCI-Card

Configuration Server 1_1


Component
Generals
CPU/Cache
RAM

Remark

S26361-F2599-E126
S26361-F2306-E524
S26361-F2575-E1


DAT_STREAMER
CD-ROM
Hard Disk
Raid Controller
FC Controller
LAN-Adapter
Power-Supply
Power-Supply-Modul
Fan-Unit
Built-In-Kit

DDS4 20GB, 3MB/s, intern


DVD-ROM, ATAPI
2x 18GB,10k,U160, hotplug,1"
U160 int/ext,16MB, Mylex
66MHz, Cu Interface
Fast Ether-Express-Pro/100+ Server
400W Upgrade (hot-plug)
400W (hot-plug)
Upgrade-Kit hot-plug redundant
19" DC-Rack P6xx/Hxxx/F2xx

S26361-F2233-E3
SNP:SY-F2234E1-A
SNP:SY-F2336E118-P
S26361-F2406-E16
SNP:SY-F2244E1-A
SNP:SY-F2071E1-A
S26113-F453-E1
S26113-F453-E10
S26361-F2544-E1
SNP:SY-F2261E31-A

PCI-Card

Order Packages
Package # Supplier
1
Fujitsu-SIEMENS

Order Number
S42022-D1808-V301

Content
ACI US/International

Remark


LCT
Configuration
Component
General
Floppy
Disk-Controller
Graphics Adapter
RAM
CD-ROM
Hard Disk
Modem
Keyboard
Operating System

Type/Value
LIFEBOOK E-6646 PIII 1066 MHz
for E-6646
3.5 /1.44 MB
onboard
16MB Video RAM
SDRAM 128 MB 133 MHz

Order Number

CD-ROM E-Serie
10 GB E-Serie
E-Serie LAN/Modem (intern)
E-Series, Keyboard

S26391-F258-E105
S26391-F2430-E100
S26391-F2431-E100

E-Serie Win NT/2000 &Word+Works

S26391-K114-V170

Remark
LCD TFT 14,1" SXGA+ 1400x1050, ATI Mobility-M6, 1x
serial, 1x parallel, 1x VGA, 1x PS/2
LI-ION Battery, Card Bus Connectors

Package #
1

S26391-F2424-E200

S26391-F263-E233
S26391-F2436-E225

US/International (INT)
English (INT/UK/F)

Order Packages
Package # Supplier
1
Fujitsu-SIEMENS

Fujitsu-SIEMENS

Order Number
S42022-D1700-V401
S42022-D1700-V402
S42022-D1700-V403
S42022-D1700-V404
S26391-F1491-L400

Content
LCT US/International
LCT English UK
LCT France
LCT German
LAN-Adapter 3com 3CCE589ET

Remark
Only on request
Only on request
Only on request
