1.1 Definition of Requirements
The following is a list of requirements that must be satisfied for ACI to operate as a highly available system.
1. Ensure the availability of the ACI server processes to the ACI clients (or to the corresponding server that connects to them, e.g. the subNMS to the EMS) with minimum downtime in case of a failure of either the process or the host on which it runs.
In the current ACI scenario the existing process monitor is the only source of recovery for the system and suffers from the following problems:
- It is a possible single point of failure for the system. If the process monitor crashes for some reason, it has no recovery mechanism of its own and hence can no longer monitor and restart the ACI server resources.
- If the host on which it runs suffers a power failure or a malfunction of any OS resource that makes it unavailable, the process monitor is totally inadequate and cannot do anything in such a situation.
2. Limit learning of the network by the ACI server processes to a first-time start (cold start) only.
Currently ACI learns parts of the network even on warm starts (i.e. subsequent starts), which itself can take hours. The solution to requirement 1 should negate this behaviour. In other words, ACI should be able to assume that it is always connected to the network and that the database always reflects the true state of the network. In case of a loss of connection to the network for a maximum delay of, say, 10-30 seconds, it should still be possible to assume a synchronised state between the MIB and the network. A forced resynchronisation should nevertheless remain possible through a consistency check.
3. Clients and ACI server processes should not shut down when they detect a failure in the connection to their corresponding server processes.
Currently within ACI, clients shut down when they sense a loss of connection to the server processes, and the server processes themselves need to be restarted when they lose the connection to the server they were connected to. For example, if a DCN server crashes, the Classic server or any EMS connected to it would need to be restarted. This behaviour would need to be changed: if a client loses its connection, it should inform the operator that the connection has been lost and that the client is trying to re-establish it, and all objects related to the current operation should be cleaned up. A server should behave in a similar way, i.e. simply wait for its corresponding server to become available again.
The requirements listed above are necessary for the current ACI versions. Additional requirements, which may be necessary, are:
4. Provide high availability of ACI as an add-on option for customers who want it, which implies selling it as a separate feature.
This implies the following:
- The existing installation program would not be modified to make the changes that may be needed for high availability (such as registry entries or copies of software on different drives). A separate installation program would need to be written to make the necessary changes on the ACI server machines or nodes.
- The behaviour defined in requirement 2 would not be satisfied in this case (i.e. with no high availability); hence ACI should be able to detect that it is not running in a high-availability environment and should learn/force synchronisation with the network.
5. Provide a way to maintain the ACI server machines without interruption of ACI services.
Currently, if the machine on which the ACI server processes are running needs to be upgraded to a new service pack, or the customer would like to add new utility software that requires a reboot of the system, or even upgrade the system with a new hard disk or more memory, the ACI server processes are disrupted and need to be restarted. This causes a loss of ACI service. It should be possible for the customer to do the above.
2 Product Architecture
Server clusters are configurations of two or more machines that share some disks. A failure in one machine causes the data
services to automatically relocate to the backup machine. If the shared disk is itself a RAID, then the configuration admits
no single point of failure.
The following figure illustrates components of a two-node server cluster that may be composed of servers running either Windows 2000 Advanced Server or Windows NT Server 4.0, Enterprise Edition, with shared storage device connections using SCSI or SCSI over Fibre Channel.
There are a number of configurations for server clusters. Here the two nodes share a disk array. The inactive node monitors the primary to determine whether it has failed. When it detects a failure, it immediately starts a script that restarts all the failed services on the secondary node and takes over all the defined groups and their resources from the failed node, resources such as the shared disk, the global IP address, the global hostname, Versant, CORBA and ACI.
[Figure: Two-node ACI server cluster. DM clients (GUI, application logic, API) connect to the ACI group (server processes) on the active node; on switch-over, the database files and ACI logs on the shared disk pass to the other node.]
3 Installation Procedure
All processes are to be installed on both nodes on the boot disk. Installing the executables on the shared disk is not recommended, for the following reasons (LMA 150 and HA definitions):

Maximum reliability
- Minimize the number of single points of failure in your environment.
Comment: An installation on the shared disk is a single point of failure. If the ACI executable files are overwritten by mistake, a restart from the other node is no longer possible and high availability is lost. (A hardware crash of the disk is not meant here.) An accidental overwrite is always possible, through another process or through ACI itself.
- Provide mechanisms that maintain service when a failure occurs.
Comment: In general, a version upgrade is not possible on the shared disk without stopping high availability, because the files are locked. The high-availability definition allows a maximum of 8 hours per year out of function; this amount of time is not enough for maintenance and upgrades.
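For scale, 8 hours of downtime per year corresponds to an availability of about 99.9 % (8 h out of 8760 h is roughly 0.09 %), while a 99.99 % availability target would allow only about 53 minutes per year.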
LMA 150 part:
Provide a way for the maintenance of the ACI server machines without interruption of ACI services.
- Currently, if the machine on which the ACI server processes are running needs to be upgraded to a new service pack, or the customer would like to add new utility software that requires a reboot of the system, or even upgrade the system with a new hard disk or more memory, the ACI server processes are disrupted and need to be restarted. This causes a loss of ACI service. It should be possible for the customer to do the above.
Databases, logs, os_backup and the CORBA variables are to be installed and configured on the shared disk.
3.1 Installation Options
Three options are available for installing Cluster Service:
Installation with a fresh installation of Windows 2000 Advanced Server
Installation on an existing installation of Windows 2000 Advanced Server
Unattended Cluster Service installation
Regardless of the installation type you select, you will use the Cluscfg.exe application. Cluscfg.exe runs as a standard
Windows 2000 wizard unless you automate the installation of Cluster Service. When you run Cluscfg.exe from the
command line, it supports the command line options listed in Table 3.1.
Table 3.1 Cluscfg.exe Command-Line Options

Parameter                                Description
ACC[OUNT] <accountname>                  Specifies the domain service account used for Cluster Service.
ACT[ION] {FORM | JOIN}                   Specifies whether to form a new cluster or join an existing cluster.
D[OMAIN] <domainname>                    Specifies the domain used by Cluster Service.
EXC[LUDEDRIVE] <drive list>              Specifies which drives should not be used by Cluster Service as shared disks.
I[PADDR] <xxx.xxx.xxx.xxx>               Specifies the IP address for the cluster.
L[OCALQUORUM]                            Specifies a disk on a nonshared SCSI bus that should be used as the quorum device.
NA[ME] <clustername>                     Specifies the name of the cluster.
NE[TWORK] <connectionname>               Specifies how the specified network connection should be used by Cluster Service.
  {INTERNAL | CLIENT | ALL} [priority]
P[ASSWORD] <password>                    Specifies the password for the domain service account used for Cluster Service.
Q[UORUM] <x:>                            Specifies the drive letter to use for the quorum device.
S[UBNET] <xxx.xxx.xxx.xxx>               Specifies the subnet to use for the private network.
U[NATTEND] [<path to answer file>]       Suppresses the user interface in order to perform an unattended installation. Also specifies an optional external answer file.
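As an illustration, forming a new cluster from the command line might look like the following; the parameter names come from Table 3.1, but the exact separator syntax is an assumption, and the cluster name, domain, account, password and addresses are placeholders rather than values prescribed by this document:

cluscfg /ACTION:FORM /NAME:MYCLUSTER /DOMAIN:reskit.com /ACCOUNT:clusteradmin /PASSWORD:secret /IPADDR:218.1.17.90 /SUBNET:255.255.0.0 /QUORUM:Q: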
Unattended Installation
If you are installing and configuring a number of clusters, you can elect to automate the setup process for Cluster Service
as part of a new Windows 2000 Advanced Server installation or as an installation on an existing server. In either case,
you will use the Cluscfg.exe application with an associated answer file. When you use Cluscfg.exe to automate an
installation, the answers it requires can come from the answer file used by Sysprep or from an external answer file that you
create.
NOTE
Sysprep is used to install only new instances of Windows 2000 Advanced Server. To automate the installation process of
Cluster Service on an existing Windows 2000 Advanced Server, you must supply an external answer file.
Some security risks are associated with using the Password key because the password is stored as plain text within the
answer file. However, the Password key is deleted after the upgrade.
Quorum
Quorum = <drive letter>
This key specifies the drive to be used as the quorum device.
Example:
Quorum = Q:
Subnet
Subnet = <IP subnet mask>
This key specifies the IP subnet mask of the cluster.
Example:
Subnet = 255.255.0.0
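Combining the two keys above with the Password key mentioned earlier gives a minimal sketch of an external answer file; the Action, Name and Account keys are assumed to mirror the command-line parameters of Table 3.1, and their answer-file spelling is an assumption:

; hypothetical answer-file fragment (key names partly assumed)
Action = FORM
Name = MYCLUSTER
Account = clusteradmin
Password = secret
Quorum = Q:
Subnet = 255.255.0.0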
Hardware Configuration
Although Cluster Service can be implemented on a variety of hardware configurations, Microsoft supports only Cluster
Service installations performed on configurations listed on the Cluster Service Hardware Compatibility List (HCL). The
wizard's Hardware Configuration page, shown in Figure 3.1, reviews this policy. For more information about these
configurations, see Chapter 2, Lesson 1.
Figure 3.1 The Cluster Service Configuration Wizard's Hardware Configuration page
To continue with the installation of Cluster Service you must confirm that you understand Microsoft's support policy by
clicking I Understand.
Select An Account
Before running the wizard, you must first create a domain user account for the cluster. This account must be a Domain
Administrator or have local administrative rights on each node, plus the following permissions:
Lock pages in memory
Log on as a service
Network Connections
The Network Connections page, shown in Figure 3.7, allows you to configure the cluster's network connections so that it can communicate properly.
Cluster IP Address
The Cluster IP Address page, shown in Figure 3.9, requires you to enter the public IP address assigned to the cluster.
1. On the first server, from the Windows 2000 Start menu, point to Programs, point to Administrative Tools, and then click Active Directory Users and Computers.
2. If reskit.com is not already expanded, click the plus sign to expand it.
3. Click Users.
4. Right-click Users, point to New, and click User.
5. In the First Name text box, type ClusterAdmin.
6. In the Last Name text box, type Account.
7. In the User Logon Name text box, type clusteradmin.
8. Click Next.

1. On the first server, from the Windows 2000 Start menu, point to Programs, point to Administrative Tools, and click Cluster Administrator.
2. Cluster Administrator should look like Figure 3.10. If you cannot open Cluster Administrator or if you see errors in it, the Cluster Service installation did not complete successfully or your node is incorrectly configured.
Modifying the state of groups and resources You can use Cluster Administrator to bring resources and groups
online or take them offline. If you change the state of a group, all the resources within that group will be updated
automatically. These resources have their state changed in the order of their dependencies.
Changing ownership Using Cluster Administrator, you can specify the ownership of a resource or an entire
group. Resources are owned by groups, and groups, in turn, are owned by a node. You can transfer resources
between groups to satisfy dependencies and application requirements. You can also transfer group ownership,
using the Move Group command, to assign groups to other nodes in the cluster.
You typically transfer group ownership when you need to bring down a node for maintenance or upgrades. When a group's
ownership is moved to another node, all resources in that group are taken offline, the group is then transferred, and the
resources are brought back online. As a result, you must carefully plan when to move groups because clients might be
affected temporarily as the resources are shut down and then restarted.
Once the resource ownership has changed, the resource will be automatically brought online. However, when a resource is
moved between groups on the same node, it will not be taken offline.
Changing the maximum Quorum log size By default, the Quorum log file size is set to 64 KB. Depending on the number of shares supported on the cluster and the number of transactions managed by the cluster, this might be too small. In this case, you will receive a notification in the Event Viewer. When the Quorum log reaches the specified size, Cluster Service will save the database and reset the log file. If you change the Quorum size on one node, it will automatically take effect on the other.
Initiating a failure In order to help you test your failover policies, Cluster Administrator can initiate a failure.
This feature also allows you to test the restart settings on individual resources.
Identifying failovers In addition to configuring and managing a cluster, Cluster Administrator can also quickly
provide information on the health of the cluster. This is accomplished with indicators such as the Node Offline
icon shown in Figure 4.3.
IP Address: If the application will run on a new virtual server, you need a unique IP address. You do not need an IP address if your application will run on an existing virtual server.
Name: Even though the application resides on the cluster, clients access the application using a standard computer name. If you do not want to run the application on an existing virtual server, the wizard will prompt you for the new virtual server's name and a unique IP address. The wizard will then create the appropriate resource group and implement the virtual server.
Resource: The wizard lets you create a resource to manage your application. You must select the appropriate resource type for your needs.
Using Cluster.exe
In addition to managing your cluster using the GUI-based Cluster Administrator, you can also execute administrative tasks
from the command line. For example, you might need to configure a property on more than one cluster. Using Cluster.exe,
you can set properties through a single command execution. You can also execute command line tasks from within a script
to automate the configuration of many clusters, nodes, resources, and resource groups.
Cluster.exe is automatically installed with Cluster Service on each node. You can also run Cluster.exe on Windows NT Server 4.0, Enterprise Edition with Service Pack 3 or later.
NOTE
Unlike Cluster Administrator, Cluster.exe does not automatically restore previous connections when you use it to
administer a cluster.
Table 4.2 describes the primary arguments supported by Cluster.exe. For a complete listing of the properties and options
supported by each command, see Appendix B.
All but the first two options listed in Table 4.2 apply to the /CLUSTER options. If these options are used alone, Cluster.exe
will attempt to connect to the cluster on the node that is running Cluster.exe and apply the command-line option to this
cluster.
Table 4.2 Cluster.exe Command-Line Arguments

Argument                                             Description
/LIST[:domain-name]                                  Displays a list of clusters in the specified domain. If no domain is specified, the domain that the computer belongs to is used. Do not use the cluster name with this option.
[[/CLUSTER:]cluster-name] <options>                  If you do not specify the cluster name, Cluster.exe will attempt to connect to the cluster running on the node that is running Cluster.exe. If the name of your cluster is also a cluster command or its abbreviation, such as cluster or c, use /cluster: to explicitly specify the cluster name.
/PROP[ERTIES] [<prop-list>]                          Displays or sets the cluster's common properties. See Appendix B for more information on common properties.
/PRIV[PROPERTIES] [<prop-list>]                      Displays or sets the cluster's private properties. See Appendix B for more information on private properties.
/REN[AME]:cluster-name                               Renames the cluster to the specified name.
/VER[SION]                                           Displays the Cluster Service version number.
/QUORUM[RESOURCE][:resource-name] [/PATH:path]       Changes the name or location of the Quorum resource or the size of the Quorum log.
  [/MAXLOGSIZE:max-size-kbytes]
/REG[ADMIN]EXT:admin-extension-dll                   Registers a Cluster Administrator extension DLL with the cluster.
  [,admin-extension-dll...]
/UNREG[ADMIN]EXT:admin-extension-dll                 Unregisters a Cluster Administrator extension DLL from the cluster.
  [,admin-extension-dll...]
NODE [node-name] node-command                        A node-specific cluster command. See Appendix B for a list of available commands.
GROUP [group-name] group-command                     A group-specific cluster command. See Appendix B for a list of available commands.
RES[OURCE] [resource-name] resource-command          A resource-specific cluster command. See Appendix B for a list of available commands.
{RESOURCETYPE|RESTYPE} [resourcetype-name]           A resource type-specific cluster command. See Appendix B for a list of available commands.
  resourcetype-command
NET[WORK] [network-name] network-command             A network-specific cluster command. See Appendix B for a list of available commands.
NETINT[ERFACE] [interface-name] interface-command    A network interface-specific cluster command. See Appendix B for a list of available commands.
/? or /HELP                                          Displays cluster command line options and syntax.
1. From either node in a cluster, open the Windows 2000 Start menu, point to Programs, point to Administrative Tools, and click Cluster Administrator.
2. Type the cluster's name, Mycluster, or a single period (.) to specify the current cluster, and then click OK. (Using the period notation for the cluster name is supported only when Cluster Administrator is running on a node in the cluster.) Cluster Administrator will open and connect to the current cluster.
1. From the Windows 2000 Start menu, point to Programs, point to Administrative Tools, and click Cluster Administrator.
2. If NodeA is not already expanded, click + to expand it.
3. Click Active Groups. Verify that the default Cluster Group is listed.
4. In the left pane, double-click Groups.
5. In the left pane, right-click Cluster Group and click Move Group. The group will be moved from NodeA to NodeB. Verify that the Owner column has been updated in Cluster Administrator and now appears under NodeB's Active Groups.
6. Repeat the process to return the Cluster Group to NodeA.
7. In the left pane, right-click the cluster's name, MYCLUSTER, and click Properties.
8. Click the Quorum tab.
9. In the Reset Quorum Log field, type 128 and click OK.

1. From the Windows 2000 Start menu, point to Programs, point to Administrative Tools, and click Cluster Administrator.
2. Click on the Groups folder in the tree. You will see a list of all the cluster groups in the right pane. Verify that the owner for each group is NodeA. At this point, the only group present is the Cluster Group.
3. If any group is owned by NodeB, right-click on the group and click Move Group to move the group to NodeA. Repeat this process until all groups are on NodeA.
4. Leave Cluster Administrator open for the remainder of this practice.
1. Switch to the Windows 2000 command prompt window, type cluster GROUP "Cluster Group" /MOVE:NODEB at the command prompt, and press Enter. You should see the following message displayed:

Moving resource group `Cluster Group'...

Group                Node      Status
-------------------- --------- --------
Cluster Group        NODEB     Pending

2. Switch to Cluster Administrator and click on the plus sign to expand the Groups folder. Notice that the owner of Cluster Group is now NodeB.
3. Switch to the Windows 2000 command prompt window, type cluster /QUORUM /MAXLOGSIZE:256 at the command prompt, and press Enter. This command will change the maximum allowed size of the Quorum log from the current value of 128 KB to 256 KB.
4. Switch to Cluster Administrator, right-click on MYCLUSTER, and click Properties.
5. In the MYCLUSTER Properties dialog box, click the Quorum tab. The Reset Quorum Log At field should show 256 KB.
6. Type 128 in the Reset Quorum Log At field and click OK. This will return the maximum Quorum log size to its original value.
7. Switch to the Windows 2000 command prompt window, type cluster NODE NODEA /PAUSE at the command prompt, and press Enter. A message will indicate that NodeA has been paused.
8. Switch to Cluster Administrator. You should see an icon with an exclamation point in a yellow triangle on NodeA, indicating that it has been paused.
9. Switch to the Windows 2000 command prompt window, type cluster NODE NODEA /RESUME at the command prompt, and press Enter. A message will indicate that NodeA has resumed.
10. Switch to Cluster Administrator. The icon indicating that NodeA was paused should no longer be present.
Another option is to create a custom resource DLL for the application. This will effectively make a cluster-unaware application cluster-aware. Not all applications can support a custom resource DLL. But if you can develop a resource DLL for an application, you will have a new resource type specific to that application. (The procedure for creating custom resource DLLs is outside the scope of this training kit.)
A Generic Application or Generic Service resource will provide only the most fundamental level of clustering services. If your service requires advanced clustering support, you must develop and use a custom resource DLL for the service.
Resource-Specific Properties
In addition to the standard properties that each resource type includes, such as name and description, specific properties might need to be configured. Table 4.4 lists resource-specific properties.

Table 4.4 Resource-Specific Properties

Resource             Property
DHCP Service         DHCP database file path
                     DHCP database files backup path
                     Audit log file location
File Share           Access permissions
                     Simultaneous user limit
                     Share name and comment
                     Path
Generic Application  Command line
                     Current directory
                     Use network name for computer name
                     Whether the application can interact with the desktop
Generic Service      Service name
                     Startup parameters
                     Use network name for computer name
IIS Server Instance  Service for this instance (FTP or WWW)
                     Alias used by the virtual root
IP Address           IP address
                     Subnet mask
                     Network parameters
                     NetBIOS option
MSMQ Server          None
Network Name         Computer name
Physical Disk        Drive to be managed (cannot change once the resource has been configured)
Print Spooler        Path for the print spooler folder
                     Job completion time-out
WINS Service         Path to WINS database
                     Path to WINS backup database
Failover Policy
The failover policy for a group is set using the Failover tab of the group's property sheet. You can set the Failover
Threshold and Failover Period properties based on your needs. The Failover Threshold specifies the number of times the
group can fail within the number of hours specified by the Failover Period property. If the group fails more than the
threshold value, Cluster Service will leave the affected resource within the group offline. For example, if a group Failover
Threshold is set to 3 and its Failover Period is set to 8, Cluster Service will fail over the group up to three times within an
eight-hour period. The fourth time a resource in the group fails, Cluster Service will leave the resource in the offline state
instead of failing over the group. All other resources in the group will be unaffected.
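Using the Cluster.exe property syntax shown later in this chapter, the example policy above (three failovers within an eight-hour period) could be set from the command line; the group name below is the default Cluster Group used throughout this document:

cluster group "Cluster Group" /prop FailoverThreshold=3
cluster group "Cluster Group" /prop FailoverPeriod=8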
Failback Policy
By default, resource groups are not configured to fail back to the original node. Instead, after a failover, the group remains
on the second node until you manually move the group to the appropriate node. If you want a group to run on a preferred
node and return to that node after a failover, you must implement a failback policy for the group. You can specify whether
the group should fail back immediately after the original node comes back online or at a specified time during the day. For
example, you might want to fail back a group only during non-business hours to minimize the impact on clients. In order
for a group to fail back to a specific node, you must set the Preferred Owners property of the group.
Creating a Group
5. Open Cluster Administrator. The Open Connection To Cluster dialog box appears.
6. Type the name of the cluster (in this case, MYCLUSTER) and click Open.
7. Right-click Groups, point to New, and click Group. The New Group Wizard will start.
8. Type Cluster Printer in the Name box.
9. Type Group For Printer Resources in the Description box.
10. Click Next.
11. In the Preferred Owners dialog box, add both nodes to the Preferred Owners list.
12. Click Finish. A message box will appear stating that the group was created successfully.
13. Click OK.
Transferring a Resource
1. From the Windows 2000 Start menu, point to Programs, point to Administrative Tools, and then click Cluster Administrator.
2. If needed, expand the My Cluster tree in the left pane.
3. Expand the Groups folder and then click on the Cluster Group in that folder.
4. Verify that the Cluster Group and all of its resources are online on the node you are using for this practice.
5. Right-click on Cluster Group and click Properties. View the settings on the Failover and Failback tabs.
6. From the Windows 2000 Start menu, click Run, and then type command in the Open field. Click OK to open a command window.
7. At the command prompt, type cd \winnt\cluster, and then press Enter.
8. At the command prompt, type cluster group "Cluster Group" /prop, and then press Enter. You should see a result similar to the following:
Listing properties for `cluster group':

T  Resource Group  Name                 Value
-- --------------- -------------------- -----------------------
SR Cluster Group   Name                 Cluster Group
S  Cluster Group   Description
D  Cluster Group   PersistentState      1 (0x1)
D  Cluster Group   FailoverThreshold    10 (0xa)
D  Cluster Group   FailoverPeriod       6 (0x6)
D  Cluster Group   AutoFailbackType     0 (0x0)
D  Cluster Group   FailbackWindowStart  4294967295 (0xffffffff)
D  Cluster Group   FailbackWindowEnd    4294967295 (0xffffffff)
D  Cluster Group   LoadBalState         1 (0x1)
The following are definitions for some of these group properties. (See Appendix B for more information on resource group
properties and settings.)
PersistentState This property holds the last known state of a group or resource. When it is set to True, the group
or resource is online. When it is set to False, the group or resource is offline.
FailoverThreshold This property specifies the number of times that Cluster Service will attempt to fail over a
group before it decides that the group cannot be brought online anywhere in the cluster.
FailoverPeriod This property specifies the interval, in hours, during which Cluster Service will attempt to fail
over a group.
AutoFailbackType This property specifies whether a Cluster Group is allowed to fail back. The
ClusterGroupPreventFailback (0) setting prevents failback, and the ClusterGroupAllowFailback (1) setting allows
failback.
FailbackWindowStart This property specifies the start time, on a 24-hour clock, for a group to fail back to its
preferred node. You can set values from 0 (midnight) to 23 (11:00 P.M.) in local time for the cluster. For
immediate failback, you must set both FailbackWindowStart and FailbackWindowEnd to -1.
FailbackWindowEnd This property specifies the end time, on a 24-hour clock, for a group to fail back to its
preferred node. You can set values from 0 (midnight) to 23 (11:00 P.M.) in local time for the cluster. For
immediate fail-back, you must set both FailbackWindowStart and FailbackWindowEnd to -1.
1. At the command prompt, type cluster group "Cluster Group" /prop AutoFailbackType = 1, and then press Enter. This will set the Cluster Group to allow failback after a failover occurs.
2. At the command prompt, enter the following commands to set the Cluster Group to fail back only between 8 A.M. and 6 P.M.:
cluster group "Cluster Group" /Prop FailbackWindowStart = "8"
cluster group "Cluster Group" /Prop FailbackWindowEnd = "18"
3. In Cluster Administrator, right-click on Cluster Group, and then click Properties. On the Failback tab, you should see that failback is allowed between the 8th and 18th hours, as seen in Figure 4.4. Click OK to close the Cluster Group Properties dialog box.
4. At the command prompt, enter the following commands to set the Cluster Group to fail back immediately:
cluster group "Cluster Group" /Prop FailbackWindowStart = "-1"
cluster group "Cluster Group" /Prop FailbackWindowEnd = "-1"
5. In Cluster Administrator, right-click on Cluster Group, and then click Properties. On the Failback tab, you should see that failback is set to occur immediately.
6. Click the option to prevent failback and then click OK to close the Cluster Group Properties dialog box.
RestartAction This property specifies the action to perform if a resource fails. You can use one of the following
settings:
ClusterResourceDontRestart (0) Do not restart after a failure.
ClusterResourceRestartNoNotify (1) Attempt to restart the resource after a failure. If the restart threshold is
exceeded by the resource within its restart period, Cluster Service will not attempt to fail over the group to
another node in the cluster.
ClusterResourceRestartNotify (2) Attempt to restart the resource after a failure. If the restart threshold is
exceeded by the resource within its restart period, Cluster Service will attempt to fail over the group to another
node in the cluster. This is the default setting.
Unless the RestartAction property is set to ClusterResourceDontRestart, Cluster Service will attempt to restart a failed
resource.
RestartThreshold This property specifies the number of restart attempts that will be made on a resource before
Cluster Service initiates the action specified by the RestartAction property. These restart attempts must also be
made within the time interval specified by the RestartPeriod property. Both the RestartPeriod and the
RestartThreshold properties are used to limit restart attempts.
RestartPeriod This property specifies the amount of time, in milliseconds, during which restart attempts will be
made on a resource. The number of attempts allowed within a RestartPeriod is determined by the
RestartThreshold setting. Both the RestartPeriod and the RestartThreshold properties are used to limit restart
attempts. The RestartPeriod property is reset to 0 once the interval setting is exceeded. If no value is specified for
RestartPeriod, the default value of 90000 is used.
PendingTimeout This property specifies the amount of time, in seconds, that a resource in a Pending Online or
Pending Offline state must resolve its status before Cluster Service fails the resource or puts it offline. The default
value is three minutes.
PendingTimeout has the following relationship with RestartPeriod and RestartThreshold:
RestartPeriod >= RestartThreshold x PendingTimeout
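For example, with the default PendingTimeout of 180 seconds (three minutes) and a RestartThreshold of 3, the RestartPeriod should be at least 3 x 180 s = 540 s, i.e. a RestartPeriod value of 540000 milliseconds.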
RetryPeriodOnFailure This property specifies the amount of time, in milliseconds, that a resource will remain in
a failed state before Cluster Service attempts to restart it. Until an attempt is made to locate and restart a failed
resource, the resource will remain in a failed state by default. Setting the RetryPeriodOnFailure property allows a
resource to automatically recover from a failure.
1. Enter the following commands at the command prompt to change the restart properties for the IP Address resource:
cluster resource "Cluster IP Address" /Prop RestartThreshold = 5
cluster resource "Cluster IP Address" /Prop RestartPeriod = 1000
cluster resource "Cluster IP Address" /Prop PendingTimeout = 50
2. In Cluster Administrator, right-click on the Cluster IP Address resource located in the Cluster Group, and then click Properties. On the Advanced tab, you should see the RestartThreshold, RestartPeriod, and PendingTimeout properties set to the new values you entered, as shown in Figure 4.5.
3. Make sure that the resource is set to restart after a failure, and set the Restart Threshold back to its original value of 3.
4. Set the Restart Period back to its original value of 900000 milliseconds.
5. Set the Pending Timeout field back to its original value of 180 seconds.
6. Close the Cluster IP Address Properties dialog box to save the new settings.
3.2
Inserting the ACI parts and the necessary applications, such as CORBA and Versant, requires two basic steps:
- creating an ACI group
- creating the basic resources in the cluster
Resource: Services for ACI
Dependencies (required):
- Physical Disk or other storage-class device (on which the files are located)
- IP Address (for client access to the ACI server)
- Network Name
Note
Physical Disk This resource type is for managing shared drives on your cluster. Because data corruption can occur if
more than one node has control of the drive, the Physical Disk type allows you to configure which node has control of
the resource at a given time.
IP Address A number of cluster implementations require IP addresses. The IP Address resource type is used for this
purpose. Typically, the IP Address resource is used with a Network Name resource in order to create a virtual server.
Network Name This resource type is used to assign a name to a resource on the cluster. This is typically associated
with an IP Address resource type in order to create a virtual server. Many applications and services that you might
want to cluster require a virtual server.
Property         Description
Name             Required. Specifies the name of the resource.
Description      Optional. Describes the resource.
Possible Owners  Required. Specifies which nodes own the resource. If the Quorum resource resides on the disk, all nodes must be owners.
Dependencies     Required. None.
Disk             Required. Specifies the drive letter or letters for the Physical Disk resource.
Once created, the Physical Disk resource can be brought online, used as a dependency, or otherwise controlled using the
Resource Management functions. It will appear as a Cluster resource in Cluster Administrator and in Cluster.exe. You can
view and set the Physical Disk resource properties by using the Properties page, shown in Figure 5.1.
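As a minimal sketch using the Cluster.exe resource options described in this chapter (the group name "ACI Group" and the drive letter R: are taken from examples elsewhere in this document; any other values are placeholders):

cluster res "Disk R:" /create /group:"ACI Group" /type:"Physical Disk"
cluster res "Disk R:" /priv Drive="R:"
cluster res "Disk R:" /online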
Drive
The Drive property specifies the drive letter for the Physical Disk resource. If you're using the Drive property and multiple
drive letters are associated with the disk, you must set the Drive property to include all of the drive letters. You must also
make sure that the assigned drive letter does not conflict with existing drive letters anywhere in the cluster, including each
node's local drives.
Signature
The Signature property specifies an identifier for the disk. It is a DWORD value with a range from 0 to 0xFFFFFFFF.
When you create a new disk resource using Cluster.exe, you set the Drive or Signature private property to the drive or
signature of the disk. You must set one of these two properties, but you cannot set both. Neither property can be changed
once the assignment is made and the resource is created. When you create a new disk resource using Cluster Administrator,
you're not required to provide one of these properties. Instead, a list of available disks is displayed for you to choose from.
SkipChkdsk
The SkipChkdsk property determines whether the operating system runs chkdsk on a physical disk before attempting to
mount the disk. A TRUE setting causes the operating system to mount the disk without running chkdsk. A FALSE setting
causes the operating system to run chkdsk first and, if errors are found, take action based on the ConditionalMount
property. However, if both the SkipChkdsk and ConditionalMount values are 0 (FALSE), chkdsk will not run and the disk
will be left offline. Table 5.3 summarizes the interaction between SkipChkdsk and ConditionalMount.
Table 5.3 SkipChkdsk and ConditionalMount Interaction

SkipChkdsk Setting  ConditionalMount Setting  Chkdsk Runs?  Disk Mounted?
FALSE               TRUE                      Yes           If chkdsk reports errors, no. Otherwise, yes.
FALSE               FALSE                     No            No
TRUE                TRUE                      No            Yes
TRUE                FALSE                     No            Yes

Because forcing a disk to mount when chkdsk reports errors can result in data loss, you should exercise caution when changing these properties.
ConditionalMount
The ConditionalMount property determines whether a physical disk is mounted, depending on the results of chkdsk. A
TRUE setting prevents the operating system from mounting the disk if chkdsk reports errors. A FALSE setting causes the
operating system to attempt to mount the disk regardless of chkdsk failures. The default is TRUE. Note that if chkdsk has
not run, it will not produce errors, so the operating system will attempt to mount the disk regardless of the
ConditionalMount setting.
MountVolumeInfo
The MountVolumeInfo property stores information used by the Windows 2000 Disk Manager. Cluster Service updates the
property data stored in MountVolumeInfo whenever a disk resource is brought online. Cluster Service also updates
MountVolumeInfo when the drive letter of a disk resource is changed using Disk Manager.
MountVolumeInfo data consists of a byte array organized as follows:
A 16-byte "header" consisting of the disk signature (first 8 bytes) and the number of volumes (second 8 bytes).
One or more 48-byte descriptive entries. (See Table 5.4.)
Table 5.4 MountVolumeInfo Data

Position         Data
First 16 bytes   Starting offset
Second 16 bytes  Partition length
Next 8 bytes     Volume number
Next 2 bytes     Disk type
Next 2 bytes     Drive letter
Last 4 bytes     Padding
For example, the private properties of a disk resource might be listed as follows:

D Disk Q:  Signature         1415371731 (0x545cdbd3)
D Disk Q:  SkipChkdsk        0 (0x0)
D Disk Q:  ConditionalMount  1 (0x1)
B Disk Q:  DiskInfo          03 00 00 00 ... (264 bytes)
B Disk Q:  MountVolumeInfo   D3 DB 5C 54 ... (104 bytes)
The values assigned to SkipChkdsk and ConditionalMount determine the behavior of chkdsk. If the MSCS folder on the
Quorum drive is inaccessible or if the disk is found to be corrupt (via checking of the dirty bit), chkdsk will behave as
follows:
If SkipChkdsk = 1 (which means TRUE), Cluster Service will not run chkdsk against the dirty drive and will
mount the disk for immediate use. (Note that SkipChkdsk = 1 overrides the ConditionalMount setting and that
Cluster Service performs the same no matter what the ConditionalMount property is set to.)
If SkipChkdsk = 0 (which means FALSE) and ConditionalMount = 0, Cluster Service fails the disk resource and
leaves it offline.
If SkipChkdsk = 0 and ConditionalMount = 1, Cluster Service runs chkdsk /f against the volume found to be dirty
and then mounts it. This is the current default behavior for Windows 2000 clusters and is the only behavior for
Windows NT 4 clusters.
You can use the following commands to modify these resource private properties:
cluster clustername res "Disk Q:" /priv Skipchkdsk=0[1]
cluster clustername res "Disk Q:" /priv ConditionalMount=0[1]
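To display the current values of these private properties, run the same command without an assignment; the output has the form shown in the listing above:

cluster clustername res "Disk Q:" /priv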
You can track disk management changes using the fixed-length values returned by the MountVolumeInfo property.
MountVolumeInfo replaces DiskInfo in Windows 2000.
Here's a sample MountVolumeInfo entry:
D3DB5C540400000000020000000000000000400600000000010000000746000000024
0060000000000FE3F060000000002000000074B00000000800C000000000000400600
00000003000000074C00000040C0120000000000C03F06000000000400000007490000
The signature is D3DB5C54, and the number of volumes is 04000000. The table below describes how to interpret the rest
of the information.
Offset             Partition Length   Volume Number  Disk Type  Drive Letter  Padding
00020000.00000000  00004006.00000000  01000000       07         46            0000
00024006.00000000  00FE3F06.00000000  02000000       07         4B            0000
0000800C.00000000  00004006.00000000  03000000       07         4C            0000
0040C012.00000000  00C03F06.00000000  04000000       07         49            0000
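Reading the first row: the volume starts at byte offset 0x00020000, the disk type 07 is the standard partition type ID for NTFS (IFS), and the drive letter 46 is the ASCII code for "F" (likewise 4B = "K", 4C = "L" and 49 = "I").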
For compatibility in a mixed-node cluster where one node is running Windows 2000 and the other is running Windows NT
4, DiskInfo is retained in the properties of the disk resource.
Whenever a disk resource is brought online, Cluster Service checks the physical disk configuration and updates the
information in MountVolumeInfo and DiskInfo. Corrections are made to the physical disk configuration registry entries as
needed. When changes are made using Disk Manager, any values related to drive letters are updated dynamically.
3.3
3.3.1 Generic
Overview
Server clusters are configurations of two or more machines that share some disks. A failure in one machine causes the data
services to automatically relocate to the backup machine. If the shared disk is itself a RAID, then the configuration admits
no single point of failure.
Configuration
There are a number of configurations for server clusters. Here the two nodes share a disk array. The secondary node
monitors the primary to determine if it has failed. When it detects a failure, it immediately starts a script that restarts all
the failed services on the secondary node and changes the IP address of the backup node to that of the failed node.
[Figure: Two-node Versant cluster. Versant clients (GUI, application logic, Versant API) connect to the Versant server processes; the redundant node takes over the database files on the shared disk on failover.]
Install the Versant software on both nodes on a local disk and change the properties of the local Versantd service.
Create a new Cluster Group with the following resources: Shared Physical Disk, Network Name, IP Address and Generic Service Versantd.
In this document the two nodes are called Node1 and Node2. The shared network name is GlobalHostName and the shared drive is called R.
After installing Versant and before rebooting the machines, the Startup Type for the Versantd service needs to be changed from Automatic to Manual. To change this setting, bring up the Services dialog box from the Control Panel, double-click on Versantd and change the setting as shown below.
Note:
Each cluster needs only one Time Service resource. You do not need, and should not create, a Time Service resource in each group.
For a Versant database group you will need the following resources to be part of this new group.

Resource Type    Information required when adding resources
Physical Disk    Drive letter
IP Address       IP address
Network Name     Hostname
Generic Service  Service name (Versantd)
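A minimal command-line sketch for creating this group with Cluster.exe (the group name is a placeholder; the R: drive, GlobalHostName and Versantd follow the examples in this section; the IP Address resource additionally needs its address and subnet-mask private properties set, whose exact spellings are not shown in this document):

cluster group "Versant Group" /create
cluster res "Disk R:" /create /group:"Versant Group" /type:"Physical Disk"
cluster res "Disk R:" /priv Drive="R:"
cluster res "Versant IP Address" /create /group:"Versant Group" /type:"IP Address"
cluster res "GlobalHostName" /create /group:"Versant Group" /type:"Network Name"
cluster res "Versantd" /create /group:"Versant Group" /type:"Generic Service"
cluster res "Versantd" /priv ServiceName=Versantd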
3.4
3.4.1 Generic
Server clusters are configurations of two or more machines that share some disks. A failure in one machine causes the data
services to automatically relocate to the backup machine. If the shared disk is itself a RAID, then the configuration admits
no single point of failure.
There are a number of configurations for server clusters. Here the two nodes share a disk array. The inactive node monitors the primary to determine whether it has failed. When it detects a failure, it immediately starts a script that restarts all the failed services on the secondary node and reconnects the shared disk, the global IP address and the global hostname to those of the failed node.
[Figure: Two-node CORBA cluster. CORBA clients (GUI, application logic, API) connect to the CORBA server processes; the redundant node takes over the database files on the shared disk on failover.]
Availability
The duration of an outage of Corba will depend primarily on the time required to restart all Corba services and clients.
be performed. Namely, changes to the configuration domain can be carried out from any host in the configuration domain and are visible to all the machines that are configured to use it.
Orbix 2000 offers two types of domain configuration:
- File Based
- Configuration Repository Based
Both of them allow sharing a configuration domain between a number of hosts. However, the File Based type implies creating configuration files and then making them available to the other machines. This can be realized either by copying them from the host where the domain was initially created or through a shared network file system (e.g. Windows Networking). The second type is more flexible and thus preferred; it assumes creation of the configuration information on one highly reliable and always accessible host, with the Orbix environments of the other hosts linking to the created configuration domain. The configuration repository approach allows a centralized store of configuration information (such as loaded plug-ins and initial object references) for all machines running ACI servers. This model is depicted in the figure below. The configuration repository itself is an NT service that runs on a dedicated host. This host would also run domain-wide services such as the Naming Service. It can be either a stand-alone machine or one running ACI servers. A particularly good candidate is the machine running the Network Manager (NM) server, as this server contacts the Naming Service, where the Domain Manager (TDM) servers are registered, most frequently.
Figure 24 Orbix Environment with Centralized Configuration (all hosts reach the Configuration Repository over the CORBA transport)
Preparation of the Orbix 2000 environment for ACI usage involves a three-step process. The first step is executed on every machine destined to run any ACI server, as well as on the dedicated host that runs the configuration repository and other Orbix services. The second step is performed only on the host destined to run the configuration repository and other Orbix services. The last step applies to all machines except the one hosting the configuration repository and Orbix services. These steps include:
...
[HKEY_LOCAL_MACHINE\SYSTEM\ControlSet003\Control\Service]
5. Restart the computer
6. Install Orbix 2000
3.4.2.1.3

This application will walk you through the process of configuring an Orbix 2000 configuration domain. You can create a new domain on this machine or build a link to an existing domain running on another machine. Two types of domain are offered:

Repository Based. This type of domain stores configuration information in a centralized Configuration Repository. Select this type of domain if you expect to share configuration information among multiple machines.

File Based. This type stores configuration information in a file on the local machine. It can be useful for smaller installations.

The Configuration Repository is accessed through a CORBA service, which makes it available to multiple machines. This service must listen on a fixed port so that applications can access it at a well-known location. You can specify the hostname the service is to advertise; if you do not specify a hostname, the default is used.

What hostname should the Configuration Repository advertise (leave
blank for default):

This registers the domain as being the default domain for this machine. You must deploy a Locator into this domain if you intend to use POAs, or if you intend to use other Orbix 2000 services such as the Naming Service or Interface Repository.

Do you want to deploy a Locator into this domain [yes]:

Locator name [rts120]:

The Orbix 2000 Activator service starts applications on a particular machine. You must deploy an Activator service on every machine on which you intend to use the server activation feature.

Do you want to deploy an Activator into this domain [yes]:

Using a comma-separated list, please select the set of services you wish to deploy in this domain.
Instead of using the above batch files (they would have to be run every time the system is restarted), open the Control Panel | System Properties dialog and set the following system variables (adjust the path to the Orbix installation directory, if necessary):
IT_PRODUCT_DIR=E:\Program Files\IONA (this should be already set due to the Orbix installation,
please check only)
IT_CONFIG_DOMAINS_DIR=E:\Program Files\IONA\etc\domains
IT_DOMAIN_NAME=ACI_Network
The creation of the Orbix configuration domain ACI_Network is now done. The computer must be restarted to allow the changes to take effect. If all the ACI servers are intended to run on this machine (hosting the configuration repository at the same time), this is the only step needed; otherwise continue with step 3 for the other machines.
This application will walk you through the process of configuring an Orbix 2000 configuration domain. You can create a new domain on this machine or build a link to an existing domain running on another machine. To share a domain, you must specify where the remote Configuration Repository service is located.
On what host is the remote Configuration Repository running [new_aci_server]:
Enter the hostname where the configuration domain has been created (the host used in step 2).
On what port is the remote Configuration Repository listening [3076]:
Where do you want to place databases and logfiles for this domain [E:\Program
Files\IONA\var\ACI_Network]:
This registers the domain as being the default domain for this machine. You must deploy a Locator into this domain if you intend to use POAs, or if you intend to use other Orbix 2000 services such as the Naming Service or Interface Repository.

Do you want to deploy a Locator into this domain [yes]:
Enter no, a Locator needs to be run only on the host where the configuration domain has been
created
Deploy Activator
----------------
The Orbix 2000 Activator service starts applications on a particular machine. You must deploy an Activator service on every machine on which you intend to use the server activation feature.

Do you want to deploy an Activator into this domain [yes]:
Enter no, an Activator needs to be run only on the host where the configuration domain has been
created
This will produce the following output
creating configuration files........done.
creating link to Configuration Domain....done.
To use this domain, setup your environment as follows:
Instead of using the above batch files (they would have to be run every time the system is restarted), open the Control Panel | System Properties dialog and set the following system variables (adjust the path to the Orbix installation directory, if necessary):
IT_PRODUCT_DIR=E:\Program Files\IONA (this should be already set due to the Orbix installation,
please check only)
IT_CONFIG_DOMAINS_DIR=E:\Program Files\IONA\etc\domains
IT_DOMAIN_NAME=ACI_Network
The configuration of this machine to use the ACI_Network configuration domain is now complete. The computer must be restarted to allow the changes to take effect.
Note: If the first installation is erroneous, a manual deletion of the registry entries for the IT services is recommended.
Note:
Disk Group
One Disk Group is created for each disk resource on the shared SCSI bus. A Physical Disk resource is
included in each group.
Do not delete or rename the Cluster Group. Instead, model your new groups on the Cluster Group by modifying the Disk
Groups or creating a new group, as we will see in the following sections.
Each cluster needs only one Time Service resource. You do not need, and should not create, a Time Service resource in
each group.
For a Corba database group you will need the following resources to be part of this new group.

Resource Type    Information required when adding resources
Physical Disk    Drive letter
IP Address       IP address
Network Name     Hostname
Generic Service  Service names (the IT services listed below)
2. IT locator default-domain
3. IT naming default-domain
4. IT activator default-domain
5. IT ifr default-domain
6. IT event default-domain
ACI_Network is a placeholder for your CORBA domain name.
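As with the Versant service, each of these could be added as a Generic Service resource; a hedged sketch for the Locator (the group name is a placeholder, and the service name follows the list above with ACI_Network substituted for default-domain):

cluster res "IT locator ACI_Network" /create /group:"Corba Group" /type:"Generic Service"
cluster res "IT locator ACI_Network" /priv ServiceName="IT locator ACI_Network"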
3.5
3.5.1 Generic
Server clusters are configurations of two or more machines that share some disks. A failure in one machine causes the data
services to automatically relocate to the backup machine. If the shared disk is itself a RAID, then the configuration admits
no single point of failure.
There are a number of configurations for server clusters. Here the two nodes share a disk array. The inactive node monitors the primary to determine whether it has failed. When it detects a failure, it immediately starts a script that restarts all the failed services on the secondary node and reconnects the shared disk, the global IP address and the global hostname to those of the failed node.
[Figure: Two-node ACI cluster. ACI clients (GUI, application logic, API) connect to the ACI server processes; the redundant node takes over the database files on the shared disk on failover.]
Reliability:
This configuration provides the same level of reliability as is available in a RAID configuration, because all needed processes are already configured for MSCS. No single point of failure exists here.
Availability:
The duration of an ACI outage will depend primarily on the time required to restart all resources of its group and the clients (server, client, snmpserver, brass).
E.g. for the hosts file:

127.0.0.1     localhost
218.1.17.83   VERSANDHOST
218.1.17.80   Node1
218.1.17.81   Node2
E.g. for the services file:

acis-nmtdm        50195/tcp    # ACI NM-TDM server service
acis-dm           50196/tcp    # ACI DM server service (SubNM)
acis-dmtdm        50190/tcp    # ACI DM-TDM server service (or V8.2 DM)
acid-qd2-dmtdm    50191/tcp    # ACI DM-TDM Qd2 DCN server service
acid-snmp-dmtdm   50192/tcp    # ACI DM-TDM SNMP server service
acis-emgen        50193/tcp    # ACI GenEM server service
acid-snmp-emgen   50194/tcp    # ACI GenEM SNMP server service
acis-emxl         50200/tcp    # ACI EM XL server service
acid-snmp-emxl    50201/tcp    # ACI EM XL SNMP server service
acis-emamgw       50202/tcp    # ACI EM AMGW server service
acid-snmp-emamgw  50203/tcp    # ACI EM AMGW SNMP server service
acis-emacc        50204/tcp    # ACI EM ACC server service
acid-snmp-emacc   50205/tcp    # ACI EM ACC SNMP server service
Next step:
For the cluster software you have to change the properties of the local ACI services on both nodes with the Computer Management tool:
3.5.3
3.5.3.1 Introduction
Cluster Administrator shows you information about the groups and resources on all of your clusters and specific
information about the clusters themselves. A copy of Cluster Administrator is automatically installed on both cluster nodes
when you install MSCS. For remote administration, you can install separate copies of Cluster Administrator on other
computers on your network. The remote and local copies of Cluster Administrator are identical.
Note:
The default Cluster Group contains an IP Address resource, a Cluster Name resource, and a Time Service resource. (This group is essential for connectivity to the cluster.)
Disk Group
One Disk Group is created for each disk resource on the shared SCSI bus. A Physical Disk resource is
included in each group.
Do not delete or rename the Cluster Group. Instead, model your new groups on the Cluster Group by modifying the Disk
Groups or creating a new group, as we will see in the following sections.
Each cluster needs only one Time Service resource. You do not need, and should not create, a Time Service resource in
each group.
For a database group you will need the following resources to be part of this new group.

Resource Type    Information required when adding resources
Physical Disk    Drive letter
IP Address       IP address
Network Name     Hostname
Generic Service  Corba, Versant
Important: The start order of the resources is flexible. For example, it is possible to start CORBA after Versant, or the other way round, since the two resources are completely independent of each other.
The ACI resources can then be started in the following order (a command-line sketch follows this list):
For NM:
- NM server
For DM:
- QD2 server (only when it is needed)
- DM server
For EM:
- Brass
- SNMP server
- EM server
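Using Cluster.exe, the same order can be driven from the command line; the resource names below are placeholders for whatever names were chosen when the resources were created:

cluster res "Versant" /online
cluster res "Corba" /online
cluster res "NM server" /online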
3.6
Hardware Requirements

Component              Type/Value  Remark
General                Tower       Interfaces (recommendation): 2 x RS232C, 1 x Centronics, 1 x Keyboard (PS/2), 1 x Mouse (PS/2)
CPU
RAM
Hard Disk
Floppy
CD-ROM
DAT-Streamer
LAN-Adapter
Disk-Controller
Keyboard
Mouse
Graphics Adapter Card
Audio Controller
Monitor Server
Monitor Single User
Software Requirements

Component         Type                   Remark
Operating System  Windows NT 4.0 Server  Installation of Service Pack (valid version stored on ACI-CD) is mandatory
                  Windows 2000 Server    Will be supported from ACI Version 8.2 onwards
ACI-Client
General
An ACI-Client shall be planned to handle 6,000 up to 10,000 subscribers (depending on the customer's operational and maintenance organization).
Hardware Requirements

Component              Type/Value                 Remark
General                Mini-Tower/Desktop         Interfaces (recommendation): 2 x RS232C, 1 x Centronics, 1 x Keyboard (PS/2), 1 x Mouse (PS/2)
CPU                    Pentium III 600 MHz
Cache                  512 kB second-level cache
RAM                    128 MB
Hard Disk              3.2 GB (Fast-IDE)
Floppy                 1.44 MB
CD-ROM                 32x IDE
Disk Controller        Fast-IDE
LAN-Adapter            10/100 Mbit/s
Keyboard               PS/2 (recommended)
Mouse                  PS/2 (recommended)
Graphics Adapter Card  SVGA, 4 MB VRAM
Audio Controller       Any WinNT/2000 compatible
Monitor                Color, 21"
Software Requirements

Component         Type                       Remark
Operating System  Windows NT 4.0 WS          Installation of Service Pack (valid version stored on ACI-CD) is mandatory
                  Windows 2000 Professional  Will be supported from ACI Version 8.2 onwards
                  Windows NT 4.0 Server      Installation of Service Pack (valid version stored on ACI-CD) is mandatory
                  Windows 2000 Server        Will be supported from ACI Version 8.2 onwards
ACI/-TRIAL/-MINI
General
This PC can be used as an ACI-Trial (small configuration, max. 120 subscribers) or as an ACI-Mini, a configuration defined to provide a low-cost system for small networks with up to 1,000 subscribers. It is used as a single-user system.
Hardware Requirements

Component              Type/Value                  Remark
General                Mini-Tower/Desktop          Interfaces (recommendation): 2 x RS232C, 1 x Centronics, 1 x Keyboard (PS/2), 1 x Mouse (PS/2)
CPU                    Pentium III 600 MHz
Cache                  512 kB second-level cache
RAM                    128 MB
Hard Disk              6 GB (Fast-IDE)             max. DB size = 500 MB
Floppy                 1.44 MB
DAT-Streamer           4/8 GB (SCSI)               optional, for backup (only for ACI-Mini)
CD-ROM                 32x IDE
Disk Controller        Fast-IDE
LAN-Adapter            10/100 Mbit/s
Keyboard               PS/2 (recommended)
Mouse                  PS/2 (recommended)
Graphics Adapter Card  SVGA, 4 MB VRAM
Audio Controller       Any WinNT/2000 compatible
Monitor                Color, 17"
Software Requirements

Component          Type                    Remark
Operating System   Windows NT 4.0 Server   Installation of Service Pack (valid version stored on ACI-CD) is mandatory
                   Windows 2000 Server     Will be supported from ACI Version 8.2 onwards
ACI-High-Availability-Server-Cluster
General
The High-Availability Server Cluster solutions will only be offered together with SNI server clusters with 99.99% hardware availability, as mentioned in chapter 2.
LCT
Hardware Requirements

Component         Type/Value        Remark
General           Notebook/Laptop   With optional battery (accumulator) power supply. Interfaces (needed): 1 x RS232C; additional interfaces (recommendation): 1 x VGA, 1 x Centronics, 1 x Keyboard (PS/2), 1 x Mouse (PS/2)
CPU               –
Cache             –
RAM               –
Hard Disk         –
Floppy            –
CD-ROM            –
Disk-Controller   –
Graphics Adapter  –
LAN Adapter       –
LCD-Display       –
Software Requirements

Component          Type             Remark
Operating System   Windows NT 4.0   Installation of Service Pack (valid version stored on ACI-CD) is mandatory
                   Windows 2000     Will be supported from ACI Version 8.2 onwards
Configuration BASIC

Component              Type/Value                         Order Number         Remark                                                              Package #
General                PRIMERGY F200 GE FS                S26361-K643-V103     Floorstand; Slots: 6 x PCI (2 x 33 MHz/32-bit; 4 x 66 MHz/64-bit)   1
CPU                    Pentium III 1.26 GHz/512 kB
Floppy                 3.5"/1.44 MB
SCSI-Controller        1 x U160, int/ext, SE
Fast-IDE-Controller    2x2 IDE
LAN-Adapter            Intel 10/100 on-board
Graphics Adapter Card  PCI graphics, ATI 8 MB on-board
Flexy Bay Option FD                                       S26361-F2575-E1
Power-Supply                                              S26113-F453-E1
Power-Supply-Modul                                        S26113-F453-E10
Fan-Unit                                                  S26361-F2544-E1
RAM                                                       S26361-F2306-E523
CD-ROM                                                    SNP:SY-F2240E1-A
Hard Disk                                                 SNP:SY-F2336E118-A
DAT-Streamer                                              S26361-F1730-E2
Keyboard                                                  S26381-K297-V122     German (D)/International (Int)
Country-Kit            2x power cable grey 1.8 m          T26139-Y1740-E10     German (D)
                                                          T26139-Y1744-L10     English (UK)/Ireland (IR)
Operating System       Windows NT Server (US) V4.0+10CL   S26361-F2565-E305
Audio Controller       Creativlabs CT5808                 S26361-F2560-L1                                                                          2
Speaker                Active                             S26361-F2300-L1                                                                          3
Monitor 21"            MCM 21P3                           S26361-K618-V150                                                                         4
Order Packages

Package #  Supplier         Order Number        Content                                  Remark
1          Fujitsu-SIEMENS  S42022-D1801-V101   ACI - US/International
1          Fujitsu-SIEMENS  S42022-D1801-V102   ACI - English UK                         Only on request
1          Fujitsu-SIEMENS  S42022-D1801-V103   ACI - France                             Only on request
1          Fujitsu-SIEMENS  S42022-D1801-V104   ACI - German                             Only on request
2          Fujitsu-SIEMENS  S26361-F2560-L1     Audio Controller (Creativlabs CT5808)
3          Fujitsu-SIEMENS                      Speaker
4          Fujitsu-SIEMENS                      Monitor 21"
ACI-Server
General Comment
The ACI-Server can be a MIDDLE or HIGH system. The MIDDLE variant is equipped with a second processor in addition to the configuration of the BASIC system (ACI Single User).
The HIGH variant also includes two CPUs and, in addition, a RAID controller and a redundant power supply. Furthermore, it is equipped with three hard disks; the two additional hard disks should be used to store and mirror the ACI database.
Configuration MIDDLE

Component              Type/Value                           Order Number                        Remark                                                                                                                         Package #
General                PRIMERGY F200 GE FS                  S26361-K643-V102                    Floorstand; Slots: 6 x PCI (2 x 33 MHz/32-bit; 4 x 66 MHz/64-bit); power supply 1 (+1) x 400 W hot-plug/redundant (optional)   1
CPU                    Pentium III 1.13 GHz/512 kB
Floppy                 3.5"/1.44 MB
SCSI-Controller        1 x U160, int/ext, SE
LAN-Adapter            Intel 10/100 on-board
Graphics Adapter Card  PCI graphics, ATI 8 MB on-board
Flexy Bay Option FD                                         S26361-F2575-E1
2nd Processor          Pentium III 1.13 GHz/512 kB          S2636-F2399-E1, S26361-F2599-E113
Power-Supply           400 W Upgrade (hot-plug)             S26113-F453-E1
Power-Supply-Modul     400 W (hot-plug)                     S26113-F453-E10
Fan-Unit               Upgrade-Kit hot-plug redundant       S26361-F2544-E1
RAM                                                         S26361-F2306-E524
CD-ROM                 ATAPI/IDE                            SNP:SY-F2240E1-A
Hard Disk              18 GB, 10k, U160, hot-plug, 1"       SNP:SY-F2336E118-P
DAT-Streamer           DDS-3, 12 GB internal                S26361-F1730-E2
Keyboard               KBPC S2                              S26381-K297-V122
Country-Kit            2x power cable grey 1.8 m            T26139-Y1740-E10, T26139-Y1744-L10
Operating System       Windows NT Server (US) V4.0+10CL     S26361-F2565-E305
Audio Controller       Creativlabs CT5808                   S26361-F2560-L1                     optional                                                                                                                       2
Speaker                                                                                         optional                                                                                                                       3
Monitor 17"            MCM 17P3                             S26361-K707-V150                                                                                                                                                   4
Order Packages

Package #  Supplier         Order Number        Content                                  Remark
1          Fujitsu-SIEMENS  S42022-D1801-V201   ACI-MIDDLE - US/International
1          Fujitsu-SIEMENS  S42022-D1801-V202   ACI-MIDDLE - English UK                  Only on request
1          Fujitsu-SIEMENS  S42022-D1801-V203   ACI-MIDDLE - France                      Only on request
1          Fujitsu-SIEMENS  S42022-D1801-V204   ACI-MIDDLE - German                      Only on request
2          Fujitsu-SIEMENS  S26361-F2560-L1     Audio Controller (Creativlabs CT5808)
3          Fujitsu-SIEMENS                      Speaker
4          Fujitsu-SIEMENS  S26361-K707-V150    Monitor 17"
Configuration HIGH

Component              Type/Value                           Order Number             Remark                                                                                                                         Package #
General                PRIMERGY F200 GE FS PIII             S26361-K643-V102         Floorstand; Slots: 6 x PCI (2 x 33 MHz/32-bit; 4 x 66 MHz/64-bit); power supply 1 (+1) x 400 W hot-plug/redundant (optional)   1
CPU                    Pentium III 1.13 GHz
Floppy                 3.5"/1.44 MB
SCSI-Controller        2-channel SCSI controller on-board
LAN-Adapter            Intel 10/100 on-board
Graphics Adapter Card  ATI 4 MB graphics on-board
Flexy Bay Option FD                                         S26361-F2575-E1
2nd Processor                                               S26361-F2599-E113
Power-Supply                                                S26113-F453-E1
Power-Supply-Modul                                          S26113-F453-E10
Fan-Unit                                                    S26361-F2544-E1
CD-ROM                 ATAPI, IDE                           SNP:SY-F2240E1-A
RAM                    1 GB SDRAM 133 MHz                   S26361-F2306-E524
Hard Disks             3x 18 GB, 10k, U160, hot-plug, 1"    SNP:SY-F2336E118-P (3x)
DAT-Streamer           DDS-3, 12 GB internal                S26361-F1730-E2
RAID-Controller        Adaptec, 1 x U160 int/ext, 32 MB     S26361-F2405-E32
Keyboard               KBPC S2                              S26381-K297-V122         German (D)/(INT)
Country-Kit            2x power cable grey 1.8 m            T26139-Y1740-E10         German (D)
                                                            T26139-Y1744-L10         English (UK)/Ireland (IR)
Operating System       Windows NT Server (US) V4.0+10CL     S26361-F2565-E305
Audio Controller       Creativlabs CT5808                   S26361-F2560-L1          optional                                                                                                                       2
Speaker                                                                              optional                                                                                                                       3
Monitor 17"            MCM 17P3                             S26361-K707-V150                                                                                                                                        4
Order Packages

Package #  Supplier         Order Number        Content                                  Remark
1          Fujitsu-SIEMENS  S42022-D1801-V301   ACI-HIGH - US/International
1          Fujitsu-SIEMENS  S42022-D1801-V302   ACI-HIGH - English UK                    Only on request
1          Fujitsu-SIEMENS  S42022-D1801-V303   ACI-HIGH - France                        Only on request
1          Fujitsu-SIEMENS  S42022-D1801-V304   ACI-HIGH - German                        Only on request
2          Fujitsu-SIEMENS  S26361-F2560-L1     Audio Controller (Creativlabs CT5808)
3          Fujitsu-SIEMENS                      Speaker
4          Fujitsu-SIEMENS  S26361-K707-V150    Monitor 17"
ACI/-CLIENT
Configuration

Component              Type/Value                          Order Number        Remark                     Package #
General                SCENIC L, i815e, PS, LAN            S26361-F2424-E220   Slots: 5 x PCI, 1 x AGP    1
Floppy                 3.5"/1.44 MB
Disk-Controller        Ultra DMA-100 controller on-board
Audio-Controller       on-board
CPU, Cache             PIII 1.0 GHz/133, 256 kB            S26361-F2271-E270
RAM                    128 MB SDRAM PC 133                 S26361-F2272-E2
Graphics Adapter Card  Matrox Millennium G450 16 MB, DH    S26361-F2421-E18
Hard Disk              HDD 10 GB, Ultra DMA-100, 5.4k      S26361-F2413-E10
CD-ROM                 48x ATAPI                           S26361-F2273-E51
LAN-Adapter            on-board
Keyboard               KBPC P2 Light Basic                 S26381-K240-E122    US/International (INT)
                                                           S26381-K240-E165    English (UK)
                                                           S26381-K240-E140    France (F)
                                                           S26381-K240-V120    German (D)
Country-Kit            SCENIC L                            S26361-F2285-E502   US/International (INT)
                                                           S26361-F2285-E505   English (UK)
                                                           S26361-F2285-E503   France (F)
                                                           S26361-F2285-E501   German (D)
Operating System                                           S26361-F1818-E722   English (INT/UK/F)
                                                           S26361-F1818-E721   German (D)
Speaker                Active
Monitor 21"            MCM 21P3                            S26361-K618-V150                               2
Order Packages

Package #  Supplier         Order Number        Content                Remark
1          Fujitsu-SIEMENS  S42022-D1802-V101   ACI US/International
1          Fujitsu-SIEMENS  S42022-D1802-V102   ACI English UK         Only on request
1          Fujitsu-SIEMENS  S42022-D1802-V103   ACI France             Only on request
1          Fujitsu-SIEMENS  S42022-D1802-V104   ACI German             Only on request
2          Fujitsu-SIEMENS  S26361-K618-V150    Monitor 21"
ACI-High-Availability-Cluster-Server
General
The High-Availability Server Cluster solutions will only be offered together with SNI server clusters with 99.99% hardware availability.
Configuration general (with storage)

Component  Type/Value  Order Number         Remark
General                S26361-K643-V303
Storage                SNP:SY-K614V101-P
                       SNP:SY-F1609E3-P
                       SNP:SY-F1609E5-P
                       SNP:SY-F1331E51-P
                       SNP:SY-F2293E500-P
                       SNP:SY-F2293E40-P
                       SNP:SY-F1806E12-P
                       S26113-F231-E1
                       S26113-F81-E1
                       SNP:PS-E421E1-P
                       SNP:SY-F1647E301-P
                       SNP:SY-F2262E15-P
                       SNP:SY-F1828E3-P
                       S26231-K714-V210
                       S26361-F2436-E1
                       SNP:SY-F1832E1-P
                       S26361-F2435-E136
                       SNP:SY-F2261E8-P
Configuration Server 1

Component            Type/Value                                  Order Number        Remark
General              PIII 1.26 GHz/512 kB, UPS-backed            S26361-K643-V303
CPU/Cache            PIII 1.26 GHz, 512 kB                       S26361-F2599-E126
RAM                  1 GB SDRAM PC133 ECC                        S26361-F2306-E524
Flexy Bay Option FD                                              S26361-F2575-E1
DAT-Streamer         DDS4 20 GB, 3 MB/s, internal                S26361-F2233-E3
CD-ROM               DVD-ROM, ATAPI                              SNP:SY-F2234E1-A
Hard Disk            2x 18 GB, 10k, U160, hot-plug, 1"           SNP:SY-F2336E118-P
Raid Controller      U160 int/ext, 16 MB, Mylex                  S26361-F2406-E16
FC Controller        66 MHz, Cu interface                        SNP:SY-F2244E1-A    PCI card
LAN-Adapter          Fast Ether-Express-Pro/100+ Server          SNP:SY-F2071E1-A
Power-Supply         400 W Upgrade (hot-plug)                    S26113-F453-E1
Power-Supply-Modul   400 W (hot-plug)                            S26113-F453-E10
Fan-Unit             Upgrade-Kit hot-plug redundant              S26361-F2544-E1
Built-In-Kit         19" DC-Rack P6xx/Hxxx/F2xx                  SNP:SY-F2261E31-A
Software             Windows 2000 Adv. SRV + 25 CL, 18Proz. US   S26361-F2565-E706
Configuration Server 2

Component            Type/Value   Order Number        Remark
General                           S26361-K643-V303
CPU/Cache                         S26361-F2599-E126
RAM                               S26361-F2306-E524
Flexy Bay Option FD               S26361-F2575-E1
DAT-Streamer                      S26361-F2233-E3
CD-ROM                            SNP:SY-F2234E1-A
Hard Disk                         SNP:SY-F2336E118-P
Raid Controller                   S26361-F2406-E16
FC Controller                     SNP:SY-F2244E1-A    PCI card
LAN-Adapter                       SNP:SY-F2071E1-A
Power-Supply                      S26113-F453-E1
Power-Supply-Modul                S26113-F453-E10
Fan-Unit                          S26361-F2544-E1
Built-In-Kit                      SNP:SY-F2261E31-A
Order Packages

Package #  Supplier         Order Number        Content                Remark
1          Fujitsu-SIEMENS  S42022-D1808-V301   ACI US/International
LCT
Configuration

Component         Type/Value                       Order Number                          Remark                                                                                                                         Package #
General           LIFEBOOK E-6646, PIII 1066 MHz   S26391-F2424-E200                     LCD TFT 14.1" SXGA+ 1400x1050, ATI Mobility-M6; 1x serial, 1x parallel, 1x VGA, 1x PS/2; Li-Ion battery, CardBus connectors   1
Floppy            3.5"/1.44 MB, for E-6646
Disk-Controller   on-board
Graphics Adapter  16 MB video RAM
RAM               SDRAM 128 MB, 133 MHz
CD-ROM            CD-ROM, E-Series                 S26391-F258-E105
Hard Disk         10 GB, E-Series                  S26391-F2430-E100
Modem             E-Series LAN/Modem (internal)    S26391-F2431-E100
Keyboard          E-Series keyboard                S26391-K114-V170                      US/International (INT)
Operating System                                   S26391-F263-E233, S26391-F2436-E225   English (INT/UK/F)
Order Packages

Package #  Supplier         Order Number        Content                       Remark
1          Fujitsu-SIEMENS  S42022-D1700-V401   LCT US/International
1          Fujitsu-SIEMENS  S42022-D1700-V402   LCT English UK                Only on request
1          Fujitsu-SIEMENS  S42022-D1700-V403   LCT France                    Only on request
1          Fujitsu-SIEMENS  S42022-D1700-V404   LCT German                    Only on request
           Fujitsu-SIEMENS  S26391-F1491-L400   LAN-Adapter 3Com 3CCE589ET