
Introduction

An Application Server cluster is a set of application servers running the same application.

The application server processes can run on the same machine or on different machines. From a client's point of view, a cluster appears to be a single server instance. Clustered application servers behave similarly to single servers, except that they provide load balancing and failover. All server instances in a cluster must reside in the same cell (defined below) of a WebSphere domain. Clustering provides two main benefits:

Scalability - to increase the capacity of an application.
High availability - if an application server instance in the cluster fails, application processing continues on the other application server instances of the cluster.

To understand the Application Server architecture topology, start with some common terms:

Managed server - An application server process running in its own Java Virtual Machine (JVM).
Node - A logical group of servers located on the same physical computer. Multiple nodes can exist on the same computer, but for this article, assume that only one node exists per physical computer.
Node agent - An administrative process that manages the servers running on a node. A node agent resides on a single node.
Cell - A logical group of nodes belonging to the same administrative domain.
Cell manager - Also called the network deployment manager, it manages the multiple nodes in a distributed topology. Technically it is an application server running an instance of the administration console that can manage all the application servers configured in the same cell. It does this by interacting with the node agent running on each physical computer in the cell.

A base Application Server installation includes everything needed for a single application server process, and the code needed to run the node agent. The node agent is used only after the node is added to a cell. A Network Deployment installation of Application Server can support a network of computer systems that are configured to run collaborating instances of a single server installation. Figure 1 shows which CD is needed to install the product.

Figure 1. CDs for installation

Setting up a cluster for load balancing

Load balancing is the distribution of tasks across the application servers of a cluster. This section details the steps required to set up an Application Server cluster that supports workload management.

Installation

Figure 2 shows the installation I will use for this example.

Figure 2. Cluster setup

To recreate this example on your own, you will first need to install the product. For this you will need two network-connected computers with an operating system properly installed and configured. The examples in this article are based on Windows 2000 but can be easily extended to other operating systems. The two computers, TP1 and TP2, must have fixed IP addresses and must belong to a network with a DNS server. If you do not have access to a DNS server, you should use the etc/hosts file to configure your network. The configuration for TP1 and TP2 in the etc/hosts file is shown in Figure 3. Note that the IP address can vary depending on your network configuration.

Figure 3. Configuration for TP1 and TP2
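
If you are relying on the hosts file rather than DNS, the entries would look something like the following (the IP addresses below are placeholders; substitute the fixed addresses of your own machines):

192.168.1.101   TP1
192.168.1.102   TP2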

You must install Application Server on each computer. If you aren't familiar with installing the product, follow these steps:

1. Launch the installer (install.bat or install.sh) from your Application Server CD.
2. Choose the install language (English is used for this example). English is the default choice if your Windows 2000 regional settings are English-US.
3. When asked for the setup type, select Full.

4. For the installation directories, change the defaults according to Figure 4.

Figure 4. Installation wizard

5. When you are asked to enter the node name and host name for this installation:
   - Enter TP1 for the node name and TP1 for the host name on the first computer.
   - Enter TP2 for the node name and TP2 for the host name on the second computer.
   (Be sure that TP1 and TP2 are valid entries in your DNS server or are in your etc/hosts files.)
6. For Windows, it is not necessary to run WebSphere as a service, but you should keep IBM HTTP Server as a service on TP1. Enter an appropriate administrator user ID and password for your system.
7. Follow the default steps to finish the installation.

Application Server installation is now complete on the two computers. The next step is to install WebSphere Application Server Network Deployment on TP1, as follows:

1. Launch the installer (install.bat or install.sh) from the WebSphere Application Server Network Deployment CD.
2. Select English.
3. Follow the installer steps. In the feature selection, only the deployment manager is required.
4. Change the default installation directory to C:\WebSphere\DeploymentManager.
5. Keep TP1Manager for the node name, TP1 for the host name, and TP1Network for the cell name.

6. For Windows, you can keep the deployment manager running as a service.
7. Follow the steps to complete the install.

After completing the installation, your configuration will be:

Computer 1
- Base Application Server installed.
- Application Server deployment manager installed.
- IBM HTTP Server installed and running as a Windows service.

Computer 2
- Base Application Server installed.

Configuration

Figure 5 shows the configuration for the cluster.

Figure 5. Configuration for the cluster

Creating the deployment manager cell

Start the deployment manager process on TP1. The two WebSphere nodes must then be added to the deployment manager's cell. To do this, use the addNode command, which can be found in the /bin directory of the WebSphere Application Server installation on each system, as shown in Figure 6 below. Do not issue the command on both nodes at the same time: execute addNode on computer TP1 first, wait for the command to complete, and then run it on computer TP2. The command requires one parameter, the host name of the computer running the deployment manager process. In this case, the host name is TP1.

Figure 6. addNode
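
On each node, the command looks like the following, assuming the default installation path used in this setup (TP1 is the host name of the deployment manager):

C:\WebSphere\AppServer\bin\addNode.bat TP1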

After the command completes, you should get a message that the node has been successfully federated, as shown in Figure 7.

Figure 7. Success message

Do not forget to issue the same command on TP2 (addNode.bat TP1, where TP1 is the host name of the computer running the deployment manager). So far you have a deployment manager, a node agent running on TP1, and a node agent running on TP2. To verify the installation to this point, review the following logs:

C:\WebSphere\AppServer\logs\nodeagent\SystemOut.log (on TP1 and TP2) C:\WebSphere\DeploymentManager\logs\dmgr\SystemOut.log (on TP1)
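
For a quick check, you can search each log for the startup message from the command line, for example:

findstr /C:"open for e-business" C:\WebSphere\AppServer\logs\nodeagent\SystemOut.log
findstr /C:"open for e-business" C:\WebSphere\DeploymentManager\logs\dmgr\SystemOut.log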

A message "Server xxx open for e-business" will indicate that the process is started, where xxx is either dmgr or nodeagent. Creating the cluster As described earlier, a cluster is a set of application servers running the same application. Within a cluster, each application server is often called a clone. In Figure 3, there are three clones: two on TP2, and one on TP1. In Version 5 of Application Server, a clone is allocated a given weight used for load balancing. The weight is a relative value between all the clones in the cluster. For this scenario, you want to share requests evenly across all computers. To do this, set the weighting for clone 3 to be the same as the weighting of clone 1 added to the weighting of clone 2. For example, you can set the weight values to 2 for clone 1, 2 for clone 2, and 4 for clone 3. Using other values such as 3, 3, and 6 would have given the same result. You shouldn't use high numbers for the relative weight of the clones. To create the cluster of application servers, use the WebSphere Administrative Console. In Servers > Clusters press New to create a new cluster, as shown in Figure 8.

Figure 8. Creating the cluster of application servers

In step 1, choose cluster1 for the name of the cluster. You will see the meaning of the replication domain later, in Failover and replication in a cluster. For now, select Create Replication Domain for this cluster and do not enable Prefer local.

Figure 9a. Entering basic cluster information

Click Next to go to step 2.

Figure 9b. Creating new clustered servers

In step 2, as shown in Figure 9b above, create the three clones:

- Create a clone called clone1 on node TP2. Set a weight of 2, select Unique Http Ports and Create Replication Entry in this Server, use the default application server template, and click Apply.
- Repeat step 2 to create a clone called clone2 on node TP2. Set a weight of 2, select Unique Http Ports and Create Replication Entry in this Server, use the default application server template, and click Apply.
- Repeat step 2 again to create a clone called clone3 on node TP1. Set a weight of 4, select Unique Http Ports and Create Replication Entry in this Server, use the default application server template, and click Next.

Check in the summary that the cluster contains the three clones with the right attributes, click Finish, and save the changes. You're done creating the cluster definition.

The next step is to install an enterprise application. For this exercise, use the DefaultApplication.ear in the /installableApps directory of the WebSphere Application Server installation. The procedure for installing an enterprise application onto a cluster is identical to installing onto a standalone application server, with one exception: at the "Map modules to application servers" step, you must select cluster1 for all modules. See Figure 10 below for an example. (Detailed installation steps for an enterprise application are beyond the scope of this article.)

Figure 10. Map modules to application servers

During the setup of the cluster, the three clones were created with unique HTTP ports, meaning that the HTTP transport of each application server listens on its own port. The algorithm used by the admin process to assign new ports starts from a default value (9080 for the HTTP transport) and increments to the next free value for each new server that is defined. In this case, when you installed Application Server, a standalone server called server1 was created. When the node was added to the deployment manager, server1 was also migrated to become a managed server. Server1 uses port 9080 for its HTTP transport. On TP2, clone1 was created using port 9081 and clone2 using port 9082; on TP1, clone3 uses port 9081 (server1 already occupies 9080).

When installing the DefaultApplication.ear, the default_host virtual host was used for the Web modules. By default, default_host accepts HTTP requests only on port 9080, so you need to configure this virtual host to also accept requests on ports 9081 and 9082. To add support for the additional ports, in the WebSphere Administrative Console, go to Environment > Virtual Hosts > default_host > Host Aliases. Click New to add a new alias. Use * for the host name (* represents any host name) and 9081 for the port. Repeat this task for port 9082, as shown in Figure 11.

Figure 11. Additional ports to the default host

Save the configuration, ensuring that Synchronize changes with Nodes is checked.

Now that you have an application installed in your cluster, it's time to test it. There are three types of processes in the runtime: the cell manager, node agents, and application servers. Each is a separate Java process at the system level. You can use the Administrative Console of the cell manager to start and stop the cluster. Before doing that, the node agent must be running on each node. You can use the commands in Table 1 to control the cell manager and node agents in this installation.

Table 1. Commands for controlling the cell manager and node agents

To:                           Enter the command:
Start the node agent          C:\WebSphere\AppServer\bin\startNode.bat
Stop the node agent           C:\WebSphere\AppServer\bin\stopNode.bat
Start the deployment manager  C:\WebSphere\DeploymentManager\bin\startManager.bat
Stop the deployment manager   C:\WebSphere\DeploymentManager\bin\stopManager.bat

The node agent should still be running from when the addNode command was executed previously. To check that the node agents for TP1 and TP2 are running, use System Administration > Node Agent from the WebSphere Administrative Console. Then look at the status for the nodes TP1 and TP2. If the node agent is unavailable on one of the computers, start it from the command line of that computer. When all the node agents are started, you can start the cluster from the WebSphere Administrative Console, as shown in Figure 12. The starting process might take a few minutes, depending on your hardware configuration.

Figure 12. Starting the cluster

Before testing the workload management, it is useful to verify that the clones are working individually. Launch a browser and check the following URLs:

http://TP2:9081/hello - to see if clone1 is working
http://TP2:9082/hello - for clone2
http://TP1:9081/hello - for clone3

Figure 13. Verifying that servers are functioning

If the three URLs return the result shown in Figure 13, it demonstrates that all servers in the cluster are functioning and that they're using the correct ports. Close your browser.

Testing workload management

The HTTP server plugin provides workload management for a Web application. The IBM HTTP Server was installed at the same time as WebSphere Application Server on TP1. The plugin uses, by default, the configuration file plugin-cfg.xml in the C:\WebSphere\AppServer\config\cells directory. To generate the plugin configuration, use Environment > Update Web Server Plugin in the Administrative Console. The deployment manager Administrative Console generates the file in C:\WebSphere\DeploymentManager\config\cells, so you need to manually copy the generated file to where the HTTP server plugin expects to find it, in this case C:\WebSphere\AppServer\config\cells (a sample copy command follows below).

The snoop servlet in the DefaultApplication.ear is useful for testing workload management. It displays the name of the process that executes the request. Launch a browser and enter the URL http://TP1/snoop. (The IBM HTTP Server is installed on TP1.)
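
The manual copy of the generated plugin configuration can be done with a command along these lines (a sketch, assuming the generated file is named plugin-cfg.xml and the default directories used in this setup):

copy C:\WebSphere\DeploymentManager\config\cells\plugin-cfg.xml C:\WebSphere\AppServer\config\cells\plugin-cfg.xml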

Figure 14. Testing workload

Click Reload a few times and see which clone executes the request. You should get a sequence such as 1, 2, 3, 3, 1, 2, 3, 3, and so on (clone1, clone2, then clone3 twice), reflecting the relative weights applied when the clones were configured. Note: If you always see the same clone ID, close your browser and relaunch it. This removes the cookie that WebSphere uses for session affinity.

Failover and replication in a cluster

Failover occurs when the process executing a task becomes unavailable for any reason, and that task is taken over by another process running the same application. To make this possible, the state of the original task must be available to the new process. Application Server provides a mechanism to copy an application's state so that it can be restored in a different process.

Now it's time to test failover. The best way to demonstrate failover is to kill a process and see what happens. For that, you will need to know the process ID (pid) of each process, which you can find in:

The server logs, shown in Figure 15
The .pid file in the logs directory (for example, C:\WebSphere\DeploymentManager\logs\dmgr\dmgr.pid)
The server runtime information in the WebSphere Administrative Console

Figure 15. Process ID
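
For example, to display the deployment manager's process ID from its .pid file (using the default path listed above), run:

type C:\WebSphere\DeploymentManager\logs\dmgr\dmgr.pid

You can then end that process from the Windows Task Manager (add the PID column in the Processes tab) to simulate a failure.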

Do all the tests to verify that:

Killing a clone in a cluster does not affect the other clones. Notice that when an application server process is killed, it is automatically restarted: the node agent is in charge of monitoring the application servers running on its node, and if it detects that a process has disappeared for whatever reason (voluntarily killed or crashed), it restarts it.
Killing a node agent doesn't affect the running clones.
Killing the deployment manager doesn't affect the runtime of a cluster; it just prevents administration of the cell through the WebSphere Administrative Console.

By doing these tests, you will see that there is no single point of failure in the cluster. Refer to the commands in Table 1 to restart the deployment manager and the node agents.

During the cluster creation steps, a Replication Domain and a Replication Entry for each clone were created. WebSphere provides a replication service that can replicate HTTP session data among processes and retrieve the HTTP session if the process that currently maintains it fails. To see this in action:

1. Launch a browser and enter the URL http://TP1/hitcount.

Figure 16. Hit Count demonstration

2. Select Session state (create if necessary) and click Increment. Repeat the operation a few times until the Hit Count value reaches 5.
3. In the same browser window, change the URL to http://TP1/snoop.
4. Click Reload a few times and observe that the same clone always executes the request. By default, WebSphere uses session affinity: when an HTTP session is created by an application, the HTTP plugin tries the same clone first for all subsequent requests.

Now see what happens if the clone becomes unavailable:

1. Find the process ID of the clone that executes the snoop servlet and kill it.
2. Reload the URL http://TP1/snoop in your browser. You should get a new clone ID, as expected.
3. In the same browser window, change the URL to http://TP1/hitcount.

4. Select Session state (create if necessary) and click Increment. The Hit Count value should be 6. This shows that the HTTP session was not lost when the clone was killed and that the replication service worked as expected.

Conclusion

In this article, I showed you how to set up a cluster for load balancing and failover with IBM WebSphere Application Server Version 5.
