
Key resources:
http://blogs.technet.com/b/kevinholman/archive/2011/08/05/how-to-monitor-sql-agent-jobs-using-the-sql-managementpack-and-opsmgr.aspx
http://technet.microsoft.com/en-us/magazine/jj554308.aspx

Windows Clustering (Windows Server 2003)


Windows Clustering provides three different, but complementary, clustering technologies. The clustering technologies, which ship in a number of different products, can be used separately or combined to provide scalable and highly available services.

Clustering technology comparison

Network Load Balancing (NLB) clusters
Available in: Microsoft Windows Server 2003, Web Edition; Microsoft Windows Server 2003, Standard Edition; Microsoft Windows Server 2003, Enterprise Edition; and Microsoft Windows Server 2003, Datacenter Edition
Maximum number of nodes: 32
Application: Load balancing of Transmission Control Protocol (TCP) and User Datagram Protocol (UDP) traffic
Specialized hardware required? No. Note: If you use teaming network adapters, you must select network adapters listed in the Windows Catalog. For more information, see Network Load Balancing system requirements and the compatibility information in Support resources.
Typical deployments: Web servers, Microsoft Internet Security and Acceleration (ISA) server, virtual private networks, Windows Media servers, Mobile Information servers, Terminal Services
Stateful or stateless? Stateless

Component Load Balancing (CLB) clusters
Available in: Microsoft Application Center 2000
Maximum number of nodes: 12
Application: Single point of management and configuration for Web farms
Specialized hardware required? No
Typical deployments: Web farms
Stateful or stateless? Stateless

Server clusters
Available in: Windows Server 2003, Enterprise Edition, and Windows Server 2003, Datacenter Edition
Maximum number of nodes: 8
Application: Failover and failback of applications
Specialized hardware required? Yes. Note: To confirm that your server cluster hardware is designed for Windows Server 2003, Enterprise Edition, or Windows Server 2003, Datacenter Edition, see the compatibility information in Support resources.
Typical deployments: MS SQL Server, MS Exchange Server, file and print servers, Message Queuing
Stateful or stateless? Stateful

Important

Microsoft will not support the configuration of server clusters and Network Load Balancing clusters on the same server. For more information about how the Windows Clustering technologies can be combined in a multitiered approach to provide highly available services, see "Planning for High Availability" in the Microsoft Windows Server 2003 Deployment Kit at the Microsoft Windows Resource Kits Web site. In addition, see "Server Clusters and Network Load Balancing" in the Microsoft Windows Server 2003 Resource Kit at the Microsoft Windows Resource Kits Web site.

1. Network Load Balancing clusters. Network Load Balancing clusters provide scalability and high availability for TCP- and UDP-based services and applications by combining up to 32 servers running Windows Server 2003, Web Edition; Windows Server 2003, Standard Edition; Windows Server 2003, Enterprise Edition; or Windows Server 2003, Datacenter Edition, into a single cluster. By using Network Load Balancing to build a group of cloned, or identical, clustered computers, you can enhance the availability of servers such as Web and File Transfer Protocol (FTP) servers, ISA servers (for proxy servers and firewall services), virtual private network (VPN) servers, Windows Media servers, and Terminal Services over your corporate LAN. You can install Network Load Balancing clusters through Network Connections or by using the Network Load Balancing Manager. For more information about Network Load Balancing clusters, see Network Load Balancing Overview.

2. Component Load Balancing clusters. Component Load Balancing clusters provide high scalability and availability by enabling COM+ applications (for example, a shopping cart application on an e-commerce Web site) to be distributed across multiple servers. For more information, see the documentation for Microsoft Application Center 2000 in Microsoft TechNet at the Microsoft Web site.

Important

Component Load Balancing is a feature of Microsoft Application Center 2000. It is not a feature of Windows Server 2003, Standard Edition; Windows Server 2003, Enterprise Edition; or Windows Server 2003, Datacenter Edition.

3. Server clusters. Server clusters provide high availability for applications through the failover of resources. Server clusters focus on preserving client access to applications and system services, such as Microsoft Exchange for messaging, Microsoft SQL Server for database applications, and file and print services.

Server clusters can combine up to eight nodes. In addition, a cluster cannot be made up of nodes running both Windows Server 2003, Enterprise Edition, and Windows Server 2003, Datacenter Edition. In server clusters with more than two nodes, all nodes must run Windows Server 2003, Datacenter Edition, or Windows Server 2003, Enterprise Edition, but not both. By default, all clustering and administration software files are automatically installed on your computer when you install any operating system in the Windows Server 2003 family. For more information about server clusters, see Understanding Server Clusters. For more information about Network Load Balancing clusters and server clusters, see the following topics:

For conceptual and procedural information on Network Load Balancing clusters, see Network Load Balancing Clusters.
For conceptual and procedural information on server clusters, see Server Clusters.
For guidelines on securing a Network Load Balancing cluster, see Network Load Balancing Best practices.
For guidelines on securing a server cluster, see Best practices for securing server clusters.

Introduction to Network Load Balancing



Updated: January 21, 2005
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2



The Network Load Balancing (NLB) service enhances the availability and scalability of Internet server applications such as those used on Web, FTP, firewall, proxy, VPN, and other mission-critical servers. A single computer running Windows can provide a limited level of server reliability and scalable performance. However, by combining the resources of two or more computers running one of the products in the Windows Server 2003 family into a single cluster, Network Load Balancing can deliver the reliability and performance that Web servers and other mission-critical servers need. The following diagram depicts two connected Network Load Balancing clusters. The first cluster consists of two hosts and the second cluster consists of four hosts:

Each host runs separate copies of the desired server applications, such as those for Web, FTP, and Telnet servers. Network Load Balancing distributes incoming client requests across the hosts in the cluster. The load weight to be handled by each host can be configured as necessary. You can also add hosts dynamically to the cluster to handle increased load. In addition, Network Load Balancing can direct all traffic to a designated single host, called the default host.

Network Load Balancing allows all of the computers in the cluster to be addressed by the same set of cluster IP addresses (but also maintains their existing unique, dedicated IP addresses). For load-balanced applications, when a host fails or goes offline, the load is automatically redistributed among the computers still operating. For applications with a single server, traffic is redirected to a specific host. When a computer fails or goes offline unexpectedly, active connections to the failed or offline server are lost. However, if you bring a host down intentionally, you can use the drainstop command to service all active connections prior to bringing the computer offline. In either case, when ready, the offline computer can transparently rejoin the cluster and regain its share of the workload.

Note

If you plan to use Network Load Balancing in a 64-bit environment, you must use the 64-bit Network Load Balancing version. If you do not, the cluster will fail to form.

Overview of Network Load Balancing configuration


Network Load Balancing runs as a Windows networking driver. Its operations are transparent to the TCP/IP networking stack. The following diagram shows the relationship between Network Load Balancing and other software components in a typical configuration of a Network Load Balancing host:

Database access from load-balanced server applications


Some server applications access a database that is updated by client requests. When these applications are load balanced in the cluster, these updates need to be properly synchronized. Each host can use local, independent copies of databases that are merged offline as necessary. Alternatively, the clustered hosts can share access to a separate, networked database server. A combination of these approaches can also be used. For example, static Web pages can be replicated among all clustered servers to ensure fast access and complete fault tolerance. However, database requests would be forwarded to a common database server that handles updates for multiple Web servers. Some mission-critical applications might require the use of highly available database engines to ensure complete fault tolerance for the service. It is recommended that you deploy cluster-aware database software to deliver highly available and scalable database access within an overall clustering scheme. One such example of this is Microsoft SQL Server, which can be deployed with the Cluster service in a server cluster. The Cluster service ensures that if one node fails, a remaining node assumes the responsibilities of the failed computer, thus providing almost continuous service to Microsoft SQL Server clients. It is able to do this because the computers in the server cluster make use of a cluster storage device. For more information on the Cluster service and how it works with Network Load Balancing, see Updated technical information. Notes

It is important to distinguish between the two cluster solutions under discussion. The first, Network Load Balancing, is intended primarily to load balance incoming TCP/IP traffic. The computers participating in this solution form one type of cluster. The second, the Cluster service, is intended primarily to provide failover service from one computer to another. The computers participating in this solution form a different type of cluster. Moreover, the Network Load Balancing cluster would most commonly be running Web server applications. In contrast, the Cluster service would most commonly be running database applications (when used in conjunction with Network Load Balancing). Network Load Balancing and the Cluster service cannot both be active on the same computer, but by joining the two cluster solutions together to function in a complementary fashion, the user creates an overall clustering scheme, as shown in the following diagram:

For more information on how Network Load Balancing achieves fault tolerance and scalability, see How Network Load Balancing works.

How Network Load Balancing works



Updated: January 21, 2005
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2



Network Load Balancing provides high availability and scalability of servers by using a cluster of two or more host computers working together. Internet clients access the cluster using either a single IP address or a set of addresses. The clients are unable to distinguish the cluster from a single server, and server applications are not aware that they are running in a cluster. However, a Network Load Balancing cluster differs significantly from a single host running a single server application because it can provide uninterrupted service even if a cluster host fails. The cluster can also respond more quickly to client requests than a single host.

Network Load Balancing delivers high availability by redirecting incoming network traffic to working cluster hosts if a host fails or is offline. Existing connections to an offline host are lost, but the Internet services remain available. In most cases (for example, with Web servers), client software automatically retries the failed connections, and the clients experience a delay of only a few seconds in receiving a response.

Network Load Balancing delivers scaled performance by distributing the incoming network traffic among one or more virtual IP addresses (the cluster IP addresses) assigned to the Network Load Balancing cluster. The hosts in the cluster then concurrently respond to different client requests, even multiple requests from the same client. For example, a Web browser might obtain each of the multiple images in a single Web page from different hosts within a Network Load Balancing cluster. This speeds up processing and shortens the response time to clients.

Network Load Balancing enables all cluster hosts on a single subnet to concurrently detect incoming network traffic for the cluster IP addresses. On each cluster host, the Network Load Balancing driver acts as a filter between the cluster adapter driver and the TCP/IP stack in order to distribute the traffic across the hosts. Network Load Balancing employs a fully distributed algorithm to statistically map incoming clients to the cluster hosts based on their IP address and port. No communication between the hosts is necessary for this process to occur.
When inspecting an arriving packet, all hosts simultaneously perform this mapping to quickly determine which host should handle the packet. The mapping remains invariant unless the number of cluster hosts changes. The Network Load Balancing filtering algorithm is much more efficient in its packet handling than centralized load-balancing applications, which must modify and retransmit packets.
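The fully distributed mapping described above can be illustrated with a small sketch. The hash function below is purely illustrative (the actual hashing used by the NLB driver is internal and not described here); what matters is that every host evaluates the same deterministic function on each packet, so they all agree on the owner without exchanging any messages, and the result changes only when the host count changes:

```python
import hashlib

def pick_host(client_ip: str, client_port: int, num_hosts: int,
              affinity_single: bool = False) -> int:
    """Illustrative stand-in for NLB's distributed filtering algorithm.

    With 'single' client affinity only the client IP is hashed, so
    every connection from one client maps to the same host.
    """
    key = client_ip if affinity_single else f"{client_ip}:{client_port}"
    digest = hashlib.md5(key.encode()).digest()
    bucket = int.from_bytes(digest[:4], "big")
    return bucket % num_hosts  # invariant until num_hosts changes

# Each host accepts only the packets whose mapping equals its own index:
owner = pick_host("203.0.113.7", 49152, num_hosts=4)
```

Because the mapping is statistical, individual clients spread roughly evenly across hosts, without any per-packet coordination traffic between cluster members.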

Distribution of cluster traffic


Network Load Balancing controls the distribution of TCP and UDP traffic from the Internet clients to selected hosts within a cluster as follows:

After Network Load Balancing has been configured, incoming client requests to the cluster IP addresses are received by all hosts within the cluster. Network Load Balancing filters incoming datagrams to specified TCP and UDP ports before these datagrams reach the TCP/IP protocol software. Network Load Balancing manages the TCP and UDP protocols within TCP/IP, controlling their actions on a per-port basis. In multicast mode, Network Load Balancing can limit switch flooding by providing Internet Group Management Protocol (IGMP) support.

Network Load Balancing does not control any incoming IP traffic other than TCP and UDP traffic for specified ports and IGMP traffic in multicast mode. It does not filter other IP protocols (for example, ICMP or ARP), except as described below. Be aware that you should expect to see duplicate responses from certain point-to-point TCP/IP applications (such as ping) when the cluster IP address is used. If required, these applications can use the dedicated IP address for each host to avoid this behavior.

Convergence
To coordinate their actions, Network Load Balancing hosts periodically exchange heartbeats within the cluster (for more information, see Internet Group Management Protocol (IGMP)). IP multicasting allows the hosts to monitor the status of the cluster. When the state of the cluster changes (such as when hosts fail, leave, or join the cluster), Network Load Balancing invokes a process known as convergence, in which the hosts exchange a limited number of messages to determine a new, consistent state of the cluster and to designate the host with the highest host priority as the new default host. When all cluster hosts have reached consensus on the correct new state of the cluster, they record the completion of convergence in the Windows event log. This process typically takes less than 10 seconds to complete.

During convergence, the remaining hosts continue to handle incoming network traffic, and client requests to working hosts are unaffected. At the completion of convergence, the traffic destined for a failed host is redistributed to the remaining hosts, and load-balanced traffic is repartitioned among the remaining hosts to achieve the best possible new load balance for specific TCP or UDP ports. If a host is added to the cluster, convergence allows this host to receive its share of the load-balanced traffic. Expansion of the cluster does not affect ongoing cluster operations and is achieved transparently to both Internet clients and to server applications. However, it might affect client sessions that span multiple TCP connections when client affinity is selected, because clients might be remapped to different cluster hosts between connections. For more information on affinity, see Network Load Balancing and stateful connections.

Network Load Balancing assumes that a host is functioning properly within the cluster as long as it exchanges heartbeats with other cluster hosts.
If other hosts do not receive a heartbeat from a member for several heartbeat exchange periods, they initiate convergence to redistribute the load that would have been handled by the failed host. You can control both the message exchange period and the number of missed messages required to initiate convergence. The default values are set to 1,000 milliseconds (1 second) and 5 missed message exchange periods, respectively. Because these parameters are not usually modified, they are not configurable through the Network Load Balancing Properties dialog box. They can be adjusted manually in the registry as necessary. The procedures for this are described in Adjust convergence parameters.
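The defaults above imply a concrete failure-detection window: a silent host is suspected after 5 missed periods of 1,000 milliseconds each, i.e. about five seconds, after which convergence begins. A minimal sketch of that timing logic (the function and parameter names here are illustrative, not the actual registry value names):

```python
def detection_time_ms(period_ms: int = 1000, tolerated_misses: int = 5) -> int:
    """Worst-case wait before surviving hosts begin convergence: the
    failed host must miss `tolerated_misses` consecutive heartbeat
    exchange periods of `period_ms` each."""
    return period_ms * tolerated_misses

def host_suspected(now_ms: int, last_heartbeat_ms: int,
                   period_ms: int = 1000, tolerated_misses: int = 5) -> bool:
    """True once a host's silence exceeds the tolerated window, at
    which point the remaining hosts would initiate convergence."""
    return now_ms - last_heartbeat_ms > detection_time_ms(period_ms, tolerated_misses)

# With the defaults, detection takes 5 x 1,000 ms = 5,000 ms, which is
# consistent with convergence typically completing in under 10 seconds.
```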

Understanding virtual clusters

It is becoming increasingly common to host multiple applications or Web sites on a single Network Load Balancing (NLB) cluster. When doing so, you may require independent load balancing policies, defined through port rules, applied to each of these applications or sites. In the Windows Server 2003 family of products, you can configure multiple Network Load Balancing clusters on the same network adapter and then apply specific port rules to each cluster's virtual IP addresses. These are referred to as "virtual clusters." You can use virtual clusters to block network traffic to a specific host for a specific application, without affecting traffic for other applications on that host. You can also use virtual clusters to limit each application, Web site, or virtual IP address to a specific subset of computers within your primary cluster. See the diagram below for an example of this use of virtual clusters.

This diagram depicts a four-host Network Load Balancing cluster. Through the use of virtual clusters and IP address-specific load weights, Network Load Balancing directs network traffic as follows:

Users accessing Web site A (IP address nnn.nnn.nnn.1) are directed to any of the four hosts.
Users accessing Web site B (IP address nnn.nnn.nnn.2), which is a virtual cluster, are directed to hosts 1 and 2.
Users accessing Web site C (IP address nnn.nnn.nnn.3), which is a virtual cluster, are directed to hosts 3 and 4.

Virtual clusters

You create a virtual cluster by configuring multiple virtual IP addresses (where each IP address typically corresponds to different Web sites or applications hosted on the cluster) on a single network adapter and then configuring different port rules for each virtual IP address. In this way, on each host you are able to have IP address-specific:

Port range
Protocols
Affinity
Load weight
Filtering mode

By using a load weight of zero for specific virtual IP addresses, you can also define a virtual cluster that limits applications, Web sites, or virtual IP addresses to a specific subset of hosts within your Network Load Balancing (NLB) cluster. See the diagram below for an example of this use of virtual clusters.

This diagram depicts a four-host Network Load Balancing cluster. Through the use of virtual clusters defined by IP address-specific load weights, Network Load Balancing directs network traffic as follows:

Users accessing Web site A (IP address nnn.nnn.nnn.1) are directed to any of the four hosts.
Users accessing Web site B (IP address nnn.nnn.nnn.2), which is a virtual cluster, are directed to hosts 1 and 2.
Users accessing Web site C (IP address nnn.nnn.nnn.3), which is a virtual cluster, are directed to hosts 3 and 4.

This is accomplished by setting the load weights as follows:

Load weight = 0 on hosts 3 and 4 for IP address nnn.nnn.nnn.2
Load weight = 0 on hosts 1 and 2 for IP address nnn.nnn.nnn.3
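The host-exclusion effect of a zero load weight can be sketched as follows. This is a simplified model of the example above (real NLB port rules also carry port ranges, protocols, affinity, and a filtering mode); a weight of 0 simply removes a host from the candidate set for that virtual IP address:

```python
# Per-virtual-IP, per-host load weights mirroring the example above.
load_weights = {
    "nnn.nnn.nnn.1": {1: 25, 2: 25, 3: 25, 4: 25},  # Web site A: all four hosts
    "nnn.nnn.nnn.2": {1: 50, 2: 50, 3: 0, 4: 0},    # Web site B: hosts 1 and 2
    "nnn.nnn.nnn.3": {1: 0, 2: 0, 3: 50, 4: 50},    # Web site C: hosts 3 and 4
}

def eligible_hosts(vip: str) -> list[int]:
    """Hosts that can receive traffic for this virtual IP address:
    any host whose load weight for the VIP is greater than zero."""
    return [host for host, weight in load_weights[vip].items() if weight > 0]
```

Nonzero weights can also be unequal, in which case they bias the share of traffic each eligible host receives rather than excluding hosts outright.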

Be aware that to use virtual clusters, all hosts in the cluster must be running one of the Windows Server 2003 family of products. For more information on creating port rules, see Create a new port rule.

Network Load Balancing key features

Scalability

Load balances requests for individual TCP/IP services across the cluster.
Supports up to 32 computers in a single cluster.
Load balances multiple server requests, from either the same client or from several clients, across multiple hosts in the cluster.
Fully pipelined implementation ensures high performance and low overhead.

Note

During packet reception, the Network Load Balancing fully pipelined implementation overlaps the delivery of incoming packets to TCP/IP with the reception of other packets by the NDIS driver. This increases the overall processing speed and reduces latency, because TCP/IP can process a packet while the NDIS driver receives a subsequent packet. It also reduces the overhead required for TCP/IP and the NDIS driver to coordinate their actions, and in many cases it eliminates an extra memory copy of packet data. During packet sending, Network Load Balancing also enhances throughput and reduces latency and overhead by increasing the number of packets that TCP/IP can send with one NDIS call. To achieve these performance enhancements, Network Load Balancing allocates and manages a pool of packet buffers and descriptors that it uses to overlap the actions of TCP/IP and the NDIS driver.

High Availability

Automatically detects and recovers from a failed or offline computer.
Automatically balances the network load when hosts are added or removed.
Recovers and redistributes the workload within 10 seconds.

Manageability

You can manage and configure multiple Network Load Balancing clusters and the cluster hosts from a single computer using Network Load Balancing Manager.
You can specify the load balancing behavior for a single IP port or group of ports using port management rules.
If you use the same set of load-balanced servers for multiple applications or Web sites, by using virtual clusters you can define different port rules for each Web site based on the destination virtual IP address.

Using optional single-host rules, you can direct all client requests to a single host, in effect using Network Load Balancing to route client requests to a particular host running specific applications.
You can block undesired network access to certain IP ports.
You can enable Internet Group Management Protocol (IGMP) support on the cluster hosts to control switch flooding when operating in multicast mode.

Network Load Balancing logs all actions and cluster changes in the Windows event log.
Using shell commands or scripts, you can remotely start, stop, and control Network Load Balancing actions from any networked computer that is running Windows.

Ease of use

Network Load Balancing is installed as a standard Windows networking driver component.
Network Load Balancing requires no hardware changes to enable and run.
Network Load Balancing Manager allows you to create new Network Load Balancing clusters and configure and manage clusters and all of the cluster's hosts from a single remote or local computer. Ideally you should use a second network adapter when managing the cluster from a local computer.

Network Load Balancing lets clients access the cluster with a single logical Internet name and virtual IP address (also known as the cluster IP address) while retaining individual names for each computer.
Network Load Balancing allows multiple virtual IP addresses for multihomed servers (although in the case of virtual clusters, the servers do not need to be multihomed in order to have multiple virtual IP addresses).
Network Load Balancing can be bound to multiple network adapters, allowing you to configure multiple independent clusters on each host. Support for multiple network adapters is different from virtual clusters in that virtual clusters allow you to configure multiple clusters on a single network adapter.

You do not have to modify server applications to run in a Network Load Balancing cluster.
If a cluster host fails and is subsequently brought back online, Network Load Balancing can be configured to automatically add that host to the cluster. The added host will then be able to start handling new server requests from clients.

You can take computers offline for preventive maintenance without disturbing cluster operations on the other hosts.

Network Load Balancing system requirements



Updated: January 21, 2005
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2



Network Load Balancing is designed to work as a standard networking device driver in the Microsoft Windows Server 2003 family of products. Because Network Load Balancing provides clustering support for TCP/IP-based server applications, TCP/IP must be installed in order to take advantage of Network Load Balancing functionality.

The current version of Network Load Balancing is designed for use with Ethernet network adapters. It is not compatible with asynchronous transfer mode (ATM), ATM local area network (LAN) emulation, or token ring. It has been tested on 10 megabits per second (Mbps), 100 Mbps, and gigabit Ethernet networks with a wide variety of network adapters, including teaming network adapters listed as compatible with the Windows Server 2003 family of products. For information regarding compatible hardware, click the appropriate link in Support resources.

On x86-based computers, Network Load Balancing uses between 750 kilobytes (KB) and 27 MB of RAM per network adapter during operation, using the default parameters and depending on the network load. The parameters can be modified to allow up to 84 MB of memory to be used. Typical memory usage ranges between 750 KB and 2 MB. On x64-based computers, Network Load Balancing uses between 825 KB and 32.3 MB of RAM per network adapter during operation, using the default parameters and depending on the network load. The parameters can be modified to allow up to 102 MB of memory to be used. Typical memory usage for x64-based computers ranges between 825 KB and 2.5 MB.

For optimum cluster performance, you should install a second network adapter on each Network Load Balancing host. In this configuration, the first network adapter handles the network traffic addressed to the server as part of the cluster, while the second network adapter is used for intra-host communication. It is important to understand that it is possible to use Network Load Balancing with only a single network adapter.
For more details, see Single network adapter limitations.

Note

Network Load Balancing can operate in two modes: unicast and multicast. Unicast support is enabled by default. To enable multicast support, see Enable multicast support.

Using a router
If Network Load Balancing clients are accessing a cluster through a router when the cluster has been configured to operate in multicast mode, be sure that the router meets the following requirements:

Accepts an Address Resolution Protocol (ARP) reply that has one media access control (MAC) address in the payload of the ARP structure, but appears to arrive from a station with another MAC address, as determined by the Ethernet header.
Accepts an ARP reply for a unicast IP address with a multicast MAC address in the payload of its ARP structure.

These conditions allow the router to map the cluster IP addresses to the corresponding MAC address. If your router does not meet these requirements, you can also create a static ARP entry in the router. Cisco routers require a static ARP entry because they do not support the resolution of unicast IP addresses to multicast MAC addresses.
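When creating such a static ARP entry, you need the cluster MAC address that NLB derives from the cluster IP address. In (non-IGMP) multicast mode, the convention is to prefix 03-BF to the four octets of the cluster IP address. The helper below is an illustrative sketch of that derivation, not an official tool, so verify the address shown in your own cluster's NLB properties before committing it to router configuration:

```python
def nlb_multicast_mac(cluster_ip: str) -> str:
    """Derive the NLB multicast-mode cluster MAC address:
    03-BF followed by the four octets of the cluster IP address."""
    octets = [int(part) for part in cluster_ip.split(".")]
    if len(octets) != 4 or not all(0 <= octet <= 255 for octet in octets):
        raise ValueError(f"not an IPv4 address: {cluster_ip!r}")
    return "-".join(f"{byte:02X}" for byte in [0x03, 0xBF, *octets])

# The router's static ARP entry would then map the cluster IP address
# to the MAC address this function reports.
```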

Server clusters
A server cluster is a group of independent computer systems, known as nodes, running Microsoft Windows Server 2003, Enterprise Edition or Microsoft Windows Server 2003, Datacenter Edition, and working together as a single system to ensure that critical applications and resources remain available to clients. The nodes in a cluster remain in constant communication through the exchange of periodic messages, called heartbeats. If one of the nodes becomes unavailable as a result of failure or maintenance, another node immediately begins providing service (a process known as failover). Server clusters can combine up to eight nodes. In addition, a cluster cannot be made up of nodes running both Windows Server 2003, Enterprise Edition and Windows Server 2003, Datacenter Edition since the different operating systems may be running incompatible versions of the Cluster service. In server clusters with more than two nodes, all nodes must run Windows Server 2003, Enterprise Edition or Windows Server 2003, Datacenter Edition, but not both. However, a server cluster can be operated with some nodes running the Microsoft Windows 2000 operating system and others running Windows Server 2003, Enterprise Edition or Windows Server 2003, Datacenter Edition.

Server clusters can be set up as one of three different cluster model configurations:

Single node server clusters can be configured with, or without, external cluster storage devices. For single node clusters without an external cluster storage device, the local disk is configured as the cluster storage device.
Single quorum device server clusters have two or more nodes and are configured so that every node is attached to one or more cluster storage devices. The cluster configuration data is stored on a single cluster storage device.
Majority node set server clusters have two or more nodes, but the nodes may or may not be attached to one or more cluster storage devices. The cluster configuration data is stored on multiple disks across the cluster, and the Cluster service makes sure that this data is kept consistent across the different disks.

It is recommended that you understand the advantages and limitations of the different cluster models before you configure your server cluster. For example, a majority node set cluster can tolerate fewer simultaneous node failures than an equivalent single quorum device cluster. For more information about the three cluster models, see Choosing a Cluster Model. For cluster storage, you can use parallel SCSI, Fibre Channel, Serial Attached SCSI (SAS), or iSCSI. For details about which types of storage can be used with a specific Windows Server 2003 operating system, see Server clusters overview. Server clusters enable users and administrators to access and manage the nodes as a single system rather than as separate computers.
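The difference in failure tolerance mentioned above can be made concrete. A majority node set cluster keeps running only while a strict majority of its nodes remain up, whereas a single quorum device cluster can in principle run down to a single surviving node that owns the shared quorum disk. A sketch under those assumptions (simplified; it ignores failures of the storage itself):

```python
def mns_tolerated_failures(nodes: int) -> int:
    """Majority node set: the cluster survives only while a strict
    majority of nodes remain, so it tolerates floor((n - 1) / 2)
    simultaneous node failures."""
    return (nodes - 1) // 2

def single_quorum_tolerated_failures(nodes: int) -> int:
    """Single quorum device: any one surviving node that can own the
    shared quorum disk keeps the cluster running, assuming the
    cluster storage device itself stays available."""
    return nodes - 1

# A four-node cluster: a majority node set tolerates 1 node failure,
# while a single quorum device cluster tolerates 3.
```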

Before creating a server cluster, see Checklists: Creating Server Clusters. Before installing resources for a server cluster, see Checklists: Installing Server Cluster Resources. To find features that have been changed in Windows Server 2003, Enterprise Edition and Windows Server 2003, Datacenter Edition, see New ways to do familiar Server Cluster tasks.

For guidelines on securing server clusters, see Best practices for securing server clusters.
For tips about using server clusters, see Best practices for configuring and operating server clusters.
For help with specific tasks, see Server Cluster How To....
For general background information, see Server Cluster Concepts.
For problem-solving instructions, see Server Cluster Troubleshooting.

Server clusters overview



Updated: April 10, 2006
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2

A server cluster is a group of independent computer systems, known as nodes, working together as a single system to ensure that critical applications and resources remain available to clients. These nodes must be running Microsoft Windows Server 2003, Enterprise Edition or Microsoft Windows Server 2003, Datacenter Edition. Clustering allows users and administrators to access and manage the nodes as a single system rather than as separate computers. For more information about nodes, see Nodes. A server cluster can consist of up to eight nodes and may be configured in one of three ways: as a single node server cluster, as a single quorum device server cluster, or as a majority node set server cluster. For more information about these three server cluster models, see Choosing a Cluster Model.

Every node may be attached to one or more cluster storage devices. For most versions of Windows Server 2003, Enterprise Edition or Windows Server 2003, Datacenter Edition, the choices for cluster storage include iSCSI, Serial Attached SCSI, parallel SCSI, and Fibre Channel. The following table provides details about the storage you can use with each version of the operating system, along with the maximum number of nodes you can have with each storage type:

Parallel SCSI
  Operating system: Windows Server 2003, Enterprise Edition, or Windows Server 2003, Datacenter Edition
  Versions: Windows Server 2003; Windows Server 2003 with Service Pack 1 (SP1); Windows Server 2003 R2
  Platforms: x86 and x64 (not Itanium)
  Maximum nodes: 2

Fibre Channel
  Operating system: Windows Server 2003, Enterprise Edition, or Windows Server 2003, Datacenter Edition
  Versions: Windows Server 2003; Windows Server 2003 with SP1; Windows Server 2003 R2
  Platforms: x86, x64, and Itanium
  Maximum nodes: 8

iSCSI or Serial Attached SCSI (SAS)
  Operating system: Windows Server 2003, Enterprise Edition, or Windows Server 2003, Datacenter Edition
  Versions: Windows Server 2003 with SP1; Windows Server 2003 R2
  Platforms: x86, x64, and Itanium
  Maximum nodes: 8

The following figure shows a four-node cluster that provides high availability for four different types of applications or services:

A server cluster runs several pieces of software that fall into two categories: the software that makes the cluster run (clustering software) and the software that you use to administer the cluster (administrative software). By default, all clustering and administration software files are automatically installed on your computer when you install any operating system in the Microsoft Windows Server 2003 family of products.

Important: Only computers running Windows Server 2003, Enterprise Edition or Windows Server 2003, Datacenter Edition can be cluster nodes.

Clustering software

The clustering software enables the nodes of a cluster to exchange specific messages that trigger the transfer of resource operations at the appropriate times. There are two main pieces of clustering software: the Resource Monitor and the Cluster service. The Resource Monitor facilitates communication between the Cluster service and application resources. The Cluster service runs on each node in the cluster and controls cluster activity, communication between cluster nodes, and failure operations. When a node or application in the cluster fails, the Cluster service responds by restarting the failed application or dispersing the work from the failed system to the remaining nodes in the cluster. For more information about this process, see Failover and failback.

Administrative software
Administrators use cluster management applications to configure, control, and monitor clusters. The Windows Server 2003 family provides Cluster Administrator for this purpose. Any computer running Microsoft Windows NT version 4.0 Service Pack 3 or later, regardless of whether it is a cluster node, can install Cluster Administrator. By default, a copy of Cluster Administrator is automatically installed on your computer when you install Microsoft Windows Server 2003, Standard Edition; Windows Server 2003, Enterprise Edition; or Windows Server 2003, Datacenter Edition. For information about remote administration, see Installing Cluster Administrator on a remote computer.

You can also create, configure, and administer clusters using the cluster command. For more information, see Administering Server Clusters. You can use or create custom administration tools developed using the Cluster automation interfaces. For more information about the Cluster automation interfaces, see the Microsoft Platform Software Development Kit (SDK), available separately.

Administrators organize cluster resources into functional units, called groups, and assign these groups to individual nodes. If a node fails, the Cluster service transfers the groups that were being hosted by the node to other nodes in the cluster. This transfer process is called failover. The reverse process, failback, occurs when the failed node becomes active again, and the groups that were failed over to other nodes are transferred back to the original node. For more information about resources, see Server Cluster Resources. For more information about groups, see Server Cluster groups.
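The group failover and failback behavior described above can be sketched in a few lines. This is a toy model with invented node and group names; the real Cluster service also handles preferred-owner lists, resource dependencies, and failover policies that this sketch ignores:

```python
class Cluster:
    """Toy sketch of cluster groups failing over and failing back.
    Node names like "NodeA" are illustrative, not from the text."""

    def __init__(self, assignments):
        self.preferred = dict(assignments)   # group -> preferred owner node
        self.owner = dict(assignments)       # group -> current owner node
        self.down = set()

    def fail_node(self, node):
        """Failover: move every group hosted on the failed node."""
        self.down.add(node)
        survivors = sorted(set(self.preferred.values()) - self.down)
        for group, owner in self.owner.items():
            if owner == node and survivors:
                self.owner[group] = survivors[0]

    def restore_node(self, node):
        """Failback: return groups to their preferred node on recovery."""
        self.down.discard(node)
        for group, pref in self.preferred.items():
            if pref == node:
                self.owner[group] = node

cluster = Cluster({"SQL": "NodeA", "Print": "NodeB"})
cluster.fail_node("NodeA")
print(cluster.owner["SQL"])    # NodeB — the group failed over
cluster.restore_node("NodeA")
print(cluster.owner["SQL"])    # NodeA — the group failed back
```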

Cluster application types


Applications that run in a server cluster fall into one of four categories:

Cluster-unaware applications These types of applications do not interact with the server cluster at all but can still fail over. Failure detection is limited. The Cluster service protects these applications mainly against hardware failures.

Cluster-aware applications These types of applications are characterized by superior failure detection. The Cluster service can protect these applications not only against hardware but also against software failures.

Cluster management applications These types of applications, which include Cluster Administrator and Cluster.exe, allow administrators to manage and configure clusters. For more information, see Server Cluster Components.

Custom resource types Resource types provide customized cluster management and instrumentation for applications, services, and devices. For more information, see Resource types.

To continue with http://technet.microsoft.com/en-us/library/cc759467(v=ws.10).aspx

https://www.google.co.in/?gws_rd=cr&ei=xSrWUv_BHMnZrQfdzICwAQ#q=MCTS+windows +clustering++filetype%3Apdf

http://technet.microsoft.com/en-us/library/cc783705(v=ws.10).aspx

About Kerberos constrained delegation



Microsoft Forefront Threat Management Gateway can publish Web servers and authenticate users to verify their identity before allowing them to access a published Web server. If a published Web server also needs to authenticate a user that sends a request to it, and if the Forefront TMG computer cannot delegate authentication to the published Web server by passing user credentials to the published Web server or impersonating the user, the published Web server will ask the user to provide credentials a second time. Forefront TMG can pass user credentials directly to a published Web server only when these credentials are received using Basic authentication or HTTP forms-based authentication. In particular, credentials supplied in a Secure Sockets Layer (SSL) certificate cannot be passed to a published server.

Forefront TMG provides support for Kerberos constrained delegation (often abbreviated as KCD) to enable published Web servers to authenticate users by Kerberos after Forefront TMG verifies their identity by using a non-Kerberos authentication method. When used in this way, Kerberos constrained delegation eliminates the need to require users to provide credentials twice. For example, because it is unrealistic to perform Kerberos authentication over the Internet, SSL certificates might be used for authenticating users at the Forefront TMG computer. After Forefront TMG verifies the user's identity, it cannot pass the SSL client certificate provided by the user to a published server, but it can impersonate the user and obtain a Kerberos service ticket for authenticating the user (client) to a published Web server.

A Forefront TMG computer serving as a firewall that sits between the Internet and your organization's intranet must authenticate clients that send requests over the Internet to servers in your organization to prevent attacks from anonymous and unauthorized users. Every organization determines which authentication method can ensure that external clients are identified with sufficient confidence and that unauthorized clients cannot gain access to a published internal server. Many large organizations (including Microsoft) are moving toward the use of smart cards, which are actually just secured storage devices for an SSL client certificate, as a means to identify their users instead of relying on passwords. Smart cards enable two-factor authentication based on something that the user has (the smart card) and something that the user knows (the personal identification number (PIN) for the smart card), providing a more secure level of authentication than passwords.

Internal servers often need to authenticate users who send requests to them both from computers on the Internet and from computers on the intranet within the organization. For example, a mail server must verify the identity of users, including internal users, before allowing them access to the appropriate personal mailboxes. The authentication performed by an edge firewall clearly does not fully meet the needs of these servers. If Forefront TMG can forward a user's credentials to an internal server, there is no need to prompt the user a second time for appropriate credentials. However, when SSL client certificates are used, Forefront TMG cannot delegate a user's credentials to an internal mail server, such as a Microsoft Exchange server, because Forefront TMG never receives a password that can be passed on to that server. There is also no way to forward an SSL client certificate to another server. This is an intended security feature of the SSL protocol.

Kerberos constrained delegation provides a way for Forefront TMG to impersonate a user sending a Web request and to authenticate to specific services running on specific, published Web servers, including Exchange Outlook Web Access servers, when Forefront TMG knows only the user name after it verifies the identity of the user. For more information about the authentication methods supported by Forefront TMG, see Overview of client authentication. This topic provides general background information that could help you implement Kerberos constrained delegation in diverse Web publishing scenarios.
How Kerberos constrained delegation works

A thorough description of the Kerberos authentication protocol, including Kerberos constrained delegation, is given in "How the Kerberos Version 5 Authentication Protocol Works" at the Microsoft TechNet Web site. This section summarizes the details of the Kerberos authentication protocol related to Kerberos constrained delegation as it is used by Forefront TMG.

The Kerberos authentication protocol is used to confirm the identity of users that are attempting to access resources on a network. Kerberos authentication uses tickets that are encrypted and decrypted by secret keys and do not contain user passwords. These tickets are requested and delivered in Kerberos messages. Two types of tickets are used: ticket-granting tickets (TGTs) and service tickets. A Kerberos client (a user or a service) sends requests for tickets to the Key Distribution Center (KDC) in the domain. Requests for TGTs are sent to the authentication service of the KDC, and requests for service tickets are sent to the ticket-granting service of the KDC.

When a client sends a request to the authentication service with credentials that can be validated, the KDC returns a TGT to the client. A TGT is a special service ticket for the authentication service and enables the authentication service to pass the client's credentials to the ticket-granting service in requests for service tickets for a specific service on a specific host computer. A TGT is issued for a specific client and can be reused by the client in requests for additional service tickets for the same service. A client must obtain a new TGT from the authentication service before it can obtain service tickets for another service. Each service ticket issued by the ticket-granting service is for a specific service on a specific host computer.

The Kerberos protocol includes a mechanism called delegation of authentication. When this mechanism is used, the client (the requesting service) delegates authentication to a second service by informing the KDC that the second service is authorized to act on behalf of a specified Kerberos security principal, such as a user that has an Active Directory directory service account. The second service can then delegate authentication to a third service. This is accomplished using a proxy TGT or a forwarded TGT. When a proxy TGT is used, the requesting service obtains a TGT for the third service in the security context of a specific user and then passes the TGT to the second service, which uses it to request service tickets. In this case, the requesting service must know the name of the third service. When a forwarded TGT is used, the requesting service obtains a TGT that is marked forwardable for the second service in the security context of the user. The second service can use this TGT to request tickets for other services as needed. Only forwardable TGTs can be used for constrained delegation. A TGT can be marked forwardable only if the account under which the requesting service is running has the ADS_UF_TRUSTED_TO_AUTHENTICATE_FOR_DELEGATION control flag set.
For Forefront TMG, this is the Active Directory computer account of the Forefront TMG computer. This flag is automatically set when the account under which the requesting service is running is configured as trusted for Kerberos constrained delegation in Active Directory. Kerberos constrained delegation is a feature that was introduced in Microsoft Windows Server 2003 and is provided by two extensions included in the implementation of the Kerberos V5 authentication protocol in Windows Server 2008:

Protocol transition. The protocol transition extension allows a service that uses Kerberos to obtain a Kerberos service ticket to itself on behalf of a Kerberos security principal (a user or a computer) without requiring the principal to initially authenticate to the KDC. Instead, a user sending a request to a service with credentials, such as an SSL client certificate, that are not acceptable for Kerberos authentication can be authenticated by any appropriate Windows authentication method. When authentication is completed, Windows creates a user token. Then, if the service has the necessary impersonation privileges in Windows, when the service uses this token to impersonate the user and request a Kerberos service ticket to another service, the service ticket issued, which is to the requesting service, is mapped to the user token. The service may use the service ticket obtained through protocol transition to obtain service tickets to other services and thereby delegate the credentials if the account under which the service is running is configured correctly to use the Kerberos constrained delegation extension.

Constrained delegation. The constrained delegation extension allows a service to obtain service tickets (under the delegated user's identity) to a restricted list of other services running on specific servers on the network after it has been presented with a service ticket, which may be a service ticket obtained through protocol transition.

Constrained delegation provides a way for domain administrators to limit the network resources that a service trusted for delegation can access to a restricted list of network resources. This is accomplished by configuring the account under which the service is running to be trusted for delegation to a specific instance of a service running on a specific computer or to a set of specific instances of services running on specific computers. Each instance of a service running on a computer is specified by a unique identifier, called a service principal name (SPN). The syntax of an SPN is service_class/host_name:port:

The service class is a string that identifies the service. Windows has built-in service classes for many services, but the service class can also be defined by the user. For example, the built-in Windows service class for Internet Information Services (IIS) is http.

The host name is the name of the computer on which the instance of the service is running. This can be a fully qualified domain name (FQDN) or a NetBIOS name, but not an IP address. The host name is optional, but it must be included in the SPNs used by Forefront TMG.

The port number is optional. It is used to differentiate between multiple instances of the same service on a single host computer. It can be omitted if the service uses the default port for its service class.
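As a minimal illustration of the service_class/host_name:port syntax, the following Python sketch builds and parses SPN strings. The host names are hypothetical and the code is not part of any Microsoft tool; it only mirrors the string format described above:

```python
def build_spn(service_class, host, port=None):
    """Compose an SPN in the service_class/host_name:port form;
    the port is optional."""
    spn = f"{service_class}/{host}"
    if port is not None:
        spn += f":{port}"
    return spn

def parse_spn(spn):
    """Split an SPN back into (service_class, host, port or None)."""
    service_class, _, rest = spn.partition("/")
    host, _, port = rest.partition(":")
    return service_class, host, int(port) if port else None

print(build_spn("http", "web01.contoso.com"))          # http/web01.contoso.com
print(parse_spn("http/web01.contoso.com:8080"))        # ('http', 'web01.contoso.com', 8080)
```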

Each instance of a service that uses Kerberos authentication needs to have an SPN defined for it so that clients can identify that instance of the service on the network. The SPN is registered in the Active Directory Service-Principal-Name attribute of the Windows account under which the instance of the service is running. This way, the SPN is associated with the account under which the instance of the service specified by the SPN is running. When a service needs to authenticate to another service running on a specific computer, it uses that service's SPN to differentiate it from other services running on that computer. Registration of an SPN in Active Directory maps the SPN to the Windows account under which the service specified in the SPN is running. Instances of services can automatically register their SPNs at startup.

Administrators can use the Setspn.exe tool for Windows Server 2003 to manually register SPNs, as well as to read, modify, and delete the SPNs registered in a Windows account. The Setspn.exe tool can be especially useful for verifying that a specific SPN is registered in a specific Active Directory account. For information about obtaining and installing the Setspn.exe tool, see the Microsoft Knowledge Base article 892777, "Windows Server 2003 Service Pack 1 Support Tools."

To enforce constrained delegation, Active Directory maintains a list of the SPNs of the instances of services to which a service running under a specific account is allowed to delegate, that is, to obtain service tickets that can be used for constrained delegation. This list of SPNs is stored in the new Active Directory ms-DS-Allowed-to-Delegate-to attribute of computer and user accounts on computers that are running Windows Server 2008. When a Windows Server 2008 KDC processes a service ticket request by using the constrained delegation extension, the KDC verifies that the SPN of the target service is included in the list of SPNs in the ms-DS-Allowed-to-Delegate-to attribute.

Computers that are running Windows Server 2008 support two modes of delegation: Kerberos unconstrained delegation and Kerberos constrained delegation. Kerberos unconstrained delegation is supported if a user initially provides credentials for obtaining a TGT that can be forwarded to any service that is trusted for delegation. This behavior is the same as the Microsoft Windows 2000 Server behavior for Kerberos implementations.

Constrained delegation can be used by a service if the service can obtain a Kerberos service ticket to itself on behalf of the user whose security context is to be delegated. With Kerberos constrained delegation, it does not matter whether the user obtained the service ticket directly by authenticating through Kerberos or whether the service obtained the service ticket on behalf of the user through the protocol transition extension. For a service to use protocol transition in conjunction with Kerberos constrained delegation and obtain a Kerberos service ticket to a specific service on behalf of a user that was authenticated by a non-Kerberos authentication method, the account under which the service is running must be configured in Active Directory to allow Kerberos constrained delegation to any authentication protocol. In addition, the impersonated user account must not be marked as a sensitive account that cannot be delegated.
Note: Forefront TMG supports Kerberos constrained delegation only within the boundary of a domain.

How Forefront TMG uses Kerberos constrained delegation

Without Kerberos constrained delegation, Forefront TMG can delegate credentials only when client credentials are received using Basic authentication or HTTP forms-based authentication. Credentials supplied in an SSL certificate cannot be delegated. With Kerberos constrained delegation, Forefront TMG can delegate client credentials that are supplied for the following types of authentication:

Basic authentication
Digest authentication
Integrated authentication
SSL client certificate authentication
Forms-based authentication with user name/password credentials
Forms-based authentication with user passcode

Note: Integrated authentication can use the Kerberos V5 authentication protocol, the NTLM authentication protocol, or a challenge/response authentication protocol.

After verifying the identity of a user sending a Web request using a non-Kerberos authentication protocol, Forefront TMG can use Kerberos protocol transition to switch to the Kerberos protocol for authentication on behalf of the user and then send a Kerberos service ticket instead of the user's credentials to a published Web server that accepts Kerberos for authentication. This concept is implemented in the following steps:
1. A user sending a Web request is authenticated by Forefront TMG through a non-Kerberos authentication method with the Windows (Active Directory) validation method.
2. When the authentication is completed, a Windows user token is created for the authenticated user. Because the user was not authenticated by Kerberos, the user token is not associated with a Kerberos service ticket. (When a user is authenticated by the Kerberos protocol, the user token is indirectly linked to a Kerberos service ticket for the user.)
3. Forefront TMG uses the user token to impersonate the user and request a Kerberos service ticket for a specific service on the published Web server on behalf of the user.
4. The Windows Server 2008 operating system on the Forefront TMG computer detects that there is no Kerberos service ticket in the user token and automatically initiates protocol transition by requesting a service ticket to Forefront TMG for the impersonated user.
5. The service ticket issued through protocol transition is mapped to the user token, and Forefront TMG uses it to request a service ticket to the published service. When Kerberos constrained delegation is configured properly, the published Web server accepts this Kerberos service ticket instead of user name/password credentials, and the user is authenticated.
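The steps above can be traced in a short sketch. Everything here is illustrative: the SPNs (http/tmg.example.local, http/owa.example.local) and the kdc_issue callback are invented stand-ins for the KDC exchange, not a real Kerberos implementation:

```python
from dataclasses import dataclass, field

@dataclass
class UserToken:
    """Toy Windows user token; an empty ticket list models a token
    with no associated Kerberos service ticket."""
    user: str
    service_tickets: list = field(default_factory=list)

def kcd_flow(user, kdc_issue):
    """Trace the five-step flow; kdc_issue(user, spn) stands in for
    the KDC issuing a service ticket."""
    # Step 1: the user is authenticated by a non-Kerberos method
    # (for example an SSL client certificate).
    # Step 2: a user token is created; it carries no Kerberos ticket.
    token = UserToken(user)
    # Steps 3-4: TMG impersonates the user; the OS sees no ticket in
    # the token and uses protocol transition to request a service
    # ticket *to TMG itself* for the impersonated user.
    token.service_tickets.append(kdc_issue(user, "http/tmg.example.local"))
    # Step 5: with that ticket, TMG requests a ticket to the published
    # service and presents it to the Web server in place of credentials.
    token.service_tickets.append(kdc_issue(user, "http/owa.example.local"))
    return token
```

A fake KDC callback such as lambda user, spn: (user, spn) is enough to exercise the flow and see that two tickets are obtained: one to the gateway itself (protocol transition), then one to the published service (constrained delegation).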

The Kerberos service ticket to the published service can be used by the Web server to obtain additional service tickets to other services on behalf of the same user. For example, if an Exchange front-end server that is published by Forefront TMG is configured in Active Directory to be trusted for delegation to an Exchange back-end server that accepts Kerberos authentication, the Exchange front-end server can obtain service tickets to the Exchange back-end server on behalf of users that have been authenticated by the Forefront TMG computer.

Additional technical details


Forefront TMG delegates authentication by requesting a forwardable TGT for a specific target service running on a specific host computer in the security context of an Active Directory user that has been authenticated by Forefront TMG. The KDC then creates a forwardable TGT for the computer that hosts the target service in the user's name and sends it back to Forefront TMG. Forefront TMG then forwards this TGT to the computer that hosts the target service, which can use it to request service tickets for the specified target service and to request new tickets for other services. The KDC issues forwardable TGTs only if the account under which the requesting service is running has the ADS_UF_TRUSTED_TO_AUTHENTICATE_FOR_DELEGATION control flag set in Active Directory. This flag is automatically set when the account under which the requesting service is running is configured as trusted for delegation in Active Directory. For Forefront TMG, this account is the computer account of the Forefront TMG computer that requests forwardable TGTs.
Note: The access of a service ticket to a target service is limited to the access granted to the user on behalf of whom the service ticket was issued.

Credentials are delegated to the published server for each request when Kerberos constrained delegation is used. If authentication fails, Forefront TMG provides the server's failure notice to the client. If the server requires a different type of credentials, a Forefront TMG alert is triggered.
Setting up Kerberos constrained delegation

The following prerequisites must be met to use Kerberos constrained delegation for authenticating a user to a Web server published by Forefront TMG:

The Forefront TMG computer, the published Web server or member of a published server farm, the domain controller that issues the Kerberos service tickets, and all other computers, such as back-end servers to which the Kerberos service tickets are passed, must be members of the same Active Directory domain. Forefront TMG does not support using cross-domain or cross-forest trusts to include additional Active Directory domains in a Kerberos constrained delegation scenario. This domain must be set to the Windows Server 2003 functional level or the Windows Server 2008 functional level.

Note: By default, a Windows Server 2003 domain is set to the Windows 2000 functional level.

The user sending a Web request must have an Active Directory user account in this domain or another domain in the local forest. The possibility of including users from other forests is beyond the scope of this documentation.

The SPN for the target service on the Web server must be registered in the Windows account under which the service runs. Services can automatically register their SPNs, or administrators can use the Setspn.exe tool for Windows Server 2003 to manually register SPNs.

The computer account of the Forefront TMG computer must be configured in Active Directory as trusted for Kerberos constrained delegation, constrained to the SPN that specifies the target service on the published Web server. This is accomplished by selecting the Trust this computer for delegation to specified services only option and specifying the SPN of the target service on the published Web server as a service to which this account can present delegated credentials in the properties of the computer account in Active Directory Users and Computers.

Note: If a Forefront TMG computer is compromised when Kerberos constrained delegation is enabled, an attacker can impersonate any domain user and request service tickets to any service that is specified in Active Directory as a service to which delegated credentials can be presented. Therefore, SPNs should be defined only for servers that are published by Forefront TMG.

To enable protocol transition, the computer account of the Forefront TMG computer must also be configured in Active Directory to allow Kerberos constrained delegation to any authentication protocol. This is accomplished by selecting the Use any authentication protocol option in the properties of the computer account in Active Directory Users and Computers.

The Web server must be configured to support Integrated authentication, which includes Kerberos authentication, on the virtual directory to which the requests are sent.

A Web listener that uses an authentication method with the Windows (Active Directory) validation method to authenticate users, and a Web publishing rule that is configured to use Kerberos constrained delegation for authentication delegation to the published server, must be created.

Authentication using SSL client certificates requires deployment of a public key infrastructure (PKI) for issuing the client certificates and mapping them to the user accounts in Active Directory, installation of the root certificate of the certification authority that issues the SSL client certificates on the Forefront TMG computer, and installation of an SSL server certificate with the name that users use to access the published Web site on the Forefront TMG computer. For more information about deploying these certificates, see "Outlook Web Access Server Publishing in ISA Server 2004: Client Certificates and Forms-based Authentication" at the Microsoft TechNet Web site.

In Forefront TMG, the service principal name (SPN) is used for requesting a Kerberos service ticket when delegation using Kerberos constrained delegation is configured for a Web publishing rule. By default, the SPN specified in Forefront TMG Management on the Authentication Delegation tab in the properties of a Web publishing rule for an individual Web server is set to http/internal_site_name, and the SPN for a server farm is set to http/*. This SPN must match the SPN that is specified on the Delegation tab in the properties of the computer account of the Forefront TMG computer in Active Directory Users and Computers.

In Microsoft Exchange Server 2003, IIS runs under the Network Service account. For published Exchange servers, Forefront TMG uses an SPN consisting of the service class http and the internal site name of the published site for a single Exchange server, or an asterisk for a server farm of Exchange servers.
Note: Kerberos authentication depends on UDP packets, which are commonly fragmented. If your Forefront TMG computer is in a domain and the blocking of IP fragments is enabled, Kerberos authentication will fail. We recommend that you do not enable the blocking of packets containing IP fragments in scenarios where Kerberos authentication is used.

By default, Microsoft Office SharePoint Portal Server 2003 disables Kerberos, so NTLM/Kerberos (Negotiate) and Kerberos constrained delegation will not work with Microsoft Windows SharePoint Services publishing. To enable Kerberos, follow the instructions in the Microsoft Knowledge Base article 832769, "How to configure a Windows SharePoint Services virtual server to use Kerberos authentication and how to switch from Kerberos authentication back to NTLM authentication."

http://en.wikipedia.org/wiki/Network_switch

A network switch (sometimes known as a switching hub) is a computer networking device that is used to connect many devices together on a computer network. A switch is considered more advanced than a hub because a switch will only send a message to the device that needs or requests it, rather than broadcasting the same message out of each of its ports.[1] A switch is a multi-port network bridge that processes and forwards data at the data link layer (layer 2) of the OSI model. Some switches have additional features, including the ability to route packets. These switches are commonly known as layer-3 or multilayer switches. Switches exist for various types of networks including Fibre Channel, Asynchronous Transfer Mode, InfiniBand, Ethernet and others.
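The forwarding behavior that distinguishes a switch from a hub can be sketched as a MAC-learning table. This is an illustrative model only (the port numbers and MAC addresses are made up), not code from any real switch:

```python
class LearningSwitch:
    """Minimal layer-2 MAC-learning switch: frames to a known
    destination go out one port; unknown destinations are flooded
    out every other port, which is what a hub does for all frames."""

    def __init__(self, ports):
        self.ports = set(ports)
        self.mac_table = {}                    # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port      # learn the sender's port
        if dst_mac in self.mac_table:
            return {self.mac_table[dst_mac]}   # forward out a single port
        return self.ports - {in_port}          # flood (hub-like fallback)

sw = LearningSwitch([1, 2, 3])
print(sw.receive(1, "aa:aa", "bb:bb"))   # {2, 3} — destination unknown, flood
print(sw.receive(2, "bb:bb", "aa:aa"))   # {1} — destination learned
```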

Why Virtualize the Network?


Aug 1, 2013
http://www.enterprisenetworkingplanet.com/datacenter/why-virtualize-the-network.html
Editor's Note: Occasionally, Enterprise Networking Planet is proud to run guest posts from authors in the field. Today, Deepak Kumar, founder and CTO of Adaptiva, shares his thoughts on network virtualization.

By Deepak Kumar

In some situations, the hardwired characteristics of a physical network make it a liability rather than the asset it should be. Take the case of an organization that needs to distribute a 20 GB operating system image to thousands of machines in China. Relying on the physical network to transfer gigantic amounts of data over the WAN link would instantly flood it, bringing business traffic to a halt. And that's just one use case.

System administrators need to keep business-critical systems running, secure, and current every day. Suppose a security update needs to be deployed instantly. That may mean delivering massive data across the enterprise, across the globe, and across thousands of sites. This poses significant challenges for system administrators. They don't, and shouldn't, own the network. But they must still manage the flow of huge amounts of data across it and control the directions in which that data flows.

In multi-disciplinary environments such as the ones in common use in most Fortune 500 organizations, ownership of the network cannot be delegated to every large user. This accentuates the conundrum faced by systems administrators and network teams alike, often resulting in friction, business disruption, and sub-optimal network utilization.

What if you could virtualize the network, decoupling the physical routers from the imaginary routes that system administrators need their data to take? With such a system in place, the networking team could continue to own the physical assets and the systems administrators the virtual network, independently controlling and managing the flow of large volumes of systems management data such as images, patches, and software.

One way this could be accomplished is if applications provided a network virtualization layer. Systems administrators could then create imaginary network locations at will and connect them together arbitrarily, completely dissociating the paths followed by content distribution across the virtualized network from the paths specified in the configuration of the physical network itself. This would untangle the network and systems administrator teams, give them ownership of their respective domains, and optimize use of the network. In the use case described above, only one copy of that 20 GB OS image would need to travel across the physical network to China. The network application built upon the network virtualization layer could then make the rest available locally.

Network virtualization provides an elegant solution to the network topology problem. It can, however, also become a significant management challenge for enterprises with very large networks. Creating and maintaining a virtual network topology can demand a tremendous amount of manual work and time on a network with thousands of subnets.
Automating its use through a workflow system integrated into the technology, with the ability to pull information from diverse sources into the virtualized network configuration, would address this challenge. Only a handful of enterprises have adopted such virtualization technologies to date, but the benefits are already clear. When used correctly, network virtualization can save money, time, bandwidth, and administrators' hair.

http://www.sdncentral.com/whats-network-virtualization/

What's Network Virtualization?


Network Virtualization (NV) creates logical, virtual networks that are decoupled from the underlying network hardware to ensure the network can better integrate with and support increasingly virtual environments. Over the past decade, organizations have been adopting virtualization technologies at an accelerated rate to take advantage of the efficiencies and agility of software-based compute and storage resources. While networks have been moving towards greater virtualization, it is only recently, with the true decoupling of the control and forwarding planes, as advocated by software-defined networking (SDN) and network functions virtualization (NFV), that network virtualization has become more of a focus.

What Exactly is Virtualization?


Virtualization is the ability to simulate a hardware platform, such as a server, storage device or network resource, in software. Basically all the functionality is separated from the hardware and simulated as a virtual instance, with the ability to operate just like the traditional, hardware solution would. Of course, somewhere there is host hardware supporting the virtual instances of these resources, but this hardware can be general, off-the-shelf platforms. In addition, a single hardware platform can be used to support multiple virtual devices or machines, which are easy to spin up or down as needed. As a result, a virtualized solution is typically much more portable, scalable and cost-effective than a traditional hardware-based solution.

Applying Virtualization to the Network


When applied to a network, virtualization creates a logical software-based view of the hardware and software networking resources (switches, routers, etc.). The physical networking devices are simply responsible for the forwarding of packets, while the virtual network (software) provides an intelligent abstraction that makes it easy to deploy and manage network services and underlying network resources. As a result, NV can align the network to better support virtualized environments.

Virtual Networks
NV can be used to create virtual networks within a virtualized infrastructure. This enables NV to support the complex requirements of multi-tenancy environments. NV can deliver a virtual network within a virtual environment that is truly separate from other network resources. In these instances, NV can separate traffic into a zone or container to ensure traffic does not mix with other resources or other data transfers.
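The tenant-isolation idea in the paragraph above can be sketched as a tiny model: each virtual network is a zone keyed by a tenant, and delivery is permitted only between endpoints attached to the same virtual network. Tenant and VM names are made up for illustration.

```python
# Toy model of multi-tenant traffic isolation: each virtual network holds a
# set of endpoints, and traffic is deliverable only within one network.

virtual_networks = {
    "tenant-a": {"vm1", "vm2"},
    "tenant-b": {"vm3"},
}

def can_deliver(src, dst):
    # traffic stays inside its tenant's zone; it never crosses tenants
    return any(src in vms and dst in vms for vms in virtual_networks.values())

print(can_deliver("vm1", "vm2"))  # True  (same virtual network)
print(can_deliver("vm1", "vm3"))  # False (different tenants, traffic kept apart)
```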

Additional Resource:
Why Virtualize the Network? http://www.enterprisenetworkingplanet.com/datacenter/whyvirtualize-the-network.html

https://www.google.co.in/?gws_rd=cr&ei=njfWUtaNH4bjrAfzIHICw#q=windows+storage+services+layout

http://blogs.technet.com/b/yungchou/archive/2012/08/31/windows-server-2012-storagevirtualization-explained.aspx

Windows Server 2012 Storage Virtualization Explained

YungChou, Microsoft

11 Apr 2013 1:00 PM

Windows Server 2012 Storage Space subsystem now virtualizes storage by abstracting multiple physical disks into a logical construct with specified capacity. The process is to group selected physical disks into a container, the so-called storage pool, such that the total capacity collectively presented by those associated physical disks can appear and become manageable as a single and seemingly continuous space. Subsequently a storage administrator creates a virtual disk based on a storage pool, configures a storage layout which is essentially a RAID level, and exposes the storage of the virtual disk as a drive letter or a mapped folder in Windows Explorer.

With multiple disks presented collectively as one logical entity, i.e. a storage pool, Windows Server 2012 can act as a RAID controller, configuring a virtual disk based on the storage pool as software RAID. However, the scalability, resiliency, and optimization that the Storage Space subsystem delivers are much more than what a software RAID offers. Therefore Windows Server 2012 presents the Storage Space subsystem as a set of natively supported storage virtualization and optimization capabilities, and not just software RAID per se. I am using the term software RAID here to convey a well-known concept, not as an equivalent set of capabilities to those of the Storage Space subsystem.

Storage Space

This is an abstraction to present specified storage capacity of a group of physical disks as if the capacity is from one logical entity called a storage pool. For instance, by grouping four physical disks each with 500 GB raw space into a storage pool, the Storage Space subsystem enables a system administrator to configure the 2 TB capacity (collectively from four individual physical disks) as one logical, seemingly continuous, storage space without the need to directly manage individual drives. Storage Space shields the physical characteristics and presents selected storage capacity as pools in which a virtual disk can be created with a specified storage layout (i.e. RAID level) and provisioning scheme, and exposed to Windows Explorer as a drive or a mapped folder for consumption. The following schematic illustrates the concept.
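The pooling arithmetic in the four-disk example is worth making concrete. This is only a sketch of the abstraction, not any Storage Spaces API; the disk sizes mirror the example in the text.

```python
# Minimal sketch of the storage-pool idea: individual physical disks are
# grouped into one pool, which presents their combined raw capacity as a
# single manageable space, hiding the individual drives.

physical_disks_gb = [500, 500, 500, 500]   # four 500 GB physical disks

pool_capacity_gb = sum(physical_disks_gb)  # the pool hides disk boundaries
print(pool_capacity_gb)                    # 2000 GB, i.e. the 2 TB pool above
```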

Storage Pool

A storage pool can consist of heterogeneous physical disks. Notice that a physical drive in the context of Windows Server 2012 Storage Space is simply raw storage from a variety of types of drives, including USB, SATA, and SAS drives, as well as an attached VHD/VHDX file, as shown below. With a storage pool, Windows Server 2012 presents the included physical drives as one logical entity. Allocating the capacity of a storage pool means first creating a virtual disk based on the storage pool, followed by creating a volume and mapping it to a drive letter or an empty folder. With the mapping, the volume based on a virtual disk of a storage pool will appear and work just like a conventional hard drive or folder in Windows Explorer.

The process to create a storage pool is straightforward with the UI from Server Manager/File and Storage Services/Volumes/Storage Pools. Overall: first group all intended physical disks into a storage pool, create a virtual disk based on the storage pool, then create a volume based on the virtual disk and map the volume to a drive letter or an empty folder. At this point, the mapped drive letter or folder becomes available in Windows Explorer. By organizing physical disks into a storage pool, simply add disks as needed to expand the physical capacity of the pool. A typical routine to configure a storage pool as software RAID in Server Manager includes:
1. Connect physical disks and attach VHD/VHDX files to a target Windows Server 2012.
2. Go to File and Storage Services/Volumes/Storage Pools and notice that the Primordial storage pool includes all unallocated physical disks.
3. Right-click Primordial, or click the TASKS drop-down list of the STORAGE POOLS pane, to create a storage pool with selected physical disks.
4. Upon creating a storage pool, start the New Virtual Disk wizard in the VIRTUAL DISKS pane and select the storage pool created in step 3. The wizard will later present the available storage layouts, where Simple, Mirror, and Parity are in essence software RAID settings of RAID 0, RAID 1, and RAID 5, respectively, provided the number of physical disks is sufficient for the intended RAID configuration. Pick a provisioning scheme (Thin or Fixed) and specify the size to create a virtual disk.
5. Upon creating a virtual disk, right-click the virtual disk created in step 4 and bring it online as needed, followed by creating a new volume which can be assigned a disk drive letter or mounted on a pre-existing empty folder.
6. Upon creating a volume, the assigned disk drive letter or mounted folder becomes available in Windows Explorer.

In step 4, two storage provisioning schemes are available. As shown below, Thin provisioning of a virtual disk optimizes the utilization of available storage in a storage pool by over-subscribing capacity with just-in-time allocation. In other words, the pool capacity used by a virtual disk with Thin provisioning corresponds only to the size of the files on the virtual disk, not the defined size of the virtual disk. While Thin provisioning offers flexibility and optimization, the other virtual disk provisioning scheme, Fixed, acquires the specified capacity at disk creation time for best performance.
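The difference between the two schemes comes down to when pool capacity is consumed, which a few lines of arithmetic can illustrate. The numbers here are invented (and deliberately over-subscribed, like the 6 TB example later in the post).

```python
# Sketch of thin vs. fixed provisioning: a Fixed virtual disk reserves its
# full defined size up front, while a Thin disk consumes pool capacity only
# for the data actually written, allowing over-subscription of the pool.

pool_gb = 2000             # real capacity of the storage pool
defined_size_gb = 6000     # defined size of the virtual disk
written_gb = 150           # data actually stored on the virtual disk so far

fixed_usage_gb = defined_size_gb   # Fixed: acquired at disk creation time
thin_usage_gb = written_gb         # Thin: grows just-in-time with writes

print(thin_usage_gb)               # 150
print(defined_size_gb > pool_gb)   # True: thin disks may exceed pool capacity
```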

Storage Layout

While creating a virtual disk based on a storage pool from Server Manager/File and Storage Services/Volumes/Storage Pools, there are three levels of software RAID available, as illustrated below. These RAID settings are presented as options of the Windows Server 2012 Storage Layout, including:

Simple - This is a stripe set without parity or mirroring, striping data across multiple disks, similar to RAID 0. Compared with a single disk, this configuration increases throughput and maximizes capacity. There is, however, no redundancy, and it does not protect data from a disk failure.
Mirror - This is a mirror set without striping or parity, duplicating data on two or three disks, similar to RAID 1. It increases reliability with reduced capacity. This configuration requires at least two disks to protect data from a single disk failure, or at least five disks to protect from two simultaneous disk failures.
Parity - This is a striped set with distributed parity, striping data and parity information across multiple disks, similar to RAID 5. It increases reliability with reduced capacity. This configuration requires at least three disks to protect data from a single disk failure, and cannot be used in a failover cluster.
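The capacity trade-offs of the three layouts can be sketched with rough usable-capacity arithmetic. This is a simplification assuming n identical disks and a two-way mirror; real Storage Spaces allocation is more nuanced.

```python
# Rough usable-capacity arithmetic for the three storage layouts, assuming
# n identical disks of disk_gb each (illustrative simplification only).

def usable_gb(layout, n_disks, disk_gb):
    if layout == "Simple":   # striping, no redundancy (RAID-0-like)
        return n_disks * disk_gb
    if layout == "Mirror":   # two-way mirror: every byte stored twice (RAID-1-like)
        return n_disks * disk_gb // 2
    if layout == "Parity":   # one disk's worth of parity overhead (RAID-5-like)
        return (n_disks - 1) * disk_gb
    raise ValueError(layout)

print(usable_gb("Simple", 4, 500))   # 2000 - maximum capacity, no protection
print(usable_gb("Mirror", 4, 500))   # 1000 - half the raw space
print(usable_gb("Parity", 4, 500))   # 1500 - raw space minus one disk
```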

User Experience

A storage administrator can configure storage virtualization, namely storage pools, virtual disks, etc., of local and remote servers with the Server Manager/Volumes/Storage Pools interface, PowerShell, or even Disk Manager. The following is a screen capture of a configured storage pool with a 6 TB virtual disk at RAID 5 level mounted on the S drive. And in case some wonder: no, I did not have 6 TB of storage capacity; it was done by Thin provisioning to over-subscribe what the physical disks were actually offering.

Call to Action

Download and install Windows Server 2012 from http://aka.ms/8 and learn storage virtualization by practicing. Get the free ebooks from http://aka.ms/free and study Windows Server 2012 and virtualization solutions. Register at Microsoft Virtual Academy at http://aka.ms/va and complete the Windows Server 2012 track. Join the Windows Server 2012 virtual launch at http://aka.ms/go on Sep. 4th.

- See more at: http://blogs.technet.com/b/yungchou/archive/2012/08/31/windows-server-2012storage-virtualization-explained.aspx#sthash.B1XwEBCE.dpuf

http://blogs.technet.com/b/filecab/archive/2012/05/21/introduction-of-iscsi-target-in-windows-server2012.aspx

http://blogs.technet.com/b/storageserver/archive/2009/12/11/six-uses-for-the-microsoft-iscsisoftware-target.aspx

LAN switching
From Wikipedia, the free encyclopedia. This article addresses packet switching in computer networks.

LAN switching is a form of packet switching used in local area networks. Switching technologies are crucial to network design, as they allow traffic to be sent only where it is needed in most cases, using fast, hardware-based methods.


Layer 2 switching
Layer 2 switching uses the media access control address (MAC address) from the host's network interface cards (NICs) to decide where to forward frames. Layer 2 switching is hardware-based,[1] which means switches use application-specific integrated circuits (ASICs) to build and maintain filter tables (also known as MAC address tables or CAM tables). One way to think of a layer 2 switch is as a multiport bridge. Layer 2 switching provides the following:

Hardware-based bridging (MAC)
Wire speed
High speed
Low latency

Layer 2 switching is highly efficient because there is no modification to the data packet, only to the frame encapsulation of the packet, and only when the data packet is passing through dissimilar media (such as from Ethernet to FDDI). Layer 2 switching is used for workgroup connectivity and network segmentation (breaking up collision domains). This allows a flatter network design with more network segments than traditional 10BaseT shared networks. Layer 2 switching has helped develop new components in the network infrastructure.

Server farms - Servers are no longer distributed to physical locations, because virtual LANs can be created to create broadcast domains in a switched internetwork. This means that all servers can be placed in a central location, yet a certain server can still be part of a workgroup in a remote branch, for example.
Intranets - Allows organization-wide client/server communications based on Web technology.

These new technologies allow more data to flow off local subnets and onto a routed network, where a router's performance can become the bottleneck.

Limitations
Layer 2 switches have the same limitations as bridged networks. Bridges are good if a network is designed by the 80/20 rule: users spend 80 percent of their time on their local segment. Bridged networks break up collision domains, but the network remains one large broadcast domain. Similarly, layer 2 switches (bridges) cannot break up broadcast domains, which can cause performance issues and limit the size of your network. Broadcasts and multicasts, along with the slow convergence of spanning tree, can cause major problems as the network grows. Because of these problems, layer 2 switches cannot completely replace routers in the internetwork.

Layer 3 switching
The only difference between a layer 3 switch and a router is the way the administrator creates the physical implementation. Also, traditional routers use microprocessors to make forwarding decisions, while the switch performs only hardware-based packet switching. However, some traditional routers can have other hardware functions as well in some of the higher-end models. Layer 3 switches can be placed anywhere in the network because they handle high-performance LAN traffic and can cost-effectively replace routers. Layer 3 switching is all hardware-based packet forwarding, and all packet forwarding is handled by hardware ASICs. Layer 3 switches really are no different functionally from a traditional router and perform the same functions, which are listed here:

Determine paths based on logical addressing
Run layer 3 checksums (on header only)
Use Time to Live (TTL)
Process and respond to any option information
Update Simple Network Management Protocol (SNMP) managers with Management Information Base (MIB) information
Provide security

The benefits of layer 3 switching include the following:

Hardware-based packet forwarding
High-performance packet switching
High-speed scalability
Low latency
Lower per-port cost
Flow accounting
Security
Quality of service (QoS)

1. SWITCHING:

The switching algorithm is relatively simple and is the same for most routed protocols: a host would like to send a packet to a host on another network. Having acquired a router's address by some means, the source host sends the packet directly to that router's physical (MAC) address; the protocol (network layer) address is that of the destination host. The router examines the packet's destination protocol address and determines whether it knows how to forward the packet. If the router does not know how to forward the packet, it typically drops it. If it does know how to forward the packet, it changes the destination physical address to that of the next-hop router and transmits the packet. The next hop may be the destination host or another router, which executes the same switching process. As the packet moves through the internetwork, its physical address changes, but its protocol address remains the same. The ISO has developed hierarchical terminology that is useful in describing this process. Network devices without the capability to forward packets between subnetworks are called end systems (ESs), whereas network devices with these capabilities are called intermediate systems (ISs). ISs are further divided into those that can communicate within a routing domain (intradomain ISs) and those that communicate both within and between routing domains (interdomain ISs). A routing domain is generally considered a portion of an internetwork under common administrative authority, regulated by a particular set of administrative guidelines. Routing domains are also called autonomous systems.
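The hop-by-hop behavior described above — the destination MAC is rewritten at every hop while the protocol (IP) address never changes — can be traced with a toy topology. The device names, MAC strings, and IP address below are all invented for illustration.

```python
# Sketch of the hop-by-hop switching process: at each hop the frame is
# re-addressed to the next-hop device's MAC, but the packet's protocol
# (IP) destination address stays the same end to end.

next_hop = {                 # device -> (next device, that device's MAC)
    "hostA":   ("router1", "mac-r1"),
    "router1": ("router2", "mac-r2"),
    "router2": ("hostB",   "mac-hB"),
}

def forward(src, dst_ip):
    device, trace = src, []
    while device != "hostB":
        nxt, nxt_mac = next_hop[device]
        trace.append((nxt_mac, dst_ip))   # MAC changes, IP stays the same
        device = nxt
    return trace

for mac, ip in forward("hostA", "10.0.0.9"):
    print(mac, ip)   # a new MAC at every hop, the same IP throughout
```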

Layer 4 switching
Layer 4 switching is considered a hardware-based layer 3 switching technology that can also consider the application used (for example, Telnet or FTP).

Layer 4 switching provides additional routing above layer 3 by using the port numbers found in the Transport layer header to make routing decisions. These port numbers are found in Request for Comments (RFC) 1700 and reference the upper-layer protocol, program, or application. Layer 4 information has been used to help make routing decisions for quite a while. For example, extended access lists can filter packets based on layer 4 port numbers. Another example is accounting information gathered by open standards such as sFlow, provided by companies like Arista Networks, or proprietary solutions like NetFlow switching in Cisco's higher-end routers. The largest benefit of layer 4 switching is that the network administrator can configure a layer 4 switch to prioritize data traffic by application, which means a QoS can be defined for each user. For example, a number of users can be defined as a Video group and be assigned more priority, or bandwidth, based on the need for video conferencing.
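The port-based prioritization described above amounts to a lookup keyed on the transport-layer port. This is a toy classifier, not any vendor's API; the port-to-priority mapping is made up (the video port in particular is an assumption, not from the text).

```python
# Toy layer-4 classifier: traffic priority is chosen from the transport-layer
# destination port, the way the text describes a "Video group" getting more
# bandwidth than Telnet or FTP sessions.

PRIORITY_BY_PORT = {
    23:   "low",      # Telnet
    21:   "low",      # FTP control
    5004: "high",     # assumed port for a video-conferencing stream
}

def classify(dst_port):
    # anything not explicitly listed gets the default priority class
    return PRIORITY_BY_PORT.get(dst_port, "normal")

print(classify(5004))   # high
print(classify(23))     # low
print(classify(8080))   # normal
```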

Multi-layer switching (MLS)


Main article: Multilayer switch

Multi-layer switching combines layer 2, 3, and 4 switching technologies and provides high-speed scalability with low latency. It accomplishes this combination of high-speed scalability and low latency by using huge filter tables based on criteria designed by the network administrator. Multi-layer switching can move traffic at wire speed and also provide layer 3 routing, which can remove the bottleneck from the network routers. This technology is based on the idea of "route once, switch many". Multi-layer switching can make routing/switching decisions based on the following:

MAC source/destination address in a Data Link frame
IP source/destination address in the Network layer header
Protocol field in the Network layer header
Port source/destination numbers in the Transport layer header

There is no performance difference between a layer 3 and a layer 4 switch because the routing/switching is all hardware based.

http://en.wikipedia.org/wiki/Non-uniform_memory_access https://www.google.co.in/#q=difference+between+enhanced+session+mode+and+remote+fx

see Ryan's blog in the results:
http://blogs.technet.com/b/askperf/archive/2013/10/18/windows-8-1-windows-server-2012-r2vmconnect-enhanced-mode-rdp-over-vmbus.aspx
http://en.wikipedia.org/wiki/Solid-state_drive
http://www.storagereview.com/ssd_vs_hdd
http://technet.microsoft.com/en-us/library/jj612868.aspx
http://en.wikipedia.org/wiki/Storage_area_network

http://technet.microsoft.com/en-us/library/dn440675.aspx http://www.systemcentercentral.com/comparing-generation-1-and-generation-2-virtual-machines/

http://technet.microsoft.com/en-us/library/cc720381(v=ws.10).aspx http://technet.microsoft.com/en-us/library/cc720325(v=ws.10).aspx http://msdn.microsoft.com/en-us/library/windows/hardware/gg602137(v=vs.85).aspx

http://www.altaro.com/hyper-v/hyper-v-virtual-hardware-emulated-synthetic-and-sr-iov/

http://www.microsoft.com/en-in/download/details.aspx?id=30707

http://packetlife.net/blog/2009/feb/2/ipv6-neighbor-spoofing/

http://en.wikipedia.org/wiki/Direct_memory_access http://books.google.co.in/books?id=kUJnHJJlnpUC&pg=PA31&lpg=PA31&dq=why+should+application+b e+cluster+aware&source=bl&ots=JTMQUVBN6V&sig=z7obv1NR0oBqbD1mDjBmovxW9Mw&hl=en&sa= X&ei=T23WUtTuFc3irAe6i4GYCQ&ved=0CFgQ6AEwCA

http://blogs.msdn.com/b/clustering/archive/2009/04/10/9542115.aspx http://technet.microsoft.com/en-us/library/cc782179(WS.10).aspx

http://social.technet.microsoft.com/Forums/windowsserver/en-US/23a7c24c-6899-48d0-abd965432ba65c34/clustering-a-non-cluster-aware-application

http://technet.microsoft.com/en-us/library/cc737366(WS.10).aspx

http://blogs.technet.com/b/yungchou/archive/2013/01/10/hyper-v-replica-explained.aspx

http://technet.microsoft.com/en-us/library/gg610600.aspx
http://blogs.technet.com/b/keithmayer/archive/2013/10/31/why-r2-your-next-san-with-smb-3-02scale-out-file-server-sofs.aspx#.Utac67G6bZ4
Hyper-V Overview - http://technet.microsoft.com/library/hh831531
Competitive Advantages of Hyper-V - http://download.microsoft.com/download/E/8/E/E8ECBD78-F07A-4A6F-9401AA1760ED6985/Competitive-Advantages-of-Windows-Server-Hyper-V-over-VMwarevSphere.pdf
Technical Documentation | Virtual Machine Manager: http://www.microsoft.com/enus/download/details.aspx?id=6346
Technical Documentation | App Controller: http://www.microsoft.com/enus/download/details.aspx?id=29694
Technical Documentation | Operations Manager: http://www.microsoft.com/enus/download/details.aspx?id=29256
Technical Documentation | Data Protection Manager: http://www.microsoft.com/enus/download/details.aspx?id=29698
Technical Documentation | Service Manager: http://www.microsoft.com/enus/download/details.aspx?id=27850
Technical Documentation | Orchestrator: http://www.microsoft.com/enus/download/details.aspx?id=29258
Cloud Services Process Pack Download: http://www.microsoft.com/enus/download/details.aspx?id=36497
Microsoft Virtual Machine Converter Download: http://www.microsoft.com/enus/download/details.aspx?id=34591
System Center PowerShell Deployment Toolkit: http://gallery.technet.microsoft.com/PowerShell-Deployment-797b3c6d

http://technetevents.com/virtitcamponline/ http://storagegaga.com/tag/variable-chunking/

You might also like