Contents
1 Data Center Architecture Evolution
  1.1 Architecture Drivers
    1.1.1 Server Consolidation
    1.1.2 Server Virtualization
    1.1.3 Application Scalability and Service Availability
    1.1.4 Efficient Workload Management
    1.1.5 Data Center Uptime
  1.2 Data Center Networking Requirements
    1.2.1 Higher-Density Server Farms
    1.2.2 Lower Oversubscription
    1.2.3 Increased Layer 2 Adjacency
    1.2.4 Increased High Availability
    1.2.5 Integration of Stateful Devices
  1.3 Networking Technology Enhancements
    1.3.1 10GbE Density
    1.3.2 STP Stability
      1.3.2.1 Bridge Assurance
      1.3.2.2 Dispute Mechanism
      1.3.2.3 PVID Mismatch
      1.3.2.4 PVST Simulation Disable
    1.3.3 Virtual Switching
2 Data Center Network Topologies
  2.1 Technology Selection Guidelines
  2.2 Current Topologies
    2.2.1 Triangular and V-Shaped Topologies
    2.2.2 Square and U-Shaped Topologies
    2.2.3 Minimal STP Topologies
  2.3 New Topologies
    2.3.1 Virtual Switching and STP Stability
    2.3.2 High-Density Server Farms and Virtual Switching
    2.3.3 Enhanced Layer 2 Topology
    2.3.4 New Topologies and Data Center Services
All contents are Copyright © 1992–2008 Cisco Systems, Inc. All rights reserved. This document is Cisco Public Information. Page 1 of 23
Technical Overview
● Server consolidation
● Server virtualization
● Application scalability and service availability
● Efficient workload management
● Data center uptime
Although there are business-level drivers such as capital expenditure (CapEx) and operating expense (OpEx) cost controls; physical-facility drivers such as data center consolidation, power, cabling, and cooling; and resiliency drivers such as disaster recovery and business continuance, the focus of this document is the single-site data center networking issues and drivers that influence network changes.
Service availability is more focused on uninterrupted access to services expected by users or back-
end systems. Besides the infrastructure components supporting services (network and servers),
services are related to the applications and the associated service-level agreements (SLAs).
The physical and network environments must support network policy applied to any available
portion of the physical environment (any rack with available power, cooling, and cabling capacity).
Network policy is typically dependent on the most basic server identity (IP address), which implies
that the server subnet (and its associated VLAN) should also be configurable in any physical rack
when required. Layer 2 adjacency across the data center access layer becomes a key requirement.
More IP addresses require more subnets and VLANs, which in turn drive additional service policy: ACLs (for security and quality of service [QoS]), load-balancing policies, firewall (FW) policies, and so on. More
capable servers coupled with increased server quantities require higher network transport capacity,
which needs to be preprovisioned to a reasonable point. Such a point is determined by the target
oversubscription that is to be maintained as network growth is experienced. Increasing numbers of
access switches demand higher port density at the aggregation layer, which in turn drives higher
density at the core layer.
Oversubscription for access layer switches ranges from 1:1 to 20:1 depending largely on the
application environment. Applications that require high bandwidth benefit from very low
oversubscription (1:1 through 4:1). Typical client applications are in the oversubscription range of
10:1 to 20:1 and are generally co-hosted with backup high-bandwidth applications for certain time
slices requiring a lower oversubscription range of 4:1 to 12:1. Virtual servers are expected to have
higher output than non-virtual servers. Hence, their oversubscription trends lower, in the range of
5:1 to 10:1. Access layer oversubscription in modular access switches is expected to be in the 4:1
to 12:1 range, assuming that each port is capable of wire-rate performance. Once actual server utilization is factored in, the overall oversubscription decreases.
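The ratios above follow directly from dividing server-facing capacity by uplink capacity. As a minimal illustrative sketch (the port counts below are hypothetical examples, not figures from this document):

```python
# Sketch: access-layer oversubscription as the ratio of server-facing
# (downlink) capacity to uplink capacity. Port counts here are hypothetical.

def oversubscription(server_ports, server_port_gbps, uplink_ports, uplink_port_gbps):
    """Return the access-layer oversubscription ratio (downlink : uplink)."""
    downlink = server_ports * server_port_gbps
    uplink = uplink_ports * uplink_port_gbps
    return downlink / uplink

# 48 GbE server ports fed by two 10GbE uplinks -> 48:20 = 2.4:1
ratio = oversubscription(48, 1, 2, 10)
print(f"{ratio}:1")  # 2.4:1
```

At actual (sub-wire-rate) server throughput, the same formula with the real per-port load in place of the nominal port speed yields the lower effective oversubscription described above.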
● NIC teaming (two server I/O adaptors using the same IP and MAC address pair on different
physical switches)
● Server clusters that use Layer 2 only to communicate
● High-availability server clusters that expect to provide a server backup mechanism on the
same subnet but different access switch
● Backup applications that expect to see devices being backed up in the same subnet
● Subnetting: simple server grouping in which same subnet servers are connected to different
access switches, thus supporting common VLANs across access layer switches
● Virtual server movement, which requires the server application environment (including the
server ID and IP and MAC addresses) be moved to a different physical location in the
network
● Workload mobility so that an application and its processing requirements can be moved to
server resources with more available computing capacity
● Automation, or the establishment of a dynamic mechanism for provisioning a new server or application in support of a service. Such a service presumes an environment that can transparently accommodate any new application. This transparency comes from allowing any application to be deployed dynamically on any available virtual server, placed in any existing subnet or VLAN. The flexibility required by such automated processes comes from the network's ability to support logical resource instances where and when needed, without many physical constraints.
Today’s environments are rigid in that the physical data center space is tightly coupled with logical
portions of the network environment. A subnet or VLAN that is available only in a number of racks
is inefficient and restrictive. High-density server farms exacerbate the lack of flexibility as it is
difficult to predict which available physical location has the correct logical network settings. To
avoid these restrictions, the architecture is expected to support data center–wide VLANs. Data
center–wide VLANs support any subnet/VLAN on any access switch, and consequently on any
data center rack. This implies that any access switch should be capable of supporting any subnet
or VLAN; however, not all access switches need to support all VLANs concurrently. In a two-tier
Layer 2 topology it is not uncommon to support VLANs across all access switches connected to a
single aggregation pair. However, support for VLANs across multiple aggregation pairs is not
common as it carries implications related to the size of a failure domain.
Data center network architecture–level high availability centers on reliability, stability, and
deterministic behavior, all of which need to be understood during a steady state and through
normal operational procedures and failover conditions. The switching technology enhancement
section discusses new mechanisms aimed at addressing high availability.
The new advanced features addressing STP and Layer 2 stability are:
● Bridge assurance
● Dispute mechanism
● Port VLAN ID (native VLAN ID) mismatch
● Per VLAN Spanning Tree (PVST) simulation disable
These four features collectively address common problems found in Layer 2 environments. They provide mechanisms to increase the resiliency of Layer 2 networks to changes such as the addition of new switches, soft errors, and human-induced problems.
Bridge assurance prevents a misbehaving switch that forwards data traffic but not control traffic from affecting the network. Normally, BPDUs are sent only on designated ports; with bridge assurance enabled, BPDUs are sent on all operational ports, and a port that stops receiving BPDUs from its peer is blocked.
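Bridge assurance expects BPDUs to arrive continuously on every port where it is enabled, and blocks a port whose peer goes silent. The check can be sketched as a simple timeout; this is an illustrative model only, and the timeout value and state names are assumptions, not the actual switch implementation:

```python
# Illustrative model of the bridge assurance check: a port that stops
# receiving BPDUs from its peer is moved to a blocking state.
# The 6-second window (three missed 2-second hellos) is an assumption.

BPDU_TIMEOUT = 6.0  # seconds without a BPDU before the port is blocked

def port_state(now, last_bpdu_received, bridge_assurance=True):
    """Return 'forwarding' while BPDUs arrive, 'inconsistent-blocked' otherwise."""
    if bridge_assurance and now - last_bpdu_received > BPDU_TIMEOUT:
        return "inconsistent-blocked"   # peer forwards data but no control traffic
    return "forwarding"

print(port_state(now=10.0, last_bpdu_received=9.0))   # forwarding
print(port_state(now=10.0, last_bpdu_received=2.0))   # inconsistent-blocked
```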
This feature provides an option to disable PVST simulation, thus preventing the switch from joining the STP environment: when the MSTP port detects a peer inconsistency, the port is blocked.
VS technology is characterized through two main functions: virtual switching and virtual port
channels. Virtual switching refers to the ability to make multiple switches behave like a single
switch (virtual switch), or to have a single switch behave as many (virtual switches). Figure 1
presents a virtual switch instance made out of multiple physical switches, and a single physical
switch that supports multiple virtual switch instances.
Figure 1A displays two interconnected switches, A1 and A2, that become a single virtual switch A. Figure 1B displays a single switch A that supports multiple virtual switch instances, A1 through A4. Virtual switch A sees all ports, I/O modules, and supervisor modules as being part of the same switch. Virtual switches A1 through A4 see only the physical resources assigned to each of them and are unaware of the other virtual switching instances hosted in the same physical switch.
Figure 2 presents a view of how the control plane is handled on either kind of virtual switch. Figure 2A shows a single virtual switch supporting a single control plane instance with its associated protocols. The single instance represents a single management and control plane point that carries a single configuration file and a single set of IP and MAC addresses.
Figure 2B presents multiple virtual switching instances hosted on a single physical switch, each providing a separate control plane and thus separate control plane protocol instances. All instances are isolated from one another.
There are inherent benefits to either approach to switch virtualization. On one hand, a single logical instance simplifies the management of the network infrastructure and provides nonblocking Layer 2 paths to and from any device attached to A1 and A2; on the other hand, multiple virtual switch instances on a single physical switch provide control plane and physical topology isolation over a shared network infrastructure. Figure 3 introduces virtual portchannels.
The left side of Figure 3A shows an access switch connected to two upstream switches, AG1 and AG2, through 2-port point-to-point PortChannels. By utilizing virtual switching and virtual portchannels, the equivalent topology becomes one in which the access switch is connected through a 4-port channel to the virtual switch. The virtual portchannel is established between the access switch and AG1 and
AG2, forming a multichassis EtherChannel. Figure 3B shows a more complex topology with two
sets of virtual switches, one at the aggregation layer and one at the access layer. All the physical
switches are connected in a full-mesh topology using 2-port channels between them. Through
virtual switching and virtual portchannels, the equivalent environment contains two virtual switches,
AG and AC, interconnected through an 8-port virtual portchannel that contains the individual port
channels between the physical switches.
Isolation at the topology level is achieved by assigning specific ports to particular virtual switching instances. Figure 4 displays two topologies. Figure 4A shows switches AG1, AG2, and AC, which collectively support two virtual switching instances. Two instances of VLANx map to the two virtual switching instances, creating two parallel topologies that support the same VLAN but are completely isolated. Figure 4B shows a more complex topology that supports two virtual switching instances on the access and aggregation switches, each supporting a different VLAN. The physical separation is achieved by dedicating ports to a particular switch instance.
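The port-to-instance assignment described above can be modeled as a small sketch: each physical port belongs to at most one virtual switching instance, so the resulting logical topologies are disjoint by construction. The class and port names below are hypothetical.

```python
# Hypothetical sketch of port-to-instance assignment: each physical port is
# owned by exactly one virtual switching instance, so instances share no ports
# and their logical topologies are isolated by construction.

class PhysicalSwitch:
    def __init__(self, ports):
        self.owner = {}            # port -> owning virtual switch instance
        self.ports = set(ports)

    def assign(self, instance, port):
        if port not in self.ports:
            raise ValueError(f"no such port: {port}")
        if port in self.owner:
            raise ValueError(f"{port} already owned by {self.owner[port]}")
        self.owner[port] = instance

    def ports_of(self, instance):
        return {p for p, i in self.owner.items() if i == instance}

ag1 = PhysicalSwitch([f"e1/{n}" for n in range(1, 9)])
for p in ("e1/1", "e1/2"):
    ag1.assign("vs-a", p)
for p in ("e1/3", "e1/4"):
    ag1.assign("vs-b", p)

# The two instances share no ports, so their topologies cannot interact.
print(ag1.ports_of("vs-a") & ag1.ports_of("vs-b"))  # set()
```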
There are additional benefits on Layer 2 topologies. In a typical Layer 2 topology, STP provides
loop avoidance and allows path resiliency. A byproduct of path resiliency is that only one path is
active while all other redundant paths are blocked. This is also true for dual-homed servers that use
network interface card (NIC) teaming, with one of the adaptors active and the second on standby.
Virtual switching and virtual portchannel technology allows a Layer 2 network to use all available
Layer 2 paths concurrently. Hosts and switches connecting the virtual switches use any available
L2 path concurrently. The traffic distribution is granular to a flow (source and destination MAC
addresses, source and destination IP addresses, and Layer 4 port number). In addition to nonblocking paths, virtual portchannel capabilities provide both switch and path redundancy.
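Flow-based distribution can be sketched as follows: hashing the flow's identifying fields selects one member link, so packets of a single flow stay in order on one link while distinct flows spread across all active links. The hash function below is an arbitrary stand-in, not a vendor algorithm.

```python
# Sketch (assumed, not a vendor algorithm) of flow-based link selection on a
# PortChannel: the same flow always hashes to the same member link, while
# different flows can use different member links concurrently.

import zlib

def select_link(src_mac, dst_mac, src_ip, dst_ip, l4_port, num_links):
    """Pick a member link index from the flow's identifying fields."""
    key = f"{src_mac}|{dst_mac}|{src_ip}|{dst_ip}|{l4_port}".encode()
    return zlib.crc32(key) % num_links

# The same flow deterministically maps to the same link...
a = select_link("aa:aa", "bb:bb", "10.0.0.1", "10.0.1.1", 80, 4)
b = select_link("aa:aa", "bb:bb", "10.0.0.1", "10.0.1.1", 80, 4)
assert a == b
# ...while many distinct flows spread over the member links.
links = {select_link("aa:aa", "bb:bb", "10.0.0.1", f"10.0.1.{h}", 80, 4)
         for h in range(1, 50)}
print(sorted(links))
```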
Figure 5 shows the physical and logical views of a large Layer 2 network using VS technology. The
diagrams show a generic multilayer switch providing virtual switching and virtual portchannel
capabilities across multiple Layer 2 tiers: access, aggregation, and core. The physical view shows
a number of dual-homed servers that have established cross-chassis channel connections with a
pair of access switches. For instance, server B, which is a collection of virtual servers, uses two
groups of 2 I/O ports to connect to the access switches (2 to each switch). These connections
effectively form a 4-port channel when connecting to a set of access switches that are VS capable.
Through this mechanism, server B is capable of using all links concurrently. Likewise a non-
virtualized server using a single IP and MAC address is also capable of using the two links
concurrently. In either case, the servers are unaware of the fact that they are connecting to two
distinct switches through a single PortChannel.
The following subsection presents guidelines for the selection of data center technology.
There are, however, certain network locations where the Cisco Nexus 7000 Series switching
products may not apply. The Cisco Nexus 7000 Series consists of purpose-built data center
switches optimized for 10GbE aggregation. As data center switches, they fit data center
environments that require high-density 10GbE. There are other environments outside the data
center where the Cisco Nexus 7000 Series fits based on 10GbE port density and a feature
requirement match.
Additionally, there are environments where the Cisco Nexus 7000 Series does not fit as it does not
offer the required feature set or hardware to support such requirements.
While both platforms fit the data center access layer, the Cisco Catalyst 6500 Series is optimized
for GbE access whereas the Cisco Nexus 7000 Series is optimized for 10GbE access.
Both platforms fit the aggregation and core layers, which have traditionally supported combined Layer 2 and Layer 3, and Layer 3 only, respectively. 10GbE density, oversubscription, and Layer 2 and Layer 3 features should be used to determine which platform is the better fit.
The Cisco Nexus 7000 Series platform does not apply to the data center services layer, the Internet edge, or the WAN edge. From a data center services perspective, it is expected that services are deployed through services chassis, as explained in the new-topologies section. The Internet edge is exposed to the Internet-size routing table, which the Cisco Nexus 7000 Series will support in future releases. The WAN edge typically requires physical interfaces that the Cisco Nexus 7000 Series, being a data center switching platform, does not offer.
For more information about the Cisco Nexus 7000 Series feature set, visit:
http://www.cisco.com/go/nxos.
From a data center viewpoint, the selection guidelines depend on when technology enhancements
are available. The enhancements include both new hardware and software features previously
mentioned. Table 2 summarizes the technology enhancements and their availability.
Table 2. Technology Enhancements and Availability

Technology Enhancement                               Cisco Catalyst 6500 Series   Cisco Nexus 7000 Series
New Hardware
Layer 2 Enhancements
  Other enhancements: PVID mismatch                  Future IOS Release           Cisco NX-OS 4.0
Virtual Switching
  Multiple virtual switches per physical switch      –                            Cisco NX-OS 4.0 (VDC)
Services Integration
In addition to the technology enhancement areas, the table lists services integration. The dashes
indicate that there are no current plans to support the listed features. The New Hardware column
lists the new I/O modules and supervisor with 10GbE uplinks.
Note: The two 10GbE I/O modules listed (4:1 oversubscribed) support configuration modes where one port in a 4-port group operates at wire rate.
VS offers two major enhancements: virtual switches and virtual portchannels. Virtual switches come in two different models: one in which the virtual switch is a group of physical switches, available through Virtual Switching System (VSS) technology on the Cisco Catalyst 6500 switches, and another in which the virtual switch is a logical instance of a physical switch, available through Virtual Device Context (VDC) technology on the Cisco Nexus 7000 switches. Virtual portchannel technology is integrated into VSS and will be available on the Cisco Nexus 7000 in a future release.
Layer 2 enhancements are organized into two groups based on when they are available according
to the release schedule. Services integration, as is explained later in this document, refers to
whether the chassis supports integrated service modules or whether services are possible through
a services chassis model.
In the triangular topology, the access switch connects to two aggregation switches, and the interconnect between the aggregation switches carries the access layer switch VLAN(s), forming a looped topology. In the V-shaped topology, an access switch is also connected to two aggregation switches; however, the interconnect between the aggregation switches does not carry the VLAN used at the access switch, forming a loop-free topology.
The triangular and V-shaped topologies are widely used because of their predictability in steady
and failover states where each access switch has a primary and redundant path to the aggregation
layer (high availability).
● The uplinks are typically PortChannels, thus providing physical link redundancy.
● The Layer 2 and Layer 3 boundary is at the aggregation layer.
● Aggregation switches typically are the primary and secondary STP root bridges.
● FHRP active and standby instances match the primary and secondary root bridges
respectively.
● VLANs are contained to the aggregation pair, and the failure domain is determined by how
widely VLANs are spread to the access layer (one, many or all access switches).
● Aggregation-to-core connections are Layer 3 equal-cost multipath (ECMP) links in and out of the aggregation layer.
● The core layer summarizes routes towards the rest of the network and injects a default route towards the aggregation layer.
The VLAN scope is logically restricted to the access switch pair; although the VLANs are supported at the aggregation layer, no other access switch pair sees them. The topology is predictable during steady state, but in failover mode it could present some challenges depending on where the failure is and whether there is a link between the aggregation switches that transports the traffic for the access layer VLANs.
● Failures that isolate the active and standby FHRP peers in loop-free topologies, causing both FHRP peers to become active
● Maintaining oversubscription rates comparable to the equivalent triangular or V-shaped topology, which would require twice as many ports in the channels from access to aggregation switches
Some of the characteristics of these topologies are:
● Aggregation switches provide FHRP functions to the access layer and do not rely on STP
for loop avoidance.
● Access switches support one or many VLANs, which are trunked to the aggregation layer.
Figure 10 shows the classic topology using both the Cisco Catalyst 6500 and Cisco Nexus 7000
Series Switches.
Figure 10. Classic Topology Using Cisco Catalyst 6500 and Cisco Nexus 7000 Series Switches.
In Figure 10, the Cisco Nexus 7000 Series is used in the core layer and in the aggregation layer of one of the aggregation modules. At the core layer, it provides traditional Layer 3 functions and high-density 10GbE, and at the aggregation layer it provides both Layer 2 and Layer 3 functions, taking advantage of its 10GbE density and enhanced STP features. This topology is used as a reference to introduce new switching technology functions and the associated changes to the classic topology.
Module 1 shows virtual switches at the access and aggregation layers through the use of VSS technology. Module 2 shows a classic triangular topology taking advantage of the advanced STP features, thus addressing STP stability in a high-density 10GbE aggregation environment. Notice that module 2 could also reduce the size of the Layer 2 failure domain by using the VS technology available through VDCs: creating virtual switches, assigning ports to each, and allowing each logical topology to run isolated instances of the control plane protocols.
Note: In a VSS environment, STP protocols are used strictly as fail-safe mechanisms as the
loop-free forwarding topology is established through VSS. For more information about VSS, visit:
http://www.cisco.com/en/US/products/ps9336/products_white_paper0900aecd806ee2ed.shtml.
The left part of Figure 12 shows a typical topology as used today with the following characteristics:
● 60 access switches x 336 ports per access switch, for 20,160 ports per aggregation module
● Access to aggregation oversubscription
◦ Access to aggregation: 336:40 = 8.4:1
◦ Aggregation layer I/O module oversubscription: 2:1
◦ Total oversubscription: 16.8:1
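These figures can be reproduced with a short calculation (a sketch; the counts come from the bullets above, and the 40 Gbps of uplink capacity per access switch is implied by the 336:40 ratio):

```python
# Reproducing the oversubscription figures above for one aggregation module:
# 60 access switches, each with 336 GbE server ports and 40 Gbps of uplink
# capacity, feeding 2:1 oversubscribed aggregation-layer I/O modules.

access_switches = 60
server_ports_per_access = 336      # GbE ports, assumed wire rate
uplink_gbps_per_access = 40        # implied by the 336:40 ratio above

ports_per_module = access_switches * server_ports_per_access
access_to_agg = server_ports_per_access / uplink_gbps_per_access
total = access_to_agg * 2          # 2:1 aggregation-layer I/O module

print(ports_per_module)            # 20160
print(f"{access_to_agg}:1")        # 8.4:1
print(f"{total}:1")                # 16.8:1
```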
Note: All the calculations assume that the topology is capable of forwarding concurrently in all
links taking advantage of virtual portchannel technology. Consult Table 2 earlier in the document
for feature availability.
The overall access port count more than doubles due to the higher server density per access
switch and the higher 10GbE port density per aggregation switch. The oversubscription increases
from 12:1 to 16.8:1, although the assumption made thus far is that each GbE port is wire speed. If
instead a true server throughput of 500 Mbps per port is assumed, the oversubscription numbers
for the access layer to the aggregation layer are 6:1 and 8.4:1 respectively. Another factor in the
overall scalability calculation is whether each server uses one or more GbE interfaces. If a server
uses two GbE NICs, the overall server count would be half the port count, and per port
performance could be set at 200 Mbps (or 400 Mbps per server). At 200 Mbps, the access layer to
aggregation layer oversubscription drops to 2.4:1 and 3.36:1 respectively.
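The throughput-adjusted figures above scale linearly with per-port load, which can be verified with a small sketch:

```python
# Sketch: effective oversubscription scales linearly with actual per-port load.
# Nominal figures (12:1 and 16.8:1) are taken from the text; ports are GbE.

def effective(nominal, per_port_mbps, port_speed_mbps=1000):
    """Scale a nominal wire-rate oversubscription by actual port utilization."""
    return nominal * per_port_mbps / port_speed_mbps

for mbps in (500, 200):
    print(f"{mbps} Mbps: {round(effective(12, mbps), 2)}:1 "
          f"and {round(effective(16.8, mbps), 2)}:1")
# 500 Mbps -> 6.0:1 and 8.4:1; 200 Mbps -> 2.4:1 and 3.36:1
```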
In these examples, the calculations could have been done at 4:1 access layer to aggregation layer
oversubscription, which would have yielded higher port access counts and higher access to
aggregation layer oversubscription.
As expected, the enhanced Layer 2 topology goes from two Layer 2 tiers to three Layer 2 tiers,
which increases the size of the Layer 2 domain and therefore the size of the failure domain.
Besides using best-practice recommended STP features such as BPDU guard, root guard, loop
guard, and UDLD, it is important to consider the use of the advanced STP features as well as the
virtual portchannel and virtual switching technology discussed earlier. Notice in Figure 13 that
VLAN E is now a data center–wide VLAN that spans multiple aggregation modules. In this
scenario, the failure domain includes all aggregation modules where VLAN E is present.
To limit the size of the failure domain, a pair of core switches for each group of aggregation
modules is required. Each core pair typically covers the scope of a data center zone. A data center
zone is a physical area of the data center grouping a number of aggregation modules, all
connecting to a pair of common core switches. The zone size is determined by power and cooling
capacity, cabling distances, the density of racks that can be supported per aggregation module,
and the number of modules that the core can support.
Figure 14 displays multiple data center zones, each requiring a Layer 2 and Layer 3 core switch pair. The core pair per zone allows each zone to have a separate STP topology, with an independent STP instance running per zone. VLANs can be as wide as a zone but cannot cross the zone boundary. Figure 15 shows an alternative that collapses all core pairs into a single pair without committing to a single large-scale failure domain.
Two logical switching instances are established, each mapping to a different VDC supported by the core layer. Zones 1 and 2 increase in size by taking modules from zone N so that larger Layer 2 domains can be established, yet they remain isolated while using only a single core switch pair. Each zone has its own instances of the Layer 2 and Layer 3 protocols, thus supporting zone isolation over the same infrastructure.
Starting with the classic topology using new Cisco Catalyst 6500 and Cisco Nexus 7000 Series
technology and features, the integration of services is achieved through services switches
connected to the topology Layer 2 and Layer 3 boundary. The classic topology takes advantage of
virtual switching, virtual portchannels and advanced STP features while conserving the typical
Layer 2 and Layer 3 boundary at the aggregation layer. Services are integrated through Cisco
Catalyst 6500 switches that are connected to the aggregation switch pair. These service switches
provide connectivity to service appliances and house service modules. This topology is shown in
Figure 16.
There are three primary reasons for the use of service switches:
● Service scalability: Service scalability is currently limited by the number of slots at the
aggregation layer. Further service scalability is addressed by adding service devices as
needed without having to change the aggregation layer hardware or software configuration
and without sacrificing valuable slot density.
● Topology scalability: Relieving the aggregation switches of slots used by service modules, or by GbE I/O modules used for appliance connectivity, makes the overall aggregation module more scalable. Slots normally taken by service devices are repurposed to support uplink capacity, thus enabling larger topologies to be supported. Some of the ports, however, are still used to connect to the services chassis.
● Manageability: Decoupling service devices from the aggregation switches makes service device code upgrades and configuration changes independent of the aggregation switches. Configuration changes for services are more frequent than Layer 2 and Layer 3 changes, so mitigating the risks resulting from service changes is easier when services chassis are introduced.
Module 1 in Figure 16 shows a virtual switch using VSS technology, which provides connectivity to
a set of Cisco Catalyst 6500 switches that house all service devices and their redundant peers.
Service switch connectivity through virtual portchannel technology allows each service switch to
have a nonblocking redundant path to the virtual switch, which simplifies deployment options.
Module 2 in Figure 16 shows the integration of services chassis in a Cisco Nexus 7000 Series aggregation environment. The environment takes advantage of the advanced STP features while keeping the aggregation layer port density available for uplink connectivity.
An additional topology to consider is the enhanced Layer 2 topology using services chassis in
multiple logical topologies. The objective is to provide scalable services in a large data center
network environment that has VLANs applied outside a single aggregation module but not across
the entire data center. This implies that the services are scaled to a particular zone that has a
predetermined size. This configuration is presented in Figure 17.
Figure 17 shows two large zones, each served by a different set of services switches. The service
switches provide services to the particular zone to which they are physically connected, which has
been predetermined by setting certain physical ports on different virtual switches. Service chassis
svcs1 and svcs2 serve the red topology, and svcs3 and svcs4 serve the orange topology. This allows services to scale with the particular logical topology without having to share the control protocols.
● http://www.cisco.com/go/nexus
● http://www.cisco.com/go/nxos
● http://www.cisco.com/en/US/products/ps9336/products_white_paper0900aecd806ee2ed.shtml
● http://www.cisco.com/go/dcnm