
Technical Overview

Architecture Brief: Using Cisco Catalyst 6500 and Cisco Nexus 7000 Series Switching Technology in Data Center Networks

This architecture brief discusses design strategies based on technology enhancements available in the Cisco Catalyst® 6500 and Cisco® Nexus 7000 Series Switches. These enhancements enable data center architects to address the broad network requirements placed on current data center architectures; the new switching technology delivers a number of capabilities that collectively provide architectural options to meet those requirements. The brief targets data center architects and network engineers seeking to understand data center architecture evolution strategies; readers are expected to have experience with design and implementation issues and familiarity with the underlying technology concepts.

Contents
1 Data Center Architecture Evolution
  1.1 Architecture Drivers
    1.1.1 Server Consolidation
    1.1.2 Server Virtualization
    1.1.3 Application Scalability and Service Availability
    1.1.4 Efficient Workload Management
    1.1.5 Data Center Uptime
  1.2 Data Center Networking Requirements
    1.2.1 Higher-Density Server Farms
    1.2.2 Lower Oversubscription
    1.2.3 Increased Layer 2 Adjacency
    1.2.4 Increased High Availability
    1.2.5 Integration of Stateful Devices
  1.3 Networking Technology Enhancements
    1.3.1 10GbE Density
    1.3.2 STP Stability
      1.3.2.1 Bridge Assurance
      1.3.2.2 Dispute Mechanism
      1.3.2.3 PVID Mismatch
      1.3.2.4 PVST Simulation Disable
    1.3.3 Virtual Switching
2 Data Center Network Topologies
  2.1 Technology Selection Guidelines
  2.2 Current Topologies
    2.2.1 Triangular and V-Shaped Topologies
    2.2.2 Square and U-Shaped Topologies
    2.2.3 Minimal STP Topologies
  2.3 New Topologies
    2.3.1 Virtual Switching and STP Stability
    2.3.2 High-Density Server Farms and Virtual Switching
    2.3.3 Enhanced Layer 2 Topology
    2.3.4 New Topologies and Data Center Services
3 Architecture Topology Summary
4 For More Information


1 Data Center Architecture Evolution


Existing data center architectures are being revisited. Behind these efforts to craft a new
architecture are a number of business drivers that collectively impose a new set of challenges.
These challenges are addressed through innovative networking technology enhancements offered
by Cisco data center switching products.

1.1 Architecture Drivers


The list of business drivers transforming the data center network environment is extensive. The main architecture drivers include the following:

● Server consolidation
● Server virtualization
● Application scalability and service availability
● Efficient workload management
● Data center uptime

There are also business-level drivers such as capital expenditure (CapEx) and operating expense (OpEx) cost controls; physical-facility drivers such as data center consolidation, power, cabling, and cooling; and resiliency factors such as disaster recovery and business continuance. This document, however, focuses on single-site data center networking issues and the drivers that influence network changes.

1.1.1 Server Consolidation


Server consolidation aims to control server hardware proliferation as well as to standardize server hardware architecture. Server proliferation, also known as server sprawl, is the result of physical server growth; server usage, however, is not always efficient, yielding low average utilization per machine. Hardware standardization means establishing a set of guidelines for officially supported servers. Guidelines may dictate server sizes, numbers of CPUs or cores per server, memory, bus architecture, and overall I/O capacity. As a result, server growth continues at a decreased pace (but still increases switch port density), while per-server capabilities continue to increase (more I/O capacity per server), which raises the average network load per server.

1.1.2 Server Virtualization


Server virtualization is a form of server consolidation that helps control server sprawl and supports
higher levels of server usage efficiency and application mobility. Higher server efficiency means
higher average server utilization and therefore higher outbound load per server. Application mobility
implies a virtual server infrastructure, where virtual servers can be moved across physical servers.
There are implications such as a single physical server supporting different applications through
multiple virtual servers, and multiple virtual servers requiring multiple IP addresses. Servers are
also likely to need multiple I/O ports, VLAN trunking, and port channeling. Virtualized servers may
require migration from Gigabit Ethernet (GbE) to 10 Gigabit Ethernet (10GbE) when multiple
applications within a physical server demand more than 1 Gbps on a regular basis. Virtualized servers produce higher outbound traffic loads, which require a lower access layer oversubscription target (the ratio of server load to uplink capacity).
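As a rough illustration of this ratio, the short sketch below computes access layer oversubscription from aggregate server load and uplink capacity. The port counts and loads are hypothetical figures chosen for the example, not product specifications.

```python
# Illustrative sketch: access-layer oversubscription is the ratio of
# aggregate server load to uplink capacity (all figures are assumptions).

def oversubscription(num_servers, load_per_server_gbps, uplink_gbps):
    """Return the server-load-to-uplink-capacity ratio."""
    return (num_servers * load_per_server_gbps) / uplink_gbps

# A hypothetical 48-port GbE access switch with 4 x 10GbE uplinks:
print(oversubscription(48, 1.0, 40))  # 1.2 -> 1.2:1 at wire rate
# The same switch when virtualized servers each push about 2 Gbps:
print(oversubscription(48, 2.0, 40))  # 2.4 -> virtualization drives the ratio up
```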


1.1.3 Application Scalability and Service Availability


Application scalability addresses both the need to control application response-time jitter and the need to help ensure application uptime. Response-time predictability, driven by end-user satisfaction, means addressing application load congestion in the data center, whereas application uptime involves protecting application environments (security) and helping ensure network infrastructure uptime (network high availability).

Service availability focuses on the uninterrupted access to services that users or back-end systems expect. Beyond the infrastructure components supporting the services (network and servers), service availability is tied to the applications and their associated service-level agreements (SLAs).

1.1.4 Efficient Workload Management


Efficient workload management is at the core of business automation. There are two main
objectives: efficiency of resource utilization, and flexibility in achieving rapid on-demand service
provisioning. Business needs call for applications to be placed in service in the shortest possible
time window. The rollout process efficiency depends primarily on the ability to add new physical or
virtual servers to the network along with the required network policy changes (Layer 2, Layer 3,
Layer 4 through Layer 7, and security). The implications are broad as resources are applied on
demand based on a utility-like model.

The physical and network environments must support network policy applied to any available
portion of the physical environment (any rack with available power, cooling, and cabling capacity).
Network policy is typically dependent on the most basic server identity (IP address), which implies
that the server subnet (and its associated VLAN) should also be configurable in any physical rack
when required. Layer 2 adjacency across the data center access layer becomes a key requirement.

1.1.5 Data Center Uptime


Data center uptime is critical. While this is not a new concept or requirement, data center downtime is more visible than ever because failures in denser server farms have wide repercussions. The resulting requirement also has wide ramifications: the network architecture and the supporting switch architectures (software and hardware) must be highly scalable and highly available as uptime requirements rise. The new data center technology supporting these requirements is discussed in this document, which focuses primarily on modular data center switching technology.

1.2 Data Center Networking Requirements


Data center network design and the related topologies are influenced by requirements. The common set of networking requirements includes higher-density server farms, lower oversubscription, increased Layer 2 adjacency, increased high availability, and integration of stateful services devices. Additional considerations include an increased number of VLANs and subnets, an increased number of MAC and IP address pairs, larger subnets, more trunks and PortChannels, larger access control list (ACL) quantities, and so on, which are either interrelated or the result of the larger networking requirements.


1.2.1 Higher-Density Server Farms


Typical server farm growth and data center consolidation produce denser server farms.
Consolidated data centers exhibit faster growth than distributed ones. While the rate of growth is
slowed down by server virtualization, growth is still experienced. Server growth requires a flexible
network architecture to accommodate additional port density without affecting uptime or
oversubscription. Server growth equates to increased switch port density at a rate determined by
the number of physical interfaces per server. Virtual server growth equates to a higher number of
IP addresses and their associated MAC addresses, a higher number of trunks and PortChannels
from server to switch, and use of more I/O ports per server (4 x GbE) or higher I/O capacity (2 x
10GbE).

More IP addresses require more subnets and VLANs, which in turn drive additional service policies: ACLs (security and quality of service [QoS]), load-balancing policies, firewall (FW) policies, and so on. More
capable servers coupled with increased server quantities require higher network transport capacity,
which needs to be preprovisioned to a reasonable point. Such a point is determined by the target
oversubscription that is to be maintained as network growth is experienced. Increasing numbers of
access switches demand higher port density at the aggregation layer, which in turn drives higher
density at the core layer.

1.2.2 Lower Oversubscription


Standardization of server hardware implies use of the latest technology: faster processors, the latest bus architecture, and higher server I/O capacity. Considerations such as the percentage of I/O capacity used on average, and whether that number changes when servers are multihomed, are often the subject of discussion between server and networking groups. Whatever the answer, the network must be able to maintain the established target oversubscription while providing for server growth and the associated network equipment growth.

Oversubscription for access layer switches ranges from 1:1 to 20:1 depending largely on the
application environment. Applications that require high bandwidth benefit from very low
oversubscription (1:1 through 4:1). Typical client applications fall in the 10:1 to 20:1 oversubscription range; when they are co-hosted with high-bandwidth backup applications that run during certain time slices, a lower range of 4:1 to 12:1 is required. Virtual servers are expected to have higher output than non-virtual servers; hence their oversubscription trends lower, in the range of 5:1 to 10:1. Access layer oversubscription in modular access switches is expected to be in the 4:1 to 12:1 range, assuming that each port is capable of wire-rate performance. Once actual server utilization is factored in, the effective oversubscription decreases.

1.2.3 Increased Layer 2 Adjacency


Layer 2 adjacency implies the existence of a Layer 2 path between devices. The range of
applications requiring Layer 2 adjacency is broad and has traditionally included:

● NIC teaming (two server I/O adaptors using the same IP and MAC address pair on different
physical switches)
● Server clusters that use Layer 2 only to communicate
● High-availability server clusters that expect to provide a server backup mechanism on the
same subnet but different access switch
● Backup applications that expect to see devices being backed up in the same subnet
● Subnetting: simple server grouping in which same-subnet servers are connected to different access switches, which requires common VLANs across access layer switches


More recently the following requirements have emerged:

● Virtual server movement, which requires that the server application environment (including the server ID and the IP and MAC addresses) be moved to a different physical location in the network
● Workload mobility, so that an application and its processing requirements can be moved to server resources with more available computing capacity
● Automation, or the establishment of a dynamic mechanism for provisioning a new server or application in support of a service. The service presumes an environment that can be easily manipulated to accommodate any new application transparently. This transparency comes from allowing any application to be deployed dynamically on any available virtual server, which may be placed in any existing subnet or VLAN. The flexibility required by such automated processes comes from the network's support for logical resource instances where and when needed, without many physical constraints

Today’s environments are rigid in that the physical data center space is tightly coupled with logical
portions of the network environment. A subnet or VLAN that is available only in a number of racks
is inefficient and restrictive. High-density server farms exacerbate the lack of flexibility as it is
difficult to predict which available physical location has the correct logical network settings. To
avoid these restrictions, the architecture is expected to support data center–wide VLANs. Data
center–wide VLANs support any subnet/VLAN on any access switch, and consequently on any
data center rack. This implies that any access switch should be capable of supporting any subnet
or VLAN; however, not all access switches need to support all VLANs concurrently. In a two-tier
Layer 2 topology it is not uncommon to support VLANs across all access switches connected to a
single aggregation pair. However, support for VLANs across multiple aggregation pairs is not
common as it carries implications related to the size of a failure domain.

1.2.4 Increased High Availability


High availability has been, and continues to be, a major requirement. The expectations for high availability, however, are changing. A highly available network infrastructure requires both switch-level and architecture-level high availability, so it is important to address these independently but cohesively. Switch-level high availability is addressed by the hardware and software architectures and the capabilities they offer. See http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/ps9512/White_Paper_Continuous_Operations_High_Availability.html for details on high availability.

Data center network architecture–level high availability centers on reliability, stability, and
deterministic behavior, all of which need to be understood during a steady state and through
normal operational procedures and failover conditions. The switching technology enhancement
section discusses new mechanisms aimed at addressing high availability.

1.2.5 Integration of Stateful Devices


Integration of stateful service devices such as load balancers and firewalls is not a new concept.
The new requirements come from the support of virtual instances, active-active instances, and
higher service scalability. The network infrastructure must provide simple and optimal paths to the service devices, and the architecture should offer simpler service scalability through the addition of new services without changing existing topologies.


1.3 Networking Technology Enhancements


Networking technology enhancements are the response to requirements that help evolve the
network architecture. Key enhancements include:

● Higher 10GbE port density
● Increased Spanning Tree Protocol (STP) stability
● Virtual Switching

1.3.1 10GbE Density


10GbE port density directly addresses server farm density by providing uplink capacity for access switch connectivity. The Cisco Catalyst 6509 Switch provides a total of 130 10GbE ports per chassis, using eight new 16-port 10GbE I/O modules plus the 2 10GbE ports on the Supervisor 720-10G VS. The Cisco Nexus 7000 Series provides 256 10GbE ports through eight 32-port 10GbE I/O modules per chassis. A pair of aggregation switches thus supports 128 or 256 dual-homed access switches respectively, doubling or quadrupling the capacity available with the current Cisco Catalyst 6509 8-port 10GbE I/O module. If access layer switches use two uplinks per aggregation switch, the access switch counts drop to 64 and 128 respectively. If an access layer switch supports 240 GbE ports, the total number of access ports per aggregation pair ranges from 15,360 to 61,440.
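The density arithmetic above can be reproduced directly. The module and port counts come from the text; 240 GbE ports per access switch is the text's working assumption.

```python
# Reproducing the 10GbE density arithmetic (illustrative only).

catalyst_10ge = 8 * 16 + 2   # eight 16-port modules + Supervisor 720-10G VS = 130
nexus_10ge = 8 * 32          # eight 32-port modules = 256
gbe_per_access = 240         # assumed GbE density per access switch

# Dual-homed access switches, one uplink to each aggregation switch:
for access_switches in (128, 256):
    print(access_switches * gbe_per_access)   # 30720, 61440 access GbE ports
# Two uplinks per aggregation switch halve the access switch count:
for access_switches in (64, 128):
    print(access_switches * gbe_per_access)   # 15360, 30720
# Hence the quoted range of 15,360 to 61,440 ports per aggregation pair.
```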

1.3.2 STP Stability


STP stability is increased by addressing typical failures seen in operational environments. Many of these failures are related to the plug-and-play nature of STP, in which switches are immediately able to join the Layer 2 topology. Solutions to the various STP concerns therefore include advanced features that address specific conditions arising from this immediate availability. In addition to the advanced features discussed here, the use of virtual switching technology to build loop-free deterministic topologies lessens the reliance on STP: in effect, STP becomes a background fail-safe protocol rather than the mechanism that actively determines the loop-free topology.

The new advanced features addressing STP and Layer 2 stability are:

● Bridge assurance
● Dispute mechanism
● Port VLAN ID (native VLAN ID) mismatch
● Per VLAN Spanning Tree (PVST) simulation disable

These four features collectively address common problems found in Layer 2 environments. The
features provide a mechanism to increase the resiliency of Layer 2 networks to changes, such as
addition of new switches, soft errors, and human-induced problems.

1.3.2.1 Bridge Assurance


Bridge assurance is an STP feature used between two switches to help ensure that they
communicate properly and consistently. If the communication fails because one switch ceases to
send a periodic Bridge Protocol Data Unit (BPDU) (control traffic), the other switch places the port
in an inconsistent state (blocking). The port is kept in an inconsistent state until a BPDU is
received; it is then changed to a normal STP state starting with the regular transitions.

Bridge assurance prevents misbehaving switches that forward data traffic, but not control traffic, from affecting the network. Normally BPDUs are sent only on designated ports; with bridge assurance enabled, BPDUs are sent on all ports on which the feature is active.
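The behavior can be sketched as a simple per-port timer. The model below is conceptual only, with assumed timer values; it is not the actual switch implementation.

```python
# Conceptual model of bridge assurance: a port that stops receiving
# periodic BPDUs is placed in a blocking, inconsistent state until
# BPDUs resume (assumed timer values, not platform defaults).

import time

HELLO_INTERVAL = 2.0          # assumed STP hello time, in seconds
TIMEOUT = 3 * HELLO_INTERVAL  # assumed tolerance before declaring the peer silent

class Port:
    def __init__(self):
        self.last_bpdu = time.monotonic()
        self.state = "forwarding"

    def receive_bpdu(self):
        self.last_bpdu = time.monotonic()
        if self.state == "ba-inconsistent":
            self.state = "listening"  # resume the regular STP transitions

    def tick(self):
        # The peer may still forward data while sending no control traffic,
        # so the port is blocked rather than risking a loop.
        if time.monotonic() - self.last_bpdu > TIMEOUT:
            self.state = "ba-inconsistent"
```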


1.3.2.2 Dispute Mechanism


The dispute mechanism is included in the IEEE 802.1D-2004 Rapid Spanning Tree Protocol standard and allows a switch to prevent loops resulting from unidirectional link conditions. On a segment, a designated port (on switch A) may send BPDUs, including its role, that never arrive at the receiving port (on switch B). When the designated port on switch A then receives a BPDU from switch B that is inconsistent, because switch B evidently did not react to switch A's role, switch A determines that the link is unidirectional and reverts the port state to blocking. The dispute mechanism works on shared segments as well as at link-up time, which is an improvement over Loop Guard.
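The decision can be sketched as a single consistency check. The snippet below is a conceptual model of the dispute condition described above, with simplified priority vectors; it is not the standard's full state machine.

```python
# Conceptual model of the dispute check: a designated port that receives an
# inferior BPDU whose sender still claims the designated role (and is
# learning or forwarding) concludes the peer never heard its superior BPDUs.

def dispute(my_vector, rx_vector, rx_role, rx_learning):
    inferior = rx_vector > my_vector  # lower priority vector wins in STP
    return inferior and rx_role == "designated" and rx_learning

# Switch A (better vector) hears switch B still acting as designated with
# worse information: the A-to-B direction of the link is presumed broken.
if dispute((0, 100), (1, 200), "designated", rx_learning=True):
    print("dispute detected: revert designated port to blocking")
```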

1.3.2.3 PVID Mismatch


PVID mismatch provides a mechanism to detect a native VLAN (port VLAN ID) mismatch on a connection between two switches, and it generates a syslog message that can be used by Embedded Event Manager (EEM) to shut down the interface. Although this is not an STP feature, it is listed here because it addresses Layer 2 stability.

1.3.2.4 PVST Simulation Disable


A switch running Multiple Spanning Tree Protocol (MSTP) is capable of interoperating with a PVST+ or Rapid PVST+ switch at a region boundary. This interoperability is plug-and-play, which may not be desirable if the switch being connected is not configured properly.

This feature provides an option to disable PVST simulation, thus preventing such a switch from joining the STP environment: the MSTP port detects the peer inconsistency and blocks the port.

1.3.3 Virtual Switching


Virtual Switching (VS) is both a revolutionary and an evolutionary technology that supports a broad set of data center requirements. VS is revolutionary in that, through innovative enhancements, it provides architectural options not previously available to the data center network environment. VS is evolutionary in that it establishes a point of departure from existing data center architectures toward next-generation ones, and it helps set the direction of technology in support of critical business drivers. VS technology is available on Cisco Catalyst 6500 and Cisco Nexus 7000 switching platforms.

VS technology is characterized by two main functions: virtual switching and virtual portchannels. Virtual switching refers to the ability to make multiple switches behave like a single switch (a virtual switch), or to have a single switch behave as many (virtual switches). Figure 1 presents a virtual switch instance made up of multiple physical switches, and a single physical switch that supports multiple virtual switch instances.

Figure 1. Virtual Switching


Figure 1A displays two interconnected switches, A1 and A2, that become a single virtual switch A. Figure 1B displays a single switch A that supports multiple virtual switch instances, A1 through A4. Virtual switch A sees all ports, I/O modules, and supervisor modules as being part of the same switch. Virtual switches A1 through A4 see only the physical resources assigned to each of them and are unaware of the other virtual switch instances hosted in the same physical switch.
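This resource isolation can be pictured as a simple port-to-instance mapping. The sketch below is a conceptual model only, with hypothetical port names; it does not reflect how the platform implements virtual switches.

```python
# Conceptual model: each virtual switch instance sees only the ports
# assigned to it and is unaware of its neighbors on the same chassis.

vdc_ports = {                      # hypothetical port assignments
    "A1": {"e1/1", "e1/2"},
    "A2": {"e2/1", "e2/2"},
    "A3": {"e3/1"},
    "A4": {"e4/1"},
}

def visible_ports(instance):
    # An instance can enumerate only its own resources.
    return vdc_ports[instance]

assert visible_ports("A1").isdisjoint(visible_ports("A2"))
```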

Figure 2 presents a view of how the control plane is handled on either kind of virtual switch. Figure
2A shows a single virtual switch supporting a single control plane instance with its associated
protocols. The single instance represents a single management and control plane point that carries
a single configuration file and a single set of IP and MAC addresses.

Figure 2. Control Plane on Virtual Switching

Figure 2B presents multiple virtual switch instances hosted on a single physical switch, each providing a separate control plane and thus separate control plane protocol instances. All instances are isolated from one another.

There are inherent benefits to either approach to switch virtualization. On one hand, a single logical instance simplifies management of the network infrastructure and provides nonblocking Layer 2 paths to and from any device attached to A1 and A2. On the other hand, multiple virtual switch instances on a single physical switch provide control plane and physical topology isolation over a shared network infrastructure. Figure 3 introduces virtual portchannels.

Figure 3. Virtual PortChannels and Layer 2 Topologies

The left side of Figure 3A shows an access switch connected to two upstream switches, AG1 and AG2, through 2-port point-to-point portchannels. By using virtual switching and virtual portchannels, the equivalent topology becomes one in which the access switch is connected through a 4-port channel to the virtual switch. The virtual portchannel is established between the access switch and AG1 and
AG2, forming a multichassis EtherChannel. Figure 3B shows a more complex topology with two
sets of virtual switches, one at the aggregation layer and one at the access layer. All the physical
switches are connected in a full-mesh topology using 2-port channels between them. Through
virtual switching and virtual portchannels, the equivalent environment contains two virtual switches,
AG and AC, interconnected through an 8-port virtual portchannel that contains the individual port
channels between the physical switches.

Isolation at the topology level is achieved by assigning specific ports to particular virtual switch instances. Figure 4 displays two topologies. Figure 4A shows switches AG1, AG2, and AC, which collectively support two virtual switch instances. Two instances of VLAN x map to the two virtual switch instances, creating two parallel topologies that support the same VLAN but are completely isolated from each other. Figure 4B shows a more complex topology that supports two virtual switch instances on the access and aggregation switches, with each instance supporting a different VLAN. The physical separation is achieved by dedicating ports to a particular switch instance.

Figure 4. Virtual Switching and Network Topologies

There are additional benefits on Layer 2 topologies. In a typical Layer 2 topology, STP provides
loop avoidance and allows path resiliency. A byproduct of path resiliency is that only one path is
active while all other redundant paths are blocked. This is also true for dual-homed servers that use
network interface card (NIC) teaming, with one of the adaptors active and the second on standby.

Virtual switching and virtual portchannel technology allows a Layer 2 network to use all available Layer 2 paths concurrently: hosts and switches connecting to the virtual switches use any available Layer 2 path at the same time. Traffic distribution is granular to a flow (source and destination MAC addresses, source and destination IP addresses, and Layer 4 port number), as sketched below. In addition to nonblocking paths, virtual portchannel capabilities provide both switch and path redundancy.
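A minimal sketch of such flow-granular distribution follows, assuming a simple modulo hash; real platforms use their own hardware hash algorithms.

```python
# Minimal sketch of flow-based load sharing over a virtual portchannel.

def select_link(src_mac, dst_mac, src_ip, dst_ip, l4_port, num_links):
    # Every packet of a flow hashes to the same member link, preserving
    # per-flow ordering while spreading distinct flows across all links.
    return hash((src_mac, dst_mac, src_ip, dst_ip, l4_port)) % num_links

link = select_link("00:11:22:33:44:55", "66:77:88:99:aa:bb",
                   "10.1.1.10", "10.1.2.20", 443, num_links=4)
print("flow pinned to member link", link)
```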

Figure 5 shows the physical and logical views of a large Layer 2 network using VS technology. The
diagrams show a generic multilayer switch providing virtual switching and virtual portchannel
capabilities across multiple Layer 2 tiers: access, aggregation, and core. The physical view shows
a number of dual-homed servers that have established cross-chassis channel connections with a
pair of access switches. For instance, server B, which is a collection of virtual servers, uses two
groups of 2 I/O ports to connect to the access switches (2 to each switch). These connections
effectively form a 4-port channel when connecting to a set of access switches that are VS capable.
Through this mechanism, server B is capable of using all links concurrently. Likewise a non-
virtualized server using a single IP and MAC address is also capable of using the two links
concurrently. In either case, the servers are unaware of the fact that they are connecting to two
distinct switches through a single PortChannel.


Figure 5. Large-Scale Layer 2 Network

In a switch-to-switch environment, a switch pair cross-connection as shown in Figure 5 is established between all switches. In this case, acc1 and acc2 have two links to both agg1 and
agg2. The equivalent logical topology shows a 4-port channel connection between two virtual
switches: one logically formed by the access layer pair, and the other one by the aggregation
switch pair. The logical view of the diagram shows a loop-free 4-port channel connection. A Layer 2
topology built taking advantage of VS technology contains no loops, no blocking ports, and no
blocked paths. The forwarding path is established by the VS mechanisms, and since STP is not
used to help determine the forwarding topology, it is used strictly as a fail-safe backup protocol.

2 Data Center Network Topologies


This section discusses both current topologies supported by Cisco Catalyst 6500 Series switching
technology and new possible topologies that take advantage of new technology available through
the Cisco Nexus 7000 and Cisco Catalyst 6500 Series Switches. The new set of topologies is
aimed at addressing the current networking requirements discussed earlier.

The following subsection presents guidelines for the selection of data center technology.

2.1 Technology Selection Guidelines


Both the Cisco Catalyst 6500 and Cisco Nexus 7000 Series Switches are applicable in the data center network environment, so it is possible to deploy data center networks using either switching family. The actual selection should be based on which devices best meet short- and long-term requirements. The new-topologies section discusses product-specific considerations to ease the selection process.

There are, however, certain network locations where the Cisco Nexus 7000 Series switching
products may not apply. The Cisco Nexus 7000 Series consists of purpose-built data center
switches optimized for 10GbE aggregation. As data center switches, they fit data center
environments that require high-density 10GbE. There are other environments outside the data
center where the Cisco Nexus 7000 Series fits based on 10GbE port density and a feature
requirement match.

Additionally, there are environments where the Cisco Nexus 7000 Series does not fit as it does not
offer the required feature set or hardware to support such requirements.


Table 1 summarizes the guidelines.

Table 1. Switching Technology and Network Layers

Network Layer              Cisco Catalyst 6500            Cisco Nexus 7000

Data center access         Yes (Gigabit Ethernet access)  Yes (10 Gigabit Ethernet access)
Data center aggregation    Yes                            Yes
Data center core           Yes                            Yes
Data center services       Yes                            –
Internet edge              Yes                            –
WAN edge                   Yes                            –

While both platforms fit the data center access layer, the Cisco Catalyst 6500 Series is optimized
for GbE access whereas the Cisco Nexus 7000 Series is optimized for 10GbE access.

Both platforms fit the aggregation and core layers, which have traditionally supported both Layer 2 and Layer 3, and Layer 3 only, respectively. 10GbE density, oversubscription, and Layer 2 and Layer 3 features should be used to determine which platform is the better fit.

The Cisco Nexus 7000 Series platform does not apply to the data center services layer, the Internet edge, or the WAN edge. From a data center services perspective, services are expected to be deployed through services chassis, as explained in the new-topologies section. The Internet edge is exposed to the Internet-size routing table, which the Cisco Nexus 7000 Series will support in future releases. The WAN edge typically requires physical interfaces that the Cisco Nexus 7000 Series, being a data center switching platform, does not offer.

For more information about the Cisco Nexus 7000 Series feature set, visit:
http://www.cisco.com/go/nxos.

From a data center viewpoint, the selection guidelines depend on when technology enhancements
are available. The enhancements include both new hardware and software features previously
mentioned. Table 2 summarizes the technology enhancements and their availability.

Table 2. Technology Enhancements Availability Schedule

Technology Enhancement                                    Cisco Catalyst 6500 Series   Cisco Nexus 7000 Series

New Hardware
  Supervisor 720-10G VS (2 x 10GbE, 1:1 oversubscribed)   Now                          –
  16-port 10GbE I/O module (4:1 oversubscribed)           Now                          –
  32-port 10GbE I/O module (4:1 oversubscribed)           –                            Cisco NX-OS 4.0

Layer 2 Enhancements
  STP enhancements: bridge assurance, dispute
  mechanism, and PVST simulation disable                  2HCY08                       Cisco NX-OS 4.0
  Other enhancements: PVID mismatch                       Future Cisco IOS release     Cisco NX-OS 4.0

Virtual Switching
  Multiple virtual switches per physical switch           –                            Cisco NX-OS 4.0 (VDC)
  Multiple physical switches per virtual switch           12.2(33)SXH1 (VSS)           –
  Virtual portchannels                                    12.2(33)SXH1 (VSS)           Future Cisco NX-OS release

Services Integration
  Cisco Catalyst 6500: services chassis                   Current                      Current
  Cisco Catalyst 6500 VSS: in-chassis services            2HCY08                       –

In addition to the technology enhancement areas, the table lists services integration. The dashes indicate that there are no current plans to support the listed features. The New Hardware rows list the new I/O modules and the supervisor with 10GbE uplinks.

Note: The two 10GbE I/O modules listed as 4:1 oversubscribed support configuration modes in which one port in each 4-port group operates at wire rate.

VS offers two major enhancements: virtual switches and virtual portchannels. Virtual switches come in two different models: one in which the virtual switch is a group of physical switches, available through Virtual Switching System (VSS) technology on the Cisco Catalyst 6500 Series, and one in which the virtual switch is a logical instance of a physical switch, available through Virtual Device Context (VDC) technology on the Cisco Nexus 7000 Series. Virtual portchannel technology is integrated into VSS and will be available on the Cisco Nexus 7000 in a future release.

Layer 2 enhancements are organized into two groups based on when they are available according
to the release schedule. Services integration, as is explained later in this document, refers to
whether the chassis supports integrated service modules or whether services are possible through
a services chassis model.

2.2 Current Topologies


While many topologies are possible for data center environments, a few common ones are widely used:

● Triangular and V-shaped topologies
● Square and U-shaped topologies
● Minimal STP topologies

2.2.1 Triangular and V-Shaped Topologies


Triangular and V-shaped topologies are the ones most commonly found in data center
environments. Figure 6 shows the physical layout of both topologies.

Figure 6. Triangular and V-Shaped Topologies


In the triangular topology, the access switch connects to two aggregation switches, and the interconnect between the aggregation switches carries the access layer switch VLAN(s), forming a looped topology. In the V-shaped topology, an access switch is also connected to two aggregation switches; however, the aggregation switch interconnect does not carry the VLANs used at the access switch, forming a loop-free topology.

The triangular and V-shaped topologies are widely used because of their predictability in steady
and failover states where each access switch has a primary and redundant path to the aggregation
layer (high availability).

Some of the characteristics of these topologies are:

● The uplinks are typically PortChannels, thus providing physical link redundancy.
● The Layer 2 and Layer 3 boundary is at the aggregation layer.
● The aggregation switches typically are the primary and secondary STP root bridges.
● First Hop Redundancy Protocol (FHRP) active and standby instances match the primary and secondary root bridges respectively.
● VLANs are contained to the aggregation pair, and the failure domain is determined by how widely VLANs are spread across the access layer (one, many, or all access switches).
● Aggregation-to-core connections are Layer 3 equal-cost multipaths (ECMP) in and out of the aggregation layer.
● The core layer summarizes routes toward the rest of the network and injects a default route toward the aggregation layer.

2.2.2 Square and U-Shaped Topologies


The square and U-shaped topologies are less common but still used. The topologies are depicted in Figure 7. In these topologies, a pair of access switches is connected to each other, and each is connected to a single switch of the aggregation pair. In the U-shaped topology, the aggregation switch interconnect does not carry the access VLANs, and in the square topology it does, forming loop-free and looped topologies respectively. These topologies are particularly used to allow server dual-homing, with each NIC connecting to one of the access switches in the pair.

Figure 7. Square and U-Shaped Topologies


The VLAN scope is logically restricted to the access switch pair; although the VLANs are supported at the aggregation layer, no other access switch pair sees them. The topology is predictable during steady state, but in failover mode it could present some challenges, depending on where the failure is and whether there is a link between the aggregation switches that transports the traffic for the access layer VLANs.

The challenges are related to:

● Failures that isolate the active and standby FHRP peers in loop-free topologies, causing both FHRP peers to become active
● Oversubscription rates: compared to the equivalent triangular or V-shaped topology, maintaining the same oversubscription would require twice as many ports in the channels from the access to the aggregation switches

Some of the characteristics of these topologies are:

● The Layer 2 and Layer 3 boundary is at the aggregation layer.


● Aggregation switches are primary and secondary root switches.
● FHRP active and standby instances match primary and secondary root switches
respectively.
● VLANs are contained to the aggregation pair.
● VLANs may be available only per access switch pair.
● From a Layer 3 perspective, the characteristics are the same as those of the triangular and V-shaped topologies.

2.2.3 Minimal STP Topologies


Other topologies focus more on minimizing STP on the network: STP runs at each access switch, limiting the failure domain, and the aggregation switches do not run STP. The topology restricts Layer 2 adjacency, which prevents NIC teaming and Layer 2 communication between servers on different access switches. Figure 8 displays the minimal STP topology.

Figure 8. Minimal STP Topology

Some of the characteristics of this topology are:

● Aggregation switches provide FHRP functions to the access layer and do not rely on STP
for loop avoidance.
● Access switches support one or many VLANs, which are trunked to the aggregation layer.


● All aggregation to access links are 802.1q trunks.


● Access layer VLANs are available only on a single access switch at a time.
● The access switch can have multiple 802.1q trunks to more than a pair of aggregation
switches.
● Behind each 802.1q trunk is an FHRP instance that offers redundant Layer 2 paths to the
default gateway.

2.3 New Topologies


This section discusses newer topologies that take advantage of new technologies available on the Cisco Catalyst 6500 and Cisco Nexus 7000 Series switching products. As mentioned earlier, there are three main areas where technology enhancements provide new functions: higher 10GbE port density, increased STP stability, and virtual switching. Figure 9 shows the Cisco Catalyst 6500 Series icons for Layer 2 and 3–mode and Layer 2–only mode switches, a generic icon for a Layer 2 and 3–mode switch, and the Cisco Nexus 7000 Series icons for Layer 2 and 3–mode and Layer 2–only mode switches. These icons are used throughout as a reference to differentiate between the Cisco Catalyst 6500 and Cisco Nexus 7000 Series Switches.

Figure 9. Icon Legend

Figure 10 shows the classic topology using both the Cisco Catalyst 6500 and Cisco Nexus 7000
Series Switches.

Figure 10. Classic Topology Using Cisco Catalyst 6500 and Cisco Nexus 7000 Series Switches.

In Figure 10, the Cisco Nexus 7000 Series is used in the core and in the aggregation layer of one of the aggregation modules. At the core layer, it provides traditional Layer 3 functions and high-density 10GbE; at the aggregation layer, it provides both Layer 2 and Layer 3 functions, taking advantage of its 10GbE density and enhanced STP features. The topology in Figure 10 is used as a reference to introduce the new switching technology functions and the associated changes to the classic topology.


2.3.1 Virtual Switching and STP Stability


A topology that takes advantage of the Virtual Switching and STP stability features offered by both
the Cisco Catalyst 6500 and Cisco Nexus 7000 Series Switches is displayed in Figure 11.

Figure 11. Virtual Switching and STP Stability

Module 1 in Figure 11 shows virtual switches at the access and aggregation layers through the use of VSS technology. Module 2 shows a classic triangular topology taking advantage of the advanced STP features, thus addressing STP stability in a high-density 10GbE aggregation environment. Notice that module 2 could also reduce the size of the Layer 2 failure domain by using VS technology available through VDCs: creating virtual switches, assigning ports to each, and allowing each logical topology to run isolated instances of the control plane protocols.

Note: In a VSS environment, STP protocols are used strictly as fail-safe mechanisms as the
loop-free forwarding topology is established through VSS. For more information about VSS, visit:
http://www.cisco.com/en/US/products/ps9336/products_white_paper0900aecd806ee2ed.shtml.

2.3.2 High-Density Server Farms and Virtual Switching


High-density server farms require high 10GbE density at the aggregation layer. Given the new 10GbE I/O modules' port density (16 ports on the Cisco Catalyst 6500 Series and 32 ports on the Cisco Nexus 7000 Series), it is possible to increase the access port and server counts simply by adding more access switches, each with greater port density than is typical today. Figure 12 shows how server farm density and scalability are addressed.

Figure 12. Server Farm Density


The left part of Figure 12 shows a typical topology as used today, with the following characteristics:

● Cisco Catalyst 6509 modular access switches
  ◦ Dual supervisors
  ◦ 5 x 48-port GbE I/O modules, for 240 access ports per switch
  ◦ 2 x 4-port 10GbE I/O modules using 4 x 10GbE uplinks, 2 per aggregation switch
● Cisco Catalyst 6509 aggregation switches
  ◦ Single supervisor
  ◦ 8 x 8-port 10GbE I/O modules, for 64 10GbE ports per switch
  ◦ 4 ports for aggregation layer inter-switch links (ISLs), leaving 60 ports for access connectivity
  ◦ 2 ports per access switch uplink, allowing 30 access switches

The server farm density is then established as follows:

● 30 access switches x 240 ports per access switch, for 7,200 ports per aggregation module
● Access-to-aggregation oversubscription
  ◦ Access to aggregation: 240:40 = 6:1
  ◦ Aggregation layer I/O module oversubscription: 2:1
  ◦ Total oversubscription: 12:1

The right side of Figure 12 shows the same topology taking advantage of the new hardware capabilities of the Cisco Catalyst 6500 and Cisco Nexus 7000 Series. These capabilities include:

● Cisco Catalyst 6500 Series: Supervisor 720-10G VS and 16-port 10GbE I/O module at 4:1 oversubscription
● Cisco Nexus 7000 Series: 32-port 10GbE I/O module at 4:1 oversubscription

Based on these new hardware options, the main changes to the scalability of an aggregation module are:

● Cisco Catalyst 6509 modular access switches
  ◦ Dual supervisors, using 2 x 10GbE each for uplink connectivity
  ◦ 7 x 48-port GbE I/O modules, for 336 access ports per switch
  ◦ No additional 10GbE I/O modules
● Cisco Nexus 7000 Series aggregation switches
  ◦ Single or dual supervisors
  ◦ 8 x 32-port 10GbE I/O modules, used as 128 10GbE ports per switch at 2:1 oversubscription
  ◦ 8 ports for aggregation layer ISLs, leaving 120 ports for access connectivity
  ◦ 2 ports per access switch uplink, allowing 60 access switches

The server farm density is then established as follows:

● 60 access switches x 336 ports per access switch, for 20,160 ports per aggregation module
● Access-to-aggregation oversubscription
  ◦ Access to aggregation: 336:40 = 8.4:1
  ◦ Aggregation layer I/O module oversubscription: 2:1
  ◦ Total oversubscription: 16.8:1


Note: All the calculations assume that the topology is capable of forwarding concurrently on all links by taking advantage of virtual portchannel technology. Consult Table 2 earlier in this document for feature availability.

The overall access port count more than doubles because of the higher server density per access switch and the higher 10GbE port density per aggregation switch. The oversubscription increases from 12:1 to 16.8:1, although the assumption made thus far is that each GbE port runs at wire speed. If instead a true server throughput of 500 Mbps per port is assumed, the overall oversubscription numbers drop to 6:1 and 8.4:1 respectively. Another factor in the overall scalability calculation is whether each server uses one or more GbE interfaces. If a server uses two GbE NICs, the overall server count is half the port count, and per-port throughput could be set at 200 Mbps (or 400 Mbps per server). At 200 Mbps, the overall oversubscription drops to 2.4:1 and 3.36:1 respectively; the short calculation below reproduces these figures.
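The figures quoted in this section follow mechanically from the port counts; the sketch below reproduces them, with the 500 Mbps and 200 Mbps utilization values taken from the text's own assumptions.

```python
# Reproducing the oversubscription arithmetic of this section.

def overall_oversub(gbe_ports, uplink_gbps, module_oversub, utilization=1.0):
    """Access-to-aggregation ratio times the aggregation I/O module
    oversubscription, scaled by the assumed per-port utilization."""
    return (gbe_ports * utilization / uplink_gbps) * module_oversub

# Current topology: 240 GbE ports, 4 x 10GbE uplinks, 2:1 aggregation modules.
print(overall_oversub(240, 40, 2))       # 12.0
print(overall_oversub(240, 40, 2, 0.5))  # 6.0  at 500 Mbps per port
print(overall_oversub(240, 40, 2, 0.2))  # 2.4  at 200 Mbps per port

# New topology: 336 GbE ports, same uplink count and module oversubscription.
print(overall_oversub(336, 40, 2))       # 16.8
print(overall_oversub(336, 40, 2, 0.5))  # 8.4
print(overall_oversub(336, 40, 2, 0.2))  # 3.36

# Access port totals: 30 * 240 = 7,200 and 60 * 336 = 20,160 per module.
```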

In these examples, the calculations could also have been done at a 4:1 access-layer-to-aggregation-layer oversubscription target, which would have yielded higher access port counts and higher access-to-aggregation oversubscription.

2.3.3 Enhanced Layer 2 Topology


The enhanced topology shown in Figure 13 is aimed at addressing high server density while still supporting data center–wide VLANs. At a high level, the topology is no different from the classic topology with multiple aggregation modules; the main changes concern whether the topology supports data center–wide VLANs and where services are located in such an environment. Support for data center–wide VLANs calls for moving the Layer 2 and Layer 3 boundary from the aggregation layer to the core layer. This allows any VLAN on any access switch to be reachable from any other access switch, regardless of the aggregation pair to which it connects. In most cases not every VLAN needs to be data center–wide, but the topology does allow any VLAN to be. VLANs kept within an aggregation pair are rooted at the aggregation switches; VLANs expected to be data center–wide are rooted at the core switches.

Figure 13. Enhanced Layer 2 Topology

As expected, the enhanced Layer 2 topology goes from two Layer 2 tiers to three Layer 2 tiers,
which increases the size of the Layer 2 domain and therefore the size of the failure domain.
Besides using best-practice recommended STP features such as BPDU guard, root guard, loop
guard, and UDLD, it is important to consider the use of the advanced STP features as well as the


virtual portchannel and virtual switching technology discussed earlier. Notice in Figure 13 that
VLAN E is now a data center–wide VLAN that spans multiple aggregation modules. In this
scenario, the failure domain includes all aggregation modules where VLAN E is present.

To limit the size of the failure domain, a pair of core switches for each group of aggregation
modules is required. Each core pair typically covers the scope of a data center zone. A data center
zone is a physical area of the data center grouping a number of aggregation modules, all
connecting to a pair of common core switches. The zone size is determined by power and cooling
capacity, cabling distances, the density of racks that can be supported per aggregation module,
and the number of modules that the core can support.

Figure 14. Multiple Data Center Zones

Figure 14 displays multiple data center zones, each requiring a Layer 2 and Layer 3 core switch pair. The core pair per zone allows each zone to have a separate STP topology, with independent STP instances running, one per zone. VLANs can be as wide as a zone but cannot cross the zone boundary. Figure 15 shows an alternative that collapses all core pairs into a single pair without committing to a single large-scale failure domain.

Figure 15. Large-Scale Zones


Two logical switching instances are established, each mapping to a different VDC supported by the core layer. Zones 1 and 2 increase in size by taking modules from zone N, so larger Layer 2 domains can be established while remaining isolated and using only a single core switch pair. Each zone has its own instances of Layer 2 and Layer 3 protocols, thus supporting zone isolation over the same infrastructure.

2.3.4 New Topologies and Data Center Services


All the new topologies previously discussed must also address the integration of data center services, in particular stateful services. Stateful services typically require both inbound and outbound traffic to be processed by the same stateful service device, since the connection or session state maintained by the device is used to apply consistent network policies to all associated packets. Network policies may include ACLs, Network Address Translation (NAT), firewall rules, load-balancing rules, and so on.

Starting with the classic topology using the new Cisco Catalyst 6500 and Cisco Nexus 7000 Series technology and features, services are integrated through services switches connected at the topology's Layer 2 and Layer 3 boundary. The classic topology takes advantage of virtual switching, virtual portchannels, and the advanced STP features while preserving the typical Layer 2 and Layer 3 boundary at the aggregation layer. Services are integrated through Cisco Catalyst 6500 switches connected to the aggregation switch pair; these service switches provide connectivity to service appliances and house service modules. This topology is shown in Figure 16.

Figure 16. Classic Topology and Service Integration

There are three primary reasons for the use of service switches:

● Service scalability: Service scalability is currently limited by the number of slots at the
aggregation layer. Further service scalability is addressed by adding service devices as
needed without having to change the aggregation layer hardware or software configuration
and without sacrificing valuable slot density.


● Topology scalability: Freeing aggregation switch slots otherwise used by service modules, or by GbE I/O modules for appliance connectivity, makes the overall aggregation module more scalable. Slots normally taken by service devices are repurposed to support uplink capacity, enabling larger topologies to be supported. Some ports, however, are still used to connect to the services chassis.
● Manageability: Decoupling service devices from the aggregation switches makes service device code upgrades and configuration changes independent of the aggregation switches. Configuration changes for services are more frequent than Layer 2 and Layer 3 changes, so mitigating the risks of service changes is easier when services chassis are introduced.

Module 1 in Figure 16 shows a virtual switch using VSS technology, which provides connectivity to a set of Cisco Catalyst 6500 switches that house all the service devices and their redundant peers. Service switch connectivity through virtual portchannel technology allows each service switch to have a nonblocking redundant path to the virtual switch, which simplifies deployment options. Module 2 in Figure 16 shows the integration of services chassis in a Cisco Nexus 7000 Series aggregation environment. The environment takes advantage of the advanced STP features while allowing the aggregation layer port density to remain dedicated to uplink connectivity.

An additional topology to consider is the enhanced Layer 2 topology using services chassis in
multiple logical topologies. The objective is to provide scalable services in a large data center
network environment that has VLANs applied outside a single aggregation module but not across
the entire data center. This implies that the services are scaled to a particular zone that has a
predetermined size. This configuration is presented in Figure 17.

Figure 17. Enhanced Layer 2 Topology and Services


Figure 17 shows two large zones, each served by a different set of services switches. The services switches serve the particular zone to which they are physically connected, a mapping predetermined by placing certain physical ports on different virtual switches. Services chassis svcs1 and svcs2 serve the red topology, and svcs3 and svcs4 serve the orange topology. This scales services per logical topology without requiring the control protocols to be shared.

3 Architecture Topology Summary


Technology enhancements in the Cisco Catalyst 6500 and Cisco Nexus 7000 Series address a
number of architectural challenges facing network architects today. Multiple advanced designs are
available. Careful planning is needed to select the appropriate architectural approach. A good
understanding of short- and long-term requirements will allow architects to build in enough flexibility
for easy expansion in the future. Consult the product documentation for more details about the
features discussed here.

4 For More Information


The following resources contain information complementary to the discussions in this document:

● http://www.cisco.com/go/nexus
● http://www.cisco.com/go/nxos
● http://www.cisco.com/en/US/products/ps9336/products_white_paper0900aecd806ee2ed.shtml
● http://www.cisco.com/go/dcnm
