Contents
Goal
Audience
Objectives
Design Overview
Data Center Compute and Storage Components
  Cisco Unified Computing System
    Cisco UCS 2100 Series Fabric Extender
    Cisco UCS 6100 Fabric Interconnect
    Cisco UCSM Configuration
    UCS 6100 Uplink Storage Configurations
  Fibre Channel Storage Components
  NFS Storage Components
  Cisco Unified Computing System C-Series and DAS Storage
  Cisco Nexus 1000v
  Atlantis ILIO
  Cisco VSG
  Cisco vWAAS
  McAfee MOVE-AV
  Hypervisor Installation and Configuration
    VMware ESX/ESXi
    Microsoft Hyper-V
  Desktop Virtualization Software Installation and Configuration
    VMware View Configuration
    Citrix XenDesktop Configuration
Data Center Networking Infrastructure
  Cisco Nexus 5010 Access Layer Switch
  Cisco Nexus 7010
  Cisco Application Control Engine (ACE4710)
  Cisco Adaptive Security Appliance (ASA5580)
  Cisco WAN Acceleration Engine (WAE-512)
  Cisco Network Analysis Module (NAM2220)
Campus Network
  Cisco Catalyst 6504E (CAT6504E)
  Cisco 7206VXR Router
  Cisco WAN Acceleration (WAE-674)
  Cisco Adaptive Security Appliance (ASA5540)
  Cisco Integrated Service Router (ISR3945)
  Cisco Catalyst 4500E (Campus Access CAT4500E)
  Cisco Catalyst 4507E (Campus Access CAT4507E)
  Cisco Aironet Access Points
  Cisco 5508 Wireless Controller
  Cisco 3310 Mobility Services Engine
Branch Configurations
  Branch 1 – Branch connection with PfR
  Branch 2 – Branch connection with external WAVE-674
  Branch 3 – Branch connection with no WAAS
  Branch 4 – Branch connection with WAAS SRE Module
  Branch 5 – Identical to Branch 3 except uses 3945
  Branch 6 – Branch connection with WAAS Express
  Branch 7 – Identical to Branch 3 except uses 3945
  Branch 8 – Identical to Branch 3 except uses 2951
Figures
Figure 1: CVD Test Configuration
Figure 2: Network Map showing IP addresses
Figure 3: Virtual Infrastructure
Figure 4: Server Configuration
Figure 5: Logical Diagram of the Cisco UCS Compute Components
Figure 6: Complete UCS System Block
Figure 7: UCS 6120 Uplink Storage Connectivity
Figure 8: Data Center Network Components
Figure 9: Campus Network Components
Figure 10: Branch 1 Design
Figure 11: Branch 2 Network Design
Figure 12: Branch 3 Network Design
Figure 13: Branch 4 Network Design
Figure 14: Branch 6 Network Design
Figure 15: Branch 9 Network Design
Figure 16: Branch 10 Network Design
Tables
Table 1: Data Center Compute Components
Table 2: Data Center Storage Components
Table 3: Data Center Network Components
Table 4: Campus Network Components
Table 5: Branch Components
Table 6: Supporting Application Services
Goal
This document reports the tested configuration for the VXI Release 2.0 architecture. It includes specific
device configurations and diagrams showing interconnections.
Audience
This document is intended to assist solution architects, sales engineers, field engineers, and consultants in planning, designing, and deploying the Cisco VXI system. It assumes the reader has an architectural understanding of the Cisco VXI system and has reviewed the Cisco VXI Release 2.0 Cisco Validated Designs (CVDs) for Citrix XenDesktop and VMware View.
Objectives
This document articulates the overall design and the VXI-specific configurations of the tested architecture called out in the VXI Release 2.0 CVDs for Citrix XenDesktop and VMware View.
Design Overview
The design implemented in the Cisco VXI CVD replicates a customer's network, end-to-end, from the data center to the endpoints installed in either the campus or branch environment. Figure 1 shows the complete system test setup; each section of it is discussed in the pages that follow.
Release 2.0 of the VXI CVD is based on currently shipping equipment from both Cisco and its technology partners. Where possible, the preferred operating mode and configuration were used, unless specific commands or configurations yielded improvements or enhancements to the Desktop Virtualization (DV) experience.
Due to the number of devices in the VXI CVD test setup, the configuration files for each device are included as hyperlink references throughout the document. Refer to the Appendix section of this document for guidance on completing VXI-specific use cases.
[Figures 1 through 3 (CVD test configuration, network map with IP addresses, and virtual infrastructure) and Tables 1 through 6 (component lists; surviving fragments name WebEx/WebEx Connect 7 and Microsoft Active Directory domain controllers on Windows 2008 among the supporting application services) appear here in the original document.]

Data Center Compute and Storage Components

Cisco Unified Computing System
The Cisco Unified Computing System is used to host the virtual desktops and desktop virtualization servers. For the Cisco UCS B-Series, three Cisco UCS 5108 blade enclosures were used for the test. Two enclosures contain eight half-width Cisco UCS B200 blade servers each, and one enclosure contains four full-width UCS B250 blade servers. The Desktop Virtualization (DV) machines are distributed across all three enclosures. Cisco VXI also validated two configurations of the Cisco UCS C-Series rack-mount server.
The Cisco UCS M71KR-Q Converged Network Adapter (CNA) mezzanine card was used for this test. It provides two 10 Gigabit Ethernet NICs and two Fibre Channel HBAs, combining them into two Fibre Channel over Ethernet (FCoE) interfaces. The hypervisor maps these interfaces to the various VLANs needed to support the DV environment.
Cisco UCS 2100 Series Fabric Extender
The Cisco UCS 2104XP Fabric Extender (FEX) has four 10 Gigabit Ethernet, FCoE-capable, Small Form-Factor Pluggable Plus (SFP+) ports that connect the Cisco UCS 5108 blade chassis to the Cisco UCS 6100 fabric interconnect, and eight 10 Gigabit Ethernet server ports connected through the mid-plane, one to each half-width slot in the chassis.
Cisco UCS 6100 Fabric Interconnect
The Cisco UCS 6100 Series Fabric Interconnects are a core part of the Cisco Unified Computing System, providing both network connectivity and management capabilities for the system (Figure 4). Each uplink from the UCS 2104XP FEX is connected to a 10G FCoE port on the UCS 6100.
Cisco UCSM Configuration
The Cisco UCS Manager (UCSM) is used to configure the various elements of the Cisco Unified
Computing System. Cisco UCSM runs in the Cisco UCS 6100 Series Fabric Interconnects, and
provides multiple interfaces (including GUI and CLI) for managing a UCS instance. Each Fabric
Interconnect has a unique IP address. A Virtual IP address is created to link the two switches and
provide a single point of management. Cisco UCSM is accessed via a browser.
Cisco UCSM can be used to manage all hardware in a UCS instance (chassis, servers, fabric interconnects, etc.) and all resources (servers, worldwide name addresses, MAC addresses, UUIDs, and bandwidth). It also provides tools for server, network, and storage administration. Because of the breadth and depth of its capabilities, a detailed explanation of the Cisco UCSM configuration is far beyond the scope of this document. The fundamental elements of a Cisco UCS configuration include:
• Network: configuring VLANs, LAN pin groups, MAC pools, QoS, and other network-related policies
• Storage: configuring named VSANs, SAN pin groups, WWN pools, and other storage-related policies
• VN-Link: configuring VN-Link components, distributed virtual switches, port profiles, VN-Link-related policies, and pending deletions
• System Management: managing time zones, chassis, servers, I/O modules, configurations, and passwords
Service Profiles are the central concept of Cisco UCS. A service profile ensures that associated server hardware is properly configured to support the application it will host. The service profile defines server hardware, interfaces, fabric connectivity, server identity, and network identity. Service profiles are centrally managed and stored in a database on the Cisco UCS 6100 Series Fabric Interconnect. Every Cisco UCS server must be associated with a service profile. Service profile templates facilitate the rapid creation and deployment of service profiles. The following example shows a truncated version of a single Cisco VXI service profile.
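The service profile excerpt itself was lost from this copy. As a rough sketch only (the profile name is hypothetical, and exact keywords vary by UCSM release), a minimal service profile created from the UCSM CLI looks like this:
scope org /
create service-profile VXI-ESX-01 instance
set descr "Cisco VXI desktop virtualization host"
commit-buffer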
The UCSM configuration screenshots can be found here: UCS Manager Configuration.
The configuration file for the UCS6120 was captured by using SSH to access the CLI and issuing the command: show config >> sftp://<host>/<path>/<dest_filename>
The complete configuration for the UCS6120 can be found here: UCS 6120 Configuration.
NOTE
To enable jumbo frames for both NFS and FC storage, use the QoS configuration in the Cisco Unified Computing System Manager.
Configure the Platinum policy by checking the Platinum policy box; to enable jumbo frames, change the MTU from normal to 9000. Set the no-packet-drop policy during this configuration.
UCS 6100 Uplink Storage Configurations
Two sets of uplinks are provided on the UCS 6100. One set is used for Fibre Channel (FC) connectivity to FC-based storage; these uplinks are located on the expansion module. The second set is used for Ethernet connectivity, including access to NFS storage; either Ethernet ports on the expansion module or the built-in ports can be used for this purpose. Figure 7 shows this portion of the system configuration.
Fibre Channel Storage Components
A pair of MDS Series switches was used in the configuration to connect the Cisco UCS 6120 Fabric
Interconnect Fibre Channel ports to the storage array (see Figure 7). The MDS switches provide an
optimized platform for deploying Storage Area Networks (SANs), intelligent fabric services, and
multiprotocol connectivity. These switches support high density Fibre Channel switching, hardware-
based virtual fabric isolation with VSANs, Fibre Channel-based Inter-VSAN routing, iSCSI
connectivity for Ethernet-attached servers, and a wide range of additional advanced services. In the Cisco VXI validated design testing, the MDS Series SAN switches enabled Boot from SAN support for the Cisco UCS Series servers.
The MDS series switches can be configured by means of a command line interface (CLI), or using the
Cisco Fabric Manager. Detailed information on configuring MDS Series basic and advanced services can be found in the Cisco MDS Series configuration guide. The complete MDS 9222i configuration can be found here: MDS9222i Configuration. The key elements of the Cisco VXI test configuration include
definitions for the following:
• VSANs
• Interfaces
• Zones and Zone Sets
• N-Port ID Virtualization
VSANs
A VSAN is a virtual storage area network. VSANs provide isolation among devices that are physically
connected to the same fabric. VSANs enable the creation of multiple logical SANs over a common
physical infrastructure. Each VSAN can contain up to 239 switches, and has an independent address space
so that identical Fibre Channel IDs can be used simultaneously in different VSANs. The VSANs must
be consistent on the VMware vSphere hosts, the Cisco UCS Series servers, the Cisco MDS Series
switches, and the Fibre Channel storage array. VSAN configuration can be done either in the MDS
Series switch CLI or the Cisco MDS Device Manager. Cisco Fabric Manager can also be used for managing the SAN configuration and zoning information. Configuring VSANs requires the following:
vsan database
vsan 100 name "vxi_triage_auto_a"
vsan 200 name "vxi_soncs_ls_a"
vsan 300 name "vxi_infra_a"
vsan database
vsan 100 interface fc1/1
vsan 200 interface fc1/2
vsan 100 interface fc1/3
vsan 100 interface fc1/4
vsan 200 interface fc1/5
...
vsan 300 interface fc2/24
Interfaces
The interface is the mechanism through which frames are transferred. Each interface must be defined
and its operating characteristics specified. Configured interfaces can be Fibre Channel, Gigabit
Ethernet, management, or VSAN interfaces. Interfaces can be associated with a particular owner, and
traffic flow can be enabled or disabled. The example shows a Fibre Channel interface associated with
owner Triage-a1, and with traffic flow enabled.
interface fc1/3
switchport owner Triage-a1
no shutdown
Zones and Zone Sets
A Zone Set is a collection of zones. A zone set can be activated or deactivated as a single entity across all switches in the fabric. A zone can be a member of multiple zone sets. Only one zone set can be activated at a given time. The example shows the creation of a zone set named vxi_soncs_ls_a in VSAN 200, and the assignment of the member zones in this set. The example also shows the zone set being activated.
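The zone set example itself was lost from this copy. A minimal MDS CLI sketch follows, using the vxi_soncs_ls_a name and VSAN 200 from the text above; the zone name and pWWNs are hypothetical:
zone name vxi_esx_host1_a vsan 200
  member pwwn 20:00:00:25:b5:00:00:0f
  member pwwn 50:06:01:60:41:e0:b9:05
zoneset name vxi_soncs_ls_a vsan 200
  member vxi_esx_host1_a
zoneset activate name vxi_soncs_ls_a vsan 200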
N-Port ID Virtualization
The uplink FC ports on the UCS 6120 operate in N-Port Virtualization (NPV) mode; that is, they operate in a mode similar to host ports (referred to as N-Ports in the FC world). To support NPV on the UCS 6120, the N-Port ID Virtualization (NPIV) feature must be enabled on the MDS 9222i Switch.
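On NX-OS releases of the MDS 9222i, this is a single global command:
feature npiv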
Detailed configuration of the EMC Unified Storage system is beyond the scope of this document. The basic approach is to begin by connecting the EMC FC ports to the Cisco MDS Series SAN switches. Administrators should also ensure that the FC zoning process has been completed. The next step is to create a RAID group (RAID 1/0) and then to create LUNs from that group. Then a storage group is created, and the VMware vSphere hosts that will access the LUNs are added to the group. Finally, the newly created LUNs are added to the storage group. When these steps are completed, vCenter Server is used to rescan the FC adapter for new storage devices. After the rescan, the FC LUNs appear as available storage.
Refer to EMC documentation for specific configuration guidance on provisioning EMC Unified Storage in virtual desktop environments:
http://www.emc.com/collateral/solutions/reference-architecture/h8020-virtual-desktops-celerra-vmware-citrix-ra.pdf
http://www.emc.com/collateral/software/technical-documentation/h6082-deploying-vmware-view-manager-celerra-unified-storage-plaform-solution-guide.pdf
NFS Storage Components
As shown in Figure 7 above, the NFS based storage array is connected to the Cisco Nexus 5000. In the
Cisco VXI validated design, this array was deployed to store user data. The Cisco VXI system tested a
NetApp FAS series storage system for NFS storage. The NetApp FAS series is a unified storage
platform that provides primary and secondary storage with simultaneous file and block services. This
system supports up to 840 drives with a total capacity of over 16 TB.
The NFS array needs to have L2 adjacency to the hypervisors. If the array is on another L2 segment
then special routes will need to be manually added to both the array and the hypervisor connections.
The NFS array can be connected to another Cisco Nexus 5000 or the Cisco Nexus 7000 as long as the
L2 adjacency is maintained. Jumbo Frames need to be configured on all the Ethernet switches that make
up the pathway between the hypervisors and the NFS storage array.
Detailed configuration of the NetApp FAS system is beyond the scope of this document. Please refer to
the NetApp documentation for specific details on configuration. To provision NFS storage on the
NetApp system, the first step is to create an aggregate, which is a collection of disks. The general
recommendation is to configure an aggregate of maximum possible size. The next step is to configure a
default flexible volume for NFS exports, which can grow to the size of the aggregate. Once the aggregate is created, additional flexible volumes can be defined if needed. Disk space can be reserved for the volume, or it can be configured to grow only as data is written. The configuration process also involves defining interface characteristics and NAS volumes.
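A minimal Data ONTAP 7-mode sketch of these steps follows; the aggregate and volume names, disk count, size, and export subnet are hypothetical:
aggr create aggr1 -t raid_dp 44
vol create nfs_vol1 aggr1 2t
vol autosize nfs_vol1 on
exportfs -p rw=10.0.144.0/24,root=10.0.144.0/24 /vol/nfs_vol1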
For VMware environments using NFS storage, NetApp recommends increasing the number of VMware
vSphere data stores from the default value of 8 to 64, so that additional data stores can be dynamically
added as needed. NetApp also suggests changing NFS.HeartbeatFrequency to 12, and
NFS.HeartbeatMaxFailures to 10 for each ESX host.
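On ESX 4.x hosts, these values can be set from the service console with esxcfg-advcfg (or through the vSphere Client advanced settings):
esxcfg-advcfg -s 64 /NFS/MaxVolumes
esxcfg-advcfg -s 12 /NFS/HeartbeatFrequency
esxcfg-advcfg -s 10 /NFS/HeartbeatMaxFailures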
Refer to NetApp documentation for specific configuration guidance on provisioning NetApp storage in virtual desktop environments.
Cisco Unified Computing System C-Series and DAS Storage
The Cisco UCS C-Series (C250 M2) was tested with DAS storage in two configurations: 2x100 GB SSD with 6x300 GB SAS, and 8x300 GB SAS. In the Cisco VXI validated design deployment, the UCS C-Series local storage was used to store both boot images and user data. VMware vSphere was installed on the UCS server, which was configured to boot from the local hard disk. A local disk policy was configured on the server to create a RAID LUN (raid-1-mirrored), as shown in the following example:
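The example itself was lost from this copy. A minimal UCSM CLI sketch of such a policy follows; the policy name is hypothetical, while the raid-1-mirrored mode keyword comes from the text above:
scope org /
create local-disk-config-policy vxi-das-raid1
set mode raid-1-mirrored
commit-buffer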
There are several caveats involving the interaction of UCS C Series servers, UCS Manager, and RAID
storage controllers when configuring local storage. For best results, consult the UCS Manager
Configuration Guide to understand the conditions that may apply in a given situation.
Atlantis ILIO
Atlantis ILIO, a storage optimization software solution, was tested with the UCS C-Series. Atlantis ILIO
is designed to reduce the amount of storage capacity required for desktop images, increase desktop
performance, and increase the number of virtual desktops that can be run on a storage system. The
Atlantis virtual appliance was installed on the Cisco UCS C-series server. Cisco and Atlantis
recommend the following additional steps when using the ILIO appliance in a Cisco VXI
environment:
1. Use the default settings configured during the Atlantis ILIO installation. These settings
determine how and when the cache flushes data to disk and have been tuned for optimal
performance.
2. Using the vSphere interface, make the following changes to the ESXi host that is running the
Atlantis ILIO virtual appliance. See the ESXi Advanced Settings under the Configuration tab:
NFS.HeartbeatFrequency = 12
NFS.HeartbeatTimeout=5
NFS.HeartbeatDelta=8
NFS.HeartbeatMaxFailures=10
3. Ensure that the ESXi server has a VMK interface in the same VLAN as the ILIO management
port so that the ILIO NFS traffic stays within the host and never touches the network.
Cisco Nexus 1000v
The Cisco Nexus 1000V is an NX-OS based virtual access software switch that runs on VMware
vSphere. The switch was overlaid across the UCS Series servers used for Cisco VXI system testing to
enable virtual machine networking. The Cisco Nexus 1000V provides Layer 2 switching, advanced
networking functions (such as QoS and security), and a common network management model in a
virtualized server environment by replacing the virtual switch within VMware vSphere.
Installation of Cisco Nexus 1000v software is beyond the scope of this document. For detailed coverage
of installation procedures, see the Cisco Nexus 1000V Software Installation Guide.
Once the switch is successfully installed and its VM is powered on, a setup configuration dialog will
automatically start. The Cisco Nexus 1000V can be configured by means of a Command Line Interface
(CLI) or using a Graphical User Interface (GUI) tool. For a complete explanation of the entire
configuration process, see the Cisco Nexus 1000V Configuration Guides. Essentials of a Cisco Nexus
1000V Series configuration include:
• Defining VEMs
• Creating VLANs
• Defining Port Profiles
• Associating Port Profiles with Interfaces
• Configuring the Connection between VSM and the VMware vCenter Server
VEMs
A single instance of the Cisco Nexus 1000V can support up to 64 VEMs as well as a maximum of two
VSMs (deployed in active/standby mode), creating the equivalent of a 66-slot modular switch. Each
VEM needs to be configured with the VMware ID of the host with which it is associated. In the
following example, each VEM is associated with a slot number and a host.
vem 3
host vmware id e87a619a-ee71-11df-0000-00000000025e
vem 4
host vmware id e87a619a-ee71-11df-0000-00000000022e
vem 5
host vmware id e87a619a-ee71-11df-0000-00000000023e
vem 6
host vmware id e87a619a-ee71-11df-0000-00000000020e
vem 7
host vmware id deed2b71-29ec-11df-b019-8843e138e566
VLANs
For Cisco VXI validation, VLANs were used to logically separate traffic according to function as shown
below. VLANs also can be used in system port profiles for VSM-VEM communications, uplink port
profiles for VM traffic, and data port profiles for VM traffic. The following example shows a series of
VLAN definitions. VLAN 1 is the default assignment; interfaces and ports not specifically assigned to
other VLANs are members of VLAN 1.
vlan 1
vlan 34
name SONC-ESX-SRVR-MGMT
vlan 35
name SONC-ESX-SRVR-VMOTION
vlan 36
name SONC-PVS
vlan 46
name VSG-Data
vlan 47
name VSG-HA
vlan 56
name SONC-MS-VM
vlan 58
name SONC-CT-VM
vlan 60
name SONC-VW-VM
vlan 132
name SONCS-Infra-Servers
vlan 144
name NFS
vlan 146
name iSCSI
Port Profiles
A port profile is a collection of interface-level configuration commands that are combined to create a
complete network policy. A port group is a representation of a port profile on the vCenter server. Every
port group on the vCenter server is associated with a port profile on the Cisco Nexus 1000V. Port
profiles are created on the VSM and propagated to VMware vCenter Server as VMware port groups
using the VMware VIM API. After propagation, a port profile appears within VMware vSphere Client
and is available to apply to the vNICs on a virtual machine.
The following examples show typical Cisco VXI configurations for both uplink and VM access ports. The uplink port profile defines the port as a trunk port, meaning that it can accommodate traffic from multiple VLANs. The VLANs that can use this trunk are listed explicitly. The MTU is set to 9000 to support jumbo frames.
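The example itself was lost from this copy. A representative Nexus 1000V uplink profile is sketched below; the profile name is hypothetical, and the VLAN list is illustrative, drawn from the VLAN definitions above:
port-profile type ethernet system-uplink
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 34-36,46-47,56,58,60,132,144,146
  mtu 9000
  channel-group auto mode on
  no shutdown
  system vlan 34,144
  state enabled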
The next example shows a typical configuration for an access port. This profile defines an access port
dedicated to VM traffic. As an access port, it only supports a single VLAN (vlan 58). A maximum of
1024 ports can be assigned to this profile.
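This example was also lost from this copy; a sketch follows. The VM-58-Citrix profile name is taken from the interface example later in this section:
port-profile type vethernet VM-58-Citrix
  vmware port-group
  switchport mode access
  switchport access vlan 58
  max-ports 1024
  no shutdown
  state enabled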
Interfaces
All interfaces on the Cisco Nexus 1000V are Layer 2 Ethernet interfaces. These interfaces can be
configured as access ports, trunk ports, private VLANs, or promiscuous ports. Virtual Ethernet
interfaces are logical interfaces, which correspond to virtual ports in use on the distributed switch. The
following Cisco VXI example shows a Virtual Ethernet interface that inherits the characteristics of port
profile VM-58-Citrix, and is associated with Network Adapter 1.
interface Vethernet115
inherit port-profile VM-58-Citrix
description AS732MAWEBEX-AE,Network Adapter 1
vmware dvport 4641 dvswitch uuid "04 a7 09 50 7a a7 4a 56-37 12 5c de 0a 49 67 b5"
vmware vm mac 0050.5689.0238
The Cisco Nexus 1000V also provides the foundation for running other virtualized appliances such as the Cisco Virtual Security Gateway (VSG) and Cisco vWAAS. The following changes were made to the Nexus 1000V port profile configuration to forward virtual desktop traffic to the VSG and vWAAS virtual appliance hosted services using vPath:
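The port profile changes themselves were lost from this copy. On Nexus 1000V releases of this era, vPath service insertion for VSG was configured under the vEthernet port profile with the vn-service command; a hedged sketch with a hypothetical service IP address, VLAN, and security profile name follows (vWAAS vPath insertion is configured along similar lines):
port-profile type vethernet VM-58-Citrix
  vn-service ip-address 10.0.46.10 vlan 46 security-profile vdi-knowledge-workers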
The complete Nexus 1000v configuration can be found here: Nexus1000v Configuration.
Cisco VSG
Cisco Virtual Security Gateway (VSG) for Cisco Nexus 1000v Series switches is a virtual appliance that controls and monitors access to trust zones in enterprise and cloud provider environments. Cisco VSG provides secure segmentation of virtualized data center VMs using granular, zone-based control and monitoring with context-aware security policies. VSG works in unison with the Nexus 1000v and is managed through the Virtual Network Management Center (VNMC) console. During the Cisco VXI validation, VSG was used to segment knowledge workers from task workers. The appropriate communication ports for display protocols, Active Directory, Internet access, and so on were allowed from each zone; all other inter-zone and intra-zone communications were denied. Cisco VSG was deployed in the data center and used vPath interception on the Nexus 1000v to intercept and secure traffic.
The attached configuration files lay out the VNMC configuration in relation to the CLI configuration seen on the VSG. A successful VSG setup requires establishing communication between VNMC and vCenter, between VNMC and the Nexus 1000V switch, and finally between VNMC and the VSG itself. The required configuration is listed in the VNMC-VSG file. Further, zoning in a VXI environment is based on VM attributes, and virtual desktops with similar functions and privileges are placed in the same zone. The configuration required for zoning is covered in detail in the VNMC VSG configuration document, along with screenshots. All zones must have rules defined to allow incoming desktop protocol traffic and communications between the virtual desktops and infrastructure components such as brokers, Active Directory, and the hypervisor manager. The definition of these rules, along with the appropriate port numbers for infrastructure communications, is described in the configuration files below.
The VNMC configuration screenshots can be found here: VNMC VSG Configuration. The complete Virtual Security Gateway configuration can be found here: VSG Configuration.
Cisco vWAAS
The Cisco vWAAS is a virtual appliance that accelerates business applications delivered from
private and virtual private clouds. Cisco vWAAS runs on Cisco UCS servers with supported
hypervisors, using the policy-based configurations in the Cisco Nexus 1000v switch. Cisco vWAAS
was deployed in the data center and used vPath interception on Nexus 1000v to intercept and
optimize traffic.
VXI-specific traffic classifiers need to be set as described in the branch configurations below. Apart from the VXI classifiers and the WCCP configuration, all remaining configuration is standard.
The complete Cisco vWAAS configuration can be found here: vWAAS configuration.
McAfee MOVE-AV
McAfee Optimized Virtual Environments - Antivirus for VDI (MOVE-AV for VDI) was used to
provide a scalable centralized virus protection solution. A highly optimized and dedicated virus scan
server cluster does the resource intensive work of scanning the virtual desktop files, thereby
significantly increasing the virtual desktop density achievable. The MOVE-AV was deployed using
default scan settings with redundant Virus Scan Engines (VSE) co-located on the same virtual switch as
the virtual desktops.
Hypervisor Installation and Configuration
VMware ESX/ESXi
In this test, the VMware ESX/ESXi hypervisor (version 4.1) was installed on the servers. The vSphere/vCenter management suite was used to manage the hypervisor.
In the Cisco VXI validated design testing, hosts were organized into a single cluster to enable features such as HA, DRS, and vMotion, which provide higher availability when blades power down or fail (due to power failure or upgrade). It is recommended to cluster hosts across different UCS chassis enclosures to provide better availability in the event of a chassis failure. Management and infrastructure server applications (including the Nexus 1000v VSM) were installed on a dedicated UCS blade. A dedicated database server is used to host the vCenter and VMware View databases. The vSphere Enterprise Plus license was deployed on the vCenter server. It is recommended to use the Update Manager tool to install the latest ESXi patches on each blade. The Nexus 1000v virtual distributed switch was used to provide network connectivity to the virtual machines hosted across multiple UCS blades running ESX/ESXi. It was deployed with redundant Virtual Supervisor Modules (VSMs) and was provisioned with VLANs, port profiles, and port channel groups using the NX-OS CLI. Virtual port groups assigned to dedicated VLANs were used to segment the traffic for Citrix XenDesktop desktops, View desktops, VSG, NFS, PVS, vWAAS, and infrastructure servers. NAS and SAN based datastores connected via NFS and FC protocol connections were provisioned to store the virtual machine vmdk files. Path redundancy with load balancing was used to connect to the SAN storage devices. Virtual desktops were provisioned with a guest OS running Windows 7 32-bit, Windows 7 64-bit, or Windows XP, with 1.5 GB or 2 GB of RAM and 24 GB of disk space.
The ESX/ESXi and vCenter configuration screenshots can be found here: vSphere configuration.
Microsoft Hyper-V
In this test, the Microsoft Hyper-V hypervisor was installed on the servers. The server core installation of Windows 2008 R2 64-bit was used to implement Hyper-V. The Microsoft System Center Virtual Machine Manager (MSCVMM) and Hyper-V Manager were used to manage the hypervisor.
In the Cisco VXI validated design testing, the Microsoft virtual switch was used to provide network
connectivity to the virtual machines hosted on the UCS blades running Hyper-V. NAS and SAN based
datastores connected via NFS and FC protocol connections were provisioned to store the virtual machine
files. Virtual desktops were provisioned with guests running Windows 7 64-bit (2 GB of RAM) or Windows XP (1 GB of RAM), each with 20 GB of disk space. MSCVMM was used to manage the Hyper-V environment; it adds features not available in Hyper-V Manager, such as live migration and library management (i.e., templates).
The Hyper-V and MSCVMM configuration screenshots can be found here: Hyper-V Configuration.
Desktop Virtualization Software Installation and Configuration
VMware View Configuration
In the Cisco VXI validated design testing, the vSphere/vCenter connection details were provisioned in the View Manager Administrator console. Separate desktop pools were created for Windows 7 32-bit, Windows 7 64-bit, and Windows XP desktops. These were automated pools, so desktops are generated based on a golden image (i.e., a virtual machine snapshot). The golden image included the standard office productivity applications in addition to the UC application suite and the McAfee MOVE-AV agent. The pools used dedicated entitlements so that a user always gets the same desktop. The desktop pools used the View Composer linked-clone feature so that the desktops share the same base golden image and storage disk space is conserved. The linked clones and the replica (the golden read-only image) were stored on different storage devices with different memory overcommit policies for better storage performance in some test configurations; in these configurations, the replica was stored on faster read-only storage. All desktops were set to be powered up at all times and refreshed after user logout. Refer to the configuration screenshots for the detailed pool settings used. The users' Windows profiles were redirected to a persistent disk, and temporary files were stored on non-persistent storage. The user entitlement for the desktop pools was specified as an Active Directory group of users.
The VMware View configuration screenshots can be found here: VMware View Configuration.
Citrix XenDesktop Configuration
In the Cisco VXI validated design testing, the vSphere, SCVMM, or XenCenter connection details were provisioned in the XenDesktop Studio console. Separate desktop groups were created for Windows 7 32-bit, Windows 7 64-bit, and Windows XP desktops. The desktop catalog used pooled machines that were generated based on a golden image snapshot. The golden image included the standard office productivity applications in addition to the UC application suite and the McAfee MOVE-AV agent. The desktop catalog used random assignment, so a user is assigned a desktop in a random manner. The desktop catalog used Citrix Provisioning Services so that the desktops share the same base golden image and storage disk space is conserved. The user assignment for desktop groups was specified as an Active Directory group of users. Refer to the configuration screenshots for the detailed desktop group settings used.
The Citrix XenDesktop configuration screenshots can be found here: Citrix XenDesktop Configuration.
Data Center Networking Infrastructure
Cisco Nexus 5010 Access Layer Switch
The Cisco Nexus 5010 Switch is a 1-rack-unit (RU), 10 Gigabit Ethernet/FCoE access-layer switch with 20 fixed 10 Gigabit Ethernet/FCoE ports that accept modules and cables meeting the Small Form-Factor Pluggable Plus (SFP+) form factor. One expansion module slot can be configured to support up to 6 additional 10 Gigabit Ethernet/FCoE ports, up to 8 Fibre Channel ports, or a combination of both. For larger server pods, the Cisco Nexus 5020 (2 RU and 40 fixed ports, plus two module slots) can be used.
The Cisco Nexus 5010, in combination with the Cisco Nexus 1000v, provides the functions for the data center access layer. This test configuration consists of a pair of Cisco Nexus 5010 switches. Four 10 GE uplinks are configured on each Cisco UCS Fabric Interconnect and are connected to each Nexus 5010 in parallel. The upstream interfaces are connected to ports that belong to the Internal Virtual Device Context (VDC) on the Cisco Nexus 7010.
The Nexus 5010 configuration for a VXI system is not any different from that of a typical data center; however, a few key considerations for VXI are described below.
Virtual Port Channel (vPC, LACP): Virtual port channels, although not required, are highly recommended in a VXI setup. vPC creates EtherChannels that span multiple chassis; because no links are blocked by spanning tree, all links remain in the forwarding state, thereby increasing the total bandwidth available to downstream devices. To enable vPC, enable the lacp and vpc features as shown below and set up the management VRF. The two Nexus 5010 switches establish a peer relationship over the management interface; no configuration or state is communicated over the management interface.
feature lacp
feature vpc
interface mgmt0
ip address 10.0.32.8/24
vrf context management
ip route 0.0.0.0/0 10.0.32.1
port-channel load-balance ethernet source-dest-ip
vpc domain 200
  peer-keepalive destination 10.0.32.9 source 10.0.32.8
The port-channel interface between the two 5010 switches is defined as the vPC peer link. All switch states are exchanged over this interface along with data traffic. The port channel is configured as a spanning-tree network port by default when the port channel is added to the vPC domain (the configuration below explicitly configures the spanning-tree port type).
interface port-channel10
switchport mode trunk
vpc peer-link
switchport trunk native vlan 999
switchport trunk allowed vlan 34-36,38-39,46,48,50,52,54,56,58,60,62,132,144,146,148,999
spanning-tree port type network
speed 10000
All other port channels and interfaces (those not in port channels) need to be added to the appropriate vPC by issuing the "vpc <#>" command. VLAN IDs and vPC numbers can be different, but the vPC numbers are required to be consistent across both Nexus 5010 switches.
interface port-channel20
switchport mode trunk
vpc 20
switchport trunk native vlan 999
switchport trunk allowed vlan 34-36,38-39,46,48,50,52,54,56,58,60,62,132,144,146,148,999
speed 10000
Jumbo Frames: Jumbo frames normally need to be configured for storage traffic such as FCoE, iSCSI, and NFS, and require some configuration on the Nexus 5010. The QoS and system MTU configuration for jumbo frames is shown below.
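The configuration block itself was lost from this copy; the following is the standard Nexus 5000 network-qos jumbo-frame configuration of the kind used, with an arbitrary policy name:
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo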
VLANs should be created to separate hypervisor management, virtual desktop, migration, storage, and other traffic. The VLAN assignment can be found in the complete configuration below.
The complete Nexus 5010 configuration used during validation can be found here: Nexus 5010 Configuration.
Cisco Nexus 7010
The Cisco Nexus 7010 Switch is used in a collapsed design for both the data center aggregation and core layers. This is done by creating two Virtual Device Contexts (VDCs): Internal (connected to the hosted desktops and application servers) and External (connected to the enterprise core switches). The Internal VDC acts as the aggregation layer, and the External VDC acts as the core layer. Each VDC is isolated from the other for increased availability: should one VDC encounter an error or crash, the other is not affected. The Nexus 7010s are installed as a redundant pair.
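As a sketch, creating the two contexts on the Nexus 7010 looks like the following; the VDC IDs and interface allocations are hypothetical:
vdc Internal id 2
  allocate interface Ethernet1/1-16
vdc External id 3
  allocate interface Ethernet1/17-32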
The complete Nexus 7010 configuration can be found here: Nexus 7010 Internal Device Context
Configuration and Nexus 7010 External Device Context Configuration.
Cisco Application Control Engine (ACE4710)
The Cisco Application Control Engine (ACE4710) is used in the VXI environment to provide load-balancing functions for the DV connection brokers. The appliances are connected as a redundant pair to the Internal VDC on the Cisco Nexus 7010. Virtual contexts are created on the ACE for the different test environments, and each context includes server farms for the VMware View desktops, Citrix XenDesktops, and Microsoft Remote Desktops.
The following configuration on the ACE implements a health-monitoring probe for the View Manager and XenDesktop Controller (the probe header line was lost from this copy and is reconstructed here with a hypothetical name):
probe http probe_dv_http
  passdetect interval 15
  passdetect count 2
  receive 2
  expect status 200 200
  open 1
The following configuration on the ACE sets up the XenDesktop Controller server farm and applies it to the server VLAN:
interface vlan 60
service-policy input Xen
no shutdown
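The server farm and load-balancing policy definitions behind the Xen service policy were lost from this copy. A hedged ACE sketch follows, with hypothetical names and addresses (the multi-match policy name Xen comes from the configuration above):
rserver host xd-ddc-1
  ip address 10.0.60.21
  inservice
rserver host xd-ddc-2
  ip address 10.0.60.22
  inservice
serverfarm host sf-xendesktop
  rserver xd-ddc-1
    inservice
  rserver xd-ddc-2
    inservice
class-map match-all vip-xendesktop
  2 match virtual-address 10.0.60.100 tcp eq 80
policy-map type loadbalance first-match lb-xendesktop
  class class-default
    serverfarm sf-xendesktop
policy-map multi-match Xen
  class vip-xendesktop
    loadbalance vip inservice
    loadbalance policy lb-xendesktop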
The complete Cisco ACE4710 configuration can be found here: ACE Configuration.
Cisco Adaptive Security Appliance (ASA5580)
The Cisco Adaptive Security Appliance installed in the data center provides firewall services to isolate and protect the various compute resources from both external traditional desktop users and the DV users hosted in the data center. Two virtual contexts are used on the ASA: Non-Server (connected to the VRF for hosted desktops) and Server (connected to the VRF for application servers). The ASA5580s are deployed as a redundant pair (configured in active-active mode) and are connected to the Internal VDC and External VDC on the Cisco Nexus 7010.
Use the ASDM or the ASA CLI to provision security services on the ASA. The following configuration on the ASA provisions services (UC and virtual desktop services) and hosts to be allowed through the firewall:
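The access-list excerpt was lost from this copy. A hedged sketch of the pattern follows, with hypothetical object names, host addresses, and interface name; the ports shown are the standard Citrix ICA/CGP (1494/2598) and PCoIP (4172) display ports:
object-group service VDI-DISPLAY tcp
  port-object eq 1494
  port-object eq 2598
  port-object eq 4172
object-group network VDI-BROKERS
  network-object host 10.0.60.21
  network-object host 10.0.60.22
access-list outside_in extended permit tcp any object-group VDI-BROKERS object-group VDI-DISPLAY
access-group outside_in in interface outside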
Similarly, hosts and services in the network like hypervisor managers and desktop connection managers
should also be provisioned.
Configure the global policy to enable application-layer inspection by the ASA:
policy-map global_policy
class inspection_default
inspect waas
It is important to add 'inspect waas'; otherwise, vWAAS traffic will be blocked when passing through the ASA.
The complete Cisco ASA 5580 configuration can be found here: ASA Server Configuration and ASA Non-Server Configuration.
Cisco WAN Acceleration Engine (WAE-512)
The WAE-512s installed in the data center are the controllers for the other WAAS appliances installed in both the campus and branch networks. They are installed as a redundant pair.
The complete Cisco WAE-512 configuration can be found here: WAAS Controller Configuration.
Cisco Network Analysis Module (NAM2220)
The Cisco NAM2220 is used to monitor and collect data on traffic flows entering and leaving the data center. This tool is useful for monitoring the DV user load on the network and identifying any potential bottlenecks in the design. A SPAN port created on the Internal VDC of the Nexus 7010 was used to monitor and capture traffic. The NAM appliance was also used to collect NetFlow data exported from the campus network routers and switches.
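A SPAN session of this kind is sketched below; the source and destination interfaces are hypothetical:
monitor session 1
  source interface ethernet 1/1 both
  destination interface ethernet 1/15
  no shut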
The complete Cisco NAM2220 configuration can be found here: Network Analysis Module Configuration.
Campus Network
The campus network is composed of two redundant Catalyst 6504E switches, branch connectivity routers, Internet access routers, WAN accelerators, and firewall appliances. It also contains equipment for local campus access.
Cisco Catalyst 6504E (CAT6504E)
The Cisco CAT6504E switches provide the core and distribution layer functions for the campus network. They are uplinked to the data center Nexus 7010, which provides the data center core functions. These switches are installed as a redundant pair.
The complete Cisco CAT6504E configuration file can be found here: Catalyst 6504 configuration.
Cisco 7206VXR Router
The Cisco 7206VXR routers are used to provide branch connections that do not require VPN services.
The complete Cisco 7206VXR configuration can be found here: 7206VXR Configuration.
Cisco WAN Acceleration (WAE-674)
The Cisco WAE-674s installed in the campus network provide the campus-side termination of the WAAS tunnels from the branch WAE appliances. Some key classifiers and mappings that need to be configured for VXI are listed below. Note that because PCoIP traffic uses UDP transport, WAAS is unable to optimize it, and hence no configuration is required for it. Other WAAS configuration parameters still need to be configured to enable optimization; these are covered in the complete configuration below.
policy-engine application
name Remote-Desktop
classifier Citrix-ICA
match dst port eq 1494
match dst port eq 2598
exit
classifier HTTP
match dst port eq 80
match dst port eq 8080
match dst port eq 8000
match dst port eq 8001
match dst port eq 3128
exit
classifier HTTPS
match dst port eq 443
exit
map basic
name Remote-Desktop classifier Danware-NetOp action optimize DRE no compression none
name Remote-Desktop classifier Laplink-surfup-HTTPS action optimize DRE no compression none
name Remote-Desktop classifier Remote-Anything action optimize DRE no compression none
The complete Cisco WAE-674 configuration file can be found here: WAE-674 Configuration.
Cisco Adaptive Security Appliance (ASA5540)
The campus Cisco ASA5540 provides a VPN concentrator for the Cisco AnyConnect clients.
The complete Cisco ASA 5540 configuration can be found here: ASA Configuration.
Cisco Integrated Service Router (ISR3945)
The campus Cisco ISR3945 provides the VPN concentrator for site-to-site network connections.
The complete Cisco ISR3945 configuration can be found here: 3945 Configuration.
Cisco Catalyst 4500E (Campus Access CAT4500E)
This Catalyst 4500E provides campus users access to the network with PoE+. It contains the Layer 2 edge features such as QoS, security, and EnergyWise.
The complete Catalyst 4500E configuration file can be found here: Catalyst4500E Configuration.
Cisco Catalyst 4507E (Campus Access CAT4507E)
The Catalyst 4507E provides an alternate method for campus user access to the network. It contains Layer 2 edge features such as QoS and security.
The complete Cisco Catalyst 4507E configuration can be found here: Catalyst 4507E Configuration.
Cisco Aironet Access Points
Cisco Aironet IEEE 802.11a/b/g wireless access points provide a wireless method for campus users to access the network using mobile devices such as laptops and tablets. They offer the versatility associated with connected antennas, a rugged metal enclosure, and a broad operating temperature range. In this test, the Aironet 1140 and 1240 series access points were used in the branches, and the Aironet 1250, 1260, 3502e, and 3502i series access points were used in the campus network. The Cisco Aironet access points are configured by the Cisco Wireless Controller.
Cisco 5508 Wireless Controller
The Cisco 5508 Wireless Controller (WLC) controls and manages the Cisco Aironet wireless access points deployed in the campus and branch networks. It offers enhanced uptime with RF visibility and CleanAir protection, and reliable streaming video and voice quality.
The Cisco 3310 Mobility Services Engine (MSE) provides mobility services and applications across a
variety of mobility networks, including location-based services for wired and wireless VXI endpoints.
Refer to Appendix 1 for guidance on provisioning the MSE for location tracking services.
Branch Configurations
All branches use the same configuration for the Catalyst 3560 and 3750. The complete Catalyst 3560
configuration can be found here : 3560 Branch Configuration. All branches use either a Cisco Aironet
1131AG or 1242AG access point to provide wireless network access to users. All branches include the
VXC clients (VXC-2211/2212 and VXC-2111/2112), which allow users to access their virtual desktops.
Refer to the VXC client documentation for guidance on provisioning the VXC clients.
This branch configuration uses a Cisco ISR3845 router, a Cisco Catalyst 3560 switch, and a Cisco
Aironet 1242AG access point. The ISR was tested with both T1 and T3 connections. The 3845 was
configured with the Performance Routing (PfR) feature enabled, which provides path optimization and
advanced load balancing for VXI traffic over the WAN. The complete Branch 1 configuration can be
found here : 3845 Branch 1 Configuration.
The Branch 1 design focuses on validating PfR in a VXI environment. PfR operation involves multiple
steps: traffic classification, network characteristics measurement, policy identification, and policy
enforcement. All branch routers enabled with PfR act as border routers and communicate with the
master router. The PfR configuration used is listed below with appropriate explanations.
pfr master
policy-rules POLICY2
logging
!
Identify the interfaces included in PfR operations and attach the key chain defined above for
communication with the master.
border 10.1.253.5 key-chain PFR_Chain
interface Serial0/0/1 external
interface Serial0/0/0 external
interface Loopback1 internal
interface GigabitEthernet0/0 internal
!
Define the network characteristics to measure for PfR operation. In this case, throughput and delay are
being measured and will be used as inputs for policy enforcement.
learn
throughput
delay
periodic-interval 1
monitor-period 1
prefixes 250 applications 250
list seq 10 refname MYLIST
traffic-class access-list PFR_list
delay
delay threshold 30 <The delay threshold defines the out-of-policy trigger>
mode route control
mode monitor passive <Passive monitoring measures TCP throughput from live traffic. For UDP traffic
active ping probes are required>
mode select-exit best
resolve delay priority 1 variance 5 <The amount of delay variance defines the acceptable jitter>
!
active-probe echo 10.1.253.5 <Active probes are sent to the master and network health is measured
directly>
active-probe tcp-conn 10.1.253.5 target-port 2000
!
For active probing it is recommended to use a loopback interface.
pfr border
local Loopback1
master 10.1.253.5 key-chain PFR_Chain
active-probe address source interface Loopback0
crypto pki token default removal timeout 0
Classify PfR traffic using an extended ACL. In this scenario all traffic was VXI-specific, so port
numbers were not used. In scenarios where PfR is applied to specific traffic types, port numbers can be
used to classify applications.
ip access-list extended PFR_list
permit tcp any 10.1.1.0 0.0.0.255
permit tcp 10.1.1.0 0.0.0.255 any
permit ip any 10.0.0.0 0.0.0.255
!
ip sla responder
ip sla responder tcp-connect port 2000
pfr-map BLUE 10
match pfr learn delay
set mode select-exit best
set delay threshold 90
!
pfr-map test1 10
!
pfr-map POLICY2 10
This branch configuration uses a Cisco ISR3845 router, a Cisco Catalyst 3750 switch, and a Cisco
Aironet 1131AG access point. The ISR was tested with both T1 and T3 connections. The WAVE-674 is
connected via an Ethernet port on the ISR3845. The ISR3845 in Branch 2 uses WCCP redirection for
connection to the WAVE-674.
Refer to Appendix 3 for specific guidance on implementing a QoS policy for traffic traversing the
WAN.
Traffic from the branch router to the external WAAS appliance is redirected using WCCP service 61 on
every incoming traffic interface on the router except the one where the WAE is connected.
The WCCP command shown below is applied on all incoming router interfaces to redirect traffic to the WAE:
ip wccp 61 redirect in
The interface connecting directly to the WAAS appliance must be excluded from WCCP using the command
below. This prevents optimized traffic from being sent back into the WAAS appliance and creating a loop.
ip wccp redirect exclude in
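Put together, the interface placement generally looks like the following sketch (the interface names and roles are assumptions, not taken from the Branch 2 configuration):
! Enable WCCP service 61 globally
ip wccp 61
!
! Client-facing LAN interface: redirect incoming traffic to the WAE
interface GigabitEthernet0/0
 ip wccp 61 redirect in
!
! Interface toward the WAVE-674: exclude from redirection to avoid loops
interface GigabitEthernet0/1
 ip wccp redirect exclude in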
The complete Branch 2 configuration can be found here : 3845 Branch 2 Configuration and WAVE-674
Configuration.
This branch configuration uses a Cisco ISR3845 router, a Cisco Catalyst 3560 switch, and a Cisco
Aironet 1242AG access point. The ISR was tested with both T1 and T3 connections. The complete
Branch 3 configuration can be found here : 3845 Branch 3 Configuration.
This branch configuration uses a Cisco ISR2951 router, a Cisco Catalyst 3750 switch, and a Cisco
Aironet 1242AG access point. The ISR was connected via both T1 and T3 connections; the
configuration may show only one of the serial connections active. The WAAS SRE module operates the
same way as an external appliance. The ISR still uses WCCP redirection for this application; however,
the network connection to the SRE module is internal to the ISR. The complete Branch 4
configuration can be found here : 2951 Branch 4 Configuration and WAAS SRE Configuration.
This branch configuration uses a Cisco ISR3945 router, a Cisco Catalyst 3750 switch, and a Cisco
Aironet 1131AG access point. The ISR was connected via both T1 and T3 connections. The WAAS
Express feature is enabled on the 3945.
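As a minimal sketch, assuming a serial WAN interface (the validated settings are in the linked configuration below), enabling WAAS Express generally follows this pattern:
! Global optimization parameters (waas_global is the IOS default parameter map)
parameter-map type waas waas_global
 tfo optimize full
!
! Enable WAAS Express on the WAN-facing interface
interface Serial0/0/0
 waas enable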
The complete Branch 6 configuration can be found here : 3945 Branch 6 Configuration.
This branch configuration uses a Cisco ISR3945 router, a Cisco Catalyst 3560 switch, and a Cisco
Aironet 1242AG access point. While the hardware is the same as Branch 1, the configuration is
different: this ISR connects via a site-to-site VPN tunnel to the Campus ISR3945. The complete
Branch 9 configuration can be found here : 3945 Branch9 Configuration.
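As an illustrative sketch only (the peer address, pre-shared key, and ACL name are assumptions, not taken from the Branch 9 configuration), a pre-shared-key IPsec site-to-site tunnel on a branch ISR generally follows this pattern:
! Phase 1 (ISAKMP) policy and pre-shared key for the campus peer
crypto isakmp policy 10
 encryption aes 256
 hash sha
 authentication pre-share
 group 2
crypto isakmp key VXI-KEY address 192.0.2.1
!
! Phase 2 transform set, and a crypto map tying policy to interesting traffic
crypto ipsec transform-set VXI-TSET esp-aes 256 esp-sha-hmac
crypto map VXI-S2S 10 ipsec-isakmp
 set peer 192.0.2.1
 set transform-set VXI-TSET
 match address VPN-TRAFFIC
!
! Apply the crypto map to the WAN-facing interface
interface GigabitEthernet0/0
 crypto map VXI-S2S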
This branch configuration uses a Cisco 7301 router, a Cisco Catalyst 3560 switch, and a Cisco Aironet
1242AG access point. This router connects using AnyConnect VPN to the Campus ASA5540. The
complete Branch 10 configuration can be found here : 7301 Branch10 Configuration.
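The commands below are the tail of a civic-location entry; on a Catalyst switch the sequence typically begins along the following lines (the identifier and label values are illustrative assumptions, not taken from the validated configuration):
Switch(config)# location civic-location identifier 1
Switch(config-civic)# building "Building-14"
Switch(config-civic)# room "Lab-2"
Switch(config-civic)# city "San Jose"
Switch(config-civic)# state CA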
Switch(config-civic)# country US
Switch(config-civic)# end
Note Labels such as "building", "room", and "state" are element types defined in the RFC. The rest
of the string that follows each label is the value configured for that label.
The configuration sequences below are based on the discussion in the Release 2.0 VXI CVD. Please see the Deployment
and Configuration Best Practices of VXI Security Components section covering AnyConnect 3.0, Dot1x, MACsec, and MAB
along with ACS. The configuration sequence below is a sample scenario and not part of the validated architecture. It
is presented here as a reference for understanding the configuration sequence of the various elements involved in
securing VXI access.
The following procedure can be used to set up AnyConnect, Dot1x, ACS 5.2, and MACsec in a VXI environment.
1. AnyConnect 3.0 Network Access Manager (NAM) Configuration
In this section, the AnyConnect NAM Profile Editor is used to create an example wired 802.1x connection that is
MACsec enabled. Before starting, launch the profile editor.
Client Policy Configuration : Defines the options that govern the client policy.
Network Configuration:
AnyConnect 3.0 is configured for use on a wired network that supports 802.1x authentication and MACsec encryption.
From the AnyConnect profile editor, follow the procedure below. This is a one-time procedure for each type of
profile, and the resulting profile is recommended to be distributed via the ASA.
Note MACsec encryption is supported for Machine and User Connections. However, in this example only User
Connection is selected.
Step 9 Click Next.
Step 10 Within EAP methods, select PEAP.
Note In this example PEAP is used for authentication however MACsec encryption is supported with any EAP
method that generates a Session-Id (See RFC5247) including EAP-TLS, EAP-PEAP, and EAP-FAST.
2. Base AAA Configuration: The base configuration for AAA (RADIUS) described below is required to
enable 802.1X authentication:
CTS-3750X(config)#aaa new-model
CTS-3750X(config)#aaa authentication dot1x default group radius
CTS-3750X(config)#aaa authorization network default group radius
CTS-3750X(config)#aaa accounting dot1x default start-stop group radius
CTS-3750X(config)#radius-server host <ip address> auth-port 1812 acct-port 1813 key cisco123
CTS-3750X(config)#radius-server vsa send authentication
CTS-3750X(config)#radius-server vsa send accounting
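In addition, 802.1X generally must be enabled globally on the switch. If it is not already present elsewhere in the configuration (it is not shown in the excerpt above), the following command is typically required:
CTS-3750X(config)#dot1x system-auth-control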
Note The command "aaa authorization network" is required for authorization methods such as dynamic
VLAN assignment or downloadable ACLs.
3. Configure 802.1x Components in ACS: 802.1x policy elements, Identity Stores, Access Services and Access
Service Selection are configured on ACS.
Step 1 In the left column, under Users and Identity Stores, select Identity Store Sequences and select
Create.
Step 2 Enter a name (in this example the name is "802.1x Identities"). Select the check box for Password
Based authentication method, and then add Internal Users to the Selected section. Click Submit.
Step 3 In the left navigation column, under Users and Identity Stores, expand Internal Identity Stores,
and select Users. Click Create.
Step 4 Enter a username. Enter and confirm a password under Password Information. Click Submit
when done.
Create an Authorization Profile: Create a wired authorization profile. This profile is used to define users'
access to the network.
Step 1 In the left navigation column, under Policy Elements, expand Authorization and Permissions.
Then expand Network Access and select Authorization Profiles. Click Create.
Step 2 On the General tab, specify a name for this profile.
Step 3 Click on the Common Tasks tab. Navigate down to 802.1x-REV. Select a Static value of "must-secure".
Note This linksec policy will override the policy configured on the interface.
Create an 802.1x Access Service: Create the access service that will process 802.1x requests.
Step 1 In the left navigation column, under Access Policies, click Access Services.
Step 2 At the bottom of the resulting pane, click Create.
Step 3 Specify a name for this service.
Step 4 Under Access Service Policy Structure, choose Based on service template, and then click Select.
Step 5 Choose Network Access-Simple and click OK. Then click Next at the bottom of the resulting
window.
Step 6 No settings need to be changed in this window. Click Finish. You will be prompted to modify
the service selection policy. Click No.
Define the Authorization Policy: Here the authorization policy of the access service is defined.
In the left navigation column, under Access Policies, navigate to the access service that was just created
(802.1x Service in this example).
Create a Service Selection Rule: This rule ensures the policies defined in an access service (802.1x
Service in this example) are applied to 802.1x requests.
Step 1 In the left navigation menu, under Access Policies, click Service Selection. At the bottom of the
right pane, click Create.
Step 2 Specify a name for the rule (Match 802.1x Requests is used here).
Step 3 Select Protocol and match it with Radius.
Step 4 Select Compound Condition. Under Condition, choose RADIUS-IETF for Dictionary and
Service-Type for Attribute. For Value, select Framed and click Add to add the condition to the
Current Condition Set.
Step 5 Under Results, select the access service that was created in Create an 802.1x Access Service
section. Click Ok. Click Save Changes.
Step 6 Finally, launch AnyConnect and get connected.
The following dot1x and MAB port configuration was found to provide a satisfactory experience with most thin clients:
interface GigabitEthernet0/10
switchport access vlan XX
switchport mode access
dot1x mac-auth-bypass
dot1x pae authenticator
dot1x port-control auto
dot1x timeout tx-period 2
spanning-tree portfast
Classification:
ip access-list RDP
permit tcp any eq 3389 any
ip access-list PCoIP-UDP
permit udp any eq 50002 any
ip access-list PCoIP-TCP
permit tcp any eq 50002 any
ip access-list PCoIP-UDP-new
permit udp any eq 4172 any
ip access-list PCoIP-TCP-new
permit tcp any eq 4172 any
ip access-list ICA
permit tcp any eq 1494 any
ip access-list View-USB
permit tcp any eq 32111 any
ip access-list MMR
permit tcp any eq 9427 any
ip access-list NetworkPrinter
permit ip any host 10.1.128.10
permit ip any host 10.1.2.201
ip access-list CUPCDesktopControl
permit tcp any host 10.0.128.125 eq 2748
permit tcp any host 10.0.128.123 eq 2748
Class-maps:
class-map type qos match-any CALL-SIGNALING
match access-group name CUPCDesktopControl
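The class maps for the remaining classes referenced in the policy map below are defined analogously. The traffic-to-class mapping shown here is an illustrative assumption only; the validated mapping is in the complete configuration:
class-map type qos match-any MULTIMEDIA-STREAMING
 match access-group name PCoIP-UDP-new
 match access-group name ICA
class-map type qos match-any TRANSACTIONAL-DATA
 match access-group name RDP
class-map type qos match-any BULK-DATA
 match access-group name NetworkPrinter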
Policy-map:
policy-map type qos pmap-HVDAccessPort
class CALL-SIGNALING
set cos 3
set dscp cs3
! dscp = 24
class MULTIMEDIA-STREAMING
set cos 4
set dscp af31
! dscp = 26
class TRANSACTIONAL-DATA
set cos 2
set dscp af21
! dscp = 18
class BULK-DATA
set cos 1
set dscp af11
! dscp = 10
Apply the policy map to the virtual switch port to which the hosted virtual desktop (HVD) virtual machine connects:
port-profile type vethernet VM240
description Port profile for View HVD VM access
vmware port-group
switchport mode access
switchport access vlan 240
no shutdown
state enabled
system vlan 240
service-policy input pmap-HVDAccessPort
Note These examples are meant to be guidelines for deploying QoS in a Cisco VXI network and
should not be applied without consideration given to all traffic flows within the enterprise.
In a campus network, bandwidth is high enough that contention for resources should be minimal.
However, the slower connections in a branch WAN router network need to be examined. Bandwidth
contention is most likely to occur at the egress point where the high-speed connections of the
branch-office LAN meet the slower-speed links of the WAN. Service policies that constrain
the amount of bandwidth dedicated to a given protocol are defined and applied at this point.
These same queuing and bandwidth configurations can be placed anywhere there is a concentration
of Cisco VXI endpoints, to enforce the appropriate response in case of traffic congestion. Below is
an example of bandwidth service policies of the kind used during Cisco VXI system testing.
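The sketch below illustrates the general pattern, using the class names from the marking policy in this guide; the class-map matches and bandwidth percentages are assumptions rather than the validated test values:
class-map match-any CALL-SIGNALING
 match dscp cs3
class-map match-any MULTIMEDIA-STREAMING
 match dscp af31
class-map match-any TRANSACTIONAL-DATA
 match dscp af21
class-map match-any BULK-DATA
 match dscp af11
!
policy-map pmap-WANEdge
 class CALL-SIGNALING
  bandwidth percent 5
 class MULTIMEDIA-STREAMING
  bandwidth percent 25
 class TRANSACTIONAL-DATA
  bandwidth percent 20
 class BULK-DATA
  bandwidth percent 10
 class class-default
  fair-queue
!
! Apply at the WAN egress point, where contention occurs
interface Serial0/0/0
 service-policy output pmap-WANEdge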
Cisco VXI thin-client endpoints do not typically provide the capability to mark their session traffic.
Therefore, the same marking that was performed on the Cisco Nexus 1000V in the data center for
outbound desktop virtualization traffic must be performed at the branch office on behalf of the
endpoints, for the traffic returning to the data center virtual machine. Below is an example of a
branch switch configuration used during Cisco VXI system testing.
Endpoint Classification:
ip access-list RDP
permit tcp any eq 3389 any
ip access-list PCoIP-UDP
permit udp any eq 50002 any
ip access-list PCoIP-TCP
permit tcp any eq 50002 any
ip access-list PCoIP-UDP-new
permit udp any eq 4172 any
ip access-list PCoIP-TCP-new
permit tcp any eq 4172 any
ip access-list ICA
permit tcp any eq 1494 any
ip access-list View-USB
permit tcp any eq 32111 any
ip access-list MMR
permit tcp any eq 9427 any
ip access-list NetworkPrinter
permit ip any host 10.1.128.10
permit ip any host 10.1.2.201
ip access-list CUPCDesktopControl
permit tcp any host 10.0.128.125 eq 2748
permit tcp any host 10.0.128.123 eq 2748
Class-maps:
class-map type qos match-any CALL-SIGNALING
match access-group name CUPCDesktopControl
Policy-map:
policy-map type qos pmap-HVDAccessPort
class CALL-SIGNALING
set cos 3
set dscp cs3
! dscp = 24
class MULTIMEDIA-STREAMING
set cos 4
set dscp af31
! dscp = 26
class TRANSACTIONAL-DATA
set cos 2
set dscp af21
! dscp = 18
class BULK-DATA
set cos 1
set dscp af11
! dscp = 10
How to deliver a UC video/voice call solution using deskphone control in a hosted virtual desktop environment
This section describes the required components and necessary steps to achieve a basic Cisco Unified Personal
Communicator (CUPC) environment (IM, directory lookup, and voice/video calls) running in a Citrix XenDesktop hosted
virtual desktop (HVD) deployment.
The required components are listed below:
- Cisco Unified Communications Manager (Cisco UCM) 7.1(5) or later
- Cisco Unified Presence Server 8.0 or later
- Cisco IP phones 9971 or 9951 with USB video camera (Phone load <9-1-0VD-6>)
- VXC-2112 clients
- LDAP server (Microsoft Active Directory is used in this example)
- Citrix XenDesktop 5
http://www.cisco.com/en/US/docs/voice_ip_comm/cups/8_0/english/install_upgrade/deployment/guide/dgcucm.html#wp1099013
http://www.cisco.com/en/US/docs/voice_ip_comm/cups/8_0/english/install_upgrade/deployment/guide/dgcucm.html#wp1063445
http://www.cisco.com/en/US/docs/voice_ip_comm/cups/8_0/english/install_upgrade/deployment/guide/dgcucm.html#wp1063383
• Configure the SIP Trunk Security Profile for Cisco Unified Presence
http://www.cisco.com/en/US/docs/voice_ip_comm/cups/8_0/english/install_upgrade/deployment/guide/dgcucm.html#wp1050014
http://www.cisco.com/en/US/docs/voice_ip_comm/cups/8_0/english/install_upgrade/deployment/guide/dgcucm.html#wp1063105
• Create a deskphone control application user as an application user on Cisco UCM (required for deskphone
control)
This user is created on Cisco UCM as an "Application User" and must match the settings under CUPC "Desk Phone
Control Settings".
• Verifying That the Required Services are Running on Cisco Unified Communications Manager
http://www.cisco.com/en/US/docs/voice_ip_comm/cups/8_0/english/install_upgrade/deployment/guide/dgcucm.html#wp1050114
For full details on how to configure Cisco Unified Communications Manager for Cisco Unified Presence see:
http://www.cisco.com/en/US/partner/docs/voice_ip_comm/cups/8_0/english/install_upgrade/deployment/guide/dgcucm.html#wp1063445
(Section: Configuring Cisco Unified Communications Manager for Integration with Cisco Unified Presence)
Each Cisco UCM end user that will use CUPC must be properly licensed. For instructions on how to
license your CUPC users see:
http://www.cisco.com/en/US/partner/docs/voice_ip_comm/cups/8_0/english/install_upgrade/deployment/guide/dglic.html#wp1081264
Note LDAP-to-CUP integration is used for directory lookup only; for user provisioning and user
authentication, the LDAP server must be integrated with Cisco UCM. This is outside the scope of this
document, but information on how to integrate LDAP with Cisco Unified Communications Manager can
be found here:
http://www.cisco.com/en/US/partner/docs/voice_ip_comm/cucm/admin/8_5_1/ccmcfg/bccm-851-cm.html
Follow the steps below to provision and assign features to CUPC users:
• Configure CUCM Publisher
This value is configured during the Cisco Unified Presence installation process. If this field needs to be changed, see:
http://www.cisco.com/en/US/docs/voice_ip_comm/cups/8_0/english/install_upgrade/installation/guide/cppostins.html#wp1075065
• Configure Cisco Unified Presence settings
Citrix XenDesktop
Installing Citrix XenDesktop requires a virtual Windows Server 2003 (XenDesktop 4) or Windows Server 2008 (XenDesktop 5) machine for the connection
broker application (the Controller). For the HVD images, Windows 7 and Windows XP are supported and are installed with the Citrix VDA agent. Citrix
XenDesktop can be hosted on a VMware ESX or ESXi, Citrix XenServer, or Microsoft Hyper-V hypervisor environment. XenDesktop requires
Active Directory. For complete details on XenDesktop installation and configuration see
http://support.citrix.com/proddocs/index.jsp?topic=/xenapp5fp-w2k8/
The following configuration implements NetFlow Data Export (NDE) on the Catalyst 6504 campus core
switch :
mls netflow
mls netflow interface
mls flow ip interface-full
mls nde sender
!
interface TenGigabitEthernet1/4
ip address 10.1.254.45 255.255.255.252
ip flow ingress
ip flow egress
!
interface TenGigabitEthernet1/5
ip address 10.1.254.41 255.255.255.252
ip flow ingress - enable netflow for inbound traffic
ip flow egress - enable netflow for outbound traffic
!
ip flow-export version 9 - Netflow format version 9
ip flow-export destination 10.0.128.49 2055 - IP address of the Netflow collector
The following sample configuration implements NetFlow Data Export on the 7206VXR campus edge
router :
interface GigabitEthernet0/3
ip flow ingress - enable netflow for inbound traffic
!
ip flow-export source GigabitEthernet0/1
ip flow-export version 9 - Netflow format version 9
ip flow-export destination 10.14.1.206 3000 - IP address of the Netflow collector
ip flow-cache timeout active 1
ip flow-cache timeout inactive 15
snmp-server ifindex persist
Cisco's EnergyWise Orchestrator can be used to monitor, control, and conserve power consumption on
EnergyWise-enabled network elements and endpoints in a VXI deployment. VXI desktop virtualization
endpoints such as thick PCs and thin clients running the Orchestrator Client can be managed directly by the
EnergyWise Orchestrator. The power consumption of DV endpoints such as VXC clients (zero clients) that
use PoE can also be monitored and controlled by managing the attached port on an EnergyWise-enabled
switch. Refer to the Cisco VXI CVD for guidelines on implementing EnergyWise in a VXI system.
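As a sketch (the domain name, shared secret, interface, and levels are assumptions), managing a PoE-attached VXC zero client from an EnergyWise-enabled switch generally looks like:
! Join the switch to an EnergyWise domain
energywise domain VXI-Domain security shared-secret 0 cisco123
!
! Manage the PoE port that powers the zero client
interface GigabitEthernet0/10
 description PoE port for VXC zero client
 energywise importance 60
 energywise name VXC-Lobby
 energywise level 10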