
Cisco Virtualization Experience Infrastructure (VXI) Configuration Guide


April 29, 2011

Contents
Goal
Audience
Objectives
Design Overview
Data Center Compute and Storage Components
Cisco Unified Computing System
Cisco UCS 2100 Series Fabric Extender
Cisco UCS 6100 Fabric Interconnect
Cisco UCSM Configuration
UCS 6100 Uplink Storage Configurations
Fibre Channel Storage Components
NFS Storage Components
Cisco Unified Computing System C-Series and DAS Storage
Cisco Nexus 1000v
Atlantis ILIO
Cisco VSG
Cisco vWAAS
McAfee MOVE-AV
Hypervisor Installation and Configuration
VMware ESX/ESXi
Microsoft Hyper-V
Desktop Virtualization Software Installation and Configuration
VMware View Configuration
Citrix XenDesktop Configuration
Data Center Networking Infrastructure
Cisco Nexus 5010 Access Layer Switch
Cisco Nexus 7010
Cisco Application Control Engine (ACE4710)
Cisco Adaptive Security Appliance (ASA5580)
Cisco WAN Acceleration Engine (WAE-512)
Cisco Network Analysis Module (NAM2220)
Campus Network
Cisco Catalyst 6504E (CAT6504E)
Cisco 7206VXR Router
Cisco WAN Acceleration (WAE-674)
Cisco Adaptive Security Appliance (ASA5540)
Cisco Integrated Service Router (ISR3945)
Cisco Catalyst 4500E (Campus Access CAT4500E)
Cisco Catalyst 4507E (Campus Access CAT4507E)
Cisco Aironet Access Points
Cisco 5508 Wireless Controller
Cisco 3310 Mobility Services Engine
Branch Configurations
Branch 1 – Branch connection with PfR
Branch 2 – Branch connection with external WAVE-674
Branch 3 – Branch connection with no WAAS
Branch 4 – Branch connection with WAAS SRE Module
Branch 5 – Identical to Branch 3 except uses 3945
Branch 6 – Branch connection with WAAS Express
Branch 7 – Identical to Branch 3 except uses 3945
Branch 8 – Identical to Branch 3 except uses 2951
Branch 9 – Direct Connection using Site-to-Site VPN
Branch 10 – Direct Connection using AnyConnect VPN
Appendix 1 – Location Tracking
Appendix 2 – Endpoint Security: 802.1x, MACsec, MAB, ACS
Appendix 3 – QoS Settings in VXI
Appendix 4 – CUPC and Deskphone Control
Appendix 5 – NetFlow and EnergyWise


Figures
Figure 1: CVD Test Configuration
Figure 2: Network Map showing IP addresses
Figure 3: Virtual Infrastructure
Figure 4: Server Configuration
Figure 5: Logical Diagram of the Cisco UCS Compute Components
Figure 6: Complete UCS System Block
Figure 7: UCS 6120 Uplink Storage Connectivity
Figure 8: Data Center Network Components
Figure 9: Campus Network Components
Figure 10: Branch 1 Design
Figure 11: Branch 2 Network Design
Figure 12: Branch 3 Network Design
Figure 13: Branch 4 Network Design
Figure 14: Branch 6 Network Design
Figure 15: Branch 9 Network Design
Figure 16: Branch 10 Network Design

Tables
Table 1: Data Center Compute Components
Table 2: Data Center Storage Components
Table 3: Data Center Network Components
Table 4: Campus Network Components
Table 5: Branch Components
Table 6: Supporting Application Services


Goal
This document reports the tested configuration for the VXI Release 2.0 architecture. It includes specific
device configurations and diagrams showing interconnections.

Audience
This document is intended to assist solution architects, sales engineers, field engineers, and consultants in the planning, design, and deployment of the Cisco VXI system. It assumes the reader has an architectural understanding of the Cisco VXI system and has reviewed the Cisco VXI Release 2.0 CVDs for Citrix XenDesktop and VMware View.

Objectives
This document articulates the overall design and the VXI-specific configurations of the tested architecture called out in the VXI Release 2.0 CVDs for Citrix XenDesktop and VMware View.


Design Overview
The design implemented in the Cisco VXI CVD replicates a customer’s network, end-to-end, from the
datacenter to the endpoints installed either in the campus or branch environment. Figure 1 shows the
complete system test setup. Discussion for each of the sections are show following.

Figure 1: CVD Test Configuration

Release 2.0 of the VXI CVD is based on currently shipping equipment from both Cisco and its technology partners. Where possible, the preferred operating mode and configuration were used, unless specific commands or configurations yielded improvements or enhancements to the Desktop Virtualization (DV) experience.

Due to the number of devices in the VXI CVD test setup, the configuration files for each device are
included as hyperlink references throughout the document. Refer to the Appendix section of this
document for guidance on completing VXI specific use cases.


Figure 2: Network Map showing IP addresses


Figure 3: Virtual Infrastructure


Table 1: Data Center Compute Components


Component                                        Software Release
Cisco Unified Computing System:                  1.4.1j
  • UCS 5108
  • B200 M1 & M2
  • B250 M2
  • B230 M1
  • UCS 6120 Fabric Interconnect
  • UCS Manager
Cisco Unified Computing System:                  1.4.1j
  • C250 M2
VMware ESXi/ESX                                  4.1
VMware vSphere/vCenter                           4.1
Microsoft Hyper-V                                Windows 2008 R2 64-bit
Microsoft SCVMM                                  Windows 2008 R2 64-bit
Citrix XenServer                                 5.6 FP1
Cisco Nexus 1000v                                4.2.1 SV1(4)
Cisco Virtual Security Gateway                   4.2.1 VSG1(1)
Cisco vWAAS                                      4.3.1
VMware View Connection Server                    4.5 / 4.6
VMware View Composer                             2.5.0.291081
VMware View Agent                                4.5 / 4.6
Citrix XenDesktop                                5.0
Citrix PVS                                       5.6.0 SP1
Citrix XenApp                                    6.0
Microsoft Windows                                7 & XP SP3
McAfee MOVE-AV                                   1.5
McAfee VSE                                       7.3
Cisco Unified Presence Client (CUPC)             8.5
WebEx/WebEx Connect                              7
Microsoft Lync                                   2010

Table 2: Data Center Storage Components


Component                                        Software Release
Cisco MDS 9222i                                  5.0(1a)
EMC Celerra NS-480                               FLARE 4.30
EMC Unisphere                                    6.0.36-4
NetApp FAS 3170                                  8.0.1
NetApp Mgmt. Software                            1.1 R2
Atlantis ILIO                                    2.0.2


Table 3: Data Center Network Components


Component                                        Software Release
Cisco Nexus 5000                                 5.0(2)N1(1)
Cisco Nexus 7010                                 NX-OS 5.1(2)
Cisco Data Center Network Manager (DCNM)         5.1(2)
Cisco Application Control Engine (ACE) 4710      A3(2.5)
Cisco ACE Device Manager                         A3(2.5)
Cisco Adaptive Security Appliance (ASA) 5580     8.3.1
Cisco NAM Appliance 2220                         5.0(1)
Cisco Adaptive Security Device Manager (ASDM)    6.2.5
Cisco WAAS Central Manager                       4.3.1

Table 4: Campus Network Components


Component                                        Software Release
Catalyst 6504-Sup720                             12.2(33)SXI5
Cisco 7206VXR NPE-G1                             15.0(1)M4
Catalyst 4507R-E                                 12.2(54)SG
Cisco WAE 674                                    4.3.1
Cisco Adaptive Security Appliance (ASA) 5540     8.3.1
Cisco ISR 3945 G2                                15.1(T3)
CiscoWorks LAN Management Solution (LMS)         4.0
Cisco Aironet Access Point                       7.0.98.0
Cisco 5508 Wireless Controller                   7.0.98.0
Cisco Wireless Control System                    7.0.164
Cisco Mobility Services Engine                   7.0.105.0


Table 5: Branch Components


Component                                        Software Release
Cisco ISR 3945 G2                                15.1(T3)
Cisco ISR 3845                                   15.1(T3)
Cisco WAE 674                                    4.3.1
NM-WAE module for ISR                            4.3.1
WAAS Express                                     4.3.1
Catalyst 3560 / Catalyst 3750                    12.2(35)SE1
Networked Printers                               Win 2008
Cisco 99xx, 79xx IP Phones                       99xx.9-0-2, 79xx.9-0-2SR1
Cisco VXC2111 / VXC2211 (PCoIP)                  3.3.1
Cisco VXC2112 / VXC2212 (ICA)                    7.0.0.30
Apple iPad                                       4.2.1
Wyse Xenith, Wyse P20                            Embedded OS
Wyse R90LEW, X90LW                               Embedded WinXP/Win7
Wyse V10L                                        Wyse Thin OS
Wyse R50                                         Linux
DevonIT TC5DW                                    DeTOS
DevonIT TC5XW                                    Embedded WinXP
IGEL UD7                                         Embedded WinXP

Table 6: Supporting Application Services


Component                                        Software Release
Cisco Unified Communications Manager             8.5
Cisco Unity Connection                           8.5
Cisco Unified Presence Server (CUP)              8.5
WebEx/WebEx Connect                              7
Microsoft Active Directory Domain Controllers    Win 2008
Cisco Secure Access Control System               5.1.0.44
DNS                                              Win 2008
DHCP                                             Win 2008
SSC                                              5.1
AnyConnect                                       3.0

Data Center Compute and Storage Components


This section describes the data center infrastructure components used in the CVD configuration.
The design for the data center follows guidelines from the Data Center 3.0 CVD available on the Cisco
Design Zone. It deploys the standard three-tier model: Access, Aggregation, and Core. Service
appliances are connected to the Aggregation layer and are covered in this section.

Cisco Unified Computing System

The Cisco Unified Computing System is used to host the virtual desktops and desktop virtualization
servers. For the Cisco UCS B Series, three Cisco UCS 5108 blade enclosures were used for the test.
Two enclosures each contain eight half-width Cisco UCS B200 blade servers, and one enclosure contains
four full-width UCS B250 blade servers. The Desktop Virtualization (DV) machines are distributed
across all three enclosures. Cisco VXI has also validated two configurations of the Cisco UCS C-Series
rack-mount server.

The Cisco UCS M71KR-Q Converged Network Adapter (CNA) mezzanine card was used for this test.
It provides two 10 Gigabit Ethernet NICs and two Fibre Channel HBAs, combining them into two Fibre
Channel over Ethernet (FCoE) interfaces. The hypervisor maps these interfaces to the various
VLANs needed to support the DV environment.

Figure 4: Example Server Configuration (Cisco UCS B Series shown)


Cisco UCS 2100 Series Fabric Extender

The Cisco UCS 2104XP Fabric Extender (FEX) has four 10 Gigabit Ethernet, FCoE-capable, Small
Form-Factor Pluggable Plus (SFP+) ports that connect the Cisco UCS 5108 blade chassis to the Cisco UCS
6100 fabric interconnect, and eight 10 Gigabit Ethernet server ports connected through the mid-plane,
one to each half-width slot in the chassis.

Figure 5: Logical Diagram of the Cisco UCS Compute Components

Cisco UCS 6100 Fabric Interconnect

The Cisco® UCS 6100 Series Fabric Interconnects are a core part of the Cisco Unified Computing
System, providing both network connectivity and management capabilities for the system (Figure 4).
Each uplink from the UCS 2104XP FEX is connected to a 10G FCoE Port on the UCS 6100.


Figure 6: Complete UCS System Block


Cisco UCSM Configuration

The Cisco UCS Manager (UCSM) is used to configure the various elements of the Cisco Unified
Computing System. Cisco UCSM runs in the Cisco UCS 6100 Series Fabric Interconnects, and
provides multiple interfaces (including GUI and CLI) for managing a UCS instance. Each Fabric
Interconnect has a unique IP address. A Virtual IP address is created to link the two switches and
provide a single point of management. Cisco UCSM is accessed via a browser.

Cisco UCSM can be used to manage all hardware in a UCS instance (chassis, servers, fabric
interconnects, etc.) and all resources (servers, worldwide name addresses, MAC addresses, UUIDs,
bandwidth). It also provides tools for server, network, and storage administration. Because of the
breadth and depth of its capabilities, a detailed explanation of the Cisco UCSM configuration is far
beyond the scope of this document. The fundamental elements of a Cisco UCS configuration
include:

• System: configuring the Fabric Interconnects, ports, communication services, role-based access controls, DNS servers, authentication, firmware management, and more

• Network: configuring VLANs, LAN pin groups, MAC pools, QoS, and other network-related policies

• Storage: configuring named VSANs, SAN pin groups, WWN pools, and other storage-related policies

• Server: configuring server-related pools, management IP addresses, server-related policies, service profiles, and power management

• VN-Link: configuring VN-Link components, distributed virtual switches, port profiles, VN-Link-related policies, and pending deletions

• System Management: managing time zones, chassis, servers, I/O modules, configurations, and passwords

• System Monitoring: monitoring traffic and hardware, configuring statistics-related policies, managing the event log, configuring settings for faults, events, and logs, and configuring Call Home

Cisco UCS Service Profiles

Service profiles are the central concept of Cisco UCS. A service profile ensures that associated server
hardware is properly configured to support the application it will host. The service profile defines server
hardware, interfaces, fabric connectivity, server identity, and network identity. Service profiles are
centrally managed and stored in a database on the Cisco UCS 6100 Series Fabric Interconnect. Every
Cisco UCS server must be associated with a service profile. Service profile templates facilitate the
rapid creation and deployment of service profiles. The following example shows a truncated version of
a single Cisco VXI service profile.

enter service-profile S3B4 instance
associate server 3/4
enter vcon 2
enter vhba HBA-A fabric a
set adapter-policy VMWare
set fabric a
set fc-if name SONS-VSAN-Fabric-A
set max-field-size 2048
set order 1
set pers-bind disabled
set pin-group ""
set qos-policy ""
set stats-policy default
set template-name vHBA_FI-A
set vcon any
exit
enter vhba HBA-B fabric b
.
.
.
exit
enter vnic NIC-A fabric a-b
enter eth-if EMC-NFS
set default-net no
exit
enter eth-if NetApp-NFS
set default-net no
exit
enter eth-if NetApp-iSCSI
set default-net no
exit
enter eth-if PXE-Boot-999
set default-net yes
exit
.
.
.


set adapter-policy VMWare
set fabric a-b
set mtu 9000
set nw-control-policy CDP-On
set order 3
set pin-group ""
set qos-policy ""
set stats-policy default
set template-name vNIC_SON_FI-A
set vcon any
exit
enter vnic NIC-B fabric b-a
.
.
.
exit
power up
set bios-policy SONS-BIOS
set boot-policy SONS-boot
set descr ""
set dynamic-vnic-conn-policy ""
set ext-mgmt-ip-state none
set host-fw-policy 1.4.1j
set ipmi-access-profile ""
set local-disk-policy SONS-mirror
set maint-policy ""
set mgmt-fw-policy ""
set power-control-policy default
set scrub-policy ""
set sol-policy default
set src-templ-name SONS
set stats-policy default
set user-label ""
set vcon 2 selection all
set vcon-policy ""
exit

The UCSM configuration screenshots can be found here: UCS Manager Configuration.

The configuration file for the UCS 6120 was captured by using SSH to access the CLI and issuing
the command: show config >> sftp://<host>/<path>/<dest_filename>

The complete configuration for the UCS 6120 can be found here: UCS 6120 Configuration.

NOTE

To enable jumbo frames for both NFS and FC storage, use the QoS configuration in Cisco Unified
Computing System Manager.

Configure the Platinum policy by checking the Platinum policy box; to enable jumbo frames,
change the MTU from normal to 9000. Also set the no-packet-drop policy during this configuration.
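
The same system class can also be set from the UCSM CLI. The following is a minimal sketch, assuming the standard Platinum system class scopes in UCSM 1.4; verify the exact scope and property names against your UCSM release:

scope eth-server
scope qos
scope eth-classified platinum
set mtu 9000
! no-drop behavior for the class, needed for lossless storage traffic
set drop no-drop
enable
commit-buffer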


UCS 6100 Uplink Storage Configurations

Two sets of uplinks are provided on the UCS 6100. One set is used for Fibre Channel (FC) connectivity
to FC-based storage; these uplinks are located on the expansion module slot. The second set is used for
Ethernet connectivity, including access to NFS storage; either the module-slot Ethernet ports or the
built-in ports can be used for this purpose. Figure 7 shows this portion of the system configuration.


Figure 7: UCS 6120 Uplink Storage Connectivity

Fibre Channel Storage Components

A pair of MDS series switches was used in the configuration to connect the Cisco UCS 6120 Fabric
Interconnect Fibre Channel ports to the storage array (see Figure 7). The MDS switches provide an
optimized platform for deploying storage area networks (SANs), intelligent fabric services, and
multiprotocol connectivity. These switches support high-density Fibre Channel switching, hardware-based
virtual fabric isolation with VSANs, Fibre Channel-based Inter-VSAN Routing, iSCSI
connectivity for Ethernet-attached servers, and a wide range of additional advanced services. In the
Cisco VXI validated design testing, the MDS series SAN switches enabled Boot from SAN support
for the Cisco UCS series servers.

The MDS series switches can be configured by means of a command-line interface (CLI) or by using
Cisco Fabric Manager. Detailed information on configuring MDS series basic and advanced services
can be found in the Cisco MDS Series configuration guide. The complete MDS 9222i configuration can be
found here: MDS9222i Configuration. The key elements of the Cisco VXI test configuration include
definitions for the following:

• VSANs
• Interfaces
• Zones and Zone Sets
• N-Port ID Virtualization

VSANs
A VSAN is a virtual storage area network. VSANs provide isolation among devices that are physically
connected to the same fabric, and they enable the creation of multiple logical SANs over a common
physical infrastructure. Each VSAN can contain up to 239 switches and has an independent address space,
so that identical Fibre Channel IDs can be used simultaneously in different VSANs. The VSANs must
be consistent on the VMware vSphere hosts, the Cisco UCS series servers, the Cisco MDS series
switches, and the Fibre Channel storage array. VSAN configuration can be done either in the MDS
series switch CLI or in the Cisco MDS Device Manager. Cisco Fabric Manager can also be used for
managing the SAN configuration and zoning information. Configuring VSANs requires the following:

Creating the VSANs in the database:

vsan database
vsan 100 name "vxi_triage_auto_a"
vsan 200 name "vxi_soncs_ls_a"
vsan 300 name "vxi_infra_a"

Configuring persistent Fibre Channel IDs in each VSAN:

vsan 1 wwn 20:41:00:0d:ec:f9:b3:c0 fcid 0x6d0000 dynamic
vsan 1 wwn 20:43:00:0d:ec:f9:b3:c0 fcid 0x6d0001 dynamic
vsan 1 wwn 20:45:00:0d:ec:f9:b3:c0 fcid 0x6d0002 dynamic
vsan 1 wwn 50:06:01:60:3c:e0:60:04 fcid 0x6d00ef dynamic
.
.
.
vsan 200 wwn 20:00:00:25:b5:22:01:7f fcid 0xbc0027 dynamic

Assigning interface membership to the appropriate VSAN:

vsan database
vsan 100 interface fc1/1
vsan 200 interface fc1/2
vsan 100 interface fc1/3
vsan 100 interface fc1/4
vsan 200 interface fc1/5
.
.
.
vsan 300 interface fc2/24

Interfaces
The interface is the mechanism through which frames are transferred. Each interface must be defined
and its operating characteristics specified. Configured interfaces can be Fibre Channel, Gigabit
Ethernet, management, or VSAN interfaces. Interfaces can be associated with a particular owner, and
traffic flow can be enabled or disabled. The example shows a Fibre Channel interface associated with
owner Triage-a1, and with traffic flow enabled.

interface fc1/3
switchport owner Triage-a1
no shutdown

Zones and Zone Sets


Zoning enables administrators to control access between storage devices and/or user groups. Zones can
be used to provide an added measure of security and to prevent data loss or corruption. Zoning is
enforced based on source-destination ID fields. A zone consists of multiple members and is usually
based on member worldwide names (WWNs) or Fibre Channel IDs (FC IDs). A zone member can
access other zone members; non-zone members cannot access zone members. The example shows the
creation of a zone named emc1-a2_s4b7-a in VSAN 200. Two members (one target and one initiator)
are assigned to this zone, based on port WWNs.


zone name emc1-a2_s4b7-a vsan 200
member pwwn 50:06:01:64:3c:e0:60:04
! [emc1-a2]
member pwwn 20:00:00:25:b5:22:02:7e
! [s4b7-a]

A zone set is a collection of zones. A zone set can be activated or deactivated as a single entity across
all switches in the fabric. A zone can be a member of multiple zone sets, but only one zone set can be
activated at a given time. The example shows the creation of a zone set named vxi_soncs_ls_a in VSAN
200 and the assignment of the member zones in this set. The example also shows the zone set being
activated.

zoneset name vxi_soncs_ls_a vsan 200
member emc1-a2_s1b1-a
member emc1-b2_s1b1-a
member emc2-a2_s1b1-a
member emc2-b2_s1b1-a
member emc1-a2_s1b3-a
member emc1-b2_s1b3-a
member emc2-a2_s1b3-a
.
.
.
member emc2-b2_s1b7-a2

zoneset activate name vxi_soncs_ls_a vsan 200

N-Port ID Virtualization
The uplink FC ports on the UCS 6120 operate in N-Port Virtualization (NPV) mode; that is, they
operate in a mode similar to host ports (referred to as N-Ports in the FC world). To support NPV on the
UCS 6120, the N-Port ID Virtualization (NPIV) feature must be enabled on the MDS 9222i switch.
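
Enabling NPIV on the MDS is a single feature command from configuration mode; a minimal sketch follows (the copy step simply makes the change persistent):

configure terminal
feature npiv
end
copy running-config startup-config

The following output verifies that the feature is enabled and that the uplinks have come up as F ports: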

# show feature | grep npiv
npiv 1 enabled
# show interface br
-------------------------------------------------------------------------------
Interface Vsan Admin Admin Status SFP Oper Oper Port
Mode Trunk Mode Speed Channel
Mode (Gbps)
-------------------------------------------------------------------------------
fc1/1 1 auto on up swl F 4 --
fc1/2 1 auto on up swl F 4 --
fc1/3 1 auto on up swl F 4 --
fc1/4 1 auto on up swl F 4 --

EMC Unified Storage


Cisco VXI validation testing employed an EMC unified storage system and high-bandwidth Fibre
Channel connectivity to provide a Boot from SAN capability. In addition to Fibre Channel, the EMC
system also supports iSCSI and NAS storage. The system is composed of two to four autonomous
servers and a Storage Processor Enclosure (SPE). The servers (also called X-Blades) run the operating
system and provide block (and file) access; the SPE manages the disk arrays. The system provides up
to 192 TB of storage capacity.


Detailed configuration of the EMC unified storage system is beyond the scope of this document. The
basic approach is to begin by connecting the EMC FC ports to the Cisco MDS series SAN switches.
Administrators should also ensure that the FC zoning process has been completed. The next step is to
create a RAID group (RAID 1/0) and then to create LUNs from that group. Then a storage group is
created, and the VMware vSphere hosts (which will access the LUNs) are added to the group. Finally,
the newly created LUNs are added to the storage group. When these steps are completed, vCenter Server
is used to rescan the FC adapter for new storage devices. After the rescan, the FC LUN is added to
storage.

Refer to EMC documentation for specific configuration guidance on provisioning EMC unified storage
in virtual desktop environments:

http://www.emc.com/collateral/solutions/reference-architecture/h8020-virtual-desktops-celerra-vmware-citrix-ra.pdf

http://www.emc.com/collateral/software/technical-documentation/h6082-deploying-vmware-view-manager-celerra-unified-storage-plaform-solution-guide.pdf


NFS Storage Components

As shown in Figure 7 above, the NFS based storage array is connected to the Cisco Nexus 5000. In the
Cisco VXI validated design, this array was deployed to store user data. The Cisco VXI system tested a
NetApp FAS series storage system for NFS storage. The NetApp FAS series is a unified storage
platform that provides primary and secondary storage with simultaneous file and block services. This
system supports up to 840 drives with a total capacity of over 16 TB.

The NFS array needs to have L2 adjacency to the hypervisors. If the array is on another L2 segment,
special routes need to be added manually to both the array and the hypervisor connections.
The NFS array can be connected to another Cisco Nexus 5000 or to the Cisco Nexus 7000 as long as the
L2 adjacency is maintained. Jumbo frames need to be configured on all the Ethernet switches that make
up the path between the hypervisors and the NFS storage array.
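
On the Cisco Nexus 5000, jumbo frames are enabled system-wide through a network-qos policy rather than per interface. A minimal sketch follows; the policy-map name is illustrative:

policy-map type network-qos jumbo
class type network-qos class-default
mtu 9216
system qos
service-policy type network-qos jumbo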

Detailed configuration of the NetApp FAS system is beyond the scope of this document. Please refer to
the NetApp documentation for specific details on configuration. To provision NFS storage on the
NetApp system, the first step is to create an aggregate, which is a collection of disks. The general
recommendation is to configure an aggregate of maximum possible size. The next step is to configure a
default flexible volume for NFS exports, which can grow to the size of the aggregate. Once the
aggregate is created, additional flexible volumes can be defined if needed. Disk space can be reserved
for the volume, or the volume can be configured to grow only as data is written. The configuration
process also involves defining interface characteristics and NAS volumes.

For VMware environments using NFS storage, NetApp recommends increasing the number of VMware
vSphere data stores from the default value of 8 to 64, so that additional data stores can be dynamically
added as needed. NetApp also suggests changing NFS.HeartbeatFrequency to 12, and
NFS.HeartbeatMaxFailures to 10 for each ESX host.
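
These values can be applied per host from the ESX service console (or with the equivalent vicfg-advcfg vSphere CLI command for ESXi); a minimal sketch using esxcfg-advcfg follows:

# increase the number of NFS datastores per host from the default of 8 to 64
esxcfg-advcfg -s 64 /NFS/MaxVolumes
# NetApp-recommended NFS heartbeat tuning
esxcfg-advcfg -s 12 /NFS/HeartbeatFrequency
esxcfg-advcfg -s 10 /NFS/HeartbeatMaxFailures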

Refer to NetApp documentation for specific configuration guidance on provisioning NetApp storage in
virtual desktop environments:

NetApp Windows Host Utilities 5.3:
http://now.netapp.com/knowledge/docs/hba/win/relwinhu53/pdfs/setup.pdf

NetApp DSM 3.4 for Windows Multi-Path I/O:
http://now.netapp.com/knowledge/docs/mpio/win/reldsm34/pdfs/install.pdf

NetApp SnapDrive 6.3 for Windows:
http://now.netapp.com/knowledge/docs/snapdrive/relsnap63/pdfs/admin.pdf

NetApp SnapManager for Hyper-V 1.0:
http://now.netapp.com/knowledge/docs/smhv/relsmhv10/pdfs/install.pdf

Direct Attached Storage


The Cisco UCS C-Series (C250 M2) was tested with DAS storage in two configurations: 2x100GB
SSD with 6x300GB SAS, and 8x300GB SAS. In the Cisco VXI validated design deployment, the UCS
C-Series local storage was used to store both boot images and user data. VMware vSphere was installed
on the UCS server, which was configured to boot from the local hard disk. A local disk policy was
configured on the server to create a RAID LUN (raid-1-mirrored), as shown in the following example:

enter local-disk-config-policy SONS-mirror
set descr "SONS Local Disk Cfg Policy"
set mode raid-1-mirrored
set protect yes
exit

There are several caveats involving the interaction of UCS C Series servers, UCS Manager, and RAID
storage controllers when configuring local storage. For best results, consult the UCS Manager
Configuration Guide to understand the conditions that may apply in a given situation.

Atlantis ILIO

Atlantis ILIO, a storage optimization software solution, was tested with the UCS C-Series. Atlantis ILIO
is designed to reduce the amount of storage capacity required for desktop images, increase desktop
performance, and increase the number of virtual desktops that can be run on a storage system. The
Atlantis virtual appliance was installed on the Cisco UCS C-series server. Cisco and Atlantis
recommend the following additional steps when using the ILIO appliance in a Cisco VXI
environment:

1. Use the default settings configured during the Atlantis ILIO installation. These settings
determine how and when the cache flushes data to disk and have been tuned for optimal
performance.

2. Using the vSphere interface, make the following changes to the ESXi host that is running the
Atlantis ILIO virtual appliance. See the ESXi Advanced Settings under the Configuration tab (a CLI
alternative is sketched after this list):

NFS.HeartbeatFrequency = 12
NFS.HeartbeatTimeout=5
NFS.HeartbeatDelta=8
NFS.HeartbeatMaxFailures=10

3. Ensure that the ESXi server has a VMK interface in the same VLAN as the ILIO management
port so that the ILIO NFS traffic stays within the host and never touches the network.
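
For reference, the advanced settings from step 2 can also be applied from the host command line; a minimal sketch using esxcfg-advcfg (available in the ESXi tech-support shell, or as vicfg-advcfg in the vSphere CLI) is shown below:

esxcfg-advcfg -s 12 /NFS/HeartbeatFrequency
esxcfg-advcfg -s 5 /NFS/HeartbeatTimeout
esxcfg-advcfg -s 8 /NFS/HeartbeatDelta
esxcfg-advcfg -s 10 /NFS/HeartbeatMaxFailures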

Cisco Nexus 1000v

The Cisco Nexus 1000V is an NX-OS-based virtual access software switch that runs on VMware
vSphere. The switch was overlaid across the UCS series servers used for Cisco VXI system testing to
enable virtual machine networking. The Cisco Nexus 1000V provides Layer 2 switching, advanced
networking functions (such as QoS and security), and a common network management model in a
virtualized server environment by replacing the virtual switch within VMware vSphere.

The Cisco Nexus 1000V has the following components:


• The Virtual Supervisor Module (VSM) — the control plane of the switch and a virtual machine that runs NX-OS.
• The Virtual Ethernet Module (VEM) — a virtual line card embedded in each VMware vSphere (ESX) host. The VEM is partly inside the kernel of the hypervisor and partly in a user world process, called the VEM Agent. The VEM provides the data plane for the Cisco Nexus 1000V.

Installation of Cisco Nexus 1000v software is beyond the scope of this document. For detailed coverage
of installation procedures, see the Cisco Nexus 1000V Software Installation Guide.

Once the switch is successfully installed and its VM is powered on, a setup configuration dialog will
automatically start. The Cisco Nexus 1000V can be configured by means of a Command Line Interface
(CLI) or using a Graphical User Interface (GUI) tool. For a complete explanation of the entire
configuration process, see the Cisco Nexus 1000V Configuration Guides. Essentials of a Cisco Nexus
1000V Series configuration include:

• Defining VEMs
• Creating VLANs
• Defining Port Profiles
• Associating Port Profiles with Interfaces
• Configuring the Connection between VSM and the VMware vCenter Server

VEMs
A single instance of the Cisco Nexus 1000V can support up to 64 VEMs as well as a maximum of two
VSMs (deployed in active/standby mode), creating the equivalent of a 66-slot modular switch. Each
VEM needs to be configured with the VMware ID of the host with which it is associated. In the
following example, each VEM is associated with a slot number and a host.

vem 3
host vmware id e87a619a-ee71-11df-0000-00000000025e
vem 4
host vmware id e87a619a-ee71-11df-0000-00000000022e
vem 5
host vmware id e87a619a-ee71-11df-0000-00000000023e
vem 6
host vmware id e87a619a-ee71-11df-0000-00000000020e
vem 7
host vmware id deed2b71-29ec-11df-b019-8843e138e566

VLANs
For Cisco VXI validation, VLANs were used to logically separate traffic according to function as shown
below. VLANs also can be used in system port profiles for VSM-VEM communications, uplink port
profiles for VM traffic, and data port profiles for VM traffic. The following example shows a series of
VLAN definitions. VLAN 1 is the default assignment; interfaces and ports not specifically assigned to
other VLANs are members of VLAN 1.

vlan 1
vlan 34
name SONC-ESX-SRVR-MGMT
vlan 35
name SONC-ESX-SRVR-VMOTION
vlan 36
name SONC-PVS
vlan 46
name VSG-Data
vlan 47
name VSG-HA
vlan 56
name SONC-MS-VM
vlan 58
name SONC-CT-VM
vlan 60
name SONC-VW-VM
vlan 132
name SONCS-Infra-Servers
vlan 144
name NFS
vlan 146
name iSCSI

Port Profiles
A port profile is a collection of interface-level configuration commands that are combined to create a
complete network policy. A port group is a representation of a port profile on the vCenter server. Every
port group on the vCenter server is associated with a port profile on the Cisco Nexus 1000V. Port
profiles are created on the VSM and propagated to VMware vCenter Server as VMware port groups
using the VMware VIM API. After propagation, a port profile appears within VMware vSphere Client
and is available to apply to the vNICs on a virtual machine.

The following examples show typical Cisco VXI configurations for both uplink and VM access ports.
The uplink port profile defines the port as a trunk port, meaning that it can accommodate traffic from
multiple VLANs; the VLANs allowed on the trunk are specifically listed. The MTU is set to 9000 to
support jumbo frames.

port-profile type ethernet Uplink-Port-Channel
vmware port-group
switchport mode trunk
switchport trunk allowed vlan 34-36,46-47,58,60,132,144,146
mtu 9000
channel-group auto mode on mac-pinning
no shutdown
system vlan 34,144,146
state enabled

The next example shows a typical configuration for an access port. This profile defines an access port
dedicated to VM traffic. As an access port, it only supports a single VLAN (vlan 58). A maximum of
1024 ports can be assigned to this profile.

port-profile type vethernet VM-58-Citrix
vmware port-group
switchport mode access
switchport access vlan 58
no shutdown
max-ports 1024
state enabled

Interfaces
All interfaces on the Cisco Nexus 1000V are Layer 2 Ethernet interfaces. These interfaces can be
configured as access ports, trunk ports, private VLANs, or promiscuous ports. Virtual Ethernet
interfaces are logical interfaces, which correspond to virtual ports in use on the distributed switch. The
following Cisco VXI example shows a Virtual Ethernet interface that inherits the characteristics of port
profile VM-58-Citrix, and is associated with Network Adapter 1.

interface Vethernet115
inherit port-profile VM-58-Citrix
description AS732MAWEBEX-AE,Network Adapter 1
vmware dvport 4641 dvswitch uuid "04 a7 09 50 7a a7 4a 56-37 12 5c de 0a 49 67 b5"
vmware vm mac 0050.5689.0238

Connecting the VSM and VMware vCenter Server


It is necessary to configure a connection to the vCenter Server in the Cisco Nexus 1000V. Prerequisites
include having registered the Cisco Nexus 1000V plug-in on the vCenter Server, and knowing the
server’s IP address, DNS name, and data center name. The example shows a connection configured
between the switch and the vCenter Server at 10.0.132.16 port 80. The connection uses the VIM
protocol.

svs connection SONC-VCenter
protocol vmware-vim
remote ip address 10.0.132.16 port 80
vmware dvs uuid "04 a7 09 50 7a a7 4a 56-37 12 5c de 0a 49 67 b5" datacenter-name SONC
connect

The Cisco Nexus 1000V also provides the foundation for running other virtualized appliances such as
the Cisco Virtual Security Gateway (VSG) and Cisco vWAAS. The following changes were made to
the Nexus 1000V port profile configuration to forward virtual desktop traffic to the VSG and vWAAS
virtual appliance hosted services using vPath:

port-profile type vethernet VM-60-WebVSG
vmware port-group
switchport mode access
switchport access vlan 60
org root/Sit_ORT
ip verify source dhcp-snooping-vlan
vn-service ip-address 10.0.46.5 vlan 46 fail open security-profile SP_HVD
no shutdown
max-ports 1024
state enabled
port-profile type vethernet VM-60-View-VM-vWAAS
vmware port-group
switchport mode access
switchport access vlan 60
vn-service ip-address 10.0.132.40 vlan 132 fail open
no shutdown
state enabled

The complete Nexus 1000V configuration can be found here: Nexus1000v Configuration.

Cisco Virtual Security Gateway

Cisco Virtual Security Gateway for Cisco Nexus 1000v Series switches is a virtual appliance that
controls and monitors access to trust zones in enterprise and cloud provider environments. Cisco VSG
provides secure segmentation of virtualized data center VMs using granular, zone-based control and
monitoring with context-aware security policies. VSG works in unison with the Nexus 1000V and is
managed using the VNMC console. During the Cisco VXI validation, VSG was used to segment knowledge
workers from task workers. Appropriate communication ports for display protocols, Active Directory,
Internet access, and so on were allowed from each zone; all other inter-zone and intra-zone
communications were denied. Cisco VSG was deployed in the data center and used vPath interception
on the Nexus 1000V to intercept and secure traffic.
The attached configuration files lay out the VNMC configuration in relation to the CLI
configuration seen on the VSG. A successful VSG setup requires that communication be established
between VNMC and vCenter, between VNMC and the Nexus 1000V switch, and finally between VNMC
and the VSG itself. The required configuration is listed in the VNMC-VSG file. Zoning in a VXI
environment is done based on VM attributes, and virtual desktops with similar functions and
privileges are placed in the same zone. The configuration required for zoning is covered in
detail in the VNMC VSG configuration document, along with screenshots. All zones must have rules
defined to allow incoming desktop protocol traffic and communications between the virtual
desktops and infrastructure components such as brokers, Active Directory, and the hypervisor
manager. The definition of these rules, along with the appropriate port numbers for infrastructure
communications, is described in the configuration files below.

The VNMC configuration screenshots can be found here: VNMC VSG Configuration. The complete
Virtual Security Gateway configuration can be found here: VSG Configuration.

Cisco vWAAS

Cisco vWAAS is a virtual appliance that accelerates business applications delivered from
private and virtual private clouds. Cisco vWAAS runs on Cisco UCS servers with supported
hypervisors, using the policy-based configurations in the Cisco Nexus 1000V switch. Cisco vWAAS
was deployed in the data center and used vPath interception on the Nexus 1000V to intercept and
optimize traffic.
VXI-specific traffic classifiers need to be set as described in the branch configurations below; apart
from the VXI classifiers and WCCP configurations, all the remaining configuration is standard.
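
The tested classifiers are captured in the linked configuration. As an illustration only, the sketch below shows the general shape of a display-protocol classifier in WAAS 4.3 policy-engine syntax; the application name, classifier name, and ICA/CGP ports 1494 and 2598 are assumptions rather than the tested values:

policy-engine application
   name Citrix
   classifier Citrix-ICA
     match dst port eq 1494
     match dst port eq 2598
   exit
   map basic
     name Citrix classifier Citrix-ICA action optimize full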

The complete Cisco vWAAS configuration can be found here: vWAAS Configuration.


McAfee MOVE-AV

McAfee Optimized Virtual Environments - Antivirus for VDI (MOVE-AV for VDI) was used to
provide a scalable, centralized virus protection solution. A highly optimized and dedicated virus scan
server cluster does the resource-intensive work of scanning the virtual desktop files, thereby
significantly increasing the achievable virtual desktop density. MOVE-AV was deployed using
default scan settings, with redundant Virus Scan Engines (VSE) co-located on the same virtual switch
as the virtual desktops.

Hypervisor Installation and Configuration

VMware ESX/ESXi

In this test, the VMware ESX/ESXi hypervisor (version 4.1) was installed on the servers.
The vSphere/vCenter management suite was used to manage the hypervisor.

In the Cisco VXI validated design testing, hosts were organized into a single cluster to enable features
such as HA, DRS, and vMotion, which provide higher availability when blades power down or fail (due
to power failure or upgrade). It is recommended to cluster hosts across different UCS chassis enclosures
to provide better availability in the event of a chassis failure. Management and infrastructure server
applications (including the Nexus 1000v VSM) were installed on a dedicated UCS blade. A dedicated
database server was used to host the vCenter and VMware View databases. The vSphere Enterprise Plus
license was deployed on the vCenter server. It is recommended to use the Update Manager tool to
install the latest ESXi patches on the blades. The Nexus 1000v virtual distributed switch was used to
provide network connectivity to the virtual machines hosted across multiple UCS blades running
ESX/ESXi. It was deployed with redundant Virtual Supervisor Modules (VSMs) and was provisioned with
VLANs, port profiles, and port channel groups using the NX-OS CLI. Virtual port groups assigned to a
dedicated VLAN were used to segment the traffic for Citrix XenDesktop desktops, View desktops,
VSG, NFS, PVS, vWAAS, and infrastructure servers. NAS and SAN based datastores connected via
NFS and FC protocol connections were provisioned to store the virtual machine vmdk files. Path
redundancy with load balancing was used to connect to the SAN storage devices. Virtual desktops were
provisioned with a guest OS running Windows 7 32-bit, Windows 7 64-bit, or Windows XP, with 1.5 GB
or 2 GB of RAM and 24 GB of disk space.
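
Path selection for the SAN LUNs can be inspected and, if desired, set to round robin per device from the ESX command line; a minimal sketch for ESX/ESXi 4.1 follows (the device identifier is a placeholder, not a tested value):

# list storage devices and their current path selection policy (PSP)
esxcli nmp device list
# set round-robin path selection on one LUN; replace the placeholder with a real naa identifier
esxcli nmp device setpolicy --device <naa.device-id> --psp VMW_PSP_RR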

The ESX/ESXi and vCenter configuration screenshots can be found here: vSphere Configuration.

Microsoft Hyper-V

In this test, the Microsoft Hyper-V hypervisor was installed on the servers. The Server Core installation
of Windows 2008 R2 64-bit was used to implement Hyper-V. Microsoft System Center Virtual Machine
Manager (SCVMM) and Hyper-V Manager were used to manage the hypervisor.

In the Cisco VXI validated design testing, the Microsoft virtual switch was used to provide network
connectivity to the virtual machines hosted on the UCS blades running Hyper-V. NAS and SAN based
datastores connected via NFS and FC protocol connections were provisioned to store the virtual machine
files. Virtual desktops were provisioned with guests running Windows 7 64-bit (2 GB of RAM) and
Windows XP (1 GB of RAM), each with 20 GB of disk space. SCVMM was used to manage the
Hyper-V environment; it adds features not available in Hyper-V Manager, such as live migration and
library management (i.e., templates).

The Hyper-V and SCVMM configuration screenshots can be found here: Hyper-V Configuration.

Desktop Virtualization Software Installation and Configuration

VMware View Configuration


In this test, VMware View 4.5/4.6 was installed to provide DV services to end users. VMware View
Composer 2.5 was used to provision the hosted virtual desktop pools.

In the Cisco VXI validated design testing, the vSphere/vCenter connection details were provisioned in
the View Manager Administrator console. Separate desktop pools were created for Windows 7 32-bit,
Windows 7 64-bit, and Windows XP desktops. These were automated pools, so desktops are generated
based on a golden image (i.e., a virtual machine snapshot). The golden image included the standard
office productivity applications in addition to the UC application suite and the McAfee MOVE-AV
agent. The pools used dedicated entitlements so that a user always gets the same desktop. The desktop
pools used the View Composer linked-clone feature so that the desktops share the same base golden
image and storage disk space is conserved. In some test configurations, the linked clones and the replica
(the read-only golden image) were stored on different storage devices with different memory
overcommit policies for better storage performance; in these configurations, the replica was stored on
faster read-only storage. All desktops were set to be powered up at all times and refreshed after user
logout. Refer to the configuration screenshots for the detailed pool settings used. Users' Windows
profiles were redirected to a persistent disk, and temporary files were stored on non-persistent storage.
The user entitlement for the desktop pools was specified as an Active Directory group of users.

The VMware View configuration screenshots can be found here: VMware View Configuration.

Citrix XenDesktop Configuration


In this test, Citrix XenDesktop 5.0 was installed to provide DV services to end users. Citrix Desktop
Studio was used to provision the hosted virtual desktop pools.

In the Cisco VXI validated design testing, the vSphere, SCVMM, or XenCenter connection details were
provisioned in the XenDesktop Studio console. Separate desktop groups were created for Windows 7
32-bit, Windows 7 64-bit, and Windows XP desktops. The desktop catalog used pooled machines that
were generated from a golden image snapshot. The golden image included the standard office
productivity applications in addition to the UC application suite and the McAfee MOVE-AV agent. The
desktop catalog used random assignment, so a user is assigned a desktop in a random manner. The
desktop catalog used Citrix Provisioning Services so that the desktops share the same base golden image
and storage disk space is conserved. The user assignment for desktop groups was specified as an Active
Directory group of users. Refer to the configuration screenshots for the detailed desktop group settings
used.

The Citrix XenDesktop configuration screenshots can be found here: Citrix XenDesktop Configuration.


Data Center Networking Infrastructure


Figure 8: Data Center Network Components

Cisco Nexus 5010 Access Layer Switch

The Cisco Nexus 5010 Switch is a 1-rack-unit (RU), 10 Gigabit Ethernet/FCoE access-layer switch with
20 fixed 10 Gigabit Ethernet/FCoE ports that accept modules and cables meeting the Small Form-Factor
Pluggable Plus (SFP+) form factor. One expansion module slot can be configured to support up to 6
additional 10 Gigabit Ethernet/FCoE ports, up to 8 Fibre Channel ports, or a combination of both. For
larger server pods, the Cisco Nexus 5020 (2 RU, 40 fixed ports, plus two expansion module slots) can be used.

The Cisco Nexus 5010, in combination with the Cisco Nexus 1000v, provides the functions for the Data
Center Access Layer. This test configuration consists of a pair of Cisco Nexus 5010 switches. Four 10 GE
uplinks are configured on each Cisco UCS Fabric Interconnect and are connected to each Nexus
5010 in parallel. The upstream interfaces are connected to ports that belong to the Internal Virtual
Device Contexts (VDCs) on the Cisco Nexus 7010.
Nexus 5010 configuration for a VXI system does not differ from that of a typical data center; however, a
few key considerations for VXI are described below.

Virtual Port Channel (vPC, LACP): Virtual port channels, although not required, are highly
recommended in a VXI setup. vPC creates EtherChannels that span multiple chassis; because spanning
tree no longer blocks the redundant links, all links are in the forwarding state, thereby increasing the total
bandwidth available to downstream devices. To enable vPC, enable the lacp and vpc features as shown
below and set up the management VRF. The two Nexus 5010 switches establish a peer-keepalive
relationship over the management interface; no configuration or state is communicated over this interface.

feature lacp
feature vpc
interface mgmt0
ip address 10.0.32.8/24
vrf context management
ip route 0.0.0.0/0 10.0.32.1
port-channel load-balance ethernet source-dest-ip
vpc domain 200
peer-keepalive destination 10.0.32.9 source 10.0.32.8

The port channel interface between the two 5010s is defined as a vPC peer-link. All switch states are
exchanged over this interface along with data traffic. The port channel is configured as a spanning tree
network port by default when the port channel is added to the vPC domain (the configuration below explicitly
configures the spanning-tree port type).

interface port-channel10
switchport mode trunk
vpc peer-link
switchport trunk native vlan 999
switchport trunk allowed vlan 34-36,38-39,46,48,50,52,54,56,58,60,62,132,144,146,148,999
spanning-tree port type network
speed 10000

All other port channels and standalone interfaces need to be added to the appropriate vPC by
issuing the "vpc <#>" command. The vPC number does not have to match the VLAN ID, but it must be
consistent across both Nexus 5010 switches.

interface port-channel20
switchport mode trunk
vpc 20
switchport trunk native vlan 999
switchport trunk allowed vlan 34-36,38-39,46,48,50,52,54,56,58,60,62,132,144,146,148,999
speed 10000

Jumbo Frames: Jumbo frames normally need to be configured for Fibre Channel over Ethernet, iSCSI, and
similar storage traffic, and require some configuration on the Nexus 5010. The QoS and system MTU
configuration for jumbo frames is listed below.

class-map type qos class-fcoe

policy-map type network-qos jumbo
class type network-qos class-fcoe
pause no-drop
mtu 2158
class type network-qos class-default
mtu 9216
system qos
service-policy type network-qos jumbo

VLANs should be created to separate hypervisor management, virtual desktop, migration, storage, and
other traffic. The VLAN assignment can be found in the complete configuration below.
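
As an illustration, a minimal sketch of creating and naming a few such VLANs is shown below (the VLAN IDs are taken from the trunk allowed list above; the names are hypothetical):

vlan 34
name Hypervisor-Mgmt
vlan 35
name VM-Migration
vlan 46
name Storage
vlan 132
name Desktop-Data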

The complete Nexus 5010 configuration used during validation can be found here: Nexus 5010 Configuration.

Cisco Nexus 7010

The Cisco Nexus 7010 Switch is used in a collapsed design for both the Data Center Aggregation and
Core layers. This is done by creating two Virtual Device Contexts (VDCs): Internal (connected to
hosted desktops and application servers) and External (connected to the enterprise core switches). The
Internal VDC acts as the Aggregation layer; the External VDC acts as the Core layer. Each VDC is
isolated from the other for increased availability: should one VDC encounter an error or crash, the other
is not affected. The Nexus 7010s are installed as a redundant pair.
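
As an illustrative sketch, the two contexts can be created from the default admin VDC as shown below (the VDC names follow this design; the interface allocation is hypothetical):

vdc Internal
allocate interface Ethernet1/1-16
vdc External
allocate interface Ethernet1/17-32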

The complete Nexus 7010 configuration can be found here: Nexus 7010 Internal Device Context
Configuration and Nexus 7010 External Device Context Configuration.

Cisco Application Control Engine (ACE4710)

The Cisco Application Control Engine (ACE4710) is used in the VXI environment to provide load-balancing
functions for the DV connection brokers. The appliances are connected as a redundant pair to the
Internal VDC on the Cisco Nexus 7010. Virtual contexts are created on the ACE for different test
environments, and each context includes server farms for the VMware View desktops, the Citrix
XenDesktops, and the Microsoft Remote Desktops.

The following configurations on the ACE implement health-monitoring probes for the View Manager
and the XenDesktop Controller:

probe https View-Web
interval 15
faildetect 1
passdetect interval 30
passdetect count 2
receive 2
ssl version all
expect status 200 200
open 1
probe http Xen-Web
interval 15
faildetect 1
passdetect interval 15
passdetect count 2
receive 2
expect status 200 200
open 1

The following configurations on the ACE set up the XenDesktop Controller server farm:

rserver host XenCM1
ip address 10.0.132.11
inservice
rserver host XenCM2
ip address 10.0.132.24
inservice

serverfarm host XenCM
probe Ping
probe Xen-Web
fail-on-all
rserver XenCM1
inservice
rserver XenCM2
inservice

sticky ip-netmask 255.255.255.255 address source StickyXen
timeout 5
serverfarm XenCM

class-map match-all Xen-VIP
2 match virtual-address 10.0.60.36 any

policy-map type loadbalance first-match Xen-LB
class class-default
sticky-serverfarm StickyXen

policy-map multi-match Xen
class Xen-VIP
loadbalance vip inservice
loadbalance policy Xen-LB
loadbalance vip icmp-reply
nat dynamic 1 vlan 132

interface vlan 60
service-policy input Xen
no shutdown
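
A corresponding server farm for the VMware View Connection Servers can be defined in the same way using the View-Web probe shown earlier. The following sketch is illustrative only; the rserver names and IP addresses are hypothetical:

rserver host ViewCM1
ip address 10.0.132.31
inservice
rserver host ViewCM2
ip address 10.0.132.32
inservice

serverfarm host ViewCM
probe Ping
probe View-Web
fail-on-all
rserver ViewCM1
inservice
rserver ViewCM2
inservice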

The complete Cisco ACE4710 configuration can be found here: ACE Configuration.

Cisco Adaptive Security Appliance (ASA5580)

The Cisco Adaptive Security Appliance installed in the data center provides firewall services to isolate
and protect the various compute resources from both external traditional desktop users and the DV
users hosted in the data center. Two virtual contexts are used on the ASA: Non Server (connected to the
VRF for hosted desktops) and Server (connected to the VRF for application servers). The ASA5580s are
deployed as a redundant pair (configured in active-active mode) and are connected to the Internal VDC
and External VDC on the Cisco Nexus 7010.

Use the ASDM or the ASA CLI to provision security services on the ASA. The following configuration
on the ASA provisions the services (UC and virtual desktop services) and hosts to be allowed through the
firewall:

object-group service UC-SERVICES
group-object JABBER
group-object RTP
group-object SCCP
group-object SIP
group-object WEB-SERVICES
service-object tcp destination range 1101 1129
service-object tcp destination eq 2444
service-object tcp destination eq 2749
service-object tcp destination eq 3223
service-object tcp destination eq 3224
service-object tcp destination eq 3804
service-object tcp destination eq 4321
service-object tcp destination eq 61441
service-object tcp destination eq 8007
service-object tcp destination eq 8009
service-object tcp destination eq 8111
service-object tcp destination eq 8222
service-object tcp destination eq 8333
service-object tcp destination eq 8404
service-object tcp destination eq 8444
service-object tcp destination eq 8555
service-object tcp destination eq 8998
service-object tcp destination eq 9007
service-object tcp destination eq 9009
service-object tcp destination eq ctiqbe
group-object LDAP
group-object TFTP
group-object IMAP

object-group service ICA - desktop protocols
service-object tcp destination eq 1604
service-object tcp destination eq 2598
service-object tcp destination eq citrix-ica

object-group service PCOIP
service-object tcp destination eq 32111
service-object tcp destination eq 4172
service-object tcp destination eq 50002
service-object udp destination eq 50002
service-object udp destination eq 4172
object-group service RDP
service-object tcp destination eq 3389
object-group service DESKTOP-SERVICES
group-object ICA
group-object RDP
group-object PCOIP

access-list allowall extended permit object-group UC-SERVICES object-group ALL-HOSTED-DESKTOPS object-group UC-SERVERS

Similarly, hosts and services in the network, such as the hypervisor managers and desktop connection
managers, should also be provisioned.
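
As an illustration, the sketch below defines a network object-group for the desktop connection brokers and permits the desktop display protocols to reach them. The group name is hypothetical; the host addresses reuse the XenDesktop Controller addresses shown in the ACE section:

object-group network DESKTOP-BROKERS
network-object host 10.0.132.11
network-object host 10.0.132.24
access-list allowall extended permit object-group DESKTOP-SERVICES any object-group DESKTOP-BROKERS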

Use the following configuration on the ASA to enable jumbo frames:

mtu outside 9216
mtu inside 9216

Configure the global policy to enable application layer inspection by the ASA:

policy-map global_policy
class inspection_default
inspect waas

It is important to add 'inspect waas' to allow vWAAS traffic through the ASA; otherwise, vWAAS
traffic will be blocked going through the ASA.
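
For the inspection to take effect, the policy map must be applied; on the ASA, global_policy is normally applied globally by default, as in this minimal sketch:

service-policy global_policy global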

The complete Cisco ASA 5580 configuration can be found here: ASA Server Configuration and ASA
Non Server Configuration.

Cisco WAN Acceleration Engine (WAE-512)

The WAE-512s installed in the data center are the controllers for the other WAAS appliances installed
in both the campus and branch networks. They are installed as a redundant pair.

The complete Cisco WAE-512 configuration can be found here: WAAS Controller Configuration.

Cisco Network Analysis Module (NAM2220)

The Cisco NAM2220 is used to monitor and collect data on traffic flows entering and leaving the data
center. This tool is useful for monitoring the DV user load on the network and identifying any potential
bottlenecks in the design. A SPAN port created on the Internal VDC on the Nexus 7010 was used to
monitor and capture traffic. The NAM appliance was also used to collect NetFlow data exported from
the campus network routers and switches.
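
As an illustration, a SPAN session of this kind on the Internal VDC could be sketched as follows (the source and destination interfaces are hypothetical):

interface Ethernet1/10
switchport monitor
!
monitor session 1
source interface port-channel10 both
destination interface Ethernet1/10
no shut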

The complete Cisco NAM2220 configuration can be found here: Network Analysis Module Configuration.

Campus Network
The campus network is composed of two redundant Catalyst 6504E switches, branch connectivity
routers, Internet access routers, WAN accelerators, and firewall appliances. It also contains equipment
for local campus access.

Figure 9: Campus Network Components

Cisco Catalyst 6504E (CAT6504E)

The Cisco CAT6504E switches are used for the core and distribution layer functions for the campus
network. They are uplinked to the data center Nexus 7010, which provides the data center core
functions. These switches are installed as a redundant pair.

The complete Cisco CAT6504E configuration file can be found here: Catalyst 6504 Configuration.

Cisco 7206VXR Router

The Cisco 7206VXR routers are used to provide branch connections that do not require VPN services.

The complete Cisco 7206VXR configuration can be found here: 7206VXR Configuration.

Cisco WAN Acceleration (WAE-674)

The Cisco WAE-674s installed in the campus network provide the campus-side termination of the
WAAS tunnels from the branch WAE appliances. Some key classifiers and mappings that need to be
configured for VXI are listed below. Note that because PCoIP traffic uses UDP transport, WAAS is
unable to optimize it, and hence no PCoIP-specific configuration is required. Other WAAS
configuration parameters still need to be configured to enable optimization; these are covered in the
complete configuration below.
policy-engine application
name Remote-Desktop
classifier Citrix-ICA
match dst port eq 1494
match dst port eq 2598
exit
classifier HTTP
match dst port eq 80
match dst port eq 8080
match dst port eq 8000
match dst port eq 8001
match dst port eq 3128
exit
classifier HTTPS
match dst port eq 443
exit
map basic
name Remote-Desktop classifier Danware-NetOp action optimize DRE no compression none
name Remote-Desktop classifier Laplink-surfup-HTTPS action optimize DRE no compression none
name Remote-Desktop classifier Remote-Anything action optimize DRE no compression none

The complete Cisco WAE-674 configuration file can be found here: WAE-674 Configuration.

Cisco Adaptive Security Appliance (ASA5540)

The campus Cisco ASA5540 provides a VPN concentrator for the Cisco AnyConnect clients.

The complete Cisco ASA 5540 configuration can be found here: ASA Configuration.

Cisco Integrated Service Router (ISR3945)

The Campus Cisco ISR3945 provides the VPN concentrator for site-to-site network connections.

The complete Cisco ISR3945 configuration can be found here: 3945 Configuration.

Cisco Catalyst 4500E (Campus Access CAT4500E)

This Catalyst 4500E provides campus users access to the network with PoE+. It contains L2 edge
features such as QoS, security, and EnergyWise.

The complete Catalyst 4500E configuration file can be found here: Catalyst4500E Configuration.

Cisco Catalyst 4507E (Campus Access CAT4507E)

The Catalyst 4507E provides an alternate method for campus user access to the network. It contains L2
edge features such as QoS and security.

The complete Cisco Catalyst 4507E configuration can be found here: Catalyst 4507E Configuration.

Cisco Aironet Access Points

Cisco Aironet IEEE 802.11a/b/g wireless access points provide a wireless method for campus users to
access the network using mobile devices such as laptops and tablets. They offer the versatility associated
with connected antennas, a rugged metal enclosure, and a broad operating temperature range. In this test,
the Aironet 1140 and 1240 series access points were used in the branches, and the Aironet 1250, 1260,
3502e, and 3502i series access points were used in the campus network. The Cisco Aironet access
points are configured by the Cisco Wireless Controller.

Cisco 5508 Wireless Controller

The Cisco 5508 Wireless Controller (WLC) controls and manages the Cisco Aironet wireless access
points deployed in the campus and branch networks. It offers enhanced uptime with RF visibility and
CleanAir protection and reliable streaming video and voice quality.

The complete WLC configuration can be found here: WLC Configuration.

Cisco 3310 Mobility Services Engine

The Cisco 3310 Mobility Services Engine (MSE) provides mobility services and applications across a
variety of mobility networks. It provides location based services for wired and wireless VXI endpoints.

Refer to Appendix 1 for guidance in provisioning the MSE for location tracking services.

Branch Configurations
All branches use the same configuration for the Catalyst 3560 and 3750. The complete Catalyst 3560
configuration can be found here: 3560 Branch Configuration. All branches use either a Cisco Aironet
1131AG or 1242AG access point to provide wireless network access to users. All branches included the
VXC clients (VXC-2211/2212 and VXC-2111/2112), which allow users to access their virtual desktops.
Refer to the VXC client documentation for guidance on provisioning the VXC clients.

Branch 1 – Branch connection with PfR

This branch configuration uses a Cisco ISR3845 router, a Cisco Catalyst 3560 switch, and a Cisco
Aironet 1242AG access point. The ISR was tested with both T1 and T3 connections. The 3845 was
configured with the Performance Routing (PfR) feature enabled, which provides path optimization and
advanced load balancing for VXI traffic over the WAN. The complete Branch 1 configuration can be
found here: 3845 Branch 1 Configuration.

Figure 10: Branch 1 Design

The Branch 1 design focuses on validating PfR in a VXI environment. PfR operation has multiple steps:
traffic classification, network characteristics measurement, policy identification, and policy
enforcement. All branch routers enabled with PfR work as border routers and communicate with the
master router. The PfR configuration used is listed below with appropriate explanations.

key chain PFR_Chain
key 1
key-string cisco_123

pfr master
policy-rules POLICY2
logging
!
Identify the interfaces included in the PfR operations and attach the key string defined above for
communications with the master.
border 10.1.253.5 key-chain PFR_Chain
interface Serial0/0/1 external
interface Serial0/0/0 external
interface Loopback1 internal
interface GigabitEthernet0/0 internal
!
Define the network characteristics to measure for PfR operation. In this case, throughput and delay are
being measured and will be used as inputs for policy enforcement.
learn
throughput
delay
periodic-interval 1
monitor-period 1
prefixes 250 applications 250
list seq 10 refname MYLIST
traffic-class access-list PFR_list
delay
delay threshold 30 <The delay threshold defined here triggers out-of-policy events>
mode route control
mode monitor passive <Passive monitoring measures TCP throughput from live traffic. For UDP traffic
active ping probes are required>
mode select-exit best
resolve delay priority 1 variance 5 <The amount of delay variance defines the acceptable jitter>
!
active-probe echo 10.1.253.5 <Active probes are sent to the master and network health is measured
directly>
active-probe tcp-conn 10.1.253.5 target-port 2000
!
For active probing it is recommended to use a loopback interface.
pfr border
local Loopback1
master 10.1.253.5 key-chain PFR_Chain
active-probe address source interface Loopback0
crypto pki token default removal timeout 0

Classify PfR traffic using an extended ACL. In this scenario all traffic was VXI-specific, so port
numbers were not used. In scenarios where PfR is applied to specific traffic types, port numbers can be
used to classify applications.
ip access-list extended PFR_list
permit tcp any 10.1.1.0 0.0.0.255
permit tcp 10.1.1.0 0.0.0.255 any
permit ip any 10.0.0.0 0.0.0.255
!
ip sla responder
ip sla responder tcp-connect port 2000

pfr-map BLUE 10
match pfr learn delay
set mode select-exit best
set delay threshold 90
!
pfr-map test1 10
!
pfr-map POLICY2 10
match pfr learn list MYLIST
set delay threshold 30
!
pfr-map POLICY2 10
set resolve delay priority 1 variance 1

Branch 2 – Branch connection with external WAVE-674

This branch configuration uses a Cisco ISR3845 router, a Cisco Catalyst 3750 switch, and a Cisco
Aironet 1131AG access point. The ISR was tested with both T1 and T3 connections. The WAVE-674 is
connected via an Ethernet port on the ISR3845. The ISR3845 in Branch 2 uses WCCP redirection for
connection to the WAVE-674.

Refer to Appendix 3 for specific guidance on implementing a QoS policy for traffic traversing the
WAN.

Traffic from the branch router to the external WAAS appliance is redirected using WCCP service 61 on
every incoming traffic interface on the router, except the interface where the WAE is connected.
On all incoming router interfaces, the WCCP command shown below is used to redirect traffic to the WAE:
ip wccp 61 redirect in
The interface connecting directly to the WAAS appliance needs to be excluded from WCCP using the
command below. This avoids sending the optimized traffic back into the WAAS appliance, thereby avoiding loops:
ip wccp redirect exclude in
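
Putting these together, a minimal sketch of the interface placement on the branch router is shown below (the global WCCP service must also be enabled; the interface numbers are hypothetical):

ip wccp 61
!
interface GigabitEthernet0/0
description Branch LAN (client-facing)
ip wccp 61 redirect in
!
interface GigabitEthernet0/1
description Connection to the WAVE-674
ip wccp redirect exclude in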

The complete Branch 2 configuration can be found here: 3845 Branch 2 Configuration and WAVE-674
Configuration.

Figure 11: Branch 2 Network Design

Branch 3 – Branch connection with no WAAS

This branch configuration uses a Cisco ISR3845 router, a Cisco Catalyst 3560 switch and a Cisco
Aironet 1242AG access point. The ISR was tested with both T1 and T3 connections. The complete
Branch 3 configuration can be found here: 3845 Branch 3 Configuration.

Figure 12: Branch 3 Network Design

Branch 4 – Branch connection with WAAS SRE Module

This branch configuration uses a Cisco ISR2951 router, a Cisco Catalyst 3750 switch and a Cisco
Aironet 1242AG access point. The ISR was connected via both T1 and T3 connections. The
configuration may only show one of the serial connections active. The WAAS SRE module operates the
same as an external device. The ISR still uses WCCP redirection for this application; however, the
network connection to the SRE module is internal to the ISR. The complete Branch 4
configuration can be found here: 2951 Branch 4 Configuration and WAAS SRE Configuration.
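
As an illustration of that internal connection, the SRE service module is addressed through a router-side SM interface; the sketch below uses hypothetical slot numbering and addressing:

interface SM1/0
ip address 10.1.4.1 255.255.255.0
service-module ip address 10.1.4.2 255.255.255.0
service-module ip default-gateway 10.1.4.1
no shutdown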

Figure 13: Branch 4 Network Design

Branch 5 – Identical to Branch 3 except uses 3945

Branch 6 – Branch connection with WAAS Express

This branch configuration uses a Cisco ISR3945 router, a Cisco Catalyst 3750 switch and a Cisco
Aironet 1131AG access point. The ISR was connected via both T1 and T3 connections. The WAAS
Express feature is enabled on the 3945.
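
WAAS Express is enabled directly on the WAN-facing interface of the router; a minimal sketch, with hypothetical interface numbering:

interface Serial0/0/0
waas enable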

The complete Branch 6 configuration can be found here: 3945 Branch 6 Configuration.

Figure 14: Branch 6 Network Design

Branch 7 – Identical to Branch 3 except uses 3945

Branch 8 – Identical to Branch 3 except uses 2951

Branch 9 – Direct Connection using Site-to-Site VPN

This branch configuration uses a Cisco ISR3945 router, a Cisco Catalyst 3560 switch, and a Cisco
Aironet 1242AG access point. While the hardware is the same as Branch 1, the configuration is
different. This ISR connects via a site-to-site VPN tunnel to the campus ISR3945. The complete
Branch 9 configuration can be found here: 3945 Branch 9 Configuration.

Figure 15: Branch 9 Network Design

Branch 10 – Direct Connection using AnyConnect VPN

This branch configuration uses a Cisco 7301 router, a Cisco Catalyst 3560 switch, and a Cisco Aironet
1242AG access point. This router connects using AnyConnect VPN to the campus ASA5540. The
complete Branch 10 configuration can be found here: 7301 Branch 10 Configuration.

Figure 16: Branch 10 Network Design

Appendix 1 – Location Tracking


Location Tracking
Location tracking for Cisco VXI requires the following minimums:
Catalyst 4K Switch IOS version 12.2(52)SG or later.
Catalyst 3K Switch IOS version 12.2(50)SE or later.
MSE 3355 – Recommended for large-scale deployments to track up to 18,000 endpoints.
MSE 3310 – Recommended for smaller VXI deployments of up to 2,000 devices.
Currently, a VXI environment can only leverage the admin tag and civic information stored on the switches. The admin tag is
an administrative tag or a site location associated with a logical grouping of devices or users.
The civic location is a string that denotes the various elements of a civic location as described in RFC 4776.
An example of a civic location as configured on the switch is as follows:
Switch(config)# location civic-location identifier 1
Switch(config-civic)# number 3550
Switch(config-civic)# primary-road-name "Cisco Way"
Switch(config-civic)# city "San Jose"
Switch(config-civic)# state CA
Switch(config-civic)# building 19
Switch(config-civic)# room C6
Switch(config-civic)# county "Santa Clara"
Switch(config-civic)# country US
Switch(config-civic)# end
Note Labels such as "building", "room", and "state" are element types defined in the RFC. The rest
of the string that follows is the value configured for that label.

Configuration sequence – Wired Switch


The switch has to be configured for the location service. This includes configuring location data as well as
enabling NMSP. The configuration steps on the switch side are:
• Understand the slot/module/port configuration (1/0/20).
• Use the correct IOS version that pertains to the respective switch model.
• Enable NMSP.
• Enable IP device tracking.
• Configure the SNMP community with read-write access.
• Configure the civic/admin tag location identifiers.
• Assign identifiers to the switch interfaces (a condensed sketch of these commands follows the link below).
CLI commands for the above sequence and other useful debug information can be found at:
http://www.cisco.com/en/US/docs/switches/lan/catalyst3750/software/release/12.2_50_se/configuration/guide/swlldp.html
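A condensed sketch of the switch-side commands for this sequence is shown below (the community string and port are examples only; verify the exact syntax for your IOS version):

Switch(config)# nmsp enable
Switch(config)# ip device tracking
Switch(config)# snmp-server community vxi-rw rw
Switch(config)# interface gigabitethernet1/0/20
Switch(config-if)# location civic-location-id 1
Switch(config-if)# end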
Configuration sequence – WCS. This is required for wired switches to relay location information to the WCS.
On the WCS:
• Go to Configure >Ethernet Switches.
• Add Ethernet switches.
• Add the IP address.
• Enable Location Capable.
• Enter the SNMP community (read-write). The SNMP community string entered must match the value assigned to
the Catalyst switch.
• Go to Services > Synchronize Services > Switches.
• Click Assign to assign it to preferred MSE.
• Choose the switch and synchronize.
• Go to Services > Synchronize Services > Switches.
• Go to System > Status > NMSP Connection status.
• Check for the active NMSP status for each switch.
Once both configuration sequences above are complete, wired elements can be viewed on the WCS:
• Under Context Aware Services, click Wired Switches under Wired.
• A list of the switches displays.
• Click a Switch IP Address to view details.
Detailed configuration steps and screenshots for location-based services, also called context-aware services, can be
found at:
http://www.cisco.com/en/US/partner/products/ps9742/products_installation_and_configuration_guides_list.html
http://www.cisco.com/en/US/partner/docs/wireless/mse/3350/7.0/CAS/configuration/guide/CAS_70.html

Appendix 2 – Endpoint Security: 802.1x, MACsec, MAB, ACS

The configuration sequences below are built based on the discussion in the Release 2.0 VXI CVD. Please see the Deployment
and Configuration Best Practices of VXI Security Components section covering AnyConnect 3.0, 802.1x, MACsec, and MAB
along with ACS. The configuration sequence below is a sample scenario and not part of the validated architecture. It
is presented here as a reference to understand the configuration sequence of the various elements involved in securing VXI
access.
The following procedure should be used to set up AnyConnect, 802.1x, ACS 5.2, and MACsec in a VXI environment.
1. AnyConnect 3.0 Network Access Manager (NAM) Configuration
In this section, the AnyConnect NAM Profile Editor is used to create an example wired 802.1x connection that is
MACsec enabled. Before starting, launch the Profile Editor.
Client Policy Configuration: Defines the options that govern the client policy.

Step 1 From the left navigation menu, select Client Policy


Step 2 Select Enable for the Service Operation under Administrative Status.
Step 3 Select Attempt connection after user logon for Connection Settings.
Step 4 Under Media, check Allow Wired (802.1x) Media and Allow user to disable client.
Note MACsec is only supported on wired media.

Authentication Policy Configuration:

Define the global association and authentication policies. This policy controls what types of networks the end user may
create using the AnyConnect UI. Launch the Network menu and enable all options. Depending on the enterprise policy,
some options may be disabled as needed.

Network Configuration:
AnyConnect 3.0 is configured for use on a wired network that supports 802.1x authentication and MACsec encryption.
From the AnyConnect Profile Editor UI, follow the procedure below. This is a one-time procedure for
each type of profile, and the resulting profile is recommended to be distributed via the ASA.

Step 1 Click Add.


Step 2 Enter a Name for the connection (MACsec for example) and select Wired (802.3) Network
Step 3 At the bottom of the window, click Next
Step 4 Under Security Level, select Authenticating Network
Step 5 Within the Security section, set Key Management to MKA and Encryption to MACsec AES-
GCM-128
Step 6 Within the Port Authentication Exception Policy, select Dependent on 802.1x. In this example
the switch is configured for 'must secure', so none of the port exceptions under
'Dependent on 802.1x' need to be checked.
Step 7 Click Next
Step 8 Under Network Connection Type, select User Connection.

Note MACsec encryption is supported for Machine and User Connections; however, in this example only User
Connection is selected.

Step 9 Click Next
Step 10 Within EAP methods, select PEAP
Note In this example PEAP is used for authentication; however, MACsec encryption is supported with any EAP
method that generates a Session-Id (see RFC 5247), including EAP-TLS, EAP-PEAP, and EAP-FAST.

Step 11 Click Next


Step 12 Within User Credentials, select Prompt for Credentials
Step 13 Click Done.
Step 14 Navigate to the menu options in the upper left corner of the Profile Editor. Choose File -> Save.
Save the file as "configuration.xml".
To deploy this profile via the ASA, refer to Chapter 2, "Deploying the AnyConnect Secure Mobility Client," of the
Cisco AnyConnect Secure Mobility Client Administrator Guide.

2. Configuring the 3750-X/3560-X switch with Dot1X and MacSec

Base AAA Configuration: Base configuration for AAA (RADIUS). The configuration described below is required to
enable 802.1X authentication:
CTS-3750X(config)#aaa new-model
CTS-3750X(config)#aaa authentication dot1x default group radius
CTS-3750X(config)#aaa authorization network default group radius
CTS-3750X(config)#aaa accounting dot1x default start-stop group radius
CTS-3750X(config)#radius-server host <ip address> auth-port 1812 acct-port 1813 key cisco123
CTS-3750X(config)#radius-server vsa send authentication
CTS-3750X(config)#radius-server vsa send accounting
Note The command "aaa authorization network" is required for authorization methods such as dynamic
VLAN assignment or downloadable ACL.

802.1x and MACsec Configuration:


Enable 802.1X in global configuration mode
CTS-3750X(config)#dot1x system-auth-control

Enable 802.1X on interface


CTS-3750X(config)#int gi1/0/1
CTS-3750X(config-if)#switchport mode access
CTS-3750X(config-if)#authentication port-control auto
CTS-3750X(config-if)#dot1x pae authenticator
Enable MACsec on interface
CTS-3750X(config-if)#mka default-policy
CTS-3750X(config-if)#macsec
CTS-3750X(config-if)#authentication linksec policy
CTS-3750X(config-if)#authentication event linksec fail action next-method
CTS-3750X(config-if)#authentication event linksec fail action authorize VLAN

3. Configure 802.1x Components in ACS: 802.1x policy elements, Identity Stores, Access Services and Access
Service Selection are configured on ACS.

Configuring Identity Stores and an Identity

Step 1 In the left column, under Users and Identity Stores, select Identity Store Sequences and select
Create.
Step 2 Enter a name (in this example the name is "802.1x Identities"). Select the check box for Password
Based authentication method, and then add Internal Users to the Selected section. Click Submit.
Step 3 In the left navigation column, under Users and Identity Stores, expand Internal Identity Stores,
and select Users. Click Create.
Step 4 Enter a username. Enter and confirm a password under Password Information. Click Submit
when done.

Create an Authorization Profile: Create a wired authorization profile. This profile is used to define user access to
the network.

Step 1 In the left navigation column, under Policy Elements, expand Authorization and Permissions.
Then expand Network Access and select Authorization Profiles. Click Create.
Step 2 On the General tab, specify a name for this profile.
Step 3 Click the Common Tasks tab. Navigate down to 802.1x-REV. Select a Static value of "must-secure".
Note This linksec policy will override the policy configured on the interface.

Step 4 Click Submit.

Create an 802.1x Access Service

Step 1 In the left navigation column, under Access Policies, click Access Services.
Step 2 At the bottom of the resulting pane, click Create.
Step 3 Specify a name for this service.
Step 4 Under Access Service Policy Structure, choose Based on service template, and then click Select.
Step 5 Choose Network Access-Simple and click OK. Then click Next at the bottom of the resulting
window.
Step 6 There is no need to change any settings in this window. Click Finish. You will be prompted to modify
the service selection policy. Click No.

Define the Authorization Policy: Here the authorization policy of the access service is defined.
In the left navigation column, under Access Policies, navigate to the access service that was just created
(802.1x Service in this example).

Step 1 Expand 802.1x Service and click Authorization. Click Create.


Step 2 Specify a name for the rule
Step 3 Under Conditions, choose NDG:Location and click Select. Set the value to All Locations.
Step 4 Scroll down to the Results section and click Select. An Authorization Profiles dialog box
appears:
Step 5 Select the Finance 802.1x Authz profile that was created in the Create an Authorization Profile
section.
Step 6 Click OK. Click Save Changes.

Create a Service Selection Rule: Create a service selection rule. This rule ensures that the policies defined in an access
service (802.1x Service in this example) are applied to 802.1x requests.

Step 1 In the left navigation menu, under Access Policies, click Service Selection. At the bottom of the
right pane, click Create.
Step 2 Specify a name for the rule (Match 802.1x Requests is used here).
Step 3 Select Protocol and match it with Radius.
Step 4 Select Compound Condition. Under Condition, choose RADIUS-IETF for Dictionary and
Service-Type for Attribute. For Value, select Framed and click Add to add the condition to the
Current Condition Set.
Step 5 Under Results, select the access service that was created in the Create an 802.1x Access Service
section. Click OK. Click Save Changes.
Step 6 Finally: Launch AnyConnect and get connected.
The following dot1x and MAB port configuration was found to provide a satisfactory experience with most thin clients:
interface GigabitEthernet0/10
switchport access vlan XX
switchport mode access
dot1x mac-auth-bypass
dot1x pae authenticator
dot1x port-control auto
dot1x timeout tx-period 2
spanning-tree portfast

Appendix 3 – QoS settings in VXI

Many applications do not mark traffic with DSCP values, and even for those that do, the marking may
not be appropriate for every enterprise's priority scheme. Therefore, you should perform hardware-based
classification (using a Cisco Catalyst® or Cisco Nexus® Family switch) instead of software-based
classification. In testing, the markings were implemented on a Cisco Nexus 1000V Switch
whenever possible. Below is a configuration example used during testing of the Cisco VXI
system.

Classification:
ip access-list RDP
permit tcp any eq 3389 any
ip access-list PCoIP-UDP
permit udp any eq 50002 any
ip access-list PCoIP-TCP
permit tcp any eq 50002 any
ip access-list PCoIP-UDP-new
permit udp any eq 4172 any
ip access-list PCoIP-TCP-new
permit tcp any eq 4172 any
ip access-list ICA
permit tcp any eq 1494 any

ip access-list View-USB
permit tcp any eq 32111 any

ip access-list MMR
permit tcp any eq 9427 any

ip access-list NetworkPrinter
permit ip any host 10.1.128.10
permit ip any host 10.1.2.201

ip access-list CUPCDesktopControl
permit tcp any host 10.0.128.125 eq 2748
permit tcp any host 10.0.128.123 eq 2748

Class-maps:
class-map type qos match-any CALL-SIGNALING
match access-group name CUPCDesktopControl

class-map type qos match-any MULTIMEDIA-STREAMING
match access-group name MMR

class-map type qos match-any TRANSACTIONAL-DATA
match access-group name RDP
match access-group name PCoIP-UDP
match access-group name PCoIP-TCP
match access-group name PCoIP-UDP-new
match access-group name PCoIP-TCP-new

class-map type qos match-any BULK-DATA
match access-group name View-USB
match access-group name NetworkPrinter

Policy-map:
policy-map type qos pmap-HVDAccessPort
class CALL-SIGNALING
set cos 3
set dscp cs3
! dscp = 24
class MULTIMEDIA-STREAMING
set cos 4
set dscp af31
! dscp = 26
class TRANSACTIONAL-DATA
set cos 2
set dscp af21
! dscp = 18
class BULK-DATA
set cos 1
set dscp af11
! dscp = 10

Apply policy-map to the switch port to which the hosted virtual desktop (HVD) virtual machine connects:
port-profile type vethernet VM240
description Port profile for View HVD VM access
vmware port-group
switchport mode access
switchport access vlan 240
no shutdown
state enabled
system vlan 240
service-policy input pmap-HVDAccessPort

Nexus 1000v QoS information:
http://www.cisco.com/en/US/partner/docs/switches/datacenter/nexus1000/sw/4_0/qos/configuration/guide/qos_2classification.html#wp1067764

Note These examples are meant to be guidelines for deploying QoS in a Cisco VXI network and
should not be applied without consideration given to all traffic flows within the enterprise.

In a campus network, the bandwidth is high enough that contention for resources should be
minimal. However, slower connections in a branch WAN router network need to be examined. The
egress point from the high-speed connections of the branch-office LAN to the slower-speed
links of the WAN is where bandwidth contention is likely to occur. Service policies that constrain
the amount of bandwidth dedicated to a given protocol are defined and applied at this point.
These same queuing and bandwidth configurations can be placed anywhere there is a concentration
of Cisco VXI endpoints, to enforce the appropriate response in case of traffic congestion. Below is
an example of the bandwidth service policies used during Cisco VXI system testing.

Class Maps - defining the buckets:

class-map match-any BULK-DATA
match dscp af11 af12 af13
class-map match-all NETWORK-CONTROL
match dscp cs6
class-map match-all MULTIMEDIA-CONFERENCING
match dscp af41 af42 af43
class-map match-all VOICE
match dscp ef
class-map match-all SCAVENGER
match dscp cs1
class-map match-all CALL-SIGNALING
match dscp cs3
class-map match-all TRANSACTIONAL-DATA
match dscp af21 af22 af23
class-map match-any MULTIMEDIA-STREAMING
match dscp af31 af32 af33

Policy Maps - Assigning the bandwidth per bucket:


policy-map WAN-EDGE
class VOICE
priority percent 10
class NETWORK-CONTROL
bandwidth percent 2
class CALL-SIGNALING
bandwidth percent 5
class MULTIMEDIA-CONFERENCING
bandwidth percent 5
random-detect dscp-based
class MULTIMEDIA-STREAMING
bandwidth percent 5
random-detect dscp-based
class TRANSACTIONAL-DATA
bandwidth percent 65
random-detect dscp-based
class BULK-DATA
bandwidth percent 4
random-detect dscp-based
class SCAVENGER
bandwidth percent 1
class class-default
bandwidth percent 25
random-detect

Cisco VXI thin-client endpoints do not typically provide the capability to mark the session traffic.
Therefore, the same marking that was performed on the Cisco Nexus 1000V in the data center for
outbound desktop virtualization traffic must be performed at the branch office on behalf of the
endpoints for the traffic returning to the data center virtual machine. Below is an example of a
branch switch configuration used during Cisco VXI system testing.

Endpoint Classification:
ip access-list RDP
permit tcp any eq 3389 any
ip access-list PCoIP-UDP
permit udp any eq 50002 any
ip access-list PCoIP-TCP
permit tcp any eq 50002 any
ip access-list PCoIP-UDP-new
permit udp any eq 4172 any
ip access-list PCoIP-TCP-new
permit tcp any eq 4172 any
ip access-list ICA
permit tcp any eq 1494 any

ip access-list View-USB
permit tcp any eq 32111 any

ip access-list MMR
permit tcp any eq 9427 any

ip access-list NetworkPrinter
permit ip any host 10.1.128.10
permit ip any host 10.1.2.201

ip access-list CUPCDesktopControl
permit tcp any host 10.0.128.125 eq 2748
permit tcp any host 10.0.128.123 eq 2748

Class-maps:
class-map type qos match-any CALL-SIGNALING
match access-group name CUPCDesktopControl

class-map type qos match-any MULTIMEDIA-STREAMING
match access-group name MMR

class-map type qos match-any TRANSACTIONAL-DATA
match access-group name RDP
match access-group name PCoIP-UDP
match access-group name PCoIP-TCP
match access-group name PCoIP-UDP-new
match access-group name PCoIP-TCP-new

class-map type qos match-any BULK-DATA
match access-group name View-USB
match access-group name NetworkPrinter

Policy-map:
policy-map type qos pmap-HVDAccessPort
class CALL-SIGNALING
set cos 3
set dscp cs3
! dscp = 24
class MULTIMEDIA-STREAMING
set cos 4
set dscp af31
! dscp = 26
class TRANSACTIONAL-DATA
set cos 2
set dscp af21
! dscp = 18
class BULK-DATA
set cos 1
set dscp af11
! dscp = 10

Appendix 4 – CUPC and Deskphone control

How to deliver a UC video/voice call solution using deskphone control in a hosted virtual desktop environment

This section describes the required components and the necessary steps to achieve a basic Cisco Unified Personal
Communicator (CUPC) environment (IM, directory lookup, and voice/video calls) running in a Citrix XenDesktop hosted
virtual desktop (HVD) deployment.
The required components are listed below:
- Cisco Unified Communications Manager (Cisco UCM) 7.1(5) or later
- Cisco Unified Presence Server 8.0 or later
- Cisco IP phones 9971 or 9951 with USB video camera (Phone load <9-1-0VD-6>)
- VXC-2112 clients
- LDAP server (Microsoft Active Directory is used in this example)
- Citrix XenDesktop 5

Cisco Unified Communications Manager


Delivering the UC experience to XenDesktop HVD users can be accomplished on either virtualized Cisco UCM or MCS
server-based Cisco UCM deployments. For details on virtualizing your Cisco UCM, see
http://www.cisco.com/en/US/partner/docs/voice_ip_comm/cucm/virtual/servers.html
Cisco UCM administration does not require any extra steps or upgrades to achieve UC with Citrix XenDesktop. The
provisioning and assignment of users, devices, and deskphone control remains the same as if CUPC were
installed on regular laptops; instead, the CUPC application is installed on the user's Hosted Virtual Desktop. (Note:
CUPC is not supported in a Hosted Shared Virtual Desktop environment.) Cisco UCM must be integrated with the Cisco Unified
Presence server (CUP) in order to deliver the CUPC sign-in, instant message, directory lookup, and deskphone control features.
To integrate Cisco UCM with CUP, follow the steps below:
• User and Device Configuration on Cisco Unified Communications Manager

http://www.cisco.com/en/US/docs/voice_ip_comm/cups/8_0/english/install_upgrade/deployment/guide/dgcucm.html#wp1099013

• Configure the Presence Service Parameter

http://www.cisco.com/en/US/docs/voice_ip_comm/cups/8_0/english/install_upgrade/deployment/guide/dgcucm.html#wp1063445

• Configure the SIP Trunk on Cisco Unified Communications Manager

http://www.cisco.com/en/US/docs/voice_ip_comm/cups/8_0/english/install_upgrade/deployment/guide/dgcucm.html#wp1063383

• Configure the SIP Trunk Security Profile for Cisco Unified Presence

http://www.cisco.com/en/US/docs/voice_ip_comm/cups/8_0/english/install_upgrade/deployment/guide/dgcucm.html#wp1050014

• Configuring the SIP Trunk for Cisco Unified Presence

http://www.cisco.com/en/US/docs/voice_ip_comm/cups/8_0/english/install_upgrade/deployment/guide/dgcucm.html#wp1063105

• Create a deskphone control application user on Cisco UCM (required for deskphone control)

This user is created on CUCM as an "Application User" and must match the settings under CUPC "Desk Phone
Control Settings".

• Verifying That the Required Services are Running on Cisco Unified Communications Manager

http://www.cisco.com/en/US/docs/voice_ip_comm/cups/8_0/english/install_upgrade/deployment/guide/dgcucm.html#wp1050114

For full details on how to configure Cisco Unified Communications Manager for Cisco Unified Presence see:
http://www.cisco.com/en/US/partner/docs/voice_ip_comm/cups/8_0/english/install_upgrade/deployment/guide/dgcucm.html#wp1063445 (Section: Configuring Cisco Unified Communications Manager for Integration with Cisco Unified Presence)

Each Cisco UCM end user that will use CUPC must be properly licensed. For instructions on how to
license your CUPC users see:
http://www.cisco.com/en/US/partner/docs/voice_ip_comm/cups/8_0/english/install_upgrade/deployment/guide/dglic.html#wp1081264

For all licensing requirements, see:

http://www.cisco.com/en/US/partner/docs/voice_ip_comm/cups/8_0/english/install_upgrade/deployment/guide/dglic.html#wp1081264

Cisco Unified Presence server


As with Cisco UCM, the Cisco Unified Presence server can be installed in either a virtualized environment or an MCS
server-based deployment. CUP administration also remains the same as if it were installed to serve a laptop or desktop
environment (a non-hosted virtual desktop environment). To provision directory lookup, an LDAP server is required and must
be integrated with the CUP server. For details on installing the CUP server in a virtual environment, see
http://www.cisco.com/en/US/docs/voice_ip_comm/cups/8_0/english/install_upgrade/installation/guide/cpinvmware.html

Note LDAP-to-CUP integration is used for directory lookup only; for user provisioning and user
authentication, the LDAP server must be integrated with Cisco UCM. This is outside the scope of this
document, but information on how to integrate LDAP with Cisco Unified Communications Manager can
be found here:
http://www.cisco.com/en/US/partner/docs/voice_ip_comm/cucm/admin/8_5_1/ccmcfg/bccm-851-cm.html

Follow the steps below to provision and assign features to CUPC users:
• Configure CUCM Publisher
This value is configured during the Cisco Unified Presence installation process. If this field needs to be changed, see:
http://www.cisco.com/en/US/docs/voice_ip_comm/cups/8_0/english/install_upgrade/installation/guide/cppostins.html#wp1075065
• Configure Cisco Unified Presence settings

• Configure Presence Gateway


You must configure Cisco Unified Communications Manager as a Presence Gateway on Cisco Unified Presence to
enable the SIP connection that handles the availability information exchange between Cisco Unified
Communications Manager and Cisco Unified Presence. The Cisco Unified Presence server sends SIP subscribe
messages to Cisco Unified Communications Manager over a SIP trunk, which allows the Cisco Unified Presence
server to receive availability information (for example, phone on/off-hook status). For details on how to configure
the Presence Gateway, see:
http://www.cisco.com/en/US/docs/voice_ip_comm/cups/8_0/english/install_upgrade/deployment/guide/Deployment_Guide_Cisco_Unified_Presence_85_March16.pdf (search for "Presence gateway")
• Enable Instant Messaging
You can enable instant messaging on the CUP server through the Messaging > Settings menu. See:
http://www.cisco.com/en/US/docs/voice_ip_comm/cups/8_0/english/install_upgrade/deployment/guide/Deployment_Guide_Cisco_Unified_Presence_85_March16.pdf (search for "instant messaging")
• Create incoming and outgoing ACL (access list)
Access lists for both incoming and outgoing communication between the CUCM and CUP servers must be configured.
Either IP addresses or FQDNs (if a DNS server is deployed) can be used. For the outgoing ACL, configure the CUCM
server only; for the incoming ACL, configure both the CUCM and CUP servers. These values are typically populated
automatically by the CUP server during install and integration with the CUCM, but if errors occur you must enter these
values manually.

• Configure Unified Personal Communicator settings


TFTP and proxy settings must be configured under Application > Cisco Unified Personal
Communicator > Settings. The TFTP server will be your CUCM publisher (or the CUCM node handling the CUPC
application). See
http://www.cisco.com/en/US/docs/voice_ip_comm/cups/8_0/english/install_upgrade/deployment/guide/Deployment_Guide_Cisco_Unified_Presence_85_March16.pdf (search for "Configuring the Proxy Listener and TFTP Addresses")
• Configure LDAP host and profile
LDAP user provisioning, user authentication, and directory lookup are all supported. To support LDAP
user provisioning and authentication, the LDAP integration is done with Cisco Unified Communications Manager;
for directory lookup services, the LDAP integration is done on the Cisco Unified Presence server. For details see:
http://www.cisco.com/en/US/docs/voice_ip_comm/cups/8_0/english/install_upgrade/deployment/guide/Deployment_Guide_Cisco_Unified_Presence_85_March16.pdf (for CUCM LDAP integration, search for "LDAP
integrations"; for LDAP-to-CUP integration, search for "Creating LDAP Profiles and Adding Cisco Unified Personal
Communicator Users to the Profile" and "Configuring LDAP Server Names and Addresses for Cisco Unified Personal
Communicator")
• Configure DeskPhone control settings
To configure deskphone control, both the CUCM and CUP servers must be configured for the CTI (Computer
Telephony Integration) application. For details see:
http://www.cisco.com/en/US/docs/voice_ip_comm/cups/8_0/english/install_upgrade/deployment/guide/Deployment_Guide_Cisco_Unified_Presence_85_March16.pdf (search for "How to Configure CTI Gateway
Settings for Desk-Phone Control on Cisco Unified Presence")
• Assign desk phone control feature to users
Desk phone control privileges are easily assigned to users by using the CUP GUI interface.

Citrix XenDesktop

To install Citrix XenDesktop, you require a virtual Windows Server 2003 (XenDesktop 4) or Windows Server 2008 (XenDesktop 5) machine for the connection
broker application (Controller). For the HVD images, Windows 7 and Windows XP are supported and are installed with the Citrix VDA agent. Citrix
XenDesktop can be hosted on a VMware ESX or ESXi, Citrix XenServer, or Microsoft Hyper-V hypervisor environment. XenDesktop requires
Active Directory. For complete details on XenDesktop installation and configuration, see
http://support.citrix.com/proddocs/index.jsp?topic=/xenapp5fp-w2k8/

Appendix 5 – Netflow and Energywise


Cisco NetFlow provides statistics on packets flowing through network elements (routers and switches)
and can be used to monitor traffic utilization in a VXI system. NetFlow is supported on the following
routing and switching platforms: Cisco ISR and 7200, 10000, 12000, and CRS-1 Series Routers; Catalyst 4K,
Catalyst 6500, and Nexus switches. Refer to the Cisco VXI CVD for guidelines on implementing NetFlow in
a VXI system.

Steps to enable Netflow in a VXI network:


1. Configure NetFlow on the network elements (routers and switches).
• Set the NetFlow collector IP address and the NetFlow export version.
• Enable NetFlow on devices and interfaces.
2. Install the NetFlow collector as the management station.
• Provision the NetFlow collector with the IP address and SNMP attributes of the NetFlow-enabled routers
and switches.
3. Start the NetFlow collector to begin NetFlow data collection.
4. Run a NetFlow analyzer report on the collected data.

The following configuration implements NetFlow Data Export (NDE) on the Catalyst 6504 campus core
switch:

mls netflow
mls netflow interface
mls flow ip interface-full
mls nde sender
!
interface TenGigabitEthernet1/4
ip address 10.1.254.45 255.255.255.252
ip flow ingress
ip flow egress
!
interface TenGigabitEthernet1/5
ip address 10.1.254.41 255.255.255.252
ip flow ingress - enable netflow for inbound traffic
ip flow egress - enable netflow for outbound traffic
!
ip flow-export version 9 - Netflow format version 9
ip flow-export destination 10.0.128.49 2055 - IP address of the Netflow collector

The following sample configuration implements NetFlow data export on the 7206VXR campus edge
router:

interface GigabitEthernet0/3
ip flow ingress - enable netflow for inbound traffic
!
ip flow-export source GigabitEthernet0/1
ip flow-export version 9 - Netflow format version 9
ip flow-export destination 10.14.1.206 3000 - IP address of the Netflow collector
ip flow-cache timeout active 1
ip flow-cache timeout inactive 15
snmp-server ifindex persist

Cisco's EnergyWise Orchestrator can be used to monitor, control, and conserve power consumption on
EnergyWise-enabled network elements and endpoints in a VXI deployment. VXI desktop virtualization
endpoints such as thick PCs and thin clients running the Orchestrator Client can be managed directly by the
EnergyWise Orchestrator. The power consumption of DV endpoints such as VXC clients (zero clients) that
use PoE can also be monitored and controlled by managing the attached port on an EnergyWise-enabled
switch. Refer to the Cisco VXI CVD for guidelines on implementing EnergyWise in a VXI system.

Steps for integrating Energywise in a VXI network:


1. Configure EnergyWise on the domain members (router and switches)
• Set the domain and management password on the domain members connected to the endpoints.
• Configure energywise attributes on devices and interfaces connected to endpoints
2. Install the Orchestrator client on PC end points
3. Install the Orchestrator server (including the Proxy server) as the management station.
• Provision the Proxy server with the domain, password, and IP address of the primary domain
member
4. Start the Orchestrator server to begin discovery of EnergyWise devices and PCs
Here is a sample configuration that enables EnergyWise management of the PoE+ ports (connected to
VXC clients) on the Catalyst 4500E campus access switch:

energywise domain Campus security shared-secret 0 cisco - sets up the domain


energywise importance 70 - sets the device priority
energywise name Campus_switch - sets device identity
energywise keywords Campus_switch - sets device identity
energywise role Campus_switch - sets the device function
energywise management security shared-secret 0 cisco - sets up management communications
!
interface GigabitEthernet0/3
energywise importance 60 - sets the device priority
energywise role vxc-client-1 - sets the device function
energywise keywords Campus.switch2.port0/3 - sets the device locations
energywise name vxc-client-1 - sets the device identity
