
Cisco Virtualization Experience Infrastructure

Cisco Validated Design


November 15, 2010

Contents
Contents 1
Preface 5
Document Objectives 5
Questions and Answers 7
What’s Changed 8
Revision History 8
Introduction 8
Virtualization: Safeguarding the Desktop 8
Platform Independence 9
Challenges 9
What Is Cisco VXI? 9
Desktop Virtualization 11
Overview 11
User Experience on Virtual Desktops 13
User Input Response Time (Mouse and Keyboard Movements) 13
Application Response Time 13
Display Quality 13
Interactive Multimedia 13




Desktop Authentication 15
Local Devices 15
USB Redirection 16
Printing 16
Local Drives 16
Display Protocols and Connection Brokers 16
Hypervisor for DV 18
Hosting the Virtual Desktops 20
Compute 21
Network 21
Storage 22
Typical DV Deployment 22
Enterprise DV Policy and User Profiles 23
Workloads, Peak Workloads, and their Design Impact 24
Workload 24
Peak Workload 25
VMware DV 25
VMware Components 25
Citrix DV 27
Citrix Components 27
For More Information 29
Deployment Models 30
Cisco VXI: New User Deployment Possibilities 32
Centralized, Virtual User Computing 33
A Shift in Connectivity 34
Data Center 40
Compute 41
Fabric Interconnect and Switching 43
Storage Data Flows 43
Storage Considerations 44
Hosting Desktop Virtualization in the Data Center 44
Allocation of Resources 45
DV Machine Density 45
Network Connectivity Allocation 46
Implementing DV Using VMware ESX/ESXi Hypervisor 47
Implementing DV Using Citrix XenServer Hypervisor 50
Connecting the Blade Server to the Network 51
Storage Options 53
For More Information 55


Network 56
Security 58
Optimization 59
Network Interception 61
Cisco WAAS Deployment in a Cisco VXI Network 63
Cisco WAAS Configuration Tasks 64
Availability 65
ACE Deployment 66
ACE Configuration Tasks 67
Campus 68
Branch 69
Teleworker 70
Fixed Teleworker 70
Mobile Teleworker 71
For More Information 72
Endpoints and Applications 73
DV Endpoints 74
Zero Clients 74
Thin Clients 75
Thick Clients 75
DV Endpoint Management 75
DV Endpoint Printing (Network Printing) 75
DV Endpoint Access to USB (Storage or Printing) 76
DV Tested Endpoints 77
Unified Communications Endpoints 78
Unified Communications Endpoints and Failover 78
Applications 78
Cisco Unified Personal Communicator 78
Cisco Unified Personal Communicator, DV and SRST 81
For More Information 82
Management and Operations 82
Cisco VXI Management and Operations Architecture 84
Scalability 84
High Availability 84
Management Traffic 85
Desktop Management 85
Endpoint Management 87
Cisco VXI Management Tool Summary 87
Data Center and Applications 87
Network Infrastructure 88


Desktop Virtualization Endpoints 89


Unified Communications 90
Cisco VXI Management Tasks and Work Flows 90
Cisco VXI Management Tools 97
VMware vCenter and vSphere 97
Devon Thin-Client Management 106
EMC Navisphere and Unisphere Management Suite 108
NetApp Virtual Storage Console and StorageLink 110
Cisco NAM 111
For More Information 121
Cisco VXI Security 124
Data Center 125
Network 126
Campus Access 127
Branch Access 127
Teleworker Access 128
For More Information 129
Quality of Service 129
Table of QoS Markings 130
Data Center 131
Markings 131
Queuing for Optimum Cisco Unified Computing System Performance 133
Network 133
Endpoint 135
For More Information 136
Performance and Capacity 137
Computing and Storage Capacity Planning 137
Resource Utilization in Current Environment 138
Estimating Resource Requirements in a Virtualized Environment 139
Estimating CPU 139
Estimating Memory 141
Estimating Server Capacity 143
Virtual CPU Considerations 144
Validating Capacity Estimates 150
Workload Considerations 150
Capacity Planning for Cisco VXI Service and Data Center Infrastructure Components 151
Desktop Virtualization Service Infrastructure 152
Postdeployment Performance Monitoring 154
WAN Sizing 154


For More Information 156


Acronyms 158
Additional References 158
Cisco Validated Design 166

Preface
This Cisco Validated Design Guide provides design considerations and guidelines for deploying an
end-to-end Cisco® Virtualization Experience Infrastructure (Cisco VXI). The guide is intended for use
by network engineers and architects considering the implementation of the Cisco Virtualization
Experience Infrastructure system.
The Cisco VXI system places a user’s computing environment in the data center and allows it to be
accessed through a variety of endpoints, integrates it with collaboration tools, and helps to ensure a
high-quality user experience.
This guide discusses the system as a whole and groups subsystem descriptions as separate chapters. The
chapters describe major functional groups—such as data center, network, and endpoints—as well as
pervasive system functions, such as management, security, and quality of service.
The chapters have been organized to offer a modular discussion of the Cisco VXI system.

Document Objectives
This document provides design considerations and guidelines for deploying an end-to-end Cisco
Virtualization Experience Infrastructure (Cisco VXI). The Cisco VXI system places a user’s computing
environment in the data center and allows it to be accessed via a variety of endpoints, integrates it with
collaboration tools, and ensures a quality user experience.
The system as a whole is discussed, and subsystem descriptions have been grouped under separate
chapters. The grouping is done along the lines of major functional groups (Data Center, Network, and
Endpoints) as well as pervasive system functions (Management, Security, and Quality of Service).
The chapters have been organized to offer a modular discussion of the Cisco VXI system. Table 1
describes each chapter/module.

Table 1 Chapter/Module Descriptions

Introduction: This chapter offers a link between the business drivers leading to the consideration of a virtualized environment and the Cisco VXI use cases.

Desktop Virtualization: This chapter offers design guidance on the deployment of a core Virtual Desktop Infrastructure system using either Citrix XenDesktop or VMware View.

Deployment Models: This chapter classifies the deployment topologies based on user location in relationship to the hosted computing resources. The major dimensions of network connectivity and compute resource location are used to guide the reader through design considerations. The shifts in computing and networking loads are exemplified.

Data Center: This chapter offers generic design guidance for storage subsystems, derived from testing on a finite set of laboratory storage configurations.

Network: This chapter discusses the network-based services which enhance the Desktop Virtualization experience through the use of specific products and technologies. It also offers specific recommendations on the deployment best practices of targeted network functions in the campus, WAN, and branch networks.

Endpoints and Applications: This chapter highlights DV and Collaboration endpoint choices and also offers guidance on the deployment and use of specific applications within the virtualized desktop. It also offers guidance on the integration of UC functionality within the Desktop Virtualization user experience.

Management and Operations: This chapter maps out the various management platforms supporting the VXI subsystem components. It also offers workflows of common management tasks required to operate a VXI system.

Cisco VXI Security: This chapter focuses on the features and policies to be implemented to safeguard the VXI system.

Quality of Service: This chapter offers a compiled view of the protocol characteristics required to classify, prioritize, and properly provision VXI traffic flows.

Performance and Capacity: This chapter offers design guidance on the compute, storage, and networking sizing of a VXI system.

Acronyms: List of acronyms.

Additional References: List of links to additional information.

The Cisco VXI system described here in reference form represents the totality of components supported
by the design. However, this list of components is not intended to preclude substitutions. For instance,
a finite set of access switching products was used during the testing of the reference architecture, but
this does not mean that other access switching products are incompatible with the Cisco VXI system.
Where required, specific component requirements are highlighted.


A Cisco VXI system is based on many subsystems, and many customers will already have deployed
several of them. For instance, most customers will already have a routing and switching infrastructure
in place. This document is focused on the VXI-specific design guidance needed to augment a preexisting
infrastructure so that VXI functionality can be implemented.
This document does not cover all of the foundational technologies and reference designs for routing,
switching, security, storage, and virtualization. It refers to detailed documents covering those
technologies and reference designs.
The Cisco VXI system is based on the integration of subsystems from different vendors. References to
other vendors' product and system documentation are offered, and while every effort has been made to
ensure that these references are accurate at the time of publication, changes to a vendor product or
design guidance may make specific references in this guide out of date. Please contact Cisco if you find
such disparities.

Questions and Answers


Table 2 provides links to sections that answer frequently asked questions regarding the Cisco VXI Cisco
Validated Design.

Table 2 Cisco VXI Frequently Asked Questions

Q: How is the VXI system sized based on the quantity of users?
A: The Estimating Server Capacity section of the Performance and Capacity chapter offers design guidance on CPU, memory, and storage requirements.

Q: How is Unified Communications functionality integrated into the user experience within the VXI system?
A: See the Cisco Unified Personal Communicator section in the Endpoints and Applications chapter.

Q: How is Cisco WAAS inserted into the network flow?
A: See the Network Interception section of the Network chapter.

Q: What are the main building blocks involved in the connection of a DV endpoint to a Hosted Virtual Desktop?
A: The Display Protocols and Connection Brokers section of the Desktop Virtualization chapter provides an illustration of the connection flow.

Q: What are the main Desktop Virtualization endpoint types?
A: See the Desktop Virtualization Endpoints section of the Endpoints and Applications chapter.

Q: What are the Quality of Service (QoS) characteristics of the various VXI traffic types?
A: See Table 24 in the Quality of Service chapter for details on the QoS markings of VXI-related protocols.

Q: What changes in traffic flows result from the adoption of desktop virtualization in comparison to PC environments?
A: See the Centralized, Virtual User Computing section of the Deployment Models chapter.

Q: What considerations enter into the sizing of the WAN bandwidth required to deploy desktop virtualization users in branch locations?
A: See the WAN Sizing section of the Performance and Capacity chapter.

Q: What other VXI-related documents are available?
A: Links to additional reference material are available at the end of each chapter, as well as in the Additional References section at the end of the document.

Q: What traffic types can be optimized by Cisco Wide Area Application Services (WAAS)?
A: Table 6 under the Optimization section of the Network chapter lists the main traffic types related to VXI and indicates which optimization features Cisco WAAS can apply.

Q: What Virtual Local Area Network (VLAN) design should I consider for the deployment of VXI in the Data Center?
A: Table 4 of the Data Center chapter offers guidance on the list of VLANs required for a VXI deployment.

Q: What workload profile was used in the testing of the VXI system?
A: The Workload Considerations section of the Performance and Capacity chapter offers the details of the user profiles used to test the VXI system.

What’s Changed
The information provided herein represents the initial publication of Cisco VXI Design Guidance.

Revision History
This document was originally published on November 16, 2010. As revisions are made, they will be
documented in this section.

Introduction
The main goal of desktop virtualization is to reduce total cost of ownership while enhancing security and
increasing business agility without compromising the quality of the user experience.

Virtualization: Safeguarding the Desktop


Centralized desktop virtualization (DV) takes the user's desktop environment and hosts it in a data
center-based virtual infrastructure.
Since most of the desktop-related IT investment is now centralized, the management and operation of
the enterprise’s computing fleet is simplified. Endpoints are easier to maintain and less expensive. The
reduction of cost applies to both the capital expenditure (where the purchase cost of a DV endpoint is
lower than that of a regular laptop/desktop) and the operational expenditure surrounding the operation
of complex, “thick” endpoints. For instance, activities such as software asset tracking, licensing,
patching, backup (both in time and bandwidth costs), and failure recovery are easier. In a virtualized
desktop environment, an endpoint experiencing a failure is mainly a hardware replacement issue with
the user’s files and desktop environment left intact within the data center’s virtualized environment.
Data security is improved. A user has access to data as if they were connecting through a regular desktop
computing environment, but the actual data is not delivered to or stored in the user’s computing endpoint;
rather, the data is merely viewed remotely. This presents two main advantages from a security
standpoint:


• Electronic data leakage can be controlled. The user’s computing endpoint may be equipped with
removable storage media such as CD/DVD drives or USB–connected memory, but the centrally
applied policies control whether these devices are made available to the user. A user can be given
access to a file, but has no access to means by which to copy this file to removable electronic media.
• Loss, damage, or theft of computing endpoints has no impact on the data. A typical laptop contains
many files that can be damaging to an enterprise if found in the wrong hands. Likewise, an employee
whose next great idea is on a single laptop loses more than the cost of the laptop if it is damaged or
stolen. By comparison, a virtualized endpoint, if lost or damaged, contains no company data: there
are no files to be lost, damaged or compromised within the remote computing endpoint. All the
important data is centrally located and left intact following an endpoint’s damage or theft. This
advantage applies even when a user’s computing endpoint is an actual “thick” laptop running a DV
client: although a user may require a more conventional laptop to perform their tasks, the most
sensitive computing resources can be remotely hosted in a virtual desktop.

Platform Independence
The endpoint can be chosen from an ever-expanding list of devices, including smartphones, tablet
computing platforms, and laptop and desktop computers, running a multitude of operating systems.
Thanks to virtualization, the enterprise can offer a consistent user experience across multiple devices.
An employee can now access their desktop environment from various endpoints during the day: a
desktop computer while working from headquarters, a thin endpoint when visiting a remote branch
office, a mobile smartphone while traveling, and a tablet when moving around the enterprise. Even when
working from home, an employee is provided with the means to “attach” to the enterprise through the
use of a personal computer and a VPN client.

Challenges
A user’s IT environment is not entirely contained within the desktop. Collaboration tools such as audio
and video telephony and printing services may not be served by DV technologies to the user’s
satisfaction.
For instance, remotely accessing audio, video, and interactive multimedia resources from within a virtual
desktop may offer a degraded level of performance, while increasing the demands placed on the IT
infrastructure such as WAN bandwidth. A telephone call, if placed from within the remote desktop and
carried within the remote desktop’s display protocol, may be connected to the user’s computing endpoint
without any network-based traffic prioritization. This effectively prevents the network’s QoS
functionality from affording the audio media the special handling it requires to prevent loss of quality
due to delay, jitter, and packet drops. Likewise, the streaming of interactive multimedia content may
inflate the bandwidth consumption of the DV endpoint, thus reducing the quality of the experience of
not only the viewer of the media, but also that of other colocated users relying on the same WAN
connection.

What Is Cisco VXI?


Cisco Virtualization Experience Infrastructure enhances desktop virtualization in the following
ways.

Integration into Cisco Unified Communications

A user can now connect to the hosted virtual desktop (HVD) to make and receive voice or video calls
from Cisco Unified Personal Communicator. Cisco Unified Personal Communicator (CUPC) acts as
a controller for the user's desk phone. The control plane of the phone is integrated into the user's
desktop. However, the media plane is the same as that of any desk phone: the media traffic is
connected outside of the remote desktop display protocol. This allows the network to perform
quality-of-service (QoS) functions such as Real-Time Transport Protocol (RTP) packet
prioritization, Call Admission Control, and path optimization. For example, two Cisco VXI users in
the same branch can call each other, and while the control (signaling) plane resides in the data
center, the audio connection between the users never leaves the branch. The branch network applies
packet prioritization to the RTP flow, and the two phones connect directly to each other, without the
need for the voice to traverse the WAN twice, to reach the data center. This same approach applies
whether the endpoints are audio-capable only or video-capable.
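
To illustrate why keeping the media plane outside the display protocol matters, the following minimal sketch (in Python, purely illustrative) shows how VXI traffic classes might be mapped to DSCP markings so that voice RTP can be prioritized independently of the display protocol. The values used here are common conventions assumed for this example; the validated markings are listed in the Quality of Service chapter.

```python
# Illustrative sketch only: map VXI traffic classes to DSCP values so voice
# media (which flows outside the display protocol) can be prioritized.
# The values below are common conventions assumed for this example; see the
# Quality of Service chapter for the markings validated in this design.

DSCP = {
    "voice-rtp": 46,         # EF: audio media between endpoints
    "video-rtp": 34,         # AF41: interactive video media
    "call-signaling": 24,    # CS3: signaling between CUPC/phone and Unified CM
    "display-protocol": 18,  # AF21 (assumed): ICA/PCoIP/RDP session traffic
    "default": 0,            # best effort: everything else
}

def mark(traffic_class: str) -> int:
    """Return the DSCP value a switch or router would set for this class."""
    return DSCP.get(traffic_class, DSCP["default"])

if __name__ == "__main__":
    for cls in ("voice-rtp", "display-protocol", "bulk-print"):
        print(f"{cls:17s} -> DSCP {mark(cls)}")
```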

Simplified Configuration Through the Cisco Unified Computing System Virtualized Computing Platform

The Cisco Unified Computing System™ integrates the compute, virtualization hypervisor, fabric
interconnect, and storage functions usually found in separate platforms. This degree of integration
benefits both the performance and manageability of the data center. For instance, the Cisco Nexus®
1000V Virtual Ethernet Module places switching, traffic isolation, and policy-insertion capabilities
within the virtualized environment. This means two virtual computers, which must be connected per
a Layer 2 policy, can communicate directly within the virtualized environment, without the frames
being switched through the physical switching infrastructure. Likewise, if a user should be
prevented from connecting to a specific file system, the policy can be applied directly within the
virtualized environment by using Cisco’s virtualized switching platform.

Network Optimization

Many of the applications we use today are based on network protocols that were not designed to be
economical with network bandwidth, and they may not perform well in the presence of bandwidth limits
or network latency. Cisco Wide Area Application Services (WAAS) technology can improve application
response time by reducing the bandwidth an application consumes across the WAN. For instance,
remote print operations can be launched from a user's virtual desktop
within the data center, toward a network printer at a remote branch. WAAS can automatically
recognize the printing protocol, compress it to reduce WAN bandwidth consumption, and spool the
resulting print file at the remote branch, even if the remote printer is busy. This translates into a
faster application response time for the user while consuming less bandwidth on the WAN. This has
the dual benefit of improving the user experience while allowing a higher quantity of users to be
served by a given WAN link. The core of Cisco WAAS functionality relies on Transport Flow
Optimization (TFO), Data Redundancy Elimination (DRE) and Lempel-Ziv (LZ) compression
technologies, which combine to maximize bandwidth consumption efficiency.
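
As a rough intuition for why WAAS helps repetitive DV and printing traffic, the sketch below (illustrative Python, not the WAAS implementation) mimics the core idea behind Data Redundancy Elimination: chunks of the byte stream that have been seen before are replaced by short signatures that the peer resolves from its local cache, so repetitive traffic shrinks dramatically on the WAN. The chunk size and payload are assumptions chosen for the example.

```python
# A toy illustration of the idea behind Data Redundancy Elimination (DRE):
# repeated byte patterns are replaced by short signatures that the far-end
# peer resolves from its local cache. Cisco WAAS implements this far more
# elaborately (with TFO and LZ compression layered on top); this sketch only
# shows why repetitive DV and printing traffic compresses so well.

import hashlib

CHUNK = 256  # bytes per chunk (arbitrary for the example)

def dre_encode(data: bytes, cache: dict) -> list:
    """Split data into chunks; send a hash reference for chunks seen before."""
    out = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        sig = hashlib.sha1(chunk).digest()
        if sig in cache:
            out.append(("ref", sig))    # 20-byte reference instead of 256 bytes
        else:
            cache[sig] = chunk
            out.append(("raw", chunk))  # first occurrence travels in full
    return out

if __name__ == "__main__":
    cache = {}
    payload = b"PRINT-JOB-HEADER" * 2048   # highly repetitive, like print data
    encoded = dre_encode(payload, cache)
    sent = sum(20 if kind == "ref" else len(c) for kind, c in encoded)
    print(f"original {len(payload)} bytes -> {sent} bytes on the WAN")
```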

Security

Connectivity into the network can be controlled starting at the access layer, where a device first
attaches to the virtual infrastructure. Industry-standard 802.1X can be deployed to control network
access at the port level. Cisco’s access switches can thus partake in the enforcement of a security
policy at the physical level, as well as interact with the credentials-based access control deployed
through integration with directory services such as Microsoft Active Directory.
Teleworker users, such as mobile users using laptop computers, as well as fixed users, such as
home-based teleworkers, can use Cisco's award-winning VPN technology to connect to the
enterprise network across the Internet. The user’s virtual desktop data is fully protected as it
traverses the Internet in a VPN tunnel, encrypted. This technology can also be deployed for traffic
traversing a managed WAN.


End-to-End Testing

Cisco VXI is an end-to-end system, integrating network and collaboration functions into the virtual
desktop experience. It has been designed and tested as an integrated whole, and mitigates the system
integration investment required if one were to deploy DV, compute, networking, storage, security,
optimization, and load balancing technologies in isolation.

Desktop Virtualization

Overview
Today's enterprise desktop environment, based on the PC, is highly distributed, giving users ample
computing power and application flexibility. This desktop model works very well for the business
needs of users. However, for IT departments, it presents multiple challenges such as high operating costs,
complex management, and reduced data security, among others.
Desktop virtualization (DV) technologies have been developed with an aim to solve the problems IT
departments face while keeping the current user experience intact. Multiple variants of DV are
currently evolving in the marketplace, and each variant approaches DV from a different angle.
Four popular approaches are illustrated in Figure 1. They differ along two dimensions: whether the
processing takes place at the server or the client, and whether the complete desktop or only the user's
applications are virtualized. The choice of desktop virtualization approach drives
the client device hardware requirements, back-end software system complexity, network utilization, and
the user experience expectations.
While no particular variant fits all situations and hybrid approaches may be used in some scenarios, this
Cisco Validated Design Guide focuses primarily on the hosted virtual desktop variant of DV. Hosted
virtual desktops (HVD) are growing in popularity since they meet most of the IT department
requirements and maintain a local desktop-like user experience for the overwhelming majority of
applications. All references to DV in this document refer to HVD-based DV unless otherwise stated.
Please note that this chapter describes key DV-specific technology components and establishes the need
for each. For a deeper understanding of each component and general installation discussions, please consult
vendor-specific documentation.


Figure 1 Desktop Virtualization Technology Landscape

[Figure: a 2x2 landscape of DV approaches. Quadrants: Virtual Desktop Streaming and Hosted Virtual Desktop (desktop virtualized), Application Streaming and Hosted Virtual Application (applications virtualized); the horizontal axis runs from client-based computing to server-based computing.]

DV abstracts the end-user experience (operating systems, applications, and content) from the physical
endpoint. With DV, the desktop environment is hosted in data center servers as virtual machines. Users
can access their virtual desktops across the network by using laptop computers, thin client terminals,
mobile Internet devices, smartphones, and so on. Every large DV design should take into account the
user accessibility needs, the specific user work profiles, the applications, and the robustness of the
underlying infrastructure. The infrastructure design should further consider the compute, storage, and
network (data center, campus, WAN, or Internet) requirements. Each component plays an important role
in delivering a good experience to the end user.
A DV session requires, at a minimum, an endpoint, a hosted desktop running on a virtual machine housed
in a data center, and a software agent running inside the virtual desktop. The client initiates a connection
to the agent on the virtual desktop and views the desktop user interface with a display protocol. In a
real-world scenario, another component, called the connection broker, is present between the endpoint and
the virtual desktop. The broker authenticates and connects the user to an appropriate virtual desktop.
As illustrated in Figure 2, there is a continuous exchange of display data between the virtual desktop in
the data center and the endpoint, with a continuous exchange of interface device data (keyboard, mouse,
USB peripherals, and so on) from the endpoint to the virtualized desktop. The network can be purely a
LAN environment (for example, a corporate headquarters) or a mix of WAN and LAN environments. To
allow for a consistent, high-quality, and scalable DV deployment, it is important to carefully plan the
data center and the data network.

Figure 2 A Simple DV Session

[Figure: a DV endpoint connected across the network to a virtual desktop host in the virtual data center, with the display protocol carried end to end.]


User Experience on Virtual Desktops


One important step for DV is defining the primary attributes of good user experience. Depending on the
type of user, actual DV feature sets may include a subset of the following features. The overall goal is
for the user experience in a DV environment to mirror the physical desktop experience as closely as
possible.

User Input Response Time (Mouse and Keyboard Movements)


When a user moves the mouse pointer or types using the keyboard, the endpoint DV client sends these
commands encapsulated inside the display protocol. Within the HVD, the DV software agent receives,
decodes, and presents these commands to the guest operating system; the result of the commands, in
terms of display changes, is sent back to the client over the display protocol. The compute capacity of
the HVD, network latency, and bandwidth play a critical role in keeping the response time within a target
value.
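
As a back-of-the-envelope aid, the sketch below (illustrative Python; the component values are assumptions rather than measured figures) adds up the elements that contribute to perceived input response time: client-side capture and encode, network transit in both directions, and processing plus display encode in the HVD.

```python
# Illustrative sketch: rough input-response budget for one keystroke or mouse
# event. All component values are assumptions for the example; measure your
# own environment to populate them.

def input_response_ms(wan_rtt_ms: float,
                      client_encode_ms: float = 2.0,
                      hvd_process_ms: float = 15.0,
                      hvd_encode_display_ms: float = 5.0,
                      client_decode_ms: float = 3.0) -> float:
    """The event travels endpoint -> HVD, is processed, and the display update
    travels HVD -> endpoint, so the WAN round-trip time is paid once."""
    return (client_encode_ms + wan_rtt_ms +
            hvd_process_ms + hvd_encode_display_ms + client_decode_ms)

if __name__ == "__main__":
    for rtt in (5, 40, 120):  # LAN, regional WAN, long-haul WAN (ms)
        print(f"WAN RTT {rtt:3d} ms -> ~{input_response_ms(rtt):.0f} ms perceived response")
```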

Application Response Time


Assuming user input response time is within the limits, application response time is governed by the
amount of CPU, memory, and storage input/output per second (IOPS) seen by the virtualized guest OS.
Application response time equivalent to what is seen by an end-user on a physical desktop is the target
for all DV deployments. Setting very aggressive application response time expectations will
significantly impact the density of HVDs in the data center. When comparing application response time
on the physical desktop and on HVD, the OS versions and underlying hardware (physical or virtual)
should be the same to achieve valid measurements.

Display Quality
The display quality seen by the end user is a function of the content viewed, screen resolution, and screen
refresh rate. In general, for DV, high resolution and refresh rate result in higher compute and network
loads. It is very important to assign enough compute resources and sufficient bandwidth based on user
profiles and user location in the network. Most display protocols today support multiple monitors, either
in span mode (single display sent from HVD and replicated to multiple monitors on the endpoint) or in
true multiple-monitor mode (multiple displays sent from HVD, potentially each with a different screen
resolution). Depending on the supported mode, the endpoint/HVD compute requirements will be
impacted. In true multiple-monitor mode, network bandwidth requirements are much higher.
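
The following minimal sketch (illustrative Python; the update rate, screen-activity factor, and compression ratio are planning assumptions rather than protocol specifications) shows how resolution, refresh behavior, and monitor count combine into a rough per-session display bandwidth figure.

```python
# Illustrative sketch: rough per-session display bandwidth estimate.
# Display protocols send incremental, compressed updates rather than full
# frames, so the compression ratio and screen-activity factor below are
# planning assumptions, not protocol constants.

def display_kbps(width: int, height: int, bits_per_pixel: int = 24,
                 updates_per_sec: float = 5.0,      # effective update rate
                 changed_fraction: float = 0.05,    # share of screen changing
                 compression_ratio: float = 30.0,   # protocol codec efficiency
                 monitors: int = 1) -> float:
    raw_bps = width * height * bits_per_pixel * updates_per_sec * changed_fraction
    return monitors * raw_bps / compression_ratio / 1000.0

if __name__ == "__main__":
    for res in ((1280, 1024), (1920, 1080)):
        print(f"{res[0]}x{res[1]}: ~{display_kbps(*res):.0f} kbps per session (office workload)")
```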

Interactive Multimedia
In enterprises today, interactive multimedia is critical for business productivity. Interactive multimedia
incorporates everything from content-rich web pages with embedded video content, to collaboration and
social network applications. The type of content may include Flash media, audio, video on demand,
voice over IP (VoIP), application streaming, and so on. In a DV environment, interactive multimedia is
typically requested by and rendered on the virtualized desktop in the data center. Once rendered on the
virtual desktop, the media is re-encoded in the DV protocol and sent from the data center to the endpoint,
where the DV protocol is decoded and finally displayed to the end user.
In this scenario, the interactive multimedia does not flow directly from the source to the endpoint; rather
it is “hairpinned” through the data center and frequently tunneled through the DV protocol connection.
Streaming media and real-time media are two primary means of interactive multimedia delivery, each
presenting different challenges at the HVD in the data center.


As an example of streaming one-way media, when a user requests to view a web page containing flash
video and audio, the following main events happen:
• Display protocol consumes a small amount of network bandwidth to transport user request actions
to HVD.
• HVD consumes local compute, memory, and possibly storage resources to bring up a browser.
• A connection to the web server hosting the web page is initiated (possibly over the Internet),
consuming network bandwidth.
• The content returned by the web server (such as images, flash video files and so on) is downloaded
and processed by the browser, consuming CPU, memory, and storage resources.
• The content is rendered on the HVD virtual display, captured by the DV display protocol agent, and
transported to the endpoint. This consumes compute and memory resources, but more importantly
network resources (LAN or WAN) are consumed a second time for the same content. DV display
protocols are not as efficient at compressing video as codecs like MPEG4, so it is typical that the
network bandwidth consumed to retransmit the video encoded in the DV protocol can be several
times higher than the original compressed video stream.
• Finally, the endpoint displays the video and uses the underlying audio drivers (if supported) to
deliver the interactive multimedia.
Real-time media includes live video streams and two-way video/audio conference sessions. As an
example, when a DV user requests to view a live video stream, the following events happen:
• Display protocol consumes a small amount of network bandwidth to transport user request actions
to HVD.
• HVD consumes local compute, memory, and possibly storage resources to bring up a browser.
• A connection to the content distribution server hosting the live stream is initiated (possibly over the
Internet), consuming data center network bandwidth.
• The live video returned by the CDS can’t be cached and is decoded using appropriate video codecs
in the user’s HVD, consuming CPU and memory resources.
• The content is rendered on the HVD virtual display, captured by the DV display protocol agent, and
transported to the endpoint. This not only consumes compute and memory resources, but also, and
more importantly, network resources (LAN or WAN) a second time for the same content. DV display
protocols are not as efficient at compressing video as codecs like MPEG4, so it is typical that the
network bandwidth consumed to retransmit the video encoded in the DV protocol can be several
times higher than the original video compressed stream.
• Finally, the endpoint displays the video and uses the underlying audio drivers (if supported) to
deliver the interactive multimedia.
In a two-way video conference session, end-to-end audio latency and video rendering requirements are
very stringent. In a user's HVD, the process of receiving the content, processing it, and preparing it for
transport over the display protocol adds extra latency. For an acceptable user experience, end-to-end
latency should be kept to a small fraction of a second, and synchronization between the audio and video
streams needs to be very tight. For these reasons, the HVD needs sufficient CPU, memory, and network
resources, and strict QoS guarantees are required in the network.
Depending on the quality and complexity of the video and fidelity of the audio, the resources utilized
can vary significantly. For most interactive multimedia content types, the general process for fetching
and delivering the content to the end user essentially remains the same. The resources used for interactive
multimedia delivery in a DV environment can be many times greater than in a traditional desktop
environment, and this resource utilization is spread across multiple components. Further, scaling the
model just described for thousands of geographically dispersed user groups requires careful capacity
planning as well as a focus on optimization. Advanced network features such as multicast for streaming
media, or video on demand, can’t be applied in this model, since the media is contained in the display
protocol and the display protocol must be point-to-point. Currently, the optimization techniques either
use compression of the display protocol to conserve network bandwidth or separate the interactive
multimedia from the display protocol itself (this is called multimedia redirection or MMR). MMR
enables fetching and rendering of media at the local endpoint and provides the opportunity to take advantage
of advanced network features such as multicast, caching, targeted compression, and so on.
The technology to support a better interactive multimedia experience over a large-scale network or WAN
is still evolving, but with careful planning, most day-to-day business needs can be met with the
technology currently available.
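
To make the trade-off concrete, the sketch below (illustrative Python; the bandwidth-expansion factor is an assumption, since the exact ratio depends on the protocol and content) compares the branch WAN load of delivering a video inside the display protocol against redirecting it to the endpoints with MMR.

```python
# Illustrative sketch: WAN bandwidth for video delivered inside the display
# protocol versus redirected to the endpoint (MMR). The expansion factor for
# re-encoding video in a display protocol is an assumption for the example;
# the text above notes only that it can be several times the original stream.

def wan_kbps(source_video_kbps: float, users: int,
             mode: str = "tunneled", expansion_factor: float = 4.0) -> float:
    if mode == "tunneled":
        # Each user's HVD pulls the stream and re-sends it in the display
        # protocol, so the branch WAN carries an expanded copy per user.
        return users * source_video_kbps * expansion_factor
    if mode == "mmr":
        # Media goes natively to the endpoints; multicast or caching can let
        # one copy of the stream serve every viewer at the branch.
        return source_video_kbps
    raise ValueError("mode must be 'tunneled' or 'mmr'")

if __name__ == "__main__":
    for mode in ("tunneled", "mmr"):
        print(f"{mode:8s}: ~{wan_kbps(800, users=20, mode=mode):,.0f} kbps for 20 branch viewers")
```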

Desktop Authentication
A user transitioning to a virtual desktop would expect to log in once on the endpoint and be served with
the desktop. Authentication in a DV environment is required on the endpoint, on the connection broker,
and finally on the HVD to get to the desktop. Since the connection broker integrates with existing Active
Directory infrastructure securely in the data center, desktop authentication is easy to provision with
negligible network or compute impact. User authentication information is passed to the connection
broker from the endpoint over a secure connection inside the display protocol.

Note Applications inside the HVD are separately authenticated; those mechanisms are outside the scope of the DV
discussion.

Local Devices
Local device and peripheral support in a DV environment play an important role in preserving the user’s
existing desktop experience. Some peripherals are an integral part of the day-to-day business and user
experience, such as USB-controlled devices, printers, and access to local drives. A decision to integrate
these peripherals into the DV system significantly impacts the network and compute requirements and
requires careful consideration. As part of the planning for DV, the following need to be considered:
• Which users receive access to which peripherals?
Access is typically controlled by Active Directory policies, connection broker policies, and choice
of endpoint hardware. Active Directory group policy-based control applies to user permissions
inside the HVD itself. The endpoint is controlled by the software client, which in turn talks to the
connection broker to procure specific policies. These policies can control access to USB devices,
ability to copy data in and out, or ability to bypass display protocol. Most connection brokers
available have the ability to derive these policies from Active Directory group policies.
• What is the location of these user groups?
Location helps determine which network segments are impacted. Most local peripherals were
designed with no network limitations in mind and generally have ample hardware bandwidth
(through a Peripheral Component Interconnect Express [PCIe] expansion card) and local compute
resources for operations. When these local peripherals are made available to a HVD in the data
center, limited network bandwidth and higher latencies may be introduced. Providing USB and local
drive access to campus users on the LAN, for example, has less impact on the network when
compared to providing such access to WAN users.
Depending on the data collected during this planning, certain display protocols might work better than
others and should be weighed in the choice of any peripheral optimization software.
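
As a planning aid, the sketch below (illustrative Python with hypothetical group names and permissions) shows one way to represent the per-group peripheral decisions described above. In practice these policies are enforced through Active Directory group policy and connection broker policies rather than custom code.

```python
# Illustrative sketch with hypothetical group names: a simple lookup of which
# peripheral capabilities each user group is granted. In a real deployment
# these decisions live in Active Directory group policy and connection broker
# policy, not in application code.

PERIPHERAL_POLICY = {
    "finance-campus":  {"usb_storage": False, "local_printing": True,  "drive_mapping": False},
    "engineering-wan": {"usb_storage": False, "local_printing": True,  "drive_mapping": False},
    "it-admins":       {"usb_storage": True,  "local_printing": True,  "drive_mapping": True},
}

def allowed(group: str, capability: str) -> bool:
    """Default-deny: unknown groups or capabilities get no peripheral access."""
    return PERIPHERAL_POLICY.get(group, {}).get(capability, False)

if __name__ == "__main__":
    print(allowed("finance-campus", "usb_storage"))   # False
    print(allowed("it-admins", "drive_mapping"))      # True
```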


USB Redirection
Many new peripherals, like USB attached printers, webcams, audio headsets, or scanners, use the USB
interface on the local device to connect. For these to work in a DV environment, the DV client on the
endpoint intercepts the USB data on the client endpoint and sends it within the display protocol to the
agent installed on the HVD. The agent presents the USB device and data to the HVD as a new hardware
device. Raw USB data streaming, depending on the software and hardware version of the local USB
controller, can consume up to 480 Mbps of network bandwidth and CPU cycles on the endpoint as well as in
the HVD. Such high-bandwidth requirements can’t work over a WAN link without optimization and
compression on the local client itself. Available display protocol software clients (like VMware or
Citrix) can compress and optimize the USB data to be sent over the network. Note that not all operating
systems support compatible drivers, and a thorough compatibility check between the HVD OS, local
client OS, DV client/agent software, and the end application is highly recommended. When using USB
redirection over a WAN link, make sure enough bandwidth is provisioned and optimization is employed.
USB data flow diagrams are covered in detail in the Endpoints and Applications chapter.

Printing
Printing is an important aspect of user experience, and it also has a significant impact on performance
and stability of any Windows-based HVD environment. In Windows-based printing, the application
initiating the print process creates a printer-independent metafile, called an enhanced meta file (EMF),
and sends it to Windows spooler. The spooler then converts the EMF into a printer-specific raw file based
on specific printer drivers and sends it to the USB or network-attached printer. In DV environments, all
of these actions take place in the HVD, while the printer is present locally near the endpoint. As an
example, a typical EMF file of 2 MB can expand to a 10-MB raw file. Given the amount of print traffic
per user, it is highly recommended to optimize print traffic in a DV environment.
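
The arithmetic below (an illustrative Python sketch; the job sizes, job rate, and compression ratio are assumptions built on the 2 MB to 10 MB example above) shows why unoptimized print traffic adds up quickly on a branch WAN and how much an optimization layer such as WAAS can help.

```python
# Illustrative sketch: branch WAN load from DV print jobs. The raw job size
# mirrors the 2 MB EMF -> 10 MB raw example in the text; jobs per hour, user
# count, and the assumed compression ratio are planning assumptions.

def print_wan_mbps(users: int, jobs_per_user_per_hour: float,
                   raw_job_mb: float = 10.0, compression_ratio: float = 5.0,
                   optimized: bool = False) -> float:
    per_job_mb = raw_job_mb / compression_ratio if optimized else raw_job_mb
    mb_per_hour = users * jobs_per_user_per_hour * per_job_mb
    return mb_per_hour * 8 / 3600.0   # average Mbps over the hour

if __name__ == "__main__":
    for opt in (False, True):
        rate = print_wan_mbps(users=100, jobs_per_user_per_hour=2, optimized=opt)
        label = "optimized" if opt else "unoptimized"
        print(f"{label:12s}: ~{rate:.2f} Mbps average print traffic for 100 users")
```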

Local Drives
Some users may need access to local drives for their jobs. Most connection brokers and display protocols
support local drive mapping from the endpoint to the associated HVD. Active Directory group
policies can also be used to control the usage. However, it is recommended that this be done on a
case-by-case basis since one of the primary drivers for DV deployment is data security. The network
challenges for a WAN environment remain the same as for any other local device mapping.

Display Protocols and Connection Brokers


The primary functions of display protocols are to transport display and user input data between the
HVD and the endpoint. Beyond these basic functions, display protocols also perform functions such as
video optimization, audio transport, and USB/print data transport. The client software (on the
endpoint) and the agent software (in the HVD) initiate and terminate the connection, respectively, and
are responsible for interfacing with the OS drivers to produce a satisfactory user experience.
CPU and memory utilization on both the client and HVD are impacted by the choice of display protocol.
Display protocols generally consist of multiple channels, each carrying a specific type of data stream.
Further, advanced algorithms to optimize compute resource consumption and network bandwidth are
built into many of the newer display protocols, clients, and agents.
Display protocols currently present in the market are differentiated by how they deal with interactive
multimedia. Variations range from multimedia redirection support to advanced progressive display
rendering technologies. Three of the most commonly used display protocols are Independent Computing
Architecture (ICA), PC over IP (PCoIP), and Remote Desktop Protocol (RDP). All three of these
display protocols were extensively used during validation. Some endpoint types, like Zero clients,
support only a subset of display protocols. The choice of display protocol will heavily influence the way
the network is configured, the amount of computation power required, policy application, future
compatibility and most importantly, the user experience.
Many of the features already described are a function of the display protocol and the capabilities
of the connection broker. A direct connection from an endpoint to the virtual desktop as depicted in
Figure 2 is not scalable for a large deployment. As a primary DV component, the connection broker is
required to authenticate and redirect multiple client connection requests to the virtual desktops.
Typically, the connection broker also provisions new virtual desktops on demand, and relays user
credentials to the hosted virtual desktop.
Other, advanced broker features allow multiple desktops per user, tunneled and proxy display protocol
modes, support for creating large clusters with load balancing capabilities, and termination of SSL
connections, to name a few. Two important DV technology vendors in the current market, Citrix and
VMware, have their own versions of the broker, and each one has a deep feature set. The broker typically
communicates with the existing Active Directory infrastructure to authenticate users and to pull and
apply user-specific desktop policies. The connection broker also maintains the state of the connection in
case of drops or disconnects, and can optionally power down or delete the remote desktop.
Figure 3 illustrates connection flow in a simplified DV environment that includes a single endpoint and
connection broker. This basic connection flow is present in all DV deployments, across all vendors.
Certain vendors, such as VMware and Citrix, support tunneling modes on their connection brokers and
vary in capabilities. Tunneling configurations have not been validated in this design guide and are
generally not recommended in most scenarios. Tunneling requires termination and reorigination of
display protocol sessions between user endpoint and HVD, causing an increase in the compute and
memory required to support a single connection on the connection broker. This can significantly impact
the capacity planning and resource requirements for a DV deployment. In some deployment scenarios,
for example, where Remote Desktop Protocol (RDP) over HTTPS is required for users accessing their
HVDs over the Internet, dedicated tunneled connections might be needed and should be factored into the
design.

Figure 3 DV Connection Flow

[Figure: the DV endpoint, connection broker, Active Directory, virtual infrastructure management, and HVD, showing the six-step connection flow: user login at the endpoint, client contacts the connection broker, user/policy lookup in Active Directory, VM query/start through virtual infrastructure management, redirect to the machine, and direct display protocol connection to the VM.]


The following steps list the DV connection flow depicted in Figure 3 (a simplified sketch of the flow in code follows the list):
1. User logs onto the endpoint and accesses the client software.
2. The client software initiates the connection to the broker.
3. The broker looks up the user in the Active Directory and determines the associated remote desktop.
4. The broker may optionally contact the virtual infrastructure manager to create/start the user’s remote
desktop.
5. The user is then redirected to that remote desktop.
6. The endpoint is thereafter directly connected to remote desktop via the display protocol.
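
The sketch below (illustrative Python; every class and method name is a hypothetical stand-in for broker, directory, and virtual infrastructure interfaces, not a vendor API) walks through the same six steps to make the division of responsibilities explicit and to show that, in non-tunneled mode, the broker stays out of the display data path.

```python
# Illustrative sketch of the six-step DV connection flow using hypothetical
# stand-ins for the directory, VM manager, and broker. No vendor API is being
# depicted; non-tunneled (direct) display protocol mode is assumed.

class Directory:                                   # stands in for Active Directory
    def authenticate(self, user, password): return user
    def assigned_desktop(self, user): return f"hvd-{user}"

class VMManager:                                   # stands in for the virtual infrastructure manager
    def __init__(self): self.running = set()
    def is_running(self, vm): return vm in self.running
    def start(self, vm): self.running.add(vm)

class Broker:
    def __init__(self, directory, vm_manager):
        self.directory, self.vm_manager = directory, vm_manager
    def connect(self, user, password):
        who = self.directory.authenticate(user, password)        # step 3
        desktop = self.directory.assigned_desktop(who)           # step 3
        if not self.vm_manager.is_running(desktop):              # step 4
            self.vm_manager.start(desktop)
        return desktop                                           # step 5: redirect

if __name__ == "__main__":
    broker = Broker(Directory(), VMManager())
    # Steps 1-2: the user logs into the endpoint and the client contacts the broker.
    desktop = broker.connect("jsmith", "secret")
    # Step 6: the endpoint connects to the HVD directly over the display protocol.
    print(f"endpoint now streaming display protocol from {desktop}")
```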

Hypervisor for DV
Virtual machines are isolated and abstracted virtual hardware containers where a guest operating system
runs. The isolated design allows multiple virtual machines to run securely and access the hardware to
provide uninterrupted performance. The layer of software that provides isolation is called the hypervisor.
Figure 4 shows the hypervisor’s role in the HVD.
Apart from hardware abstraction, a properly configured hypervisor ensures that all VMs receive a fair
share of the CPU, memory and I/O resources. The hypervisor has minimal or no direct control of the
guest operating system running inside the VM itself. It does, however, indirectly control the OS behavior
by controlling memory utilization, sharing CPU cycles, and so on. A virtual desktop OS, like Windows
7, runs in a virtual machine, which itself runs on the hypervisor-controlled host server. In this Cisco
Validated Design, the VMware ESX/ESXi and Citrix XenServer hypervisors were used in the
deployments. Some important hypervisor design and install considerations are noted in the Data Center
chapter and the Performance and Capacity chapter. Please refer to the appropriate VMware and Citrix
installation guides for installation steps and other details.

Figure 4 Components of a HVD

[Figure: applications and guest operating systems running in virtual machines on top of the hypervisor and virtual network, which in turn run on the compute platform (storage pool, CPU pool, and memory banks).]


VMware ESX/ESXi with vCenter and Citrix XenServer with XenCenter provide a wide range of
tools to achieve optimum virtual desktop densities in the data center. Among all the available features,
the critical enablers for DV are described next.
All virtual machines share common hardware resources on a single host (server) or in a pool of hosts
(servers). In a typical DV deployment with a high density of virtual desktops running on a single server,
it is highly advised that resource sharing features such as VMware Distributed Resource Scheduler
(DRS) or Citrix Workload Balancing be used appropriately in the environment.
Resource sharing includes fair or configured distribution of hardware resources on a single host,
identification of overcommitted host machines, live movement of VMs among hosts in a pool, VM
power-up sequences based on priorities, and so on. In a typical DV scenario, the provisioned desktops
are generally not all powered up at the same time. This is fundamentally different from the usage pattern
of the typical always-on mode of operation of virtualized servers in the data center. Furthermore, virtual
desktops in an enterprise environment are normally not used after hours and during holidays. Using the
hypervisor’s resource sharing capabilities along with power management features will reduce the data
center's power usage footprint, especially in a DV deployment. The hypervisor's power management tools
can consolidate all the active virtual desktops on the fewest possible hosts (servers), power down idle
hosts, power up hosts based on configured priority, and so on.
Both the resource sharing and power management features need VM migration capabilities, preferably
live migration, and shared VM storage. Live migration enables movement of running virtual machines
from one physical server to another with zero downtime, continuous service availability, and complete
transaction integrity. The ability to move desktops within the data center based on predefined power and
resource usage policies is extremely useful.
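
The sketch below (illustrative Python; the per-desktop memory figure and host capacity are assumptions) shows the kind of consolidation decision these power management features automate: packing the currently active desktops onto as few hosts as possible so idle hosts can be powered down. Actual DRS and Workload Balancing algorithms weigh many more factors, including CPU, I/O, affinity rules, and headroom.

```python
# Illustrative sketch: greedy first-fit consolidation of active desktops by
# memory footprint, the decision that hypervisor power management automates.
# Per-VM memory and host capacity are assumptions, not validated figures.

def consolidate(active_vm_mem_gb, host_capacity_gb=96.0):
    hosts = []  # free memory remaining on each powered-on host
    for vm in sorted(active_vm_mem_gb, reverse=True):
        for i, free in enumerate(hosts):
            if free >= vm:
                hosts[i] -= vm
                break
        else:
            hosts.append(host_capacity_gb - vm)  # power up another host
    return len(hosts)

if __name__ == "__main__":
    desktops = [2.0] * 300   # 300 active desktops at roughly 2 GB each
    print(f"{consolidate(desktops)} hosts needed; the rest can be powered down")
```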
Figure 5 illustrates the concept of resource sharing with VM Migration. VMware ESX offers vMotion
and Citrix XenServer offers XenMotion to do live migrations. Since this technology is being improved
on an ongoing basis, both vendors have limitations, prerequisites, and best practices. Some of the critical
best practices are covered in this guide, but a comprehensive listing is out of scope of this document.

Figure 5 Resource Sharing with VM Migration

[Figure: resource sharing with VM migration; two hypervisor hosts drawing on pooled hardware resources, with VMs migrating from a host that is low on resources to a host with spare resources.]

Both ESX/ESXi and XenServer implement a software virtual switch to which each HVD connects. The
virtual switch presents virtual network interface cards (NICs) to the HVDs and connects to the physical
NICs on the host machine. All data from the HVDs is forwarded by the virtual switch with appropriate
VLAN tags and QoS policy application (if available).


The virtual switch offered by both ESX/ESXi and XenServer implements basic switching features that
in many scenarios are not sufficient in a real deployment. XenServer allows creating multiple networks,
VLANs, physical NIC bonds with load balancing, and dedicated storage NICs. The ESX/ESXi vSwitch
provides all these same features, and with vCenter management service also provides a distributed
virtual switch (DVS) functionality. DVS allows multiple VMs across multiple hosts to be connected to
a shared virtual switch. This feature makes it very easy to manage network configurations for large
quantities of HVDs in a DV environment. XenServer depends on an external switch platform to provide
functionality similar to a distributed switch. A distributed switch enables the movement of VMs between
hosts, which is a basic requirement for advanced features such as resource scheduling, power
management, VM migration and HA discussed above. Although distributed virtual switching with
ESX/ESXi and XenServer provides basic switching, it does not provide comprehensive feature depth.
For example, VMware’s DVS feature lacks advanced networking features such as advanced QoS,
stacked VLANs, 802.1x and port security that any access switch should possess. The Cisco Nexus®
1000V Series Switches provide all these and more.
In addition to the feature sets, a fundamental limitation of a virtual switch is that it consumes CPU cycles
on the host machines themselves. If there is a lot of inter-VM communication, host CPU consumption
will be higher. In a high-density DV deployment, this resource utilization must be accounted for during
capacity planning. Further, if the virtual switches packaged in the hypervisors are used, visibility into all
inter-VM communication on a single host is lost. Certain types of security breaches, such as DoS attacks
or even broadcast storms caused by misconfiguration, can have severe implications on the DV
environment and are difficult to troubleshoot without visibility into the flows.
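
A minimal sketch of that capacity-planning adjustment follows (illustrative Python; the overhead percentages are assumptions to be replaced with measured values): usable CPU for desktops is what remains after the hypervisor and virtual switching overhead is set aside.

```python
# Illustrative sketch: reserve a slice of host CPU for hypervisor and virtual
# switching overhead before computing how many desktops fit. The overhead
# fractions and per-desktop CPU demand are assumptions; measure them in your
# own environment.

def desktops_per_host(host_cores: int, ghz_per_core: float,
                      desktop_avg_ghz: float,
                      hypervisor_overhead: float = 0.05,
                      vswitch_overhead: float = 0.05) -> int:
    total_ghz = host_cores * ghz_per_core
    usable_ghz = total_ghz * (1.0 - hypervisor_overhead - vswitch_overhead)
    return int(usable_ghz // desktop_avg_ghz)

if __name__ == "__main__":
    # Example: a 24-core host at 2.5 GHz; office desktops averaging 300 MHz each.
    print(desktops_per_host(host_cores=24, ghz_per_core=2.5, desktop_avg_ghz=0.3))
```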

Note Currently, Cisco Nexus 1000V Series is only supported on VMware ESX/ESXi hypervisor. See
Deployment Models for a more detailed discussion of Cisco Nexus 1000V Series deployment.

Hosting the Virtual Desktops


Whether DV is deployed in an existing data center or in a greenfield environment, some important design
criteria need to be considered. An enterprise DV environment requires scalable, secure, and always-on
compute, storage, and network infrastructure. Specific guidelines depend on the DV vendor of choice,
but the following general criteria apply to any DV deployment.
As depicted in Figure 6, compute, memory, and I/O resources form the decision axes for DV in the data
center. Running out of one resource before the others (for example, CPU) wastes the other provisioned
resources (memory and I/O). At the same time, if capacity planning is not done appropriately, it can be
difficult to achieve the maximum density of HVDs while preserving the user experience and avoiding
wasted resources. The next sections examine some of the fundamental differences in how these resources
are consumed in a DV environment.


Figure 6 Dimensions of DV Performance

[Figure: CPU, memory, and I/O as the three dimensions of DV performance; more VMs per server means lower power, cooling, and cost per VM.]

Compute
Server CPU power is many times that of a local desktop, which means a single multicore CPU can host
multiple HVDs. Since applications in a desktop are specific to the user, the amount of memory sufficient
for each HVD can be difficult to estimate. A DV deployment is more memory-centric than a virtualized
server deployment. With most common workloads, maximum HVD density can be achieved by choosing
a compute platform that can provide large memory banks. For specific compute requirements, see the
Deployment Models chapter.
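
Because DV tends to be memory-bound, a rough density estimate can start from the host's memory banks, as in the sketch below (illustrative Python; the per-desktop allocation, hypervisor reservation, and memory-sharing benefit are assumptions, and the Performance and Capacity chapter covers validated sizing).

```python
# Illustrative sketch: memory-driven HVD density per host. Per-desktop memory,
# hypervisor reservation, and any memory-sharing benefit are assumptions here;
# see the Performance and Capacity chapter for validated sizing guidance.

def hvds_per_host(host_mem_gb: float, per_desktop_gb: float = 2.0,
                  hypervisor_reserved_gb: float = 4.0,
                  sharing_benefit: float = 0.15) -> int:
    """sharing_benefit approximates page sharing across near-identical desktops."""
    usable = host_mem_gb - hypervisor_reserved_gb
    effective_per_desktop = per_desktop_gb * (1.0 - sharing_benefit)
    return int(usable // effective_per_desktop)

if __name__ == "__main__":
    for mem in (96, 192, 384):
        print(f"{mem} GB host: ~{hvds_per_host(mem)} desktops (memory-bound estimate)")
```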

Network
For display traffic, many elements can affect network bandwidth, such as protocol used, monitor
resolution and configuration, and the amount of multimedia content in the workload. Concurrent
launches of streamed applications can also cause network spikes. For WANs, you must consider
bandwidth constraints, latency issues, and packet loss. In a DV environment, the density of virtual
desktops supported by a single compute and network infrastructure is much higher than the traditional
campus or branch infrastructure. The aggregation of endpoints increases network and compute load
especially during peak usage periods.
The following primary data flows exist in a DV environment:
1. Display protocol traffic between HVD in data center and endpoint.
2. Traffic between HVD in the data center and the Internet. This traffic is generally routed over the
enterprise campus network.
3. Storage data traffic (I/O per second [IOPS]) between HVDs and storage arrays. This traffic normally
stays within the data center.
4. Application data traffic. This traffic generally remains within the data center.
All of the above except item 4 are new traffic flows introduced because of DV. In a large-scale
deployment, each data flow can congest entry and exit points in the data center network. Appropriate
QoS mechanisms are required to protect delay- and drop-sensitive data flows, such as display protocol data.
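
For the display protocol flow in particular, the sketch below (illustrative Python; the per-user bandwidth, concurrency ratio, and headroom are assumptions, with the WAN Sizing section of the Performance and Capacity chapter being the authoritative guidance) gives a first-order estimate of the WAN bandwidth a branch of DV users would need.

```python
# Illustrative sketch: first-order branch WAN sizing for display protocol
# traffic only. Per-user bandwidth, concurrency, and headroom are assumptions;
# see the WAN Sizing section of the Performance and Capacity chapter for
# validated guidance.

def branch_wan_mbps(users: int, per_user_kbps: float = 250.0,
                    concurrency: float = 0.8, headroom: float = 0.25) -> float:
    active = users * concurrency
    return active * per_user_kbps * (1.0 + headroom) / 1000.0

if __name__ == "__main__":
    for n in (25, 100, 250):
        print(f"{n:4d} users: ~{branch_wan_mbps(n):.1f} Mbps for display traffic alone")
```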


Additionally, user usage patterns are unpredictable and may negatively impact the bandwidth consumed
per virtual desktop. For example, a high-quality, full-screen video can suddenly increase bandwidth
consumption for a particular HVD multifold. Data center network capacity may normally be sufficient,
but a careful design of the complete end-to-end network is required to preserve the user experience, and
to allow for future expansion and accommodate technology advances in DV space. In traditional
enterprise desktop deployments, user desktops have one point of connection to the access network. This
access layer separates the functions of the network and the actual compute resources cleanly, such that
the policy application for network and user work environment do not conflict. In a DV environment, the
endpoint still attaches to the campus access layer, but now there is also a HVD in the data center server
attaching to the network at a virtual switch port inside the server. The addition of a new attachment point
for the single user, and location of the virtual switch, blurs the boundary for policy application and opens
new security concerns. Such concerns can be handled by either moving the attachment point of HVD to
a controlled access switch or by using a feature-rich virtual switch to shift the control inside the server
itself. Data Center, and Network, explain the end-to-end Cisco VXI network requirements.

Storage
A DV endpoint has no local storage, and the associated HVDs are located on a centrally managed storage
system. Normally, storage for each HVD is split into desktop stores and user profile stores. The split in
desktop and user profile storage enables deletion on the desktop while still preserving the user profile;
this, in turn, supports persistent and nonpersistent desktops, data backup (user data backup only), and
reduction in storage requirements (desktops are highly redundant and cloning can be used). Average
storage I/O generation in a DV environment is predictable over a period of time, but highly unpredictable
over short durations; this is quite unlike server environments. IOPS also depends heavily on the OS type
and version and on the workload applications. Duplicate data in a DV environment is very high
(common OS code base, similar applications installed), so DV storage can benefit enormously from
data-deduplication technologies and VM cloning. Storage requirements for each desktop can keep growing
unless appropriate control policies are in place. In the case of cloned HVDs, it’s highly desirable to
restart the HVD periodically (to purge accumulated temporary data) and use disk quotas for users or
assign controlled network storage instead of direct disk access. DV deployments are generally storage
technology agnostic, and so storage area networks (SANs) or network attached storage (NAS) can be
used based on organizational preference.
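One common way to turn per-desktop I/O figures into a storage array requirement is to scale the write portion of the workload by the RAID write penalty, as sketched below. The per-desktop IOPS and read/write mix are illustrative assumptions that must come from workload measurement, and array-side features such as caching and deduplication can change the result considerably.

```python
# Illustrative backend IOPS estimate for an HVD pool (assumed workload figures).

RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}

def backend_iops(desktops, iops_per_desktop, write_ratio, raid="raid5"):
    """Scale the write portion of the front-end IOPS by the RAID write penalty."""
    frontend = desktops * iops_per_desktop
    reads = frontend * (1 - write_ratio)
    writes = frontend * write_ratio
    return reads + writes * RAID_WRITE_PENALTY[raid]

if __name__ == "__main__":
    # 500 desktops, an assumed 10 steady-state IOPS each with a 60 percent
    # write mix (DV workloads are typically write-heavy), on RAID 5.
    print(int(backend_iops(500, 10, 0.6, "raid5")), "backend IOPS")   # 14000
```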

Typical DV Deployment
Figure 7 (DV Connection Paths) depicts a typical DV deployment with all major components and connection paths.
For DV, the data center houses, at a minimum, an HVD pool, a connection broker, Active Directory, and
VM manager servers. Each HVD in the pool is a VM, and the pool itself resides on a physically separate host
machine. Another host houses the VMs on which the connection broker, Active Directory, and the VM manager
are installed. It is highly recommended to physically house HVD pools and enterprise services on
different host machines. The hypervisors in each host connect to separate storage and management
networks, shown as distinct paths in Figure 7. Note here that the endpoints in the campus or the branch
connect to the already existing access network infrastructure, and the HVDs in the pool attach to the
virtual switch installed inside the hypervisor. All the endpoints, HVDs, and DV server components are
on the same enterprise data network, and devices on this network can reach each other using Layer 2 or Layer 3
connectivity. All the display protocol traffic originates and terminates between the endpoints and HVDs.


Figure 7 DV Connection Paths

(Figure shows branch and campus endpoints connecting across the WAN and enterprise network to the data center. Host Machine 1 runs the connection broker, Active Directory, and VM manager on ESX or XenServer; Host Machine 2 runs the HVD pool on ESX or XenServer behind a virtual switch. DV display and user traffic flows between the endpoints and the HVDs, while hypervisor management traffic and hypervisor storage traffic flow over separate networks to the storage arrays.)

Note Figure 7 shows basic DV connectivity. For specific scenarios and features, the deployment could differ.

Note Figure 7 places the use of critical Cisco VXI components in the perspective of the network and does not
illustrate all Cisco VXI components required to enhance user experience.

For complete deployment information, refer to the subsequent chapters in this guide.

Enterprise DV Policy and User Profiles


User profile definition is one of the first and most important steps when planning a DV deployment. User
profile definitions are driven by an overall enterprise IT policy and business drivers behind DV
deployment. Today’s enterprise desktop virtualization decisions are driven by data security and IT
environment flexibility, rather than total cost of ownership. The important policy decisions that define
the DV deployment are listed next. Please note that this list is not comprehensive and specific business
needs might require further policy granularity.
• Deciding user profiles and the total number of users to be migrated to DV. User profiles are generally
categorized into task workers, knowledge workers, and power users. Task workers perform specific
functions with minimal requirement to change their desktop environment (for example, storing files
locally, desktop personalization, and so on). Task workers normally do not require persistent
desktops (enabling desktop sharing) and are ideal candidates for large scale DV. Knowledge workers
are users who need access to local files and who create content and so on (examples include product
managers, executives, and software programmers). They normally require persistent dedicated
desktops with personal storage space. Power users are workers who need access to very high
computing power for their day-to-day work. Power users manipulate large amounts of locally stored
data—for example, graphic designers, CAD designers, architects, and so on. For a successful DV
deployment, it is critical to crisply define user profiles based on the enterprise policy and business
needs.
• Identifying business-critical applications required by the user groups that have been identified.
Application requirements heavily drive storage, compute, and network requirements in a DV
environment. These requirements can also help decide specific virtualization vendors and
virtualization software to be deployed in the environment. Depending on the business requirements,
applications could include SAP business software, customer relationship management (CRM)
applications, software as a service (SaaS), web browsers, Microsoft Office, Microsoft Outlook, and
so on. Generally speaking, any existing IT desktop policy should apply to applications deployed in
the DV environment.
• Grouping of users based on geographical locations and places in the network. The location
information defines which data centers the DV users are served from and also helps ascertain
network, security, and localization policy requirements.
• Defining the level of access per user or per user location. A large enterprise with a diverse workforce (for
example, fixed users, mobile users, telecommuters, contractors, and so on) and an existing IT
infrastructure has well-defined data access policies in place. Most of these policies can be
transplanted seamlessly into the DV environment. Keep in mind that users are no longer necessarily
tied to a physical machine and/or location. Therefore, in some DV deployments, defining data access
policies based on user, user type, location, and network attachment point might be required. These
policies will impact how the DV infrastructure interacts with enterprise AD infrastructure.
• Defining IT policies for user profiles that clearly specify use of DV-specific features such as USB
redirection, localized printing, audio channel support, single sign-on (SSO), and so on, per user based
on the business case. Policies should also be defined per user profile or per user to allow or disallow
the use of peripheral devices, changes to monitor resolution, choice of a specific display protocol, access
to personal storage space, rebooting of the HVD, and so on. These features in a DV environment heavily
impact network, storage, and compute requirements, and allowing all users access to these features
can severely impact the viability of a DV deployment.

Workloads, Peak Workloads, and their Design Impact

Workload
For each user profile defined, granular workload parameters need to be set based on experience with
existing physical desktops or by testing. Workload parameters, like user profiles, are very unique to each
enterprise. Early determination of these parameters makes the difference between the success or failure
of a DV deployment. Workload parameters should include application run time, application usage
patterns, average screen refresh rates, and the OS and application memory, CPU, and storage footprints. Events
that generate CPU, memory, or network load in the HVD—such as saving a document, sending an email,
viewing a video, or editing text—need to be listed in a workload definition along with estimated average
utilization. Each type of workload can have different storage IOPS requirements, and care should be
taken to determine these requirements accurately, especially when planning a large-scale DV
deployment. Much of this data is generated during the pilot DV deployment phase and is then used to
estimate resource requirements for full-scale deployments. Minor inaccuracies in estimating network,
IOPS, CPU, and memory loads during pilot deployments can add up significantly in the final deployment.
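One lightweight way to keep workload definitions consistent across user profiles is to capture them as structured records and derive per-host totals from them. The sketch below shows the idea; every number in it is a placeholder to be replaced with data measured during the pilot phase, not a recommended value.

```python
# Workload definitions as structured records (all figures are placeholders
# to be replaced with measured pilot-deployment data).
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    ram_gb: float        # OS plus application memory footprint
    cpu_mhz: int         # average CPU demand
    steady_iops: int     # average storage IOPS
    display_kbps: int    # average display-protocol bandwidth

def per_host_totals(workload, desktops_per_host):
    """Aggregate one workload profile across the desktops on a single host."""
    return {
        "ram_gb": workload.ram_gb * desktops_per_host,
        "cpu_mhz": workload.cpu_mhz * desktops_per_host,
        "iops": workload.steady_iops * desktops_per_host,
        "display_mbps": workload.display_kbps * desktops_per_host / 1000,
    }

if __name__ == "__main__":
    task_worker = Workload("task worker", ram_gb=1.5, cpu_mhz=300,
                           steady_iops=8, display_kbps=120)
    print(per_host_totals(task_worker, desktops_per_host=60))
```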

Peak Workload
Because HVD runs on shared compute resources abstracted by the hypervisor, the resource utilization
of one HVD can impact the others if appropriate hypervisor mechanisms and planning are not used
during deployment. Since all users access HVDs that are centrally located in data centers, they share a
common secure network entry point into the data center. If not designed properly, this common network
entry point can be a potential bottleneck, and can cause user experience degradation or even complete
loss of connectivity. Identifying potential peak workload scenarios during DV deployment planning is
essential.
As an example, consider an enterprise where hundreds of users start their HVDs in the morning, start
Microsoft Outlook, and launch their productivity applications in a span of a few minutes. This sequence
of events can cause severe strain on every component in the DV environment and possibly non-DV
applications sharing the network. Because all user traffic is encapsulated in a display protocol,
synchronized user actions, such as hundreds of users simultaneously watching a live corporate video,
can congest the network. If mission-critical applications are used concurrently by a group of these
users, no QoS mechanisms exist within the display protocol to treat video and critical
application traffic separately. Simultaneous antivirus scans, application and OS updates, and HVD
reboots are other examples of peak workloads.
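As a simple illustration of why such events need explicit planning, the sketch below estimates how long a morning boot storm would take to drain through the storage array headroom reserved for the DV pool. The per-desktop boot I/O and the array headroom are assumptions chosen only to show the arithmetic.

```python
# Illustrative boot-storm duration estimate (assumed figures only).

def boot_storm_minutes(desktops, boot_io_per_desktop, array_iops_headroom):
    """Total boot I/O divided by the IOPS the array can sustain for this pool."""
    total_io = desktops * boot_io_per_desktop
    return total_io / array_iops_headroom / 60.0

if __name__ == "__main__":
    # 1000 desktops, an assumed 15,000 I/O operations per desktop boot, and
    # 25,000 IOPS of array headroom reserved for the DV pool.
    print(round(boot_storm_minutes(1000, 15_000, 25_000), 1), "minutes")  # 10.0
```
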
Proper capacity planning, sufficient spare resources, intelligent use of resource scheduling, fault
tolerance and high-availability (HA) features, and network capacity and QoS planning are absolutely
essential to deal with peak workloads in a DV environment. Further, optimum WAN/Internet bandwidth
utilization should be carefully considered and planned.
Armed with a clear understanding of average workloads, peak workloads, and user profiles, it's
possible to make a good assessment of the resources and configurations required in a particular DV
deployment. Please see the discussion of capacity and performance in the Performance and Capacity
chapter for more on making resource assessments with sample workloads.

VMware DV

VMware Components
Figure 8 (VMware View DV Deployment) depicts a typical DV deployment based on VMware View. Although
it is quite similar to the general DV deployment discussed earlier in this chapter, some components that need
specific attention are briefly described here. For a much more detailed discussion of VMware-specific
components, please refer to the VMware knowledge base.


Figure 8 VMware View DV Deployment

(Figure shows thick, thin, and mobile endpoints and PDAs running the View Client on the endpoint OS, connecting over the network to the View Connection Server, Microsoft Active Directory, and View Composer on vCenter. A server cluster running the ESX hypervisor hosts the virtual desktop pool; each virtual machine runs a guest OS, applications, and the View Agent, connected through a vSwitch.)

View Agent

VMware View Agent on each HVD in the pool is required to create the connection between the client
and HVD. The features and policies on View Agent can be controlled via Active Directory and/or View
Connection Server settings. The agent also provides features such as connection monitoring, virtual
printing, and access to locally connected USB devices. To install the agent automatically on all HVDs
in a pool, install the View Agent service on a virtual machine and then use the virtual machine as a
template or as a parent of linked clones. The capability set of the agent is tied to the operating system.
Please consult the appropriate VMware documentation for compatibility information.

View Client

VMware View Client is installed on each endpoint that needs to access its HVD. View Client supports
the following display protocols: PC over IP (PCoIP), Microsoft Remote Desktop Protocol (RDP), and
HP Remote Graphics Software (RGS). View Client also supports a local mode option that enables the
user to work in offline mode; however, this needs View Transfer Server in the data center and is not
covered here. Zero client devices have a View Client integrated in the embedded operating system.


View Connection Server

This software service acts as a broker for client connections. VMware View Connection Server
authenticates users via Active Directory and post-authentication directs the user to an appropriate HVD.
Other important functions performed by View Connection Server are desktop entitlement for
users, desktop session management, establishment of secure connections between users and desktops, policy
application, and SSO.
View Administrator is a user interface that comes prepackaged with the View Connection Server and
provides an administrative interface for management. The sizing guidelines for View Connection Server
can vary based on display protocol type, VM resources, use of encryption, and choice of tunneled or
nontunneled mode. It is highly recommended to implement a load-balancing solution such as the Cisco ACE
Application Control Engine Module in a large deployment. Please see the Network chapter for more
details on this topic.

View Composer

One of the main reasons to consider DV in the enterprise is to conserve storage. View Composer is an
important component that allows for storage optimization. In DV environments, data redundancy per
HVD is very high since typically the same OS and application sets are replicated across the virtual
desktop pool. To deal with this, View Composer creates a pool of linked clones from a specified parent
virtual machine. Each linked clone acts like an independent desktop, with a unique host name and IP
address, yet the linked clone requires significantly less storage because it shares a base image with the
parent. View Composer can create images that share the base OS image while still keeping the user
profile data separate. It is highly recommended to separate the OS files from user profiles in the storage
array. Using View Composer can reduce storage requirements by more than 50 percent.
View Composer is set up on the same virtual machine as vCenter to allow control of the ESX hosts. Each
View Composer server in a cluster can handle up to 512 HVDs in a pool; in a large deployment,
clustering multiple View Composer instances may be required.
Please note that if company policy allows users to install custom applications, the full benefits of View
Composer can't be realized. It is highly recommended to separate such user profiles and place these
HVDs on a backed-up data storage system.
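To put the savings in perspective, the sketch below compares full-clone storage with a linked-clone layout made up of one shared replica plus a per-desktop delta disk. The image and delta sizes are illustrative assumptions, not View Composer sizing guidance; real delta growth depends on user behavior and refresh policy.

```python
# Illustrative full-clone versus linked-clone storage comparison
# (image and delta sizes are assumptions, not sizing guidance).

def full_clone_gb(desktops, base_image_gb):
    return desktops * base_image_gb

def linked_clone_gb(desktops, base_image_gb, delta_gb_per_desktop):
    # One shared replica of the parent image plus a growing delta per desktop.
    return base_image_gb + desktops * delta_gb_per_desktop

if __name__ == "__main__":
    desktops, base_gb, delta_gb = 500, 20, 3   # assumed values
    full = full_clone_gb(desktops, base_gb)
    linked = linked_clone_gb(desktops, base_gb, delta_gb)
    print(f"full clones: {full} GB, linked clones: {linked} GB, "
          f"savings: {100 * (1 - linked / full):.0f}%")
```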

Citrix DV

Citrix Components
Figure 9 depicts a typical DV deployment based on Citrix XenDesktop. Although quite similar to the
general DV deployment discussed above, some components that need specific attention are briefly
described here. For a much more detailed discussion of Citrix-specific components, please refer to the Citrix
knowledge base.


Figure 9 Citrix XenDesktop DV Deployment

(Figure shows thick, thin, and mobile endpoints and PDAs running Citrix Receiver on the endpoint OS, connecting over the network to Citrix XenCenter, Active Directory, the Citrix Data Store, the Citrix Provisioning Server, the XenDesktop Data Collector, and the Citrix Desktop Delivery Controller. A server cluster running the XenServer hypervisor hosts the virtual desktop pool; each virtual machine runs a guest OS, applications, and the Virtual Desktop Agent, connected through a virtual switch.)

Virtual Desktop Agent

Citrix Virtual Desktop Agent (VDA) is an agent running on each HVD. It makes the desktop available
to the client running in the endpoint. VDA receives ICA or HDX connection requests on port 1494,
prepares the desktop, and streams it to the client. VDA is also responsible for registering the HVD with
the Desktop Delivery Controller (DDC). In the background, VDA communicates with the DDC to update
session status and provide usage statistics and current desktop state. VDA uses port 8080 to
communicate with DDC.
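Because the ports above (TCP 1494 for ICA/HDX toward the VDA and 8080 for VDA-to-DDC communication) are frequent firewall trouble spots, a quick reachability test can help isolate connection problems during deployment. The snippet below is a generic TCP connect check, not a Citrix tool; the host names are placeholders for your own HVD and DDC addresses, and an open port only proves that something is listening, not that the service is healthy.

```python
# Generic TCP reachability check for the ports discussed above
# (host names are placeholders for your own HVD and DDC addresses).
import socket

def port_open(host, port, timeout=3.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    checks = [
        ("hvd-101.example.com", 1494),   # ICA/HDX from the endpoint to the VDA
        ("ddc-1.example.com", 8080),     # VDA registration traffic toward the DDC
    ]
    for host, port in checks:
        state = "reachable" if port_open(host, port) else "blocked or closed"
        print(f"{host}:{port} -> {state}")
```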

Citrix Online Plug-in

Multiple client software options are available from Citrix that work with the VDA. Depending on the
endpoint hardware, Citrix Receiver, Online Plug-In, or Dazzle can be used. The Citrix Online Plug-in
provides toolbar functionality, enabling users to pan and scale virtual desktops inside their local desktop.
Users can work with more than one desktop by using multiple Citrix Online Plug-in sessions on the same
device. Details on other features can be found readily in Citrix product documentation.

Desktop Delivery Controller

DDC is the central piece of a Citrix XenDesktop environment, and it performs the key functions of any
broker—for example, user authentication or VM power management. On session establishment, DDC
also makes ICA policy decisions and enforces any licenses as needed. Once the connection between the
client and VDA is established, DDC does out-of-band monitoring with assistance from VDA. DDC does
not support tunneled mode for the display protocols, and because of this, it can scale to handle many
thousands of desktops on a single installed instance.
DDC also performs data collection functions that can be valuable for auditing and DV deployment
tweaking. It is possible to create DDC farms where each controller is assigned a specialized function
such as data collection and controlling the infrastructure. For high availability, it is highly recommended
to have multiple DDCs in the environment, preferably behind a load balancer such as Cisco ACE
Application Control Engine Module. DDC requires that all farm members be part of the same Active
Directory domain.

Virtual Desktop Provisioning Server

Citrix Provisioning Server (PVS) is a VM management server that can create and deprovision virtual
desktops on the fly based on DDC control. For DV environments where persistent desktops are not
needed, the provisioning server can significantly optimize the storage utilization, much like View
Composer described earlier. This is an essential piece of any Citrix-based DV deployment. Desktop
provisioning creates and deprovisions virtual desktops from a single desktop image
on demand, optimizing storage utilization and providing a pristine virtual desktop to each user every
time they log on. Desktop provisioning also simplifies desktop image management, provides greater flexibility, and
reduces the number of desktop management points for both applications and desktops.

For More Information


Table 3 provides links to more detailed vendor information on the hardware
and software discussed in this chapter.

Table 3 DV Links

VMware
VMware view 4.0 architecture planning guide
http://www.vmware.com/pdf/view401_architecture_planning.pdf
Introduction to VMware vSphere (ESX 4.1, ESXi 4.1)
http://www.vmware.com/support/pubs/vs_pages/vsp_pubs_esx41_vc41.html
VMware VDI implementation best practices
http://www.vmware.com/files/pdf/vdi_implementation_best_practices.pdf
Performance Best Practices for VMware vSphere® 4.0 (ESX 4.0 and ESXi 4.0)
http://www.vmware.com/pdf/Perf_Best_Practices_vSphere4.0.pdf
Citrix
XenDesktop 4 @ Citrix knowledge base
http://support.citrix.com/proddocs/index.jsp
Best Practices for Citrix XenDesktop with Provisioning Server
http://support.citrix.com/article/CTX119849
Citrix XenDesktop Modular Reference Architecture
http://support.citrix.com/article/CTX124087
Citrix XenDesktop - Design Handbook
http://support.citrix.com/article/CTX120760
Citrix XenServer – Documentation
http://docs.vmd.citrix.com/XenServer/5.6.0/1.0/en_gb/


Deployment Models
The borderless network’s promise of bringing the networked user experience to any device, in any place,
at any time, can be fulfilled in various ways, depending on the particular demands of the enterprise.
Virtualization brings another set of possibilities to the system designer, allowing different deployment
models based on a shift in where the computing and networking investments are made.
The Cisco Virtualization Experience Infrastructure (Cisco VXI) system is fundamentally based on the
centralization of the user’s computing infrastructure in the data center. The computing infrastructure is
virtualized and hosts the desktop virtualization (DV) and collaboration subsystems. Together, these
subsystems provide the user community with desktop and telephony services delivered through the
network to the various DV and collaboration endpoints. These endpoints can be located anywhere within
the enterprise’s network. The term “enterprise network” is used to signify both the enterprise’s own
networking infrastructure, as well as it’s virtual private network (VPN)–based extensions into the
Internet, and other various forms of externally-managed commercial networks.
A user can access the computing resources from a variety of locations. In the context of this chapter,
location refers to the conditions of the network connectivity that the user’s endpoint will be using to
connect to the centralized computing services. As illustrated in Figure 10, endpoints are further defined
based on their location in the network.

Figure 10 Cisco VXI Top-Down View

(Figure shows fixed and mobile teleworker endpoints in the home office reaching the enterprise over the public Internet, branch endpoints reaching it over the managed WAN with SRST, and campus endpoints on the campus network. All connect to desktop virtualization and collaboration services in the data center, built on Cisco UCS, ACE, ASA, Nexus, WAAS, and storage. Management and operations, security, QoS, and performance and capacity span the entire system.)

Campus Endpoint

A campus endpoint is deployed in a campus, where a colocated data center provides the computing
resources, and a LAN provides the network connectivity, typically at speeds of 100 Mbps or more, and
latency of 15 ms or less.


Branch Endpoint

A branch endpoint is deployed in an enterprise’s remote branch, connected to the data center through a
managed WAN. The WAN typically provides bandwidth, latency, and jitter guarantees, as per a
negotiated service-level agreement (SLA), on-net addressing (addressing is private, as per RFC 1918),
and offers some inherent degree of security (by comparison to the Internet). The remote branch where
the endpoint is located is an enterprise-controlled environment and will typically feature a Cisco branch
router (such as Cisco Integrated Services Routers), thus offering the ability to deploy network-based
services, such as Cisco WAAS, telephony gateways, and Survivable Remote Site Telephony (SRST) to
aid in accelerating some aspects of the DV service (such as printing).

Teleworker Endpoint

A teleworker endpoint is deployed over the Internet—for instance, when a remote worker is accessing
the corporate network services over a VPN. The connectivity is provided by a VPN connection
established over the public Internet. The characteristics of bandwidth, latency, and jitter are typically not
known, and can vary from connection to connection, or over time within the span of the same VPN
connection’s duration. There are typically one (or more) Network Address Translation (NAT) traversals
to connect through, and the underlying transport mechanism (the Internet) is untrusted. The remote
environment can vary greatly, and the type of endpoint can be further classified as a mobile teleworker
endpoint or a fixed teleworker endpoint.

Mobile Teleworker Endpoint

When the user is in a location where the connection to the Internet is made from a public Wi-Fi hotspot,
hotel room, airport or other public space, it is considered a mobile teleworker endpoint. Typically, no
equipment other than a laptop or mobile Internet device is available; the user is essentially relying on
the device’s own network and VPN capabilities. If more than one device is used (for instance, a mobile
phone, a tablet computing device, and a laptop computer), each is connected to the enterprise
network services over its own, separate VPN connection.

Fixed Teleworker Endpoint

The endpoint is considered a fixed teleworker endpoint when the user is connected to the Internet
through a dedicated broadband connection, such as a user in a home office environment. This situation
can be very similar to the mobile teleworker endpoint, except for the relative bandwidth, which is
typically greater in quantity and availability than provided by a public Internet connection. Another
possibility in this case is to provide the user with a Small Office Home Office (SOHO) router, through
which hardware-based VPN connectivity can be established. This affords the user more endpoint and
connectivity choices, such as:
• A dedicated collaboration endpoint, such as a Cisco Unified IP Phone, allowing the use of Cisco
Unified Personal Communicator from the virtual desktop (in desk phone mode)
• A network-attached printer
• WAN optimization functionality through a Cisco WAAS module
• Content caching functionality through a Cisco Application and Content Networking System
(ACNS) Network module.
The various endpoint location possibilities just highlighted share many common attributes in a Cisco VXI
deployment. To simplify the narrative, the common functionality attributes are presented next through a
discussion anchored on a branch endpoint. The chapters covering the constituent technologies
highlight the differences stemming from the endpoint’s location.


Cisco VXI: New User Deployment Possibilities


To illustrate some of the fundamental shifts in the deployment of user computing resources caused by
centralized virtualization, the following sections contrast the computing and networking environment of
the traditional, distributed user computing with the centralized, Cisco VXI environment.

PC-based Computing

Consider a typical PC user’s computing environment, illustrated in Figure 11.

Figure 11 PC-Based Computing Environment

(Figure shows a branch PC with local storage connecting across the enterprise IP network to email and web services, the LDAP directory service, and Cisco Unified Communications Manager (telephony signaling and CTI/QBE) in the data center. Internet access is either local to the branch or centralized behind the Internet access firewall.)

User Computing Resources

The user’s computing environment is centered on the personal computer (PC). The user’s PC relies on
its own hardware resources of compute (CPU), memory (RAM) and storage (hard drive). Of particular
interest, the locally configured storage contains the user’s operating system (OS) and personal files. This
storage resource is an IT asset that must often be managed (backed up, patched, configured) remotely,
across the enterprise’s IP network. Centralized virtualization will bring this storage resource under a
shared, centrally deployed storage system.

User Network Connectivity

User actions trigger various connections from the PC, which support the user’s typical applications.
• When using email, the user’s PC initiates connections to an enterprise email system. This connection
could be based on a variety of email protocols, such as SMTP, POP3, IMAP or Microsoft’s
Exchange. Likewise, when accessing web-based resources, the user’s PC initiates HTTP-based
connections to servers either on-net (within the enterprise) or off-net (through the Internet).
• When browsing the web, the user’s PC will initiate HTTP connections to the Internet. Access to the
Internet can be implemented either directly from the user’s branch location, or aggregated through
a centralized Internet link within the enterprise’s main site.
• Directory access is initiated directly from the user’s PC. For instance, when the user looks up a
colleague’s phone number in the company directory, a Lightweight Directory Access Protocol
(LDAP) query is initiated from the user’s PC.

• The user’s phone may be deployed in a Centralized Call Processing topology, where the user’s
actions (making a phone call) are relayed to the centralized Cisco Unified Communications Manager
for processing. The telephony signaling can be implemented using Session Initiation Protocol (SIP)
or Skinny Call Control Protocol (SCCP).
• Collaboration applications such as Cisco Unified Personal Communicator allow the user to
concentrate instant messaging, presence, contact lookup, and telephony control under one desktop
application. Cisco Unified Personal Communicator initiates various connections to centralized
services to support these functions. Of note, when Cisco Unified Personal Communicator is used to
control the user’s desk phone, it initiates a Computer Telephony Integration/ Quick Buffer Encoding
(CTI/QBE) connection toward Cisco Unified Communications Manager.
All the user’s PC connections are initiated separately towards the appropriate service. In other words,
each connection is seen as a separate flow of IP traffic from the user’s PC toward the target service.

Centralized, Virtual User Computing


Virtualization brings about a fundamental shift in the deployment of a user’s computing and networking
environments. Figure 12 illustrates the shift in the user’s compute, connectivity, and storage
environments when using desktop virtualization (DV).

Figure 12 DV-Based Computing Environment

(Figure shows a branch DV endpoint relaying user actions over the display protocol to the user’s virtual desktop, which runs on UCS compute resources and central storage in the data center. The virtual desktop connects within the data center to email and web services, the LDAP directory service, and Cisco Unified Communications Manager (including Cisco Unified Personal Communicator and CTI/QBE), and reaches the Internet through local or centralized access.)

User Computing Resources: Central, Virtual

In this deployment, the user’s desktop has been deployed in a virtualized infrastructure. The user
desktop’s OS is now running centrally, over a shared compute, storage and connectivity infrastructure.
This means that:
• The user’s OS and all of the user’s applications are now consuming centralized, shared compute
resources of CPU and memory. These shared resources are supporting multiple users.

• The user’s OS and application files are stored on a centralized file storage system.

Note Server applications such as email, web, directory and unified communications services can also be
virtualized. The illustration above focuses on user desktop virtualization.

User Network Connectivity

Connectivity between the user’s desktop environment and the various IT services is now achieved within
the data center. In some instances, when both the user’s desktop and the target service are virtualized
within the same compute infrastructure, the connections never leave the virtualized environment.

A New Type of Endpoint

The user’s PC has been replaced by a DV endpoint. In contrast to a PC-based environment where a
“thick” client is required to support the user’s applications executing locally, a virtualized environment
requires a comparatively “thin” user device, supporting display and user input devices. User actions are
relayed from the keyboard and mouse to the user’s hosted virtual desktop (HVD) across a display
protocol, and the resulting action is executed over the shared, virtualized resources of compute, storage,
and connectivity. The visual result of the user’s actions is propagated back to the user’s DV endpoint
across the display protocol.

Note The enterprise IP network now only carries one connection per user desktop. This connection now serves
as a conduit for user actions and desktop GUI updates. For example, when typing the letter “A,” the
user’s action results in a display protocol message sending the action from the DV endpoint toward the
user’s HVD where it is processed by an application. The resulting display update of the letter “A” on the
user’s screen relies on a display protocol message from the HVD toward the user’s DV endpoint.

A Shift in Connectivity
In a nonvirtualized environment, a user’s actions often result in network traffic between the user’s PC
and the data center. The habitual connectivity consideration of system designers is to make sure an
enterprise branch is deployed with an amount of bandwidth commensurate with the connectivity needs
of PC applications contained within the branch. These needs vary per application: some will consume as
much bandwidth as possible during execution (true, for example, of a data backup application), while
others will sporadically require bandwidth (for example, web browsing). In all but campus deployments,
the connectivity traverses the limited bandwidth of WAN or Internet clouds.
Let’s explore some examples of how the connectivity needs of a DV user shift when compared to those
of a PC user.

File Backup

As illustrated in Figure 13, when a backup application copies files from the user’s PC toward a central
file storage system, the data flow traverses the enterprise IP network. When the user is located in a
branch office, the backup action traverses the WAN, where latency and bandwidth impact the duration
and cost of the transaction.


Figure 13 PC-Based File Backup

(Figure shows a branch PC backing up its local drive across the limited-bandwidth, higher-latency enterprise IP network to a central file backup system in the data center.)

In a centralized desktop virtualization environment, the user’s desktop and the central storage system are
colocated and served by the same high-speed LAN. The backup action is thus comparatively faster and
more economical. Figure 14 illustrates the shift in connectivity for backup operations:

Figure 14 DV-Based File Backup

(Figure shows the user’s virtual desktop in the data center backing up to the file backup system over the high-bandwidth (typically 10 Gbps), low-latency (typically less than 10 ms) data center network. Only display protocol traffic crosses the enterprise IP network to the branch DV endpoint.)

Whether the file backup system is itself virtualized or not, it is colocated with the user’s HVD, and thus
the backup traffic does not traverse the WAN. This example illustrates the shift in connectivity brought
about by virtualization of the user’s desktop in the data center.

Internet Browsing

A PC Internet browsing example is illustrated in Figure 15. Consider a PC user located in a branch
accessing the Internet. There are two popular connectivity topologies for enterprises to choose from:
either a centralized Internet access point located within the data center, or access to the Internet through
a localized branch connection. When considering equivalent bandwidth connections, the user’s
experience is the same in both cases, although Internet access through the data center places bandwidth
demands over the enterprise IP WAN.
In the specific case of PC-based interactive multimedia content browsing, such as the viewing of
Flash-based video files, bandwidth demands precede the playing of the file. The content is downloaded
first and then played. Even in the case of some streaming content, the content viewing lags behind the
bandwidth demand, due to the buffering of the media file.


Once the interactive multimedia file is playing in the user’s PC (or, in the case of streaming media, once
the buffer has reached the end of the file), it places no bandwidth demands on the WAN or the Internet
connection.

Figure 15 PC-Based Internet Access

(Figure shows a branch PC downloading media-rich content over HTTP, through either local branch Internet access or centralized Internet access behind the firewall. Bandwidth is consumed during file access only; the media-rich content executes locally after download.)

When Internet access is achieved through an HVD, the shift in connectivity depends on the DV endpoint’s
capabilities. In Figure 16, consider a DV endpoint whose Internet browsing abilities are essentially
carried out by the HVD’s browser. This results in bandwidth consumption between the user’s HVD and
the Internet through the central Internet connection during the multimedia content download, followed
by bandwidth consumption over the enterprise IP WAN while the file is playing. The latter is an
important consequence of the displacement of content execution from the remote branch toward the
central data center: display updates from the HVD to the DV endpoint require WAN bandwidth, even
after the Internet content being viewed has been delivered to the HVD. In the case of looped interactive
multimedia animations, the Internet download of the content may represent a brief, finite amount of
bandwidth over the central site’s Internet connection (for example, 1 Mbps for a few seconds), while the
display protocol’s demands on WAN bandwidth will last as long as the user’s desktop is rendering the
file.


Figure 16 DV-Based Internet Access

(Figure shows the HVD in the data center downloading media-rich content through the centralized Internet access. Because the content executes on the HVD after download, its rendition consumes display protocol bandwidth across the enterprise network to the branch DV endpoint.)

In another deployment model, shown in Figure 17, the DV endpoint, while still relying on the
centralization of much of the user’s computer resources in the data center, retains some local execution
capabilities for multimedia files.

Figure 17 Local Internet Access to Rich Media Content

(Figure shows media-rich content excised from the HVD: the HVD GUI is sent over the display protocol without the rich media, while the DV endpoint fetches the content through local Internet access, plays it locally, and combines it with the HVD GUI updates.)


Figure 17 illustrates a DV endpoint supporting local execution of media-rich content. In this deployment
model, the DV endpoint interacts with the display protocol and the HVD’s management system to retain
control over certain media types. Upon accessing interactive multimedia content over the Internet, the
DV subsystem instructs the DV endpoint to locally connect to the Internet to fetch the media file to be
viewed. Once received at the DV endpoint, the file’s content is locally executed on the endpoint, with no
dependency on the WAN connectivity to the user’s HVD. This approach can yield better performance
for the user, while reducing the bandwidth demands placed on the enterprise’s WAN. The DV endpoint’s
content viewing and rendering capabilities vary according to the DV subsystem’s functionality and the
type of endpoint. The Desktop Virtualization chapter offers more insight into various vendor options.

Printing

In the nonvirtualized environment shown in Figure 18, the printing actions of a PC user typically do not
generate network traffic outside the local branch. A user printing to a directly attached USB printer or
even to a network-attached IP printer will trigger a data connection flow from the PC to the printer
directly. The bandwidth consumption is essentially contained within the branch location.

Figure 18 PC-based Printing

(Figure shows a branch user printing to a local printer; the printing connection remains local to the branch and does not cross the enterprise IP network.)

In a centrally virtualized environment, the user’s print actions rely on a connection from the user’s HVD
to the branch-located printer. This connection is either from the HVD to the DV endpoint’s USB port, as
illustrated in Figure 19, or to a branch’s IP networked printer as shown in Figure 20. In either case, the
print action is dependent on the connectivity from the user’s HVD to the branch location.


Figure 19 DV-based USB Printing

(Figure shows the display protocol’s print channel carrying the print job from the user’s virtual desktop in the data center to a USB printer attached to the branch DV endpoint.)

Figure 20 DV-based Network Printing

(Figure shows a print connection from the user’s virtual desktop in the data center across the enterprise IP network to a network-attached printer in the branch.)
Cisco Virtualization Experience Infrastructure


39
Data Center

The functionality considerations of compute and network resources outlined in these figures apply to DV
deployments in general. The Cisco VXI system facilitates the deployment of centralized virtual
environments by targeting various subsystems to the DV challenges:
• Data center
The Cisco Unified Computing System™ (UCS) family of servers and fabric interconnect products
bring integrated management, performance, and scalability to the compute and storage needs of
centralized virtualization.
• Network
Cisco Wide Area Application Services (WAAS), Adaptive Security Appliance, and Application
Control Engine offer bandwidth efficiency, security and high availability to network connections.
• Endpoint
Cisco Unified IP Phones offer a guaranteed-quality video and audio experience to the user. Cisco
Unified Personal Communicator integrates telephony functionality into the desktop virtualization
user experience.
As part of Cisco’s Borderless Networks Architecture, the Cisco VXI system allows desktop
virtualization deployments to deliver a high-quality experience to the user while enabling the many cost
and operational advantages of centralized virtualization.

Data Center
In the context of this document, the term “data center” refers to the Cisco VXI subsystems supporting
the compute, storage and connectivity needs of desktop virtualization. The same data center may also
support the needs of other virtualized payloads, such as unified communications, email, directory, and
other services. Figure 21 illustrates the data center focus of this chapter. The data center subsystem (as
shown in the blue square) attaches to the enterprise network through the enterprise core switching
network. The additional components shown outside the blue square are covered in the Network chapter.
The critical challenges of scalability, manageability, security, and availability are addressed by each of
the building blocks. The Cisco Data Center 3.0 architecture provides a robust and scalable environment
for hosting desktop virtualization, tightly integrated with the server and network resources and
controlled by an integrated management system.


Figure 21 Data Center Building Blocks

(Figure shows the data center building blocks. The Compute block contains application virtual machine pools (email and web services, directory service, Unified Communications Manager) and desktop virtualization machine pools of user hosted virtual desktops, running on hypervisors on Cisco UCS servers managed by Cisco UCS Manager. The Fabric Interconnect and Switching block contains the Cisco UCS 6100 fabric interconnects, Cisco Nexus 5000 and 7000 switches, and Cisco MDS 9000 Fibre Channel switches carrying network, Fibre Channel, and NFS storage traffic. The Storage block contains the storage arrays. The enterprise IP network and Internet access attach through the enterprise core, with Cisco ASA, WAAS, and ACE.)

This chapter focuses on the compute, fabric interconnect, and switching and storage building blocks
supporting server and desktop virtualization.

Note This document is not intended to provide a complete reference architecture for the data center. Details
of the implementation are based on the Cisco Data Center 3.0 architecture and available from the Data
Center Design Zone on Cisco.com.

Compute
The Cisco VXI system’s compute building block uses Cisco Unified Computing System (UCS) blade
servers to host the virtual desktops. The Cisco UCS B-Series Blade Servers are housed in the Cisco UCS
5108 Blade Server Chassis, which can contain up to eight half-slot or four full-slot B-Series blade
servers. Each blade server contains significant processor, RAM, and I/O resources needed to support the
virtualization of the desktop environment.


Each Cisco UCS server has a hypervisor installed to allow various virtualized desktops and servers to
run as independent virtual machines (VMs). The physical servers can be divided into groups of
machines, each serving the needs of like-purpose virtual machines. For example, as illustrated in Figure
21, a group of servers is dedicated to unified communications, directory, and email servers, while others
serve the needs of desktop virtualization users.
Another form of resource grouping is possible in the virtual realm. Pools of virtual machines can be
defined along the lines of worker profile boundaries such as “task worker,” “knowledge worker,” or
“power user.” This grouping may also be a function of organizational boundaries such as sales, marketing,
and engineering. Each pool contains a set of like VMs with the same security and QoS policies. Pools
of virtual machines are associated with common resources by the hypervisor.
A virtual switch is integrated into the hypervisor, providing the virtual machines with Ethernet
connectivity within the software realm of the server. In other words, the frames originating from a virtual
machine are processed by software switching before being forwarded to the fabric interconnect and
switching block. This means that, if allowable by network policy, a frame originating within one virtual
machine, and destined to another virtual machine running on the same physical server, may be switched
entirely in the virtual realm, without reaching the network adapter of the physical server.
Another aspect of the virtualization of Ethernet switching is that a virtual realm is not bound to one
physical server: multiple blade servers may be connected to the same virtual switch. One important
advantage of this approach is that it allows the Ethernet connectivity profile for a particular virtual machine to
follow VM movements from one Cisco UCS blade server to another without requiring the physical
reconnection of Ethernet ports.
The hypervisor also provides virtual desktops with storage resources. Each virtual desktop is associated
with a storage pool. Connectivity to the storage is provided in the virtual realm by the hypervisor and
can be physically located on the UCS blade server (a local drive) or remotely attached from within the
storage building block (a storage area network [SAN]). In order to support virtual machine migration,
the data storage for a particular set of users must be common to the entire set of physical servers hosting
those virtual desktops. For this reason, remote storage is required for DV. The remote storage may be
located on a Fibre Channel SAN or on the IP network in the form of Network File System (NFS) or
iSCSI-based storage.
For access to Ethernet data and network-based storage resources, the hypervisor bridges the virtual and
physical realms by interfacing the virtual desktops with converged network adapters (CNAs) available
on the physical blade servers. The CNAs provide the physical server (and thus the hypervisor) access to
a pair of Ethernet NICs and a pair of Fibre Channel host bus adapters (HBAs). Each CNA combines the
Ethernet and Fibre Channel traffic onto a unified fabric called Fibre Channel over Ethernet (FCoE). The
hypervisor maps the physical network interface cards (NICs) to virtual ones as part of the virtual switch.
The physical HBAs are bridged directly to virtual ones.
On the back of each Cisco UCS 5108 Blade Server Chassis are two Cisco UCS 2100 Series Fabric
Extenders that provide the network connectivity for all the blade servers. These modules act as
aggregators for the traffic coming from the CNAs. They provide no switching functionality. Each Cisco
UCS fabric extender provides up to four 10 Gigabit FCoE uplinks, which are connected to a pair of
Cisco UCS 6100 Series Fabric Interconnects. In Figure 21, the Cisco UCS 6100 Fabric Interconnect is
depicted as straddling the Compute and Fabric Interconnect & Switching building blocks because:
• It supports the inter-blade traffic as well as traffic flowing between the blades and the enterprise IP
network or storage
• It hosts the Cisco UCS Manager, allowing the integrated management of compute and switching
fabric resources.


By using Cisco UCS Manager in combination with the DV management software, administrators can
deploy virtual machines, perform software upgrades, migrate virtual machines between physical servers,
and extend compute resource control over thousands of virtualized desktops. In addition, the Cisco UCS
Manager and the DV management software provide tools for monitoring and managing the entire
desktop virtualization environment.

Fabric Interconnect and Switching


Ethernet and storage traffic flows to and from the compute building block are supported by the
integration of the Cisco UCS 6100 Series Fabric Interconnects. The Cisco UCS 6100 Series terminates
the FCoE traffic flows coming from the blade enclosures. The Ethernet traffic is separated out and
forwarded to the data center switch fabric (composed of Cisco Nexus® 5000 and 7000 Series Switches).
Any Fibre Channel traffic is forwarded via Fibre Channel uplinks on the Cisco UCS 6100 Series Fabric
Interconnects onto the Fibre Channel storage area network (SAN). The Fabric Interconnect and
Switching building block supports both the Networking and Storage data flows from the Compute
building block.
The Cisco Nexus 5000 Series and 7000 Series Switches afford high-speed Ethernet switching. They are
generally installed as redundant pairs to maximize uptime. In conjunction with the Cisco UCS fabric
interconnects and the Cisco Nexus 1000V Series Switch, they provide a tightly coupled infrastructure
for supporting desktop virtualization.
Important advantages of the Cisco Nexus and UCS platforms are:
• Highly available hardware design
• Multiple redundant 10 Gigabit Ethernet interfaces
• Low-latency switching fabric
• Consistent management interface

Storage Data Flows


The Cisco VXI system uses external storage to provide the virtual desktops with simulated local hard
drives. The main advantage of this approach is that the information for each virtual desktop is not tied
to a particular server, virtual machine, or hard drive. This aids in virtual desktop migration, which may
be part of compute resource load balancing or may be necessary because of an equipment or network
outage.
Remote storage can take many forms. It can be provided by a Network File System (NFS) server or by an iSCSI
storage array located somewhere on the Ethernet network. It can also be part of a Fibre Channel storage
area network. Regardless, the hypervisor maps the incoming storage data to a local storage pool and
presents it to the virtual desktop as if it were a local hard drive. There may be multiple storage pools in the
system presenting different data. One may provide the operating system and applications and a second
one may provide the user data.
Note in particular the following about storage data flows:

IP-based Storage Flows


Storage traffic originating from the NFS server is switched as Ethernet traffic to the storage array via the
Cisco Nexus 5000 or 7000 Series Switch. The location of network-based storage impacts Cisco VXI in
two areas: available throughput to the storage array and isolation of storage traffic.


Fibre Channel Storage Flows


Fibre Channel-based storage traffic will originate on a Fibre Channel storage array attached to a Cisco
MDS 9100 Series Multilayer Fabric Switch. The Cisco MDS 9000 family of switches allows multiple
arrays to talk to multiple hosts (servers) in much the same manner as an Ethernet switch would. The
Cisco MDS 9000 switches are connected to the Cisco UCS 6100 Series Fabric Interconnects.
Traffic is then encapsulated by the 6100 into FCoE frames and passed onto the server’s converged
network adapters (CNAs), where it is decapsulated back into Fibre Channel packets presented on the
vHBA (part of the CNA).

Storage Considerations
The Cisco VXI system supports unified storage subsystems providing both Fibre Channel and NFS
storage resources. Testing was done with the EMC CX4-240 and NetApp FAS 3170. Each provides
NAS-based and SAN-based storage connectivity options. Each array is broken down into several logical
unit numbers (LUNs), also called partitions. Each LUN can be shared among several servers (and virtual
desktops). For NFS access, the array is broken down into separate file systems. Each file system is
mapped by the hypervisor into local storage pools.

Hosting Desktop Virtualization in the Data Center


Desktop virtualization’s performance footprint must be laid out in order for the data center resources to
be allocated adequately. While DV shares many attributes with more generic virtualization payloads
such as web server virtualization, it is distinct in the following ways.

Compute Power

The virtualized machine is that of a user: in contrast to, for example, a web server, a virtualized desktop
machine carries the load of a single user. RAM, storage, and connectivity requirements may be much
smaller than those of a server when considered at an individual level, but the aggregation of hundreds of
DV machines on a physical server represents a significant load on the system.
DV machines need to be insulated from other virtualized payloads in the data center. This is both to
shield the DV machines from the compute demand peaks of server machines, and also to shield server
machines from the aggregate load effects of the DV machines. While an application server may represent
a heavy load on compute resources, its intensity can be near-constant during the work day. For example,
an email server may serve a few hundred connections at any given time, and this intensity, though
variable, is not likely to abruptly change. A single DV machine running a virus scan generates minimal
resource load. However, if all DV machines launch a virus scan operation at the same time, the aggregate
load will represent a sharp crest in the consumption of compute resources. Likewise, if all the DV
machines boot up at the same time (for example, at the beginning of the workday), the aggregate RAM,
CPU and I/O consumption will spike.
To prevent the DV machine resource spikes from interfering with other applications running on virtual
machines, the CPU and RAM boundaries of compute resources are aligned with physical blade servers.
DV machine pools should be deployed on dedicated sets of physical servers.

Network Connectivity

DV connectivity is user-based. In contrast to a web server, which would essentially listen on port 80 for
HTTP connections, a user’s DV machine is the originator of traffic. The quantity of bandwidth required
to service a DV user is smaller than that of a server, when taken on an individual basis. However, once
aggregated by the hundreds, DV machines represent a significant traffic load on the network. The
destination of the DV machines' connectivity is very different from that of servers. The types and
volumes of traffic typically seen from access switches in the branch office and campus are now
originating from within the data center. DV machines will connect to OS update services, will originate
Internet traffic to various websites, will serve as the source of email protocol connections (IMAP,
SMTP), and so on. Based on the user profile, dozens, if not hundreds, of flows not usually seen from
servers will now originate from the compute resources.
Applications that rely heavily on network connectivity, such as backup software, may represent heavy
loads on the network. When aggregated by the hundreds, DV machine backup operations can occupy a
large proportion of the network links, especially if launched at the same time.
The quantity of network connectivity allocated to DV machines needs to be determined by the types of
user applications. In addition, this connectivity needs to be protected from depletion by other types of
payloads in the data center. This is implemented by carving out dedicated network resources in the
virtual and physical realms. The VLAN, subnet, physical and virtual network adapters, uplinks, and
switching fabric configurations need to be aligned as DV port profiles, targeted to the needs of DV user
pools.

Storage

In contrast to a PC user's locally provisioned hard drive, the DV user's file system is aggregated in the
data center and stored in a shared array, and thus relies on the storage building block to house the users'
desktop and OS files. The DV virtual machines' individual storage size (on the order of 20 GB) is modest
in comparison with that of, say, an email server. When aggregated, the total storage needs grow, but not
necessarily linearly with the quantity of users. Various techniques are employed by desktop
virtualization control software to eliminate file duplication between DV machines. If a hundred user
machines' OS file systems require the same file, it can be stored once and made available to all DV
machines. This technique allows for a considerable reduction of storage requirements when compared
to the linear growth of storage that would occur if files were simply duplicated in storage.
The storage subsystem’s most precious currency then becomes the input/output operations per second
(IOPS). For example, if dozens or even hundreds of DV machines are booted up at the same time, the
simultaneous drain on storage resources will be most felt at the IOPS level. When a data center supports
multiple types of virtualized machine payloads, a degree of storage segregation is needed to protect the
DV machines' IOPS resources from the effects of server machine IOPS, and vice versa. IOPS protection
relies on storage bandwidth being apportioned to pools of virtual machines through the configuration of
the hypervisor and the fabric interconnect switches.

Allocation of Resources

DV Machine Density
The density of DV machines per physical server is generally much larger than that of application servers.
While the limiting factors are memory, CPU capacity, and network throughput, additional limitations
may be imposed by the virtual desktop software used. The number of VMs per server pool may be
limited by either the connection broker or the DV management software. The Performance and Capacity
chapter offers some guidelines on the DV machine density based on task worker profile testing. The load
represented by the task worker profile is detailed, and the resulting compute density recommendations
are explored.


Network Connectivity Allocation


As the number of virtual desktops grows, so does the complexity of management, security, and
availability. It is important to deploy intelligent services at the virtual desktop level, since the user's
computing environment has been migrated to the data center. The separation of the DV server pool
(shown in Figure 21 at the beginning of this chapter) through the use of different VLANs needs to be
carried throughout the switching fabric. These VLANs may terminate on the data center router or a
firewall, based on the company's network policies.
In the Cisco VXI data center, the virtual and the physical connectivity realms need to be jointly
provisioned to offer appropriate network connectivity to the DV machine pools.
Figure 22 illustrates the integration of the physical and the virtual Ethernet switching functionality. In
this example, the VLANs assigned to three separate pools of machines are implemented in the Cisco
Nexus 5000 Series Switches and the UCS 6120XP 20-Port Fabric Interconnect at the physical level,
while the hypervisor provides the VLAN implementation at the virtual level. The VLAN numbers shown
for the separate pools of virtual machines are arbitrary. They should be assigned according to the
network policies set by the data center administrators.
The pool with VLAN 21 assigned represents applications that need access to the public Internet. Those
virtual machines are typically firewalled from the internal applications hosted on the servers in the pool
marked with VLAN 22. For this example, VLAN 23 represents the new pool of servers hosting the
virtual desktops. This pool is most likely also firewalled from the VLAN 22 machines. This subject is
covered in more detail in the Network and Cisco VXI Security chapters.

Figure 22 Network Connectivity, Physical Resource Allocation

[Figure 22 depicts three virtual machine pools connected through the hypervisor virtual switching layer, the Cisco UCS blade servers, and the Cisco UCS Fabric Interconnect and Nexus 5000, with port profiles applying VLAN, QoS, and security settings. VLAN 21 carries a web server farm, VLAN 22 carries application servers (Unified Communications Manager, Active Directory, and database), and VLAN 23 carries the DV pool (connection brokers, virtual desktops, and DV management software).]


Implementing DV Using VMware ESX/ESXi Hypervisor


The VMware ESX/ESXi hypervisor (which is part of VMware vSphere) provides many of the features
just described. It is installed on each physical server and managed by VMware vCenter (see Figure 23).
ESX/ESXi supports virtual switching. In this reference architecture, the Cisco Nexus 1000V Series
Switch was used, which provides a distributed virtual switch that spans several physical servers. Port
profiles can be created once and associated with each virtual desktop; if the virtual desktop migrates, the
port profile migrates with it. This profile is created within the Cisco Nexus 1000V Series Switch and
contains information such as VLAN assignment, QoS policies, and security access control lists (ACLs).
The port profile is linked to the VMware vCenter virtual machine profile, so that if vCenter migrates a
particular virtual desktop via vMotion, the associated port profile also migrates.
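A minimal Cisco Nexus 1000V port profile for one DV pool might look like the following sketch; the profile name and VLAN number are illustrative and should follow the data center's own policies, and QoS or ACL service policies can be added to the same profile:

port-profile type vethernet DV-POOL-23
  vmware port-group
  switchport mode access
  switchport access vlan 23
  no shutdown
  state enabled

Once enabled, the profile appears in vCenter as a port group that can be assigned to the virtual desktops' vNICs and follows each virtual machine when vMotion moves it to another host.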

Figure 23 VMware ESX/ESXi and Cisco Nexus 1000v integration

[Figure 23 shows Cisco Nexus 1000V Virtual Ethernet Modules (VEMs) embedded in the vSphere hosts and a Virtual Supervisor Module (VSM) paired with the VMware vCenter Server. The VSM is a virtual appliance running Cisco NX-OS (with high-availability support) that performs management, monitoring, and configuration and integrates tightly with VMware vCenter Server. The VEM enables advanced networking capability on the hypervisor and provides each VM with a dedicated switch port; a collection of VEMs forms one vNetwork Distributed Switch.]

The network security issues that were once dealt with through wiring closet switches are now a function
of the data center network. Because the environment is now virtualized, it is necessary to implement a
distributed virtual switch that can see into the physical servers and have access to the virtual desktop's
network connectivity. To perform this task, the Cisco Nexus 1000V Virtual Ethernet Modules (VEMs)
are integrated into the ESX/ESXi hypervisors on each of the servers. This provides a single, manageable
entity that spans multiple ESX/ESXi hosts. Two additional modules, the Virtual Supervisor Modules
(VSMs), are created for the virtual switch control plane. With the assistance of VMware vCenter, the
Cisco Nexus 1000V Series allows multiple virtual desktop VMs to migrate within the server pool
without duplicating VM- or port-specific policies.
Security, QoS, and network policies span multiple ESX/ESXi hosts without having to reproduce those
policies per server. Targeted port profiles can be created for the specific requirements associated with each
type of user and its virtual desktop. Troubleshooting of virtual desktop connectivity issues is
enhanced through the built-in Switched Port Analyzer (SPAN) capability. Lastly, security is increased
through the use of several additional features such as VLANs, private VLANs, port security,
Network Admission Control (NAC), IEEE 802.1x authentication, and security ACLs.
The network segregation needs to start in the virtual realm, where the ESX/ESXi hypervisor connects
the DV machine's virtual network interface card (vNIC) into a virtual switch port controlled by the Cisco
Nexus 1000V VEMs.
The virtual network connectivity extends across physical servers: multiple VEMs are deployed as a
single virtual distributed switch, allowing virtual machines to move across the boundaries of physical
servers while remaining in the same Layer 2 and Layer 3 context. This relies on the joint configuration
of physical Ethernet connectivity in the Fabric Interconnect and Switching building block.
A hypervisor allows multiple operating systems to run concurrently on a host computer, a feature called
hardware virtualization. A combination of VMware ESX and ESXi (versions 4.0 and 4.1) was used as
the hypervisor for managing the virtual desktops, regardless of whether the user chooses a VMware View
or Citrix XenDesktop deployment. VMware ESX/ESXi is installed as part of the vSphere 4.0 package.
VMware vCenter, which is also included as part of vSphere, is software for the creation, management,
and monitoring of the VMs used for the virtualized desktops. The user should follow the installation
instructions provided by VMware.
Here is a quick summary of guidelines to enhance the Cisco VXI deployment:
• ESX/ESXi should be set to boot from remote storage where possible. This allows for
off-the-shelf server replacements as well as centralized configuration management of the ESX/ESXi
software.
• Set the ESX/ESXi management Ethernet address on a different subnet from the Cisco UCS Manager
network. This allows the user to access the virtual KVM console (part of the UCS Manager
software) even if ESX/ESXi is not functional.
• Synchronize the ESX/ESXi clock to an external Network Time Protocol (NTP) server. Then
synchronize the virtual desktop OS clocks to the ESX/ESXi clock. This helps ensure that logging
information maintains time consistency. You can do this through the VMware Tools installed on
the virtual desktop.
The Cisco UCS B-Series Blade Servers offer converged connectivity supporting network and storage
protocols in a single adapter. Figure 24 shows a server with a pair of CNAs installed and the logical
connections made within the server. Each CNA presents to the server a Fibre Channel HBA and an
Ethernet NIC. When ESX/ESXi is installed, the Fibre Channel HBAs are mapped to virtual ones, which
can be used for attaching remote storage. The Cisco Nexus 1000V Series is not aware of the HBAs, since
they are not mapped into the internal virtual Ethernet framework; more on this is discussed in the storage
section of this chapter. The Ethernet interfaces, however, are mapped to the Cisco Nexus 1000V virtual
switch. The Cisco Nexus 1000V Series treats these as uplink ports. They are usually configured as trunk
ports and connected through EtherChannel for redundancy and load balancing. Internal to the Cisco
Nexus 1000V Series, several virtual interfaces (vEths) are created. One is reserved for the kernel traffic
(connection to vCenter) and is put into a separate VLAN and assigned a unique IP address. A second
one is reserved for vMotion, and again placed in a separate VLAN and assigned a second unique IP
address. Neither of these interfaces is seen by the virtual desktops; for security reasons, the virtual
desktops do not even have access to these VLANs. One final reserved vEth may be created if the
ESX/ESXi datastore is connected via Ethernet-based storage (NFS). The Ethernet-based storage traffic
should also be isolated into its own VLAN.
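The uplink side can be handled with an Ethernet-type port profile that trunks the required VLANs across the two CNA interfaces. The following is only a sketch: the VLAN numbers are illustrative (21-23 for the example pools and 100-103 for assumed management, vMotion, Nexus 1000V control, and storage VLANs), and mac-pinning is assumed because the UCS 6100 fabric interconnects do not negotiate a port channel with the blade adapters:

port-profile type ethernet DV-SYSTEM-UPLINK
  vmware port-group
  switchport mode trunk
  switchport trunk allowed vlan 21-23,100-103
  channel-group auto mode on mac-pinning
  no shutdown
  system vlan 100,101,102
  state enabled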


Figure 24 CNA Connectivity with ESX/ESXi

[Figure 24 shows a Cisco UCS B200 server running the ESX hypervisor with two CNAs (CNA0 active, CNA1 standby). Each CNA presents a Fibre Channel HBA (vmhba) carried over 10G FCoE and an Ethernet NIC (Eth1/Eth2) mapped as an uplink to the Cisco Nexus 1000V VEM. The hosted virtual desktops attach to vEth ports, and additional vEth interfaces are reserved for the vMotion vmkernel, hypervisor management, and NAS storage. The traffic flows shown are FC storage, server management (CMIC), DV migration, hypervisor management, NAS storage, and virtual desktop display.]
Additional VLANs will be created for the virtual desktops. The IP addresses may be statically defined
or acquired via Dynamic Host Configuration Protocol (DHCP). The address range and subnet mask need
to be sized according to the size of the virtual desktop pool. Virtual desktops can migrate only within
the same pool (subnet, VLAN, and so on). For each virtual desktop, only a single vNIC needs to be
created. It is possible to have virtual desktops from different pools running on the same physical server.
The configuration for the trunk ports needs to include all possible VLANs, including the ones for the
allowed virtual desktops. The Cisco Nexus 1000V Series will create an EtherChannel across the two
CNA (NIC) interfaces, so loads on each interface should be monitored to ensure that the appropriate load
balancing algorithm was selected.
Cisco Integrated Management Controller (CMIC) traffic is forwarded internally to the UCS 6100 Fabric
Interconnects. The Cisco UCS Manager assigns and manages that interface. The IP address assigned to
the Cisco Integrated Management Controller must be on the same subnet as the UCS Manager. This is
the address used for remote KVM access and server maintenance.
The VMware vKernel interface will be used to communicate with vCenter. The IP address should be
assigned to a separate VLAN from the Cisco Integrated Management Controller interface. The VLAN
used must be routable to the vCenter server’s VLAN.
The vMotion interface is used for virtual desktop mobility. It should also be assigned to a separate VLAN
that is common to all the servers within the same virtual desktop pool. This network does not need to be
extended beyond the access layer, since vMotion is not a routable protocol. vCenter uses the vMotion
interfaces on the servers (and the associated VLAN) for the traffic flow needed to migrate the virtual
desktop. Because the virtual desktop's storage is maintained remotely, only the contents of RAM (and
the CPU state) of the desktop are transmitted.


Implementing DV Using Citrix XenServer Hypervisor


Citrix XenServer provides many of the hypervisor features mentioned earlier. It is installed on each
physical server and is managed by XenCenter, an application that runs on any Windows machine; the
configuration data is stored within each XenServer host. While XenServer supports virtual switching,
it does not support the Cisco Nexus 1000V Series Switches. Therefore, port profiles must be created
either in the UCS Manager or on the Cisco Nexus 5000 Series.
Figure 25 shows a Cisco UCS B200 M2 Blade Server with a pair of CNAs installed and the logical
connections made within the server. Each CNA presents to the server a Fibre Channel HBA and an
Ethernet NIC. When XenServer is installed, the Fibre Channel HBAs are mapped to virtual ones, which
can be used for attaching remote storage. XenServer treats the Ethernet NICs as uplink ports. The user
creates "networks" that serve as pass-through tunnels for the virtual desktops. Each network is assigned
a VLAN that is common across all the physical servers that make up the DV user pool. Separate
management and XenMotion networks should also be created. The optional Ethernet-based storage
traffic should also be isolated into its own VLAN. Since the number of networks (VLANs) exceeds the
number of physical NICs, the pair of NICs should be placed into a NIC team. At the time of publication,
Citrix supports only Server Load Balancing on the NIC team. However, Link Aggregation Control
Protocol (LACP) implementation instructions are located at: http://www.sysadminhead.com/2009/11/.
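A sketch of the corresponding XenServer CLI steps is shown below; the UUID placeholders would be taken from xe pif-list and xe network-list output, and the VLAN number is illustrative:

xe network-create name-label=Bond-Network
xe bond-create network-uuid=BOND_NETWORK_UUID pif-uuids=PIF_UUID_ETH0,PIF_UUID_ETH1
xe network-create name-label=DV-User-Network
xe vlan-create network-uuid=DV_NETWORK_UUID pif-uuid=BOND_PIF_UUID vlan=23

The bond aggregates the two CNA Ethernet interfaces, and each additional network created on top of the bond PIF carries one VLAN; similar networks would be created for the management, XenMotion, and storage VLANs.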

Figure 25 CNA Connectivity Using XenServer

[Figure 25 shows a Cisco UCS B200 server running the XenServer hypervisor with two CNAs (CNA0 active, CNA1 standby). Each CNA presents a Fibre Channel HBA carried over 10G FCoE and an Ethernet NIC attached to the XenServer virtual switch. Separate networks carry the storage, XenMotion, HVD user, and management traffic, and the traffic flows shown are FC storage, server management (CMIC), DV migration, hypervisor management, NAS storage, and virtual desktop display.]


Virtual desktops can migrate only within the same pool (subnet, VLAN, and so on). For each virtual
desktop, only a single vNIC needs to be created. It is possible to have virtual desktops from different
pools running on the same physical server. The XenServer management interface will be used to
communicate with XenCenter. The IP address should be assigned to a separate VLAN from the CMIC
interface.
The XenMotion network is used for virtual desktop mobility. It also should be assigned to a separate
VLAN that is common to all the servers within the same virtual desktop pool. This network does not
need to be extended beyond the access layer. XenCenter will use the XenMotion network on the servers
(and the associated VLAN) for the traffic flow needed to migrate the virtual desktop.

Connecting the Blade Server to the Network


As Figure 26 shows, Cisco Integrated Management Controller (CMIC) traffic is forwarded internally to
the Cisco UCS 6100 Fabric Interconnects. The Cisco UCS Manager assigns and manages that interface.
The IP address assigned to the CMIC must be on the same subnet as the Cisco UCS Manager. This is the
address used for remote KVM access and server maintenance.
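As a sketch of how the CMIC addresses can be supplied, a block can be defined in the UCS Manager ext-mgmt IP pool; the address range below is hypothetical, the same task can be performed in the UCS Manager GUI, and the exact CLI scope may vary by UCS Manager release:

scope org /
  scope ip-pool ext-mgmt
    create block 10.10.10.101 10.10.10.140 10.10.10.1 255.255.255.0
    commit-buffer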

Figure 26 CNA Connectivity & Flows

[Figure 26 shows a Cisco UCS B200 server whose two CNAs connect over 10G FCoE to a pair of UCS 6100 Fabric Interconnects. Each Fabric Interconnect splits the traffic onto 4/8 Gb Fibre Channel uplinks toward SAN A and SAN B, and onto 10 Gigabit Ethernet uplinks toward the NAS and the aggregated Ethernet network. The traffic flows shown are FC storage, server management, DV migration, hypervisor management, NAS storage, and virtual desktop display.]

Several VLANs must be created to support the Cisco VXI deployment. These are shown in the Traffic
Flow box in Figure 26 for the data center, but there are additional ones needed throughout the network.
Table 4 outlines an example set of VLANs needed:


Table 4 List of Required VLANs for Cisco VXI Deployment

The table lists each VLAN name, the devices that use it, its usage, and notes.

Data Center
• UCS Mgmt. Devices: UCS blade enclosures, Fabric Interconnects, and blade server CMIC. Usage: out-of-band management of all UCS components. Notes: this VLAN may be shared with other management VLANs.
• vMgmt. Devices: hypervisor. Usage: management of all hypervisors. Notes: this VLAN is specific to a particular DV pool; therefore there may be separate VLANs for each pool.
• vMigration. Devices: hypervisor. Usage: migration of DV VMs within the DV pool. Notes: like vMgmt, there may be several separate VLANs, one for each pool.
• N1KControl and N1KData*. Devices: Nexus 1000V VEM and VSM. Usage: Nexus 1000V management. Notes: common to all physical servers that provide the distributed switch functionality; there may be more than one Nexus 1000V installed in a data center.
• VMData. Devices: DV virtual machines. Usage: user's network access and the DV display protocol. Notes: DV machines should only need one NIC; one VLAN for each DV pool.
• VMStorage. Devices: hypervisor. Usage: provides isolation for storage traffic. Notes: for FC or IP based storage.

Endpoint Location
• Data**. Devices: DV endpoint. Usage: connection to the DV virtual machine. Notes: the DV data VLAN may be segregated from the legacy compute endpoint VLANs.
• Voice**. Devices: UC endpoints. Usage: provides UC access. Notes: traditional voice VLAN.

Note *These VLANs are only used when deploying the Cisco Nexus 1000V.

Note **The endpoint data and voice VLANs are discussed more in the Network chapter.

For the virtual desktops, as shown in Figure 27, the traffic flows include:
1. Display and control protocols that will be sourced from the virtual desktop and terminated on the
remote endpoint.
2. Flows associated with access to applications within the data center, which travel across the Layer 2
segment until they reach either the router in the aggregation layer or a firewall, and are then routed
to another Layer 2 segment containing the application.
3. Internet access flows sourced from the virtual desktop, which travel up the data center network stack,
cross into the enterprise network, and exit the corporate network via the Internet router. They may
cross several firewalls along the way.
4. On rare occasions, traffic directly between the virtual desktops:
a. If the flows occur between desktops contained within the same pool and on the same physical
server, this traffic will never exit that server. If this needs to be prevented, the administrator can
configure private VLANs (a configuration sketch follows this list).


b. If the flows occur between desktops not on the same server but within the same pool, the traffic
will travel up to the aggregation layer switch and then back down to the server containing the
other virtual desktop.
c. If the flows occur between desktops in different server pools, the traffic will flow up the network
stack to the aggregation layer, be routed to the other subnet, and then be forwarded to the server
containing the other virtual desktop.
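For case 4a, a private VLAN configuration on the Cisco Nexus 1000V could look like the following sketch; the primary and isolated VLAN numbers are illustrative:

vlan 23
  private-vlan primary
  private-vlan association 230
vlan 230
  private-vlan isolated
port-profile type vethernet DV-POOL-23-ISOLATED
  vmware port-group
  switchport mode private-vlan host
  switchport private-vlan host-association 23 230
  no shutdown
  state enabled

Desktops attached to the isolated secondary VLAN can still reach the default gateway through a promiscuous port on the primary VLAN but cannot exchange traffic directly with each other.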

Figure 27 Virtual and Physical traffic flows

[Figure 27 shows the data center core and aggregation layers connecting physical servers (each running a hypervisor and a virtual switch) to the enterprise network and the Internet. Flow 1 is the DV display traffic toward the user, flow 2 reaches applications in another data center segment, flows 4a, 4b, and 4c are desktop-to-desktop traffic within a server, across servers in the same pool, and across pools, and the DV user Internet traffic exits via the enterprise network.]

Storage Options
Virtual desktops rely on a shared storage infrastructure to contain the user's machine environment,
including the operating system, user profiles, and, if necessary, each user's personal data. Storage is
offered to the virtual desktop VM in a form that emulates a directly attached disk, but it is physically
located on a shared storage platform that is accessed through a 10 Gigabit Ethernet, FCoE, or Fibre
Channel connection.
In Figure 26 earlier in this chapter, the Fibre Channel HBAs were shown to be part of the CNAs.
These are visible to the hypervisor, mapped to virtual HBAs, and can be used to create virtual storage
pools that are physically located on the remote storage device. The Fibre Channel traffic either travels
directly across a Fibre Channel link or is encapsulated in an FCoE connection. In the case of FCoE, the
storage data travels upstream to the Cisco UCS 6100 Series Fabric Interconnects where it is separated
back out onto Fibre Channel uplinks. Those uplinks are connected to Cisco MDS Fibre Channel switches
and the Fibre Channel storage array is also connected to the Cisco MDS switch (see Figure 28). Fibre
Channel connections typically operate in an active or standby mode. In some cases, depending on the
storage array vendor, the uplinks from Cisco MDS switch A and Cisco MDS switch B may point to the
same array. However, a particular logical unit number (LUN) will show up on only one of the interfaces
at a time. For traffic load balancing, the user should mix the LUN assignments across the two SANs.


Figure 28 Fibre Channel Storage Network

[Figure 28 shows two redundant SAN fabrics (SAN 1 and SAN 2): FCoE traffic from the UCS blade systems reaches the UCS 6100 Fabric Interconnects, which pass Fibre Channel traffic to the Cisco MDS FC switches and on to the FC storage arrays.]

An alternative to Fibre Channel based SAN storage is Ethernet-based NFS storage (Figure 29). Like
Fibre Channel attached storage, the remote array is mapped to a hypervisor storage pool, and the
hypervisor provides virtual desktops with simulated local storage. Since the storage is accessed over
an Ethernet network, a separate VLAN should be created to isolate storage I/O from the rest of the
network.

Figure 29 Ethernet-based Storage Network

[Figure 29 shows FCoE traffic from the compute layer reaching the Cisco Nexus 5000, with Ethernet traffic switched through the Nexus 5000 and Nexus 7000 toward the NAS storage.]

The decision to use Fibre Channel SAN or Ethernet-based NAS is up to the user. The storage traffic will
consume some of the CNA's network bandwidth, so QoS policies and queue mapping for the storage
traffic should be configured within the UCS Manager. In the case of NFS-based storage, a storage VLAN
should be created to isolate this traffic, and jumbo frames should be configured where possible in order
to optimize the traffic sent.
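For example, jumbo frames can be enabled system-wide on a Cisco Nexus 5000 with a network-qos policy similar to the following sketch (9216 bytes is the commonly used maximum; verify the value against your NX-OS release):

policy-map type network-qos JUMBO
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos JUMBO

The hypervisor storage interface and the storage array ports must be set to a matching MTU for jumbo frames to take effect end to end.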


There are a few things to take into account when deploying shared storage. If many different VMs share
the same storage, accessed through shared links, the aggregated traffic may exceed the available
resources. Also, if any one virtual desktop runs an application requiring a lot of storage device I/O, it
may starve others relying on the same shared pool of storage resources. Segregating the shared resources
according to virtual machine type and configuring resource pools for each of the DV pools can help. The
isolation of HVDs from general-purpose servers is best achieved by configuring segregated service
profiles, each assigned to separate storage volumes. The aggregate I/O demand of the VMs assigned to
a volume should not exceed the I/O bandwidth available to that volume. The specific configuration steps
for this are part of the server template configured in the DV management software.

For More Information


Table 5 provides links to more detailed vendor information for the hardware and software discussed in
this chapter.


Table 5 Data Center Links

Cisco Data Center Design Zone
http://www.cisco.com/en/US/netsol/ns743/networking_solutions_program_home.html
Best Practices in Deploying Cisco Nexus 1000V Series Switches on Cisco UCS B Series Blade Servers
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/white_paper_c11-558242.html
VMware vSphere Documentation Page
http://www.vmware.com/support/pubs/vs_pages/vsp_pubs_esxi40_i_vc40.html
Cisco Solutions for a VMware View 4.0 Environment Design Guide
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/vmware/cisco_VMwareView.pdf
Cost-Effective Storage Solutions for VMware View 4.5 Enabled by EMC Unified Storage
http://www.emc.com/collateral/software/white-papers/h8033-vmware-view-unified-storage-wp.pdf
Proven Solution Guide: EMC Infrastructure for Virtual Desktops - Enabled by EMC Celerra Unified Storage (FC), VMware vSphere 4.1, VMware View 4.5, and VMware View Composer 2.5
http://www.emc.com/collateral/software/technical-documentation/h8042-virtual-desktop-celerra-fc-vmware-psg.pdf
Reference Architecture: EMC Infrastructure for Virtual Desktops - Enabled by EMC Celerra Unified Storage (FC), VMware vSphere 4.1, VMware View 4.5, and VMware View Composer 2.5
http://www.emc.com/collateral/solutions/reference-architecture/h8027-virtual-desktop-celerra-fc-vmware-ra.pdf
White Paper: Deploying Microsoft Windows 7 Virtual Desktops with Windows Server 2008 R2 Hyper-V—Applied Best Practices
http://www.emc.com/collateral/software/white-papers/h8041-windows-virtual-desktop-hyper-v-wp.pdf
VMware View on NetApp Deployment Guide
http://media.netapp.com/documents/tr-3770.pdf
Citrix XenDesktop Deployment Guide
http://www.citrix.com/%2Fsite%2Fresources%2Fdynamic%2Fsalesdocs%2FXD_Enterprise_Design_White_Paper.pdf
Citrix XenServer 5.6 Documentation
http://support.citrix.com/product/xens/v5.6/
Scalability Study for Deploying VMware View on Cisco UCS and EMC Symmetrix V-Max Systems
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/App_Networking/vdiucswp.html

Network
The network design for the Cisco VXI system is based on Cisco's best practices for deploying campus,
WAN, and data center network infrastructures. The network infrastructure can be broken down into three
main functions that enable the display protocol to flow over the network, as illustrated in Figure 30.


These functions are security, optimization, and availability. Each consists of many components that
combine to form the Cisco VXI system. This chapter covers each function in depth and provides best
practices for deployment, configuration, and operations.

Figure 30 Functional Flow for a DV Network

[Figure 30 shows the display protocol flowing over the network between the endpoints and the data center through three functional layers: security, optimization, and availability.]

The network provides the infrastructure for delivering secure, optimized, and available applications. One
of these applications is DV, and one of the main drivers for DV is the increase in security. With DV
deployment, the enterprise workspace moves into the data center. Virtual desktops are more secure
because they are located in the data center alongside the corporate servers, which are already treated
with high security measures.
Many DV deployments permit USB redirection, which means data traverses the network from the data
center to the client to allow USB flash drive data transfers. This is, however, a matter of security policy:
the connection broker administrator can disable USB mass storage device redirection.
Additional services must be applied to the network to provide network traffic monitoring,
troubleshooting, QoS control, network device protection, and secure access. The following network
services should be considered for delivering secure, optimized, and available DV applications:
• When deploying hosted virtual desktops, network services such as Domain Name Server (DNS),
Dynamic Host Configuration Protocol (DHCP), and Active Directory should be installed on separate
servers. We recommend implementing network segmentation, virtualization, and firewall
technologies to enforce proper access control policies between hosted virtual desktops and corporate
servers.
• Consider applying traditional campus access services in the data center access layer for
infrastructure security: for example, access control lists (ACLs), Layer 2 security such as port
security, dynamic Address Resolution Protocol (ARP) inspection, IP source guard, and private
VLANs.
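As an illustration of the access control called out above, an ACL applied between the hosted virtual desktop subnet and a corporate server subnet might permit only the required application ports; the subnets, hosts, and ports in this sketch are hypothetical:

ip access-list extended HVD-TO-SERVERS
 permit udp 10.23.0.0 0.0.255.255 host 10.22.10.10 eq domain
 permit tcp 10.23.0.0 0.0.255.255 host 10.22.10.20 eq 443
 deny ip 10.23.0.0 0.0.255.255 10.22.0.0 0.0.255.255
 permit ip 10.23.0.0 0.0.255.255 any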
Optimization can accelerate a variety of applications accessed by the system's users. The primary
purpose of optimization technologies is to eliminate redundant data transmissions. Optimization can be
achieved by populating data in local caches, compressing, prioritizing, and streamlining chatty protocols
such as Common Internet File System (CIFS), Messaging Application Programming Interface (MAPI),
HTTP, and Network File System (NFS). A form of optimization known as WAN optimization (provided
by Cisco Wide Area Application Services) helps avoid packet delivery issues common in WAN
environments. WAN optimization includes TCP flow optimization, data redundancy elimination,
wide area file services (WAFS), CIFS proxy, HTTP proxy, media multicasting, web caching, forward
error correction (FEC), and bandwidth management.
Maintaining application uptime and availability is a major concern of IT system administrators. Many
mission-critical applications in the Cisco VXI system require transparent failover if parts of the system
become unresponsive. Server uptime is important in helping prevent lost productivity and ensuring that
users can access their virtual desktops without any interruptions. Cisco VXI systems must be able to
detect failures of a connection broker or virtual desktop and use a dynamic approach to mitigate these
failures.
In virtual desktop environments, verification of several layers of infrastructure (such as the connection
broker, Active Directory, servers, and storage) may be desirable before a virtual desktop request is sent
to a server. To determine that all pieces are active and available, load balancers are needed to verify the
connection broker, and the connection broker in turn verifies the virtual machines and the servers hosting
them. Active health checking is required to test the application server and not just the protocol layer. The
Cisco ACE Application Control Engine, part of the Cisco VXI solution, provides these capabilities.
Figure 31 represents the three main functions of the Cisco VXI system for securing, optimizing, and
providing availability. The Cisco Wide Area Application Engine (WAE) appliance running Cisco Wide
Area Application Services (WAAS) sits at the network edge in both the data center and the branch, as
shown in Figure 31. This enables Cisco WAAS to accelerate DV delivery, or application delivery over
DV, while all traffic is routed through a secure virtual private network (VPN) tunnel set up by the Cisco
ASA 5500 Series Adaptive Security Appliances (ASA). The vast majority of branch deployments use
router-to-router VPNs rather than ASA-to-ASA VPNs; Cisco ASA appliances are usually used for remote
access (IPsec VPN client, Secure Sockets Layer [SSL] VPN), while Cisco IOS® Software has a much
richer set of site-to-site VPN support. In a Cisco VXI system-enabled network, VPNs between sites can
be serviced by Cisco routers and/or Cisco ASA appliances. Figure 31 represents the functional component
flow of the display protocol from the branch to the data center across a VPN tunnel.

Figure 31 Component Flow of the Network

[Figure 31 shows branch endpoints connecting across a VPN tunnel to the data center: a Cisco WAE and ASA at the branch, an ASA, WAE, and ACE at the data center edge, the WAAS Central Manager, and the connection broker in front of the hosted virtual desktops.]

Security
IT organizations today want desktop virtualization to deliver desktop services to end-users anytime and
in any location. This applies not only to users in the LAN, but also increasingly to users across the WAN
who must access virtual desktops hosted in corporate data centers. The WAN can pose a challenge to IT
organizations that not only need to deliver desktop services to remote end users, but must also do so in
a secure manner. To deliver desktop services, many IT organizations have turned to VPN solutions to
provide secure connections to end users accessing virtual desktops from branch offices or fixed and
mobile teleworker locations. With solutions such as the Cisco ASA appliances and the Cisco
AnyConnect Secure Mobility Client, an end user is able to use an encrypted connection to the data center
where their virtual desktop resides.


The Cisco ASA 5500 Series Adaptive Security Appliance is a purpose-built platform that combines
best-in-class security and VPN services. The Cisco ASA 5500 Series enables customization for specific
deployment environments and options, with special product editions for secure remote access
(SSL/IPsec VPN), firewall, content security, and intrusion prevention.
Additional network security concerns and detailed information regarding the Cisco ASA appliances with
the Cisco AnyConnect Secure Mobility Client will be covered in the Cisco VXI Security chapter.

Optimization
Many protocols can be optimized across the network. Protocol optimization uses knowledge of the
protocol to accelerate user response time, and it can be achieved by understanding the intricacies of how
these protocols function. Traditional chatty applications designed for local area networks become a
bottleneck to user productivity when used across a WAN; protocol optimization can greatly improve
their performance.
Protocol optimization is possible because network appliances have the ability to understand the protocol
and terminate the user's requests. Protocol optimizers open a separate connection to the server. Some
network protocols used in a Cisco VXI system apply methods of encryption and compression. For a
protocol optimizer to be able to terminate and optimize a user's request, the network appliance will need
to understand the methods of encryption or compression. There could be a use case for removing the
underlying protocol's encryption and compression to take advantage of protocol optimization in the
network.
Cisco WAAS accelerates virtual desktop delivery to remote users, scales the number of users over the
WAN, lowers IT cost, and improves the interactive multimedia experience. Cisco WAAS makes it
possible to support the highest numbers of concurrent virtual desktop users over a WAN link. It does this
in part by enabling network administrators to optimize the amount of bandwidth consumed by desktop
virtualization traffic, and thereby more effectively manage WAN costs through a suite of Cisco WAAS
technologies, including:
• Optimization of desktop virtualization protocols, including Independent Computing Architecture
(ICA) and Remote Desktop Protocol (RDP). Optimization is achieved by mitigating latency and
reducing bandwidth, as well as through multimedia redirection (MMR) and USB redirection, which
further reduce the bandwidth required for interactive multimedia (for example, video) and for the
use of USB peripherals (for example, printing).
• Advanced compression using Data Redundancy Elimination (DRE) and Lempel-Ziv (LZ)
compression, greatly reducing or eliminating redundant packets that traverse the WAN. In this way,
administrators can conserve WAN bandwidth and reduce associated costs, while improving
application transaction performance and significantly reducing the time for repeated bulk transfers
of the same application.
• Transport Flow Optimization (TFO) improves throughput and reliability for clients and servers in
WAN environments, helping to ensure that maximum throughput is sustained in the event of packet
loss.
• Application-specific accelerators such as Common Internet File System (CIFS), network file server
(NFS), HTTP, Secure Sockets Layer (SSL), Messaging Application Programming Interface (MAPI),
and video Real Time Streaming Protocol (RTSP). Cisco WAAS enhances the performance and
accelerates the operation of a broad range of these chatty application protocols, thus improving
distributed user access for all of these application protocols over the WAN.
The Cisco WAAS suite of technologies can be employed transparently on any application using
TCP, as described in Table 6. Here are some things to know about the various technologies Cisco
WAAS employs.
• Transport Flow Optimization (TFO)


TFO addresses TCP performance limitations in high-latency, high-loss, and high-bandwidth
networks. TFO employs the following main optimizations:
– Selective acknowledgement (SACK) and extensions: Reduces the amount of data that must be
retransmitted when a loss is detected.
– Large initial windows: Reduces the amount of time each connection spends in slow-start mode
to enable more timely use of available bandwidth.
– Virtual window scaling of TCP windows: Enables end nodes to transmit and receive larger
amounts of data by increasing the amount of data that can be outstanding and unacknowledged
in the network at any given time.
– Advanced congestion avoidance: Reduces the performance effects on throughput when a loss is
detected by more intelligently managing the congestion window of each TCP connection. This
congestion avoidance mode also enables “fill-the-pipe” optimization, which enables
applications that are TCP-throughput-bound to make better use of available bandwidth capacity.
• Data Redundancy Elimination (DRE)
DRE is a bidirectional database of blocks of data seen within TCP byte streams. DRE inspects
incoming TCP traffic and identifies data patterns. Patterns are added to the DRE database, which
can then be used in the future as a compression history. Repeated patterns are replaced with very
small signatures that tell the distant device how to rebuild the original message. With DRE,
bandwidth consumption is reduced, as is latency associated with data transfer because fewer packets
need to be exchanged. DRE maintains full application and protocol coherency and correctness
because the original message rebuilt by the distant Cisco Wide Area Application Engine (WAE)
device is always verified for accuracy at multiple levels and is application-independent. Patterns that
have been learned from one application flow can be used when another flow is seen, even when using
a different application. DRE can provide from 2:1 to 100:1 compression, depending on the
application, data, and workload.
• Persistent Lempel-Ziv (LZ) compression
Cisco WAAS implements LZ compression with a connection-oriented compression history to
further reduce the amount of bandwidth consumed by a TCP connection. Persistent LZ compression,
which can be used independently or in conjunction with DRE, provides from 2:1 to 5:1 compression,
depending on the application used and data transmitted, in addition to any compression offered by
DRE.
Delivery of interactive multimedia, such as voice and video, is one of the biggest challenges in desktop
virtualization deployment. DV vendors today struggle to provide a good user experience for voice and
video applications within the display protocol. Display protocols are not optimized for interactive
multimedia delivery. Some DV vendors, such as Citrix and VMware, have started to introduce
technologies like High Definition User Experience (HDX) and multimedia redirection (MMR). HDX
technology provides network and performance optimizations to deliver the best user experience over any
network. HDX and MMR are optional features providing additional protocol optimization for interactive
multimedia delivery in the display protocol.
The software implementation of PC over IP (PCoIP) uses TCP and User Datagram Protocol (UDP) over
port 4172. The TCP port is used for session establishment and control, while the UDP port is used for
the display protocol. The display protocol is encrypted with 128-bit Advanced Encryption Standard
(AES); when used in its hardware implementation, it can use AES or Salsa20. Cisco WAAS does not
optimize PCoIP data packets, as they are natively encrypted and compressed.
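Although Cisco WAAS passes PCoIP through without optimization, the port usage described above can still be used to identify the display traffic, for example for QoS classification. The following IOS sketch is illustrative only; the ACL and class-map names are hypothetical:

ip access-list extended PCOIP-DISPLAY
 permit tcp any any eq 4172
 permit udp any any eq 4172
class-map match-any VXI-DISPLAY
 match access-group name PCOIP-DISPLAY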
Table 6 lists the traffic types covered in the Cisco VXI system and what protocol optimization features
Cisco WAAS can apply.


Table 6 Traffic Types

For each application and protocol, the table lists the transport and whether Cisco WAAS TFO, DRE, and LZ apply.

• Citrix ICA (basic encryption): TCP port 1494, and TCP port 2598 with session reliability. TFO: Yes. DRE: Yes. LZ: Yes.
• VMware View 4.x RDP: TCP port 3389. TFO: Yes. DRE: Yes. LZ: Yes.
• USB redirection: TCP port 32111. TFO: Yes. DRE: Yes. LZ: Yes.
• Print (CIFS): TCP port 445. TFO: Yes. DRE: Yes. LZ: Yes.
• Multimedia redirection (MMR, supported by the View client): TCP port 9427. TFO: Yes. DRE: Yes. LZ: Yes.

The following three protocols are most commonly used in the DV environment; each is developed by a
different DV vendor.
• PCoIP
• ICA
• RDP
These protocols are discussed in-depth in the Desktop Virtualization chapter.
Branch offices have printers, and most branch offices have print servers, often embedded print servers
from vendors such as HP. For a branch office that does not have a print server or embedded print server,
when a document has to be printed at the remote office, the rendered print job is spooled over the WAN
from the data center to the user's local printer. Cisco WAAS provides a holistic approach to supporting
a variety of print strategies, including the following:
• Centralized network printing through a print server can dramatically improve printing performance
and reduce WAN data by using print-specific optimizations.
• Cisco WAAS provides print servers locally to branch-office users by running Microsoft Windows
print services.
• Combined with USB redirection, Cisco WAAS offers optimization of the printing traffic redirected
to locally attached peripherals (such as USB-connected printers) at the branch client device by
reducing the bandwidth utilization and mitigating the WAN latency.

Network Interception
Since Cisco WAAS is transparent in the network, some form of network interception is required to
redirect relevant traffic types to the Cisco Wide Area Application Engine (WAE). Cisco WAAS supports
many methods of network interception. The following two methods are used in the Cisco VXI system:

Physical Inline Interception


The Cisco WAE is deployed physically between two network devices, most commonly between a router
and a switch in a branch office. This allows all traffic traversing the network toward the WAN, or
returning from the WAN, to physically pass through the WAE, thereby giving it the opportunity to
optimize.


Web Cache Communication Protocol Version 2


Cisco WAAS devices support Web Cache Communication Protocol Version 2 (WCCPv2). This
deployment uses Generic Routing Encapsulation (GRE); WCCP can be used with both Layer 2
redirection and GRE redirection. With WCCPv2, WAE devices are deployed as appliances (nodes on the
network, not physically inline). WCCPv2 provides scalability to 32 WAE devices in a service group,
load balancing among WAEs, and fail-through operation if all WAEs are unavailable, and it allows the
administrator to dynamically add or remove WAE devices in the cluster with little to no disruption.
WCCP services 61 and 62 direct the router to redirect traffic from the interface to the WCCP group. Both
services redirect traffic: service 61 uses the source IP address and service 62 uses the destination IP
address for load balancing traffic among multiple WAE devices. Redirection can be performed in the
ingress or egress direction using the corresponding keyword (in or out). Services 61 and 62 are both
needed to redirect bidirectional traffic flows. WCCP is an open standard, so other equipment that
implements the WCCP protocol can participate in the WCCP group. Configure WCCP services 61 and
62 globally in the router configuration:
ip wccp 61
ip wccp 62

Exclude the WAE subnet from interception since this configuration uses a single interface to intercept
incoming and outgoing packets. The interception exclusion is required because the router does not
differentiate between traffic from the Cisco WAE for the client or server. Traffic from the Cisco WAE
should not be redirected again by the router as this will create a loop. Use the following command to
exclude interception on a single interface:
ip wccp redirect exclude in

Configure WCCP interception with service 61 on the ingress interface and service 62 on the egress
interface. All ingress and egress packets on the interface are forwarded to the Cisco WAE for optimization.
ip wccp 61 redirect in
ip wccp 62 redirect out
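On the Cisco WAE side, the device registers with the router using a WCCP router list and the TCP promiscuous service pair. The following is a sketch for a WAAS 4.x device; the router address is hypothetical and the exact syntax can vary between WAAS releases:

wccp router-list 1 10.20.1.1
wccp tcp-promiscuous router-list-num 1
wccp version 2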

WCCPv2 and physical inline interception are the most common methods used. The following Cisco
WAAS sequence shows the handshake between the thin client and connection brokers using WCCP
interception (see also Figure 32). Virtual desktop users connect to the network via an endpoint. These
endpoints are a combination of zero, thin, or thick clients. Endpoints are covered in detail in the
Endpoints and Applications chapter.
1. The endpoint sends a TCP synchronize (SYN) packet to the virtual IP address configured on the
Cisco ACE Application Control Engine for the connection broker. The branch-office router
intercepts the packet and forwards it to the branch-office Cisco WAE.
2. The branch-office Cisco WAE applies a new TCP option (0x21) to the packet if the application is
identified for optimization. The branch-office Cisco WAE adds its device ID and application policy
to the new TCP option field. This option is examined and understood by other Cisco WAE devices
so they can both apply the same policies. If the requested data is in its cache, the branch-office Cisco
WAE returns the cached data to the thin client.
3. The WAN edge router intercepts the packet and forwards the packet to the data center Cisco WAE.
4. The WAN edge Cisco WAE inspects the packet examining the device ID and policy. The request is
then sent to the Cisco ACE Application Control Engine. Cisco ACE load balances the connection
on one of the connection brokers in the server farm. Since the Cisco ACE load balancer is configured
in one-armed mode, source Network Address Translation (NAT) is required so the response does not
bypass the Cisco ACE. (One-armed mode is described later in this chapter.)


5. The connection brokers respond with a SYN/ACK packet, which is sent back to the thin client. The
packet is then forwarded to the Cisco ACE and then back to the Cisco WAE in the data center. The
Cisco WAE marks the packet with TCP option 0x21. During the data transfer phase, the Cisco WAE
caches the data if the data is not in its cache.
6. The packet travels through the WAN and arrives at the branch-office router. Again WCCP running
on the branch router intercepts the packet and forwards it to the branch Cisco WAE. The branch
Cisco WAE is aware of the Cisco WAE in the data center because the SYN/ACK TCP option 0x21
contains an ID and application policy. Auto negotiation of the policy occurs as the branch Cisco
WAE compares its application-specific policy to that of its remote peer defined in the TCP option.
At this point, the data center and branch-office Cisco WAE devices have determined the application
optimizations to apply on this specific TCP flow. During the data transfer phase, the branch Cisco
WAE caches the data if the data is not in its cache.
7. The packet is forwarded to the branch-office router and then to the endpoint.

Figure 32 DV Traffic Flow

[Figure 32 shows the numbered handshake between a branch endpoint and the connection broker: traffic passes through the branch router and branch Cisco WAE, across the WAN to the WAN edge router and data center Cisco WAE, and then through the Cisco ACE to the connection broker, with the return path from the connection broker to the thin client reversing the same steps.]

Cisco WAAS Deployment in a Cisco VXI Network


Cisco WAAS requires a Central Manager to administer the Cisco WAAS solution from a single point.
The Central Manager provides a centralized mechanism for configuring Cisco WAAS features and for
reporting and monitoring. When the administrator applies configuration or policy changes to a Cisco
WAE device or a group of Cisco WAE devices, the Central Manager automatically propagates the
changes to each of the managed Cisco WAE devices.
Figure 33 shows the topology for the Cisco WAAS in a Cisco VXI network.


Figure 33 WAAS deployment in the network

[Figure 33 shows Cisco WAAS deployed at the branch (behind the branch edge router and branch switch, serving endpoints, a local printer, and a network printer) and at the data center WAN edge, with the Cisco WAAS Central Manager in the data center. The data center path runs through the core router, Nexus 7000 core, aggregation switches with ACE in a VSS chassis, Nexus 5000, UCS 6120 Fabric Interconnects, and a Cisco UCS 5108 chassis hosting the hypervisor, HVDs, and connection broker.]
Cisco WAAS Configuration Tasks


Each Cisco WAE device can be configured either as an application accelerator or as a Central Manager.
The best practice is to deploy a primary and a standby Central Manager; the Central Manager then
configures all other WAE devices on the network.
Table 7 provides the configuration information needed to configure Cisco WAAS on a Cisco VXI
network.

Table 7 Cisco WAAS Configuration Information

For each function, the table lists the traffic types, a description of the capability, and the relevant product documentation.

• Configure the Cisco WAAS Central Manager (HTTPS). Documentation: Introduction to the WAAS Central Manager GUI.
• Configure WCCP on the branch and data center routers to intercept interesting traffic (WCCP-supported IOS versions). Documentation: Configuring Traffic Interception.
• Configure the application accelerators for Citrix ICA (disable ICA encryption and compression). Documentation: Configuring Application Acceleration.
• Configure the application accelerators for VMware RDP (disable RDP encryption). Documentation: Configuring Application Acceleration.
• Configure the application accelerators for VMware PCoIP using TCP. Documentation: Configuring Application Acceleration.
• Configure the branch and data center Cisco WAE. Documentation: Configuring Network Settings.
• Optimize printing using Cisco WAAS (CIFS). Documentation: Configuring and Managing WAAS Print Services.

Availability
Availability is an important characteristic of any system. The definition of availability can be based on
an IT organization's business objectives and measurements. For example, are you interested in
increasing the network's availability for users? Are some users more important than others? These are
questions you need to consider before rolling out a desktop virtualization infrastructure. The different
types of availability include availability of devices, media interfaces, and paths, and availability of the
network to users and applications. All can be measured in terms of performance, for example, latency
and network drops.
Desktop virtualization relies on always-on network connectivity to the data center. Over the last decade,
availability has become a critical part of every network design. Cisco provides high-availability network
design guidance focused on specific areas of the enterprise network, such as the data center, campus,
branch/WAN, and Internet edge. Hierarchical network design is a common theme for designing the
network for high availability. Refer to the links provided for high-availability network design guidance
when building out a Cisco VXI system-enabled network.
To further address the availability and scalability requirements of your DV environment, deploying a
load-balancing solution is recommended. The VMware View administration guide states that this helps
ensure that connections are distributed evenly across each available View Connection Server and that
failed or inaccessible servers are automatically excluded from the replicated group.
To provide traffic load balancing for the connection brokers, the Cisco ACE is deployed.
In one-arm mode, the Cisco ACE is configured with a single VLAN that handles both client requests and
server responses. Routed and bridged ACE deployments also work in this design. For one-arm mode,
client-source NAT or policy-based routing (PBR) needs to be configured. The Cisco ACE then uses NAT
to send the requests to the real servers. Server responses return through the Cisco ACE rather than
directly to the original clients. This topology is convenient, since the Cisco ACE can be almost anywhere
on the network, but its reliance on NAT makes it impractical in some situations. However, if you are
using routed or bridged mode, confirm that there is sufficient bandwidth for all of the users' display
connections.
In a one-armed topology, the Cisco ACE is connected to the real servers (the connection brokers) via an
independent router, and acts as neither a switch nor a router for the real servers. Clients send requests to
the virtual IP (VIP) on the Cisco ACE.
A minimum of two Cisco ACE devices is required. Either the Cisco ACE Application Control
Engine Module or the Cisco ACE 4710 Application Control Engine Appliance can be used. When deploying
the Cisco ACE Module, we recommend that you use technologies such as the Cisco Catalyst® 6500
Virtual Switching System (VSS) 1440 with its Multichassis EtherChannel. This can help improve the
resiliency of the network design by delivering sub-second network convergence in a failure recovery.
Cisco ACE 4710 appliances are connected to the aggregation layer using a port channel and are enabled
for high availability. The Cisco ACE Module or 4710 Appliance load-balances virtual desktop requests
to the connection broker while providing session persistence. The main features implemented on the
Cisco ACE for the Cisco VXI system are:
• Load balancing of the connection broker
• Health monitoring of the connection broker
• Session persistence based on client IP address

ACE Deployment
The Cisco ACE Application Control Engine periodically checks the health of the connection broker.
Using the probe information, Cisco ACE determines if the connection broker can service the user’s
request with the best performance and availability. To monitor the connection broker, use the following
sample HTTP probe configuration on the Cisco ACE:
probe http ViewProbe
  interval 5
  faildetect 2
  passdetect interval 5
  passdetect count 2
  request method get url /admin/
  expect status 200 200
  open 1
  expect regex "View Administrator"
• The HTTP probe named ViewProbe checks the status of the connection broker every 5 seconds.
This example is used for VMware View 4.5. (A similar probe can be configured for Citrix
XenDesktop; a sketch follows this list.) If the Cisco ACE receives two consecutive failures from the
connection broker, it marks the server as failed, and new connections are load balanced to another
connection broker in the server farm. The Cisco ACE considers the connection broker failed if the
HTTP response code is anything other than 200. The Cisco ACE also verifies that the application is
up and running by applying a regular expression check to the content of the web page: it looks for
the words "View Administrator" in the HTML source code. This level of health monitoring provides
network, server, and application level availability.
• When a user needs to access a virtual desktop in the data center, the user connects to a virtual IP
address configured on the Cisco ACE. Then the Cisco ACE forwards the request from the user to
the connection broker. Cisco ACE supports several session persistence mechanisms between the
client and the connection broker so that a particular client session is always directed to the same
server. The Cisco ACE is configured to perform session persistence based on the client IP address.
If endpoints connect through proxy servers, IP-based persistence can be inconsistent; in that case,
it is recommended to use JSESSIONID cookie persistence. The Cisco ACE will need to terminate
the SSL session to view the cookie.
• When deploying a Cisco ACE with connection brokers, it is best practice to use either tunneled or
direct mode or a combination of the two methods.
• Direct mode is when an endpoint establishes a connection to the virtual desktop instead of traversing
the connection broker for all display protocol activity. Direct mode has many advantages, including
significantly less load (CPU, memory, and network) on the connection broker. The connection
broker handles only the initial connection phase from the endpoint, which is a low-resource
operation. Subsequent data passes directly between the endpoint and the agent
running on the virtual desktop. Direct mode has some disadvantages as well. Without
comprehensive security policies, it is possible for an endpoint to connect directly to the virtual
desktop, bypassing the connection broker.


• Tunneled (proxy) mode is when an endpoint establishes a connection to the connection broker for
all phases of communication, including the remote display data. Tunneled mode offers tighter access
control and policy enforcement, but it places significantly more load (CPU, memory, and network)
on the connection broker.
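Although not validated in this design, a similar health probe can be sketched for a Citrix XenDesktop connection broker fronted by a Web Interface site. In the following minimal sketch, the probe name, URL, and regular expression are illustrative placeholders; the URL must match the actual Web Interface login page in your deployment, and the regular expression should match a string that appears only when the site is healthy.

probe http XenDesktopProbe
  interval 5
  faildetect 2
  passdetect interval 5
  passdetect count 2
  request method get url /Citrix/DesktopWeb/auth/login.aspx
  expect status 200 200
  expect regex "Citrix"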
Figure 34 shows the deployment of the Cisco ACE 4710 appliances using direct mode. The Cisco ACE
4710 Application Control Engine Appliance provides load balancing only for the connection broker.
Once the connection broker has assigned the user a virtual desktop, the display protocol bypasses the
Cisco ACE 4710 appliance. The Cisco ACE is configured in one-arm mode, which is the preferred
deployment in a Cisco VXI system-enabled network. Because the display protocol runs between the end
user and the virtual desktop, the Cisco ACE does not need to be in the middle of the transaction; this
saves resources on the Cisco ACE for load balancing incoming end-user requests for an available desktop
from the connection broker. Because the Cisco ACE is deployed in one-arm mode, you must configure
source network address translation (SNAT).
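A minimal one-arm configuration sketch for the VMware DV virtual context follows, assuming two View Connection Servers at 10.10.10.11 and 10.10.10.12, a VIP of 10.10.10.100 listening on TCP 443, and VLAN 100 as the one-arm interface. All names, addresses, and VLAN numbers are illustrative, and management access, SSL termination, and redundancy configuration are omitted.

! Permit traffic into the one-arm interface
access-list ALL line 10 extended permit ip any any

! Real servers (connection brokers) and the server farm they belong to
rserver host VIEW-CB-1
  ip address 10.10.10.11
  inservice
rserver host VIEW-CB-2
  ip address 10.10.10.12
  inservice
serverfarm host VIEW-FARM
  probe ViewProbe
  rserver VIEW-CB-1
    inservice
  rserver VIEW-CB-2
    inservice

! Layer 4 class map matching the virtual IP address for HTTPS
class-map match-all VIEW-VIP
  2 match virtual-address 10.10.10.100 tcp eq 443

! Load-balancing policy, applied together with SNAT on the one-arm VLAN
policy-map type loadbalance first-match VIEW-LB
  class class-default
    serverfarm VIEW-FARM
policy-map multi-match DV-POLICY
  class VIEW-VIP
    loadbalance vip inservice
    loadbalance policy VIEW-LB
    loadbalance vip icmp-reply
    nat dynamic 1 vlan 100

interface vlan 100
  ip address 10.10.10.5 255.255.255.0
  access-group input ALL
  nat-pool 1 10.10.10.100 10.10.10.100 netmask 255.255.255.255 pat
  service-policy input DV-POLICY
  no shutdown

Because the NAT pool reuses the VIP address, server responses return through the Cisco ACE rather than bypassing it on the way back to the end user.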

Figure 34 ACE Deployment in the Network

[Figure content: the DV endpoint reaches the connection broker over HTTPS through the Cisco ACE, while the display connection flows directly to the virtual desktops hosted on the hypervisors.]

ACE Configuration Tasks


Table 8 provides the detailed configurations for enhancing the availability of the DV servers. You will
notice that the table refers to contexts. The virtualized device environment provided by the Cisco ACE
is divided into objects called contexts. Each context behaves like an independent Cisco ACE load
balancer with its own policies, interfaces, domains, server farms, real servers, and administrators. Each
context also has its own management interface. When the Cisco ACE is deployed, the Admin context is
used for managing and provisioning the other virtual contexts. It is best practice to create a dedicated
virtual context for VMware DV load balancing.


Table 8 DV Server Configurations

Function | Traffic Types | Description/Capability | Product Documentation
Admin context configuration | Allow only Telnet, SSH, SNMP, HTTP, HTTPS, or ICMP | The Admin context is used to configure physical interfaces, management access, resource management, and high availability | Configuring Virtual Contexts
One-armed mode configuration | Recommend using one-armed mode for load balancing the connection broker | Source NAT is required | Configuring One-Arm Mode
Configuring the virtual context for VMware DV | VMware RDP and PCoIP | Load balancing the HTTPS connection; create a Layer 4 class map | Configuring Virtual Server Properties
Configuring the virtual context for Citrix DV | Citrix ICA | Load balancing the ICA connection; create a Layer 4 class map | Configuring Virtual Server Properties
Configuring session persistence | (not applicable) | Session persistence based on the source IP address | IP Address Stickiness
Configuring health monitoring | HTTP | Probe using a regex check inside the HTTP probe: expect regex "View Administrator" | Configuring Health Monitoring for Real Servers
Configuring the load-balancing algorithm | (not applicable) | Use the least-loaded algorithm (least number of connections on a server): predictor leastconns | Load-Balancing Predictors
Configuration of source NAT | SNAT is needed so the connection does not bypass the ACE on the response back to the end user | Use the virtual address as the NAT address | ACE NAT Configuration
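As a minimal illustration of the session-persistence and load-balancing algorithm rows in Table 8, the sketch below (reusing the illustrative names from the earlier one-arm example) creates a source-IP sticky group tied to the server farm and selects the least-connections predictor. The 60-minute sticky timeout is an assumption and should be tuned to your environment; sticky replication is effective only when ACE redundancy is configured.

serverfarm host VIEW-FARM
  predictor leastconns

sticky ip-netmask 255.255.255.255 address source VIEW-STICKY
  timeout 60
  replicate sticky
  serverfarm VIEW-FARM

! Reference the sticky group instead of the bare server farm in the LB policy
policy-map type loadbalance first-match VIEW-LB
  class class-default
    sticky-serverfarm VIEW-STICKY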

Campus
The campus network connects end users and devices in the corporate network with the data center, WAN,
and Internet. In addition to the high-speed connectivity service, the campus network, with its direct
interaction with end users and devices, provides a rich set of services, such as Power over Ethernet (PoE),
secure access control, and traffic monitoring and management.
When planning a campus network, it is important to consider the trends in the workspace environment
and design the infrastructure with performance, scalability, and services such as interactive multimedia,
collaboration and DV in mind. It is recommended to follow the Cisco Enterprise Campus 3.0
Architecture when designing a campus network. The Enterprise Campus 3.0 Architecture provides an
overview of the campus network architecture and includes descriptions of various design considerations,
topologies, technologies, configuration design guidelines, and other considerations relevant to the
design of a highly available, full-service campus switching fabric. It is also intended to serve as a guide
directing readers to more specific campus design best practices and configuration examples for each of
the specific design options.


A Cisco VXI system-enabled campus benefits from the security, optimization and availability network
functions described earlier and shown in Figure 35.

Figure 35 Cisco VXI Enabled Network

[Figure content: campus endpoints, branch-office endpoints, mobile teleworker endpoints, and fixed teleworker endpoints carry the display protocol over the network to the data center, passing through the security, optimization, and availability network functions.]

Although the display protocol may be transparent to the campus, the Cisco Catalyst® family of switches
provides security and availability functions in a Cisco VXI system-enabled network. This functionality
enables some DV traffic types to be characterized and secured based upon login characteristics. For
example, contractors may use their own laptops, with the contractor's virtual desktop accessed using a
thin client. When the contractor connects to the network, he or she logs in using the laptop's username
and password and is authenticated using 802.1X. The Catalyst switch can map that user into a discrete
contractor VPN with ACLs that allow Internet access for VPN traffic and connectivity to only the
contractor desktops. The Cisco Catalyst 4500 Series Switches can provide this 802.1X functionality.
Users that need additional security can also connect to the data center through Cisco ASA appliances
using the Cisco AnyConnect Secure Mobility Client, which can be installed on user laptops and provides
the end user with transport-level security across the network.
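A minimal 802.1X configuration sketch for a Catalyst access switch follows, assuming a RADIUS server at 10.1.1.20; the server address, shared secret, and interface are illustrative, and the actual VLAN or ACL assignment for the contractor is returned dynamically by the RADIUS server rather than configured on the port.

! Enable AAA and point 802.1X authentication and authorization at RADIUS
aaa new-model
aaa authentication dot1x default group radius
aaa authorization network default group radius
radius-server host 10.1.1.20 auth-port 1812 acct-port 1813 key RADIUS-KEY

! Globally enable 802.1X on the switch
dot1x system-auth-control

! Access port used by the contractor laptop or thin client
interface GigabitEthernet2/1
 switchport mode access
 dot1x pae authenticator
 dot1x port-control auto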

Branch
As stated in the Desktop Virtualization chapter, traditional chatty applications, designed for local area
networks, become a bottleneck to user productivity when accessed across a WAN. This creates
challenges in extending desktop and application virtualization to the branch office. The primary
challenge with the delivery of hosted virtual desktops to branch offices is ensuring levels of performance
over the WAN that meet end users' experience expectations. When delivering hosted virtual desktops
over the WAN, the end user has to cope with the following characteristics:


• Limited WAN bandwidth - Bandwidth connecting branch offices to the data center is still
expensive. The increase in end users and applications in the branch office will continue to drive
bandwidth usage and requirements. Most end users in the branch office use the same applications
throughout the day, generating large amounts of repetitive data sent between the branch office
and the data center. Such redundancy is a waste of limited WAN bandwidth. A Cisco VXI enabled
network provides data redundancy elimination and LZ compression to help ensure that redundant
data does not consume precious WAN bandwidth, and that all data transferred is compressed to
minimize bandwidth usage.
• Latency - Between the branch office and the data center, propagation delay is the primary cause of
latency across the WAN. Latency affects all TCP-based data transfers across the WAN; however,
end users of chatty protocols are impacted the most. For applications (such as backup software) that
use the CIFS protocol, additional protocol-specific acceleration mechanisms such as read-ahead,
pipelining, multiplexing, and prediction are employed to improve throughput and overcome WAN
latency.
• Packet loss - When packets are dropped due to congestion in the WAN, TCP reduces the window
size and retransmits the packet. This causes a reduction in bandwidth efficiency and an increase in
application response time. Packet loss also affects TCP-based data transfers across the WAN; in a
DV environment, the end user's experience can be detrimentally impacted by packet loss across the
WAN. In a Cisco VXI enabled network, Cisco WAAS can be configured to provide Transport Flow
Optimization (TFO) to help applications make better use of available WAN capacity and overcome
performance challenges associated with packet loss.
In a branch office setup, the Cisco WAE appliance is connected to the local router, typically a Cisco
Integrated Services Router. The branch-office Cisco WAAS deployment, together with the data center
Cisco WAAS deployment, offers a WAN optimization service through the use of intelligent caching,
compression, and protocol optimization. When end users access the virtual desktops through the
connection broker, Cisco WAAS compresses the response and then efficiently passes it across the WAN
with minimal bandwidth use and high speed. Commonly used information is cached by the Cisco WAAS
devices in both the branch office and the data center, which significantly reduces the burden on the
servers and the WAN.
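A minimal WCCP interception sketch for the branch is shown below, assuming the branch ISR's LAN and WAN interfaces and a branch WAE registering with the router at 10.20.1.1; interface names and addresses are illustrative, and the WAE commands use the WAAS 4.x syntax.

! Branch ISR: enable the WCCP TCP-promiscuous services
ip wccp 61
ip wccp 62
interface GigabitEthernet0/0
 description LAN toward branch users
 ip wccp 61 redirect in
interface Serial0/0/0
 description WAN toward the data center
 ip wccp 62 redirect in

! Branch WAE: register with the branch router for WCCP interception
wccp router-list 1 10.20.1.1
wccp tcp-promiscuous router-list-num 1
wccp version 2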

Teleworker
According to the Washington Post, an epic snowstorm struck Washington, D.C., on February 10, 2010.
This storm virtually shut down the U.S. capital for four days. The U.S. government lost nearly $70
million a day due to lost productivity. While this number is staggering, it could have been as large as
$100 million per day. Fortunately, 30% of government workers were able to telecommute during the
storm, working from locations where Internet access was available: a combination of home offices,
coffee shops, and book stores.
In a Cisco VXI enabled network, teleworkers fall into two categories: fixed teleworkers and mobile
teleworkers, as shown in Figure 35. Both may access the same virtual desktop; the only difference
between a fixed and a mobile teleworker is the location from which the remote user connects.

Fixed Teleworker
A fixed teleworker is a user taking advantage of a solution such as Cisco Virtual Office (CVO). The Cisco
Virtual Office solution provides secure, rich network services to workers at locations outside of the
traditional corporate office, including full- and part-time home-office workers, mobile contractors, and


executives. By providing extensible network services that include data, voice, video, and applications,
the Cisco Virtual Office effectively creates a comprehensive office environment for employees. The
Cisco Virtual Office solution consists of the following components:
• A Cisco 800 series Integrated Services Router (ISR) and a Cisco Unified IP Phone.
• A data center presence that includes a VPN router and centralized management software for policy,
configuration and identity controls.
• Deployment and ongoing services from Cisco and approved partners for successful deployment and
integration as well as consultative guidance for automating the deployment.
A Cisco VXI enabled network provides additional benefits to the fixed teleworker using the CVO
solution. These enhancements include a DV endpoint placed behind the Cisco 800 ISR. The end user
can now access their virtual desktop through a VPN tunnel established between the Cisco 800 ISR and
the corporate edge router. The user's virtual desktop can have Cisco Unified Personal Communicator
(CUPC) installed; from the Cisco Unified Personal Communicator application the user can control the
IP phone on their desk. This is covered in detail in the Endpoints and Applications chapter of this CVD.

Mobile Teleworker
A mobile teleworker is someone who connects to their virtual desktop securely from any endpoint
running the Cisco AnyConnect Secure Mobility client. Mobile teleworkers are typically in unsecure
network locations. One of the advantages of Cisco VXI is that the data remains secure even if the
endpoint is stolen, damaged, or lost. Figure 36 shows the mobile teleworker connecting to a hosted
virtual desktop from an unsecure location using a secure connection.

Figure 36 Thick Client Using Cisco AnyConnect

[Figure content: the mobile teleworker endpoint builds a VPN tunnel across the WAN to the Cisco ASA; behind the ASA sit the connection broker and the virtual desktops running on the hypervisor.]
1. The mobile teleworker establishes an SSL VPN connection with the ASA VPN Gateway using the
AnyConnect Client.
2. The mobile teleworker then uses the desktop virtualization client to communicate across the secure
tunnel and authenticate with the Connection Broker in the corporate data center.
3. After successful authentication, the Connection Broker displays the virtual desktops available to the
mobile teleworker.
4. The mobile teleworker then selects and connects to the desired virtual desktop to establish the virtual
desktop session. The entire session is now secured by the VPN Tunnel.
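A minimal Cisco ASA configuration sketch for the AnyConnect SSL VPN termination in step 1 follows, using the svc keywords of the ASA 8.x software current at the time of this design. The address pool, group names, and client package filename are illustrative, and authentication, split tunneling, and DNS settings are omitted.

! Address pool handed to AnyConnect teleworkers
ip local pool TELEWORKER-POOL 10.50.0.10-10.50.0.250 mask 255.255.255.0

! Enable SSL VPN and the AnyConnect client package on the outside interface
webvpn
 enable outside
 svc image disk0:/anyconnect-win.pkg 1
 svc enable

group-policy DV-TELEWORKER internal
group-policy DV-TELEWORKER attributes
 vpn-tunnel-protocol svc
 split-tunnel-policy tunnelall

tunnel-group DV-RA type remote-access
tunnel-group DV-RA general-attributes
 address-pool TELEWORKER-POOL
 default-group-policy DV-TELEWORKER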


For More Information


The following web links offer detailed information regarding the specific vendor hardware and software
discussed in this chapter.
VMware View 4.x
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId
=1012382#View4.x
Introduction to the WAAS Central Manager GUI
http://www.cisco.com/en/US/docs/app_ntwk_services/waas/waas/v421/configuration/guide/intro.html
#wp1122501
Configuring Traffic Interception
http://www.cisco.com/en/US/docs/app_ntwk_services/waas/waas/v4013/configuration/guide/traffic.ht
ml
Configuring Application Acceleration
http://www.cisco.com/en/US/docs/app_ntwk_services/waas/waas/v4013/configuration/guide/policy.ht
ml
Configuring Network Settings
http://www.cisco.com/en/US/docs/app_ntwk_services/waas/waas/v4013/configuration/guide/network.
html
Configuring and Managing WAAS Print Services
http://www.cisco.com/en/US/docs/app_ntwk_services/waas/waas/v4013/configuration/guide/printsvr.h
tml
Data Center
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/DC_3_0/DC-3_0_IPInfra.html
Campus
http://www.cisco.com/en/US/docs/solutions/Enterprise/Campus/HA_campus_DG/hacampusdg.html
Branch/WAN
http://www.ciscosystems.com/en/US/docs/solutions/Enterprise/Branch/srlgbrnt.pdf
Physical interfaces
http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA1_
7_/configuration/device_manager/guide/UG_ntwk.html
Management access
http://www.cisco.com/en/US/docs/interfaces_modules/services_modules/ace/v3.00_A1/configuration/
administration/guide/access.html


Resource Management -
http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA1_
7_/configuration/device_manager/guide/UG_confg.html#wp2700097
High availability -
http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA1_
7_/configuration/device_manager/guide/UG_ha.html
Configuring Virtual Contexts -
http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA1_
7_/configuration/device_manager/guide/UG_confg.html
One-armed mode configuration -
http://www.cisco.com/en/US/docs/interfaces_modules/services_modules/ace/vA2_3_0/configuration/g
etting/started/guide/one_arm.pdf
Configuring the Virtual Context for Citrix DV -
http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA1_
7_/configuration/device_manager/guide/UG_lb.html#wp1044682
Configuring Session Persistence -
http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA1_
7_/configuration/device_manager/guide/UG_lb.html#wp1062118
Configuring Health Monitoring -
http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA1_
7_/configuration/device_manager/guide/UG_lb.html#wp1045366
Configuring the Load-Balancing Algorithm -
http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA1_
7_/configuration/device_manager/guide/UG_lb.html#wp1045366
Configuration of Source NAT -
http://docwiki.cisco.com/wiki/Basic_Load_Balancing_Using_One_Arm_Mode_with_Source_NAT_on
_the_Cisco_Application_Control_Engine_Configuration_Example
Washington Post opinion article: Snowstorm’s possible plus: Advancing causes of telework:
http://www.washingtonpost.com/wp-dyn/content/article/2010/02/10/AR2010021003715.html

Endpoints and Applications


The Cisco VXI system supports a wide variety of endpoints offering virtual desktop and collaboration
services. The endpoints for desktop virtualization (DV) support user desktop interaction through
keyboard, mouse, and touch screen input devices. They also offer some USB-based print and storage
capabilities. Unified communications endpoints support voice and video communications. Cisco’s
Unified Personal Communicator provides the functional integration between the virtual desktop and the
unified communications endpoint by allowing user control of a desk phone through a software
application deployed on the user’s Hosted Virtual Desktop (HVD).
Figure 37 provides a general endpoint data flow diagram for a virtual desktop environment.


Figure 37 General Endpoint Data Flow Diagram

[Figure content: at the end-user locations, the DV endpoint, locally attached printer, network printer, and Cisco Unified Communications endpoint connect across the network (optimized by Cisco WAAS) to the data center, which hosts the user's virtual desktop running Cisco Unified Personal Communicator (CUPC), Cisco Unified Communications Manager, the Cisco Unified Presence server, and the directory service. The flows shown are network print traffic, the desktop display protocol, and telephony signaling.]

DV Endpoints
Selection of the user’s endpoint is typically driven by application requirements and work-related tasks.
DV endpoint devices are generally desktop replacements and come in one of three types: zero clients,
thin clients, and thick clients. Each type offers different features, has different data flows, and places
different loads on the network.

Zero Clients
Zero clients are the simplest devices. They have embedded operating systems that are not exposed to the
user. Zero clients have reduced local capabilities (reduced CPU, smaller memory footprint, and little to
no local storage) and depend almost exclusively on the resources available within the HVD. This type
of device is typically targeted toward the task worker since it provides no facility for installing
applications locally and limits support for media streaming to the capabilities of the underlying DV
protocol. Because there is no exposed OS, there is no risk of virus infection, making zero clients a very
secure endpoint. For zero clients, all traffic runs between the endpoint and the remote virtual desktop.
They generally rely on either Personal Computer over IP (PCoIP) or Independent Computing
Architecture (ICA) as a display protocol.
Network policies for this class of device are very simple, given the reliance on a single IP connection
serving all the functionality needs of the endpoint. A simple security access control list controlling the
network flows admissible to and from the device can be created between the endpoint’s IP address and
the connection broker (and associated virtual desktop pool).
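A minimal access-list sketch for a zero-client access VLAN follows, assuming zero clients in 10.30.10.0/24, a connection-broker VIP of 10.10.10.100, a virtual desktop pool in 10.10.20.0/24, and PCoIP as the display protocol (TCP and UDP 4172). All addresses are illustrative; ICA-based zero clients would instead need TCP 1494 and 2598 permitted toward the desktop pool.

ip access-list extended ZERO-CLIENT-DV
 remark Broker authentication over HTTPS
 permit tcp 10.30.10.0 0.0.0.255 host 10.10.10.100 eq 443
 remark PCoIP display traffic to the virtual desktop pool
 permit tcp 10.30.10.0 0.0.0.255 10.10.20.0 0.0.0.255 eq 4172
 permit udp 10.30.10.0 0.0.0.255 10.10.20.0 0.0.0.255 eq 4172
 remark Basic network services for the endpoint
 permit udp 10.30.10.0 0.0.0.255 any eq bootps
 permit udp 10.30.10.0 0.0.0.255 any eq domain
 deny ip any any log

interface Vlan30
 description Zero-client access VLAN
 ip access-group ZERO-CLIENT-DV in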


Thin Clients
Thin client devices usually contain more local capabilities than zero clients and often have a
customizable locally embedded operating system (usually Linux or Windows). This class of endpoint
provides greater flexibility. They are generally customized by the administrator and then locked down.
The process of locking down the thin client minimizes the risk of virus infection, and it prevents the user
from making local changes to the endpoint configuration.
Thin clients are typically used by power users who need access not only to browsers, email clients, and
office automation tools, but also to additional features such as streaming audio and video.
Because thin clients may have local applications, the traffic patterns vary depending on what is installed.
This generally makes the network security policies more elaborate than for zero clients, and the policies
may depend on the user’s application selection and preferences. Thin clients may use Remote Desktop
Protocol (RDP), PCoIP, or ICA to communicate with their associated remote virtual desktop.
opens a local browser, the traffic will not be directed to the virtual desktop. In some cases, media
redirection allows for traffic to originate from and terminate at the DV endpoint without flowing through
the HVD.

Thick Clients
Thick client devices are standard PCs or laptops running a standard OS, with DV client software similar
to that of the thin client installed as an application. Thick client devices allow users to work
offline and are often the choice of the “road warrior” user.
The traffic patterns for thick clients may be almost the opposite of those for thin clients. The user may
choose to run most of the applications locally and use DV only to run secure applications hosted on the
virtual desktops stored in the data center.
There are many options for converting standard PCs and laptops into thick clients. Additional
information is available from the DV software vendors.

DV Endpoint Management
Typically, management software for DV endpoints is provided by the endpoint manufacturer, the DV
software vendor, or both. There are also several third-party managers available. See the Management
and Operations chapter for more information.

DV Endpoint Printing (Network Printing)


If a DV endpoint sends a document to a network printer local to itself, the request actually originates
from within the data center, as the desktop and print server are now located within the data center. The
data going to the network printer actually travels outside the desktop display protocol, and can therefore
be optimized with Cisco WAAS. Cisco WAAS can recognize the printing protocol, compress it to reduce
WAN bandwidth consumption, and send the resulting print file to the remote branch (see Figure 38).
Details of this print optimization and queuing can be found in the configuration guide for Cisco Wide
Area Application Services:
http://www.cisco.com/en/US/docs/app_ntwk_services/waas/waas/v421/configuration/guide/cnfgbook.
pdf


Figure 38 Network Printing Data Flow

[Figure content: the print request travels from the end-user location to the user's virtual desktop in the data center; the resulting print job returns across the WAN, optimized by Cisco WAAS, to the network printer at the end-user location (steps 1 and 2 below).]

1. The request to print a document at the local network printer travels within the desktop display
protocol.
2. From the DV desktop in the data center, the print job is sent to the print server, also within the data
center. It is then streamed to the printer at the end-user location.

Note This protocol stream can be recognized and optimized by Cisco WAAS.

DV Endpoint Access to USB (Storage or Printing)


If a DV endpoint wants to redirect information to its locally attached printer or another type of
USB-attached device (USB thumb-drive or external hard-drive for example), the access is controlled by
the desktop protocol. Most desktop protocols provide the ability for administrators to restrict data from
being copied outside of the DV. Since data is housed in the data center, this can be considered part of a
secure document control environment. Details on enabling or disabling the USB access can be found in
the Cisco VXI Security chapter.
When determining if USB access should be allowed within an enterprise network, one of the
considerations should be that this USB data is designed to be local device traffic. It has even been noted
during testing that the printer drivers should be installed on the local device rather than within the DV;
printing can be problematic when the drivers are installed in the DV or in both locations, as this is
expected to be a local machine function. Furthermore, the USB traffic is not optimized to traverse a
network and may consume large amounts of network resources. The data typically travels within the
desktop protocol, and is therefore difficult to gain visibility into and to optimize. This optimization issue
is understood by the desktop protocol vendors, and they have responded in various ways:
• VMware (PCoIP/RDP):
With PCoIP, printing is supported for USB-attached printers. RDP can support any locally attached
printer, however, redirection is available only for USB-attached devices. USB redirection is
implemented in both these protocols by placing the USB data flows in a separate channel from the
normal rendering traffic in the desktop protocol. This separation of the traffic is meant to allow some
methods of optimization. In PCoIP the USB redirection is via TCP port 32111. RDP redirects using
the RemoteFX enhancement.
• Citrix (ICA):
Because everything is embedded in the desktop protocol, Citrix allows you to tune how the protocol
consumes bandwidth. Increasing the performance of one protocol does impact the performance of
all the others. There are two ways to limit printing bandwidth in client sessions:


– Use the Printer policy rule to enable and disable the printing bandwidth session limit.
– Use individual server settings to limit printing bandwidth in the server farm. Perform this task
in the Access Management Console or the Delivery Services Console, or the tool known as
XenApp Advanced Configuration or the Presentation Server Console, depending on the version
of XenApp you have installed.
http://support.citrix.com/proddocs/index.jsp?topic=/xenapp5fp-w2k8/ps-printing-limiting-printing
-bandwidth-all.html

Figure 39 USB Printing Data Flow

[Figure content: both the print request and the resulting print job travel inside the desktop display protocol between the end-user location and the user's virtual desktop in the data center, terminating at the USB-attached printer (steps 1 and 2 below).]
1. The request to print a document on the locally attached (USB) printer travels within the desktop
display protocol.
2. From the DV desktop in the data center, the print job is sent to the local printer or USB device within
the same desktop protocol.

DV Tested Endpoints
Table 9 is a list of endpoints used during the testing to verify the Cisco VXI design. It is not intended to
be an exhaustive list of supported vendors or devices.

Table 9 Endpoints

DV Endpoint Type | Vendor/Device | Technology | Support Notes | Vendor Page
Zero | Wyse Xenith | Citrix HDX | Citrix XenDesktop | Xenith
Zero | Wyse P20 | HDW PCoIP | VMware View | P20
Zero | Devon IT TC10 | HDW PCoIP | VMware View | TC10
Thin | Wyse C10LE, R90LW, R90L7, X90LW | Citrix ICA, Microsoft RDP, SW PCoIP | VMware View 4.x, Citrix XenDesktop | C10LE, R90LW, R90L7, X90L
Thin | DevonIT TC5XW, TC5DW | Citrix ICA, Microsoft RDP, SW PCoIP | VMware View 4.x, Citrix XenDesktop | TC5
Thin | IGEL | Citrix ICA, Microsoft RDP, SW PCoIP | VMware View 4.x, Citrix XenDesktop | UD7


Unified Communications Endpoints


In a Cisco VXI system, support for Cisco Unified Communications is through the deployment of desktop
applications that utilize Cisco Client Services Framework and are thereby capable of controlling a
desktop phone. Cisco Unified Personal Communicator, WebEx Click-to-Call and Cisco Unified
Communications Integration for Microsoft Office Communicator are examples of such applications.
These applications can run within the DV desktop, and can be used to control the user's desktop hard
phone. Soft phone mode, where the call is placed from the desktop and the audio is also via the desktop,
is not yet supported for quality reasons.

Unified Communications Endpoints and Failover


In a standard enterprise deployment of Cisco Unified Communications Manager where desktop phones
might be across the WAN from the data center hosting the Cisco Unified Communications Manager,
Survivable Remote Site Telephony (SRST) is often configured. This feature on the local WAN router
helps to keep the Cisco Unified IP Phones viable in the event of site-wide data center connectivity loss.
Should the local phones lose communication with their Cisco Unified Communications Manager, they
can be set up to contact the local WAN router running SRST, and register with it, as if it were a Cisco
Unified Communications Manager, in order to keep telephony available within the site. The WAN
router also typically has telephone connectivity to a local service provider, in order to keep external calls
available. For more detailed information on Cisco Unified SRST please see the Cisco Unified SRST
Configuration Guide:
http://www.cisco.com/en/US/docs/voice_ip_comm/cusrst/admin/srst/configuration/guide/SRST_SysA
dmin.pdf
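A minimal SCCP SRST configuration sketch for the branch voice gateway follows; the source address, phone and directory-number counts, and message text are illustrative, and SIP phones would use the analogous voice register global (SIP SRST) configuration instead.

! Branch router: fallback call control for SCCP phones when the WAN is down
call-manager-fallback
 ip source-address 10.20.1.1 port 2000
 max-ephones 24
 max-dn 48
 system message primary SRST fallback active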

Applications
The installation, configuration, and security associated with any application installed on the DV user's
desktop should be subject to all the same corporate policies and procedures as if they were installed on
a traditional desktop. This includes the ability to load applications onto the desktop outside of the
corporate IT infrastructure. If corporate policy is that users cannot install applications on a traditional
desktop or laptop device, the local OS should be locked down on a thin client as well. Provided there are
no issues with the individual product’s ability to run in a DV environment, virus protection policies,
backup procedures for data, if still necessary, and personal security permissions within the enterprise
should all remain the same, even though the location of the data has moved to the data center.

Cisco Unified Personal Communicator


Cisco Unified Personal Communicator can be configured in two modes: Soft phone mode and hard
phone control mode. Soft phone mode requires no additional hardware, and is like having a phone within
the desktop image. Calls can be received, placed, and handled just like from a Cisco Unified IP Phone
even with the audio being sent and received by the desktop device itself. Hard phone control mode
requires a Cisco IP phone to be present, typically nearby, and allows the control of that phone via the
application on the desktop while still sending the audio of the call through the IP phone, rather than the
DV. The real issue in how this application should be deployed comes down to the audio stream of the
call. Since the audio stream in soft phone mode would attempt to go through the DV, and could not be
differentiated from other applications being transported within that desktop display protocol, it is
advised to run Cisco Unified Personal Communicator in hard phone control mode where the audio
stream remains separate, and can be handled within the network as it normally would.


How best to handle the transport of audio streams while in soft phone mode on a DV client is still under
investigation. There is no means to prevent a user from attempting to use soft-phone mode, but the
experience is known to have issues and is not encouraged until more work can be done to improve the
results.
Hard phone control mode within Cisco Unified Personal Communicator allows the user to initiate
telephony calls via the virtual desktop’s Graphical User Interface (GUI) while keeping the audio/video
stream directed to the dedicated desktop telephone. Although the desktop telephone is controllable from
the virtual desktop’s GUI, it is a stand-alone device, able to offer collaboration services independently
of the DV endpoint, and able to provide a media stream outside of the desktop display protocol. This
independence is what allows the media to be queued and prioritized appropriately for delay-sensitive data.
Likewise, the GUI allows the end user to be completely unaware of the communication taking place in
the data center on behalf of their request to look up and dial a colleague. LDAP, the Cisco Unified
Presence server, and Cisco Unified Communications Manager all exchange messages so that both parties
are able to have a phone conversation. A deployment of Cisco Unified Personal Communicator as
described above is shown in Figure 40, which illustrates the resulting protocol flows for DV traffic and
Cisco Unified Personal Communicator/VoIP traffic:

Figure 40 Data Flow for Placing a Cisco Unified Personal Communicator Call from DV

[Figure content: the DV endpoint and desk phone (extension 5500) at a local or remote end-user location, and a second desk phone (extension 5510) at another location, interact with the user's virtual desktop on Cisco UCS virtual compute resources, the directory service, and Cisco Unified Communications Manager in the data center. Flows shown: the display protocol (ICA, PCoIP, RDP), LDAP, CTI/QBE signaling, telephony signaling (SCCP/SIP), and the RTP audio stream flowing directly between the two phones. The numbered steps 1 through 9 correspond to the call flow described below.]

In Figure 40, the DV endpoint provides the user with a visual representation of their individual virtual
desktop environment, and supports user input via keyboard, mouse and/or touch screen input device. The
user’s desktop is configured with an instance of Cisco Unified Personal Communicator, offering instant
messaging, presence, directory lookup and telephony control over the desk phone. The desk phone offers
the voice/video media handling.


Note that all the user’s actions are relayed within the display protocol to the virtual desktop in the data
center. Likewise, all visual updates to the Cisco Unified Personal Communicator screen, like any other
screen update, are delivered to the user within the display protocol from the virtual desktop to the DV
endpoint’s screen.
As user input is processed by Cisco Unified Personal Communicator, which is on the DV desktop in the
data center, it communicates to the appropriate collaboration servers that are also within the data center,
and possibly within the same UCS enclave. Notice that since the desktop is now running on a platform
within the data center, this means that all communication controlling the call can remain within the data
center. The complete data flow for this call might look something like this:
1. The DV user types the name John Doe in the Cisco Unified Personal Communicator window.
Through the display protocol, the information is relayed to the user’s virtual desktop instance.
Cisco Unified Personal Communicator receives the input as though the user had a locally connected
keyboard to the guest OS.
2. Cisco Unified Personal Communicator then uses Lightweight Directory Access Protocol (LDAP) to
query the LDAP-compatible directory service for John Doe’s number.
3. The contact information query results are returned to Cisco Unified Personal Communicator.
4. After processing by Cisco Unified Personal Communicator, this information is relayed back to the
user’s display via the remote display protocol.
5. If the DV user initiates a call to a contact through the Cisco Unified Personal Communicator GUI
on their desktop, a similar display protocol update is transmitted to their DV instance in the data
center.
6. Computer Telephony Integration (CTI) control data is sent from Cisco Unified Personal
Communicator to Cisco Unified Communications Manager, requesting a call be placed from 5500
to 5510 (#6 in Figure 40).
7. Cisco Unified Communications Manager performs dial plan processing of the call request, resolves
the called number to a destination phone, and instructs the desk phone to place a call to the call
recipient. This is carried out through a call control exchange with both phones (#7 and 8 in
Figure 40).
8. Once established, the telephony call’s media is established between the two IP telephones directly
(#9 in Figure 40), without going through the data center’s virtual desktop. This aspect is key to
maintaining the quality of the user experience: the network’s Quality of Service (QoS) policies
protect the telephony media flow against drops, jitter and latency impairments that may otherwise
affect the flow if it were encapsulated within the display protocol. Note also that the IP telephony
endpoints are able to connect their media streams directly across the most direct available network
path.

Note The call is subject to Call Admission Control verification by Cisco Unified Communications
Manager prior to being placed. If the network does not have enough bandwidth to allow the call
to go through, the call may be processed through Cisco Unified Communications Manager’s
Automated Alternate Routing (AAR) feature, be allowed to proceed with no QoS guarantees, or
be blocked, as per Cisco Unified Communications Manager policy configuration.


Note The type of call placed (audio, video), as well as what codec is used to place the call, are
dependent upon Cisco Unified Communications Manager policy configuration and endpoint
capabilities. If both the calling and called endpoint support Video streaming, the call may be
placed as a video call, based upon Cisco Unified Communications Manager policy and CAC
verification. If endpoint capabilities, Cisco Unified Communications Manager policy
configuration, or network conditions only afford the placing of an audio call, codec selection will
be based on the Cisco Unified Communications Manager configuration.

Cisco Unified Personal Communicator, DV and SRST


Based on Figure 40 above, and assuming that the end-user location is across a WAN link from the
data center hosting both the DV and the Cisco Unified Communications Manager, if the WAN link to the
data center is lost, the virtual desktop running Cisco Unified Personal Communicator will continue to
have contact with Cisco Unified Communications Manager. However, the Cisco Unified Communications
Manager connectivity to the hard phone will be lost. If the end-user location is equipped with a voice
gateway capable of supplying SRST services, the phones can re-register with the gateway and remain
capable of placing and receiving calls. The data flows shown in Figure 41 illustrate this scenario.

Figure 41 SRST Signaling And Media Flow

[Figure content: the same topology as Figure 40, with an SRST-capable voice gateway at the end-user location. The display protocol, LDAP, and CTI/QBE flows to the data center are unchanged, while the desk phones register with the local SRST gateway and the RTP audio stream continues to flow directly between the phones.]


It is difficult to predict how the hard phone control configuration described above will respond when
network service to the data center is restored. The Cisco Unified Personal Communicator application has
never lost contact with Cisco Unified Communications Manager. With connectivity restored, the phone
can now resume its connection to Cisco Unified Communications Manager. The unpredictable behavior
arises from the timing of the end-user re-establishing connectivity with their desktop. If this is before
the phone re-connects with Cisco Unified Communications Manager, it is possible for the end-user to
attempt to control their hard desk phone before Cisco Unified Communications Manager is aware the
phone is present. The best way to avoid such issues is to verify that Cisco Unified Personal
Communicator can correctly reflect the status of the phone by going off-hook and confirming a status
change in Cisco Unified Personal Communicator. In some rare cases, this status may not re-sync, and
Cisco Unified Personal Communicator may need to be restarted within the DV in order to re-establish
proper status of the phone.

For More Information


Table 10 provides links to more detailed vendor information, including the hardware and software
discussed in this chapter.

Table 10 Endpoints and Applications Links

Cisco Wide Area Application Services (WAAS) Configuration Guide


http://www.cisco.com/en/US/docs/app_ntwk_services/waas/waas/v421/configuration/guide/cnfgbook
.pdf
Citrix Guide for Limiting Printing Bandwidth
http://support.citrix.com/proddocs/index.jsp?topic=/xenapp5fp-w2k8/ps-printing-limiting-printing-ba
ndwidth-all.html
Cisco Unified SRST Configuration Guide
http://www.cisco.com/en/US/docs/voice_ip_comm/cusrst/admin/srst/configuration/guide/SRST_Sys
Admin.pdf
Cisco Unified Communications 8.X SRND
http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/srnd/8x/uc8xsrnd.pdf
Cisco Unified Personal Communicator User's Guide
http://www.cisco.com/en/US/docs/voice_ip_comm/cupc/7_0/english/user/guide/windows/CUPC_7.0
_UG_win.pdf

Management and Operations


An end-to-end Cisco® Virtual Experience Infrastructure (VXI) deployment requires a comprehensive
management architecture that provides the capability to provision, monitor, and troubleshoot the service
for a large number of users on a continuous basis. This chapter provides best practices and guidelines
for managing a Cisco VXI system. It also describes the key tasks and tools associated with Cisco VXI
service delivery that pertain to daily operations (post-deployment). An understanding of the demands of
managing the system can serve as a basis for validating off-the-shelf management tools and for
developing such tools in house.


Managing a Cisco VXI system is challenging given the number of hardware and software components
in the end-to-end system (campus, branch office, and Internet; see Figure 42). These components
provide services (such as desktop virtualization, network, storage, compute, security, load-balancing,
WAN acceleration, and unified communications services) to local and remote end users and
administrators. Administrators are expected to manage different technologies and services, such as
network, storage, database, and desktop administration.

Figure 42 Tools Used to Manage the Cisco VXI System

[Figure content: management tools shown against the end-to-end topology. WAN, branch, and campus: Cisco WAAS Central Manager (HQ and branch WAEs), Wyse Device Manager, Cisco UM Suite, and CiscoWorks LMS. Data center network, security, and services: ASDM (Cisco ASA), ACE Device Manager (Cisco ACE appliance), Cisco NAM Admin GUI (NAM appliance), DCNM (Nexus 7000/5000), Fabric Manager (MDS 9222i), and UCS Manager (Cisco UCS and 6100 Fabric Interconnects). Desktop virtualization and storage: VMware vCenter/vSphere, Citrix XenCenter, View Administrator Console (VMware View Manager), Citrix Delivery Services Console (Citrix Desktop Delivery Controller), Windows 7 desktop virtual machines running on vSphere/XenServer with a virtual switch, NetApp Virtual Storage Console/StorageLink (NetApp storage), and EMC Navisphere/Unisphere (EMC storage).]


Cisco VXI Management and Operations Architecture


This section describes the main aspects of a management architecture for a Cisco VXI system.
Operations management, service management, service statistics management, and provisioning
management are the main management elements for Cisco VXI (Table 11). Other important areas to
consider include scalability, high availability, management traffic, desktop management, and endpoint
management.

Table 11 Cisco VXI Management Architecture

Key Aspects of a Cisco VXI Management Architecture


Operations management is the capability to monitor the status of every element in real time and provide
diagnostics. It includes the use of Simple Network Management Protocol (SNMP), syslog, and
XML-based monitoring as well as using HTTP-based interfaces to manage devices. It also includes
inventory and asset management (hardware and software) of endpoints and virtual desktops.
Service management is the capability to monitor and troubleshoot the status and quality of experience
(QoE) of user sessions. It includes the use of packet capture and monitoring tools such as Cisco
Network Analysis Module (NAM), NetFlow, and Wireshark to monitor a session. It also enables the
desktop virtualization administrator to remotely access the endpoint and virtual desktop to observe
performance and collect bandwidth and latency measurements. The capability to measure compute,
memory, storage, and network utilization in real time to identify bottlenecks or causes of service
degradation is also provided. Session detail records for a virtual desktop session can indicate
connection failures and quality problems.
Service statistics management is the capability to collect quality and resource usage measurements and
to generate reports useful for operations, infrastructure optimization, and capacity planning.
Measurements can include session volume, service availability, session quality, session detail records,
resource utilization, and capacity across the system. The reports can be used for billing purposes and
for management of service levels.
Provisioning management is the capability to provision end users, virtual desktops, and endpoints
using batch provisioning tools and templates. The APIs (XML) provided by the vendor can be used for
automation and self-service provisioning. It includes software image and application management on
endpoints and virtual desktops.

Scalability
As an alternative to provisioning and managing every device with a command-line interface (CLI), you
can use a management tool to scale management for a large number of endpoints through a centralized
control point. The use of polling, SNMP traps, and syslog messages is crucial for monitoring a
large-scale deployment. In addition, GUI-based tools are helpful for navigating complex deployments
and viewing detailed reports. Many device management tools also provide an API or CLI that can be
used to automate workflows. This feature is particularly useful when you are adding a large number of
users, desktops, or endpoints.

High Availability
For high availability, you should deploy management applications in redundant configurations (primary
and secondary servers) and back up configurations and databases periodically. You should also deploy
management application servers on virtual machines, to use resources efficiently and capitalize on
high-availability features provided by the hypervisor infrastructure, such as virtual machine migration


(VMware vMotion), VMware Distributed Resource Scheduling (DRS), and VMware Fault Tolerance
(FT). Although many applications will need to reside on a dedicated server, consider consolidating
applications and servers on the same virtual machine to conserve resources where possible. In general,
the ideal place to locate management servers is in the data center with other critical resources.

Management Traffic
As a general guideline, a separate IP network for management traffic is recommended. For example, a
dedicated IP subnet and VLAN for remote-access, SNMP, syslog, and FTP traffic could be managed out
of band, where practical. This approach helps ensure that end-user and administrative traffic flows do
not compete for, or interfere with, available bandwidth, and that remote access to a device is not
compromised when the device needs to be reset or provisioned. This approach also mitigates threats to
network security and availability that could be introduced when end-user and administrative traffic share
the same interface.
The isolation of management and user traffic is especially important in the data center, where
virtualization of the desktops concentrates user traffic in an infrastructure traditionally used for server
and management traffic loads.
The isolation of management traffic may not be practical in some parts of the system. For example, an
endpoint usually has a single Ethernet port, so it must use this port for both end-user and administrative
traffic. Also, it may not be practical to set up a separate out-of-band network dedicated to management
traffic across a WAN due to cost and network address conservation concerns. In these cases, keep in mind
that certain types of management traffic (SNMP polling) can consume substantial bandwidth and should
be scheduled appropriately, monitored, and possibly rate limited using standard quality-of-service (QoS)
techniques.
Each management tool uses a specific set of protocols to communicate with devices. Refer to the vendor
documentation for a complete list of protocols and ports used by each tool. Make sure that these ports
are open on all intermediary routers, switches, and firewalls.
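A minimal sketch of these guidelines on an IOS device follows, restricting SNMP and syslog to an assumed out-of-band management subnet (10.99.0.0/16) and policing management protocols where they must share a link with user traffic. The subnet, community string, host addresses, police rate, and interface are all illustrative.

! Allow SNMP access only from the management subnet
access-list 10 permit 10.99.0.0 0.0.255.255
snmp-server community VXI-RO ro 10
snmp-server host 10.99.1.50 version 2c VXI-RO
logging host 10.99.1.51

! Optionally police management protocols on a link shared with user traffic
ip access-list extended MGMT-PROTOCOLS
 permit udp any any eq snmp
 permit udp any any eq syslog
class-map match-any MGMT-TRAFFIC
 match access-group name MGMT-PROTOCOLS
policy-map MGMT-LIMIT
 class MGMT-TRAFFIC
  police 1000000 conform-action transmit exceed-action drop
interface GigabitEthernet0/1
 service-policy output MGMT-LIMIT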

Desktop Management
In a desktop virtualization environment, virtual desktops are organized in pools that are associated with
end users. The association of a user with a pool allows the user to access the desktops in that pool. A
dedicated pool consists of a single desktop with a single associated user, similar to a personal desktop.
A shared pool includes multiple desktops with multiple associated users. In a shared pool, the
assignment of desktops to users can be accomplished in several ways.
The user assignment can be performed statically through administrative configuration, or dynamically
using the connection manager. When the connection manager is used, assignments can be made in a
sticky pattern, so that the user is always assigned the same desktop, or in a random or floating manner,
in which the user may receive a different desktop for each session. The desktops in a pool can be
generated statically or dynamically (on demand), and they can be persistent (retained after the user logs
off) or nonpersistent (deleted after the user logs off). A desktop can be newly created, or it can be cloned
from a master desktop template.
From a desktop management perspective, you need to consider a few points when managing the Cisco
VXI system (Figure 43). Generally, virtual desktops run the same OS as physical desktops; thus, the
same component management strategy should be used for the OS, application updates, Microsoft
Windows user profiles, and network settings. Applications can be installed locally, on the virtual
desktop, or streamed from an application server. When managing persistent desktops (those retained
after use), you must update each virtual desktop individually. When managing nonpersistent desktops
(those that are deleted after use) that are generated on demand using a master template, you should
update only the master; subsequent desktops generated to service new user sessions will be cloned from
the updated master template.


Figure 43 Desktop Management

[Figure content: the endpoint runs a DV client on the endpoint OS and connects across the network to the virtual desktops; each virtual desktop consists of a guest OS, DV agent, applications, and user profile, and runs on the hypervisor.]
To understand Cisco VXI desktop management, consider the following scenarios:
• In one Cisco VXI deployment, a personal desktop is created once, statically, by the administrator
and assigned to a user for regular use. The user is allowed to customize settings and install software.
• In another Cisco VXI deployment, a shared desktop is created dynamically (on demand) and
assigned to a user who is a task worker. Even though the user is allowed to customize settings and
install software, the changes made by the task worker are not permanent since the desktops are
deleted after the user logs off. New users will be provided with new desktops, generated on demand.
There are advantages to managing virtual desktops. The task of creating a new desktop—for instance,
installing the operating system and applications and customizing user settings for a new user or a user
who requires an upgrade—is much faster when the desktop is cloned from a master virtual machine
template that already has the OS, application, and user settings configured. The linked-clone feature
further optimizes and speeds up the cloning operation.
Applying updates to dedicated desktops (persistent) should be accomplished in a manner similar to that
used for physical desktops. In this scenario, the administrator still maintains some control, but the user
is given more flexibility as to applications installed and user settings (personalization). The tasks of
maintaining software updates, downloading and installing new software, performing file backup, and
running antivirus software are largely the same for virtual desktops as for physical desktops.
Applying updates to shared desktops (nonpersistent) can be accomplished in a centralized manner
because only the master template needs to be modified. Now with Cisco VXI deployments, IT can
manage single instances of each OS, application, and user profile to greatly simplify desktop
management. The administrator has complete control over the image version and application revisions
on the desktops. Any changes and updates to master templates can be implemented by the administrator
and made available to new user sessions immediately. This process can be completed either by statically
cloning a virtual machine template or by using the VMware linked-clone or Citrix Provisioning Services
feature for dynamically generated desktops.
Although some applications, such as antivirus software, can be installed locally on the virtual desktop,
others, such as Microsoft Office, can be streamed to the virtual desktop from a central repository. When
updating applications that are streamed, you need to update only the master copy on the central
application server. When updating applications that are installed locally on the virtual desktop, you need
to update the desktop itself or its master template.


Since virtual desktops reside in the data center and may not be in close proximity to network printers,
the appropriate changes should be made to the applications that install and configure printers on
desktops. Desktop virtualization vendor solutions support connectivity to USB devices that are attached
to endpoints; thus, printing to a local printer connected through a USB port should be enabled for the
end-user session.

Endpoint Management
The OS, applications, and settings on the endpoint also need to be managed. When these endpoints run
an embedded version of Microsoft Windows, they can be managed in much the same way as a physical
desktop. Endpoint management tools can be used to automate and simplify the task of provisioning and
monitoring the desktop virtualization endpoints. Network-based services such as Dynamic Host
Configuration Protocol (DHCP) and file servers can also be used to provision and update endpoints.
Also, local software repositories or caching services (Cisco Wide Area Application Services) can be
used to distribute patches and updates to desktop virtualization endpoints. Refer to the section describing
desktop virtualization endpoint management for more information.

Cisco VXI Management Tool Summary


This section summarizes the Cisco VXI system components and the management tools that can be used
to provision and monitor each element. It also provides a brief description of each associated capability
and a link to additional documentation.

Data Center and Applications


The main data center components that need to be managed are the compute servers, hypervisor, virtual
machines, storage, switching fabric, connection managers, and servers that provide network services.
The management tasks include the provisioning of end users, virtual desktops, desktop pools,
hypervisor, and storage, as well as the monitoring of sessions and use of resources (compute, memory,
storage, and network). Table 12 summarizes the components and management tools.

Table 12 Data Center and Applications – Management Tools

Product | Management Tool | Description | Product Documentation Link
ESX/ESXi and virtual machines | VMware vCenter and vSphere Client | Use the VMware ESX and ESXi hypervisor manager to create and manage virtual machines. | VMware vSphere documentation
VMware View Manager 4.5 | VMware View Administrator Console | Create virtual desktop pools, grant user privileges, and monitor sessions. | VMware View documentation
Citrix XenDesktop Delivery Controller 4.0 | Citrix Delivery Services Console | Create virtual desktop pools, grant user privileges, and monitor sessions. | Citrix XenDesktop documentation
EMC Unified Storage | EMC Navisphere and Unisphere Management Suites | Provision and monitor the SAN-based storage array. | EMC Navisphere Management Suite and EMC Unisphere Management Suite documentation
NetApp FAS 3170 | NetApp Virtual Storage Console / StorageLink | Provision and monitor the network-attached storage (NAS) based storage array. | NetApp Virtual Storage Console and NetApp StorageLink documentation
Cisco UCS B-Series Blade Servers | Cisco UCS Manager | Provision and monitor the Cisco UCS B-Series Blade Servers. | Cisco UCS Manager documentation
Virtual desktops (guest OS) | Standard enterprise desktop and OS management tools (Altiris, Microsoft Systems Management Server [SMS], and Microsoft System Center Configuration Manager [SCCM]) | Provision and monitor the virtual desktops. | Altiris reference documentation
Microsoft Active Directory, Domain Name System (DNS), and DHCP | Standard enterprise management tools | Manage end-user profiles and perform authentication of user sessions; provide DHCP services to endpoints. | Microsoft Active Directory and network services documentation

Network Infrastructure
The main elements in the network infrastructure (LAN, WAN, and SAN) that need to be managed are
the switches, routers, WAN acceleration devices, load balancers, security gateways, and network
analyzers. These components span the data center, enterprise core, WAN, and branch offices and provide
both data and storage connectivity. The management tasks include provisioning and monitoring these
elements and monitoring a desktop virtualization session and reporting on network use (Table 13).

Table 13 Network Infrastructure – Management Tools

Product | Management Tool | Description | Product Documentation Link
Cisco Network Analysis Module | Cisco NAM Admin GUI | Provision and monitor the Cisco NAM. | Cisco NAM documentation
Cisco Application Control Engine | Cisco ACE Device Manager | Provision and monitor the Cisco ACE. | Cisco ACE documentation
Cisco Wide Area Application Services | Cisco WAAS Central Manager | Provision and monitor Cisco WAAS appliances; generate aggregate reports on optimization. | Cisco WAAS documentation
Cisco Adaptive Security Appliance | Cisco Adaptive Security Device Manager | Provision and monitor the Cisco ASA appliance. | Cisco ASA documentation
Cisco MDS 9000 | Cisco Fabric Manager | Provision and monitor the Cisco MDS 9000 Family SAN switches. | Cisco Fabric Manager documentation
Cisco Nexus 7000/Nexus 5000 | Cisco Data Center Network Manager | Provision and monitor the Nexus 7000, Nexus 5000, and MDS switches. | Cisco DCNM documentation
Cisco Nexus 1000V | Cisco NX-OS CLI, Cisco DCNM, and vCenter | Provision and monitor the Cisco Nexus 1000V. | Cisco Nexus 1000V documentation
Cisco Catalyst 6500, Catalyst 3560, and ISR 3900/2900 | CiscoWorks LAN Management Solution | Provision and monitor Cisco routers and switches. | CiscoWorks documentation

Desktop Virtualization Endpoints


The desktop virtualization endpoints that need to be managed are zero clients, thin clients, virtual clients,
and web-based clients. The management of endpoints includes tasks to provision and monitor endpoints,
update images, perform asset management, measure the quality of the user experience, and remotely
access the endpoints. Use of enterprise desktop management tools such as Altiris, Wyse Device
Manager, Devon Thin Client Management, and IGEL Universal Management Suite is recommended to
manage the desktop virtualization endpoint OS (Table 14).

Table 14 DV Endpoints – Management Tools

Product | Management Tool | Description | Product Documentation Link
Wyse DV endpoints | Wyse Device Manager 4.8 | Image, configuration, and asset management and shadowing of DV endpoints | Wyse Device Manager documentation
Devon DV endpoints | Thin Client Management 3.0 | Image, configuration, and asset management and shadowing of DV endpoints | Devon IT documentation
IGEL DV endpoints | Universal Management Suite 3 | Image, configuration, and asset management and shadowing of DV endpoints | IGEL documentation
Windows-based endpoints (thick clients) | Standard enterprise desktop and OS management tools (Altiris, SMS, SCCM) | Image, configuration, and asset management of DV endpoints | Altiris reference documentation

Unified Communications
The main unified communications elements that need to be managed are Cisco Unified IP Phones, Cisco
Unified Personal Communicator clients, and Cisco Unified Communications servers (Cisco Unified
Communications Manager, Cisco Unified Presence, and Cisco Unity® devices). The management of UC
elements includes tasks to provision and monitor the IP phones, Cisco Unified Personal Communicator
clients, Cisco Unity accounts, and servers. The Cisco Unified Communications servers include an
embedded web-based GUI management interface that can be used to complete basic provisioning and
monitoring tasks. The Cisco Unified Communications Management suite of tools is recommended for
advanced management features, scalability, and performance (Table 15).

Table 15 Unified Communications – Management Tools

Product | Management Tool | Description | Product Documentation Link
Cisco IP phones and Cisco Unified Personal Communicator clients | Cisco Unified Management Suite | CUOM, CUPM, CUSM, CUSSM | Cisco UMS documentation
CUCM, CUP, and Unity servers | Cisco Unified Management Suite | CUOM, CUPM, CUSM, CUSSM | Cisco UMS documentation

Cisco VXI Management Tasks and Work Flows


This section presents use cases for an IT administrator and includes workflows for the most common
day-to-day operations that a Cisco VXI administrator performs. For example, adding a user
might involve creation of a new desktop user in VMware vCenter, VMware View Manager, and
Microsoft Active Directory; configuration of a new phone and Cisco Unified Personal Communicator
account in Cisco Unified Communications Manager, Cisco Unified Presence Server, and Cisco Unity;
provisioning of a new desktop virtualization endpoint in Wyse Device Manager and a DHCP server; and
addition of storage, memory, compute, and network resources in the data center.
A unified management console for service management is an important management tool for a Cisco
VXI system. This customized management application manages the main Cisco VXI components using
interfaces provided by each vendor. Consider using off-the-shelf enterprise network managers that
integrate with component-specific management tools through an API, protocol, or plug-in that exposes
some aspects and features of the tool in the central management utility and allows the administrator to
invoke the tool to access its full capabilities.
An orchestration tool is also an important management tool for large-scale deployments. It automates
the tasks associated with a particular workflow (such as the addition of users, endpoints, and virtual
desktops). The automation interface can also allow an end user to self-provision a virtual desktop
without the assistance of an administrator by automating the interaction with each vendor-specific
management tool. The topology diagram in Figure 44 shows the actions performed on each component
in the solution for a particular workflow, and Table 16 through Table 20 summarize the tasks for each
workflow.
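To make the orchestration idea concrete, the following minimal Python sketch strings together the high-level steps of the Add User workflow summarized in Table 16. Every function here is a hypothetical placeholder for a call into the corresponding vendor management interface (Active Directory, the hypervisor manager, the connection manager, the endpoint device manager, and the Cisco Unified Communications servers); none of the names are actual product APIs.

```python
"""Illustrative orchestration skeleton for the Add User workflow.

Every function below is a hypothetical adapter around the corresponding
vendor tool's API or CLI; none of these names are real product interfaces.
"""
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("vxi-add-user")

def add_ad_user(username):            # hypothetical: LDAP/Active Directory provisioning
    log.info("Adding %s to Active Directory", username)

def clone_virtual_desktop(username):  # hypothetical: hypervisor manager call
    log.info("Cloning virtual desktop for %s from the master template", username)

def entitle_desktop_pool(username):   # hypothetical: connection manager call
    log.info("Entitling %s to the dedicated desktop pool", username)

def provision_endpoint(username):     # hypothetical: device manager and DHCP provisioning
    log.info("Registering the DV endpoint and DHCP options for %s", username)

def provision_phone(username):        # hypothetical: CUCM/CUP/Unity provisioning
    log.info("Creating phone, CUPC, and voicemail accounts for %s", username)

# The value of orchestration is in the ordering and in stopping on failure,
# so a later cleanup (the Delete User workflow) knows how far provisioning got.
ADD_USER_STEPS = [add_ad_user, clone_virtual_desktop, entitle_desktop_pool,
                  provision_endpoint, provision_phone]

def add_user(username):
    for step in ADD_USER_STEPS:
        try:
            step(username)
        except Exception:
            log.exception("Step %s failed for %s; stopping the workflow", step.__name__, username)
            raise

if __name__ == "__main__":
    add_user("jdoe")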
Consider using off-the-shelf enterprise network managers that support SNMP and syslog (for example,
IBM Network Manager) to manage the elements in the system that support these protocols. Collecting
and correlating logs from the different devices in the system can be very useful when troubleshooting
or monitoring a user session; Splunk is one tool that can perform this function. The VizionCore tool
suite (which includes applications such as vFoglight for virtual machine performance management and
reporting and vControl for automation and workflow management) provides advanced features for
managing the virtual data center and storage infrastructure, and Tidal Software offers a suite of
automation tools that can be used to orchestrate the components of the Cisco VXI system.
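As a minimal illustration of the log-collection idea (not a replacement for a tool such as Splunk), the sketch below listens for UDP syslog messages and prefixes each one with its arrival time and sender, so that messages from different Cisco VXI components can be lined up chronologically. It uses only the Python standard library; the listening port is an arbitrary unprivileged choice.

```python
"""Minimal UDP syslog collector for correlating Cisco VXI component logs (sketch only)."""
import socketserver
from datetime import datetime

class SyslogHandler(socketserver.BaseRequestHandler):
    def handle(self):
        data = self.request[0].decode("utf-8", errors="replace").strip()
        source_ip = self.client_address[0]
        # Prefix each message with arrival time and sender so entries from the
        # connection manager, hypervisor hosts, and switches correlate by time.
        print(f"{datetime.now().isoformat()} {source_ip} {data}")

if __name__ == "__main__":
    # Standard syslog is UDP 514 (privileged); 5514 avoids that for this sketch.
    with socketserver.UDPServer(("0.0.0.0", 5514), SyslogHandler) as server:
        server.serve_forever()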


Figure 44 Cisco VXI Workflow – Add User

[Figure 44 maps the Add User workflow onto the solution components: the hypervisor manager creates the desktop and adds storage, compute/memory, and network resources (storage array, compute server, campus switch); the connection manager creates the desktop pool; DV endpoint management provisions the DV endpoint; DNS, DHCP, and AD add the user and DV endpoint; and CUCM, CUP, and Unity add the phone and CUPC account.]


Add User

Table 16 Cisco VXI Workflow – Add User

Task: Add User – Add a user, virtual desktop, and endpoint for a new user.

Things to Consider:
• Is there sufficient compute, memory, storage, and network capacity for a new user?
• Is this a static or dynamically generated desktop?
• Is there a phone or Cisco Unified Personal Communicator account that should also be provisioned?

Workflow:
1. Create the virtual desktop with the required OS, applications, and view agent (this can be cloned from a virtual machine template using a customization template).
2. Join the desktop to the AD domain and check that a DNS entry is automatically created.
3. If more compute, memory, storage, or network resources are needed, add them as required.
4. Add the user to the LDAP server with a username and password.
5. Provision the desktop inside a dedicated pool with the appropriate entitlement for the user.
6. Provision a DV endpoint with the required OS, applications, and device manager agent.
7. Add the DV endpoint to the device manager using autodiscovery (optional).
8. Set up the DHCP server with the network settings and the view manager location.
9. Create a Cisco Unified Personal Communicator/voicemail account and associate the user with the desk phone.


Figure 45 Cisco VXI Workflow: Add User

[Figure 45 shows the Add User flow: add the user to Active Directory; allocate the virtual desktop in vCenter/XenCenter, adding compute resources if needed; provision the desktop pool in View Manager/XenDesktop Controller; provision the DV endpoint in DHCP and WDM; and provision an IP phone and CUPC account in CUCM and CUP.]

Delete User

Table 17 Cisco VXI Workflow – Delete User

Task: Delete User – Remove a user, virtual desktop, and endpoint.

Things to Consider:
• Can any compute, memory, storage, or network capacity be consolidated?
• Is this a static or dynamically generated desktop?
• Is there a phone or Cisco Unified Personal Communicator account that should also be unprovisioned?

Workflow:
1. Remove the virtual desktop from the desktop pool.
2. Remove the virtual desktop.
3. Remove the user in the LDAP server.
4. Remove the DV endpoint in WDM and DHCP.
5. Remove the phone and Cisco Unified Personal Communicator account.


Figure 46 Cisco VXI Workflow – Delete User

[Figure 46 shows the Delete User flow: remove the desktop pool entry in View Manager/XenDesktop Controller; remove the virtual desktop in vCenter/XenCenter; remove the user in Active Directory; remove the DV endpoint in DHCP and WDM; and remove the phone and CUPC account in CUCM and CUP.]

Change User

Table 18 Cisco VXI Workflow – Update Desktop

Task: Change a user's virtual desktop – change the applications on the virtual desktop.

Things to Consider:
• Is this a static or dynamic desktop pool?
• Is the application installed locally or streamed?

Workflow:
• For a static desktop, modify or update the virtual desktop itself.
• For a dynamic desktop pool, change the virtual machine template and create a new desktop.

Figure 47 Cisco VXI Workflow – Update Desktop

[Figure 47 shows the decision: for a dynamic desktop, modify the master template virtual machine; otherwise, modify the desktop itself.]


Table 19 Cisco VXI Workflow – Change Desktop

Task: Change a user's virtual desktop – add storage to the desktop.

Things to Consider:
• Increase the size of the existing storage or add a new storage resource?
• Is thin provisioning enabled for the virtual desktop?
• Can storage be added through a network-mapped drive?

Workflow:
1. Add the storage resource or increase the size of the existing storage for the virtual machine.
2. Use the disk management utility in the guest OS to recognize the new storage.

Figure 48 Cisco VXI Workflow – Change Desktop

[Figure 48 shows the flow: add the storage resource to the virtual machine in vCenter/XenCenter, then add the storage in the guest OS.]

Monitor/Troubleshoot User Session

Table 20 Cisco VXI Workflow – Monitor or Troubleshoot User Session

Task: Monitor/Troubleshoot User Session – Troubleshoot a user session that cannot connect or is suffering from poor quality.

Things to Consider:
• Can the user session connect?
• Is the user session suffering from poor quality?
• Is the view manager reporting insufficient resources?
• Is there connectivity between the endpoint, the connection manager, and the virtual desktop?

Workflow:
1. Determine the network address from the device manager or from the DV endpoint itself.
2. Check error messages and logs on the view manager.
3. Use Wyse remote shadowing (Virtual Network Computing [VNC]) to access the desktop virtualization endpoint and further troubleshoot the problem.
4. Check error messages, logs, and software versions on the DV endpoint.
5. Start a packet capture for the particular host.
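One of the checks in Table 20, connectivity between the endpoint, the connection manager, and the virtual desktop, can be scripted as a simple TCP reachability test. The sketch below uses only the Python standard library; the host names and the port list are placeholders for a specific deployment.

```python
"""Sketch: verify TCP reachability of the Cisco VXI session components."""
import socket

CHECKS = [
    ("view-manager.example.com", 443),    # connection manager (session setup over TLS)
    ("view-manager.example.com", 80),     # connection manager (HTTP session setup)
    ("vdesktop-jdoe.example.com", 3389),  # display protocol port on the virtual desktop (RDP here)
]

def tcp_reachable(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in CHECKS:
        state = "open" if tcp_reachable(host, port) else "unreachable"
        print(f"{host}:{port} {state}")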


Figure 49 Cisco VXI Workflow – Troubleshoot or Monitor User Session

[Figure 49 shows the flow: determine the network address from the endpoint; check error messages on the View Manager/XenDesktop Controller; remote shadow into the client from WDM; check logs and error messages on the DV endpoint; and start a packet capture for that host on the NAM.]

Cisco VXI Management Tools


This section includes a summary and brief description of each management tool that can be used to
provision and monitor the components of the system. It includes guidelines and best practices associated
with the tools used and tasks being performed and reference links to additional documentation.

VMware vCenter and vSphere


The VMware vCenter and vSphere management suite is used to manage the hypervisor (VMware ESX
and ESXi) running on hosts (physical compute servers). It enables administrators to manage multiple
hypervisor hosts and to deploy and manage virtual machines and virtual appliances. It is also used to
monitor compute, memory, storage, and network resource utilization on hosts and virtual machines;
monitor system events; and generate reports. The VMware vCenter server is accessed and administered
with the VMware vSphere client (Figure 50).


Figure 50 vSphere Client Interface

Guidelines for Using VMware vCenter and vSphere in the Cisco VXI System

Virtual Desktops
• When creating a new virtual desktop, the administrator should specify the virtual machine
properties: number of virtual CPUs, memory allocation, storage type and allocation (local, SAN, or
NAS), virtual network interface card (vNIC) connectivity to the virtual switch (vSwitch), and guest
OS (Microsoft Windows or Linux). The administrator should also assign the desktop to a physical
host running the hypervisor. The administrator should then proceed to install the guest OS on the
virtual machine. The virtual machine console can be accessed from VMware vSphere to perform any
additional customization of the guest OS. You should install VMware tools on the guest OS to
improve mouse and graphics performance for the remote desktop session and improve the
manageability of the desktop from VMware vCenter.
• When deploying a large number of virtual desktops, you should clone them from a master template
created from a master virtual desktop that has the required OS, applications, and customization.
You should use the VMware virtual machine customization specification to simplify the process of
cloning multiple desktops (a scripted sketch of this cloning follows this list).
• Use of a virtual machine template is mandatory for dynamically (on-demand) generated desktops.
You should enable DHCP in the master template to avoid duplicate network addresses on the
dynamically generated virtual desktops (since the network settings of a cloned desktop would be
identical to those specified in the master template).
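Template-based cloning can also be scripted against the vSphere API. The short sketch below uses the open-source pyVmomi Python bindings as one illustrative way to do this; the vCenter host name, credentials, template name, and new desktop name are placeholders, and the customization specification and error handling are omitted.

```python
"""Sketch: clone a virtual desktop from a master template through the vSphere API.

Uses the open-source pyVmomi bindings as one illustrative approach; host name,
credentials, template name, and desktop name are placeholders.
"""
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_vm(content, name):
    """Return the first virtual machine (or template) with the given name."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    return next(vm for vm in view.view if vm.name == name)

ctx = ssl._create_unverified_context()   # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com", user="administrator",
                  pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    template = find_vm(content, "win7-master-template")

    # Keep the clone alongside the template; a customization spec would normally
    # be attached here to reset the hostname and enable DHCP, as noted above.
    spec = vim.vm.CloneSpec(location=vim.vm.RelocateSpec(),
                            powerOn=True, template=False)
    task = template.Clone(folder=template.parent, name="vxi-desktop-jdoe", spec=spec)
    print("Clone task submitted:", task.info.key)
finally:
    Disconnect(si)
```

The same flow could equally be driven from the vSphere PowerShell CLI described in the Provisioning guidelines that follow.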

Provisioning
• Use the VMware vSphere Microsoft Windows PowerShell CLI for automating any administrative
tasks (provisioning or monitoring) usually performed with the VMware vSphere client. For
example, you can use the CLI to automate the task of cloning virtual machines that are part of a
statically generated persistent desktop pool. The CLI is useful for an administrator creating a large
desktop pool or an end user self-provisioning a virtual desktop.


• Add a VMware ESXi host to a VMware vCenter data center and manage it with the VMware
vSphere client. The VMware ESXi console can be accessed from the physical host, and it presents
a menu interface that can be used to provision an administrator password and network settings. Even
though VMware ESXi hosts can be accessed directly with the VMware vSphere client, you should
manage them only through the VMware vCenter server. VMware ESXi hosts can be locked down so
they cannot be accessed directly in VMware vCenter. Note that there is no service console in
VMware ESXi, and the service console available in VMware ESX has been deprecated.
• When a large number of virtual machines are being managed in VMware vCenter by multiple
administrators, you should set up individual administrator accounts (rather than a single global
account with full privileges) to track and log changes made by different administrator users.
VMware vCenter authentication can also be linked to Microsoft Active Directory to accomplish this
logging and manage the administrator accounts centrally. You also should use role-based access to
VMware vCenter and vSphere to limit the functions, privileges, and scope (by data center or cluster
or allowed capabilities) for changes that the user is allowed to perform.

Note You can take regular snapshots of persistent desktops or master virtual desktops and use these
snapshots to restore a desktop to a known state. This procedure can be useful when a desktop
has been compromised (attacked by a virus) or has performance problems.

• You should regularly back up the VMware ESXi host configurations and the virtual machines. The
preferred way to do this is to use the VMware vSphere CLI (the vicfg-cfgbackup command) and
VMware vSphere applications such as VMware Consolidated Backup, VMware Data Recovery, and
the VMware vStorage API. Refer to the VMware documentation for best practices for using these
backup and recovery features.
• In the event of power loss, you should automatically power on the virtual desktops after power is
restored to the physical host. The sequence and timing in which to power on virtual machines can
be specified in VMware vCenter. Consider staggering the power-on of the virtual desktops to avoid
excessive network and server load when Microsoft Windows network services (DHCP, DNS, and
Microsoft Active Directory) are accessed during bootup.
• Use an external Microsoft SQL or Oracle database engine for larger deployments since the default
SQL database bundled with VMware vCenter does not scale. (It is intended for use with up to 5 hosts
and 50 virtual machines.) Refer to the scaling and limit reference documentation Configuration
Maximums from VMware for information about the number of hosts and virtual machines that can
be managed by VMware vCenter. You also should back up the VMware vCenter database
periodically using the backup utility included with the external database.
• You should install VMware vCenter on a virtual machine to improve the availability of applications
by taking advantage of high-availability features such as VMware vMotion, DRS, and FT. For
high-availability deployments, VMware vCenter can be run on redundant servers that use a heartbeat
mechanism for establishing active and standby roles. For large-scale deployments, consider using
the linked-mode group feature, which allows VMware vCenter to be set up on a cluster of servers
that allow shared access to all the virtual machines in the group. Refer to the VMware
documentation for best practices for deploying VMware vCenter in a redundant configuration.

VMware View Manager Administrator Console

VMware View Manager 4.5 is used for managing remote desktop sessions between desktop
virtualization endpoints and virtual desktops. It authenticates users initiating a session and then redirects
them to a virtual desktop. The VMware View Administrator Console (Figure 51) is a web-based
application used to deploy and manage desktop pools, control user authentication, monitor desktop use,
examine system events, and perform analysis. VMware View 4.5 also includes the VMware View Agent
running on the virtual desktop and the VMware View Client running on the desktop virtualization
endpoint. Note that these are software components that also need to be managed as described here.

Figure 51 VMware View Administrator Console

Features and Guidelines for Using VMware View Manager Administrator Console in the Cisco VXI System

Features
• Use the automatic pool feature to generate desktops dynamically. Note that this feature can also be
used to implement nonpersistent desktops by deleting the desktop after the user logs out. The
administrator can specify the maximum number of desktops to generate and the number of spare
desktops to maintain for new incoming session requests. These specifications will help ensure that
a desktop is available for every new user session.
• Use the manual pool feature to create desktop pools composed of persistent desktops, that is,
virtual desktops that are not created and removed dynamically. When setting up a persistent desktop
pool, use floating assignment to allow the end user to choose an available desktop, instead of
automatic assignment, which connects the user to any available desktop.

Note VMware View Manager Administrator Console provides the capability to restart or remove a
virtual desktop to recover it from an unknown state.

Provisioning
• Use the VMware View Manager Microsoft Windows PowerShell cmdlets and vdmadmin CLI
commands to automate the task of adding a manual pool or monitoring active sessions or desktops.
These tools can be used by an administrator who needs to provision a large number of desktops or
by an end user who is self-provisioning through a web-based interface. Refer to the VMware View
Manager documentation for more information.


• The VMware View Manager configuration can be customized by exporting the configuration in
LDAP Data Interchange Format (LDIF), making changes with a text editor, and then importing the
modified file. This process can also be used to back up the VMware View Manager configuration.
The CLI-based commands are vdmimport and vdmexport. Refer to the VMware View Manager
documentation for more information.

Monitoring
• The VMware View Manager event database is a crucial component for monitoring and
troubleshooting session activity, and it can be accessed in the VMware View Manager Administrator
Console using the event viewer. It can also be queried using an enterprise-class database manager.
When setting up the event viewer configuration, make sure that the administrator has the required
permissions to access the external database.
• Use the server log bundle utility (which allows the user to specify logging levels) for advanced
troubleshooting of the VMware View Manager.

VMware View Composer


The VMware View Composer is a software service that is installed on the VMware vCenter server to
allow VMware View Manager to rapidly deploy multiple linked-clone desktops from a master template
image.
• You should use the linked-clone feature to implement a nonpersistent shared pool of desktops. Note
that dynamic generation of desktops is faster (typically 10 times faster) with linked clones than with
template-based cloning.

VMware View Virtual Desktops


• The VMware View Agent installation and upgrade on persistent desktops can be managed using
standard desktop management tools such as Altiris. You should clone the virtual desktop after
installing all standard applications, including the agent on the master template.
• You should synchronize virtual desktop clocks to a network-based Network Time Protocol (NTP)
server. As an alternative, you can synchronize the virtual desktop clocks with the VMware ESXi
host, and the host can be synchronized with a central NTP server. This synchronization is
accomplished on the virtual desktop using the VMware tools configuration option. Consult the
VMware documentation for detailed guidelines for synchronizing the virtual desktop clocks.
• You can manage the VMware View Client installation and upgrade on a desktop virtualization
endpoint using standard desktop software management tools such as Altiris and Wyse Device
Manager.

Citrix XenDesktop Management Console


Citrix XenDesktop Delivery Controller 4.0 is used to manage the remote desktop sessions between
desktop virtualization endpoints and virtual desktops. It authenticates users initiating a session, and it
redirects them to a virtual desktop. The Citrix XenDesktop Management Console (Figure 52) is used to
provision desktop pools, grant user privileges, and monitor desktop use. The Citrix XenDesktop solution
also includes the Citrix XenDesktop Agent running on the virtual desktop and the Citrix XenDesktop
Receiver running on the desktop virtualization endpoint. These are components that also need to be
managed as described here.


Figure 52 XenDesktop Management Console

Guidelines for Using the Citrix XenDesktop Management Console in the Cisco VXI System

Management Console
• The Citrix XenDesktop Enterprise or Platinum Edition is recommended for large-scale deployments
and includes critical management features.

Note Citrix XenDesktop Express Edition can be used for a trial deployment.

• The provisioning services tool is used to provision dynamic desktop pools based on a master virtual
machine template. The desktop performance monitoring feature is used to proactively manage the
virtual desktop experience by monitoring the critical performance metrics (compute, memory,
storage, and network resource utilization). The Citrix Profile Management tool is used to manage
user personalization settings in the Microsoft Windows environment. The workflow studio provides
an easy-to-use GUI to orchestrate and tie together multiple components in a workflow.
• When provisioning desktop pools with virtual desktops managed by VMware vCenter, enable the
HTTP interface on the VMware vCenter server and specify HTTP in the VMware vCenter server
URL. Alternatively, install signed certificates from a trusted certificate authority on the VMware
vCenter server to use the HTTPS-based interface.

Citrix XenDesktop Virtual Desktop


• You may need to update the display driver on virtual desktops hosted on VMware ESXi using the
Microsoft Windows device manager to improve the graphics performance. You also should turn off
mouse shadowing in the mouse properties of the device manager on the virtual desktop to improve
performance. There are several other recommendations to improve the experience and performance
of the desktop. Consult the Citrix documentation for more information.


• Citrix EdgeSight for Endpoints provides comprehensive visibility into application performance at
the virtual desktop. It allows the administrator to monitor and measure the end-user experience,
identify poorly performing applications (it records all application crashes and hangs), and optimize
desktop delivery. The data collected by the agent running on the desktop is sent periodically to the
Citrix EdgeSight server (usually when the desktops are offline).

Citrix XenDesktop Receiver


• The receiver is an application running on the desktop virtualization endpoint that needs to be
installed, upgraded, and managed using standard desktop software management tools such as
Altiris.

Note When changes are made to desktop pools while an end user is logged into the desktop controller,
an application refresh on the desktop virtualization endpoint’s receiver application is required
before you can see the updated desktops available. This task is accomplished in different ways
on different desktop virtualization endpoint receivers. Consult the receiver documentation
provided by the vendor for more information.

Wyse Device Manager


Wyse Device Manager (Figure 53) allows the administrator to perform image (OS), configuration, and
asset management; remote control; monitoring; and shadowing of Wyse desktop virtualization
endpoints. It can be used to plan and implement enterprise-scale endpoint device discovery, upload and
download endpoint parameters, and distribute software packages to endpoints. It also supports a
thin-client cloning feature that allows the administrator to capture the image and configuration on a
reference thin client and deploy it across the enterprise. It allows the creation of customized scripts that
are distributed to endpoints to automate endpoint device setup and configuration. You should validate
scripts on a test device before deploying them across the enterprise. It supports thousands of Microsoft
Windows Embedded Standard (WES) or Compact Edition (CE), Linux, Wyse ThinOS, or Wyse XP
Embedded (XPe) thin-client devices deployed across a LAN, WAN, or wireless network. Wyse Device
Manager Enterprise Edition is recommended for large-scale deployments for added security, scalability,
and performance features (note that the Workgroup Edition supports up to 750 endpoints). Refer to the
Wyse documentation for a complete list of features. You should use Wyse Device Manager to manage
Wyse desktop virtualization endpoints to enable a more centralized and hands-on approach.

Figure 53 Wyse Device Manager Console


The asset information collection feature allows Wyse Device Manager to track the hardware revision of
each endpoint and the OS, application, and add-on versions that have been applied. Wyse Device
Manager monitors and stores all asset information about each device (including hardware asset
information and information about the software that is installed on each device). Software information
includes the operating system, as well as all applications and add-ons that have been applied to the
endpoint device. Note that Wyse Device Manager can monitor the status of the write filter on thin clients
running the XPe client OS. Reports can be generated to provide a list of endpoints being managed,
endpoints scheduled for updates, current endpoint software revision levels, server logs, and endpoint
availability based on the endpoint uptime.
The device management feature allows the administrator to view and monitor the status of the desktop
virtualization endpoints in real time. Wyse Device Manager can be configured to automatically provide
up-to-date status information for all endpoints. To remotely manage an endpoint, a Wyse Device
Manager agent should be installed on the endpoint. The agent needs to be provisioned with the address
of the Wyse Device Manager server. This provisioning can be performed automatically during the device
discovery process, or it can be performed manually on the endpoint. It can also be performed using
DHCP options during endpoint bootup or using DNS service (SRV) records that identify
the location of the server. The agent running on the endpoint periodically checks in with the device
manager using the heartbeat mechanism and looks for any updates. The agent also reports any status
changes to the server. The device discovery feature allows desktop virtualization endpoints to be
manually added to the device manager or discovered dynamically by the device manager by providing
an IP address range within which to perform the discovery.
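The SRV-based discovery mentioned above can be illustrated with a generic lookup. The sketch below uses the dnspython package, and the record name _wdmserver._tcp.example.com is purely a placeholder; consult the Wyse documentation for the actual record (and DHCP options) that the agent expects.

```python
"""Sketch: resolve a management server location from a DNS SRV record (generic example)."""
import dns.resolver  # pip install dnspython

def locate_server(srv_name="_wdmserver._tcp.example.com"):   # placeholder record name
    answers = dns.resolver.resolve(srv_name, "SRV")
    # Prefer the lowest priority (and then the highest weight), per SRV semantics.
    best = min(answers, key=lambda r: (r.priority, -r.weight))
    return str(best.target).rstrip("."), best.port

if __name__ == "__main__":
    host, port = locate_server()
    print(f"Device manager at {host}:{port}")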
Wyse Device Manager can remotely control a desktop virtualization endpoint, including reboot,
shutdown, and wakeup (using Wake on LAN) operations. Shadowing (using VNC) allows the
administrator to remotely access the endpoint console to verify settings and set up a remote desktop
session. Shadowing can be useful for troubleshooting endpoint issues in real time. Note that in addition
to allowing configuration access directly from the endpoint console, some zero and thin clients (Viance
and P20) allow the administrator to remotely access configuration information through a web-based
GUI.
The desktop virtualization endpoint updates include updates to the image (OS), applications (agent), and
settings (network and connection), and these can be initiated by the device manager server according to
a schedule (to reduce downtime and move it to off-peak hours) or when the device checks in to the server.
The repository creation and administration feature allows the administrator to easily build and
administer a repository of software, images, and configuration updates for distribution. The Wyse Device
Manager server includes a master software repository, which acts as the image and package server. The
master repository can be synchronized with a remote software repository that is located in a branch
office. For remote locations such as a branch office, you should use remote file repositories that are local
to the endpoints to conserve WAN bandwidth. Since the file transfers are completed using TCP-based
protocols (FTP and HTTP), any file transferred across the WAN would benefit from the acceleration,
caching, and compression features of Cisco WAAS.
When deploying a large number of desktop virtualization endpoints, a centralized configuration
approach is recommended. This approach allows an endpoint to automatically detect the desktop
controller location and update itself with the correct image and settings. Endpoints should be
provisioned with network settings and the location of a desktop controller during bootup, through DHCP.
Refer to the Wyse thin-client and Wyse Device Manager documentation for more information about the
specific DHCP options to use here.
In addition, file server locations can provide initialization files instructing the endpoint to download
updated images and configurations. Users can be instructed to reset or reboot their endpoints through
email notification with instructions, or the administrator can use Wyse Device Manager to centrally
initiate this task and push configurations and image updates to endpoints. Remember that these settings
(file server, Wyse Device Manager server, and desktop controller locations) can also be provisioned
manually by accessing the endpoint through the console or web interface. Consult the Wyse
documentation for specific instructions for setting up a master file server repository, including the
directory structure for storing configurations and images and the format of the configuration files for
each desktop virtualization endpoint type being deployed.

Note The Citrix desktop delivery controller can be used as a file server since it is already a web server.

Wyse Device Manager Default Device Configuration (DDC) is a policy management tool that allows
administrators to create rules for automatic deployment of OS images, scripts, packages, and other
settings to thin-client endpoints. DDC allows an administrator to configure default software and device
settings for a group of devices to simplify and fully automate the management of their endpoints. For
example, an administrator for a global organization can use DDC to map a subnet to a specific country.
When an endpoint is connected to the network for the first time, Wyse Device Manager DDC uses the
subnet information to provision the thin-client endpoint with the correct language settings and
configuration.
Figure 54 shows the overall architecture of Wyse Device Manager.

Figure 54 Wyse Device Manager Architecture

[Figure 54 shows the Wyse Device Manager architecture: branch endpoints, a branch DHCP server, and a remote software repository connect across the WAN to the corporate data center, which hosts the WDM server and GUI, the master software repository, a DHCP server, and campus endpoints.]

The Wyse Device Manager server can be integrated with IBM Tivoli NetView or CA Unicenter as an
add-on module. Add-on modules allow IT professionals to easily organize, upgrade, control, and support
thousands of Wyse desktop virtualization endpoints from within supported third-party management
suites such as IBM Tivoli NetView and CA Unicenter Network and Systems Management (NSM). The
add-on software enables an administrator to use the existing management framework to provide a
comprehensive and unified view of all the network devices. The add-ons integrate Wyse desktop
virtualization endpoints with the current framework and enable access to all features from the
management console. This integration reduces the learning time needed for IT professionals and
provides a cohesive experience.


Devon Thin-Client Management


Devon ThinManage Version 3 thin-client management allows an administrator to provision and monitor
the Devon desktop virtualization endpoints. It supports endpoint image and agent updates, provisioning
of settings, and the use of profiles for applying settings and shadowing. The Devon ThinManage server
can play the role of both thin-client manager and connection manager. The thin-client agent runs on the
thin-client endpoint and communicates with the Devon ThinManage server. You may need to update the
the agent to enable new features and fix any known problems. The server console is accessed through
the web browser (Figure 55).

Figure 55 DevonIT Thin Client Management

Guidelines for Using Devon ThinManage

Note The Devon ThinManage server is distributed as a virtual appliance that runs on VMware Server or
Player. It can be converted to a virtual machine that runs on a VMware ESXi host by using VMware
vConverter.

• Use the shadowing feature to remotely access an endpoint console to provide assistance to a user or
troubleshoot a problem in real time. The Devon ThinManage server supports an event-logging
feature that is useful for monitoring and troubleshooting.


• The Devon ThinManage server allows the cloning of desktop virtualization endpoint settings,
connection settings, and full disk images. The endpoint settings include display, sound, keyboard,
mouse, and network configurations. The Devon ThinManage server allows the administrator to
clone these settings from one endpoint, store them in the database, and then apply them to other
endpoints. The connection settings include parameters that specify how the endpoints connect to the
connection server. The endpoint images can be stored on file servers and accessed using FTP or NFS
protocols.
• Use the backup feature to create a snapshot of the current management state so that it can be restored
later using the restore feature. This approach enables disaster recovery as well as persistence of the
management configuration and device inventory following an image upgrade.

IGEL Universal Management Suite


IGEL Universal Management Suite (UMS) Version 3 allows centralized management of IGEL desktop
virtualization endpoints. It supports automatic provisioning of endpoints with the correct profile
settings, image management, diagnostics, shadowing, and reporting. The IGEL UMS console
(Figure 56) is the GUI for the IGEL UMS server. The IGEL UMS console can control multiple IGEL
UMS servers. You should use IGEL UMS to manage large-scale deployments of IGEL desktop
virtualization endpoints in a Cisco VXI system.

Figure 56 IGEL UMS GUI

Guidelines for Using IGEL UMS


• The desktop virtualization endpoints can be remotely shut down, awakened, or booted using the
lights-out management feature and scheduled events. The shadowing capability allows the IT
administrator to remotely access the endpoints from the console to perform diagnostics. The
endpoints locate the IGEL UMS server using DNS (standard domain name). IGEL UMS has
reporting features that allow administrators to build complex reports that help detect problems and
allow the administrator to track configuration changes.


• The IGEL UMS server requires a database, which can be installed on the local server or on an
external server. Consult the IGEL documentation for a list of supported databases. All
communication between the server, console, and desktop virtualization endpoints is encrypted
(using HTTPS and SSL).
• IGEL UMS uses HTTP and FTP for image distribution. When distributing desktop virtualization
endpoint images across the WAN, you should use FTP servers located in the branch office, or you
should use the Buddy Update feature, which transforms an already updated IGEL desktop
virtualization endpoint into the image source for other local endpoints on the network. The IGEL
desktop virtualization endpoints come with a fail-safe update mechanism that allows the endpoint
to boot a previous configuration in the event of power loss during reimaging.
• IGEL UMS allows multiple system administrators to provision and monitor the system. Each
administrator can have a different span of control and level of authority. This approach allows
support responsibilities to be delegated throughout the organization.
• Multiple endpoints can be selected and modified using grouping and profiles at the IGEL UMS
console. By applying a profile to a group of endpoints, the administrator can change the settings
efficiently. The desktop virtualization endpoints can be automatically assigned to a group and
assigned a profile (based on the MAC address) when registering to the IGEL UMS server.

EMC Navisphere and Unisphere Management Suite


The EMC Navisphere (Figure 57) and Unisphere (Figure 58) management suites are used to provision
and monitor the EMC Unified Storage device. They provide web-based interfaces that allow
administrators to discover, monitor, and configure storage systems. They can also be used to collect
storage latency measurements and to perform trend analysis, reporting, and system capacity
management. They support automatic notification of events, allowing the administrator to proactively
manage critical status changes.


Figure 57 EMC Navisphere Management Console


Figure 58 EMC Unisphere Management Console

Since the storage device (NAS or SAN) maintains the virtual machine configuration and files for many
end user desktops, it represents a critical resource in the Cisco VXI system. In addition to using the
redundancy and fault tolerance built in to the storage device itself, you should back up storage to another
physical location, to enable restoration of service in the event of a failure.

NetApp Virtual Storage Console and StorageLink


The NetApp Virtual Storage Console (VSC; Figure 59) is recommended for provisioning and monitoring
the NAS device NetApp FAS 3170. NetApp VSC is a plug-in that can be installed in VMware vCenter.
In a system running Citrix XenServer, use NetApp StorageLink to provision the NetApp storage device.
NetApp StorageLink can also be used to collect storage latency measurements. Refer to the NetApp
documentation for guidelines and best practices for managing the NAS devices in a virtualized
environment.


Figure 59 NetApp Virtual Storage Console

Cisco NAM
The Cisco NAM Traffic Analyzer software (Figure 60) allows administrators to monitor, manage, and
improve the way that applications and services are delivered to end users. Cisco NAM offers flow-based
traffic analysis of applications, hosts, conversations, and streams that use differentiated services code
points (DSCPs); performance-based measurements of application, server, and network latency; QoE
metrics for network-based services such as voice over IP (VoIP) and video; and problem analysis using
deep, insightful packet captures. Cisco NAM can monitor traffic, collect packet traces, and generate
historical reports. It can provide comprehensive traffic analysis and insightful troubleshooting
information in real time, which can be used to increase the efficiency of the network and applications.
Cisco NAM can also be used to validate QoS planning assumptions to help ensure that service levels are
met and to assess the user experience with transaction-based statistics. Consider the guidelines and
recommendations presented here when using Cisco NAM to monitor and troubleshoot a Cisco VXI
system.
Note that Cisco NAM includes an embedded, web-based traffic analyzer GUI that provides access to the
configuration menus and presents easy-to-read performance reports about web, voice, and video traffic
utilization through the browser. You should dedicate a separate management interface on the Cisco NAM
appliance for accessing the GUI.

Figure 60 NAM Admin GUI

Cisco NAM comes in several form factors and can be deployed in different locations of the network. The
Cisco NAM probe should be placed in a location suitable to the specific task being performed. Any
location that is the ingress or egress point of a logical network boundary (aggregation layer, core, or
campus edge) can offer valuable insights into the network activity within that partition and is usually a
good choice for Cisco NAM deployment. For example, you should place Cisco NAM in the data center
core to monitor sessions connecting to critical server farms, such as the connection manager. This
location would allow you to monitor sessions initiated by end users located in all parts of the enterprise
network. Cisco NAM can also be placed in a branch office to troubleshoot a particular set of users
connecting across a WAN. The WAN edge is the optimal location for monitoring sessions originating
from multiple branch offices. In the branch office, Cisco NAM can be deployed as a network module on
a Cisco ISR branch-office router, or in the core network as a blade on a Cisco Catalyst 6000 Series core
switch or a separate appliance (Cisco NAM 2204 or 2220 Appliance). Cisco NAM can also be deployed
as a software module on the Cisco Nexus 1000V or the Cisco WAAS device. In the data center core, you
should deploy a separate Cisco NAM appliance, which uses dedicated hardware to provide the required
level of performance. Refer to the Cisco NAM deployment guide for a comprehensive discussion of
considerations for Cisco NAM deployment.
Use NetFlow Data Export (NDE) from a remote router to Cisco NAM for network traffic utilization
reports. The NetFlow data can be exported from a Cisco WAAS appliance (to monitor the traffic across
the WAN), router, switch, or Cisco ASA appliance. For example, NetQoS flow data from Cisco WAAS
can provide information about application latency. The traffic utilization reports will help identify traffic
types that will benefit from QoS policies and Cisco WAAS optimization. These measurements can also
be used for growth forecasting and planning (Figure 61).

Figure 61 Cisco Network Analyzer Module Traffic Utilization Report
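To give a concrete sense of what an NDE consumer receives, the sketch below parses the fixed NetFlow version 5 header and record layout and totals bytes per destination port. Cisco NAM performs this kind of accounting (and far more) internally, so this is purely illustrative; the listening port is whatever export destination is configured on the router or Cisco WAAS device.

```python
"""Sketch: tally bytes per destination port from NetFlow v5 export packets (illustrative)."""
import socket
import struct
from collections import Counter

# NetFlow v5 header: version, count, sys_uptime, unix_secs, unix_nsecs,
# flow_sequence, engine_type, engine_id, sampling_interval (24 bytes).
HEADER = struct.Struct("!HHIIIIBBH")
RECORD_LEN = 48  # fixed NetFlow v5 flow record size

def parse_v5(packet, byte_totals):
    version, count = struct.unpack("!HH", packet[:4])
    if version != 5:
        return
    offset = HEADER.size
    for _ in range(count):
        rec = packet[offset:offset + RECORD_LEN]
        offset += RECORD_LEN
        octets = struct.unpack("!I", rec[20:24])[0]   # dOctets field
        dstport = struct.unpack("!H", rec[34:36])[0]  # destination port field
        byte_totals[dstport] += octets

if __name__ == "__main__":
    totals = Counter()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 2055))   # common NetFlow export port; match the exporter's config
    while True:
        data, _ = sock.recvfrom(65535)
        parse_v5(data, totals)
        print(dict(totals.most_common(5)))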

Switched Port Analyzer (SPAN) sessions on a core switch capable of monitoring multiple source ports,
such as the Cisco Catalyst 6000 Series or Cisco Nexus 7000 Series Switches, should be used for packet
captures. Use a SPAN port (preferably 10 Gigabit Ethernet) on the data center core switch (Cisco Nexus
7000 Series) to monitor all traffic to and from the data center servers. Note that there is a limit of 2 SPAN
sessions on the switch.
The main use of Cisco NAM is to troubleshoot and analyze a user session that fails to set up correctly
or a user session with poor quality. To display a specific conversation, a packet capture can be filtered
based on the endpoint, desktop controller, or virtual desktop network address. It can also be filtered
based on VLAN, protocol (port), or DSCP marking in the stream.
Note that Cisco NAM is not currently capable of decoding the contents of a desktop virtualization
protocol packet, since the protocol format is proprietary and may be encrypted. Cisco NAM can identify
and label the stream based on the port numbers used (Figure 62).


Figure 62 NAM Packet Decoder

Table 21 lists the protocol and port used by desktop virtualization sessions. This information can be
utilized to identify the user streams in a packet capture:

Table 21 Protocol/Port Information

DV Protocol | Transport | Port
RDP | TCP | 3389, 32111 (USB), 9427 (MMR)
PCoIP | UDP | 1472
ICA | TCP | 1494, 8080

Note that a typical VMware View session setup (desktop virtualization endpoint to VMware View
Manager stream) uses TCP (port 80) and Transport Layer Security Version 1 (TLSv1; port 443). The
communication between VMware View Manager and View Agent uses TCP (port 4001). Refer to the
VMware vCenter documentation for information about specific port use by the VMware vSphere client.
The VMware vCenter server’s VMware vSphere client and software development kit (SDK) interface
uses HTTP (80) and HTTPS (443). VMware vCenter communication to the VMware ESXi host uses a
TCP (port 902) stream. The communication between the Citrix Desktop Delivery Controller and the
Citrix XenDesktop Agent uses TCP (port 8080). The Wyse Device Manager defaults to HTTP (80) and
HTTPS (443) for communication with desktop virtualization endpoint agents. The administrator should
make sure that these ports are open on all firewall devices.
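The port information above and in Table 21 can be turned into a simple classifier for flow records or capture summaries. The minimal sketch below maps transport protocol and port to a label; it reflects only the ports named in this document, and anything else is reported as unclassified.

```python
"""Sketch: label a flow by the well-known Cisco VXI ports listed above."""
PORT_LABELS = {
    ("tcp", 3389): "RDP display",
    ("tcp", 32111): "RDP USB redirection",
    ("tcp", 9427): "RDP multimedia redirection",
    ("udp", 1472): "PCoIP display",
    ("tcp", 1494): "ICA display",
    ("tcp", 8080): "ICA / XenDesktop controller-to-agent",
    ("tcp", 80): "HTTP (session setup, WDM, vCenter SDK)",
    ("tcp", 443): "HTTPS/TLS (session setup, WDM, vCenter SDK)",
    ("tcp", 4001): "View Manager to View Agent",
    ("tcp", 902): "vCenter to ESXi host",
}

def classify(protocol, src_port, dst_port):
    """Return a label for a flow given its transport protocol and port pair."""
    for port in (dst_port, src_port):
        label = PORT_LABELS.get((protocol.lower(), port))
        if label:
            return label
    return "unclassified"

if __name__ == "__main__":
    print(classify("udp", 50012, 1472))   # -> PCoIP display
    print(classify("tcp", 51000, 3389))   # -> RDP display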


Packet capture of the traffic from a desktop running Cisco Unified Personal Communicator with an
active voice call does not include any Real-Time Transport Protocol (RTP) traffic, since the Cisco Unified
Personal Communicator client is running in desk phone control mode.
The desktop packet capture does include Cisco Unified Personal Communicator client communications
with Cisco Unified Presence and Cisco Unified Communications Manager through the Computer
Telephony Integration (CTI) protocol. The packet capture of a Cisco IP Phone that is co-located with the
desktop virtualization endpoint does include the RTP traffic. Cisco NAM can also collect RTP metrics
used for audio-quality measurements from the Cisco Unified Management Suite.

Desktop Management Tools


Desktop provisioning and monitoring is critical to successful management of a Cisco VXI system. Use
the guidelines and recommendations presented for this task.

Desktop Provisioning
Virtual desktops and desktop virtualization endpoints can be managed using standard Microsoft
Windows software management tools such as Altiris. These tools can be used to push out agent updates,
OS fixes, security updates, and applications. The Microsoft Windows desktop user profiles can be
managed and applied to desktops using tools such as Liquidware Labs ProfileUnity, AppSense, Citrix
XenDesktop profile management, and VMware Virtual Profiles. These desktop personalization tools
speed up login times and prevent the corruption of user profiles.
An antivirus scan can be scheduled and performed on online or offline desktops using a third-party
appliance. For better scheduling flexibility, consider using the VMware VMsafe API with an antivirus
scan appliance to conduct online or offline antivirus scans of virtual desktops.

Desktop Monitoring
Resource (CPU, memory, storage, and network) use on individual desktops can be measured and
monitored in real time by using the VMware statistics logging application. Consider using the Intel
IOMeter tool and the built-in Microsoft Windows Task Manager, which displays resource utilization and
performance measurements (Figure 63). These tools are also useful for troubleshooting purposes.
VMware vCenter can be used to capture these measurements on a per–virtual machine basis.


Figure 63 Windows Task Manager Performance Monitor

Measuring resource utilization of physical desktops enables administrators to assess the physical
desktop infrastructure prior to migration to a virtualized environment. It also enables capacity planning
for future resource buildout (compute, memory, storage, and network resources) and postdeployment
QoS monitoring on virtual desktops. Consider using tools from Liquidware Labs or Lakeside. An audit
log of the files being accessed and the applications being invoked on the desktop may also be useful for
monitoring desktop use and troubleshooting performance problems. Consider using Microsoft Windows
Performance Manager in addition to any other software or application probes that can be installed on the
desktop.

Cisco Management Tools


Other Cisco tools critical to management of the Cisco VXI system are Cisco UCS Manager, Cisco Fabric
Manager, Cisco DCNM, CiscoWorks LMS, Cisco ACE Device Manager, Cisco WAAS Central Manager,
and Cisco Adaptive Security Device Manager (ASDM). Many of these tools, such as Cisco UCS
Manager, Cisco ACE Device Manager, Cisco WAAS Central Manager, and Cisco ASDM, are embedded
in the managed device and are accessed through the browser. Some of these tools, such as CiscoWorks
LMS, Cisco Fabric Manager, and Cisco DCNM, are Microsoft Windows server applications and are
accessed through the browser or client applications.

Note Cisco WAAS Central Manager runs on a dedicated Cisco WAAS appliance.

Use the Cisco WAAS Central Manager to monitor and generate reports about traffic optimization for
sessions initiated from branch offices across the WAN (Figure 64). Since the Cisco WAAS appliance can
be deployed inline or offline, the traffic utilization and optimization reports generated by the central
manager may not represent all the traffic flowing through the neighboring router. The Cisco WAAS
device will be aware only of traffic presented to it in an offline configuration. You should use a backup
Cisco WAAS Central Manager with stateful redundancy to increase availability, and you should locate
the Cisco WAAS Central Manager in the data center with other critical resources. After a Cisco WAAS
configuration is defined on the Cisco WAAS Central Manager, it can be pushed out to all Cisco WAAS
devices in the network in an efficient manner.

Figure 64 Cisco WAAS Central Manager GUI

Use Cisco ASDM to monitor and generate reports about sessions initiated by teleworkers (located
outside the enterprise) who use Cisco ASA to access the corporate network using a VPN client. Cisco
ASDM can be used to provision firewalls, Network Address Translation (NAT), and VPN profiles on the
Cisco ASA appliance as well as to monitor firewall activity, CPU use, network interface use, and VPN
status on the Cisco ASA (Figure 65).


Figure 65 Cisco ASDM Device Dashboard

Use the Cisco ACE Device Manager to monitor and generate reports about all desktop virtualization user
sessions that use a Cisco ACE appliance to access the connection manager. The Cisco ACE Device
Manager can generate reports about the overall session load presented to the connection managers
located in the data center (Figure 66). You should deploy Cisco ACE in redundant pairs with stateful
redundancy to increase availability.


Figure 66 ACE Device Manager Monitor

You should use the web-based GUI (instead of the CLI) to manage Cisco WAAS and Cisco ASA, since
the GUI can manage multiple devices, provision them with similar settings, and generate aggregate
reports based on statistics collected from multiple managed devices. This approach is recommended for
large-scale deployments, which have many devices to be managed.
Use Cisco UCS Manager (Figure 67) to provision the Cisco UCS blade servers, Cisco UCS 5100 Series
Blade Server Chassis, and Cisco UCS 6100 Series Fabric Interconnects; it can scale up to 40 blades
across multiple chassis. Provision separate VLANs on the uplink to separate management traffic from
virtual machine traffic, and perform this provisioning on the Ethernet uplink ports and the vSwitch
running in VMware ESXi. You should use service profile templates, as well as templates for vNIC and
virtual host bus adapter (vHBA) characteristics, to apply uniform configurations across all blades.
Cisco UCS Manager also supports blade swapping, which reduces downtime during a hardware exchange.

Figure 67 Cisco UCS Manager


Use Cisco DCNM to provision and monitor the Cisco Nexus 5000 and 7000 Series Switches (Figure 68).
Use Cisco Fabric Manager to manage the Cisco MDS 9000 Family of Fibre Channel switches
(Figure 69).

Figure 68 Data Center Network Manager

Figure 69 Fabric Manager Web Client

Use CiscoWorks LMS to monitor and provision routers, switches, and other network devices throughout
the end-to-end system (Figure 70). It can be used to measure network link utilization, latency, and jitter
on all managed interfaces.


Figure 70 CiscoWorks LMS Resource Manager Essentials

The Cisco Unified Communications Management Suite is the recommended tool for managing the Cisco
Unified Communications Manager, Cisco Unified Presence, and Cisco Unity servers as well as the Cisco
Unified Personal Communicator clients and Cisco IP Phones deployed in the system. It complements the
embedded web-based management interfaces available with the Cisco Unified Communications suite of
servers, is recommended for large-scale deployments, and consists of the following tools: Cisco Unified
Operations Manager, Provisioning Manager, Service Monitor, and Statistics Manager.

For More Information


The following web links offer detailed information about the specific vendor hardware and software
discussed in this chapter.
Cisco Deployment Guide for VMWare View 4.0
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/vmware/cisco_VMwareView.html
Cisco UCS GUI Configuration Guide
http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/1.3.1/UCSM_GUI_Con
figuration_Guide_1_3_1.pdf
Cisco Unified Communications Manager Systems Guide
http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/admin/8_0_2/ccmsys/accm.pdf
VMware vSphere 4.1 documentation
http://www.vmware.com/support/pubs/vs_pages/vsp_pubs_esxi41_i_vc41.html
VMware vSphere Command-Line Interface Documentation
http://www.vmware.com/support/developer/vcli/
VMware APIs and SDKs Documentation
http://www.vmware.com/support/pubs/sdk_pubs.html
VMware View 4.0 Documentation
http://www.vmware.com/support/pubs/view_pubs.html


VMware View Installation Guide (release 4.5)


http://www.vmware.com/support/pubs/view_pubs.html
VMware View Administrators Guide (release 4.5)
http://www.vmware.com/support/pubs/view_pubs.html
VMware View Integration Guide (release 4.5)
http://www.vmware.com/support/pubs/view_pubs.html
VMware View Architecture Planning Guide (release 4.5)
http://www.vmware.com/support/pubs/view_pubs.html
VMware Configuration Maximums
http://www.vmware.com/pdf/vsphere4/r40/vsp_40_config_max.pdf
Citrix XenDesktop 4 documentation
http://support.citrix.com/proddocs/index.jsp
EMC Navisphere Management Suite
http://www.emc.com/products/detail/software/navisphere-management-suite.htm
EMC Unisphere management suite
http://www.emc.com/products/detail/software/unisphere.htm?context=storage
NetApp Virtual Storage Console
http://www.netapp.com/us/products/management-software/vsc/virtual-storage-console.html
NetApp StorageLink
http://media.netapp.com/documents/tr-3732.pdf
Altiris reference documentation
http://www.symantec.com/business/theme.jsp?themeid=altiris
Microsoft Active Directory and Network services
http://www.microsoft.com/
Cisco Network Analysis Module Deployment Guide
http://www.cisco.com/en/US/prod/collateral/modules/ps2706/white_paper_c07-505273.html#wp90002
47
Cisco NAM Appliance Documentation (command reference and user guide)
http://www.cisco.com/en/US/partner/products/ps10113/tsd_products_support_series_home.html
Cisco Wide Area Application Services Configuration Guide (Software Version 4.1.1)
http://www.cisco.com/en/US/docs/app_ntwk_services/waas/waas/v411/configuration/guide/cnfg.html
Cisco Adaptive Security Device Manager Configuration Guide
http://www.cisco.com/en/US/products/ps6121/tsd_products_support_configure.html
Cisco ACE 4700 Series Application Control Engine Appliances Documentation
http://www.cisco.com/en/US/products/ps7027/tsd_products_support_series_home.html
Cisco UCS Manager Configuration Guide
http://www.cisco.com/en/US/products/ps10281/products_installation_and_configuration_guides_list.h
tml


Cisco DCNM documentation


http://www.cisco.com/en/US/products/ps9369/tsd_products_support_configure.html
Cisco Fabric manager documentation
http://www.cisco.com/en/US/partner/products/ps10495/tsd_products_support_configure.html
Cisco Nexus 1000v documentation
http://www.cisco.com/en/US/partner/products/ps9902/products_installation_and_configuration_guides
_list.html
CiscoWorks documentation
http://www.cisco.com/en/US/partner/products/sw/cscowork/ps4565/tsd_products_support_maintain_a
nd_operate.html
Wyse Device Manager Overview
http://www.wyse.com/products/software/devicemanager/index.asp
WDM Admin Guide (release 4.7.2)
https://support.wyse.com/OA_HTML/cskatch.jsp?fileid=ZG9507DDFF08D288306AC95C3C3F78B52
551F2D290D30D2F9F&jttst0=6_23871%2C23871%2C-1%2C0%2C&jtfm0=&etfm1=&jfn=ZG48219
F51800ECBD67A914E693A1B35C69EBAE7BDBF7E5DDCF04394392EFE18816C09F686220DF68
2859E9756A62205A14A
WDM Install Guide (release 4.7.2)
https://support.wyse.com/OA_HTML/cskatch.jsp?fileid=ZG2FBE29B2F0CB4CE374726CACD3884A
4E9ADCB633B42A7DA3&jttst0=6_23871%2C23871%2C-1%2C0%2C&jtfm0=&etfm1=&jfn=ZG48
219F51800ECBD67A914E693A1B35C69EBAE7BDBF7E5DDCF04394392EFE18816C09F686220D
F682859E9756A62205A14A
Wyse Device Manager Integration Add-ons Admin Guide
https://support.wyse.com/OA_HTML/cskatch.jsp?fileid=ZG1C1D43ABA80F795F39F06E7C5731597
28391066249DD919D&jttst0=6_23871%2C23871%2C-1%2C0%2C&jtfm0=&etfm1=&jfn=ZG48219
F51800ECBD67A914E693A1B35C69EBAE7BDBF7E5DDCF04394392EFE18816C09F686220DF68
2859E9756A62205A14A
Wyse® Thin Clients, Based on Microsoft® Windows® XP Embedded, Admin Guide
https://support.wyse.com/OA_HTML/cskatch.jsp?fileid=ZG42949BF90AE7F9412C1C931422B780A
1FEAB5C2D8D0D1645&jttst0=6_23871%2C23871%2C-1%2C0%2C&jtfm0=&etfm1=&jfn=ZGC29
B519BFE0416A1A6FCE711FE7D993C9F7AA9C19B22F9C27470952552D8AE1777F9506FF6A020
0B96E81A03AFC23839E6
Wyse Viance Admin Guide
https://support.wyse.com/OA_HTML/cskatch.jsp?fileid=ZG841C2F2087C671DBE3AA9E0203EC852
3A6D6C2A5C8D157DD&jttst0=6_23871%2C23871%2C-1%2C0%2C&jtfm0=_0_1_0_-1_f_nv_&etf
m1=&jfn=ZG0165717AA38109A13472984D2B8D94D79D98F9322001A2A71A96398EC4D334BE6
2DA4D17DCD2B34E7A8
Wyse P20 Admin Guide
https://support.wyse.com/OA_HTML/cskatch.jsp?fileid=ZG012BFCA92639876559ACEEF431B1663
41DBDF9D663D2886D&jttst0=6_23871%2C23871%2C-1%2C0%2C&jtfm0=_0_1_0_-1_f_nv_&etf
m1=&jfn=ZG1822D032C06C7952267DE698A72D680888C4CF4D85AE20A616CFC0617832C14C2
0C36ACA3BEE6A94BFD


Devon IT Echo™ Thin Client Management Software Version 3 Overview


http://www.devonit.com/software/echo/overview
Devon ThinManage 3.1 Admin Guide
http://downloads.devonit.com/Website/public/Documentation/thinmanage-3-1-adminguide-rev12.pdf
Devon IT TC5 Terminal Quick Start Guide
http://www.devonit.com/_wp/wp-content/uploads/2009/04/tc5-quickstartguide-devonit-english.pdf
IGEL Universal Management Suite 3 Overview
http://www.igel.com/igel/live.php,navigation_id,3294,_psmand,9.html
IGEL Universal Management Suite 3 Install Guide
http://www.download-igel.com/index.php?filepath=manuals/english/universalmanagement/&webpath=
/ftp/manuals/english/universalmanagement/&rc=emea
IGEL Universal Management Suite 3 User Guide
http://www.download-igel.com/ftp/manuals/english/universalmanagement/IGEL%20UMS%20User%2
0Guide.pdf
IGEL Universal Desktops User Guide
http://www.download-igel.com/ftp/manuals/english/universalmanagement/IGEL%20UMS%20User%2
0Guide.pdf
Cisco UMS documentation
http://www.cisco.com/

Cisco VXI Security


Security concerns within a Cisco VXI system are similar to those in any distributed computing
environment. Cisco VXI adds complexity in validating identity, in the types of traffic that must be
handled, and in the potential for highly specialized endpoints with unique access challenges. With
careful planning, Cisco VXI network infrastructure elements, such as routers and switches, can be used
as proactive policy monitors and enforcement agents.
One of the primary threats to a dynamic, virtual environment such as Cisco VXI is a device posing as a
valid member of the system. The system must protect itself against devices posing as clients to gain
access to desktop information, or posing as virtual desktops to harvest passwords and other end-user
information that can then be used to gain access to even more information within the system.
In most cases, these threats can be mitigated with Layer 2 security features such as port security,
Dynamic Host Configuration Protocol (DHCP) snooping, dynamic Address Resolution Protocol (ARP)
inspection, and IP source guard. All of these security features verify that the device communicating on
a given access switch port is a valid and expected end device.
Further details on implementing these and many of the other security features mentioned in this
document can be found at:
http://www.cisco.com/en/US/docs/solutions/Enterprise/Security/SAFE_RG/SAFE_rg.pdf


Data Center
In many respects, protecting this particular portion of the Cisco VXI system does not differ greatly from
previous Cisco Data Center 3.0 security solutions that have been documented:
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/DC_3_0/dc_sec_design.pdf
Security within the Cisco VXI system-enabled data center moves beyond just controlling access to the
compute and storage resources; it includes controlling the access between virtual machines that may
reside on the same physical machine. While typically device authentication is thought of as occurring at
the access level within the network, in the Cisco VXI system there is also an access level within the data
center. The deployment of virtual machines on large server farms means that it’s essential to monitor not
only the physical connections, but also the virtual connections. This means monitoring should start
within the virtual switch located within the DV environment, just as if it were a physical switch in any
other network. The virtual desktop addresses can be dynamic and require the same level of surveillance
as a desktop device outside the data center's "glass-house" infrastructure. With this vulnerability to
spoofing in mind, we recommend that you implement an intelligent Layer 2 virtual switch within the
enclave of virtual machines. The Cisco Nexus® 1000V Series Switches can be deployed to employ the
safeguards of DHCP snooping, dynamic ARP inspection, and IP source guard. As noted earlier, these
features provide an effective first line of verification that the machine originally associated with a port
has not been replaced by a rogue machine.
Similarly, security boundaries must be respected and enforced within the virtual machine enclave.
Typically, desktop traffic is isolated from the data center by a firewall, but Cisco VXI moves the
desktops into the same data center from which they need to be shielded. Rather than simply switching
traffic between virtual machines without regard to IP address policies and access control lists, the
design routes traffic out of the Cisco Nexus 1000V Series to the data center aggregation layer, where
this firewall security checkpoint is created. At the data center aggregation layer, the Cisco Nexus 7000
Series Switches and Cisco ASA 5500 Series Adaptive Security Appliances can be deployed to make
appropriate routing and access decisions based on common security policies.
Figure 71 shows the Cisco VXI-enabled data center security architecture.


Figure 71 Data Center Security Architecture

[Figure 71 depicts a layered data center security architecture: data center firewalls at the aggregation layer provide the initial filtering of ingress and egress traffic, with virtual contexts splitting policies for server-to-server filtering and VDCs providing internal/external segmentation; infrastructure security features protect the device, data plane, and control plane; a security services layer adds IPS/IDS for traffic analysis and forensics, network analysis for traffic monitoring, additional firewall services for server-farm protection, server load balancing to mask servers and applications, an XML gateway to protect and optimize web-based services, and an application firewall to mitigate XSS, HTTP, SQL, and XML-based attacks; the access layer applies ACLs, CISF, port security, QoS, CoPP, and VN-Tag; and the virtual access layer makes Layer 2 security features (ACLs, CISF, port security, NetFlow ERSPAN, QoS, CoPP, VN-Tag) available within the physical server for each virtual machine.]

Details of this implementation can be found in the data center security design guide, Security and
Virtualization in the Data Center:
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/DC_3_0/dc_sec_design.pdf

Network
The implementation of a Cisco VXI-enabled network moves the location of application data from
user-controlled desktops to a centralized data center; therefore, security within a Cisco VXI network is
primarily about protecting that data while it travels between these two locations. This is protection both
from malicious impact to the performance of the desktop and its applications as well as protection from
unauthorized viewing of the data. In a corporate environment, these precautions are typically already in
place. Cisco VXI also introduces the flexibility to view and manipulate these desktop images and
applications from many more locations and devices. Some of these devices and locations may not
employ the same level of security as a corporate location, so it is necessary to extend the reach of
corporate security to wherever the end user may be.


Effective network security requires the implementation of various security measures in a layered
approach to create a well-planned common strategy across the enterprise.

Campus Access
Campus access is from within the corporate network infrastructure. This type of secure access is
well-defined in Chapter 5 of the Cisco SAFE Reference Guide:
http://www.cisco.com/en/US/docs/solutions/Enterprise/Security/SAFE_RG/SAFE_rg.pdf
The Cisco VXI system utilizes IEEE 802.1x for client authentication. Cisco Secure Services Client
(SSC) 5.1 can be used if a native IEEE 802.1x supplicant is not available on the thick client. Testing done
on the Cisco VXI system used SSC 5.1 on Windows XP devices and employed Microsoft's native
supplicant under Windows 7. Some thin clients support IEEE 802.1x; where possible, this method should
be deployed in preference to MAC Authentication Bypass (MAB). Thin clients should be configured on
their access switch to use one method or the other. The details of deploying both these methods of user
authentication can be found here:
http://www.cisco.com/en/US/solutions/collateral/ns340/ns394/ns171/CiscoIBNS-Technical-Review.pd
f
During testing, it was found that the MAB response time for acquiring DHCP information can be improved
for several thin clients by tuning dot1x timeout tx-period in the switch port configuration. The
following port configuration was found to provide a satisfactory experience with most thin clients:

interface GigabitEthernet0/10
switchport access vlan XX
switchport mode access
dot1x mac-auth-bypass
dot1x pae authenticator
dot1x port-control auto
dot1x timeout tx-period 2
spanning-tree portfast

Per the 3560 command reference at:


http://www.cisco.com/en/US/docs/switches/lan/catalyst3560/software/release/12.2_35_se/command/re
ference/cli1.html#wp2757193
dot1x timeout tx-period is the number of seconds that the switch waits for a response to an Extensible
Authentication Protocol (EAP) request or identity frame from the client, before retransmitting the
request.
Beyond the authentication of the device as a network entity, devices should also be monitored for
continued authenticity as close to their access point as possible. This is accomplished through the
configuration of port security measures referred to earlier (DHCP snooping, dynamic ARP inspection,
and IP source guard).

Branch Access
Branch access is part of the corporate security enclave but is separated from it by a WAN link. It is
typically deployed with Cisco Integrated Services Routers Generation 2 (ISR G2), which set up IP
security (IPsec) encrypted tunnels between the data center and the remote branch office. These tunnels
can use certificates or preshared keys to authenticate the tunnel endpoints. This deployment model
encrypts the Cisco VXI data between the two locations to protect it from being casually captured along
the route.
The details of this deployment can be found in Chapter 8 of the Cisco SAFE Reference Guide:


http://www.cisco.com/en/US/docs/solutions/Enterprise/Security/Safe-RG/SAFE_rg.pdf
As with the campus access switches, because the branch will typically service its own local DHCP
requests, the following inspection features should be deployed at the access layer switches: DHCP
snooping, dynamic ARP inspection, and IP source guard.
MAB and IEEE 802.1x should also be deployed locally, through branch switches or switch modules within
the Cisco ISR, for authentication.

Teleworker Access
The teleworker class of endpoints is divided into fixed teleworker endpoints and mobile teleworker
endpoints, each with its own set of secure connection criteria.
The fixed teleworker environment is the most well-defined. In this case, it is the standard Cisco Validated
Design allowing VPN access from a home office by means of an access router. This deployment creates
an encrypted VPN tunnel over which the desktop can access the data center environments. The
deployment of a home-based router is then just an extension of the corporate security policy into the
employee's home. For details, visit:
http://www.cisco.com/application/pdf/en/us/guest/netsol/ns171/c649/ccmigration_09186a008074f24a.
pdf
The mobile teleworker environment is much less well-defined. Access is typically from a nonsecure
hotspot, and the authenticity of the data being received cannot be ensured, so the user must create a
tunnel into the corporate network through a VPN client and authenticate against a database of users
granted such access. Cisco Unified Personal Communicator hard-phone control is also not part of the
mobile teleworker access profile, because mobile teleworkers are more likely to use their cell phone or
a softphone application on their thick client.
In a Cisco VXI system, the mobile teleworker uses a thick client with the ability to launch Cisco
AnyConnect Secure Mobility Client 2.5. When a device cannot be verified because of the user's location,
as is sometimes the case with mobile teleworkers, user authentication must be employed. The Cisco
AnyConnect connection would typically terminate at a Cisco ASA Adaptive Security Appliance within
the Internet access edge of the data center. Here, a user challenge (passcode request) is presented,
authentication takes place, and an encrypted tunnel is built to allow the employee access. The Cisco
AnyConnect client configuration should be for Datagram Transport Layer Security (DTLS), which is the
best transport protocol when latency-sensitive traffic is involved. It is also a requirement when PC over
IP (PCoIP) is deployed as the transport mechanism. The Cisco AnyConnect client should be configured
to automatically launch the desktop client once the tunnel is established. The following link provides an
example configuration:
http://www.cisco.com/en/US/products/ps6120/products_configuration_example09186a00808efbd2.sht
ml
Note that the VMware View Security Server provides similar capabilities for remote-access users but is
not as robust a solution as the Cisco ASA Adaptive Security Appliance. Table 22 highlights some of the
feature differences:

Table 22 Remote Access Features

Feature                    View Security Server               Cisco ASA
Display Protocol Support   Remote Desktop Protocol (RDP)      RDP Versions 6 and 7 and PCoIP
                           Versions 6 and 7
Client OS Support          Windows XP, 7                      Windows XP/7, Linux, Mac OS X
Load Balancing             External load balancer required    Integrated VPN load balancing
Failover                   Stateless                          Stateful
VPN Support                Secure Sockets Layer (SSL)         SSL, DTLS, IPsec
Authentication             Active Directory                   Active Directory, LDAP, RADIUS,
                                                              Certificates

For More Information


Table 23 provides links to more detailed information about the specific vendor hardware and software
discussed in this chapter.

Table 23 Security Links

Cisco SAFE Architecture Guide


http://www.cisco.com/en/US/docs/solutions/Enterprise/Security/SAFE_RG/SAFE_rg.pdf
Cisco Data Center 3.0 Security Guide
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/DC_3_0/dc_sec_design.pdf
Cisco Identity-Based Network Security Guide
http://www.cisco.com/en/US/solutions/collateral/ns340/ns394/ns171/CiscoIBNS-Technical-Review.p
df
Cisco Catalyst 3560 Command Reference Guide
http://www.cisco.com/en/US/docs/switches/lan/catalyst3560/software/release/12.2_35_se/command/reference/cl
i1.html#wp2757193
Cisco Business Ready Teleworker Design Guide
http://www.cisco.com/application/pdf/en/us/guest/netsol/ns171/c649/ccmigration_09186a008074f24
a.pdf
Cisco ASA 8.x : VPN Access with the AnyConnect VPN Client Configuration Example
http://www.cisco.com/en/US/products/ps6120/products_configuration_example09186a00808efbd2.s
html
Citrix – How to Disable USB Drive Redirection
http://support.citrix.com/article/CTX123700
Microsoft – Disabling Remote Desktop Services Features
http://msdn.microsoft.com/en-us/library/aa380804%28VS.85%29.aspx

Quality of Service
Quality of service (QoS) in a Cisco® Virtualization Experience Infrastructure (VXI) system is required
not only within the delivery network, but within the data center as well. In the campus and WAN
networks, QoS is used to help ensure the same speed of access to data and desktop applications that
users have come to expect from their laptops and desktops. The Cisco VXI data center also requires QoS:
moving the desktop applications into the data center puts these applications on the same LAN as many of
the application servers they are accessing. The desktop-to-application-server traffic no longer
traverses the WAN, but it requires the same kind of service policies and prioritization it received
while traversing the larger network.
QoS in a Cisco VXI system has unique challenges. In particular, the protocols involved are proprietary
and in some cases encrypted. Often the applications being displayed on the desktop cannot be
differentiated because they are all encapsulated in the desktop virtualization display protocol. Some
traffic (for example, video) may be carried in a separate channel, but even that channel may be
proprietary and cannot be handled by normal QoS. Therefore, network devices, which formerly could
optimize data based on specific per-application markings, traffic types, or the protocol itself, can no
longer use that level of granularity. You can use the information presented here to improve handling of
the traffic into which you do have visibility.

Table of QoS Markings


The following sections discuss the various protocols, including how to recognize them, set their
Differentiated Services Code Point (DSCP) values, and optimize the settings to handle data throughout
the network. Table 24 lists the protocols in a generic Cisco VXI system, and Figure 72 shows their use
in a Cisco VXI network.

Table 24 QoS Markings of VXI-Related Protocols

Protocol                                            TCP/UDP Port            DSCP/CoS Value

Desktop Virtualization Protocols
Remote Desktop Protocol Version 7                   TCP 3389                DSCP af21 and CoS 2
PC over IP (PCoIP)*                                 TCP and UDP 50002,      DSCP af21 and CoS 2
                                                    TCP 4172
Independent Computing Architecture (ICA) session    TCP 1494                DSCP af21 and CoS 2
Session reliability                                 TCP 2598                DSCP af21 and CoS 2
Web services                                        TCP 80                  DSCP af21 and CoS 2
USB redirection (PCoIP)                             TCP 32111               DSCP af11 and CoS 1
Multimedia redirection (MMR)                        TCP 9427                DSCP af31 and CoS 4

Other Protocols Used Within Cisco VXI
Network-based printing (CIFS)                       TCP 445                 DSCP af11 and CoS 1
Unified communications signaling                    TCP 2000                DSCP cs3 and CoS 3
(Skinny Client Control Protocol)
Unified communications signaling                    TCP 5060                DSCP cs3 and CoS 3
(Session Initiation Protocol)
Unified communications signaling (CTI)              TCP 2748                DSCP cs3 and CoS 3
Unified communications media (RTP, sRTP)            UDP 16384 to 32767      DSCP ef and CoS 5

*PCoIP is moving away from using port 50002 and will use port 4172 in the future.

Figure 72 Overview of Protocols within Cisco VXI Network

[Figure 72 shows the traffic flows between end-user locations and the data center: desktop display protocols (ICA, RDP, PCoIP), network print traffic (CIFS), telephony signaling (SCCP, SIP), and IP telephony media (RTP, sRTP) travel between the endpoints, local printers, and Cisco Unified Communications endpoints at each site and the Cisco Unified Computing System in the data center, which houses the virtual desktops, service virtual machines (such as a print server), the VMware/Citrix connection brokers, and Cisco Unified Communications Manager.]
Data Center
Classification and marking should be performed in the data center as close to the application servers and
virtual desktops as possible. Then, provided that proper QoS policies and queuing priorities are put in
place, these markings should be maintained and their traffic handled appropriately throughout the
network.

Markings
Many applications do not mark traffic with DSCP values. Even for those that do, the marking may not
be appropriate for every enterprise's priority scheme. Therefore, you should perform hardware-based
classification (using a Cisco Catalyst® or Cisco Nexus® Family switch) instead of software-based
classification. In testing, the markings were implemented on a Cisco Nexus 1000V Switch whenever
possible. The DSCP values applied and the ports matched are based on Table 24.


An example of how markings were set within the Cisco VXI system test is shown here.

Classification:
ip access-list RDP
permit tcp any eq 3389 any
ip access-list PCoIP-UDP
permit udp any eq 50002 any
ip access-list PCoIP-TCP
permit tcp any eq 50002 any
ip access-list PCoIP-UDP-new
permit udp any eq 4172 any
ip access-list PCoIP-TCP-new
permit tcp any eq 4172 any
ip access-list ICA
permit tcp any eq 1494 any

ip access-list View-USB
permit tcp any eq 32111 any

ip access-list MMR
permit tcp any eq 9427 any

ip access-list NetworkPrinter
permit ip any host 10.1.128.10
permit ip any host 10.1.2.201

ip access-list CUPCDesktopControl
permit tcp any host 10.0.128.125 eq 2748
permit tcp any host 10.0.128.123 eq 2748

Class-maps:
class-map type qos match-any CALL-SIGNALING
match access-group name CUPCDesktopControl

class-map type qos match-any MULTIMEDIA-STREAMING


match access-group name MMR

class-map type qos match-any TRANSACTIONAL-DATA


match access-group name RDP
match access-group name PCoIP-UDP
match access-group name PCoIP-TCP
match access-group name PCoIP-UDP-new
match access-group name PCoIP-TCP-new

class-map type qos match-any BULK-DATA


match access-group name View-USB
match access-group name NetworkPrinter

Policy-map:
policy-map type qos pmap-HVDAccessPort
class CALL-SIGNALING
set cos 3
set dscp cs3
! dscp = 24
class MULTIMEDIA-STREAMING
set cos 4
set dscp af31
! dscp = 26
class TRANSACTIONAL-DATA
set cos 2
set dscp af21
! dscp = 18
class BULK-DATA
set cos 1
set dscp af11
! dscp = 10

Apply policy-map to the switch port to which the hosted virtual desktop (HVD) virtual machine
connects:
port-profile type vethernet VM240
description Port profile for View HVD VM access
vmware port-group
switchport mode access
switchport access vlan 240
no shutdown
state enabled
system vlan 240
service-policy input pmap-HVDAccessPort

Another example of such markings for a VMware View installation can be found at:
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/vmware/cisco_VMwareView.html

Note These examples are meant to be guidelines for deploying QoS in a Cisco VXI network and should not
be applied without consideration given to all traffic flows within the enterprise.

Queuing for Optimum Cisco Unified Computing System Performance


In addition to marking traffic to traverse the network outside the data center, you may need to mark
different traffic to traverse devices within the data center. For example, desktop virtualization traffic
servicing desktops should not starve the Cisco Unified Computing System™ for storage resources. The
Cisco Unified Computing System is uniquely equipped with the capability to mark and queue outbound
packets to improve its performance with other data center entities (such as storage). Details and
recommendations about queuing strategies for activities specific to the Cisco Unified Computing System
can be found in the Cisco Unified Computing System GUI Configuration Guide, at
http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/1.3.1/UCSM_GUI_Con
figuration_Guide_1_3_1.pdf

Network
With respect to QoS, the primary responsibility of the network devices is to place traffic in the
correct queue, assuming that the local switch (in the branch office or data center) has already marked
the data appropriately for its traffic type. Forwarding priority is then given based on the DSCP
markings. Although remarking could take place within the network, if the endpoint locations and the
data center deploy the suggested marking configurations, there should be no need for further remarking
within the network.
In a campus network, bandwidth is generally high enough that contention for resources should be
minimal. Slower WAN connections at branch routers, however, need closer examination: the egress point
from the high-speed branch-office LAN to the slower WAN links is where bandwidth contention is most
likely to occur. Service policies that constrain the amount of bandwidth dedicated to a given protocol
are defined and applied at this point. The same queuing and bandwidth configurations can be placed
anywhere there is a concentration of Cisco VXI endpoints, to enforce the appropriate response in case
of traffic congestion (Figure 73).

Figure 73 Location of Service Policies for QoS

[Figure 73 repeats the protocol flows of Figure 72 (display protocols, network print traffic, telephony signaling, and IP telephony media between end-user locations and the Cisco Unified Computing System in the data center) and highlights where the QoS service policies are applied: at the egress from the branch-office LAN to the WAN.]
Class Maps - defining the buckets:

class-map match-any BULK-DATA


match dscp af11 af12 af13
class-map match-all NETWORK-CONTROL
match dscp cs6
class-map match-all MULTIMEDIA-CONFERENCING
match dscp af41 af42 af43
class-map match-all VOICE
match dscp ef
class-map match-all SCAVENGER
match dscp cs1
class-map match-all CALL-SIGNALING
match dscp cs3
class-map match-all TRANSACTIONAL-DATA
match dscp af21 af22 af23
class-map match-any MULTIMEDIA-STREAMING
match dscp af31 af32 af33

Policy Maps - Assigning the bandwidth per bucket:


policy-map WAN-EDGE
class VOICE
priority percent 10
class NETWORK-CONTROL
bandwidth percent 2
class CALL-SIGNALING
bandwidth percent 5
class MULTIMEDIA-CONFERENCING
bandwidth percent 5
random-detect dscp-based
class MULTIMEDIA-STREAMING
bandwidth percent 5
random-detect dscp-based
class TRANSACTIONAL-DATA
bandwidth percent 65
random-detect dscp-based
class BULK-DATA
bandwidth percent 4
random-detect dscp-based
class SCAVENGER
bandwidth percent 1
class class-default
bandwidth percent 25
random-detect

Endpoint
Cisco VXI thin-client endpoints do not typically provide the capability to mark the session traffic.
Therefore, the same marking that was performed on the Cisco Nexus 1000V in the data center for
outbound desktop virtualization traffic must be performed at the branch office on behalf of the endpoints
for the traffic returning to the data center virtual machines. The configuration on the branch switch will
look similar to the configuration presented here.
Classification:
ip access-list RDP
permit tcp any eq 3389 any
ip access-list PCoIP-UDP
permit udp any eq 50002 any
ip access-list PCoIP-TCP
permit tcp any eq 50002 any
ip access-list PCoIP-UDP-new
permit udp any eq 4172 any
ip access-list PCoIP-TCP-new
permit tcp any eq 4172 any
ip access-list ICA
permit tcp any eq 1494 any

ip access-list View-USB
permit tcp any eq 32111 any

ip access-list MMR
permit tcp any eq 9427 any

ip access-list NetworkPrinter
permit ip any host 10.1.128.10
permit ip any host 10.1.2.201

ip access-list CUPCDesktopControl
permit tcp any host 10.0.128.125 eq 2748
permit tcp any host 10.0.128.123 eq 2748


Class-maps:
class-map type qos match-any CALL-SIGNALING
match access-group name CUPCDesktopControl

class-map type qos match-any MULTIMEDIA-STREAMING


match access-group name MMR

class-map type qos match-any TRANSACTIONAL-DATA


match access-group name RDP
match access-group name PCoIP-UDP
match access-group name PCoIP-TCP
match access-group name PCoIP-UDP-new
match access-group name PCoIP-TCP-new

class-map type qos match-any BULK-DATA


match access-group name View-USB
match access-group name NetworkPrinter

Policy-map:
policy-map type qos pmap-HVDAccessPort
class CALL-SIGNALING
set cos 3
set dscp cs3
! dscp = 24
class MULTIMEDIA-STREAMING
set cos 4
set dscp af31
! dscp = 26
class TRANSACTIONAL-DATA
set cos 2
set dscp af21
! dscp = 18
class BULK-DATA
set cos 1
set dscp af11
! dscp = 10

Admission control is another means of contention reduction and is required by some applications to
enforce QoS. When Cisco Unified Personal Communicator is used for hard-phone control, voice-over-IP
(VoIP) traffic flows outside the display protocols and is admitted into the priority queue. Admission
control is required to prevent this priority queue from being flooded. Cisco Unified
Communications Manager supports two kinds of admission control: location based and
Resource Reservation Protocol (RSVP) based.
Although location-based communications admission control is extensively deployed in many Cisco
Unified Communications Manager installations, it cannot use redundant links or a mesh topology.
RSVP-based admission control is a network-aware admission control system. With the help of proxies,
called RSVP agents, that run in Cisco 2800, 2900, 3800, and 3900 Series Integrated Services Routers,
RSVP is a much more robust solution for controlling hard phones in Cisco VXI. For information about
implementing RSVP-based admission control in Cisco Unified Communications Manager, please see
Part 2, Chapter 8, in the 8.0 Cisco Unified Communications Manager System Guide, at
http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/admin/8_0_2/ccmsys/accm.pdf

For More Information


Table 25 provides links to more detailed information about the specific vendor hardware and software
discussed in this chapter.


Table 25 QoS Links

Cisco Deployment Guide for VMWare View 4.0


http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/vmware/cisco_VMwareView.ht
ml
Cisco UCS GUI Configuration Guide
http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/1.3.1/UCSM_GUI_Co
nfiguration_Guide_1_3_1.pdf
Cisco Unified Communications Manager Systems Guide
http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/admin/8_0_2/ccmsys/accm.pdf

Performance and Capacity


This section describes the tools and methods that can be used for capacity planning of an end-to-end
Cisco® Virtualization Experience Infrastructure (VXI) deployment. The primary focus of this chapter is
on the data center and the WAN.
Performance and capacity planning of hosted services in an enterprise network has three fundamental
dimensions:
• Capacity planning for computing and storage needs
• Capacity planning for service infrastructure and data center network infrastructure components
necessary to support the services
• WAN capacity planning for delivery of the service to remote sites
Recommendations and results presented in this chapter are based on specific user and workload profiles
for end-to-end Cisco VXI. Readers should carefully consider their own environments and use the
information here to guide them in capacity planning. Also, this chapter is not intended as a guide for
scaling all Cisco or third-party components used in Cisco VXI architecture; the reader is best served by
reading the product documentation directly.

Computing and Storage Capacity Planning


Migration from physical desktops to virtual desktops requires a detailed understanding of the workload
and the resource requirements of the user base being migrated.
Characterizing the user workloads in terms of the applications and their usage pattern helps to develop
a base profile for the user group. This profile should include resource utilization metrics such as CPU,
memory, bandwidth and disk I/O utilization from a statistically significant sample of users. Using this
base profile, the resource requirements of the desktop in a virtualized environment can be estimated. If
possible, large disparate user groups should be estimated separately; otherwise, capacity may be wasted
or an unexpected user experience may occur. Therefore, for large desktop virtualization deployments,
the recommendation is to keep similar user groups together for workload and resource estimation, rather
than averaging across the entire user base.
A base profile can also be used to classify users into one of the more generic profiles that desktop
virtualization vendors commonly use to characterize users with similar compute, storage, virtualization,
and networking requirements. Performance and scalability data from vendors are often characterized
using these profiles because simulation tools use them as the basis for defining the appropriate workload.
Therefore, these profiles can play a critical role in validating the resource estimations and the overall
capacity of the desktop virtualization system being deployed, if simulation tools are used. The generic
user profiles are described in greater detail in the Workload Considerations section of this chapter.
The next few sections provide a closer look at the steps involved in planning the computing and storage
capacity needed for a Cisco VXI deployment. The approach taken can be summarized as follows:
• Develop a base profile of the physical desktop user that includes resource utilization metrics such
as CPU, memory, storage and bandwidth utilization.
• Estimate the resource requirements for the desktop after it is virtualized.
• Estimate the per-server capacity in terms of the number of virtual desktops, including the associated
storage capacity and performance required to support them. This estimate should take into account
factors such as the effect of peak use, login storms, and outages and server downtime. You should
now be able to extrapolate from this data to determine the overall computing and storage hardware
needs for Cisco VXI deployments of any size.
• Evaluate optimizations and other best practices for deployment that can improve resource utilization
and optimize performance.
• Validate the resource estimations using a representative workload.

Resource Utilization in Current Environment


A critical input for estimating resource requirements in a virtualized environment is the resource
utilization in the current physical desktop environment. Therefore, for a given target user group being
migrated to virtual desktops, it is important to have a full understanding of the current environment and
to characterize the resource utilization in terms of the following metrics:
• Average and peak CPU utilization
• Average and peak memory utilization
• Storage
– Per-user storage capacity
– I/O operations per second (IOPS)
– Throughput (in bytes per second)
• Bandwidth utilization on the LAN
Administrators should monitor the usage pattern of the target user group and determine the average and
peak utilization for each of these metrics in the environment. Monitoring should factor in potential
variations in usage pattern based on geographical location, the times when users log on for the day
(including shift transitions for environments that work in shifts), and the timing of backups, virus
scans, and similar activities.
The resource utilization of the average physical desktop user can be determined as follows:
• CPU utilization: Use tools such as Microsoft Windows Perfmon or the Stratusphere tool from
Liquidware Labs to collect average and peak CPU utilization from physical desktops in the target
user group being migrated. Collect this data from a statistically significant number of desktops over
a period of time while there is significant workload. You can then use a statistical average from the
collected data to determine the peak and average CPU utilization for the group.
• Memory utilization: Also collect data about memory utilization on a physical desktop using the
same tools as for CPU utilization, or a similar tool. As with CPU, you should analyze the data
collected over a period of time from a significant number of desktops to determine the statistical
averages of the group in terms of peak and average memory utilization. This data will be used to
determine the memory needs of the group when using a virtualized desktop.


• Storage capacity and performance (IOPS and throughput) of the physical desktop: You can also
determine IOPS and throughput from the physical desktop using the same tools as for collecting
CPU and memory utilization information. Determine the peak and average data for the group in the
same way as for CPU and memory utilization. This data will be used to determine the storage
requirement of a virtualized desktop in that group.
After the administrator has characterized the average and peak resource utilization for the group using a
physical desktop, the process of estimating the computing, storage, and networking resources for
migrating the user group to Cisco VXI can begin.
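As a minimal illustration of this aggregation step, the per-desktop samples exported by a monitoring tool could be reduced to the group's average and peak figures as follows (the in-memory data layout and the choice of the 95th percentile as "peak" are assumptions of this sketch, not recommendations of this guide):

# base_profile.py - reduce per-desktop utilization samples to group average/peak figures
import math
from statistics import mean

def percentile(samples, pct):
    """Nearest-rank percentile of a list of numbers."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100.0 * len(ordered)))
    return ordered[rank - 1]

def base_profile(per_desktop_samples):
    """per_desktop_samples: one list of utilization samples (percent) per desktop."""
    pooled = [s for desktop in per_desktop_samples for s in desktop]
    return {"average_pct": round(mean(pooled), 1),
            "peak_pct": percentile(pooled, 95)}   # treat the 95th percentile as 'peak'

# Example: CPU% samples from three desktops over the same interval
print(base_profile([[3, 5, 7, 4], [6, 5, 9, 5], [2, 4, 5, 6]]))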

Estimating Resource Requirements in a Virtualized Environment


You need to consider several factors when estimating the resource requirements for a large Cisco VXI
deployment. In the following section, three critical factors—CPU, memory, and storage—are examined
for a more accurate estimation of the computing and storage needs in a virtualized environment.
Virtualization introduces factors that the estimation process needs to account for and so the resource
utilization data gathered in the previous step may need to be adjusted accordingly. The per–virtual
machine resource estimation can then be used to estimate the capacity of a single server. The number of
virtual desktops that a server can support will serve as the baseline from which the hardware resources
necessary for large-scale desktop virtualization deployments can be extrapolated.

Estimating CPU
To estimate the CPU resources needed in a virtualized environment, you can use the data from the
physical desktops, as shown in the following example:
• Average CPU utilization for the physical desktops in the target user group is 5 percent.
• Assuming that the physical desktops are using a single-core 2.53-GHz processor, the average CPU
requirement for the desktop is 5 percent of 2.53 GHz = 126.5 MHz.
• VMware recommends using a guard band of 10 to 25 percent to handle the following:
– Virtualization overhead
– Peak CPU utilization
– Overhead associated with using a display protocol
– Spikes in CPU utilization
Therefore with a conservative guard band of 25 percent, the aggregate CPU requirement for a given
desktop is approximately 158 MHz.
• The CPU requirement for a virtualized desktop, along with the computing capabilities of the blade
server chosen for the deployment, can be used to estimate the number of virtualized desktops that
can be supported on a given blade. For the Cisco UCS blade servers, the processing capabilities are
listed in Table 26.

Note Each server model supports different processor types, though only one is shown per server in Table 26.

Table 26 UCS Server Model and Processor


Server Model                                        Processor
Cisco UCS B200 M1 Blade Server                      Two quad-core Intel Xeon 5570 series processors at 2.93 GHz
Cisco UCS B200 M2 Blade Server                      Two six-core Intel Xeon 5680 series processors at 3.33 GHz
Cisco UCS B250 M1 Extended Memory Blade Server      Two quad-core Intel Xeon 5570 series processors at 2.93 GHz
Cisco UCS B250 M2 Extended Memory Blade Server      Two six-core Intel Xeon 5680 series processors at 3.33 GHz
Cisco UCS B440 M1 High-Performance Blade Server     Four eight-core Intel Xeon 7560 series processors at 2.26 GHz

For additional information about the Cisco UCS server chassis and blade servers, please refer to the Data
Center chapter.
• Using computing power as the only criterion, you can calculate the number of desktop virtual
machines on a given blade server as shown here. For example, the number of virtual desktops that a
Cisco UCS B250 M2 server can support would be:
Total compute power = 2 sockets x 6 cores x 3.33 GHz = 39.96 GHz
Average CPU requirement per virtual desktop = ~158 MHz
Number of virtual desktops per server = 39.96 GHz / 158 MHz = ~252 desktops

Note Since this is strictly a theoretical exercise, please use actual data from testing when performing capacity
planning for specific customer deployments.

Similarly, the number of virtual desktops that a Cisco UCS B200 M1 server can support would be:
Total compute power = 2 sockets x 4 cores x 2.93 GHz = 23.44 GHz
Average CPU requirement per virtual desktop = ~158 MHz
Number of virtual desktops per server = 23.44 GHz / 158 MHz = ~148 desktops

Note This estimate is theoretical, based on a single factor (CPU). A number of other factors need to be
considered to determine the actual number of virtual desktops that can be supported on a given server
blade.
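The same arithmetic can be expressed as a short sketch. The inputs simply restate the example above (5 percent average utilization of a single-core 2.53-GHz desktop, a 25 percent guard band, and the socket and core counts from Table 26), so the output is the same theoretical figure and not a sizing guarantee:

# cpu_sizing.py - theoretical desktops-per-blade estimate based on CPU alone
def per_desktop_cpu_mhz(avg_util_pct, desktop_cpu_ghz, guard_band_pct=25):
    base = avg_util_pct / 100.0 * desktop_cpu_ghz * 1000       # average requirement in MHz
    return base * (1 + guard_band_pct / 100.0)                 # add the guard band

def desktops_per_blade(sockets, cores_per_socket, core_ghz, per_desktop_mhz):
    total_mhz = sockets * cores_per_socket * core_ghz * 1000
    return int(total_mhz // per_desktop_mhz)

per_vm = per_desktop_cpu_mhz(5, 2.53)                # ~158 MHz per virtual desktop
print(desktops_per_blade(2, 6, 3.33, per_vm))        # Cisco UCS B250 M2: ~252 desktops
print(desktops_per_blade(2, 4, 2.93, per_vm))        # Cisco UCS B200 M1: ~148 desktops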

Note that the preceding estimate for sizing a single server could be lower or higher, depending on the
average utilization measured from the physical desktops. Similarly, the Cisco UCS server model chosen
for a given desktop virtualization deployment can also affect the number, because of differences in the
computing capabilities of the server options available on the Cisco Unified Computing System™.
The number determined from the preceding sizing exercise could be used to estimate the overall server
needs of a Cisco VXI deployment if computing power is identified as the limiting factor. However, this
cannot be assumed until you have performed a similar exercise using memory and storage. Moreover, a
number of other factors must be considered before an estimate can be considered final for a given
server blade.


Estimating Memory
To estimate the overall memory requirements in a virtualized environment, use the same methodology
used for estimating CPU. The memory estimate for a single virtual desktop can be calculated from the
statistical average determined from the physical desktops, as shown in the following example.
• Average memory utilization for the physical desktops in the target user group is approximately 750
MB.
• The transparent page sharing (TPS) feature on the VMware hypervisor can significantly reduce the
memory footprint, particularly in desktop virtualization deployments, in which the OS and
applications data loaded into memory from different desktop virtual machines on the same host may
have a lot in common. However, since TPS is a mechanism for overcommitting memory, it is not
factored into the calculation here; it is a deployment decision for the administrator to consider as a
part of the overall desktop virtualization rollout.
• To accommodate additional memory demands due to spikes in memory utilization or additional
applications, a 25 percent increase in the estimate is used. The aggregate memory requirement is
therefore approximately 938 MB.
• The memory requirement for a virtualized desktop, along with the physical memory resource
available on the blade server chosen for the deployment, can be used to estimate the number of
virtualized desktops that can be supported on a given blade. The memory capacity on the various
Cisco UCS server models is listed in Table 27:

Table 27 Cisco UCS Memory Capacity

Server Model Memory Capacity


Cisco UCS B200 M1 96 GB
Cisco UCS B200 M2 96 GB
Cisco UCS B250 M1 384 GB
Cisco UCS B250 M2 384 GB
Cisco UCS B440 M1 256 GB

For additional information about the Cisco UCS server chassis and blade servers, please refer to the
Data Center chapter in this design guide.
• Using memory as the single criterion for sizing the hardware needs in a Cisco VXI deployment, you
can calculate the number of desktop virtual machines that a blade server can support as shown here.
For example, the number of virtual desktops that a Cisco UCS B200 M1 server can support would
be:
Memory capacity = 96 GB
Average memory requirement per virtualized desktop = ~938 MB
Number of virtual desktops per server = 96 GB / 938 MB = ~102 desktops
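A similar sketch covers the memory dimension, reusing the 750-MB average and 25 percent headroom from the example above. The final line illustrates the point made in the surrounding text: the smaller of the CPU-bound and memory-bound estimates becomes the working number for a blade (the 148-desktop CPU figure for the Cisco UCS B200 M1 is taken from the Estimating CPU section):

# memory_sizing.py - theoretical desktops-per-blade estimate based on memory alone
def per_desktop_memory_mb(avg_mb, headroom_pct=25):
    return avg_mb * (1 + headroom_pct / 100.0)

def desktops_per_blade(blade_memory_gb, per_desktop_mb):
    return int(blade_memory_gb * 1000 // per_desktop_mb)

per_vm_mb = per_desktop_memory_mb(750)               # ~938 MB per virtual desktop
memory_bound = desktops_per_blade(96, per_vm_mb)     # Cisco UCS B200 M1: ~102 desktops
cpu_bound = 148                                      # from the Estimating CPU section
print(min(cpu_bound, memory_bound))                  # memory is the limiting factor here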

As with CPU estimation, the estimated number of virtual desktops on a single server may be lower or
higher, depending on the data gathered from the physical desktops and the model of Cisco UCS server
selected for the deployment. Also, this data can be used to extrapolate the total number of servers needed
for a given Cisco VXI deployment if memory is determined to be the limiting factor.
Note that the memory utilization from a physical desktop used in the preceding calculation can vary
depending on the guest OS and the applications deployed in a given environment. An alternative, but less
accurate, method of estimating the number of virtual desktops is to use the minimum recommendation
from Microsoft for per-virtual machine memory utilization, as shown in Table 28.


However, for Microsoft Windows XP, the minimum recommendation shown here should be increased to
512 MB to accommodate Microsoft Office applications and other applications that may be running on
the desktop. Table 28 also shows, as an example, the memory configuration used for virtual desktops in
the end-to-end Cisco VXI system; this configuration was sufficient to provide a good user experience
for the workload profile validated in the end-to-end system.

Table 28 Memory Configuration

Microsoft Windows OS                Minimum (Microsoft)      Memory Configuration in Cisco VXI System
                                    Memory Requirement       Using Knowledge Worker + Profile
Microsoft Windows XP with SP3       256 MB                   1 GB
Microsoft Windows 7 (32-bit)        1 GB                     2 GB
Microsoft Windows 7 (64-bit)        2 GB                     2 GB

Estimating Storage

For storage, the average IOPS and throughput data collected from monitoring the physical desktops can
be used as the storage requirements for the virtualized desktops. For example, if the average IOPS is 5
and the average throughput is 115 kbps, then the same IOPS and throughput values should be expected
when the desktop is virtualized. For a desktop virtualization deployment, the factors summarized here
can also have a significant effect and should be considered when sizing storage needs. For example,
IOPS can peak when:
• Users are powering on: When users come in at the beginning of the workday and start powering on
their virtual desktops, IOPS and throughput will peak, a situation referred to as a boot storm.
• Users are logging on: Though the virtual desktops do not need to be powered on, there can be peaks
in storage I/O as users are logging on in the morning to start their work. This situation is referred to
as a login storm.
• Other activities occur: Activities such as antivirus scan and backups can cause storage performance
requirements to spike.
Some applications specific to a customer environment can cause similar spikes in storage I/O. All these
factors must be taken into account when designing the storage environment for Cisco VXI deployments.
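As a first-order planning aid, the steady-state figures can be aggregated and then scaled for the peaks
described above. The following sketch is a minimal illustration; the peak multiplier is purely an
assumption and must be replaced with measured boot-storm and login-storm behavior or with guidance
from the storage vendor.

# Sketch: aggregate storage IOPS for a pool of virtual desktops.
# The peak multiplier is an assumption for illustration only.

def aggregate_iops(num_desktops, avg_iops_per_desktop, peak_multiplier):
    steady_state = num_desktops * avg_iops_per_desktop
    peak = steady_state * peak_multiplier   # boot storms, login storms, AV scans, backups
    return steady_state, peak

if __name__ == "__main__":
    steady, peak = aggregate_iops(num_desktops=500, avg_iops_per_desktop=5, peak_multiplier=5)
    print("Steady-state IOPS:", steady)     # 2500
    print("Estimated peak IOPS:", peak)     # 12500 - size the storage array and cache for this
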
Another aspect that needs to be considered when sizing storage resources is the disk space allocated
to a virtual desktop. You can calculate this space by adding the storage required for each
of the following items:
• Operating system and base set of applications
• Page and swap files and temporary files created by the OS and applications
• Page and swap files created by a VMware ESX and ESXi host for every virtual machine deployed
on the host (equals the memory allocated for the virtual machine)
• Microsoft Windows profile (user settings such as desktop wallpaper)
• User data (equivalent to the My Documents folder in Microsoft Windows)
• Buffer of 15 percent additional storage space to handle any spikes in storage needs


For example, for the Cisco VXI system discussed in this document, Table 29 shows storage allocation
for the desktop virtual machines for each OS.

Table 29 Storage Allocation for Desktop VM

Guest OS on          Min. Disk Space    Page/Swap and Temp Files    Page/Swap Used    Windows     User    15%      Total
Virtual Desktop      for OS & Apps      Used by Guest OS & Apps     by Hypervisor     Profiles    Data    Buffer   Storage
Windows XP           10 GB              3 GB                        1 GB              2 GB        5 GB    4.8 GB   26 GB
Windows 7 (32-bit)   20 GB              4 GB                        2 GB              2 GB        5 GB    5 GB     38 GB
Windows 7 (64-bit)   20 GB              4 GB                        2 GB              2 GB        5 GB    5 GB     38 GB
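The per-desktop figures in Table 29 are simply the sum of the listed components plus the 15 percent
buffer. The sketch below shows that arithmetic using the Windows 7 (32-bit) values as illustrative inputs;
as discussed later in this chapter, linked clones and thin provisioning can substantially reduce the raw
capacity actually consumed.

# Sketch: per-desktop and aggregate disk space, using the Windows 7 (32-bit)
# component sizes from Table 29 as illustrative inputs.

def desktop_storage_gb(os_and_apps, guest_swap_temp, hypervisor_swap,
                       profile, user_data, buffer_pct=15):
    subtotal = os_and_apps + guest_swap_temp + hypervisor_swap + profile + user_data
    return subtotal * (1 + buffer_pct / 100.0)

if __name__ == "__main__":
    per_desktop = desktop_storage_gb(os_and_apps=20, guest_swap_temp=4,
                                     hypervisor_swap=2, profile=2, user_data=5)
    print("Per-desktop allocation: %.1f GB" % per_desktop)                        # ~38 GB
    print("Raw capacity for 500 desktops: %.1f TB" % (per_desktop * 500 / 1000.0))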

Estimating Server Capacity


As stated before, several factors can influence the performance and scalability of a server. The estimation
for the number of virtual desktops on a given server can yield a different number if each factor is
considered independently. For this reason, the estimations performed in the Estimating CPU and
Estimating Memory sections earlier in this chapter for a Cisco UCS B200 M1 server are theoretical
exercises. However, the data (summarized in Table 30) can aid in finding the limiting factor for a given
server, as well as provide the initial virtual machine density to target if testing is performed to validate
the estimation using the specific workload for that environment.

Table 30 Estimated Capacity

Factor Used to          Average Value for a       Server Capacity (Theoretical,
Determine Capacity      Virtualized Desktop       Cisco UCS B200 M1)
CPU                     158 MHz                   148
Memory                  938 MB                    102

Note The estimates in Table 30 are not the actual capacity of the server. They are theoretical estimations based
on the CPU and memory utilization assumptions for the user group in a given environment.
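Because the CPU-based and memory-based estimates yield different densities, the practical planning
number for a blade is the smaller of the two. A minimal sketch of that comparison, using the Table 30
figures as illustrative inputs:

# Sketch: pick the limiting factor between CPU-based and memory-based density.
# Values are the theoretical Table 30 figures for a Cisco UCS B200 M1.

capacities = {"CPU": 148, "Memory": 102}            # theoretical desktops per blade
limiting_factor = min(capacities, key=capacities.get)
print("Limiting factor:", limiting_factor)                          # Memory
print("Planning density per blade:", capacities[limiting_factor])   # 102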

The scalability testing performed with VMware ESX 3.5 Update 5 on a Cisco UCS B200 M1 determined
that the actual number of virtual desktops supported was different from what is shown in Table 30. For
more information, see Scalability Study for Deploying VMware View on Cisco UCS and EMC
Symmetrix V-Max Systems located at:
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/App_Networking/vdiucswp.html
A crucial difference between the theoretical estimate and the measured result is the actual workload used
in each case. Also note that the supported number would be lower if the host were deployed in a
high-availability cluster. Similar scalability studies for other server models supported on the Cisco
Unified Computing System and using Citrix XenDesktop are also available. For details of this study, see
Cisco Desktop Virtualization Solutions with Citrix XenDesktop located at:
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/Virtualization/ucs_xd_vsphere_nt
ap.pdf
In a Cisco VXI environment, workload is one of the most critical factors for accurately estimating the
virtual desktop density on a given server in that environment. Therefore, you must use a workload that
closely matches the user group’s workload for any estimation and testing you perform to determine the
per-server sizing in a given Cisco VXI deployment. However, a number of other factors in addition to
CPU, memory, and storage utilization can influence a server’s scale and capacity numbers in a Cisco
VXI deployment, as discussed in the next few sections.

Virtual CPU Considerations


For most desktop virtualization environments with a wide range of workloads, VMware recommends use
of a single virtual CPU (vCPU) per virtual desktop. Some power users may require more than one vCPU,
and if so, the virtual desktops of these users should not be deployed on the same server blade with
desktops that use one vCPU, due to the nature of CPU allocation on a VMware host. Though resource
reservation for CPU shares can be performed using resource pools, these are generally not needed for
virtualized desktops. VMware has increased the number of vCPUs that can be supported on a given host
to the numbers shown in Table 31, and these values are worth noting from a capacity planning
perspective. However, for servers capable of supporting 25 vCPUs per core with a high number of cores,
the limiting factor for the number of virtual desktops can be the number of virtual machines supported
per host, as the table shows. The data in Table 31 is from VMware’s configuration maximums document
for vSphere 4.0 and 4.1. See the For More Information section at the end of this chapter for a link to the
complete document.

Table 31 Virtual CPU Limits

Limits                        VMware vSphere 4.0    VMware vSphere 4.1
Virtual CPUs per core         25                    25
Virtual CPUs per host         512                   512
Virtual machines per host     320                   320

Note Though the number of vCPUs supported per core is 25, the number of achievable vCPUs per core depends
on the workload.

Note In practice, the achievable number is closer to 8 to 10 virtual desktop virtual machines per core in
desktop virtualization deployments.

Memory Considerations

Studies by VMware indicate that the overall memory consumption can be significantly reduced in a
virtualized environment by a feature called transparent page sharing, or TPS. TPS uses a background
process to monitor the contents of memory, and it evaluates the operating system and application data
being loaded to determine if it is the same as what is already in memory. If it is the same, the virtual
machine attempting to load the duplicate data will be redirected to the existing content in memory,
thereby enabling memory sharing. TPS can be thought of as the memory equivalent of the de-duplication
feature found in storage systems, and it is enabled by default. For more information about this feature,
see the VMware Memory Resource Management in VMware ESX Server document located here:
http://www.vmware.com/pdf/usenix_resource_mgmt.pdf
In a Cisco VXI environment, transparent page sharing enables a given server blade to potentially
scale to accommodate a larger number of virtual desktops, at least from a memory perspective, though
memory may not be the limiting factor.
Since TPS relies on redundancy to share and overcommit memory between virtual machines running on a
host, the workloads on these virtual machines should be as similar as possible. To optimize the effect of
TPS in a Cisco VXI environment, group the virtualized desktops of users with similar workloads, such as
the same guest OS (Microsoft Windows 7 or Windows XP) and the same applications (Microsoft Office
and antivirus applications), on the same host.
TPS behaves differently on the newer hardware-assisted virtualization processors, such as the Nehalem
and Westmere processors used on Cisco UCS servers. With these processors, the hypervisor uses large
memory pages (2 MB in size), which improve performance by 10 to 20 percent, whereas TPS operates on
4-KB pages to eliminate duplicate data. With these newer processors, TPS is not in effect until the
available memory
reaches a minimum and there is a need to overcommit memory. A background process is still monitoring
and scanning the memory pages to determine when TPS takes effect. See the following VMware
Knowledge Base articles for more information about TPS with the newer hardware assisted
virtualization processors:
• TPS in Hardware Memory Management Unit (MMU) Systems:
http://kb.vmware.com/kb/1021095
• TPS Is Not Utilized Under Normal Workloads on Intel Xeon 5500 Series CPUs:
http://kb.vmware.com/kb/1020524
Also note that VMware studies have shown that TPS does not adversely affect the performance of the
VMware ESX or ESXi host, and so VMware recommends using this feature.
Please contact VMware for additional information about TPS.

Storage Considerations

Desktop virtualization architectures from VMware and Citrix used in a Cisco VXI system reduce the
overall storage needs as follows:
• VMware View uses Linked Clone technology through which a parent virtual machine’s virtual disk
is used as the main OS disk for all clones created through the linked clone process. This feature
prevents each cloned desktop from needing its own OS disk, thereby reducing the overall storage
capacity needed for a Cisco VXI deployment. See Desktop Virtualization for more information
about deploying virtual desktops using VMware View.
• Citrix XenDesktop uses pooled desktops with a parent virtual machine, and it uses its virtual disk
as the main OS disk for all desktops in the pool. See Desktop Virtualization for more information
about deploying virtual desktops using Citrix XenDesktop.
Using pooled desktops with VMware View and Citrix XenDesktop greatly reduces the aggregate storage
capacity necessary for migrating to a virtualized environment. Although the cost of the shared storage
is significantly higher than that for using separate disks on laptops and desktops, both VMware View and
Citrix XenDesktop reduce overall storage costs due to the ability to share the same OS disks among many
desktop virtual machines.


Operating System Disk

In Citrix and View deployments, the OS disk refers to the parent virtual machine’s virtual disk on which
the guest OS (Microsoft Windows XP or Windows 7) and applications (Microsoft Office) are installed.
This OS disk is read by all desktops in the pool in both Citrix and VMware View deployments, resulting
in significant storage savings, since a single OS disk can be used by a large number of desktops without
each having to maintain its own OS disk. Ideally, this disk should be read-only for both storage and
operation efficiency, but it can be used as a read-write disk to store the following types of typical
Microsoft Windows desktop data:
• Microsoft Windows profile data
• Temporary files, including page files
• User data
For better storage and operation efficiency, the OS disk should be kept as a read-only disk, and the data
listed here should be redirected to another location as follows.
• Microsoft Windows profile data can also be redirected to a Microsoft Windows share or to another
virtual disk dedicated for this purpose, so that the data can be saved in the event that the OS disk is
updated.
• Temporary files can also be redirected to a nonpersistent disk so that the data can be flushed to
reduce storage use. A separate location on the SAN or on a transient volume on network-attached
storage (NAS) can be used.
• User data that is typically saved in the My Documents folder should be redirected to a Microsoft
Windows share or to a separate disk.

Note VMware View always maintains the main OS disk as read-only, but it uses a smaller per–virtual machine
differential disk for read-write purposes. For VMware View, this discussion applies to the differential
disk.

Thin Compared to Thick Provisioning

Thin provisioning is a way to conserve storage resources and increase storage utilization in a virtualized
environment. With thick provisioning, when a virtual machine is deployed, the virtual disk associated
with the virtual machine is given its full allocation of storage regardless of whether it uses it, resulting
in wasted space. With thin provisioning, this inefficiency is reduced by allocating storage resources only
when the virtual machine needs them. Therefore, a virtual desktop running Microsoft Windows 7 with a
20-GB disk will not have 20 GB of disk space reserved on the storage system (SAN or NAS), though
Microsoft Windows and applications running on the desktop will operate as if it has the full 20 GB of
space allocated to it. On the back end, VMware ESX and ESXi hide the actual state of the storage
allocation and allocate the space to the desktop only as and when it needs it. The dynamic allocation of
storage is performed in chunks, with the unit size of a chunk defined when the data store is created. This
specific type of thin provisioning is referred to as VMware virtual disk thin provisioning and is supported
with both file-based (NFS) and block-based (SAN) data stores. For more information about VMware thin
provisioning, please refer to the following VMware Knowledge Base article:
• VMware KB: Using Thin Provisioned Disks with Virtual Machines:
http://kb.vmware.com/kb/1005418
Therefore, thin provisioning enables the efficient use of the underlying storage resources and improves
the scalability of the aggregate storage capacity by overcommitting the storage. In a Cisco VXI
environment, this approach results in a higher number of virtual desktops that can be supported with the
given storage capacity. For this reason, thin provisioning of VMware’s virtual disk is recommended in
an end-to-end Cisco VXI system, though you should note that thin provisioning does have some effect
on the CPU performance of the host.
In the Cisco VXI system, all VMware View and Citrix XenDesktop pools were validated with thin
provisioning enabled on the parent virtual machine used to create the VMware View and Citrix
XenDesktop desktop pools. In each case, the parent virtual machine’s virtual disk was residing on either
EMC’s Fibre Channel attached SAN storage or NetApp’s NAS (Network File System [NFS]) storage.
In addition to VMware’s virtual disk thin provisioning, storage vendors such as NetApp and EMC offer
thin provisioning at the storage level that further improves the storage efficiency gained by VMware thin
provisioning. With storage thin provisioning, the actual state of the storage allocation is hidden from
VMware ESX and ESXi by the storage system.
For Cisco VXI deployments, both virtual disk and storage thin provisioning can be deployed in a
complementary fashion to optimize storage utilization. All validation with NetApp in the Cisco VXI
system was performed with both virtual disk and storage thin provisioning in place.
Since thin provisioning is an overallocation of the storage resources, you should carefully monitor the
state of the thin-provisioned disk so that additional storage can be added to the data store before a lack
of space causes problems.
Please refer to the For More Information section of this chapter for more information.
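Because thin provisioning overcommits the data store, the simple bookkeeping shown below illustrates
what should be tracked. The 20 percent free-space threshold is an assumption used only for illustration;
actual monitoring should be performed with VMware vCenter or the storage vendor's management tools.

# Sketch: thin-provisioning overcommit ratio and a simple free-space check.
# The free-space threshold is an assumption for illustration only.

def datastore_report(capacity_gb, provisioned_gb, used_gb, min_free_pct=20):
    overcommit = provisioned_gb / float(capacity_gb)         # >1.0 means overcommitted
    free_pct = 100.0 * (capacity_gb - used_gb) / capacity_gb
    needs_attention = free_pct < min_free_pct
    return overcommit, free_pct, needs_attention

if __name__ == "__main__":
    oc, free, alert = datastore_report(capacity_gb=2000, provisioned_gb=5000, used_gb=1700)
    print("Overcommit ratio: %.1fx, free space: %.0f%%" % (oc, free))
    if alert:
        print("Add capacity to the data store before free space runs out")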

Storage Footprint Reduction

Storage vendors support technologies that offer economies of scale for sizing the storage needs of a
desktop virtualization deployment. Technologies include data de-duplication to increase storage
efficiency by eliminating redundant information in the data being stored. This feature can be used for
primary file systems and end-user file data in VMware and other virtualized environments. If the
duplicate data is from different virtualized desktop virtual machines, the data is stored only once and the
metadata associated with it is changed so that both virtual machines have access to the data. As with thin
provisioning, de-duplication can provide significant storage efficiencies, improving desktop
virtualization scalability since the existing storage can now support a larger number of desktop
virtualization desktops. Therefore, enabling de-duplication, particularly in large desktop virtualization
deployments, is highly recommended.
Please refer to EMC and NetApp documentation for more information about using de-duplication in a
desktop virtualization environment.

Partition Alignment

Microsoft Windows file system partitions running on virtualized desktops should be aligned with the
underlying storage partitions. This alignment can improve storage performance by reducing overall I/O
latency while increasing storage throughput. This alignment is currently needed only with Microsoft
Windows XP because Microsoft Windows 7 (32-bit and 64-bit) automatically provides this alignment.
The problem occurs because Microsoft Windows XP reserves 63 sectors of metadata at the beginning
of the drive, resulting in misalignment of the first partition created on the disk. As a result, the drives
may need to read an extra block of data unnecessarily, causing additional IOPS on the drive. To address
the misalignment problem, an aligned partition is created on the drive that aligns with the storage system
used. Both block-based and file-based storage systems can benefit from this alignment. In the Cisco VXI
system, a 64-KB-aligned partition was created on the parent virtual machine of the desktop pools to align
with EMC’s SAN and NetApp’s NAS storage. Please refer to Microsoft and VMware documentation
for information about implementation.
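The alignment requirement itself is simple arithmetic: the starting offset of a partition must be an even
multiple of the chosen boundary (64 KB in the Cisco VXI system). A minimal sketch of that check:

# Sketch: check whether a partition starting offset is aligned to a boundary.

ALIGN_BYTES = 64 * 1024          # 64-KB boundary used in the Cisco VXI system

def is_aligned(start_offset_bytes, boundary=ALIGN_BYTES):
    return start_offset_bytes % boundary == 0

# Windows XP's default first partition starts at sector 63 (512-byte sectors).
print("Default XP partition aligned:", is_aligned(63 * 512))    # False
print("Sector-128 partition aligned:", is_aligned(128 * 512))   # True (64-KB offset)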


Storage Network Considerations

• Jumbo frames: Cisco VXI deployments using IP-based storage should enable jumbo frames to
increase storage bandwidth utilization and improve I/O response times. Jumbo frames increase the
maximum transmission unit (MTU) for Ethernet frames used to transport IP traffic in data center
LAN networks. Enabling jumbo frames increases the Ethernet MTU from the default value of 1500
bytes to typically 9000 bytes and should be enabled on every link between the server hosting the
virtual desktops and the IP storage it uses. Jumbo frames not only improve overall throughput, but
also reduce the CPU burden on the host for large file transfers.
• Separation of storage network: Storage traffic should be physically (ideal) or logically separated
from other network traffic using VLANs. Cisco Unified Computing System architecture supports
two host bus adapters (HBAs) dedicated to storage if Fibre Channel–attached SAN storage is used.
Similar physical separation is recommended for IP-based storage, such as NFS and Internet Small
Computer System Interface (iSCSI) traffic. A separate VMkernel port and VLAN should be used for IP
storage traffic using a dedicated uplink port on the host. This type of physical separation of the IP storage
traffic from other IP traffic is possible using the latest converged network adapters (CNAs) on the Cisco
Unified Computing System, which support 128 virtual uplink ports. If an uplink cannot be dedicated, the
separate VLAN used for storage will provide the logical isolation. The physical isolation should be
extended into the data center network by using dedicated ports or switches at the access layer where
Cisco Nexus 5000 Series Switches are typically deployed. If the storage traffic extends into the
aggregation layer of the data center network, Cisco Nexus 7000 Series Switches that are typically
deployed at this layer support physical separation through virtual device contexts (VDCs).
• PortChannels: To increase the aggregate uplink bandwidth without sacrificing availability,
PortChannels can be used between the LAN uplink ports on the host and the access layer switch
(Cisco Nexus 5000 Series). This approach is important if the Cisco VXI deployment uses IP-based
storage since it significantly increases the LAN bandwidth required.
• Multipathing: For both block-based SAN storage and IP-based NAS storage, multipathing can be
used to create load-balanced but redundant paths between the host and the storage it uses. VMware
learns the various physical paths associated with the storage device, and it uses a path selection
scheme to determine the path a given I/O request should take. The three options on VMware for
selecting the path to the storage device are fixed, most recently used, and round-robin. Round-robin
should be used to load-balance I/O traffic across multiple physical paths. Because of the
performance improvements and enhanced storage resiliency that multipathing provides, Cisco
VXI deployments should enable multipathing if the storage vendors support it. Both EMC (Fibre
Channel SAN) and NetApp (NFS), included in the end-to-end Cisco VXI system, support this
capability.

Guest OS Optimizations

Following are some Microsoft Windows optimizations that can be performed to improve the
performance and scalability of desktop virtualization deployments:
• Improve the I/O performance of the Microsoft Windows virtual machine file system by
disabling last-access-time updates in NTFS. Microsoft Windows updates a file’s last-access
timestamp whenever an application opens that file, and disabling this option reduces
the IOPS occurring within the file system.
• Disable hibernation. To reduce the use of storage resources by a virtual desktop, disable hibernation,
which takes a snapshot of the machine’s running state and saves it on disk. The operating state of
the virtual desktop, such as suspend, power on, or power off, should instead be controlled from
VMware View or Citrix XenDesktop.
• Refresh the OS disk periodically. In VMware View deployments, linked clones maintain a
read-write file for storing temporary files or other changes that need to be made to the original OS
disk on the parent virtual machine. This file is a diff file, and with time, this file can grow in size
and become as large as the main OS disk. To keep this read-write file on every cloned desktop from
consuming a large amount of storage, refresh the OS disk periodically to clean up the diff file and
reset it to its original state, when the cloned desktop was first created.
• Prevent antivirus software from scanning the main OS disk that each virtual machine uses, since it
is deployed as a read-only disk and antivirus checks were run against it before it was designated the
golden master for use by the VMware View and Citrix XenDesktop pool of virtual desktops. This
step can help increase storage performance, particularly in a large Cisco VXI deployment.
You can implement a number of performance-related optimizations on virtualized desktops running
Microsoft Windows. Table 32 shows some of the main optimizations implemented on the Cisco VXI
system for Microsoft Windows XP and Windows 7.

Table 32 Microsoft Windows Guest OS Optimizations

Microsoft Windows Guest OS Optimizations
Disable Microsoft Windows Hibernation
Disable Microsoft Windows Defender (N/A on XP)
Disable Microsoft Feeds synchronization
Disable Microsoft Windows Scheduled Disk Fragmentation (N/A on XP)
Disable Microsoft Windows Registry Backup (N/A to XP)
For PCoIP, set the power options for Display to off
Disable mouse pointer shadow
McAfee Anti-virus scan on write only
Disable Pre-fetch/Superfetch Service (N/A on XP)
Disable Microsoft Windows Diagnostic Policy Service (N/A to XP)
Enable "No automatic updates"
Disable System Restore (since refresh can be done by composer)
Disable paging of the Microsoft Windows OS itself
Disable unwanted services
Turn off unnecessary sounds at startup and shutdown
Disable indexing services
Delete all background wallpapers
Disable screen saver

Hypervisor Considerations

The power management policy for the virtual desktops in a desktop virtualization environment can
affect the host resources. VMware View and Citrix XenDesktop recommend that the virtual desktop be
put in a suspended state if it is not in use. The suspended state is an optimal configuration that enhances
the user experience while reducing resource (CPU and memory) use. If all virtual machines are left
powered on, the host resources cannot be used by other virtual machines on the same server. With
persistent desktops, the virtual machine can be immediately suspended when the user logs off.


Hypervisor resource consumption should be closely monitored for CPU, memory, and storage
performance. Memory consumption on a hypervisor should be monitored; memory ballooning can be an
indication that the host is in need of additional memory. VMware high-availability features such as
VMware Distributed Resource Scheduler (DRS) can also be deployed so that VMware can dynamically
load-balance desktop virtual machines across hosts in the cluster, based on resource use. Resources
should also be monitored at the cluster level so that additional resources can be added if needed.

High Availability Considerations

For a desktop virtualization deployment of significant scale, high availability of the virtual desktop is a
concern for most administrators. As a best practice, virtual desktop machines are typically deployed in
clusters so that a pool of hosts is available to the cluster. With clustering, VMware can distribute the
virtual desktops across the pool of resources using VMware DRS to increase the resources available to
each virtual machine. The clusters are also used to implement specific high-availability features such as
VMware High Availability (HA), DRS, Fault Tolerance, and vMotion. However, deploying servers in a
cluster can change and potentially limit the maximums that VMware supports. The supported limits are
published in VMware's configuration maximums documents and are updated when new releases change
the supported limits. Table 33 lists some of the data relevant to sizing a desktop virtualization deployment.
This table should be reviewed for planning any large-scale deployment of Cisco VXI. For a complete set
of configuration maximums, refer to VMware documentation.

Table 33 Configuration Maximums

Limits                                                 VMware vSphere 4.0 Update 2   VMware vSphere 4.1
VMs per host                                           320                           320
Virtual CPUs per core                                  25                            25
Hosts per HA cluster                                   32                            32
Virtual machines per cluster                           -                             3000
VMs per host with 8 or fewer hosts in the HA cluster   160                           N/A
VMs per host with more than 8 hosts in the HA cluster  40                            N/A
Hosts per vCenter Server                               1000                          1000
Hosts per data center                                  100                           400
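For cluster-level planning, one common and conservative approach is to size for the loss of at least one
host (N+1) while respecting the per-host maximums in Table 33. The sketch below illustrates that
reasoning; the N+1 policy and the input values are assumptions used only for illustration.

# Sketch: usable desktop capacity of an HA cluster with N+1 host redundancy.
# The redundancy policy and per-host density are illustrative assumptions.

def cluster_capacity(hosts, desktops_per_host, vm_per_host_max=320, host_failures_tolerated=1):
    per_host = min(desktops_per_host, vm_per_host_max)       # respect the vSphere per-host maximum
    usable_hosts = max(hosts - host_failures_tolerated, 0)   # keep spare capacity for HA failover
    return usable_hosts * per_host

if __name__ == "__main__":
    print("Cluster capacity:", cluster_capacity(hosts=8, desktops_per_host=102))
    # 7 usable hosts x 102 desktops = 714 desktops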

Validating Capacity Estimates


The next step in the overall resource planning process is to validate the capacity estimations based on
factors such as CPU, memory, and storage as well as factors outlined in the previous section. On the basis
of the baseline performance data from the physical desktop and the theoretical estimation for the number
of virtual desktops that can be rolled out on the Cisco UCS blade server chosen for the deployment,
perform performance characterization and validation in a virtualized environment to determine the
following:
• Average CPU utilization at the virtual machine and blade server and hypervisor levels
• Memory utilization at the virtual machine and blade server and hypervisor levels
• Storage IOPS at the virtual machine and blade server and hypervisor levels
• Network bandwidth utilization at the virtual machine and blade server and hypervisor levels
• VMware ESX performance
• Application response times

Workload Considerations
To validate capacity estimates for a Cisco VXI deployment, one option is to roll out the service to a pilot
group and validate the resource estimation with actual users. Alternatively, the data regarding user
activities, applications used, and use patterns collected from the physical desktops can be used to define
a workload specific to that environment. For testing, the custom workload can then be automated to
simulate the user workload, or it can be mapped to one of the generic profiles commonly used by
workload simulation tools.
The workload defines the applications that a person actively uses, such as word processing, presentation,
and other office applications, but it can also include background activities such as backups and antivirus
scans. Workloads for users within a company will vary depending on a person’s job or functional role
and may differ according to the organization structure (sales, marketing, manufacturing, etc.).
Workloads can also vary based on the time of day, particularly if the desktop virtualization users are
geographically dispersed. Background activities that begin at specific times, such as backups and
antivirus scans, can also increase workloads.
For a comprehensive look at workload definition and other considerations in an enterprise network,
please refer to the Desktop Virtualization chapter.
The workload can be used to classify users into one of the generic profiles shown in Table 34. Desktop
virtualization vendors commonly use these profiles to characterize desktop virtualization users with
similar computing, storage, virtualization, and networking requirements. Simulation tools also use
similar profiles for defining workloads and can be used to validate the resource requirements.

Table 34 User Profiles

User Profile       RAM per VM   Storage per VM   Storage IOPS   Network Bandwidth   Description
Task Worker        1.5 GB       20 GB            10-15          384 kbps            One application open at a time. Limited
                                                                                     printing. Limited mouse usage. Primarily
                                                                                     text editing.
Knowledge Worker   2 GB         30 GB            15-20          500 kbps            Multiple apps open at a time. Variety of
                                                                                     apps. Graphical apps with multimedia and
                                                                                     use of USB peripherals.
Power User         2 GB         40 GB            20-25          700 kbps            Multiple apps open at a time. Graphical
                                                                                     and/or computationally intensive
                                                                                     applications. User may need
                                                                                     administrative rights.

For the end-to-end Cisco VXI system outlined in this document, a variation of the first two user profiles
was used for testing and will be referred to as the Cisco VXI knowledge worker profile.
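The Table 34 profiles can also be used to roll up resource requirements for a mixed user population. The
sketch below multiplies the per-profile figures (using the upper end of the IOPS ranges) by an assumed
user mix; both the mix and the per-profile values are illustrative and should be replaced with data from
the target environment.

# Sketch: aggregate RAM, storage, IOPS, and WAN bandwidth for a mixed user
# population, using the Table 34 per-profile figures. The user mix is an assumption.

PROFILES = {
    # profile:          (RAM GB, storage GB, IOPS, bandwidth kbps)
    "Task Worker":      (1.5,    20,         15,   384),
    "Knowledge Worker": (2.0,    30,         20,   500),
    "Power User":       (2.0,    40,         25,   700),
}

def aggregate(user_mix):
    totals = [0.0, 0.0, 0.0, 0.0]
    for profile, count in user_mix.items():
        for i, value in enumerate(PROFILES[profile]):
            totals[i] += value * count
    return totals

ram_gb, storage_gb, iops, kbps = aggregate({"Task Worker": 200,
                                            "Knowledge Worker": 250,
                                            "Power User": 50})
print("RAM: %.0f GB, storage: %.0f GB, IOPS: %.0f, WAN: %.1f Mbps"
      % (ram_gb, storage_gb, iops, kbps / 1000.0))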


Capacity Planning for Cisco VXI Service and Data Center Infrastructure
Components
In addition to the per-desktop resource use and the number of server blades and amount of storage
required to host a large number of Cisco VXI users, this section looks at the desktop virtualization
services infrastructure and data center components that factor into the overall capacity planning for an
end-to-end Cisco VXI system.

Desktop Virtualization Service Infrastructure

VMware View Connection Server

The connection server is the first point of contact for end users in a VMware View environment who are
trying to access their virtual desktops. It provides initial authentication and single sign-on by
authenticating the user on the back end with Microsoft Active Directory. If the VMware View
deployment uses direct-connect mode, the connection between the end-user client and the connection
server is taken down after the desktop virtualization user acquires the virtual desktop. However, if tunnel
mode is used, this connection will stay up for the life of the desktop virtualization session. See the
Desktop Virtualization chapter for additional details.

Note VMware View direct-connect mode is used to validate the Cisco VXI system discussed in this design
document.

Multiple connection servers should be deployed to optimize the performance of the desktop
virtualization system by using a load balancer to distribute the desktop virtualization traffic across all
available servers. A pair of Cisco Application Control Engine (ACE) devices at the front end of the
connection servers can provide both high availability and load balancing. Cisco ACE will monitor both
the load and availability of the servers to help ensure even distribution of connections requests (direct
mode) and desktop virtualization sessions (tunnel mode). For more information about Cisco ACE, please
refer to the Network chapter.
The scale and performance data for the connection server is needed for capacity planning; please contact
VMware to get the latest scalability and performance data.

Note If tunnel mode is used in VMware View deployments, Cisco highly recommends using the SSL offload
feature on Cisco ACE to improve the performance of the connection servers.

To monitor the performance and proactively identify bottlenecks on the connection server that can affect
the end-user experience, you should use tools capable of providing the following data:
• Connections per second between end users and connection servers, to identify login storms
• Latency between end users and connection servers: network and application response times to
identify login delays and slow response times from an end-user perspective
• Transactions per second and throughput, to verify that operations are well within the server’s
performance limits and to proactively add resources if necessary
• Errors: connectivity or login failures, packet loss, and so on


Citrix XenDesktop Controller

The Citrix XenDesktop controller in Citrix deployments provides functions similar to the VMware View
connection server by serving as a broker for virtual desktop connection requests coming from Citrix
endpoints. Multiple desktop controllers should be deployed for high availability and performance so that
users can acquire their virtual desktops with reasonable response times even during bootup and login
storms, when many users are trying to come up simultaneously. Cisco ACE load balancers should also
be used at the front end of the controllers so that connection requests are evenly distributed.
For more information about the Citrix XenDesktop controller and Cisco ACE, please refer to the
Desktop Virtualization and Network chapters respectively. From a capacity planning perspective, the
scalability of the Citrix XenDesktop controller is needed; please contact Citrix to get the latest
performance data.

Cisco Nexus 1000V

The Cisco Nexus 1000V is a distributed virtual switch that serves as the first software-based access layer
switch in the Cisco VXI system. The Cisco Nexus 1000V has a rich set of features and functions
comparable to Cisco Catalyst and other Cisco Nexus Family switches. For large-scale deployment of
Cisco VXI, you should evaluate the following attributes of the Cisco Nexus 1000V to determine whether
this switch is applicable to the environment targeted for a scaled deployment of Cisco VXI:
• The Cisco Nexus 1000V sees each host as a module on the switch and can handle up to 64 hosts in
a single switch.
• If the Cisco VXI deployment spans multiple data centers, multiple Cisco Nexus 1000V Switches
will be required.
• The uplinks from the Cisco Nexus 1000V to the rest of the data center network use the uplinks on
the hosts, and therefore traffic leaving the Cisco Nexus 1000V will have the same amount of network
bandwidth as the host itself—very much different from a physical switch, in which traffic from
multiple modules may traverse the same uplinks.
• The Cisco Nexus 1000V version used for validating the Cisco VXI system currently supports 1024
virtual Ethernet ports per port profile. Since virtual Ethernet ports represent the port that connects
to the virtual desktops running on different hosts, for large Cisco VXI deployments additional port
profiles may need to be used if 1024 ports per port profile per Cisco Nexus 1000V is not sufficient.
• Each Cisco Nexus 1000V can support up to 2000 virtual Ethernet ports, and in a Cisco VXI
deployment, this would be a maximum of 2000 virtual desktops.
• PortChannels should be deployed on the uplink modules to increase the available uplinks on a host
while providing redundancy.
• The Cisco Nexus 1000V Virtual Supervisor Module (VSM) can be deployed on a host along with
other virtualized desktops, but the VSM’s resource use should be accounted for in deploying the
virtual desktops.
For more information about the Cisco Nexus 1000V, see the Data Center chapter in this document.
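Using the per-switch limits cited above (2000 virtual Ethernet ports and 64 hosts per switch, 1024 virtual
Ethernet ports per port profile), the number of Cisco Nexus 1000V instances and port profiles required
for a deployment can be estimated with a simple calculation. The desktop and host counts in the sketch
below are illustrative inputs.

# Sketch: Cisco Nexus 1000V instances and port profiles needed for a deployment,
# using the limits cited in this section. Desktop and host counts are inputs.

import math

VETH_PER_SWITCH = 2000
VETH_PER_PORT_PROFILE = 1024
HOSTS_PER_SWITCH = 64

def nexus1000v_sizing(num_desktops, num_hosts):
    switches = max(math.ceil(num_desktops / float(VETH_PER_SWITCH)),
                   math.ceil(num_hosts / float(HOSTS_PER_SWITCH)))
    port_profiles = math.ceil(num_desktops / float(VETH_PER_PORT_PROFILE))
    return int(switches), int(port_profiles)

switches, profiles = nexus1000v_sizing(num_desktops=5000, num_hosts=60)
print("Cisco Nexus 1000V switches:", switches)      # 3 (limited by virtual Ethernet ports)
print("Port profiles (minimum):", profiles)         # 5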

Cisco UCS 6100 Series Fabric Interconnects

Large Cisco VXI deployments that require multiple Cisco UCS chassis will need to take into account the
scalability of the Cisco 6100 Series Fabric Interconnects. A single Cisco UCS 6120XP 20-Port Fabric
Interconnect can support up to 20 chassis, and a single Cisco 6140XP 40-Port Fabric Interconnect can
support up to 40 chassis and can provide up to 1.04 terabits per second (Tbps) of throughput. However,
these values assume that there is one link from each Cisco UCS chassis to the Cisco UCS 6100 Series
Fabric Interconnect, but a minimum of two is recommended to reduce the level of oversubscription. For
more information about the Cisco 6100 Series, please refer to the Data Center chapter of this document.
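The trade-off between chassis count and uplinks per chassis reduces to a simple division. The sketch
below assumes the same number of links from every chassis to the fabric interconnect; the link counts
are illustrative.

# Sketch: maximum chassis per Cisco UCS 6100 fabric interconnect as a function
# of the number of chassis-to-fabric links. Port counts are from this section.

def max_chassis(fi_ports, links_per_chassis):
    return fi_ports // links_per_chassis

for links in (1, 2, 4):
    print("6120XP (20 ports), %d link(s) per chassis: up to %2d chassis"
          % (links, max_chassis(20, links)))
    print("6140XP (40 ports), %d link(s) per chassis: up to %2d chassis"
          % (links, max_chassis(40, links)))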

Cisco ACE

As discussed earlier, Cisco ACE provides load balancing of initial connection requests to the connection
brokers in the Citrix and VMware View environments. The scale and performance of the Cisco ACE are
important to help ensure good response times when acquiring a hosted virtual desktop (HVD),
particularly during boot and login storms. Therefore, overall capacity planning should consider the Cisco
ACE scale and performance data, provided in Table 35.

Table 35 Cisco ACE

Product       Connections per Second   Concurrent Sessions
Cisco ACE     325,000                  4 million (established in 12 seconds)

Postdeployment Performance Monitoring


Capacity and performance analysis tools to monitor capacity on all components in the data center as well
as application performance in desktop virtualization sessions are needed so that the administrator can
increase capacity or make changes as needed based on use and activity patterns. Some tools that can be
used specifically for capacity and performance monitoring are as follows:
• The VMware esxtop tool provides a real-time view of the VMware ESX server, and specifically the
following data:
– CPU use for individual virtual machines
– CPU use for each physical processor
– Memory use
– Virtual disk use
– Network bandwidth for LAN and storage access
• A network analysis module deployed in the data center can be used to gather network performance
statistics for the desktop virtualization traffic related to bandwidth utilization, drops, and latency.
• Storage performance tools available from storage vendors can provide an accurate assessment of the
storage performance and provide metrics such as IOPS and throughput. These tools can also help in
identifying potential problems that can affect performance. Both NetApp and EMC have such tools
and were used in the validation of the end-to-end Cisco VXI system.
Please refer to the Management and Operations chapter for a comprehensive look at the network
management tools available for monitoring an end-to-end Cisco VXI system.

WAN Sizing
This section looks at the enterprise WAN and the factors that need to be considered to determine the
number of desktop virtualization users that can be deployed at a given remote site given a finite amount
of bandwidth. In simple terms, the number of desktop virtualization users that a WAN link of a given
bandwidth can support is:


Bandwidth of WAN link / Bandwidth of single desktop virtualization session

However, in desktop virtualization environments, this type of sizing is a challenge due to a number of
variables that can affect the amount of bandwidth in a single desktop virtualization session. Unlike a
voice over IP (VoIP) environment, in which a voice call is predictable, smooth, and roughly constant
(about 80 kbps per call for a given codec), desktop virtualization session traffic is highly variable, just
as a user's network traffic is today. In a voice environment, if a branch location needs to support
10 simultaneous calls, 10 x 80 kbps = 800 kbps of bandwidth is needed. For desktop virtualization, the
session traffic flow can vary widely between two different users, based on what they are doing on their
desktops—and this is just one of several variables that can change the bandwidth of a session.
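Even so, the first-order arithmetic implied by the formula above is straightforward. The sketch below
divides the usable link bandwidth by a per-session estimate; the 80 percent utilization target and the
500-kbps per-session figure (the Table 34 knowledge-worker value) are assumptions for illustration and
must be validated against session traffic measured in the target environment, for all the reasons discussed
in the remainder of this section.

# Sketch: first-order estimate of desktop virtualization users per WAN link.
# The utilization target and per-session bandwidth are illustrative assumptions.

def users_per_link(link_kbps, per_session_kbps, utilization_target=0.8):
    usable = link_kbps * utilization_target   # leave headroom for voice, video, and other traffic
    return int(usable // per_session_kbps)

if __name__ == "__main__":
    for link_kbps in (1544, 10000, 45000):    # T1, 10-Mbps, and 45-Mbps links
        print("%6d-kbps link -> %3d knowledge-worker sessions"
              % (link_kbps, users_per_link(link_kbps, per_session_kbps=500)))
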
With desktop virtualization, every action that a user initiates and the subsequent response is relayed over
the network as a part of the session traffic from the hosted virtual desktop in the data center to the client
that the end user is using. For a given desktop virtualization session, depending on the protocol and
vendor, traffic may be carried in additional TCP sessions (multimedia redirection [MMR] or USB
redirection) associated with the main desktop virtualization session, or it may be carried within a virtual
channel (Citrix HDX) within the same TCP or UDP session as the main desktop virtualization
session. Three display protocols—PCoIP and Remote Desktop Protocol (RDP) for VMware View and
Independent Computing Architecture (ICA) for Citrix XenDesktop—can be used to relay this
information. Each protocol is different in the way it is implemented in terms of the:
• Transport protocol used
• Type of encryption used
• Level of compression it provides
• Way that rendering of the display is handled (remotely or locally)
• Way that it adapts to changes in available network bandwidth
In addition to the display protocol itself, the screen resolution and the number of monitors that an end
client uses affect the network bandwidth.
The applications (Microsoft Office, backup, web browser, teller application for banking users, etc.) and
activities of a user and their individual workloads are also significant variables with considerable impact.
The guest OS commonly used on the virtual desktops, Microsoft Windows XP and Windows 7 (32-bit
and 64-bit), can also change the overall desktop virtualization experiences due to differences in the OS
and can therefore affect the desktop virtualization session traffic.
If video traffic is supported on a virtualized desktop, differences outside the video content itself can
affect the bandwidth requirements for a given desktop virtualization session as well as the end-user
experience:
• How is the video transported? Is the video stream carried using a separate TCP/UDP session, or is
it in-band in the main desktop virtualization session?
If VMware View MMR is used, the video is carried across a separate TCP session, enabling the network
administrator to provide the necessary QoS for video and optimize it using Cisco Wide Area Application
Services (WAAS) to improve both the user experience and the bandwidth requirements per user.
If Citrix XenDesktop is deployed, the video stream can be carried using HDX, which can also optimize
the traffic and reduce the bandwidth per user. Note that HDX can be used only if the latency is less than
30 ms; this limitation should be factored in for WAN links, on which the latency can easily exceed this value.
• What type of video content, such as Adobe Flash or Microsoft Windows Media Player content, is
used? Adobe Flash content cannot currently be optimized using Cisco WAAS, and therefore Cisco
WAAS cannot be used to reduce the bandwidth requirements per user.
Adobe Flash is also not a supported media format with VMware View MMR and so preferential QoS for
Adobe Flash video cannot be provided, thereby affecting the user experience.


For Citrix XenDesktop, Adobe Flash content is carried by HDX, which can optimize the traffic.

Note Currently, VMware View 4.5 provides no support for MMR when the guest OS running on the HVD is
Microsoft Windows 7.

In addition to the preceding considerations, video is very bandwidth intensive, with stringent loss, jitter,
and latency requirements, and therefore the QoS and the bandwidth allocated for video should be
carefully planned for WAN links.
Printing in a desktop virtualization environment can also affect the per-session traffic depending on
whether network printing to a local printer at the remote site is used, or whether USB printing is used
that transports the print traffic on a separate channel to the USB-attached printer on the same endpoint
that the user uses for the main desktop virtualization session.
Another important consideration when sizing WAN links is whether WAN optimization is being
deployed. It is common to optimize application traffic traversing a WAN link using Cisco WAAS
devices. Cisco WAAS effectively reduces the bandwidth utilization on the WAN link through the
application level, thereby enabling a large number of users to be deployed across a given WAN link.
However, Cisco WAAS may not be as effective in some desktop virtualization environments due to the
display protocol used and the application traffic being transported. Please refer to the Network chapter
of this document for information about how to use Cisco WAAS in desktop virtualization deployments.
It should now be clear that a number of variables are involved in desktop virtualization that can change
the per-session bandwidth utilization. Therefore, the per-session bandwidth estimation is unique to a
given desktop virtualization deployment and can vary significantly even within one enterprise depending
on the applications, workloads, and use patterns of the desktop virtualization users. As a result, the sizing
of the WAN link is not a trivial effort, and any sizing estimation must be validated in that environment.
Due to the highly variable nature of desktop virtualization traffic, the common approach to determining
the bandwidth requirements for a WAN link is to characterize the data using one of the generic workload
profiles defined in the Workload Considerations section earlier in this chapter. However, it is important
to adjust the estimations based on a more representative workload for the specific desktop virtualization
deployment. In addition, if traffic is carried out of band, as may be the case for video or USB printing
traffic, you need to characterize this traffic independently as well. If Cisco WAAS is used for WAN
optimization, the effects with and without Cisco WAAS should also be considered.

For More Information


Table 36 provides links to more detailed vendor information, including the hardware and software
discussed in this chapter.


Table 36 Performance and Capacity Links

Reference Architecture-Based Design for Implementation of Citrix XenDesktops on Cisco Unified
Computing System, VMware vSphere, and NetApp Storage
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/Virtualization/ucs_xd_vspher
e_ntap.pdf
VMware Reference Architecture for Cisco UCS and EMC CLARiiON with VMware View 4
http://www.vmware.com/go/vce-ra-brief
Workload Considerations for Virtual Desktop Reference Architectures Info Guide
http://www.vmware.com/files/pdf/VMware-WP-WorkloadConsiderations-WP-EN.pdf
VMware View on NetApp Deployment Guide
http://www.vmware.com/files/pdf/resources/VMware_View_on_NetApp_Unified_Storage.pdf
Performance Best Practices for VMware vSphere 4.0
http://www.vmware.com/pdf/Perf_Best_Practices_vSphere4.0.pdf
Virtual Reality Check - Phase II version 2.0
http://www.vmware.com/pdf/Perf_Best_Practices_vSphere4.0.pdf
Scalability Study for Deploying VMware View on Cisco UCS and EMC Symmetrix V-Max Systems
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/App_Networking/vdiucswp.htm
l
Configuration Maximums for vSphere 4.0
http://www.vmware.com/pdf/vsphere4/r40/vsp_40_config_max.pdf
Configuration Maximums for vSphere 4.1
http://www.vmware.com/pdf/vsphere4/r41/vsp_41_config_max.pdf
VMware KB Article on Transparent Page Sharing
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&external
Id=1021095
Performance Study of VMware vStorage Thin Provisioning
http://www.vmware.com/pdf/vsp_4_thinprov_perf.pdf
VMware KB: Using thin provisioned disks with virtual machines
http://kb.vmware.com/kb/1005418


Acronyms
Table 37 defines the acronyms and abbreviations used in this publication.

Table 37 List of Acronyms

Acronym Expansion
CNA Converged Network Adapter
FC Fibre Channel
FCoE Fibre Channel over Ethernet
HBA Host Bus Adapter
HVD Hosted Virtual Desktop
ICA Independent Computing Architecture
IOPS I/O Operations per Second
iSCSI Internet Small Computer System Interface
LAN Local Area Network
NFS Network File System
PCoIP PC over IP
RDP Remote Desktop Protocol
TPS Transparent Page Sharing
VDC Virtual Device Context
VEM Virtual Ethernet Module
VM Virtual Machine
vPC Virtual Port Channel
VSM Virtual Supervisor Module
VoIP Voice Over IP
WAN Wide Area Network

Additional References
The following links provide more detailed vendor information, including the hardware and software
discussed in this document.

Desktop Virtualization Links:

VMware
VMware View 4.0 Architecture Planning Guide
http://www.vmware.com/pdf/view401_architecture_planning.pdf
Introduction to VMware vSphere (ESX 4.1, ESXi 4.1)
http://www.vmware.com/support/pubs/vs_pages/vsp_pubs_esx41_vc41.html
VMware VDI implementation best practices
http://www.vmware.com/files/pdf/vdi_implementation_best_practices.pdf
Performance Best Practices for VMware vSphere® 4.0 (ESX 4.0 and ESXi 4.0)
http://www.vmware.com/pdf/Perf_Best_Practices_vSphere4.0.pdf
Citrix
XenDesktop 4 @ Citrix knowledge base
http://support.citrix.com/proddocs/index.jsp
Best Practices for Citrix XenDesktop with Provisioning Server
http://support.citrix.com/article/CTX119849
Citrix XenDesktop Modular Reference Architecture
http://support.citrix.com/article/CTX124087
Citrix XenDesktop - Design Handbook
http://support.citrix.com/article/CTX120760
Citrix XenServer – Documentation
http://docs.vmd.citrix.com/XenServer/5.6.0/1.0/en_gb/

Data Center links:

Cisco Data Center Design Zone


http://www.cisco.com/en/US/netsol/ns743/networking_solutions_program_home.html
Best Practices in Deploying Cisco Nexus 1000V Series Switches on Cisco UCS B Series Blade Servers
http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/white_paper_c11-558242.html
VMware VSphere Documentation Page
http://www.vmware.com/support/pubs/vs_pages/vsp_pubs_esxi40_i_vc40.html
Cisco Solutions for a VMware View 4.0 Environment Design Guide.
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/vmware/cisco_VMwareView.pdf
VMware View on NetApp Deployment Guide
http://media.netapp.com/documents/tr-3770.pdf
Citrix XenDesktop Deployment Guide
http://www.citrix.com/%2Fsite%2Fresources%2Fdynamic%2Fsalesdocs%2FXD_Enterprise_Design_
White_Paper.pdf
Citrix XenServer 5.6 Documentation
http://support.citrix.com/product/xens/v5.6/
Scalability Study for Deploying VMware View on Cisco UCS and EMC Symmetrix V-Max Systems
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/App_Networking/vdiucswp.html

Network Links:

First link: VMware View 4.x
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId
=1012382#View4.x
Second link: VMware View 4.x
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1012382#
View4x
Introduction to the Cisco WAAS Central Manager GUI
http://www.cisco.com/en/US/docs/app_ntwk_services/waas/waas/v421/configuration/guide/intro.html
#wp1122501


Configuring Traffic Interception


http://www.cisco.com/en/US/docs/app_ntwk_services/waas/waas/v4013/configuration/guide/traffic.ht
ml
Configuring Application Acceleration
http://www.cisco.com/en/US/docs/app_ntwk_services/waas/waas/v4013/configuration/guide/policy.ht
ml
Configuring Network Settings
http://www.cisco.com/en/US/docs/app_ntwk_services/waas/waas/v4013/configuration/guide/network.
html
Configuring and Managing WAAS Print Services
http://www.cisco.com/en/US/docs/app_ntwk_services/waas/waas/v4013/configuration/guide/printsvr.h
tml
Data Center
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/DC_3_0/DC-3_0_IPInfra.html
Campus
http://www.cisco.com/en/US/docs/solutions/Enterprise/Campus/HA_campus_DG/hacampusdg.html
Branch/WAN –
http://www.ciscosystems.com/en/US/docs/solutions/Enterprise/Branch/srlgbrnt.pdf
Physical interfaces
http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA1_
7_/configuration/device_manager/guide/UG_ntwk.html
Management access
http://www.cisco.com/en/US/docs/interfaces_modules/services_modules/ace/v3.00_A1/configuration/
administration/guide/access.html
Resource Management
http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA1_
7_/configuration/device_manager/guide/UG_confg.html#wp2700097
High availability
http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA1_
7_/configuration/device_manager/guide/UG_ha.html
Configuring Virtual Contexts
http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA1_
7_/configuration/device_manager/guide/UG_confg.html
One-armed mode configuration
http://www.cisco.com/en/US/docs/interfaces_modules/services_modules/ace/vA2_3_0/configuration/g
etting/started/guide/one_arm.pdf
Configuring the Virtual Context for Citrix DV
http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA1_
7_/configuration/device_manager/guide/UG_lb.html#wp1044682
Configuring Session Persistence
http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA1_
7_/configuration/device_manager/guide/UG_lb.html#wp1062118
Configuring Health Monitoring
http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA1_
7_/configuration/device_manager/guide/UG_lb.html#wp1045366
Balancing Algorithm
http://www.cisco.com/en/US/docs/app_ntwk_services/data_center_app_services/ace_appliances/vA1_
7_/configuration/device_manager/guide/UG_lb.html#wp1045366
Configuration of Source NAT
http://docwiki.cisco.com/wiki/Basic_Load_Balancing_Using_One_Arm_Mode_with_Source_NAT_on
_the_Cisco_Application_Control_Engine_Configuration_Example
Cisco Enterprise Campus 3.0 Architecture
http://www.cisco.com/en/US/docs/solutions/Enterprise/Campus/campover.html

Endpoints and Applications Links:

Cisco Wide Area Application Services (WAAS) Configuration Guide


http://www.cisco.com/en/US/docs/app_ntwk_services/waas/waas/v421/configuration/guide/cnfgbook.
pdf
Citrix Guide for Limiting Printing Bandwidth
http://support.citrix.com/proddocs/index.jsp?topic=/xenapp5fp-w2k8/ps-printing-limiting-printing-bandwidth-all.html
Cisco Unified SRST Configuration Guide
http://www.cisco.com/en/US/docs/voice_ip_comm/cusrst/admin/srst/configuration/guide/SRST_SysAdmin.pdf
Cisco Unified Communications 8.X SRND
http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/srnd/8x/uc8xsrnd.pdf
Cisco Unified Personal Communicator User's Guide
http://www.cisco.com/en/US/docs/voice_ip_comm/cupc/7_0/english/user/guide/windows/CUPC_7.0_UG_win.pdf

Management and Operations Links:

Cisco Deployment Guide for VMware View 4.0
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/vmware/cisco_VMwareView.html
Cisco UCS GUI Configuration Guide
http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/1.3.1/UCSM_GUI_Configuration_Guide_1_3_1.pdf
Cisco Unified Communications Manager Systems Guide
http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/admin/8_0_2/ccmsys/accm.pdf

VMware vSphere 4.1 documentation
http://www.vmware.com/support/pubs/vs_pages/vsp_pubs_esxi41_i_vc41.html
VMware vSphere Command-Line Interface Documentation
http://www.vmware.com/support/developer/vcli/
VMware APIs and SDKs Documentation
http://www.vmware.com/support/pubs/sdk_pubs.html
VMware View 4.0 Documentation
http://www.vmware.com/support/pubs/view_pubs.html
VMware View Installation Guide (release 4.5)
http://www.vmware.com/support/pubs/view_pubs.html
VMware View Administrator's Guide (release 4.5)
http://www.vmware.com/support/pubs/view_pubs.html
VMware View Integration Guide (release 4.5)
http://www.vmware.com/support/pubs/view_pubs.html
VMware View Architecture Planning Guide (release 4.5)
http://www.vmware.com/support/pubs/view_pubs.html
VMware Configuration Maximums
http://www.vmware.com/pdf/vsphere4/r40/vsp_40_config_max.pdf
Citrix XenDesktop 4 documentation
http://support.citrix.com/proddocs/index.jsp
EMC Navisphere/Unisphere Management Suite
http://www.emc.com/products/detail/software/navisphere-management-suite.htm
NetApp Provisioning Manager
http://www.netapp.com/us/products/management-software/provisioning.html
Altiris reference documentation
http://www.symantec.com/business/theme.jsp?themeid=altiris
Microsoft Active Directory and Network services
http://www.microsoft.com/
Cisco Network Analysis Module Deployment Guide
http://www.cisco.com/en/US/prod/collateral/modules/ps2706/white_paper_c07-505273.html#wp9000247
Cisco NAM Appliance Documentation (command reference and user guide)
http://www.cisco.com/en/US/partner/products/ps10113/tsd_products_support_series_home.html
Cisco Wide Area Application Services Configuration Guide (Software Version 4.1.1)
http://www.cisco.com/en/US/docs/app_ntwk_services/waas/waas/v411/configuration/guide/cnfg.html
Cisco Adaptive Security Device Manager Configuration Guide
http://www.cisco.com/en/US/products/ps6121/tsd_products_support_configure.html
Cisco ACE 4700 Series Application Control Engine Appliances Documentation
http://www.cisco.com/en/US/products/ps7027/tsd_products_support_series_home.html
Cisco UCS Manager Configuration Guide
http://www.cisco.com/en/US/products/ps10281/products_installation_and_configuration_guides_list.html
Cisco DCNM documentation
http://www.cisco.com/en/US/products/ps9369/tsd_products_support_configure.html
Cisco Fabric Manager documentation
http://www.cisco.com/en/US/partner/products/ps10495/tsd_products_support_configure.html
Cisco Nexus 1000v documentation
http://www.cisco.com/en/US/partner/products/ps9902/products_installation_and_configuration_guides_list.html
CiscoWorks documentation
http://www.cisco.com/en/US/partner/products/sw/cscowork/ps4565/tsd_products_support_maintain_and_operate.html
Wyse Device Manager Overview
http://www.wyse.com/products/software/devicemanager/index.asp
WDM Admin Guide (release 4.7.2)
https://support.wyse.com/OA_HTML/cskatch.jsp?fileid=ZG9507DDFF08D288306AC95C3C3F78B52551F2D290D30D2F9F&jttst0=6_23871%2C23871%2C-1%2C0%2C&jtfm0=&etfm1=&jfn=ZG48219F51800ECBD67A914E693A1B35C69EBAE7BDBF7E5DDCF04394392EFE18816C09F686220DF682859E9756A62205A14A
WDM Install Guide (release 4.7.2)
https://support.wyse.com/OA_HTML/cskatch.jsp?fileid=ZG2FBE29B2F0CB4CE374726CACD3884A4E9ADCB633B42A7DA3&jttst0=6_23871%2C23871%2C-1%2C0%2C&jtfm0=&etfm1=&jfn=ZG48219F51800ECBD67A914E693A1B35C69EBAE7BDBF7E5DDCF04394392EFE18816C09F686220DF682859E9756A62205A14A
Wyse Device Manager Integration Add-ons Admin Guide
https://support.wyse.com/OA_HTML/cskatch.jsp?fileid=ZG1C1D43ABA80F795F39F06E7C573159728391066249DD919D&jttst0=6_23871%2C23871%2C-1%2C0%2C&jtfm0=&etfm1=&jfn=ZG48219F51800ECBD67A914E693A1B35C69EBAE7BDBF7E5DDCF04394392EFE18816C09F686220DF682859E9756A62205A14A
Wyse® Thin Clients, Based on Microsoft® Windows® XP Embedded, Admin Guide
https://support.wyse.com/OA_HTML/cskatch.jsp?fileid=ZG42949BF90AE7F9412C1C931422B780A1FEAB5C2D8D0D1645&jttst0=6_23871%2C23871%2C-1%2C0%2C&jtfm0=&etfm1=&jfn=ZGC29B519BFE0416A1A6FCE711FE7D993C9F7AA9C19B22F9C27470952552D8AE1777F9506FF6A0200B96E81A03AFC23839E6
Wyse Viance Admin Guide
https://support.wyse.com/OA_HTML/cskatch.jsp?fileid=ZG841C2F2087C671DBE3AA9E0203EC8523A6D6C2A5C8D157DD&jttst0=6_23871%2C23871%2C-1%2C0%2C&jtfm0=_0_1_0_-1_f_nv_&etfm1=&jfn=ZG0165717AA38109A13472984D2B8D94D79D98F9322001A2A71A96398EC4D334BE62DA4D17DCD2B34E7A8

Wyse P20 Admin Guide
https://support.wyse.com/OA_HTML/cskatch.jsp?fileid=ZG012BFCA92639876559ACEEF431B166341DBDF9D663D2886D&jttst0=6_23871%2C23871%2C-1%2C0%2C&jtfm0=_0_1_0_-1_f_nv_&etfm1=&jfn=ZG1822D032C06C7952267DE698A72D680888C4CF4D85AE20A616CFC0617832C14C20C36ACA3BEE6A94BFD
Devon IT Echo™ Thin Client Management Software Version 3 Overview
http://www.devonit.com/software/echo/overview
Devon ThinManage 3.1 Admin Guide
http://downloads.devonit.com/Website/public/Documentation/thinmanage-3-1-adminguide-rev12.pdf
Devon IT TC5 Terminal Quick Start Guide
http://www.devonit.com/_wp/wp-content/uploads/2009/04/tc5-quickstartguide-devonit-english.pdf
IGEL Universal Management Suite 3 Overview
http://www.igel.com/igel/live.php,navigation_id,3294,_psmand,9.html
IGEL Universal Management Suite 3 Install Guide
http://www.download-igel.com/index.php?filepath=manuals/english/universalmanagement/&webpath=/ftp/manuals/english/universalmanagement/&rc=emea
IGEL Universal Management Suite 3 User Guide
http://www.download-igel.com/ftp/manuals/english/universalmanagement/IGEL%20UMS%20User%20Guide.pdf
IGEL Universal Desktops User Guide
http://www.download-igel.com/ftp/manuals/english/universalmanagement/IGEL%20UMS%20User%20Guide.pdf
Cisco UMS documentation
http://www.cisco.com/

Cisco VXI Security Links:

Cisco SAFE Architecture Guide
http://www.cisco.com/en/US/docs/solutions/Enterprise/Security/SAFE_RG/SAFE_rg.pdf
Cisco Data Center 3.0 Security Guide
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/DC_3_0/dc_sec_design.pdf
Cisco Identity-Based Network Security Guide
http://www.cisco.com/en/US/solutions/collateral/ns340/ns394/ns171/CiscoIBNS-Technical-Review.pdf
Cisco Catalyst 3560 Command Reference Guide
http://www.cisco.com/en/US/docs/switches/lan/catalyst3560/software/release/12.2_35_se/command/reference/cli1.html#wp2757193
Cisco Business Ready Teleworker Design Guide
http://www.cisco.com/application/pdf/en/us/guest/netsol/ns171/c649/ccmigration_09186a008074f24a.pdf

Cisco ASA 8.x: VPN Access with the AnyConnect VPN Client Configuration Example
http://www.cisco.com/en/US/products/ps6120/products_configuration_example09186a00808efbd2.shtml
Citrix – How to Disable USB Drive Redirection
http://support.citrix.com/article/CTX123700
Microsoft – Disabling Remote Desktop Services Features
http://msdn.microsoft.com/en-us/library/aa380804%28VS.85%29.aspx

Quality of Service (QoS) Links:

Cisco Deployment Guide for VMware View 4.0
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/vmware/cisco_VMwareView.html
Cisco UCS GUI Configuration Guide
http://www.cisco.com/en/US/docs/unified_computing/ucs/sw/gui/config/guide/1.3.1/UCSM_GUI_Configuration_Guide_1_3_1.pdf
Cisco Unified Communications Manager Systems Guide
http://www.cisco.com/en/US/docs/voice_ip_comm/cucm/admin/8_0_2/ccmsys/accm.pdf

Performance and Capacity Links:

Reference Architecture-Based Design for Implementation of Citrix XenDesktops on Cisco Unified Computing System, VMware vSphere, and NetApp Storage
http://preview.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/Virtualization/ucs_xd_vsphere_ntap.pdf
VMware Reference Architecture for Cisco UCS and EMC CLARiiON with VMware View 4
http://www.vmware.com/go/vce-ra-brief
Workload Considerations for Virtual Desktop Reference Architectures Info Guide
http://www.vmware.com/files/pdf/VMware-WP-WorkloadConsiderations-WP-EN.pdf
VMware View on NetApp Deployment Guide
http://www.vmware.com/files/pdf/resources/VMware_View_on_NetApp_Unified_Storage.pdf
Performance Best Practices for VMware vSphere 4.0
http://www.vmware.com/pdf/Perf_Best_Practices_vSphere4.0.pdf
Virtual Reality Check - Phase II version 2.0
http://www.vmware.com/pdf/Perf_Best_Practices_vSphere4.0.pdf
Scalability Study for Deploying VMware View on Cisco UCS and EMC Symmetrix V-Max Systems
http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/App_Networking/vdiucswp.html
Configuration Maximums for vSphere 4.0
http://www.vmware.com/pdf/vsphere4/r40/vsp_40_config_max.pdf
Configuration Maximums for vSphere 4.1
http://www.vmware.com/pdf/vsphere4/r41/vsp_41_config_max.pdf

VMware KB Article on Transparent Page Sharing
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1021095
Performance Study of VMware vStorage Thin Provisioning
http://www.vmware.com/pdf/vsp_4_thinprov_perf.pdf
VMware KB: Using thin provisioned disks with virtual machines
http://kb.vmware.com/kb/1005418

Cisco Validated Design


About Cisco Validated Design (CVD) Program
The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more
reliable, and more predictable customer deployments. For more information visit www.cisco.com/go/designzone.
ALL DESIGNS, SPECIFICATIONS, STATEMENTS, INFORMATION, AND RECOMMENDATIONS
(COLLECTIVELY, "DESIGNS") IN THIS MANUAL ARE PRESENTED "AS IS," WITH ALL FAULTS. CISCO
AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, INCLUDING, WITHOUT LIMITATION, THE
WARRANTY OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT OR ARISING FROM A COURSE OF DEALING, USAGE, OR TRADE PRACTICE. IN
NO EVENT SHALL CISCO OR ITS SUPPLIERS BE LIABLE FOR ANY INDIRECT, SPECIAL,
CONSEQUENTIAL, OR INCIDENTAL DAMAGES, INCLUDING, WITHOUT LIMITATION, LOST PROFITS
OR LOSS OR DAMAGE TO DATA ARISING OUT OF THE USE OR INABILITY TO USE THE DESIGNS,
EVEN IF CISCO OR ITS SUPPLIERS HAVE BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR
THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR
OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT
THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY
DEPENDING ON FACTORS NOT TESTED BY CISCO.
CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco WebEx,
the Cisco logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We Work, Live, Play,
and Learn and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS, Bringing the Meeting To
You, Catalyst, CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork
Expert logo, Cisco IOS, Cisco Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity,
Collaboration Without Limitation, EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me Browsing,
FormShare, GigaDrive, HomeLink, Internet Quotient, IOS, iPhone, iQuick Study, IronPort, the IronPort logo,
LightStream, Linksys, MediaTone, MeetingPlace, MeetingPlace Chime Sound, MGX, Networkers, Networking
Academy, Network Registrar, PCNow, PIX, PowerPanels, ProConnect, ScriptShare, SenderBase, SMARTnet,
Spectrum Expert, StackWise, The Fastest Way to Increase Your Internet Quotient, TransPath, WebEx, and the
WebEx logo are registered trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and certain
other countries.
All other trademarks mentioned in this document or website are the property of their respective owners. The use of
the word partner does not imply a partnership relationship between Cisco and any other company. (0809R)
© 2010 Cisco Systems, Inc. All rights reserved
