Hands-on lab: Introduction to the Cisco Nexus 1000V

The next-generation virtual datacenter from VMware enables efficient collaboration between network administrators and VMware administrators through the use of vNetwork Distributed Switches. By replacing an existing virtual switch with the Cisco Nexus 1000V and making the familiar Cisco NX-OS available, the Cisco Nexus 1000V preserves the traditional boundaries between server and network administrators while allowing network administrators to manage virtual switches as well. This lab will augment your knowledge of the Cisco Nexus 1000V with a considerable amount of hands-on experience.

Contents
Hands-on lab: Introduction to the Cisco Nexus 1000V
Lab Overview
    Objectives
    Cisco CloudLab
    Lab Exercises
    Network Admin vs. Server Admin
Lab Topology and Access
    Logical Topology
    Access
    Connecting via the vSphere Client
Deployment
    Connect to the Cisco Nexus 1000V Virtual Supervisor Module (VSM)
    Creating the VLANs
    Creating an uplink port profile for the Management Traffic
    Adding an ESX host to the Distributed Virtual Switch
Attaching a Virtual Machine to the Network
    Creating a port profile for virtual machines
    Verify the successful creation of the port-group
    Assign a Virtual Machine to a port profile
    Network Administrator view of Virtual Machine connectivity
VMotion and Visibility
    VMotion Configuration
    Network Administrator's view of VMotion
    Perform a VMotion
    Verify the new Network Administrator's view of the Virtual Machine
Policy-based virtual machine connectivity
    Verify open ports within your virtual machine
    Configuration of an IP-based access list
    Verify the application of the IP-based access list
Mobile VM Security
    Private VLANs
    Removing the Private VLAN configuration
Traffic Inspection of individual Virtual Machines
    Configure an ERSPAN monitor session
    Create an ERSPAN Session on the Nexus 1000V
    Configuring a VMkernel Interface to transport the ERSPAN Session
    Test the session and VMotion the VM
Conclusion
Feedback
Lab proctors

Hands-on lab: Introduction to the Cisco Nexus 1000V


Lab Summary
In this self-paced lab, participants will discover how the Cisco Nexus 1000V software switch for VMware vSphere enables organizations to unleash the true power and flexibility of server virtualization by offering a set of network features, management tools, and diagnostic capabilities consistent with the customer's existing physical Cisco network infrastructure and enhanced for the virtual world. Features of the Cisco Nexus 1000V covered in this lab include:
- Policy-based virtual machine (VM) connectivity
- Mobility of security and network properties
- Non-disruptive operational model for both server and network administrators

In the highly agile VMware environment, the new Cisco Virtual Network Link (VN-Link) technology on the Nexus 1000V will integrate with VMware's vNetwork Distributed Switch framework to create a logical network infrastructure across multiple physical hosts that will provide full visibility, control and consistency of the network.

Key Benefits of the Cisco Nexus 1000V


Policy-based virtual machine (VM) connectivity
- Provides real-time coordinated configuration of network and security services
- Maintains a virtual machine-centric management model, enabling the server administrator to increase both efficiency and flexibility

Mobile VM security and network policy
- Policy moves with a virtual machine during live migration, ensuring persistent network, security, and storage compliance
- Ensures that live migration won't be affected by disparate network configurations
- Improves business continuance, performance management, and security compliance

Non-disruptive operational model for your server virtualization and networking teams
- Aligns the management and operations environment for virtual machines and physical server connectivity in the data center
- Maintains the existing VMware operational model
- Reduces total cost of ownership (TCO) by providing operational consistency and visibility throughout the network

Lab Overview
Objectives
The goal of this manual is to give you hands-on experience with a subset of the features of the Cisco Nexus 1000V Distributed Virtual Switch (DVS). The Cisco Nexus 1000V introduces many new features and capabilities. This lab will give you an overview of these features and introduce you to the main concepts.

Cisco CloudLab
This lab is hosted in Cisco's cloud-based hands-on and demo lab. Within this cloud you are provided with your personal dedicated virtual pod (vPod). You connect via RDP to a so-called control center within this vPod and walk through the lab steps below. All necessary tools to complete this lab can be found in the control center. Refer to the separate Cisco CloudLab documentation for details on how to reach the control center within your vPod.
Figure 1. Logical Lab Topology

The username and password to access the Control Center of this vPod are listed below:
User Name: VPOD\administrator
Password: <Refer to the CloudLab Portal>

Lab Exercises
This lab was designed to be completed in sequential order. As some steps rely on the successful completion of previous steps, you are required to complete all steps before moving on. The individual lab steps are:
- Cisco Nexus 1000V deployment
- Attaching Virtual Machines to the Cisco Nexus 1000V
- VMotion and Visibility
- Policy-based Virtual Machine connectivity
- Traffic Inspection of a Virtual Machine
- Quality of Service (QoS) for Virtual Machines

Network Admin vs. Server Admin


One of the key features of the Cisco Nexus 1000V is the non-disruptive operational model for both network and server administrators. This means that in a real-world deployment of this product, the network admin and the VMware administrator each have their own management perspective with different views and tools. This lab purposely exposes you to both of these perspectives: the network administrator perspective, with the Cisco NX-OS Command Line Interface (CLI) as the primary management tool, and the VMware administrator perspective, with vCenter as the primary management tool. Even if you won't be exposed to "the other side" during your regular job, it is a good idea to understand the overall operation and handling of the Nexus 1000V.

Lab Topology and Access


The lab represents a typical VMware setup with two physical ESX hosts, offering services to virtual machines and a vCenter to coordinate this behavior. Furthermore a Cisco Nexus 1000V will be used to provide network services to the two physical ESX hosts as well as the virtual machines residing on them.

Logical Topology
The diagram below represents the logical lab setup of a vPod as it pertains to the Cisco Nexus 1000V.
Figure 2. Logical Pod Design

Your pod consists of:
- Two physical VMware ESX servers, called esx01.vpod.local and esx02.vpod.local.
- One VMware vCenter, reachable at vcenter.vpod.local via the vSphere Client.
- One Cisco Nexus 1000V Virtual Supervisor Module, reachable at vsm.vpod.local via SSH.
- One pre-configured upstream switch, to which you do not have access.

Access
During this lab, configuration steps need to be performed on the VMware vCenter as well as on the Cisco Nexus 1000V Virtual Supervisor Module (VSM) within the CloudLab Virtual Pod. The VMware vCenter is accessible through the vSphere Client application. The VSM is accessible through an SSH connection. Use the usernames and passwords listed below for accessing your vPod's elements.

Usernames and Passwords

vCenter
    Login: VPOD\Administrator
    Password: Cisco123 (or tick the vSphere Client option "Use Windows session credentials" for easier login)

Nexus 1000V VSM
    Login: admin
    Password: Cisco123

All necessary applications used within this lab are available on the desktop of the control center machine to which you are connected via Remote Desktop Protocol (RDP).

Connecting via the vSphere Client


Start the VMware vSphere Client by double-clicking on the VMware vSphere Client icon on the desktop.

The following figure shows the vSphere Client login screen.

Figure 1: vSphere Client login screen

Please tick Use Windows session credentials and click on Login for vSphere Client authentication. After a successful login you'll see the following vSphere Client application screen.

Figure 2: vSphere Client application screen

Deployment
While the Nexus 1000V has already been registered in vCenter, it is still necessary to connect the ESX hosts to the Nexus 1000V. The necessary Virtual Ethernet Module (VEM) of the Cisco Nexus 1000V can be installed on the ESX hosts automatically using VMware Update Manager (VUM); in a vSphere setup, VUM is used to stage and apply patches and updates to ESX hosts. The goal of this step is to add the two hosts to the Nexus 1000V. In this lab you will:
- Create an uplink port-profile and apply it to the uplink interface of the ESX hosts
- Add the two hosts to the Nexus 1000V switch

Lab Setup
In order to add a new host to the Distributed Switch, we need to create a port-profile to enable the communication between the Virtual Supervisor Module and the different Virtual Ethernet Modules. On top of that, we want to enable the VMotion traffic on the same interface. Each pod is composed of two ESX hosts, one Virtual Supervisor Module, and one vCenter. Both ESX hosts are connected to an upstream switch using four different NICs. One of these NICs will be used with the Nexus 1000V to carry the management, VMotion, and application traffic coming from the VMs.

Connect to the Cisco Nexus 1000V Virtual Supervisor Module (VSM)


Use the following credentials to connect via SSH to the Cisco Nexus 1000V Virtual Supervisor Module (VSM). The SSH client software PuTTY can be found on the desktop of your control center. It has been preconfigured to connect to the correct VSM, vsm.vpod.local.

Hostname: vsm.vpod.local
Username: admin
Password: Cisco123

Creating the VLANs


In order to configure the communication between the VSM and the VEM, as well as the communication for the virtual machines, we will use different VLANs to segregate the different types of traffic. You will utilize four different VLANs:
- Control VLAN 10: Used for the communication between the VSM and the VEM
- Packet VLAN 10: Used to exchange specific packets, e.g. CDP, between the VSM and the VEM
- VMotion VLAN 12: Used for VMotion traffic
- Virtual Machine VLAN 11: Used for the application traffic
- Private VLAN secondary VLAN 111: Secondary VLAN for the Private VLAN lab step

Specify the VLANs for later usage.


Nexus1000V# conf t
Nexus1000V(config)# vlan 10
Nexus1000V(config-vlan)# name N1KV_Control_Packet
Nexus1000V(config-vlan)# vlan 11
Nexus1000V(config-vlan)# name VM_Network
Nexus1000V(config-vlan)# vlan 12
Nexus1000V(config-vlan)# name VMotion
Nexus1000V(config-vlan)# vlan 111
Nexus1000V(config-vlan)# name PVLAN_Secondary
Nexus1000V(config-vlan)# end
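
You can verify that the VLANs were created as intended with show vlan brief. The output below is a sketch reconstructed from the configuration above; the exact formatting may differ by software release:

Nexus1000V# show vlan brief

VLAN Name                             Status    Ports
---- -------------------------------- --------- -------------------------------
1    default                          active
10   N1KV_Control_Packet              active
11   VM_Network                       active
12   VMotion                          active
111  PVLAN_Secondary                  active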

Creating an uplink port profile for the Management Traffic


In this part you will learn how to configure a port-profile that will be applied to an uplink interface. We will use this port-profile for all the management VLANs, for VMotion, as well as for productive VM traffic. A port-profile can be compared to a template that contains all the networking information to be applied to different interfaces. If the port-profile is configured as type ethernet, it is intended to be applied to a physical interface. Otherwise it will be applied to a virtual machine interface.
Nexus1000V# conf t
Nexus1000V(config)# system update vem feature level 2
Old feature level: 4.0(4)SV1(1)
New feature level: 4.0(4)SV1(3)
Nexus1000V(config)# port-profile type ethernet Uplink
Nexus1000V(config-port-prof)# vmware port-group
Nexus1000V(config-port-prof)# switchport mode private-vlan trunk promiscuous
Nexus1000V(config-port-prof)# switchport private-vlan trunk allowed vlan 10-12,111
Nexus1000V(config-port-prof)# channel-group auto mode on mac-pinning
Nexus1000V(config-port-prof)# no shutdown
Nexus1000V(config-port-prof)# system vlan 10,12
Nexus1000V(config-port-prof)# state enabled
Note:

The uplink port-profile already includes a configuration line for private VLANs. This configuration is necessary for a later lab step and will be explained in the corresponding section. It already has to be included at this stage, as certain configurations cannot be altered once the uplink port profile is in use.

A few special characteristics of the uplink port profile should be pointed out at this stage:
- type ethernet: This configuration line means that the corresponding port-profile can only be applied to a physical Ethernet port. This is also indicated through a special icon in the vSphere Client.
- channel-group auto: This configuration line activates the feature virtual port-channel host mode (vPC-HM). It allows the Nexus 1000V to form a port-channel with upstream switches that do not support multi-chassis EtherChannel.

Congratulations, you just configured your first port-profile!
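
If you want to double-check the result, the port-profile can be displayed directly on the VSM; the output simply echoes the configuration lines entered above:

Nexus1000V# show port-profile name Uplink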

Adding an ESX host to the Distributed Virtual Switch


We will now add the two ESX hosts of your pod to the Nexus 1000V DVS and apply the port-profile that we just created to the uplink interface of each host. Utilizing traditional non-distributed vSwitches requires multiple manual steps to keep hosts consistent and is therefore time-consuming and error-prone. A consistent network configuration across hosts is required for successful VMotion. Adding a host to the Distributed Virtual Switch is done by assigning some or all of the physical NICs of an ESX host to the DVS and assigning the previously created uplink port-profile to these NICs.

1. Navigate to the Networking view by clicking on the Home -> Inventory -> Networking tab. To reach this view, click on the arrow to the right of Inventory and pick Networking from the list being displayed.

2. Right-click on your DVS and choose Add Host....

3. You are presented with all hosts that are part of the data center but not yet part of the DVS. The VEM component has already been pre-installed on the ESX hosts. An alternative would be the usage of VMware Update Manager (VUM), which would make the integration of the ESX hosts into the Nexus 1000V completely automated and transparent.

4. Select the hosts and the NICs that will be assigned to the DVS. Currently vmnic0 is already in use by the traditional vSwitch to enable the initial management of your ESX hosts, while vmnic1 is used for iSCSI storage traffic and vmnic2 provides network access to the existing VMs through a vSwitch. Please only choose vmnic3 to become part of the Cisco Nexus 1000V DVS. Assign the uplink port profile Uplink that you created in the previous step to vmnic3 on host esx01.vpod.local and click on Next.

Note:

In real-life scenarios, uplink port-profiles are configured by the networking administrator to match the settings of the physical upstream switches. This ensures that there is no mis-configuration between the physical network and the virtual network. It also enables network administrators to use features for this uplink that are available on other Cisco switches (e.g., QoS, EtherChannel, etc.).

5. The next screen offers you the possibility to migrate existing VMkernel interfaces to the Nexus 1000V. For the purpose of this lab, do not choose to migrate any VMkernel ports and click on Next.

Note:

Migrating the Management Network and/or iSCSI will result in a loss of management and storage connectivity of the hosts. In a real-life scenario it is possible to even migrate the service console to the Cisco Nexus 1000V and thereby completely decommission the VMware vSwitch. But this lab has not been prepared to do so. Therefore under no circumstances choose vmnic0 and/or vmnic1 to become part of the Cisco Nexus 1000V DVS.

6. Similar to the previous screen, this next screen allows you to migrate existing Virtual Machine Networks to the Nexus 1000V. For the purpose of this lab, do not choose to migrate any existing Virtual Machine Networks and click Next.

7. You are presented with an overview of the uplink ports that are created. By default VMware creates 32 uplink ports per host and leaves it to the Nexus 1000V VSM to map them to useful physical ports.

8. Acknowledge these settings by clicking on Finish. After a few seconds this ESX host esx01.vpod.local will appear in the Hosts view of the Distributed Virtual Switch.

Repeat the same steps to add the host esx02.vpod.local to the Cisco Nexus 1000V.

Attaching a Virtual Machine to the Network


The next step demonstrates how the network administrator and the VMware administrator work hand in hand to provide network connectivity for virtual machines. The workflow to attach a virtual machine to the network consists of the following steps:
- The network admin creates a port profile, which can be considered a configuration template for virtual Ethernet ports.
- The port profile is translated into a port-group and appears in vCenter.
- The ESX admin assigns a virtual machine to a port-group.
- The Nexus 1000V creates a virtual Ethernet port (Veth) to connect the VM and configures the port based on the port profile tied to the port group chosen by the ESX admin.

This lab step consists of:
- Configure a port profile for virtual machines (network administrator)
- Assign a VM to a port profile (VMware administrator)

Creating a port profile for virtual machines


On the CLI, create the port profile VM-Client by typing the following configuration commands:

Nexus1000V# conf t
Nexus1000V(config)# port-profile VM-Client
Nexus1000V(config-port-prof)# vmware port-group
Nexus1000V(config-port-prof)# switchport mode access
Nexus1000V(config-port-prof)# switchport access vlan 11
Nexus1000V(config-port-prof)# no shutdown
Nexus1000V(config-port-prof)# state enabled
Nexus1000V(config-port-prof)# exit
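
Before switching over to vCenter, you can optionally confirm on the VSM that the new port-profile exists and is enabled:

Nexus1000V# show port-profile name VM-Client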

Verify the successful creation of the port-group:


1. Navigate to the Networking view by choosing the Home -> Inventory -> Networking tab at the top of the screen.
2. Verify that the port-profile with the name VM-Client appears in the resource tree view under the distributed virtual switch called Nexus1000V, as well as in the Networks tab. Choose the Nexus 1000V Distributed Virtual Switch object called Nexus1000V in order to gain the same insight under the Networks tab.

Assign a Virtual Machine to a port profile


As you can see under the Home -> Inventory -> Hosts and Clusters tab, your lab pod already includes a virtual machine named Windows 7 A. This VM initially uses the vSwitch port-group labeled VM Network. To reach this tab, click on the arrow to the right of Inventory and pick Hosts and Clusters from the dropdown menu.

Attach the vNIC of the VM inside your pod to the network by associating it with the port-group VM-Client.

1. In VMware vCenter, open the settings dialog of the first VM by clicking on Edit Settings. Navigate to the network adapter section, choose the port group VM-Client as the network label, and finalize by clicking on OK.

2. Verify that the Virtual Machine is using the port-group VM-Client.

3. Open the Virtual Machine Console for the VM Windows 7 A

4. Click on the Cisco Systems, Inc. link, which you can find on the desktop inside the VM. This opens the web page www.cisco.com with the internet browser and verifies the network connectivity of the VM.

5. Close the Virtual Machine Console.
6. Repeat steps 1 to 4 for the virtual machine Windows 7 B.

Congratulations, you successfully configured the network connectivity for a virtual machine! This step demonstrated that the workflow introduced by the Cisco Nexus 1000V is much more efficient than the traditional approach using vSwitches: the network team configures the network for the server team, and the server team only needs to apply the prepared settings.

Network Administrator view of Virtual Machine connectivity


Now that the Nexus 1000V is up and ready, you can take some time to explore more details of the virtual switch.

1. Connect to the Cisco Nexus 1000V Virtual Supervisor Module through an SSH connection. The correct host and access credentials have already been set up for you.
2. Issue the command show module:
Nexus1000V# show module
Mod  Ports  Module-Type                      Model              Status
---  -----  -------------------------------  -----------------  ----------
1    0      Virtual Supervisor Module        Nexus1000V         active *
3    248    Virtual Ethernet Module          NA                 ok
4    248    Virtual Ethernet Module          NA                 ok

Mod  Sw             Hw
---  -------------  ------
1    4.0(4)SV1(3a)  0.0
3    4.0(4)SV1(3a)  2.0
4    4.0(4)SV1(3a)  2.0

Mod  MAC-Address(es)                         Serial-Num
---  --------------------------------------  ----------
1    00-19-07-6c-5a-a8 to 00-19-07-6c-62-a8  NA
3    02-00-0c-00-03-00 to 02-00-0c-00-03-80  NA
4    02-00-0c-00-04-00 to 02-00-0c-00-04-80  NA

Mod  Server-IP        Server-UUID                           Server-Name
---  ---------------  ------------------------------------  --------------------
1    10.2.11.5        NA                                    NA
3    10.2.11.12       422c745e-64b2-09d0-e470-dcc9cdacb560  esx02.vpod.local
4    10.2.11.11       422c9ae2-9381-e104-6a91-2f2815f5028d  esx01.vpod.local

* this terminal session
Nexus1000V#

In the output of the show module command you can see several familiar components:
- Module 1 and module 2 are reserved for the Virtual Supervisor Module (VSM). The Cisco Nexus 1000V supports a model where the supervisor can run in an active/standby high-availability pair. Your lab's pod is only equipped with a primary VSM, not a secondary VSM.
- Module 3 and module 4 each represent a Virtual Ethernet Module (VEM). As shown at the bottom of the output, each VEM corresponds to a physical ESX host, identified by the server IP address and name. This mapping of a virtual line-card to a physical server eases the communication between the network and server teams.
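
The VEM-to-host mapping can also be listed on its own. The sketch below reuses the UUIDs from the show module output above; the exact columns and status values may vary by release:

Nexus1000V# show module vem mapping
Mod  Status      UUID                                  License Status
---  ----------  ------------------------------------  --------------
3    powered-up  422c745e-64b2-09d0-e470-dcc9cdacb560  licensed
4    powered-up  422c9ae2-9381-e104-6a91-2f2815f5028d  licensed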

3. Let's have a look at the interfaces next by using the show interface brief command:
Nexus1000V# show interface brief

--------------------------------------------------------------------------------
Port   VRF          Status IP Address                              Speed    MTU
--------------------------------------------------------------------------------
mgmt0  --           up     10.2.11.5                               1000     1500

--------------------------------------------------------------------------------
Ethernet        VLAN  Type Mode   Status  Reason                 Speed     Port
Interface                                                                  Ch #
--------------------------------------------------------------------------------
Eth3/4          1     eth  trunk  up      none                   1000(D)   1
Eth4/4          1     eth  trunk  up      none                   1000(D)   2

--------------------------------------------------------------------------------
Port-channel    VLAN  Type Mode   Status  Reason                 Speed   Protocol
Interface
--------------------------------------------------------------------------------
Po1             1     eth  trunk  up      none                   a-1000(D)  none
Po2             1     eth  trunk  up      none                   a-1000(D)  none

--------------------------------------------------------------------------------
Interface       VLAN  Type Mode   Status  Reason                 MTU
--------------------------------------------------------------------------------
Veth1           11    virt access up      none                   1500
Veth2           11    virt access up      none                   1500

--------------------------------------------------------------------------------
Port   VRF          Status IP Address                              Speed    MTU
--------------------------------------------------------------------------------
ctrl0  --           up     --                                      1000     1500
Nexus1000V#

The output of the command show interface brief shows you the different interface types that are used within the Cisco Nexus 1000V:

- mgmt0: This interface is used for out-of-band management and corresponds to the second vNIC of the VSM.
- Ethernet interfaces: These are physical Ethernet interfaces and correspond to the physical NICs of the ESX hosts. The numbering scheme lets you easily identify the corresponding module and NIC.
- Port-channels: Ethernet interfaces can be bound manually or automatically through vPC-HM into port-channels. When using the uplink port-profile configuration mac-pinning, there is no need to configure a traditional port-channel on the upstream switch(es). Nonetheless, a virtual port-channel is still formed on the Nexus 1000V.

- Veths: Virtual Ethernet interfaces connect to VMs and are independent of the host that the VM runs on. The numbering scheme therefore does not include any module information. The Veth identifier remains with the VM during its entire lifetime, even while the VM is powered down.
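
Since Po1 and Po2 were formed automatically through mac-pinning, you can inspect them as on any other Cisco switch, for example with:

Nexus1000V# show port-channel summary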

4. Verify on the Nexus 1000V CLI that the corresponding Virtual Ethernet interface has been created for the two virtual machines by issuing the command show interface virtual.
Nexus1000V# show interface virtual

-------------------------------------------------------------------------------
Port        Adapter        Owner                    Mod   Host
-------------------------------------------------------------------------------
Veth1       Net Adapter 1  Windows 7 - A            3     esx01.vpod.local
Veth2       Net Adapter 1  Windows 7 - B            4     esx02.vpod.local
Nexus1000V#

The output of the above command gives you the mapping of each VM name to its Veth interface.

5. On top of that, the network administrator can see at any given time which Veth interfaces are in use and which port-profile each of them is attached to by using the show port-profile usage command:
Nexus1000V# show port-profile usage

-------------------------------------------------------------------------------
Port Profile          Port        Adapter        Owner
-------------------------------------------------------------------------------
Uplink                Po1
                      Po2
                      Eth3/4      vmnic3         esx01.vpod.local
                      Eth4/4      vmnic3         esx02.vpod.local
VM-Client             Veth1       Net Adapter 1  Windows 7 A
                      Veth2       Net Adapter 1  Windows 7 B
Nexus1000V#
Note:

The network administrator can manage the shown virtual Ethernet interfaces the same way as physical interfaces on a Cisco switch.

Congratulations! You have successfully added Virtual Machines to the Nexus 1000V distributed virtual switch! As a result the network team now has complete insight into the network part of the Server Virtualization infrastructure.

VMotion and Visibility


The next section demonstrates the configuration of the VMkernel VMotion interface in order to perform a successful VMotion. In a second step, the continuous visibility of virtual machines during VMotion is demonstrated. This lab step consists of the following:
- Configure a VMotion network connection
- Perform a VMotion and note the Veth mapping

VMotion Configuration
You will now create a VMkernel interface that will be used for VMotion. VMotion is a well-known VMware feature which allows users to move a virtual machine from one physical host to another while the VM remains operational. Therefore this feature is also called live migration. In this step you will configure the VMkernel VMotion interface for both servers.

1. The first step is to provision a port-profile for the VMotion interface. Let's call this port-profile VMotion:
Nexus1000V# conf t
Nexus1000V(config)# port-profile VMotion
Nexus1000V(config-port-prof)# vmware port-group
Nexus1000V(config-port-prof)# switchport mode access
Nexus1000V(config-port-prof)# switchport access vlan 12
Nexus1000V(config-port-prof)# no shutdown
Nexus1000V(config-port-prof)# system vlan 12
Nexus1000V(config-port-prof)# state enabled

2. Go to the Home -> Inventory -> Hosts and Clusters tab and choose the first server esx01 of your pod.

3. Click on the Configuration tab and, within the Hardware area, on Networking. Under View, choose Distributed Virtual Switch.

4. In order to add the VMkernel VMotion interface, choose Manage Virtual Adapters... and afterwards click on Add within the Manage Virtual Adapters dialog. In the Add Virtual Adapter wizard choose to create a New Virtual Adapter, then click on the Next button.

5. As the Virtual Adapter Type you can only choose VMkernel. Click Next.
6. Choose VMotion as the port group name. Also check the box next to "Use this virtual NIC for VMotion" to enable VMotion on this interface. Click Next.

7. Configure the IP settings for the VMotion interface. For the host esx01 choose the IP address 192.168.12.11 and for host esx02 the IP address 192.168.12.12. For both hosts choose the subnet mask 255.255.255.0. Do not change the VMkernel Default Gateway and click on the Next button.

8. Before finishing the wizard you are presented with an overview of your settings. Verify the correctness of these settings and choose Finish.
9. You have now successfully added the VMkernel VMotion interface. Close the Manage Virtual Adapters window.

Congratulations! You successfully configured the VMkernel VMotion interface leveraging the Cisco Nexus 1000V.

10. Repeat steps 3 to 8 to configure the VMkernel VMotion interface on the second host esx02.
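
If you have shell access to one of the ESX hosts (not part of this lab's workflow), connectivity between the two new VMotion interfaces can optionally be verified with vmkping; a sketch, assuming the addresses configured above:

[root@esx01]# vmkping 192.168.12.12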

Network Administrator's view of VMotion


An important attribute of the Nexus 1000V with regard to VMotion is that a VM keeps its virtual connection identifier throughout the VMotion process. This way a VMotion does not affect the interface policies, network management capabilities, or traceability of a VM from the perspective of the network administrator. Instead, the virtual machine keeps its Veth identifier across the VMotion process. Before VMotioning your pod's virtual machine, make note of its current Veth.

1. Prior to the VMotion, perform a lookup of the virtual interfaces in use with the command show interface virtual. This yields the following or similar results:
Nexus1000V# show interface virtual

-------------------------------------------------------------------------------
Port        Adapter        Owner                    Mod   Host
-------------------------------------------------------------------------------
Veth1       Net Adapter 1  Windows 7 - A            3     esx01.vpod.local
Veth2       Net Adapter 1  Windows 7 - B            4     esx02.vpod.local
Veth3       vmk2           VMware VMkernel          3     esx01.vpod.local
Veth4       vmk2           VMware VMkernel          4     esx02.vpod.local
Nexus1000V#

2. Make note of the Veth port as well as the module and ESX hostname currently associated with the virtual machine.

Perform a VMotion
Test your previous VMotion configuration by performing a VMotion.

1. Go to the Home -> Inventory -> Hosts and Clusters tab.
2. Drag & drop the virtual machine Windows 7 A from the first ESX host of your setup to your second ESX host.

3. Walk through the appearing VMotion wizard by leaving the default settings, clicking on Next, and finally Finish.

4. Wait for the VMotion to successfully complete.

5. Open the Virtual Machine Console again and verify that the Virtual Machine still has network connectivity by reloading the default webpage.

Verify the new Network Administrator's view of the Virtual Machine


After a successful VMotion, the expected behavior is that the virtual machine can still be seen and managed by the network administrator through the same virtual Ethernet port. Verify that this is the case.

1. Again use the show interface virtual command to perform a lookup of the virtual interfaces in use:
Nexus1000V# show interface virtual

-------------------------------------------------------------------------------
Port        Adapter        Owner                    Mod   Host
-------------------------------------------------------------------------------
Veth1       Net Adapter 1  Windows 7 - A            4     esx02.vpod.local
Veth2       Net Adapter 1  Windows 7 - B            4     esx02.vpod.local
Veth3       vmk2           VMware VMkernel          3     esx01.vpod.local
Veth4       vmk2           VMware VMkernel          4     esx02.vpod.local
Nexus1000V#

Congratulations! You are now able to trace a VM moving across physical ESX hosts via VMotion. The resulting output shows you the current mapping of Veth ports to virtual machines. By comparing the output before and after the VMotion process, you can see that the virtual machine still uses the same Veth port, while the module and host change. The Cisco Nexus 1000V provides all the monitoring capabilities that the network team is used to for a virtual Ethernet port, even while the VM attached to it is live-migrated. On top of that, all the configuration and statistics follow the VM across the VMotion process. Please migrate the virtual machine Windows 7 A back to the host esx01.vpod.local before progressing to the next lab step.
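
To look at those persistent counters directly, a Veth can be queried like any physical interface; for example, using Veth1 from the output above:

Nexus1000V# show interface vethernet 1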

Policy-based virtual machine connectivity


After the basic functionality of the Cisco Nexus 1000V distributed virtual switch has been demonstrated, it is time to explore some of the more advanced features. This section demonstrates the policy-based virtual machine connectivity capabilities in the form of IP-based filtering. The steps of this section include:
- Configure an IP-based access list
- Apply the access list to a port-group
- Verify the functionality of the access list

Verify open ports within your virtual machine


In a previous section of this lab guide it was demonstrated that the virtual machine inside your pod, which is connected to the Cisco Nexus 1000V switch, has basic connectivity to the upstream network. This could be seen by opening the webpage www.cisco.com. At the same time, this also means that the VM is accessible by hosts on the upstream network and might be at risk of various network-based attacks. To demonstrate this, the virtual machine inside your pod has two Windows-specific ports open which might be used for attacks. Before configuring the access list to block access, verify that your virtual machine currently has two open ports:

1. Open the Virtual Machine Console of the VM Windows 7 A inside your pod.

2. Click on the Cisco Systems, Inc. icon to load the default webpage and choose the link for the Host Port-Status Analyzer.

3. Verify that ports 135 (Windows RPC) and 445 (Windows CIFS) are open.

Configuration of an IP-based access list


In this lab step you will create an IP-based access list which blocks access to these two ports.

1. Using the CLI, create an access list within the Cisco Nexus 1000V VSM. The name ProtectVM is chosen for this access list:
Nexus1000V# conf t
Nexus1000V(config)# ip access-list ProtectVM
Nexus1000V(config-acl)# deny tcp any any eq 135
Nexus1000V(config-acl)# deny tcp any any eq 445
Nexus1000V(config-acl)# permit ip any any

This access list denies all TCP traffic to ports 135 (Windows RPC) and 445 (Windows CIFS) while permitting any other IP traffic.

2. You will now apply the access list ProtectVM as an outbound rule to the virtual Ethernet interfaces (Veths) of the existing VMs running Windows 7. Here the concept of port-profiles comes in very handy in simplifying the work: as the Veth interfaces of the Windows 7 VMs leverage the port profile VM-Client, adding the access list to this port profile will automatically update all associated Veth interfaces and assign the access list to them.
Nexus1000V(config-acl)# port-profile VM-Client
Nexus1000V(config-port-prof)# ip port access-group ProtectVM out

As a result access to both open ports within your Virtual Machine has been blocked.
Note:

The directions in and out of an ACL have to be seen from the perspective of the Virtual Ethernet Module (VEM), not the virtual machine. Thus in specifies traffic flowing into the VEM from the VM, while out specifies traffic flowing out of the VEM to the VM.
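
Optionally, you can confirm the ACL and its binding from the VSM CLI before re-testing; a quick check, assuming the names used above:

Nexus1000V# show ip access-lists ProtectVM
Nexus1000V# show port-profile name VM-Client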

Verify the application of the IP-based access list


Verify that both ports that were open before have been blocked:

1. Again, open the Virtual Machine Console.
2. Click on the Cisco Systems, Inc. icon to load the default webpage and choose the link for the Host Port-Status Analyzer.
3. Verify that ports 135 (Windows RPC) and 445 (Windows CIFS) are now filtered.

Congratulations! You have successfully created, applied, and verified an IP-based access list. This exercise demonstrated that the features usually applied to a physical switch interface can now be applied to a Veth, and that the concept of port-profiles makes network configuration much easier: changes to a port-profile are propagated on the fly to all the VMs using it.

Mobile VM Security
Another key differentiator of the Cisco Nexus 1000V over the VMware DVS is its advanced Private VLAN capability. This section demonstrates the capabilities of Private VLANs by placing individual VMs in a Private VLAN while utilizing the uplink port as a promiscuous PVLAN trunk. Thus the VMs will not be able to communicate among each other, but can only communicate with the default gateway and any peer beyond the default gateway. The upstream switch does not need to be configured for this. This can, for example, be used to deploy server virtualization within a DMZ. The content of this step includes:
- Configure Private VLANs
- Remove the Private VLAN configuration

Private VLANs
This section demonstrates the configuration of a Private VLAN towards the connected VMs. First we will update the VLAN to run in isolated mode. Then we will configure the VM and uplink port-profiles to do the translation between the isolated and the promiscuous VLAN. In order to avoid having to configure the PVLAN merging on the upstream switch, the new feature of promiscuous PVLAN trunks is showcased on the uplink port. This means that the primary and secondary VLANs will be merged before leaving the uplink port.
Note:

When a VLAN is specified to be a primary VLAN for usage with private VLANs, it instantly becomes unusable as a regular VLAN. As your virtual machines are still using VLAN 11 for network connectivity, your VMs will encounter connectivity issues while you perform the configuration steps below. It is therefore recommended not to change an in-use VLAN from non-PVLAN usage to PVLAN usage in a production environment.

1. First, you will prepare the primary and secondary VLAN on the VSM.
Nexus1000V# conf t
Nexus1000V(config)# vlan 11
Nexus1000V(config-vlan)# private-vlan primary
Nexus1000V(config-vlan)# vlan 111
Nexus1000V(config-vlan)# private-vlan isolated
Nexus1000V(config-vlan)# vlan 11
Nexus1000V(config-vlan)# private-vlan association add 111

You can check that the configuration has been successfully applied by issuing the show vlan private-vlan command:
Nexus1000V# show vlan private-vlan

Primary  Secondary  Type             Ports
-------  ---------  ---------------  ------------------------------------------
11       111        isolated

2. As a next step, configure the uplink port profile as a promiscuous PVLAN trunk with the primary VLAN 11 and the secondary VLAN 111. The promiscuous trunk mode itself was already configured during the creation of the Uplink port-profile, so it does not have to be configured again; only the mapping needs to be added:
Nexus1000V(config)# port-profile type ethernet Uplink
Nexus1000V(config-port-prof)# switchport private-vlan mapping trunk 11 111

3. After this step has been completed, configure the port profile VM-pvlan, which connects the virtual machines, as a private VLAN port in host mode, thus isolating the individual VMs from each other:
Nexus1000V(config)# port-profile VM-pvlan
Nexus1000V(config-port-prof)# vmware port-group
Nexus1000V(config-port-prof)# switchport mode private-vlan host
Nexus1000V(config-port-prof)# switchport private-vlan host-association 11 111
Nexus1000V(config-port-prof)# no shutdown
Nexus1000V(config-port-prof)# state enabled

4. Apply the port-profile to both Windows 7 A and Windows 7 B.

5. After applying a new port-profile to a virtual machine, a new Veth interface is created. Therefore the VMs Windows 7 A and Windows 7 B will no longer be connected to Veth1 and Veth2, respectively, as shown in a previous lab step. Verify the current Veth mapping of the VMs and the usage of the PVLAN:
Nexus1000V(config-port)# show interface virtual

-------------------------------------------------------------------------------
Port        Adapter        Owner                    Mod   Host
-------------------------------------------------------------------------------
Veth3       vmk2           VMware VMkernel          3     esx01.vpod.local
Veth4       vmk2           VMware VMkernel          4     esx02.vpod.local
Veth5       Net Adapter 1  Windows 7 - A            3     esx01.vpod.local
Veth6       Net Adapter 1  Windows 7 - B            4     esx02.vpod.local

Nexus1000V(config-port)# show interface brief

--------------------------------------------------------------------------------
Port   VRF          Status IP Address                              Speed    MTU
--------------------------------------------------------------------------------
mgmt0  --           up     10.2.11.5                               1000     1500

--------------------------------------------------------------------------------
Ethernet        VLAN  Type Mode   Status  Reason                 Speed     Port
Interface                                                                  Ch #
--------------------------------------------------------------------------------
Eth3/4          1     eth  trunk  up      none                   1000(D)   1
Eth4/4          1     eth  trunk  up      none                   1000(D)   2

--------------------------------------------------------------------------------
Port-channel    VLAN  Type Mode   Status  Reason                 Speed   Protocol
Interface
--------------------------------------------------------------------------------
Po1             1     eth  trunk  up      none                   a-1000(D)  none
Po2             1     eth  trunk  up      none                   a-1000(D)  none

--------------------------------------------------------------------------------
Interface       VLAN  Type Mode   Status  Reason                 MTU
--------------------------------------------------------------------------------
Veth1           11    virt access down    nonParticipating       1500
Veth2           11    virt access down    nonParticipating       1500
Veth3           12    virt access up      none                   1500
Veth4           12    virt access up      none                   1500
Veth5           111   virt pvlan  up      none                   1500
Veth6           111   virt pvlan  up      none                   1500

--------------------------------------------------------------------------------
Port   VRF          Status IP Address                              Speed    MTU
--------------------------------------------------------------------------------
ctrl0  --           up     --                                      1000     1500

6. The expected behavior of the above configuration is that the first two virtual machines of your pod are both still able to reach the default gateway and all hosts beyond this gateway; however, they should not be able to reach each other. Verify this by pinging the default gateway 192.168.1.1 from Windows 7 A: log in to the VM, click on the Command Prompt icon on the desktop, and issue the command ping 192.168.1.1.

Now try to ping Windows 7 B from Windows 7 A. The IP address of Windows 7 B is 192.168.1.12. Issue the command ping 192.168.1.12.

As expected, the ping times out.



7. You can now change the isolated VLAN to a community VLAN. Hosts in a community VLAN can talk to each other as well as to the promiscuous port; however, they cannot talk to an isolated port.
Nexus1000V(config-port)# vlan 111
Nexus1000V(config-vlan)# private-vlan community
Note:

The virtual machines using the port-profile VM-pvlan will lose network connectivity for a brief moment (interface flap) when changing the PVLAN mode.

8. Again try to ping the second VM from the first. This time the ping will work.

Congratulations, you have successfully configured a Private VLAN with a promiscuous PVLAN trunk on the uplink! This feature allows you to utilize server virtualization in new areas, such as the deployment of a DMZ. Feel free to move the VMs around the two ESX hosts via VMotion. You will notice that no matter where the two VMs reside, the network policies are enforced the same way.

Removing the Private VLAN configuration


Before continuing with further lab steps, please remove the Private VLAN configuration from VLAN 11 again. The previously created port-profile VM-pvlan will become unusable, and your VMs will therefore lose connectivity until they are re-attached to VM-Client in the next section.

1. Remove the configuration of VLAN 11 as a primary PVLAN:
Nexus1000V# conf t
Nexus1000V(config)# vlan 11
Nexus1000V(config-vlan)# no private-vlan primary
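
The lab only requires removing the primary designation. If you also want to clean up the remaining PVLAN artifacts, the following sketch shows the additional steps (assuming the names used above; these commands may be rejected while the corresponding ports are still in use, so only remove the port-profile once no interfaces use it anymore):

Nexus1000V(config)# vlan 111
Nexus1000V(config-vlan)# no private-vlan community
Nexus1000V(config-vlan)# exit
Nexus1000V(config)# port-profile type ethernet Uplink
Nexus1000V(config-port-prof)# no switchport private-vlan mapping trunk 11 111
Nexus1000V(config-port-prof)# exit
Nexus1000V(config)# no port-profile VM-pvlan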

Traffic Inspection of individual Virtual Machines


One of the main drawbacks of server virtualization up until today has been the lack of visibility into the VM from a network perspective. Features such as VMotion especially added to this lack of visibility. Advanced features of the Cisco Nexus 1000V, such as ERSPAN, give the network administrator back the capability to inspect the traffic of a virtual machine within the virtual network infrastructure. In this lab step, you will configure an ERSPAN session to inspect the traffic of a virtual Ethernet interface connected to a certain VM. The ERSPAN session will terminate in another virtual machine running Wireshark. In a second step you will then live-migrate the VM using VMotion and observe how the monitor session still spans the traffic to Wireshark. The different steps include:
- Configure an ERSPAN-type monitor session
- Create a port-profile to enable the SPAN traffic to be sent to a virtual machine containing our sniffing application (Wireshark)
- Verify the configuration of the ERSPAN session
- Verify that the Wireshark VM receives the traffic

Configure an ERSPAN monitor session


1. Apply the VM-Client port-profile back to both Windows 7 A and Windows 7 B.

Note:

After configuring the VMs to use the original port-profile VM-Client again, the Veth mapping will correspond to the original mapping as outlined in the "Attaching a Virtual Machine to the Network" lab step.

Create an ERSPAN Session on the Nexus 1000V


As the Cisco Nexus 1000V VSM runs NX-OS, the configuration of an ERSPAN session is equivalent to the configuration of this feature on other products of the Cisco Nexus platform. The only difference is the ability to select a Veth interface as a source.

1. Before creating the ERSPAN session, identify the Veth port of the VM to be spanned by using show interface virtual:
Nexus1000V# show interface virtual

-------------------------------------------------------------------------------
Port        Adapter        Owner                    Mod   Host
-------------------------------------------------------------------------------
Veth1       vmk2           VMware VMkernel          4     esx01.vpod.local
Veth2       vmk2           VMware VMkernel          3     esx02.vpod.local
Veth3       Net Adapter 1  Windows 7 - A            3     esx02.vpod.local
Veth4       Net Adapter 1  Windows 7 - B            3     esx02.vpod.local
Nexus1000V#

Find out which Veth interface is being used by the VM named Windows 7 A. In the above example it is associated with Veth3.
Note:

Changing the association of a virtual machine to a different port-group will create a new Veth interface for this VM. Should you change the port-group, you would therefore have to go through the following steps again and update the ERSPAN configuration with the new Veth interface information.

2. In the VSM, configure a new ERSPAN session by issuing the commands below. Note that vethZZ corresponds to the Veth number of Windows 7 A as identified in step 1. In the above case, ZZ would be replaced by 3.
Nexus1000V# conf t
Nexus1000V(config)# monitor session 1 type erspan-source
Nexus1000V(config-erspan-src)# description Monitor Windows 7 - A VM
Nexus1000V(config-erspan-src)# source interface vethZZ both
Nexus1000V(config-erspan-src)# destination ip 192.168.1.12
Nexus1000V(config-erspan-src)# erspan-id 999
Nexus1000V(config-erspan-src)# mtu 128
Nexus1000V(config-erspan-src)# no shut

192.168.1.12 is the IP address of Windows 7 B. We will use this VM as our ERSPAN target, where the packet sniffer is installed.
Note:

One of the powerful features of the Nexus 1000V is the ability to use truncated ERSPAN. Unlike a hardware switch, the Nexus 1000V, being a software switch, can change the size of the ERSPAN packets so that only the useful information desired by the network administrator is received. By setting the MTU to 128, only the GRE header plus part of the packet header is sent, so the link is not saturated with too much information.

Configuring a VMkernel Interface to transport the ERSPAN Session


The Nexus 1000V leverages a VMkernel interface to transport the SPAN traffic when using ERSPAN. In this lab step, define a new port-profile which will be used by the VMkernel interface to send the ERSPAN traffic. We could configure the interface directly, but leveraging the port-profile concept is a more scalable approach: should you, for example, need to update the VLAN used for the ERSPAN traffic, the change can easily be accomplished in one place.

1. Configure a new port-profile for the VMkernel interface used for ERSPAN:
Nexus1000V# conf t
Nexus1000V(config)# port-profile ERSPAN
Nexus1000V(config-port)# vmware port-group
Nexus1000V(config-port)# capability l3control
Nexus1000V(config-port)# switchport mode access
Nexus1000V(config-port)# switchport access vlan 11
Nexus1000V(config-port)# no shutdown
Nexus1000V(config-port)# system vlan 11
Nexus1000V(config-port)# state enabled
Note:

The keyword capability l3control indicates to the Cisco Nexus 1000V that the interface will be used to carry L3 traffic.

2. Create a new VMkernel interface using vCenter and apply the newly created port-profile.

3. Choose VMkernel as the Virtual Adapter Type.

4. Select the ERSPAN port-profile that you created before.


5. Configure the IP settings for the VMkernel ERSPAN interface. Use the IP address 192.168.1.101 with a subnet mask of 255.255.255.0 on the host esx01, and the IP address 192.168.1.102 with the same subnet mask of 255.255.255.0 on the host esx02.

6. Click on Next and Finish.
7. Repeat steps 2 to 6 to add the new VMkernel ERSPAN interface on the second host as well.

Congratulations! You configured your first ERSPAN session. Now you can monitor and troubleshoot the traffic of a particular virtual machine. As the source of the ERSPAN session is a Veth interface, you will still be able to span traffic even if the VM moves to another host due to a VMotion.

Test the session and VMotion the VM


1. You can issue the command show monitor session 1 to verify that the ERSPAN session is up and working:
Nexus1000V# show monitor session 1
   session 1
---------------
description       : "Monitor Windows 7 - A VM"
type              : erspan-source
state             : up
source intf       :
    rx            : Veth3
    tx            : Veth3
    both          : Veth3
source VLANs      :
    rx            :
    tx            :
    both          :
filter VLANs      : filter not specified
destination IP    : 192.168.1.12
ERSPAN ID         : 999
ERSPAN TTL        : 64
ERSPAN IP Prec.   : 0
ERSPAN DSCP       : 0
ERSPAN MTU        : 128
ERSPAN Header Type: 2

2. From the Windows 7 A console, issue a continuous ping to the default gateway at 192.168.1.1. To do so, type ping -t 192.168.1.1.

3. Open the console to control the VM called Windows 7 B.
4. Start Wireshark by double-clicking its icon on the desktop. Click on Intel(R) PRO/1000 MT Network Connection under Interface List to start capturing packets.

5. You will see various kinds of traffic received by the sniffer. Fine-tune the selection of traffic by applying the filter erspan.spanid == 999 && (icmp.type == 0 || icmp.type == 8).

As a result of the filter you will only see the ICMP requests and replies received via ERSPAN.

6. Initiate a VMotion of Windows 7 A from one ESX host to the other by dragging the VM icon to the new ESX host. Observe that even during the VMotion, Wireshark keeps receiving the spanned traffic. Only while the VM Windows 7 A is stunned for a very brief moment (at around 78% progress) as part of the VMotion will you lose a minimal number of packets (1-2). This is the moment when VMware briefly halts (stuns) all components such as CPU and I/O (NICs) and transfers control from the original VM to the VMotioned VM.

Congratulations! You have successfully monitored the traffic of a particular VM using ERSPAN. Furthermore, you saw that you can do this even across a VMotion.

Conclusion
You are now familiar with the Nexus 1000V. As you have experienced during the lab, the Nexus 1000V is based on three important pillars:
- Security
- Mobility of the network
- Non-disruptive operational model

In this lab you:
- Got familiar with the Cisco Nexus 1000V Distributed Virtual Switch for VMware ESX
  o Installed and configured the Nexus 1000V
  o Added the physical ESX hosts to the DVS
  o Attached virtual machines to the Distributed Virtual Switch
  o Tested the VMotion capability
- Familiarized yourself with advanced features of the Cisco Nexus 1000V
  o IP-based access lists
  o Configured an ERSPAN session to troubleshoot VM traffic
  o Configured Private VLANs

Feedback
We would like to improve this lab to better suit your needs. To do so, we need your feedback. Please take 5 minutes to complete the online feedback for this lab. Just click on the link below and answer the online questionnaire. Online Feedback

Lab proctors
Christian Elsen Kishan Pallapothu Cuong Tran

For More Information


For more information about the Cisco Nexus 1000V, visit http://www.cisco.com/go/nexus1000v or contact your local Cisco account representative. For more information about the VMware vNetwork capabilities augmenting physical networking, go to: http://www.vmware.com/technology/virtual-datacenter-os/infrastructure/vnetwork.html

Revision: 1.1

Cisco Systems, Inc. 170 West Tasman Drive San Jose, CA 95134-1706 USA www.cisco.com Tel: 408 526-4000 800 553-NETS (6387) Fax: 408 527-0883

VMware, Inc 3401 Hillview Ave Palo Alto, CA 94304 USA www.vmware.com Tel: 1-877-486-9273 or 650-427-5000 Fax: 650-427-5001

Copyright 2008. VMware, Inc. All rights reserved. Protected by one or more U.S. Patent Nos. 6,397,242, 6,496,847, 6,704,925, 6,711,672, 6,725,289, 6,735,601, 6,785,886, 6,789,156, 6,795,966, 6,880,022, 6,944,699, 6,961,806, 6,961,941, 7,069,413, 7,082,598, 7,089,377, 7,111,086, 7,111,145, 7,117,481, 7,149,843, 7,155,558, 7,222,221, 7,260,815, 7,260,820, 7,269,683, 7,275,136, 7,277,998, 7,277,999, 7,278,030, 7,281,102, 7,290,253, 7,356,679 and patents pending. Cisco, the Cisco logo, and Cisco Systems are registered trademarks or trademarks of Cisco Systems, Inc. and/or its affiliates in the United States and certain other countries. All other trademarks mentioned in this document or Website are the property of their respective owners. The use of the word partner does not imply a partnership relationship between Cisco and any other company. (0807R) 09/08
