1 Overview
This TOI outlines the features of Microsoft Hyper-V technology and how it relates to NetApp
storage. Hyper-V is a virtualization technology consisting of a hypervisor that resides between
the physical hardware and the operating system kernel. Hyper-V uses a Parent partition, similar
to Dom0 in Xen, which hosts the Guest partitions. The Parent partition is created automatically
when the Hyper-V server role is installed.
Hyper-V also utilizes a new bus called the VMBUS, which is an in-memory kernel bus. This allows
IO to flow from the Guest through the Parent drivers as fast as the memory bus of the physical
machine. This is a dramatic improvement over previous technologies, where the VMBUS is not
used and IO must traverse user mode and then context switch to kernel mode to reach the Parent
driver stack. Operating systems that use the VMBUS have enlightenments, which Microsoft
defines as modifications to the original Guest OS that take advantage of hardware acceleration.
The following lists show the currently supported enlightened and un-enlightened operating systems.
Enlightened
o Windows Server 2008 x86 and x64
o Windows Vista SP1 x86 and x64
o SUSE Linux Enterprise Server 10 SP1 x86 and x64
Un-Enlightened
o Other Linux
o Solaris, SCO
o Windows Server 2003, XP, 2000
Un-enlightened operating systems use a virtualized hypervisor layer, which passes IO through the
parent process that hosts the Guest. This requires a context switch to kernel mode to actually
send the IO.
When the VMBUS is used, it consists of a VSP (Virtual Service Provider) and a VSC (Virtual
Service Consumer), both of which interface directly with the VMBUS. Each VSP handles a specific
type of IO or IRP, such as storage, network, or USB.
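The VSP/VSC split can be illustrated with a small sketch. This is a toy model, not Microsoft code; every name below is hypothetical and exists only to mirror the terms used above (VSC, VSP, VMBUS, emulation layer, VM Worker Process).

```python
# Toy model of the two Hyper-V I/O paths described above. Not a real API;
# the step names are this document's vocabulary, nothing more.

def route_io(guest_enlightened: bool) -> list:
    """Return the path an I/O request takes from the Guest to the Parent
    driver stack, depending on whether the Guest OS is enlightened."""
    if guest_enlightened:
        # Enlightened Guest: the VSC places the request on the in-memory
        # VMBUS, where the matching VSP (storage, network, ...) services it
        # in the Parent kernel -- no user-mode hop, no extra context switch.
        return ["Guest VSC", "VMBUS", "Parent VSP", "Parent driver stack"]
    # Un-enlightened Guest: I/O is handled by the emulation layer and the
    # user-mode VM Worker Process in the Parent, which then context
    # switches to kernel mode to issue the real I/O.
    return ["Guest device emulation",
            "Parent VM Worker Process (user mode)",
            "Context switch to kernel mode",
            "Parent driver stack"]
```

The model makes the cost difference visible: the un-enlightened path has an extra user-mode stage and a context switch that the VMBUS path avoids.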
2 Related Documents
All related documents can be found in the Windows SAN Interop SharePoint site.
3 Contents
1 Overview
2 Related Documents
3 Contents
4 LUN Configurations
4.1 Boot LUN Configurations
4.2 Data LUNs
4.3 Failover Clustering
4.4 LUN Types
5 Deployment
5.1 Packaging
5.2 Installation
6 Feature Overview
6.1 Architecture
6.2 Failover Clustering
6.3 WHU 5.0
7 User Interfaces
7.1 Command Line Interface
7.2 Graphical Interface
8 Programmatic Interfaces
8.1 API
8.2 Wire protocols
9 RAS
9.1 Reliability
9.2 Availability
9.3 Supportability
9.4 Error reporting
10 Performance
11 Limitations
12 Revision History
13 Approvals
4 LUN Configurations
The following sections list the recommended best practices and caveats for using Hyper-V in a
NetApp environment.
NOTE: NetApp recommends using a single LUN per virtual machine Guest using a VHD, with the
configuration files stored on the same physical LUN. This allows automatic expansion of the LUN
if the VHD is set to auto-grow (it should be) and allows consistent snapshots of both the
configuration files and the VHD simultaneously.
NOTE-1: An alternative configuration exists for customers running many virtual machines. A
single LUN per virtual machine becomes unmanageable for customers running tens to hundreds
of virtual machines. In this case, the user may place multiple virtual machines, along with their
associated configuration files, on a single LUN. The drawback of this configuration is that when
the virtual machines are used in an HA environment (Failover Cluster), all virtual machines on the
LUN will migrate if one of them fails.
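As a rough aid for the single-LUN-per-VM layout recommended above, the following sketch sizes a LUN for an auto-grow VHD plus its configuration files. The helper name and the 20% headroom default are assumptions chosen for illustration, not NetApp-documented values.

```python
# Hedged sizing sketch: a LUN holding an auto-grow VHD should accommodate
# the VHD's *maximum* size, the VM configuration files, and some free-space
# headroom. The 20% headroom default is an illustrative assumption.

def required_lun_size_gb(vhd_max_gb, config_gb=1.0, headroom=0.20):
    """Return a suggested LUN size in GB for one VM: VHD maximum size plus
    configuration files, scaled up by the headroom fraction."""
    return round((vhd_max_gb + config_gb) * (1 + headroom), 1)

# Example: a VHD that can grow to 100 GB, with ~1 GB of config files,
# suggests a LUN of about 121.2 GB.
```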
5 Deployment
The deployment of Hyper-V must follow the recommended deployment configurations. This
includes best practices by both NetApp and Microsoft.
The NetApp best practices will be defined by the testing results and dependent applications such
as SnapDrive for Windows.
5.1 Packaging
WHU 5.0 will be a self-contained .msi or .exe downloaded via the NOW website. See the
WHU 5.0 documentation and functional spec for more information.
Hyper-V is packaged as part of Windows Server 2008 x64 only. It is not available for any other
version of Windows. The RTM release of Hyper-V will be available through a separate download
and eventually through Microsoft Update.
5.2 Installation
Hyper-V is installed via the Add Role wizard, which is accessible from the Server Manager MMC.
Once the Hyper-V role is installed, a reboot is required. When the server boots after installation,
the hypervisor is loaded and the Hyper-V management console is available from the
Administrative Tools menu.
6 Feature Overview
6.1 Architecture
The following diagram displays the overall architecture of Hyper-V. Enlightened operating
systems, such as Server 2008 and Vista SP1, have the VSC interface directly with the VMBUS
and then with the VSP in the Parent kernel. For hypervisor-aware Linux distributions, there is a
Linux VSC, which interfaces with a hypercall adapter and then directly with the VMBUS. This
method also has much improved performance and should be equal to a Windows enlightened
operating system. The third configuration is a non-enlightened operating system, such as
Windows 2000 or XP, or a non-hypervisor-aware Linux distribution. This configuration uses a
hypervisor emulation layer, which does not utilize the VMBUS and instead uses the Parent VM
Worker Process to provide services to the guest. This causes a context switch to kernel mode for
IRP processing. This is shown in more detail in the next diagram.
As described above, there are two types of virtualization stacks. One utilizes the new VMBus and
synthetic devices while the other uses the standard hypervisor layer and emulated devices. The
following figures display each of these software stacks.
The I/O stack for enlightened operating systems has these characteristics:
No I/O traps
Little hypervisor involvement
Enlightened I/O makes requests of the storage server
The storage server passes the request on to either:
o The VHD parser
o The LUN directly (raw pass-through)
Very little context switching required
The most important enhancement to the enlightened I/O stack is the Fast Path Filter driver which
allows I/O to be sent directly to the parent partition via VMBUS and then to the corresponding
driver without context switches. This provides a pure kernel I/O path for the virtual machine.
Hyper-V v1 will support passive (quick) migration of virtual machines. This entails saving the
running machine's memory to a file on disk, moving the disk ownership to another cluster node,
and then resuming the virtual machine. In subsequent releases of Hyper-V, hot or live migration
of a virtual machine will be available, where the virtual machine continues to run and serve client
requests as it is being moved between cluster nodes.
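The quick-migration sequence above can be summarized as an ordered list of steps. This is an illustrative sketch using this document's wording, not a Microsoft API; the function and step names are hypothetical.

```python
# Illustrative sketch of the quick (passive) migration sequence described
# above: save memory, move disk ownership, resume. Names are hypothetical.

QUICK_MIGRATION_STEPS = (
    "save the running machine's memory to a file on the shared disk",
    "move disk ownership to the target cluster node",
    "resume the virtual machine on the target node",
)

def quick_migrate(vm_name):
    """Return the ordered actions a quick migration performs for vm_name.
    The Guest is suspended for the duration, unlike a future live migration."""
    return ["%s: %s" % (vm_name, step) for step in QUICK_MIGRATION_STEPS]
```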
7 User Interfaces
7.1 Command Line Interface
WHU 5.0 will provide the command line utilities standard in existing WHUs. In addition, WHU 5.0
will provide a new utility for displaying virtual machine status and configuration. For more details,
please see the WHU 5.0 functional spec.
There is no known command line interface for managing Hyper-V directly. A number of other
products such as SCVMM (System Center Virtual Machine Manager) do provide command line
interfaces but those are not covered by this functional spec.
8 Programmatic Interfaces
Microsoft provides programmatic interfaces for building tools and managing virtualized
environments. The known interfaces consist of a WMI provider and the Hypercall APIs.
8.1 API
WMI Provider for Virtualization
The WMI Provider was designed to configure, manage, and monitor the Hyper-V server and
associated virtual machines. The WMI Provider for virtualization documentation is not yet
complete and is still being updated by Microsoft. The most current WMI interface documentation
can be found at:
http://msdn2.microsoft.com/en-us/library/cc136992%28VS.85%29.aspx
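A minimal sketch of querying the virtualization WMI provider follows. The root\virtualization namespace and the Msvm_ComputerSystem class are documented by Microsoft; build_vm_query is a hypothetical helper that only constructs the WQL text. On a Hyper-V host, the query would be executed with a WMI client (for example, the third-party Python wmi package), which is not assumed here.

```python
# Hedged sketch: build (but do not execute) a WQL query against the
# virtualization WMI provider. Msvm_ComputerSystem instances represent
# both the host and the virtual machines; filtering on Caption =
# 'Virtual Machine' is a commonly used way to exclude the host entry.

WMI_NAMESPACE = r"root\virtualization"  # namespace of the WMI Provider for Virtualization

def build_vm_query(name=None):
    """Return a WQL query for virtual machines, optionally filtered by
    the VM's display name (ElementName)."""
    query = ("SELECT * FROM Msvm_ComputerSystem "
             "WHERE Caption = 'Virtual Machine'")
    if name is not None:
        query += " AND ElementName = '%s'" % name
    return query
```

On a Hyper-V host this string would be passed to a WMI connection opened against WMI_NAMESPACE; the query text itself is all that is sketched here.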
Hypercall API
The Hypercall APIs are intended to be used as low level interfaces to the Microsoft Hypervisor
and Hyper-V services. The Hypercall APIs provide interfaces for partition management, physical
hardware management, scheduling, partition state, etc. More information on the Hypercall APIs
can be found at:
http://www.microsoft.com/downloads/details.aspx?FamilyID=91E2E518-C62C-4FF2-8E50-3A37EA4100F5&displaylang=en
9 RAS
9.1 Reliability
All Guest partitions and the Hyper-V services must survive failure scenarios of the target, fabric,
and host protocols when the failure itself allows for recovery. For instance, a clustered target that
has one controller panic should not cause any IO error or application disruption.
9.2 Availability
The availability of Hyper-V is dependent on the storage stack, the virtual machine availability, and
the target. High availability is obtained by utilizing a clustered storage controller, clustered host or
Parent partitions, and MPIO within the host.
Clustered Storage Controller: provides for availability of LUN access during controller
failure.
Windows Failover Cluster: provides for availability of Guest partitions in the event of
physical host or Parent partition failure.
MPIO: provides for availability of IO during fabric (iSCSI or FCP) failure, storage
controller failure, or host hardware failure (such as an HBA or NIC).
Guest Partition Clustering: it is unclear whether this feature is supported; if so, it would
give the administrator greater control over application-level failover within Guest
partitions.
9.3 Supportability
Supportability of Hyper-V is obtained through the use of the WHU as well as standard Windows
reporting tools.
The WHU provides utilities for collecting data on the fabric, target, and host configurations. This
information is used to debug and support various configurations.
For highly available virtual machines, the Failover Cluster validation tool must be run and must
pass in order to obtain support from Microsoft and NetApp.
Hyper-V logs diagnostic events to the following event log channels:
Hyper-V-Config
Hyper-V-High-Availability
Hyper-V-Hypervisor
Hyper-V-Integration
Hyper-V-Network
Hyper-V-SynthNic
Hyper-V-SynthStor
Hyper-V-vhdsvc
Hyper-V-VMMS
Hyper-V-Worker
10 Performance
Initial performance analysis shows the following:
Windows Server 2008 with Hyper-V installed but not used: 15% drop in performance
o When the hypervisor is installed, the Parent is essentially virtualized. Processor
interrupts must be scheduled through the hypervisor, and applications are
interrupted much more frequently than with no hypervisor.
Windows Server 2008 Guest:
o Raw SCSI/IDE disk mapped to Guest: 40% drop in performance
This is still quite good, considering 600MB/s from a Guest
o iSCSI direct from the Guest: 70% drop in performance
This is caused by the lack of jumbo frame support and RSS in the virtual
switch
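The figures above can be sanity-checked with simple arithmetic: if Guest throughput is (1 - drop) of native, the native figure is recovered by inversion. The 600MB/s number comes from the text; the function name is hypothetical and the result is only a rough estimate.

```python
# Worked arithmetic for the performance figures above: invert the quoted
# percentage drop to estimate the implied native (non-virtualized) rate.

def implied_native_throughput(guest_mb_s, drop_pct):
    """Given measured Guest throughput and the % drop versus native,
    return the implied native throughput: guest = native * (1 - drop)."""
    return guest_mb_s / (1 - drop_pct / 100.0)

# Example: 600 MB/s from a Guest at a 40% drop implies roughly
# 1000 MB/s native throughput.
```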
11 Limitations
No CIFS Support
Cannot boot from SCSI devices
Cannot boot from iSCSI
V1 will not have live migration, but quick migration (suspend, move, resume)
Migration limited to Nodes within a Windows Failover Cluster
12 Revision History
13 Approvals