
VMware vCloud
Private vCloud Implementation Example
Version 1.6
Technical White Paper


Table of Contents
1. Purpose and Overview
   1.1 Executive Summary
   1.2 Business Requirements
   1.3 Use Cases
   1.4 Document Purpose and Assumptions
2. VMware vCloud Architecture Design Overview
   2.1 vCloud Definition
   2.2 vCloud Component Design Overview
3. vSphere Architecture Design Overview
   3.1 High Level Architecture
   3.2 Site Considerations
   3.3 Design Specifications
4. vSphere Architecture Design – Management Cluster
   4.1 Compute Logical Design
      4.1.1 Datacenters
      4.1.2 vSphere Clusters
      4.1.3 Host Logical Design
   4.2 Network Logical Design
   4.3 Shared Storage Logical Design
   4.4 Management Components
   4.5 Management Component Resiliency Considerations
5. vSphere Architecture Design – Cloud Resources
   5.1 Compute Logical Design
      5.1.1 Datacenters
      5.1.2 vSphere Clusters
      5.1.3 Host Logical Design
   5.2 Network Logical Design
   5.3 Shared Storage Logical Design
   5.4 Cloud Resources Datastore Considerations
      5.4.1 Datastore Sizing Estimation
6. vCloud Provider Design
   6.1 Abstractions and VMware vCloud Director Constructs
   6.2 Provider vDCs
   6.3 Organizations
   6.4 Networks
      6.4.1 External Networks
      6.4.2 Network Pools
      6.4.3 Networking Use Cases
      6.4.4 Special Use Cases
   6.5 Catalogs
7. vCloud Security
   7.1 vSphere Security
      7.1.1 Host Security
      7.1.2 Network Security
      7.1.3 vCenter Security
   7.2 VMware vCloud Director Security
   7.3 Additional Security Considerations
8. vCloud Management
   8.1 vSphere Host Setup Standardization
   8.2 VMware vCloud Director Logging
   8.3 vSphere Host Logging
   8.4 VMware vCloud Director Monitoring
   8.5 Capacity Management
9. Extending vCloud
   9.1 vCloud Connector
   9.2 vCloud API
   9.3 Orchestrating vCloud
Appendix A – Bill of Materials


1. Purpose and Overview


1.1 Executive Summary
ACME Enterprise will be implementing an internal, next-generation private cloud datacenter built on VMware technologies.
This document defines the vCloud architecture and provides detailed descriptions and specifications of the
architectural components and relationships for the initial implementation. This design is based on a combination
of VMware best practices and specific business requirements and goals.

1.2 Business Requirements


The vCloud for ACME Enterprise provides the following:
• Compute capacity to support 400 predefined virtual machine workloads.
• Secure multi-tenancy, permitting more than one organization to share compute resources. In a private cloud, organizations typically represent different departments, and each department may have several environments, such as development or production.
• A self-service portal where Infrastructure as a Service (IaaS) can be consumed from a catalog of predefined applications (vApp templates).
• A chargeback mechanism, so that resource consumption can be metered and the associated cost charged back to the appropriate organization or business unit.
Refer to the corresponding Service Definition for further details.

1.3 Use Cases


The target use cases for the vCloud include the following workloads:
• Development and test
• Pre-production
• Demos
• Training
• Tier 2 and Tier 3 applications

1.4 Document Purpose and Assumptions


This vCloud Architecture Design document is intended to serve as a reference for ACME Enterprise architects,
and assumes they have familiarity with VMware products, including VMware vSphere, vCenter, and VMware
vCloud Director.


The vCloud architecture detailed in this document is organized into these sections:

Section | Description
vCloud Definition | Inventory of components that comprise the cloud solution
vSphere Management | vSphere and vCenter components that support running workloads
vSphere Resources | Resources for cloud consumption; design organized by compute, networking, and shared storage; detailed through logical and physical design specifications and considerations
Management and Security | Considerations as they apply to vSphere and VMware vCloud Director management components
vCloud Logical Design | VMware vCloud Director objects and configuration; relationship of VMware vCloud Director to vSphere objects

This document is not intended as a substitute for detailed product documentation. Refer to the installation and administration guides for the appropriate product as necessary for further information.

2. VMware vCloud Architecture Design Overview

2.1 vCloud Definition
The VMware vCloud comprises the following components:

VMware vCloud Director – abstracts and coordinates the underlying resources. Includes:
• VMware vCloud Director Server (1 or more instances, each installed on a Linux VM and referred to as a cell)
• VMware vCloud Director Database (1 instance per clustered set of VMware vCloud Director cells)
• vSphere compute, network, and storage resources

VMware vSphere – foundation of the underlying cloud resources. Includes:
• VMware ESXi hosts (3 or more instances for the management cluster and 3 or more instances for the resource cluster, also referred to as the compute cell)
• vCenter Server (1 instance managing the management cluster of hosts, and 1 or more instances managing one or more clusters of hosts reserved for vCloud consumption; for a proof-of-concept installation, 1 instance of vCenter Server managing both the management cluster and a single cloud resource cluster is allowable)
• vCenter Server Database (1 instance per vCenter Server)

VMware vShield – provides network security services, including NAT and firewall. Includes:
• vShield Edge (deployed automatically as virtual appliances on hosts by VMware vCloud Director)
• vShield Manager (1 instance per vCenter Server in the cloud resources)

VMware vCenter Chargeback – provides resource metering and chargeback models. Includes:
• vCenter Chargeback Server (1 instance)
• Chargeback Data Collector (1 instance)
• vCloud Data Collector (1 instance)
• VSM Data Collector (1 instance)

2.2 vCloud Component Design Overview


The components comprising the vCloud are detailed in the following sections of this document:

Design Section | vCloud Component(s)
vSphere Architecture – Management Cluster | vCenter Server and vCenter Database; vCenter cluster and ESXi hosts; vCenter Chargeback Server and Database; vCenter Chargeback collectors; vShield Manager and vShield Edge(s); VMware vCloud Director cell and database (Oracle)
vSphere Architecture – Cloud Resources | vCenter Server(s) and vCenter Database(s); vCenter cluster(s) and ESXi hosts

3. vSphere Architecture Design Overview


3.1 High Level Architecture
vSphere resources are organized and separated into:
• A management cluster containing all core components and services needed to run the cloud.
• One or more compute clusters that represent dedicated resources for cloud consumption. Each cluster of ESXi hosts is managed by a vCenter Server and is under the control of VMware vCloud Director. Multiple clusters can be managed by the same VMware vCloud Director.
The reasons for organizing and separating vSphere resources along these lines are:
• Faster troubleshooting and problem resolution. Management components are strictly contained in a relatively small and manageable management cluster; if they instead ran across a large set of host clusters, tracking down and managing those workloads could become time-consuming.
• Management components are separate from the resources they are managing.
• Resources allocated for cloud use have little overhead reserved; for example, cloud resources would not host vCenter VMs.
• Cloud resources can be consistently and transparently managed, carved up, and scaled horizontally.

The high level logical architecture is depicted as follows.


[Figure 1. vCloud Director Logical Architecture Overview – management cluster VMs (vCloud Director, vCenter Servers, vCenter Chargeback, vShield Manager, MSSQL, Oracle 11g, AD/DNS, vCenter database, optional logging/monitoring) on vSphere 4.1 with SAN-backed shared storage, alongside the cloud resources (vSphere 4.1 compute with SAN-backed shared storage) carved into Org vDC #1 and Org vDC #2.]

The following diagram depicts the physical design corresponding to the logical architecture previously described.

[Figure 2. vCloud Physical Design Overview – redundant 10Gb/s fabrics and switches connect the server infrastructure to the FC SAN storage infrastructure; the cloud resources (vCenter01 - Cluster01) comprise Provider vDC Cluster A (Hosts C1-C3) and Provider vDC Cluster B (Hosts C4-C6), and the management and DB cluster (vCenter01 - Cluster02) comprises Hosts M1-M3; each cluster is HA N+1 (CPU/MEM reservations TBD) with resource pools, datastores, and port groups.]


3.2 Site Considerations


The management cluster and the cloud resources reside within a single physical datacenter. Servers in both clusters are striped across the server chassis so that, should one chassis fail, vSphere HA can keep each cluster available.
Secondary and/or DR sites are out of scope for this engagement.

3.3 Design Specifications


The architecture is described by a logical design that is independent of hardware-specific details. The focus is on
components, their relationships, and quantity.
Additional details are found in Appendix A.

4. vSphere Architecture Design – Management Cluster

4.1 Compute Logical Design
The compute design encompasses the ESXi hosts contained in the management cluster. In this section the scope
is limited to only the infrastructure supporting the management component workloads.
4.1.1. Datacenters
The management cluster is contained within a single vCenter datacenter.
4.1.2. vSphere Clusters
The management cluster will comprise the following vSphere cluster:

Attribute | Specification
Number of ESXi Hosts | 3
VMware DRS Configuration | Fully automated
VMware DRS Migration Threshold | 3 (of 5)
VMware HA Enable Host Monitoring | Yes
VMware HA Admission Control Policy | Cluster tolerates 1 host failure (percentage based)
VMware HA Percentage | 33% (i.e., N+1 for a 3-host cluster)
VMware HA Admission Control Response | Prevent VMs from being powered on if they violate availability constraints
VMware HA Default VM Restart Priority | N/A
VMware HA Host Isolation Response | Leave VM Powered On
VMware HA Enable VM Monitoring | Yes
VMware HA VM Monitoring Sensitivity | Medium

Table 1. vSphere Clusters – Management Cluster


4.1.3. Host Logical Design


Each ESXi host in the management cluster will have the following specifications:

Attribute | Specification
Host Type and Version | VMware ESXi Installable
Processors | x86 compatible
Storage | Local for ESX binaries; SAN LUN for virtual machines
Networking | Connectivity to all needed VLANs
Memory | Sized to support estimated workloads

Table 2. Host Logical Design Specifications – Management Cluster

4.2 Network Logical Design


The network design section defines how the vSphere virtual networking will be configured.
Following best practices, the network architecture will meet these requirements:
• Separate networks for vSphere management, VM connectivity, and vMotion traffic
• Redundant vSwitch ports with at least 2 active physical NIC adapters each
• Redundancy across different physical adapters to protect against NIC or PCI slot failure
• Redundancy at the physical switch level
• A mandatory standard vSwitch in the management cluster, with a vDS as an optional secondary switch

Switch Name | Switch Type | Function | # of Physical NIC Ports
vSwitch0 | Standard | Management Console; vMotion; Production VMs (optional) | 2 x 1GbE (teamed for failover)

Table 3. Virtual Switch Configuration – Management Cluster

The physical NIC ports will be connected to redundant physical switches.


The following diagram and table describe the virtual network infrastructure design:


[Figure 3. vSphere Logical Network Design – Management Cluster: vSwitch0 carries the Management network (native VLAN 100), vMotion (VLAN 200), and production virtual machines (optional VLAN for VCD-NI), uplinked through vmnic0 and vmnic1 to redundant upstream switches.]

Parameter | Setting
Failover Detection | Link status
Notify Switches | Enabled
Failover Order | All active, except for the Management network (Management Console: Active, Standby; vMotion: Standby, Active)

Table 4. Virtual Switch Configuration Settings – Management Cluster

4.3 Shared Storage Logical Design


The shared storage design section defines how the vSphere datastores will be configured. The same storage will be used for both the management cluster and the cloud resources.
Following best practices, the shared storage architecture will meet these requirements:
• Storage paths will be redundant at the host (connector), switch, and storage array levels.
• All hosts in a cluster will have access to the same datastores.


Attribute | Specification
Number of Initial LUNs | 1 dedicated, 1 interchange (shared with the compute cell cluster)
LUN Size | 500 GB
Zoning | Single-initiator, single-target
VMFS Datastores per LUN | 1
VMs per LUN | 10 (distribute redundant VMs)

Table 5. Shared Storage Logical Design Specifications – Management Cluster

4.4 Management Components


The following components will run as VMs on the management cluster hosts:
• vCenter Servers
• vCenter Database
• vCenter Update Manager Database
• vCloud Director cells
• vCloud Director Database
• vCenter Chargeback Server
• vCenter Chargeback Database
• vShield Manager
VMware vCloud Director cells are stateless in operation, with all information stored in the database. Some caching happens at the cell level, such as SSL session data, but all refreshes and updates are made to information stored in the database. As such, the database is critical to the operation of VMware vCloud Director. In a production environment, VMware recommends that the database be housed either in a managed cluster configuration or, at the very least, with a hot standby available.


[Figure 4. vCenter Chargeback Logical Diagram – vCenter Chargeback and its UI sit behind a load balancer and connect via JDBC to the Chargeback database; data collectors feed it from vCenter Server and its database (VIM API/JDBC), from vShield Manager (HTTPS, VSM data collector), and from the vCloud Director database (vCloud data collector).]

4.5 Management Component Resiliency Considerations


The following management components will rely on vSphere HA and FT for redundancy.

Management Component | HA Enabled?
vCenter Server | Yes
VMware vCloud Director | Yes
vCenter Chargeback Server | Yes
vShield Manager | Yes

Table 6. Management Component Resiliency


5. vSphere Architecture Design – Cloud Resources

5.1 Compute Logical Design
The compute design encompasses the ESXi host clusters. In this section the scope is further limited to only the
infrastructure dedicated to the cloud workloads.
5.1.1. Datacenters
Cloud resources can map to different vCenter datacenters and are managed by a single vCenter server.
5.1.2. vSphere Clusters
All vSphere clusters will be configured similarly, with the following specifications:

Attribute | Specification
VMware DRS Configuration | Fully automated
VMware DRS Migration Threshold | 3 (of 5)
VMware HA Enable Host Monitoring | Yes
VMware HA Admission Control Policy | Cluster tolerates 1 host failure (percentage based)
VMware HA Percentage | N+1 (e.g., 17% in a 6-host cluster)
VMware HA Admission Control Response | Prevent VMs from being powered on if they violate availability constraints
VMware HA Default VM Restart Priority | N/A
VMware HA Host Isolation Response | Leave VM Powered On

Table 7. vSphere Cluster Configuration – Cloud Resources

The cloud resources will have the following vSphere clusters:

Cluster Name | vCenter Server Name | # of Hosts | HA Percentage
vCDCompute01 | vcd_vc01.acme.com | 6 | 17%

Table 8. vSphere Clusters – Cloud Resources


5.1.3. Host Logical Design


Each ESXi host in the cloud resources will have the following specifications:

Attribute | Specification
Host Type and Version | VMware ESXi Installable
Processors | x86 compatible
Storage | Local for ESX binaries; shared for virtual machines
Networking | Connectivity to all required VLANs
Memory | Sizing based on estimated workloads

Table 9. Host Logical Design Specifications – Cloud Resources

5.2 Network Logical Design


The network design section defines how the vSphere virtual networking will be configured.
Following best practices, the network architecture will meet these requirements:
• Separate networks for vSphere management, VM connectivity, and vMotion traffic
• A vdSwitch with a minimum of 2 active physical adapter ports
• Redundancy across different physical adapters to protect against NIC or PCI slot failure
• Redundancy at the physical switch level

Switch Name | Switch Type | Function | # of Physical NIC Ports
vdSwitch | Distributed | External networks; network pools | 2 x 1GbE NICs for each Org network (or 2 x 10Gb/s links)

Table 10. Virtual Switch Configuration – Cloud Resources

When using the distributed virtual switch, the number of dvUplink ports equals the number of physical NIC ports on each host. The physical NIC ports will be connected to redundant physical switches.


The following diagram depicts the virtual network infrastructure design.

[Figure 5. vSphere Logical Network Design – Cloud Resources: a vNetwork Distributed Switch (vDS) carries the external network (Production, VLAN 400) and the network pools, uplinked through vmnic0 and vmnic1 to redundant switch fabrics.]

Parameter | Setting
Load Balancing | Route based on NIC load (for vDS)
Failover Detection | Link status
Notify Switches | Enabled
Failover Order | All active, except for the Management network (Management Console: Active, Standby; vMotion: Standby, Active)

Table 11. Virtual Switch Configuration Settings – Cloud Resources

5.3 Shared Storage Logical Design


The shared storage design section defines how the vSphere datastores will be configured.
Following best practices, the shared storage architecture will meet these requirements:
• Storage paths will be redundant at the host (HBA), switch, and storage array levels.
• All hosts in a cluster will have access to the same datastores.


Attribute | Specification
Number of Initial LUNs | 6 dedicated, 1 interchange (shared with the management cluster)
LUN Size | 500 GB
Zoning | Single-initiator, single-target
VMFS Datastores per LUN | 1
VMs per LUN | 12-15 (simultaneously active VMs)

Table 12. Shared Storage Logical Design Specifications – Cloud Resources

5.4 Cloud Resources Datastore Considerations


The most common aspect of LUN/datastore sizing is the limit that should be placed on the number of VMs per datastore. The reason for limiting this number is to minimize the potential for SCSI locking and to spread the I/O across as many storage processors as possible. Most mainstream storage vendors provide VMware-specific guidelines for this limit, and VMware recommends an upper limit of 15 active VMs per VMFS datastore, regardless of storage platform. It is often forgotten that the number of VMs per LUN is also influenced by the size and I/O requirements of the VMs and, perhaps more importantly, by the selected storage solution and even the disk types.
When VMware vCloud Director provisions VMs, it automatically places them on datastores based on the free disk space of each of the associated datastores in an Org vDC. Because of this mechanism, the size of the LUNs and the number of VMs per LUN must be kept relatively low to avoid possible I/O contention.
When considering the number of VMs to place on a single datastore, the following factors should be considered in conjunction with any recommended VMs-per-LUN ratio:
• Average VM workload/profile (in particular, the amount of I/O)
• Typical VM size (including configuration files, logs, swap files, and snapshot files)
• VMFS metadata
• Maximum IOPS and throughput requirements per LUN, which depend on the storage array and its design
• Maximum RTO if a LUN is lost (i.e., the backup and restore design)
If we approach this from an average I/O profile, it would be tempting to create all LUNs the same (say, RAID 5) and let the law of averages take care of I/O distribution across the LUNs and the VMs on those LUNs. Another approach is to create LUNs with different RAID profiles based on the anticipated workloads within an organization. This would dictate creating Provider virtual datacenters (vDCs) that take into account the allocation models as well as the storage profile in use. As an example, we would end up with the following types of Provider vDCs:
• Allocated_High_Performance
• Allocated_Generic
As a starting point, VMware recommends RAID 5 storage profiles, creating storage tier-specific Provider vDCs only as one-offs to address specific organization or business unit requirements.
The VMware Scalable Storage Performance study (http://www.vmware.com/files/pdf/scalable_storage_performance.pdf) provides additional information regarding vSphere storage design.


5.4.1. Datastore Sizing Estimation


An estimate of the typical datastore size can be approximated by considering the following factors:

Variable | Value
Maximum Number of VMs per Datastore | 12-15
Average Size of Virtual Disk(s) per VM | 60 GB
Average Memory Size per VM | 2 GB
Safety Margin | 20% (to avoid warning alerts)

Table 13. Datastore Size Estimation Factors

For example:
((12 x 60 GB) + (15 x 2 GB)) + 20% = (720 GB + 30 GB) x 1.2 = 900 GB
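The same estimate can also be expressed as a short calculation. The following Python sketch simply restates the factors from Table 13; the variable names are illustrative and are not part of any VMware tool.

# Illustrative datastore sizing estimate based on the factors in Table 13.
max_vms_per_datastore = 12   # lower bound of the 12-15 range, used for virtual disk capacity
swap_vms = 15                # upper bound, used for the memory-sized VM swap files
avg_vmdk_gb = 60             # average virtual disk size per VM
avg_mem_gb = 2               # average memory size per VM (swap file resides on the datastore)
safety_margin = 0.20         # headroom to avoid datastore usage warning alerts

datastore_gb = (max_vms_per_datastore * avg_vmdk_gb + swap_vms * avg_mem_gb) * (1 + safety_margin)
print(f"Estimated datastore size: {datastore_gb:.0f} GB")  # 900 GB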

6. vCloud Provider Design


6.1 Abstractions and VMware vCloud Director Constructs
A key tenet of the cloud architecture is resource pooling and abstraction. VMware vCloud Director further
abstracts the virtualized resources presented by vSphere by providing logical constructs that map to vSphere
logical resources:
• Organization – the organizational unit to which resources (vDCs) are allocated.
• Virtual Datacenter (vDC) – a deployment environment, scoped to an organization, in which virtual machines run.
• Provider Virtual Datacenter (Provider vDC) – a vSphere resource grouping that powers vDCs and is further segmented into organization vDCs.
• Organization Virtual Datacenter (Org vDC) – an organization's allocated portion of a Provider vDC.

[Figure 6. VMware vCloud Director Abstraction Layer Diagram – maps vCloud Director constructs (Organization vDC, Org network, external network, network pool) to their vSphere counterparts (Provider vDC to resource pool and compute cluster, networks to (d)vSwitch port groups and the vDS, datastores) and down to the physical layer (hosts, storage array, physical network and VLANs).]


6.2 Provider vDCs


The following diagram shows how the Provider VDCs map back to vSphere resources:

[Figure 7. Provider vDCs in Cloud Resources – within cluster vCDCompute01, Provider vDC 1 is backed by hosts vCDcomputecluster1_1 through 1_4 and Provider vDC 2 by hosts vCDcomputecluster2_1 through 2_4, with 500 GB VMFS datastores vcd_compute_01, vcd_compute_02, ... vcd_compute_0X.]

All ESXi hosts will belong to a vSphere cluster, and each cluster will be associated with one and only one ACME Enterprise vDC. A vSphere cluster will scale to 32 hosts (although 25 is typically a good starting point, allowing future growth), allowing for up to 14 clusters per vCenter Server (the limit is bound by the maximum number of hosts possible per datacenter) and an upper limit of 10,000 VMs (a vCenter limit).
The recommendation is to start with 8 hosts in a cluster and add hosts to the cluster as dictated by customer consumption. However, for the initial implementation, the Provider vDC will start with 6 hosts. When utilization of the resources reaches 60%, VMware recommends that a new Provider vDC/cluster be deployed. This provides room for growth within the existing Provider vDCs for the existing organizations/business units without necessitating their migration as utilization nears a cluster's maximum resources.
As an example, a fully loaded resource group will contain 14 Provider vDCs and up to 350 ESXi hosts, giving an average VM consolidation ratio of 26:1, assuming a 5:1 vCPU:pCPU ratio. To increase this ratio, ACME Enterprise would need to increase the vCPU:pCPU ratio it is willing to support. The risk associated with an increase in CPU overcommitment is mainly degraded overall performance, which can result in higher than acceptable vCPU ready times. The vCPU:pCPU ratio is based on the amount of CPU overcommitment, for the available cores, that ACME is comfortable with. For VMs that are not busy, this ratio can be increased without any undesirable effect on VM performance. Monitoring vCPU ready times helps identify whether the ratio needs to be increased or decreased on a per-cluster basis. A 5:1 ratio is a good starting point for a multi-core system.
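As a back-of-the-envelope illustration of that overcommitment target (not a sizing tool), the following Python sketch combines the host specification from Appendix A (2-socket, 6-core hosts) with the 5:1 ratio discussed above; the assigned-vCPU figure is hypothetical.

# Illustrative check of a per-cluster vCPU:pCPU overcommitment target.
hosts_in_cluster = 6            # initial Provider vDC cluster size
cores_per_host = 2 * 6          # 2-socket, 6-core hosts per Appendix A
target_ratio = 5                # 5:1 vCPU:pCPU starting point from the text

physical_cores = hosts_in_cluster * cores_per_host      # 72 pCPUs
vcpu_budget = physical_cores * target_ratio             # 360 vCPUs at 5:1

assigned_vcpus = 300                                    # hypothetical current allocation
print(f"Current ratio {assigned_vcpus / physical_cores:.1f}:1, "
      f"budget {vcpu_budget} vCPUs at {target_ratio}:1")

If monitoring shows acceptable vCPU ready times at the current ratio, the target can be raised; if ready times climb, it should be lowered or a new Provider vDC/cluster deployed.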


A Provider vDC can map to only one vSphere cluster, but it can map to multiple datastores and networks. Multiple Provider vDCs are used to map to different types or tiers of resources:
• Compute – a function of the mapped vSphere cluster and the resources that back it
• Storage – a function of the underlying storage types of the mapped datastores
• Networking – a function of the mapped vSphere networking in terms of speed and connectivity
Multiple Provider vDCs are created for the following reasons:
• The cloud requires more compute capacity than a single vSphere cluster can provide (a vSphere resource pool cannot span vSphere clusters)
• Tiered storage is required; each Provider vDC maps to datastores on storage with different characteristics
• Workloads are required to run on physically separate infrastructure
Attribute | Specification
Number of Provider vDCs | 1
Number of Default External Networks | 1 (Production)

Table 14. Provider vDC Specifications

Provider vDC | Resource Pool | Datastores | vSphere Networks
ACME | vCDCompute01 | vcd_compute-01, vcd_compute-02, vcd_compute-03, vcd_compute-04, vcd_compute-05 | Production

Table 15. Provider vDC to vSphere Mapping

VMware recommends assessing workloads to assist in sizing. The following is a sample sizing table that can be used as a reference for future design activities.

VM Size | Distribution | Number of VMs
1 vCPU / 1 GB RAM | 65% | 260
2 vCPU / 2 GB RAM | 29% | 116
4 vCPU / 4 GB RAM | 5% | 20
8 vCPU / 8 GB RAM | 1% | 4
Total | 100% | 400

Table 16. Virtual Machine Sizing and Distribution


6.3 Organizations
Organization Name | Description
AIS | ACME Information Systems

Table 17. Organizations

6.4 Networks
Attribute | Specification
Number of Default External Networks | 1
Number of Default vApp Networks | End-user controlled
Number of Default Organization Networks |
Default Network Pool Types Used | vCloud Director Network Isolation (vCD-NI)
Is a Pool of Publicly Routable IP Addresses Available? | Yes, for access to Production, but only a certain range is given to each organization.

Table 18. Network Specifications

6.4.1. External Networks


ACME Enterprise will provide the following external network for the initial implementation:
• Production (VLAN 400)
Part of the provisioning for an organization can involve creating an external network for that organization, such as Internet access and a VPN network if desired, and associating it with the required Org networks.
6.4.2. Network Pools
ACME will provide the following network pools based on need:
• VMware vCloud Director Network Isolation (vCD-NI)-backed
• VLAN-backed (optional)
For the vCD-NI-backed pool, VMware recommends that the transport VLAN (VLAN ID 1254) be a VLAN not otherwise in use within the ACME infrastructure, for increased security and isolation. For this initial implementation that option is not available, so the Production VLAN 400 will be used.
6.4.3. Networking Use Cases
ACME will provide the following two use cases for the initial implementation, both to demonstrate VMware vCloud Director capabilities and as a basis for deploying their production vApps:
1. Users should be able to completely isolate vApps for their development and/or test users.
2. Users should be able to connect to the Organization networks either directly or via fencing; the Organization networks will not have access to the public Internet.


[Figure 8. vApp Isolated Network – vApp01 (DB x.10, Web x.11, App x.12) connects only to vApp Network 1, which is backed by the vCD-NI-backed network pool.]

[Figure 9. vApp Networks Direct-Attached to an Org Network – vApp01 (DB x.10, Web x.11, App x.12) and vApp02 (DB x.13, Web x.14, App x.15) attach their vApp networks directly to an isolated Org network backed by the vCD-NI-backed network pool.]

This is an example for a Dev/Test environment where developers use different IPs in their vApps, so the VMs in one vApp can communicate with the VMs in another vApp without any conflicts.


[Figure 10. vApp Networks Fenced to an Org Network – vApp01 and vApp02 reuse the same IP addresses (DB x.10, Web x.11, App x.12) on their own vApp networks, which are fenced to an isolated Org network backed by the vCD-NI-backed network pool.]

This is an example for Dev/Test where developers will have duplicate IPs in their vApps.

[Figure 11. vApp Networks Bridged or Fenced to an Org Network Direct-Attached to an External Network – vApp01 (x.10-x.12) and vApp02 (x.13-x.15) connect, directly or fenced, to an Org network (vCD-NI-, VLAN-, or port group-backed pool) that is direct-attached to the external network on the physical backbone.]


[Figure 12. vApp Networks Fenced to a Fenced Org Network – vApp01 (1.10-1.12) and vApp02 (1.13-1.15) connect, directly or fenced, to an Org network (vCD-NI-, VLAN-, or port group-backed pool) that is itself fenced to the external network on the physical backbone.]

This is one way to connect to the external network and preserve VLANs, by sharing the same VLAN for Internet access among multiple organizations. The vShield Edge is needed to provide NAT and firewall services for the different organizations.
Once the external networks have been created, a VMware vCloud Director administrator can create the Organization networks as shown above. The vShield Edge (VSE) device is needed to perform address translation between the different networks. The VSE can be configured to provide port address translation to jump hosts located inside the networks or to give direct access to individual hosts.
VMware recommends separating external and Organization networks by using two separate vdSwitches. For ACME's initial implementation, this is not an option, because only one network (Production VLAN 400) is available to route vCD-NI traffic between ESXi hosts.


6.4.4. Special Use Cases


There are some instances where an organization vDC needs to have a 1:1 mapping with its Provider vDC network. The following are reasons why network resources may need to be specifically dedicated in this way:
• Third-party software is implemented and licensing terms must be considered due to overall cost (e.g., Oracle CPU licensing)
• A web server is deployed within a vCloud vApp that relies on public DNS address resolution while the organization network is using a load balancer
• Developer environments include third-party tools (e.g., version/revision control software) with public IP address requirements that cannot function properly behind any NAT

6.5 Catalogs
The catalog contains ACME-specific templates that are made available to all organizations/business units. ACME will make a set of catalog entries available to cover the classes of virtual machines, templates, and media specified in the corresponding Service Definition.
For the initial implementation, a single cost model will be created using the following fixed-cost pricing and chargeback model:
VM Configuration | Price
1 vCPU and 512 MB RAM | $248.00
1 vCPU and 1 GB RAM | $272.00
1 vCPU and 2 GB RAM | $289.00
2 vCPUs and 2 GB RAM | $308.00
1 vCPU and 3 GB RAM | $315.00
2 vCPUs and 3 GB RAM | $331.00
1 vCPU and 4 GB RAM | $341.00
2 vCPUs and 4 GB RAM | $354.00
4 vCPUs and 4 GB RAM | $386.00
1 vCPU and 8 GB RAM | $461.00
2 vCPUs and 8 GB RAM | $477.00
4 vCPUs and 8 GB RAM | $509.00

Table 19. ACME Fixed-Cost Cost Model
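To illustrate how a fixed-cost model of this kind is applied, the following Python sketch performs a table-driven lookup using a few of the prices above. It is purely illustrative and is not vCenter Chargeback code; the function and variable names are hypothetical.

# Hypothetical application of the fixed-cost model in Table 19.
# Keys are (vCPU count, RAM in GB); values are the fixed price per VM.
FIXED_COST = {
    (1, 1): 272.00,
    (2, 2): 308.00,
    (4, 4): 386.00,
    (2, 8): 477.00,
}

def vapp_cost(vm_configs):
    """Sum the fixed cost for a list of (vCPU, RAM GB) VM configurations."""
    return sum(FIXED_COST[config] for config in vm_configs)

# A three-tier vApp: a 2 vCPU/8 GB database VM, a 2 vCPU/2 GB app VM, and a 1 vCPU/1 GB web VM.
print(vapp_cost([(2, 8), (2, 2), (1, 1)]))  # 1057.0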

7. vCloud Security
7.1 vSphere Security
7.1.1. Host Security
Chosen in part for its limited management console functionality, ESXi will be configured by ACME with a strong
root password stored following corporate password procedures. ESXi lockdown mode will also be enabled to
prevent root access to the hosts over the network, and appropriate security policies and procedures will be
created and enforced to govern the systems. Because ESXi cannot be accessed over the network, sophisticated
host-based firewall configurations are not required.


7.1.2. Network Security


Virtual switch security settings will be as follows:

Function | Setting
Promiscuous Mode | Management cluster: Reject; Cloud resources: Reject
MAC Address Changes | Management cluster: Reject; Cloud resources: Reject
Forged Transmits | Management cluster: Reject; Cloud resources: Reject

Table 20. Virtual Switch Security Settings

7.1.3. vCenter Security


vCenter Server is installed using a local administrator account. When vCenter Server is joined to a domain, any domain administrator gains administrative privileges to vCenter. VMware recommends that ACME remove this potential security risk by creating a new vCenter Administrators group in Active Directory and assigning it the vCenter Server Administrator role, which makes it possible to remove the local Administrators group from that role.

7.2 VMware vCloud Director Security


Standard Linux hardening guidelines must be applied to the VMware vCloud Director VM. There is no need for local users, and the root password is only needed during installation and upgrades of the VMware vCloud Director binaries. Additionally, certain network ports must be open for vCloud Director use. Refer to the vCloud Director Administrator's Guide for further information.

7.3 Additional Security Considerations


The following are examples of use cases requiring special security considerations:
• End-to-end encryption from a guest VM to its communication endpoint, including encrypted storage via encryption in the guest OS and/or the storage infrastructure
• Provisioning of user accounts and/or access control from a single console
• The need to control access to each layer of a hosting environment (i.e., rules and role-based security requirements) for an organization
• vApp requirements for secure traffic and/or VPN tunneling from a vShield Edge device at any network layer

8. vCloud Management
8.1 vSphere Host Setup Standardization
Host Profiles can be used to automatically configure networking, storage, security, and other features. This feature, along with automated installation of ESXi hosts, is used to standardize all host configurations.
VM Monitoring is enabled at the cluster level within HA and uses the VMware Tools heartbeat to verify that a virtual machine is alive. When a virtual machine fails and the VMware Tools heartbeat is no longer updated, VM Monitoring verifies whether any storage or networking I/O has occurred over the last 120 seconds; if not, the virtual machine is restarted.


As such, VMware recommends enabling both VMware HA and VM Monitoring on the management cluster and the cloud resource clusters.

8.2 VMware vCloud Director Logging


Logging is one of the key components of any infrastructure. It provides audit trails for user logins and logouts, among other important functions. Logging records the various events happening on the servers and helps diagnose problems, detect unauthorized access, and so on. In some cases, regular log analysis proactively staves off problems that could become critical to the business.
Each VMware vCloud Director cell logs audit messages to the database, where they are retained for 90 days by default. If log retention is needed for longer than 90 days and/or centralized logging is required, an external syslog server can be configured and used as a duplicate destination for the events that are logged. Individual components can also be configured to redirect syslog messages to tenant-designated syslog servers if desired.
In vCloud Director there are two options for logging: logs can be stored locally on the server or sent to a centralized (syslog) location. During the initial installation the administrator can choose either; if a syslog server address is not entered, the logs are stored locally. The syslog server listens on port 514 using the UDP protocol.
If local storage was chosen during the installation and the administrator later decides to change it, the following procedure must be followed:
1. Log in to the vCloud Director cell using a vCenter VM console, or SSH if it is enabled.
2. Change to the configuration directory: cd /opt/vmware/cloud-director/etc
3. Make a backup of log4j.properties: cp log4j.properties{,.original}
4. Append the vcloud.system.syslog appender to the root logger line so that it reads:
log4j.rootLogger=ERROR, vcloud.system.debug, vcloud.system.info, vcloud.system.syslog
5. Edit log4j.properties with your preferred editor and add the following lines:
log4j.appender.vcloud.system.syslog=org.apache.log4j.net.SyslogAppender
log4j.appender.vcloud.system.syslog.syslogHost=remoteSyslogHost.example.com:<PORT>
# For the default listening port of 514, <PORT> can be left blank
log4j.appender.vcloud.system.syslog.facility=LOCAL1
log4j.appender.vcloud.system.syslog.layout=com.vmware.vcloud.logging.CustomPatternLayout
log4j.appender.vcloud.system.syslog.layout.ConversionPattern=%d{ISO8601} | %-8.8p | %-25.50t | %-30.50c{1} | %m | %x%n
log4j.appender.vcloud.system.syslog.threshold=INFO
6. Save the file and restart the vCloud Director cell: service vmware-vcd restart
To enable centralized logging on all vCloud Director cells, repeat this procedure for each cell.
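Before repointing every cell, it can be useful to confirm that syslog datagrams actually reach the chosen destination. The following minimal Python listener is an illustrative sketch only, not part of vCloud Director; binding to port 514 normally requires root privileges.

# Minimal UDP listener for verifying that messages arrive at the syslog destination.
import socket

HOST, PORT = "0.0.0.0", 514   # 514/UDP, per the syslog configuration above

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind((HOST, PORT))
print(f"Listening for syslog datagrams on {HOST}:{PORT} ...")

while True:
    data, addr = sock.recvfrom(65535)
    print(addr[0], data.decode("utf-8", errors="replace").rstrip())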

8.3 vSphere Host Logging


Remote logging to a central host greatly increases administration capabilities. Gathering log files on a central server makes it possible to monitor all hosts with a single tool and enables aggregate analysis and searching for evidence of coordinated attacks on multiple hosts. This applies to analysis of the following logs:
• messages (host log)
• hostd (host agent log)
• vpxa (vCenter agent log)


Within each ESXi host, syslog behavior is controlled by the Syslog advanced settings. These settings determine the central logging host that will receive the syslog messages; the hostname must be resolvable using DNS.
For this initial implementation, none of the ESXi hosts at ACME will be configured to send log files to a central syslog server residing in the management cluster.

8.4 VMware vCloud Director Monitoring


The following items should be monitored through VMware vCloud Director. As of VMware vCloud Director 1.0, this must be done with custom queries to VMware vCloud Director using the Admin API to retrieve consumption data for the different components. Some of the components in VMware vCloud Director can also be monitored by aggregating, on the centralized log server, the syslog-generated logs from the different VMware vCloud Director cells.

Scope | Item
System | Leases; quotas; limits
vSphere Resources | CPU; memory; network IP address pool; storage free space
Virtual Machines/vApps | Not in scope

Table 21. VMware vCloud Director Monitoring Items

Monitoring can be accomplished through a JMX interface in addition to the vCloud Director UI.

8.5 Capacity Management


vCenter CapacityIQ will be running in the management cluster to monitor vSphere resources.
Cloud resources will be monitored through vCloud Director monitoring, mentioned previously.


9. Extending vCloud
9.1 vCloud Connector
VMware vCloud Connector (vCC) is an appliance that allows vSphere administrators to move VMs from vSphere environments, or vApps from a vCloud, to a remote vCloud. The origin and destination vClouds can be public or private clouds. The following diagram gives an overview of the communication protocols between vCloud Connector and vCloud Director:

[Figure 13. vCloud Connector – the vCC appliance and the vCC plug-in for the vSphere Client communicate via REST APIs with the on-premise vCenter Server and with the vCloud Director instances in the on-premise private cloud, an off-premise private cloud, and a public cloud.]

vCloud Connector design considerations:
• vCloud Connector requires vSphere 4.0 or higher and administrative privileges.
• The appliance requires a dedicated static IP address.
• It is recommended that the appliance reside on the same subnet as vCenter Server.
• Ports 80, 443, and 8443 must be open on any firewall to allow communication between vCenter and the vCloud Connector appliance.

VMware Component | Description
VMware vSphere 4.x | VMware virtualization software
vCloud Connector (vCC) appliance | vCC must be deployed in a vSphere 4.x environment
vSphere Client | The vCC plug-in installs in the vSphere Client
Microsoft Internet Explorer 7 or higher | Required for the vSphere Client plug-in

Table 22. vCloud Connector Components


9.2 vCloud API


There are two ways to interact with a vCloud Director cell: via the browser-based GUI or via the vCloud API. The browser-based GUI has limited customization capability; therefore, to enhance the user experience, a service provider or an enterprise customer may wish to write its own portal that integrates with vCloud Director. To enable such integration, VMware vCloud Director provides a rich set of calls through the vCloud API.
The vCloud API is REST-like, which allows for loose coupling of services between the server and the consumer, is highly scalable, and uses the HTTP/S protocol for communication. The API calls are grouped into three sections based on the functionality they provide and the type of operation. Several options are available for implementing a custom portal using the vCloud API: VMware vCloud Request Manager, VMware vCenter Orchestrator, or third-party integrators. Some of these may require customization to design workflows that satisfy customer requirements.
The following diagram shows a use case where a service provider has exposed a custom portal to end-users on
the Internet:

[Figure 14. vCloud API Logical Representation – end users on the Internet reach a custom portal and orchestration workflow engine (a virtual appliance in the DMZ), which calls vCloud Director on the internal network; vCloud Director drives vCenter Server and the vSphere infrastructure to deploy the requested workloads.]

End users log in to the portal with a valid login/password and can select a predefined workload (from a catalog list) to deploy. The user's selection, in turn, initiates a custom workflow that deploys the requested catalog item (e.g., a vApp) in the vCloud.
Currently, the vCloud API is available in the form of a vCloud SDK with the following language bindings: Java, C#, and PHP.
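For orientation, the following sketch shows what a raw REST interaction with the vCloud API can look like from Python (which is not one of the SDK bindings listed above). The host name, user, organization, and password are placeholders; the login URL, Accept header, and x-vcloud-authorization token header follow vCloud API 1.0 conventions and should be verified against the vCloud API Programming Guide before use.

# Illustrative vCloud API 1.0 session: log in, then list the visible organizations.
import requests

VCD = "https://vcd.acme.example.com"                    # placeholder cell address
ACCEPT = {"Accept": "application/*+xml;version=1.0"}    # assumed API 1.0 version header

# Log in with HTTP Basic auth as user@organization; the session token is returned
# in the x-vcloud-authorization response header.
login = requests.post(f"{VCD}/api/v1.0/login",
                      auth=("clouduser@AIS", "password"),
                      headers=ACCEPT, verify=False)
login.raise_for_status()
token = login.headers["x-vcloud-authorization"]

# Reuse the token for subsequent calls, for example retrieving the organization list.
orgs = requests.get(f"{VCD}/api/v1.0/org",
                    headers={**ACCEPT, "x-vcloud-authorization": token},
                    verify=False)
print(orgs.text)  # XML OrgList containing a reference to each visible organization

A custom portal would wrap calls such as these (instantiate vApp template, deploy, power on) behind its own user interface and workflow engine.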


9.3 Orchestrating vCloud


Because vCloud Director leverages the core vSphere infrastructure, automation is possible through vCenter Orchestrator. vCenter Orchestrator provides out-of-the-box workflows that can be customized to automate existing manual tasks; administrators can use sample workflows from a standard workflow library as blueprints for creating additional workflows, or create their own custom workflows. Currently, over 800 tasks within vCenter Server can be automated through vCenter Orchestrator.
vCenter Orchestrator integrates with vCloud Director through a vCloud Director plug-in that communicates via the vCloud API. vCenter Orchestrator can also orchestrate workflows at the vSphere level through a vSphere plug-in, if necessary.

[Figure 15. vCloud Orchestration – end users initiate workflows through a user portal built on the vCloud API; the vCenter Orchestrator (vCO) orchestration engine coordinates financial, approval, asset, and CMDB systems and drives vCenter Chargeback, VMware vCloud Director, vShield Edge, and vSphere (via the vCloud and vSphere APIs) down to the physical configuration.]


Appendix A – Bill of Materials

The inventory and specifications of the components comprising the vCloud are provided below.

ESXi Host
• Vendor X compute resource
• Chassis: 3; Blades per chassis: 1
• Processors: 2-socket Intel Xeon X5670 (6-core, 2.9 GHz Westmere)
• Memory: 96 GB
• Version: vSphere 4.1 (ESXi)

vCenter Server
• Type: VM
• Guest OS: Windows 2008 x86_64
• 2 vCPUs, 4 GB memory, 1 vNIC
• Minimum free disk space: 10 GB
• Version: 4.1

vCenter and Update Manager Database
• VMware Update Manager Database Calculator: http://www.vmware.com/support/vsphere4/doc/vsp_vum_40_sizing_estimator.xls

VMware vCloud Director Cell
• Minimum number of VMware vCloud Director cells: 1
• Type: VM
• Guest OS: RHEL 5
• 4 vCPUs, 4 GB memory, 1 vNIC
• Version: 1.0.1

VMware vCloud Director Database
• Type: VM (unless using an existing, managed database cluster)
• Guest OS: RHEL
• Oracle 11g
• 4 vCPUs, 4 GB memory, 1 NIC

vShield Manager
• Type: VM appliance
• Version: 4.1
• 1 vCPU, 4 GB memory, 1 vNIC

vCenter Chargeback Server
• Type: VM
• Guest OS: Windows Server 2008 x64
• 2 vCPUs, 2 GB memory, 1 vNIC
• Version: 1.5

vCenter Chargeback Database
• Type: VM (unless using an existing, managed database cluster)
• Guest OS: Windows 2008 x86_64
• MS SQL Server 2008
• 2 vCPUs, 4 GB memory, 1 vNIC

NFS Appliance
• N/A

vCenter CapacityIQ
• Type: VM
• Guest OS: Windows Server 2008 x64
• 2 vCPUs, 2 GB memory, 1 vNIC

vShield Edge Appliances
• Quantity: Multiple
• Type: VM
• 1 vCPU, 256 MB RAM, 1 vNIC

Domain Controllers (AD)
• Isolated AD VM built specifically for the PoC infrastructure, with no access to other DCs
• Type: VM
• MS Windows Server 2008
• 1 vCPU, 4 GB memory, 1 NIC

API Servers
• N/A

Monitoring Server
• N/A

Logging Server
• N/A

Storage
• FC SAN array
• VMFS
• LUN sizing: 500 GB
• RAID 5

Table 23. Management Cluster Inventory


ESXi Host
• Vendor X compute resource
• Chassis: 6; Blades per chassis: 1
• Blade type: Vendor X Blade Type Y
• Processors: 2-socket Intel Xeon X5670 (6-core, 2.9 GHz Westmere)
• Memory: 96 GB
• Version: vSphere 4.1 (ESXi)

vCenter Server
• Same as the management cluster

Storage
• FC SAN array
• VMFS
• LUN sizing: 500 GB
• RAID level: 5

Table 24. Cloud Resources Inventory

VMware, Inc. 3401 Hillview Avenue Palo Alto CA 94304 USA Tel 877-486-9273 Fax 650-427-5001 www.vmware.com
Copyright © 2011 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at
http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be
trademarks of their respective companies. Item No: VMW_11Q1_WP_ImplementationPrivatevCloud_p33_R2
