
VMware Administration

1. How to configure DRS and HA?


2. What all we need to do on ESX server?
3. How ESX server takes backup?
4. What all files are created for a Virtual machine when it is created on it and
what is the use of them?
5. How to convert a Physical server to virtual server?
6. How to expand the C drive?
7. How to import & export VMDK files to an ESX server, and how to convert them for use on VMFS in ESX?
8. How to check a Virtual machine status and start/stop from ESX console?
9. How to add and remove license in VMware infrastructure?
10. Difference in ESX and GSX.
11. What are the steps of virtual machine migration?
12. What is the latest version for ESX Server?
13. What is Virtual Center all about?
14. List of Virtual Servers.
15. List of ESX Servers.
16. What all VM features are enabled in Virtual Center, like HA, VMotion, DRS, Snapshots, Backup, and Resource pools?
18. Have we separated our Development and Production servers?
19. Do we have Vmware support Contract?
20. How can we prepare another ESX server to take the place of a corrupted Production ESX?
22. ESX Server patching.
23. VMware Licensing.
24. ESX Backup.
25. Snapshots Features.
26. DRS
27. Database connectivity.
28. SAN connectivity with ESX Servers.
29. What is the purpose of the VLAN ID and where do we put it?
30. Difference between Host-Based and License Server-Based Licensing?
31. What all types of licenses are available in VMware Infrastructure?
32. What are all the versions of ESX Server, with the feature differences between them?
33. What are all the versions of Virtual Center, with the feature differences between them?
34. What is time Synchronization in VMware ESX?
35. What is the difference between Suspend and Shutdown or Reset of a VM?
36. What are Bus Logic and LSI Logic SCSI Controller?
37. Can VMware work on NAS in place of SAN? Yes
38. What are partition types in ESX/Linux?
39. What is a resource pool and how does it work; what does it allocate (Memory, CPU)?
40. How many partitions are there in an ESX Server, what does each contain, and what is recommended?
41. What is HA cluster?
42. What is host profile?
43. How can we get logs from a crashed VM?
44. What are the HA cluster requirements and the VMotion requirements?
45. How can we repair a host in an HA cluster?
46. What is the maximum number of VMs that can be created on a host?
47. How many virtual switches can be created on a host? (127)
48. What is the difference between ESX 3.5 and vSphere?
49. What is a network policy?
50. What is promiscuous mode?
51. What is a hypervisor?
52. How to expand an HA cluster?
53. Difference between ESX and ESXi? (ESXi does not have a Service Console)
54. What is ISC?
55. Types of storage that can be mapped to ESX? (SAN: FC & iSCSI; NAS)
56. Difference between ESX 3.0 and 3.5?
57. Is there any way to find ESX server name from VM guest OS?
58. What is LUN masking & Zoning?
59. What are the available options when enabling HA? (Admission control: up to 4 host failures)
60. What is admission control? (The tolerance setting for ESX host failures in an HA cluster)
61. From where can we put resource utilization settings for a specific VM? (From Cluster settings and the individual VM's Edit Settings, Resources tab)
62. What to do in case we are not able to power on/off a virtual machine? (Check for available resources & reservations)
63. How to delete datastore?
64. How to extend datastore?
65. What are the different options in DRS, like Manual, Semi-Automatic and Fully Automatic?
66. What are the VM setting options in the HA configuration area; what HA settings can we put on a specific VM?
67. What is the size limit of a datastore?
68. When we put ESX in maintenance mode or shut it down manually, does it restart the VM guests on another ESX, or do they VMotion with zero downtime? (VM guests VMotion with no downtime)
69. What is the location of License file in case of ESX base license?
(/etc/vmware/vmware.lic)
70. What are the different versions of ESX and what is the difference?
71. What are the different ports used for communication?
72. How will you recover fast in case of an ESX crash if you don't have a backup; how fast can you make an identical host?
73. What are the different methods of adding a Virtual Machine (Like from
template, cloning etc.)?
74. How would you plan to setup a new Infrastructure, what all are the
requirements?
75. How licensing works, would we need to purchase more licenses if I already
have Host with 2 Physical CPU and want to add new hardware with 4
Physical CPUs (In case of Enterprise edition)?
76. There are two types of VMDK files under a virtual machine, what is the
difference between them? (flat.vmdk is RAW (actual) file.)
77. How VMotion works in background?
78. What is the process to upgrade ESX 3.0 to 3.5 or 3.5 to 4.x?
79. What is the command to see all VMDK files (VMs on the host)? (vmware-cmd -l)
80. What is the command to see available LUNs? (esxcfg-mpath -l)
81. The License Server uses ports 27000 & 27010. How to change that? (From the VC Server Administration menu)
82. How many VM guests can you use per Virtual Center? (3.5: 2000, 3.0: 1500)
83. How many ESX hosts can you connect per Virtual Center? (3.5: 200, 3.0: 100)
84. How do you check the corrupted storage information?
85. What are the user roles in virtual center?
86. What will be the impact if VC SQL Server goes down?
87. Hiding the VMware Tools Icon - Open the registry and navigate to the
following key: HKEY_CURRENT_USER\Software\VMware, Inc.\VMware
Tools\ShowTray
88. What is the file name which stores Gateway information?
89. What is the command to register and unregister a VM?
90. What are Affinity & Anti-Affinity policies?
91. How to check a Preferred or Active path?
92. What are the communication port / protocol during VMotion?
93. What is the maximum size of the ESX swap partition? (Default is 544 MB)

Some other Q.

1. Is HA dependent on Virtual Center? (No, only for install)

2. What is the maximum number of host failures allowed in a cluster? (4)

3. How does HA know to restart a VM from a dropped host? (The storage lock is removed from the VMFS metadata)

4. How many iSCSI targets will ESX support? (8 for 3.0.1, 64 for 3.5)

5. How many Fibre Channel targets? (256; 128 on install)

6. What is VMotion? (The ability to move a running VM from one host to another)


7. What is the difference when you use the VI Client to connect to VC versus directly to the ESX server itself?
8. When you connect to VC you manage the ESX server via vpxa (the agent on the ESX server). Vpxa then passes those requests to hostd (the management service on the ESX server).
9. When you connect to the ESX server directly, you connect to hostd (bypassing vpxa). You can extend this to a troubleshooting case, where connecting to ESX shows one thing and connecting to VC shows another. The problem is most likely that hostd and vpxa are out of sync; "service vmware-vpxa restart" should take care of it.
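A minimal troubleshooting sequence from the service console (the service names below are the standard ESX 3.x ones; verify them on your build):
# service mgmt-vmware status      (hostd, the host management service)
# service vmware-vpxa status      (vpxa, the VirtualCenter agent)
# service vmware-vpxa restart     (re-sync vpxa with hostd)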

10. The default partitions in VMware ESX: / (5 GB), /boot (100 MB), swap (544 MB), vmkcore (100 MB), /vmfs.

11. Types of licensing: Starter and Standard.

a. Starter: limited production-oriented features (local and NAS storage only).
b. Standard: production-oriented features; all the add-on licenses can be configured with this edition. Unlimited maximum number of VMs; SAN, iSCSI, NAS and vSMP support; and the VCB add-on at additional cost.

12. The VMware Service Console manages the firewall, SNMP agent, Apache Tomcat, and other services such as HA & DRS.

13. VMware virtual machine files are: VMDK (virtual disk), VMX (configuration), NVRAM (BIOS file), and the log file.

14. The ESX Server hypervisor is known as the VMkernel.

15. The ESX Server hypervisor offers basic partitioning of server resources; however, it also acts as the foundation for virtual infrastructure software, enabling VMotion, DRS, and the other keys to the dynamic, automated datacenter.

16. Host agent: on each managed host, software that collects, communicates, and executes the actions received through the VI Client. It is installed as part of the ESX Server installation.

17. Virtual Center agent: on each managed host, software that collects, communicates, and executes the actions received from the Virtual Center server. The Virtual Center agent is installed the first time any host is added to the Virtual Center inventory.
18. ESX Server installation requirements: 1500 MHz Intel or AMD CPU, 1 GB memory minimum (up to 256 GB supported), 4 GB hard drive space.

19. The configuration file that manages the mapping of Service Console file systems to mount points is /etc/fstab.

20. ESX mount points at the time of installation:
21. /boot 100 MB ext3
22. / 5 GB ext3
23. swap 544 MB
24. /var/log 2 GB
25. /vmfs/volumes as required, VMFS-3
26. vmkcore 100 MB vmkcore
27. /dev/cciss/c0d0 is considered local SCSI storage
28. /dev/sda is a storage-network-based LUN

29. The VI Client provides direct access to an ESX server for configuration and virtual machine management.

30. The VI Client is also used to access Virtual Center to provide management, configuration, and monitoring of all ESX servers and their virtual machines within the virtual infrastructure environment. However, when using the VI Client to connect directly to the ESX server, no Virtual Center feature can be managed; e.g., you cannot configure and administer VMware DRS or VMware HA.
31. VMware license mode: a default 60-day trial. After 60 days you can create VMs but you cannot power them on. The license types are Foundation, Standard and Enterprise.

32. Foundation license: VMFS, Virtual SMP, Virtual Center agent, VMware Update Manager, VCB.

33. VI Standard license: Foundation license + the HA feature.

34. Enterprise license: Foundation + Standard licenses + VMotion, Storage VMotion, and VMware DRS.

35. Virtual machines can time-sync with the ESX host (via VMware Tools).

36. By default, the first Service Console network connection is always named "Service Console". It is always on vSwitch0, and that switch always connects to vmnic0.
37. To gather VMware diagnostics information, run the script vm-support from the service console. If you generate the diagnostic information from Virtual Center, it is stored in a VMware-virtualcenter-support-date@time folder. The folder contains viclient-support, which holds the VI Client log files, and esx-support-date@time.tgz, a compressed archive containing the ESX server diagnostics information.
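For example, from the service console (a minimal sketch; the exact archive name includes the date and time of the run):
# vm-support
# ls esx-*.tgz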

38. The virtual switch works at Layer 2 of the OSI model.

39. You cannot have two virtual switches mapped to one physical NIC.

40. You can map two or more physical NICs to one virtual switch.

41. A virtual switch can be used by the VMkernel for accessing iSCSI or NAS-based storage.

42. A virtual switch can be used to give the Service Console access to a management LAN.

43. A virtual switch can have 1016 usable ports, with 8 ports reserved for management purposes, for a total of 1024.

44. A virtual switch has 56 ports by default.

45. An ESX server can have a maximum of 4096 virtual switch ports.

46. Maximum of 4 virtual NICs per VM.

47. An ESX server can have up to 32 Intel NICs or 20 Broadcom NICs.

48. Three types of network connections:

49. Service Console port: access to the ESX server management network.

50. VMkernel port: access to VMotion, iSCSI, and NFS/NAS networks.

51. Virtual machine port group: access to VM networks.

52. A Service Console port requires a network label; VLAN ID is optional; static IP or DHCP.

53. Multiple Service Console connections can be created only if they are configured on different networks. In addition, only a single Service Console gateway IP address can be defined.

54. A VMkernel port allows use of iSCSI and NAS-based networks. A VMkernel port is required for VMotion.
55. It requires a network label, an optional VLAN ID, and IP settings.

56. Multiple VMkernel connections can be configured only if they are configured on different networks; only a single VMkernel gateway IP address can be defined.

57. A virtual machine port group requires a network label; VLAN ID is optional.

58. Three network policies are available for the vSwitch:

59. Security
60. Traffic shaping
61. NIC teaming

62. Network security policy modes:

63. Promiscuous mode: when set to Reject, placing a guest adapter in promiscuous mode has no effect on which frames are received by the adapter.

64. MAC address changes: when set to Reject, if the guest attempts to change the MAC address assigned to the virtual NIC, it stops receiving frames.

65. Forged transmits: when set to Reject, ESX drops any frames that the guest sends where the source address field contains a MAC address other than the assigned virtual NIC MAC address (default is Accept).

66. Up to 32 ESX hosts can use a single shared storage volume.

67. vmhba0:0:11:3 = adapter 0 : target ID 0 : LUN 11 : partition 3.

68. SNMP incoming port 161 & outgoing port 162.

69. ISCSI client outgoing port 3260

70. Virtual center agent 902

71. NTP client: port 123; VCB: ports 443 and 902.

72. The default iSCSI software storage adapter is vmhba32.

73. iSCSI follows the IQN naming convention.

74. ISCSI uses CHAP authentication

75. VMware license port is 27000


76. After changes are made at the command line, you need to restart the hostd daemon for the changes to take effect: service mgmt-vmware restart.

77. View the iSCSI name assigned to the iSCSI software adapter: vmkiscsi-tool -I -l vmhba40

78. View the iSCSI alias assigned to the iSCSI software adapter: vmkiscsi-tool -k -l vmhba40

79. Log in to the service console as root and execute esxcfg-vmhbadevs to identify which LUNs are currently seen by the ESX server. # esxcfg-vmhbadevs

80. Run the esxcfg-vmhbadevs command with the -m option to map VMFS names to VMFS UUIDs. Note that the LUN partition numbers are shown in this output. The hexadecimal values are described later. # esxcfg-vmhbadevs -m

82. Use the vdf -h command to identify disk statistics (Size, Used, Avail, Use%, Mounted on) for all file system volumes recognized by your ESX host.

83. List the contents of the /vmfs/volumes directory. The hexadecimal numbers are the unique VMFS names; the other names are the VMFS labels. The labels are symbolic links to the VMFS volumes. ls -l /vmfs/volumes

84. Using the Linux device name (obtained using the esxcfg-vmhbadevs command), check LUNs A, B and C to see if any are partitioned. If there is no partition table (example a. below), go to step 3. If there is a table (example b.), go to step 2. # fdisk -l /dev/sd<?>

85. Format a partitioned LUN using vmkfstools. Use the -C and -S options, respectively, to create and label the volume. Using the command below, create a VMFS volume on LUN A. Ask your instructor if you should use a custom VMFS label name. # vmkfstools -C vmfs3 -S LUN<#> vmhba1:0:<#>:1
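As a concrete illustration with hypothetical values (adapter vmhba1, target 0, LUN 10, partition 1, label LUN10):
# vmkfstools -C vmfs3 -S LUN10 vmhba1:0:10:1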

86. Now that the LUN has been partitioned and formatted as a VMFS volume, it can
be used as a datastore. Your ESX host recognizes these new volumes. vdf -h

87. Use the esxcfg-vmhbadevs command with the -m option to map the VMFS hex names to SAN LUNs. # esxcfg-vmhbadevs -m

88. It may be helpful to change the label to identify that this VMFS volume is spanned. Add -spanned to the VMFS label name. # ln -sf /vmfs/volumes/<VMFS-UUID> /vmfs/volumes/<New-Label-Name>
89. In order to remove a span, you must reformat LUN B with a new VMFS volume (because it was the LUN that was spanned to). THIS WILL DELETE ALL DATA ON BOTH LUNS IN THE SPAN! # vmkfstools -C vmfs3 -S <label> vmhba1:0:<#>:1

90. Enable the ntpclient service on the Service Console - # esxcfg-firewall -e ntpclient

91. Determine if the NTP daemon starts when the system boots. # chkconfig --list
ntpd

92. Configure the system to synchronize the hardware clock and the operating system clock each time the ESX Server host is rebooted. # nano -w /etc/sysconfig/clock (set UTC=true)
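A minimal end-to-end NTP setup sketch for the service console (assumes a reachable time source such as pool.ntp.org; adjust the server name for your environment):
# esxcfg-firewall -e ntpclient
# echo "server pool.ntp.org" >> /etc/ntp.conf
# service ntpd restart
# chkconfig ntpd on
# hwclock --systohc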

93. List the available services. # esxcfg-firewall -s

94. Communication between the VI Client and ESX server requires ports 902 and 903.
95. Communication between VI client and virtual center - 902

96. Communication between VI web access client and ESX 80, 443

97. Communication between the VI Web Access client and Virtual Center: 80, 443.

98. Communication between ESX server and License server 27010 (in), 27000(out)

99. ESX server in a VMware HA cluster: 2050-5000 (in), 8042-8045 (out).

100. ESX server during VMotion 8000

101. The required port for ISCSI 3260, NFS : 2049

102. Update manager SOAP port – 8086

103. Update manager Web port - 9084

104. Vmware converter SOAP port 9085

105. Vmware converter Web port 9084
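If one of these ports is blocked by the ESX service console firewall, it can be checked and opened from the command line (a sketch; verify service and port names for your version with esxcfg-firewall -s):
# esxcfg-firewall -q                       (query current firewall settings)
# esxcfg-firewall -e ntpclient             (enable a named service)
# esxcfg-firewall -o 3260,tcp,out,iSCSI    (open a specific port: port,protocol,direction,name)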

106. List the different ways to identify your virtual machine. To do this, use the vcbVmName command: vcbVmName -h <VirtualCenter-Server-IP-Address-or-Hostname> -u <VirtualCenter-Server-user-account> -p <VirtualCenter-Server-password> -s ipaddr:<IP-address-of-virtual-machine-to-backup>
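For example, with hypothetical values (a VirtualCenter server named vcserver and a VM whose IP is 10.0.0.50), run from the VCB proxy:
vcbVmName -h vcserver -u administrator -p <password> -s ipaddr:10.0.0.50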
107. Unmount the virtual disk(s): mountvm -u c:\backups\tempmnt

108. A VMFS volume is created as one partition; 256 GB is the maximum size of a VM's virtual disk (with the default 1 MB block size). For a LUN, 32 extents can be added, up to 64 TB total.

109. 8 NFS mount points are the maximum by default.

110. Service console will use 272 MB

111. The files for a VMware virtual machine:

112. vmname.vmx -- virtual machine configuration file

113. vmname.vmdk -- descriptor for the virtual hard drive of the guest operating system

114. vmname-flat.vmdk -- preallocated virtual disk data (the actual disk contents)

115. vmname.log -- virtual machine log file

116. vmname.vswp -- VM swap file

117. vmname.vmsd -- VM snapshot metadata file

118. Log files should be used only when you are having trouble with a virtual
machine.
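For illustration, a VM's directory on a datastore typically looks something like the following (hypothetical VM named myvm; your file names will differ):
# ls /vmfs/volumes/datastore1/myvm/
myvm.vmx  myvm.vmdk  myvm-flat.vmdk  myvm.nvram  myvm.vswp  myvm.vmsd  myvm.log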

119. VMDK files – VMDK files are the actual virtual hard drive for the virtual guest operating system (virtual machine / VM). You can create either dynamic or fixed virtual disks. With dynamic disks, the disk file starts small and grows as the disk inside the guest OS fills. With fixed disks, the virtual disk and guest OS disk start out at the same (large) size. For more information on monolithic vs. split disks, see the comparison at sanbarrow.com.

120. VMEM – A VMEM file is a backup of the virtual machine’s paging file. It
will only appear if the virtual machine is running, or if it has crashed.

121. VMSN & VMSD files – these files are used for VMware snapshots. A
VMSN file is used to store the exact state of the virtual machine when the
snapshot was taken. Using this snapshot, you can then restore your machine to the
same state as when the snapshot was taken. A VMSD file stores information
about snapshots (metadata). You’ll notice that the names of these files match the
names of the snapshots.
122. NVRAM files – these files are the BIOS for the virtual machine. The VM
must know how many hard drives it has and other common BIOS settings. The
NVRAM file is where that BIOS information is stored.

123. VMX files – a VMX file is the primary configuration file for a virtual machine. When you create a new virtual machine and answer questions about the operating system, disk sizes, and networking, those answers are stored in this file. A VMX file is actually a simple text file that can be edited with Notepad or any text editor.
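An illustrative excerpt of what such a file typically contains (hypothetical values for a VM named "Windows XP Professional"; the exact keys vary by product and virtual hardware version):
config.version = "8"
virtualHW.version = "4"
memsize = "512"
displayName = "Windows XP Professional"
guestOS = "winXPPro"
scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"
scsi0:0.fileName = "Windows XP Professional.vmdk"
ethernet0.present = "TRUE"
ethernet0.networkName = "VM Network"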

124. We can create a VM:

125. from scratch
126. by deploying from a template
127. by cloning
128. by P2V conversion
129. from an ISO file
130. from an existing VMX file

131. The maximum is about 4 to 8 vCPUs per physical core.

132. At the time of VMotion an ARP notification will be released. 70 to 80% of memory will be copied to the other ESX host, a bitmap file will be created, users keep working against the bitmap, and the remaining changes are then copied to the other ESX host.

133. How VMotion works: --Live migration of a virtual machine from one
physical server to Another with VMotion is enabled by three underlying
technologies.

134. First, the entire state of a virtual machine is encapsulated by a set of files
stored on shared storage such as FC or iSCSI SAN or NAS. VMware clustered
VMFS allows multiple installations of ESX Server to access the same virtual
machine files concurrently.

135. Second, the active memory and precise execution state of the virtual
machine is rapidly transferred over a high speed network, allowing the virtual
machine to instantaneously switch from running on the source ESX Server to the
destination ESX Server. VMotion keeps the transfer period imperceptible to users
by keeping track of on-going memory transactions in a bitmap. Once the entire
memory and system state has been copied over to the target ESX Server, VMotion
suspends the source virtual machine, copies the bitmap to the target ESX Server,
and resumes the virtual machine on the target ESX Server. This entire process
takes less than two seconds on a Gigabit Ethernet network.
136. Third, the networks being used by the virtual machine are also virtualized
by the underlying ESX Server, ensuring that even after the migration, the virtual
machine network identity and network connections are preserved. VMotion
manages the virtual MAC address as part of the process. Once the destination
machine is activated, VMotion pings the network router to ensure that it is aware
of the new physical location of the virtual MAC address. Since the migration of a
virtual machine with VMotion preserves the precise execution state, the network
identity, and the active network connections, the result is zero downtime and no
disruption to users.

137. DRS will balance the workload across the resources you presented to the
cluster. It is an essential component of any successful ESX implementation.

138. With VMware ESX 3.x and VirtualCenter 2.x, it's possible to configure
VirtualCenter to manage the access to the resources automatically, partially, or
manually by an administrator.

139. This option is particularly useful for setting an ESX server into
maintenance mode. Maintenance mode is a good environment to perform tasks
such as scanning for new storage area network (SAN) disks, reconfiguring the
host operating system's networking or shutting down the server for maintenance.
Since virtual machines can't be run during maintenance mode, the virtual
machines need to be relocated to other host servers. Commonly, administrators
will configure the ESX cluster to fully automate the rules for the DRS settings.
This allows Virtual Center to take action based on workload statistics, available
resources and available host servers.

140. An important point to keep in mind is that DRS works in conjunction with
any established resource pools defined in the Virtual Center configuration. Poor
resource pool configuration (such as using unlimited options) can cause DRS to
make unnecessary performance adjustments. If you truly need to use unlimited resources within a resource pool, the best practice would be to isolate them. Isolation
requires a separate ESX cluster with a limited number of ESX hosts that share a
single resource pool where the virtual machines that require unlimited resources
are allowed to operate. Sharing unlimited setting resource pools with limited
setting resource pools within the same cluster could cause DRS to make
unnecessary performance adjustments. DRS can compensate for this scenario, but
that could be by bypassing any resource provisioning and planning previously
established.

141. How VMotion works with DRS :- The basic concept of VMotion is that
ESX will move a virtual machine while it is running to another ESX host with the
move being transparent to the virtual machine. ESX requires a dedicated network
interface at 1 GB per second or greater, shared storage and a virtual machine that
can be moved. Not all virtual machines can be moved. Certain situations, such as
optical image binding to an image file, prevent a virtual machine from migrating.
With VMotion enabled, an active virtual machine can be moved automatically or
manually from one ESX host to another. An automatic situation would be as
described earlier when a DRS cluster is configured for full automation. When the
cluster goes into maintenance mode, the virtual machines are moved to another
ESX host by VMotion. Should the DRS cluster be configured for all manual
operations, the migration via VMotion is approved within the Virtual
Infrastructure Client, and then VMotion proceeds with the moves.

142. VMware ESX 3.5 introduces the highly anticipated Storage VMotion.
Should your shared storage need to be brought offline for maintenance, Storage
VMotion can migrate an active virtual machine to another storage location. This
migration will take longer, as the geometry of the virtual machine's storage is
copied to the new storage location. Because this is not a storage solution, the
traffic is managed through the VMotion network interface.

143. Points to consider - One might assume that with the combined use of DRS
and VMotion that all bases are covered. Well, not entirely. There are a few
considerations that you need to be aware of so that you know what DRS and
VMotion can and cannot do for you.

144. VMotion does not give an absolute zero gap of connectivity during a
migration. In my experience, the drop in connectivity via ping is usually limited
to one ping from a client or a miniscule increase in ping time on the actual virtual
machine. Most situations will not notice the change and reconnect over the
network during a VMotion migration. There also is a slight increase in memory
usage and on larger virtual machines this may cause a warning light on RAM
usage that usually clears independently.

145. Some virtual machines may fail to migrate, whether by an automatic VMotion task or when invoked manually. This is generally caused by obsolete virtual machines, CD-ROM binding, or other reasons that may not be intuitive. In one migration failure I experienced recently, the Virtual Infrastructure client did not provide any information other than that the operation timed out, and the Virtual Center server had no information related to the migration task in its local logs or database. Identification of your risks is the most important pre-implementation task you can do with DRS and VMotion. So what can you do to identify your risks? Here are a couple of easy tasks:
146. Schedule VMotion for all systems to keep them moving across hosts.
147. Regularly put ESX hosts in and then exit maintenance mode.
148. Do not leave mounted CD-ROM media on virtual machines
(datastore/ISO file or host device options).
149. Keep virtual machines up to date with VMware tools and virtual machine
versioning.
150. Monitor the VPX_EVENT table in your Virtual Center database for EVENT_TYPE = vim.event.VmFailedMigrateEvent.
151. All in all, DRS and VMotion are solid technologies. Anomalies can
happen, and the risks should be identified and put into your regular monitoring for
visibility.
152. VMotion usage scenarios

153. Now that VMotion is enabled on two or more hosts, when should it be
used? There are two primary reasons to use VMotion: to balance the load on the
physical ESX servers and eliminate the need to take a service offline in order to
perform maintenance on the server.

154. VI3 balances its load by using a new feature called DRS. DRS is included
in the VI3 Enterprise edition along with VMotion. This is because DRS uses
VMotion to balance the load of an ESX cluster in real time between all of the
servers involved in the cluster. For information on how to configure DRS see page
95 of the VMware VI3 Resource Management Guide. Once DRS is properly
configured it will constantly be evaluating how best to distribute the load of
running VMs amongst all of the host servers involved in the DRS-enabled cluster.
If DRS decides that a particular VM would be better suited to run on a different
host then it will utilize VMotion to seamlessly migrate the VM over to the other
host.

155. While DRS migrates VMs here and there with VMotion, it is also possible
to migrate all of the VMs off of one host server (resources permitting) and onto
another. This is accomplished by putting a server into "maintenance mode." When
a server is put into maintenance mode, VMotion will be used to migrate all of the
running VMs off it onto another server. This way it is possible to bring the first
server offline to perform physical maintenance on it without impacting the
services that it provides.

156. How VMotion works


157. As stated above, VMotion is the process that VMware has invented to
migrate, or move, a virtual machine that is powered on from one host server to
another host server without the VM incurring downtime. This is known as a "hot-
migration." How does this hot-migration technology that VMware has dubbed
VMotion work? Well, as with everything, in a series of steps:

158. A request has been made that VM-A should be migrated (VMotioned)
from ESX-A to ESX-B

159. VM-A's memory is pre-copied from ESX-A to ESX-B while ongoing


changes are written to a memory bitmap on ESX-A.

160. VM-A is quiesced on ESX-A and VM-A's memory bitmap is copied to


ESX-B.
161. VM-A is started on ESX-B and all access to VM-A is now directed to the
copy running on ESX-B.

162. The rest of VM-A's memory is copied from ESX-A; in the meantime, when applications on VM-A (now running on ESX-B) attempt to access memory that has not yet arrived, it is read from ESX-A on demand.

163. If the migration is successful VM-A is unregistered on ESX-A.


164. ------------------------------------

165. For a VMotion event to be successful the following must be true:

166. The VM cannot be connected to an internal vswitch.

167. The VM cannot be connected to a CD-ROM or floppy drive that is using an ISO or floppy image stored on a drive that is local to the host server.

168. The VM's affinity must not be set, i.e., binding it to physical CPU(s).

169. The VM must not be clustered with another VM (using a cluster service like the Microsoft Cluster Service (MSCS)).

170. The two ESX servers involved must both be using (the same!) shared
storage.

171. The two ESX servers involved must be connected via Gigabit Ethernet (or
better).

172. The two ESX servers involved must have access to the same physical
networks.

173. The two ESX servers involved must have virtual switch port groups that are labeled the same.

174. The two ESX servers involved must have compatible CPUs. (See support
on Intel and AMD).
175. If any of the above conditions are not met, VMotion is not supported and will not start. The simplest way to test these conditions is to attempt a manual VMotion event. This is accomplished by right-clicking on the VM in the VI3 client and clicking on "Migrate...". The VI3 client will ask to which host this VM should be migrated. When a host is selected, several validation checks are performed. If any of the above conditions are violated, the VI3 client will halt the VMotion operation with an error.

176. Conclusion
177. The intent of this article was to provide readers with a solid grasp of what
VMotion is and how it can benefit them. If you have any outstanding questions
with regards to VMotion or any VMware technology please do not hesitate to
send them to me via ask the experts.
178. ------------------------------------------------------------------------
179. What Is VMware VMotion?
180. VMware® VMotion™ enables the live migration of running virtual machines from one physical server to another with zero downtime, continuous service availability, and complete transaction integrity. VMotion allows IT organizations to:
183. • Continuously and automatically allocate virtual machines within resource pools.
184. • Improve availability by conducting maintenance without disrupting business operations.
185. VMotion is a key enabling technology for creating the dynamic, automated, and self-optimizing data center.

186. How Is VMware VMotion Used?

187. VMotion allows users to:
188. • Automatically optimize and allocate entire pools of resources for maximum hardware utilization, flexibility and availability.
190. • Perform hardware maintenance without scheduled downtime.
191. • Proactively migrate virtual machines away from failing or underperforming servers.

192. How Does VMotion work?

193. Live migration of a virtual machine from one physical server to another
with VMotion is enabled by three underlying technologies.

a. First, the entire state of a virtual machine is encapsulated by a set of files stored on shared storage such as Fibre Channel or iSCSI SAN or NAS. VMware's clustered VMFS allows multiple installations of ESX Server to access the same virtual machine files concurrently.
b. Second, the active memory and precise execution state of the virtual
machine is rapidly transferred over a high speed network, allowing the
virtual machine to instantaneously switch from running on the source ESX
Server to the destination ESX Server. VMotion keeps the transfer period
imperceptible to users by keeping track of on-going memory transactions
in a bitmap. Once the entire memory and system state has been copied
over to the target ESX Server, VMotion suspends the source virtual
machine, copies the bitmap to the target ESX Server, and resumes the
virtual machine on the target ESX Server. This entire process takes less
than two seconds on a Gigabit Ethernet network.
c. Third, the networks being used by the virtual machine are also virtualized
by the underlying ESX Server, ensuring that even after the migration, the
virtual machine network identity and network connections are preserved.
VMotion manages the virtual MAC address as part of the process. Once
the destination machine is activated, VMotion pings the network router to
ensure that it is aware of the new physical location of the virtual MAC
address. Since the migration of a virtual machine with VMotion preserves
the precise execution state, the network identity, and the active network
connections, the result is zero downtime and no disruption to users.

194. What is VirtualCenter? VirtualCenter is virtual infrastructure management software that centrally manages an enterprise's virtual machines as a single, logical pool of resources. VirtualCenter provides:
a. Centralized virtual machine management. Manage hundreds of virtual
machines from one location through robust access controls.
b. Monitor system availability and performance. Configure automated
notifications and e-mail alerts.
c. Instant provisioning. Reduces server-provisioning time from weeks to tens
of seconds.
d. Zero-downtime maintenance. Safeguards business continuity 24/7, without
service interruptions for hardware maintenance, deployment,or migration.
e. Continuous workload consolidation. Optimizes the utilization of data
center resources to minimize unused capacity.
f. SDK. Closely integrates 3rd-party management software with VirtualCenter, so that the solutions you use today will work seamlessly within the virtual infrastructure.

195. What is Storage VMotion (SVMotion) and how do you perform a SVMotion using the VI Plugin?
a. There are at least 3 ways to perform a SVMotion – from the remote
command line, interactively from the command line, and with the
SVMotion VI Client Plugin. Note: You need to have VMotion configured
and working for SVMotion to work. Additionally, there are a ton of
caveats about SVMotion in the ESX 3.5 administrator’s guide (page 245)
that could cause SVMotion not to work. One final reminder, SVMotion
works to move the storage for a VM from a local datastore on an ESX
server to a shared datastore (a SAN) and back – SVMotion will not move
a VM at all – only the storage for a VM.
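As a sketch of the interactive command-line method (the svmotion utility ships with the VMware Infrastructure Remote CLI; it prompts for the Virtual Center server, datacenter, the VM's .vmx path, and the target datastore):
svmotion --interactive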

196. Overview of VMware ESX / VMware Infrastructure Advanced Features


a. ESX Server & ESXi Server. Even if all that you purchase is the most basic
VMware ESXi virtualization package at a cost of $495, you still gain a
number of advanced features. Of course, virtualization, in general, offers
many benefits, no matter the virtualization package you choose. For
example - hardware independence, better utilization of hardware, ease of
management, fewer data center infrastructure resources required, and
much more. While I cannot go into everything that ESX Server (itself)
offers, here are the major advanced features:
b. Hardware-level virtualization – no base operating system license is needed; ESXi installs right on your hardware (bare-metal installation). VMFS file system – see advanced feature #2, below.
c. SAN Support – connectivity to iSCSI and Fibre Channel (FC) SAN
storage, including features like boot from SAN
d. Local SATA storage support.
e. 64 bit guest OS support.
f. Network Virtualization – virtual switches, virtual NICs, QoS & port
configuration policies, and VLAN.
g. Enhanced virtual machine performance – workloads may perform, in some cases, even better in a VM than on a physical server because of features like transparent page sharing and nested page tables.
h. Virtual SMP – see advanced feature #4, below.
i. Support for up to 64GB of RAM for VMs, up to 32 logical CPUs and
256GB of RAM on the host.
j. #2 VMFS
k. VMware’s VMFS was created just for VMware virtualization. Thus, it is
the highest performance file system available to use in virtualizing your
enterprise. While VMFS is included with any edition or package of ESX
Server or VI that you choose, VMFS is still listed as a separate product by
VMware. This is because it is so unique.
l. VMFS is a high-performance cluster file system allowing multiple systems to access the file system at the same time. VMFS is what gives you a solid platform to perform VMotion and VMHA. With VMFS you can dynamically increase a volume, use distributed journaling, and add a virtual disk on the fly.

m. #3 Virtual SMP
n. VMware’s Virtual SMP (or VSMP) is the feature that allows a VMware
ESX Server to utilize up to 4 physical processors on the host system,
simultaneously. Additionally, with VSMP, processing tasks will be
balanced among the various CPUs.

o. #4 VM High Availability (VMHA)


p. One of the most amazing capabilities of VMware ESX is VMHA. With 2
ESX Servers, a SAN for shared storage, Virtual Center, and a VMHA
license, if a single ESX Server fails, the virtual guests on that server will
move over to the other server and restart, within seconds. This feature
works regardless of the operating system used or if the applications
support it.

q. #5 VMotion & Storage VMotion


r. With VMotion, VM guests are able to move from one ESX Server to
another with no downtime for the users. VMotion is what makes DRS
possible. VMotion also makes maintenance of an ESX server possible,
again, without any downtime for the users of those virtual guests. What is
required is a shared SAN storage system between the ESX Servers and a
VMotion license.

s. Storage VMotion (or SVMotion) is similar to VMotion in the sense that "something" related to the VM is moved and there is no downtime to the
VM guest and end users. However, with SVMotion the VM Guest stays on
the server that it resides on but the virtual disk for that VM is what moves.
Thus, you could move a VM guest's virtual disks from one ESX server’s
local datastore to a shared SAN datastore (or vice versa) with no
downtime for the end users of that VM guest. There are a number of
restrictions on this. To read more technical details on how it works, please
see the VMware ESX Server 3.5 Administrators Guide.

t. #6 VMware Consolidated Backup (VCB)


u. VMware Consolidated Backup (or VCB) is a group of Windows command
line utilities, installed on a Windows system, that has SAN connectivity to
the ESX Server VMFS file system. With VCB, you can perform file level
or image level backups and restores of the VM guests, back to the VCB
server. From there, you will have to find a way to get those VCB backup
files off of the VCB server and integrated into your normal backup
process. Many backup vendors integrate with VCB to make that task
easier.

v. #7 VMware Update Manager


w. VMware Update Manager is a relatively new feature that ties into Virtual
Center & ESX Server. With Update Manager, you can perform ESX
Server updates and Windows and Linux operating system updates of your
VM guests. To perform ESX Server updates, you can even use VMotion
and upgrade an ESX Server without ever causing any downtime to the VM
guests running on it. Overall, Update Manager is there to patch your host
and guest systems to prevent security vulnerabilities from being exploited.

x. #8 VMware Distributed Resource Scheduler (DRS)


y. VMware’s Distributed Resource Scheduler (or DRS) is one of the other
truly amazing advanced features of ESX Server and the VI Suite. DRS is
essentially a load-balancing and resource scheduling system for all of your
ESX Servers. If set to fully automatic, DRS can recognize the best
allocation of resource across all ESX Server and dynamically move VM
guests from one ESX Server to another, using VMotion, without any
downtime to the end users. This can be used both for initial placement of
VM guests and for “continuous optimization” (as VMware calls it).
Additionally, this can be used for ESX Server maintenance.
z. #9 VMware’s Virtual Center (VC) & Infrastructure Client (VI Client)
aa. I prefer to list the VMware Infrastructure client & Virtual Center as one of
the advanced features of ESX Server & the VI Suite. Virtual Center is a
required piece of many of the advanced ESX Server features. Also, VC
has many advanced features in its own right. When tied with VC, the VI
Client is really the interface that a VMware administrator uses to
configure, optimize, and administer all of your ESX Server systems.

197. With the VI Client, you gain performance information, security & role
administration, and template-based rollout of new VM guests for the entire virtual
infrastructure. If you have more than 1 ESX Server, you need VMware Virtual
Center.

198. #10 VMware Site Recovery Manager (SRM)


199. Recently announced for sale and expected to be shipping in 30 days,
VMware’s Site Recovery Manager is a huge disaster recovery feature. If you have
two data centers (primary/protected and a secondary/recovery), VMware ESX
Servers at each site, and a SRM supported SAN at each site, you can use SRM to
plan, test, and recover your entire VMware virtual infrastructure.

200. VMware ESX Server vs. the VMware Infrastructure Suite


201. VMware ESX Server is packaged and purchased in 4 different packages.

a. VMware ESXi – the slimmed down (yet fully functional) version of ESX
server that has no service console. By buying ESXi, you get VMFS and
virtual SMP only.
b. VMware Infrastructure Foundation – previously called the Starter kit; the Foundation package includes ESX or ESXi, VMFS, Virtual SMP, Virtual Center agent, Consolidated Backup, and Update Manager.
c. VMware Infrastructure Standard – includes ESX or ESXi, VMFS, Virtual
SMP, Virtual center agent, consolidated backup, update manager, and
VMware HA.
202. VMware Infrastructure Enterprise – includes ESX or ESXi, VMFS,
Virtual SMP, Virtual center agent, consolidated backup, update manager,
VMware HA, VMotion, Storage VMotion, and DRS.
203. You should note that Virtual Center is required for some of the more
advanced features and it is purchased separately. Also, there are varying levels of
support available for these products. As the length and the priority of your support
package increase, so does the cost.

204. Advantages of VMFS

a. VMware's VMFS was created just for VMware virtualization. VMFS is a high-performance cluster file system allowing multiple systems to access the file system at the same time. VMFS is what gives you the necessary foundation to perform VMotion and VMHA. With VMFS you can dynamically increase a volume, use distributed journaling, and add a virtual disk on the fly.

205. Virtual Center license issues

a. All licensed functionality currently operating at the time the license server becomes unavailable continues to operate as follows:

b. All VirtualCenter licensed features continue to operate indefinitely, relying on a cached version of the license state. This includes not only basic VirtualCenter operation, but licenses for VirtualCenter add-ons, such as VMotion and DRS.

c. For ESX Server licensed features, there is a 14-day grace period during
which hosts continue operation, relying on a cached version of the license
state, even across reboots. After the grace period expires, certain ESX
Server operations, such as powering on virtual machines, become
unavailable. During the ESX Server grace period, when the license server
is unavailable, the following operations are unaffected:

d. Virtual machines continue to run. VI Clients can configure and operate virtual machines.

e. ESX Server hosts continue to run. You can connect to any ESX Server host in the VirtualCenter inventory for operation and maintenance.
f. Connections to the VirtualCenter Server remain. VI Clients can operate and maintain virtual machines from their host even if the VirtualCenter Server connection is also lost.

g. During the grace period, restricted operations include:

h. Adding ESX Server hosts to the VirtualCenter inventory. You cannot change VirtualCenter agent licenses for hosts.

i. Adding or removing hosts from a cluster. You cannot change host membership for the current VMotion, HA, or DRS configuration.

j. Adding or removing license keys.

k. When the grace period has expired, cached license information is no longer stored.
l. As a result, virtual machines can no longer be powered on. Running virtual machines continue to run but cannot be rebooted.

m. When the license server becomes available again, hosts reconnect to the
license server.

n. No rebooting or manual action is required to restore license availability. The grace period timer is reset whenever the license server becomes available again.

206. By default, ESX has 22 different users and 31 groups.

207. In VMware ESX Server, you have 4 roles by default.

208. Recovery command for vmkfstools, for a volume that has failed: # vmkfstools -R /vmfs/volumes/SAN-storage-2/

209. The ESX host's system UUID is found in the /etc/vmware/esx.conf file.

210. Recently VMware added a somewhat useful command-line tool named vmfs-undelete, which exports metadata to a recovery log file that can restore VMDK block addresses in the event of deletion. It's a simple tool; at present it is experimental, unsupported, and not available on ESXi. The tool of course demands that you were proactive and ran its backup function before you need it. That falls well short of what we need here if you have no previous backups of the VMFS configuration, so we really need to know what to look for and how to correct it, and that's exactly why I created this blog.

211. The command to list LUN paths is esxcfg-mpath -l; also check /var/log/vmkernel.
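To make newly presented LUNs visible, a typical sequence would be (a sketch; substitute your own HBA name):
# esxcfg-rescan vmhba1
# esxcfg-mpath -l
# tail /var/log/vmkernel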

212. To list physical NICs: esxcfg-nics -l

301. VCB Mounter is used, among other things, to create the snapshot for the
3rd party backup software to access:
vcbMounter -h <VC-IP-address-or-hostname> -u <VC-user-account>
-p <VC-user-password> -a <identifier-of-the-VM-to-backup>
-r <directory-on-VCB-proxy-to-put-backup> -t <backup-type: file or fullvm>
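
For example, a full-VM image backup of a guest identified by its IP address might look like the sketch below, run on the VCB proxy (the server name, credentials, IP address and target directory are placeholders, not values from these notes):
vcbMounter -h vcserver.example.com -u backupadmin -p secret -a ipaddr:10.0.0.50 -r c:\backups\vm50 -t fullvm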

302. List the different ways to identify your virtual machine. To do this, use the
303. vcbVmName command:
304. vcbVmName
305. -h <VirtualCenter-Server-IP-Address-or-Hostname>
306. -u <VirtualCenter-Server-user-account>
307. -p <VirtualCenter-Server-password>
308. -s ipaddr:<IP-address-of-virtual-machine-to-backup>
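
As a hedged example, the same lookup can be done by display name instead of IP address (name:, moref: and uuid: search keys are also accepted by -s; the values below are placeholders):
vcbVmName -h vcserver.example.com -u backupadmin -p secret -s name:WinXP-Test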

309. -------------------------

310. Unmount the virtual disk(s):


311. mountvm -u c:\backups\tempmnt

312. -----------------

313. A VMFS volume is created on a single partition; with the default 1 MB block size,
the maximum size of a single virtual disk (VMDK) file on a VMFS-3 volume is 256 GB.

314. A VMFS volume can be extended with up to 32 extents (LUNs), up to a maximum volume size of 64 TB.

315. By default, a maximum of 8 NFS mount points (datastores) are supported per ESX host.

316. -----------------

317. The Service Console uses 272 MB of RAM by default.


318. ----------------

319. The files for a VMware virtual machine (see the example listing after this list):

320. vmname.vmx --virtual machine configuration file

321. vmname.vmdk -- actual virtual hard drive for the virtual guest operating
system

322. vmname-flat.vmdk -- the preallocated virtual disk data file

323. vmname.log --virtual machine log file

324. vmname.vswap -- vm swap file

325. vmname.vmsd -- VM snapshot metadata file
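
An example of what these files look like in a VM's directory on a datastore (a sketch; the datastore and VM names are placeholders, and exact file names vary slightly by ESX version):
# ls /vmfs/volumes/datastore1/WinXP-Test/
WinXP-Test.vmx  WinXP-Test.vmdk  WinXP-Test-flat.vmdk  WinXP-Test.vswp
WinXP-Test.vmsd  WinXP-Test.nvram  vmware.log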


326. Log files should be used only when you are having trouble with a virtual
machine.

327. VMDK files – VMDK files are the actual virtual hard drive for the virtual
guest operating system (virtual machine / VM). You can create either dynamic or
fixed virtual disks. With dynamic disks, the disks start small and grow as the disk
inside the guest OS grows. With fixed disks, the virtual disk is allocated at its full
(large) size up front. For more information on monolithic vs. split
disks see this comparison from sanbarrow.com.
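
On an ESX host a virtual disk can be created from the Service Console with vmkfstools; a minimal sketch with placeholder paths (on hosted products such as Workstation the dynamic/fixed choice is made in the new-disk wizard instead):
# vmkfstools -c 10G /vmfs/volumes/datastore1/vm1/vm1.vmdk   (create a 10 GB virtual disk)
Where supported, the -d option selects the allocation policy (for example thin vs. thick).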

328. VMEM – A VMEM file is a backup of the virtual machine’s paging file. It
will only appear if the virtual machine is running, or if it has crashed.

329. VMSN & VMSD files – these files are used for VMware snapshots. A
VMSN file is used to store the exact state of the virtual machine when the
snapshot was taken. Using this snapshot, you can then restore your machine to the
same state as when the snapshot was taken. A VMSD file stores information
about snapshots (metadata). You’ll notice that the names of these files match the
names of the snapshots.

330. NVRAM files – these files are the BIOS for the virtual machine. The VM
must know how many hard drives it has and other common BIOS settings. The
NVRAM file is where that BIOS information is stored.

331. VMX files – a VMX file is the primary configuration file for a virtual
machine. When you create a new virtual machine and answer questions about the
operating system, disk sizes, and networking, those answers are stored in this file.
A VMX file is actually a simple text file that can be edited with Notepad.

332. -------------

333. We can create a VM in several ways:
334. 1. From scratch
335. 2. Deploy from a template
336. 3. Clone an existing VM
337. 4. P2V conversion
338. 5. From an ISO image
339. 6. From an existing .vmx file

340. -----------
341. A VM supports a maximum of 4 vCPUs on ESX 3.x (8 with vSphere).
342. -----------------

343. At the time of VMotion, an ARP/RARP notification is sent out so the network learns the VM's new location.

344. Most of the VM's memory (roughly 70 to 80%) is pre-copied to the other ESX host.

345. A memory bitmap is created, and the changes users make while still working are tracked in that bitmap,

346. and those changes are then copied to the other ESX host.

347. How vmotion works : --

348. Live migration of a virtual machine from one physical server to


349. another with VMotion is enabled by three underlying technologies.

350. First, the entire state of a virtual machine is encapsulated by a set


351. of files stored on shared storage such as Fibre Channel or iSCSI
352. Storage Area Network (SAN) or Network Attached Storage (NAS).
353. VMware’s clustered Virtual Machine File System (VMFS) allows
354. multiple installations of ESX Server to access the same virtual
355. machine files concurrently.

356. Second, the active memory and precise execution state of the
357. virtual machine is rapidly transferred over a high speed network,
358. allowing the virtual machine to instantaneously switch from
359. running on the source ESX Server to the destination ESX Server.
360. VMotion keeps the transfer period imperceptible to users by
361. keeping track of on-going memory transactions in a bitmap. Once
362. the entire memory and system state has been copied over to the
363. target ESX Server, VMotion suspends the source virtual machine,
364. copies the bitmap to the target ESX Server, and resumes the virtual
365. machine on the target ESX Server. This entire process takes less
366. than two seconds on a Gigabit Ethernet network.

367. Third, the networks being used by the virtual machine are also virtualized
368. by the underlying ESX Server, ensuring that even after the
369. migration, the virtual machine network identity and network connections
370. are preserved. VMotion manages the virtual MAC address
371. as part of the process. Once the destination machine is activated,
372. VMotion pings the network router to ensure that it is aware of the
373. new physical location of the virtual MAC address. Since the migration
374. of a virtual machine with VMotion preserves the precise execution
375. state, the network identity, and the active network connections,
376. the result is zero downtime and no disruption to users.

377. -----------------------------------
378. DRS

379. DRS will balance the workload across the resources you presented to the
cluster. It is an essential component of any successful ESX implementation.

380. With VMware ESX 3.x and VirtualCenter 2.x, it's possible to configure
VirtualCenter to manage the access to the resources automatically, partially, or
manually by an administrator.

381. This option is particularly useful for setting an ESX server into
maintenance mode. Maintenance mode is a good environment to perform tasks
such as scanning for new storage area network (SAN) disks, reconfiguring the
host operating system's networking or shutting down the server for maintenance.
Since virtual machines can't be run during maintenance mode, the virtual
machines need to be relocated to other host servers. Commonly, administrators
will configure the ESX cluster to fully automate the rules for the DRS settings.
This allows VirtualCenter to take action based on workload statistics, available
resources, and available host servers.

382. An important point to keep in mind is that DRS works in conjunction with
any established resource pools defined in the VirtualCenter configuration. Poor
resource pool configuration (such as using unlimited options) can cause DRS to
make unnecessary performance adjustments. If you truly need to use unlimited
resources within a resource pool the best practice would be to isolate. Isolation
requires a separate ESX cluster with a limited number of ESX hosts that share a
single resource pool where the virtual machines that require unlimited resources
are allowed to operate. Sharing unlimited setting resource pools with limited
setting resource pools within the same cluster could cause DRS to make
unnecessary performance adjustments. DRS can compensate for this scenario, but
it may do so by bypassing any resource provisioning and planning previously
established.

383. ----------------------

384. How VMotion works with DRS

385. The basic concept of VMotion is that ESX will move a virtual machine
while it is running to another ESX host with the move being transparent to the
virtual machine.

386. VMotion requires a dedicated network interface at 1 Gbps or greater,

387. shared storage, and a virtual machine that can be moved.

388. Not all virtual machines can be moved. Certain situations, such as optical
image binding to an image file, prevent a virtual machine from migrating. With
VMotion enabled, an active virtual machine can be moved automatically or
manually from one ESX host to another. An automatic situation would be as
described earlier when a DRS cluster is configured for full automation. When the
cluster goes into maintenance mode, the virtual machines are moved to another
ESX host by VMotion. Should the DRS cluster be configured for all manual
operations, the migration via VMotion is approved within the Virtual
Infrastructure Client, then VMotion proceeds with the moves.

389. VMware ESX 3.5 introduces the highly anticipated Storage VMotion.
Should your shared storage need to be brought offline for maintenance,Storage
VMotion can migrate an active virtual machine to another storage location. This
migration will take longer, as the geometry of the virtual machine's storage is
copied to the new storage location. Because this is not a storage solution, the
traffic is managed through the VMotion network interface.

390. Points to consider

391. One might assume that with the combined use of DRS and VMotion that
all bases are covered. Well, not entirely. There are a few considerations that you
need to be aware of so that you know what DRS and VMotion can and cannot do
for you.

392. VMotion does not give an absolute zero gap of connectivity during a
migration. In my experience, the drop in connectivity via ping is usually limited
to one ping from a client or a minuscule increase in ping time on the actual virtual
machine. Most situations will not notice the change and reconnect over the
network during a VMotion migration. There also is a slight increase in memory
usage and on larger virtual machines this may cause a warning light on RAM
usage that usually clears independently.

393. Some virtual machines may fail to migrate, whether by automatic


VMotion task or if evoked manually. This is generally caused by obsolete virtual
machines, CD-ROM binding or other reasons that may not be intuitive. In one
migration failure I experienced recently, the Virtual Infrastructure client did not
provide any information other than the operation timed out. The Virtual Center
server had no information related to the migration task in the local logs. In the
database

394. Identification of your risks is the most important pre-implementation task


you can do with DRS and VMotion. So what can you do to identify your risks?
Here are a couple of easy tasks:

395. Schedule VMotion for all systems to keep them moving across hosts.
396. Regularly put ESX hosts in and then exit maintenance mode.
397. Do not leave mounted CD-ROM media on virtual machines
(datastore/ISO file or host device options).
398. Keep virtual machines up to date with VMware tools and virtual machine
versioning.
399. Monitor the VPX_EVENT table in your ESX database for the
EVENT_TYPE = vim.event.VmFailedMigrateEvent
400. All in all, DRS and VMotion are solid technologies. Anomalies can
happen, and the risks should be identified and put into your regular monitoring for
visibility.

401. VMotion usage scenarios


402. Now that VMotion is enabled on two or more hosts, when should it be
used? There are two primary reasons to use VMotion: to balance the load on the
physical ESX servers and eliminate the need to take a service offline in order to
perform maintenance on the server.

403. VI3 balances its load by using a new feature called DRS. DRS is included
in the VI3 Enterprise edition along with VMotion. This is because DRS uses
VMotion to balance the load of an ESX cluster in real time between all of the
servers involved in the cluster. For information on how to configure DRS see page
95 of the VMware VI3 Resource Management Guide. Once DRS is properly
configured it will constantly be evaluating how best to distribute the load of
running VMs amongst all of the host servers involved in the DRS-enabled cluster.
If DRS decides that a particular VM would be better suited to run on a different
host then it will utilize VMotion to seamlessly migrate the VM over to the other
host.

404. While DRS migrates VMs here and there with VMotion, it is also possible
to migrate all of the VMs off of one host server (resources permitting) and onto
another. This is accomplished by putting a server into "maintenance mode." When
a server is put into maintenance mode, VMotion will be used to migrate all of the
running VMs off it onto another server. This way it is possible to bring the first
server offline to perform physical maintenance on it without impacting the
services that it provides.
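
Maintenance mode can also be entered and left from the host's own command line; a sketch using the vmware-vim-cmd interface mentioned later in these notes (sub-command availability may vary by ESX version):
# vmware-vim-cmd hostsvc/maintenance_mode_enter
# vmware-vim-cmd hostsvc/maintenance_mode_exit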

405. How VMotion works


406. As stated above, VMotion is the process that VMware has invented to
migrate, or move, a virtual machine that is powered on from one host server to
another host server without the VM incurring downtime. This is known as a "hot-
migration." How does this hot-migration technology that VMware has dubbed
VMotion work? Well, as with everything, in a series of steps:

407. A request has been made that VM-A should be migrated (VMotioned)
from ESX-A to ESX-B

408. VM-A's memory is pre-copied from ESX-A to ESX-B while ongoing


changes are written to a memory bitmap on ESX-A.
409. VM-A is quiesced on ESX-A and VM-A's memory bitmap is copied to
ESX-B.

410. VM-A is started on ESX-B and all access to VM-A is now directed to the
copy running on ESX-B.

411. The rest of VM-A's memory is copied from ESX-A; in the meantime, when
applications on VM-A (now running on ESX-B) attempt to access memory pages
that have not yet been copied, those pages are read from ESX-A on demand.

412. If the migration is successful VM-A is unregistered on ESX-A.


413. For a VMotion event to be successful the following must be true:
414. *Editor's Note: Special thanks to Colin Stamp of IBM United Kingdom
Ltd. for rewriting the following list.

415. The VM cannot be connected to an internal vswitch.

416. The VM cannot be connected to a CD-ROM or floppy drive that is using


an ISO or floppy image stored on a drive that is local to the host server.

417. The VM's affinity must not be set, i.e., binding it to physical CPU(s).

418. The VM must not be clustered with another VM (using a cluster service
like the Microsoft Cluster Service (MSCS)).

419. The two ESX servers involved must both be using (the same!) shared
storage.

420. The two ESX servers involved must be connected via Gigabit Ethernet (or
better).

421. The two ESX servers involved must have access to the same physical
networks.

422. The two ESX servers involved must have virtual switch port groups
that are labeled the same (consistent network labels).

423. The two ESX servers involved must have compatible CPUs. (See support
on Intel and AMD).
424. If any of the above conditions are not met, VMotion is not supported and
will not start. The simplest way to test these conditions is to attempt a manual
VMotion event. This is accomplished by right-clicking on VM in the VI3 client
and clicking on "Migrate..." The VI3 client will ask to which host this VM should
be migrated. When a host is selected, several validation checks are performed. If
any of the above conditions are violated then the VI3 client will halt the VMotion
operation with an error.

425. Conclusion
426. The intent of this article was to provide readers with a solid grasp of what
VMotion is and how it can benefit them. If you have any outstanding questions
with regards to VMotion or any VMware technology please do not hesitate to
send them to me via ask the experts.
427. ------------------------------------------------------------------------
428. What Is VMware VMotion?
429. VMware® VMotion™ enables the live migration of running virtual
machines from one physical server to another
430. with zero downtime, continuous service availability, and complete
transaction integrity. VMotion allows IT organizations
431. to:
432. • Continuously and automatically allocate virtual machines within
resource pools.
433. • Improve availability by conducting maintenance without disrupting
business operations
434. VMotion is a key enabling technology for creating the dynamic,
automated, and self-optimizing data center.

435. How Is VMware VMotion Used?


436. VMotion allows users to:
437. • Automatically optimize and allocate entire pools of resources for
maximum hardware utilization, flexibility and
438. availability.
439. • Perform hardware maintenance without scheduled downtime.
440. • Proactively migrate virtual machines away from failing or
underperforming servers.

441. How Does VMotion work?

442. Live migration of a virtual machine from one physical server to another
with VMotion is enabled by three underlying
443. technologies.
444. First, the entire state of a virtual machine is encapsulated by a set of files
stored on shared storage such as Fibre Channel
445. or iSCSI Storage Area Network (SAN) or Network Attached Storage
(NAS). VMware’s clustered Virtual Machine File
446. System (VMFS) allows multiple installations of ESX Server to access the
same virtual machine files concurrently.
447. Second, the active memory and precise execution state of the virtual
machine is rapidly transferred over a high speed
448. network, allowing the virtual machine to instantaneously switch from
running on the source ESX Server to the destination
449. ESX Server. VMotion keeps the transfer period imperceptible to users by
keeping track of on-going memory transactions
450. in a bitmap. Once the entire memory and system state has been copied
over to the target ESX Server, VMotion
451. suspends the source virtual machine, copies the bitmap to the target ESX
Server, and resumes the virtual machine on
452. the target ESX Server. This entire process takes less than two seconds on a
Gigabit Ethernet network.
453. Third, the networks being used by the virtual machine are also virtualized
by the underlying ESX Server, ensuring
454. that even after the migration, the virtual machine network identity and
network connections are preserved. VMotion
455. manages the virtual MAC address as part of the process.
456. Once the destination machine is activated, VMotion pings the network
router to ensure that it is aware of the new
457. physical location of the virtual MAC address. Since the migration of a
virtual machine with VMotion preserves the precise
458. execution state, the network identity, and the active network connections,
the result is zero downtime and no disruption
459. to users.

460. ---------------------------------------------

461. What is VirtualCenter?

462. VirtualCenter is virtual infrastructure management software that centrally


manages an enterprise’s virtual machines as a single, logical pool of resources.
VirtualCenter provides:
463. · Centralized virtual machine management. Manage hundreds of virtual
machines from one location through robust access controls.
464. Monitor system availability and performance. Configure automated
notifications and e-mail alerts.
465. · Instant provisioning. Reduces server-provisioning time from weeks to
tens of seconds.
466. · Zero-downtime maintenance. Safeguards business continuity 24/7,
without service interruptions for hardware maintenance, deployment,or migration.
467. · Continuous workload consolidation. Optimizes the utilization of data
center resources to minimize unused capacity.
468. · SDK. Closely integrates 3rd-party management software with
VirtualCenter, so that the solutions you use today will work seamlessly within
the virtual infrastructure.

469. -------------------------------------------------

470. What is Storage VMotion (SVMotion) and How do you perform a


SVMotion using the VI Plugin?
471. There are at least 3 ways to perform a SVMotion – from the remote
command line, interactively from the command line, and with the SVMotion VI
Client Plugin

472. Note:
473. You need to have VMotion configured and working for SVMotion to
work. Additionally, there are a ton of caveats about SVMotion in the ESX 3.5
administrator’s guide (page 245) that could cause SVMotion not to work. One
final reminder, SVMotion works to move the storage for a VM from a local
datastore on an ESX server to a shared datastore (a SAN) and back – SVMotion
will not move a VM at all – only the storage for a VM.

474. ----------------------------

475. Overview of VMware ESX / VMware Infrastructure Advanced Features

476. #1 ESX Server & ESXi Server


477. Even if all that you purchase is the most basic VMware ESXi
virtualization package at a cost of $495, you still gain a number of advanced
features. Of course, virtualization, in general, offers many benefits, no matter the
virtualization package you choose. For example - hardware independence, better
utilization of hardware, ease of management, fewer data center infrastructure
resources required, and much more. While I cannot go into everything that ESX
Server (itself) offers, here are the major advanced features:

478. Hardware level virtualization – no base operating system license is


needed, ESXi installs right on your hardware (bare metal installation).
479. VMFS file system – see advanced feature #2, below.
480. SAN Support – connectivity to iSCSI and Fibre Channel (FC) SAN
storage, including features like boot from SAN
481. Local SATA storage support.
482. 64 bit guest OS support.
483. Network Virtualization – virtual switches, virtual NICs, QoS & port
configuration policies, and VLAN.
484. Enhanced virtual machine performance – workloads may perform,
in some cases, even better in a VM than on a physical server because of features
like transparent page sharing and nested page tables.
485. Virtual SMP – see advanced feature #4, below.
486. Support for up to 64GB of RAM for VMs, up to 32 logical CPUs and
256GB of RAM on the host.
487. #2 VMFS
488. VMware’s VMFS was created just for VMware virtualization. Thus, it is
the highest performance file system available to use in virtualizing your
enterprise. While VMFS is included with any edition or package of ESX Server or
VI that you choose, VMFS is still listed as a separate product by VMware. This is
because it is so unique.

489. VMFS is a high performance cluster file system allowing multiple systems
to access the file system at the same time. VMFS is what gives you a solid
platform to perform VMotion and VMHA. With VMFS you can dynamically
increase a volume, support distributed journaling, and the addition of a virtual
disk on the fly.

490. #3 Virtual SMP


491. VMware’s Virtual SMP (or VSMP) is the feature that allows a single
virtual machine on an ESX Server to use up to 4 virtual CPUs
simultaneously. Additionally, with VSMP, processing tasks will be balanced
among the various CPUs.

492. #4 VM High Availability (VMHA)


493. One of the most amazing capabilities of VMware ESX is VMHA. With 2
ESX Servers, a SAN for shared storage, Virtual Center, and a VMHA license, if a
single ESX Server fails, the virtual guests on that server will move over to the
other server and restart, within seconds. This feature works regardless of the
operating system used or if the applications support it.

494. #5 VMotion & Storage VMotion


495. With VMotion, VM guests are able to move from one ESX Server to
another with no downtime for the users. VMotion is what makes DRS possible.
VMotion also makes maintenance of an ESX server possible, again, without any
downtime for the users of those virtual guests. What is required is a shared SAN
storage system between the ESX Servers and a VMotion license.

496. Storage VMotion (or SVMotion) is similar to VMotion in the sense that
"something" related to the VM is moved and there is no downtime to the VM
guest and end users. However, with SVMotion the VM Guest stays on the server
that it resides on but the virtual disk for that VM is what moves. Thus, you could
move a VM guest's virtual disks from one ESX server’s local datastore to a shared
SAN datastore (or vice versa) with no downtime for the end users of that VM
guest. There are a number of restrictions on this. To read more technical details on
how it works, please see the VMware ESX Server 3.5 Administrators Guide.

497. #6 VMware Consolidated Backup (VCB)


498. VMware Consolidated Backup (or VCB) is a group of Windows command
line utilities, installed on a Windows system, that has SAN connectivity to the
ESX Server VMFS file system. With VCB, you can perform file level or image
level backups and restores of the VM guests, back to the VCB server. From there,
you will have to find a way to get those VCB backup files off of the VCB server
and integrated into your normal backup process. Many backup vendors integrate
with VCB to make that task easier.
499. #7 VMware Update Manager
500. VMware Update Manager is a relatively new feature that ties into Virtual
Center & ESX Server. With Update Manager, you can perform ESX Server
updates and Windows and Linux operating system updates of your VM guests. To
perform ESX Server updates, you can even use VMotion and upgrade an ESX
Server without ever causing any downtime to the VM guests running on it.
Overall, Update Manager is there to patch your host and guest systems to prevent
security vulnerabilities from being exploited.

501. #8 VMware Distributed Resource Scheduler (DRS)


502. VMware’s Distributed Resource Scheduler (or DRS) is one of the other
truly amazing advanced features of ESX Server and the VI Suite. DRS is
essentially a load-balancing and resource scheduling system for all of your ESX
Servers. If set to fully automatic, DRS can recognize the best allocation of
resource across all ESX Server and dynamically move VM guests from one ESX
Server to another, using VMotion, without any downtime to the end users. This
can be used both for initial placement of VM guests and for “continuous
optimization” (as VMware calls it). Additionally, this can be used for ESX Server
maintenance.

503. #9 VMware’s Virtual Center (VC) & Infrastructure Client (VI Client)
504. I prefer to list the VMware Infrastructure client & Virtual Center as one of
the advanced features of ESX Server & the VI Suite. Virtual Center is a required
piece of many of the advanced ESX Server features. Also, VC has many
advanced features in its own right. When tied with VC, the VI Client is really the
interface that a VMware administrator uses to configure, optimize, and administer
all of your ESX Server systems.

505. With the VI Client, you gain performance information, security & role
administration, and template-based rollout of new VM guests for the entire virtual
infrastructure. If you have more than 1 ESX Server, you need VMware Virtual
Center.

506. #10 VMware Site Recovery Manager (SRM)


507. Recently announced for sale and expected to be shipping in 30 days,
VMware’s Site Recovery Manager is a huge disaster recovery feature. If you have
two data centers (primary/protected and a secondary/recovery), VMware ESX
Servers at each site, and a SRM supported SAN at each site, you can use SRM to
plan, test, and recover your entire VMware virtual infrastructure.

508. VMware ESX Server vs. the VMware Infrastructure Suite


509. VMware ESX Server is packaged and purchased in 4 different packages.
510. VMware ESXi – the slimmed down (yet fully functional) version of ESX
server that has no service console. By buying ESXi, you get VMFS and virtual
SMP only.
511. VMware Infrastructure Foundation – (previously called the starter kit) the
Foundation package includes ESX or ESXi, VMFS, Virtual SMP, Virtual Center
agent, Consolidated backup, and update manager.
512. VMware Infrastructure Standard – includes ESX or ESXi, VMFS, Virtual
SMP, Virtual center agent, consolidated backup, update manager, and VMware
HA.
513. VMware Infrastructure Enterprise – includes ESX or ESXi, VMFS,
Virtual SMP, Virtual center agent, consolidated backup, update manager,
VMware HA, VMotion, Storage VMotion, and DRS.
514. You should note that Virtual Center is required for some of the more
advanced features and it is purchased separately. Also, there are varying levels of
support available for these products. As the length and the priority of your support
package increase, so does the cost

515. ----------------------------------------

516. Advantages of VMFS

517. VMware’s VMFS was created just for VMware virtualization. VMFS is a
high performance cluster file system allowing multiple systems to access the file
system at the same time. VMFS is what gives you the necessary foundation to
perform VMotion and VMHA. With VMFS you can dynamically increase a
volume, support distributed journaling, and the addition of a virtual disk on the
fly.

518. ------------------

519. Virtual center licenses issues

520. However, all licensed functionality currently operating at the time the
license server
521. becomes unavailable continues to operate as follows:
522. • All VirtualCenter licensed features continue to operate indefinitely,
relying on a
523. cached version of the license state. This includes not only basic
VirtualCenter
524. operation, but licenses for VirtualCenter add-ons, such as VMotion and
DRS.
525. • For ESX Server licensed features, there is a 14-day grace period during
which hosts
526. continue operation, relying on a cached version of the license state, even
across
527. reboots. After the grace period expires, certain ESX Server operations,
such as
528. powering on virtual machines, become unavailable.
529. During the ESX Server grace period, when the license server is
unavailable, the
530. following operations are unaffected:
531. • Virtual machines continue to run. VI Clients can configure and operate
virtual
532. machines.
533. • ESX Server hosts continue to run. You can connect to any ESX Server
host in the
534. VirtualCenter inventory for operation and maintenance. Connections to
the
535. VirtualCenter Server remain. VI Clients can operate and maintain virtual
536. machines from their host even if the VirtualCenter Server connection is
also lost.
537. During the grace period, restricted operations include:
538. • Adding ESX Server hosts to the VirtualCenter inventory. You cannot
change
539. VirtualCenter agent licenses for hosts.
540. • Adding or removing hosts from a cluster. You cannot change host
membership for
541. the current VMotion, HA, or DRS configuration.
542. • Adding or removing license keys.
543. When the grace period has expired, cached license information is no
longer stored.
544. As a result, virtual machines can no longer be powered on. Running
virtual machines
545. continue to run but cannot be rebooted.
546. When the license server becomes available again, hosts reconnect to the
license server.
547. No rebooting or manual action is required to restore license availability.
The grace
548. period timer is reset whenever the license server becomes available again.

549. -------------------

550. By default, ESX has 22 different users and 31 groups.
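
A quick way to check those counts on your own host, using standard Linux commands on the Service Console:
# wc -l /etc/passwd /etc/group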

551. In VMware ESX Server, you have 4 roles by default.

552. -------------

553. Vmware SAN paths


554. I had a question from a fellow blogger about the Fixed/Most Recently
Used setting for a SAN’s path policy. This was related to an IBM SVC, which
was only supported as an MRU setup at the moment, but as of ESX U3: IBM
SAN Volume Controller — SVC is now supported with Fixed Multipathing
Policy as well as MRU Multipathing Policy. (although the SAN guide still says
it’s not…)

555. We can have a long discussion about this, but it’s plain and simple:
556. On an Active/Passive array you would need to set the path policy to “Most
Recently Used“.
557. An Active/Active array must have the path policy set to “Fixed“.

558. Now I always wondered why there was a difference in these path policies.
There probably are a couple of explanations but the most obvious one is:
559. MRU fails over to an alternative path when any of the following SCSI
sense codes NOT_READY, ILLEGAL_REQUEST, NO_CONNECT and
SP_HUNG are received. Keep in mind that MRU doesn’t failback.

560. For Active/Active SANs with the Fixed path policy, a failover only occurs
when the SCSI sense code “NO_CONNECT” is received. When the path returns, a
failback will occur.

561. As you can see, four against just one SCSI sense code. You can imagine
what happens if you change MRU to Fixed when it’s not supported by the array.
SCSI sense codes will be sent out, but ESX isn’t expecting them and will not do a
path fail over.

562. ----------------

563. One more: what is the maximum swap size we can allocate for an ESX
host? Answer: 1600 MB, since a maximum of only 800 MB of RAM can be allocated to the
COS/SC, and the recommended swap size is twice the size of the COS/SC memory.

564. ----------------------------

565. The way to enable VMotion via the command line has changed. For anyone looking for
this particular command:

566. /usr/bin/vmware-vim-cmd "hostsvc/vmotion/vnic_set vmk0"

567. -----------------------

568. HA best practices


569. Your ESX host names should be lowercase and use FQDNs
570. Provide Service Console redundancy
571. If you add an isolation validation address with “das.isolationaddress”, add
an additional 5000 to “das.failuredetectiontime”
572. If your Service Console network is setup with “active / standby”
redundancy then your ”das.failuredetectiontime” needs to be set to 60000
573. If you ensured Service Console redundancy by adding a secondary service
console then ”das.failuredetectiontime” needs to be set to 20000 and you need to
setup an additional “das.isolationaddress”
574. If you set up a secondary Service Console, use a different subnet and
vSwitch than your primary has
575. If you don’t want to use your default gateway as an isolation validation
address or can’t use it because it’s a non-pingable device then disable the usage
by setting das.usedefaultisolationaddress to false and add a pingable
“das.isolationaddress”
576. Change default isolation response to “power off vm” and set restart
priorities for your AD/DNS/VC/SQL servers

577. ------------------------------

578. In this example “vmk0” is the first VMkernel interface. This is one of the things that
changed, so no portgroup IDs anymore. And if you need to do anything via the
command line that doesn’t seem to be possible with the normal commands:
vmware-vim-cmd is definitely the way to go.

579. -------------

580. To see which virtual machines are registered on an ESX server, the command is
581. # vmware-cmd -l
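
vmware-cmd can also act on an individual VM once you have its .vmx path from that listing; a sketch with a placeholder path:
# vmware-cmd /vmfs/volumes/datastore1/vm1/vm1.vmx getstate      (report whether the VM is powered on or off)
# vmware-cmd /vmfs/volumes/datastore1/vm1/vm1.vmx stop trysoft  (attempt a graceful shutdown, falling back to a hard stop)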

582. ---------------
583. The primary difference between the NAS and SAN is at the
communication level. NAS communicates over the network using a network
share, while SAN primarily uses the Fibre Channel protocol.

584. NAS devices transfer data from storage device to server in the form of
files. NAS units use file systems, which are managed independently. These
devices manage file systems and user authentication.

585. Recommended limit of 16 ESX servers per VMFS volume,


586. based on limitations of a VirtualCenter-managed ESX setup.
587. • Recommended maximum of 32 IO-intensive VMs sharing a
588. VMFS volume.
589. • Up to 100 non-IO-intensive VMs can share a single VMFS
590. volume with acceptable performance.
591. • No more than 255 files per VMFS partition.
592. • Up to 2TB limit per physical extent of a VMFS volume.

593. When ESX is booted, it scans fiber and SCSI devices for new and
594. existing LUNs. You can manually initiate a scan through the VMware
595. Management Interface or by using the cos-rescan.sh command.
596. VMware recommends using cos-rescan.sh because it is easier to use
597. with certain Fibre Channel adapters than with vmkfstools.
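
For example (the adapter name vmhba1 is a placeholder; on ESX 3.x the equivalent esxcfg-rescan command does the same job):
# cos-rescan.sh vmhba1
# esxcfg-rescan vmhba1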

598. Detecting high-numbered and ‘missing’ LUNs


599. ESX Server, by default, only scans for LUN 0 to LUN 7 for every
600. target. If you are using LUN numbers larger than 7, you will need to
601. change the setting for the DiskMaxLUN field from the default of 8
602. to the value that you need.
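
On ESX 3.x the same setting can be changed from the Service Console with esxcfg-advcfg (a sketch; 64 is only an example value, and a rescan is needed afterwards):
# esxcfg-advcfg -s 64 /Disk/MaxLUN   (raise the highest LUN number scanned)
# esxcfg-advcfg -g /Disk/MaxLUN      (confirm the new value)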

603. VMware recommends increasing the maximum


604. queue depth for the Fibre Channel adapters. This change is
605. done by editing the hwconfig file in the /etc/vmware directory. A typical
606. /etc/vmware/hwconfig file contains lines similar to the following:

607. HBA Settings for Failover of QLogic Adapters


608. For QLogic cards, VMware suggests that you adjust the
609. PortDownRetryCount value in the QLogic BIOS. This value determines
610. how quickly a failover occurs when a link goes down.

611. you can also use the command vmkmultipath


612. in the service console to view and change multipath settings.
613. To view the current multipathing configuration, use the ‘-q’
614. switch with the command.

615. # vmkmultipath -q
616. Using the -r switch will allow you to specify the preferred path to a
617. disk. syntax is
618. vmkmultipath –s <disk> -r <NewPath>
619. # vmkmultipath -s vmhba1:0:1 -r vmhba2:0:1

620. Ensure that your


621. policy is set to ‘fixed’ by setting the path policy using the –p switch
622. with the command.
623. vmkmultipath –s <disk> -p <policy>
624. vmkmultipath -s vmhba1:0:1 -p fixed
625. However, VMware suggests that you run the vmkmultipath
626. command with the –S switch, as root, to ensure that they are saved.
627. # /usr/sbin/vmkmultipath -S
628. VMFS Volumes

629. when using commands like df, you will not see the
630. /vmfs directory. Instead, you need to use vdf, which reports all of the
631. normal df information plus information about the VMFS volumes.

632. ESX configuration, the following services are running and


633. generating traffic over eth0:
634. • ESX MUI
635. • ssh sessions to COS
636. • Remote Console connections to guest operating systems
637. • VirtualCenter communication to vmware-ccagent on host
638. • Monitoring agents running on the COS
639. • Backups that occur on the COS

640. the location of nic information


641. For Intel Adapters:
642. /proc/net/PRO_LAN_Adapters/eth0.info
643. For Broadcom Adapters:
644. /proc/net/nicinfo/eth0.info

645. The /etc/modules.conf file allows for the


646. manual configuration of the speed and duplex settings for eth0.

647. If you notice slow speeds or disconnected sessions


648. to your ESX console, the following command may be run to
649. determine your current speed and duplex configuration:
650. # mii-tool
651. eth0: 100 Mbit, half duplex, link ok
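
If the negotiated values are wrong, mii-tool can also force a setting (a sketch; whether it survives a reboot depends on where the setting is persisted, e.g. /etc/modules.conf as noted above):
# mii-tool -F 100baseTx-FD eth0   (force 100 Mbit, full duplex on eth0)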

652. Refer to the table below to determine which file is required to modify a
specific setting:
653. /etc/sysconfig/network-scripts/ifcfg-eth0 -- IP Address, Subnet Mask
654. /etc/resolv.conf -- Search Suffix, DNS Servers
655. /etc/sysconfig/network -- Hostname, Default Gateway
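
A minimal sketch of what /etc/sysconfig/network-scripts/ifcfg-eth0 typically contains (the addresses are placeholders):
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.50
NETMASK=255.255.255.0
ONBOOT=yes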

664. ESX provides several tools that can be used to monitor the utilization.
665. Vmkusage is an excellent tool for graphing historical data in
666. regards to VMNIC performance.

667. The second tool that can be utilized for performance monitoring is
668. esxtop.

669. Once the bond is configured with the


670. proper VMNICs, a virtual switch needs to be defined in
671. /etc/vmware/netmap.conf that references this bond.

672. The device variable references a pre-defined


673. bond configuration from the /etc/vmware/hwconfig file.
674. network.0.name = “VSwitch1”
675. network.0.device = “bond0”
676. network.1.name = “VSwitch2”
677. network.1.device = “bond1”
678. network.2.name = “VMotion”
679. network.2.device = “bond2”

680. By combining the two configuration files we have 3 virtual switches. In


the above example, we have defined two virtual switches for use by virtual
machines and a third for the specific purpose of utilizing VMotion in the virtual
infrastructure. Manually modifying the above files will NOT automatically
activate new virtual switches. If a virtual
681. switch is not created using the MUI, the vmware-serverd (or vmware-
ccagent if you are using VirtualCenter) process must be restarted.

682. Since private virtual switches do not need to map back to VMNICs, there
is no need to touch the /etc/vmware/hwconfig file.
683. We can add two simple lines to /etc/vmware/netmap.conf to create a new
private virtual switch:

684. network3.name = “Private Switch 1”


685. network3.device = “vmnet_0”
686. network4.name = “Private Switch 2”
687. network4.device = “vmnet_1”

688. ---
689. We can easily configure the gigabit connection to be “home link” for the
virtual switch. Upon failure of the home link, the backup link will automatically
activate and handle the virtual machine traffic until the issue with the high speed
connection can be resolved. When performing this failover, ESX utilizes the same
methodology as MAC Address load balancing of instantly re-ARPing the virtual
MAC addresses for the virtual machines down the alternative path. To make this
configuration, add the following line to /etc/vmware/hwconfig:
nicteam.bond0.home_link = “vmnic1”

690. ------------

691. IP Address
692. The second method that ESX is capable of providing for load balancing is
based on destination IP address. Since outgoing virtual machine traffic is balanced
based on the destination IP address of the packet, this method provides a much
more balanced configuration than MAC Address based balancing. Like the
previous method, if a link failure
693. is detected by the VMkernel, there will be no impact to the connectivity of
the virtual machines. The downside of utilizing this method of load balancing is
that it requires additional configuration of the physical network equipment.

694. Because of the way the outgoing traffic traverses the network in an
695. IP Address load balancing configuration, the MAC addresses of the
696. virtual NICs will be seen by multiple switch ports. In order to get
697. around this “issue”, either EtherChannel (assuming Cisco switches are
698. utilized) or 802.3ad (LACP - Link Aggregation Control Protocol)
699. must be configured on the physical switches. Without this configuration,
700. the duplicate MAC address will cause switching issues.
701. In addition to requiring physical switch configuration changes, an
702. ESX server configuration change is required. There is a single line
703. that needs to be added to /etc/vmware/hwconfig for each virtual
704. switch that you wish to enable IP Address load balancing on. To make
705. this change, use your favorite text editor and add the line below to
706. /etc/vmware/hwconfig. You will need to utilize the same configuration
707. file to determine the name of the bond that you need to reference
708. in the following entry (replace “bond0” with the proper value):

709. nicteam.bond0.load_balance_mode = “out-ip”


710. --------------------

711. The following steps assume no virtual machines have been configured
712. on the ESX host:
713. Modify /etc/modules.conf to comment out the line that
714. begins with “alias eth0”. This will disable eth0 on the next
715. reboot of the ESX host.
716. Run vmkpcidivy –i at the console. Walk through the current
717. configuration. When you get to the network adapter that is
718. assigned to the Console (c), make sure to change it to Virtual
719. Machines (v). This should be the only value that changes
720. from running vmkpcidivy.
721. Modify /etc/vmware/hwconfig by reconfiguring the network
722. bonds. Remove any line that begins with the following: (In
723. this case, “X” can be any numeric value.)
724. nicteam.vmnicX.team
725. Once the current bond has been deleted, add the following
726. two lines to the end of the file:
727. nicteam.vmnic0.team = “bond0”
728. nicteam.vmnic1.team = “bond0”

733. Modify /etc/vmware/netmap.conf to remove and recreate the


734. required virtual switches. Since we are working under the
735. assumption that this is a new server configuration, remove
736. any lines that exist in this file and add the following 4 lines:
737. network0.name = “Production”
738. network0.device = “bond0”
739. network1.name = “VMotion”
740. network1.device = “vmnic2”
741. This will establish two new virtual switches once the system
742. reboots. The first virtual switch will consist of VMNIC0 and
743. VMNIC1 and will be utilized for virtual machines. The second
744. virtual switch consists of only VMNIC2 and is dedicated
745. for VMotion traffic.
746. Modify the /etc/rc.d/rc.local file to properly utilize the
747. vmxnet_console driver to utilize a bond for eth0. Add the following
748. 2 lines at the end of /etc/rc.d/rc.local:
749. insmod vmxnet_console devName=”bond0”
750. ifup eth0
751. Reboot the server.
752. When the server comes back online, we will have a pretty advanced
753. network configuration. The COS will actually be utilizing a redundant
754. bond of two NICs as eth0 through the vmxnet_console driver.
755. Virtual machines will utilize the same bond through a virtual switch
756. within ESX and have the same redundancy as eth0. VMotion will
757. have the dedicated NIC that VMware recommends for optimal
performance.
758. Advantages of using the vmxnet_console Driver
759. • Redundant eth0 connection.
760. • VMotion traffic will not impact virtual machine performance.
761. Disadvantages of using the vmxnet_console Driver
762. • Complex configuration that is difficult to troubleshoot.
763. • eth0 must reside on the same VLAN as the virtual machines.

764. ------------------------

765. What's New


766. With this release of VMware Infrastructure 3, VMware innovations
reinforce three driving factors of virtualization adoption that continue to make
VMware Infrastructure the virtualization platform of choice for datacenters of all
sizes and across all industries:

767. Effective Datacenter Management


768. Mainframe-class Reliability and Availability
769. Platform for any Operating System, Application, or Hardware
770. The download bundle available for ESX Server 3.5 from the VMware
Web site is an update from the original release download bundle. The updated
download bundle fixes an issue that might occur when upgrading from ESX
Server 3.0.1 or ESX Server 3.0.2 to ESX Server 3.5. The updated download
bundle and the original download bundle released on 12/10/2007 are identical
with the exception of a modification to the upgrade script that automates the
installation of an RPM. For information about manually installing an RPM for
upgrading ESX Server, refer to KB 1003801.

771. Effective Datacenter Management


772. Guided Consolidation—Guided Consolidation, an enhancement to
VMware VirtualCenter, guides new virtualization users through the consolidation
process in a wizard-based, tutorial-like fashion. Guided Consolidation leverages
capacity planning capabilities to discover physical systems and analyze them.
Integrated conversion functionality transforms these physical systems into virtual
machines and intelligently places them on the most appropriate VMware ESX
Server hosts.
773. VMware Converter Enterprise integration—VirtualCenter 2.5 provides
support for integrated Physical-to-Virtual (P2V) and Virtual-to-Virtual (V2V)
migrations. This supports scheduled and scripted conversions, Microsoft
Windows Vista conversions, and restoration of virtual disk images that are backed
up using VCB, from within the VI Client.
774. VMware Distributed Power Management (experimental)—VMware DPM
reduces power consumption by intelligently balancing a datacenter's workload.
VMware DPM, which is part of VMware Distributed Resource Scheduler,
automatically powers off servers whose resources are not immediately required
and returns power to these servers when the demand for compute resources
increases again.
775. Image Customization for 64-bit guest operating systems—Image
customization provides administrators with the ability to customize the identity
and network settings of a virtual machine's guest operating system during virtual
machine deployment from templates.
776. Provisioning across datacenters—VirtualCenter 2.5 allows you to
provision virtual machines across datacenters. As a result, VMware Infrastructure
administrators can now clone a virtual machine on one datacenter to another
datacenter.
777. Batch installation of VMware Tools—VirtualCenter 2.5 provides support
for batch installations of VMware Tools so that VMware Tools can now be
updated for selected groups of virtual machines.
778. Datastore browser—This release of VMware Infrastructure 3 supports file
sharing across ESX Server hosts (ESX Server 3.5 or ESX Server 3i) managed by
the same VirtualCenter Server.
779. Open Virtual Machine Format (OVF)—The Open Virtual Machine Format
(OVF) is a virtual machine distribution format that supports sharing of virtual
machines between products and organizations. VMware Infrastructure Client
version 2.5 allows you to import and generate virtual machines in OVF format
through the File > Virtual Appliance > Import/Export menu items.
780. NEW: Lab Manager 2.5.2 Support—ESX Server 3 version 3.5 hosts can
be used with VMware Lab Manager 2.5.2. ESX Server 3.0.x hosts managed by
VirtualCenter 2.5 are also supported in Lab Manager 2.5.2. However, hosts used
in Lab Manager 2.5.2 must be of the same type.
781. Mainframe-class Reliability and Availability
782. VMware Storage VMotion—Storage VMotion allows IT administrators to
minimize service disruption due to planned storage downtime previously incurred
for rebalancing or retiring storage arrays. Storage VMotion simplifies array
migration and upgrade tasks and reduces I/O bottlenecks by moving virtual
machines to the best available storage resource in your environment.

783. Migrations using Storage VMotion must be administered through the


Remote Command Line Interface (Remote CLI), which is available for download
at the following location: http://www.vmware.com/download/download.do?
downloadGroup=VI-RCLI.
784. VMware Update Manager—Update Manager automates patch and update
management for ESX Server hosts and select Microsoft and Linux virtual
machines.
785. VMware High Availability enhancements—Enhanced high availability
provides experimental support for monitoring individual virtual machine failures.
VMware HA can now be set up to either restart the failed virtual machine or send
a notification to the administrator.
786. VMware VMotion with local swap files—VMware Infrastructure 3 now
allows swap files to be stored on local storage while still facilitating VMotion
migrations for these virtual machines.
787. VMware Consolidated Backup (VCB) enhancements—The VMware
Consolidated Backup (VCB) framework has the following enhancements:
788. iSCSI Storage—The VCB framework now supports backing up of virtual
machines directly from iSCSI storage. Previously, the VCB proxy was only
supported with Fibre Channel SAN storage.
789. Integration in Converter—VMware Converter has the ability to restore
virtual machine image backups created through the VCB framework. Now that
Converter is integrated into VirtualCenter 2.5, you can perform image restores
directly through the VI Client.
790. For details on many improvements and enhancements that this release of
Consolidated Backup offers, see the VMware Consolidated Backup 1.1 Release
Notes.

791. Platform for any Operating System, Application, or Hardware


792. Management of up to 200 hosts and 2000 virtual machines—VirtualCenter
2.5 can manage many more hosts and virtual machines than previous releases,
scaling the manageability of the virtual datacenter to up to 200 hosts and 2000
virtual machines.
793. Large memory support for both ESX Server hosts and virtual machines—
ESX Server 3.5 supports 256GB of physical memory and virtual machines with
64GB of RAM.
794. ESX Server host support for up to 32 logical processors— ESX Server 3.5
fully supports systems with up to 32 logical processors. Systems with up to 64
logical processors are supported experimentally.
795. SATA support—ESX Server 3.5 supports selected SATA devices
connected to dual SAS/SATA controllers.
796. 10 Gigabit Ethernet support—Neterion and NetXen 10 Gigabit Ethernet
NIC cards are supported in ESX Server 3.5.
797. N-Port ID Virtualization (NPIV) support—ESX Server 3.5 introduces
support for NPIV for Fibre Channel SANs. Each virtual machine can now have its
own World Wide Port Name (WWPN).
798. Cisco Discovery Protocol (CDP) support—This release of VMware
Infrastructure 3 incorporates support for CDP to help IT administrators better
troubleshoot and monitor Cisco-based environments from within VirtualCenter
2.5 and the VI Client. CDP allows VMware Infrastructure administrators to know
which Cisco switch port is connected to each virtual switch uplink (that is, each
physical NIC).
799. NEW: NetFlow support (experimental)—NetFlow is a networking tool
with multiple uses, including network monitoring and profiling, billing, intrusion
detection and prevention, networking forensics, and Sarbanes-Oxley compliance.
800. NEW: Internet Protocol Version 6 (IPv6) support for virtual machines—
ESX Server 3 version 3.5 supports virtual machines configured for IPv6.
801. Paravirtualized guest operating system support with VMI 3.0—ESX
Server 3.5 supports paravirtualized guest operating systems that conform to the
VMware Virtual Machine Interface (VMI) 3.0. VMI is an open paravirtualization
interface developed by VMware in collaboration with the Linux community (VMI
was integrated into the mainline Linux kernel in version 2.6.22).
802. Large page size—In ESX Server 3.5, the VMkernel can now allocate 2MB
pages to the guest operating system.
803. Enhanced VMXNET—Enhanced VMXNET is the next version of
VMware's paravirtualized virtual networking device for guest operating systems.
Enhanced VMXNET includes several new networking I/O performance
improvements including support for TCP Segmentation Offload (TSO) and jumbo
frames.
804. TCP Segmentation Offload (TSO)—TCP Segmentation Offload (TSO)
improves networking I/O performance by reducing the CPU overhead involved
with sending large amounts of TCP traffic.
805. Jumbo frames—Jumbo frames allow ESX Server 3.5 to send larger frames
out onto the physical network. The network must support jumbo frames (end-to-
end) for jumbo frames to be effective.
806. NetQueue support—VMware supports NetQueue, a performance
technology that significantly improves performance in 10 Gigabit Ethernet virtual
environments.
807. Intel I/O Acceleration Technology (IOATv1) support (experimental)—
ESX Server 3.5 provides experimental support for IOATv1.
808. InfiniBand—As a result of the VMware Community Source co-
development effort with Mellanox Technologies, ESX Server 3.5 is compatible
with InfiniBand Host Channel Adapters (HCAs) from Mellanox Technologies.
Support for this feature is provided by Mellanox Technologies as part of the
VMware Third Party Hardware and Software Support Policy.
809. Round-robin load balancing (experimental)—ESX Server 3.5 enhances
native load balancing by providing experimental support for round-robin load
balancing of HBAs.

810. ----------------------------------

811. Vmware boot process

812. LILO (boot loader)
813. VMkernel
814. init reads /etc/inittab

815. /etc/rc.d/rc3.d will have the symbolic links from /etc/init.d

816. S00vmkstart--actually links to a script called


817. vmkhalt. By running this script first, VMware ensures that there are
818. no VMkernel processes running on the system during the boot
819. process.
820. S10network --tcp/ip services
821. S12syslog --syslog daemon
822. S56xinetd -- handles incoming requests to the COS. Each application that
can be started by
823. xinetd has a configuration file in /etc/xinetd.d. If the “disable = no”
824. flag is set in the configuration file of a particular application then
825. xinetd starts the application. The most
826. important application that is started here is the vmware-authd application,
827. which provides a way to connect and authenticate to ESX to
828. perform VMkernel modifications.

829. S90vmware
830. This is where the VMkernel finally begins to load. The first thing that
831. the VMkernel does when it starts is load the proper device drivers to
832. interact with the physical hardware of the host. You can view all the
833. drivers that the VMkernel may utilize by looking in the
834. /usr/lib/vmware/vmkmod directory.
835. Once the VMkernel has successfully loaded the proper hardware
836. drivers it starts to run its various support scripts:
837. • The vmklogger sends messages to the syslog daemon and generates
838. logs the entire time the VMkernel is running.
839. • The vmkdump script saves any existing VMkernel dump files
840. from the VMcore dump partition and prepares the partition
841. in case of future errors.

842. Next the VMFS partitions (the partitions used to store all of your VM
843. disk files) are mounted. The VMkernel simply scans the SCSI devices
844. of the system and then automatically mounts any partition that is
configured
845. as VMFS. Once the VMFS partitions are mounted, the
846. VMkernel is completely loaded and ready to start managing virtual machines.

847. S91httpd.vmware -- One of the last steps of the boot process for the
COS is to start the VMware MUI (the web interface for VMware management). At
this point the VMkernel has been loaded and is running. Starting the MUI
provides us with an interface used to graphically interact with ESX. Once the
MUI is loaded, a display plugged into the host's local console will show a
message stating that everything is properly loaded and you can now access
your ESX host from a web browser.

854. Modifying device allocations through the Service Console can be done
with the vmkpcidivy command, run interactively as:
857. vmkpcidivy -i

858. The esxcfg-vswif command will allow modification of this vSwitch port
from the COS command line.

860. http://www.applicationdelivery.co.uk/blog/tag/vmdk-limits/

861. While this may seem limited, the tool used is actually quite powerful.
862. VMware provides the nfshaper module, which allows the VMkernel
863. to control the outgoing bandwidth on a per guest basis.

864. -------------------------

865. VMware ESX Server performance monitoring is installed by default, but it


is not activated. It can be activated with command "vmkusagectl install". If you
want to cleanup the statistics, you will have to uninstall the monitoring service
with command "vmkusagectl uninstall". You can then clean the database with
command "vmkusagectl cleandb", and (re)activate/(re)install it again. The
excellent monitoring web pages, created with rrdtool, are at address "http://esx-
server-name/vmkusage/".
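As a quick sketch, the reset sequence described in item 865 looks like this
on the Service Console:

vmkusagectl uninstall    # stop the monitoring service
vmkusagectl cleandb      # clear out the old statistics database
vmkusagectl install      # re-activate statistics collection from scratch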

866. Command "esxtop" is like the "top" command, but it shows the VMkernel
processes instead.

867. "vdf" tool displays the amount of free space on different volumes,
including VMFS volumes.

868. The ESX utility for locating the correct network card among many is
"findnic".

869. Some other commands are: "vmfs-ftp", "vmkmultipath", "vmklogger",
"vmware-cmd", "vm-support".
870. There are a couple of files and directories you should know about. The
most important ones are listed below.

871. /etc/modules.conf This file contains a list of devices in the system
available to the Service Console. Devices allocated solely to VMs but
physically present in the system are usually also shown here as commented-out
("#") lines. This is an important file for root and administrators.

872. /etc/fstab This file defines the local and remote filesystems which are
mounted at ESX Server boot.

873. /etc/rc.d/rc.local This file is for server local customisations required at the
server bootup. Potential additions to this file are public/shared vmfs mounts.

874. /etc/syslog.conf

875. This file configures what things are logged and where. Some examples are
given below:
876. *.crit /dev/tty12

877. This example logs all log items at level "crit" (critical) or higher to the
virtual terminal at tty12. You can see this log by pressing [Alt]-[F12] on the
console.
878. *.=err /dev/tty11

879. This example logs all log items at exactly level "err" (error) to the virtual
terminal at tty11. You can see this log by pressing [Alt]-[F11] on the console.
880. *.=warning /dev/tty10

881. This example logs all log items at exactly level "warning" to the virtual
terminal at tty10. You can see this log by pressing [Alt]-[F10] on the console.
882. *.* @192.168.31.3

883. This example forwards everything (all syslog entries) to another (central)
syslog server. Pay attention to that server's security.
884. /etc/logrotate.conf

885. This is the main configuration file for the log file rotation program.
It defines the defaults for log file rotation, log file compression, and how
long to keep the old log files. Processing of the contents of the
/etc/logrotate.d/ directory is also defined here.

886. /etc/logrotate.d/
887. This directory contains instructions service by service for log file rotation,
log file compression, and time to keep the old log files. For the three vmk* files,
raise "250k" to "4096k", and enable compression.

888. /etc/inittab
889. Here you can change the number of virtual terminals available on the
Service Console. The default is 6, but you can go up to 9; I always do. :-)
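For illustration, assuming the stock mingetty entries are present, the extra
terminals are added with lines like these in /etc/inittab:

7:2345:respawn:/sbin/mingetty tty7
8:2345:respawn:/sbin/mingetty tty8
9:2345:respawn:/sbin/mingetty tty9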

890. /etc/bashrc
891. The system default $PS1 is defined here. It is a good idea to change "\W"
to "\w" here to always see the full path while logged on the Service Console. This
is one of my favourites.
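A sketch of the change; the exact default line in /etc/bashrc may differ by
release, but the idea is simply swapping the prompt escape:

# before: PS1="[\u@\h \W]\\$ "   (shows only the current directory name)
# after:  PS1="[\u@\h \w]\\$ "   (shows the full path)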

892. /etc/profile.d/colorls.sh
893. Command "ls" is aliased to "ls --colortty" here. Many admins don't like
this colouring. You can comment-out ("#") this line. I always do this one, too.

894. /etc/init.d/
895. This directory contains the actual start-up scripts.

896. /etc/rc3.d/
897. This directory contains the K(ill) and S(tart) scripts for the default
runlevel 3. The services starting with "S" are started on this runlevel, and
the services starting with "K" are killed, i.e. not started.

898. /var/log/
899. This directory contains all the log files. VMware's log files start with
letters "vm". The general main log file is "messages".

900. /etc/ssh/
901. This directory contains all the SSH daemon configuration files and the
public and private keys. The defaults are both secure and flexible and rarely
need any changing.

902. /etc/vmware/
903. This directory contains the most important vmkernel configuration files.

904. /etc/vmware/vm-list
905. A file containing a list of registered VMs on this ESX Server.

906. /etc/xinetd.conf
907. This is the main and defaults setting configuration file for xinet daemon.
Processing the contents of /etc/xinetd.d/ directory is also defined here.

908. /etc/xinetd.d/
909. This directory contains instructions, service by service, on whether
and how to start each service. Of the services here, vmware-authd, wu-ftpd,
and telnet are the most interesting to us.
910. Two of the most interesting parameter lines are "bind =" and "only_from
=", which allow you to limit where the service is offered and who may use it.

911. /etc/ntp.conf
912. This file configures the NTP daemon. Usable public NTP servers in
Finland are fi.pool.ntp.org, elsewhere in Europe europe.pool.ntp.org. You
should always place two to four NTP servers in the ntp.conf file; due to the
nature of *.pool.ntp.org, you can simply repeat the same line four times in
the configuration file. Check www.pool.ntp.org for a public NTP server close
to you. Remember to set the service to autostart at runlevel 3.
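A minimal sketch of that setup; the pool host is an example, adjust it to your
region:

# /etc/ntp.conf
server europe.pool.ntp.org
server europe.pool.ntp.org
server europe.pool.ntp.org

Then enable autostart at runlevel 3 and start the daemon with the standard Red
Hat service commands on the COS:

chkconfig --level 3 ntpd on
service ntpd start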

913. --------------------

914. 22/tcp
915. SSH daemon listens to this port for remote connections. By default
password authentication is used for logons. RSA/DSA public/private key
authentication can be used and it is actually tried first. Userid/password
authentication is actually tried second. For higher security and for
automated/scripted logons RSA/DSA authentication must be used.

916. 902/tcp
917. VMware authd, the web management UI (MUI) and remote console
authentication daemon (service) for VMware ESX Server, uses this port. The
daemon does not listen on this port directly; xinetd does. When someone opens
a connection to port 902, xinetd launches authd and the actual authentication
starts. Xinetd-related authd security is defined in the file
/etc/xinetd.d/vmware-authd.

918. 80/tcp and 443/tcp


919. The httpd.vmware application web server listens to these ports. With high
security on, all connections to port 80 are automatically redirected to port 443.

920. 8222/tcp and 8333/tcp


921. These ports are used by ESX Server's web UI. They simply forward to
ports 80 and 443 respectively, so they do not need to be open on the
firewalls.
922. Remember that sshd is by default always running on the Service Console,
so you can always connect to it and do low-level management directly on the
Service Console files. An example of this kind of management is when the MUI
stops responding: just log in with your account via ssh, and enter the
following command to restart the web server responsible for the MUI: su -c
"/etc/init.d/httpd.vmware restart". You normally need root's password to
complete this task. You could (should!) also use sudo/visudo to make things
even easier.
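As a hedged sketch of the sudo approach, an entry like the following (edited
with visudo) would let an example account "admin" restart the MUI without the
root password; the username is a placeholder:

admin   ALL=(root)   NOPASSWD: /etc/init.d/httpd.vmware restart

With that in place the admin simply runs: sudo /etc/init.d/httpd.vmware restart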

923. -----------------------------

924. The DRS invocation interval defaults to 5 minutes; the value can be
changed in the vpxd.cfg file.
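A sketch of the vpxd.cfg change; the value is in seconds (300 = 5 minutes),
and the element names should be verified against VMware's documentation for
your VirtualCenter version before editing:

<config>
  <drm>
    <pollPeriodSec>300</pollPeriodSec>
  </drm>
</config>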

925. --------------- CPU and memory share values, respectively, default to:

926. • High: 2000 shares per virtual CPU and 20 shares per megabyte of
virtual machine memory
927. • Normal: 1000 shares per virtual CPU and 10 shares per megabyte of
virtual machine memory
928. • Low: 500 shares per virtual CPU and 5 shares per megabyte of virtual
machine memory

932. ---------------

933. Removing a vmnic from a virtual switch via the COS instead of the MUI:
934. you could try manually updating these three files:

935. /etc/vmware/devnames.conf
936. /etc/vmware/hwconfig
937. /etc/vmware/netmap.conf

938. Please try this on a test system first, and back the files up before
changing anything.

940. ---------------

941. service console issue

942. ok, do the following

943. Delete your vswif and vmknic interfaces by using the following
commands
944. esxcfg-vswif -d vswif0

945. esxcfg-vmknic -d vmk2

946. then delete your port groups

947. esxcfg-vswitch -D "VMKernel"

948. esxcfg-vswitch -D "Service Console"

949. Then delete your vswitches

950. esxcfg-vswitch -d vSwitch0

951. Now you should have a 'blank' networking config.

952. Now run the reset options for

953. esxcfg-vswitch -r

954. esxcfg-vmknic -r

955. esxcfg-vswif -r

956. Run the listings (the -l option on each command) to see what is left.

957. Now to create everything again

958. Create the vswitches

959. esxcfg-vswitch -a vSwitch0

960. esxcfg-vswitch -a vSwitch1

961. create your port groups

962. esxcfg-vswitch -A "Service Console" vSwitch0

963. esxcfg-vswitch -A "VMKernel" vSwitch0

964. esxcfg-vswitch -A "VMWare Guests" vSwitch1

965. Now create the uplinks

966. esxcfg-vswitch -L vmnic0 vSwitch0


967. esxcfg-vswitch -L vmnic1 vSwitch1

968. If this all works with no issues, then run esxcfg-vswitch -l to see
what it looks like.

969. Now recreate the vswif interface

970. esxcfg-vswif -a vswif0 -p "Service Console" -i 192.168.1.4 -n 255.255.255.0

971. Now recreate the vmkernel interface

972. esxcfg-vmknic -a "VMKernel" -i 192.168.1.5 -n 255.255.255.0

973. Run esxcfg-vswitch -l to check what your vSwitch config looks like.
Hopefully everything looks good.

974. Then check whether you can ping your Service Console IP from another PC.

975. Hope this helps!

976. --------------------

977. lsof -i

978. ---------

979. Service Console Configuration and Troubleshooting Commands


980. esxcfg-advcfg–VMware ESX Server Advanced Configuration Option
Tool
981. Provides an interface to view and change advanced options of the
VMkernel.
982. esxcfg-boot–VMware ESX Server Boot Configuration Tool
983. Provides an interface to view and change boot options, including updating
initrd and GRUB options.
984. esxcfg-configcheck–VMware ESX Server Config Check Tool
985. Checks the configuration file for format updates.
986. esxcfg-info–VMware ESX Server Info Tool
987. Used primarily for debugging, this command provides a view into the state
of the VMkernel and Service Console components.
988. esxcfg-module–VMware ESX Server Advanced Configuration Option
Tool
989. This command provides an interface to see which driver modules are
loaded when the system boots, as well as the ability to disable or add additional
modules.
990. esxcfg-pciid–VMware ESX Server PCI ID Tool
991. This utility rescans the PCI ID list (/etc/vmware/pci.xml), and loads PCI
identifiers for hardware so the Service Console can recognize devices.
992. esxcfg-resgrp–VMware ESX Server Resource Group Manipulation Utility
993. Using this command, it is possible to create, delete, view, and modify
resource group parameters and configurations.
994. esxupdate–VMware ESX Server Software Maintenance Tool
995. This command is used to query the patch status, as well as to apply
patches to an ESX host. Only the root user can invoke this command.

996. vdf–VMware ESX Disk Free Command


997. As df works in Linux, vdf works in the Service Console. The df command
will work in the Service Console, but will not show free disk space on VMFS
volumes.
998. vmkchdev–VMware ESX Change Device Allocation Tool
999. This tool can assign devices to either the Service Console or VMkernel, as
well as list whether a device is assigned to the SC or VMkernel. This replaced the
vmkpcidivy command found in previous versions of VMware ESX.
1000. vmkdump–VMkernel Dumper
1001. This command manages the VMkernel dump partition. It is primarily used
to copy the contents of the VMkernel dump partition to a usable file for
troubleshooting.
1002. vmkerrcode–VMkernel Status Return Code Display Utility
1003. This command will decipher VMkernel error codes along with their
descriptions.
1004. vmkfstools–VMware ESX Server File System Management Tool
1005. This utility is used to create and manipulate VMFS file systems,
physical storage devices on an ESX host, logical volumes, and virtual disk
files (a short example follows at the end of this list).
1006. vmkiscsi-device–VMware ESX iSCSI Device Tool
1007. Used to query information about iSCSI devices.
1008. vmkload_mod–Vmkernel Module Loader
1009. This application is used to load, unload, or list, device drivers and network
shaper modules into the VMkernel.
1010. vmkloader–VMkernel Loader
1011. This command loads or unloads the VMkernel.
1012. vmkpcidivy–VMware ESX Server Device Allocation Utility
1013. This utility in previous versions of VMware ESX, allowed for the
allocation of devices to either the Service Console, or the VMkernel. In VMware
ESX 3.0, this utility is deprecated, and should only be used to query the host bus
adapter allocations using the following: vmkpcidivy -q vmhba_devs
1014. vmkuptime.pl–Availability Report Generator
1015. This PERL script creates HTML that displays uptime statistics and
downtime statistics for a VMware ESX host.
1016. vmware-hostd–VMware ESX Server Host Agent
1017. The vmware-hostd script acts as an agent for an ESX host and its virtual
machines.
1018. vmware-hostd-support–VMware ESX Server Host Agent Crash
Information Collector
1019. This script collects information to help determine the state of the ESX host
after a hostd crash.
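Referring back to vmkfstools, two common operations look roughly like this;
the datastore, folder, and disk names are made up for illustration:

# create a new 10 GB virtual disk on an existing VMFS datastore
vmkfstools -c 10G /vmfs/volumes/datastore1/testvm/testvm.vmdk

# grow that virtual disk to 20 GB (extend only; take a backup first)
vmkfstools -X 20G /vmfs/volumes/datastore1/testvm/testvm.vmdk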

1020. Networking and Storage Commands


1021. esxcfg-firewall–VMware ESX Server Firewall Configuration Tool
1022. Provides an interface to view and change the settings of Service Console
firewall.
1023. esxcfg-hwiscsi–VMware ESX Server Hardware iSCSI Configuration Tool
1024. Provides an interface to allow or deny ARP redirection on a hardware
iSCSI adapter, as well as enable or disable jumbo frames support.
1025. esxcfg-linuxnet–No specific name
1026. This command is only used when troubleshooting VMware ESX. It allows
the settings of the vswif0 (virtual NIC for the Service Console under normal
operation), to be passed to the eth0 interface when booting without loading the
VMkernel. Without the VMkernel loading, the vswif0 interface is not available.
1027. esxcfg-mpath–VMware ESX Server multipathing information
1028. This command allows for the configuration of multipath settings for Fibre
Channel and iSCSI LUNs.
1029. esxcfg-nas–VMware ESX Server NAS configuration tool
1030. This command is an interface to manipulate the NAS file systems that
VMware ESX sees.
1031. esxcfg-nics–VMware ESX Server Physical NIC Information
1032. This command shows information about the Physical NICs that the
VMkernel is using.
1033. esxcfg-rescan–VMware ESX Server HBA Scanning Utility
1034. This command initiates a scan of a specific host bus adapter device.
1035. esxcfg-route–VMware ESX Server VMkernel IP Stack Default Route
Management Tool
1037. This can set the default route for a VMkernel virtual network adapter
(vmknic).
1038. esxcfg-swiscsi–VMware ESX Server Software iSCSI Configuration Tool
1039. The command line interface for configuring software based iSCSI
connections.
1040. esxcfg-vmhbadevs–VMware ESX Server SCSI HBA Tool
1041. Utility to view LUN information for SCSI host bus adapters configured in
VMware ESX.
1042. esxcfg-vmknic–VMware ESX Server VMkernel NIC Configuration Tool
1043. Configuration utility for managing the VMkernel virtual network
adapter(vmknic).
1044. esxcfg-vswif–VMware ESX Server Service Console NIC Configuration
Tool
1045. Configuration utility for managing the Service Console virtual network
adapter (vswif).
1046. esxcfg-vswitch–VMware ESX Server Virtual Switch Configuration Tool
1047. Configuration utility for managing virtual switches and settings.
1048. esxnet-support–VMware ESX Server Network Support Script.
1049. This script is used to perform a diagnostic analysis of the Service Console
and VMkernel’s network connections and settings.
1050. vmkping–VMkernel Ping
1051. Used to ping a destination through the VMkernel network stack (via the
vmknic)
1052. vmkiscsi-ls–VMware ESX iSCSI Target List Tool
1053. This command shows all iSCSI Targets that the iSCSI subsystem knows
about, including Target name, Target ID, session status, host number, bus
number, and more.
1054. vmkiscsi-tool–VMware ESX iSCSI Tool
1055. This command will show the properties of iSCSI initiators.
1056. vmkiscsi-util–VMware ESX iSCSI Utility
1057. This command will display LUN Mapping, Target Mapping, and Target
Details.
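As a sketch of how a few of these commands fit together when checking
VMkernel/iSCSI connectivity; the IP address and adapter name below are
examples only:

esxcfg-nics -l                 # list physical NICs and their link state
esxcfg-vmknic -l               # list VMkernel NICs and their IP settings
vmkping 192.168.1.10           # ping the storage target through the VMkernel stack
esxcfg-rescan vmhba32          # rescan the (software) iSCSI adapter for new LUNs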

1058. VMware Consolidated Backup Commands


1059. vcbMounter–VMware Consolidated Backup–Virtual Machine Mount
Utility
1060. This utility is used to mount a virtual machine's virtual disk files
for the purpose of backing up their contents (a usage sketch follows at the
end of this list).
1062. vcbResAll–VMware Consolidated Backup–Virtual Machine Restore
Utility
1063. This utility is used to restore multiple virtual machines’ virtual disk files.
1064. vcbRestore–VMware Consolidated Backup–Virtual Machine Restore
Utility
1065. This utility is used to restore a single virtual machine’s virtual disk files
1066. vcbSnapAll–VMware Consolidated Backup–Virtual Machine Mount
Utility
1067. This utility is used to backup one or more virtual machines’ virtual disk
files.
1068. vcbSnapshot–VMware Consolidated Backup–Snapshot Utility
1069. This utility is used to backup a virtual machine’s virtual disk files.
1070. vcbUtil–VMware Consolidated Backup–Resource Browser and Server
Ping
1071. This utility provides different information depending on the argument.
The ping argument attempts to log into the VirtualCenter Server, the resource
pools argument lists all resource pools, and the vmfolders argument lists
folders that contain virtual machines.
1073. vcbVmName–VMware Consolidated Backup–VM Locator Utility
1074. This utility performs a search of virtual machines for VCB scripting.
It can list individual VMs, all VMs that meet a certain criterion, or VMs on
a specific ESX host.
1076. vmres.pl–Virtual Machine Restore Utility
1077. This Perl script is deprecated; vcbRestore should be used instead.
1078. vmsnap.pl–Virtual Machine Mount Utility
1079. This Perl script is deprecated; vcbMounter should be used instead.
1080. vmsnap_all–Virtual Machine Mount Utility
1081. This script is deprecated; vcbSnapAll should be used instead.
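Referring back to vcbMounter, a rough usage sketch; the VirtualCenter host,
credentials, VM name, and mount point are placeholders:

vcbMounter -h vcserver -u backupuser -p secret -a name:testvm -r /vmbackup/testvm -t fullvm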
1082. ------------------------------

1083. VMkernel Related logging


1084. /var/log/vmkernel –Keeps information about the host and guests
1085. /var/log/vmkwarning –Collects VMkernel warning messages
1086. /var/log/vmksummary –Collects statistics for uptime information
1087. Host Agent logging
1088. /var/log/vmware/hostd.log –Information on the agent and configuration
of an ESX host
1090. Service Console logging
1091. /var/log/messages –Contains general log messages for troubleshooting.
This also keeps track of any users that have logged into the Service Console,
and their actions.
1094. Web Access logging
1095. /var/log/vmware/webAccess –Web access logging for VMware ESX
1096. Authentication logging
1097. /var/log/secure –Records all authentication requests
1098. VirtualCenter agent logging
1099. /var/log/vmware/vpx –Logs for the VirtualCenter Agent
1100. Virtual Machine logging –Look for a file named vmware.log in the
directory containing the virtual machine's configuration files.

1102. ESX Memory Management – Part 1


1103. Apr 27th, 2009
1104. by Arnim van Lieshout.

1105. I receive a lot of questions lately about ESX memory management. Things
that are very obvious to me seem to be not so obvious at all for some other people.
So I’ll try to explain these things from my point of view.
1106. First let’s have a look at the virtual machine settings available to us. On
the vm setting page we have several options we can configure for memory
assignment.

1107. Allocated memory: This is the amount of memory we assign to the vm and
is also the amount of memory the guest OS will see as its physical memory. This
is a hard limit and the vm cannot exceed this limit if it demands more memory. It
is configured on the hardware tab of the vm’s settings.
1108. Reservations: A reservation is a guaranteed amount of memory assigned to
the vm. This is a way of ensuring that the vm gets a minimal amount of memory
assigned. When this reservation cannot be met, you will be unable to start the vm.
This is known as “Admission Control”. Reservations are set on the resources tab
of the vm’s settings and by default there is no reservation set.
1109. Limits: A limit is a restriction on the vm, so it cannot use more
memory than this limit. If you set this limit lower than the allocated memory
value, the ballooning driver will start to inflate as soon as the vm demands
more memory than this limit. Limits are set on the resources tab of the vm's
settings and by default the limit is set to "unlimited".

Now that we know of limits and reservations, we need to have a quick look at
the VMkernel swap file. This swap file is used by the VMkernel to swap out
the vm's memory as a last resort to free up memory when the host is running
out of it. When we set a reservation, that memory is guaranteed and cannot be
swapped out to disk. So whenever a vm starts up, the VMkernel creates a swap
file whose size is the limit minus the reservation. For example, take a vm
with a 1024MB limit and a reservation of 512MB: the swap file created will be
1024MB - 512MB = 512MB. If we set the reservation to 1024MB, no swap file is
created at all. Remember that by default there are no reservations and no
limits set, so the swap file created for each vm will be the same size as the
allocated memory.
1110. Shares: With shares you set a relative importance on a vm. Unlike limits
and reservation which are fixed, shares can change dynamically. Remember that
the share system only comes into play when memory resources are scarce and
contention is occurring. Shares are set on the resources tab of the vm’s settings
and can be set to “low”, “normal”, “high” or a custom value.
1111. low = 5 shares per 1MB allocated to the vm
1112. normal = 10 shares per 1MB allocated to the vm
1113. high = 20 shares per 1MB allocated to the vm
1114. It is important to note that the more memory you assign to a vm, the
more shares it receives. Let's look at an example to show how this share
system works. Say you have 5 vms, each with 2,000MB of memory allocated and
the share value set to "normal". The ESX host only has 4,000MB of physical
machine memory available for virtual machines. Each vm receives 20,000 shares
according to the "normal" setting (10 * 2,000). The sum of all shares is 5 *
20,000 = 100,000. Every vm will receive an equal share of 20,000/100,000 =
1/5th of the resources available = 4,000/5 = 800MB.

Now we change the shares setting on one vm to "High", which results in this
vm receiving 40,000 shares instead of 20,000. The sum of all shares is now
increased to 120,000. This vm will receive 40,000/120,000 = 1/3rd of the
resources available, thus 4,000/3 = 1,333MB. All the other vms will receive
only 20,000/120,000 = 1/6th of the available resources = 4,000/6 = 666MB.

1115. Instead of configuring these settings on a per-vm basis, it is also
possible to configure them on a resource pool. A VMware ESX resource pool is
a pool of CPU and memory resources; I always think of a resource pool as a
group of VMs.

1116. This concludes the memory settings we can configure on a vm. Next time
I will go into ESX memory management techniques.

1117. -----------

1118. http://vm-where.com/links.aspx
