Some other questions:
3. How does HA know to restart a VM from a dropped host? (The storage lock on the VM's files is released from the VMFS metadata, so a surviving host can take ownership.)
4. How many iSCSI targets will ESX support? 8 for ESX 3.0.1 (64 for 3.5).
12. The VMware service console manages the firewall, SNMP agent, Apache Tomcat, and other services such as the HA and DRS agents.
13. VMware virtual machine files are .vmdk (virtual disk), .vmx (configuration), .nvram (BIOS file), and log files.
15. The ESX Server hypervisor offers basic partitioning of server resources; it also acts as the foundation for virtual infrastructure software, enabling VMotion, DRS, and the other keys to the dynamic, automated datacenter.
16. Host agent: on each managed host, software that collects, communicates, and executes the actions received through the VI Client. It is installed as part of the ESX Server installation.
17. VirtualCenter agent: on each managed host, software that collects, communicates, and executes the actions received from the VirtualCenter server. The VirtualCenter agent is installed the first time any host is added to the VirtualCenter inventory.
18. ESX Server installation requirements: 1500 MHz Intel or AMD CPU, 1 GB RAM minimum (up to 256 GB supported), 4 GB of hard drive space.
19. The configuration file that manages the mapping of service console file systems to mount points is /etc/fstab.
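For illustration, a typical service console /etc/fstab on ESX 3.x looks roughly like this (device names and mount points are examples, not taken from these notes):
/dev/sda2   /       ext3   defaults   1 1
/dev/sda1   /boot   ext3   defaults   1 2
/dev/sda3   swap    swap   defaults   0 0
Each line maps a device to its mount point, followed by the file system type, mount options, and dump/fsck ordering.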
29. The VI Client provides direct access to an ESX server for configuration and virtual machine management.
30. The VI Client is also used to access VirtualCenter, providing management, configuration, and monitoring of all ESX servers and their virtual machines within the virtual infrastructure environment. However, when using the VI Client to connect directly to an ESX server, no VirtualCenter features can be managed; e.g., you cannot configure and administer VMware DRS or VMware HA.
31. VMware license mode: 60-day trial by default. After 60 days you can create VMs but you cannot power them on. The license types are Foundation, Standard, and Enterprise.
32. Foundation license: VMFS, Virtual SMP, VirtualCenter agent, VMware Update Manager, VCB.
36. By default the first service console network connection is always named Service Console. It always lives on vSwitch0, and that switch always connects to vmnic0.
37. To gather VMware diagnostic information, run the script vm-support from the service console. The generated diagnostic information is stored in a VMware-virtualcenter-support-date@time folder. The folder contains viclient-support, which holds the VI Client log files, and esx-support-date@time.tgz, a compressed archive containing the ESX server diagnostic information.
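A minimal run from the service console looks like this (the exact archive name varies by ESX version, so treat it as illustrative):
# cd /tmp
# vm-support
The script gathers logs and configuration into a compressed .tgz archive in the current directory, which you can then send to VMware support.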
39. You cannot have two virtual switches mapped to one physical NIC.
40. You can map two or more physical NICs to one virtual switch.
41. A VMkernel port is used by the VMkernel for accessing iSCSI or NAS-based storage.
42. A service console port on a virtual switch is used to give the service console access to a management LAN.
43. A virtual switch can have 1016 ports, plus 8 ports reserved for management purposes, for a total of 1024.
47. An ESX server can have up to 32 Intel NICs or 20 Broadcom NICs.
53. Multiple service console connections can be created only if they are configured on different networks. In addition, only a single service console gateway IP address can be defined.
54. A VMkernel port allows the use of iSCSI and NAS-based networks; a VMkernel port is required for VMotion.
55. It requires a network label, an optional VLAN ID, and IP settings.
56. Multiple VMkernel connections can be configured only if they are configured on different networks; only a single VMkernel gateway IP address can be defined.
57. A virtual machine port group requires a network label; the VLAN ID is optional.
59. Security
60. Traffic shaping
61. NIC teaming
63. Promiscuous mode: when set to reject, placing a guest adapter in promiscuous mode has no effect on which frames are received by the adapter.
64. MAC address changes: when set to reject, if the guest attempts to change the MAC address assigned to the virtual NIC, it stops receiving frames.
65. Forged transmits: when set to reject, drops any frames that the guest sends where the source address field contains a MAC address other than the assigned virtual NIC MAC address (default: accept).
68. SNMP: incoming port 161, outgoing port 162.
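If the service console firewall is blocking SNMP, the named service can be enabled with esxcfg-firewall; a sketch, assuming the ESX 3.x service name snmpd:
# esxcfg-firewall -e snmpd
# esxcfg-firewall -q snmpd
The -q option queries the current status of the named service.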
77. View the iSCSI name assigned to the iSCSI software adapter: vmkiscsi-tool -I -l vmhba40
78. View the iSCSI alias assigned to the iSCSI software adapter: vmkiscsi-tool -k -l vmhba40
79. Log in to the service console as root and execute esxcfg-vmhbadevs to identify which LUNs are currently seen by the ESX server. # esxcfg-vmhbadevs
80. Run the esxcfg-vmhbadevs command with the -m option to map VMFS names to VMFS UUIDs. Note that the LUN partition numbers are shown in this output. The hexadecimal values are described later. # esxcfg-vmhbadevs -m
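Illustrative output (the path, device, and UUID values below are invented for the example):
# esxcfg-vmhbadevs -m
vmhba1:0:1:1   /dev/sdb1   45e7f5a1-b2c3d4e5-6f70-000c2912abcd
The first column is the VMkernel path (adapter:target:LUN:partition), the second is the service console device, and the third is the VMFS UUID.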
82. Use the vdf -h command to identify disk statistics (Size, Used, Avail, Use%, Mounted on) for all file system volumes recognized by your ESX host.
83. List the contents of the /vmfs/volumes directory. The hexadecimal numbers are the unique VMFS names; the friendly names are the VMFS labels, which are symbolically linked to the VMFS volumes. # ls -l /vmfs/volumes
84. Using the Linux device name (obtained with the esxcfg-vmhbadevs command), check LUNs A, B and C to see if any are partitioned. If there is no partition table (example a. below), go to step 3. If there is a table (example b.), go to step 2. # fdisk -l /dev/sd<?>
85. Format a partitioned LUN using vmkfstools. Use the -C and -S options, respectively, to create and label the volume. Using the command below, create a VMFS volume on LUN A. Ask your instructor if you should use a custom VMFS label name. # vmkfstools -C vmfs3 -S LUN<#> vmhba1:0:#:1
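A concrete instance, assuming LUN A is LUN 2 on adapter vmhba1 and using the label LUN2 (all values illustrative):
# vmkfstools -C vmfs3 -S LUN2 vmhba1:0:2:1
This creates a VMFS-3 file system labeled LUN2 on partition 1 of that LUN.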
86. Now that the LUN has been partitioned and formatted as a VMFS volume, it can be used as a datastore. Your ESX host recognizes these new volumes. # vdf -h
87. Use the esxcfg-vmhbadevs command with the -m option to map the VMFS hex names to SAN LUNs. # esxcfg-vmhbadevs -m
88. It may be helpful to change the label to identify that this VMFS volume is spanned. Add -spanned to the VMFS label name. # ln -sf /vmfs/volumes/<VMFS-UUID> /vmfs/volumes/<New-Label-Name>
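For example, assuming an (invented) volume UUID and a new label of Storage1-spanned:
# ln -sf /vmfs/volumes/45e7f5a1-b2c3d4e5-6f70-000c2912abcd /vmfs/volumes/Storage1-spanned
The label is simply a symbolic link pointing at the UUID-named volume directory.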
89. In order to remove a span, you must reformat LUN B with a new VMFS volume (because it was the LUN that was spanned to). THIS WILL DELETE ALL DATA ON BOTH LUNS IN THE SPAN! # vmkfstools -C vmfs3 -S <label> vmhba1:0:#:1
90. Enable the NTP client through the Service Console firewall: # esxcfg-firewall -e ntpClient
91. Determine if the NTP daemon starts when the system boots. # chkconfig --list
ntpd
92. Configure the system to synchronize the hardware clock and the operating system clock each time the ESX Server host is rebooted: # nano -w /etc/sysconfig/clock and set UTC=true
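Putting the NTP steps together, a typical service console sequence looks like this (the time server name is a placeholder):
# esxcfg-firewall -e ntpClient
# echo "server pool.ntp.org" >> /etc/ntp.conf
# chkconfig ntpd on
# service ntpd restart
# hwclock --systohc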
94. Communication between the VI Client and an ESX server requires ports 902 and 903.
96. Communication between the VI Web Access client and ESX: 80, 443.
98. Communication between an ESX server and the license server: 27010 (in), 27000 (out).
99. ESX servers in a VMware HA cluster: 2050-5000 (in), 8042-8045 (out).
106. List the different ways to identify your virtual machine. To do this, use the vcbVmName command: vcbVmName -h <VirtualCenter-Server-IP-Address-or-Hostname> -u <VirtualCenter-Server-user-account> -p <VirtualCenter-Server-password> -s ipaddr:<IP-address-of-virtual-machine-to-backup>
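Cleaned up, an example invocation looks like this (server name, account, password, and address are placeholders):
vcbVmName -h vcserver.example.com -u administrator -p password -s ipaddr:192.168.1.50
The output lists identifiers for the VM (name, UUID, moref, IP), any of which can be passed to the other vcb* utilities.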
107. Unmount the virtual disk(s): mountvm -u c:\backups\tempmnt
108. A VMFS volume is created on one partition per LUN; 256 GB is the maximum size of a virtual disk with the default 1 MB block size. Up to 32 extents can be added to a volume, for a maximum of 64 TB.
113. vmname.vmdk -- the actual virtual hard drive for the virtual guest operating system.
118. Log files should be used only when you are having trouble with a virtual
machine.
119. VMDK files – VMDK files are the actual virtual hard drives for the virtual guest operating system (virtual machine / VM). You can create either dynamic or fixed virtual disks. With dynamic disks, the disks start small and grow as the disk inside the guest OS grows. With fixed disks, the virtual disk and the guest OS disk start out at the same (large) size. For more information on monolithic vs. split disks see the comparison at sanbarrow.com.
120. VMEM – A VMEM file is a backup of the virtual machine’s paging file. It
will only appear if the virtual machine is running, or if it has crashed.
121. VMSN & VMSD files – these files are used for VMware snapshots. A
VMSN file is used to store the exact state of the virtual machine when the
snapshot was taken. Using this snapshot, you can then restore your machine to the
same state as when the snapshot was taken. A VMSD file stores information
about snapshots (metadata). You’ll notice that the names of these files match the
names of the snapshots.
122. NVRAM files – these files are the BIOS for the virtual machine. The VM
must know how many hard drives it has and other common BIOS settings. The
NVRAM file is where that BIOS information is stored.
123. VMX files – a VMX file is the primary configuration file for a virtual machine. When you create a new virtual machine and answer questions about the operating system, disk sizes, and networking, those answers are stored in this file. A VMX file is actually a simple text file that can be edited with Notepad – for example, a "Windows XP Professional.vmx" file:
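A minimal sketch of such a file, with illustrative values for an ESX 3.x (virtual hardware version 4) Windows XP guest:
config.version = "8"
virtualHW.version = "4"
displayName = "Windows XP Professional"
guestOS = "winXPPro"
memsize = "512"
scsi0.present = "TRUE"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "Windows XP Professional.vmdk"
ethernet0.present = "TRUE"
ethernet0.networkName = "VM Network"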
133. How VMotion works: live migration of a virtual machine from one physical server to another with VMotion is enabled by three underlying technologies.
134. First, the entire state of a virtual machine is encapsulated by a set of files stored on shared storage such as a Fibre Channel or iSCSI SAN or NAS. VMware's clustered VMFS allows multiple installations of ESX Server to access the same virtual machine files concurrently.
135. Second, the active memory and precise execution state of the virtual
machine is rapidly transferred over a high speed network, allowing the virtual
machine to instantaneously switch from running on the source ESX Server to the
destination ESX Server. VMotion keeps the transfer period imperceptible to users
by keeping track of on-going memory transactions in a bitmap. Once the entire
memory and system state has been copied over to the target ESX Server, VMotion
suspends the source virtual machine, copies the bitmap to the target ESX Server,
and resumes the virtual machine on the target ESX Server. This entire process
takes less than two seconds on a Gigabit Ethernet network.
136. Third, the networks being used by the virtual machine are also virtualized
by the underlying ESX Server, ensuring that even after the migration, the virtual
machine network identity and network connections are preserved. VMotion
manages the virtual MAC address as part of the process. Once the destination
machine is activated, VMotion pings the network router to ensure that it is aware
of the new physical location of the virtual MAC address. Since the migration of a
virtual machine with VMotion preserves the precise execution state, the network
identity, and the active network connections, the result is zero downtime and no
disruption to users.
137. DRS will balance the workload across the resources you presented to the
cluster. It is an essential component of any successful ESX implementation.
138. With VMware ESX 3.x and VirtualCenter 2.x, it's possible to configure
VirtualCenter to manage the access to the resources automatically, partially, or
manually by an administrator.
139. This option is particularly useful for setting an ESX server into
maintenance mode. Maintenance mode is a good environment to perform tasks
such as scanning for new storage area network (SAN) disks, reconfiguring the
host operating system's networking or shutting down the server for maintenance.
Since virtual machines can't be run during maintenance mode, the virtual
machines need to be relocated to other host servers. Commonly, administrators
will configure the ESX cluster to fully automate the rules for the DRS settings.
This allows Virtual Center to take action based on workload statistics, available
resources and available host servers.
140. An important point to keep in mind is that DRS works in conjunction with
any established resource pools defined in the Virtual Center configuration. Poor
resource pool configuration (such as using unlimited options) can cause DRS to
make unnecessary performance adjustments. If you truly need to use unlimited
resources within a resource pool the best practice would be to isolate. Isolation
requires a separate ESX cluster with a limited number of ESX hosts that share a
single resource pool where the virtual machines that require unlimited resources
are allowed to operate. Sharing unlimited setting resource pools with limited
setting resource pools within the same cluster could cause DRS to make
unnecessary performance adjustments. DRS can compensate for this scenario, but
that could be by bypassing any resource provisioning and planning previously
established.
141. How VMotion works with DRS :- The basic concept of VMotion is that
ESX will move a virtual machine while it is running to another ESX host with the
move being transparent to the virtual machine. ESX requires a dedicated network interface at 1 Gbps or greater, shared storage, and a virtual machine that can be moved. Not all virtual machines can be moved: certain situations, such as an optical drive bound to an image file, prevent a virtual machine from migrating.
With VMotion enabled, an active virtual machine can be moved automatically or
manually from one ESX host to another. An automatic situation would be as
described earlier when a DRS cluster is configured for full automation. When the
cluster goes into maintenance mode, the virtual machines are moved to another
ESX host by VMotion. Should the DRS cluster be configured for all manual
operations, the migration via VMotion is approved within the Virtual
Infrastructure Client, and then VMotion proceeds with the moves.
142. VMware ESX 3.5 introduces the highly anticipated Storage VMotion.
Should your shared storage need to be brought offline for maintenance, Storage
VMotion can migrate an active virtual machine to another storage location. This
migration will take longer, as the geometry of the virtual machine's storage is
copied to the new storage location. Because this is not a storage solution, the
traffic is managed through the VMotion network interface.
143. Points to consider - One might assume that with the combined use of DRS and VMotion, all bases are covered. Well, not entirely. There are a few
considerations that you need to be aware of so that you know what DRS and
VMotion can and cannot do for you.
144. VMotion does not give an absolutely zero gap in connectivity during a migration. In my experience the drop in connectivity via ping is usually limited to one ping from a client, or a minuscule increase in ping time on the actual virtual machine. Most workloads will not notice the change and will reconnect over the network during a VMotion migration. There is also a slight increase in memory usage, and on larger virtual machines this may trigger a warning on RAM usage that usually clears on its own.
153. Now that VMotion is enabled on two or more hosts, when should it be
used? There are two primary reasons to use VMotion: to balance the load on the
physical ESX servers and to eliminate the need to take a service offline in order to
perform maintenance on the server.
154. VI3 balances its load by using a new feature called DRS. DRS is included
in the VI3 Enterprise edition along with VMotion. This is because DRS uses
VMotion to balance the load of an ESX cluster in real time between all of the servers involved in the cluster. For information on how to configure DRS, see page
95 of the VMware VI3 Resource Management Guide. Once DRS is properly
configured it will constantly be evaluating how best to distribute the load of
running VMs amongst all of the host servers involved in the DRS-enabled cluster.
If DRS decides that a particular VM would be better suited to run on a different
host then it will utilize VMotion to seamlessly migrate the VM over to the other
host.
155. While DRS migrates VMs here and there with VMotion, it is also possible
to migrate all of the VMs off of one host server (resources permitting) and onto
another. This is accomplished by putting a server into "maintenance mode." When
a server is put into maintenance mode, VMotion will be used to migrate all of the
running VMs off it onto another server. This way it is possible to bring the first
server offline to perform physical maintenance on it without impacting the
services that it provides.
168. The VM's affinity must not be set, i.e., binding it to physical CPU(s).
169. The VM must not be clustered with another VM (using a cluster service like the Microsoft Cluster Service (MSCS)).
170. The two ESX servers involved must both be using (the same!) shared
storage.
171. The two ESX servers involved must be connected via Gigabit Ethernet (or
better).
172. The two ESX servers involved must have access to the same physical
networks.
173. The two ESX servers involved must have virtual switch port groups that are labeled the same.
174. The two ESX servers involved must have compatible CPUs. (See support
on Intel and AMD).
175. If any of the above conditions are not met, VMotion is not supported and will not start. The simplest way to test these conditions is to attempt a manual VMotion event. This is accomplished by right-clicking a VM in the VI3 client and clicking "Migrate...". The VI3 client will ask to which host this VM should be migrated. When a host is selected, several validation checks are performed. If any of the above conditions are violated, the VI3 client will halt the VMotion operation with an error.
176. Conclusion
177. The intent of this article was to provide readers with a solid grasp of what
VMotion is and how it can benefit them. If you have any outstanding questions
with regards to VMotion or any VMware technology please do not hesitate to
send them to me via ask the experts.
179. What Is VMware VMotion?
180. VMware® VMotion™ enables the live migration of running virtual machines from one physical server to another with zero downtime, continuous service availability, and complete transaction integrity. VMotion allows IT organizations to:
• Continuously and automatically allocate virtual machines within resource pools.
• Improve availability by conducting maintenance without disrupting business operations.
185. VMotion is a key enabling technology for creating the dynamic,
automated, and self-optimizing data center.
m. #3 Virtual SMP
n. VMware's Virtual SMP (VSMP) is the feature that allows a single virtual machine to use up to 4 physical processors on the host system simultaneously. Additionally, with VSMP, processing tasks are balanced among the various CPUs.
a. VMware ESXi – the slimmed down (yet fully functional) version of ESX
server that has no service console. By buying ESXi, you get VMFS and
virtual SMP only.
b. VMware Infrastructure Foundation – previously called the starter kit, the Foundation package includes ESX or ESXi, VMFS, Virtual SMP, VirtualCenter agent, Consolidated Backup, and Update Manager.
c. VMware Infrastructure Standard – includes ESX or ESXi, VMFS, Virtual
SMP, Virtual center agent, consolidated backup, update manager, and
VMware HA.
202. VMware Infrastructure Enterprise – includes ESX or ESXi, VMFS,
Virtual SMP, Virtual center agent, consolidated backup, update manager,
VMware HA, VMotion, Storage VMotion, and DRS.
203. You should note that VirtualCenter is required for some of the more advanced features, and it is purchased separately. Also, there are varying levels of support available for these products. As the length and the priority of your support package increase, so does the cost.
210. Recently VMware added a somewhat useful command-line tool named vmfs-undelete, which exports metadata to a recovery log file that can be used to restore vmdk block addresses in the event of deletion. It's a simple tool; at present it's experimental and unsupported, and it is not available on ESXi. The tool of course demands that you were proactive and ran its backup function before you needed it. I think this falls well short of what we need here: what if you have no previous backups of the VMFS configuration? We really need to know what to look for and how to correct it, and that's exactly why I created this blog.
a. When you connect to VC, you manage the ESX server via vpxa (the agent on the ESX server). vpxa then passes those requests to hostd (the management service on the ESX server). When you connect to the ESX server directly, you connect to hostd (bypassing vpxa). You can extend this to a troubleshooting case where connecting to ESX shows one thing and connecting to VC shows another: the problem is most likely that hostd and vpxa are out of sync, and "service vmware-vpxa restart" should take care of it.
221. How many VMs can you manage in VirtualCenter? (2000 with ESX 3.5 / VC 2.5)
222. How many ESX hosts can you connect to VirtualCenter? (200 for 3.5)
252. A service console port requires a network label, an optional VLAN ID, and a static IP or DHCP.
271. After changes are made at the command line, for the changes to be reflected you need to restart the management service (hostd): service mgmt-vmware restart
292. Communication between the VI Web Access client and VirtualCenter: 80, 443.
301. VCB Mounter is used, among other things, to create the snapshot for the 3rd-party backup software to access: vcbMounter -h <VC-IP-address-or-hostname> -u <VC-user-account> -p <VC-user-password> -a <identifier-of-the-VM-to-backup> -r <directory-on-VCB-proxy-to-put-backup> -t <backup-type: file or fullvm>
341. Maximum vCPUs per physical core is 4 to 8 (8 in ESX 3.5).
345. During a VMotion a bitmap file is created and users keep working against the bitmap file; the changes are then copied to the other ESX host.
395. Schedule VMotion for all systems to keep them moving across hosts.
396. Regularly put ESX hosts into, and take them back out of, maintenance mode.
397. Do not leave mounted CD-ROM media on virtual machines
(datastore/ISO file or host device options).
398. Keep virtual machines up to date with VMware tools and virtual machine
versioning.
399. Monitor the VPX_EVENT table in your VirtualCenter database for EVENT_TYPE = vim.event.VmFailedMigrateEvent
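A quick check, assuming direct SQL access to the VirtualCenter database and the table/column names given above:
SELECT * FROM VPX_EVENT WHERE EVENT_TYPE = 'vim.event.VmFailedMigrateEvent';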
400. All in all, DRS and VMotion are solid technologies. Anomalies can
happen, and the risks should be identified and put into your regular monitoring for
visibility.
407. A request has been made that VM-A should be migrated (VMotioned)
from ESX-A to ESX-B
410. VM-A is started on ESX-B and all access to VM-A is now directed to the
copy running on ESX-B.
411. The rest of VM-A's memory is copied from ESX-A in the background; any memory still on ESX-A is fetched when applications on VM-A (now running on ESX-B) attempt to access it.
472. Note:
473. You need to have VMotion configured and working for SVMotion to work. Additionally, there are a ton of caveats about SVMotion in the ESX 3.5 administrator's guide (page 245) that could cause SVMotion not to work. One final reminder: SVMotion moves the storage for a VM from a local datastore on an ESX server to a shared datastore (a SAN) and back – it will not move the VM itself, only the VM's storage.
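With the VI 3.5 Remote CLI installed, the simplest way to drive it is the interactive mode, which prompts for the VirtualCenter server, VM, and target datastore:
# svmotion --interactive
See the Remote CLI documentation for the non-interactive syntax.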
489. VMFS is a high-performance cluster file system that allows multiple systems to access the file system at the same time. VMFS is what gives you a solid platform for VMotion and VMHA. With VMFS you can dynamically increase a volume; it also supports distributed journaling and adding a virtual disk on the fly.
496. Storage VMotion (or SVMotion) is similar to VMotion in the sense that
"something" related to the VM is moved and there is no downtime to the VM
guest and end users. However, with SVMotion the VM Guest stays on the server
that it resides on but the virtual disk for that VM is what moves. Thus, you could
move a VM guest's virtual disks from one ESX server’s local datastore to a shared
SAN datastore (or vice versa) with no downtime for the end users of that VM
guest. There are a number of restrictions on this; for more technical details on how it works, please see the VMware ESX Server 3.5 Administrator's Guide.
503. #9 VMware’s Virtual Center (VC) & Infrastructure Client (VI Client)
504. I prefer to list the VMware Infrastructure client & Virtual Center as one of
the advanced features of ESX Server & the VI Suite. Virtual Center is a required
piece of many of the advanced ESX Server features. Also, VC has many
advanced features in its own right. When tied with VC, the VI Client is really the
interface that a VMware administrator uses to configure, optimize, and administer
all of your ESX Server systems.
505. With the VI Client, you gain performance information, security & role
administration, and template-based rollout of new VM guests for the entire virtual
infrastructure. If you have more than 1 ESX Server, you need VMware Virtual
Center.
520. However, all licensed functionality currently operating at the time the license server becomes unavailable continues to operate as follows:
• All VirtualCenter licensed features continue to operate indefinitely, relying on a cached version of the license state. This includes not only basic VirtualCenter operation, but licenses for VirtualCenter add-ons, such as VMotion and DRS.
• For ESX Server licensed features, there is a 14-day grace period during which hosts continue operation, relying on a cached version of the license state, even across reboots. After the grace period expires, certain ESX Server operations, such as powering on virtual machines, become unavailable.
During the ESX Server grace period, when the license server is unavailable, the following operations are unaffected:
• Virtual machines continue to run. VI Clients can configure and operate virtual machines.
• ESX Server hosts continue to run. You can connect to any ESX Server host in the VirtualCenter inventory for operation and maintenance. Connections to the VirtualCenter Server remain. VI Clients can operate and maintain virtual machines from their host even if the VirtualCenter Server connection is also lost.
During the grace period, restricted operations include:
• Adding ESX Server hosts to the VirtualCenter inventory. You cannot change VirtualCenter agent licenses for hosts.
• Adding or removing hosts from a cluster. You cannot change host membership for the current VMotion, HA, or DRS configuration.
• Adding or removing license keys.
When the grace period has expired, cached license information is no longer stored. As a result, virtual machines can no longer be powered on. Running virtual machines continue to run but cannot be rebooted.
When the license server becomes available again, hosts reconnect to the license server. No rebooting or manual action is required to restore license availability. The grace period timer is reset whenever the license server becomes available again.
555. We can have a long discussion about this, but it's plain and simple:
556. On an Active/Passive array you need to set the path policy to "Most Recently Used" (MRU).
557. An Active/Active array must have the path policy set to "Fixed".
558. Now I always wondered why there is a difference between these path policies. There are probably a couple of explanations, but the most obvious one is:
559. MRU fails over to an alternative path when any of the SCSI sense codes NOT_READY, ILLEGAL_REQUEST, NO_CONNECT or SP_HUNG is received. Keep in mind that MRU doesn't fail back.
560. For Active/Active SANs with the Fixed path policy, a failover only occurs when the SCSI sense code NO_CONNECT is received. When the path returns, a failback will occur.
561. As you can see, four SCSI sense codes against just one. You can imagine what happens if you change MRU to Fixed when it's not supported by the array: SCSI sense codes will be sent out, but ESX isn't expecting them and will not perform a path failover.
563. One more: what is the maximum swap size we can allocate for an ESX host? Answer: 1600 MB, since a maximum of only 800 MB of RAM can be allocated to the COS/SC; hence swap size = twice the COS/SC memory.
565. The way to enable VMotion via the command line changed. So for anyone looking for this particular command:
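As best I can reconstruct it for ESX 3.5 (assuming the first vmkernel port is vmk0, as in the next note):
# vmware-vim-cmd hostsvc/vmotion/vnic_set vmk0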
578. In this example "vmk0" is the first vmkernel port. This is one of the things that changed, so no portgroup IDs anymore. And if you need to do anything via the command line that doesn't seem to be possible with the normal commands, vmware-vim-cmd is definitely the way to go.
580. To list the virtual machines registered on an ESX server, the command is:
581. vmware-cmd -l
583. The primary difference between NAS and SAN is at the communication level. NAS communicates over the network using a network share, while SAN primarily uses the Fibre Channel protocol.
584. NAS devices transfer data from the storage device to the server in the form of files. NAS units use file systems, which are managed independently. These devices manage file systems and user authentication.
593. When ESX is booted, it scans Fibre Channel and SCSI devices for new and existing LUNs. You can manually initiate a scan through the VMware Management Interface or by using the cos-rescan.sh command. VMware recommends using cos-rescan.sh because it is easier to use with certain Fibre Channel adapters than vmkfstools.
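Usage is simply the script plus the adapter to rescan (adapter name illustrative):
# cos-rescan.sh vmhba0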
615. vmkmultipath: using the -r switch allows you to specify the preferred path to a disk. The syntax is:
vmkmultipath -s <disk> -r <NewPath>
# vmkmultipath -s vmhba1:0:1 -r vmhba2:0:1
629. When using commands like df, you will not see the /vmfs directory. Instead, you need to use vdf, which reports all of the normal df information plus information about the VMFS volumes.
664. ESX provides several tools that can be used to monitor utilization. vmkusage is an excellent tool for graphing historical data regarding VMNIC performance. The second tool that can be utilized for performance monitoring is esxtop.
682. Since private virtual switches do not need to map back to VMNICs, there
is no need to touch the /etc/vmware/hwconfig file.
683. We can add two simple lines to /etc/vmware/netmap.conf to create a new
private virtual switch:
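The two lines themselves were not preserved in these notes; on ESX 2.x they took roughly this form, assuming the next free network number and an arbitrary switch name:
network1.name = "Private Network"
network1.device = "vmnet_0"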
689. We can easily configure the gigabit connection to be the "home link" for the virtual switch. Upon failure of the home link, the backup link will automatically activate and handle the virtual machine traffic until the issue with the high-speed connection is resolved. When performing this failover, ESX utilizes the same methodology as MAC address load balancing, instantly re-ARPing the virtual MAC addresses for the virtual machines down the alternative path. To make this configuration, add the following line to /etc/vmware/hwconfig:
nicteam.bond0.home_link = "vmnic1"
691. IP Address
692. The second method that ESX is capable of providing for load balancing is based on destination IP address. Since outgoing virtual machine traffic is balanced based on the destination IP address of the packet, this method provides a much more balanced configuration than MAC-address-based balancing. Like the previous method, if a link failure is detected by the VMkernel, there will be no impact on the connectivity of the virtual machines. The downside of utilizing this method of load balancing is that it requires additional configuration of the physical network equipment.
694. Because of the way the outgoing traffic traverses the network in an IP address load balancing configuration, the MAC addresses of the virtual NICs will be seen by multiple switch ports. In order to get around this "issue", either EtherChannel (assuming Cisco switches are utilized) or 802.3ad (LACP - Link Aggregation Control Protocol) must be configured on the physical switches. Without this configuration, the duplicate MAC addresses will cause switching issues.
701. In addition to requiring physical switch configuration changes, an ESX server configuration change is required. There is a single line that needs to be added to /etc/vmware/hwconfig for each virtual switch that you wish to enable IP address load balancing on. To make this change, use your favorite text editor and add the line below to /etc/vmware/hwconfig. You will need to utilize the same configuration file to determine the name of the bond that you need to reference in the following entry (replace "bond0" with the proper value):
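The line itself is missing from these notes; as best I recall from the ESX 2.x documentation it is the load-balance-mode entry, roughly:
nicteam.bond0.load_balance_mode = "out-ip"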
711. The following steps assume no virtual machines have been configured on the ESX host:
713. Modify /etc/modules.conf to comment out the line that begins with "alias eth0". This will disable eth0 on the next reboot of the ESX host.
716. Run vmkpcidivy -i at the console. Walk through the current configuration. When you get to the network adapter that is assigned to the Console (c), make sure to change it to Virtual Machines (v). This should be the only value that changes from running vmkpcidivy.
721. Modify /etc/vmware/hwconfig by reconfiguring the network bonds. Remove any line that begins with the following (in this case, "X" can be any numeric value):
nicteam.vmnicX.team
725. Once the current bond has been deleted, add the following two lines to the end of the file:
nicteam.vmnic0.team = "bond0"
nicteam.vmnic1.team = "bond0"
812. Boot sequence: LILO, then the VMkernel, then init (/etc/inittab).
829. S90vmware
830. This is where the VMkernel finally begins to load. The first thing that the VMkernel does when it starts is load the proper device drivers to interact with the physical hardware of the host. You can view all the drivers that the VMkernel may utilize by looking in the /usr/lib/vmware/vmkmod directory.
835. Once the VMkernel has successfully loaded the proper hardware drivers it starts to run its various support scripts:
• The vmklogger sends messages to the syslog daemon and generates logs the entire time the VMkernel is running.
• The vmkdump script saves any existing VMkernel dump files from the VMcore dump partition and prepares the partition for future dumps.
842. Next the VMFS partitions (the partitions used to store all of your VM disk files) are mounted. The VMkernel simply scans the SCSI devices of the system and then automatically mounts any partition that is configured as VMFS. Once the VMFS partitions are mounted, the remaining COS start-up scripts run.
847. S91httpd.vmware
848. One of the last steps of the boot process for the COS is to start the VMware MUI (the web interface for VMware management). At this point the VMkernel has been loaded and is running. Starting the MUI provides us with an interface used to graphically interact with ESX. Once the MUI is loaded, a display plugged into the host's local console will show a message stating everything is properly loaded, and you can now access your ESX host from a web browser.
860. http://www.applicationdelivery.co.uk/blog/tag/vmdk-limits/
861. While this may seem limited, the tool used is actually quite powerful. VMware provides the nfshaper module, which allows the VMkernel to control outgoing bandwidth on a per-guest basis.
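As a rough sketch, assuming the ESX 3 style module loader, the shaper module could be inspected and loaded from the service console like this (older releases may require the full path to the .o file in /usr/lib/vmware/vmkmod):

# list the currently loaded VMkernel modules, then load the traffic shaper
vmkload_mod -l
vmkload_mod nfshaper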
864. -------------------------
866. Command "esxtop" is like the "top" command, but it shows the VMkernel
processes instead.
867. "vdf" tool displays the amount of free space on different volumes,
including VMFS volumes.
868. The ESX utility for locating the correct network card among many is "findnic".
872. /etc/fstab This file defines the local and remote filesystems which are
mounted at ESX Server boot.
873. /etc/rc.d/rc.local This file is for local customisations required at server bootup. Potential additions to this file are public/shared VMFS mounts.
874. /etc/syslog.conf
875. This file configures what things are logged and where. Some examples are
given below:
876. *.crit /dev/tty12
877. This example logs all log items at level "crit" (critical) or higher to the
virtual terminal at tty12. You can see this log by pressing [Alt]-[F12] on the
console.
878. *.=err /dev/tty11
879. This example logs all log items at exactly level "err" (error) to the virtual
terminal at tty11. You can see this log by pressing [Alt]-[F11] on the console.
880. *.=warning /dev/tty10
881. This example logs all log items at exactly level "warning" to the virtual
terminal at tty10. You can see this log by pressing [Alt]-[F10] on the console.
882. *.* @192.168.31.3
883. This example forwards everything (all syslog entries) to another (central) syslog server; note the leading "@", which tells syslogd to forward entries over the network rather than treat the value as a file. Pay attention to that server's security.
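Putting the examples together, the /etc/syslog.conf additions would look like the sketch below; restart the syslog service afterwards for the changes to take effect:

# /etc/syslog.conf additions (the central server address is an example)
*.crit        /dev/tty12
*.=err        /dev/tty11
*.=warning    /dev/tty10
*.*           @192.168.31.3

# then, from the service console:
service syslog restart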
884. /etc/logrotate.conf
885. This is the main configuration file for the log file rotation program. It defines the defaults for log file rotation, log file compression, and the time to keep old log files. Processing the contents of the /etc/logrotate.d/ directory is also defined here.
886. /etc/logrotate.d/
887. This directory contains instructions, service by service, for log file rotation, log file compression, and the time to keep old log files. For the three vmk* files, raise "250k" to "4096k" and enable compression (a sketch follows).
888. /etc/inittab
889. Here you can change the number of virtual terminals available on the Service Console. The default is 6, but you can go up to 9. I always go up to 9 :-) (see the sketch below).
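The extra terminals are standard mingetty entries; a sketch of the three added lines:

# /etc/inittab -- add virtual terminals 7-9 on the Service Console
7:2345:respawn:/sbin/mingetty tty7
8:2345:respawn:/sbin/mingetty tty8
9:2345:respawn:/sbin/mingetty tty9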
890. /etc/bashrc
891. The system default $PS1 is defined here. It is a good idea to change "\W" to "\w" here, to always see the full path while logged on to the Service Console (see the sketch below). This is one of my favourites.
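A sketch of the adjusted prompt definition (the surrounding defaults differ per release; only the \W to \w change matters):

# /etc/bashrc -- show the full working directory in the prompt
PS1="[\u@\h \w]\\$ "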
892. /etc/profile.d/colorls.sh
893. Command "ls" is aliased to "ls --color=tty" here. Many admins don't like this colouring. You can comment the line out ("#"). I always do this one, too.
894. /etc/init.d/
895. This directory contains the actual start-up scripts.
896. /etc/rc3.d/
897. This directory contains the K(ill) and S(tart) scripts for the default runlevel 3. The services starting with "S" are started on this runlevel, and the services starting with "K" are killed, i.e. not started.
898. /var/log/
899. This directory contains all the log files. VMware's log files start with
letters "vm". The general main log file is "messages".
900. /etc/ssh/
901. This directory contains all the SSH daemon configuration files plus the public and private keys. The defaults are both secure and flexible and rarely need any changing.
902. /etc/vmware/
903. This directory contains the most important vmkernel configuration files.
904. /etc/vmware/vm-list
905. A file containing a list of registered VMs on this ESX Server.
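As a sketch, the entries in vm-list typically just reference each registered VM's configuration file; the path below is a placeholder:

config "/home/vmware/myvm/myvm.vmx"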
906. /etc/xinetd.conf
907. This is the main configuration and default-settings file for the xinet daemon. Processing the contents of the /etc/xinetd.d/ directory is also defined here.
908. /etc/xinetd.d/
909. This directory contains instructions, service by service, on whether and how to start each service. Of the services here, vmware-authd, wu-ftpd, and telnet are the most interesting to us.
910. Two of the most interesting parameter lines are "bind =" and "only_from =", which allow limiting service usage (see the fragment sketched below).
911. /etc/ntp.conf
912. This file configures the NTP daemon. Usable public NTP servers in Finland are fi.pool.ntp.org, and elsewhere in Europe europe.pool.ntp.org. You should always place two to four NTP servers in the ntp.conf file. Due to the nature of *.pool.ntp.org, you can simply have the same line four times in the configuration file (as sketched below). Check www.pool.ntp.org for a public NTP server close to you. Remember to change the service to autostart at runlevel 3.
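A sketch of the resulting ntp.conf lines, plus enabling the daemon at runlevel 3 from the service console:

# /etc/ntp.conf -- the same pool line repeated, per the advice above
server europe.pool.ntp.org
server europe.pool.ntp.org
server europe.pool.ntp.org
server europe.pool.ntp.org

# autostart ntpd at runlevel 3 and start it now
chkconfig --level 3 ntpd on
service ntpd start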
913. --------------------
914. 22/tcp
915. The SSH daemon listens on this port for remote connections. By default, password authentication is used for logons. RSA/DSA public/private key authentication can also be used, and it is actually tried first; userid/password authentication is tried second. For higher security, and for automated/scripted logons, RSA/DSA authentication must be used (a sketch follows).
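A sketch of setting up key-based logons, assuming a DSA key and an ESX host reachable as esxhost (both are placeholders), and assuming ~/.ssh already exists for root on the host:

# on the admin workstation: generate a key pair
ssh-keygen -t dsa

# append the public key to the ESX host's authorized_keys
cat ~/.ssh/id_dsa.pub | ssh root@esxhost 'cat >> ~/.ssh/authorized_keys'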
916. 902/tcp
917. VMware authd, the authentication daemon (service) for the web management UI (MUI) and the remote console of VMware ESX Server, uses this port. The daemon does not listen on this port directly; xinetd does. When someone opens a connection to port 902, xinetd launches authd and the actual authentication starts. Xinetd-related authd security is defined in the file /etc/xinetd.d/vmware-authd.
923. -----------------------------
924. The DRS invocation interval defaults to 5 minutes; the value can be changed in the vpxd.cfg file (a sketch follows).
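A hedged sketch of the vpxd.cfg change; the commonly cited element is drm/pollPeriodSec with a value in seconds, but verify it against your VirtualCenter version before relying on it:

<!-- vpxd.cfg fragment: DRS invocation interval, in seconds -->
<config>
  <drm>
    <pollPeriodSec>300</pollPeriodSec>
  </drm>
</config>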
932. ---------------
933. Removing a vmnic from a virtual switch via the COS instead of the MUI (forum thread, Aug 14, 2006 12:51 PM):
934. You could try manually updating these three files:
935. /etc/vmware/devnames.conf
936. /etc/vmware/hwconfig
937. /etc/vmware/netmap.conf
940. ---------------
943. Delete your vswif and vmknic interfaces by using the following commands:
944. esxcfg-vswif -d vswif0
953. esxcfg-vswitch -r
954. esxcfg-vmknic -r
955. esxcfg-vswif -r
968. If this all works with no issues, then run esxcfg-vswitch -l to see what it looks like.
973. Run esxcfg-vswitch -l to check what your vswitch configuration looks like. Hopefully everything looks good.
974. Then check whether you can ping your SC IP from another PC.
976. --------------------
977. lsof -i (lists open network sockets and the processes holding them)
978. ---------
1105. I have been receiving a lot of questions lately about ESX memory management. Things that are very obvious to me seem not to be so obvious to other people, so I'll try to explain them from my point of view.
1106. First let's have a look at the virtual machine settings available to us. On the vm settings page we have several options we can configure for memory assignment.
1107. Allocated memory: This is the amount of memory we assign to the vm and
is also the amount of memory the guest OS will see as its physical memory. This
is a hard limit and the vm cannot exceed this limit if it demands more memory. It
is configured on the hardware tab of the vm’s settings.
1108. Reservations: A reservation is a guaranteed amount of memory assigned to the vm. This is a way of ensuring that the vm gets a minimum amount of memory. When this reservation cannot be met, you will be unable to start the vm; this is known as "Admission Control". Reservations are set on the resources tab of the vm's settings, and by default there is no reservation set.
1109. Limits: A limit is a restriction on the vm, so it cannot use more memory than this limit. If you set this limit lower than the allocated memory value, the ballooning driver will start to inflate as soon as the vm demands more memory than the limit. Limits are set on the resources tab of the vm's settings, and by default the limit is set to "unlimited".
Now that we know about limits and reservations, we need to have a quick look at the VMkernel swap file. This swap file is used by the VMkernel to swap out the vm's memory as a last resort to free up memory when the host is running out of it. When we set a reservation, that memory is guaranteed and cannot be swapped out to disk. So whenever a vm starts up, the VMkernel creates a swap file whose size is the limit minus the reservation. For example, take a vm with a 1024MB limit and a 512MB reservation: the swap file created will be 1024MB - 512MB = 512MB. If we set the reservation to 1024MB, no swap file is created at all. Remember that by default there are no reservations and no limits set, so the swap file created for each vm will be the same size as the allocated memory.
1110. Shares: With shares you set a relative importance on a vm. Unlike limits and reservations, which are fixed, share entitlements can change dynamically. Remember that the share system only comes into play when memory resources are scarce and contention is occurring. Shares are set on the resources tab of the vm's settings and can be set to "low", "normal", "high" or a custom value.
1111. low = 5 shares per 1MB allocated to the vm
1112. normal = 10 shares per 1MB allocated to the vm
1113. high = 20 shares per 1MB allocated to the vm
1114. It is important to note that the more memory you assign to a vm, the more shares it receives.
Let's look at an example to show how this share system works (a small sketch of the arithmetic follows below). Say you have 5 vms, each with 2,000MB of memory allocated and the share value set to "normal". The ESX host only has 4,000MB of physical machine memory available for virtual machines. Each vm receives 20,000 shares according to the "normal" setting (10 * 2,000). The sum of all shares is 5 * 20,000 = 100,000. Every vm will receive an equal share of 20,000/100,000 = 1/5th of the available resources = 4,000/5 = 800MB.
Now we change the shares setting on one vm to "high", which results in that vm receiving 40,000 shares instead of 20,000. The sum of all shares is now increased to 120,000. This vm will receive 40,000/120,000 = 1/3rd of the available resources, thus 4,000/3 = 1,333MB. All the other vms will receive only 20,000/120,000 = 1/6th of the available resources = 4,000/6 = 666MB.
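A minimal shell sketch of the proportional-share arithmetic above; the share counts mirror the example's values:

#!/bin/sh
# proportional share allocation: each vm gets total * own_shares / sum_of_shares
TOTAL_MB=4000
SHARES="20000 20000 20000 20000 40000"   # four "normal" vms and one "high" vm

SUM=0
for S in $SHARES; do SUM=$((SUM + S)); done

for S in $SHARES; do
    echo "vm with $S shares gets $((TOTAL_MB * S / SUM)) MB"
done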
1116. This concludes the memory settings we can configure on a vm. Next time
I will go into ESX memory management techniques.
1117. -----------
1118. http://vm-where.com/links.aspx