
Reclaim disk space from thin provisioned VMDK files in ESXi Server
Document created by Venu Vanama on Mar 11, 2016
Version 1


When you have VMs with thin provisioned disks, you may have noticed that these disks continue to
grow and never shrink, even after deleting data from within the VM. Currently, VMware does not offer
automatic space reclamation. Luckily, there are some tools that let you reclaim unused space from your
virtual thin disks. The only downside is that you have to power off the affected machines.

Step 1: Run the command below to check information such as the firmware revision, thin provisioning status, and the VAAI filter and status for the device backing the datastore the virtual machine runs on:

# esxcli storage core device list -d naa.60a98000572d54724a346a6170627a52


naa.60a98000572d54724a346a6170627a52
Display Name: Hitachi Fibre Channel Disk (naa.60a98000572d54724a346a6170627a52)
Has Settable Display Name: true
Size: 51200
Device Type: Direct-Access
Multipath Plugin: NMP
Devfs Path: /vmfs/devices/disks/naa.60a98000572d54724a346a6170627a52
Vendor: HITACHI
Model: LUN
Revision: 8020
SCSI Level: 4
Is Pseudo: false
Status: on
Is RDM Capable: true
Is Local: false
Is Removable: false
Is SSD: false
Is Offline: false
Is Perennially Reserved: false
Thin Provisioning Status: yes
Attached Filters: VAAI_FILTER
VAAI Status: supported
Other UIDs: vml.020033000060a98000572d54724a346a6170627a524c554e202020
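
If you do not already know which device backs a given datastore, esxcli can map VMFS datastores to their backing devices; the NAA identifier in the Device Name column is what you pass to -d above:

# esxcli storage vmfs extent list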

Here we see that the device is indeed thin provisioned and supports VAAI. Now we can run a command to display the VAAI
primitives supported by the array for that device. In particular, we are interested in whether the array supports the
UNMAP primitive for dead space reclamation (reported as the Delete Status). Another esxcli command is used for this step:

# esxcli storage core device vaai status get -d naa.60a98000572d54724a346a6170627a52

naa.60a98000572d54724a346a6170627a52
VAAI Plugin Name: VMW_VAAIP_HDS
ATS Status: supported
Clone Status: supported
Zero Status: supported
Delete Status: supported

The device reports Delete Status as supported, meaning the host can send SCSI UNMAP commands to the array for this
device when a space reclaim operation is requested.
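
To survey every attached device at once, the same command can be run without the -d option; it prints the VAAI status block for each device, so you can quickly spot any that report Delete Status: unsupported:

# esxcli storage core device vaai status get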

Step 2: "Really" deleting the space

Reclaiming disk space only works when the blocks on your virtual disk are really empty. Deleting data usually
only removes the entries from the file allocation table; it does not zero the blocks, so ESX will still
think the blocks are in use. There are a few tools available to do this on Windows and Linux hosts.
Windows
Download SDelete from Microsoft Sysinternals. This command-line tool can find and zero unused blocks.
You won't notice a difference in used disk space from within your VM, but zeroing enables VMware to identify
unused blocks and release them back to your datastore, effectively shrinking your thin provisioned disk.

Open an elevated command prompt and run SDelete:

sdelete.exe -z [drive:]

Replace [drive:] with the affected disk or partition. Note that when you have multiple partitions on a single
virtual disk, you need to do this on all partitions for it to be effective. Otherwise, reclamation will only be partial
because not every data block will be zeroed.
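
A minimal sketch of zeroing every partition in one pass from an elevated command prompt, assuming the disk is split into C: and D: (double the percent signs if you run this from a batch file):

rem C: and D: are placeholders for the partitions on the affected virtual disk
for %d in (C: D:) do sdelete.exe -z %d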
Linux
Linux takes a different approach. Depending on your filesystem there are several tools out there, but one that
works every time is filling the free space with a file of zeroes.
First, be sure to shut down all services that write to the volume you want to shrink, to avoid problems with
them running out of free disk space. Next, use dd to completely fill the free space with zeroes:

$ dd if=/dev/zero of=/[mounted-volume]/zeroes bs=1M; rm -f /[mounted-volume]/zeroes

Replace [mounted-volume] with wherever you've mounted the volume. The two commands are chained with ; rather
than && because dd exits with a non-zero status once the volume fills up, which is expected here. Note that when
you have multiple partitions on a single virtual disk, you need to do this on all of them for it to be effective;
otherwise reclamation will only be partial because not every data block will be zeroed.
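
If the disk carries several mounted filesystems, they can all be zeroed in one loop. A minimal sketch, assuming the placeholder mount points /, /var and /home; substitute your own:

# /, /var and /home are placeholders for the filesystems on the affected virtual disk
for vol in / /var /home; do
    dd if=/dev/zero of="$vol/zeroes" bs=1M   # fill the filesystem; dd stops when it is full
    rm -f "$vol/zeroes"                      # remove the filler file to free the space again
done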

Step 3: Reclaim the space in ESX

Once the data has been zeroed, you're ready to actually reclaim the disk space. This is done by "punching out" zeroed
blocks in the virtual disk's VMDK file.
First, power off the affected VMs and log in to any ESX server with access to the VMFS datastore that holds the
VMDKs you want to shrink. Change into the datastore directory and the VM's folder. Next, launch vmkfstools:

vmkfstools -K [disk].vmdk

Replace [disk] with the disk name. Don't reference the flat VMDK (usually called disk-flat.vmdk);
reference the descriptor file (disk.vmdk) instead.
Depending on the size of the disk, this can be a lengthy process. When finished, all blocks that were previously
zeroed from within the VM will now be gone from the VMDK and space will have been reclaimed.
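
Putting Step 3 together, a minimal end-to-end sketch from the ESXi shell; the VM ID 42, VM name myvm, and datastore label datastore1 are placeholders, so look up the real values with vim-cmd vmsvc/getallvms first:

# vim-cmd vmsvc/getallvms            # list VM IDs, names and datastores
# vim-cmd vmsvc/power.off 42         # power off the affected VM (42 is a placeholder ID)
# cd /vmfs/volumes/datastore1/myvm   # datastore1 and myvm are placeholders
# vmkfstools -K myvm.vmdk            # punch out zeroed blocks via the descriptor file
# vim-cmd vmsvc/power.on 42          # power the VM back on

On ESXi 5.5 and later, blocks freed on the VMFS volume itself can additionally be returned to the array with esxcli storage vmfs unmap -l datastore1, provided the device reported Delete Status: supported in Step 1.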

Note:
1. VMware Horizon View users: reclaimed disk space won't apply to your virtual desktops until the next recompose.
2. The above procedure only works with thin provisioned virtual disks.
