
How to Increase the Size of a Vdisk and Filesystem in a LDom Guest Domain

(Doc ID 1549604.1)

In this Document

Goal

Solution

Procedure to increase size of a zpool based on EFI label in a guest domain running Solaris 11

Procedure to increase size of a zpool based on SMI label in a guest domain running Solaris 11

Procedure to increase size of a UFS filesystem based on SMI label in a guest domain running
Solaris 11

Procedure to expand a SMI disk label using the new Solaris 11 format subcommand 'expand'

Procedure to expand root on a Solaris 10 Guest domain using live upgrade

Procedure to expand a vdisk in a Kernel Zone when Global Zone is a LDom Guest

References

APPLIES TO:

Solaris Operating System - Version 11 11/11 and later


Information in this document applies to any platform.

GOAL

How to increase the size of a vdisk and filesystem on a guest domain.

SOLUTION

LDoms supports a number of different backend devices that can be exported by the virtual disk server as a
virtual disk to a guest domain.
You can export a physical disk, a disk slice, a volume, or a file as a block device.
The following procedures are based on ZVOLs as the underlying devices on the primary domain.
The basic steps work for other kinds of backend devices as well.
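
For example, a ZVOL on the primary domain can be created and exported as a vdisk to a guest domain as follows (a minimal sketch using the names that appear in the examples below; adjust pool, volume, service, and domain names to your setup):

// create the backing ZVOL on the primary domain
primary# zfs create -V 50g g003-pool/rpool.img
// export the ZVOL through the virtual disk server primary-vds0
primary# ldm add-vdsdev /dev/zvol/dsk/g003-pool/rpool.img g003-rpool@primary-vds0
// add the exported volume as a vdisk to the guest domain 'g003'
primary# ldm add-vdisk g003-rpool g003-rpool@primary-vds0 g003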

Procedure to increase size of a zpool based on EFI label in a guest domain running Solaris 11

Assumptions:
- Guest domain is running Solaris 11
- underlying device for the vdisk on primary is a zvol
- vdisk in guest is EFI-labeled
- filesystem on vdisk in guest domain is ZFS

Before you start with the procedure to increase the size, ensure that you have a recent full backup of the
data on the volume you are trying to expand.
Even though the data on the guest domain is expected to remain fully available after completing the given
procedures, we recommend a full backup to be on the safe side.
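
One possible way to take such a backup for a ZFS-based vdisk is a recursive snapshot followed by a zfs send stream to a file or a remote host (a sketch only, assuming the pool in the guest is named 'rpool' and that /backup resides on separate storage; for a UFS vdisk, ufsdump(1M) can be used instead):

// recursive snapshot of all datasets in the pool
guest-ldom# zfs snapshot -r rpool@pre-expand
// stream the whole pool, including all snapshots, to a backup file
guest-ldom# zfs send -R rpool@pre-expand > /backup/rpool-pre-expand.zfs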

On primary domain:

Get the current volsize of the underlying ZVOL and increase it on the primary domain:

primary-domain# zfs get volsize g003-pool/rpool.img


NAME PROPERTY VALUE SOURCE
g003-pool/rpool.img volsize 50G local

primary-domain# zfs set volsize=70g g003-pool/rpool.img

On Guest LDom:
Procedure using autoexpand flag :

// check autoexpand flag


guest-ldom-g003# zpool get autoexpand rpool
NAME PROPERTY VALUE SOURCE
rpool autoexpand off local

// check size of zfs dataset


guest-ldom-g003# df -kl /rpool
Filesystem 1024-blocks Used Available Capacity Mounted on
rpool 51351552 31 51351465 1% /rpool

// set autoexpand flag to 'on' for the specific zpool and verify increased size for
this zpool
guest-ldom-g003# zpool set autoexpand=on rpool

// check size of zfs dataset again


guest-ldom-g003# df -kl /rpool
Filesystem 1024-blocks Used Available Capacity Mounted on
rpool 71995392 31 71995304 1% /rpool
# zpool list rpool
NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
rpool 69.8G 88K 69.7G 0% 1.00x ONLINE -

// set the autoexpand flag back to the value it had before this procedure (autoexpand is 'off' by default)
guest-ldom-g003# zpool get autoexpand rpool
NAME PROPERTY VALUE SOURCE
rpool autoexpand on local

guest-ldom-g003# zpool set autoexpand=off rpool

Procedure using zpool online -e :

// before resize of underlying storage

Guest# zpool list


NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
rpool 99.5G 16.8G 82.7G 16% 1.00x ONLINE -
test 19.9G 792K 19.9G 0% 1.00x ONLINE - << former size of 20 GB

// after resize ( 20 GB -> 30 GB ) of underlying storage

Guest# format -e
AVAILABLE DISK SELECTIONS:
-- lines omitted --
3. c2d3 <Unknown-Unknown-0001-30.00GB>
/virtual-devices@100/channel-devices@200/disk@3

Guest# zpool online -e test c2d3

Guest# zpool list


NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
rpool 99.5G 16.8G 82.7G 16% 1.00x ONLINE -
test 29.9G 1.05M 29.9G 0% 1.00x ONLINE - << new size of 30 GB now

Reference: System Administration Commands zpool(1M)


zpool online [-e] pool device...
-e
Expand the device to use all available space. If the
device is part of a mirror or raidz then all devices
must be expanded before the new space will become
available to the pool.
Procedure to increase size of a zpool based on SMI label in a guest domain running Solaris 11

Before you start with the procedure to increase the size, ensure that you have a recent full backup of the
data on the volume you are trying to expand.
Even though the data on the guest domain is expected to remain fully available after completing the given
procedures, we recommend a full backup to be on the safe side.

Assumptions:
- Guest domain is running Solaris 11
- underlying device for the vdisk on primary is a zvol
- vdisk in guest is SMI-labeled
- filesystem on vdisk in guest domain is ZFS

On primary domain:

# ldm list -o disk


------------------------------------------------------------------------------
NAME
primary

VDS
NAME VOLUME OPTIONS MPGROUP DEVICE
primary-vds0 g003-rpool /dev/zvol/dsk/g003-pool/rpool.img
--- lines omitted --

------------------------------------------------------------------------------
NAME
g003

DISK
NAME VOLUME TOUT ID DEVICE SERVER MPGROUP
--- lines omitted --
g003-rpool g003-rpool@primary-vds0 3 disk@3 primary

Increase volsize of the underlying zvol from 100G to 140G


# zfs get volsize g003-pool/rpool.img
NAME PROPERTY VALUE SOURCE
g003-pool/rpool.img volsize 100G local

# zfs set volsize=140G g003-pool/rpool.img


On Guest LDom

# prtvtoc /dev/rdsk/c2d3s0
* First Sector Last
* Partition Tag Flags Sector Count Sector Mount Directory
0 2 00 0 209534976 209534975
2 5 01 0 209534976 209534975

// Use the new format(1M) subcommand 'expand' to enlarge the partition label on c2d3s0
// See the section below: "Procedure to expand a SMI disk label using the new Solaris 11 format subcommand 'expand'"
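
In short, the dialog looks like this (a condensed sketch; the full transcript is shown in that section):

# format -e /dev/rdsk/c2d3s0
format> partition
partition> expand      << expand the label to the maximum allowed space
partition> 0           << then grow slice 0 to the new number of cylinders
partition> label
partition> quit
format> quit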

// Check filesystem type on slice 0 of c2d3s0


# fstyp /dev/rdsk/c2d3s0
zfs

# zpool get all rpool | egrep size


rpool size 99.5G -

# zpool get autoexpand rpool


rpool autoexpand off default

# df -kl /rpool
Filesystem 1024-blocks Used Available Capacity Mounted on
rpool 102703104 31 102703017 1% /rpool

# zpool set autoexpand=on rpool

# df -kl /rpool
Filesystem 1024-blocks Used Available Capacity Mounted on
rpool 143990784 31 143990696 1% /rpool

# zpool set autoexpand=off rpool

Note: Important fixes for this procedure are available in Oracle Solaris 11.2.8.4.0 (or greater) / Oracle
Solaris 10 150400-23 (SPARC) / 150401-23 (x86)

Procedure to increase size of a UFS filesystem based on SMI label in a guest domain running Solaris 11
Before you start with the procedure to increase the size, ensure that you have a recent full backup of the
data on the volume you are trying to expand.
Even though the data on the guest domain is expected to remain fully available after completing the given
procedures, we recommend a full backup to be on the safe side.

Assumptions:
- Guest domain is running Solaris 11
- underlying device for the vdisk on primary is a zvol
- vdisk in guest is SMI-labeled
- filesystem on vdisk in guest domain is UFS

On primary domain:

# ldm list -o disk


------------------------------------------------------------------------------
NAME
primary

VDS
NAME VOLUME OPTIONS MPGROUP DEVICE
primary-vds0 g003-dp /dev/zvol/dsk/g003-pool/dpool.img
--- lines omitted --

------------------------------------------------------------------------------
NAME
g003

DISK
NAME VOLUME TOUT ID DEVICE SERVER MPGROUP
--- lines omitted --
g003-dp g003-dp@primary-vds0 2 disk@2 primary

// Increase the size of the volume to a certain desired size


# zfs set volsize=200G g003-pool/dpool.img

# zfs get volsize g003-pool/dpool.img


NAME PROPERTY VALUE SOURCE
g003-pool/dpool.img volsize 200G local

On Guest LDom:

Unmount /dev/dsk/c2d2s0

# umount /dev/dsk/c2d2s0

// Expand slice 0 to the full size of the disk (in this case)
// Use the new format(1M) subcommand 'expand' to enlarge the partition label on c2d2s0
// See the section below: "Procedure to expand a SMI disk label using the new Solaris 11 format subcommand 'expand'"

// Check filesystem type


# fstyp /dev/dsk/c2d2s0
ufs

// Check size before growfs


# df -kl /dp
Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/dsk/c2d2s0 103181230 102329 102047089 1% /dp

// Growfs UFS to size of the slice 0


# growfs -M /dp /dev/rdsk/c2d2s0
/dev/rdsk/c2d2s0: 314376192 sectors in 51168 cylinders of 48 tracks, 128 sectors
153504.0MB in 3198 cyl groups (16 c/g, 48.00MB/g, 5824 i/g)
super-block backups (for fsck -F ufs -o b=#) at:
32, 98464, 196896, 295328, 393760, 492192, 590624, 689056, 787488, 885920,
Initializing cylinder groups:
...............................................................
super-block backups for last 10 cylinder groups at:
313399840, 313498272, 313596704, 313695136, 313793568, 313892000, 313990432,
314088864, 314187296, 314285728

// Confirm new size


# df -kl /dp
Filesystem 1024-blocks Used Available Capacity Mounted on
/dev/dsk/c2d2s0 154808718 153521 153623385 1% /dp

Procedure to expand a SMI disk label using the new Solaris 11 format subcommand 'expand'

(The 'expand' subcommand appears in the partition menu only if the underlying vdisk size is greater than the
currently labeled size.)

# format -e /dev/rdsk/c2d3s0

format> partition

PARTITION MENU:
0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
7 - change `7' partition
expand - expand label to use the maximum allowed space
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> expand
Expansion of label cannot be undone; continue (y/n) ? y
The expanded capacity was added to the disk label and "s2".
Disk label was written to disk.

partition> print
Current partition table (original):
Total disk cylinders available: 3980 + 2 (reserved cylinders) << *new* "Total disk cylinders available"

Part Tag Flag Cylinders Size Blocks


0 root wm 0 - 2841 99.91GB (2842/0/0) 209534976
1 unassigned wu 0 0 (0/0/0) 0
2 backup wu 0 - 3979 139.92GB (3980/0/0) 293437440
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0

partition> 0
Part Tag Flag Cylinders Size Blocks
0 root wm 0 - 2841 99.91GB (2842/0/0) 209534976

Enter partition id tag[root]:


Enter partition permission flags[wm]:
Enter new starting cyl[0]:
Enter partition size[209534976b, 2842c, 2841e, 102312.00mb, 99.91gb]: 3980c << enter the new total number of cylinders here

partition> print
Current partition table (unnamed):
Total disk cylinders available: 3980 + 2 (reserved cylinders)

Part Tag Flag Cylinders Size Blocks


0 root wm 0 - 3979 139.92GB (3980/0/0) 293437440
1 unassigned wu 0 0 (0/0/0) 0
2 backup wu 0 - 3979 139.92GB (3980/0/0) 293437440
3 unassigned wm 0 0 (0/0/0) 0
4 unassigned wm 0 0 (0/0/0) 0
5 unassigned wm 0 0 (0/0/0) 0
6 unassigned wm 0 0 (0/0/0) 0
7 unassigned wm 0 0 (0/0/0) 0

partition> label
[0] SMI Label
[1] EFI Label
Specify Label type[0]: 0
Ready to label disk, continue? y

partition> q
If the label is EFI, pay attention to the last sector of partition 0 (629129182) and the first sector of
partition 8 (671072223). Then, in format, select the partition to which you want to assign the new sectors,
increase that partition, and label the disk (the Kernel Zone example at the end of this document shows this dialog in full).

partition> p
Volume: test1
Current partition table (original):
Total disk sectors available: 671072189 + 16384 (reserved sectors)

Part Tag Flag First Sector Size Last Sector


0 usr wm 256 299.99GB 629129182
1 unassigned wm 0 0 0
2 unassigned wm 0 0 0
3 unassigned wm 0 0 0
4 unassigned wm 0 0 0
5 unassigned wm 0 0 0
6 unassigned wm 0 0 0
8 reserved wm 671072223 8.00MB 671088606

Procedure to expand root on a Solaris 10 Guest domain using live upgrade

Assumptions:
- Guest domain is running Solaris 10
- underlying device for the root vdisk in guest is of size 20G and SMI-labeled
- a second vdisk with the expected size (e.g. 40G) can be exported to the Solaris 10 Guest
domain for live upgrade

1) create a new ZVOL with a size of 40G (or any desired new size)
# zfs create -V 40g ldoms/ldg3-root2.img

2) export the new zvol to the Solaris 10 Guest


# ldm add-vdsdev /dev/zvol/dsk/ldoms/ldg3-root2.img ldg3-root2@primary-vds0
# ldm add-vdisk ldg3-root2 ldg3-root2@primary-vds0 ldg3

3) log in to the Guest and apply the most recent Live Upgrade patch 121430-94.
Then create a new rpool2, and create and luactivate a new LU ABE on this rpool2.

# patchadd 121430-94

4) label the new vdisk in the Guest so that all space is allocated to slice 0 (see the sketch after this step), then create a new root pool and BE
# zpool create rpool2 c0d1s0
# lucreate -c BE1 -n BE2 -p rpool2
# luactivate BE2
# init 6
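
The labeling in step 4 is done interactively with format(1M); a minimal sketch of that dialog (assuming the new vdisk shows up as c0d1 in the guest) could look like this:

# format c0d1
format> partition
partition> modify      << choose "All Free Hog" as the partitioning base and assign all free space to slice 0
partition> label       << write the SMI label to the disk
partition> quit
format> quit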

5) confirm that the newly booted ABE has the desired new size
# lustatus
Boot Environment Is Active Active Can Copy
Name Complete Now On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
BE1 yes no no yes -
BE2 yes yes yes no -

# zpool list
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
rpool2 39.8G 8.44G 31.3G 21% ONLINE -

// new BE2
# df -kl /
Filesystem kbytes used avail capacity Mounted on
rpool2/ROOT/BE2 41029632 4646133 32051953 13% /

// former BE1
# lumount BE1
# df -kl /.alt.BE1/
Filesystem kbytes used avail capacity Mounted on
/dev/dsk/c0d0s0 18479546 4439508 13855243 25% /.alt.BE1
# luumount BE1

Procedure to expand a vdisk in a Kernel Zone when Global Zone is a LDom Guest

In the following example, the vdisk for the Kernel Zone is backed by a ZVOL with an initial size of 80 GB,
which is then expanded to 100 GB.

primary # ldm ls -o disk primary | egrep ldg0


ldg0-data /dev/zvol/dsk/rpool/ldg0-data.img
ldg0-root /dev/zvol/dsk/ldoms/ldg0-root.img
ldg0-kz1-root /dev/zvol/dsk/space/ldg0-kz1-root.img
ldg0-kz1-data /dev/zvol/dsk/space/ldg0-kz1-data.img

primary # ldm ls -o disk ldg0


NAME
ldg0
DISK
NAME VOLUME TOUT ID DEVICE SERVER MPGROUP
ldg0-data ldg0-data@primary-vds0 1 disk@1 primary
ldg0-root ldg0-root@primary-vds0 0 disk@0 primary
ldg0-kz1-root ldg0-kz1-root@primary-vds0 3 disk@3 primary
ldg0-kz1-data ldg0-kz1-data@primary-vds0 4 disk@4 primary

// get current volsize of space/ldg0-kz1-data.img


primary# zfs get volsize space/ldg0-kz1-data.img
NAME PROPERTY VALUE SOURCE
space/ldg0-kz1-data.img volsize 80G local

The Guest LDom 'ldg0' has a Kernel Zone configured with the name 'kz1':

root@Guest-ldg0# zoneadm list -cv


ID NAME STATUS PATH BRAND IP
0 global running / solaris shared
2 kz1 running - solaris-kz excl

// status before resizing the underlying backing store (ZVOL) on primary


root@Guest-ldg0# echo | format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1d0 <Unknown-Unknown-0001-30.00GB>
/virtual-devices@100/channel-devices@200/disk@0
1. c1d1 <Unknown-Unknown-0001 cyl 282 alt 2 hd 96 sec 768>
/virtual-devices@100/channel-devices@200/disk@1
2. c1d3 <Unknown-Unknown-0001-40.00GB>
/virtual-devices@100/channel-devices@200/disk@3
3. c1d4 <Unknown-Unknown-0001-80.00GB>
/virtual-devices@100/channel-devices@200/disk@4

Resize the ZVOL for kz1-data in the primary domain:

primary # zfs set volsize=100g space/ldg0-kz1-data.img

The new size is recognized in the Guest LDom immediately after the resize of the underlying
backing store on the primary domain.
*No* further actions are required on the Guest LDom.

root@Guest-ldg0# echo | format


Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c1d0 <Unknown-Unknown-0001-30.00GB>
/virtual-devices@100/channel-devices@200/disk@0
1. c1d1 <Unknown-Unknown-0001 cyl 282 alt 2 hd 96 sec 768>
/virtual-devices@100/channel-devices@200/disk@1
2. c1d3 <Unknown-Unknown-0001-40.00GB>
/virtual-devices@100/channel-devices@200/disk@3
3. c1d4 <Unknown-Unknown-0001-100.00GB>
/virtual-devices@100/channel-devices@200/disk@4

Check the status in the Kernel Zone. Please note that the resize of the underlying backing store is
not recognized without a reboot of the Kernel Zone.

Note: In Solaris 11.4, a reboot of the kernel zone is not necessary.


root@Kernel-Zone:~# echo | format
Searching for disks...done

AVAILABLE DISK SELECTIONS:


0. c1d1 <kz-vDisk-DISK-40.00GB>
/kz-devices@ff/disk@1
1. c1d3 <kz-vDisk-DISK-80.00GB>
/kz-devices@ff/disk@3

root@Kernel-Zone:~# zpool list


NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
data 79.8G 936K 79.7G 0% 1.00x ONLINE -
rpool 39.8G 6.44G 33.3G 16% 1.00x ONLINE -

root@Kernel-Zone:~# zpool get autoexpand data


NAME PROPERTY VALUE SOURCE
data autoexpand off local

After a reboot of the Kernel Zone, the new size of the underlying backing store is recognized by
format(1M) (shown in the disk name string), but the label still reflects the former size of 80 GB.
The label of the vdisk needs to be expanded, and then the zpool must be expanded as well using
"zpool online -e".

root@Kernel-Zone:~# init 6

// after reboot

root@Kernel-Zone:~# echo | format


Searching for disks...done

AVAILABLE DISK SELECTIONS:


0. c1d1 <kz-vDisk-DISK-40.00GB>
/kz-devices@ff/disk@1
1. c1d3 <kz-vDisk-DISK-100.00GB>
/kz-devices@ff/disk@3
Specify disk (enter its number): Specify disk (enter its number):

root@Kernel-Zone:~# format c1d3


selecting c1d3
[disk formatted, no defect list found]
/dev/dsk/c1d3s0 is part of active ZFS pool data. Please see zpool(1M).

FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format - format and analyze the disk
repair - repair a defective sector
show - translate a disk address
label - write label to the disk
analyze - surface analysis
defect - defect list management
backup - search for backup labels
verify - read and display labels
inquiry - show disk ID
volname - set 8-character volume name
!<cmd> - execute <cmd>, then return
quit

format> partition

PARTITION MENU:
0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
expand - expand label to use the maximum allowed space
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit

partition> expand
The expanded capacity is added to the unallocated space.
partition> print
Current partition table (original):
Total disk sectors available: 209698749 + 16384 (reserved sectors)

Part Tag Flag First Sector Size Last Sector


0 usr wm 256 79.99GB 167755741
1 unassigned wm 0 0 0
2 unassigned wm 0 0 0
3 unassigned wm 0 0 0
4 unassigned wm 0 0 0
5 unassigned wm 0 0 0
6 unassigned wm 0 0 0
8 reserved wm 209698783 8.00MB 209715166

partition> 0
Part Tag Flag First Sector Size Last Sector
0 usr wm 256 79.99GB 167755741

Enter partition id tag[usr]:


Enter partition permission flags[wm]:
Enter new starting sector[256]:
Enter partition size[167755486b, 167755741e, 81911mb, 79gb, 0tb]: $ << you may use the '$' sign to use all available new space on the vdisk

partition> print
Current partition table (unnamed):
Total disk sectors available: 209698749 + 16384 (reserved sectors)

Part Tag Flag First Sector Size Last Sector


0 usr wm 256 99.99GB 209698781
1 unassigned wm 0 0 0
2 unassigned wm 0 0 0
3 unassigned wm 0 0 0
4 unassigned wm 0 0 0
5 unassigned wm 0 0 0
6 unassigned wm 0 0 0
8 reserved wm 209698783 8.00MB 209715166

partition> label
Ready to label disk, continue? y

partition> q
format> q

root@Kernel-Zone:~# zpool online -e data c1d3

root@Kernel-Zone:~# zpool list


NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT
data 99.8G 1.27M 99.7G 0% 1.00x ONLINE -
rpool 39.8G 6.45G 33.3G 16% 1.00x ONLINE -

// the new size is now recognized in format as well as in the zpool 'data'

Please note that the new subcommand 'expand' is available in Solaris 11 only and is not referenced in the
format(1M) man page.

Note: Important fixes for this procedure are available in Oracle Solaris 11.2.8.4.0 (or greater) / Oracle
Solaris 10 150400-23 (SPARC)/ 150401-23 (x86)

REFERENCES

NOTE:1538587.1 - Failure to Add Vdisk to Guest Domain: 'LDom {ldomname} did not respond to request
to configure VIO device'
BUG:15479057 - SUNBT6699271-SOLARIS_11 DYNAMIC VIRTUAL DISK SIZE MANAGEMENT
NOTE:1676533.1 - ZFS Pools in Guest LDOMs may become UNAVAIL if format(1M) is used in the
Control/Primary/Service Domains
NOTE:1367098.1 - Oracle VM Server for SPARC (LDoms) Document Index
NOTE:1592785.1 - Vdc Driver - Configuring Vdisk Physical Block Size in vdc.conf on a Guest Ldom
NOTE:1458631.1 - How to add/remove a vdisk to/from a guest LDom dynamically
NOTE:1382180.1 - Solaris Does Not Automatically Handle an Increase in LUN Size
NOTE:1912796.1 - Zpool Online -e Does Not Increase The Pool Size
NOTE:1611403.1 - How to Add the Additional Storage Space Created from Dynamic LUN Expansion to
the Solaris Operating System
