
HOWTO: Migrate a UFS Root Filesystem to ZFS

26Sep08
Solaris 10 10/08 (u6) is due to be released within the next month or so (I don't have an exact date), and
one of the great features to come with it is ZFS boot. You can already use ZFS boot on Nevada, and
OpenSolaris defaults to ZFS, but this will be the first officially supported release of Solaris 10 to have
ZFS boot.
People have been waiting for this for a long time, and will naturally be eager to migrate their root
filesystem from UFS to ZFS. This article will detail how you can do this using Live Upgrade. This will
allow you to perform the migration with the least amount of downtime, and still have a safety net in case
something goes wrong.
These instructions are aimed at users with systems ALREADY running Solaris 10 10/08 (update 6) or
Nevada build 90 (snv_90) or later.

Create the Root zpool


The first thing you need to do is create your root zpool. It MUST exist before you can continue, so create
and verify it:
# zpool create rootpool c1t0d0s0
# zpool list
NAME       SIZE   USED  AVAIL  CAP  HEALTH  ALTROOT
rootpool    10G  73.5K  10.0G   0%  ONLINE  -
#
If the slice you've selected currently has another filesystem on it, e.g. UFS or VxFS, you'll need to use the
-f flag to force the creation of the pool, for example:
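Reusing the slice from above, that would look like this:
# zpool create -f rootpool c1t0d0s0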
You can use any name you like. I've chosen rootpool to make it clear what the pool's function is.

Create The Boot Environments (BE)


Now we've got our zpool in place, we can create the BEs that will be used to migrate the current root
filesystem across to the new ZFS filesystem.
Create the ABE as follows:
# lucreate -c ufsBE -n zfsBE -p rootpool

This command will create two boot environments, where:


 ufsBE is the name your current boot environment will be assigned, identified by the -c option. This can
be anything you like and is your safety net. If something goes wrong, you can always boot back to this BE
(unless you delete it).
 zfsBE is the name of your new boot environment that will be on ZFS, identified by the -n option, and...
 rootpool is the name of the zpool you created for the boot environment, passed with the -p option.

This command will take a while to run as it copies your ufsBE to your new zfsBE, and will produce
output similar to the following if all goes well:


# lucreate -c ufsBE -n zfsBE -p rootpool
Analyzing system configuration.
No name for current boot environment.
Current boot environment is named <ufsBE>.
Creating initial configuration for primary boot environment <ufsBE>.
The device <...> is not a root device for any boot environment; cannot get BE ID.
PBE configuration successful: PBE name <ufsBE> PBE Boot Device <...>.
Comparing source boot environment <ufsBE> file systems with the file
system(s) you specified for the new boot environment. Determining which
file systems should be in the new boot environment.
Updating boot environment description database on all BEs.
Updating system configuration files.
The device <...> is not a root device for any boot environment; cannot get BE ID.
Creating configuration for boot environment <zfsBE>.
Source boot environment is <ufsBE>.
Creating boot environment <zfsBE>.
Creating file systems on boot environment <zfsBE>.
Creating <zfs> file system for </> in zone <global> on <rootpool/ROOT/zfsBE>.
Populating file systems on boot environment <zfsBE>.
Checking selection integrity.
Integrity check OK.
Populating contents of mount point </>.
Copying.
Creating shared file system mount points.
Creating compare databases for boot environment <zfsBE>.
Creating compare database for file system </>.
Updating compare databases on boot environment <zfsBE>.
Making boot environment <zfsBE> bootable.
Creating boot_archive for /.alt.tmp.b-7Tc.mnt
updating /.alt.tmp.b-7Tc.mnt/platform/sun4u/boot_archive
Population of boot environment <zfsBE> successful.
Creation of boot environment <zfsBE> successful.
#
The x86 output is not much different; it'll just include information about updating GRUB.
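At this point you can also see the datasets lucreate has created in the pool. As a rough sketch of what
zfs list might show (the sizes are illustrative only, exact mountpoints vary by release, and you may also
see swap and dump volumes):
# zfs list
NAME                   USED  AVAIL  REFER  MOUNTPOINT
rootpool              4.6G   5.2G    21K  /rootpool
rootpool/ROOT         4.6G   5.2G    18K  legacy
rootpool/ROOT/zfsBE   4.6G   5.2G   4.6G  /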
Update: You may get the following error from lucreate:
ERROR: ZFS pool does not support boot environments.
This will be due to the label on the disk.
You need to relabel your root disks and give them an SMI label. You can do this using "format -e": select
the disk, then go to "label" and select "[0] SMI label" (see the example session after this paragraph). This
should be all that's needed, but whilst you're at it, you may as well check that your partition table is still
as you want it. If not, make your changes and label the disk again.
For x86 systems, you also need to ensure your disk has an fdisk table.
You should now be able to perform the lucreate.
The most likely reason for your disk having an EFI label is that it has probably been used by ZFS as a
whole disk before. ZFS uses EFI labels for whole-disk usage, however you need an SMI label for your
root disks at the moment (I believe this may change in the future).
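A sketch of the relabelling session (the exact prompts may differ slightly between releases, and the disk
selection obviously depends on your system):
# format -e
Searching for disks...done
AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <...>
Specify disk (enter its number): 0
format> label
[0] SMI Label
[1] EFI Label
Specify Label type[1]: 0
format> partition
partition> print        (check the slice layout is still what you want)
partition> quit
format> quit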
Once the lucreate has completed, you can verify your Live Upgrade environments with lustatus:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      yes    yes       no     -
zfsBE                      yes      no     no        yes    -
#
Activate and Boot from ZFS zpool
We're almost done. All we need to do now is activate our new ZFS boot environment and reboot:
# luactivate zfsBE
# init 6
NOTE: Ensure you reboot using "init 6" or "shutdown -i6". Do NOT use "reboot"
Remember, if you're on SPARC, you'll need to set the appropriate boot device at the OBP. luactivate
will remind you.
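luactivate prints the exact instructions for your system, but as a rough sketch the OBP side looks
something like this (the "disk1" alias is just an assumption; use the alias or full device path of the disk
holding your root pool):
ok printenv boot-device
ok setenv boot-device disk1
ok boot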
You can verify you're booted from the ZFS BE using lustatus:
# lustatus
Boot Environment           Is       Active Active    Can    Copy
Name                       Complete Now    On Reboot Delete Status
-------------------------- -------- ------ --------- ------ ----------
ufsBE                      yes      no     no        yes    -
zfsBE                      yes      yes    yes       no     -
#
At this point you can delete the old ufsBE if all went well. You can also re-use that old disk/slice for
anything you want, like adding it to the rootpool to create a mirror (see the example below). The choice is
yours, but now you have your system booted from ZFS and all its wonderfulness is available on the root
filesystem too.
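As a sketch only, assuming the freed UFS slice is c1t1d0s0 (the device name is an assumption, and the
slice must carry an SMI label and be at least as large as the existing one), attaching it as a mirror looks
roughly like this:
# zpool attach rootpool c1t0d0s0 c1t1d0s0
Then install the boot block on the newly attached disk so it is bootable too:
SPARC: # installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c1t1d0s0
x86:   # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1t1d0s0
Wait for "zpool status rootpool" to show the resilver has completed before relying on the mirror.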
Tagged with: Software, Operating Systems, OpenSolaris, Solaris, General, HOWTO, ZFS, filesystem,
root and boot

Comments
1 Aidan • 3:03 AM Sunday, 2 Nov 2008
Wow - that's really easy!
2 Joe the System Administrator • 3:11 AM Monday, 3 Nov 2008
I have VMware Fusion installed on a client's computer, and we performed a standard upgrade using a
UFS filesystem which worked flawlessly. But then we had problems with booting ZFS, here is what
occurred:
1) The lucreate command was used to migrate the UFS partition over to ZFS (worked absent any error
messages)
2) The luactivate command was used to activate the new ZFS based partition (worked absent any error
messages)
3) We used the "init 6" command (NOT "reboot") to restart the system (worked absent any error
messages)
4) We then logged in after a successful boot, only to find that an "lustatus" command revealed we had
actually booted from the UFS partition, and not the ZFS partition.
What went wrong?
3 Colin • 9:12 AM Monday, 3 Nov 2008
Hi Joe.
Sounds like you may not have selected the correct boot device. You didn't state if you were on SPARC or
x86. If SPARC, you will need to update your OBP boot device to point to the new partition. If x86,
GRUB should have been updated, so you should only need to select it from the menu (I can't remember if
it's changed to the default entry or not).
HTH
Colin
4 Chris • 2:27 AM Wednesday, 5 Nov 2008
How can I do this if I have the following disk structure?
/dev/dsk/c1t0d0s0 20G 17G 3.2G 84% /
/dev/dsk/c1t0d0s6 20G 16G 3.5G 82% /usr
/dev/dsk/c1t0d0s5 20G 14G 5.7G 71% /var
can I assign whole disk c1t1d0 to rootpool?
or do I need to setup 3 separate pools for this?
zpool create rootpool c1t1d0 ?
5 Colin • 10:26 AM Wednesday, 5 Nov 2008
Chris: Unfortunately, you MUST use slices for the root pool due to a limitation with ZFS boot.
Accordingly, you'll need to ensure you have an SMI label on your disk (default unless it's been used by a
non-root ZFS pool before) and partition it. If you want to assign all of the space to slice 0, then you can.
There's no point slicing the disk for /usr and /var or creating separate pools as these filesystems will be
migrated into the root pool (I don't think you can change this).
You can then create your rootpool using "zpool create rootpool c1t1d0s0" and then perform your
migration across to this device.
6 rbuick • 8:01 PM Wednesday, 12 Nov 2008
I attempted to move from UFS to ZFS and all I got was a grub> prompt; see the link below, for which
I've received no response so far.
http://www.opensolaris.org/jive/thread.jspa?threadID=82101&tstart=0
Have you any thoughts?
Thanks for your time.
7 Colin • 12:22 PM Thursday, 13 Nov 2008
Sounds like you may have somehow trashed your GRUB config, or GRUB can't find your menu.lst for
some reason.
I've posted a suggestion on the forum thread.
8 veera • 10:29 AM Tuesday, 20 Jan 2009
hi,
I am not able to run the lucreate command while upgrading to ZFS.
I have 4 disks in my SPARC machine; 2 disks are used for UFS Solaris 10, and I'm trying to build ZFS now.
I have created a zpool with the remaining 2 disks, but the lucreate command is throwing an error:
ERROR: ZFS pool does not support boot environments.
Can you please help with this?
Veera.
9 Colin • 10:40 AM Tuesday, 20 Jan 2009
Hi Veera
This is actually quite simple to solve - relabel your disks with "SMI Label". You can do this by running...
# format -e ⇒ select the disk ⇒ label ⇒ [0] SMI label.
Then check your partition layout is as desired and label the disk again.
This is actually cropping up quite a bit now, so I've updated the instructions above to include this step.
10 veera • 12:59 PM Wednesday, 28 Jan 2009
Hi colin,
I got the solution; anyway, I have installed the whole OS with ZFS and I have successfully created a
Solaris 9 OS under zones.
Thanks for the help,
Veera.
11 Amit • 12:20 AM Wednesday, 11 Feb 2009
Colin,
This is what happened when I tried it:
# zpool list
NAME       SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
rootpool   120G  111K   120G   0%  ONLINE  -
# lucreate -c c0t0d0 -n zfsbe -p rootpool
ERROR: unknown option -- p
Usage: lucreate -n BE_name [ -A BE_description ] [ -c BE_name ]
[ -C ( boot_device | - ) ] [ -f exclude_list-file [ -f ... ] ] [ -I ]
[ -l error_log-file ] [ -M slice_list-file [ -M ... ] ]
[ -m mountPoint:devicePath:fsOptions [ -m ... ] ] [ -o out_file ]
[ -s ( - | source_BE_name ) ] [ -x exclude_dir/file [ -x ... ] ] [ -X ]
[ -y include_dir/file [ -y ... ] ] [ -Y include_list-file [ -Y ... ] ]
[ -z filter_list-file ]
# uname -a
SunOS wapofindb02 5.10 Generic_137137-09 sun4u sparc SUNW,Sun-Fire-V490
Any ideas on why I can't use the -p flag? It's not listed as an option on the man page either...
Thanks!!
12 Colin • 9:52 AM Wednesday, 11 Feb 2009
Amit: Sounds like you're either still running an earlier release of Solaris 10 (see /etc/release) or you've
still got an old revision of the Live Upgrade pkgs installed.
Remove all the Live Upgrade pkgs and re-install from the Solaris 10 10/08 media and try again.
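As a rough sketch (these are the standard Live Upgrade package names; the media path is an assumption
and SUNWlucfg only exists in newer releases):
# pkgrm SUNWlucfg SUNWluu SUNWlur
# pkgadd -d /cdrom/cdrom0/Solaris_10/Product SUNWlucfg SUNWlur SUNWluu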
13 fugitive • 10:11 PM Monday, 27 Apr 2009
Hi
The doc was good and I was able to migrate UFS to ZFS with Live Upgrade, with some hiccups due to
patches. But I have one more non-root file system, /zones, on my root disk. Is it possible to convert it to
ZFS with Live Upgrade in the root pool?
I read somewhere that it cannot do that. Is that correct, and if so, how can we move other non-root file
systems across to ZFS?
14 Colin • 10:50 AM Tuesday, 28 Apr 2009
Yes, I believe that is correct. However, you can easily move your zone onto ZFS by creating the ZFS
pool/filesystem and then using "zoneadm -z <zonename> move" to move your zone, for example:
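As a sketch only (the dataset, mountpoint and zone name here are assumptions, and the zone must be
halted before it can be moved):
# zfs create rootpool/zones
# zfs set mountpoint=/zones rootpool/zones
# zoneadm -z myzone halt
# zoneadm -z myzone move /zones/myzone
# zoneadm -z myzone boot
If /zones is currently a UFS filesystem, unmount and retire it (or pick a different mountpoint) before
setting the ZFS mountpoint.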
15 Fred Lucas • 8:12 PM Monday, 22 Jun 2009
Question for you guys. When my server is built via jumpstart, the filesystem is laid down on the first disk
(c0t0d0) with the following layout. I'm wondering how I'd go about putting the ZFS rootpool on this disk
and mirroring it to the c0t1d0 disk. Anyone have any ideas?
Part Tag Cylinders
0 root 0-1611
1 swap 1611-3220
2 backup 0-14086
3 unassigned 0
4 unassigned 0
5 var 3221-5635
6 usr 5636-8050
7 home 8056-14086
16 Colin • 8:50 PM Monday, 22 Jun 2009
Fred: If your system is still to be built, your best option is to change your jumpstart configuration and
configure the rootpool mirror directly in the jumpstart profile (see the Installation Guide on
docs.sun.com).
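As a sketch, the relevant profile keywords for a mirrored ZFS root look something like this (the disk
names follow Fred's layout and the BE name is an assumption; check the Installation Guide for the exact
syntax of your release):
install_type  initial_install
pool          rootpool auto auto auto mirror c0t0d0s0 c0t1d0s0
bootenv       installbe bename zfsBE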
If your system is already built, convert your current UFS root disk to ZFS on the other disk and then
attach this disk as a mirror (it's the default for "zpool attach").
If you need any more specific details or assistance, try the ZFS:Discuss forum/alias
17 Fred Lucas • 7:08 PM Wednesday, 24 Jun 2009
What do you folks make of this error I get after jumpstarting the M3000? A copy of the profile is below
as well. Any help is appreciated. Thanks.
[Ed. Removed superfluous info ]
18 Colin • 8:03 PM Wednesday, 24 Jun 2009
Fred: your latest comment isn't really related to this post. Please pose your question to your nearest Sun
support centre, or try the OpenSolaris forums.
19 Dilip • 10:53 AM Thursday, 10 Dec 2009
Thanks for the help. The EFI label was snagging me.
20 Cassandra • 5:49 PM Friday, 19 Mar 2010
I think this may be along the lines of Chris' question, but I would like to be clear:
does this root pool contain all of the root filesystem?
for example, all of the file systems under root:
/
/usr
/var
/opt
etc etc
are just one zfs file system (aka root "/") and that one file system can have the options/attributes I chose?
I was imagining/hoping that each of these would become their own filesystem so I could set
quotas/reservations. Is this not the case?
I am just worried about /var growing too large, and I would like to place a cap on it!
21 Colin • 4:49 PM Saturday, 20 Mar 2010
@Cassandra: YES, all of your OS filesystems (see the lucreate (1M) man page) will be consolidated onto
the same ZFS filesystem as the root filesystem. This was the recommended behaviour when this
functionality was introduced. I believe the latest version of Solaris 10 now allows you to specify a
separate /var at installation time, but I don't believe this functionality is in LU yet.
The only way I can see you getting /var on a separate ZFS filesystem is by installing the OS from scratch,
or by creating a new ZFS filesystem, copying the data across, and modifying the properties so it mounts
onto /var at boot, along these lines:
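A very rough sketch only (the dataset name, quota value and copy method are assumptions; /var needs to
be quiescent, e.g. single-user mode, while you copy it, and you should test this on a non-production box
first):
# zfs create -o mountpoint=legacy -o quota=8G rootpool/var
# mount -F zfs rootpool/var /mnt
# cd /var && find . -print | cpio -pdm /mnt
Then add a line like this to /etc/vfstab so it mounts on /var at boot:
rootpool/var  -  /var  zfs  -  yes  -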
Comments are now closed. If you have any further questions or comments, feel free to send them to me
directly.
