Sun Virtualization: Solaris 10 Logical Domains Administration
Student Guide
SA-345-S10 B
D61784GC20
Edition 2.0
April 2010
D66810
Disclaimer
This document contains proprietary information, is provided under a license agreement containing restrictions on use and
disclosure, and is protected by copyright and other intellectual property laws. You may copy and print this document solely for
your own use in an Oracle training course. The document may not be modified or altered in any way. Except as expressly
permitted in your license agreement or allowed by law, you may not use, share, download, upload, copy, print, display,
perform, reproduce, publish, license, post, transmit, or distribute this document in whole or in part without the express
authorization of Oracle.
The information contained in this document is subject to change without notice. If you find any problems in the document,
please report them in writing to: Oracle University, 500 Oracle Parkway, Redwood Shores, California 94065 USA. This
document is not warranted to be error-free.
This training manual may include references to materials, offerings, or products that were previously offered by Sun Microsystems, Inc.
Trademark Notice
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective
owners.
AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro
Devices. Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used
under license and are trademarks or registered trademarks of SPARC International, Inc. UNIX is a registered trademark
licensed through X/Open Company, Ltd.
Preface
Course Map
The following course map enables you to see what you have
accomplished and where you are going in reference to the course goals.
[Course map figure — module flow: System Management Changes in Solaris 10; Configuring the Control and Service Domain; Creating a Guest Domain; Performing Logical Domains Administration; Managing Advanced Logical Domains Configurations]
Refer to the Sun Learning Services catalog for specific information and registration.
Introductions
Now that you have been introduced to the course, introduce yourself to
the other students and the instructor, addressing the following items:
● Name
● Company affiliation
● Title, function, and job responsibility
Conventions
The following conventions are used in this course to represent various
training elements and alternative learning resources.
Icons
Note – Indicates additional information that can help students but is not
crucial to their understanding of the concept being described. Students
should be able to understand the concept or complete the task without
this information. Examples of notational information include keyboard shortcuts and minor system adjustments.
Typographical Conventions
Courier is used for the names of commands, files, directories,
programming code, and on-screen computer output; for example:
Use ls -al to list all files.
system% You have mail.
Courier bold is used for characters and numbers that you type; for
example:
To list the files in this directory, type:
# ls
Courier bold is also used for each line of programming code that is
referenced in a textual description; for example:
1 import java.io.*;
2 import javax.servlet.*;
3 import javax.servlet.http.*;
Notice the javax.servlet interface is imported to allow access to its
life cycle methods (Line 2).
Palatino italics is used for book titles, new words or terms, or words that
you want to emphasize; for example:
Read Chapter 6 in the User’s Guide.
These are called class options.
Module 1
[Figure: multiple applications (App A, App B, App C, ...) running on separate OS instances over a hypervisor on a single server]
There are multiple roles that logical domains can perform (see Figure 1-2), such as the following:
● Control domain – The control domain controls the Logical Domains
environment. It is used to configure machine resources and guest
domains, and provides services necessary for domain operation,
such as virtual console service. The control domain also normally
acts as a service domain. The Logical Domains Manager software is
located within the control domain. A system has exactly one control
domain. It is the first domain configured on the system and is also
known as the primary domain.
● I/O domain – An I/O domain has direct ownership of and direct
access to physical I/O devices connected to the PCI bus, such as
local network interfaces and disk drives, and devices connected to
PCI adapters. It is normally a prerequisite role for a service domain.
● Service domain – A service domain provides virtual services, such as
virtual disk drives, virtual network switches, and virtual console
services to guest domains. The service and I/O roles are typically
configured on the same domain.
A domain can also have multiple roles. For example, the initial domain
(the “primary” domain) usually has the role of control, I/O, and service
domains (see Figure 1-3).
[Figure 1-3: a single domain combining the control, I/O, and service roles — the Logical Domains Manager, device driver server and client, system controller interface, and workload running above the hypervisor and the physical resources (CPUs, LANs, storage, network)]
Now that you have a better understanding of what the Logical Domains
product is, how it is positioned in Sun Microsystems virtualization
technologies, and how it utilizes server virtualization technology, let’s
look at the key architectural components that enable you to create and
configure a Logical Domains environment.
The Hypervisor
The hypervisor is a firmware layer on the Flash PROM of the server
motherboard, which partitions a physical system into one or more virtual
machines (see Figure 1-4). The SPARC sun4v architecture contains a new
hyper-privileged mode that enables the hypervisor to access and control
all platform devices. In this role, the hypervisor abstracts underlying
hardware and exposes a subset of system resources to each logical
domain. In fact, logical domains can only access the platform resources that the hypervisor exposes to them.
[Figure 1-4: the hypervisor firmware layer partitioning the hardware resources (CPU, memory, and I/O) between a control domain and a guest domain]
[Figure: the virtual console service redirecting each domain's OpenBoot ({0}ok) console through the hypervisor to the physical resources and I/O devices]
When you boot the logical domain, the OBP passes its current device tree
(both real and virtual devices) to the Solaris OS kernel. At that point the
/etc/path_to_inst file is built or updated. It contains the mapping
between how the OBP represents things and how the Solaris OS
represents them.
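For illustration, entries in /etc/path_to_inst pair a physical or virtual device path with an instance number and driver name, as in these two lines taken from the Module 2 lab output:
"/virtual-devices@100/console@1" 0 "qcn"
"/pci@400" 0 "px"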
Note – Each domain runs the same version of the OpenBoot PROM.
[Figure: a combined control/service/I/O domain and a guest domain running on the hypervisor firmware over the hardware (CPU, memory, and I/O)]
Note – To have all the Logical Domains 1.3 features, you need to run the
Solaris 10 10/09 OS.
[Figure: two Solaris 10 domains — a virtual device service and device driver in one, reaching physical devices A and B, and an application in the other using virtual devices through the virtual nexus driver]
Now that you are familiar with the Logical Domains architecture, let’s
look at the steps involved in planning for your Logical Domains
environment.
For example, let’s assume our primary design goal is to plan for high
availability. With this goal in mind, we might configure our new Logical
Domains production environment to look something like the example
shown in Figure 1-10.
[Figure 1-10: a high-availability configuration — two service domains, each with its own I/O bridge (leaf A and leaf B) and NIC, providing redundant paths to the network and the shared storage]
As you can see, we are planning for high availability by having two
service domains. Each service domain provides access to the same storage
and to the same network. The guest domain uses multipathing to access
the storage and the network through either of the service domains. That
way if a service domain is not available, the guest domain still has access
to the storage and the network through the other service domain.
The number of logical domains you create per server is based on two
factors:
● The purpose of the logical domain (what applications you intend to
run).
● Available resources in the underlying system.
System Requirements
Note – For additional information about the hardware platforms that can
run Logical Domains and their capabilities, see Section 1 - Hardware of
the LDoms Community Cookbook
(http://wikis.sun.com/display/SolarisLogicalDomains/Section+1+%28NEW%29+-+Hardware).
Discussion – Take a moment to study the device locations on the rear and front panels of the Sun SPARC Enterprise T5140 server in the figures that follow. You will need this information for the first lab activity. Also, review the device pathing information that is provided for the T5140.
The Sun SPARC Enterprise T5140 Server begins the next wave of high-efficiency systems based on the second generation of Sun's Chip Multithreading Technology (CMT). The Sun SPARC Enterprise T5140 server
utilizes multiple UltraSPARC-T2 Plus processors. Because the system has
multiple chips, it uses cache coherency protocols via connections between
the chips to manage the sharing of data. Figure 1-11 and Figure 1-12 show
the location of devices accessed from the front and rear panels.
[Figure 1-11: Sun SPARC Enterprise T5140 server device locations, front panel — locator LED/locator button, service action required LED, power/OK LED, power button, DVD drive, disk drives HDD0–HDD3, disk drive map, power supply service LED, system overtemperature LED, fan module service LED]
Table 1-1 provides the device and slot mapping for the T5140 (1U) and
T5240 (2U) systems.
The T5140/T5240 have all Ethernet ports on pci@500 and all the hard
disks are located at pci@400. Both the T5140 and T5240 systems can be
configured with two I/O domains, with the on-board network interfaces
connected to one I/O domain and the internal disks connected to the
other.
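As a sketch, assuming a second, already-created I/O domain (named alt-io here purely for illustration), splitting the buses uses the add-io and remove-io subcommands shown later in this course:
primary# ldm remove-io pci@500 primary
primary# ldm add-io pci@500 alt-io
Removing a bus from the primary domain initiates a delayed reconfiguration, so the primary domain must be rebooted for the change to take effect.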
Note – For device path information for other Sun platforms, see
Solaris[TM] Operating System: Matrix of Recognized Device Paths
(http://sunsolve.sun.com/search/document.do?assetkey=1-61-208209-1).
[Figure: CMT thread interleaving — across threads 1 through 8, each thread's compute time (C) overlaps the cache/memory latency (M) of the other threads over time, keeping the core busy]
Running threads from the same core in separate domains can lead to
unpredictable and poor performance. For that reason, when creating a
domain with eight virtual CPUs, ensure that the virtual CPUs come from
the same core to provide the best performance. Because the Logical
Domains Manager adds from the lowest virtual CPU and removes from
highest, in terms of the CPU ID, it is possible to create several domains in
a way that can break the core/thread affinity model.
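For example, assuming an UltraSPARC T2 Plus system with eight threads per core (as the MAU CPUSET listings later in this course show), allocating virtual CPUs in multiples of eight keeps each domain aligned on core boundaries:
primary# ldm set-vcpu 8 primary
primary# ldm add-vcpu 8 ldom1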
Note – Depending on the CPU chip and system type, the number of CPU
threads per CPU core can be different. See the Sun SPARC Enterprise
Server table in the LDom Community Cookbook, Section 1, System
Overview for a listing of number of cores and threads by system
(http://wikis.sun.com/display/SolarisLogicalDomains/Section+1+%28NEW%29+-+Hardware).
As part of your planning you will also want to determine the amount of
memory required for each domain. Logical Domains software does not
impose a minimum memory size limitation when creating a domain. The
memory size requirement is a characteristic of the guest operating system.
For recommended and minimum size memory requirements, refer to the
installation guide for the operating system you are using.
(*) If the domain is also a service domain, the service domain guideline applies.
Logical domains have been assigned the following block of 512K MAC addresses:
00:14:4F:F8:00:00 - 00:14:4F:FF:FF:FF
The Logical Domains Manager uses the lower half of this range for automatic MAC address allocation:
00:14:4F:F8:00:00 - 00:14:4F:FB:FF:FF
You can use the upper half of this range for manual MAC address allocation:
00:14:4F:FC:00:00 - 00:14:4F:FF:FF:FF
When you do not specify a MAC address when you create a logical
domain or a network device, the Logical Domains Manager automatically
allocates and assigns a MAC address to that logical domain or network
device.
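For example, to assign a manual MAC address from the upper half of the range when adding a virtual network device (the address and names here are illustrative):
primary# ldm add-vnet mac-addr=00:14:4f:fc:00:01 vnet0 primary-vsw0 ldom1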
As part of this planning step you will want to gather as much information
as possible as to what actions are required to complete each task. A good
practice is to read through the appropriate sections of the Logical
Domains Administration Guide and make note of any considerations that
might impact your plan.
In Module 2 of this course you are given the opportunity to complete each
of the installation tasks using the Logical Domains 1.3 software and a
T5140 server.
As part of this planning step you will want to ensure that your
configuration tasks are complete and fully support your design
requirements. Review the configuration sections of the Logical Domains
Administration Guide and make note of any considerations that might
impact your design or plan.
Once you have completed your plan and have had it approved by management, you are ready to implement your design. Time spent up front planning your approach will make the implementation and maintenance of your new Logical Domains environment go more smoothly.
Module 2
Introduction
After you have completed your Logical Domains planning and have a
good idea of what you want your Logical Domains environment to look
like, the next step is to install the different software components required
to install and enable the Logical Domains (LDoms) software on your
system. In this module you will be shown how to enable the LDoms 1.3
software on a T5140 server. You will then be given the opportunity to
apply what you have learned by completing a lab exercise.
Before we get started, there are a few things you need to know. Using the
LDoms software requires the following components:
● Supported server running an operating system at least equivalent to
the Solaris 10 10/09 OS with any patches recommended in
“Required Software and Patches” in the Logical Domains 1.3 Release
Notes.
● System firmware Version 7.2.4 at a minimum for your Sun
UltraSPARC T2 or T2 Plus platform; however, Version 7.2.7 is
recommended.
● Logical Domains 1.3 software installed and enabled on the server.
If you need to upgrade your system to the Solaris 10 OS required for the
1.3 version of the Logical Domains software, refer to “Required Software
and Patches” in Logical Domains 1.3 Release Notes. The Release Notes will
also list any required and recommended patches.
You can use any normal process of installation for the server, including
JumpStart, network, DVD or CD, or upgrading from a previous version.
Select the Entire Distribution for a regular installation or SUNWCall for a
JumpStart installation.
Note – For complete instructions for upgrading the Solaris OS, refer to the
Solaris 10 10/09 Release and Installation Collection
(http://docs.sun.com/app/docs/coll/1236.11).
Once you know what version of the firmware you need, you can check the version that is currently installed on your system.
Note – Servers now use the ILOM service processor, which has
commands different from the ALOM system controller. However, the
ILOM has an ALOM compatibility mode so you can still use the ALOM
commands which remain unchanged.
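For example, assuming the ILOM CLI, the installed host firmware version can be read from the /HOST properties (output abbreviated; the version string shown is illustrative):
-> show /HOST sysfw_version
 /HOST
    Properties:
        sysfw_version = Sun System Firmware 7.2.6 ...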
If you don’t have the right version, you will need to upgrade the system
firmware. In this section you are shown two ways to upgrade the system
firmware:
● From the operating system
● From a local TFTP server using system controller commands
You can find system firmware for your platform at the SunSolve site (http://sunsolve.sun.com).
To upgrade the system firmware from the operating system, perform the
following steps:
1. In the system console window, access the system controller.
2. From the system controller, open a console session on the system.
Using ILOM:
-> start /SP/console
The steps for upgrading the system firmware with a TFTP server are as
follows:
1. Shut down and power off the host server operating system.
# shutdown -i0 -g0 -y
2. Use the #. escape sequence to return to the system controller.
3. Power down the system.
Using ILOM:
-> cd /SYS
-> stop /SYS
Do you want to stop SYS (y/n)? y
Using ALOM:
sc> poweroff -fy
4. Upgrade the system firmware, depending on your server.
Refer to your platform documentation for information about how to
update your firmware.
Sample ILOM command:
-> load -source tftp://IP-address/Firmware_File
Sample ALOM command:
sc> flashupdate -s IP-address -f path/Firmware_File
Where:
● IP-address is the IP address of your TFTP server.
● path is the path to the directory on your TFTP server where you placed the system firmware image.
5. Reset the service processor.
Using ILOM:
-> cd /SP
-> reset
If you use the install-ldm installation script, you have several choices
to specify how you want the script to run. Each choice is described in the
procedures that follow.
● Using the install-ldm script with no options does the following
automatically:
● Checks that the Solaris OS release is Solaris 10 10/09 OS at a
minimum
● Verifies that the package subdirectories SUNWldm/ and
SUNWldmp2v/ are present
● Verifies that the prerequisite Solaris Logical Domains driver
packages, SUNWldomr and SUNWldomu, are present
● Verifies that the SUNWldm and SUNWldmp2v packages have not
been installed
● Installs the Logical Domains Manager 1.3 software
● Verifies that all packages are installed
● If the Solaris Security Toolkit (SUNWjass) is already installed,
you are prompted to harden the Solaris OS on the control
domain.
● Determine whether to use the Logical Domains Configuration
Assistant (ldmconfig) to perform the installation.
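A minimal invocation, assuming the software has been unpacked into the LDoms_Manager-1_3 directory shown later in this module, looks like this:
# cd LDoms_Manager-1_3/Install
# ./install-ldm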
The steps for installing the Logical Domains Manager software packages
manually are as follows:
1. Use the pkgadd command to install the SUNWldm.v and SUNWldmp2v
packages.
For more information about the pkgadd command, see the
pkgadd(1M) man page.
The -G option installs the package in the global zone only and the -d
option specifies the path to the directory that contains the SUNWldm.v
and SUNWldmp2v packages.
# pkgadd -Gd . SUNWldm.v SUNWldmp2v
If the ldmd daemon has been disabled, you can use the following
procedure to enable it.
1. Use the svcadm command to enable the Logical Domains Manager
daemon, ldmd.
# svcadm enable ldmd
For more information about the svcadm command, see the
svcadm(1M) man page.
2. Use the ldm list command to verify that the Logical Domains
Manager is running.
# ldm list
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active ---c- SP 64 3264M 0.3% 19d 9m
The ldm list command should list all domains that are currently
defined on the system. In particular, the primary domain should be
listed and be in the active state. The sample output presented in step
2 shows that only the primary domain is defined on the system.
By default, the root user has read and write authorization while all other
users have read-only authorization.
You can set up authorization and profiles and assign roles for user
accounts using the Solaris OS Role-Based Access Control (RBAC) adapted
for the Logical Domains Manager. For more information about RBAC,
refer to the Solaris 10 System Administrator Collection
(http://docs.sun.com/app/docs/coll/47.16)
To find out more about how to add authorization for additional users and
manage user profiles on your Logical Domains system as well as how to
enable other security features, see the “Security” chapter in the Logical
Domains 1.3 Administration Guide.
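As a sketch, the Logical Domains authorizations can be granted with the standard RBAC tools (the user name jdoe is hypothetical):
# usermod -A solaris.ldoms.read jdoe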
Note – For a complete listing of the ldm command subcommands, see the
ldm(1M) man page.
The Logical Domains Manager daemon, ldmd, must be running to use the
Logical Domains Manager CLI.
Usage:
ldm [--help] command [options] [properties] operands
ldm -V
Options:
-V Display version information
bindings
list-bindings [-e] [-p] [<ldom>...]
services
list-services [-e] [-p] [<ldom>...]
constraints
list-constraints ([-x] | [-e] [-p]) [<ldom>...]
domain ( dom )
add-domain (-i <file> | [mac-addr=<num>] [hostid=<num>]
[failure-policy=<ignore|stop|reset|panic>]
[master=<master_ldom1,...,master_ldom4>] <ldom>...)
set-domain (-i <file> | [mac-addr=<num>] [hostid=<num>]
[failure-policy=<ignore|stop|reset|panic>]
[master=<master_ldom1,...,master_ldom4>] <ldom>)
remove-domain (-a | <ldom>...)
list-domain [-e] [-l] [-o <format>] [-p] [<ldom>...]
'format' is one or more of:
console,cpu,crypto,disk,domain,memory,network,physio,
resmgmt,serial,status
start-domain (-a | -i <file> | <ldom>...)
stop-domain [-f] (-a | <ldom>...)
bind-domain (-i <file> | <ldom>)
unbind-domain <ldom>
panic-domain <ldom>
migrate-domain [-n] [-p <password_file>] <source_ldom>
[<user>@]<target_host>[:<target_ldom>]
io
add-io [bypass=on] <bus> <ldom>
remove-io <bus> <ldom>
crypto ( mau )
add-crypto <number> <ldom>
set-crypto <number> <ldom>
remove-crypto <number> <ldom>
memory ( mem )
add-memory <number>[GMK] <ldom>
set-memory <number>[GMK] <ldom>
remove-memory <number>[GMK] <ldom>
operation
cancel-operation (migration | reconf) <ldom>
policy
add-policy [enable=yes|no] [priority=<value>]
reconf
cancel-reconf <ldom>
spconfig ( config )
add-spconfig [-r <autosave>] <config_name>
set-spconfig <config_name>
remove-spconfig [-r] <config_name>
list-spconfig [-r [<autosave>]]
variable ( var )
add-variable <var_name>=<value>... <ldom>
set-variable <var_name>=<value>... <ldom>
remove-variable <var_name>... <ldom>
list-variable [<var_name>...] <ldom>
vconscon ( vcc )
add-vconscon port-range=<x>-<y> <vcc_name> <ldom>
set-vconscon port-range=<x>-<y> <vcc_name>
remove-vconscon [-f] <vcc_name>
vconsole ( vcons )
set-vconsole [port=[<port-num>]] [group=<group>]
[service=<vcc_server>] <ldom>
vcpu
add-vcpu <number> <ldom>
set-vcpu <number> <ldom>
remove-vcpu <number> <ldom>
vdisk
add-vdisk [timeout=<seconds>] <disk_name> <volume_name>@<service_name> <ldom>
set-vdisk [timeout=<seconds>] [volume=<volume_name>@<service_name>] <disk_name> <ldom>
remove-vdisk [-f] <disk_name> <ldom>
vdiskserver ( vds )
add-vdiskserver <service_name> <ldom>
remove-vdiskserver [-f] <service_name>
vdpcc ( ndpsldcc )
add-vdpcc <vdpcc_name> <service_name> <ldom>
remove-vdpcc [-f] <vdpcc_name> <ldom>
vdpcs ( ndpsldcs )
add-vdpcs <vdpcs_name> <ldom>
remove-vdpcs [-f] <vdpcs_name>
vdiskserverdevice ( vdsdev )
add-vdiskserverdevice [-f] [options={ro,slice,excl}]
[mpgroup=<mpgroup>] <backend> <volume_name>@<service_name>
set-vdiskserverdevice [-f] [options=[{ro,slice,excl}]]
[mpgroup=[<mpgroup>]]
<volume_name>@<service_name>
remove-vdiskserverdevice [-f] <volume_name>@<service_name>
vnet
add-vnet [mac-addr=<num>] [mode=hybrid] [pvid=<pvid>]
[vid=<vid1,vid2,...>] [mtu=<mtu>] [linkprop=phys-state]
[id=<networkid>] <if_name> <vswitch_name> <ldom>
set-vnet [mac-addr=<num>] [mode=[hybrid]] [pvid=[<pvid>]]
[vid=[<vid1,vid2,...>]] [mtu=[<mtu>]] [linkprop=[phys-state]]
[vswitch=<vswitch_name>] <if_name> <ldom>
remove-vnet [-f] <if_name> <ldom>
vswitch ( vsw )
add-vswitch [default-vlan-id=<vid>] [pvid=<pvid>] [vid=<vid1,vid2,...>]
[mac-addr=<num>] [net-dev=<device>] [linkprop=phys-state]
[mode=<mode>] [mtu=<mtu>] [id=<switchid>] <vswitch_name> <ldom>
set-vswitch [pvid=[<pvid>]] [vid=[<vid1,vid2,...>]] [mac-addr=<num>]
[net-dev=[<device>]] [mode=[<mode>]] [mtu=[<mtu>]]
[linkprop=[phys-state]] <vswitch_name>
Verb aliases:
Alias Verb
----- -------
rm remove
ls list
Command aliases:
Alias Command
----- -------
cancel-op cancel-operation
create add-domain
modify set-domain
destroy remove-domain
remove-reconf cancel-reconf
start start-domain
stop stop-domain
bind bind-domain
unbind unbind-domain
panic panic-domain
migrate migrate-domain
#
Note – You can also find the ldm command and subcommand information
in the Logical Domains 1.3 Reference Manual located in the files installed on
your workstation for this class.
Preparation
During this exercise you work with lab equipment located in Sun’s
Remote Lab Data Center (RLDC). The RLDC is remote from your present
location. To successfully complete the following tasks, you should
familiarize yourself with the RLDC lab topology. Figure 2-1 shows the
major components that make up the lab topology.
Table 2-1 shows the IP addresses assigned to the RLDC lab host machine’s
system controller (SC) network management (net mgmt) port, the SC
login credentials, and the server login credentials. Note that the IP
address subnet assignment varies based on the lab to which you are
assigned. Please contact your instructor for IP address information.
Table 2-1 Host Machine’s System Controller IP Address Assignments
Sun uses the Sun Secure Global Desktop (SGD) to provide you with access
to the remote lab environment. To access the SGD, open the following
path in your Web browser:
http://rldc.sun.net
Click the LOGIN button and enter the user name and password provided
by your instructor. When the Security Warning dialog box appears, click
the Accept button. Figure 2-2 shows a typical SGD session.
The SGD places you on the lab “landing pad” from which you can access
your lab equipment.
The SGD left panel contains tools that allow you to access your lab
systems. The Gnome tools open remote desktops on the landing pad or lab
systems. The Terminal tools open terminal windows on the landing pad or
lab systems. The Console tool opens a remote console on the lab system.
The Administration Console is for shadowing your lab partner.
[Figure: an operating system running directly on the physical resources (CPUs, LANs, storage)]
Note – The sample command output shown for the following commands is from a Sun SPARC Enterprise T5140 server. The server details and device pathing information might differ based on the system disks and architecture of the machine you are using.
5. In the landing pad desktop session, open a Web browser and enter
the IP address of the system controller in your assigned lab server to
open the Sun Integrated Lights Out Manager WebGUI.
Log in using the appropriate credentials shown in Table 2-1.
6. Click the System Information tab to determine the following:
Product Name: T5140
System Firmware: Sun System Firmware 7.2.6
7. Click the Remote Control tab and Remote Power Control tab. Verify
that the host is currently on. If not, select the Power On action and
click Save. Click the OK button in the confirmation dialog box.
"/virtual-devices@100/channel-devices@200/virtual-channel@0" 2 "vldc"
"/virtual-devices@100/channel-devices@200/virtual-channel-client@1" 3 "vldc"
"/virtual-devices@100/console@1" 0 "qcn"
"/virtual-devices@100/ncp@6" 0 "ncp"
"/virtual-devices@100/random-number-generator@e" 0 "n2rng"
"/virtual-devices@100/n2cp@7" 0 "n2cp"
"/pci@400" 0 "px"
"/pci@400/pci@0" 0 "pxb_plx"
"/pci@400/pci@0/pci@1" 1 "pxb_plx"
"/pci@400/pci@0/pci@1/pci@0" 0 "px_pci"
In this task, you can choose to either upgrade the system controller firmware from the “landing pad” system using the Sun Integrated Lights Out Manager WebGUI (see “Loading Firmware Using the Sun Integrated Lights Out Manager WebGUI”) or load the firmware from the operating system (see “Loading Firmware From the Operating System”).
If you wish to load the system controller firmware using the Sun
Integrated Lights Out Manager WebGUI, perform the following steps:
Note – These steps are specific to the Sun SPARC Enterprise T5140 server,
which is used as an example in this exercise. To see the patch
requirements and installation instructions for other servers, please read
the README file associated with the patch file of your server and follow
the instructions given.
1. Verify that you have the landing pad Web browser open to the Sun
Integrated Lights Out Manager WebGUI. If not, open a browser
session at this time.
If the Redirection (Sun ILOM Remote Console) window is open, click
the Redirection pull down menu and select Quit.
2. In the Sun Integrated Lights Out Manager WebGUI, click the Remote
Control tab and Remote Power Control tab. Select the Graceful
Shutdown and Power Off action and click Save. Click the OK button
in the confirmation dialog box.
4. Click the Maintenance tab and from the Firmware Upgrade tab select
Enter Upgrade Mode. Click the OK button in the confirmation dialog
box.
Note – At the end of the firmware upgrade process, the system controller
is reset. It might take a few minutes before you can access Sun Integrated
Lights Out Manager WebGUI.
8. After the system has reconnected, log back in to the Sun Integrated
Lights Out Manager WebGUI.
9. Click the Remote Control tab and Remote Power Control tab. Power
the system on.
11. Log out of the Sun Integrated Lights Out Manager WebGUI and
close the Web browser.
In this task you download the system controller firmware from the
operating system (on the local host) to the system controller memory and
install it. This procedure requires that you use the system controller
ALOM command set.
On systems that run ILOM, you can switch to ALOM compatibility mode
by running these commands on the system controller:
-> cd /SP/users/SC_User_Name
-> set cli_mode=alom
The next time you log in to the system controller, you will be able to run
ALOM commands.
If you wish to load the firmware from the operating system on the local host, perform the following steps. The firmware patch directory contains the following files:
README.139444-07
Sun_System_Firmware-7_2_7_b-SPARC_Enterprise_T5140+T5240.pkg
sysfw720_README_docs.css
sysfw720_README_docs.html
sysfwdownload
sysfwdownload.README
TPM-Feature-README.txt
Note that the Sun_System_Firmware-7_2_7_b-SPARC_Enterprise_T5140+T5240.pkg file contains the system firmware image itself. To download it to the system controller and trigger the update, run the sysfwdownload utility:
# ./sysfwdownload -u Sun_System_Firmware-7_2_7_b-SPARC_Enterprise_T5140+T5240.pkg
WARNING: Host will be powered down for automatic firmware update when
download is completed.
Do you want to continue(yes/no)? yes
# unzip LDoms_Manager-1_3.zip
creating: LDoms_Manager-1_3/Install/
inflating: LDoms_Manager-1_3/Install/install-ldm
creating: LDoms_Manager-1_3/Legal/
inflating: LDoms_Manager-1_3/Legal/CDDL.LICENSE
inflating: LDoms_Manager-1_3/Legal/Ldoms_MIB_1.0.1_Entitlement.txt
inflating: LDoms_Manager-1_3/Legal/Ldoms_MIB_1.0.1_SLA_Entitlement.txt
...
# ls -d LDoms_Manager-1_3*
LDoms_Manager-1_3 LDoms_Manager-1_3.zip
You are about to install the Logical Domains Manager package that will
enable you to create, destroy and control other domains on your system.
You will also be given the option of running the LDoms Configuration
Assistant (ldmconfig) to setup the control domain and create guest
domains.
Once installed, you may configure your system for a basic Logical
Domains deployment. If you select "y" for the following question, the
Logical Domains Configuration Assistant will be launched following a
successful installation of packages.
(You may launch the LDoms Configuration Assistant at a later time with
the command /usr/sbin/ldmconfig, or use the GUI Configuration Assistant
which is bundled in the LDoms zip file - see README.GUI for more
details)
Module 3
Introduction
After you have installed the Logical Domains software, your next task is
to create and configure the logical domains that will constitute your
Logical Domains environment.
In this module you will be shown how to configure the control domain
(specifically the control domain’s resources) and then how to configure
the control domain as a service domain. After you have been shown how to complete these tasks, you will be given the opportunity to apply what you have learned by completing a lab exercise.
[Figure: the control domain's allocation of the physical resources (CPUs, MAU, memory, I/O) through the hypervisor]
The following examples show you how to assign each resource type to the
control domain. You should have determined the allocation of each
resource as part of your Logical Domains planning, so that when you
come to this task, you know exactly how many resources to allocate to the
control domain and why.
Review and make note of how many of each resource you have,
specifically virtual CPUs, MAUs, and memory.
In the sections that follow you are shown how to set the virtual CPUs,
MAUs, and memory for the control domain.
To set the virtual CPUs for the control domain, run the following
command:
primary# ldm set-vcpu 8 primary
Note – Adding a MAU to the control domain is useful because having this
resource can speed up warm migration by taking advantage of the
cryptographic unit when transferring data (transfer is done using
encryption). You will learn more about warm migration in Module 6.
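The corresponding commands for MAUs and memory, using the same values as the lab exercise later in this module, are:
primary# ldm set-mau 1 primary
primary# ldm set-memory 2G primary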
To verify that the resources have been assigned correctly to the control domain, run the following command:
primary# ldm list-bindings primary
You must enable these virtual services initially to be able to use them later.
Note – You can add more than one disk or switch service if desired.
To enable a virtual service, you add it to the domain. In the sections that
follow you will be shown how to add each of the virtual services.
[Figure: virtual disk services (virtual SAN 1 and virtual SAN 2) connecting domains through the hypervisor and the I/O bridge to the storage devices]
To allow configuring virtual disks into a logical domain, you must add a
virtual disk server (vds). In the example that follows a virtual disk server
(primary-vds0) is being added to the control domain (primary), which
will also serve as a service domain.
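Based on the add-vdiskserver (vds) usage shown in Module 2, the command would be:
primary# ldm add-vds primary-vds0 primary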
[Figure: a virtual switch in the service domain connected through the I/O bridge to the gigabit Ethernet interface]
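The virtual switch is created in the same way; assuming the nxge0 network device shown in the lab output, adding a virtual switch to the control domain would look like this:
primary# ldm add-vsw net-dev=nxge0 primary-vsw0 primary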
With a virtual console service, console I/O from all domains, except the
primary domain, is redirected to a service domain running the virtual
console concentrator (vcc) service and virtual network terminal server
daemon (vntsd), instead of to the system controller.
To provide access to the virtual console of each logical domain, you must
enable the virtual network terminal server daemon (vntsd). You are
shown how to complete this task in the “Enabling the Virtual Network
Terminal Server Daemon” section of this module.
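As a sketch, creating the virtual console concentrator service with the port range shown in the lab output, and then enabling vntsd, would look like this:
primary# ldm add-vcc port-range=5000-5100 primary-vcc0 primary
primary# svcadm enable vntsd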
To verify the services have been created, run the following command:
primary# ldm list-services primary
VCC
NAME LDOM PORT-RANGE
VSW
NAME LDOM MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE
primary-vsw0 primary 00:14:4f:fb:79:88 nxge0 0 switch@0 1 1 1500
VDS
NAME LDOM VOLUME OPTIONS MPGROUP DEVICE
primary-vds0 primary
To verify the configuration is ready to be used at the next reboot, enter the following command:
primary# ldm list-spconfig
This list command shows that the initial configuration set will be used once you power cycle the system.
Note – If a net-dev is defined, the vsw will be connected with that net-
dev, and packets can be exchanged between that net-dev and other
domains connected to that vsw. Plumbing vsw allows the domain with the
vsw to communicate with other domains connected to that vsw.
Note – Be sure that you have created the virtual console concentrator
(vcc) service on the control domain before you enable vntsd.
Preparation
To prepare for this lab exercise:
● Review the Preparation section in the previous lab.
● Review the hardware and software configuration information you
gathered in Task 1 of the previous lab.
● When performing this exercise, use the following system login
information. See your instructor if you need help.
Control domain operating system:
● Login name root
● Login password cangetin
Note – Examples shown in this exercise are from lab machine 4. The
command responses shown in this lab are examples only. Depending
upon your lab configuration, the responses to the commands in the labs
might vary slightly.
Note – The sample command output shown for the following commands is from a Sun SPARC Enterprise T5140 server. The server details and device pathing information might differ based on the system disks and architecture of the machine you are using.
Note – In the examples shown in this lab, commands run on the primary
domain are indicated by the primary# prompt. The primary domain is
the domain in which the Logical Domains Manager software is running.
In this task you allocate the following resources to the control domain:
● vcpu – The virtual CPUs representing the processor strands of the
server to be used by the control domain.
● mau – The modular arithmetic unit to be used by the control domain.
● memory – The amount of memory to be used by the control domain.
[Figure: the control domain's allocation of the physical resources (CPUs, MAU, memory, I/O) through the hypervisor]
72 0 no
73 0 no
74 0 no
75 0 no
76 0 no
77 0 no
78 0 no
79 0 no
80 0 no
81 0 no
MAU
ID CPUSET BOUND
0 (0, 1, 2, 3, 4, 5, 6, 7) primary
1 (8, 9, 10, 11, 12, 13, 14, 15) primary
2 (16, 17, 18, 19, 20, 21, 22, 23) primary
3 (24, 25, 26, 27, 28, 29, 30, 31) primary
4 (64, 65, 66, 67, 68, 69, 70, 71) primary
5 (72, 73, 74, 75, 76, 77, 78, 79) primary
6 (80, 81, 82, 83, 84, 85, 86, 87) primary
7 (88, 89, 90, 91, 92, 93, 94, 95) primary
MEMORY
PA SIZE BOUND
0x0 512K _sys_
0x80000 1536K _sys_
0x200000 94M _sys_
0x6000000 32M _sys_
0x8000000 96M _sys_
0xe000000 7968M primary
IO
MAC
HOSTID
0x84463e4e
CONTROL
failure-policy=ignore
DEPENDENCY
master=
VCPU
VID PID UTIL STRAND
0 0 0.6% 100%
1 1 0.1% 100%
2 2 0.0% 100%
3 3 0.0% 100%
4 4 0.0% 100%
5 5 0.0% 100%
6 6 0.0% 100%
7 7 0.0% 100%
8 8 0.0% 100%
9 9 0.0% 100%
10 10 0.0% 100%
11 11 0.0% 100%
12 12 0.0% 100%
13 13 0.0% 100%
14 14 0.0% 100%
15 15 0.0% 100%
16 16 0.0% 100%
17 17 0.0% 100%
18 18 0.0% 100%
19 19 0.0% 100%
20 20 0.0% 100%
21 21 0.0% 100%
22 22 0.0% 100%
23 23 4.1% 100%
24 24 0.0% 100%
25 25 0.0% 100%
26 26 0.0% 100%
27 27 0.5% 100%
28 28 0.0% 100%
29 29 0.0% 100%
30 30 0.0% 100%
31 31 0.0% 100%
64 64 0.0% 100%
MAU
ID CPUSET
0 (0, 1, 2, 3, 4, 5, 6, 7)
1 (8, 9, 10, 11, 12, 13, 14, 15)
MEMORY
RA PA SIZE
0xe000000 0xe000000 7968M
VCONS
NAME SERVICE PORT
SP
How many virtual CPUs are currently available for creating guest domains? 0 vCPUs
How much memory is currently available for creating guest domains? None
3. Assign one modular arithmetic unit to the primary domain.
primary# ldm set-mau 1 primary
4. Assign eight virtual CPUs to the primary domain.
primary# ldm set-vcpu 8 primary
5. Assign two gigabytes of memory to the primary domain.
primary# ldm set-memory 2G primary
Initiating delayed reconfigure operation on LDom primary. All
configuration changes for other LDoms are disabled until the LDom
reboots, at which time the new configuration for LDom primary will also
take effect.
6. Verify the resources assigned to the control domain by running the
ldm list-bindings command.
primary# ldm list-bindings primary
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -ndc-- SP 8 2G 0.1% 17h 57m
MAC
00:14:4f:46:3e:4e
HOSTID
0x84463e4e
CONTROL
failure-policy=ignore
DEPENDENCY
master=
MAU
ID CPUSET
0 (0, 1, 2, 3, 4, 5, 6, 7)
MEMORY
RA PA SIZE
0xe000000 0xe000000 2G
IO
DEVICE PSEUDONYM OPTIONS
pci@400 pci_0
pci@500 pci_1
VCONS
NAME SERVICE PORT
SP
Note that the status flags indicate that the primary domain is now in a
delayed configuration state. The system must be rebooted for the
configuration changes to take effect.
7. Reboot the control domain.
primary# shutdown -i6 -g0 -y
...
You will have to wait a few minutes until the login prompt is available.
8. After the control domain has rebooted, log in as root.
console login: root
Password: cangetin
...
9. Generate a long listing of the primary domain to verify the new
configuration was created correctly.
primary# ldm list-domain -l primary
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-c-- SP 8 2G 37% 2m
SOFTSTATE
Solaris running
HOSTID
0x84463e4e
CONTROL
failure-policy=ignore
DEPENDENCY
master=
VCPU
VID PID UTIL STRAND
0 0 12% 100%
1 1 25% 100%
2 2 38% 100%
3 3 19% 100%
4 4 35% 100%
5 5 55% 100%
6 6 21% 100%
7 7 6.6% 100%
MAU
ID CPUSET
0 (0, 1, 2, 3, 4, 5, 6, 7)
MEMORY
RA PA SIZE
0xe000000 0xe000000 2G
IO
DEVICE PSEUDONYM OPTIONS
pci@400 pci_0
pci@500 pci_1
VCONS
NAME SERVICE PORT
SP
VSW
NAME LDOM MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE
primary-vsw0 primary 00:14:4f:fb:79:88 nxge0 0 switch@0 1 1 1500
VDS
NAME LDOM VOLUME OPTIONS MPGROUP DEVICE
primary-vds0 primary
SOFTSTATE
Solaris running
MAC
00:14:4f:46:3e:4e
HOSTID
0x84463e4e
DEPENDENCY
master=
VCPU
VID PID UTIL STRAND
0 0 0.4% 100%
1 1 0.2% 100%
2 2 0.6% 100%
3 3 0.1% 100%
4 4 0.1% 100%
5 5 0.0% 100%
6 6 0.1% 100%
7 7 0.2% 100%
MAU
ID CPUSET
0 (0, 1, 2, 3, 4, 5, 6, 7)
MEMORY
RA PA SIZE
0xe000000 0xe000000 2G
IO
DEVICE PSEUDONYM OPTIONS
pci@400 pci_0
pci@500 pci_1
VCC
NAME PORT-RANGE
primary-vcc0 5000-5100
VSW
NAME MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE
primary-vsw0 00:14:4f:f9:bf:47 nxge0 0 switch@0 1 1 1500
VDS
NAME VOLUME OPTIONS MPGROUP DEVICE
primary-vds0
VCONS
NAME SERVICE PORT
SP
In this task you configure the virtual switch created in “Task 2 – Create Virtual Services” to access the subnet discovered in the previous task.
[Figure: the virtual switch vsw0 plumbed over nxge0 (/pci@500/../network@0) in the service domain, attached to subnet 192.168.xx.0]
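A minimal sketch of the standard procedure, assuming nxge0 currently carries the primary IP address (192.168.xx.yy is a placeholder), is:
primary# ifconfig nxge0 down unplumb
primary# ifconfig vsw0 plumb
primary# ifconfig vsw0 192.168.xx.yy netmask 255.255.255.0 broadcast + up
primary# mv /etc/hostname.nxge0 /etc/hostname.vsw0
Moving the hostname file makes the change persistent across reboots.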
Module 4
Introduction
After you have configured the control and service domain, your next task
is to create the appropriate number of guest domains needed to support
your Logical Domains environment requirements. You may recall from
Module 1 that each guest domain runs an operating system instance and
leverages the services provided by a service domain to run applications
and user services.
In this section you are shown how to determine which resources are available for creating guest domains.
Once you know what resources are available and how you are going to
assign those resources, you can create the guest domain.
In the example that follows, you are shown how to create and start guest
domain ldom1 using the process presented above.
To create a guest domain and then start it, enter the following commands:
1. Add a guest domain.
primary# ldm add-domain ldom1
2. Add CPUs to the guest domain. For example, to add eight virtual
CPUs to guest domain ldom1, type:
primary# ldm add-vcpu 8 ldom1
3. Add memory to the guest domain. For example, to add two
gigabytes of memory to guest domain ldom1, type:
primary# ldm add-memory 2G ldom1
4. Add a virtual network device to the guest domain. For example, the
following command would add a virtual network device with these
specifics to the guest domain ldom1:
primary# ldm add-vnet vnet0 primary-vsw0 ldom1
● vnet0 is a unique interface name to the logical domain,
assigned to this virtual network device instance for reference on
subsequent set-vnet or remove-vnet subcommands.
● primary-vsw0 is the name of an existing network service
(virtual switch) to which to connect.
Note – You also have the option of setting a specific MAC address for this
virtual network device. You will have the opportunity to do this as part of
the lab exercise at the end of this module.
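Based on the lab exercise at the end of this module, the remaining steps cover adding a virtual disk, setting OpenBoot variables, and binding and starting the domain, roughly as follows:
primary# ldm add-vdsdev /dev/dsk/c1t1d0s2 vol0@primary-vds0
primary# ldm add-vdisk vdisk0 vol0@primary-vds0 ldom1
primary# ldm set-variable auto-boot\?=false ldom1
primary# ldm bind-domain ldom1
primary# ldm start-domain ldom1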
10. To find the console port of the guest domain, you can look at the
output of the list-domain subcommand.
primary# ldm list-domain ldom1
11. Connect to the console of the guest domain. There are several ways
you can do this:
● By default, you can access a guest domain’s console
from the control domain by running the following command:
primary# telnet localhost 5000
● Or you can configure the vntsd service so that it can be
accessed remotely (that is, from outside of the control domain).
Once the vntsd service is properly configured for remote
access, you can access the domain’s console from a system
different from the control domain by running the command:
# telnet hostname-of-the-control-domain 5000
Figure 4-1 shows the logical interconnections for a guest domain after it
has been created. The device path information may differ on your server.
[Figure 4-1: the guest domain's vnet0 connected through the virtual switch to nxge0 (network), and c0d0 connected through primary-vds0 to internal disk 1, across the hypervisor]
In this class you are shown how to install the Solaris OS on a guest
domain using JumpStart. For instructions on how to install the Solaris OS
on a guest domain from a DVD or from a Solaris ISO file, see “Installing
Solaris OS on a Guest Domain” in the Logical Domains 1.3 Administration
Guide.
To jumpstart a guest domain, you must first modify the basic Solaris
JumpStart profile as follows:
Virtual disk device names in a logical domain differ from physical disk
device names in that they do not contain a target ID (tN) in the device
name. Instead of the normal cNtNdNsN format, virtual disk device names
are of the format cNdNsN, where cN is the virtual controller, dN is the
virtual disk number, and sN is the slice. Modify your JumpStart profile to
reflect this change as in the following profile example:
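A minimal sketch of such a profile, assuming the guest's virtual boot disk is c0d0, might look like this:
install_type initial_install
system_type standalone
partitioning explicit
filesys c0d0s1 2048 swap
filesys c0d0s0 free /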
In order to JumpStart the Solaris Operating System, aliases must exist for the virtual devices. Aliases are now automatically created based on the name of the virtual device (the name used with ldm add-vnet or ldm add-vdisk).
You can directly JumpStart the Solaris operating system by using the
following command:
{1} ok boot vnet0 -v install
Figure 4-2 shows an example of the logical path the operating system
installation takes through the system.
[Figure 4-2: the JumpStart installation path — vnet0 via /pci@500/../network@0 to the JumpStart server, and vdisk0 (vol0 on c1t1d0s2) via primary-vds0 and /pci@400/../sd@1,0 to internal disk 1]
You can get the information you need from the Logical Domains Manager
by running the ldm command with the appropriate subcommands and by
logging into the guest domain and running OpenBoot PROM and Solaris
operating system commands. Table 4-1 summarizes the commands you can use.
Property: Boot device variable
Commands: primary# ldm list-variable boot-device ldom1
{0} ok printenv boot-device
guest# eeprom boot-device
Preparation
To prepare for this lab exercise:
● Review the LDom services you configured in the previous exercise.
● When performing this exercise, use the following system login
information. See your instructor if you need help.
Control domain operating system:
● Login name root
● Login password cangetin
Guest domain operating system:
● Login name root
● Login password cangetin
Note – Examples shown in this exercise are from lab machine 4. The
command responses shown in the lab are examples only. Depending upon
your lab configuration, the responses to the commands in the labs might
vary slightly.
You add the following resources (Figure 4-3) to your new guest domain:
● Eight virtual CPUs
● Two gigabytes of memory
[Figure 4-3: the new guest domain's resources — vnet0 through primary-vsw0/nxge0 to the network, c0d0 through primary-vds0 (c1t1d0s2) to internal disk 1, plus virtual CPUs and memory]
VSW
VDS
NAME LDOM VOLUME OPTIONS MPGROUP DEVICE
primary-vds0 primary
Are there sufficient services available to create the new guest domain
that has a boot disk and network interface? Yes
3. List all the host server devices to identify the resources available for
creating a new guest domain.
primary# ldm list-devices -a
VCPU
PID %FREE PM
0 0 no
1 0 no
2 0 no
3 0 no
4 0 no
5 0 no
6 0 no
7 0 no
8 100 ---
9 100 ---
10 100 ---
11 100 ---
12 100 ---
13 100 ---
14 100 ---
15 100 ---
16 100 ---
17 100 ---
18 100 ---
19 100 ---
20 100 ---
21 100 ---
22 100 ---
23 100 ---
24 100 ---
25 100 ---
26 100 ---
27 100 ---
28 100 ---
29 100 ---
30 100 ---
31 100 ---
64 100 ---
MAU
ID CPUSET BOUND
0 (0, 1, 2, 3, 4, 5, 6, 7) primary
1 (8, 9, 10, 11, 12, 13, 14, 15)
MEMORY
PA SIZE BOUND
0x0 512K _sys_
IO
DEVICE PSEUDONYM BOUND OPTIONS
pci@400 pci_0 yes
pci@500 pci_1 yes
Are there sufficient vCPU and memory resources available to create
the new guest domain? Yes
4. Use the ldm add-domain command to create the framework for a
new guest domain named ldom1.
primary# ldm add-domain ldom1
5. Add eight virtual CPUs to the configuration of guest domain ldom1.
primary# ldm add-vcpu 8 ldom1
6. Add two gigabytes of memory to the configuration of guest domain
ldom1.
primary# ldm add-memory 2G ldom1
7. List the bindings for guest domain ldom1.
primary# ldm list-bindings ldom1
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
ldom1 inactive ------ 8 2G
CONTROL
failure-policy=ignore
DEPENDENCY
master=
machine1 00:14:4F:FC:XX:01
machine2 00:14:4F:FC:XX:02
machine3 00:14:4F:FC:XX:03
machine4 00:14:4F:FC:XX:04
machine5 00:14:4F:FC:XX:05
machine6 00:14:4F:FC:XX:06
Note that the boot-device variable for guest domain ldom1 will default to the first disk, which is now vdisk0.
11. Set the auto-boot? variable for guest domain ldom1 to false.
primary# ldm set-variable auto-boot\?=false ldom1
12. Bind all the resources you configured in this task to the guest
domain ldom1.
primary# ldm bind-domain ldom1
13. List the logical domains.
17. Generate a long listing of all the logical domains in the system to
verify your work.
primary# ldm list-domain -l
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- SP 8 2G 0.2% 38m
SOFTSTATE
Solaris running
MAC
00:14:4f:46:3e:4e
HOSTID
0x84463e4e
DEPENDENCY
master=
VCPU
VID PID UTIL STRAND
0 0 0.3% 100%
1 1 0.1% 100%
2 2 0.1% 100%
3 3 0.1% 100%
4 4 0.1% 100%
5 5 0.2% 100%
6 6 0.1% 100%
7 7 0.8% 100%
MAU
ID CPUSET
0 (0, 1, 2, 3, 4, 5, 6, 7)
MEMORY
RA PA SIZE
0xe000000 0xe000000 2G
IO
DEVICE PSEUDONYM OPTIONS
pci@400 pci_0
pci@500 pci_1
VCC
NAME PORT-RANGE
primary-vcc0 5000-5100
VSW
NAME MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE
primary-vsw0 00:14:4f:fb:79:88 nxge0 0 switch@0 1 1 1500
VDS
NAME VOLUME OPTIONS MPGROUP DEVICE
primary-vds0 vol0 /dev/dsk/c1t1d0s2
VCONS
NAME SERVICE PORT
SP
------------------------------------------------------------------------------
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
ldom1 active -t---- 5000 8 2G 12% 16s
SOFTSTATE
OpenBoot Running
MAC
00:14:4f:fa:61:06
HOSTID
0x84fa6106
CONTROL
failure-policy=ignore
DEPENDENCY
master=
VCPU
VID PID UTIL STRAND
0 8 100% 100%
1 9 0.0% 100%
2 10 0.0% 100%
3 11 0.0% 100%
4 12 0.0% 100%
5 13 0.0% 100%
6 14 0.0% 100%
7 15 0.0% 100%
MEMORY
RA PA SIZE
0xe000000 0x8e000000 2G
NETWORK
NAME SERVICE ID DEVICE MAC MODE PVID VID MTU LINKPROP
vnet0 primary-vsw0@primary 0 network@0 00:14:4f:fc:06:04 1 1500
DISK
NAME VOLUME TOUT ID DEVICE SERVER MPGROUP
vdisk0 vol0@primary-vds0 0 disk@0 primary
VCONS
NAME SERVICE PORT
ldom1 primary-vcc0@primary 5000
Note – For the remainder of this course, instead of using the telnet/~.
method of navigating between logical domains, you will open a window
for each logical domain and simply click the appropriate window to do
work. To open an additional window for the guest domain, click the host
terminal application on the SGD desktop. In the terminal window, run the
telnet localhost 5000 command to access the guest domain. Log in as
root. There will be times during the lab when the control domain must be
rebooted. This will cause the terminal window to close. When this
happens, simply reopen the terminal after the system has rebooted.
[Figure: the JumpStart installation path for guest domain ldom1, as shown in Figure 4-2]
/packages/deblocker
/packages/SUNW,builtin-drivers
6. Use the show-disks command to display the guest domain ldom1
disks. Type q to quit the command.
{0} ok show-disks
a) /virtual-devices@100/channel-devices@200/disk@0
q) NO SELECTION
Enter Selection, q to quit:q
7. Use the show-nets command to display the guest domain ldom1 network devices. Type q to quit the command.
8. List the disk and network aliases. Use the network device name
obtained in the previous step.
Note that aliases for the virtual disk and network are automatically created
using the names you provided.
{0} ok devalias
vdisk0 /virtual-devices@100/channel-devices@200/disk@0
vnet0 /virtual-devices@100/channel-devices@200/network@0
net /virtual-devices@100/channel-devices@200/network@0
disk /virtual-devices@100/channel-devices@200/disk@0
virtual-console /virtual-devices/console@1
name aliases
{0} ok
9. JumpStart the guest domain ldom1 using the vnet0 device alias.
{0} ok boot vnet0 -v install
...
Contact your instructor if the JumpStart does not begin to load the
software packages.
Note – After the OS has been installed and the SMF service descriptions
have been loaded, the JumpStart operation will run a postinstallation
script to set the root password to cangetin. This will cause the OS to
reboot. After the OS has rebooted, you can log in as root with the cangetin password.
"/virtual-devices@100/ncp@6" 0 "ncp"
"/virtual-devices@100/random-number-generator@e" 0 "n2rng"
"/virtual-devices@100/n2cp@7" 0 "n2cp"
Are the on-board network interfaces present? No.
Why?
The network interfaces nxge and e1000g are not present in guest domain
ldom1 because neither the pci@400 PCI bus nor pci@500 PCI bus is
currently bound to ldom1.
partition> ^D
guest#
Note the disk name c0d0. Virtual disk device names in a logical domain differ from physical disk device names in that they do not contain a target ID (tN) in the device name. Instead of the normal cNtNdNsN format, virtual disk device names are of the format cNdNsN, where cN is the virtual controller, dN is the virtual disk number, and sN is the slice.
Module 5
Introduction
In this module you will be shown how to perform several key Logical
Domains administrative tasks.
First, you will learn how to reconfigure the logical domain resources you
set up during your initial configuration of the system. You will also learn
about dynamic and delayed reconfiguration.
In the last section of this module you will learn how to manage the logical
domain configurations you have set up. The tasks include removing and
restoring guest domain configurations and resetting a Logical Domains
configuration to either a user-defined or factory default configuration.
Note – There is no lab for this module. The tasks presented in this module
are included in the lab exercise for Module 6.
In this section you are introduced to the concepts of dynamic and delayed
reconfiguration and shown how to reconfigure virtual CPUs and memory.
Dynamic Reconfiguration
Note – A new feature introduced with the Logical Domains 1.3 software is
dynamic resource management (DRM). DRM allows you to automatically
perform dynamic reconfiguration activities. At this time, you can only use DRM to manage virtual CPU resources.
Delayed Reconfiguration
Reconfiguring Virtual CPUs
You can use the ldm set-vcpu command from the control domain to
dynamically reconfigure virtual CPUs in any domain. You simply modify
the amount, and the Solaris OS instance running in the domain will see
them immediately.
For example, to assign sixteen virtual CPUs to the guest domain ldom1, you would type the following:
primary# ldm set-vcpu 16 ldom1
Note – You can also use the ldm add-vcpu command to increase the
number of virtual CPUs. For example, if the guest domain ldom1 already
had eight virtual CPUs assigned to it and you wanted to allocate an
additional eight CPUs for a total of 16 CPUs, you would use the
command ldm add-vcpu 8 ldom1.
You can verify the number of CPUs now assigned by accessing the Solaris
OS running in ldom1 and querying the CPU resources seen by the
operating system.
guest# psrinfo -vp
The physical processor has 16 virtual processors (0-15)
UltraSPARC-T2+ (chipid 0, clock 1165 MHz)
...
To decrease the number of virtual CPUs, you can use the ldm remove-vcpu command. For example, to remove eight virtual CPUs from the guest domain ldom1, you would type the following:
primary# ldm remove-vcpu 8 ldom1
Reconfiguring Memory
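As a sketch paralleling the virtual CPU section, assigning four gigabytes of memory to the guest domain ldom1 would look like this (note that in this release, changing the memory of an active domain initiates a delayed reconfiguration, as seen in the Module 3 lab):
primary# ldm set-memory 4G ldom1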
Note – You can also use the ldm add-memory command to increase the
amount of memory. For example, if the guest domain ldom1 already had
two gigabytes of memory assigned to it and you wanted to allocate an
additional two gigabytes for a total of four gigabytes, you would use the
command ldm add-memory 2G ldom1.
To decrease the amount of memory in a domain, you can use the ldm
remove-memory command. For example, to remove one gigabyte of
memory from the primary domain, you would type:
primary# ldm remove-memory 1G primary
Figure: The virtual disk server (vds) in the service domain and the virtual
disk client (vdc) in the guest domain communicate over an LDC through
the hypervisor.
The virtual disk backend can be physical or logical. Physical devices can
include the following:
● Physical disk or disk logical unit number (LUN)
● Physical disk slice
Full Disk
Any backend can be exported as a full disk except physical disk slices,
which can only be exported as single slice disks.
Although most of the time you will use a full disk, you will want to use a
single slice disk if:
● You want to export a disk slice.
● You want to export an already existing volume that contains some
data.
● You want to store data that you also want to be able to access
without using a virtual disk.
A single slice disk is also visible from the OS installation software and can
be selected as a disk onto which you can install the OS. In that case, if you
install the OS using the UNIX File System (UFS), then only the root
partition (/) must be defined, and this partition must use all the disk
space.
Most of the time, you do not need to specify any virtual disk backend
options. For more information about the options, see the "Virtual Disk
Backend Options" section in Chapter 6 Using Virtual Disks in the Logical
Domains 1.3 Administration Guide.
To export a physical disk as a full disk, perform the following steps:
1. Export the physical disk as a virtual disk.
For example, to export the physical disk c1t48d0 as a full disk, you
must export slice 2 of that disk (c1t48d0s2).
primary# ldm add-vdsdev /dev/dsk/c1t48d0s2 \
disk1@primary-vds0
2. Assign the disk to a guest domain.
For example, assign the disk (vdisk1) to guest domain ldom1.
primary# ldm add-vdisk vdisk1 disk1@primary-vds0 ldom1
3. After the guest domain is started and running the Solaris OS, verify
that the disk is accessible and is a full disk.
A full disk is a regular disk that has eight (8) slices.
For example, the disk being checked is c0d1.
ldom1# ls -1 /dev/dsk/c0d1s*
/dev/dsk/c0d1s0
/dev/dsk/c0d1s1
/dev/dsk/c0d1s2
/dev/dsk/c0d1s3
/dev/dsk/c0d1s4
/dev/dsk/c0d1s5
/dev/dsk/c0d1s6
/dev/dsk/c0d1s7
A physical disk slice is always exported as a single slice disk. In that case,
virtual disk drivers (vds and vdc) forward I/O from the virtual disk and
act as a pass-through to the physical disk slice.
To export a physical disk slice as a single slice disk, perform the following
steps:
1. Export a slice of a physical disk as a single slice disk.
For example, to export slice 0 of the physical disk c1t57d0 as a
single slice disk, you must export the device corresponding to that
slice (c1t57d0s0) from the service domain as follows:
primary# ldm add-vdsdev /dev/dsk/c1t57d0s0 \
disk2@primary-vds0
You do not need to specify the slice option, because a slice is
always exported as a single slice disk.
2. Assign the disk to a guest domain.
For example, assign the disk (vdisk2) to guest domain ldom1:
primary# ldm add-vdisk vdisk2 disk2@primary-vds0 ldom1
3. After the guest domain is started and running the Solaris OS, you
can list the disk (c0d13, for example) and see that the disk is
accessible:
ldom1# ls -1 /dev/dsk/c0d13s*
/dev/dsk/c0d13s0
/dev/dsk/c0d13s1
/dev/dsk/c0d13s2
/dev/dsk/c0d13s3
/dev/dsk/c0d13s4
/dev/dsk/c0d13s5
/dev/dsk/c0d13s6
/dev/dsk/c0d13s7
Although eight device nodes exist, only the first slice (s0) is usable
because the disk is a single slice disk.
Exporting Slice 2
To export slice 2 (disk c1t57d0s2, for example) as a single slice disk, you
must specify the slice option. Otherwise, the disk is exported as a full
disk. For example:
primary# ldm add-vdsdev options=slice /dev/dsk/c1t57d0s2 \
disk@primary-vds0
If you do not set the slice option, a file or volume is exported as a full
disk. In that case, virtual disk drivers (vds and vdc) forward I/O from the
virtual disk and manage the partitioning of the virtual disk. The file or
volume eventually becomes a disk image that contains data from all slices
of the virtual disk and the metadata used to manage the partitioning and
disk structure.
When you export a file or volume as a full disk, it appears in the guest
domain as an unformatted disk, that is, a disk with no disk label. Then,
you need to run the format(1M) command in the guest domain to define
usable partitions and to write a valid disk label. Any I/O to the virtual
disk fails while the disk is unformatted.
Note – Prior to the Solaris 10 5/08 OS release, when a file was exported as
a virtual disk, the system wrote a default disk label and created default
partitioning. This is no longer the case with the Solaris 10 5/08 OS release,
and you must run format(1M) in the guest domain to write a disk label.
To export a file as a full disk, you need to perform the following steps:
1. From the primary domain, create a file (fdisk0 for example) to use
as the virtual disk.
primary# mkfile 100m /ldoms/domain/test/fdisk0
The size of the file defines the size of the virtual disk. This example
creates a 100-megabyte blank file to get a 100-megabyte virtual disk.
2. From the control domain, export the file as a virtual disk.
primary# ldm add-vdsdev /ldoms/domain/test/fdisk0 \
fdisk0@primary-vds0
If the slice option is set, the file or volume is exported as a single slice
disk. In this case, the virtual disk has only one partition (s0), which is
directly mapped to the file or volume backend. The file or volume only
contains data written to the virtual disk with no extra data such as
partitioning information or disk structure.
Then if you stop the guest domain, you can access the data on the volume
without using the virtual disk.
For example:
primary# mount /dev/zvol/dsk/foo/vol0 /mnt
primary# cat /mnt/file
test
You can also do this with a disk slice or a file instead of a ZFS volume.
With a file, you can access the data from the service domain by using
lofi(7D) instead of mounting the backend directly.
Prior to the Solaris 10 5/08 OS release, the slice option did not exist, and
volumes were exported as single slice disks. If your configuration exports
volumes as virtual disks and you upgrade to the Solaris 10 5/08 OS,
volumes are exported as full disks rather than single slice disks.
For instructions on how to export files and disk slices as virtual disks, see
the “Guidelines for Exporting Files and Disk Slices as Virtual Disks” in
Chapter 6 Using Virtual Disks in the Logical Domains 1.3 Administration
Guide.
Note – You cannot export the CD or DVD drive itself. You only can export
the CD or DVD that is inside the CD or DVD drive. Therefore, a CD or
DVD must be present inside the drive before you can export it. Also, to be
able to export a CD or DVD, that CD or DVD cannot be in use in the
service domain. In particular, the Volume Management file system
(volfs(7FS)) service must not use the CD or DVD.
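For example, to export the DVD in a drive seen as c0t0d0 in the service
domain (the device name here is only an example), you export slice 2 of
that drive:
primary# ldm add-vdsdev /dev/dsk/c0t0d0s2 dvd@primary-vds0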
If you export a Solaris OS installation DVD, you can boot the guest
domain on the virtual disk corresponding to that DVD and install the
guest domain from that DVD. To do so, when the guest domain reaches
the ok prompt, use the following command:
ok boot /virtual-devices@100/channel-devices@200/disk@n:f
where n is the instance of the virtual disk representing the exported
DVD.
Refer to the Solaris ZFS Administration Guide for more information about
using ZFS.
In this section you are shown how to create a ZFS space to use as a virtual
disk onto which a Solaris OS is to be installed.
You can create the space for the virtual disk on ZFS volumes or ZFS files.
Creating a ZFS volume, whatever its size, is quick using the zfs create
-V command. ZFS files, on the other hand, have to be created with the
mkfile command, which can take some time to complete, especially when
the file is large, as is often the case for a file that will contain an OS
image.
Both ZFS volumes and ZFS files can take advantage of ZFS features such
as snapshots and clones, but a ZFS volume is a pseudo device while a ZFS
file is a regular file.
If the ZFS space is to be used as a virtual disk onto which the Solaris OS is
to be installed, then it should be large enough to contain:
● Installed software – about 6 gigabytes
● Swap partition – about 1 gigabyte
● Extra space to store system data – at least 1 gigabyte
Therefore, the size of the ZFS allocated space to install the entire Solaris
OS should be at least 8 to 10 gigabytes.
To create the ZFS pool and the corresponding volume, perform the
following steps:
1. Create a ZFS storage pool in the service domain.
For example, to create a ZFS pool named tank1 using disk c1t2d0,
type:
primary# zpool create -f tank1 c1t2d0
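2. Create a ZFS volume. For example, to create a 10-gigabyte volume
named vol1 (the volume name is only an example), type:
primary# zfs create -V 10g tank1/vol1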
To create the ZFS pool and the corresponding file, perform the following
steps:
1. Create a ZFS storage pool in the service domain.
For example, to create a ZFS pool named tank1 using disk c1t2d0,
type:
primary# zpool create -f tank1 c1t2d0
2. Create a ZFS file.
For example, to create a ZFS file system to store the ZFS files, and
then create a 10-gigabyte disk image:
primary# zfs create tank1/ldoms
primary# mkfile 10g /tank1/ldoms/file0
3. Run the following command to verify that the zpool (tank1) and
ZFS file system (tank1/ldoms) have been created.
primary# zfs list
4. Export the ZFS file as a virtual disk.
primary# ldm add-vdsdev /tank1/ldoms/file0 \
zvfile@primary-vds0
5. Assign the ZFS file to a guest domain.
For example, to add a file named zvfile, type:
primary# ldm add-vdisk zvfile zvfile@primary-vds0 ldom1
After you have created the ZFS volume or file that is to be used as a
virtual disk, the next step is to JumpStart the OS image onto the ZFS
volume or file. You can then use the ZFS snapshot and clone capabilities
to easily provision new domains. Take the snapshot after the JumpStart
completes but before the domain goes through the sysconfig phase. This
approach ensures that the snapshot will not contain any predefined
configuration.
After you have stored an OS image on a ZFS volume or on a ZFS file, you
can create snapshots of this image by using the ZFS snapshot command.
Before you create a snapshot of the OS image, you want to make sure that
the disk is not currently in use in the guest domain. The data currently
stored on the image must be coherent. There are several ways to ensure
that a disk is not in use in a guest domain. You can either:
● Stop and unbind the guest domain. This is the safest solution, and
this is the only solution available if you want to create a snapshot of
an OS image used as the boot disk of a guest domain.
● Unmount any slices of the disk you want to snapshot that are used
in the guest domain, and ensure that no slice is in use on the guest
domain.
In the example that follows, because of the ZFS layout, the command to
create a snapshot of the OS image is the same whether the OS image is
stored on a ZFS volume or on a ZFS file.
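For example, assuming the OS image is stored in the dataset
tank1/ldoms (the dataset and snapshot names are only examples), you
would type:
primary# zfs snapshot tank1/ldoms@initial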
Once you have created a snapshot of an OS image, you can duplicate this
image by using the ZFS clone command. The cloned image can then be
assigned to another domain.
Cloning a boot disk image quickly creates a boot disk for a new guest
domain without having to perform the entire Solaris OS installation
process.
For example, if the virtual disk created was the boot disk of domain
ldom1, do the following to clone that disk to create a boot disk for domain
ldom2 given the snapshot you just created.
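For example, using the dataset and snapshot names shown above (both
are only examples), you would type:
primary# zfs clone tank1/ldoms@initial tank1/ldom2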
You can now export the corresponding volume or file as a virtual disk and
assign it to the new ldom2 guest domain. The guest domain ldom2 can
directly boot from that virtual disk without having to go through the OS
installation process.
For more information about cloning a boot disk image, see the “Cloning a
Boot Disk Image” section in Chapter 6 Using Virtual Disks in the Logical
Domains 1.3 Administration Guide.
Figure 5-2 Configuring a Virtual Switch and the Service Domain for
NAT and Routing
For information about how to set up the virtual switch for NAT and
routing, see the “Set Up the Virtual Switch to Provide External
Connectivity to Domains” section in Chapter 7 Using Virtual Networks in
the Logical Domains 1.3 Administration Guide.
Note – Tagged VLANs are not supported by the LDoms networking
components in previous releases.
The virtual switch (vsw) and virtual network (vnet) devices support
switching of Ethernet packets based on the virtual local area network
(VLAN) identifier (ID) and handle the necessary tagging or untagging of
Ethernet frames.
You can create multiple VLAN interfaces over a vnet device in a guest
domain. You can use the Solaris OS ifconfig(1M) command to create a
VLAN interface over a virtual network device, the same way it is used to
configure a VLAN interface over any other physical network device. The
additional requirement in the LDoms environment is that you must assign
the vnet to the corresponding VLANs using the Logical Domains
Manager CLI commands. Refer to the ldm(1M) man page for complete
information about the Logical Domains Manager CLI commands.
Similarly, you can configure VLAN interfaces over a virtual switch device
in the service domain. VLAN IDs 2 through 4094 are valid. VLAN ID 1 is
reserved as default-vlan-id.
When you create a vnet device on a guest domain, you must do the
following:
● Assign the vnet to the required VLANs by specifying a port VLAN
ID.
● Use the ldm add-vnet command to specify zero or more VLAN IDs
for this vnet by setting the pvid and vid properties.
Similarly, configure the vsw device to specify any VLANs to which the
vsw device itself should belong when plumbed as a network interface. To
perform this configuration, use the ldm add-vsw command to set the
pvid and vid properties.
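For example, a sketch that creates a vnet whose untagged traffic belongs
to VLAN 10 and that is also a tagged member of VLANs 20 and 21 (the
names vnet1, primary-vsw0, and ldom1 follow the conventions used in
this course):
primary# ldm add-vnet pvid=10 vid=20,21 vnet1 primary-vsw0 ldom1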
Use the ldm set-vnet or ldm set-vsw command to change the VLANs to
which a virtual network or virtual switch device belongs.
Port VLAN ID
The port VLAN ID (PVID) indicates a VLAN to which the virtual network
device needs to be a member, in untagged mode. In this case, the vsw
device provides the necessary tagging or untagging of frames for the vnet
device over the VLAN specified by its PVID. Any outbound frames from
the virtual network that are untagged are tagged with its PVID by the
virtual switch. Inbound frames tagged with this PVID are untagged by
the virtual switch before being sent to the vnet device. Thus, assigning a
PVID to a vnet implicitly means that the corresponding virtual network
port on the virtual switch is marked untagged for the VLAN specified by
the PVID. You can have only one PVID for a vnet device.
For example, if you were to plumb vnet instance 0, using the following
command, and if the pvid property for the vnet has been specified as 10,
the vnet0 interface would be implicitly assigned to belong to VLAN 10.
# ifconfig vnet0 plumb
VLAN ID
Starting with the Logical Domains 1.2 release, a copy of the current
configuration is automatically saved on the control domain whenever the
Logical Domains configuration is changed. The autosave operation occurs
immediately.
Note – For information about how to use the ldm *-spconfig commands
to manage configurations and to manually recover autosave files, see the
ldm(1M) man page.
Note – You might also want to clean up the resources used by this domain
by removing the vdsdev and backend (file/volume).
You can only restore domains for which you have explicitly saved the
configuration using the ldm ls-constraints -x command. Then you
can restore the configuration by using the ldm bind-domain -i
command.
In the example that follows you are shown how to restore the
configuration for guest domain ldom1. The configuration was saved using
the following command:
# ldm ls-constraints -x ldom1 > ldom1.xml
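To restore guest domain ldom1 from the saved XML file, you would then
type:
primary# ldm bind-domain -i ldom1.xml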
Note – These steps restore the domain configuration only, not the virtual
disk backends or the data they contain.
The sections that follow provide instructions on how to reset the system
to either a user-defined or factory default configuration.
For example, assume you stored the configuration with the name
myconfig. To reset the configuration, type:
-> set /HOST/bootmode config=myconfig
To reset the system to the factory default configuration from the control
domain:
1. Select the factory default configuration.
primary# ldm set-config factory-default
2. Stop all domains by using the -a option.
primary# ldm stop-domain -a
3. Stop the control domain.
primary# shutdown -i0 -g0 -y
4. Powercycle the system so that the factory-default configuration is
loaded.
sc> poweroff
sc> poweron
To reset the system to the factory default configuration using the ILOM
service processor with the Logical Domains Manager, use the following
command:
-> set /HOST/bootmode config=factory-default
Module 6
Additional Resources
Introduction
In this module you will be shown how to manage advanced Logical
Domains configurations.
First, you will learn how to migrate a logical domain from one server to
another. You will be shown the process for migrating both an active
domain (warm migration) and a bound or inactive domain (cold
migration).
The module concludes with a lab exercise designed to give you the
opportunity to practice the tasks you were shown during the lecture.
The migration itself can be broken down into different phases (see
Figure 6-1):
● Phase 1: After connecting with the Logical Domains Manager
running in the target host, information about the source domain is
transferred to the target host. This information is used to perform a
series of checks to determine whether a migration is possible. The
checks differ depending on the state of the source domain. For
example, if the source domain is active, a different set of checks are
performed than if the domain is bound or inactive.
● Phase 2: When all checks in Phase 1 have passed, the source and
target machines prepare for the migration. In the case where the
source domain is active, this includes shrinking the number of CPUs
to one and suspending the domain. On the target machine, a domain
is created to receive the source domain.
● Phase 3: For an active domain, the next phase is to transfer all the
runtime state information for the domain to the target. This
information is retrieved from the hypervisor. On the target, the state
information is installed in the hypervisor.
Figure 6-1 Migration phases: information about the source domain is
transferred to the target host; the source and target machines prepare for
the migration; all runtime state information for the domain is transferred
to the target; and the target domain resumes execution while the source
domain is destroyed.
Software Compatibility
For a migration to occur, both the source and target machines must be
running compatible software:
● The hypervisor on both the source and target machines must have
firmware that supports domain migration.
If you see the following error, you do not have the correct version of
system firmware on either the source or target machine.
The ldm command line interface for migration allows the user to specify
an optional alternate user name for authentication on the target host. If
this is not specified, the user name of the user executing the migration
command is used. In both cases, the user is prompted for a password for
the target machine unless the -p option is used to initiate an automatic
migration.
The target machine must have sufficient free memory to accommodate the
migration of the source domain. In addition, the layout of the available
memory on the target machine must be compatible with the memory
layout of the source domain or the migration will fail.
Virtual devices that are associated with physical devices can be migrated.
However, domains that have direct access to physical devices cannot be
migrated. For instance, you cannot migrate I/O domains.
All virtual I/O (VIO) services used by the source domain must be
available on the target machine.
Caution – A migration fails when the logical volume used by the source
as a boot device exists on the target but does not refer to the same storage.
The migration appears to succeed; however, the machine is not usable
because it is unable to access its boot device. To avoid leaving the target
domain in an inconsistent state, stop the domain, correct the configuration
issue, and restart the domain.
Note – The switches do not have to be connected to the same network for
the migration to occur; however, the migrated domain can experience
networking problems if the switches are not connected to the same
network.
A domain using Network Interface Unit (NIU) Hybrid I/O resources can
be migrated. A constraint specifying NIU Hybrid I/O resources is not a
hard requirement of a logical domain. If such a domain is migrated to a
machine that does not have available NIU resources, the constraint is not
satisfied, and the domain uses purely virtual I/O.
Starting with Logical Domains 1.3, you can migrate a guest domain that
has bound cryptographic units if it runs an operating system that
supports cryptographic unit dynamic reconfiguration (DR).
For an inactive domain, there are no checks performed against the virtual
input/output (VIO) constraints. So, the VIO servers do not need to exist
for the migration to succeed. As with any inactive domain, the VIO
servers need to exist and be available at the time the domain is bound.
To obtain the status of the source domain, type the following on the
source domain machine:
# ldm list ldom1
To obtain the status of the target domain, type the following on the target
machine:
# ldm list ldom1
The source and target domains are shown differently in the status output.
The output of the ldm list command indicates the state of the migrating
domain and includes the following fields:
● NAME – Name of the domain, either the source or target
● STATE – State of the domain, for example, suspended or bound
● FLAGS – Status of the domain
● CONS – Console port of the domain
The sixth column in the FLAGS field shows one of the following values:
● The source domain shows an s to indicate that it is the source of the
migration.
● The target domain shows a t to indicate that it is the target of a
migration.
● If an error occurs that requires user intervention, an e is shown.
The following shows that ldom1 is the source domain of the migration:
# ldm list ldom1
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
ldom1 suspended -n---s 8 2G 0.0% 2h 7m
The following shows that ldom1 is the target domain of the migration:
# ldm list ldom1
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
ldom1 bound -----t 5000 1 2G
A newline character at the end of the password is ignored, as are all lines
that follow the first line.
The file in which you store the target machine's password must be
properly secured. If you plan to store passwords in this manner, ensure
that the file permissions are set so that only the root owner, or a
privileged user, can read or write the file (400 or 600).
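For example, a sketch in which the password file pfile and the target
host name are placeholders:
primary# chmod 400 /home/admin/pfile
primary# ldm migrate-domain -p /home/admin/pfile ldom1 \
root@target-host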
An I/O domain is a domain that has direct ownership of and direct access
to physical I/O devices. It can be created by assigning a PCI Express
(PCI-E) bus to a domain. PCI-E buses that are present on a server are
identified with names such as pci@400 (pci_0). Use the ldm command to
assign each bus to a separate domain.
The maximum number of I/O domains that you can create depends on
the number of PCI-E buses available on the server. For example, the Sun
SPARC Enterprise T5440 server has four PCI-E buses, so it can have up to
four I/O domains, whereas the Sun SPARC Enterprise T5140 server has
only two PCI-E buses and can, therefore, have up to two I/O domains.
See Figure 6-2 for the Sun SPARC Enterprise T5140 server PCI-E bus
configuration.
Figure 6-2 Sun SPARC Enterprise T5140 server PCI-E bus configuration:
two buses, pci@400 and pci@500.
In the example that follows you are shown how to create a new I/O
domain from an initial configuration where several buses are owned by
the primary domain. By default the primary domain owns all buses
present on the system. This example is for a Sun SPARC Enterprise T5140
server. This procedure can also be used on other servers. The instructions
for different servers might vary slightly from these, but you can obtain the
basic principles from this example.
First, you must retain the bus that has the primary domain's boot disk.
Then, remove another bus from the primary domain and assign it to
another domain.
In this example, the primary domain only uses a ZFS pool (rpool
(c1t0d0s0)) and network interface (nxge0).
Note – If you discover that both buses are in use, you need to reconfigure
the control domain so that it is no longer using the bus you want to
remove.
6. Remove the PCI bus that is not currently being used by the control
domain based on your current configuration.
In this example, bus pci@500 is being removed from the primary
domain on a T5140 server.
primary# ldm remove-io pci@500 primary
7. Save this split configuration to the service processor.
In this example, the configuration is config_split.
primary# ldm add-spconfig config_split
This configuration, config_split, is also set as the next
configuration to be used after the reboot.
9. Stop the domain to which you want to add the PCI bus.
The following example stops the ldom1 domain:
primary# ldm stop ldom1
10. Add the available bus to the domain that needs direct access.
The available bus is pci@500 and the domain is ldom1.
primary# ldm add-io pci@500 ldom1
11. Bind the resources to the domain.
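The following example binds the ldom1 domain:
primary# ldm bind-domain ldom1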
Note – If you have an InfiniBand Host Channel Adapter (HCA) card, you
might need to turn on the I/O memory management unit (MMU) bypass
mode. For information about how to perform this task, see “Enabling the
I/O MMU Bypass Mode on a PCI Bus” in Chapter 5 Setting Up I/O
Domains of the Logical Domains 1.3 Administration Guide.
The Logical Domains 1.3 release introduces support for link-based IPMP
with virtual network devices. When configuring an IPMP group with
virtual network devices, configure the group to use link-based detection.
If using older versions of the Logical Domains software, you can only
configure probe-based detection with virtual network devices.
For information about older versions of the Logical Domains software, see
“Configuring and Using IPMP in Releases Prior to Logical Domains 1.3”
in Chapter 6 Using Virtual Networks in the Logical Domains 1.3
Administration Guide.
In the event of a physical link failure in the service domain, the virtual
switch device that is bound to that physical device detects the link failure.
Then, the virtual switch device propagates the failure to the
corresponding virtual network device that is bound to this virtual switch.
The virtual network device sends notification of this link event to the IP
layer in the guest LDom_A, which results in failover to the other virtual
network device in the IPMP group.
Figure: An IPMP group in the service domain composed of two virtual
switch interfaces (vsw0 and vsw1) backed by the physical interfaces
nxge0 and nxge1.
The two virtual switch interfaces can then be plumbed and configured
into an IPMP group. In the event of a physical link failure, the virtual
switch device that is bound to that physical device detects the link failure.
Then, the virtual switch device sends notification of this link event to the
IP layer in the service domain, which results in failover to the other
virtual switch device in the IPMP group.
With Logical Domains 1.3, the virtual network and virtual switch devices
support link status updates to the network stack. By default, a virtual
network device reports the status of its virtual link through its LDC to the
virtual switch. This setup is enabled by default and does not require you
to perform additional configuration steps.
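To have a virtual network device report the state of the underlying
physical link instead, a sketch using the linkprop property (the device
and domain names are those used in this course):
primary# ldm set-vnet linkprop=phys-state vnet0 ldom1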
Figure: Active and backup LDC channels through the hypervisor.
Preparation
To prepare for this lab exercise:
● Review the LDom services you configured in the previous exercise.
● When performing this exercise, use the following system login
information. See your instructor if you need help.
Control domain operating system:
● Login name root
● Login password cangetin
Guest domain operating system:
● Login name root
● Login password cangetin
● Task 7 requires that you partner with another student to migrate a
guest domain from your machine to their machine. Contact your
instructor if you need help.
Note – The sample command output shown for the following commands
is from a Sun SPARC Enterprise T5140 server. The server details and
device path information might differ based on the system disks and
architecture of the machine you are using.
Note – Examples shown in this exercise are from lab machine 4. The
command responses shown in the lab are examples only. Depending upon
the machine you are using, your output might differ.
MAC
00:14:4f:f8:d6:15
HOSTID
0x84f8d615
CONTROL
failure-policy=ignore
DEPENDENCY
master=
VCPU
VID PID UTIL STRAND
0 12 100%
1 13 100%
2 14 100%
3 15 100%
MEMORY
RA PA SIZE
0xe000000 0x8e000000 2G
VARIABLES
auto-boot?=false
boot-device=/virtual-devices@100/channel-devices@200/disk@0
NETWORK
NAME SERVICE ID DEVICE MAC MODE PVID VID MTU LINKPROP
vnet0 primary-vsw0@primary 0 network@0 00:14:4f:fc:00:04 1 1500
DISK
NAME VOLUME TOUT ID DEVICE SERVER MPGROUP
vdisk0 vol0@primary-vds0 0 disk@0 primary
VCONS
NAME SERVICE PORT
ldom1 primary-vcc0@primary 5000
11. Boot the Solaris OS. Log in to the guest domain ldom1 as root.
{0} ok boot
...
machine4-vnet0 console login: root
Password: cangetin
...
12. Verify that the bound resources are available to the guest domain
ldom1 operating system.
guest# psrinfo
Note – The output for the following commands is specific to the Sun
SPARC Enterprise T5140 server, which is used as an example in this
course. The output and your responses will vary based on the server,
system configuration, and architecture you are using.
guest# psrinfo
0 on-line since 02/09/2010 16:19:46
1 on-line since 02/09/2010 16:19:47
...
VCPU
VID PID UTIL STRAND
0 8 0.2% 100%
1 9 0.0% 100%
2 10 0.0% 100%
3 11 0.0% 100%
4 12 0.0% 100%
5 13 0.0% 100%
6 14 0.3% 100%
7 15 0.0% 100%
MEMORY
RA PA SIZE
0xe000000 0x8e000000 4G
...
In this task you use the Logical Domains Management utility to split the
PCI bus such that the control (primary) domain maintains access to
network and disk devices on one PCI bus. You then bind the other PCI
bus to the guest domain ldom1. Finally, you configure the guest domain
physical network interface to provide direct access to the network. See
Figure 6-8.
...
IO
DEVICE PSEUDONYM OPTIONS
pci@400 pci_0
pci@500 pci_1
...
Note that pci@400 and pci@500 are currently bound to the primary
domain.
VSW
NAME LDOM MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE
primary-vsw0 primary 00:14:4f:fa:6a:f8 nxge0 0 switch@0 1 1 1500
VDS
NAME LDOM VOLUME OPTIONS MPGROUP DEVICE
primary-vds0 primary vol0 /dev/dsk/c1t1d0s2
FLAGS
normal,delayed(modify),control,vio-service
VSW
NAME LDOM MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE
primary-vsw0 primary 00:14:4f:fb:79:88 e1000g0 0 switch@0 1 1 1500
VDS
NAME LDOM VOLUME OPTIONS MPGROUP DEVICE
primary-vds0 primary vol0 /dev/dsk/c1t1d0s2
9. Remove the PCI bus that is not currently being used by the control
domain.
The following is an example of removing a PCI bus on a T5140 server.
primary# ldm remove-io pci@500 primary
Initiating delayed reconfigure operation on LDom primary. All
configuration changes for other LDoms are disabled until the LDom
reboots, at which time the new configuration for LDom primary will also
take effect.
10. Save the split configuration to the system controller. Name this
configuration config_split.
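primary# ldm add-spconfig config_split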
Caution – If you remove the wrong PCI bus by mistake, you can return to
the SC and perform a reset. The reset voids the remove-io command used
above, and you can then remove the correct bus. Make sure that you
remove the bus that is not in use by the control domain.
12. List the configurations on the system controller to verify that
config_split is current.
primary# ldm list-spconfig
factory-default
config_initial
config_split [current]
13. Verify that the PCI bus that was removed in the previous steps is not
bound to the control domain.
primary# ldm list-bindings primary
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- SP 8 2G 0.4% 3m
...
IO
DEVICE PSEUDONYM OPTIONS
pci@400 pci_0
19. Boot the Solaris OS. After the Solaris OS is completely booted, log in
as root.
{0} ok boot
Boot device: vdisk0 File and args:
...
machine4-vnet0 console login: root
Password: cangetin
...
20. Run the grep network /etc/path_to_inst command to list the
network devices that are now visible in the guest domain.
"/virtual-devices@100/channel-devices@200/network@0" 0 "vnet"
"/pci@500/pci@0/pci@8/network@0" 0 "nxge"
"/pci@500/pci@0/pci@8/network@0,1" 1 "nxge"
"/pci@500/pci@0/pci@8/network@0,2" 2 "nxge"
"/pci@500/pci@0/pci@8/network@0,3" 3 "nxge"
21. Plumb the first network interface attached to the PCI bus pci@500.
guest# ifconfig nxge0 plumb
...
Jan 20 16:31:07 machine4-vnet0 nxge: NOTICE: nxge0: xcvr addr:0x1d - link
is up 100 Mbps full duplex
...
Figure: The file S10U8-5g.img resides on the control domain's internal
disk 0 (c1t0d0s0, /pci@400/../sd@0,0) and is exported to the guest
domain as a virtual disk.
4. Add a virtual disk service device to the virtual disk service
primary-vds0 using the /ldoms/images/S10U8-5g.img file. Name
this device vol1.
primary# ldm add-vdiskserverdevice /ldoms/images/S10U8-5g.img \
vol1@primary-vds0
5. Assign virtual disk vol1 to guest domain ldom1 using the
add-vdisk subcommand. Name the virtual disk vdisk1.
primary# ldm add-vdisk vdisk1 vol1@primary-vds0 ldom1
6. Move to the guest domain window.
Note – If the special files for the new disk (c0d1) are not present, run the
devfsadm -v command to create the new special file for the virtual disk.
9. Run the format command to view the new virtual disk. Display the
partition table.
guest# format
Searching for disks...done
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
PARTITION MENU:
0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
7 - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> print
Current partition table (original):
Total disk cylinders available: 17474 + 2 (reserved cylinders)
partition> modify
Select partitioning base:
0. Current partition table (original)
1. All Free Hog
Choose base (enter number) [0]? 1
partition> ^D
10. Run the newfs command to create a file system on the raw disk
device, which you partitioned in step 7.
Example: guest# newfs /dev/rdsk/c0d1s0
newfs: construct a new file system /dev/rdsk/c0d1s0: (y/n)? y
/dev/rdsk/c0d1s0: 10484400 sectors in 17474 cylinders of 1 tracks,
600 sectors 5119.3MB in 137 cyl groups (128 c/g, 37.50MB/g, 4800 i/g)
super-block backups (for fsck -F ufs -o b=#) at: 32, 76832, 153632,
230432, 307232, 384032, 460832, 537632, 614432, 691232, 9753632, 9830432,
9907232, 9984032, 10060832, 10137632, 10214432, 10291232, 10368032,
Figure: The ZFS volume is built on the control domain's internal disk 2
(c1t2d0, /pci@400/../sd@2,0).
3. Run the following command to verify that the zpool (tank1) and
ZFS volume (tank1/vol1) have been created.
primary# zfs list
NAME USED AVAIL REFER MOUNTPOINT
tank1 500M 273G 21K /tank1
tank1/vol1 500M 274G 16K
4. Add the /dev/zvol/dsk/tank1/vol1 device to the control domain
virtual disk service. Name the virtual disk device zvol.
primary# ldm add-vdiskserverdevice /dev/zvol/dsk/tank1/vol1 \
zvol@primary-vds0
FORMAT MENU:
disk - select a disk
type - select (define) a disk type
partition - select (define) a partition table
current - describe the current disk
format> partition
PARTITION MENU:
0 - change `0' partition
1 - change `1' partition
2 - change `2' partition
3 - change `3' partition
4 - change `4' partition
5 - change `5' partition
6 - change `6' partition
7 - change `7' partition
select - select a predefined table
modify - modify a predefined partition table
name - name the current table
print - display the current table
label - write partition map and label to the disk
!<cmd> - execute <cmd>, then return
quit
partition> print
Current partition table (original):
Total disk cylinders available: 1704 + 2 (reserved cylinders)
partition> modify
partition> ^D
9. Run the newfs command to create a file system on the new ZFS
virtual disk discovered in the previous step.
guest# newfs /dev/rdsk/c0d1s0
newfs: construct a new file system /dev/rdsk/c0d1s0: (y/n)? y
/dev/rdsk/c0d1s0: 1022400 sectors in 1704 cylinders of 1 tracks, 600
sectors 499.2MB in 107 cyl groups (16 c/g, 4.69MB/g, 2240 i/g)
super-block backups (for fsck -F ufs -o b=#) at: 32, 9632, 19232, 28832,
38432, 48032, 57632, 67232, 76832, 86432, 931232, 940832, 950432, 960032,
969632, 979232, 988832, 998432, 1008032, 1017632
VSW
NAME LDOM MAC NET-DEV ID DEVICE LINKPROP DEFAULT-VLAN-ID PVID VID MTU MODE
primary-vsw0 primary 00:14:4f:fa:6a:f8 e1000g0 0 switch@0 1 1 1500
VDS
NAME LDOM VOLUME OPTIONS MPGROUP DEVICE
primary-vds0 primary vol0 /dev/dsk/c1t1d0s2
MAC
00:14:4f:f9:b7:b4
HOSTID
0x84f9b7b4
CONTROL
failure-policy=ignore
DEPENDENCY
master=
VCPU
VID PID UTIL STRAND
0 8 0.8% 100%
1 9 0.1% 100%
2 10 0.1% 100%
3 11 0.1% 100%
4 12 0.1% 100%
5 13 0.1% 100%
MEMORY
RA PA SIZE
0xe000000 0x8e000000 2G
VARIABLES
auto-boot?=false
boot-device=vdisk0
IO
DEVICE PSEUDONYM OPTIONS
pci@500 pci_1
NETWORK
NAME SERVICE ID DEVICE MAC MODE PVID VID MTU LINKPROP
vnet0 primary-vsw0@primary 0 network@0 00:14:4f:fc:06:05 1 1500
DISK
NAME VOLUME TOUT ID DEVICE SERVER MPGROUP
vdisk0 vol0@primary-vds0 0 disk@0 primary
VCONS
NAME SERVICE PORT
ldom1 primary-vcc0@primary 5000
11. Working with your lab partner, designate one of your lab systems as
the source machine and the other as the target machine.
The source machine will contain the bootable image file. The image
file will then be shared with the target machine using NFS.
12. On the source machine, create an image file. Use NFS to share it with
the target machine.
primary# mkfile 10g /ldoms/images/bootdisk
primary# vi /etc/dfs/dfstab
...
share -F nfs -o anon=0 /ldoms/images
...
primary# shareall
13. On the source machine, remove the vdisk vdisk0 from ldom1.
primary# ldm remove-vdisk vdisk0 ldom1
14. On the source machine, add the new image file to disk service
primary-vds0. Name the new disk vol1.
primary# ldm add-vdsdev /ldoms/images/bootdisk vol1@primary-vds0
15. On the source machine, add the vdisk vdisk0 to ldom1. Use vol1
from the disk service.
primary# ldm add-vdisk vdisk0 vol1@primary-vds0 ldom1
16. Bind the configuration to guest domain ldom1.
primary# ldm bind-domain ldom1
17. Open a console session on ldom1 and JumpStart the guest domain.
primary# telnet localhost 5000
...
{0} ok boot vnet0 -v install
Boot device: /virtual-devices@100/channel-devices@200/network@0 File and
args: - install
Requesting Internet Address for 0:14:4f:fc:6:4
Using RARP/BOOTPARAMS...
<IP_Addr_Target> is alive
24. On the source machine, run the ldm migrate -n command to
perform a dry run of the migration of the active guest domain ldom1.
In the command, specify ldom2 as the new name of the migrated
domain.
This example shows you how to change the name of the guest domain. If
you wish to retain the same name, simply omit the target name field.
primary# ldm migrate-domain -n ldom1 root@IP_Addr_Target:ldom2
STATUS
OPERATION PROGRESS SOURCE
migration 37% ldom1
or
primary# ldm list-domain
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- SP 8 2G 15% 18h 28m
ldom2 bound -----t 5000 1 2G
26. After the migration is complete, list all available domains on the
source machine.
primary# ldm list-domain
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-cv- SP 8 2G 0.3% 21m
Note that ldom1 has been destroyed.
27. After the migration is complete, list all available domains on the
target machine.
guest# ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index
1
inet 127.0.0.1 netmask ff000000
vnet0: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 2
inet 192.168.XX.43 netmask ffffff00 broadcast 192.168.XX.255
ether 0:14:4f:fc:6:5
guest# format
Searching for disks...done
Note – In the following steps, the guest domain ldom2 will be migrated
back to the original machine. The roles of the source and target machines
are now reversed.
31. On the source machine (old target machine), run the ldm migrate -n
command to perform a dry run of the migration of the active guest
domain ldom2. In the command, specify ldom1 as the new name of
the migrated domain.
primary# ldm migrate-domain -n ldom2 root@IP_Addr_Target:ldom1
Target Password: cangetin
primary#
If no error messages are returned, the active guest domain migration
should proceed normally. If errors are indicated, resolve them before
continuing on to the next step.
32. Migrate the active guest domain ldom2 to the target machine (old
source machine). In the command, specify ldom1 as the new name of
the migrated domain.
primary# ldm migrate-domain ldom2 root@IP_Addr_Target:ldom1
Target Password: cangetin
primary#
The migration can take a couple of minutes to complete.
You can check the progress of the migration by running the
ldm list -o status or ldm list-domain command.
For example, to check migration status on the target machine:
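primary# ldm list -o status ldom1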
STATUS
OPERATION PROGRESS SOURCE
migration 43% ldom2
or
Note – The guest domain ldom1 might already be removed from your
configuration.
10. In the system controller, change to the /SYS directory and run the
commands to power cycle the system.
11. Start the serial console. After the system has booted, log in as root.
-> start /SP/console
Are you sure you want to start /SP/console (y/n)? y
Serial console started. To stop, type #.
...
console login: root
Password: cangetin
...
12. List the logical domain configuration files stored on the system
controller.
primary# ldm list-config
factory-default [current]
config_initial
config_split
Note that the factory-default configuration is the current
configuration.
13. Remove all the user-defined configuration files stored on the system
controller.
primary# ldm remove-config config_initial
primary# ldm remove-config config_split
14. List the logical domain configuration files stored on the system
controller.
primary# ldm list-config
factory-default [current]
15. Generate a long list of the primary domain to verify that it has
returned to the factory default state.
primary# ldm list-domain -l primary
NAME STATE FLAGS CONS VCPU MEMORY UTIL UPTIME
primary active -n-c-- SP 64 7968M 0.1% 3m
SOFTSTATE
Solaris running
HOSTID
0x84463e4e
CONTROL
failure-policy=ignore
DEPENDENCY
master=
VCPU
VID PID UTIL STRAND
0 0 0.5% 100%
1 1 0.0% 100%
2 2 0.0% 100%
3 3 0.1% 100%
4 4 0.1% 100%
5 5 0.0% 100%
6 6 0.0% 100%
7 7 0.0% 100%
8 8 0.0% 100%
9 9 0.0% 100%
10 10 0.0% 100%
11 11 0.0% 100%
12 12 0.0% 100%
13 13 0.0% 100%
14 14 0.0% 100%
15 15 0.0% 100%
16 16 0.0% 100%
17 17 0.0% 100%
18 18 0.0% 100%
19 19 0.1% 100%
20 20 0.0% 100%
21 21 0.0% 100%
22 22 0.0% 100%
23 23 0.0% 100%
24 24 0.2% 100%
25 25 0.0% 100%
26 26 0.0% 100%
27 27 0.0% 100%
28 28 0.2% 100%
29 29 0.0% 100%
30 30 0.0% 100%
31 31 0.0% 100%
64 64 0.1% 100%
MAU
ID CPUSET
0 (0, 1, 2, 3, 4, 5, 6, 7)
1 (8, 9, 10, 11, 12, 13, 14, 15)
2 (16, 17, 18, 19, 20, 21, 22, 23)
3 (24, 25, 26, 27, 28, 29, 30, 31)
4 (64, 65, 66, 67, 68, 69, 70, 71)
5 (72, 73, 74, 75, 76, 77, 78, 79)
6 (80, 81, 82, 83, 84, 85, 86, 87)
7 (88, 89, 90, 91, 92, 93, 94, 95)
IO
DEVICE PSEUDONYM OPTIONS
pci@400 pci_0
pci@500 pci_1
VCONS
NAME SERVICE PORT
SP