IBM PowerVM
Live Partition Mobility
Explore the PowerVM Enterprise Edition
Live Partition Mobility
John E Bailey
Thomas Prokop
Guido Somers
ibm.com/redbooks
International Technical Support Organization
March 2009
SG24-7460-01
Note: Before using this information and the product it supports, read the information in
“Notices” on page xv.
This edition applies to AIX Version 6.1, AIX 5L Version 5.3 TL7, HMC Version 7.3.2 or later, and
POWER6 technology-based servers, such as the IBM Power System 570 (9117-MMA) and the
IBM Power System 550 Express (8204-E8A).
© Copyright International Business Machines Corporation 2007, 2009. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP
Schedule Contract with IBM Corp.
Contents
Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix
Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
The team that wrote this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvii
Become a published author . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xix
Chapter 1. Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 Partition migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Cross-system flexibility is the requirement . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.4 Live Partition Mobility is the answer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4.1 Inactive migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.4.2 Active migration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.5 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.5.1 Hardware infrastructure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.5.2 Components involved . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.6 Operation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.6.1 Inactive migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.6.2 Active migration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.7 Combining mobility with other features . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.7.1 High availability clusters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
1.7.2 AIX Live Application Mobility . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
5.6.5 Configure network on the mobile partition. . . . . . . . . . . . . . . . . . . . 157
5.6.6 Remove adapters from the mobile partition . . . . . . . . . . . . . . . . . . 160
5.6.7 Ready to migrate. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
5.7 The command-line interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
5.7.1 The migrlpar command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
5.7.2 The lslparmigr command. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
5.7.3 The lssyscfg command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
5.7.4 The mkauthkeys command . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
5.7.5 A more complex example . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
5.8 Migration awareness . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
5.9 Making applications migration-aware . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
5.9.1 Migration phases. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
5.9.2 Making programs migration aware using APIs . . . . . . . . . . . . . . . . 179
5.9.3 Making applications migration-aware using scripts . . . . . . . . . . . . . 182
5.10 Making kernel extension migration aware . . . . . . . . . . . . . . . . . . . . . . . 185
5.11 Virtual Fibre Channel. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
5.11.1 Basic virtual Fibre Channel Live Partition Mobility preparation . . . 190
5.11.2 Migration of a virtual Fibre Channel based partition . . . . . . . . . . . 193
5.11.3 Dual Virtual I/O Server and virtual Fibre Channel multipathing. . . 195
5.11.4 Live Partition Mobility with Heterogeneous I/O . . . . . . . . . . . . . . . 198
5.12 Processor compatibility modes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
5.12.1 Verifying the processor compatibility mode of mobile partition . . . 208
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
Figures
7-5 Checking the amount of memory of the mobile partition. . . . . . . . . . . . . 234
7-6 Checking the amount of memory on the destination server . . . . . . . . . . 235
7-7 Checking the amount of processing units of the mobile partition . . . . . . 236
7-8 Checking the amount of processing units on the destination server . . . . 237
7-9 Enter PowerVM Edition key on the IVM. . . . . . . . . . . . . . . . . . . . . . . . . . 239
7-10 Processor compatibility mode on the IVM . . . . . . . . . . . . . . . . . . . . . . . 241
7-11 Checking the partition workload group participation . . . . . . . . . . . . . . . 243
7-12 Checking if the mobile partition has physical adapters . . . . . . . . . . . . . 244
7-13 View/Modify Virtual Fibre Channel window . . . . . . . . . . . . . . . . . . . . . . 249
7-14 Virtual Fibre Channel Partition Connections window . . . . . . . . . . . . . . 250
7-15 Partition selected shows Automatically generate . . . . . . . . . . . . . . . . . 250
7-16 Virtual Fibre Channel on source system . . . . . . . . . . . . . . . . . . . . . . . . 251
7-17 Virtual Fibre Channel on destination system. . . . . . . . . . . . . . . . . . . . . 252
7-18 Selecting physical adapter to be used as a virtual Ethernet bridge . . . 254
7-19 Create virtual Ethernet adapter on the mobile partition. . . . . . . . . . . . . 255
7-20 Create a virtual Ethernet adapter on the management partition . . . . . . 256
7-21 Partition is migrating . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 258
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area.
Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM
product, program, or service may be used. Any functionally equivalent product, program, or service that
does not infringe any IBM intellectual property right may be used instead. However, it is the user's
responsibility to evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document.
The furnishing of this document does not give you any license to these patents. You can send license
inquiries, in writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer
of express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may
make improvements and/or changes in the product(s) and/or the program(s) described in this publication at
any time without notice.
Any references in this information to non-IBM Web sites are provided for convenience only and do not in any
manner serve as an endorsement of those Web sites. The materials at those Web sites are not part of the
materials for this IBM product and use of those Web sites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without
incurring any obligation to you.
Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm
the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on
the capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the
sample programs are written. These examples have not been thoroughly tested under all conditions. IBM,
therefore, cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business
Machines Corporation in the United States, other countries, or both. These and other IBM trademarked
terms are marked on their first occurrence in this information with the appropriate symbol (® or ™),
indicating US registered or common law trademarks owned by IBM at the time this information was
published. Such trademarks may also be registered or common law trademarks in other countries. A current
list of IBM trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
SUSE, the Novell logo, and the N logo are registered trademarks of Novell, Inc. in the United States and
other countries.
Java, and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other
countries, or both.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.
Live Partition Mobility is the next step in the IBM Power Systems™ virtualization
continuum. It can be combined with other virtualization technologies, such as
logical partitions, Live Workload Partitions, and the SAN Volume Controller, to
provide a fully virtualized computing platform that offers the degree of system
and infrastructure flexibility required by today’s production data centers.
This IBM® Redbooks® publication discusses how Live Partition Mobility can help
technical professionals, enterprise architects, and system administrators:
- Migrate entire running AIX® and Linux® partitions and hosted applications
  from one physical server to another without disrupting services and loads.
- Meet stringent service-level agreements.
- Rebalance loads across systems quickly, with support for multiple concurrent
  migrations.
- Use a migration wizard for single partition migrations.
This book can help you understand, plan, prepare, and perform partition
migration on IBM Power Systems servers that are running AIX.
Note: Minor updates and technical corrections are marked by change bars
such as the ones in the left margin on this page. A 2010 update was made to
include POWER7™ servers.
John E Bailey is a Staff Software Engineer working in the IBM Power Systems
Test Organization for IBM USA. He holds a degree in Computer Science from
Prairie View A&M University. He has seven years of experience with IBM Power
Systems and has worked on Live Partition Mobility for three years. His areas of
expertise include AIX, Linux, Hardware Management Console, storage area
networks, PowerVM™ virtualization, and software testing.
The authors of the first edition of the IBM System p® Live Partition Mobility
Redbook are:
Mitchell Harding, Narutsugu Itoh, Peter Nutt, Guido Somers, Federico Vagnini,
Jez Wain
Thanks to the following people for their contributions to this project:
John E. Bailey, John Banchy, Kevin J. Cawlfield, Eddie Chen, Steven J. Finnes,
Matthew Harding, Mitchell P. Harding, Tonya L. Holt, David Hu,
Robert C. Jennings, Anil Kalavakolanu, Timothy Marchini, Josh Miers,
Timothy Piasecki, Steven E. Royer, Elizabeth A. Ruth, Maneesh Sharma,
Luc R. Smolders, John D. Spangenberg, Ravindra Tekumallah,
Vasu Vallabhaneni, Jonathan R. Van Niewaal, Dean S. Wilcox
IBM USA
Jun Nakano
IBM Japan
Your efforts will help increase product acceptance and customer satisfaction. As
a bonus, you will develop a network of contacts in IBM development labs, and
increase your productivity and marketability.
Find out more about the residency program, browse the residency index, and
apply online at:
http://www.ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
Chapter 1. Overview
In this chapter, we provide an overview of Live Partition Mobility with a high-level
description of its features.
IBM Power Systems servers are designed to offer the highest stand-alone
availability in the industry. Enterprises must occasionally restructure their
infrastructure to meet new IT requirements. By letting you move your running
production applications from one physical server to another, Live Partition
Mobility makes system maintenance and modification nondisruptive to your
users. It mitigates the impact on partitions and applications formerly caused
by the occasional need to shut down a system.
Today, even small IBM Power Systems servers frequently host many logical
partitions. As the number of hosted partitions increases, finding a maintenance
window acceptable to all becomes increasingly difficult. Live Partition Mobility
allows you to move partitions around so that you can perform previously
disruptive operations on the machine at your convenience, rather than when it
causes the least inconvenience to the users.
The ability to move running partitions from one server to another offers you the
ability to balance workloads and resources. If a key application’s resource
requirements peak unexpectedly to a point where there is contention for server
resources, you might move it to a more powerful server or move other, less
critical, partitions to different servers, and use the freed-up resources to absorb
the peak.
Live Partition Mobility is the next step in the IBM PowerVM continuum. It can be
combined with other virtualization technologies, such as logical partitions, Live
Workload Partitions, and the SAN Volume Controller to provide a fully virtualized
computing platform offering the degree of system and infrastructure flexibility
required by today’s production data centers.
the new requirements in a very short time, but also with minimal to no impact on
the service level. Configuration changes must be applied in a very simple and
secure way, with limited administrator intervention, to reduce change
management costs and the related risk.
Without a way to migrate a partition, all these activities require careful planning
and highly skilled people, and often cause significant downtime. In some cases,
an SLA may be so strict that planned outages are not tolerated.
Live Partition Mobility gives the administrator greater control over the use
of resources in the data center. It allows a level of reconfiguration that was
not possible in the past, whether because of complexity or because of SLAs that
do not allow an application to be stopped for an architectural change.
communicate to the source and destination servers, and their respective Virtual
I/O Servers. It is executed in a controlled way and with minimal administrator
interaction so that it can be safely and reliably performed in a very short time
frame.
When the service provided by the partition cannot be interrupted, its relocation
can be performed, with no loss of service, by using the active migration feature.
1.5 Architecture
Live Partition Mobility requires a specific hardware infrastructure. Several
platform components are involved. Live Partition Mobility is controlled by the
Hardware Management Console (HMC) or the Integrated Virtualization Manager
(IVM). This section describes the HMC-based architecture. Chapter 7,
“Integrated Virtualization Manager for Live Partition Mobility” on page 221
describes the IVM-based Live Partition Mobility in detail.
Live Partition Mobility requires a specific hardware and microcode configuration
that is currently available only on POWER6 and POWER7 technology-based systems.
The procedure that performs the migration identifies the resource configuration
of the mobile partition on the source system and then reconfigures both source
and destination systems accordingly. Because the focal point of hardware
configuration is the HMC, it has been enhanced to coordinate the process of
migrating partitions.
The mobile partition’s configuration is not changed during the migration. The
destination system must be able to host the mobile partition and must have
enough free processor and memory resources to satisfy the partition’s
requirements before migration is started. No limitation exists on the size of the
mobile partition; it can even use all resources of the source system offered by the
Virtual I/O Server.
The operating system and application data must reside on external disks of the
source system because the mobile partition’s disk data must be available after
the migration to the destination system is completed. An external, shared-access
storage subsystem is therefore required.
The mobile partition must not own any physical adapters and must use the
Virtual I/O Server for both network and external disk access. External disks
may be presented to the mobile partition as virtual SCSI resources, virtual
Fibre Channel resources, or both.
Because the mobile partition’s external disk space must be available to the
Virtual I/O Servers on the source and destination systems, you cannot use
storage pools. Each Virtual I/O Server must create virtual target devices using
physical disks and not logical volumes.
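As a sketch of what this rule implies on each Virtual I/O Server, a whole physical disk can be mapped as the backing device roughly as follows. This is illustrative only: the device names (hdisk5, vhost0) and the virtual target device name are assumptions, and the commands are printed for review rather than executed, because they must be run in the VIOS restricted shell.

```shell
# Illustrative VIOS command sequence; hdisk5 and vhost0 are assumed names.
LUN=hdisk5        # SAN LUN visible to both Virtual I/O Servers
VHOST=vhost0      # virtual SCSI server adapter serving the mobile partition

# Map the whole physical disk (not a logical volume) as the backing device,
# then verify the resulting mapping.
MKVDEV="mkvdev -vdev $LUN -vadapter $VHOST -dev mobile_rootvg"
VERIFY="lsmap -vadapter $VHOST"

# Printed here rather than executed; run them on each VIOS restricted shell.
echo "$MKVDEV"
echo "$VERIFY"
```

Because the same LUN must be mapped on both the source and destination Virtual I/O Servers, the same sequence is repeated on each side.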
Both the source and the target system must have an appropriate shared Ethernet
adapter environment to host a moving partition. All virtual networks in use by the
mobile partition on the source system must be available as virtual networks on
the destination system.
VLANs defined by port virtual IDs (PVIDs) on the VIOS have no meaning outside
of an individual server as all packets are bridged untagged. It is possible for
VLAN 1 on CEC 1 to be part of the 192.168.1 network while VLAN 1 on CEC 2 is
part of the 10.1.1 network.
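A shared Ethernet adapter that bridges the mobile partition's virtual network can be created on the destination Virtual I/O Server along these lines. The adapter names and the default PVID are assumptions; the command is printed rather than executed because it belongs in the VIOS restricted shell.

```shell
# Illustrative VIOS command; ent0 is assumed to be the physical adapter,
# ent2 the virtual trunk adapter, and 1 the default PVID.
SEA_CMD="mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1"

# Printed rather than executed; run it in the VIOS restricted shell.
echo "$SEA_CMD"
```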
Figure 1-1 shows a basic hardware infrastructure enabled for Live Partition
Mobility and that is using a single HMC. Each system is configured with a single
Virtual I/O Server partition. The mobile partition has only virtual access to
network and disk resources. The Virtual I/O Server on the destination system is
connected to the same network and is configured to access the same disk space
used by the mobile partition. For illustration purposes, the device numbers are all
shown as zero, but in practice, they can vary considerably.
[Figure 1-1: basic Live Partition Mobility infrastructure — a single HMC managing two POWER systems, each with one Virtual I/O Server; the mobile partition uses only virtual resources (vscsi0, ent0), and both Virtual I/O Servers reach the same Ethernet network and storage subsystem LUN through the POWER Hypervisor and service processors]
The migration process creates a new logical partition on the destination system.
This new partition uses the destination’s Virtual I/O Server to access the same
mobile partition’s network and disks. During active migration, the state of the
mobile partition is copied, as shown in Figure 1-2.
[Figure 1-2: active migration — a new partition is created on the destination system, the mobile partition's state is copied across the Ethernet network, and the Virtual I/O Servers on both systems access the same storage subsystem LUN]
Note: HMC Version 7 Release 3.4 introduces remote migration, the option of
migrating partitions between systems managed by different HMCs. See 5.4,
“Remote Live Partition Mobility” on page 130 for details on remote migration.
Memory management of an active migration is assigned to a mover service
partition on each system. During an active partition migration, the source mover
service partition extracts the mobile partition’s state from the source system and
sends it over the network to the destination mover service partition, which in turn
updates the memory state on the destination system.
Any Virtual I/O Server partition can be configured as a mover service partition.
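Enabling and checking the mover service partition attribute from the HMC command line might look like the following sketch. The managed system name, the partition name, and the msp attribute spelling are assumptions to verify against your HMC level; the commands are printed rather than executed.

```shell
# Illustrative HMC commands; "source_sys" and "VIOS1" are assumed names, and
# the msp attribute is an assumption based on HMC lssyscfg/chsyscfg output.
ENABLE_MSP='chsyscfg -r lpar -m source_sys -i "name=VIOS1,msp=1"'
CHECK_MSP='lssyscfg -r lpar -m source_sys -F name,msp'

# Printed rather than executed; run them on the HMC command line.
echo "$ENABLE_MSP"
echo "$CHECK_MSP"
```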
1.6 Operation
Partition migration can be performed either as an inactive or an active operation.
The steps executed are similar to those an administrator would follow when
performing a manual migration. These actions normally require accurate
planning and a system-wide knowledge of the configuration of the two systems
because virtual adapters and virtual target devices have to be created on the
destination system, following virtualization configuration rules.
The inactive migration task takes care of all planning and validation and performs
the required activities without user action. This mitigates the risk of human error
and executes the movement in a timely manner.
Active migration performs similar steps to inactive migration, but also copies
physical memory to the destination system. It keeps applications running,
regardless of the size of the memory used by the partition; the service is not
interrupted, the I/O continues accessing the disk, and network connections keep
transferring data.
[Figure: example environment — POWER6 systems hosting database, Web application, and test environment partitions through Virtual I/O Servers]
Live Partition Mobility increases global availability, but it is not a high availability
solution. It requires that both source and destination systems be operational and that
the partition is not in a failed state. In addition, it does not monitor operating
system and application state and it is, by default, a user-initiated action.
Unplanned outages still require specific actions that are normally executed by
cluster solutions such as IBM PowerHA.
IBM PowerHA for AIX, also known as High Availability Cluster Multiprocessing
(HACMP™) for AIX, supports Live Partition Mobility for all IBM POWER6
technology-based servers. Table 1-1 provides support details.
Cluster software and Live Partition Mobility provide different functions that can be
used together to improve the availability and uptime of applications. They can
simplify administration, reducing the related cost.
Although Live Application Mobility is very similar to active Live Partition Mobility, it
is a pure AIX 6 function. It does not require any partition configuration change,
and it can be executed on any server running AIX Version 6.1, including
POWER7, POWER6, POWER5, and POWER4™ technology-based servers.
Live Application Mobility is a capability of the PowerVM Workload Partitions
Manager™ and can function on all systems that support AIX Version 6.1, whereas
Live Partition Mobility is a PowerVM feature that works for AIX 5.3, AIX 6.1,
and Linux operating systems running on POWER6 or POWER7 technology-based
servers.
[Figure: Live Application Mobility example — Web, DB, and Test workload partitions on AIX 6 systems, including a POWER5 System A, sharing a common file system]
Live Partition Mobility and AIX Live Application Mobility have different scopes but
have similar characteristics. They can be used in conjunction to provide even
higher flexibility in a POWER6 or POWER7 environment.
[Figure: Live Partition Mobility components — the HMC, a mobile AIX/Linux partition with its DLPAR resource manager, a Virtual I/O Server hosting the mover service and its own DLPAR resource manager, partition profiles, the POWER Hypervisor, and the service processor of an IBM Power System]
These components and their roles are described in the following list.
Hardware Management Console (HMC)
    The HMC is the central point of control. It coordinates administrator
    initiation and setup of the subsequent migration command sequences that
    flow between the various partition migration components.
Note: Beginning with HMC Version 7 Release 3.4, the destination system
may be managed by a remote HMC. Mobility operations to a remotely
managed destination system are discussed in 5.4, “Remote Live Partition
Mobility” on page 130.
Before initiating the migration of a partition, the HMC verifies the capability and
compatibility of the source and destination servers, and the characteristics of the
mobile partition to determine whether or not a migration is possible.
The hardware, firmware, Virtual I/O Servers, mover service partitions, operating
system, and HMC versions that are required for Live Partition Mobility along with
the system compatibility requirements are described in Chapter 3,
“Requirements and preparation” on page 45.
2.2.2 Readiness
Migration readiness is a dynamic partition property that changes over time.
Server readiness
A server that is running on battery power is not ready to receive a mobile
partition; it cannot be selected as a destination for partition migration. A server
that is running on battery power may be the source of a mobile partition; indeed,
that it is running on battery power may be the impetus for starting the migration.
2.2.3 Migratability
The term migratability refers to a partition’s ability to be migrated and is distinct
from partition readiness. A partition may be migratable but not ready. A partition
that is not migratable may be made migratable with a configuration change. For
active migration, consider whether a shutdown and reboot is required. When
planning a migration, also review the following additional prerequisites:
General prerequisites:
– The memory and processor resources required to meet the mobile
partition’s current entitlements must be available on the destination server.
– The partition must not have any required dedicated physical adapters.
– The partition must not have any logical host Ethernet adapters.
– The partition is not a Virtual I/O Server.
– The partition is not designated as a redundant error path reporting
partition.
– The partition does not have any of its virtual SCSI disks defined as logical
volumes in any Virtual I/O Server. All virtual SCSI disks must be mapped
to LUNs visible on a SAN or iSCSI.
– The partition has virtual Fibre Channel disks configured as described in
Section 5.11, “Virtual Fibre Channel” on page 187.
– The partition is not part of an LPAR workload group. A partition can be
dynamically removed from a group.
– The partition has a unique name. A partition cannot be migrated if any
partition exists with the same name on the destination server.
In an inactive migration only, the following characteristics apply:
– It is a partition in the Not Activated state
– May use huge pages
– May use the barrier synchronization registers
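Two of these conditions, the partition's migration state and name uniqueness on the destination, can be checked from the HMC command line along these lines. The system names are assumptions, and the commands are printed for review rather than executed.

```shell
# Illustrative HMC checks; managed system names are assumptions.
SRC=P6-570-A
DST=P6-570-B

# Show each partition's migration state on the source system.
STATE_CMD="lslparmigr -r lpar -m $SRC -F name,migration_state"
# List partition names on the destination to look for a name clash.
NAMES_CMD="lssyscfg -r lpar -m $DST -F name"

# Printed rather than executed; run them on the HMC command line.
echo "$STATE_CMD"
echo "$NAMES_CMD"
```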
The remainder of this chapter describes the inactive and active migration
processes.
Note: As part of a migration process, the HMC copies all of the mobile
partition’s profiles as-is to the destination system. The HMC also creates a
new migration profile containing the partition’s current state and, unless you
specify a profile name, this profile replaces the existing profile that was last
used to activate the partition. If you specify an existing profile name, the HMC
replaces that profile with the new migration profile. Therefore, if you want to
keep the partition’s existing profiles, you should specify a new and unique
profile name when initiating the migration. If you add an adapter (physical or
virtual) to a partition using dynamic reconfiguration, it is added to the profile as
desired.
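A migration that preserves the partition's existing profiles by naming a new migration profile might be issued as in the following sketch. All names are assumptions, and the -n option for the profile name should be confirmed against the migrlpar help text on your HMC before use; the command is printed rather than executed.

```shell
# Illustrative HMC command; system, partition, and profile names are assumed,
# and the -n (new migration profile name) option should be verified against
# your HMC's migrlpar documentation.
MIG_CMD="migrlpar -o m -m P6-570-A -t P6-570-B -p lpar1 -n lpar1_after_mig"

# Printed rather than executed; run it on the HMC command line.
echo "$MIG_CMD"
```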
2.4.1 Introduction
The HMC is the central point of control, coordinating administrator actions and
migration command sequences. Because the mobile partition is powered off,
only the static partition state (definitions and configurations) is transferred from
source to destination. The transfer is performed by the controlling HMC, the
service processors, and the POWER Hypervisor on the two systems; there is no
dynamic state, so mover service partitions are not required.
The HMC creates a migration profile for the mobile partition on the destination
server corresponding to its current configuration. All profiles associated with the
mobile partition are moved to the destination server after the partition definition
has been created on the destination server.
Note: Because the HMC always migrates the latest activated profile, an
inactive partition that has never been activated is not migratable. To meet this
requirement, booting to an operating system is unnecessary; booting to the
SMS menu is sufficient. Any changes to the latest activated profile after
power-off are not preserved. To save the changes, the mobile partition must
be reactivated and shut down.
[Figures: inactive migration sequence diagrams — over time, the HMC checks RMC connections, virtual adapter mappings, and readiness with the mobile partition and the source and destination Virtual I/O Servers; after validation it sets up virtual storage adapters on the destination Virtual I/O Server, removes the source LPAR and its virtual storage, and issues notification of completion]
Note: Virtual slot numbers can change during migration. When moving a
partition to a server and then back to the original, it might not have the same slot
numbers. If this information is required, you should record the slot numbers.
The mover service partitions on the source and destination, under the control of
the HMC, move this state between the two systems.
2.5.2 Preparation
After you have created the Virtual I/O Servers and enabled the mover service
partitions, you must prepare the source and destination systems for migration:
1. Synchronize the time-of-day clocks on the source and destination mover
service partitions using an external time reference, such as Network Time
Protocol (NTP). This step is optional; it increases the accuracy of time
measurement during migration.
The step is not required by the migration mechanisms. Even if this step is
omitted, the migration process correctly adjusts the partition time. Time never
goes backward on the mobile partition during a migration.
2. Prepare the partition for migration:
a. Use dynamic reconfiguration on the HMC to remove all dedicated I/O,
such as PCI slots, GX slots, virtual optical devices, and Integrated Virtual
Ethernet from the mobile partition.
b. Remove the partition from a partition workload group.
3. Prepare the destination Virtual I/O Server.
a. Configure the shared Ethernet adapter as necessary to bridge VLANs.
b. Configure the SAN such that requisite storage devices are available.
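The preparation steps above can also be driven from the HMC command line. The following sketch only builds the corresponding command strings; the managed system name, partition name, slot DRC index, and profile attribute names (such as work_group_id) are placeholders and assumptions — verify them against your HMC level before running anything.

```shell
# Hypothetical names -- substitute your own managed system, partition, and slot.
SRC_SYS="9117-MMA-SN101F170-L10"   # source managed system
LPAR="mobile"                      # mobile partition
SLOT_DRC="21010002"                # DRC index of a dedicated PCI slot to release

# Dynamically remove a dedicated I/O slot from the running partition
drop_io="chhwres -r io -m $SRC_SYS -o r -p $LPAR -l $SLOT_DRC"

# Clear the partition workload group in the profile (work_group_id=none)
leave_wg="chsyscfg -r prof -m $SRC_SYS -i name=default,lpar_name=$LPAR,work_group_id=none"

echo "$drop_io"
echo "$leave_wg"
```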
After the pre-check, the HMC prevents any configuration changes to the partition
that might invalidate the migration and then proceeds to perform a detailed
capability, compatibility, migratability, and readiness check on the source and
destination systems.
Configuration checks
The HMC performs the following configuration checks:
Checks the source and destination systems, POWER Hypervisor, Virtual I/O
Servers, and mover service partitions for active partition migration capability
and compatibility
Checks that the RMC connections to the mobile partition, the source and
destination Virtual I/O Servers, and the connection between the source and
destination mover service partitions are established
Checks that there are no required physical adapters in the mobile partition
and that there are no required virtual serial slots higher than slot 2
Checks that no client virtual SCSI disks on the mobile partition are backed by
logical volumes and that no disks map to internal disks
Checks the mobile partition, its operating system, and its applications for
active migration capability. An application registers its capability with AIX
and can block migrations
Checks that the logical memory block size is the same on the source and
destination systems
Checks that the type of the mobile partition is AIX or Linux and that it is
neither an alternate error logging partition nor a mover service partition
Checks that the mobile partition is not configured with barrier synchronization
registers
Checks that the mobile partition is not configured with huge pages
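The HMC runs these checks automatically, but you can query much of the same information yourself with the lslparmigr command. A sketch, using the system and partition names from this book's scenario as placeholders (the command strings are only echoed here, not executed):

```shell
SRC_SYS="9117-MMA-SN101F170-L10"   # source managed system (placeholder)
DST_SYS="9117-MMA-SN10F6A0-L9"     # destination managed system (placeholder)

# System-level partition mobility capability and compatibility
sys_check="lslparmigr -r sys -m $SRC_SYS"

# Migration state of each partition on the source system
lpar_check="lslparmigr -r lpar -m $SRC_SYS"

# Candidate mover service partition pairs for the mobile partition
msp_check="lslparmigr -r msp -m $SRC_SYS -t $DST_SYS --filter lpar_names=mobile"

printf '%s\n' "$sys_check" "$lpar_check" "$msp_check"
```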
This is the end of the validation phase. At this point, there have been no state
changes to the source and destination systems or to the mobile partition. The
HMC inhibits all further dynamic reconfiguration of the mobile partition that might
invalidate the migration: CPU, memory, slot, variable capacity weight, processor
entitlement, and LPAR group.
Figure 2-6 shows the activities and workflow of the migration phase of an active
migration.
[Figure 2-6: Active migration workflow — validation, mover service partition (MSP) setup, virtual SCSI and Fibre Channel adapter setup on the destination Virtual I/O Server, memory copy, source LPAR and virtual SCSI & FC adapter removal, and notification of completion.]
For active partition migration, the transfer of partition state follows a path:
1. From the mobile partition to the source system’s hypervisor.
2. From the source system’s hypervisor to the source mover service partition.
3. From the source mover service partition to the destination mover service
partition.
4. From the destination mover service partition to the destination system's
hypervisor.
5. From the destination system’s hypervisor to the partition shell on the
destination.
The suspend window period (from end of step 7 through end of step 10) lasts
only a few seconds.
The HMC must identify at least one possible destination Virtual I/O Server for
each virtual SCSI and virtual Fibre Channel client adapter assigned to the mobile
partition, or the HMC fails the pre-check or migration. Destination Virtual I/O
Servers must have access to all LUNs used by the mobile partition. If multiple
source-to-destination Virtual I/O Server combinations are possible for virtual
adapter mappings, and you have not specified a mapping, the HMC selects one
of them.
If the destination Virtual I/O Servers cannot access all the VLANs required by the
mobile partition, the HMC halts the migration.
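Before committing to a migration, you can run the same validation from the HMC CLI with migrlpar -o v (validate). A minimal sketch, using this chapter's system names as placeholders and only echoing the command:

```shell
SRC_SYS="9117-MMA-SN101F170-L10"   # source managed system (placeholder)
DST_SYS="9117-MMA-SN10F6A0-L9"     # destination managed system (placeholder)

# Validate an active migration of partition "mobile" without starting it
validate="migrlpar -o v -m $SRC_SYS -t $DST_SYS -p mobile"
echo "$validate"
```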
The HMC validation output lists the possible and suggested virtual adapter
mappings and the mover service partition pairing, for example:
possible_virtual_scsi_mappings=30/VIOS1_L10/1,\
suggested_virtual_scsi_mappings=30/VIOS1_L10/1,\
possible_virtual_fc_mappings=none,\
suggested_virtual_fc_mappings=none
source_msp_name=VIOS1_L9,source_msp_id=1,dest_msp_names=VIOS1_L10,\
dest_msp_ids=1,ipaddr_mappings=9.3.5.3//1/VIOS1_L10/9.3.5.111/
If either of the chosen mover service partitions determines that its VASI cannot
handle a migration or if the HMC receives a VASI device error from a mover
service partition, the HMC stops the migration with an error.
During the migration phase, an initial transfer of the mobile partition’s physical
memory from the source to the destination occurs. Because the mobile partition
is still active, a portion of the partition’s resident memory will almost certainly
have changed during this pass. The hypervisor keeps track of these changed
pages for retransmission to the destination system in a dirty page list. It makes
additional passes over the changed pages until the mover service partition
detects that a sufficient number of pages are clean or until the timeout is
reached.
The speed and load of the network that is used to transfer state between the
source and destination systems influence the time required for both the transfer
of the partition state and the performance of any remote paging operations.
The amount of changed resident memory after the first pass is controlled more
by write activity of the hosted applications than by the total partition memory size.
Nevertheless, a reasonable assumption is that partitions with a large memory
requirement have higher numbers of changed resident pages than smaller ones.
To ensure that active partition migrations are truly nondisruptive, even for large
partitions, the POWER Hypervisor resumes the partition on the destination
system before all the dirty pages have been migrated over to the destination. If
the mobile partition tries to access a dirty page that has not yet been migrated
from the source system, the hypervisor on the destination sends a demand
paging request to the hypervisor on the source to fetch the required page.
Although AIX is migration safe, verify that any applications you are running are
migration safe or aware. See 5.8, “Migration awareness” on page 177 for more
information.
When you have ensured that all these requirements are satisfied and all
preparation tasks are completed, the HMC verifies and validates the Live
Partition Mobility environment. If this validation turns out to be successful, then
you can initiate the partition migration by using the wizard on the HMC graphical
user interface (GUI) or through the HMC command-line interface (CLI).
– Both source and destination systems must have the PowerVM Enterprise
Edition license code installed. To check, use the HMC to:
i. In the navigation area, expand Systems Management.
ii. Select the system in the navigation area.
iii. Expand the Capacity on Demand (CoD) section in the task list by
clicking it. Then select the Enterprise Enablement option and
expand it.
iv. Select View History Log.
The CoD Advanced Functions Activation History Log panel opens.
Figure 3-1 on page 48 shows the activation of Enterprise Edition for
Live Partition Mobility.
v. Click Close.
vi. If the Enterprise Edition code is not activated, repeat the first three
steps and then select Enter Activation Code to enable Live
Partition Mobility, as shown in Figure 3-2.
Note: You can also check the firmware level by executing the lslic
command on the HMC.
Note: Ensure that the target hardware supports the operating system you
are migrating.
Storage requirements
For a list of supported disks and optical devices, see the Virtual I/O Server
data sheet:
http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html
[Table: Preparation tasks and page references — Prepare servers (page 54), Prepare HMC (page 61), Network considerations (page 87)]
Table Note 1: For inactive migration, you perform fewer preparatory tasks on
the Virtual I/O Server because:
You do not have to enable the mover service partition on either the source
or destination Virtual I/O Server.
You do not have to synchronize the time-of-day clocks.
Table Note 2: For inactive migration, you have to perform fewer preparatory
tasks on the mobile partition:
RMC connections are not required.
The mobile partition can have dedicated I/O. These dedicated I/O devices
will be removed automatically from the partition before the migration
occurs.
Barrier-synchronization registers can be used in the mobile partition.
The mobile partition can use huge pages.
The applications do not have to be migration-aware or migration-safe.
3.5.1 HMC
Ensure that the source and destination systems are managed by the same HMC
(or a redundant HMC pair).
For more information about this HMC migration scenario, see 5.4, “Remote
Live Partition Mobility” on page 130.
Figure 3-7 Checking the number of processing units of the mobile partition
Figure 3-9 shows how to check the current version and release of our HMC.
Note: Live Partition Mobility requires HMC Version 7 Release 3.2 or later. In
this publication, we used the latest Version 7 Release 3.4 of the HMC
software (see Figure 3-9 on page 61). You can also verify the current HMC
version, release, and service pack level with the lshmc command. When
using Live Partition Mobility with an HMC managing at least one
POWER7-based server, HMC V7R7.1.0 or later is required.
For more information about upgrading the Hardware Management Console, see:
http://www14.software.ibm.com/webapp/set2/sas/f/hmc/home.html
If the source and destination Virtual I/O Servers do not meet the requirements,
perform an upgrade.
There must be at least one mover service partition on both the source and
destination Virtual I/O Servers for the mobile partition to participate in active
partition migration. If the mover service partition is disabled on either the source
or destination Virtual I/O Server, the mobile partition can be migrated inactively.
To enable the source and destination mover service partitions using the HMC,
you must have super administrator authority (the hmcsuperadmin role, as with
the hscroot login). Complete the following steps:
1. In the navigation area, open Systems Management and select Servers.
2. In the contents area, open the source system.
3. Select the source Virtual I/O Server logical partition and select Properties on
the task area.
4. On the General tab, select Mover Service Partition, and click OK.
5. Repeat these steps for the destination system.
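The same attribute can also be set from the HMC CLI with chsyscfg. This is a sketch: the managed system and Virtual I/O Server names are placeholders, and the msp attribute name should be verified against your HMC level. The command string is only echoed here.

```shell
SRC_SYS="9117-MMA-SN101F170-L10"   # managed system (placeholder)
VIOS="VIOS1_L9"                    # Virtual I/O Server partition (placeholder)

# Enable the mover service partition attribute (msp=1) on the Virtual I/O Server
enable_msp="chsyscfg -r lpar -m $SRC_SYS -i name=$VIOS,msp=1"
echo "$enable_msp"
```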
If you choose not to complete this step, the source and destination Virtual I/O
Servers synchronize the clocks while the mobile partition is moving from the
source system to the destination system. Completing this step before the mobile
partition is moved can prevent possible errors.
To synchronize the time-of-day clocks on the source and destination Virtual I/O
Servers using the HMC, you must be a super administrator (such as hscroot) to
complete the following steps:
1. In the navigation area, open Systems Management.
2. Select Servers and select the source system.
3. In the contents area, select the source Virtual I/O Server logical partition.
4. Click on Properties.
5. Click the Settings tab.
6. For Time reference, select Enabled and click OK.
7. Repeat the previous steps on the destination system for the destination
Virtual I/O Server.
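Enabling the time reference setting can likewise be sketched from the HMC CLI. The names are placeholders and the time_ref attribute name is an assumption to verify against your HMC level; the command string is only echoed.

```shell
SRC_SYS="9117-MMA-SN101F170-L10"   # managed system (placeholder)
VIOS="VIOS1_L9"                    # Virtual I/O Server partition (placeholder)

# Mark the Virtual I/O Server as a time reference partition (time_ref=1)
set_timeref="chsyscfg -r lpar -m $SRC_SYS -i name=$VIOS,time_ref=1"
echo "$set_timeref"
```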
Note: After the Virtual I/O Server infrastructure is configured, a backup of the
Virtual I/O Servers is recommended; this approach produces an established
checkpoint prior to migration.
To establish an RMC connection for the mobile partition, you must be a super
administrator (a user with the HMC hmcsuperadmin role, such as hscroot) on the
HMC and complete the following steps:
1. Sign on to the operating system of the mobile partition with root authority.
2. From the command line, enter the following command to check if the RMC
connection is established:
lsrsrc IBM.ManagementServer
This command is shown in Example 3-2.
Redundant error path reporting allows a logical partition to report common
server hardware errors and partition hardware errors to the HMC. Redundant
error path reporting must be disabled before you can migrate a logical partition.
To disable redundant error path reporting for the mobile partition, you must be a
super administrator and complete the following steps:
1. In the navigation area, open Systems Management.
2. Select Servers and select the source system.
3. In the contents area, select the logical partition that you want to migrate and
select Configuration → Manage Profiles.
4. Select the active logical partition profile and select Edit from the Actions
menu.
5. Click the Settings tab.
6. Deselect Enable redundant error path reporting, and click OK.
7. Because disabling redundant error path reporting cannot be done
dynamically, you have to shut down the mobile partition, then power it on
using the profile with the modifications.
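The equivalent profile change can be sketched from the HMC CLI. The profile and partition names are placeholders, and the redundant_err_path_reporting attribute name is an assumption to verify on your HMC; the command is only echoed.

```shell
SRC_SYS="9117-MMA-SN101F170-L10"   # managed system (placeholder)

# Disable redundant error path reporting in the partition profile
no_redundant="chsyscfg -r prof -m $SRC_SYS -i name=default,lpar_name=mobile,redundant_err_path_reporting=0"
echo "$no_redundant"
```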
Virtual serial adapters are often used for virtual terminal connections to the
operating system. The first two virtual serial adapters (slots 0 and 1) are reserved
for the HMC. For a logical partition to participate in a partition migration, it cannot
have any required virtual serial adapters, except for the two reserved for the
HMC.
To dynamically disable unreserved virtual serial adapters using the HMC, you
must be a super administrator and complete the following steps:
1. In the navigation area, expand Systems Management.
2. Select Servers and select the source system.
3. In the contents area, select the logical partition to migrate and select
Configuration → Manage Profiles.
Figure 3-15 Verifying the number of serial adapters on the mobile partition
A partition workload group identifies a set of partitions that reside on the same
system. The partition profile specifies the name of the partition workload group to
which it belongs, if applicable. For a logical partition to participate in a partition
migration, it cannot be assigned to a partition workload group.
To dynamically remove the mobile partition from a partition workload group, you
must be a super administrator on the HMC and complete the following steps:
1. In the navigation area, expand Systems Management → Servers.
2. In the contents area, open the source system.
Figure 3-16 and Figure 3-17 on page 72 show the tabs for the disablement of the
partition workload group (both in the partition and in the partition profiles).
To disable BSR for the mobile partition using the HMC, you must be a super
administrator and complete the following steps:
1. In the navigation area, expand Systems Management.
2. Select Servers.
3. In the contents area, open the source system.
4. Select the mobile partition and select Properties.
Figure 3-18 Checking the number of BSR arrays on the mobile partition
– If the number of BSR arrays is not equal to zero, take one of the following
actions:
• Perform an inactive migration instead of an active migration. Skip the
remaining steps and see 2.4, “Inactive partition migration” on page 27.
• Click OK and continue to the next step to prepare the mobile partition
for an active migration.
7. In the contents area, open the mobile partition and select
Configuration → Manage Profiles.
8. Select the active logical partition profile and select Edit from the Actions
menu.
9. Click the Memory tab.
10.Enter 0 in the BSR arrays for this profile field and click OK. This is shown in
Figure 3-19 on page 74.
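The same profile change — setting the BSR arrays to zero — can be sketched from the HMC CLI. The profile and partition names are placeholders, and the bsr_arrays attribute name is an assumption to verify on your HMC; the command is only echoed.

```shell
SRC_SYS="9117-MMA-SN101F170-L10"   # managed system (placeholder)

# Set the number of BSR arrays in the partition profile to zero
no_bsr="chsyscfg -r prof -m $SRC_SYS -i name=default,lpar_name=mobile,bsr_arrays=0"
echo "$no_bsr"
```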
– If the current huge page memory is not equal to 0, take one of the
following actions:
• Perform an inactive migration instead of an active migration. Skip the
remaining steps and see 2.4, “Inactive partition migration” on page 27.
• Click OK and continue with the next step to prepare the mobile partition
for an active migration.
3. In the contents area, open the mobile partition and select
Configuration → Manage Profiles.
4. Select the active logical partition profile and select Edit from the Actions
menu.
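Setting the huge page values in the profile to zero can also be sketched from the HMC CLI. The names and the huge-page attribute names are placeholder assumptions to verify on your HMC level; the command is only echoed.

```shell
SRC_SYS="9117-MMA-SN101F170-L10"   # managed system (placeholder)

# Set the minimum, desired, and maximum huge page values in the profile to zero
no_hp="chsyscfg -r prof -m $SRC_SYS -i name=default,lpar_name=mobile,min_num_huge_pages=0,desired_num_huge_pages=0,max_num_huge_pages=0"
echo "$no_hp"
```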
To remove required I/O from the mobile partition using the HMC, you must be a
super administrator and complete the following steps:
1. In the navigation area, expand Systems Management.
2. Select Server and select the source system.
3. In the contents area, open the mobile partition and select
Configuration → Manage Profiles.
4. Select the active logical partition profile and select Edit from the Actions
menu.
5. Click the I/O tab. See Figure 3-22. Note the following information.
– If Required is not selected for any resource, skip the remainder of this
procedure and continue with additional preparatory tasks for the mobile
partition, in 3.8.9, “Name of logical partition profile” on page 78.
Figure 3-22 Checking if there are required resources in the mobile partition
Note: You must also verify that no Logical Host Ethernet Adapter (LHEA)
devices are configured, because these are also considered physical I/O.
Inactive migration is still possible if LHEAs are configured.
Figure 3-23 shows how to verify whether an LHEA is configured for the mobile
partition. First, select an IVE physical port that is used to define an LHEA, and
then check the logical port ID column. If no logical port ID is in this column, no
Logical Host Ethernet Adapter is configured for this partition. More information
about Integrated Virtual Ethernet adapters can be found in the Integrated Virtual
Ethernet Adapter Technical Overview and Introduction, REDP-4340 publication.
CuAt:
name = "hdisk7"
attribute = "unique_id"
value = "3E213600A0B8000114632000073244919ADCA0F1815 FAStT03IBMfcp"
type = "R"
generic = "D"
rep = "nl"
nls_index = 79
ii. Use the chdev command to put a PVID on the physical volume, in the
following format:
chdev -dev physicalvolumename -attr pv=yes -perm
– To display a disk's IEEE volume attribute identifier, issue the following
command (from the oem_setup_env shell):
lsattr -El hdiskX
4. Verify that the mobile partition has access to a source Virtual I/O Server
virtual SCSI adapter. You have to verify the configuration of the virtual SCSI
adapters on the mobile partition and the source Virtual I/O Server logical
partition to ensure that the mobile partition has access to storage. You must
be a super administrator (such as hscroot) to complete the following steps:
a. Verify the virtual SCSI adapter configuration of the mobile partition:
i. In the navigation area, open Systems Management.
ii. Click Servers.
iii. In the contents area, open the source system.
iv. Select the mobile partition and click Properties.
v. Click the Virtual Adapters tab.
vi. Record the Slot ID and Remote Slot ID for each virtual SCSI adapter.
vii. Click OK.
b. Verify the virtual SCSI adapter configuration of the source Virtual I/O
Server virtual SCSI adapter:
i. In the navigation area, open Systems Management.
ii. Click Servers.
iii. In the contents area, open the source system.
iv. Select the Virtual I/O Server logical partition and click Properties.
v. Click the Virtual Adapters tab.
vi. Verify that the Slot ID corresponds to the Remote Slot ID that you
recorded (in step vi on page 81) for the virtual SCSI adapter on the
mobile partition.
vii. Verify that the Remote Slot ID is either blank or that it corresponds to
the Slot ID that you recorded (in step vi on page 81) for the virtual SCSI
adapter on the mobile partition.
viii.Click OK.
If the virtual SCSI server adapters on the source Virtual I/O Server logical
partition allow access from the virtual SCSI client adapters of every logical
partition (not only the mobile partition), you have two solutions:
• You may create a new virtual SCSI server adapter on the source Virtual
I/O Server and allow only the virtual SCSI client adapter on the mobile
partition to access it.
• You may change the connection specifications of a virtual SCSI server
adapter on the source Virtual I/O Server so that it allows access to the
virtual SCSI adapter on the mobile partition. This means that the virtual
SCSI adapter of the client logical partition that currently has access to
the virtual SCSI adapter on the source Virtual I/O Server will no longer
have access to the adapter.
5. Verify that the destination Virtual I/O Server has sufficient free virtual slots to
create the virtual SCSI adapters that are required to host the mobile partition
after it moves to the destination system. To verify the virtual SCSI
configuration using the HMC, you must be a super administrator (such as
hscroot). Complete the following steps:
a. In the navigation area, open Systems Management.
b. Select Servers.
c. In the contents area, open the destination system.
d. Select the destination Virtual I/O Server logical partition and click
Properties.
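A quick way to compare the maximum and in-use virtual slots on the destination Virtual I/O Server is the lshwres command. A sketch with placeholder names; the command strings are only echoed here, and the filter syntax should be verified against your HMC level.

```shell
DST_SYS="9117-MMA-SN10F6A0-L9"     # destination managed system (placeholder)
VIOS="VIOS1_L10"                   # destination Virtual I/O Server (placeholder)

# Maximum virtual slots configured for the Virtual I/O Server partition
max_slots="lshwres -r virtualio --rsubtype slot --level lpar -m $DST_SYS --filter lpar_names=$VIOS"

# Virtual slots currently in use on that partition
used_slots="lshwres -r virtualio --rsubtype slot --level slot -m $DST_SYS --filter lpar_names=$VIOS"

printf '%s\n' "$max_slots" "$used_slots"
```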
You first have to create a shared Ethernet adapter on the Virtual I/O Server using
the HMC so that the client logical partitions can access the external network
without requiring a physical Ethernet adapter. Shared Ethernet adapters are
required on both source and destination Virtual I/O Servers for all the external
networks used by mobile partitions. If you plan to use a shared Ethernet adapter
(SEA) with an Integrated Virtual Ethernet (IVE) adapter, ensure that the physical
port of this IVE adapter is set to promiscuous mode for the Virtual I/O Server. If
the IVE is put in promiscuous mode, it can be used only by a single LPAR. For
more information about IVE, see Integrated Virtual Ethernet Adapter Technical
Overview and Introduction, REDP-4340.
If you plan to use the Integrated Virtual Ethernet adapter with the shared
Ethernet adapter, ensure that you use the logical host Ethernet adapter to
create the shared Ethernet adapter.
Live Partition Mobility using N_Port ID Virtualization and virtual Fibre Channel
features are covered in 5.11, “Virtual Fibre Channel” on page 187. Live Partition
Mobility using Integrated Virtualization Manager is covered in Chapter 7,
“Integrated Virtualization Manager for Live Partition Mobility” on page 221.
If a mobile partition has dedicated I/O adapters, it can participate only in
inactive partition migration. Even in that case, the dedicated adapters are
automatically removed from the partition profile so that the partition boots with
only virtual I/O resources after migration. If you have to use dedicated I/O
adapters on the mobile partition after the migration, update the mobile partition’s
profile before booting, add adapters to the mobile partition by using dynamic
LPAR operations, or make the desired resources available by other means.
The following requirements must be met for active partition migration, in
addition to the requirements listed in 4.1.1, “Minimum requirements” on page 91:
On both the source and the destination Virtual I/O Server partitions, a mover
service partition is enabled and an automatically configured Virtual
Asynchronous Services Interface (VASI) device is available.
No physical or dedicated I/O adapters are assigned to the mobile partition.
An active Resource Monitoring and Control (RMC) connection must exist
between the mobile partition and the HMC.
Note: Any virtual TTY sessions will be disconnected during the migration, but
can be reestablished on the destination system by the user after migration.
Synchronizing the time-of-day clocks for the source and destination Virtual I/O
Server partitions is optional for both active and inactive partition migration.
However, it is a recommended step for active partition migration. If you choose
not to complete this step, the source and destination systems will synchronize
the clocks while the mobile partition is moving from the source system to the
destination system.
To set the mover service partition attribute during the creation of a Virtual I/O
Server partition:
1. In the navigation pane, expand Systems Management → Servers, and
select the system on which you want to create a new Virtual I/O Server
partition.
4. The mover service partition will be activated with the partition. Proceed with
the remaining steps of the Virtual I/O Server partition creation.
You can also set the mover service partition attribute dynamically for an existing
Virtual I/O Server partition while the partition is in the Running state.
1. In the navigation pane, expand Systems Management → Servers, and
select the desired system.
2. In the Contents pane (the top right of the Hardware Management Console
Workplace), select the Virtual I/O Server for which you want to enable the
mover service partition attribute.
4. Check the Mover service partition box on the General tab in the Partition
Properties window, and click OK. See Figure 4-5.
If you are proceeding with this step when the mobile partition is in the Not
Activated state, the destination and source mover service partition and wait
time entries do not appear, because these are not required for the inactive
partition migration.
Note: Figure 4-8 on page 101 shows the option of entering a remote
HMC’s information. This step applies only to a remote migration between
systems managed by different HMCs. Our example shows migration of a
partition between systems managed by a single HMC. See 5.4, “Remote
Live Partition Mobility” on page 130 for more details on remote migration.
If you want to perform an active migration, the mobile partition must be in the
Running state, and no physical or dedicated I/O adapters must be assigned to it.
For details about the active partition migration requirements, see 4.1.3, “Active
partition migration” on page 93.
In this scenario, we are going to migrate a partition named mobile from the
source system (9117-MMA-SN101F170-L10) to the destination system
(9117-MMA-SN10F6A0-L9). To migrate a mobile partition:
1. In the navigation pane, expand Systems Management → Servers, and
select the source system.
At this point, you can see that the mobile partition is on the source system, as
shown in Figure 4-12.
2. In the contents pane, select the partition to migrate to the destination system,
that is, the mobile partition.
5. You can specify the New destination profile name in the Profile Name panel,
as shown in Figure 4-15 on page 107.
Figure 4-16 Optionally specifying the Remote HMC of the destination system
In this basic scenario, one Virtual I/O Server partition is configured on the
destination system, so the wizard window shows only one mover service
partition candidate. If you have more than one Virtual I/O Server partition on
the source or on the destination system, you can select which mover server
partitions to use.
Note: If there is only one shared processor pool, this option might not
appear. See 5.5, “Multiple shared processor pools” on page 147 for more
information about shared processor pools and Live Partition Mobility.
15.The migration status and progress are shown in the Partition Migration
Status panel, as shown in Figure 4-25.
17.If you keep a record of the virtual I/O configuration of the partitions, check and
record the migrating partition’s configuration on the destination system.
Although the migrating partition retains the same slot numbers as on the
source system, the server virtual adapter slot numbers can be different
between the source and destination Virtual I/O Servers. Also, the virtual
target device name might change during migration.
This discussion relates to the common practice of using more than one Virtual
I/O Server to allow for concurrent maintenance, and is not limited to only two
servers. Also, Virtual I/O Servers may be created to offload the mover services to
a dedicated partition.
Live Partition Mobility does not make any changes to the network setup on the
source and destination systems. It only checks that all virtual networks used by
the mobile partition have a corresponding shared Ethernet adapter on the
destination system. Shared Ethernet failover might or might not be configured on
either the source or the destination systems.
When multiple Virtual I/O Servers are involved, multiple virtual SCSI, and virtual
Fibre Channel combinations are possible. Access to the same storage area
network (SAN) disk may be provided on the destination system by multiple
Virtual I/O Servers for use with virtual SCSI mapping. Similarly, multiple Virtual
I/O Servers can provide access with multiple paths to a specific set of assigned
LUNs for virtual Fibre Channel usage. Live Partition Mobility automatically
manages the virtual SCSI and virtual Fibre Channel configuration if an
administrator does not provide specific mappings.
The partition that is moving must keep the same number of virtual SCSI and
virtual Fibre Channel adapters after migration and each virtual disk must remain
connected to the same adapter or adapter set. An adapter’s slot number can
change after migration, but the same device name is kept by the operating
system for both adapters and disks.
A migration fails the validation checks and is not started if the moving partition’s
adapter and disk configuration cannot be preserved on the destination system.
In this case, you are required to modify the partition configuration before starting
the migration.
In this section, we describe three different migration scenarios where the source
and destination systems provide disk access either with one or two Virtual I/O
Servers using virtual SCSI adapters. More information about virtual Fibre
Channel adapters can be found in 5.11, “Virtual Fibre Channel” on page 187.
Figure 5-1 Dual VIOS and client mirroring to dual VIOS before migration
The migration process automatically detects which Virtual I/O Server has access
to which storage and configures the virtual devices to keep the same disk access
topology.
Figure 5-2 Dual VIOS and client mirroring to dual VIOS after migration
Figure 5-3 Dual VIOS and client mirroring to single VIOS after migration
Figure 5-4 Dual VIOS and client multipath I/O to dual VIOS before migration
Figure 5-5 Dual VIOS and client multipath I/O to dual VIOS after migration
If the destination system is configured with only one Virtual I/O Server, the
migration cannot be performed. The migration process would create two paths
using the same Virtual I/O Server, but this setup is not allowed, because having
two virtual target devices that map the same backing device on different virtual
SCSI server devices is not possible.
To migrate the partition, you must first remove one path from the source
configuration before starting the migration. The removal can be performed
without interfering with the running applications. The configuration becomes a
simple single Virtual I/O Server migration.
Because the migration never changes a partition’s configuration, only one Virtual
I/O Server is used on the destination system.
When both destination Virtual I/O Servers have access to all the disk data, the
migration can select either one or the other. When you start the migration, you
have the option of choosing a specific Virtual I/O Server. The HMC automatically
makes a selection if you do not specify the server. The situation is shown in
Figure 5-6.
When the migration is performed using the GUI on the HMC, a list of possible
Virtual I/O Servers to pick from is provided. By default, the command-line
interface makes the automatic selection if no specific option is provided.
In many scenarios, more than one migration may be started on the same system.
For example:
A review of the entire infrastructure reveals that moving some logical
partitions to a different system may improve global system usage and service
quality.
A system is planned to enter maintenance and must be shut down. Some of
its partitions cannot be stopped or the planned maintenance time is too long
to satisfy service level agreements.
For each mobile partition, you must use an HMC GUI wizard or an HMC
command. While a migration is in progress, you can start another one. When the
number of migrations to be executed grows, the setup time using the GUI can
become long and you should consider using the CLI instead. The migrlpar
command may be used in scripts to start multiple migrations in parallel.
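The pattern can be sketched as follows. The system and partition names are assumptions, and migrlpar exists only on the HMC, so the invocation loop is shown commented out rather than executed:

```shell
# Sketch of starting several migrations in parallel (names are assumptions).
SRC=srcSystem
DEST=destSystem
start_migration() {
    # Launch one active migration in the background; log output per partition.
    migrlpar -o m -m "$SRC" -t "$DEST" -p "$1" >"/tmp/migr.$1.log" 2>&1 &
}
# On the HMC:
#   for LPAR in lpar1 lpar2 lpar3; do start_migration "$LPAR"; done
#   wait    # returns when every background migrlpar has finished
```

Running the migrations in the background and collecting them with wait keeps the script simple while the HMC serializes or parallelizes the work as it sees fit.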
The active migration process has been designed to handle any partition memory
size and it is capable of managing any memory workload. Applications can
update memory with no restriction during migration and all memory changes are
taken into account, so elapsed migration time can change with workload.
Although the algorithm is efficient, planning the migration during low activity
periods can help to reduce migration time.
Virtual I/O Servers selected as mover service partitions are involved in the
partition's memory migration and must handle heavy network traffic. Network management
can cause high CPU usage and usual performance considerations apply; use
uncapped Virtual I/O Servers and add virtual processors if the load increases.
Alternatively, create dedicated Virtual I/O Servers on the source and destination
systems that provide the mover service function separating the service network
traffic from the migration network traffic. You can combine or separate
virtualization functions and mover service functions to suit your requirements.
In a dual HMC configuration, both HMCs see the same system’s status, have the
same configuration rights, and can perform the same actions. To avoid
concurrent operations on the same system, a locking mechanism is in place that
allows the first configuration change to occur and the second one to fail with a
message showing the identifier of the locking HMC.
The HMC that initiates a migration takes a lock on both managed systems and
the lock is released when migration is completed. The other HMC can show the
status of migration but cannot issue any additional configuration changes on the
two systems. Although the lock can be manually broken, carefully consider this
option.
When multiple migrations are planned between two systems, multiple HMC
commands are issued. The first migration task takes an HMC lock on both
systems, so subsequent migrations must be issued from the same HMC. In
practice, only one HMC can be used when multiple concurrent migrations are
executed.
Remote migration operations require that each HMC has RMC connections to its
own system's Virtual I/O Servers and a connection to its own system's service
processor. An HMC does not need RMC connections to the remote system's
Virtual I/O Servers, nor a connection to the remote system's service
processor.
The remote active and inactive migrations follow the same workflow as described
in Chapter 2, “Live Partition Mobility mechanisms” on page 19. The local HMC,
which manages the source server in a remote migration, serves as the
controlling HMC. The remote HMC, which manages the destination server,
receives requests from the local HMC and sends responses over a secure
network channel.
The following list indicates the requirements for remote HMC migrations:
A local HMC managing the source server
A remote HMC managing the destination server
HMC Version 7 Release 3.4, or later, on both HMCs
Network access to a remote HMC
SSH key authentication to the remote HMC
The source and destination servers, mover service partitions, and Virtual I/O
Servers are required to be configured exactly as though they were going to be
performing migrations managed by a single HMC in the basic scenario as
described in Chapter 3, “Requirements and preparation” on page 45.
To initiate the remote migration operation, you may use only the HMC that
contains the mobile partition.
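A remote migration started from the local HMC CLI can be sketched as follows. The system names, remote HMC address, and remote user are assumptions; the command is composed and printed rather than executed, because migrlpar exists only on the HMC:

```shell
# Compose (not execute) a remote-migration command; --ip names the remote
# HMC that manages the destination system, -u the user on that HMC.
RMT_HMC=9.3.5.180
RMT_USER=hscroot
cmd="migrlpar -o m -m srcSystem -t destSystem -p mobile --ip $RMT_HMC -u $RMT_USER"
echo "$cmd"
```

SSH key authentication to the remote HMC (set up with mkauthkeys) must already be in place before this command succeeds.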
Figure 5-10 One public and one private network migration infrastructure
The steps to configure Virtual I/O Servers, client partition, mover service
partitions and partition profiles do not change.
Use dedicated networks with 1 Gbps bandwidth or more. This applies to each
involved HMC, Virtual I/O Server, and mover service partition.
3. In the Remote Command Execution window, enable the check box to Enable
remote command execution using the ssh facility, as shown in
Figure 5-13. Click OK.
Note: If the window does not appear, you have no errors or warnings.
a. Check the messages in the window and the prerequisites for the migration:
• For error messages: You cannot perform the migration steps if errors
exist. Eliminate any errors.
• For warning messages: If only warnings occur (no errors), you may
migrate the partition after the validation steps.
In this scenario, we migrate a partition named mobile from the source system
(9117-MMA-SN100F6A0-L9) managed by the local HMC (9.3.5.128) to the
destination system (9117-MMA-SN101F170-L10) on the remote HMC
(9.3.5.180), as follows:
1. In the navigation pane on the local HMC, expand Systems Management →
Servers, and select the source system.
2. In the contents pane, select the partition that you will migrate to the
destination system, that is, the mobile partition.
3. Open the partition's context menu and select Operations → Mobility →
Migrate to start the Partition Migration wizard.
4. Check the Migration Information of the mobile partition in the Partition
Migration wizard.
If the mobile partition is powered off, the Migration Type is inactive. If the
partition is in the Running state, the Migration Type is active.
You can specify the New destination profile name in the Profile Name
window.
If you leave the name blank or do not specify a unique profile name, the
profile on the destination system will be overwritten.
6. Select the destination system and click Next. The HMC validates the partition
migration environment.
7. Check for errors or warnings in the Partition Validation Errors/Warnings
window, and eliminate any errors. If there are any errors, you cannot proceed
to the next step. You may proceed if there are warnings only.
8. If you are performing an inactive migration, skip this step and go to step 9.
If you are performing active migration, select the source and the destination
mover service partitions to be used for the migration.
9. Select the VLAN configuration.
10.Select the virtual storage adapter assignment.
11.Specify the wait time in minutes.
14.If you keep a record of the virtual I/O configuration of the partitions, check the
migrating partition’s configuration in the destination system. Although the
migrating partitions retain the same slot numbers as on the source systems,
the server virtual adapter slot numbers can be different between the source
and destination Virtual I/O Servers. Also, the virtual target device name can
change during migration.
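The same validation that the wizard performs can be run from the HMC CLI before committing to the migration. The system and partition names below match this scenario, but the call is wrapped in a function and not executed, because migrlpar is available only on the HMC:

```shell
# Sketch of a CLI validation for the scenario's systems and partition.
validate_migration() {
    migrlpar -o v -m 9117-MMA-SN100F6A0-L9 -t 9117-MMA-SN101F170-L10 \
        -p mobile --ip 9.3.5.180 -u hscroot
}
# On the local HMC:
#   validate_migration && echo "validation passed"
```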
The next two examples show how the --ip and -u flags are used with the
lslparmigr (in Example 5-2 on page 146) and migrlpar (in Example 5-3 on
page 146) commands.
source_msp_name=VIOS1_L9,source_msp_id=1,dest_msp_names=VIOS1_L10,
dest_msp_ids=1,ipaddr_mappings=9.3.5.3//1/VIOS1_L10/9.3.5.111/
Warnings:
HSCLA295 As part of the migration process, the HMC will create a new
migration profile containing the partition's current state. The
default is to use the current profile, which will replace the existing
definition of this profile. While this works for most scenarios, other
options are possible. You may specify a different existing profile,
which would be replaced with the current partition definition, or you
may specify a new profile to save the current partition state.
If you use the CLI, the migration operation will fail if the arrival of the migrating
partition would cause the maximum processors in the chosen shared pool on the
destination to be exceeded.
If the migration is initiated after a change has occurred on the destination system
where the selected processor pool can no longer accommodate the client
partition, the migration will fail.
5.6.1 Overview
Three types of adapters cannot be present in a partition that is participating in
an active migration: physical adapters, Integrated Virtual Ethernet adapters,
and non-default virtual serial adapters. A non-default virtual serial adapter is a
virtual serial adapter other than the two automatically created virtual serial
adapters in slots 0 and 1. If a partition has non-default virtual serial adapters, you
must deconfigure them; you might have to switch from physical to virtual
resources. For this scenario, we assume you are beginning with a mobile
partition that uses a single physical Ethernet adapter and a single physical SCSI
adapter. See Figure 5-23.
The process described in this section covers both the case where the mobile
partition has such adapters and the case where it does not.
Before proceeding, verify that the requirements for Live Partition Mobility are met,
as outlined in Chapter 3, “Requirements and preparation” on page 45. However,
ignore that chapter's check for adapters that cannot be migrated, because this
exception is discussed throughout 5.6, “Migrating a partition with physical
resources” on page 149.
Important: Mark the virtual SCSI server adapter as desired (not required)
in your Virtual I/O Server partition profile. This setting is necessary to allow
the migration process to dynamically remove this adapter during a
migration.
When creating the virtual SCSI server adapter, use the “Only selected
client partition can connect” option. For the Client partition field, specify the
mobile partition. For the Client adapter field, specify an unused virtual slot
on the mobile partition. Do not set the server adapter to accept
connections from any partition. This method allows the migration process
to identify which server adapter is paired with which client partition.
2. Attach and configure the remote storage using a storage area network:
– Create one LUN on your storage subsystem for each disk in use on your
mobile partition. Ensure that these LUNs are at least as large as the disks
on your mobile partition. Make these LUNs available as hdisks on the
source Virtual I/O Server.
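Once the virtual SCSI server adapter exists, each LUN-backed hdisk is exported to the mobile partition with the mkvdev command on the source Virtual I/O Server. A sketch follows; the device names are assumptions consistent with the figures in this section, and the commands are composed and printed rather than executed:

```shell
# Compose (not execute) the mapping of each LUN-backed hdisk to the
# mobile partition's vhost adapter on the source Virtual I/O Server.
for DISK in hdisk5 hdisk6; do
    echo "mkvdev -vdev $DISK -vadapter vhost0"
done
```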
Figure 5-24 shows the created and configured source Virtual I/O Server.
Figure 5-24 The source Virtual I/O Server is created and configured
Important: Do not create any virtual SCSI server adapters for your mobile
partition on the destination Virtual I/O Server. Do not map any shared hdisks
on the destination Virtual I/O Server. All of this is done automatically for you
during the migration.
Figure 5-25 The destination Virtual I/O Server is created and configured
Figure 5-26 shows the configured storage devices on the mobile partition.
Figure 5-26 The storage devices are configured on the mobile partition
Figure 5-28 shows rootvg on the mobile partition now wholly on the virtual disks.
Figure 5-28 The root volume group of the mobile partition is on virtual disks only
Figure 5-29 The mobile partition has a virtual network device created
Now that the virtual network adapters are configured, stop using the physical
network adapters and begin using the virtual network adapters. To move to
virtual networks on the mobile partition, use new or existing IP addresses. Both
procedures, discussed in this section, affect network connectivity differently.
Understand how all running applications use the networks; take appropriate
actions before proceeding.
Figure 5-30 shows the mobile partition using a virtual network, with its physical
network interface unconfigured.
Figure 5-30 The mobile partition has unconfigured its physical network interface
Note: A reboot is not sufficient. The mobile partition must be shut down
and activated with the modified profile.
After the migration is complete, consider adding physical resources back to the
mobile partition, if they are available on the destination system.
Note: The active mobile partition profile is created on the destination system
without any references to any physical I/O slots that were present in your
profile on the source system. Any other mobile partition profiles are copied
unchanged.
Several existing HMC commands for Live Partition Mobility have been updated to
support the latest mobility features. The commands are migrlpar, lslparmigr,
and lssyscfg.
Tip: Use the ssh-keygen command to create the public and private key-pair on
your client. Then add these keys to the HMC user’s key-chain by using the
mkauthkeys --add command on the HMC.
Command conventions
The commands follow the HMC command conventions, which are:
Single character parameters are preceded by a single dash (-).
Multiple character parameters are preceded by a double dash (--).
All filter and attribute names are lower case, with underscores joining words
together, for example vios_lpar_id.
The data format specified with the virtual_fc_mappings attribute mirrors the
format of the virtual_scsi_mappings attribute as it relates to virtual Fibre Channel
adapter mappings for N_Port ID Virtualization (NPIV).
Examples
To migrate the partition myLPAR from the system srcSystem to the destSystem
using the default MSPs and adapter maps, use the following command:
$ migrlpar -o m -m srcSystem -t destSystem -p myLPAR
When the destination system has multiple shared-processor pools, you can
stipulate to which shared-processor pool the moving partition will be assigned at
the destination with either of the following commands:
$ migrlpar -o m -m srcSystem -t destSystem -p myLPAR -i
"shared_proc_pool_id=1"
$ migrlpar -o m -m srcSystem -t destSystem -p myLPAR -i
"shared_proc_pool_name=DefaultPool"
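For a partition using virtual Fibre Channel, an explicit adapter mapping can be passed the same way through the virtual_fc_mappings attribute. The slot number and Virtual I/O Server name/ID below are assumptions illustrating the slot/VIOS-name/VIOS-ID format; the command is composed and printed rather than executed:

```shell
# Compose (not execute) a migration with an explicit virtual FC mapping.
attr="virtual_fc_mappings=4/VIOS1_L10/1"
echo "migrlpar -o m -m srcSystem -t destSystem -p myLPAR -i \"$attr\""
```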
You can use the --force flag on the recover command, but you should do so only
when the partition migration fails and leaves the partition definition on both the
source and destination systems.
Examples
The following examples illustrate how this command is used.
In this example, we can see that the system is capable of both active and inactive
migration and that there is one inactive partition migration in progress. By using
the -F flag, the same information is produced in a CSV format:
$ lslparmigr -r sys -m mySystem -F
These attribute values are the same as in the preceding example, without the
attribute identifiers. This format is appropriate for parsing or for importing into a
spreadsheet. Adding the --header flag prints column headers on the first line:
$ lslparmigr -r sys -m mySystem -F --header
If you are only interested in specific attributes, then you can specify these as
options to the -F flag. For example, if you want to know just the number of active
and inactive migrations in progress, use the following command:
$ lslparmigr -r sys -m mySystem -F \
num_active_migrations_in_progress,num_inactive_migrations_in_progress
This command produces the following results, which indicate that there are no
active migrations and one inactive migration in progress:
0,1
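A script consuming this output can split the pair directly; the order of the fields matches the attribute list given to the -F flag. A minimal sketch, with the captured output hardcoded so it runs without an HMC:

```shell
# Split the two counters returned by -F; field order matches the
# attribute list given on the command line.
out="0,1"
active=${out%,*}
inactive=${out#*,}
echo "active=$active inactive=$inactive"
```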
Here, we see that the command supplies only one attribute for the user on the
HMC from which it is executed. The attribute remote_lpar_mobility_capable
displays a value of 1 if the HMC has the ability to perform migrations to a remote
HMC. Conversely, a value of 0 indicates that the HMC is incapable of remote
migrations.
You may also use the -F flag followed by the attribute to limit the output of the
command to the value. For example, the command:
$ lslparmigr -r manager -F remote_lpar_mobility_capable
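A script can gate a remote migration on this attribute. In the sketch below, the value that would be captured from the command above is hardcoded (an assumption) so the check can be shown without an HMC:

```shell
# Gate on the remote-mobility capability flag; 1 means capable.
capable=1   # substitute: $(lslparmigr -r manager -F remote_lpar_mobility_capable)
if [ "$capable" = "1" ]; then
    echo "remote migration supported"
else
    echo "remote migration not supported"
fi
```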
Here, we see that the system mySystem is hosting three partitions, QA,
VIOS1_L10, and PROD. Of these, the PROD partition is in the Starting state of
an inactive migration as indicated by the migration_state and migration_type
attributes. When the command was run, the ID of the destination partition had
not yet been chosen, as shown by the 65535 value of the dest_lpar_id
attribute.
Use the --filter flag to limit the output to a given set of partitions with either the
lpar_names or the lpar_ids attributes:
$ lslparmigr -r lpar -m mySystem --filter lpar_ids=3
You can use the -F flag to generate the same information in CSV format or to limit
the output:
$ lslparmigr -r lpar -m mySystem --filter lpar_ids=3 -F
Here the -F flag, without additional parameters, has printed all the attributes. In
the example, the last four fields of output pertain to the MSPs; because the
partition in question is undergoing an inactive migration, no MSPs are involved
and these fields are empty. You can use the --header flag with the -F flag to print
a line of column headers at the start of the output.
Here, we see that if we move the partition TEST from srcSystem to destSystem,
then:
There is a mover service partition on the source (VIOS1_L9).
There is a mover service partition on the destination (VIOS1_L10).
If the migration uses VIOS1_L9 on the source, VIOS1_L10 can be used on
the destination.
This approach gives one possible mover service partition combination for the
migration.
This output indicates that processor pool IDs 1 and 0 are capable of hosting the
client partition called TEST. The command requires the -m, -t, and --filter flags.
The --filter flag requires that you either use the lpar_ids or lpar_names attributes
to identify the client partition. You may only specify one client partition at a time.
This command shows that we are communicating with an HMC with IP address
9.3.5.180 using the HMC’s User ID hscroot. The command then checks the
remote HMC for the destSystem-managed system for possible processor pools.
It produces the output:
"shared_proc_pool_ids=1,0","shared_proc_pool_names=SharedPool01,Default
Pool"
Here, the system is showing that two shared-processor pools are possible
destinations for the client partition.
40/VIOS1_L10/1
This output indicates that if you migrate the client partition called TEST from the
srcSystem to destSystem, then the suggested virtual SCSI adapter mapping
would be to map the client virtual adapter in slot 40 to the Virtual I/O Server
called VIOS1_L10, which has a partition ID of 1, on the destination system.
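A script can decompose this mapping triple before passing it back to migrlpar. The sketch below parses the sample value shown above:

```shell
# Parse the slot/VIOS-name/VIOS-ID triple reported by lslparmigr.
mapping="40/VIOS1_L10/1"
IFS=/ read -r slot vios vios_id <<EOF
$mapping
EOF
echo "slot=$slot vios=$vios id=$vios_id"
```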
The lssyscfg -r lpar command displays the msp and time_ref partition
attributes on Virtual I/O Server partitions that are capable of participating in
active partition migrations. The msp attribute has a value of 1 when enabled as
mover service partition and 0 when it is not enabled.
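Filtering the mover-service-capable Virtual I/O Servers out of the partition list is then a one-liner. The sample output below is a hardcoded assumption standing in for `lssyscfg -r lpar -m <system> -F name,msp`, so the filter can be shown without an HMC (non-VIOS partitions report no msp value):

```shell
# Print only partitions whose msp attribute is 1 (MSP-enabled).
sample="VIOS1_L9,1
VIOS2_L9,0
QA,"
printf '%s\n' "$sample" | awk -F, '$2 == 1 { print $1 }'
```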
Examples
To get the remote HMC user's SSH public key, you may simply use the -g flag:
$ mkauthkeys --ip rmtHostName -u hscroot -g
In some cases, you may choose to remove the authentication keys, which you
can do by using the mkauthkeys command with the -r flag:
$ mkauthkeys -r ccfw@rmtHostName
The HMC stores the key for a user called ccfw, not for the user ID that you
specified in the steps to retrieve the authentication keys. Also note that the
remote HMC's host name has to be specified in this command. If DNS cannot
resolve the host name and you used the IP address when you configured the
authentication, use the actual IP address in place of rmtHostName.
The --test flag allows you to check whether authentication is properly configured
to the remote HMC:
$ mkauthkeys --ip rmtHostName -u hscroot --test
The command returns the following error if keys were not configured properly:
HSCL3653 The Secure Shell (SSH) communication configuration between the
source and target Hardware Management Consoles has not been set up
properly for user hscroot. Please run the mkauthkeys command to set up
the SSH communication authentication keys.
How it works
The script starts by checking that both the source and destination systems are
mobility capable. For this, it uses the new attributes given in the lssyscfg
command. It then uses the lslparmigr command to list all the partitions on the
system. It uses this list as an outer loop for the rest of the script. The program
then performs a number of elementary checks:
The source and destination must be capable of mobility.
The lssyscfg command shows the mobility capability attribute.
Only partitions of type aixlinux can be migrated.
The script uses the lssyscfg command to ascertain the partition type.
A partition that is already migrating must not be migrated again.
The script reuses the lslparmigr command for this check.
Validate the partition migration.
The script uses migrlpar -o v and checks the return code.
If all the checks pass, the migration is launched with the migrlpar command. The
code snippet does some elementary error checking. If migrlpar returns a
non-zero value, a recovery is attempted using the migrlpar -o r command.
#
#
# Make sure that they are both capable of active and inactive migration
#
if [ $SRC_CAP = $DEST_CAP ] && [ $SRC_CAP = "1,1" ]
then
#
# List all the partitions on the source system
#
for LPAR in $(lslparmigr -r lpar -m $SRC_SERVER -F name)
do
#
# Only migrate “aixlinux” partitions. VIO servers cannot be migrated
#
LPAR_ENV=$(lssyscfg -r lpar -m $SRC_SERVER \
--filter lpar_names=$LPAR -F lpar_env)
if [ $LPAR_ENV = "aixlinux" ]
then
#
# Make sure that the partition is not already migrating
#
LPAR_STATE=$(lslparmigr -r lpar -m $SRC_SERVER --filter lpar_names=$LPAR -F
migration_state)
if [ "$LPAR_STATE" = "Not Migrating" ]
then
#
# Perform a validation to see if there’s a good chance of success
#
migrlpar -o v -m $SRC_SERVER -t $DEST_SERVER -p $LPAR
RC=$?
if [ $RC -ne 0 ]
then
echo "Validation failed. Cannot migrate partition $LPAR"
else
#
# Everything looks good, let’s do it...
#
echo "migrating $LPAR from $SRC_SERVER to $DEST_SERVER"
migrlpar -o m -m $SRC_SERVER -t $DEST_SERVER -p $LPAR
Most applications do not require any changes to work correctly and efficiently
with Live Partition Mobility. Certain applications can have dependencies on
characteristics that change between the source and destination servers and
other applications may adjust their behavior to facilitate the migration.
The check and prepare phases take place on the source system; the post phase
occurs on the destination after the device tree and ODM have been updated to
reflect the destination system configuration.
Note: An application must not block the SIGRECONFIG signal and the signal
must be handled in a timely manner. The dynamic LPAR and Live Partition
Mobility infrastructure wait a short period of time for a reply from applications.
If no response occurs after this amount of time, the system assumes all is well
and proceeds to the next phase. You can speed up a migration or dynamic
reconfiguration operation by acknowledging the SIGRECONFIG event even if
your application takes no action.
The dr_reconfig() system call has been modified to support partition migration.
The returned dr_info structure includes the following bit-fields:
migrate
partition
These fields are for the new migration action and the partition object that is the
object of the action.
The code snippet in Example 5-5 shows how dr_reconfig() might be used. This
code would run in a signal-handling thread.
// loop forever
while (1) {
    // Wait on signals in the signal set
    sigwait(&signalSet, &signalId);
    if (signalId == SIGRECONFIG) {
        if (rc = dr_reconfig(DR_QUERY, &drInfo)) {
            // handle the error
        } else {
            if (drInfo.migrate) {
                if (drInfo.check) {
                    /*
You can use the sysconf() system call to check the system configuration on the
destination system. The _system_configuration structure has been modified to
include the following fields:
icache_size Size of the L1 instruction cache
icache_asc Associativity of the L1 instruction cache
dcache_size Size of the L1 data cache
dcache_asc Associativity of the L1 data cache
L2_cache_size Size of the L2 cache
L2_cache_asc Associativity of the L2 cache
itlb_size Instruction translation look-aside buffer size
itlb_asc Instruction translation look-aside buffer associativity
dtlb_size Data translation look-aside buffer size
dtlb_asc Data translation look-aside buffer associativity
tlb_attrib Translation look-aside buffer attributes
slb_size Segment look-aside buffer size
The input variables are set as environment variables on the command line,
followed by the name of the script to be invoked and any additional parameters.
premigrate <resource> The script is called with this command at the
prepare-migration phase, just before the migration is
initiated. The script can reconfigure or suspend an
application to facilitate the migration process.
The code in Example 5-6 on page 184 shows a Korn shell script that detects the
partition migration reconfiguration events. For this example, the script simply logs
the called command to a file.
if [[ $# -eq 0 ]]
then
echo "DR_ERROR=Script usage error"
exit 1
fi
ret_code=0
command=$1
case $command in
scriptinfo )
echo "DR_VERSION=1.0"
echo "DR_DATE=27032007"
echo "DR_SCRIPTINFO=partition migration test script"
echo "DR_VENDOR=IBM"
echo "SCRIPTINFO" >> /tmp/migration.log;;
usage )
echo "DR_USAGE=$0 command [parameter]"
echo "USAGE" >> /tmp/migration.log;;
register )
echo "DR_RESOURCE=pmig"
echo "REGISTER" >> /tmp/migration.log;;
checkmigrate )
echo "CHECK_MIGRATE" >> /tmp/migration.log;;
premigrate )
echo "PRE_MIGRATE" >> /tmp/migration.log;;
postmigrate )
echo "POST_MIGRATE" >> /tmp/migration.log;;
undopremigrate )
echo "UNDO_PRE_MIGRATE" >> /tmp/migration.log;;
* )
echo "*** UNSUPPORTED *** : $command" >> /tmp/migration.log
ret_code=10;;
esac
exit $ret_code
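On AIX, a DLPAR/mobility script such as this one must be registered with the drmgr command before the reconfiguration framework will call it. The installation command is composed and printed rather than executed here, because drmgr exists only on AIX, and the script path is an assumption:

```shell
# Compose (not execute) the AIX registration of a DLPAR/mobility script.
echo "drmgr -i /tmp/migration_script.sh"
```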
The actions parameter supports the following values for mobility awareness:
DR_MIGRATE_CHECK
DR_MIGRATE_PRE
DR_MIGRATE_POST
DR_MIGRATE_POST_ERROR
Figure 5-33 shows a basic configuration using virtual Fibre Channel and a single
Virtual I/O Server in the source and destination systems before migration occurs.
Figure 5-33 Basic NPIV virtual Fibre Channel infrastructure before migration
Figure 5-34 Basic NPIV virtual Fibre Channel infrastructure after migration
Required components
The mobile partition must meet the requirements described in Chapter 2, “Live
Partition Mobility mechanisms” on page 19. In addition, the following components
must be configured in the environment:
An NPIV-capable SAN switch
An NPIV-capable physical Fibre Channel adapter on the source and
destination Virtual I/O Servers
HMC Version 7 Release 3.4, or later
Virtual I/O Server Version 2.1 with Fix Pack 20.1, or later
AIX 5.3 TL9, or later
AIX 6.1 TL2 SP2, or later
Each virtual Fibre Channel adapter on the Virtual I/O Server mapped to an
NPIV-capable physical Fibre Channel adapter
Each virtual Fibre Channel adapter on the mobile partition mapped to a virtual
Fibre Channel adapter in the Virtual I/O Server
At least one LUN mapped to the mobile partition’s virtual Fibre Channel
adapter
Mobile partitions may have virtual SCSI and virtual Fibre Channel LUNs.
Migration of LUNs between virtual SCSI and virtual Fibre Channel is not
supported at the time of publication.
Figure 5-35 shows a mobile partition virtual Fibre Channel adapter example.
Figure 5-35 Client partition virtual Fibre Channel adapter WWPN properties
Figure 5-36 Virtual Fibre Channel adapters in the Virtual I/O Server
Figure 5-37 shows an example of the virtual Fibre Channel properties for the
Virtual I/O Server, called a Server Fibre Channel Adapter.
The Virtual I/O Server lsdev and lsmap commands can be used to query the
virtual Fibre Channel configuration and mapping to the mobile partition, as
shown in Example 5-8 on page 192.
Status:LOGGED_IN
FC name:fcs3 FC loc code:U789D.001.DQDYKYW-P1-C6-T2
Ports logged in:2
Flags:a<LOGGED_IN,STRIP_MERGE>
VFC client name:fcs0 VFC client DRC:U9117.MMA.101F170-V2-C60-T1
After validating that the mobile partition can be migrated, follow steps 1 on
page 104 through 10 on page 112 in 4.3, “Preparing for an active partition
migration” on page 94. Instead of selecting virtual SCSI adapters in step 11 on
page 113, select the virtual Fibre Channel adapter assignment as shown in
Figure 5-38.
5.11.3 Dual Virtual I/O Server and virtual Fibre Channel multipathing
With multipath I/O, the logical partition accesses the same storage data using
two different paths, each provided by a separate Virtual I/O Server.
Note: With NPIV-based disks, both paths can be active. For NPIV and virtual
Fibre Channel, the storage multipath code is loaded into the mobile
partition. The multipath capabilities depend on the storage subsystem type
and the multipath code deployed in the mobile partition.
Figure 5-41 Dual VIOS and client multipath I/O to dual NPIV before migration.
Figure 5-42 Dual VIOS and client multipath I/O to dual VIOS after migration.
If the destination system is configured with only one Virtual I/O Server, the
migration cannot be performed. The migration process would create two paths
using the same Virtual I/O Server, but this setup of having one virtual Fibre
Channel host device mapping the same LUNs on different virtual Fibre Channel
adapters is not recommended.
To migrate the partition, you must first remove one path from the source
configuration before starting the migration. The removal can be performed
without interfering with the running applications. The configuration becomes a
simple single Virtual I/O Server migration.
Partitions may not use physical adapters, Host Ethernet Adapters (HEA), or
non-default virtual serial adapters when participating in an active migration. Any
adapters of these types must be deconfigured and removed before migration.
For this scenario, we assume that you are beginning with a mobile partition that
is using a physical Fibre Channel adapter, and that a Virtual I/O Server exists
and is running on the source and destination systems. Another assumption is
that the Virtual I/O Server partitions have one physical NPIV-capable Fibre
Channel adapter, and the mobile partition’s storage subsystem LUNs are
available to the physical adapter currently used by the mobile partition.
Figure 5-43 describes our starting configuration. See Chapter 2 in PowerVM
Virtualization on IBM System p: Managing and Monitoring, SG24-7590 for
additional details about virtual Fibre Channel and NPIV configuration.
Figure 5-43 The starting configuration: the mobile partition uses a physical
Fibre Channel adapter (fcs1) and virtual Ethernet (ent0), with a Virtual I/O
Server on both the source and destination systems connected to the storage
subsystem
2. Use dynamic LPAR to add a virtual Fibre Channel server adapter, with the
same properties from the previous step, to the activated mobile partition.
Record the virtual Fibre Channel client adapter’s slot number and WWPN pair
for use when configuring the storage subsystem in step 4.
3. Save the changes made to the mobile partition to a new profile name to
preserve the generated WWPNs for future use by the mobile partition.
Important: Similar to virtual SCSI, you do not have to create virtual Fibre
Channel server adapters for your mobile partition on the destination Virtual
I/O Server. They are created automatically for you during the migration.
6. Execute the vfcmap command to associate the virtual Fibre Channel server
adapter to the physical Fibre Channel adapter.
Status:LOGGED_IN
FC name:fcs1 FC loc code:U789D.001.DQDWWHY-P1-C1-T2
Ports logged in:2
Flags:a<LOGGED_IN,STRIP_MERGE>
VFC client name:fcs2 VFC client DRC:U9117.MMA.100F6A0-V2-C70-T1
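The mapping and its verification can be sketched as follows on the source
Virtual I/O Server; vfchost0 and fcs1 are illustrative names, so substitute
the adapter names from your own configuration.

```shell
# Map the virtual Fibre Channel server adapter to the NPIV-capable
# physical port (vfchost0 and fcs1 are illustrative names).
vfcmap -vadapter vfchost0 -fcp fcs1
# Verify the mapping; Status shows LOGGED_IN after the client logs in.
lsmap -vadapter vfchost0 -npiv
```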
7. Record the existing physical Fibre Channel adapter and disk configuration.
Use these details when you remove the physical adapter from the partition.
8. Run the cfgmgr command on the mobile partition to configure the new virtual
Fibre Channel client adapter. The lsdev command shows the new adapter as
fcs2 in Example 5-11. Our physical adapter is a dual-port adapter listed as
fcs0 and fcs1. The mobile partition’s LUNs are attached using the fcs1 port.
9. Verify that the partition’s disks are enabled on the new virtual Fibre Channel
adapters by using the lspath command as shown in Example 5-12. Because
our storage subsystem uses active and passive controller paths, two paths
are shown for each disk. Other storage subsystems might use different
commands to list available paths and show different output.
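A quick way to confirm that every disk still has redundant enabled paths is
to count lspath output per disk. The sketch below runs against a captured
sample (the disk and adapter names are illustrative); on a real partition you
would set the variable with lspath_output=$(lspath) instead.

```shell
# Count the Enabled paths per disk from lspath-style output. The sample
# simulates a partition whose disks each have an active and a passive path.
lspath_output='Enabled hdisk0 fscsi2
Enabled hdisk0 fscsi3
Enabled hdisk1 fscsi2
Enabled hdisk1 fscsi3'
printf '%s\n' "$lspath_output" |
  awk '$1 == "Enabled" { n[$2]++ }
       END {
         for (d in n) if (n[d] < 2) { print d " has only " n[d] " path"; bad = 1 }
         if (!bad) print "all disks have redundant enabled paths"
       }'
# prints: all disks have redundant enabled paths
```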
Figure 5-46 The mobile partition using physical and virtual resources
Example 5-13 Removing the physical adapters and their child devices
# rmdev -R -dl fcs0
# rmdev -R -dl fcs1
# lsdev -Cc adapter|grep fcs
fcs2 Available 70-T1 Virtual Fibre Channel Client Adapter
Example 5-14 Remaining paths after physical adapter has been removed
# lspath
Enabled hdisk0 fscsi2
2. Using dynamic LPAR from your HMC, remove all physical adapter slots from
the mobile partition.
3. Using dynamic LPAR, remove all virtual serial adapters in slots 2 and
above from the mobile partition. Figure 5-47 shows the mobile partition using
only virtual resources.
Figure 5-47 The mobile partition using only virtual resources
After the migration is complete, consider adding physical resources back to the
mobile partition, if they are available on the destination system.
Note: The active mobile partition profile is created on the destination system
without any references to any physical I/O slots that were present in your
profile on the source system. Any other mobile partition profiles are copied
unchanged.
Figure 5-48 shows the mobile partition migrated to the destination system.
Figure 5-48 The mobile partition migrated to the destination system
You can run several versions of AIX, Linux, and the Virtual I/O Server in
logical partitions on POWER5 and POWER6 technology-based servers. Certain
older versions of these operating environments do not support the
capabilities that are available with newer processors, limiting your
flexibility to move logical partitions between servers that have different
processor types.
The processor compatibility mode in which the logical partition currently operates
is the current processor compatibility mode of the logical partition. The
hypervisor sets the current processor compatibility mode for a logical partition by
using the following information:
Processor features supported by the operating environment running in the
logical partition
Preferred processor compatibility mode that you specify
If you want a logical partition to run in an enhanced mode, you must specify
the enhanced mode as the preferred mode for the logical partition. If the
operating environment supports the enhanced mode, the hypervisor sets it as
the current mode when the logical partition is activated.
When you move an active logical partition between servers that have different
processor types, both the current and preferred processor compatibility modes of
the logical partition must be supported by the destination server. When you move
an inactive logical partition between servers that have different processor types,
only the preferred mode of the logical partition must be supported by the
destination server. Table 5-2 lists current and preferred processor compatibility
modes supported on each server type.
For example, you want to move an active logical partition from a POWER6
technology-based server to a Refreshed POWER6 technology-based server so
that the logical partition can take advantage of the additional capabilities
available with the Refreshed POWER6 processor. You set the preferred
processor compatibility mode to the default mode. When you activate the
logical partition on the POWER6 technology-based server, it runs in the
POWER6 mode; after you move it and restart it on the Refreshed POWER6
technology-based server, the hypervisor raises the current mode to the
POWER6+ mode.
When you want to move the logical partition back to the POWER6
technology-based server, you must change the preferred mode from the default
mode to the POWER6 mode (because the POWER6+ mode is not supported on
a POWER6 technology-based server) and restart the logical partition on the
Refreshed POWER6 technology-based server. When you restart the logical
partition, the hypervisor evaluates the configuration. Because the preferred mode
is set to POWER6, the hypervisor does not set the current mode to a higher
mode than POWER6. Remember, the hypervisor first determines whether it can
set the current mode to the preferred mode. If not, it determines whether it can
set the current mode to the next highest mode, and so on. In this case, the
operating environment supports the POWER6 mode, so the hypervisor sets the
current mode to the POWER6 mode, so that you can move the logical partition
back to the POWER6 technology-based server.
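The mode-selection rule described above can be sketched as a small function.
This is a simplified illustration, not IBM code: the hypervisor starts at the
preferred mode and walks down to the first mode that the operating
environment supports, with the mode ordering assumed from this discussion.

```shell
# select_mode PREFERRED "OS_SUPPORTED_MODES" -> highest usable current mode.
select_mode() {
  preferred=$1
  os_supported=$2          # space-separated list of modes the OS supports
  started=0
  for mode in POWER6+ POWER6 POWER5; do
    # Skip modes above the preferred one, then take the first supported mode.
    [ "$mode" = "$preferred" ] && started=1
    [ "$started" -eq 1 ] || continue
    case " $os_supported " in
      *" $mode "*) echo "$mode"; return ;;
    esac
  done
  echo "none"
}
# The operating environment lacks POWER6+ support, so the hypervisor
# falls back to the next-highest mode.
select_mode "POWER6+" "POWER6 POWER5"   # prints POWER6
```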
The easiest way to maintain this flexibility to move back and forth between
different processor types is to determine the processor compatibility mode
supported on both the source and destination servers and set the preferred
processor compatibility mode of the logical partition to the highest mode
supported by both servers. In this example, you set the preferred processor
compatibility mode to the POWER6 mode, which is the highest mode supported
by both POWER6 technology-based servers and Refreshed POWER6
technology-based servers.
The same logic from the previous examples applies to inactive migrations, except
inactive migrations do not require the current processor compatibility mode of the
logical partition because the logical partition is inactive. After you move an
inactive logical partition to the destination server and activate that logical partition
on the destination server, the hypervisor evaluates the configuration and sets the
current mode for the logical partition just like it does when you restart a logical
partition after an active migration. The hypervisor attempts to set the current
mode to the preferred mode. If it cannot, it checks the next highest mode and so
on. If you specify the default mode as the preferred mode for an inactive logical
partition, you can move that inactive logical partition to a server of any processor
type. Remember, when you move an inactive logical partition between servers
with different processor types, only the preferred mode of the logical
partition must be supported by the destination server.
3. If you plan to perform an inactive migration, skip this step and go to step 4 on
page 210.
If you plan to perform an active migration, identify the current processor
compatibility mode of the mobile partition, as follows:
a. In the navigation area of the HMC that manages the source server, expand
Systems Management → Servers and select the source server.
b. In the contents area, select the mobile partition and click Properties.
c. Select the Hardware tab and view the Processor Compatibility Mode,
which is the current processor compatibility mode of the mobile partition.
Record this value so that you can refer to it later.
4. Verify that the preferred and current processor compatibility modes that
you identified in steps 2 on page 208 and 3 on page 209 are in the list of
supported processor compatibility modes identified in step 1 on page 208 for
the destination server. For active migrations, both the preferred and current
processor compatibility modes of the mobile partition must be supported by
the destination server. For inactive migrations, only the preferred processor
compatibility mode must be supported by the destination server.
The same information can be obtained from the HMC’s CLI by using the
lsrefcode and lslparmigr commands. See 5.7, “The command-line interface”
on page 162 for details.
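A hedged sketch of those CLI checks follows; the managed system name
(source_system) and partition name (mobile) are placeholders for your own
environment.

```shell
# Migration state of the partitions on the source managed system:
lslparmigr -r lpar -m source_system
# Current reference code of the mobile partition:
lsrefcode -r lpar -m source_system --filter lpar_names=mobile
```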
Reference codes describe the progress of the migration. You can find a
description of reference codes in “SRCs, current state” on page 260. When the
reference code represents an error, a migration recovery procedure might be
required.
During an inactive migration, only the HMC is involved, and it holds all migration
information.
An active migration requires the coordination of the mobile partition and the two
Virtual I/O Servers that have been selected as mover service partitions. All these
objects record migration events in their error logs. You can find a description of
partition-related error logs in “Operating system error logs” on page 266.
The mobile partition records the start and the end of the migration process. You
may extract the data by using the errpt command, as shown in Example 6-1.
Migration information is recorded also on the Virtual I/O Servers that acted as a
mover service partition. To retrieve it, use the errlog command.
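The two log queries can be sketched as follows; both commands are run
interactively, the first on the AIX mobile partition and the second in the
Virtual I/O Server restricted shell.

```shell
# On the mobile partition (AIX): summary listing of the error log;
# use errpt -a for the full detail of an entry.
errpt
# On each Virtual I/O Server that acted as a mover service partition:
errlog
```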
On the destination mover service partition, the error log registers only the end of
the migration, as shown in Example 6-3.
The error logs on the mobile partition and the Virtual I/O Servers also record
events that prevent the migration from succeeding, such as user interruption or
network problems. They can be used to trace all migration events on the system.
6.2 Recovery
Live Partition Mobility is designed to verify whether a requested migration can be
executed and to monitor all migration processes. If a running migration cannot be
completed, a rollback procedure is executed to undo all configuration changes
applied.
A partition migration might be prevented from running for two main reasons:
The migration is not valid and does not meet prerequisites.
An external event prevents a migration component from completing its job.
The migration validation described in 4.4.1, “Performing the validation steps and
eliminating errors” on page 99 takes care of checking all prerequisites. It can be
explicitly executed at any moment and it does not affect the mobile partition.
When a recovery is required, the mobile partition name can appear on both the
source and the destination system. The partition is either powered down
(inactive migration) or actually running on only one of the two systems
(active migration). Configuration cleanup is performed during recovery.
The same actions performed on the GUI can be executed with the migrlpar
command on the HMC’s command line. See 5.7, “The command-line interface”
on page 162 for details.
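As a hedged sketch, the recover operation from the HMC command line looks
like the following; the system and partition names are placeholders.

```shell
# Recover an interrupted migration (-o r selects the recover operation):
migrlpar -o r -m source_system -p mobile_partition
```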
After a successful recovery, the partition returns to normal operation state and
changes to its configuration are then allowed. If the migration is executed again,
the validation phase will detect the component that prevented the migration and
will select alternate elements or provide a validation error.
During an active migration, the partition state is transferred over the
network between the source and destination mover service partitions. The
mobile partition continues running on the source system while its state is
copied to the destination system.
In the HMC GUI, the migration process fails and an error message is displayed.
Because the migration stopped in the middle of the state transfer, the partition
configuration on the two involved systems is kept in the migrating status, waiting
for the administrator to identify the problem and decide how to continue.
In the HMC, the migrating partition, mobile, is shown on both systems, while
it is active only on the source system. On the destination system, only the
shell of the partition is present. The situation can be viewed by expanding
Systems Management → Custom Groups → All Partitions. In the contents area, a
situation similar to Figure 6-5 is shown.
Both Virtual I/O Servers, each acting as a mover service partition, recorded
the event in their error logs. On the Virtual I/O Server where the cable was
unplugged, we see both the physical network error and the mover service
partition communication error, as indicated in Example 6-5.
The other Virtual I/O Server only shows the communication error of the mover
service partition, because no physical error has been created, as indicated in
Example 6-6.
To recover from an interrupted migration, you must select the mobile
partition and select Operations → Mobility → Recover, as shown in Figure 6-3
on page 217.
A pop-up window similar to the one shown in Figure 6-4 on page 218 opens.
Click the Recover button and the partition state is cleaned up (normalized). The
mobile partition is present only on the source system where it is running, and it is
removed on the destination system, where it has never been executed.
After the network outage is resolved, the migration can be issued again. Wait for
the RMC protocol to reset communication between the HMC and the Virtual I/O
Server that had the network cable unplugged.
As with Live Partition Mobility conducted by the HMC, before migrating a logical
partition, a validation check should be performed to ensure that the migration will
complete successfully.
The migration task on the local Integrated Virtualization Manager helps you
validate and complete a partition migration to a remote system that is managed
by another Integrated Virtualization Manager.
Both the source and destination systems must be at a firmware level 01Ex320 or
later, where x is an S for BladeCenter or an L for Entry servers (such as the
Power 520, Power 550, and Power 560).
Although there is a minimum required firmware level, each system can have a
different level of firmware. The level of the source system firmware must be
compatible with the destination firmware. Installing the most current
available system firmware is recommended.
Previous versions of AIX and Linux can participate in inactive partition migration
if the operating systems support virtual devices and IBM Power Systems
POWER6 technology-based systems.
Storage requirements
For a list of supported disks and optical devices, see the data sheet available on
the Virtual I/O Server support Web site:
http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html
Network requirements
The migrating partition uses the virtual LAN for network access. The VLAN must
be bridged to a physical network using a shared Ethernet adapter in the Virtual
I/O Server partition. Your LAN must be configured such that migrating partitions
can continue to communicate with other necessary clients and servers after a
migration is completed.
Note: Figure 7-3 might give you the impression that you can migrate from an
IVM-managed system to a remote IVM- or HMC-managed system. However, at the
time of this publication, migration between IVM- and HMC-managed systems is
not supported.
3. Ensure that the destination server has enough available memory to support
the mobile partition:
a. Determine the amount of memory that the mobile partition requires:
i. From the Partition Management menu, click View/Modify Partitions.
The View/Modify Partitions panel opens.
ii. Select the mobile partition.
iii. From the More Tasks menu, select Properties. A new window named
Partition Properties opens.
iv. Click the Memory tab.
v. Record the minimum, assigned, and maximum memory settings.
vi. Click OK.
c. Compare the values from the mobile partition and the destination server.
Notes:
Keep in mind that when you move the mobile partition to the destination
server, the destination server requires more reserved firmware memory
to manage the mobile partition. If necessary, you may add more
available memory to the destination server to support the migration by
dynamically removing memory from the other logical partitions.
Use any role other than View Only to modify the memory. Users with the
Service Representative (SR) role cannot view or modify storage values.
Figure 7-7 Checking the amount of processing units of the mobile partition
Figure 7-8 Checking the amount of processing units on the destination server
c. Compare the values from the mobile partition and the destination server.
If the destination server does not have enough available processors to
support the mobile partition, use the Integrated Virtualization Manager to
dynamically remove processors from the mobile partition, or remove processors
from other logical partitions on the destination server.
Note: You must have a super administrator role to perform this task.
5. Verify that the source and destination Virtual I/O Server can communicate
with each other.
If Partition Mobility is not enabled and the feature was purchased with the
system, obtain the activation code from the IBM Capacity on Demand (CoD)
Web site:
http://www-912.ibm.com/pod/pod
Enter the system type and serial number on the CoD site and click Submit.
A list of available activation codes (such as VET or Virtualization Technology
Code, POD, or CUoD Processor Activation Code) or keys with a type and
description is displayed. If PowerVM Enterprise Edition was not purchased
with the system, it can be upgraded through the Miscellaneous Equipment
Specification (MES) process.
c. Verify that the processor compatibility mode (which you identified in step b
on page 240) is in the list of supported processor compatibility modes
(which you identified in step a on page 240) for the destination server. For
active migrations, both the preferred and current modes of the mobile
partition must be supported by the destination server. For inactive
migrations, only the preferred mode must be supported by the destination
server.
5. Ensure that the mobile partition does not have physical adapters, as follows:
a. From the Partition Management menu, click View/Modify Partitions. The
View/Modify Partitions window opens.
b. Select the logical partition that you want to remove from the partition
workload group.
c. From the More Tasks menu, select Properties. A new window named
Partition Properties appears.
d. In the Physical Adapters tab, verify that no physical adapters are
configured.
e. Click OK.
6. Ensure that the applications running in the mobile partition are mobility-safe
or mobility-aware. Most software applications running in AIX and Linux logical
partitions do not require any changes to work correctly during active Partition
Mobility. Certain applications might have dependencies on characteristics that
change between the source and destination servers and other applications
might have to adjust to support the migration.
The physical storage that the mobile partition uses is connected to the SAN. At
least one physical adapter that is assigned to the source Virtual I/O Server
logical partition is connected to the SAN, and at least one physical adapter that is
assigned to the destination Virtual I/O Server logical partition is also connected
to the SAN.
When you move the mobile partition to the destination server, the Integrated
Virtualization Manager automatically creates and connects virtual adapters on
the destination server, as follows:
Creates virtual adapters on the destination Virtual I/O Server logical partition
Creates virtual adapters on the mobile partition
Connects the virtual adapters on the destination Virtual I/O Server logical
partition to the virtual adapters on the mobile partition
Verify that the destination server provides the same virtual SCSI configuration as
the source server so that the mobile partition can access its physical storage on
the SAN after it moves to the destination server:
1. Verify that the physical storage that is used by the mobile partition is assigned
to the management partition on the source server and to the management
partition on the destination server.
2. Verify that the reserve_policy attributes on the physical volumes are set to
no_reserve so that the mobile partition can access its physical storage on the
SAN from the destination server.
To set the reserve_policy attribute of the physical storage to no_reserve:
a. From either the Virtual I/O Server logical partition on the source server or
the Virtual I/O Server on the destination server, list the disks to which the
Virtual I/O Server has access. Run the following command:
lsdev -type disk
b. List the attributes of each disk. Run the following command, where hdiskX
is the name of a disk that you identified in the previous step:
lsdev -dev hdiskX -attr
CuAt:
name = "hdisk7"
attribute = "unique_id"
value = "3E213600A0B8000291B080000520C023C6B410F1815 FAStT03IBMfcp"
type = "R"
generic = "D"
rep = "nl"
nls_index = 79
...
#
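The check and change can be sketched as follows on each Virtual I/O Server;
hdisk7 matches the disk shown in the output above, but substitute your own
disk names. Note that if the disk is in use, the device may need to be
reconfigured before the attribute change takes effect.

```shell
# Check the current reservation policy of the disk:
lsdev -dev hdisk7 -attr reserve_policy
# Allow both management partitions to access the disk during migration:
chdev -dev hdisk7 -attr reserve_policy=no_reserve
```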
Each virtual Fibre Channel adapter that is created on the mobile partition (or any
client logical partition) is assigned a pair of worldwide port names (WWPNs).
Both WWPNs are assigned to the physical storage that the mobile partition uses.
During normal operation, the mobile partition uses one WWPN to log on to the
SAN and access the physical storage.
When you move the mobile partition to the destination server, there is a brief
period of time during which the mobile partition runs on both the source and
destination servers. Because the mobile partition cannot log on to the SAN from
both the source and destination servers at the same time using the same
WWPN, the mobile partition uses the second WWPN to log on to the SAN from
the destination server during the migration. The WWPNs of each virtual Fibre
Channel adapter move with the mobile partition to the destination server.
The first step is to assign virtual Fibre Channel adapters to your client partition
using the physical NPIV-capable adapter that is being used in your management
partition. Access the GUI and perform the following tasks:
1. From the I/O Adapter Management menu in the navigation area, click
View/Modify Virtual Fibre Channel.
The View/Modify Virtual Fibre Channel window opens, listing all the physical
ports, connected partitions, and available connections on the physical Fibre
Channel adapters that support NPIV.
You can now see the physical Fibre Channel adapters that are capable of
being used for hosting virtual Fibre Channel adapters.
2. Select the physical adapter to use and click Modify Partition Connections.
3. You may now choose to add or remove virtual Fibre Channel adapter
assignments for a partition. In this case, you will select the partition of your
choice so that a virtual Fibre Channel adapter is created and WWPNs are
generated for the client. After you select a partition, the phrase Automatically
generate is displayed in the Worldwide Port Names column, as shown in
Figure 7-15. Click OK. The WWPNs for the client partition are generated.
Next, verify that the destination server provides the same virtual Fibre Channel
configuration as the source server so that the mobile partition can access its
physical storage on the SAN after it moves to the destination server, as follows:
1. Verify, for each virtual Fibre Channel adapter on the mobile partition, that both
WWPNs are assigned to the same physical storage on the SAN.
vi. Record the number of physical ports that are assigned to the mobile
partition and click OK.
b. Determine the number of physical ports that are available on the
management partition on the destination server:
i. From the I/O Adapter Management menu, select View/Modify Virtual
Fibre Channel. The View/Modify Virtual Fibre Channel panel opens.
Note: You may also use the lslparmigr command to verify that the
destination server provides enough available physical ports to support
the virtual Fibre Channel configuration of the mobile partition.
5. You may now choose to validate and migrate the mobile partition to the
destination server.
The mobile partition uses the virtual LAN for network access. The virtual LAN
must be bridged to a physical network using a virtual Ethernet bridge in the
management partition. The LAN must be configured so that the mobile partition
can continue to communicate with other necessary clients and servers after a
migration is completed.
You may assign a Host Ethernet Adapter (or Integrated Virtual Ethernet) port
to a logical partition so that the logical partition can directly access the
external network by completing the following steps:
a. From the I/O Adapter Management menu, select View/Modify Host
Ethernet Adapters.
b. Select a port with at least one available connection and click Properties.
VIOSE01042026 The partition cannot be migrated when in its current power state.
VIOSE0104202A A virtual slot owned by the partition has an adapter that cannot
be migrated.
VIOSE0104202C The Virtual I/O Server on the source managed system is not
marked as an MSP.
VIOSE0104202D The Virtual I/O Server partition is not capable of taking part in a
migration.
VIOSE0104202F The partition is not the source of the migration. The executed
command must be run on the source.
VIOSE01040F04 A warning that the partition has a physical I/O resource assigned
to it that will be removed as part of the inactive migration.
VIOSE0104203F The migrlpar process was unable to finish the migration on the
source managed system because other tasks have not finished.
VIOSE01042042 Failed to lock the storage configuration on the source Virtual I/O
Server.
VIOSE01042032 RMC is not active with the migrating partition. RMC needs to be
active to perform active migrations.
VIOSE01090003 Unable to find a Virtual I/O Server partition with the given ID on
the destination managed system.
VIOSE01090004 Unable to find a Virtual I/O Server partition with the given name
on the destination managed system.
VIOSE01090005 The destination managed system does not have access to the
storage assigned to the migrating partition.
VIOSE0109000A The target Virtual I/O Server does not support partition mobility.
VIOSE0109000B A VLAN that is bridged on the source Virtual I/O Server is not
bridged on the target. This is only a warning message and does
not cause the migration to fail.
VIOSE01090010 A given partition name does not match the name of the partition
with the given ID.
VIOSE01090012 This code appears only if the source makes a clean-up request
to the target, but the partition on the target is not in the process
of a migration.
VIOSE01090015 The target managed system does not have enough available
memory to create the partition.
VIOSE01090016 The target managed system does not have enough available
processing units to create the partition.
VIOSE01090017 The target managed system does not have enough available
processors to create the partition.
VIOSE01090018 The availability priority of the mobile partition is higher than the
target management partition.
VIOSE0109001C The partition with the given ID on the target managed system is
not a Virtual I/O Server.
VIOSE0109001D A Virtual I/O Server partition with the given name does not exist
on the target managed system.
VIOSE01090033 Not enough memory is available for firmware to use with the new
partition.
VIOSE01090034 The processor pool ID specified was not found on the target
managed system.
VIOSE01090035 The processor pool name specified was not found on the target
managed system.
VIOSE01090036 The command to set the storage configuration for the partition
failed.
VIOSE01090037 The command to lock the storage configuration for the partition
failed.
VIOSE01090039 The destination managed system was not able to clean up the
migration because not all partitions involved have finished.
VIOSE0109003E The partition cannot be migrated because the target Virtual I/O
Server has already reached its maximum number of virtual slots.
PVID Port Virtual LAN Identifier
SPOT shared product object tree
The publications listed in this section are considered particularly suitable for a
more detailed discussion of the topics covered in this book.
IBM Redbooks
For information about ordering these publications, see “How to get IBM
Redbooks” on page 274. Note that several documents referenced here might be
available in softcopy only.
AIX 5L Differences Guide Version 5.3 Edition, SG24-7463
AIX 5L Practical Performance Tools and Tuning Guide, SG24-6478
Effective System Management Using the IBM Hardware Management
Console for pSeries, SG24-7038
IBM System p Advanced POWER Virtualization (PowerVM) Best Practices,
REDP-4194
Implementing High Availability Cluster Multi-Processing (HACMP) Cookbook,
SG24-6769
Introduction to pSeries Provisioning, SG24-6389
Linux Applications on pSeries, SG24-6033
Managing AIX Server Farms, SG24-6606
NIM from A to Z in AIX 5L, SG24-7296
Partitioning Implementations for IBM eServer p5 Servers, SG24-7039
A Practical Guide for Resource Monitoring and Control (RMC), SG24-6615
Integrated Virtualization Manager on IBM System p5, REDP-4061
PowerVM Virtualization on IBM System p: Managing and Monitoring,
SG24-7590
PowerVM Virtualization on IBM System p: Introduction and Configuration
Fourth Edition, SG24-7940
Other publications
These publications are also relevant as further information sources:
Documentation available on the support and services Web site includes:
– User guides
– System management guides
– Application programmer guides
– All commands reference volumes
– Files reference
– Technical reference volumes used by application programmers
The support and services Web site is:
http://www.ibm.com/systems/p/support/index.html
Virtual I/O Server and support for Power Systems (including Advanced
PowerVM feature):
https://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/home.html
Linux for pSeries installation and administration (SLES 9):
http://www.ibm.com/developerworks/systems/library/es-pinstall/
Linux virtualization on POWER5: A hands-on setup guide:
http://www.ibm.com/developerworks/edu/dw-esdd-virtual-i.html
POWER5 Virtualization: How to set up the IBM Virtual I/O Server:
http://www.ibm.com/developerworks/aix/library/au-aix-vioserver-v2/
Latest Multipath Subsystem Device Driver User's Guide
http://www.ibm.com/support/docview.wss?rs=540&context=ST52G7&uid=ssg1S7000303
Index 277
Live Application Mobility 16
Live Partition Mobility
  high availability 15
  PowerVM support 16
  preparation 53
  remote 130
Live Workload Partitions 16
LMB 34, 54
logical HEA
  See LHEA
logical memory block
  See LMB
logical unit number 31
logical volumes 24, 29
LPAR workload group 25, 28, 32
lsattr command 81
lsdev command 79, 88, 154, 157, 191, 201, 245–246
lslic command 49
lslparmigr command 41, 121, 128, 135, 145, 166, 172, 175, 214, 253
  remote capability 168
lsmap command 191
lspv command 81, 246–247
lsrefcode command 214
lsrsrc command 67
lssyscfg command 175, 240
lsvet command 238
LUN
  mapping 31, 39

M
MAC address 28
  uniqueness 35
memory
  affinity 43
  available 56
  configuration 37
  dirty page 38
  footprint 38
  LPAR memory size 42
  modification 38
  pages 38
messages 101, 110
migratability 25
  huge pages 25
  redundant error path 25
  versus partition readiness 25
migratepv command 156
migration
  active 31
  inactive 27
  messages 103
    errors 101
    warnings 101
  mover service partition selection 111
  processor compatibility mode 205
  profile 27
  remote 130
  shared processor pool selection 114
  specifying the destination profile 106
  starting state 41
  state 31
  status window 116
  steps 99
  validation 110
  virtual Fibre Channel 193
  virtual SCSI adapter assignment 113
  VLAN 112
  workflow 26
migration phase
  active migration 36
migrlpar command 129, 145, 175, 218
  example 165
  migrate 165
  recovery 165
  stop 165
  validate 165
minimal requirements 91
  HMC 91
  LMB 91
  Network connection 91
  partition 91
  storage 92
  VIOS 91
  virtual SCSI 91
mkauthkeys command 138, 163, 173
mktcpip command 88
mkvdev command 151
mobility-aware 79
mobility-safe 79
mover service partition 24
  See MSP
MPIO 30, 35
MSP 21, 25, 32–33, 36–39, 64
  configuration 96, 129, 143
  definition 12
  current 205, 209
  default 208
  enhanced 205
  examples 206
  inactive migration 208
  non-enhanced 206
  preferred 205, 208
  supported 206
  verification 208
processors
  available 58
  binding 43
  configuration 37
  state 32
profile 21, 26
  active 28
  last activated 27, 30
  name 26, 78
  pending values 37

R
RAS tools 44
reactivation
  active migration 39
readiness 24
  battery power 24
  infrastructure 25
  server 24
Red Hat Enterprise Linux 51
Redbooks Web site 274
  Contact us xix
reducevg command 156
redundant error path reporting 68
remote migration 130–131
  considerations 135
  information 169
  infrastructure 133
  lslparmigr command 169
  migration 141
  network test 136
  private network 132
  requirements 132
  workflow 131
required I/O 76
requirements
  active migration 93
  adapters 25
  battery power 24
  capability 23
  compatibility 23
  example 9
  hardware 7
  huge pages 25
  memory 25
  name 25
  network 8, 12, 129
  partition 7
  physical adapter 7
  physical adapters 93
  processors 25
  redundant error path 25
  RMC 93
  storage 8, 92
  synchronization 32
  VASI 93
  VIOS 7–8
  virtual SCSI 91
  workload group 25
reserve_policy attributes 79, 92
resource availability 35
resource balancing 4
Resource Monitoring and Control
  See RMC
resource sets, AIX 43
resource state 32
RMC 20, 24–25, 28, 34, 66, 131
rollback 31, 42

S
SAN 24–25, 32, 131
SCSI reservation 24
SEA 8, 32, 87
server readiness 24
service partition 28
shared Ethernet adapter
  See SEA
shared processor pool 147, 171
  CLI 148
  information 168
  lslparmigr command 168
SIMD 23
SMS 27
SSH
  key authentication 132
  key generation 136
ssh command 163
virtual serial I/O 69
  default adapters 26
VLAN 25, 32

W
warning messages 101
warnings 103
workflow 26
  active migration 34
  inactive migration 30
  validation 28
workload
  throttling 38
workload group 25, 28, 32
workload manager 43
workload partition
  See WPAR
WPAR
  migration 16
  requirements 17
WWPN 190

X
XRSET 43
IBM PowerVM Live Partition Mobility

Explore the PowerVM Enterprise Edition Live Partition Mobility

Move active and inactive partitions between servers

Manage partition migration with an HMC or IVM

Live Partition Mobility is the next step in the IBM Power Systems virtualization continuum. It can be combined with other virtualization technologies, such as logical partitions, Live Workload Partitions, and the SAN Volume Controller, to provide a fully virtualized computing platform that offers the degree of system and infrastructure flexibility required by today's production data centers.

This IBM Redbooks publication discusses how Live Partition Mobility can help technical professionals, enterprise architects, and system administrators:

Migrate entire running AIX and Linux partitions and hosted applications from one physical server to another without disrupting services and loads.

Meet stringent service-level agreements.

Rebalance loads across systems quickly, with support for multiple concurrent migrations.

Use a migration wizard for single partition migrations.

This book can help you understand, plan, prepare, and perform partition migration on IBM Power Systems servers that are running AIX.

INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.