A Consolidation Case Study on Oracle SuperCluster
by Thierry Manfé, with contributions from Orgad Kimchi, Maria Frendberg, and Mike Gerdts
Best practices and hands-on instructions for using Oracle Solaris Zones to consolidate existing physical
servers and their applications onto Oracle SuperCluster using the P2V migration process, including a
step-by-step example of how to consolidate an Oracle Solaris 8 server running Oracle Database 10g.
Table of Contents
Introduction
Good Questions to Start With
SuperCluster Domains for Oracle Solaris Zones
Network Setup
Oracle Licensing
P2V Migration Step by Step
Performance Tuning
Example: Consolidating an Oracle Solaris 8 Server Running Oracle Database 10g
Conclusion
See Also
About the Author
Introduction
A growing number of companies are looking at virtualization to consolidate many physical servers on a single
platform. As a general-purpose engineered system, Oracle SuperCluster has the following characteristics, which
are required to address consolidation needs:
Scalable computing resources with strong throughput capacities, which are critical for the execution of
many virtual machines and applications in parallel.
Fully redundant compute, storage, and networking resources, which deliver the availability required for
running many application layers on a single virtualized platform.
Native virtualization technologies with Oracle VM Server for SPARC and lightweight Oracle Solaris
Zones. These two technologies can be combined for maximum flexibility while reducing the virtualization
overhead.
A powerful and versatile database machine that can accommodate the load of many applications.
Tools to facilitate migration from a physical server to a virtual machine (P2V migration). The content of a
physical server, including applications, can be captured and redeployed in an Oracle Solaris Zone.
Support of a wide range of Oracle Solaris releases, simplifying the consolidation of legacy servers.
This article provides guidance, best practices, and hands-on instructions for using Oracle Solaris Zones to
consolidate existing servers onto SuperCluster. It focuses on the operating system and virtualization layers, on
the P2V migration process, and on the associated tools for facilitating this migration.
This article is intended to help system administrators, architects, and project managers who have some
understanding of SuperCluster and want to familiarize themselves with P2V migration and evaluate the possibility
of conducting such a transition.
Good Questions to Start With
SuperCluster offers a lot of flexibility and numerous options for virtualization and consolidation. The following
sections discuss some questions that should help you to quickly identify the best options for consolidating your
physical servers on SuperCluster.
A first question is which Oracle Solaris release the server to be consolidated is running:
Oracle Solaris 9: No restrictions on updates; 64-bit required. See System Administration Guide: Oracle
Solaris 9 Containers for more details on solaris9 branded zones. Use isainfo -b to check the number of
bits (32 or 64) in the address space of the server to be consolidated.
Oracle Solaris 8: Oracle Solaris 8 2/04 or later; 64-bit required. If you are using a previous update, applications
using libthread might experience some problems. See Chapter 8 of the System Administration Guide: Oracle
Solaris 8 Containers for more details on solaris8 branded zones. Again, use isainfo -b to check the number of
bits (32 or 64) in the address space of the server to be consolidated.
Also look at the Oracle Solaris Support Life Cycle Policy (referenced on this page) to check whether your Oracle
Solaris release is supported.
Non-SPARC servers (such as x86 servers) cannot be consolidated on SuperCluster using P2V migration.
However, applications using non-native code, such as Java and Python, can be migrated from non-SPARC servers to SuperCluster.
Note: For consolidating an application that was developed with a compiled language (such as C, C++, or
FORTRAN) and that runs on a non-SPARC server, the application must first be recompiled on SPARC, which
means you need access to the source code.
For zone storage, there are two main options: the local HDDs of the SPARC T-Series servers, and
Oracle's Sun ZFS Storage 7320 appliance, which is integrated into SuperCluster.
The two main criteria to be considered when choosing between these options are I/O performance and
manageability.
I/O Performance
The first thing to check on the source server to be consolidated is whether the data that generates the I/O load is
located on the root file system. If not (which is typically the case with data located on SAN storage), the data will
likely not be transferred to the zonepath as part of the P2V migration, and the location of the resulting zone won't
have much impact on I/O performance. In such a case, the zone should be installed on the Sun ZFS Storage
7320 appliance. If the data is located on the root file system, it will be transferred to the zonepath and the zone's
location will impact I/O performance. In this case, and if a large number of zones have their zonepaths on the
Sun ZFS Storage 7320 appliance, local HDDs can be a good alternative to dedicate I/O bandwidth to a limited
number of zones.
Manageability
If you want to migrate zones between the SuperCluster domains, installing zones on the Sun ZFS Storage 7320
appliance is the best option. Creating a dedicated ZFS pool (zpool) from an iSCSI LU exported by the Sun ZFS
Storage 7320 appliance for each zone greatly simplifies the migration, because it becomes a matter of
transferring the zpool from the source to the target domain using zpool export and zpool import. No data
migration or copy is required.
Note: A SuperCluster domain is simply an Oracle VM Server for SPARC virtual machine (aka a logical domain).
On SuperCluster, zones are hosted in domains.
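As a sketch, and assuming one zpool per zone as described above, the migration could look like the following transcript. The zone name (myzone) and pool name (myzone-pool) are hypothetical, and the exact detach/attach steps depend on the Oracle Solaris release in use:

```
(on the source domain)
# zoneadm -z myzone halt
# zoneadm -z myzone detach
# zpool export myzone-pool

(on the target domain, where the same iSCSI LU is visible)
# zpool import myzone-pool
# zoneadm -z myzone attach
# zoneadm -z myzone boot
```

The zone configuration must also exist on the target domain before the attach, for example by replaying a zonecfg export taken on the source.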
In addition, one zpool per zone matches the default zone provisioning schema from Oracle Enterprise Manager
Ops Center, and if the zpool is located on the Sun ZFS Storage 7320 appliance, Oracle Enterprise Manager Ops
Center can be used to perform the migration between domains.
Finally, zones installed on the Sun ZFS Storage 7320 appliance immediately benefit from the high availability
designed into the SuperCluster: the storage and the network access to it are fully redundant.
SuperCluster Domains for Oracle Solaris Zones
Oracle Solaris Zones resulting from a P2V consolidation must be hosted in Application Domains. The Sun ZFS
Storage 7320 appliance should be preferred for installing zones; however, if you plan to install zones on local
HDDs, the Application Domain must have spare HDDs for creating an extra zpool dedicated to zones. For
SuperCluster configurations with one or two domains per SPARC T-Series server from Oracle, the Application
Domains have enough spare HDDs. For configurations with more than two domains per SPARC T-Series server,
the /u01 file system can be used instead of an extra zpool.
Oracle SuperCluster T5-8 comes with a set of logical domains (Oracle VM Server for SPARC virtual machines).
These domains are configured on the SPARC T5-8 servers during the initial configuration of SuperCluster and
they can be of two different types: Database Domains and Application Domains.
Application Domains running Oracle Solaris 11 offer the highest flexibility for zones' network configuration. Virtual
network interfaces and InfiniBand (IB) interfaces can be created at will and dedicated to zones. Each zone can
be connected to multiple VLANs, enabling a seamless integration with the data center network infrastructure.
Each zone can also be connected to multiple IB partitions. In each zone, IP Network Multipathing (IPMP) can be
configured on each VLAN and IB partition to provide network redundancy.
Oracle SuperCluster T5-8 comes with three networks:
The 10-GbE client access network connects SuperCluster to the data center. The servers to be
consolidated are located on this network.
The management network (10 GbE on Oracle SuperCluster T5-8) is dedicated to administration tasks.
The InfiniBand network is the private fabric that interconnects the SuperCluster domains and the Sun
ZFS Storage 7320 appliance.
Here, one vnet (vnet0), two NICs (ixgbe0 and ixgbe1), and three InfiniBand interfaces are available in the
domain. A NIC that is not used by the global zone does not appear in the ifconfig -a output, which is shown
in Listing 1:
# ifconfig -a
VLAN Setup
VLAN setup in an Application Domain running Oracle Solaris 11 on the 10-GbE client access network is
straightforward for both Oracle Solaris release 10 and 11 zones. Use an exclusive-IP zone, and create virtual
network interfaces (VNICs) in the global zone on top of the required 10-GbE NIC using the dladm create-vnic
command, with the -v option to set the VLAN ID. From there, all the packets leaving the
zone through the VNIC are tagged with this VLAN ID. Also, as soon as a VNIC is created, a virtual switch is
instantiated in the global zone, which ensures packet filtering: only the packets tagged with this VLAN ID are
forwarded to the zone through the VNIC. The virtual switch also guarantees that two zones running in the same
Application Domain but connected to different VLANs cannot communicate with each other.
Similarly, for Application Domains running Oracle Solaris 10, VLAN-tagged interfaces are created in the global
zone and assigned to an exclusive-IP zone. For Oracle Solaris 10, more details can be found on this blog.
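For illustration, creating a VNIC on ixgbe0 in an Oracle Solaris 11 global zone and assigning it to a zone could look like the following transcript. The VLAN ID (123), VNIC name (vnic123), and zone name (myzone) are hypothetical examples:

```
# dladm create-vnic -l ixgbe0 -v 123 vnic123
# zonecfg -z myzone
zonecfg:myzone> add net
zonecfg:myzone:net> set physical=vnic123
zonecfg:myzone:net> end
zonecfg:myzone> exit
```

After the next zone boot, all traffic leaving myzone through vnic123 is tagged with VLAN ID 123.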
Oracle Licensing
Oracle recognizes capped Oracle Solaris Zones as licensable entities, known as hard partitions. This means that
zones created as part of the P2V migration process can be capped to optimize the cost of your Oracle software
license.
To create a zone that fits the licensing requirements set by Oracle, you first need to create a resource pool with
the desired number of cores and bind the zone to this resource pool.
First create and edit a licensePool.cmd file as follows:
create pset license-pset ( uint pset.min = 16; uint pset.max = 16 )
create pool license-pool
associate pool license-pool ( pset license-pset )
The first line defines a processor set with sixteen virtual CPUs. On Oracle SuperCluster T5-8, since each core
has eight virtual CPUs, this is equivalent to two cores. As a result, the actual number of cores considered for the
licensing is two. It is worth noting that the number of virtual CPUs in the processor set should always be a
multiple of eight.
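The arithmetic above can be double-checked with a quick shell calculation, assuming (as stated above) eight virtual CPUs per core on SPARC T5:

```shell
# Licensed cores = virtual CPUs in the processor set / virtual CPUs per core
PSET_VCPUS=16        # pset.min = pset.max = 16 in licensePool.cmd
VCPUS_PER_CORE=8     # SPARC T5: eight hardware threads (virtual CPUs) per core
echo $((PSET_VCPUS / VCPUS_PER_CORE))    # prints 2
```

With a 16-vCPU processor set, two cores are counted for licensing.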
When you are done editing the file, create the license-pool pool:
# pooladm -s
# poolcfg -f licensePool.cmd
# pooladm -c
The new resource pool configuration can be checked using pooladm, as shown in Listing 2:
# pooladm
...
pool license-pool
        int     pool.sys_id 2
        boolean pool.active true
        boolean pool.default false
        int     pool.importance 1
        string  pool.comment
        pset    license-pset

pset license-pset
        int     pset.sys_id 1
        boolean pset.default false
        uint    pset.min 16
        uint    pset.max 16
        string  pset.units population
        uint    pset.load 31
        uint    pset.size 16
        string  pset.comment
...
Listing 2
Modify the zone configuration using the zonecfg command and add the following attribute to the
zone: pool=license-pool. The zone is now using the license-pool cores.
Finally, reboot the zone. You can now connect to it and check the number of CPUs that are visible using
the psrinfo command.
Note that many zones can use the license-pool cores without increasing the license cost, which remains
based on two cores. All the zones associated with license-pool share the two cores.
P2V Migration Step by Step
This section goes through the main steps of a P2V migration to SuperCluster. The example given is the
consolidation of an Oracle Solaris 10 server into an Oracle Solaris 10 zone in an Application Domain running
Oracle Solaris 10.
Note: General information about P2V migration (outside of the SuperCluster context) can be found in
the Oracle Solaris Administration guide.
(Excerpt of a check on the source server: the state is running, and /usr/lib/fm/fmd/fmd is flagged for using
the disallowed privilege sys_config.)
The flarcreate command creates an s10server.flar image file. The -L cpio option enforces the use
of the CPIO format for the archive. This format should be used when the source system has a ZFS root file
system. The -S option skips a disk-space checking step for a faster process, and -n specifies the internal name
of the FLAR image file. You can use the -x option to exclude a file, a directory, or a ZFS pool from the FLAR
image file.
Note: flarcreate is not available on Oracle Solaris 11. A ZFS stream from the global zone's rpool is used
instead of a FLAR image file. More information can be found here.
If the source server has multiple boot environments (BEs), only the active one is included in the FLAR image file.
With an Oracle Solaris 10 native zone, the different BEs can be listed but only one can be used: the active one.
With an Oracle Solaris 10 branded zone (running in an Oracle Solaris 11 global zone), the Live Upgrade related
commands are not available and the different BEs cannot be listed. As much as possible, BEs should be deleted
from the source server before the P2V migration.
The FLAR image file is now ready to be copied to SuperCluster.
Note: If SuperCluster is configured with more than two domains per server node, the zpool create command
can abort with a message saying that it cannot open one of the slices because the device is busy. It is likely that
the slice is shared by the domain virtual disk server. You can check this by connecting to the control domain and
running ldm list-domain -l on the Application Domain. The busy device should appear in the VDS section
of the output. The solution is to use other slices to create the ZFS pool. This example uses the SSDs.
1. Using the Sun ZFS Storage 7320 appliance user interface, create an iSCSI LUN and get the associated
target number. To get the target, use the Configuration > SAN tab and select iSCSI Targets.
2. Mount the iSCSI LUN in the domain hosting the zone (that is, in the global zone), where:
iqn.1986-03.com.sun:... is the target number associated with the LUN and collected in Step 1.
192.168.30.5 is the IP address of the Sun ZFS Storage 7320 appliance on the InfiniBand network.
3. At this point, the LUN is mounted. Find its associated device name using iscsiadm list target:
Configure the zone using zonecfg with the configuration file created with zonep2vchk -c. In the configuration
file, zonepath is set to /s10p2vpool/s10P2V, ip-type is set to exclusive, and the net resource's physical property
is set to ixgbe1:
# zonecfg -z s10P2V -f s10P2V.cfg
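The configuration file could look like the following sketch. Only the zonepath, ip-type, and net settings are taken from the text above; a file generated by zonep2vchk would typically contain additional resources and properties specific to the source server:

```
create -b
set zonepath=/s10p2vpool/s10P2V
set ip-type=exclusive
add net
set physical=ixgbe1
end
commit
```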
When the zone is configured, install it using the zoneadm command and the FLAR image file. If you want to use
the same OS configuration as the source server (including the same IP address), use the -p option. Be aware
that this can create an address conflict: the source server should be shut down or its IP address should be
modified before booting the zone. If you want to use a different configuration, use the -u option instead, which
unconfigures the zone upon install:
# zoneadm -z s10P2V install -a /s10P2V.flar -u
cannot create ZFS dataset nfspool/s10P2V: dataset already exists
Log File: /var/tmp/s10P2V.install_log.AzaGfB
Installing: This may take several minutes...
Postprocessing: This may take a while...
Postprocess: Updating the zone software to match the global zone...
Postprocess: Zone software update complete
Postprocess: Updating the image to run within a zone
Result: Installation completed successfully.
Log File: /nfspool/s10P2V/root/var/log/s10P2V.install13814.log
At this point, the zone is ready to be booted but its Oracle Solaris instance is not configured. If the zone is booted
as such, connect to its console using zlogin -C in order to provide the configuration parameters interactively.
Alternatively, you can copy a sysidcfg file to the zonepath to avoid interactive configuration:
# cp sysidcfg /nfspool/s10P2V/root/etc/sysidcfg
# zoneadm -z s10P2V boot
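A minimal sysidcfg file could look like the following sketch. All values (host name, addresses, time zone) are hypothetical examples to be adjusted to your environment:

```
system_locale=C
terminal=vt100
network_interface=PRIMARY {hostname=s10p2v
                           ip_address=192.168.1.50
                           netmask=255.255.255.0
                           default_route=192.168.1.1
                           protocol_ipv6=no}
security_policy=NONE
name_service=NONE
nfs4_domain=dynamic
timezone=US/Pacific
root_password=<encrypted password string from /etc/shadow>
```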
Performance Tuning
This section focuses on tuning the I/O performance of a zone resulting from a P2V migration. From a CPU and
network point of view, a zone behaves like any other Oracle Solaris image, so there is no zone-specific tuning to
be performed. However, on the I/O side, because a zone sits on a file system, performance can benefit from file
system tuning.
In the case of a P2V migration to SuperCluster, the most important parameters for I/O performance are the
ZFS recordsize and compression settings for the zonepath and, if the zone is located on the Sun ZFS Storage 7320
appliance, the NFS rsize and wsize.
If the source server hosts a database or an application that performs fixed-size access to files, the
ZFS recordsize should be tuned to match this size. For example, if it is hosting a database with a 4k record
size, set the ZFS recordsize to 4k.
Regardless of whether the zonepath is located on a local HDD or on the Sun ZFS Storage 7320 appliance, ZFS
compression improves performance for synchronous I/O operations. It also improves asynchronous I/O
operations when the zonepath is on the Sun ZFS Storage 7320 appliance. For these types of workloads, the
recommendation is to set the zonepath's compression property to on. The improvement is less significant for
asynchronous I/O operations with the zonepath on local HDDs.
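Applied from the global zone hosting the zonepath, the tuning above could look like the following transcript. The dataset name and the 4k record size are hypothetical examples, and note that recordsize only affects files written after the change:

```
# zfs set recordsize=4k s10p2vpool/s10P2V
# zfs set compression=on s10p2vpool/s10P2V
```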
Example: Consolidating an Oracle Solaris 8 Server Running Oracle Database 10g
This section describes a P2V migration from an Oracle Solaris 8 server running Oracle Database 10.2.0.5 to an
Oracle Solaris 8 zone hosted in an Application Domain running Oracle Solaris 10. The database data is located
on attached storage connected through Fibre Channel on which an Oracle Automatic Storage Management file
system has been created.
On the source system, as user oracle and before creating the FLAR image file, stop the database listener and
the Oracle Automatic Storage Management instances, as shown in Listing 9:
$ sqlplus "/as sysdba"
SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 13:19:48 2012
Copyright (c) 1982, 2010, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> shutdown immediate
$ export ORACLE_SID=+ASM
$ sqlplus "/as sysdba"
SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 13:21:38 2012
Copyright (c) 1982, 2010, Oracle. All Rights Reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
SQL> shutdown
ASM diskgroups dismounted
ASM instance shutdown
$ lsnrctl stop
LSNRCTL for Solaris: Version 10.2.0.5.0 - Production on 26-AUG-2012 13:23:49
Copyright (c) 1991, 2010, Oracle. All rights reserved.
Connecting to (ADDRESS=(PROTOCOL=tcp)(HOST=)(PORT=1521))
The command completed successfully
Listing 9
Once the database and Oracle Automatic Storage Management are stopped, create the FLAR image file
as root and copy it to the Application Domain. In the following example, -S specifies that disk-space checking is
skipped and that the archive size is not written to the archive, which significantly reduces the archive creation
time. The -n option specifies the image name, and -L specifies the archive format.
# flarcreate -S -n s8-system -L cpio /var/tmp/s8-system.flar
At this point, move the SAN storage and connect it to the SPARC T-Series server. Then, from the control domain,
make the LUN (/dev/dsk/c5t40d0s6) available in Application Domain s10u10-EIS2-1:
# ldm add-vdsdev /dev/dsk/c5t40d0s6 oradata@primary-vds0
# ldm add-vdisk oradata oradata@primary-vds0 s10u10-EIS2-1
The relevant parts of the zone configuration are shown in Listing 10:
net:
        address not specified
        physical: ixgbe1
        defrouter not specified
device:
        match: /dev/rdsk/c0d2s0
attr:
        name: machine
        type: string
        value: sun4u
Listing 10
Still in the Application Domain, install the zone and boot it using zoneadm. With -p, the configuration of the
Oracle Solaris 8 image is preserved, and -a specifies the archive location:
# zoneadm -z s8-10gr2 install -p -a /var/tmp/s8-system.flar
...
# zoneadm -z s8-10gr2 boot
Now that the zone is booted, it is possible to connect to it using zlogin s8-10gr2. As root, change the
ownership of the raw device and, as oracle, start Oracle Automatic Storage Management and the database, as
shown in Listing 11:
# chown oracle:dba /dev/rdsk/c0d2s0
# su - oracle
$ lsnrctl start
...
$ export ORACLE_SID=+ASM
$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 14:36:44 2012
Copyright (c) 1982, 2010, Oracle. All Rights Reserved.
Connected to an idle instance.
SQL> startup
ASM instance started
Total System Global Area  130023424 bytes
Fixed Size                   2050360 bytes
Variable Size              102807240 bytes
ASM Cache                   25165824 bytes
ASM diskgroups mounted
SQL> quit
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.5.0 - 64bit
Production
With the Partitioning, OLAP, Data Mining and Real Application Testing options
$ export ORACLE_SID=ORA10
$ sqlplus / as sysdba
SQL*Plus: Release 10.2.0.5.0 - Production on Sun Aug 26 14:37:13 2012
Copyright (c) 1982, 2010, Oracle. All Rights Reserved.
Connected to an idle instance.
SQL> startup
ORACLE instance started.
Total System Global Area 1610612736 bytes
Fixed Size                   2052448 bytes
Variable Size              385879712 bytes
Database Buffers          1207959552 bytes
Redo Buffers                14721024 bytes
Database mounted.
Database opened.
Listing 11
Conclusion
Oracle Solaris Zones are fully supported and integrated with Oracle SuperCluster T5-8. In addition, the P2V
migration tools provided with Oracle Solaris greatly simplify the consolidation of physical servers to virtual
machines on Oracle SuperCluster T5-8.
As an engineered system, SuperCluster offers a lot of flexibility in terms of configuration, and Oracle Solaris Zones
are the perfect receptacle for P2V migration. They provide strong segregation, including network segregation,
and can be used to optimize the licensing cost of the platform. Meanwhile, with Oracle Solaris Zones, the
virtualization overhead is minimized.
Native Oracle Solaris Zones are patched and updated by the quarterly full stack downloadable patch for
SuperCluster.
Oracle's Sun ZFS Storage 7320 appliance, which is included in SuperCluster, provides a large amount of
redundant storage for Oracle Solaris Zones. Once installed on this shared storage, zones can be swiftly migrated
between the different domains and computing nodes of SuperCluster. Oracle Solaris Zones can be connected to
the 10-GbE client access network and to the InfiniBand I/O fabric. Network redundancy is available through IP
Network Multipathing, and VLANs are available for a seamless integration in the existing data center network.
The highly scalable computing resources of Oracle SuperCluster T5-8 ensure that many Oracle Solaris Zones
can run on this platform, while the powerful database can sustain the load of many applications running
concurrently.
All these integrated features make Oracle SuperCluster T5-8 the platform of choice for server consolidation.
See Also
System Administration Guide: Oracle Solaris 9 Containers
System Administration Guide: Oracle Solaris 8 Containers
Oracle Solaris Support Life Cycle Policy on the Oracle Solaris Releases web page
"Solaris 10 Zones and Networking: Common Considerations"
Oracle Solaris Administration: Oracle Solaris Zones, Oracle Solaris 10 Zones, and Resource
Management
About the Author
Thierry Manfé has been working at Oracle and Sun Microsystems for more than 15 years. He currently holds the
position of principal engineer in the Oracle SuperCluster Engineering group.