
Solutions Guide

Database Consolidation with Oracle 12c Multitenancy
James Morle
Scale Abilities, Ltd.

Table of contents
Introduction
Testing scenario and architecture
Introduction to testing
Proof Point: move a PDB between CDBs
Proof Point: relocate a datafile to an alternate diskgroup
Proof Point: create a clone from a PDB
Conclusion
For more information

Solution Implementers Series

Introduction
This whitepaper is a study of the new Multitenant features of Oracle Database 12c and how a
shared disk architecture using Fibre Channel infrastructure can be used to facilitate dramatically
simplified database consolidation compared to previous releases. In this paper we will explore
the new multitenant functionality by building a test database cluster, within which we relocate
databases between nodes, instances and storage tiers. The new functionality in Oracle 12c
makes these operations extremely straightforward and provides a compelling case for
consolidation using a shared disk infrastructure. Although shared disk storage may be achieved
through other means, this paper focuses on the proven combination of Fibre Channel and
Oracle's Automatic Storage Management (ASM) functionality. All testing performed for this whitepaper
was carried out using the latest Gen 5 (16GFC) Fibre Channel components from Emulex, which
ensures that maximum bandwidth is available in the Storage Area Network for any operations
that require data copying, and provides low latency access for all other database I/O operations.
Access to the high bandwidth throughput provided by Gen 5 parts will become increasingly
important for data mobility in consolidated cloud environments such as the one demonstrated
here.

Disclosure:
Scale Abilities was commissioned commercially by
Emulex Corporation to conduct this study. To ensure
impartial commentary under this arrangement, Scale
Abilities was awarded sole editorial control and Emulex
was granted sole veto rights over publication.

Testing scenario and architecture


The Multitenant Option of the Oracle 12c Database is implemented through the concept of
Pluggable Databases (PDBs). A PDB is essentially the same as a traditional pre-12c database,
but with all the generic data dictionary information removed. PDBs cannot run by themselves;
they must be plugged into a Container Database (CDB), which provides the data dictionary
information that is absent from the PDB. A CDB can host multiple PDBs, allowing a single set of
data dictionary information, background processes and memory allocations to be shared across
all of its PDBs.
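As a simple illustration of the relationship, a brand-new PDB can be created inside a CDB directly
from the seed and opened with two statements. The PDB name and admin credentials below are
hypothetical and are not part of the test configuration:

SQL> -- connected to the root (CDB$ROOT) of the container database
SQL> create pluggable database pdb3 admin user pdbadmin identified by oracle_4U;
Pluggable database created.
SQL> alter pluggable database pdb3 open;
Pluggable database altered.
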
The scenario chosen for this testing was one expected to become common for Database
Administrators (DBAs) involved in consolidating many databases into a single shared compute
environment, frequently referred to as a Private Cloud. This is an attractive proposition for
corporations with tens, hundreds or even thousands of databases, all important for the operation
of the business. The consolidation of these databases into a single managed entity dramatically
reduces management costs, improves quality of service and -- apart from all that -- corrals all
those pesky databases into a contained area!


The logical architecture chosen was that of a shared cluster of compute nodes, with a variety of
different storage devices and tiers available to those nodes. The test scenario was implemented
as a five-node cluster, managed using Oracle Grid Infrastructure 12c, with a Gen 5 (16GFC)
Fibre Channel network connecting all the storage to all hosts. The Oracle Database was then
tasked to provide two Container Databases (CDBs), which were constrained to discrete subsets
of the cluster using the server pool functionality within the Oracle Clusterware. This architecture
is representative of an end-user cluster, where different CDBs are used to host databases of
different service levels, potentially with different memory footprints on their respective server
nodes. Real end-user clusters could have considerably higher node counts depending on the
amount of consolidation required and the desired maximum cluster size.
Two CDBs were defined: CDB1 was nominated as the container for low-criticality workloads,
and CDB2 as the container for mission-critical workloads.
Figure 1. Architecture Implementation

From a storage perspective, three different tiers of storage were defined to reflect the common
reality of multiple classes of storage being present in the data center. These were defined as a
"Small Storage Array", representing a mid-tier traditional storage array; a "Big Storage Array",
representing a top-tier traditional storage array; and a "Fast Storage Array", representing the
growing reality of solid-state storage. All the storage is shared across all cluster nodes, making
the relocation of databases to different servers a very straightforward process, especially with
the Multitenant Option in Oracle 12c. Given that all the data is available to all of the cluster
nodes, a migration of PDBs between CDBs requires only a logical unplug from the source CDB
and a plug into the recipient CDB. No data copying is required, only shared access.
Figure 2. Migration of PDBs between CDBs

The inherent shared architecture of the Multitenant Option makes the consolidation of
databases much simpler than in previous releases. The following diagram shows how a similar
implementation of two databases would have worked in the 11g release:

Figure 3. Consolidation with Oracle 11g

In this 11g example, each database would exist as an entity in its own right, with a full copy of
the data dictionary and its own set of background processes and memory allocations (i.e., the
instance) on each node in the cluster. Each of those instances would compete for resources,
with no direct control over the resource consumption of the other instances. Certain measures
could be taken, such as Instance Caging and preferred nodes, but these were really just
workarounds prior to the 12c solution. In the 12c Multitenant world, all of the databases can be
effectively resource-managed, and there is no overhead for hosting multiple databases beyond
the resources actually required to run them.
The testing focus was primarily on functionality and architecture, rather than raw performance.
Accordingly, both PDB1 and PDB2 were kept relatively simple, with just a few large tables
created in each. Each database was approximately 1TB in size to demonstrate a relatively large
data transport requirement. Any performance observations noted in this whitepaper are included
for completeness, with only limited investigation performed regarding areas of potential
improvement.
Hardware configuration
This paper was written following the execution of a series of tests in Emulex's Technical
Marketing lab environment in Costa Mesa. The diagram below shows a simplified physical
hardware topology, excluding Ethernet networks.
Figure 4. Physical hardware topology

Each server node was an HP DL360 Generation 8 server with 96GB of memory and two CPU
sockets populated with 8-core Intel E5-2690 processors running at 2.9GHz. Each server was
equipped with dual-port Emulex LPe16002B Gen 5 Fibre Channel HBAs, with each port
connected to independent Fibre Channel zones via Brocade 6510 Gen 5 Fibre Channel
switches. Emulex OneCommand Manager was used to verify the connectivity of the Gen 5
Fibre Channel adapters to the FC targets. In addition, updating the firmware was a simple
process during the initial setup: using Emulex OneCommand Manager, the LightPulse
LPe16002B adapters were updated online to the latest firmware version.

The storage tier was provided by two physical hardware devices. The traditional storage was
provided by an HP 3PAR StoreServ V10000 storage array, configured to present LUNs from
two separate storage pools. The larger of these pools provided the role of the "Big Storage
Array" in our scenario, and the smaller one provided the role of the "Small Storage Array". The
role of the "Fast Storage Array" was provided by a SanBlaze v7.0 target emulation DRAM-based
device.
Each server was configured with the following software:
Operating System - Oracle Linux 6.4, using the 2.6.39-400 UEK kernel
Emulex HBA device driver - Release 8.3.7.18
Oracle 12c - Release 1, version 12.1.0.1 (RDBMS and Grid Infrastructure)
All storage was presented directly through to ASM via dm-multipath and ASMLib. Three ASM
diskgroups were created -- one for each of the three classes of storage array in the
configuration. These were named +BIGARRAY, +SMALLARRAY and +FASTARRAY to
correspond with the underlying hardware.
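For reference, a diskgroup of this kind is created from an ASM instance with a statement along the
following lines. The external redundancy setting and the ASMLib discovery pattern shown here are
illustrative (based on the disk names that appear later in the alert log extract) rather than a
verbatim record of the setup commands used:

SQL> create diskgroup SMALLARRAY external redundancy
  2  disk 'ORCL:ELX_3PAR_100G_*';
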
The five-node cluster comprised the following hostnames:

oracle1.emulex.com
oracle2.emulex.com
oracle3.emulex.com
oracle4.emulex.com
oracle5.emulex.com
The cluster was split into two Clusterware server pools. The low-criticality container database
(CDB1) is hosted by a server pool named lesscritical, which includes the oracle4 and oracle5
hosts. The mission-critical container (CDB2) is hosted by a server pool named missioncritical,
which includes the oracle1, oracle2 and oracle3 hosts.
[oracle@Oracle1 ~]$ srvctl config srvpool
Server pool name: Free
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names:
Server pool name: Generic
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names:
Server pool name: lesscritical
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names: oracle4,oracle5
Server pool name: missioncritical
Importance: 0, Min: 0, Max: -1
Category:
Candidate server names: oracle1,oracle2,oracle3
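For completeness, server pools of this shape can be created with srvctl before the policy-managed
CDBs are placed in them. The commands below are a sketch of how the two pools could be defined,
not a verbatim record of the setup steps used here:

[oracle@Oracle1 ~]$ srvctl add srvpool -serverpool lesscritical -servers "oracle4,oracle5"
[oracle@Oracle1 ~]$ srvctl add srvpool -serverpool missioncritical -servers "oracle1,oracle2,oracle3"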

We can also view the CRS resources to ensure we have instances of our CDBs running on the
nodes we expect:
[oracle@Oracle1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Name           Target  State        Server                   State details
--------------------------------------------------------------------------------
ora.cdb1.db
      1        ONLINE  OFFLINE                               STABLE
      2        ONLINE  ONLINE       oracle4                  Open,STABLE
      3        ONLINE  OFFLINE                               STABLE
      4        ONLINE  ONLINE       oracle5                  Open,STABLE
      5        ONLINE  OFFLINE                               STABLE
ora.cdb2.db
      1        ONLINE  ONLINE       oracle1                  Open,STABLE
      2        ONLINE  OFFLINE                               STABLE
      3        ONLINE  ONLINE       oracle2                  Open,STABLE
      4        ONLINE  OFFLINE                               STABLE
      5        ONLINE  ONLINE       oracle3                  Open,STABLE

ASM is configured using the new FlexASM feature of Oracle 12c. This is a very useful new
feature that removes the former requirement to run one ASM instance on every cluster node.
From Oracle 12c, it is possible to host full ASM instances on a subset of the cluster nodes, with
the remaining nodes obtaining ASM metadata via a local proxy instance that connects to the full
ASM instances. For this testing, the full ASM instances were local on nodes 1, 2 and 3, and
were accessed remotely via the proxy instance on nodes 4 and 5.
[oracle@Oracle1 ~]$ crsctl stat res -t
--------------------------------------------------------------------------------
Cluster Resources
--------------------------------------------------------------------------------
ora.asm
      1        ONLINE  ONLINE       oracle1                  STABLE
      2        ONLINE  ONLINE       oracle3                  STABLE
      3        ONLINE  ONLINE       oracle2                  STABLE

Although FlexASM was used in this case, it is also possible to perform all of the testing in this
whitepaper using the traditional ASM model of having one instance of ASM on each cluster
node.
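A quick way to confirm the ASM cluster mode from any node is the asmcmd utility. The output shown
below is indicative of a Flex ASM configuration rather than a capture taken from this cluster:

[oracle@Oracle1 ~]$ asmcmd showclustermode
ASM cluster : Flex mode enabled
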

Introduction to testing
In pre-12c releases, it was necessary to host a full instance of each database on every node
where that database was required, resulting in wasted resources and only rudimentary resource
management capabilities. For example, to consolidate databases named DB1 and DB2 onto a
shared cluster using pre-12c technology, it would be necessary to have instances for both DB1
and DB2 running on at least two nodes of the cluster. Each of these instances would have to be
configured to accommodate the full workload of its application on each node, including any
passive nodes if running in active/passive mode. This resulted in gross memory and kernel
resource waste on each server. The situation worsened when instances failed over to other
nodes, as workloads would very often end up hosted on an inappropriate node. Resource
management was even more restricted, as each instance had no visibility or control of the
resource requirements of other instances on the shared server.
The concept of Container Databases (CDBs) and Pluggable Databases (PDBs) is new in Oracle
12c, and is fundamental to providing the missing link in database consolidation for Oracle
databases. A CDB is the container for one or more PDBs, and a PDB is the actual database
from the viewpoint of the application. PDBs can be plugged into and unplugged from CDBs
using simple commands, and they can be cloned and moved to other CDBs. All the PDBs that
are plugged into a CDB share a single Oracle instance (per node, in the case of RAC), and can
be resource-managed by a single set of controls within the CDB.
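To illustrate that single set of controls, a CDB-level resource plan can allocate shares and
utilization limits to individual PDBs. The plan name and values below are hypothetical and are
shown only as a sketch of the mechanism:

begin
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN(plan => 'consol_plan');
  -- PDB1 receives three shares of resources, PDB2 one share with a 70% cap
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan => 'consol_plan', pluggable_database => 'PDB1',
    shares => 3, utilization_limit => 100);
  DBMS_RESOURCE_MANAGER.CREATE_CDB_PLAN_DIRECTIVE(
    plan => 'consol_plan', pluggable_database => 'PDB2',
    shares => 1, utilization_limit => 70);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA();
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA();
end;
/
alter system set resource_manager_plan = 'consol_plan';
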
The philosophy behind this testing was to perform key operations on PDBs that would be
frequently required in a true consolidated environment, and to discover the utility of running
large shared clusters of multiple CDBs.
In addition to the Multitenant Option, 12c also now supports fully online datafile moves. This
functionality goes hand in hand with the management of PDBs, as it offers the ability to relocate
databases onto different tiers of storage in addition to moving them between CDBs.

Proof Point: move a PDB between CDBs


This test is a straightforward move of a PDB from one CDB to another within the same cluster.
Since full access to all storage is available on every node of the cluster, this should be a simple
operation in this configuration.
The starting point of the test is a single PDB, named PDB1, in the CDB named CDB1. PDB1 is
a little over 1TB in size, and would be a cumbersome database to move using previous releases
of Oracle. The objective is to move PDB1 to a different CDB named CDB2. Currently CDB2 has
another PDB hosted within it named PDB2.
First, Scale Abilities checks that the session is connected to the root of the CDB and checks
the status of PDB1:
SQL> SELECT SYS_CONTEXT ('USERENV', 'CON_NAME') FROM DUAL;

SYS_CONTEXT('USERENV','CON_NAME')
--------------------------------------------------------------------------------
CDB$ROOT

SQL> select con_id,name,open_mode from v$PDBs;

    CON_ID NAME                           OPEN_MODE
---------- ------------------------------ ----------
         2 PDB$SEED                       READ ONLY
         3 PDB1                           READ WRITE

In order to unplug the PDB, it must first be closed on all instances:


SQL> alter pluggable database pdb1 close immediate instances=all;
Pluggable database altered.
SQL> select con_id,inst_id,name,open_mode from gv$PDBs
  2* order by 1,2
SQL> /

    CON_ID    INST_ID NAME                           OPEN_MODE
---------- ---------- ------------------------------ ----------
         2          2 PDB$SEED                       READ ONLY
         2          4 PDB$SEED                       READ ONLY
         3          2 PDB1                           MOUNTED
         3          4 PDB1                           MOUNTED

Next PDB1 is unplugged. This changes the state of the PDB in the current CDB (CDB1) to be
unplugged and prevents all access to PDB1. It produces an XML file in the specified location
that contains all the metadata required to plug the database back into a CDB.
SQL> alter pluggable database pdb1 unplug into
'/home/oracle/pdb1_unplug.xml';
Pluggable database altered.

One aspect of this operation that is not immediately obvious is that the XML file is created by
the foreground process on the database server to which the user is connected. In a RAC
environment (and in any client/server environment), this is not necessarily the same server that
is running SQL*Plus; therefore, care must be taken to understand where the database
connection is made.
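If there is any doubt about where the connection has landed, the host and instance servicing the
session can be checked before the unplug. A minimal query to that end (not captured during the
test run) is:

SQL> select sys_context('USERENV','SERVER_HOST')   as host,
  2         sys_context('USERENV','INSTANCE_NAME') as instance
  3  from dual;
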
The resulting XML contains a variety of interesting information, such as a list of files, location,
size, container ID, database ID, options installed, version and so on. This XML file is the master
description of the PDB and is used by the next CDB into which the PDB is plugged to determine
the steps required.
Looking back at CDB1, it is evident that PDB1 is now unplugged and cannot be opened up
again:
  1* select pdb_id,pdb_name,guid,status from CDB_PDBS
SQL> /

    PDB_ID PDB_NAME   GUID                             STATUS
---------- ---------- -------------------------------- -------------
         3 PDB1       E5A299062A7FF04AE043975BC00A8790 UNPLUGGED
         2 PDB$SEED   E5A288645805E887E043975BC00AF91C NORMAL
SQL> alter pluggable database pdb1 open instances=all;
alter pluggable database pdb1 open instances=all
*
ERROR at line 1:
ORA-65107: Error encountered when processing the current task on instance:2
ORA-65086: cannot open/close the pluggable database
SQL> !oerr ora 65086
65086, 00000, "cannot open/close the pluggable database"
// *Cause: The pluggable database has been unplugged.
// *Action: The pluggable database can only be dropped.
//
SQL> drop pluggable database pdb1;
Pluggable database dropped.

Now PDB1 can be plugged into CDB2. Start by connecting to the CDB2 root using the TNS
alias created by dbca:
[oracle@Oracle1 ~]$ sqlplus sys@cdb2 as sysdba
SQL*Plus: Release 12.1.0.1.0 Production on Fri Sep 6 08:13:08 2013
Copyright (c) 1982, 2013, Oracle.  All rights reserved.

Enter password:

Connected to:
Oracle Database 12c Enterprise Edition Release 12.1.0.1.0 - 64bit Production
With the Partitioning, Real Application Clusters, Automatic Storage Management, OLAP,
Advanced Analytics and Real Application Testing options
SQL> select pdb_id,pdb_name,guid,status from CDB_PDBS;

    PDB_ID PDB_NAME   GUID                             STATUS
---------- ---------- -------------------------------- -------------
         3 PDB2       E5A3120148FF19CDE043975BC00AAF39 NORMAL
         2 PDB$SEED   E5A30153B3FB11AFE043975BC00AC732 NORMAL
SQL> create pluggable database pdb1 using '/home/oracle/pdb1_unplug.xml'
nocopy;
Pluggable database created.

The nocopy option was selected to instruct Oracle to leave the datafiles in their original
locations, which were all in the +SMALLARRAY diskgroup. The default action is to make copies
of the datafiles and leave the originals intact. Another option is the move option, which allows
the files to be relocated as part of the plug-in operation - very useful for migrating to another
diskgroup and/or storage array.
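For example, the same plug-in step could have relocated the datafiles onto another storage tier at
the same time. The statement below is a sketch of that variant (it was not executed during this
test), combining the move clause with file_name_convert:

SQL> create pluggable database pdb1 using '/home/oracle/pdb1_unplug.xml'
  2  move file_name_convert = ('+SMALLARRAY', '+FASTARRAY');
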
During the plugging in of PDB1, the following messages were emitted in the alert file, showing
the mounting of the +SMALLARRAY diskgroup and the creation of a CRS dependency for
future startup operations:
NOTE: ASMB mounting group 2 (SMALLARRAY)
NOTE: ASM background process initiating disk discovery for grp 2
NOTE: Assigning number (2,0) to disk (ORCL:ELX_3PAR_100G_0)
NOTE: Assigning number (2,1) to disk (ORCL:ELX_3PAR_100G_1)
NOTE: Assigning number (2,2) to disk (ORCL:ELX_3PAR_100G_2)
NOTE: Assigning number (2,3) to disk (ORCL:ELX_3PAR_100G_3)
NOTE: Assigning number (2,4) to disk (ORCL:ELX_3PAR_100G_4)
NOTE: Assigning number (2,5) to disk (ORCL:ELX_3PAR_100G_5)
NOTE: Assigning number (2,6) to disk (ORCL:ELX_3PAR_100G_6)
NOTE: Assigning number (2,7) to disk (ORCL:ELX_3PAR_100G_7)
NOTE: Assigning number (2,8) to disk (ORCL:ELX_3PAR_100G_8)
NOTE: Assigning number (2,9) to disk (ORCL:ELX_3PAR_100G_9)
SUCCESS: mounted group 2 (SMALLARRAY)
NOTE: grp 2 disk 0: ELX_3PAR_100G_0 path:ORCL:ELX_3PAR_100G_0
NOTE: grp 2 disk 1: ELX_3PAR_100G_1 path:ORCL:ELX_3PAR_100G_1
NOTE: grp 2 disk 2: ELX_3PAR_100G_2 path:ORCL:ELX_3PAR_100G_2
NOTE: grp 2 disk 3: ELX_3PAR_100G_3 path:ORCL:ELX_3PAR_100G_3
NOTE: grp 2 disk 4: ELX_3PAR_100G_4 path:ORCL:ELX_3PAR_100G_4
NOTE: grp 2 disk 5: ELX_3PAR_100G_5 path:ORCL:ELX_3PAR_100G_5
NOTE: grp 2 disk 6: ELX_3PAR_100G_6 path:ORCL:ELX_3PAR_100G_6
NOTE: grp 2 disk 7: ELX_3PAR_100G_7 path:ORCL:ELX_3PAR_100G_7
NOTE: grp 2 disk 8: ELX_3PAR_100G_8 path:ORCL:ELX_3PAR_100G_8
NOTE: grp 2 disk 9: ELX_3PAR_100G_9 path:ORCL:ELX_3PAR_100G_9
NOTE: dependency between database CDB2 and diskgroup resource
ora.SMALLARRAY.dg is established

Next, let's check the status and open PDB1 in its new container database:
SQL> select con_id,inst_id,name,open_mode from gv$PDBs order by 1,2
  2  /

    CON_ID    INST_ID NAME                           OPEN_MODE
---------- ---------- ------------------------------ ----------
         2          1 PDB$SEED                       READ ONLY
         2          3 PDB$SEED                       READ ONLY
         2          5 PDB$SEED                       READ ONLY
         3          1 PDB2                           READ WRITE
         3          3 PDB2                           READ WRITE
         3          5 PDB2                           READ WRITE
         4          1 PDB1                           MOUNTED
         4          3 PDB1                           MOUNTED
         4          5 PDB1                           MOUNTED

9 rows selected.

SQL> alter pluggable database pdb1 open instances=all;
Pluggable database altered.
SQL>

Proof Point: relocate a datafile to an alternate diskgroup


Online relocation of datafiles is an important feature of Oracle 12c. Previous releases allowed a
certain degree of online relocation but still mandated a brief period of unavailability when the
new file location was brought online. Oracle 12c promises to move a datafile with zero
downtime.
In this scenario, we start with a tablespace named APPDATA, which has a single datafile that
has been created in the wrong diskgroup:
  1* select name,bytes/1048576 sz from v$datafile
SQL> /

NAME                                                                                        SZ
-------------------------------------------------------------------------------- ------------
+BIGARRAY/CDB2/DATAFILE/undotbs1.260.825312223                                            110
+SMALLARRAY/CDB1/E5A299062A7FF04AE043975BC00A8790/DATAFILE/system.281.825310527           260
+SMALLARRAY/CDB1/E5A299062A7FF04AE043975BC00A8790/DATAFILE/sysaux.282.825310527           630
+SMALLARRAY/CDB1/E5A299062A7FF04AE043975BC00A8790/DATAFILE/users.284.825310547              5
+BIGARRAY/CDB2/E5A299062A7FF04AE043975BC00A8790/DATAFILE/appdata.304.825412353        1090560

Note
The attempt to move a datafile for a PDB while connected to the CDB$ROOT was unsuccessful, and the
error message was not entirely helpful:

05:20:37 SQL> alter database move datafile


'+BIGARRAY/CDB2/E5A299062A7FF04AE043975BC00A8790/DATAFILE/appdata.304.825412
353' to '+SMALLARRAY';
alter database move datafile
'+BIGARRAY/CDB2/E5A299062A7FF04AE043975BC00A8790/DATAFILE/appdata.304.825412
353' to '+SMALLARRAY'
*
ERROR at line 1:
ORA-01516: nonexistent log file, data file, or temporary file "45"
Connecting to the PDB fixed this problem and allowed the file to be moved successfully:

05:20:48 SQL> alter session set container=PDB1;


Session altered.
Elapsed: 00:00:00.00
05:21:21 SQL> alter database move datafile
'+BIGARRAY/CDB2/E5A299062A7FF04AE043975BC00A8790/DATAFILE/appdata.304.825412

353' to '+SMALLARRAY';
Database altered.
Elapsed: 04:51:57.49

Although the move was successful, it was very slow (more on that shortly). During the move, Scale
Abilities was able to read and update a table within that tablespace:
SQL> select sum(object_id) from bigtable where rownum<10;

SUM(OBJECT_ID)
--------------
           324

SQL> update bigtable set object_id=object_id+10 where rownum<10;

9 rows updated.

SQL> commit;

Commit complete.

SQL> select sum(object_id) from bigtable where rownum<10;

SUM(OBJECT_ID)
--------------
           414

Now, consider the long runtime of the move operation. As mentioned earlier, this paper isn't
focused on performance; however, that was a very long runtime and is worthy of some
explanation. The Scale Abilities lab will perform a more technical investigation in the future and
write a blog post about it. In the meantime, here is what a trace of the session executing the
move looked like:
WAIT #140641124141656: nam='control file sequential read' ela= 971 file#=0
block#=59 blocks=1 obj#=-1 tim=393150810594
WAIT #140641124141656: nam='db file sequential read' ela= 1494 file#=45
block#=119070465 blocks=128 obj#=-1 tim=393150812119
WAIT #140641124141656: nam='db file single write' ela= 1759 file#=45
block#=119070465 blocks=128 obj#=-1 tim=393150813980
WAIT #140641124141656: nam='DFS lock handle' ela= 193 type|mode=1128857605
id1=110 id2=1 obj#=-1 tim=393150814264
WAIT #140641124141656: nam='DFS lock handle' ela= 218 type|mode=1128857605
id1=110 id2=3 obj#=-1 tim=393150814537
WAIT #140641124141656: nam='DFS lock handle' ela= 11668 type|mode=1128857605
id1=110 id2=2 obj#=-1 tim=393150826256
WAIT #140641124141656: nam='control file sequential read' ela= 249 file#=0
block#=1 blocks=1 obj#=-1 tim=393150826570
WAIT #140641124141656: nam='control file sequential read' ela= 982 file#=0
block#=48 blocks=1 obj#=-1 tim=393150827577
WAIT #140641124141656: nam='control file sequential read' ela= 952 file#=0
block#=50 blocks=1 obj#=-1 tim=393150828576
WAIT #140641124141656: nam='control file sequential read' ela= 957 file#=0
block#=59 blocks=1 obj#=-1 tim=393150829578
WAIT #140641124141656: nam='db file sequential read' ela= 1712 file#=45
block#=119070593 blocks=128 obj#=-1 tim=393150831343
WAIT #140641124141656: nam='db file single write' ela= 1843 file#=45
block#=119070593 blocks=128 obj#=-1 tim=393150833283
WAIT #140641124141656: nam='DFS lock handle' ela= 205 type|mode=1128857605
id1=110 id2=1 obj#=-1 tim=393150833571
WAIT #140641124141656: nam='DFS lock handle' ela= 164 type|mode=1128857605
id1=110 id2=3 obj#=-1 tim=393150833794

An interesting observation for aficionados of trace files is that the file numbers of the original
and copy datafiles appear to be registered as the same (45), which is likely an anomaly of the
wait interface abstraction. This can be observed in the respective waits for db file sequential
read and db file single write, where it is reported as the file# parameter. It seems the move
operation is taking place 128 blocks at a time (128*8KB=1MB in this case); however, there are a
series of coordinating measures taking place in the RAC tier which slow down the progress. In
particular, note the 11.6ms wait for DFS lock handle in the fragment above, which (when
decoded) shows a wait for the CI enqueue. The CI enqueue is a cross-instance invocation, most
likely associated with coordinating the DBWR processes on other instances to ensure dirty
blocks are written out to both the original datafile and the new copy.
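That decode uses the standard formula for unpacking an enqueue name and mode from the type|mode
value reported in the trace. Applied to 1128857605 (hexadecimal 0x43490005), the two high-order
bytes are the ASCII characters 'C' and 'I', and the low-order bytes hold the requested mode (5):

SQL> select chr(bitand(1128857605, -16777216)/16777215) ||
  2         chr(bitand(1128857605, 16711680)/65535) as lock_type,
  3         bitand(1128857605, 65535) as lock_mode
  4  from dual;
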
The total elapsed time for one iteration (1MB moved) in this trace file fragment is 18.984ms,
which implies a throughput of only 52MB/s; very little of that time is spent actually transferring
data, because of the time spent waiting for the DFS lock handle. Scale Abilities also tried a
datafile move operation with the PDB explicitly closed on all but one instance, but this did not
improve the throughput and requires further investigation.
Shutting down all the instances except the one where the move takes place dramatically
reduces the overhead, but does not remove it altogether:
WAIT #140653075283640: nam='DFS lock handle' ela= 4542 type|mode=1128857605
id1=110 id2=2 obj#=-1 tim=2388675442209
WAIT #140653075283640: nam='control file sequential read' ela= 806 file#=0
block#=1 blocks=1 obj#=-1 tim=2388675443083
WAIT #140653075283640: nam='control file sequential read' ela= 921 file#=0
block#=48 blocks=1 obj#=-1 tim=2388675444060
WAIT #140653075283640: nam='control file sequential read' ela= 965 file#=0
block#=50 blocks=1 obj#=-1 tim=2388675445060
WAIT #140653075283640: nam='control file sequential read' ela= 967 file#=0
block#=60 blocks=1 obj#=-1 tim=2388675446062
WAIT #140653075283640: nam='db file sequential read' ela= 1748 file#=45
block#=20943489 blocks=128 obj#=-1 tim=2388675447849
WAIT #140653075283640: nam='db file single write' ela= 2891 file#=45
block#=20943489 blocks=128 obj#=-1 tim=2388675450857

This shows that the DFS lock handle calls no longer require servicing by other instances in the
cluster, but that there is still an overhead in making them. After making this change, the
throughput went up to around 90MB/s, which is still not very high.
It seems the low throughput could be dramatically improved by Oracle with some algorithmic
changes - for example, performing the cross-instance work every 50MB rather than every 1MB.
This is an important feature for managing a consolidated platform, and it should be possible to
use a much higher percentage of the available 16GFC connectivity to speed up the relocation of
datafiles between storage tiers.
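As a rough illustration using the waits in the trace fragments above: each 1MB iteration takes
roughly 19ms, of which around 12ms is DFS lock handle coordination and around 7ms is the actual
read, write and control file activity. If the coordination cost were incurred once per 50MB instead
of once per 1MB, its per-MB contribution would fall to roughly 0.24ms, and this single-threaded
copy loop would then be bounded by the ~7ms of I/O per MB - on the order of 140MB/s - even before
larger transfer sizes or parallelism are considered.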
The progress of the move can, at least, be viewed in v$session_longops:
SQL> SELECT sid, serial#, opname, ROUND(sofar/totalwork*100,2) PCT
  2  FROM gv$session_longops
  3  WHERE opname like 'Online%'
  4* AND sofar < totalwork
SQL> /

       SID    SERIAL# OPNAME                                           PCT
---------- ---------- ----------------------------------------- ----------
       392       6745 Online data file move                          77.42

Proof Point: create a clone from a PDB


Oracle 12c also allows the creation of clones from existing PDBs. This is a very useful feature
for managing development and test databases, and it significantly reduces the labor involved.
One (fairly major, in the opinion of Scale Abilities) downside to the clone functionality is that
the source PDB must be opened READ ONLY for the clone operation to run.
SQL> alter pluggable database pdb1 close instances=all;
Pluggable database altered.
SQL> alter pluggable database pdb1 open read only;
Pluggable database altered.
SQL> create pluggable database pdb1_clone from pdb1
2 file_name_convert=('+SMALLARRAY','+BIGARRAY');
Pluggable database created.
Elapsed: 00:43:16.13

This is much more indicative of the performance we should be seeing, considering the
specification of this system: a copy time of 43:16 for just over 1TB represents a respectable
average write bandwidth of 406MB/s. A much higher figure could likely be achieved with a little
careful tuning of the I/O subsystem, but these results are a good start for an exercise that was
not focused on performance. The observed 406MB/s would saturate a 4GFC Fibre Channel
HBA even without further tuning; such operations highlight the need to provide significantly
greater I/O bandwidth in a multitenant environment than in a traditional Oracle cluster.
One interesting observation is that the clone operation is carried out by PX slave processes
rather than the user's foreground process. This would seem to indicate that considerably more
parallel capability should be possible when cloning PDBs. In this test case there was only one
file large enough to incur a high copy time; it is possible that multiple PX slaves could operate
concurrently on PDBs that contain a number of large files.
For example, the following trace file fragment shows the PX slave performing the actual copy:
[root@Oracle2 trace]# more CDB2_3_p000_54766.trc
WAIT #140320369183712: nam='ASM file metadata operation' ela= 36 msgop=33
locn=0 p3=0 obj#=-1 tim=146952678700
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 4 count=1
intr=256 timeout=2147483647 obj#=-1 tim=146952678788
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 4 count=1
intr=256 timeout=2147483647 obj#=-1 tim=146952678864
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 1249 count=1
intr=256 timeout=2147483647 obj#=-1 tim=146952680181
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 423 count=1
intr=256 timeout=2147483647 obj#=-1 tim=146952680671
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 2322 count=1
intr=256 timeout=2147483647 obj#=-1 tim=146952683051
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 1259 count=1
intr=256 timeout=2147483647 obj#=-1 tim=146952684325
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 4 count=1
intr=256 timeout=2147483647 obj#=-1 tim=146952684418
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 2 count=1
intr=256 timeout=2147483647 obj#=-1 tim=146952684430
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 2272 count=1
intr=256 timeout=2147483647 obj#=-1 tim=146952686758
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 248 count=1
intr=256 timeout=2147483647 obj#=-1 tim=146952687020
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 333 count=1
intr=256 timeout=2147483647 obj#=-1 tim=146952687421
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 1722 count=1
intr=256 timeout=2147483647 obj#=-1 tim=146952689153
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 122 count=1
intr=256 timeout=2147483647 obj#=-1 tim=146952689395
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 1651 count=1
intr=256 timeout=2147483647 obj#=-1 tim=146952691056
WAIT #140320369183712: nam='Pluggable Database file copy' ela= 1237 count=1
intr=256 timeout=2147483647 obj#=-1 tim=146952692361

It is likely that the block size used for the copy is equal to one ASM allocation unit (default 1MB),
considering the trace file indicates that this session is coordinating the copy directly with the
ASM instance.
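The allocation unit size of each diskgroup is easy to confirm; a minimal check (output not shown
here) would be:

SQL> select name, allocation_unit_size from v$asm_diskgroup;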

Conclusion
The new Multitenant functionality seems a natural fit for consolidation using server-pooled
clusters and shared disk solutions using Gen 5 Fibre Channel. The easy access to data using a
shared storage infrastructure complements the ease of use afforded by the PDB concept and
effectively eliminates data copies for many use cases. The ability to pool multiple pluggable
databases into a small number of container databases dramatically reduces the system
management burden and improves the ability to effectively resource-manage databases in a
shared environment.
Multitenancy can also be implemented without shared disk and with locally attached SSD
storage. Doing so, however, removes many of the advantages of consolidation, and perhaps indicates that the
systems in question were not ideal candidates for consolidation. Management of a consolidated
environment requires, by its very nature, the ability to move workloads between servers and
storage tiers as the demands and growth of the business dictate.
Although some issues were observed with the throughput of the online datafile move
functionality, it should be stressed that there is nothing architecturally wrong in the method
Oracle has elected to use. Scale Abilities fully expects the throughput of such moves to be
significantly higher once the root cause is found, and this will become a frequently used
operation for DBAs, in both consolidated environments and more traditional ones.
One possible implication of the increased mobility of databases and their storage is that the
bottleneck will move away from the labor-intensive manual processes formerly required and will
put more pressure on the storage tier. In particular, it is evident that the bandwidth provision in
the SAN infrastructure will become increasingly important. This will be especially true for the
server HBAs, because data mobility is a server-side action -- potentially migrating large data
sets from many storage arrays at any point in time. The availability of Gen 5 Fibre Channel
HBAs will be instrumental in providing this uplift in data bandwidth.

For more information


www.ImplementersLab.com
www.Emulex.com
www.scaleabilities.co.uk

To help us improve our documents, please provide feedback at implementerslab@emulex.com.


Copyright 2013 Emulex Corporation. The information contained herein is subject to change without notice. The only warranties for
Emulex products and services are set forth in the express warranty statements accompanying such products and services. Emulex
shall not be liable for technical or editorial errors or omissions contained herein.
OneCommand is a registered trademark of Emulex Corporation. HP is a registered trademark in the U.S. and other countries.
Oracle is a registered trademark of Oracle Corporation.

World Headquarters 3333 Susan Street, Costa Mesa, California 92626 +1 714 662 5600
Bangalore, India +91 80 40156789 | Beijing, China +86 10 68499547
Dublin, Ireland +353 (0)1 652 1700 | Munich, Germany +49 (0) 89 97007 177
Paris, France +33 (0) 158 580 022 | Tokyo, Japan +81 3 5322 1348
Wokingham, United Kingdom +44 (0) 118 977 2929
