
In this Document

Goal
Solution
Background
Procedure
1. Install Oracle VM Server
2. Select the shared OVS repository filesystem type
3. Export/Provision the shared OVS repository to Oracle VM Servers
4. Identify/verify the device(s) associated with the LUN
5. Create the OCFS2 configuration
6. Format the shared OVS repository OCFS2 filesystem
7. Review the current status of the OCFS2 cluster service
8. Load the OCFS2 cluster stack
9. Online the OCFS2 cluster
10. Configure OCFS2 to autostart on server boot
11. Modify /etc/fstab to mount the shared OVS repository
11.1 If the Local OVS repositories do not contain resources ...
11.2 If the Local OVS repositories already contain resources ...
12. Mount/remount shared OVS repository on all Oracle VM Servers
References
Applies to:
Oracle VM - Version: 2.1
Linux x86-64
Goal
This article describes how to configure a standalone, non-shared OVS repository to be shared by multiple Oracle VM Servers in a Server Pool. The article is intended for Oracle VM and Linux system administrators.
Solution
Background
A default Oracle VM Server installation creates a local-only, non-shared OCFS2 OVS repository (/OVS) under which Oracle VM resources, such as Virtual Machines (VMs), ISO images, etc. can later reside. At install time, the /OVS mount is the default and sole OVS repository.
All resources residing under a standalone, local OVS repository are non-sharable, i.e. they cannot be accessed by or run from other Oracle VM Servers within the Server Pool. If Oracle VM features such as VM guest migration or High Availability (HA) are required, the OVS repository must be reconfigured to be concurrently accessible (mountable) by all Oracle VM Servers. In other words, the OVS repository must be moved from local to shared/network storage and its filesystem made shareable.
Procedure
The following procedure describes how to reconfigure one or more Oracle VM Servers with local, non-shared /OVS repositories to use a common, shared /OVS repository. Some form of network or shared storage, such as NFS or SAN/NAS, is a requirement.
1. Install Oracle VM Server
Install Oracle VM Server on all servers intended to share the common OVS repository. At the time of writing, the latest version available is Oracle VM 2.1.2. All Oracle VM Servers should be installed using the same Oracle VM version. Upgrade any pre-existing, earlier-versioned Oracle VM Servers to 2.1.2 before proceeding. Ideally, all Oracle VM Servers should be running the same kernel/patch/package versions.
A default Oracle VM Server installation, i.e. one where the default disk layout is selected, creates a local-only (non-shared) OCFS2 OVS repository (/OVS) on a dedicated local disk partition, for example:
/OVS
/OVS/local/
/OVS/remote/
When (eventually) registered with Oracle VM Manager, the OVS repository directory structure may be similar to the following:
/OVS
/OVS/iso_pool/
/OVS/local/
/OVS/publish_pool/
/OVS/remote/
/OVS/running_pool/
/OVS/seed_pool/
/OVS/sharedDisk/
Note: a custom Oracle VM installation, i.e. one where the disk layout is customised, can be performed to initially create an OVS repository on shared SAN/NAS storage. Whilst this may eliminate the need to copy any existing local resources to shared storage, additional actions will be necessary, especially when using OCFS2, to ensure the repository filesystem is also shareable.
Complete the installation (and/or upgrade) of Oracle VM Server 2.1.2 on all remaining servers.
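As a quick sanity check, the kernel and OCFS2 package versions can be compared across all servers before proceeding (the package query shown is illustrative; exact package names may vary by release), e.g.:
# uname -r
# rpm -qa | grep -i ocfs2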
2. Select the shared OVS repository filesystem type
Select a filesystem type to be used for the shared OVS repository, e.g. NFS or OCFS2. Your decision may be limited by the storage type and/or features available, e.g. NAS, SAN, iSCSI capability, etc.
3. Export/Provision the shared OVS repository to Oracle VM Servers
Prepare a sufficiently sized network filesystem (NFS) or shared device (SAN/NAS/iSCSI LUN) to be used for the shared OVS repository. Export or provision the NFS filesystem or shared device to all Oracle VM Servers in the Server Pool. Ensure all Oracle VM Servers have a consistent view of the network/shared storage before proceeding.
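For example, if NFS is chosen, an export similar to the following (the export path, host names and target IP are purely illustrative) could be added to /etc/exports on the NFS server and re-exported; if an iSCSI LUN is used, each Oracle VM Server would discover and log into the target, e.g.:
# cat /etc/exports
/exports/ovs    ovs1.acme.com(rw,no_root_squash) ovs2.acme.com(rw,no_root_squash) ovs3.acme.com(rw,no_root_squash)
# exportfs -ra
# iscsiadm -m discovery -t sendtargets -p 10.0.0.100
# iscsiadm -m node -l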
The 'Domain Live Migration' chapter of the Oracle VM Server User's Guide describes the steps required to create and provision networked and shared volumes for:
* OCFS2 (Oracle Cluster File System) over the iSCSI (Internet SCSI) network protocol
* OCFS2 using a SAN (Storage Area Network)
* NFS (Network File System)
For completeness, a modified version of the shared OCFS2 filesystem on a SAN procedure follows.
4. Identify/verify the device(s) associated with the LUN
After provisioning shared storage to the Oracle VM Servers, use the fdisk(8) uti
lity to partition the device from only one Oracle VM Server. Verify the device n
ame(s) associated with the LUN to be used as the shared repository. Run the part
probe(8) or sfdisk(8) commands on all Oracle VM Servers to ensure they maintain
a consistent and updated view of the now partitioned shared storage device.
If multipath(8) is used, further identify the corresponding /dev/mapper/ device
or alias names (typically mpathNpN) - this can be determined by listing the mapp
ing between the /dev/dm-N device names in /proc/partitions to their /dev/mapper
/ device names in the /dev/mpath/ directory. Here on in, only use the correspond
ing /dev/mapper/ device names whenever referring to or using the shared reposito
ry device.
# cat /proc/partitions
major minor        #blocks  name
    8     0      156250000  sda    ### local disk ###
    8     1         104391  sda1   ### local /boot ###
    8     2        3148740  sda2   ### local / ###
    8     3      151942770  sda3   ### local /OVS ###
    8     4              1  sda4
    8     5        1052226  sda5
    8    16  1073741824000  sdb    ### san lun ###
    8    17  1073741823984  sdb1   ### partition on lun ###
# fdisk -l /dev/sdb
...
Disk /dev/sdb: 1073.7 GB, 1073741824000 bytes
255 heads, 63 sectors/track, 130541 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/sdb1 1 130541 1048570551 83 Linux

Note: the device name of a LUN may differ between Oracle VM Servers - this is normal and expected.
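For example, after partitioning the LUN from the first Oracle VM Server, the remaining servers can rescan and, where device-mapper multipathing is in use, list the resulting /dev/mapper/ devices (no sample output is shown here as device and alias names vary per configuration), e.g.:
# partprobe /dev/sdb
# grep sdb /proc/partitions
# multipath -ll
# ls -l /dev/mapper/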
5. Create the OCFS2 configuration
As mentioned earlier, the default Oracle VM Server installation produces a local-only OVS repository on a non-shared OCFS2 filesystem (/dev/sda3 in the example above). Although OCFS2 is used, the filesystem mount type is local, i.e. 'mkfs.ocfs2 -M local ...', so that no cluster stack needs to be running to mount it locally.
An OCFS2 cluster must first be configured/formed between all Oracle VM Servers before the common OVS repository can be shared (concurrently mounted). The cluster configuration comprises two core files:
* /etc/ocfs2/cluster.conf
* /etc/sysconfig/o2cb
Create a new directory named /etc/ocfs2/ on all Oracle VM Servers e.g.:
# mkdir /etc/ocfs2/
Create the OCFS2 configuration file /etc/ocfs2/cluster.conf on one Oracle VM Server only.
Define a separate node: stanza for each Oracle VM Server that will mount the shared repository. Following is a sample cluster.conf file comprising three (3) Oracle VM Servers:
node:
        ip_port = 7777
        ip_address = 10.0.0.1
        number = 0
        name = ovs1.acme.com
        cluster = ocfs2
node:
        ip_port = 7777
        ip_address = 10.0.0.2
        number = 1
        name = ovs2.acme.com
        cluster = ocfs2
node:
        ip_port = 7777
        ip_address = 10.0.0.3
        number = 2
        name = ovs3.acme.com
        cluster = ocfs2
cluster:
        node_count = 3
        name = ocfs2
At this point, ensure the following of all Oracle VM Servers (a sample check follows this list):
* tcp port 7777 is available and not blocked by a firewall
* the ip_address value specifies the private interconnect/IP address where available
* the node_count value is at least the total number of Oracle VM Servers
* each Oracle VM Server can not only resolve the hostname of all other servers, but also connect to each of them
* SELinux is disabled on all Oracle VM Servers (getenforce(1))
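A minimal check from each Oracle VM Server, using the sample host names above, might look as follows; once the O2CB cluster stack is running (steps 8 and 9), TCP port 7777 reachability can also be tested, e.g. with 'nc -z ovs2.acme.com 7777':
# ping -c 1 ovs2.acme.com
# getenforce
Disabled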

Create the O2CB configuration file /etc/sysconfig/o2cb on the same Oracle VM Server.
Run the /sbin/service command to configure the OCFS2 cluster stack. Once configured, the OCFS2 cluster stack (O2CB) is started automatically e.g.:
# service o2cb configure
Configuring the O2CB driver.
This will configure the on-boot properties of the O2CB driver.
The following questions will determine whether the driver is loaded on
boot. The current values will be shown in brackets ('[]'). Hitting
<ENTER> without typing an answer will keep that current value. Ctrl-C
will abort.
Load O2CB driver on boot (y/n) [y]: <enter>
Cluster to start on boot (Enter "none" to clear) [ocfs2]: <enter>
Specify heartbeat dead threshold (>=7) [31]: <enter>
Specify network idle timeout in ms (>=5000) [30000]: <enter>
Specify network keepalive delay in ms (>=1000) [2000]: <enter>
Specify network reconnect delay in ms (>=2000) [2000]: <enter>
Writing O2CB configuration: OK
Loading module "configfs": OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
Starting O2CB cluster ocfs2: OK
Following is a sample /etc/sysconfig/o2cb configuration file:
#
# This is a configuration file for automatic startup of the O2CB
# driver. It is generated by running /etc/init.d/o2cb configure.
# Please use that method to modify this file
#
# O2CB_ENABLED: 'true' means to load the driver on boot.
O2CB_ENABLED=true
# O2CB_BOOTCLUSTER: If not empty, the name of a cluster to start.
O2CB_BOOTCLUSTER=ocfs2
# O2CB_HEARTBEAT_THRESHOLD: Iterations before a node is considered dead.
O2CB_HEARTBEAT_THRESHOLD=31
# O2CB_IDLE_TIMEOUT_MS: Time in ms before a network connection is considered dead.
O2CB_IDLE_TIMEOUT_MS=30000
# O2CB_KEEPALIVE_DELAY_MS: Max time in ms before a keepalive packet is sent
O2CB_KEEPALIVE_DELAY_MS=2000
# O2CB_RECONNECT_DELAY_MS: Min time in ms between connection attempts
O2CB_RECONNECT_DELAY_MS=2000
The cluster.conf and o2cb files must exist and be identical across all Oracle VM Servers. To avoid configuration inconsistencies caused by configuring each Oracle VM Server individually, copy both configuration files created on the first Oracle VM Server to all other servers.
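For example, using the sample host names, and assuming the /etc/ocfs2/ directory has already been created on the other servers as described above:
# scp /etc/ocfs2/cluster.conf ovs2.acme.com:/etc/ocfs2/
# scp /etc/sysconfig/o2cb     ovs2.acme.com:/etc/sysconfig/
# scp /etc/ocfs2/cluster.conf ovs3.acme.com:/etc/ocfs2/
# scp /etc/sysconfig/o2cb     ovs3.acme.com:/etc/sysconfig/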
6. Format the shared OVS repository OCFS2 filesystem
Run the mkfs.ocfs2(8) command to format the shared OVS repository device (LUN /dev/sdb1 in the example above) with a clustered OCFS2 filesystem as follows - if using multipathing, use the appropriate /dev/mapper/ device.
# /sbin/mkfs.ocfs2 -b 4K -C 8K -N 8 -M cluster -L /OVS /dev/sdb1
The above command creates a shared/clustered (-M cluster) OCFS2 filesystem on device /dev/sdb1 with a block size (-b) of 4KB, a cluster size (-C) of 8KB, a label of '/OVS' and node slots (-N) sufficient for up to eight (8) separate Oracle VM Servers to concurrently mount it. As a general recommendation, allow two more node slots than the number of nodes ever intended to mount the volume.
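To confirm the new filesystem and its label before proceeding, the mounted.ocfs2(8) utility can be run from the same server (output abridged; the UUID shown is illustrative), e.g.:
# mounted.ocfs2 -d
Device     FS     UUID                                  Label
/dev/sdb1  ocfs2  df9826e6-20d5-4c4a-a287-92abb185a2a7  /OVS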
7. Review the current status of the OCFS2 cluster service
As depicted above, the OCFS2 cluster stack is already started on the first Oracle VM Server. Run the /sbin/service command on all other Oracle VM Servers to check the current cluster service status e.g.:
# service o2cb status
Module "configfs": Not loaded
Filesystem "configfs": Not mounted
Module "ocfs2_nodemanager": Not loaded
Module "ocfs2_dlm": Not loaded
Module "ocfs2_dlmfs": Not loaded
Filesystem "ocfs2_dlmfs": Not mounted
If, for whatever reason, the cluster service is already running on any Oracle VM Server, the service must be shut down before proceeding. Unmount all OCFS2 volumes and stop the O2CB cluster stack accordingly e.g.:
# service ocfs2 stop
Stopping Oracle Cluster File System (OCFS2)
# service o2cb stop
Stopping O2CB cluster ocfs2: OK
Unloading module "ocfs2": OK
Unmounting ocfs2_dlmfs filesystem: OK
Unloading module "ocfs2_dlmfs": OK
Unmounting configfs filesystem: OK
Unloading module "configfs": OK
8. Load the OCFS2 cluster stack
Load the OCFS2 cluster stack on all except the first Oracle VM Server (already loaded) e.g.:
# service o2cb load
Loading module "configfs": OK
Mounting configfs filesystem at /config: OK
Loading module "ocfs2_nodemanager": OK
Loading module "ocfs2_dlm": OK
Loading module "ocfs2_dlmfs": OK
Mounting ocfs2_dlmfs filesystem at /dlm: OK
9. Online the OCFS2 cluster
Online the OCFS2 cluster stack on all except the first Oracle VM Server (already onlined) e.g.:
# service o2cb online
Starting O2CB cluster ocfs2: OK
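Re-running the status check on each server should now show the cluster stack loaded and the cluster online; the output will resemble the following (exact wording may vary slightly between OCFS2 releases), e.g.:
# service o2cb status
Module "configfs": Loaded
Filesystem "configfs": Mounted
Module "ocfs2_nodemanager": Loaded
Module "ocfs2_dlm": Loaded
Module "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster ocfs2: Online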
10. Configure OCFS2 to autostart on server boot
Run the chkconfig(8) command on all Oracle VM Servers to configure OCFS2 to automatically start and mount the shared OVS repository on system boot e.g.:
# chkconfig --add o2cb
# chkconfig o2cb on
# chkconfig --add ocfs2
# chkconfig ocfs2 on
# chkconfig --list | egrep 'o2cb|ocfs2'
o2cb 0:off 1:off 2:on 3:on 4:on 5:on 6:off
ocfs2 0:off 1:off 2:on 3:on 4:on 5:on 6:off
# grep -i enable /etc/sysconfig/o2cb
O2CB_ENABLED=true
11. Modify /etc/fstab to mount the shared OVS repository
At this point, it is likely that the local (non-shared) /OVS repository will still be individually mounted on each Oracle VM Server.
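To quickly confirm what is currently mounted at /OVS on a given server, e.g.:
# mount | grep -w /OVS
# df -h /OVS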
11.1 If the Local OVS repositories do not contain resources ...
If the local OVS repository of an Oracle VM Server is mounted, but contains no local resources (Virtual Machine guests, ISO images, Virtual Machine templates, etc.), simply unmount it e.g.:
# umount /OVS
If there is any uncertainty as to which repository is which, run the mounted.ocfs2(8) command, which lists OCFS2 devices, labels, UUIDs, mounting nodes, etc. e.g.:
# mounted.ocfs2 -f
Device FS Nodes
/dev/sda3 ocfs2 ovs1.acme.com
/dev/sdb1 ocfs2 Not mounted
# mounted.ocfs2 -d
Device     FS     UUID                                  Label
/dev/sda3  ocfs2  3fd19ac6-8fe5-4e7c-9fbc-a0658ef0f82b              ### local ###
/dev/sdb1  ocfs2  df9826e6-20d5-4c4a-a287-92abb185a2a7  /OVS        ### san ###
Modify or replace the existing /etc/fstab entry, ensuring to mount the shared repository under the /OVS mount point e.g.:
# grep /OVS /etc/fstab
#/dev/sda3    /OVS    ocfs2    defaults        1 0    ### local ###
LABEL=/OVS    /OVS    ocfs2    sync,_netdev    1 0    ### san ###
Complete the above on all relevant Oracle VM Servers, then proceed to step 12.
11.2 If the Local OVS repositories already contain resources ...
If the local OVS repository of an Oracle VM Server is still mounted and contains local resources (Virtual Machine guests, ISO images, Virtual Machine templates, etc.), perform the following steps to copy them to the shared OVS repository on shared/network storage.
To be able to copy local resources to the shared repository, both the local (source) and shared (target) repositories need to be mounted concurrently.
Shut down all Virtual Machine guests running on the Oracle VM Servers, unmount the local OVS repository, then run the tunefs.ocfs2(8) utility to create a new, unique label for the local OVS repository filesystem - avoid using a label containing the string 'OVS' e.g.:
# umount /OVS
# mounted.ocfs2 -d
Device     FS     UUID                                  Label
/dev/sda3  ocfs2  3fd19ac6-8fe5-4e7c-9fbc-a0658ef0f82b              ### local ###
/dev/sdb1  ocfs2  df9826e6-20d5-4c4a-a287-92abb185a2a7  /OVS        ### san ###
# tunefs.ocfs2 -L /LOCALREPO /dev/sda3
tunefs.ocfs2 1.2.7
Changing volume label from to /LOCALREPO
Proceed (y/N): y
Changed volume label
Wrote Superblock
# mounted.ocfs2 -d
Device     FS     UUID                                  Label
/dev/sda3  ocfs2  3fd19ac6-8fe5-4e7c-9fbc-a0658ef0f82b  /LOCALREPO  ### local ###
/dev/sdb1  ocfs2  df9826e6-20d5-4c4a-a287-92abb185a2a7  /OVS        ### san ###
Create a new mount point directory on which to mount the newly labelled local OVS repository. Create a corresponding entry in the /etc/fstab file for the local OVS repository, and ensure the shared repository entry retains the _netdev and sync mount options e.g.:
# mkdir /LOCALREPO
# grep ocfs2 /etc/fstab
#/dev/sda3          /OVS          ocfs2    defaults        1 0    ### local ###
LABEL=/LOCALREPO    /LOCALREPO    ocfs2    defaults        1 0    ### local ###
LABEL=/OVS          /OVS          ocfs2    sync,_netdev    1 0    ### san ###

Mount both the local and shared OVS repositories e.g.:


# mount -at ocfs2
# mounted.ocfs2 -f
Device FS Nodes
/dev/sda3 ocfs2 ovs1.acme.com
/dev/sdb1 ocfs2 ovs1.acme.com
# mounted.ocfs2 -d
Device     FS     UUID                                  Label
/dev/sda3  ocfs2  3fd19ac6-8fe5-4e7c-9fbc-a0658ef0f82b  /LOCALREPO  ### local ###
/dev/sdb1  ocfs2  df9826e6-20d5-4c4a-a287-92abb185a2a7  /OVS        ### san ###
# ls -l /OVS
...
# ls -l /LOCALREPO
...
Proceed to copy all local resources on the Oracle VM Servers to the corresponding directories on the shared OVS repository e.g.:
# cp -pR /LOCALREPO/running_pool/* /OVS/running_pool/
# cp -pR /LOCALREPO/seed_pool/* /OVS/seed_pool/
# cp -pR /LOCALREPO/iso_pool/* /OVS/iso_pool/
...
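As a simple sanity check of the copy, the directory sizes of the local and shared repositories can be compared, e.g.:
# du -sh /LOCALREPO/running_pool /OVS/running_pool
# du -sh /LOCALREPO/seed_pool /OVS/seed_pool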
Once copying is complete, unmount the local repository (the /LOCALREPO mount in the example above) from the Oracle VM Servers, then optionally remove or reformat the local repository content/filesystem as required. Be sure to remove any redundant entries from the /etc/fstab file.
Complete the above on all relevant Oracle VM Servers.
12. Mount/remount shared OVS repository on all Oracle VM Servers
Run the umount(8), then mount(8) commands on all Oracle VM Servers to unmount and remount the shared repository under the /OVS mount point. Run the mounted.ocfs2(8) command from at least one Oracle VM Server to verify that all Oracle VM Servers have successfully mounted the same shared OVS repository e.g.:
# umount -at ocfs2
# mount -at ocfs2
# mounted.ocfs2 -f
Device FS Nodes
/dev/sdb1 ocfs2 ovs1.acme.com, ovs2.acme.com, ovs3.acme.com
Further, verify that each Oracle VM Server has a consistent view of the shared OVS repository files. All going well, you are now ready to create, share and migrate resources between Oracle VM Servers.
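For example, a test file created under the shared repository from one Oracle VM Server should be immediately visible from the others (the file name below is purely illustrative):
# touch /OVS/running_pool/.shared_test      ### on ovs1.acme.com ###
# ls -l /OVS/running_pool/.shared_test      ### on ovs2.acme.com and ovs3.acme.com ###
# rm /OVS/running_pool/.shared_test         ### cleanup - run on any one server ###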
References
NOTE:468793.1 - Considerations for deploying and using Oracle VM
http://download.oracle.com/docs/cd/E11081_01/welcome.html
NOTE:794198.1 - Oracle VM: OVS Repository Design and Storage Considerations
