Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
Solution configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
Backup methodologies . . . . . . . . . . . . . . . . . . . . . . . . 6
Hardware statistics . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Tape devices . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Creating nPars . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Zoning overview . . . . . . . . . . . . . . . . . . . . . . . . . 9
Configuring HP VLS6510 . . . . . . . . . . . . . . . . . . . . . . . . 11
Best practices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Initial tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
Best practices for disk-to-virtual-tape backups on the VLS virtual tape library . . 43
Use flash recovery for online point-in-time recovery and transaction rollback tracking . . . 44
Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Appendix C: Examples . . . . . . . . . . . . . . . . . . . . . . . . . . 50
RAC issues . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
Appendix E: Acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . 60
HP Storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
HP technical references . . . . . . . . . . . . . . . . . . . . . . . . 63
HP Data Protector . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Oracle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Quest Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Overview
Over 40% of enterprises use Oracle databases as part of their core applications. The availability
requirements of these core applications vary from less than three hours of unplanned downtime
per month to as little as five minutes per calendar year. Given these requirements, techniques for
backup and recovery of Oracle databases are a vital part of an IT organization’s data protection
and availability strategy. From an operational perspective, proper validation of both recovery times
and backup windows is required to support business-driven recovery time objectives (RTOs) and
recovery point objectives (RPOs). In addition, as the amount of information stored in databases
continues to grow (the current compound annual growth rate is approximately 30%), higher-performing
backup and recovery methods are required to maintain these objectives.
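To make that growth rate concrete, here is a quick back-of-the-envelope projection; the starting size and horizon are illustrative, not figures from the testing described in this paper:

```shell
# Illustrative only: project database growth at roughly 30% per year.
SIZE_GB=1000                          # hypothetical starting size
for YEAR in 1 2 3 4 5; do
  SIZE_GB=$(( SIZE_GB * 130 / 100 ))  # integer approximation of +30%/year
done
echo "after 5 years: ${SIZE_GB} GB"   # the database has more than tripled
```

At this rate, a backup window that was adequate at deployment can be exceeded within a few years, which is the pressure behind the higher-performing methods discussed here.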
The investment in a backup and recovery infrastructure is significant, and the cost of deployment and
management is the largest component of this investment. There are many hardware and software
components and procedural methods available to back up a database environment and plan for its
recovery. Hardware options include disk, tape, and other new hybrid technologies such as virtual
tape systems. There also are many software options for managing the process of backing up and
recovering the data. With so many choices to consider, determining which combination of hardware,
software, and procedures to use can be challenging. The actual implementation and integration can
prove to be difficult; and the overall results can be unpredictable.
This paper describes testing that used a combination of three Oracle 10g server configurations and
three backup infrastructures—an HP StorageWorks Enterprise Modular Library (EML) E-Series Tape
Library, an HP StorageWorks Virtual Tape Library System, and an HP Enterprise Virtual Array disk
storage system. The purpose of the testing was to develop best practices for the backup and restore
of an Oracle 10g environment using HP Data Protector backup software across all combinations of
the three backup infrastructures and the three Oracle 10g environments.
This paper addresses fundamental deployment and operational issues and provides:
• A set of test-proven best practices for the backup and recovery of Oracle 10g databases, including
methodologies for deploying disk, tape, and virtual tape; setting backup and recovery software
options; and determining optimum configurations.
• The key planning and deployment considerations for Data Protector, Oracle 10g, and the server
and storage area network (SAN) infrastructure.
The test results, and specifically the methods and best practices derived from the testing, are described
in the following sections. This information is intended to facilitate both the productive planning and
the timely deployment of a fully operational HP server, storage, and backup infrastructure to ensure:
• Effective deployment of the servers, storage technology, and software, and proper procedures
for backing up and recovering an Oracle 10g database in an enterprise-class, SAN-based
environment.
• Ease of overall operation.
• Use of proven procedures to perform backups in a timely manner, with a clear understanding of
the impact of backups on application performance.
• Use of proven procedures to restore data in full support of business-driven RTOs and RPOs.
By following these best practices, users can accelerate time to deployment, reduce risk, minimize
total costs, and maintain overall application service-level objectives.
Solution configuration
Based on customer input, an enterprise environment was configured for this testing that was
representative of a typical production Oracle database environment. The key components of the
test environment included:
• The HP Integrity rx7620 Server, a mid-range server, to host the Oracle database. Two
configurations were used: a 64-bit, single-instance database and a two-node, 64-bit RAC
instance.
Note
Although the introduction of the rx7640 has made the rx7620 server obsolete, the best practices outlined in this
document are still pertinent.
• An HP StorageWorks 8000 Enterprise Virtual Array (EVA8000) as the primary SAN-based disk
array that held the production Oracle database, logs, and so on.
• An HP EML E-Series 103e Tape Library as the primary LTO-3 tape backup and restore device.
Note
HP has recently released LTO-4 tape technology which provides improved performance, encryption, and variable speed
technology. The use of LTO-4 will alter the results of this testing; however, all best practices are still applicable.
• An HP StorageWorks 6510 Virtual Library System (VLS6510) as the primary virtual tape backup
and restore device.
• The Red Hat Enterprise Linux (RHEL) AS 4.0 operating system on both the rx7620 and DL580
servers.
• Quest Software Benchmark Factory to create the online transaction processing (OLTP) data in
the databases and simulate 500- and 1,000-user workloads.
Configuring the hardware
HP constructed the configuration using Integrity rx7620 and ProLiant DL580 servers, and EVA8000
and EVA5000 disk arrays, to best simulate an enterprise environment supporting different Oracle
databases on Itanium- and Xeon-based servers. See “Bill of materials” on page 47 for a complete
list of the
hardware used.
Figure 1 shows the configuration of the servers used in this test environment. The most important
elements of the Oracle database server configurations are as follows:
• IA64 RAC—Oracle 10g Real Application Clusters (RAC) on RHEL4 U4 using both partitions of
the rx7620
• IA64 Single—Oracle 10g Single Instance on RHEL4 U4 using one partition of the rx7620
The Benchmark Factory, HP Data Protector 6.0, and HP Command View EVA servers also are shown
near the top of the environment configuration diagram. For EVA SAN connectivity, 2-Gb/s Fibre
Channel (FC) links were used. A description of the backup methodologies used for each of the
configurations is presented in the next section. These backup methodologies are also depicted in the
environment configuration diagram.
Backup methodologies
Disk-to-disk backups—Backing up Oracle 10g Release 2 to EVA5000
For this test, Oracle Recovery Manager (RMAN) performed a tape backup and Data Protector
translated the configured data streams into files and wrote them to the defined File Libraries. The I/O
load was balanced across host bus adapters (HBAs) and controller ports.
Disk-to-virtual-tape backups—Backing up Oracle 10g Release 2 to the VLS6510
For this test, RMAN performed a tape backup using 12 streams to the VLS. The I/O load was
balanced across HBAs and controller ports.
Disk-to-tape backups—Backing up Oracle 10g Release 2 to the EML E-Series Tape Library
For this test, RMAN performed a tape backup using four streams to the EML. The I/O load was
balanced across HBAs and controller ports.
In Figure 1, the top-most (orange) arrows indicate the disk-staging data flow, the middle (white)
arrows indicate the disk-to-disk data flow, and the bottom-most (green) arrows indicate the disk-to-tape
data flow.
Figure 1. Environment configuration diagram
Hardware statistics
Database server configurations 1 and 2 (IA64 RAC and IA64 Single)
• HP Integrity rx7620
– Eight IA64 1.6-GHz processors in one partition
– Eight QLogic-based HP A6826A dual-port FC HBAs
– 64-GB RAM
• HP ProLiant DL580 G3
– Four Intel Xeon 3.0-GHz processors
– Two QLogic-based HP FCA2214A dual-port FC HBAs
– 16-GB RAM
Storage
• EVA8000 (primary)
– 144 300-GB FC drives
– Dual controllers (active-active)
– One disk group (FC)
• EVA5000 (backup)
Tape devices
• EML 103e
– Two FC paths
• VLS6510
For this test environment, the Integrity rx7620 was partitioned into two nPars, or server partitions. In
order to create partitioning from a remote system and to enable remote hardware management, the
management processor (MP) was configured for remote access as follows:
1. Connect a server running Windows or Linux to the management processor serial management
port.
2. Start a terminal program, such as Windows HyperTerminal or Linux Minicom.
3. Configure the IP address with the lc command, following the on-screen prompts.
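On the Integrity MP, lc is reached from the Command Menu. A hypothetical session might look like the following; the prompts are abbreviated, and exact menus vary by MP firmware revision:

```
MP> CM          (enter the Command Menu)
MP:CM> LC       (LAN configuration: IP address, gateway, subnet mask)
```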
Note
You can configure the MP for dynamic host configuration protocol (DHCP) or static Internet protocol (IP) address. You
can also enable or disable telnet, SSH, or HTTPS remote access.
Creating nPars
If no partition exists, a new complex profile must be created as follows:
1. Use the cc command to create the complex profile, following the on-screen prompts.
2. Use the rs command to restart the partition.
One partition was created to provide access to all the on-board SCSI disks and an operating system
was loaded on the first SCSI disk. The first SCSI disk was then duplicated using the dd utility to the
remaining disks. Upon completion of the duplication, the system partitions were reset and the rx7620
was repartitioned into two partitions consisting of one cell each. Each new server partition then had
an identical bootable operating system because of the disk duplication effort. Alternatively, a network
install could have been performed on each partition if a PXE server had been set up.
Since no PXE server was used, disk duplication was the simplest method for preparing the operating
system disks for each server partition.
Important
Only one path should be used for tape devices: they are not supported in multipath configurations, and multiple
paths can cause issues with the backup application installation or configuration.
Because the current release of Red Hat Enterprise Linux AS 4.0 is limited to 256 buses, and multiple
buses are generated for each path from a host port to an EVA port, bus exhaustion can occur if the
EVA is not properly zoned. After zoning is introduced, it must be used in all cases for the devices
to be visible to one another.
Each rx7620 partition had four dual-port HP A6826A HBAs. Four ports per partition were explicitly
zoned to the EVA8000 and EVA5000. The last four were zoned to the VLS6510 and two of the last
four ports were also zoned to the EML E-Series 103e Tape Library.
The DL580 had two dual-port HP FCA2214A HBAs. Two ports were assigned to the EVA8000 and
EVA5000 and the other two were assigned to the VLS6510 and the EML E-Series 103e Tape Library.
Zoning overview
• To provide rx7620 SAN connectivity to EVA8000—Zones were established for four of the rx7620
HBA ports and each of the EVA8000 host ports.
• To provide DL580 SAN connectivity to EVA8000—Zones were established for two of the four
DL580 HBA ports and each of the EVA8000 host ports.
• To provide DL580 SAN connectivity to EVA5000 for disk-based backup—Zones were established
for two of the four DL580 HBA ports and each of the EVA5000 host ports.
• To provide rx7620 SAN connectivity to VLS6510—Zones were established for four of the rx7620
HBA ports and each of the VLS6510 host ports.
• To provide DL580 SAN connectivity to VLS6510 Virtual Library—Zones were established for two
of the four DL580 HBA ports and each of the VLS6510 host ports.
• To provide rx7620 SAN connectivity to EML E-Series 103e Tape Library—Zones were established
for two of the rx7620 HBA ports and each of the EML E-Series 103e host ports.
• To provide DL580 SAN connectivity to EML E-Series 103e Tape Library—Zones were established
for two of the four DL580 HBA ports and each of the EML E-Series 103e host ports.
The FC links were connected to two Brocade-based HP 2/16N SAN switches and two Brocade
SilkWorm 3800s, configured as dual fabrics.
The EVA8000 presented nine RAID1 virtual disks, which were all from a single EVA disk group. This
is in line with a typical optimal flexible architecture (OFA) configuration. The disk group included
all 144 available FC disks and was configured for double-disk failure protection. The virtual disks
were presented to all host ports that were connected to any port of the EVA. The QLogic least
recently used (LRU) load balancing policy was used.
Each used host port was identified on the EVA and the operating system type was set to Linux. Each
virtual disk from the EVA had Preferred Path/Mode set to No Preference which enabled the Linux
QLogic driver to load balance the logical unit numbers (LUNs) equally, dividing the load across the
two controllers according to the need of each host.
The EVA5000 configuration consisted of one HP StorageWorks HSV110 controller pair and eight
disk enclosures, which were populated with 56 250-GB FATA drives. VCS 4.001 firmware (the latest
release at the time of writing) and VCS 3.028 (the previous release) were used.
The EVA5000 presented four RAID5 virtual disks, which were all from a single disk group. The disk
group included all 56 available FATA disks and was configured for no disk failure protection. The
virtual disks were presented to all host ports that were connected to the EVA. The HP/QLogic load
balancing driver was configured for the LRU load balancing policy.
Each used host port was identified on the EVA and the operating system type was set to Linux.
Two virtual disks from the EVA had Preferred Path/Mode set to Path A—Failover Only and two
were set to Path B—Failover Only, alternating the settings. This equally divided the load across
the two controllers according to each host and LUN mapping.
The e2400-FC interface controllers had six Fibre Channel ports. Four ports were for the back-end
tape devices, and the remaining two were for the SAN. All interfaces ran at 2 Gb/s, and each SAN
port on the interface controller was connected to a separate fabric in order to distribute the load
evenly across fabrics and HBAs.
The tape library was managed from a dedicated SAN management server using HP StorageWorks
Command View TL software with HP Command View EVA.
HP Data Protector 6.0 Cell Manager for Windows was used as the backup application with all the
latest patches applied. For clarification, the cell manager server managed the backup images and
the media where the images reside. Because there can be only one host per robotic device, HP Data
Protector asks for one of the hosts to be the robotic control host. The robotic control host moves the
media to the tape drives when backups or restores are activated. Each server in the environment was
configured with the Data Protector agent and was responsible for writing data directly to the tape
devices to avoid network backups.
Configuring HP VLS6510
The HP StorageWorks VLS6510 was configured as an Ultrium 960 tape library with 50 LTO-2 tape
slots and 12 tape drives. The VLS uses the LTO-2 tape personality for LTO-2 and LTO-3 compatibility.
The VLS interface controllers had four FC ports and four SCSI ports. The four SCSI ports were for the
back-end MSA20 disk devices, while the four FC ports were for SAN connectivity. All FC interfaces
ran at 2 Gb/s, and each set of FC ports on the VLS interface controller was connected to separate
fabrics to distribute the load evenly across fabrics and HBAs.
The tape library was managed from a dedicated SAN management server. HP Command View TL
software was installed on the same SAN management server used for HP Command View EVA.
HP Data Protector 6.0 Cell Manager for Windows was used as the backup application with all the
latest patches applied. Each server in the environment was configured with the Data Protector agent
and was responsible for writing data directly to the tape devices to avoid network backups.
Configuring the software
See “Bill of materials” on page 47 for the complete list of software.
Linux kernel tuning was applied to accommodate the Oracle databases running on the hosts. Table 1
lists the Linux kernel 2.6 parameters that were modified for this testing. These settings were used
as best practices based on information provided from a previous project. The default values have
been included in Table 1.
vm.swappiness 10 30
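Such parameters are typically persisted in /etc/sysctl.conf and applied with sysctl -p. A minimal sketch follows; the parameter name comes from Table 1, but because only a fragment of the table survives here, treat the value as illustrative:

```
# /etc/sysctl.conf excerpt (illustrative value)
vm.swappiness = 10
```

Run sysctl -p to apply the file, or sysctl vm.swappiness to inspect the current value.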
The Linux QLogic driver supports both an active-active configuration and an active-passive
configuration. The EVA8000 disk array was configured to balance the load across HBAs and
controllers with active-active enabled. The EVA5000 required an active-passive configuration, but
supports load balancing across different ports of the same controller. The latest QLogic/HP driver
can be obtained from the following HP website:
http://welcome.hp.com/country/us/en/support.html?pageDisplay=drivers
When using the QLogic driver for Linux with the EVA product line, a set of configuration utilities was
installed with the driver source. An initial ramdisk (initrd) was created as part of the post-installation
tasks of the package. To manually configure the driver options, we modified the hp_qla2300.conf
file in /etc. Example 1 shows changes made to the defaults.
qdepth = 16
port_down_retry_count = 8
login_retry_count = 8
failover = 1
load_balancing = 2
auto_restore = 0x80
Table 2 displays the available load_balancing parameters.
Value  Type     Policy                      Description
0      Static   None                        Finds the first active path or the first active optimized path for each LUN.
1      Static   Automatic                   Distributes commands across the active paths and available HBAs, such that one path is used per LUN. Paths are automatically selected by drivers for supported storage systems.
2      Dynamic  Least Recently Used (LRU)   Sends commands to the path with the lowest I/O count. Includes special commands, such as path verification, as well as normal I/O.
3      Dynamic  Least Service Time (LST)    Sends commands to the path with the shortest execution time. Does not include special commands.
A value of 2 was used for the LRU policy. In this way the paths selected were automatically balanced
across the HBAs and switches for the EVA8000, and load balanced across HBAs to the same
controller within the same switch for the EVA5000.
Note
OCFS, ASM, and raw devices can all be used with RAC, depending on platform and business requirements.
The virtual disks (Vdisks) are zoned so that both hosts have visibility to all LUNs for use with RAC.
The LUNs containing u06 and u07 hold the flash recovery area and the Voting and Cluster Ready
Services (CRS) files, respectively. Each LUN has a single partition, which is then formatted for OCFS.
Figure 2 shows the hosts and LUN presentation. See Figure 3 for OCFS format information.
Figure 2. Shared OCFS2 LUN presentation to RAC nodes
Each of the six EVA virtual disks was formatted as an OCFS file system using the defaults shown
in Figure 3.
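The Figure 3 output is not reproduced in this extract. As a purely hypothetical sketch, a format command consistent with the 64 K cluster and 4 K block sizes described below might look like this; the device path and label are illustrative:

```
# mkfs.ocfs2 -b 4K -C 64K -L u03 /dev/sdb1
```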
The cluster size was set at 64 K with a block size of 4 K. The command line can also be used to
mount OCFS file systems. If this is a RAC installation and the file system will be used to store the
Voting Disk file, Oracle Cluster Registry, data files, redo logs, archive logs, or control files, the file
system should be mounted with the datavolume and nointr options. For example:
# mount -o _netdev,datavolume,nointr /dev/cciss/c0d7p1 /data
The default /etc/fstab entries were modified to include the _netdev and datavolume options
as follows:
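The modified entries are not reproduced in this extract. A hypothetical /etc/fstab line consistent with the mount example above would be the following; the device and mount point are taken from that example, and the remaining fields are illustrative:

```
/dev/cciss/c0d7p1  /data  ocfs2  _netdev,datavolume,nointr  0 0
```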
The datavolume option is necessary when putting data files on an OCFS volume; the option was
not added until OCFS version 2. Earlier OCFS releases supported only Oracle database files, whereas
OCFS2 is a general-purpose file system that can also hold a shared Oracle Home.
Note
All of the OCFS2 file systems must have the _netdev option specified. This guarantees that the network is started before the
file systems are mounted and that they are unmounted before the network is stopped. This is required to prevent OCFS2 from
fencing (crashing) the node during startup and shutdown.
More information is available in the Oracle Cluster File System (OCFS2) User’s Guide, available on
the following website:
http://oss.oracle.com/projects/ocfs2/dist/documentation/ocfs2_users_guide.pdf
OLTP workloads were created in accordance with the parameters defined in Table 3.
Benchmark Factory scale factors are approximate and should not be used as absolute guides. The
example depicted in Figure 4 should generate approximately 930 GB of data when, in fact, this scale
factor generated 1.3 TB of data and 300 GB of indexes.
Figure 4. Benchmark Factory scale factors
The scale factor shown is an estimate only. The actual size of the database must include indexes
as well. This may be as much as 33% of the total data size once the data has been completely
generated. The generated data alone may be as much as 20% larger than the estimated size.
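As a rough sanity check of those rules of thumb (a sketch; the 930 GB figure is the Figure 4 estimate, and the percentages are the upper bounds quoted above):

```shell
# Project actual on-disk size from a Benchmark Factory estimate, using the
# rules of thumb from the text: data may run ~20% over the estimate, and
# indexes may add ~33% of the data size.
EST_GB=930
DATA_GB=$(( EST_GB * 120 / 100 ))      # ~1116 GB of table data
INDEX_GB=$(( DATA_GB * 33 / 100 ))     # ~368 GB of indexes
TOTAL_GB=$(( DATA_GB + INDEX_GB ))
echo "data=${DATA_GB}GB indexes=${INDEX_GB}GB total=${TOTAL_GB}GB"
```

The measured result (1.3 TB of data plus 300 GB of indexes) was larger still, so the estimate should be treated as a floor rather than a target.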
After creating the database, the Oracle spfile parameters were tuned to generate the best backup
performance during workloads. Table 4, Table 5, and Table 6 provide the specific
Oracle parameters that were used during testing. Table 4 shows the options Benchmark Factory
can use to create the tables. Each of the settings can be changed to customize each table. These
are the optimal settings for this environment.
Table 4. Benchmark Factory table creation options
C_ORDER_LINE Table tablespace bmf parallel (degree default instances default) nologging cache monitoring
C_STOCK Table tablespace bmf parallel (degree default instances default) nologging cache monitoring
C_DISTRICT Table tablespace bmf parallel (degree default instances default) nologging cache monitoring
C_CUSTOMER Table tablespace bmf parallel (degree default instances default) nologging cache monitoring
C_HISTORY Table tablespace bmf parallel (degree default instances default) nologging cache monitoring
C_ORDER Table tablespace bmf parallel (degree default instances default) nologging cache monitoring
C_ITEM Table tablespace bmf parallel (degree default instances default) nologging cache monitoring
C_WAREHOUSE Table tablespace bmf parallel (degree default instances default) nologging cache monitoring
C_NEW_ORDER Table tablespace bmf parallel (degree default instances default) nologging cache monitoring
Table 5. Benchmark Factory index creation options
C_STOCK_I1 Index tablespace bmf parallel (degree default instances default) nologging compute statistics
C_WAREHOUSE_I1 Index tablespace bmf parallel (degree default instances default) nologging compute statistics
C_NEW_ORDER_I1 Index tablespace bmf parallel (degree default instances default) nologging compute statistics
C_CUSTOMER_I2 Index tablespace bmf parallel (degree default instances default) nologging compute statistics
C_ORDER_LINE_I1 Index tablespace bmf parallel (degree default instances default) nologging compute statistics
C_ORDER_I1 Index tablespace bmf parallel (degree default instances default) nologging compute statistics
C_ITEM_I1 Index tablespace bmf parallel (degree default instances default) nologging compute statistics
C_DISTRICT_I1 Index tablespace bmf parallel (degree default instances default) nologging compute statistics
C_CUSTOMER_I1 Index tablespace bmf parallel (degree default instances default) nologging compute statistics
Table 5 shows the modifiable options available in Benchmark Factory when creating indexes. These
options enable the user to tune the indexes to those commonly used or to use the default indexing
schemes when trying different database scenarios.
Table 6 lists the Oracle default parameters and the parameters that were used. These changes
were made to accommodate the high number of simulated users configured for the workload while
backups were occurring.
Parameter                      Default  Used
parallel_max_servers           15       2048
parallel_threads_per_cpu       2        8
dbwr_io_slaves                 0        4
db_file_multiblock_read_count  8        128
Configuring HP Data Protector 6.0 for Oracle backups
HP Data Protector setup
This section covers Oracle-specific information for configuring Oracle backups with Data
Protector 6.0. Generic backup and restore information, and details on the HP Data Protector setup,
are provided in the HP OpenView Storage Data Protector Concepts Guide, available on the following
website:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00751562/c00751562.pdf
Data Protector Backup Specifications determine the way a backup is executed. Although the
backup specifications have similar attributes, the script template determines how the backup will
be performed. The EML, VLS, and DISK backups share similar attributes because the backup
specification used is an application backup of type oracle8; however, different devices are used to
back up the database.
• Source—the source database to back up and which objects will be part of the backup
• Schedule—the schedule set forth in the template, which can be modified on a per specification
basis
3. Select the Backup type from the available templates or select Blank Backup, if no template exists.
The screen shot in Figure 5 depicts the Disk Full Backup option.
4. Click OK, and the application prompts you for Database Server information.
6. The application presents a window for providing Destination information or making modifications
to the disk or tape devices listed. Select the appropriate boxes for attributes of the backup. The
Load Balancing configuration on this screen may be modified. Click Next when done.
7. Modify the Backup Specification Options if needed. When done here, click Next.
8. The application presents a scheduling window where times and dates can be added to the
schedule and full or incremental backups can be specified. Click Finish when done.
A screen shot of the Create New Backup specification screen is shown in Figure 5. For Oracle, the
Filesystem check box does not need to be selected. Also, selecting the Schedule check box will start
the Schedule configuration near the end of the backup specification configuration. If a schedule for
the backup specification or template is not required at this time, do not select the Schedule check box.
When the specification is complete, right-click on the backup specification in the group defined and
use the Start Backup command to initiate the backup. Full backups are the default.
HP Data Protector facilitates disk-to-disk backups using file libraries, or disk backup repositories. In
order to make effective use of the SAN, this repository typically will be local to the host being backed
up. Otherwise, backups can be performed to disk devices across the network.
The final screen summarizes the choices made and allows any needed changes; further changes are
possible even after the file library has been created. The file library is then ready for use as a
device, and a template or backup specification can be created or modified to use the new device.
After creating the file library, modifying options may improve performance. For example, the
default block size for all Data Protector backups is 64 KB and the maximum block size is 1,024
KB. Performance of the file library is a function of the device used, the number of writers defined,
and the number of streams per writer.
A description of the settings for the disk writer and file library follows. More information can be
found on these settings; see “Examples” on page 50 and “Configuring Oracle Recovery Manager
(RMAN)” on page 49.
• Block size—The block size set for each disk writer when it acts like a tape device.
• Number of writers—The number of maximum disk writers to create as part of the file library.
• Streams—Enables multiplexed I/O to each disk writer, allowing more than one file to be written
to the file the disk writer outputs during the backup; good for small files and incremental backups
but typically a problem for larger files and full backups.
After the backup is tuned, using the file library to create a virtual full backup or disk-staging backup
solution is very simple. Table 7 displays the parameters that were used in the disk-to-disk backups.
Parameter          Default  Used
Number of writers  1        8
Streams            3        1
File library devices differ from tapes in that they are not shared among hosts.
In Data Protector, the file library defines the relationship between the host and a disk storage
device, but Data Protector enables RMAN to treat the library as if it were a tape device. The hosts
share the tape devices. Backup destinations cannot be modified using global parameters. Instead,
destinations are modified based on the backup setting and then on the host. The disk devices are
presented explicitly to each host and are separate in every respect. Although the devices may be
listed for each host, HP recommends that only the host which has direct SAN access to the device
use it during a disk-to-disk backup.
HP recommends that the Device Auto Configuration wizard always be used to create tape libraries
and their device associations.
2. Manually create a script on the cell manager and execute the restore with RMAN at the
command line.
RMAN and Data Protector automate the restore process of demultiplexing files and mounting the
required backup images, regardless of media type. This is especially helpful with a mix of disk
backups and tape backups.
Note
For the restore to be successful, the control file from the last backup must be available or you must have an RMAN catalog
configured.
The IA32 Multi system showed the least impact on users, but its TPS was considerably lower
than that of either the IA64 RAC or IA64 Single system.
The IA64 RAC system performed well, providing less impact on users than the IA64 Single system,
which was only able to keep up with the 500-user loads.
The IA64 Single system also showed a low impact, but affected users similarly to the IA64 RAC
system for the given number of users.
Oracle backup and restore
The major goal for the project was to back up the Oracle databases from each server directly to the
tape, virtual tape, and disk devices. The backups were conducted in two scenarios:
1. Perform a backup without a simulated user load and measure the backup rate (GB/hour).
2. Perform a backup with a simulated user load and measure the backup rate (GB/hour) and the
impact on the user workload (TPS).
During the first scenario, the database had no workload applied and the backup was performed
to disk, tape, and virtual tape. For the second scenario, the database was put under a peak OLTP
workload and then backed up to disk, tape, and virtual tape. This was done in order to observe
the interaction between the workload and the backup and also to understand what impact the
backup was having on the user experience. A total of 200 data files were backed up using RMAN.
A combined total of approximately 5 TB was backed up for each server using the methodologies
in this paper. Data was sent to multiple drives over multiple channels where the media allowed, to
achieve as high a throughput as possible for each configuration. One channel was configured per
tape device, and two streams (also known as multiplexing) were configured per disk device during
the backup.
Restores were done from each backup device to observe the performance of each methodology.
The most important metric captured was the actual speed of the restore as a factor in the overall
time-to-recover (TTR). This allows us to gauge the possible performance of each backup technology in
an environment.
To facilitate raw write testing of the EVA5000, the low-level UNIX tool dd (disk duplicate) was
used. dd is well suited to this task because it performs only sequential I/O and supports a
configurable block size. This testing provided baseline performance for the tested configuration and showed the
raw performance of the EVA5000 being accessed from a host. An example of using the dd utility
is shown below:
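A minimal sketch of such an invocation follows (the target path, block size, and transfer count are placeholders; in the actual tests the target would be the raw device for an EVA Vdisk, not a regular file):

```shell
#!/bin/sh
# Sequential large-block write test. TARGET is a placeholder -- point it
# at the raw device presented for the EVA Vdisk under test (for example
# /dev/raw/raw1); a regular file is used here only so the sketch is
# self-contained.
TARGET=${TARGET:-/tmp/dd_write_test.bin}

# 1 MB blocks, 64 MB total; increase count for a sustained run.
dd if=/dev/zero of="$TARGET" bs=1024k count=64

# Read the data back the same way to measure sequential reads.
dd if="$TARGET" of=/dev/null bs=1024k
```

Running two such commands simultaneously against two LUNs, separated by controller, is what produced the doubled throughput figures below.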
The raw sequential large block performance yielded approximately 190 MB/s for a single port
on a single HSV110 controller. This performance doubled to 380 MB/s for two simultaneous
dd commands, to two different LUNs, separated by HSV110 controllers. These numbers will
change based on the type of disks used. The overall performance using FATA drives is lower than
performance using FC drives.
Raw write testing of the EVA8000 was performed using the dd utility with a command similar to that
used for EVA5000 testing.
The raw sequential large block performance yielded approximately 190 MB/s for a single Vdisk
using multiple HBAs and Controllers. This performance doubled to 380 MB/s for two simultaneous
dd commands to two different LUNs. The I/O load was balanced across HBAs and controller host
ports to avoid bottlenecks.
Figure 6. Disk-to-disk backup configuration
Figure 7 depicts the speed of the disk-to-disk backups using the EVA5000 while under load. These
results were generated by performing a disk backup with a Data Protector block size of 512 KB with
and without a workload applied. Table 9 lists the backup results for disk-to-disk backups using
EVA5000, VCS 4.001, with OLTP load.
Figure 7. Disk backup with OLTP load
Table 9. Backup results for disk-to-disk backups using EVA5000, VCS 4.001, with OLTP load
Host type | Database size (GB) | Channels | Backup LUNs | Backup time (hours:minutes) | Backup rate (GB/hour)
Figure 8 shows an example of the performance of the IA32 Multi backup to the EVA5000 array.
Figure 8. IA32 Multi disk backup performance
Examination of the backup time indicates that the target backup of ~1 TB per hour was not
achieved in the configuration with the OLTP workload. For comparison, Table 10 shows the backup
performance for this EVA5000 configuration, without the workload.
As indicated in Figure 8, the backup/restore rates of the two IA64 systems were similar. While there
is no noticeable contention from the EVA5000 cache, throughput is near the maximum attainable with
VCS 4.001. The IA64 RAC backup rate was approximately 150% of that of the IA64 Single system,
which ran at 215 GB/hour. These results are typical of backups run during peak workload times
when using FATA disks.
Figure 9 and Table 10 show the results of testing backup without the OLTP load. The IA64 systems
outperformed the IA32 system but still were not able to hit the target of ~1 TB/hr. A comparison of
backup times achieved during this and the previous test demonstrates the impact of the workload
on achieving specific backup times.
Figure 9. Backup results for disk-to-disk backups using EVA5000, VCS 4.001, without OLTP load
Table 10. Backup results for disk-to-disk backups using EVA5000, VCS 4.001, without OLTP load
Host type | Database size (GB) | Backup LUNs | Backup time (hours:minutes) | Channels | Backup rate (GB/hour)
In each backup scenario, the EVA firmware cannot optimize backup performance because cache
mirroring cannot be turned off in VCS 4.001. Cache mirroring enables the EVA to provide full
redundancy with active-active failover and multipath I/O (MPIO) capabilities. Figure 10 shows the
same type of backup but uses VCS 3.028 with cache mirroring disabled for the backup LUNs.
Figure 10. Backup results for disk-to-disk backups using EVA5000, VCS 3.028, without OLTP load
Table 11. Backup results for disk-to-disk backups using EVA5000, VCS 3.028, without OLTP load
Host type | Database size (GB) | Backup LUNs | Backup time (hours:minutes) | RMAN channels | Backup rate (GB/hour)
Using the earlier firmware resulted in improved backup times, nearly doubling the throughput for
each system. Most of the improvement is attributable to disabling controller cache mirroring, which
streamlined the backup and avoided congesting a controller host port with cache mirroring I/O.
Figure 11. Restore results for disk-to-disk backups using EVA5000, VCS 4.001, off-line restore
Table 12. Restore results for disk-to-disk backups using EVA5000, VCS 4.001, off-line restore
Host type | Database size (GB) | Backup LUNs | Restore time (hours:minutes) | RMAN channels | Restore rate (GB/hour)
Figure 12 below shows the EVA performance during the IA32 Multi system restore.
Figure 12. EVA performance during the IA32 Multi system restore
The IA64 Single and the IA64 RAC outperformed the IA32 Multi; however, the IA64 RAC achieved
the best throughput. This was attributable to the fact that the RAC was restoring to two EVA8000
LUNs while the other systems were each using a single LUN.
• The disk I/O was shared on the same HBA for backups; tape I/O was assigned to a different
HBA.
• Bigfile tablespaces were used instead of more parallel reads from smallfile tablespaces.
• There was no striping across multiple LUNs using disk management software (Oracle ASM).
The limited capability of the hardware and the bigfile tablespace used to store the entire 150-GB
database resulted in long backups and restores using only one channel. Simultaneous restores of the
IA32 Multi system were performed, causing simultaneous reads from the single backup LUN.
• Using bigfile tablespaces can impede overall restore results if there are not enough tablespaces to
spread across multiple devices, that is, smallfile tablespaces or partitioned bigfile tablespaces.
• Using VCS 4.001 or higher code and RAID1 on the production EVA will yield the best protection
from failures, because cache mirroring is enabled and controllers are not taxed with RAID5
parity overhead.
• Using 3.028 firmware and RAID0 LUNs on the disk-to-disk (D2D) target yields the best
performance, because the EVA5000 controllers operate without cache mirroring. However, this
configuration does not provide recovery from failures.
• Properly balancing datafiles/tablespaces to RMAN channels and tape devices yields the best
results.
• Disk striping across multiple LUNs enables the fastest possible reads.
Figure 14 and Table 13 show the results for disk-to-tape backup using the EML 103e, with OLTP load.
Figure 14. Backup results for disk-to-tape backup using EML 103e, with OLTP load
Table 13. Backup results for disk-to-tape backup using EML 103e, with OLTP load
Host type | Database size (GB) | RMAN channels | Backup LUNs | Backup time (hours:minutes) | Backup rate (GB/hour)
The IA32 system performed poorly during this backup. The OLTP workload generated heavy
read/write I/O against the IA32 Multi system while its bigfile tablespaces were backed up
to four tape devices, which slowed the overall backup. Theoretically, the system can attain
throughput of approximately 80 MB/s, or 288 GB/hour, per LTO-3 drive. However, during peak
workload with one file containing most of the data, the system attained throughput of approximately
25 MB/s, or 90 GB/hour, with simultaneous read/write operations occurring outside of the backup.
The IA64 systems performed well during the backup because using two data sources (that is, IA64
RAC has two data LUNs, many FC paths) allowed for more read throughput.
The system backups were then tested without the OLTP load. Table 14 shows data from tests on
the three host systems.
Figure 15. Backup results for disk-to-tape backup using EML 103e, without OLTP load
Table 14. Backup results for disk-to-tape backup using EML 103e, without OLTP load
Host type | Database size (GB) | RMAN channels | Backup LUNs | Backup time (hours:minutes) | Backup rate (GB/hour)
Without the OLTP load, performance of each system improved. The IA32 Multi system is not as fast
as the IA64 Single or the IA64 RAC system. However, it saw the greatest performance improvement
when compared with results from the backup performed while under the workload. It is easy to
understand why backup while under heavy load is not advised.
In testing restore times using EML tape, each restore was done off-line; therefore, each server had
more resources available for the restore than it had for the backup. The IA32 system could have
attained even higher performance if more partitions had been employed for the bigfile tablespace, or
if smallfile tablespaces had been used for the single 150-GB set of data tables in each database.
The results of tests using off-line restore mode are displayed in Table 15 and in Figure 16.
Figure 16. Restore results for disk-to-tape backup using EML 103e, off-line restore
Table 15. Restore results for disk-to-tape backup using EML 103e, off-line restore
Host type | Database size (GB) | Backup LUNs | Restore time (hours:minutes) | RMAN channels | Restore rate (GB/hour)
• Tape backup performance is sensitive to the layout of the data paths and device visibility.
• If files of an appropriate size are used, tape streaming can perform at acceptable levels of I/O.
• Even in cases where poor RMAN channel balancing occurs, thus impacting backup, tape
restores are fast.
Tape backups with HP Data Protector using a block size of 512 KB with and without an applied
workload were tested. Figure 17 depicts a typical configuration.
Figure 17. Disk-to-tape backup diagram
Figure 18. Backup results for disk-to-tape backup using VLS6510, with OLTP load
Table 16. Backup results for disk-to-tape backup using VLS6510, with OLTP load
Host type | Database size (GB) | Backup LUNs | Backup time (hours:minutes) | RMAN channels | Backup rate (GB/hour)
Even under conditions in which the workload degrades backup performance, it was possible to
achieve results of nearly half the ~1 TB/hour target. To isolate the effect of the workload, the tests
were repeated without it.
Figure 19 and Table 17 show the performance of the VLS backup without the OLTP workload.
Figure 19. Backup results for disk-to-tape backup using VLS6510, without OLTP load
Table 17. Backup results for disk-to-tape backup using VLS6510, without OLTP load
Host type | Database size (GB) | Backup LUNs | Backup time (hours:minutes) | RMAN channels | Backup rate (GB/hour)
These backup results demonstrate the high throughput that VLS can deliver when not impeded by
peak workloads.
• The IA32 Multi system realized the largest percentage increase in throughput over previous tests.
• The RAC system achieved the ~1 TB/hour targeted throughput demonstrating that, given the
proper configuration, VLS can attain the objective.
• Performance of each system improved substantially over the throughput for D2D and disk-to-tape
(D2T) EML backups.
Had more than two LUNs been used for reading, the backup could have achieved higher read
throughput from the EVA8000.
Figure 20 depicts the load on the EVA8000 during the VLS backup of the IA64 Single system. The
EVA8000 has plenty of available throughput during the backup.
Figure 20. EVA8000 performance during the VLS backup of the IA64 Single system
The VLS6510 increases the speed of disk-to-tape configurations. Because the VLS can emulate
multiple types of tape devices and multiple libraries, it enables a very flexible tape solution.
More information on the VLS6000 is available at the following website:
http://h18004.www1.hp.com/storage/disk_storage/disk_to_disk/vls/6000vls/index.html
Figure 21 and Table 18 show the VLS restore results demonstrating both throughput and backup times.
Figure 21. Restore results for disk-to-tape backup using VLS6510
The restore results are typical of the VLS6510. The backups, both with and without load, showed
much higher throughput than those performed using other backup mechanisms in this environment.
The IA32 Multi system performed well, with restore performance at nearly 500 GB/hour. This is
especially significant because bigfile tablespaces were used; the database restores were nonetheless
parallelized. If several smallfile tablespaces or multiple bigfile tablespaces had been used, the
backup and restore times would have been even better.
• Because backups are sensitive to the layout of the data paths, device visibility, and the ratio of
the number of files to RMAN device channels, using a minimum of two dual-channel FC HBAs and
multiple data files will improve overall performance and lower the impact to the database.
• Configuring many emulated tape devices on the VLS is essential for achieving maximum
performance from the VLS.
• The use of four MSA20 disk shelves, the maximum for the VLS6510, is required to achieve the
highest throughput of the VLS6510. Also, these shelves are essential when emulating multiple
types of libraries, tape devices, and media.
Best practices
During testing, several best practices were developed to improve backup and recovery performance
for each scenario.
Initial tuning
Tuning the Data Protector buffer configuration
Buffer configuration is usually second in overall effort to device configuration in tuning Data Protector.
Several parameters were tuned to achieve the highest levels of throughput in this configuration.
The parameters used to provide the optimal performance for tuning the configuration are displayed in
Table 19.
Table 19. Default versus test settings for device parameters
Parameter | Default setting | Test setting
Number of writers | 1 | 8
Streams | 3 | 1
Load balancing | N | Y
Drive_buffering | 0 | 1
The operating-system-level tape block settings were configured using the /etc/stinit.def file to
specify settings for the LTO tape devices. The following settings were used in the test environment:
• Device Block Size—Setting this to the largest block size that the device can handle will improve
overall backup times if the block size can be met on every write.
• Streams—This controls whether or not to multiplex writes to the media. For many small files this
can be helpful, but when backing up files in the GB range this setting will hurt performance.
• Load Balancing Min/Max—The number of channels which can be used at a given time is
limited by the actual number of physical devices available. Setting this to Y allows devices to
be used as needed.
• Block size—This is a stinit.def setting. This is used by stinit to set each tape device’s defaults. A
setting of 0 is used so that the block size is automatically determined at write time.
• Drive-buffering—This is a stinit.def setting. This is used by stinit to set defaults for each tape
device. Simply adding this to the stinit device definition will enable hardware buffering for the
LTO-3 tape device. (This parameter can only be used if the drive is buffer capable.)
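Combined, these settings correspond to a stinit.def stanza along the following lines (the manufacturer/model strings are illustrative, and keyword names should be verified against the stinit(8) documentation shipped with your mt-st version):

```
# Example /etc/stinit.def stanza reflecting the settings above.
manufacturer="HP" model="Ultrium 3-SCSI"
{
scsi2logical=1          # common definitions for all modes
async-writes read-ahead
buffering=1             # drive (hardware) buffering, if supported
mode1 blocksize=0       # block size determined at write time
timeout=800
long-timeout=14400
}
```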
• When using VCS 3.028, use RAID0 for backup target LUNs.
• Use RAID1 to provide the lowest write penalty for data protection on VCS 4.001, especially
when using FATA drives.
• Use VCS 3.028 and disable cache mirroring for backup LUNs.
Note
VCS 4.00x includes several critical bug fixes and other improvements; read the VCS 4.001 and 3.028 release notes
carefully before choosing to use an earlier firmware version.
• Use as many disks as possible for a given disk group. Create disk groups of at least 56 FATA
disks to achieve acceptable throughput per disk group.
• Use two or more LUNs for each host to spread data streams across controllers for improved
bandwidth utilization.
– The EML tape library is capable of approximately 320 MB/s to all four LTO-3 drives. While
the full speed of the drives may not be reached, efficient streaming will allow the tapes to
spin enough to perform the backup quickly without reaching the maximum speed of the
tape devices.
– Since tapes need to stream, they will perform better when handling more data. Using many
small files will create issues as the tape will stop writing after each file is written, adding
latency and not allowing efficient streaming. By multiplexing many small files to one drive at
the same time, Data Protector can create a larger piece of data to keep the tape streaming.
However, because Data Protector then has to demultiplex the data, this should be used only
when necessary as it will impact restore times.
• Use only one stream per LTO-3 drive when using large files to ensure maximum performance
per channel from RMAN.
– This benefits both the database writes and the speed at which Data Protector handles
those writes.
Table 19 depicts the parameters which were tuned to achieve the optimum level of throughput in
this configuration.
• Block Settings—The operating-system-level tape block settings were configured using the
/etc/stinit.def file to specify settings for the LTO tape devices.
• Enables restoration of required image copies when an Oracle Data Guard physical standby
database is created.
• Can add protection against and simplify recovery of lost control files.
Back up any repositories that have media information on them on a regular basis. Create a binary
copy of the control file if not using an RMAN catalog.
Without a block change tracking (BCT) table, an incremental level 1 backup will take as long as a
full level 0. Also, ensure the BCT file is included in the backup.
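Block change tracking can be enabled with a single SQL statement; the file path below is an example only:

```sql
-- Enable block change tracking so a level 1 incremental reads only
-- changed blocks instead of scanning every datafile (path is an example):
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING
  USING FILE '/u01/oradata/bct.f';
```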
Use flash recovery for online point-in-time recovery and transaction rollback tracking
Using flash recovery is recommended for online recovery and it helps avoid the need to use backups
for recovery. Flash recovery should be part of any storage growth planning, because it can use a
large amount of storage. The actual storage required depends on the number of days of data
collection and the database transaction rate. Configuring even a small flash recovery area will
ensure that, at a minimum, you have a recent copy of a binary control file.
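A sketch of the corresponding initialization parameters (the size and location are illustrative assumptions; size the area from your retention window and transaction rate):

```sql
-- Example flash recovery area settings:
ALTER SYSTEM SET db_recovery_file_dest_size = 200G SCOPE=BOTH;
ALTER SYSTEM SET db_recovery_file_dest = '/u01/flash_recovery_area' SCOPE=BOTH;
```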
Incorporating full and incremental backups should help ease recovery, but one policy does not fit all.
Understanding how and when to use cumulative as compared to differential backups is important
for your recovery strategy.
Use the Oracle 10g incremental merge to create an up-to-date, full image copy from
incremental backup files. Unless all the archive logs since the previous full backup are available,
creating frequent full backups does not guarantee recovery, even with incremental files available.
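A typical incremental-merge RMAN script looks like the following sketch (the tag name is arbitrary; each run rolls the new level 1 changes into the on-disk level 0 image copy):

```
# Incrementally updated image copies ("incremental merge"):
RUN {
  RECOVER COPY OF DATABASE WITH TAG 'incr_merge';
  BACKUP INCREMENTAL LEVEL 1
    FOR RECOVER OF COPY WITH TAG 'incr_merge'
    DATABASE;
}
```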
If using disk backups as the primary recovery option, move old on-disk backups to tape with the
RMAN backup backupset command, or use HP Data Protector Media/Object Copy or
Media/Object Consolidation to manage space.
Test copies of backups
Restoring backups to an alternate location and attempting to start the restored database is the best
way to test backups. A second method for testing the backup is to use the RMAN validate
command. Finally, use the RMAN crosscheck command to verify that backup, data, and archive
log files are still located on the target media.
Ensure your archived logs are part of your full and incremental backup. Also, create duplicate
copies of media containing archive logs. Manage the archive log space by deleting archive logs
as part of the backup.
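These checks can be scripted in RMAN along these lines (a sketch; adapt channel allocation and scheduling to your environment):

```
# Verify that backups are usable and still present on media:
RESTORE DATABASE VALIDATE;     # reads backup pieces without restoring
CROSSCHECK BACKUP;             # confirms pieces still exist on the media

# Back up archive logs and reclaim their disk space in one step:
BACKUP ARCHIVELOG ALL DELETE INPUT;
```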
See “Configuring Oracle Recovery Manager (RMAN)” on page 49 for additional RMAN
configuration information.
Conclusion
This paper demonstrates how to properly architect, successfully deploy, and productively use a
fully operational HP server, storage, and backup infrastructure for an Oracle 10g environment. In
addition, specific detailed examples illustrate how to customize and integrate HP Data Protector with
Oracle 10g and RMAN to ensure seamless deployment and operation. The major considerations
are the following:
• Selecting the appropriate backup methodology. Each one has a unique set of strengths and
drawbacks pertaining to function, performance, management, reliability, and ease of integration.
• Choosing the appropriate backup types, incorporating full and incremental backups as needed,
as well as knowing when to use cumulative or differential backups. In general, the backup type is
determined by both database usage and recovery objectives.
• Simplifying the environment by using zoning to narrow device presentation to hosts, and sharing
tape devices between SAN connected hosts. This also helps avoid backing up across congested
or high-latency local area networks.
• Using Oracle Flash Recovery to provide online point-in-time recovery to the transaction level and
eliminate the need to access off-line backups.
• Engaging in periodic testing of backups by restoring them to an alternate location and attempting
to start the restored database.
• Using an RMAN Catalog database to provide redundancy for the Data Protector internal
database, to protect against lost control files, and to restore image copies to create an Oracle
Data Guard physical standby database.
• Protecting both RMAN and Data Protector by backing up the RMAN catalog and keeping
redundant copies on separate media.
Understanding these major considerations and knowing how to respond are keys to the successful
deployment of a backup infrastructure for an Oracle 10g environment using HP servers, storage,
and backup software. The test-proven techniques developed in this paper serve as a complete
guide that can be used with confidence to ensure success.
Appendix A Bill of materials
Backup Server 1
HP Integrity rx7620 server (qty 1); Operating system: Red Hat Enterprise Linux AS 4.0 U3
Firmware: MP E.03.13, BMC 03.47, EFI 03.10, System 03.11
Driver: 1.42
Backup Server 2
HP ProLiant DL580 G2 server (qty 1); Operating system: Red Hat Enterprise Linux AS 4.0 U3
CPU: 4 x 3.0 GHz; Memory: 8 GB
Driver: 1.45
VLS6510 (qty 1); firmware V3.020
Disk-to-tape backup target: EML 103e
Appendix B Configuring Oracle Recovery Manager
This section shows the recommended defaults for each RMAN instance as well as the configuration
options for configuring the RMAN backup.
It may be useful to modify some of the defaults in the previous example, in particular Backup
Optimization, Default Device Type, Controlfile Autobackup, Parallelism,
and Archivelog Deletion Policy.
• Default Device Type—The default device type may need to be a tape library or WORM drive;
setting this default can reduce per-backup scripting.
• Controlfile Autobackup—This is highly useful to ensure a control file backup is done often.
• Parallelism—When backup sets are being written, this will stream multiple files together to the
same channel if set to a value greater than one.
• Archivelog Deletion Policy—Setting this can ease management of scripts since one can set the
archivelogs to be deleted at a predefined interval.
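As a sketch, the persistent settings described above can be applied with RMAN CONFIGURE commands like the following (the parallelism value is illustrative; the Archivelog Deletion Policy syntax is release-dependent and omitted here):

```
CONFIGURE BACKUP OPTIMIZATION ON;
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
CONFIGURE CONTROLFILE AUTOBACKUP ON;
CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
```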
Appendix C Examples
This appendix provides the stinit.def configuration, sample RMAN scripts, a sample Data Protector
template, and sample Data Protector screen shots.
timeout=800
long-timeout=14400
}
# HP Ultrium 960 LTO-3 devices emulated on the VLS 6510
manufacturer="HP" model="Ultrium 3-SCSI" revision="R138"
{
scsi2logical=1 # Common definitions for all modes
async-writes read-ahead
timeout=800
long-timeout=14400
RUN {
allocate channel 'dev_0' type 'sbt_tape'
parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=application,
OB2BARLIST=Backup_Specification_Name)';
allocate channel 'dev_1' type 'sbt_tape'
parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=application,
OB2BARLIST=Backup_Specification_Name)';
allocate channel 'dev_2' type 'sbt_tape'
parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=application,
OB2BARLIST=Backup_Specification_Name)';
allocate channel 'dev_3' type 'sbt_tape'
parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=application,
OB2BARLIST=Backup_Specification_Name)';
BACKUP
INCREMENTAL LEVEL=0
FORMAT 'Data_Plus_Arch_%d_u%u_s%s_p%p_t%t'
TAG 'DB1 Full Backup'
DATABASE PLUS ARCHIVELOG;
RELEASE CHANNEL ch00;
'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=application,OB2BARLIST=
Backup_Specification_Name)';
BACKUP
FORMAT 'STBYCTLFILE-_%d_u%u_s%s_p%p_t%t'
CURRENT CONTROLFILE FOR STANDBY;
RELEASE CHANNEL ch00;
}
run
{
# Auxiliary channels are the only way to restore a database as a duplicate
allocate channel 'dev_0' type 'sbt_tape'
parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=application,OB2BARLIST=EML Full Backup)';
allocate channel 'dev_1' type 'sbt_tape'
parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=application,OB2BARLIST=EML Full Backup)';
allocate channel 'dev_2' type 'sbt_tape'
parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=application,OB2BARLIST=EML Full Backup)';
allocate channel 'dev_3' type 'sbt_tape'
parms 'ENV=(OB2BARTYPE=Oracle8,OB2APPNAME=application,OB2BARLIST=EML Full Backup)';
backup incremental level <incr_level>
filesperset 2
format 'EML Full Backup<application_%s:%t:%p>.dbf'
database;
sql 'alter system archive log current';
backup
format 'EML Full Backup<application_%s:%t:%p>.dbf'
archivelog all;
}
Step 2. Use this screen to select a directory/device and set the number of writers.
Step 4. View the Summary pane.
The following screen shots show the advanced settings of a disk writer associated with the file
library created above.
Screen 1. The Advanced Options for disk writer
Screen 2. The file library disk-writer-device buffer settings
Note
When trying to automate the copying of a D2D backup to tape, the block sizes must match the formatted block size of
any destination tape media.
The following screen shots show the advanced options of a Backup Specification or Template.
Screen 2. Backup Specification/Template Oracle-specific advanced options
Note
If either the Disable recovery catalog auto backup option or the Disable Data Protector managed
control file backup option is selected, there will not be a redundant copy of the recovery catalog backed up to
any media unless manually done through another means.
Appendix D Additional information
This section presents a set of general issues encountered during testing and provides a description of
suggested resolutions.
Issue: Oracle instances stop, leaving the database in a hung state during high load.
Issue: On Data Protector 6.0 initial release, Windows Cell Manager could crash if the GUI crashed.
Resolution: Install DPWIN_00265 and DPWIN_00266 to resolve the GUI crash and the Cell
Manager crash.
Issue: To enable parity with other testing done in the past, modifications to some parameters were
made.
Resolution: Set the explicit commands in the template, or create a script on the server.
RAC issues
• Optimizer_Mode set to all_rows
Appendix E: Acronyms
Table 20 lists the acronyms used in this paper.
ACK Acknowledgement
DP Data Protector
Gb Gigabit
GB Gigabyte
FC Fibre Channel
I/O Input/Output
Mb Megabit
MB Megabyte
ms millisecond
RPO Recovery Point Objective
RR Round Robin
TB Terabyte
For more information
This section lists references and their online locations.
http://www.hp.com/go/hpcft
HP Storage
• HP StorageWorks 4x00/6x00/8x00 Enterprise Virtual Array configuration best practices white
paper
http://h71028.www7.hp.com/ERC/downloads/4AA0-2787ENW.pdf
• The role of HP StorageWorks 6000 Virtual Library Systems in a modern data protection strategy
http://h18004.www1.hp.com/storage/disk_storage/disk_to_disk/vls/6000vls/
relatedinfo.html?jumpid=reg_R1002_USEN
• Getting the most performance from your HP StorageWorks Ultrium 960 tape drive
http://h71028.www7.hp.com/ERC/downloads/5982-9971EN.pdf
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00644725/c00644725.pdf
http://www.hp.com/go/ebs
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00230303/
c00230303.pdf?jumpid=reg_R1002_USEN
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00775232/
c00775232.pdf?jumpid=reg_R1002_USEN
http://www.hp.com/support/pat
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00403562/c00403562.pdf
HP technical references
http://www.hp.com/hps/tos/
HP Data Protector
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00751873/c00751873.pdf
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00752717/
c00752717.pdf?jumpid=reg_R1002_USEN
http://h71028.www7.hp.com/ERC/downloads/
4AA1-2978ENW.pdf?jumpid=reg_R1002_USEN
http://h18004.www1.hp.com/products/storage/software/dataprotector/document
ation.html?jumpid=reg_R1002_USEN
Oracle
http://www.oracle.com/technology/deploy/availability/pdf/S942_Chien.doc.pdf
http://www.oracle.com/technology/deploy/availability/htdocs/BR_Overview.htm
Quest Software
http://www.quest.com/benchmark-factory/
http://www.quest.com/Quest_Site_Assets/PDF/Benchmark_Factory_5_TPCH.pdf
• Tiobench
http://directory.fsf.org/sysadmin/monitor/tiobench.html
• Bonnie++
http://www.coker.com.au/bonnie++/