
Migrating to Oracle Database 11g

An IT Management eBook

Contents
Migrating to Oracle Database 11g
This content was adapted from Internet.com's DatabaseJournal and InternetNews Web sites. Contributors: Clint Boulton, Jim Czuprynski, and Steve Callan.


Introduction
By Clint Boulton

New Features in Oracle Database 11g


By Jim Czuprynski


Database Migration: A Planned Approach


By Steve Callan

Hiring a DBA? 7 Interview Questions


By Sean Hull

Migrating to Oracle Database 11g, An Internet.com IT Management eBook. 2007, Jupitermedia Corp.


Migrating to Oracle Database 11g


By Clint Boulton

In its 30th year, Oracle's database is the most established software of its kind, showing that a mature product can stay relevant as long as you keep lavishing it with new capabilities. Innovation is the key theme for Oracle Database 11g, which company President Charles Phillips and other executives introduced at a launch event in New York in July.

While some analysts have pegged the update as incremental over the previous 10g database, there is no mistaking 11g for old hat; more than 400 new features, including new manageability features and testing utilities, dominate the release. "People always need to manage more data, and the things they need to do with data become more complex," Phillips said. "We've got to keep up. So we continue to have these innovations year after year after year... This is really rocket science."

Among the core new features in 11g is Real Application Testing, which lets customers test and manage changes to their IT environments quickly. Andrew Mendelsohn, senior vice president of Database Server Technologies at Oracle, said the technology combines a workload capture-and-replay feature with a SQL performance analyzer to let users test changes against real-life workloads.

The idea is to fine-tune changes in a couple of days rather than months before putting them into production.

Oracle Data Guard, the disaster-recovery technology, has also been upgraded in 11g, now allowing customers to use their standby databases to improve performance in their production environments as well as provide protection from system failures and disasters. The technology now allows simultaneous read and recovery of a single standby database, making it available for reporting, backup, testing and upgrades to production databases.

Tired of spending thousands of dollars on disks? Mendelsohn said Oracle Database 11g has significant new compression capabilities to further cut the number of disks and the cost of storage. In one scenario, Mendelsohn showed how a customer using a combination of tiered storage (for high-performing, less active, and historical data) with the new compression technologies in 11g can trim a storage budget from almost $1 million to under $60,000 a year.

11g also boasts something Oracle calls Total Recall, which allows administrators to query table data from the past.

The idea is to bring a heretofore-unprecedented time dimension to data for tracking changes, which in turn leads to more intelligent auditing and compliance. For unstructured data such as images and large text files, 11g has Fast Files, which stores large volumes of information and retrieves it quickly.

Security, of course, is always a major concern, with data breaches (hello, T.J. Maxx, et al.) and compliance regulations keeping CIOs on their heels. In 11g, Oracle has boosted its Transparent Data Encryption capabilities beyond column-level encryption to scramble data in entire tables, indexes, and other data storage.

Noting figures from Gartner that put Oracle's database market share at 47 percent -- more than the combined market shares for IBM's DB2 Universal Database and Microsoft's SQL Server -- Phillips said 11g is a continuation of Oracle's practice of carving out the database technology roadmap for the industry. "We don't mind defining the roadmap for them," Phillips said, chuckling.

New Features in Oracle Database 11g


By Jim Czuprynski

I spent several months participating in the Oracle Database 11g beta evaluation program. Even though Oracle Database 10g impressed me with the breadth of its changes, I'm still trying to wrap my brain around the even more impressive upgrades in this next release. Here are some of my personal favorites among the plethora of Oracle Database 11g's new features.

No. 1: Result Caches


I've often wished that the Oracle database would provide a method to retain in memory the result set from a complex query that contains what I like to call reference information. These are data that hardly ever change, but must still be read and used across multiple applications -- for example, a list of all country codes and their corresponding names for lookup when processing addresses for new international customers, or a list of all ZIP Codes in the Midwestern US. Oracle Database 11g fills this gap with three new structures called result caches, and each structure has a different purpose:

The SQL query result cache is an area of memory in the Shared Global Area (SGA) that can retain the result sets that a query generates.

The PL/SQL function result cache can store the results from a PL/SQL function call.


Finally, the client result cache can retain results from queries or functions on the application server from which the call originated. By retaining result sets in these in-memory caches, the results are immediately available for reuse by any user session. For user sessions that connect to the database through an application server, the client cache permits those sessions to simply share the results that are already cached on the application server without having to reissue a query directly against the database. These result caches therefore hold great promise for eliminating unnecessary "round trips" to the database server to collect relatively static reference data that still

needs to be shared across many application servers or user sessions - a potentially immense improvement in overall database throughput.
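To make that concrete, here's a minimal sketch of how the two server-side caches might be invoked. The country_codes table and get_country_name function are hypothetical names used only for illustration; the RESULT_CACHE hint and the PL/SQL RESULT_CACHE clause are the 11g mechanisms described above.

-- Ask Oracle to keep this query's result set in the SQL query result cache
-- so later executions by any session can reuse it.
SELECT /*+ RESULT_CACHE */ country_code, country_name
  FROM country_codes
 ORDER BY country_code;

-- A PL/SQL function whose results are cached; RELIES_ON names the table whose
-- changes should invalidate the cached results.
CREATE OR REPLACE FUNCTION get_country_name (p_code IN VARCHAR2)
  RETURN VARCHAR2
  RESULT_CACHE RELIES_ON (country_codes)
IS
  v_name country_codes.country_name%TYPE;
BEGIN
  SELECT country_name INTO v_name
    FROM country_codes
   WHERE country_code = p_code;
  RETURN v_name;
END;
/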

No. 2: Improved SQL Tuning


If you've already experienced the advice for SQL performance improvements that Oracle Database 10g's SQL Tuning Advisor and SQL Access Advisor provide, you'll be pleasantly surprised with Oracle Database 11g's enhanced SQL tuning capabilities. Here's a brief sample:

SQL statements can now tune themselves via an expansion to the automatic SQL tuning features that were introduced in Oracle Database 10g.

Statistics for the Cost-Based Optimizer (CBO) are now published separately from being gathered. This means that recomputed statistics for the CBO will not necessarily cause existing cursors to become invalidated.

Multi-column statistics can be collected for two or more columns in a table. This gives the CBO the ability to more accurately select rows based on common multi-column conditions or joins.

SQL Access Advisor can now make recommendations on how partitioning might be applied to existing tables, indexes, and materialized views to improve an application's performance.

Oracle Database 11g now supports retention of historical execution plans for a SQL statement. This means that the CBO can compare a new execution plan against the original plan and, if the old plan still offers better performance than the new one, it can decide to continue to use the original execution plan.
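As a sketch of the multi-column statistics feature, DBMS_STATS can define a column group and then regather statistics so it is populated; the APP_OWNER schema and ORDERS table below are hypothetical examples, not objects from the text.

-- Define a column group so the CBO sees how (STATE, ZIP_CODE) values correlate.
SELECT DBMS_STATS.CREATE_EXTENDED_STATS(
         ownname   => 'APP_OWNER',
         tabname   => 'ORDERS',
         extension => '(STATE, ZIP_CODE)') AS extension_name
  FROM dual;

-- Regather table statistics so the new column group gets its own statistics.
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP_OWNER', tabname => 'ORDERS');
END;
/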

No. 3: New System Testing Tools


As a DBA, one of the most bedeviling problems that I've regularly faced is to be able to predict accurately how the next set of changes to the database's application code, database patch set, or hardware configuration will affect that database's performance. That usually meant purchasing a relatively expensive third-party package (e.g., Mercury Interactive's LoadRunner) to generate a sample workload against the database using the next version of the application code, and then comparing the results against baseline performance for the current application code version. Fortunately, Oracle Database 11g has come to the rescue with two new utilities that offer monumental strides forward in system testing:

Database Replay. Database Replay can capture generated workloads from production systems at the database level. Therefore, it's no longer necessary to run actual application code to duplicate the load on the database, and this also improves the accuracy of the simulated workload because it limits or removes other factors like network latency. These captured workloads can then be replayed on a quality assurance database so that the impact of application changes, software patches, and even hardware upgrades can be measured accurately. This feature is especially valuable in detecting performance issues that could potentially hamstring a production database's performance and that might otherwise go undetected until well after changes have been deployed.

SQL Performance Analyzer. A robust complement to the Database Replay facility, the SQL Performance Analyzer (SPA) leverages existing Oracle Database 10g SQL tuning components. The SPA provides the ability to capture a specific SQL workload in a SQL Tuning Set, take a performance baseline before a major database or system change, make the desired change to the system, and then replay the SQL workload against the modified database or configuration. The before and after performance of the SQL workload can then be compared with just a few clicks of the mouse. The DBA only needs to isolate any SQL statements that are now performing poorly and tune them via the SQL Tuning Advisor.
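As a rough illustration of the capture side of Database Replay, the DBMS_WORKLOAD_CAPTURE package records a production workload into a directory object. The directory path and capture name below are hypothetical, only the basic parameters are shown, and anything like this should be checked against the 11g documentation before it goes near a production system.

-- Create a directory object to hold the capture files (path is an example).
CREATE DIRECTORY dbreplay_dir AS '/u01/app/oracle/dbreplay';

-- Start capturing the production workload; duration is in seconds.
BEGIN
  DBMS_WORKLOAD_CAPTURE.START_CAPTURE(name     => 'PRE_UPGRADE_CAPTURE',
                                      dir      => 'DBREPLAY_DIR',
                                      duration => 3600);
END;
/

-- Stop the capture explicitly if no duration was given (or to end it early).
BEGIN
  DBMS_WORKLOAD_CAPTURE.FINISH_CAPTURE;
END;
/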

No. 4: Advisors and Fault Diagnostics


Oracle Database 10g introduced an impressive plethora of database performance advisors like the Segment Advisor, the Undo Advisor, the SQL Access Advisor, the SQL Tuning Advisor, the MTTR Advisor, and the ultimate expert system for tuning database performance: the Automatic Database Diagnostic Monitor (ADDM). Oracle Database 11g expands this advisory framework with several new Database Repair Advisors. The chief goals of these new Advisors are to locate root causes of a failure, identify and present options for repairing these root causes, and even correct the identified problems with self-healing mechanisms. Oracle Database 11g also adds a series of improved fault diagnostics to make it extremely easy for even an inexperienced DBA to detect and quickly resolve problems with Oracle Database 11g. Here are the highlights of these new features:

Automatic Diagnostic Repository. The Automatic Diagnostic Repository (ADR) is at the heart of Oracle Database 11g's new fault diagnostic framework. The ADR is a central, file-based repository external to the database itself, and it's composed of the diagnostic data -- alert logs (in XML format), core dumps, background process dumps, and user trace files -- collected from individual database components from the first moment that a critical error is detected. Though it's stored outside of the database itself, the ADR can be accessed via either Enterprise Manager or command-line utilities.

Automatic Health Monitoring. When a problem within the database is detected, the new Health Monitor (HM) utility will automatically perform a series of integrity checks to determine if the problem can be traced to corruption within database blocks, redo log blocks, undo segments, or dictionary table blocks. HM can also be fired manually to perform checks against the database's health on a periodic basis.

Support Workbench. Once the ADR has detected and reported a critical problem, the DBA can interrogate the ADR, report on the source of the problem, and in some cases even implement repairs through the Support Workbench, a new facility that's part of Enterprise Manager.

Incident Packaging Service. If the problem can't be solved using these tools, it may be time to ask for help from Oracle Support. The new Incident Packaging Service (IPS) facility provides tools for gathering and packaging all necessary logs that Oracle Support typically needs to resolve a Service Request.

Hang Manager. Oracle Database 10g introduced the Hang Analysis tool in Enterprise Manager, and Oracle Database 11g now expands this concept with the Hang Manager. Through a series of dynamic views, it allows the DBA to traverse what's called a hang chain to determine exactly which processes and sessions are causing bottlenecks because they are blocking access to needed resources. And since it's activated by default on all single-instance databases, RAC clustered databases, and ASM instances, it's now possible to track down the source of a hang from one end of the system to the other.
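To see where the ADR lives on a particular instance, a quick query against the V$DIAG_INFO view (new in 11g) lists the repository paths; this is just a convenient illustration, not a required diagnostic step.

-- Show ADR locations and related diagnostic paths for this instance.
SELECT name, value
  FROM v$diag_info
 ORDER BY name;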

No. 5: Flashback Enhancements


Oracle Database 10g dramatically expanded database recoverability with the ability to perform an incomplete recovery of the database with Flashback Database. Oracle Database 10g also provided four new logical database recovery features: Flashback Table, Flashback Drop, Flashback Version Query, and Flashback Transaction Query. Oracle Database 11g expands this arsenal of recovery tools with two new Flashback features:


Flashback Transaction. Essentially an extension of the Flashback Transaction Query functionality introduced in Oracle Database 10g, Flashback Transaction allows the DBA to back out one or more transactions -- as well as any corresponding dependent transactions -- by applying the appropriate reciprocal UNDO statements for the affected transaction(s) to the corresponding affected rows in the database.

Total Recall. This new feature offers the ability to retain the reciprocal UNDO information for critical data significantly beyond the point in time that it would be flushed out of the UNDO tablespace. Therefore, it's now possible to hold onto these reciprocal transactions essentially indefinitely. Once this feature is enabled, all retained transaction history can be viewed, and this eliminates the cumbersome task of creating corresponding history-tracking tables for critical transactional tables. And as you might expect, Oracle Database 11g also provides methods to automatically purge data retained in the Flashback Data Archive once a specified retention period has been exceeded.
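As a sketch of how Total Recall can be enabled, the statements below create a flashback data archive and attach a table to it. The tablespace, archive, and table names are hypothetical, and the retention and quota values would be chosen to suit the application.

-- Create a flashback data archive with a five-year retention window.
CREATE FLASHBACK ARCHIVE fda_5yr
  TABLESPACE fda_ts
  QUOTA 10G
  RETENTION 5 YEAR;

-- Start tracking history for a critical transactional table.
ALTER TABLE orders FLASHBACK ARCHIVE fda_5yr;

-- Later, query the table as it looked 90 days ago.
SELECT *
  FROM orders AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '90' DAY)
 WHERE order_id = 1001;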

No. 6: SecureFiles

Oracle Database 11g provides a series of brand-new methods for storing large binary objects (also known as LOBs) inside the database. These new features, collectively called SecureFiles, will allow Oracle Database 11g to store images, extremely large text objects, and the more advanced datatypes introduced in prior Oracle releases (e.g., XMLType, Spatial, and medical imaging objects that utilize the DICOM [Digital Imaging and Communications In Medicine] format). SecureFiles promises to offer performance that compares favorably with file system storage of these object types, as well as the ability to transparently compress and "deduplicate" these data. (Deduplication is yet another brand-new feature in Oracle Database 11g. It can detect identical LOB data in the same LOB column that's referenced in two or more rows, and then stores just one copy of that data, thus reducing the amount of space required to store these LOBs.) Perhaps most importantly, Oracle Database 11g will also ensure that these data can be encrypted using Transparent Data Encryption (TDE) methods - especially important (and welcome) in the current security-conscious environments we inhabit today as database administrators.
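A minimal sketch of the SecureFiles syntax, using a hypothetical document table; COMPRESS and DEDUPLICATE are the SecureFiles capabilities described above, and an ENCRYPT option could be added once a TDE wallet is in place.

CREATE TABLE document_store (
  doc_id   NUMBER PRIMARY KEY,
  doc_body CLOB
)
LOB (doc_body) STORE AS SECUREFILE docs_lob (
  COMPRESS HIGH    -- transparent compression of the LOB data
  DEDUPLICATE      -- store identical LOB values only once
);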

No. 7: Improved Database Security


Oracle Database 10g Release 2 dramatically improved the options for encrypting sensitive data both within Oracle database tables and indexes, as well as outside the database (i.e., RMAN backups and DataPump export files) with Transparent Data Encryption (TDE). Oracle Database 11g continues to expand the use of TDE within the database.



For example, it's now possible to encrypt data at the tablespace level as well as at the table and index level. Also, logical standby databases can utilize TDE to protect data that's been transferred from the corresponding primary database site. Moreover, secured storage of the TDE master encryption key is ensured by allowing it to be stored externally from the database server in a separate Hardware Security Module.

Secure By Default. Oracle Database 11g also implements a new set of out-of-the-box security enhancements that are collectively called Secure By Default. These security settings can be enabled during database creation via the Database Configuration Assistant (DBCA), or they can be enabled later after the database has been created. Here's a sample of these new security features:

Every user account password is now checked automatically to ensure sufficient password complexity is being used.

To further strengthen password security, the DEFAULT user profile now sets standard values for password grace time, lifetime, and lock time, as well as for the maximum number of failed login attempts.

Auditing will be turned on by default for over 20 of the most sensitive DBA activities (e.g., CREATE ANY PROCEDURE, GRANT ANY PRIVILEGE, DROP USER, and so forth). Also, the AUDIT_TRAIL parameter is set to DB by default when the database is created, which means that a database "bounce" will no longer be required to activate auditing.

Fine-Grained Access Control (FGAC) is now available for network callouts when using raw TCP (e.g., via the UTL_TCP package); FGAC will be able to construct Access Control Lists (ACLs) to provide fine-grained access to external network services for specific Oracle Database 11g database user accounts.

Enterprise Manager now provides interfaces for direct management of the External Security Module (ESM), Fine-Grained Auditing (FGA) policies, and Row-Level Security (RLS) policies.

Finally, an RMAN recovery catalog can now be secured via Virtual Private Catalog to prevent unauthorized users from viewing backups that are registered within the catalog.
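As a brief illustration of the tablespace-level encryption mentioned above, the sketch below assumes a TDE wallet has already been configured; the password, file path, tablespace name, and algorithm are illustrative only.

-- Set the master encryption key (one-time wallet setup; password is illustrative).
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "WalletPassword1";

-- Create an encrypted tablespace; everything stored in it is encrypted on disk.
CREATE TABLESPACE secure_data
  DATAFILE '/u02/oradata/orcl/secure_data01.dbf' SIZE 500M
  ENCRYPTION USING 'AES256'
  DEFAULT STORAGE (ENCRYPT);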

No. 8: Partitioning Upgrades


Oracle Database 10g made a few important improvements to partitioned tables and indexes (e.g., hash-partitioned global indexes), but Oracle Database 11g dramatically expands the scope of partitioning with several new composite partitioning options: Range Within Range, List Within Range, List Within Hash, and List Within List. And that's not all:

Interval Partitioning. One of the more intriguing new partitioning options, interval partitioning is a special version of range partitioning that requires the partition key be limited to a single column with a datatype of either NUMBER or DATE. Range partitions of a fixed duration can be specified just like in a regular range-partitioned table based on this partition key. However, the table can also be partitioned dynamically based on which date values fall into a calculated interval (e.g., month, week, quarter, or even year). This enables Oracle Database 11g to create future new partitions automatically based on the interval specified, without any future DBA intervention.

Partitioning On Virtual Columns. The concept of a virtual column - a column whose value is simply the result of an expression, but which is not stored physically in the database - is a powerful new construct in Oracle Database 11g. It's now possible to partition a table based on a virtual column value, and this leads to enormous flexibility when creating a partitioned table. For example, it's no longer necessary to store the date value that represents the starting week date for a table that is range-partitioned on week number; the value of week number can simply be calculated as a virtual column instead.

Partitioning By Reference. Another welcome partitioning enhancement is the ability to partition a table that contains only detail transactions based on those detail transactions' relationships to entries in another partitioned table that contains only master transactions. The relationship between a set of invoice line items (detail entries) that corresponds directly to a single invoice (the master entry) is a typical business example. Oracle Database 11g will automatically place the detail table's data into appropriate partitions based on the foreign key constraint that establishes and enforces the relationship between master and detail rows in the two tables. This eliminates the need to explicitly establish different partitions for both tables because the partitioning in the master table drives the partitioning of the detail table.

Transportable Partitions. Finally, Oracle Database 11g makes it possible to transport a partitioned table's individual partitions between a source and a target database. This means it's now possible to create a tablespace version of one or more selected partitions of a partitioned table, thus archiving that partitioned portion of the table to another database server.
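To make the interval and reference partitioning ideas concrete, here's a hedged sketch using hypothetical ORDERS, INVOICES, and INVOICE_ITEMS tables; the clauses follow the 11g interval and reference partitioning syntax described above.

-- Interval partitioning: Oracle creates a new monthly partition automatically
-- whenever a row arrives with an order_date beyond the existing partitions.
CREATE TABLE orders (
  order_id    NUMBER PRIMARY KEY,
  customer_id NUMBER,
  order_date  DATE NOT NULL
)
PARTITION BY RANGE (order_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH'))
(PARTITION p_2008_jan VALUES LESS THAN (TO_DATE('01-02-2008', 'DD-MM-YYYY')));

-- Reference partitioning: invoice line items inherit the partitioning of their
-- parent invoices through the enabled foreign key constraint.
CREATE TABLE invoices (
  invoice_id   NUMBER PRIMARY KEY,
  invoice_date DATE NOT NULL
)
PARTITION BY RANGE (invoice_date)
(PARTITION inv_2007 VALUES LESS THAN (TO_DATE('01-01-2008', 'DD-MM-YYYY')),
 PARTITION inv_2008 VALUES LESS THAN (TO_DATE('01-01-2009', 'DD-MM-YYYY')));

CREATE TABLE invoice_items (
  invoice_id NUMBER NOT NULL,
  line_no    NUMBER,
  amount     NUMBER,
  CONSTRAINT fk_items_invoices FOREIGN KEY (invoice_id) REFERENCES invoices (invoice_id)
)
PARTITION BY REFERENCE (fk_items_invoices);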

No. 9: ASM Enhancements

Oracle Database 10g introduced Automatic Storage Management (ASM), which at its essence is a file system specifically developed for Oracle database files. Oracle Database 11g expands the reach of ASM with several new features, including:

SYSASM Role. A new role, SYSASM, has been created so that ASM instances can be managed separately from the roles typically granted for traditional Oracle database instance management.

ASM Rolling Upgrades. One of the most popular and sensible uses of ASM is in a Real Application Clusters (RAC) environment. Oracle Database 10g made it possible to initiate a rolling patch set upgrade to the software in the Oracle database home on each node in the cluster. This ensures that the clustered database remains accessible at all times because at least one Oracle database instance is active while the patch set is applied to the other node(s). The good news is that Oracle Database 11g now extends this concept to ASM instances in a RAC clustered environment.

Fast Mirror Resynchronization. An ASM disk group that's mirrored using ASM two-way or three-way mirroring could lose an ASM disk due to a transient failure (e.g., failure of a Host Bus Adapter, SCSI cable, or disk I/O controller). Should this occur, ASM will now utilize the Fast Mirror Resynchronization feature to quickly resynchronize only the extents that were affected by the temporary outage when the disk is repaired, thus reducing the time it takes to restore the redundancy of the mirrored ASM disk group.

Preferred Mirror Read. An ASM disk group that's mirrored using ASM two-way or three-way mirroring requires the configuration of failure groups. (A failure group defines the set of disks across which ASM will mirror allocation units; this ensures that the loss of any disk(s) in the failure group doesn't cause data loss.) In Oracle Database 11g, it's now possible to inform ASM that it's acceptable to read from the nearest secondary extent (i.e., the extent that's really supporting the mirroring of the ASM allocation unit) if that extent is actually closer to the node that's accessing the extent. This feature is most useful in a Real Application Clusters (RAC) database environment, especially when the primary mirrored extent is not local to the node that's attempting to access the extent.

Resizable Allocation Unit. Oracle Database 11g now permits an ASM allocation unit to be sized at either 2, 4, 8, 16, 32, or 64 MB when an ASM disk group is first created. This means that larger sequential I/O is now possible for very large tablespaces and/or tablespaces with larger block sizes. The extent size is automatically increased as necessary, and this allows an ASM file to grow up to the maximum of 128 TB as supported by Bigfile Tablespaces (BFTs).

Improved ASMCMD Command Set. ASMCMD now includes several new commands that increase visibility of ASM disk group information, support faster restoration of damaged blocks, and retain and restore complex metadata about disk groups. A system/storage administrator can execute the lsdsk command to view a list of all ASM disks even if an ASM instance is not currently running. The remap command utilizes the existing backup of a damaged block on an ASM-mirrored disk group to recover the damaged block to an alternate location elsewhere in the ASM disk group. Commands md_backup and md_restore allow a DBA to back up and restore, respectively, the metadata reflecting the exact structure of an ASM disk group. These new commands are an immense boon because the recreation of extremely large disk groups consisting of several dozen mount points can be tedious, time-consuming, and prone to error.
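As a hedged sketch of how a couple of these features surface in SQL on the ASM instance, the statements below set a repair window for Fast Mirror Resynchronization and a preferred read failure group; the disk group and failure group names are hypothetical.

-- Allow up to 3.5 hours for a transient disk failure before ASM drops the disk,
-- so only the changed extents need resynchronization once the disk returns.
ALTER DISKGROUP data SET ATTRIBUTE 'disk_repair_time' = '3.5h';

-- Tell this ASM instance to prefer reads from its local failure group.
ALTER SYSTEM SET asm_preferred_read_failure_groups = 'DATA.FG_SITE1';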

No. 10: Data Guard Enhancements


Last but most certainly not least, Oracle Database 11g adds plenty of enhancements to its flagship high-availability solution for site survivability, Data Guard:

Snapshot Standby Database. Prior versions of Oracle Database supported two types of standby databases: the physical standby, which is an exact duplicate of the primary database and is updated via direct application of archived redo logs; and the logical standby, which contains the same logical information as the primary database, but whose data is organized and/or structured differently than on the primary database and which is updated via SQL Apply. Oracle Database 11g adds a third standby database type, the snapshot standby database, which is created by converting an existing physical standby database to this format. A snapshot standby database still accepts redo information from its primary, but unlike the first two standby types, it does not apply the redo to the database immediately; instead, the redo is only applied when the snapshot standby database is reconverted back into a physical standby. This means that the DBA could convert an existing physical standby database to a snapshot standby for testing purposes, allow developers or QA personnel to make changes to the snapshot standby, and then roll back the data created during testing and immediately reapply the valid production redo data, thus reverting the snapshot standby to a physical standby again.

Rolling Database Upgrades Support Physical Standby Databases. Oracle Database 10g introduced the ability to utilize SQL Apply to perform rolling upgrades against a primary database and its logical standby database. During a rolling upgrade, the DBA first upgrades the logical standby database to the latest database version, and then performs a switchover to make the standby database the primary and vice versa. The original primary database is then upgraded to the new database version, and a switchover reverses the roles once again. This ensures that the only interruption to database access is the time it takes to perform the switchovers. The good news is that Oracle Database 11g now allows a rolling database upgrade to be performed on a physical standby database by allowing the physical standby to be converted into a logical standby database before the upgrade begins. After the rolling upgrade is completed, the upgraded logical standby is simply reconverted back into a physical standby.

Real-Time Query Capability. Active Data Guard will now allow the execution of real-time queries against a physical standby database, even while the physical standby continues to receive and apply redo transactions via Redo Apply. (In prior releases, the physical standby could only be accessed for reporting if it was opened in read-only mode while the application of redo was suspended.) This means that a physical standby database can be utilized more flexibly for read-only reporting purposes; also, the considerable resources needed to create and maintain the standby environment may now be put to much more effective use.

Expanded DataType and Security Support. Oracle Database 11g now supports XMLType data stored in CLOB datatypes on logical standby databases. In addition, Transparent Data Encryption (TDE) can now support encrypted table data as well as encrypted tablespaces, and Virtual Private Database (VPD) is supported for logical standby databases.

Heterogeneous Data Guard. Finally, it's now possible to set up the primary database site using one operating system (e.g., Oracle Enterprise Linux 4.4) while using another operating system (e.g., Windows 2003 Server) for the standby database site.
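Here's a minimal sketch of the snapshot standby round trip described above, issued on the standby database; it assumes Redo Apply has already been stopped and the standby is mounted, as the conversion requires.

-- Convert the physical standby into a snapshot standby for testing.
ALTER DATABASE CONVERT TO SNAPSHOT STANDBY;

-- ... run tests, let QA make changes ...

-- Discard the test changes and reapply the accumulated production redo.
ALTER DATABASE CONVERT TO PHYSICAL STANDBY;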

Conclusion
Oracle Database 11g continues to improve upon the massive paradigm shift in Oracle Database 10g toward self-managed, self-tuning, and self-healing databases. These automatic database management features will be especially valuable to IT organizations that continue to struggle with ever-larger databases and ever-increasing computing workloads while attempting to answer the demand for lowered costs and value-added service.

Database Migration: A Planned Approach


By Steve Callan

A fairly common event in a database's lifecycle is the migration from version "older" to version "newer." Migrating from one version to another may be as simple as exporting the old and importing into the new, but chances are there is a lot more involved than first meets the eye. It is not uncommon to also incorporate other significant changes such as an operating system change, a schema modification, and changes to related applications. Each change has its own inherent risk, but lumping them together in one operation flies in the face of common sense, even more so without having tested the migration from start to end. Amazingly, this situation occurs all too often.

From a software engineering standpoint, is it safe or a best practice to heap so many significant changes together in one step? Further, wouldn't it seem obvious that you would want to not only practice the migration, but also test the changes before actually applying them to your live/production environment?

Here is something else to consider: break a dependency chain before it breaks you and the migration process. Given the scenario of migrating from Oracle 10g to 11g, changing the underlying operating system from Solaris to Linux, modifying major tables within a schema, and running newer/modified versions of related applications, where are the places you can break the dependency chain? Put another way, which are the safer/well-known/"charted by many others before you" steps, and which are the uncharted/"applies only to you" steps?

Separate the Known from the Unknown


For non-leading-edge/early-adopter implementers of a new version of Oracle, by the time you (and your company) are ready to migrate from an older version of the RDBMS software to a newer one, many others will have gone before you. Likewise, many others have already crossed over to the dark side by adopting Linux as their underlying OS. Consider the combined RDBMS/OS version change as the known. Where your production database lives in terms of version and OS is a logical place to break the dependency chain. In an all-or-nothing, do-or-die migration scenario, failure means losing the time spent on what is perhaps the simplest part of the scenario, namely the hours spent on exporting and importing.

If you can separate the overall migration into at least two distinct stages, you will have broken the dependency chain into smaller chains. The guiding principle/lesson to be learned here is to move from point A to point D via safe, incremental steps.

How your database operates with respect to schema and application interaction is up to you to determine. Until you have thoroughly test-driven schema and application changes, this part of the overall migration process stays in the realm of the unknown. Going live and finding out - for the first time - that the new application/database code results in cascading triggers (thereby bringing an instance to its knees, so to speak) is obviously a poor time to become aware of this situation. Developers and testers using 100 records as a test size when the production environment contains tens of millions of records are hardly conducting a thorough test.

Hiring a DBA? 7 Interview Questions


By Sean Hull
There are nearly an infinite number and combination of questions one can pose to a DBA candidate in an interview. I prefer to lean towards the conceptual, rather than the rote, as questions of this kind emphasize your foundation and thorough understanding. Besides, I've never been one to remember facts and details I can look up in a reference. Therefore, with that in mind, here are some brainteasers for you to ponder.

1. Why is a UNION ALL faster than a UNION? The union operation, you will recall, brings two sets of data together. It will not, however, produce duplicate or redundant rows. To perform this feat of magic, a SORT operation is done on both tables. This is obviously computationally intensive, and uses significant memory as well. A UNION ALL, conversely, just dumps both sets together in no particular order, without worrying about duplicates.

2. What are some advantages to using Oracle's CREATE DATABASE statement to create a new database manually?


Export and Import via a Proactive Approach


With respect to the export and import utilities, you do not have to accept the default parameters. In fact, you owe it to yourself to use quite a few non-default settings, and doing so makes the process easier to perform and saves time when it is time to do it for real. Let's look at the indexfile parameter as a start. There are (at least) four excellent reasons to use indexfile=filename on an import.

The first is that the output documents the storage of tables and indexes (all or some, depending on what was included in the export dump file). Where is your source code for schema creation? If you do not have source code, this parameter (along with a fairly simple query that returns everything else) goes a very long way toward providing that information. The query part is spooling out the contents of all_source or user_source. Code for packages, package bodies, procedures, functions, and triggers will be included in the output. With very little editing, such as adding "create or replace" and cleaning up SQL*Plus artifacts (i.e., feedback, headings, page breaks - if these weren't suppressed to begin with), you are left with the current source for a significant portion of a schema.
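As a hedged illustration of that spooling step (the file name and formatting settings are only an example), a short SQL*Plus script can dump a schema's stored PL/SQL source:

SET HEADING OFF
SET FEEDBACK OFF
SET PAGESIZE 0
SET LINESIZE 200
SET TRIMSPOOL ON
SPOOL schema_source.sql
SELECT text
  FROM user_source
 ORDER BY type, name, line;
SPOOL OFF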

You can script the process to include it in a set of install scripts you deliver with a product. You can put your create database script in CVS for version control, so as you make changes


The second is that if you are going to do any housecleaning or rearranging of tables and indexes, now is the time to edit the indexfile and update tablespace mappings and storage parameters.

If the logical layout is to remain the same, then the third reason comes into play. Separate the tables from the indexes; that is, separate the SQL create statements (one script for tables, the other for indexes). Do as much as you can on the target database before it is time to do the actual migration. Part of this includes creating the same/new tablespaces and running the create tables script. Run the create tables script ahead of time for two reasons: one is to validate the logical layout, the other is to help speed up the import (concepts question: how does import behave if an object already exists or does not exist?).

The fourth reason comes back to the indexes listed in the indexfile. Performance-wise, when doing bulk inserts, is it better to have indexes or not? What happens when a new record is inserted? One or more indexes have to be updated (assuming there is at least a primary key for that record). Oracle's recommendation is that (for large databases) you should hold off on creating indexes until after all the data has been inserted. Again, this comes back to the importance of the indexfile, because it is the link between exporting with "indexes=n" (the default is y) and your being able to re-create the indexes after the data has been loaded.
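To illustrate the deferred-index approach, here's a hedged sketch of re-creating one index (lifted from an edited indexfile) after the data load completes; the index, table, and tablespace names are hypothetical, and the NOLOGGING/PARALLEL options are optional speed-ups that get reset afterward.

-- Build the index only after the bulk import has finished loading rows.
CREATE INDEX orders_cust_ix
  ON orders (customer_id)
  TABLESPACE indx
  NOLOGGING
  PARALLEL 4;

-- Return the index to its normal settings once the build is done.
ALTER INDEX orders_cust_ix NOPARALLEL;
ALTER INDEX orders_cust_ix LOGGING;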

7 Interview Questions continued


or adjustments to it, you can track them like you do changes to software code. You can log the output and review it for errors. You also learn more about the process of database creation, such as what options are available and why.

3. What are three rules of thumb to create good passwords? How would a DBA enforce those rules in Oracle? What business challenges might you encounter? Typical password-cracking software uses a dictionary in the local language, as well as a list of proper names and combinations thereof, to attempt to guess unknown passwords. Since computers can churn through tens of thousands of attempts quickly, this can be a very effective way to break into a database. A good password therefore should not be a dictionary word; it should not be a proper name, birthday, or other obviously guessable information. It should also be of sufficient length, such as eight to 10 characters, including upper- and lowercase letters, special characters, and even alternate characters if possible. Oracle has a facility called password security profiles. Once installed, they can enforce complexity and length rules, as well as other password-related security measures. In the security arena, passwords can be made better, and it is a fairly solvable problem. What about in the real world? Often the biggest challenge is implementing a set of rules like this in the enterprise. There will likely be a lot of resistance, as it creates additional hassles for users of the system who may not be used to thinking about security seriously. Educating business folks about the real risks -- by coming up with real stories of vulnerabilities and break-ins you've encountered on the job, or those discussed on the Internet -- goes a long way towards emphasizing what is at stake.

4. Describe the Oracle Wait Interface, how it works, and what it provides. What are some

It's More than Running exp and imp


The new server using Red Hat is up and running, Oracle RDBMS software has been installed, a new instance is running with all of the new features you can possibly imagine, and "all" that's left to do is migrate the source database (or schema) to the new/target database. It is midnight Friday and you have an eight-hour planned outage/maintenance period available to perform the migration. What can you do prior to midnight Friday to make the migration as painless as possible? Let's look at four areas where planning can make a difference: general administration, the export phase, the import phase, and a handoff phase.

General Administration / Project Planning


You are the one in charge of the database migration. Question: What do you and Hannibal from "The A-Team" have in common? Answer: "I love it when a plan comes together."


Team" have in common? Answer: "I love it when a plan comes together." To help make the plan come together, fire up Visio or PowerPoint and diagram a workflow process. As a minimum, you can take the low-tech route and come up with a timeline. Even if you start with brainstorming and writing down ideas as they come to mind, you will be much better off having everyone on the same sheet of music. Items to consider include: Diagramming the workflow/process, coordination meetings Assign responsibilities among team members, establish roles and responsibilities Create and distribute a contact list, include how to get in touch with other key personnel (managers, system administrators, testing, developers, third party/application providers, customers, account managers, etc.) Hours of operation for Starbucks (some of them open an hour later on Saturdays) After hours building access for contractors (meet at a designated place and time?) Janitorial services - do they alarm the building/office when they are done? There is nothing like an alarm going off, as you walk down the hall, to add a little excitement to the evening. Notification to security/police regarding after hours presence ("Really Officer, we work here, we're not just sitting here looking like we work here") Establishing a transfer point on the file system and ensuring there is enough disk space for the export Acquiring a complete understanding of schema changes (how and when key tables get altered/modified, to include data transformation processes) Establish a work schedule (does every DBA need to be present the entire time, or can schedules be staggered?)

7 Interview Questions continued


limitations? What do the db_file_sequential_read and db_file_scattered_read events indicate? The Oracle Wait Interface refers to Oracle's data dictionary for managing wait events. Selecting from views such as v$system_event and v$session_event gives you event totals through the life of the database (or session). The former holds totals for the whole system, the latter on a per-session basis. The db_file_sequential_read event refers to single-block reads and table accesses by rowid; db_file_scattered_read, conversely, refers to full table scans. It is so named because the blocks are read and scattered into the buffer cache.

5. How do you return the top-N results of a query in Oracle? Why doesn't the obvious method work? Most people think of using the ROWNUM pseudocolumn with ORDER BY. Unfortunately, ROWNUM is determined before the ORDER BY, so you don't get the results you want. The answer is to use a subquery to do the ORDER BY first. For example, to return the top five employees by salary: SELECT * FROM (SELECT * FROM employees ORDER BY salary DESC) WHERE ROWNUM <= 5;

6. Can Oracle's Data Guard be used on Standard Edition, and if so, how? How can you test that the standby database is in sync? Oracle's Data Guard technology is a layer of software and automation built on top of the standby database facility. In Oracle Standard Edition it is possible to maintain a standby database and update it manually. Roughly: put your production database in archivelog mode. Create a hot backup of the database and move it to the standby machine. Then create a standby controlfile on the production machine, and ship that file, along with all the archived redo log files, to the standby server. Once you have all these files assembled, place them in their proper locations, recover the standby database, and


Pre-Export and Export Phase


Aside from a shortage of time, there is very little to prevent you (or the person in charge of export) from practicing the export several times over and ensuring there are no glitches in this part of the plan. Does the export have to be a one-step, A to Z process? How about phasing the export by functional groups? Consider breaking up the export into functional groups: support tables, main tables, altered tables, and historical/static tables. By grouping tables in this manner, you can interleave export and import. Once the export of a group is complete, you can start its corresponding import. It may take two hours to export and four hours to import, but that does not mean it takes six consecutive hours. Why is there a time difference between export and import? Export and import are not one to one. Export will run quite a bit faster than import, and

7 Interview Questions continued


you're ready to roll. From this point on, you must manually ship and manually apply those archived redo logs to stay in sync with production. To test your standby database, make a change to a table on the production server and commit the change. Then manually switch a logfile so those changes are archived. Manually ship the newest archived redo log file, and manually apply it on the standby database. Then open your standby database in read-only mode, and select from your changed table to verify those changes are available. Once you're done, shut down your standby and start it up again in standby mode.

7. What is a database link? What is the difference between a public and a private database link? What is a fixed user database link? A database link allows you to make a connection with a remote database, Oracle or not, and query tables from it, even incorporating those accesses with joins to local tables. A private database link only works for, and is accessible to, the user/schema that owns it. Any user in the database can access a public one. A fixed user link specifies that you will connect to the remote database as one and only one user that is defined in the link. Alternatively, a current user database link will connect as the current user you are logged in as.

As you prepare for your DBA interview, or prepare to give one, we hope these questions provide some new ideas and directions for your study. Keep in mind that there are a lot of directions an interview can go. As a DBA, emphasize what you know, even if it is not the direct answer to the question, and as an interviewer, allow the interview to go in creative directions. In the end, what is important is potential or aptitude, not specific memorized answers. So listen for problem-solving ability and thinking outside the box, and you will surely find or be the candidate for the job.


both can run faster if optimized a bit. Do not forget that indexes are not being exported. Indexes will be re-built after the data is loaded in the target database. How are you driving the exports: interactive mode, or shell scripts and parameter files? Shell scripts should have four key features:

An interview process

Feedback, or a summary of what was entered

Existence checks (including parameter files, the ability to write to the dump and log file locations, and database connectivity)

Bail-out mechanisms ("Do you want to continue?") after key steps or operations


One script can drive the entire export process, and the bail-out points can be used as signals (accompanied by extensive use of echo statements, which denote where you are in the process). A key metric to be determined while practicing and refining the scripts is the time it takes to perform all exports.

If a schema migration is taking place (as opposed to a full database migration), what are the dependencies among schemas? Look for names/items such as build_manager, process_logger, and stage (more germane to a warehouse). "Build_manager" (as an example of a name) may contain common or public functions, procedures, and packages. Process_logger may be the owner of process logs for all schemas (fairly common if you see "pragma autonomous_transaction" in the text of a source; it is a way of capturing errors during failed transactions). Unless the new schema incorporates these external or associated schemas, some or all of these otherwise "left behind" schemas need to be accounted for in the target database.

While the export is taking place, what is happening with the non-exported schemas? You may need to disable connections, change passwords, disable other processes, and suspend crons while the export is taking place. Web application connections tend to be like crabgrass (i.e., hard to kill), and an effective way of stopping them is to change a password. Finally, what is the disposition of the source database, assuming your plan comes together?

For tables undergoing a modification, questions to ask include where, when, and how that takes place. Do the changes occur within the user's schema, or within a temporary or migration schema, followed by "insert into new version of table as select from temp table"? Fully understand how major tables are being changed; you may take for granted what appear to be ash-and-trash "not null" constraints, but application changes may completely rely upon them. In other words, it may not be enough to take care of PK, FK, and unique constraints when trying to rebuild a table on the fly because there was some hiccup in the process.

What about cron and database jobs? How are you migrating/exporting all of those? Something that frequently goes hand-in-hand with cron jobs is email. Is the new server configured for e-mail notification? Are there any database links to create? Do you need logging turned on while the import is taking place? Is it even necessary to log everything being imported?

What about triggers, especially the "for each row" kind? Millions of rows inserted via import equals millions of times one or more triggers fire on a table with that kind of trigger. If the trigger on a table back on the source database already took care of formatting a name, does it need to be fired again during an import? You can be clever and disable quite a few automatic functions to help speed up the import, but don't be too clever by half; that is, do not forget to re-enable whatever it is you disabled. At 5:30 in the morning, having worked all day Friday (in addition to coming back at 11 to get ready for the midnight starting gun), sleep deprivation can introduce a significant amount of human error. If you have to go off your game plan, have someone double-check your work or steps, especially if the object being manipulated is of key importance to a database or schema.

Import Phase
Practice creating schemas and associated physical/logical objects such as tablespaces and datafiles. The end result desired here is no ORA-xxxxx errors whatsoever, and all create scripts should be re-runnable. With respect to import parameter files, ensure fromuser marries up to touser. Using what was gleaned from the indexfile, pre-create tables in the target database.


Post Import Considerations


Did everything work? Breathe a sigh of relief, but the job is not finished. What are you using for a baseline backup once everything is up and running after the migration? Are you transitioning from export/cold/hot backups to RMAN? Has RMAN backup and recovery been practiced yet?

Plan A, obviously, is a success from start to finish. However, despite all best intentions and planning, what is Plan B? What if, for some undeterminable reason, applications fail to work properly after the migration? Thorough testing minimizes this, but what if no large-scale testing took place? What does it take to revert to the source database? Do you have the time to try again? A second attempt will not take as long, assuming you trust what took place in the export(s).

Do not assume everyone knows or understands what just took place. For example, do customer support personnel know how to point desktop CRM applications to the new database? Or are they opening trouble tickets a few weeks after the fact to complain that their changes are not being made or are not taking effect? What may be blindingly obvious to you as a DBA may be completely obscure to people who don't speak "database."

In Closing
The tips and steps covered in this article are based on real events, places, and persons. I have personally witnessed the customer service rep complaining about how his changes were not showing up, and it was because he had no idea whatsoever about pointing his desktop CRM application to the new database. Was that the responsibility of the DBA or the rep's manager? I have seen key tables have problems with the insertion of transformed data, and workarounds such as "create table as select" from the stage or transformation table implemented; but alas, the stage table did not have all of the not null constraints that the new "real" table did, and there goes the Web application down the drain. The sad truism about a database migration is that if you do not have the time to test beforehand and wind up failing (the reason why is immaterial), it is amazing how time magically appears to perform testing before the second attempt. The tips mentioned in this article should give you a good perspective regarding some of the external factors that come into play during a migration.

