
Using SQL*Plus

Learning objective

After completing this topic, you should be able to recognize how to use SQL*Plus
commands to display the structure of a table and to perform editing and file
management tasks.

1. Understanding SQL*Plus
Disclaimer
Although certain aspects of the Oracle 11g Database are case and spacing insensitive, a
common coding convention has been used throughout all aspects of this course.
This convention uses lowercase characters for schema, role, user, and constraint names,
and for permissions, synonyms, and table names (with the exception of the DUAL table).
Lowercase characters are also used for column names and user-defined procedure,
function, and variable names shown in code.
Uppercase characters are used for Oracle keywords and functions, for view, table,
schema, and column names shown in text, for column aliases that are not shown in
quotes, for packages, and for data dictionary views.
The spacing convention requires one space after a comma and one space before and
after operators that are not Oracle-specific, such as +, -, /, and <. There should be no
space between an Oracle-specific keyword or operator and an opening bracket, between
a closing bracket and a comma, between the last part of a statement and the closing
semicolon, or before a statement.
String literals in single quotes are an exception to all of the convention rules provided
here. Please use this convention for all interactive parts of this course.
End of Disclaimer
SQL is a command language for communicating with the Oracle server from any tool or
application. Oracle SQL contains many extensions to the ANSI standard.
When you enter a SQL statement, it is stored in a part of memory called the SQL buffer
and remains there until you enter a new SQL statement. SQL*Plus is an Oracle tool that
recognizes and submits SQL statements to Oracle Database 11g for execution. It
contains its own command language.
SQL

can be used by a range of users, including those with little or no programming experience

is a nonprocedural language

reduces the amount of time required for creating and maintaining systems

is an English-like language
SQL*Plus

accepts ad hoc entry of statements

accepts SQL input from files

provides a line editor for modifying SQL statements

controls environmental settings

formats query results into basic reports

accesses local and remote databases


You can compare SQL and SQL*Plus based on these features:

its purpose

its definition

how data is manipulated

how commands are entered

how it handles commands

whether commands can be abbreviated

how commands are executed

how data is formatted


its purpose
SQL is a language for communicating with the Oracle server to access data, whereas
SQL*Plus recognizes SQL statements and sends them to the server.
its definition
SQL is based on American National Standards Institute (ANSI) standard SQL, whereas
SQL*Plus is one of the Oracle-proprietary interfaces for executing SQL statements.
how data is manipulated
SQL manipulates data and table definitions in the database whereas SQL*Plus does not
allow manipulation of values in the database.
how commands are entered
SQL statements are entered into the SQL buffer on one or more lines, whereas SQL*Plus
commands are entered one line at a time and are not stored in the SQL buffer.
how it handles commands

SQL does not have a continuation character, whereas SQL*Plus uses a dash as a
continuation character if the command is longer than one line.
whether commands can be abbreviated
SQL commands cannot be abbreviated, whereas SQL*Plus commands can.
how commands are executed
SQL uses a termination character to execute commands immediately, but SQL*Plus does
not require termination characters to do this.
how data is formatted
SQL uses functions to perform some formatting, whereas SQL*Plus uses commands to
format data.
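As a brief illustration of this difference (a sketch only; it assumes the standard HR schema EMPLOYEES table), SQL formats data through functions inside the statement, whereas SQL*Plus formats data through commands issued outside the statement:

```sql
-- SQL: formatting performed by a function in the statement itself
SELECT TO_CHAR(salary, '$99,999') FROM employees;

-- SQL*Plus: formatting performed by a command, separate from the statement
COLUMN salary FORMAT $99,999
SELECT salary FROM employees;
```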
SQL*Plus is an environment in which you can

execute SQL statements to retrieve, modify, add, and remove data from the database

format, perform calculations on, store, and print query results in the form of reports

create script files to store SQL statements for repeated use in the future
SQL*Plus commands can be divided into seven main categories:

environment

format

file manipulation

execution

edit

interaction

miscellaneous
environment
Environment commands affect the general behavior of SQL statements for the session.
format
Format commands format query results.
file manipulation
File manipulation commands save, load, and run script files.
execution
Execution commands send SQL statements from the SQL buffer to the Oracle server.
edit
Edit commands modify SQL statements in the buffer.
interaction

Interaction commands create and pass variables to SQL statements, print variable values,
and print messages to the screen.
miscellaneous
Other SQL*Plus commands connect to the database, manipulate the SQL*Plus
environment, and display column definitions.
The way in which you invoke SQL*Plus depends on the type of operating system or
Windows environment that you are running.
To log in from a Windows environment you

select Start - Programs - Oracle - Application Development - SQL*Plus

enter the username, password, and database name


To log in from a command-line environment you

log on to your machine

enter this sqlplus command


In the syntax, username is your database username and password is your database
password; your password is visible if you enter it here. @database is the database
connect string.
sqlplus [username[/password[@database]]]

Note
To ensure the integrity of your password, you do not enter it at the operating
system prompt. Instead, you enter only your username and you then enter your
password at the password prompt.
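For example, a safer login sequence might look like this (the username hr and connect string orcl are illustrative, not required values):

```sql
sqlplus hr@orcl
Enter password:
```

Because the password is typed at the password prompt rather than on the command line, it is not echoed to the screen or visible in the process list.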
You can optionally change the look of the SQL*Plus environment by using the "SQL Plus"
Properties dialog box.
In the SQL Plus window, you right-click the title bar and in the context menu that appears,
you select Properties. You can then use the Colors tab of the "SQL Plus" Properties
dialog box to set the Screen Text and the Screen Background.
In SQL*Plus, you can display the structure of a table using the DESCRIBE command. The
result of the command is a display of column names and data types, as well as an
indication of whether a column must contain data.
In this syntax, tablename is the name of any existing table, view, or synonym that is
accessible to the user.

DESC[RIBE] tablename
To describe the DEPARTMENTS table, for example, you use this command. It displays
information about the structure of the table.
In this example

Null? specifies whether a column must contain data; NOT NULL indicates that a column
must contain data

Type displays the data type for a column

SQL> DESCRIBE DEPARTMENTS

 Name                          Null?    Type
 ----------------------------- -------- ------------
 DEPARTMENT_ID                 NOT NULL NUMBER(4)
 DEPARTMENT_NAME               NOT NULL VARCHAR2(30)
 MANAGER_ID                             NUMBER(6)
 LOCATION_ID                            NUMBER(4)

The basic data types in SQL*Plus are

NUMBER(p,s)

VARCHAR2(s)

DATE

CHAR(s)
NUMBER(p,s)
The data type NUMBER(p,s) is a numeric value with a maximum total number of digits p
(the precision), of which s digits can be to the right of the decimal point (the scale).
VARCHAR2(s)
The data type VARCHAR2(s) is a variable-length character value of maximum size s.
DATE
The data type DATE is a date and time value between January 1, 4712 B.C. and
December 31, 9999 A.D.

CHAR(s)
The data type CHAR(s) is a fixed-length character value of size s.
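To see these data types together, a hypothetical table could be created as follows (the table and column names are invented for this sketch and do not come from the course schema):

```sql
CREATE TABLE sample_items (
  item_id     NUMBER(6),      -- up to 6 digits, no decimal places
  price       NUMBER(8,2),    -- up to 8 digits, 2 to the right of the decimal point
  item_name   VARCHAR2(30),   -- variable-length string, at most 30 characters
  item_code   CHAR(3),        -- fixed-length string, always 3 characters
  created_on  DATE            -- date and time value
);
```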

Question
Which statements accurately describe SQL*Plus?
Options:
1.

It is an ANSI-standard language

2.

It is an Oracle-proprietary environment

3.

Its commands do not allow manipulation of database values

4.

Its keywords cannot be abbreviated

Answer
SQL*Plus is an Oracle-proprietary environment and its commands do not allow
manipulation of database values.
Option 1 is incorrect. SQL, not SQL*Plus, is an ANSI-standard language for
communicating with the Oracle server to access data.
Option 2 is correct. SQL*Plus is an Oracle-proprietary environment for executing
SQL statements. SQL*Plus has the ability to recognize SQL statements and send
them to the Oracle server.
Option 3 is correct. Although SQL statements can manipulate data and table
definitions in a database, SQL*Plus commands cannot.
Option 4 is incorrect. SQL*Plus keywords can be abbreviated, but SQL commands
cannot. For example, both DESCRIBE and DESC are acceptable commands for
displaying the structure of a table.

Question
Which statements accurately describe SQL?
Options:
1.

It has a continuation character

2.

It uses a termination character to execute commands immediately

3.

It uses functions to perform some formatting

4.

Its commands are not stored in the SQL buffer

Answer

SQL uses a termination character to execute commands immediately and it uses


functions to perform some formatting.
Option 1 is incorrect. Although the Oracle-proprietary interface of SQL*Plus has a
continuation character, SQL does not.
Option 2 is correct. SQL uses a termination character to execute commands
immediately. SQL*Plus does not require a termination character to execute
commands immediately.
Option 3 is correct. SQL uses functions to perform some formatting. For example,
SQL provides functions that can be used to format the appearance of returned text
and numerical data.
Option 4 is incorrect. In SQL*Plus, commands are entered one line at a time and
are not stored in the SQL buffer. SQL, on the other hand, is entered into the SQL
buffer on one or more lines.

2. SQL*Plus commands
SQL*Plus commands are entered one line at a time and are not stored in the SQL buffer.
When using SQL*Plus commands, you need to keep in mind that if you press Enter
before completing a command, SQL*Plus prompts you with a line number. Also, you can
terminate the SQL buffer either by entering one of the terminator characters (a semicolon
or a slash) or by pressing Enter twice, whereupon the SQL prompt appears.
This table contains selected SQL*Plus editing commands.

A[PPEND] text - adds text to the end of the current line

C[HANGE]/old/new - changes old text to new in the current line

CL[EAR] BUFF[ER] - deletes all lines from the SQL buffer

DEL - deletes the current line

DEL n - deletes line n

DEL m n - deletes lines m to n inclusive

You can enter only one SQL*Plus command for each SQL prompt. SQL*Plus commands
are not stored in the buffer. To continue a SQL*Plus command on the next line, you end
the first line with a hyphen (-). This table contains more SQL*Plus editing commands.

I[NPUT] - inserts an indefinite number of lines after the current line

I[NPUT] text - inserts a line consisting of text after the current line

L[IST] - lists all lines in the SQL buffer

L[IST] n - lists line n and makes it the current line

L[IST] m n - lists lines m to n

R[UN] - displays and runs the current SQL statement in the buffer

n - makes line n the current line

n text - replaces line n with text

0 text - inserts a line before line 1
You use the L[IST] command to display the contents of the SQL buffer. The asterisk (*)
beside line 2 in the buffer indicates that line 2 is the current line. Any edits that you make
apply to the current line.
LIST
1 SELECT last_name
2* FROM employees
You change the current line by entering the number n of the line that you want to edit;
the new current line is then displayed.
1
1* SELECT last_name

You use the A[PPEND] command to add text to the current line; the newly edited line is
displayed.
A , job_id
1* SELECT last_name, job_id
You then verify the new contents of the buffer by using the LIST command.
LIST
1 SELECT last_name, job_id
2* FROM employees

Note
Many SQL*Plus commands, including LIST and APPEND, can be abbreviated to
their first letter. LIST can be abbreviated to L and APPEND can be abbreviated to
A.
When using the CHANGE command, you
- use L[IST] to display the contents of the buffer.
LIST
1* SELECT * from employees
- use the C[HANGE] command to alter the contents of the current line in the SQL buffer.
For example, you can replace the employees table with the departments table; the new
current line is displayed.
c/employees/departments
1* SELECT * from departments
- use the L[IST] command to verify the new contents of the buffer.
LIST
1* SELECT * from departments
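The other editing commands work in the same interactive style. For example, this sketch (continuing from the buffer contents above; the WHERE clause and its exact prompt layout are illustrative) shows I[NPUT] adding a line after the current line and DEL removing it again:

```sql
LIST
  1* SELECT * from departments
I WHERE location_id = 1700
LIST
  1  SELECT * from departments
  2* WHERE location_id = 1700
DEL 2
LIST
  1* SELECT * from departments
```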
You use SQL statements to communicate with the Oracle server and SQL*Plus
commands to control the environment, format query results, and manage files.
Some of the SQL*Plus file commands are

SAV[E] filename [.ext] [REP[LACE]|APP[END]]

GET filename [.ext]

STA[RT] filename [.ext]

@ filename

ED[IT]

ED[IT] [filename[.ext]]

SPO[OL] [filename[.ext] | OFF | OUT]

EXIT
SAV[E] filename [.ext] [REP[LACE]|APP[END]]
The SAV[E] filename [.ext] [REP[LACE]|APP[END]] command saves the current
contents of the SQL buffer to a file. You use APPEND to add to an existing file and
REPLACE to overwrite an existing file. The default extension is .sql.
GET filename [.ext]
The GET filename [.ext] command writes the contents of a previously saved file to the
SQL buffer. The default extension for the file name is .sql.
STA[RT] filename [.ext]
The STA[RT] filename [.ext] command runs a previously saved command file.
@ filename
The @ filename command runs a previously saved command file, the same as START.
ED[IT]
The ED[IT] command invokes the editor and saves the buffer contents to a file named
afiedt.buf.
ED[IT] [filename[.ext]]
The ED[IT] [filename[.ext]] command invokes the editor to edit the contents of a
saved file.
SPO[OL] [filename[.ext] | OFF | OUT]
The SPO[OL] [filename[.ext] | OFF | OUT] command stores query results in a file.
OFF closes the spool file. OUT closes the spool file and sends the file results to the
printer.
EXIT
The EXIT command quits SQL*Plus.
You use the SAVE command to store the current contents of the buffer in a file. In this
way, you can store frequently used scripts for use in the future.
LIST
1 SELECT last_name, manager_id, department_id
2* FROM employees
SAVE my_query
Created file my_query

You use the START command to run a script in SQL*Plus. Alternatively, you can also use
the symbol "@" to run a script, for example @my_query.
LIST
1 SELECT last_name, manager_id, department_id
2* FROM employees
SAVE my_query
Created file my_query
START my_query

LAST_NAME                 MANAGER_ID DEPARTMENT_ID
------------------------- ---------- -------------
King                                            90
Kochhar                          100            90
...

107 rows selected.
You use the EDIT command to edit an existing script. This will open an editor with the
script file in it.
EDIT my_query
When you have made the changes, you quit the editor to return to the SQL*Plus
command line.
SELECT last_name, manager_id, department_id
FROM employees
/

Note
The / character is a delimiter that signifies the end of the statement. When
encountered in a file, SQL*Plus runs the statement prior to this delimiter. The
delimiter must be the first character of a new line immediately following the
statement.
Most PL/SQL programs perform input and output through SQL statements, to store
data in database tables or to query those tables. All other PL/SQL I/O is performed
through APIs that interact with other programs.
For example, the DBMS_OUTPUT package has procedures such as PUT_LINE. To see the
result outside of PL/SQL requires another program, such as SQL*Plus, to read and
display the data passed to DBMS_OUTPUT.
SQL*Plus does not display DBMS_OUTPUT data unless you first issue this SQL*Plus
command.

SET SERVEROUTPUT ON

Note
SIZE sets the number of bytes of the output that can be buffered within the Oracle
Database server. The default is UNLIMITED. n cannot be less than 2000 or
greater than 1,000,000.
The DBMS_OUTPUT line length limit is increased from 255 bytes to 32,767 bytes.
Resources are not preallocated when SERVEROUTPUT is set. And because there is no
performance penalty, you use UNLIMITED unless you want to conserve physical memory.
SET SERVEROUT[PUT] {ON | OFF} [SIZE {n | UNL[IMITED]}]
[FOR[MAT] {WRA[PPED] | WOR[D_WRAPPED] | TRU[NCATED]}]
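A minimal sketch of this command in use might look like the following (the message text is illustrative):

```sql
SET SERVEROUTPUT ON SIZE UNLIMITED FORMAT WORD_WRAPPED
BEGIN
  DBMS_OUTPUT.PUT_LINE('Hello from PL/SQL');
END;
/
```

With SERVEROUTPUT enabled, SQL*Plus reads the buffered DBMS_OUTPUT data after the block completes and displays the line Hello from PL/SQL; without it, the block runs but the output is discarded.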
The SPOOL command stores query results in a file, or optionally sends the file to a printer.
The SPOOL command has been enhanced: you can now append to or replace an existing
file, whereas previously you could use SPOOL only to create and replace a file.
REPLACE is the default.
To spool the output generated by commands in a script without displaying the output on
the screen, you use SET TERMOUT OFF. SET TERMOUT OFF does not affect output from
commands that run interactively.
SPO[OL] [file_name[.ext] [CRE[ATE] | REP[LACE] |
APP[END]] | OFF | OUT]
You must use quotes around file names containing white space. To create a valid HTML
file using SPOOL APPEND commands, you must use PROMPT or a similar command to
create the HTML page header and footer.
The SPOOL APPEND command does not parse HTML tags. You set
SQLPLUSCOMPAT[IBILITY] to 9.2 or earlier to disable the CREATE, APPEND, and SAVE
parameters.
The options that can be used with the SQL*Plus SPOOL command are

file_name[.ext]

CRE[ATE]

REP[LACE]

APP[END]

OFF

OUT
file_name[.ext]
The file_name[.ext] option spools output to the specified file name.
CRE[ATE]
The CRE[ATE] option creates a new file with the name specified.
REP[LACE]
The REP[LACE] option replaces the contents of an existing file. If the file does not exist,
REPLACE creates the file.
APP[END]
The APP[END] option adds the contents of the buffer to the end of the file that you specify.
OFF
The OFF option stops the spooling.
OUT
The OUT option stops spooling and sends the file to your computer's standard or default
printer.
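Putting these options together, a spooling session might look like this sketch (the file name dept_report.txt is illustrative):

```sql
SPOOL dept_report.txt CREATE
SELECT department_name FROM departments;
SPOOL OFF

SPOOL dept_report.txt APPEND
SELECT location_id FROM departments;
SPOOL OFF
```

The first SPOOL OFF closes the file; reopening it with APPEND adds the second query's results to the end of the same file instead of overwriting it.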
When you use the AUTOTRACE command, the EXPLAIN option shows the query execution
path by performing an EXPLAIN PLAN.
You use the STATISTICS option to display SQL statement statistics. The formatting of
your AUTOTRACE report may vary depending on the version of the server to which you
are connected and the configuration of the server.
The DBMS_XPLAN package provides an easy way to display the output of the EXPLAIN
PLAN command in several predefined formats.
SET AUTOT[RACE] {ON | OFF | TRACE[ONLY]} [EXP[LAIN]]
[STAT[ISTICS]]
The AUTOTRACE command displays a report after the successful execution of SQL DML
statements, such as SELECT, INSERT, UPDATE, or DELETE.
The report can now include execution statistics and the query execution path.
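As a sketch, you might enable tracing for a single query and then turn it off again (the query is illustrative; TRACEONLY EXPLAIN suppresses the query rows and shows only the execution plan):

```sql
SET AUTOTRACE TRACEONLY EXPLAIN
SELECT * FROM departments WHERE department_id = 10;
SET AUTOTRACE OFF
```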

Question
Which SQL*Plus command can be used to delete a specified range of lines of
code from the SQL buffer?
Options:

1.

CL BUFF

2.

DEL

3.

DEL m n

4.

DEL n

Answer
The SQL*Plus command that can be used to delete a specified range of lines of
code from the SQL buffer is DEL m n.
Option 1 is incorrect. The CL BUFF command, short for CLEAR BUFFER, is used to
delete all lines from the SQL buffer.
Option 2 is incorrect. The DEL command is used to delete the current line from the
SQL buffer.
Option 3 is correct. The DEL m n command is used to delete a specified range of
lines, lines m to n inclusive, from the SQL buffer.
Option 4 is incorrect. The DEL n command is used to delete line n from the SQL
buffer.

Question
Which SQL*Plus command is used to run a previously saved script?
Options:
1.

EDIT filename [.ext]

2.

GET filename [.ext]

3.

SAVE filename [.ext]

4.

START filename [.ext]

Answer
The SQL*Plus command that is used to run a previously saved script is START
filename [.ext].
Option 1 is incorrect. The EDIT filename [.ext] command is used to invoke
the editor so the contents of a saved file can be edited.
Option 2 is incorrect. The GET filename [.ext] command is used to write the
contents of a previously saved file to the SQL buffer.

Option 3 is incorrect. The SAVE filename [.ext] command is used to save the
contents of the SQL buffer to a file.
Option 4 is correct. The START filename [.ext] command is used to run a
script in SQL*Plus. The @ symbol can also be used to run a script.

3. Creating a SQL*Plus script


You want to use SQL*Plus to connect to your Oracle database and to execute a SQL
script you've created and saved.
You double-click the SQL Plus icon on your desktop.
You enter this command at the prompt.
SQL*Plus: Release 11.1.0.6.0 - Production on Wed Jan 30 03:54:17 2008
Copyright (c) 1982, 2007, Oracle. All rights reserved.
Enter user-name: hr/hr@localhost.easynomadtravel.com/orcl
Then you press Enter to connect and display the SQL prompt.
Connected to:
Oracle Database 11g Enterprise Edition Release 11.1.0.6.0 Production
With the Partitioning, OLAP, Data Mining and Real Application
Testing options
SQL>
Next you create a SQL script named customer_info.sql that describes and reports
the customers' first and last names from the CUSTOMERS table.
To do this, you open Notepad and enter this statement.
SELECT cust_first_name, cust_last_name FROM customers;
You save the file as C:\customer_info.sql.
You use this command from the SQL> prompt in SQL*Plus to execute the file:
@C:\customer_info.sql

The result displays a list of customer first and last names.
SQL> @C:\customer_info.sql
SELECT cust_first_name, cust_last_name FROM customers

CUST_FIRST_NAME      CUST_LAST_NAME
-------------------- --------------------
Harry dean           Kinski
Kathleen             Garcia
Sean                 Olin
Gerard               Dench
Gerard               Altman
Maureen              de Funes
Clint                Chapman

SQL>
You exit SQL*Plus.

SQL> exit

Summary
SQL*Plus is an execution environment that you can use to send SQL commands to the
database server, and to edit and save SQL commands. You can execute commands from
the SQL prompt or from a script file.
SQL*Plus commands are entered one line at a time and are not stored in the SQL buffer.
SQL*Plus editing commands are APPEND, CHANGE, CLEAR, DELETE, INPUT, LIST and
RUN. SQL*Plus file commands control the environment, format query results, and manage
files. SQL*Plus file commands are SAVE, GET, START, EDIT, SPOOL, and EXIT. You use
the SQL*Plus SERVEROUTPUT command to display DBMS_OUTPUT data stored by
PL/SQL programs. The SPOOL command stores query results in a file, or sends the file to
a printer or allows you to append to an existing file. The AUTOTRACE command displays a
report after the successful execution of SQL DML statements, such as SELECT, INSERT,
UPDATE, or DELETE.
You can create and save a SQL script which you can then execute from SQL*Plus.

Use SQL Developer


Learning objective

After completing this topic, you should be able to recognize the steps for using SQL
Developer and SQL Worksheet to connect to a database, browse and export database
objects, and use SQL*Plus to enter and execute SQL and PL/SQL statements.

1. Understanding SQL Developer


Oracle SQL Developer is a free graphical tool designed to improve your productivity and
simplify the development of everyday database tasks. With just a few clicks, you can
easily create and debug stored procedures, test SQL statements, and view optimizer
plans.
SQL Developer, the visual tool for database development, simplifies

browsing and managing database objects

executing SQL statements and scripts

editing and debugging PL/SQL statements

creating reports
You can connect to any target Oracle database schema using the standard Oracle
database authentication. When connected, you can perform operations on objects in the
database.
Oracle SQL Developer does not require an installer. To install SQL Developer, you need
an unzip tool.
To install SQL Developer, you create a folder as local drive:\SQL Developer. Then you
download the SQL Developer kit from the Oracle SQL Developer Home page at
www.oracle.com. Finally, you unzip the downloaded SQL Developer kit into the folder
created at the start.
To start SQL Developer, you go to local drive:\SQL Developer and double-click
sqldeveloper.exe.
SQL Developer has two main navigation tabs:

Connections

Reports
Connections

By using the Connections tab, you can browse database objects and users to which you
have access.
Reports
By using the Reports tab, you can run predefined reports or create and add your own
reports.
SQL Developer uses the left side for navigation to find and select objects, and the right
side to display information about selected objects.
You can customize many aspects of the appearance and behavior of SQL Developer by
setting preferences.
The menus at the top of the SQL Developer user interface contain standard entries, plus
entries for features specific to the tool:

View

Navigate

Run

Debug

Source

Migration

Tools
View
The View menu contains options that affect what is displayed in the SQL Developer
interface.
Navigate
The Navigate menu contains options for navigating to panes and for execution of
subprograms.
Run
The Run menu contains the Run File and Execution Profile options that are relevant
when a function or procedure is selected.
Debug
The Debug menu contains options that are relevant when a function or procedure is
selected.
Source
The Source menu contains options for use when editing functions and procedures.
Migration
The Migration menu enables you to migrate from another database, such as Microsoft
SQL Server and Microsoft Access, to an Oracle database.

Tools
The Tools menu invokes SQL Developer tools such as SQL*Plus, Preferences, and SQL
Worksheet.
A connection is a SQL Developer object that specifies the necessary information for
connecting to a specific database as a specific user of that database. To use SQL
Developer, you must have at least one database connection, which may be an existing
connection or one that you create or import.
You can create and test connections for multiple databases and for multiple schemas.
By default, the tnsnames.ora file is located in the $ORACLE_HOME/network/admin
directory. But it can also be in the directory specified by the TNS_ADMIN environment
variable or registry value.
When you start SQL Developer and display the New/Select Database Connection
window, SQL Developer automatically reads any connections defined in the
tnsnames.ora file on your system.

Note
On Windows systems, if the tnsnames.ora file exists, but its connections are not
being used by SQL Developer, you define TNS_ADMIN as a system environment
variable.
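For reference, a typical tnsnames.ora entry has the following shape (the alias, host, and service name are placeholders for this sketch, not values taken from this course):

```
ORCL =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = db-host.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl)
    )
  )
```

Each entry maps a net service name (here ORCL) to the connection details that SQL Developer and SQL*Plus can use when you connect with that alias.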
You can export connections to an XML file so that you can reuse them later.
You can create additional connections as different users to the same database or to
connect to the different databases.
To create a database connection, you start SQL Developer. Then, on the Connections
tabbed page, you right-click Connections and select New Connection.
You enter the connection name, username, password, host name, port, and system
identifier (SID) or Service name for the database that you want to connect to.
On the Oracle tabbed page, you enter the

Hostname - the host system for the Oracle database

Port - the listener port

SID - the database name

Service name - the network service name for a remote database connection

If you select the Save Password check box, the password is saved to an XML file.
Therefore, the next time you access the SQL Developer connection, you will not be
prompted for the password.
The other tabbed pages enable you to set up connections to non-Oracle databases.
You click Test to make sure that the connection has been set correctly.
And you click Connect.
The new database connection appears in the navigation pane.
You can now use the database navigator to browse through many objects in a database
schema including Tables, Views, Indexes, Packages, Procedures, Triggers, and Types.
You can see the definition of the objects broken into tabs of information that is pulled out
of the data dictionary.
If you select a table in the Connections navigator, the details about columns, constraints,
grants, statistics, triggers, and more, are displayed on an easy-to-read tabbed page.
For example, to see the definition of the CUSTOMERS table, you expand the Connections
node in the Connections navigator. Then you expand Tables and click CUSTOMERS.
Using the Data tab, you can enter new rows, update data, and commit these changes to
the database.
You can export DDL and data using the Export utility. For a selected database
connection, you can export some or all objects of one or more types of database objects
to a file containing SQL data definition language (DDL) statements to create these
objects.
To specify options for the export operation, you select Tools - Export DDL (and Data).
You select the objects to export on the Export tabbed page of the Export page.
You specify the objects or types of objects to export on the Filter Objects tabbed page.
You type the name of the file that you want the data saved to, in the File field, or click
Browse to select a directory for the file.
Then you click Apply to proceed with the export.
The export procedure starts.

After the export is completed, you can examine the contents of the exported file.
In this example, the data for views is exported to the Exported_Data.sql file.
You can export table data by using the submenu when you right-click the Tables object in
the navigator.
The export utility offers you wide flexibility in the different formats that you can export to.
You can import data from an Excel spreadsheet using the Import Data submenu.
You can also export data using the Export Data submenu.
You can export data using the Export DDL submenu.
You can also use the Migration tool to export and import data from other data sources.

Question
Which statements accurately describe SQL Developer?
Options:
1.

It allows you to browse and manage database objects

2.

It does not require an installer

3.

It is a text-based tool

4.

It is an extension to SQL, used specifically for development

Answer
SQL Developer allows you to browse and manage database objects and does not
require an installer.
Option 1 is correct. Oracle SQL Developer is a free graphical tool designed to
improve productivity and simplify the development of everyday database tasks.
These tasks include browsing and managing database objects.
Option 2 is correct. Oracle SQL Developer does not require an installer. To install
SQL Developer, you need an unzip tool.
Option 3 is incorrect. SQL Developer is a graphical tool that can be used to
connect to an Oracle database and complete everyday database tasks.
Option 4 is incorrect. Although SQL Developer allows you to execute SQL
statements and scripts, it is not an extension to SQL.

Question
In Oracle Database 11g, where can the tnsnames.ora file be located?
Options:
1.

In the $ORACLE_HOME/bin directory

2.

In the $ORACLE_HOME/network/admin directory

3.

In the $ORACLE_HOME/tns_admin directory

4.

In the directory specified by the TNS_ADMIN environment variable

Answer
In Oracle Database 11g, the tnsnames.ora file can be located in the
$ORACLE_HOME/network/admin directory or in the directory specified by the
TNS_ADMIN environment variable.
Option 1 is incorrect. The tnsnames.ora file is not located in the
$ORACLE_HOME/bin directory. The bin directory contains a number of required
files and tools, such as SQL*Plus.
Option 2 is correct. By default, the tnsnames.ora file is located in the
$ORACLE_HOME/network/admin directory.
Option 3 is incorrect. The directory $ORACLE_HOME/tns_admin is not created by
default when Oracle Database 11g is installed, and the tnsnames.ora file is not
located within it.
Option 4 is correct. The tnsnames.ora file can be stored in the directory
specified by the TNS_ADMIN environment variable or registry value.

2. Using SQL Worksheet


When you connect to a database, a SQL Worksheet window for that connection is
automatically opened. You can use SQL Worksheet to enter and execute SQL, PL/SQL,
and SQL*Plus statements.
SQL Worksheet supports SQL*Plus statements to a certain extent. SQL*Plus statements
that are not supported by SQL Worksheet are ignored and not passed to the database.
You can specify any actions that can be processed by the database connection
associated with the worksheet, such as

creating a table

inserting data

creating and editing a trigger

selecting data from a table

saving the selected data to a file

saving and running SQL scripts
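
As a brief sketch of several of these actions together (the table name demo_tab is hypothetical, chosen only for this example), a worksheet session might contain:

CREATE TABLE demo_tab
  (id   NUMBER,
   name VARCHAR2(30));

INSERT INTO demo_tab (id, name)
VALUES (1, 'first row');

SELECT id, name
FROM demo_tab;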


You can display a SQL worksheet by either

selecting Tools - SQL Worksheet

clicking the Open SQL Worksheet icon


You can set your preferences so that the SQL Worksheet is opened automatically when
you have a database connection. To begin, you select Tools - Preferences.
You expand Database and click Worksheet Parameters. Then you check Open a
Worksheet on connect.
You may want to use the shortcut keys or icons to perform certain tasks such as
executing a SQL statement, running a script, and viewing the history of SQL statements
that you have executed.
You can use these icons on the SQL Worksheet toolbar:

Execute Statement

Run Script

Commit

Rollback

Cancel

SQL History

Execute Explain Plan

Autotrace

Clear
Execute Statement
The Execute Statement icon enables you to execute the statement at the cursor in the
Enter SQL Statement box. You can use bind variables in the SQL statements but not
substitution variables.
Run Script
The Run Script icon enables you to execute all statements in the Enter SQL Statement
box using Script Runner. You can use substitution variables in the SQL statements but not
bind variables.
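
To illustrate the difference (assuming the HR sample schema's employees table, which is not one of this course's examples), compare:

-- Execute Statement (F9) prompts for the bind variable :emp_id
SELECT last_name, salary
FROM employees
WHERE employee_id = :emp_id;

-- Run Script (F5) prompts for the substitution variable &dept_id
SELECT last_name, salary
FROM employees
WHERE department_id = &dept_id;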

Commit
The Commit icon enables you to write any changes to the database and end the
transaction.
Rollback
The Rollback icon enables you to discard any changes to the database, without writing
them to the database, and end the transaction.
Cancel
The Cancel icon enables you to stop the execution of any statements currently being
executed.
SQL History
The SQL History icon enables you to display a dialog box with information about the SQL
statements that you have executed.
Execute Explain Plan
The Execute Explain Plan icon enables you to generate the execution plan, which you
can see by clicking the Explain tab.
Autotrace
The Autotrace icon enables you to generate trace information for the statement.
Clear
The Clear icon enables you to erase the statement or statements in the Enter SQL
Statement box.
In SQL Worksheet, you can use the Enter SQL Statement box to enter a single SQL
statement or multiple SQL statements. For a single statement, the semicolon at the end is
optional.
When you enter the statement, the SQL keywords are automatically highlighted. To
execute a SQL statement, you ensure that your cursor is within the statement and click
the Execute Statement icon. Alternatively, you can press the F9 key.
In the example, because there are multiple SQL statements, the first statement is
terminated with a semicolon. The cursor is in the first statement and so when the
statement is executed, results corresponding to the first statement are displayed in the
Results box.
To execute multiple SQL statements and see the results, you click the Run Script icon.
Alternatively, you can press the F5 key.
The results corresponding to all the statements are displayed in the Script Output box.
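
For example, assuming the OE sample schema used later in this course, two statements entered together are each terminated with a semicolon and executed with Run Script:

SELECT COUNT(*)
FROM customers;

SELECT COUNT(*)
FROM orders;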

You can save your SQL statements from the SQL Worksheet into a text file. To save the
contents of the Enter SQL Statement text box, you click the Save icon or select File - Save.
In the Windows Save dialog box, you enter a file name and the location where you want
the file saved. Then you click Save.
After you save the contents to a file, the Enter SQL Statement text box displays a tabbed
page of your file contents. You can have multiple files open at once. Each file displays as
a tabbed page.
You can select a default path to look for scripts and to save scripts.
You select Tools - Preferences and on the Preferences page you expand Database and
then select Worksheet Parameters. You then enter a path in the "Select default path to
look for scripts" field.
To run a saved SQL script, you use the @ command in the Enter SQL Statement window,
followed by the location and name of the file that you want to run.
And then you click the Run Script icon.
The results from running the file are displayed on the Script Output tabbed page.
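
For instance, for a script saved as C:\Oracle Data\Scripts\ItemsGT200.sql (the path and file name used in the walkthrough later in this course), the command would be:

@C:\Oracle Data\Scripts\ItemsGT200.sql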
You can also save the script output by clicking the Save icon on the Script Output tabbed
page. The Windows File Save dialog box appears and you can specify a name and
location for your file.

Note
You can also right-click in the Enter SQL Statement area and select Open File
from the shortcut menu.

Question
Identify the true statements regarding SQL Worksheet.
Options:
1.

SQL Worksheet can be used to enter and execute SQL and SQL*Plus commands
only

2.

Unsupported SQL*Plus commands result in an error when entered in SQL Worksheet

3.

When you connect to a database, a SQL Worksheet window for that connection is
automatically opened

4.

You can specify any actions that can be processed by the database connection
associated with the worksheet

Answer
When you connect to a database, a SQL Worksheet window for that connection is
automatically opened. Within SQL Worksheet, you can specify any actions that
can be processed by the database connection associated with the worksheet.
Option 1 is incorrect. SQL Worksheet can be used to enter and execute SQL,
PL/SQL, and some SQL*Plus commands.
Option 2 is incorrect. SQL Worksheet supports a number of SQL*Plus commands,
but any unsupported commands are ignored and are not passed to the database.
Option 3 is correct. When you connect to a database, a SQL Worksheet window
for that connection is automatically opened. This is customizable in the Worksheet
Parameters.
Option 4 is correct. In SQL Worksheet, you can specify any actions that can be
processed by the database connection associated with the worksheet. These
actions include creating a table, inserting data, and saving and running SQL
scripts.

Question
Which statements are true regarding entering and executing SQL statements from
SQL Worksheet?
Options:
1.

For single statements, the ending semicolon is required

2.

Shortcut keys are available for executing statements and scripts

3.

SQL keywords are automatically highlighted

4.

You cannot enter multiple statements in the Enter SQL Statement box

Answer
When entering and executing SQL statements from SQL Worksheet, shortcut keys
are available for executing statements and scripts and SQL keywords are
automatically highlighted.
Option 1 is incorrect. For single statements in SQL Worksheet, the semicolon at
the end is optional.

Option 2 is correct. SQL Worksheet provides shortcut keys you can use. For
example, you can execute a SQL statement using the F9 key, and run scripts
using the F5 key.
Option 3 is correct. When you enter a statement in SQL Worksheet, the SQL
keywords are automatically highlighted in the window. This increases the
readability of your code.
Option 4 is incorrect. In SQL Worksheet, you can use the Enter SQL Statement
box to enter a single SQL statement or multiple SQL statements.

3. Using PL/SQL in SQL Developer


You can create, execute, and debug procedures, functions, packages, and triggers with
SQL Developer using PL/SQL.
To create a PL/SQL object, such as a procedure, you right-click the PL/SQL object type in
the Object navigator.
For example, you right-click Procedures in the navigator pane and select New
Procedure.
The Create PL/SQL Procedure dialog box displays. You use the Add Column + button to
enter the header information for the procedure. And then you click OK.
The Code Editor window for the new procedure PROCEDURE1 displays. At the top of
the Code Editor window, you have tools to help you run, compile, and debug your PL/SQL
code.
The icons represent

Find

Run

Run in debug mode

Compile

Compile for debug mode


Your header information is entered in the Code Editor window. You need to enter the code
for the body of the PL/SQL object.
In the example, the code for the PROCEDURE1 procedure is entered.
CREATE OR REPLACE
PROCEDURE PROCEDURE1

( param1 IN VARCHAR2
) AS
BEGIN
NULL;
END PROCEDURE1;

Note
To display the line numbers in the Code Editor, you select Tools - Preferences
followed by Code Editor - Line Gutter and select the Show Line Numbers
option.
To compile your code, you click the Compile icon in the Code Editor window.
If your code compiles successfully, you see the message "Compiled" in the Messages - Log window. If there are errors or warnings, you see "Compiled (with errors)" on the
Messages tabbed page.
The details for the errors are on the Compiler tabbed page. If you have any errors, you fix
the errors and then recompile.
To execute your PL/SQL code, you click the Run icon.
The Run PL/SQL dialog box appears. Within it, the call to your named PL/SQL block is
wrapped in an anonymous block of code.
DECLARE
PARAM1 VARCHAR2(200);
BEGIN
PARAM1 := NULL;
PROCEDURE1(
PARAM1 => PARAM1
);
END;
You enter any variable values which are converted to your parameter values in your
stored block of code and then click OK.
The results from running your code are displayed on the Running-Log tabbed page.
To use the PL/SQL debugger in SQL Developer, you must compile the code in debug
mode.
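
The same effect can be achieved with a SQL statement; as a sketch, for the procedure created above:

ALTER PROCEDURE procedure1 COMPILE DEBUG;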

Summary

Oracle SQL Developer is a free graphical tool designed to improve productivity and
simplify the development of database tasks. To use SQL Developer you need to set up a
connection to at least one database.
You can use SQL Worksheet to enter and execute SQL, PL/SQL, and selected SQL*Plus
statements.
You can also create, execute, and debug procedures, functions, packages, and triggers
with SQL Developer using PL/SQL. You use the Code Editor to run, compile, and debug
PL/SQL code.

Creating Reports and Migrating with SQL Developer


Learning objective

After completing this topic, you should be able to identify the steps for using SQL
Developer to create reports and migrate to an Oracle Database 11g database.

1. Creating reports with SQL Developer


SQL Developer 1.2 has links to popular search engines and discussion forums. You can
use the search engines:

Ask Tom

Google

Metalink

Docs

10.2 docs

9.2 docs

search.oracle.com

OTN Forums
In this example, the OTN Forums are searched using the search term SQL Developer.
The search results display in your browser window.
You can customize many aspects of the SQL Developer interface and environment by
modifying the SQL Developer preferences according to your preferences and needs.
To modify SQL Developer preferences, you select Tools - Preferences.
The Preferences dialog box displays.

Most preferences are self-explanatory. Some preferences involve performance or system
resource trade-offs (for example, enabling a feature that adds execution time), and
other preferences involve only personal aesthetic taste. The preferences are grouped into
categories.
In this example the Code Editor preferences are displayed.

Note
You can toggle your line numbers on and off using the Show Line Numbers
check box.
SQL Developer provides many reports about the database and its objects. These reports
are grouped into categories:

About Your Database reports

Object reports

Application Express reports

Charts

Database Administration reports

Data Dictionary reports

Jobs reports

PL/SQL reports

Security reports

Streams reports

Table reports

XML reports
To display a report, you select the Reports tab, and then select the report type.
You can also create your own user-defined reports. To display a chart, you
expand the Charts node and then select the particular chart you want to display. The
example shows an Object Distribution chart report.
You can generate reports about your PL/SQL code. You can find out information about
arguments, search source code for object name or text strings, and find out the length of
your PL/SQL routines.
To run the Search Source Code report, you click the Search Source Code node, specify
a database connection in the Select Connection dialog box, and click OK. This report
enables you to find either text strings or object names in your PL/SQL code.

In the Enter Bind Values dialog box, you enter a value (for example, customer) in the
Value text field and click Apply.
The search result shows all occurrences of the text string "customer" in the PL/SQL code.
The results show the owner, the PL/SQL object name, code type, line number, and the
text on that line number.
User-defined reports are any reports that are created by SQL Developer users.
To create a user-defined report, you right-click the User Defined Reports node, and
select Add Report.
This displays the Create Report dialog box. You specify the report name and an optional
description to indicate that the report contains sales orders organized by sales
representatives.
You enter the complete SQL statement for retrieving the information to be displayed in the
user-defined report in the SQL box. You can also include an optional tool tip to be
displayed when the cursor stays briefly over the report name in the Reports navigator
display.
To retrieve the information you click Apply.
SELECT order_mode, order_total, sales_rep_id
FROM orders
WHERE sales_rep_id IS NOT NULL
GROUP BY order_mode, order_total, sales_rep_id
ORDER BY sales_rep_id
Your new report OrdersByRep appears under the node User Defined Reports and its
contents are displayed in the right-hand pane.
You can organize user-defined reports in folders, and you can create a hierarchy of
folders and subfolders.
To create a folder for user-defined reports, you right-click the User Defined Reports
node or any folder name under that node and select Add Folder.
This displays the Create Folder dialog box that enables you to name your new folder.

Note
Information about user-defined reports, including any folders for these reports, is
stored in a file named UserReports.xml under the directory for user-specific
information.

Question
Which statements accurately describe report creation in SQL Developer?
Options:
1.

A hierarchy of folders must be created before you can create reports

2.

SQL Developer reports are grouped into categories, such as object reports and
charts

3.

You can customize existing report templates, but you cannot create user-defined
reports

4.

You can generate reports about PL/SQL code

Answer
In SQL Developer reports are grouped into categories, such as object reports and
charts. And you can generate reports about PL/SQL code.
Option 1 is incorrect. You can organize user-defined reports into folders, and you
can create a hierarchy of folders and subfolders. However, these folders do not
have to be created before the reports are created.
Option 2 is correct. SQL Developer provides many reports about the database and
its objects. Reports are grouped into categories, such as object reports, charts,
PL/SQL reports, and XML reports.
Option 3 is incorrect. Although SQL Developer provides many reports about the
database and its objects, you can also create your own user-defined reports.
Option 4 is correct. You can generate reports about your PL/SQL code. You can
find out information about arguments, search source code for object name or text
strings, and find out the length of your PL/SQL routines.

2. Using SQL*Plus
SQL Worksheet supports most SQL*Plus statements. SQL*Plus statements must be
interpreted by SQL Worksheet before being passed to the database. Any SQL*Plus
statements that are not supported by the SQL Worksheet are ignored and not passed to
the database.
To display the SQL*Plus command-line interface, you first close all SQL worksheets and
then select Tools - SQL*Plus.
You must use the Oracle SID in your SQL Developer connection in order for the
SQL*Plus menu item to be enabled.

This opens the SQL*Plus command-line window on top of SQL Developer.


To use this feature, the system on which you are using SQL Developer must have an
Oracle home directory or folder, with a SQL*Plus executable under that location. If the
location of the SQL*Plus executable is not already stored in your SQL Developer
preferences, you are asked to specify its location.
To do this you select Tools - Preferences and then select Database. You enter the path
to the sqlplus.exe file in the SQL*Plus executable text box and click OK.
SQL Developer does not support all SQL*Plus statements.
For example, the SQL*Plus statements APPEND, ARCHIVE, and ATTRIBUTE are not
supported by SQL Developer.

Supplement
Selecting the link title opens the resource in a new browser window.

Launch window
View the full list of SQL*Plus statements that are and are not supported by SQL
Developer here.

Question
What must be considered when invoking SQL*Plus from SQL Developer?
Options:
1.

An Oracle home directory or folder is not required

2.

The location of sqlplus.exe is determined automatically

3.

You must close all SQL worksheets to enable the SQL*Plus menu option

4.

You must use the Oracle SID in your SQL Developer connection

Answer
When invoking SQL*Plus from SQL Developer, you must close all SQL worksheets
to enable the SQL*Plus menu option. And you must use the Oracle SID in your
SQL Developer connection.
Option 1 is incorrect. To launch SQL*Plus from SQL Developer, the system on
which you are using SQL Developer must have an Oracle home directory or
folder, with a SQL*Plus executable under that location.

Option 2 is incorrect. If the location of the SQL*Plus executable, sqlplus.exe, is
not already stored in your SQL Developer preferences, you are asked to specify
its location the first time you invoke SQL*Plus.
Option 3 is correct. To enable the SQL*Plus menu option, and to invoke SQL*Plus
from SQL Developer, you must close all open SQL worksheets.
Option 4 is correct. You must use the Oracle SID in your SQL Developer
connection in order for the SQL*Plus menu item to be enabled.

3. Migrating with SQL Developer


You can migrate from a database, such as Microsoft SQL Server and Microsoft Access, to
an Oracle database. This enables you to take advantage of Oracle's scalability, reliability,
increased performance, and better security.
Using SQL Developer to migrate a third-party database to an Oracle database

reduces the effort and risks involved in a migration project

enables you to migrate an entire third-party database, including triggers and stored procedures

enables you to see and compare the captured model and the converted model, and to customize
each if you want, so that you can control how much automation there is in the migration process

provides feedback about the migration through reports


SQL Developer enables you to simplify the process of migrating a third-party database to
an Oracle database. SQL Developer captures information from the source database and
displays it in the captured model, which is a representation of the structure of the source
database.
This representation is stored in a migration repository, which is a collection of schema
objects that SQL Developer uses to store migration information. The information in the
repository is used to generate the converted model, which is a representation of the
structure of the destination database as it will be implemented in the Oracle database.
You can then use the information in the captured model and the converted model to
compare database objects, identify conflicts with Oracle reserved words, and manage the
migration progress.
When you are ready to migrate, you generate the Oracle schema objects, and then
migrate the data. SQL Developer contains logic to extract data from the data dictionary of
the source database, create the captured model, and convert the captured model to the
converted model.

Question

Identify the valid statements regarding the use of SQL Developer for migration.
Options:
1.

A representation of the structure of the source database is stored in a migration
repository

2.

At migration time, you migrate the data and then generate the Oracle schema
objects

3.

Database triggers and stored procedures must be migrated manually

4.

The process of migrating a third-party database is simplified

Answer
When using SQL Developer for migration a representation of the structure of the
source database is stored in a migration repository and the process of migrating a
third-party database is simplified.
Option 1 is correct. SQL Developer captures information from the source database
and displays it in the captured model, which is a representation of the structure of
the source database. This representation is stored in a migration repository.
Option 2 is incorrect. When you are ready to migrate, you generate the Oracle
schema objects, and then migrate the data. SQL Developer contains logic to
extract data from the data dictionary of the source database, create the captured
model, and convert it.
Option 3 is incorrect. You can migrate from a database, such as Microsoft SQL
Server and Microsoft Access, to an Oracle database. SQL Developer enables you
to migrate an entire third-party database, including triggers and stored procedures.
Option 4 is correct. SQL Developer enables you to simplify the process of
migrating a third-party database to an Oracle database. It also reduces the effort
and risks involved in a migration project.

4. Using SQL Worksheet


You want to create and connect to a new database connection.
Step 1: You start SQL Developer, right-click the Connection node, and select New
Connection.
Step 2: Next you create a database connection using this information:

Connection Name - mydbconnection

Username - oe

Password - oe

Hostname - localhost.easynomadtravel.com

Port - 1521

SID - orcl
Step 3: Next you need to test the new connection. If the Status is Success, you can
connect to the database using this new connection.
First you click the Test button in the New/Select Database Connection window.
If the status is Success, you click the Connect button.
The new connection is created.
Step 4: To browse the CUSTOMERS table and display its data, you expand the
mydbconnection node by clicking the + sign next to it. Then you expand the Tables
node by clicking the + sign next to it. And you click CUSTOMERS to display the structure
of the CUSTOMERS table.
The Columns tab displays the columns in the table.
You click the Data tab to display the customers' data.
Step 5: You use the SQL Worksheet to select the information for all line item orders with
an ordered quantity greater than 200.
To display the SQL worksheet, you select Tools - SQL Worksheet to display the Select
Connection window.
Then you select the new mydbconnection from the Connection drop-down list, if not
already selected, and click OK.
The mydbconnection Enter SQL statement window displays. You enter this statement in
the Enter SQL Statement box.
SELECT order_id, line_item_id, product_id, unit_price, quantity
FROM order_items
WHERE quantity > 200
You click the Execute Statement icon or press F9 to display the results of the SQL
statement in the Results window.
The results are displayed in the form of a table with order_id, line_item_id,
product_id, unit_price, and quantity columns. Three items with a quantity
exceeding 200 have been found.

Step 6: You set your script pathing preference.


First you select Tools - Preferences.
Then you expand the Database node and select Worksheet Parameters.
You enter C:\Oracle Data\Scripts in the "Select default path to look for scripts" field
and click OK.
Step 7: You need to save the SQL statement to a script file.
You click the Save icon on the toolbar.
In the Save dialog box you name the file ItemsGT200.sql, save it in your C:\Oracle
Data\Scripts folder, and click Save.
The SQL statement has been saved.
Step 8: You open and run the file ItemsGT200.sql from your
C:\Oracle Data\Scripts folder.
You click the Open icon.
You select ItemsGT200.sql and click Open.
You click the Execute Statement icon or press F9 to display the results.
The results of your query display.
Step 9: You need to create a report named CUSTBYACCTMGR and save it to a folder
named CUSTOMERREPORTS.
On the Reports tabbed page, you right-click User Defined Reports and select Add
Folder.
In the Create Folder dialog box, you enter the name CUSTOMERREPORTS, add a
description, and click Apply.
The subfolder CUSTOMERREPORTS is created in the folder User Defined Reports.
You right-click the CUSTOMERREPORTS node and select Add Report.
The Create Report dialog box displays. You name the report CUSTBYACCTMGR, enter this
query in the SQL text field, and click Apply.

SELECT COUNT(*), account_mgr_id


FROM customers
GROUP BY account_mgr_id
The report CUSTBYACCTMGR is created in the folder CUSTOMERREPORTS.
You click the report to view its contents.
Finally, you exit SQL Developer.

Summary
SQL Developer enables you to use search engines such as Ask Tom, Google, and OTN
Forums. You can customize the SQL Developer interface according to your preferences.
SQL Developer provides categories of reports about the database and its objects and
enables you to create user-defined reports which you can organize in folders you've
added.
SQL Worksheet supports most SQL*Plus statements and you can open the SQL*Plus
command-line window from within SQL Developer.
You can migrate from a database such as Microsoft SQL Server and Microsoft Access to
an Oracle database. SQL Developer enables you to simplify the migration process.
SQL Developer enables you to create and connect to a new database connection,
browse database objects such as tables, use the SQL Worksheet to execute and save
SQL scripts, open and run .sql files from folders and create your own reports.

Creating Reports and Migrating with SQL Developer


Learning objective

After completing this topic, you should be able to identify the steps for using SQL
Developer to create reports and migrate to an Oracle Database 11g database.

1. Creating reports with SQL Developer


SQL Developer 1.2 has links to popular search engines and discussion forums. You can
use the search engines:

Ask Tom

Google

Metalink

Docs

10.2 docs

9.2 docs

search.oracle.com

OTN Forums
In this example, the OTN Forums are searched using the search term SQL Developer.
The search results display in your browser window.
You can customize many aspects of the SQL Developer interface and environment by
modifying the SQL Developer preferences according to your preferences and needs.
To modify SQL Developer preferences, you select Tools - Preferences.
The Preferences dialog box displays.
Most preferences are self-explanatory. Some preferences involve performance or system
resource trade-offs for example, enabling a feature that adds execution time and
other preferences involve only personal aesthetic taste. The preferences are grouped into
categories.
In this example the Code Editor preferences are displayed.

Note
You can toggle your line numbers on and off using the Show Line Numbers
check box.
SQL Developer provides many reports about the database and its objects. These reports
are grouped into categories:

About Your Database reports

Object reports

Application Express reports

Charts

Database Administration reports

Data Dictionary reports

Jobs reports

PL/SQL reports

Security reports

Streams reports

Table reports

XML reports
To display a report, you select the Reports tab, and then select the report type.
You can also create your own user-defined reports. For example, to display a chart, you
expand the Charts node and then select the particular chart you want to display. The
example shows an Object Distribution chart report.
You can generate reports about your PL/SQL code. You can find out information about
arguments, search source code for object name or text strings, and find out the length of
your PL/SQL routines.
To run the Search Source Code report, you click the Search Source Code node, specify
a database connection in the Select Connection dialog box, and click OK. This report
enables you to find either text strings or object names in your PL/SQL code.
In the Enter Bind Values dialog box, you enter a value for example customer in the
Value text field and click Apply.
The search result shows all occurrences of the text string "customer" in the PL/SQL code.
The results show the owner, the PL/SQL object name, code type, line number, and the
text on that line number.
User-defined reports are any reports that are created by SQL Developer users.
To create a user-defined report, you right-click the User Defined Reports node, and
select Add Report.
This displays the Create Report dialog box. You specify the report name and an optional
description to indicate that the report contains sales orders organized by sales
representatives.
You enter the complete SQL statement for retrieving the information to be displayed in the
user-defined report in the SQL box. You can also include an optional tool tip to be
displayed when the cursor stays briefly over the report name in the Reports navigator
display.
To retrieve the information you click Apply.
SELECT order_mode, order_total, sales_rep_id
FROM orders
WHERE sales_rep_id IS NOT NULL
GROUP BY order_mode, order_total, sales_rep_id
ORDER BY sales_rep_id

Your new report OrdersByRep appears under the node User Defined Reports and its
contents are displayed in the right-hand pane.
You can organize user-defined reports in folders, and you can create a hierarchy of
folders and subfolders.
To create a folder for user-defined reports, you right-click the User Defined Reports
node or any folder name under that node and select Add Folder.
This displays the Create Folder dialog box that enables you to name your new folder.

Note
Information about user-defined reports, including any folders for these reports, is
stored in a file named UserReports.xml under the directory for user-specific
information.

Question
Which statements accurately describe report creation in SQL Developer?
Options:
1.

A hierarchy of folders must be created before you can create reports

2.

SQL Developer reports are grouped into categories, such as object reports and
charts

3.

You can customize existing report templates, but you cannot create user-defined
reports

4.

You can generate reports about PL/SQL code

Answer
In SQL Developer reports are grouped into categories, such as object reports and
charts. And you can generate reports about PL/SQL code.
Option 1 is incorrect. You can organize user-defined reports into folders, and you
can create a hierarchy of folders and subfolders. However, these folders do not
have to be created before the reports are created.
Option 2 is correct. SQL Developer provides many reports about the database and
its objects. Reports are grouped into categories, such as object reports, charts,
PL/SQL reports, and XML reports.
Option 3 is incorrect. Although SQL Developer provides many reports about the
database and its objects, you can also create your own user-defined reports.

Option 4 is correct. You can generate reports about your PL/SQL code. You can
find out information about arguments, search source code for object name or text
strings, and find out the length of your PL/SQL routines.

2. Using SQL*Plus
SQL Worksheet supports most SQL*Plus statements. SQL*Plus statements must be
interpreted by SQL Worksheet before being passed to the database. Any SQL*Plus
statements that are not supported by the SQL Worksheet are ignored and not passed to
the database.
To display the SQL*Plus command-line interface, you first close all SQL worksheets and
then select Tools - SQL*Plus.
You must use the Oracle SID in your SQL Developer connection in order for the
SQL*Plus menu item to be enabled.
This opens the SQL*Plus command-line window on top of SQL Developer.
To use this feature, the system on which you are using SQL Developer must have an
Oracle home directory or folder, with a SQL*Plus executable under that location. If the
location of the SQL*Plus executable is not already stored in your SQL Developer
preferences, you are asked to specify its location.
To do this you select Tools - Preferences and then select Database. You enter the path
to the sqlplus.exe file in the SQL*Plus executable text box and click OK.
SQL Developer does not support all SQL*Plus statements.
For example, the SQL*Plus statements append, archive, and attribute are not
supported by SQL Developer.
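As a sketch of the distinction, the snippet below mixes a supported SQL*Plus command with a plain SQL statement in a worksheet. The employees table is a hypothetical example, and the exact set of supported commands varies by SQL Developer version:

```sql
-- A SQL*Plus command that SQL Worksheet typically supports
-- (the employees table is hypothetical)
DESCRIBE employees

-- A plain SQL statement, passed through to the database as usual
SELECT COUNT(*) FROM employees;

-- An unsupported SQL*Plus command such as APPEND would simply be
-- ignored by the worksheet and never passed to the database
```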

Supplement
Selecting the link title opens the resource in a new browser window.

Launch window
View the full list of SQL*Plus statements that are and are not supported by SQL
Developer here.

Question
What must be considered when invoking SQL*Plus from SQL Developer?
Options:

1.

An Oracle home directory or folder is not required

2.

The location of sqlplus.exe is determined automatically

3.

You must close all SQL worksheets to enable the SQL*Plus menu option

4.

You must use the Oracle SID in your SQL Developer connection

Answer
When invoking SQL*Plus from SQL Developer, you must close all SQL worksheets
to enable the SQL*Plus menu option, and you must use the Oracle SID in your
SQL Developer connection.
Option 1 is incorrect. To launch SQL*Plus from SQL Developer, the system on
which you are using SQL Developer must have an Oracle home directory or
folder, with a SQL*Plus executable under that location.
Option 2 is incorrect. If the location of the SQL*Plus executable, sqlplus.exe, is
not already stored in your SQL Developer preferences, you are asked to specify
its location the first time you invoke SQL*Plus.
Option 3 is correct. To enable the SQL*Plus menu option, and to invoke SQL*Plus
from SQL Developer, you must close all open SQL worksheets.
Option 4 is correct. You must use the Oracle SID in your SQL Developer
connection in order for the SQL*Plus menu item to be enabled.

3. Migrating with SQL Developer


You can migrate from a third-party database, such as Microsoft SQL Server or Microsoft Access, to
an Oracle database. This enables you to take advantage of Oracle's scalability, reliability,
increased performance, and better security.
Using SQL Developer to migrate a third-party database to an Oracle database

reduces the effort and risks involved in a migration project

enables you to migrate an entire third-party database, including triggers and stored procedures

enables you to see and compare the captured model and the converted model, and to customize
each if you want, so that you can control how much automation there is in the migration process

provides feedback about the migration through reports


SQL Developer enables you to simplify the process of migrating a third-party database to
an Oracle database. SQL Developer captures information from the source database and
displays it in the captured model, which is a representation of the structure of the source
database.

This representation is stored in a migration repository, which is a collection of schema
objects that SQL Developer uses to store migration information. The information in the
repository is used to generate the converted model, which is a representation of the
structure of the destination database as it will be implemented in the Oracle database.
You can then use the information in the captured model and the converted model to
compare database objects, identify conflicts with Oracle reserved words, and manage the
migration progress.
When you are ready to migrate, you generate the Oracle schema objects, and then
migrate the data. SQL Developer contains logic to extract data from the data dictionary of
the source database, create the captured model, and convert the captured model to the
converted model.

Question
Identify the valid statements regarding the use of SQL Developer for migration.
Options:
1.

A representation of the structure of the source database is stored in a migration
repository

2.

At migration time, you migrate the data and then generate the Oracle schema
objects

3.

Database triggers and stored procedures must be migrated manually

4.

The process of migrating a third-party database is simplified

Answer
When using SQL Developer for migration, a representation of the structure of the
source database is stored in a migration repository, and the process of migrating a
third-party database is simplified.
Option 1 is correct. SQL Developer captures information from the source database
and displays it in the captured model, which is a representation of the structure of
the source database. This representation is stored in a migration repository.
Option 2 is incorrect. When you are ready to migrate, you generate the Oracle
schema objects, and then migrate the data. SQL Developer contains logic to
extract data from the data dictionary of the source database, create the captured
model, and convert it.
Option 3 is incorrect. You can migrate from a database, such as Microsoft SQL
Server and Microsoft Access, to an Oracle database. SQL Developer enables you
to migrate an entire third-party database, including triggers and stored procedures.

Option 4 is correct. SQL Developer enables you to simplify the process of
migrating a third-party database to an Oracle database. It also reduces the effort
and risks involved in a migration project.

4. Using SQL Worksheet


You want to create and connect to a new database connection.
Step 1: You start SQL Developer, right-click the Connection node, and select New
Connection.
Step 2: Next you create a database connection using this information:

Connection Name mydbconnection

Username oe

Password oe

Hostname localhost.easynomadtravel.com

Port 1521

SID orcl
Step 3: Next you need to test the new connection. If the Status is Success, you can
connect to the database using this new connection.
First you click the Test button in the New/Select Database Connection window.
If the status is Success, you click the Connect button.
The new connection is created.
Step 4: To browse the CUSTOMERS table and display its data, you expand the
mydbconnection node by clicking the + sign next to it. Then you expand the Tables
node by clicking the + sign next to it. And you click CUSTOMERS to display the structure
of the CUSTOMERS table.
The Columns tab displays the columns in the table.
You click the Data tab to display the customers' data.
Step 5: You use the SQL Worksheet to select the information for all line item orders with
an ordered quantity greater than 200.
To display the SQL worksheet, you select Tools - SQL Worksheet to display the Select
Connection window.

Then you select the new mydbconnection from the Connection drop-down list if not
already selected and click OK.
The mydbconnection Enter SQL statement window displays. You enter this statement in
the Enter SQL Statement box.
SELECT order_id, line_item_id, product_id, unit_price, quantity
FROM order_items
WHERE quantity > 200
You click the Execute Statement icon or press F9 to display the results of the SQL
statement in the Results window.
The results are displayed in the form of a table with order_id, line_item_id,
product_id, unit_price, and quantity columns. Three items with a quantity
exceeding 200 have been found.
Step 6: You set your script pathing preference.
First you select Tools - Preferences.
Then you expand the Database node and select Worksheet Parameters.
You enter C:\Oracle Data\Scripts in the "Select default path to look for scripts" field
and click OK.
Step 7: You need to save the SQL statement to a script file.
You click the Save icon on the toolbar.
In the Save dialog box you name the file ItemsGT200.sql, save it in your C:\Oracle
Data\Scripts folder, and click Save.
The SQL statement has been saved.
Step 8: You open and run the file ItemsGT200.sql from your
C:\Oracle Data\Scripts folder.
You click the Open icon.
You select ItemsGT200.sql and click Open.
You click the Execute Statement icon or F9 to display the results.
The results of your query display.

Step 9: You need to create a report named CUSTBYACCTMGR and save it to a folder
named CUSTOMERREPORTS.
On the Reports tabbed page, you right-click User Defined Reports and select Add
Folder.
In the Create Folder dialog box, you enter the name CUSTOMERREPORTS, add a
description, and click Apply.
The subfolder CUSTOMERREPORTS is created in the folder User Defined Reports.
You right-click the CUSTOMERREPORTS node and select Add Report.
The Create Report dialog box displays. You name the report CUSTBYACCTMGR, enter this
query in the SQL text field, and click Apply.
SELECT COUNT(*), account_mgr_id
FROM customers
GROUP BY account_mgr_id
The report CUSTBYACCTMGR is created in the folder CUSTOMERREPORTS.
You click the report to view its contents.
Finally, you exit SQL Developer.

Summary
SQL Developer enables you to use search engines and forums such as Ask Tom, Google, and
OTN Forums. You can customize the SQL Developer interface according to your preferences.
SQL Developer provides categories of reports about the database and its objects and
enables you to create user-defined reports which you can organize in folders you've
added.
SQL Worksheet supports most SQL*Plus statements and you can open the SQL*Plus
command-line window from within SQL Developer.
You can migrate from a database such as Microsoft SQL Server or Microsoft Access to
an Oracle database. SQL Developer enables you to simplify the migration process.
SQL Developer enables you to create and connect to a new database connection,
browse database objects such as tables, use the SQL Worksheet to execute and save
SQL scripts, open and run .sql files from folders and create your own reports.

DCN and Lock Enhancements

Abstract

This article describes the Data Change Notification and lock enhancements included with Oracle
Database 11g, and discusses their scope and uses in database management.
Data Change Notification

Overview of Data Change Notification enhancements


Prior to the release of Oracle Database 11g, Release 11.1, only object-change notifications, which
resulted from DML or DDL changes to the objects associated with the registered queries, were
published.
Oracle Database 11g, Release 11.1 offers two significant Data Change Notification (DCN) enhancements:

result-set-change notifications, which result from DML or DDL changes to the result set
associated with the registered queries, are published

new static data dictionary views, which allow you to see which queries are registered for result-set-change notifications

Data Change Notification for result sets


Continuous Query Notification (CQN) enables you to set up client applications to register queries with the
database and receive either object-change notifications or result-set-change notifications. Object-change
notifications (the default) result from DML or DDL changes to the objects associated with the queries.
Result-set-change notifications result from DML or DDL changes to the result set associated with the
queries. The database publishes the notifications when the DML or DDL transaction commits.
This is useful for an application that caches query result sets on mostly read-only objects in the middle tier
to avoid network round trips to the database. Such an application can create a registration on the queries
it is interested in caching using the change notification service. On changes to objects referenced inside
those queries, the database publishes a change notification when the underlying transaction commits. In
response to the notification, the application can refresh its cache by re-executing the queries.
For example, if you have a Web forum application, your users might not need to view new content as
soon as it is inserted into the back-end database. This type of application is intrinsically tolerant of
slightly out-of-date data, and can benefit from caching in the middle tier. CQN helps in this scenario in
keeping the cache updated with the back-end database.
In order to use CQN, you need the CHANGE NOTIFICATION privilege, the EXECUTE ON
DBMS_CQ_NOTIFICATION privilege, and you need to enable job queue processes to receive
notifications.
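As a sketch, a DBA might grant these prerequisites as follows; the grantee app_user and the job queue setting shown are illustrative assumptions, not values from this course:

```sql
-- Privileges required to register CQN queries (grantee name is hypothetical)
GRANT CHANGE NOTIFICATION TO app_user;
GRANT EXECUTE ON DBMS_CQ_NOTIFICATION TO app_user;

-- Enable job queue processes so notifications can be delivered
ALTER SYSTEM SET JOB_QUEUE_PROCESSES = 4;
```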

The Data Change Notification process

Basic Data Change Notification process

In this example, the application has cached the result set of a query on OE.ORDERS.
The Data Change Notification process consists of several steps.
1. The application creates a registration for the query on OE.ORDERS using the CQN PL/SQL interface. It
also creates a stored PL/SQL procedure to process notifications and supplies this server-side PL/SQL
procedure as the notification handler.
2. The database populates the registration information in the data dictionary.
3. A user modifies one of the registered objects with DML statements and commits the transaction. For
example, a user updates a row in the OE.ORDERS table on the back-end database. The data for
OE.ORDERS cached in the middle tier is now stale.
4. Oracle Database adds a message that describes the change to an internal queue.
5. A JOBQ background process is notified of a new change notification message.
6. The JOBQ process executes the stored procedure specified by the client application. In this example,
JOBQ passes the data to a server-side PL/SQL procedure. The implementation of the PL/SQL callback
procedure determines how the notification is handled.
7. Inside the server-side PL/SQL procedure, you can implement logic to notify the middle tier client
application of the changes to the registered objects. For example, it notifies the application of the ROWID
of the changed row in OE.ORDERS.
8. The middle-tier application queries the back-end database to retrieve the changed data.
9. The client application updates the cache with the new data.
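Step 1 of this process can be sketched in PL/SQL roughly as follows. This is an outline based on the DBMS_CQ_NOTIFICATION interface, not code from this course: the handler name chnf_callback is a hypothetical procedure assumed to exist already, and the QOS_QUERY flag requests result-set-change rather than object-change notifications:

```sql
DECLARE
  regds      CQ_NOTIFICATION$_REG_INFO;
  regid      NUMBER;
  v_order_id NUMBER;
BEGIN
  -- Describe the registration: notification handler plus quality-of-service flags
  regds := CQ_NOTIFICATION$_REG_INFO(
             'chnf_callback',                  -- hypothetical server-side handler
             DBMS_CQ_NOTIFICATION.QOS_QUERY,   -- request result-set-change notifications
             0, 0, 0);                         -- timeout, operations filter, transaction lag
  regid := DBMS_CQ_NOTIFICATION.NEW_REG_START(regds);

  -- Any query executed between NEW_REG_START and REG_END becomes registered
  SELECT order_id INTO v_order_id
  FROM   oe.orders
  WHERE  ROWNUM < 2;

  DBMS_CQ_NOTIFICATION.REG_END;
END;
/
```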

Data Change Notification examples

When you run a registered query, CQN generates either object-change notifications or result-set-change
notifications.
The following two examples can be used to illustrate the difference between object-change notifications
and result-set-change notifications.

Example 1
SELECT order_id, order_total
FROM orders
WHERE sales_rep_id = 158;
In this example, CQN generates an object-change notification for this query for any DML or DDL change
to the ORDERS table, even if the changed row or rows did not satisfy the query predicate (for example, if
sales_rep_id = 160).
CQN generates a result-set-change notification only if the query result set itself changed and both of
these conditions are true:

the changed row or rows satisfy the query predicate (sales_rep_id = 158) either before or
after the change

the change affected at least one of the columns in the SELECT list (order_id or order_total)
as the result of either an UPDATE or an INSERT

Example 2
SELECT customer_id, cust_first_name, cust_last_name
FROM customers
WHERE credit_limit = 1400;
In this example, CQN generates an object-change notification for this query for any DML or DDL change
to the CUSTOMERS table, even if the changed row or rows did not satisfy the query predicate (for
example, if credit_limit = 1200).
CQN generates a result-set-change notification for this query only if the query result set itself changed,
which means that both of these conditions are true:

the changed row or rows satisfy the query predicate (credit_limit = 1400) either before
or after the change

the change affected at least one of the columns in the SELECT list (customer_id,
cust_first_name, or cust_last_name) as the result of either an UPDATE or an INSERT statement

Data dictionary views for CQN


If your application uses CQN, Oracle Database can publish a notification when a change occurs to
registered objects with details on what changed. In response to the notification, your application can
refresh cached data by fetching it from the back-end database.

There are several dictionary views that you can query to see the status of CQN. For example, to view
top-level information about all registrations, you can use the DBA_CHANGE_NOTIFICATION_REGS and the
USER_CHANGE_NOTIFICATION_REGS dictionary views.
Two dictionary views are added in Oracle Database 11g to support result-set-change notifications. These
are

DBA_CQ_NOTIFICATION_QUERIES

USER_CQ_NOTIFICATION_QUERIES
These views contain the query ID, query text, and registration ID values.
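For example, a schema owner could list its own registered queries with a statement along these lines (column names as documented for these views in the 11g reference):

```sql
-- Show each registered query's registration ID, query ID, and text
SELECT regid, queryid, querytext
FROM   user_cq_notification_queries;
```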
Utilizing lock enhancements

Using the LOCK TABLE ... WAIT new syntax


The LOCK TABLE statement has new syntax that enables you to specify the maximum number of
seconds a statement should wait to obtain a DML lock on a table. The new syntax is the WAIT clause.
When you specify the WAIT clause, you specify the maximum number of seconds the statement should
wait to obtain a DML lock on the table. By default, the statement waits indefinitely. However, if you specify
the NOWAIT option, control returns to you immediately if the table is already locked, and you receive a
message indicating that the table is already locked by another user.
There is no limit on the number of seconds you can specify when you use the WAIT clause.

Using the LOCK TABLE statement with the WAIT option

In this example, the ORDERS table is locked in session 1. In session 2, a user tries to put a lock on the
same ORDERS table, but specifies to wait 60 seconds. This means that if the table is already locked by
another user, session 2 will wait 60 seconds for the lock. If the lock in the other session is not released
after 60 seconds, an appropriate error message is returned to the session 2 user.
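The two sessions in this example might issue statements like these; EXCLUSIVE mode is shown as one possible lock mode:

```sql
-- Session 1: acquire and hold a DML lock on ORDERS
LOCK TABLE orders IN EXCLUSIVE MODE;

-- Session 2: wait up to 60 seconds for the lock; if session 1 has not
-- released it by then, an error is returned
LOCK TABLE orders IN EXCLUSIVE MODE WAIT 60;

-- Alternative: return control immediately if the table is already locked
LOCK TABLE orders IN EXCLUSIVE MODE NOWAIT;
```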

Setting the DDL_LOCK_TIMEOUT parameter


When you execute DDL statements, these statements require exclusive locks on internal structures. If
these locks are unavailable when a DDL statement runs, the DDL statement fails, although it might have
succeeded if it had been executed subseconds later.
To enable DDL statements to wait for locks, you can use the DDL_LOCK_TIMEOUT parameter to specify a
DDL lock timeout. This parameter controls the number of seconds that a DDL command waits for its
required locks before failing. The permissible range of values for DDL_LOCK_TIMEOUT is 0 through
1,000,000 (in seconds). The maximum value of 1,000,000 seconds will result in the DDL statement
waiting forever to acquire a DML lock. The default value of zero indicates a status of NOWAIT.
You can set DDL_LOCK_TIMEOUT at the system level, or you can set it at the session level using an
ALTER SESSION statement.
In this example, a system-level lock timeout of 50,000 seconds is specified as
DDL_LOCK_TIMEOUT = 50000
If a lock is not acquired before the timeout period expires, the default behavior is for an error to be
returned. However, you can change that behavior at the session level (but not systemwide) so that all
sessions holding the lock that the DDL is waiting for are killed.
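Both scopes can be set with ALTER statements, for example:

```sql
-- System level: all sessions wait up to 50,000 seconds for DDL locks
ALTER SYSTEM SET DDL_LOCK_TIMEOUT = 50000;

-- Session level: this session waits up to 30 seconds (value is illustrative)
ALTER SESSION SET DDL_LOCK_TIMEOUT = 30;
```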

Summary
Data Change Notification and lock enhancements are two of the new features that have been added in
Oracle Database 11g.
Continuous Query Notification (CQN) enables an application to register queries with the database for
either object change notification or result-set-change notification. Result-set-change notifications result
from changes to the result set associated with the queries. You can use two new dictionary views,
DBA_CQ_NOTIFICATION_QUERIES and USER_CQ_NOTIFICATION_QUERIES, to see which queries are
registered for result-set-change notifications.
You can use the new WAIT syntax with the LOCK TABLE statement to specify the maximum number of
seconds your table should wait to obtain a DML lock. If the table is already locked by another user, you
receive an appropriate error message. You can use the new DDL_LOCK TIMEOUT parameter to specify a
DDL lock timeout. You can set DDL_LOCK TIMEOUT at the system level, or you can set it at the session
level, using an ALTER SESSION statement.

Using Language Functionality Enhancements


Learning objective

After completing this topic, you should be able to use SQL and PL/SQL language
functionality enhancements to connect to a database and create a report, examine
dependency at the element level, and modify an exception handler.

Exercise overview
In this exercise, you're required to identify the correct code that uses various Oracle
Database 11g language functionality enhancements to retrieve table information, examine
dependencies, and handle errors.
This involves the following tasks:

using regular expression support

examining dependencies

handling exceptions
Suppose you're a database administrator for a large computer retailer. You've just
upgraded to Oracle Database 11g, and want to use some of the new language
functionality enhancements to retrieve information from a table, examine dependencies,
and handle errors.

Task 1: Using regular expression support


You want to use some of the new regular expression support functions included with
Oracle Database 11g to retrieve certain information from your company's
PRODUCT_INFORMATION table.
You're currently examining some of the data in the PRODUCT_INFORMATION table.

Step 1 of 2
First, you want to return the number of occurrences of the string "RAM" from the
PRODUCT_DESCRIPTION column and the associated product name.
Which statement should you use?
Options:

1.

SELECT REGEXP_COUNT (product_description, 'ram', 1, 'i') Count
FROM product_information WHERE REGEXP_COUNT
(product_description, 'ram', 1, 'i') > 0;

2.

SELECT REGEXP_COUNT (product_description, 'ram', 1, 'i')
Count, product_name FROM product_information WHERE
REGEXP_COUNT (product_description, 'ram', 1, 'i') > 0;

3.

SELECT REGEXP_COUNT (product_description, 'ram', 1, 'i')
Count, product_name FROM product_information WHERE
REGEXP_COUNT (product_description, 'ram', 1, 'i') = 0;

Result
To return the number of occurrences of the string as specified, you use this
statement:
SELECT REGEXP_COUNT (product_description, 'ram', 1, 'i')
Count, product_name FROM product_information WHERE
REGEXP_COUNT (product_description, 'ram', 1, 'i') > 0;
Option 1 is incorrect. Although this statement will return the number of
occurrences of the string "RAM" in the PRODUCT_DESCRIPTION column, it does
not return the associated information from the PRODUCT_NAME column.
Option 2 is correct. The result of this statement will have two columns, COUNT and
PRODUCT_NAME. These will show the number of occurrences of the string "RAM"
from the PRODUCT_DESCRIPTION column and the product names associated
with the product descriptions that contained the string.
Option 3 is incorrect. In order for this statement to return the required results, the
REGEXP_COUNT function should be searching for a value greater than zero
instead of a value equal to zero.

Step 2 of 2
Next, you want to return the position of the occurrences of the second
subexpression in the string "(SD)(RAM)" in the PRODUCT_DESCRIPTION column
of the PRODUCT_INFORMATION table. You also want to return the associated
information from the PRODUCT_NAME column.
Which query should you use?
Options:
1.

SELECT REGEXP_INSTR (product_name, '(RAM)', 1, 1, 0, 'i',3)
POSITION, product_name FROM product_information WHERE
REGEXP_INSTR (product_name, '(RAM)', 1, 1, 0, 'i',2) > 0;

2.

SELECT REGEXP_INSTR (product_name, '(SD)(RAM)', 1, 1, 0,
'i',2) POSITION FROM product_information WHERE REGEXP_INSTR
(product_name, '(SD)(RAM)', 1, 1, 0, 'i',2) > 0;

3.

SELECT REGEXP_INSTR (product_name, '(SD)(RAM)', 1, 1, 0,
'i',2) POSITION, product_name FROM product_information WHERE
REGEXP_INSTR (product_name, '(SD)(RAM)', 1, 1, 0, 'i',2) > 0;

Result
To return the position of the occurrences of the second subexpression as
specified, you should use this query:
SELECT REGEXP_INSTR (product_name, '(SD)(RAM)', 1, 1, 0,
'i',2) POSITION, product_name FROM product_information WHERE
REGEXP_INSTR (product_name, '(SD)(RAM)', 1, 1, 0, 'i',2) >
0;
Option 1 is incorrect. Although this query will complete without error, it will not
return the required results because it does not contain the specified string within
the REGEXP_INSTR function.
Option 2 is incorrect. Although this query will return the position of the occurrences
of the second subexpression in the string "(SD)(RAM)" in the
PRODUCT_DESCRIPTION column, it does not return the associated information
from the PRODUCT_NAME column.
Option 3 is correct. This query will return the required results in two columns. The
POSITION column returns the position of the second subexpression in the
specified string, and the second column PRODUCT_NAME returns the associated
product names.

Task 2: Examining dependencies


You want to use fine-grained dependency to examine dependencies at the element level.
You're currently examining some of the data in the CUSTOMERS table.

Step 1 of 3
First, you want to create a view based on the CUSTOMERS table to store the
customer id, first name, last name, e-mail address, and credit limit data for
customers with credit limits over 2500.
Which statement should you use?
Options:

1.

CREATE view best_customers AS SELECT customer_id,
cust_first_name, cust_last_name, cust_email, credit_limit FROM
customers WHERE credit_limit < 2500;

2.

CREATE view best_customers AS SELECT customer_id,
cust_first_name, cust_last_name, credit_limit FROM customers
WHERE credit_limit > 2500;

3.

CREATE view best_customers AS SELECT customer_id,
cust_first_name, cust_last_name, cust_email, credit_limit FROM
customers WHERE credit_limit > 2500;

Result
To create a view as specified, you should use this statement:
CREATE view best_customers AS SELECT customer_id,
cust_first_name, cust_last_name, cust_email, credit_limit
FROM customers WHERE credit_limit > 2500;
Option 1 is incorrect. This statement would create a view of the customer id, first
name, last name, e-mail address and credit limit of customers who have credit
limits below 2500 and not above 2500.
Option 2 is incorrect. This statement would create a view of the customer id, first
name, last name, and credit limit of customers with credit limits over 2500.
However, it does not include the required CUST_EMAIL in the view.
Option 3 is correct. This statement will create the required view, based on the
customer id, first name, last name, e-mail address and credit limit of customers in
the CUSTOMERS table that have a credit limit of greater than 2500.
Next, you want to create a function named "GET_BEST_CUSTOMERS" that returns the
number of customers with credit ratings greater than or equal to 2500.
You have already written part of the code.
CREATE OR REPLACE FUNCTION GET_BEST_CUSTOMERS
(p_credit_limit NUMBER DEFAULT 2500)
RETURN NUMBER
IS
v_highest_amount NUMBER := 0;
BEGIN
<required code>
END GET_BEST_CUSTOMERS;

Step 2 of 3

Which code segment should be used instead of the line <required code> inside
the executable section to return the required results?
Options:
1.

SELECT COUNT(*) INTO p_credit_limit FROM best_customers WHERE
credit_limit >= p_credit_limit;
RETURN p_credit_limit;

2.

SELECT COUNT(*) INTO p_credit_limit FROM best_customers WHERE
credit_limit >= p_credit_limit;
RETURN v_highest_amount;

3.

SELECT COUNT(*) INTO v_highest_amount FROM best_customers
WHERE credit_limit >= p_credit_limit;
RETURN v_highest_amount;

Result
To create a function named "GET_BEST_CUSTOMERS" as specified, you should
replace the missing code with this code segment:
SELECT COUNT(*) INTO v_highest_amount FROM best_customers
WHERE credit_limit >= p_credit_limit;
RETURN v_highest_amount;
Option 1 is incorrect. To achieve the desired results, the value returned from
selecting COUNT(*) from the BEST_CUSTOMERS view should be stored in the
V_HIGHEST_AMOUNT variable. The statement should also return
V_HIGHEST_AMOUNT.
Option 2 is incorrect. To achieve the desired results, the value returned from
selecting COUNT(*) from the BEST_CUSTOMERS view should be stored in the
V_HIGHEST_AMOUNT variable.
Option 3 is correct. This statement will return the number of customers who have
a credit rating greater than or equal to 2500.

Step 3 of 3
Finally, you want to view the status of the BEST_CUSTOMERS view and
GET_BEST_CUSTOMERS function you have created by querying the
USER_OBJECTS data dictionary view.
Which statement should you use to return the object name, type, and status?
Options:

1.

SELECT object_name, object_type, status FROM user_objects
WHERE object_name = '%BEST_CUST%';

2.

SELECT object name, object type, status FROM user_objects
WHERE object_name LIKE '%BEST_CUST%';

3.

SELECT object_name, status FROM user_objects WHERE object_name
LIKE '%BEST_CUST%';

4.

SELECT object_name, object_type, status FROM user_objects
WHERE object_name LIKE '%BEST_CUST%';

Result
To return the object name, object type, and status, you should query the
USER_OBJECTS data dictionary view using this statement:
SELECT object_name, object_type, status FROM user_objects
WHERE object_name LIKE '%BEST_CUST%';
Option 1 is incorrect. Although the view name BEST_CUSTOMERS and the function
name GET_BEST_CUSTOMERS contain the string "BEST_CUST", this query will not
return the correct results because the LIKE operator should be used in place of
the equals sign operator in the WHERE clause.
Option 2 is incorrect. The columns from the USER_OBJECTS table that should be
returned are OBJECT_NAME, OBJECT_TYPE, and STATUS. The underscore
symbol is a required part of the column name.
Option 3 is incorrect. Although this query will return the object name and status of
the BEST_CUSTOMERS view and the GET_BEST_CUSTOMERS function, it will not
return the required object type.
Option 4 is correct. This query will return the object name, object type, and status
of the BEST_CUSTOMERS view and GET_BEST_CUSTOMERS function. In this case,
both objects would have a status of VALID.

Task 3: Handling exceptions


You want to write code to handle exceptions using the new PLW-06009 warning.

Step 1 of 2
You want to add an exception handler to your GET_BEST_CUSTOMERS function to
catch all exceptions.
Which statement correctly alters a session to enable compiler warnings?
Options:

1.

ALTER SESSION SET PLSQL_WARNING = 'enable:all';

2.

ALTER SESSION SET PLSQL_WARNINGS = 'enable:all';

3.

ALTER SESSION PLSQL_WARNINGS = 'enable:all';

4.

ALTER SYSTEM SET PLSQL_WARNINGS = 'enable:all';

Result
This statement alters a session to enable compiler warnings:
ALTER SESSION SET PLSQL_WARNINGS = 'enable:all';
Option 1 is incorrect. It is the PLSQL_WARNINGS parameter and not
PLSQL_WARNING that is used to enable or disable the reporting of warning
messages by the PL/SQL compiler.
Option 2 is correct. This statement enables the reporting of warning messages by
the PL/SQL compiler. The ENABLE value is used to enable a specific warning or a
set of warnings.
Option 3 is incorrect. The SET keyword is required before the PLSQL_WARNINGS
parameter for the statement to function correctly.
Option 4 is incorrect. This statement sets the PLSQL_WARNINGS parameter at the
system level instead of the session level.
Next, you want to add an exception handler to the GET_BEST_CUSTOMERS function to
catch all exceptions.
You have already written part of the code.
CREATE OR REPLACE FUNCTION GET_BEST_CUSTOMERS
(p_credit_limit NUMBER DEFAULT 2500)
RETURN NUMBER
IS
v_highest_amount NUMBER := 0;
BEGIN
SELECT COUNT(*)
INTO v_highest_amount
FROM best_customers
WHERE credit_limit >= p_credit_limit;
RETURN v_highest_amount;
EXCEPTION
WHEN OTHERS THEN
RETURN 0;
END GET_BEST_CUSTOMERS;

Step 2 of 2

What code should replace the "WHEN OTHERS THEN" line in the current
EXCEPTION section to enable the GET_BEST_CUSTOMERS function to catch all
exceptions?
Options:
1.

WHEN OTHERS RAISE_APPLICATION_ERROR (-20001, 'Error occurred
in get_best_customers.');

2.

WHEN OTHERS THEN RAISE APPLICATION ERROR (-20001, 'Error
occurred in get_best_customers.');

3.

WHEN OTHERS THEN RAISE_APPLICATION_ERROR (-20001, Error
occurred in get_best_customers.);

4.

WHEN OTHERS THEN RAISE_APPLICATION_ERROR (-20001, 'Error
occurred in get_best_customers.');

Result
To enable the GET_BEST_CUSTOMERS function to catch all exceptions, you use
this code:
WHEN OTHERS THEN RAISE_APPLICATION_ERROR (-20001, 'Error occurred in get_best_customers.');
Option 1 is incorrect. The THEN keyword is required following the WHEN OTHERS
clause and before the call to the RAISE_APPLICATION_ERROR function.
Option 2 is incorrect. The correct function name is RAISE_APPLICATION_ERROR
and not RAISE APPLICATION ERROR.
Option 3 is incorrect. When using the RAISE_APPLICATION_ERROR function, the
specified error message must be enclosed in single quotes.
Option 4 is correct. Using the RAISE_APPLICATION_ERROR function in
conjunction with the WHEN OTHERS THEN clause allows you to specify an error
number and message to return when errors are encountered in your code.
You have successfully used some of the new language functionality enhancements
provided in Oracle Database 11g to retrieve table information, examine dependencies,
and handle errors.

Executing Dynamic SQL in PL/SQL


Learning objective

After completing this topic, you should be able to recognize the steps for using native
dynamic SQL and the DBMS_SQL package to specify dynamic SQL statements, using
CLOB data types and also abstract data types.

1. Native Dynamic SQL


Disclaimer
Although certain aspects of the Oracle 11g Database are case and spacing insensitive, a
common coding convention has been used throughout all aspects of this course.
This convention uses lowercase characters for schema, role, user, and constraint names,
and for permissions, synonyms, and table names (with the exception of the DUAL table).
Lowercase characters are also used for column names and user-defined procedure,
function, and variable names shown in code.
Uppercase characters are used for Oracle keywords and functions, for view, table,
schema, and column names shown in text, for column aliases that are not shown in
quotes, for packages, and for data dictionary views.
The spacing convention requires one space after a comma and one space before and
after operators that are not Oracle-specific, such as +, -, /, and <. There should be no
space between an Oracle-specific keyword or operator and an opening bracket, between
a closing bracket and a comma, between the last part of a statement and the closing
semicolon, or before a statement.
String literals in single quotes are an exception to all of the convention rules provided
here. Please use this convention for all interactive parts of this course.
End of Disclaimer
A dynamic SQL statement is a string literal, string variable, or string expression. The full
text of the dynamic SQL statement might be unknown until run time.
Your program may build the SQL statement based on different scenarios within an
application. Because the full statement is not known until run time, its syntax is checked
at run time rather than at compile time.
With dynamic SQL, you can make your PL/SQL programs more general and flexible
because you do not need to know the full text of the dynamic SQL statement until run
time.
Because the PL/SQL compiler does not check their syntax, dynamic SQL statements can
be SQL statements that are not part of the PL/SQL language.
Your PL/SQL program can use dynamic SQL with either

native dynamic SQL

the DBMS_SQL package


native dynamic SQL
Native dynamic SQL uses a single operation to bind any arguments in the dynamic SQL
statement and execute the statement.
the DBMS_SQL package
The DBMS_SQL package defines a basic data type called a SQL cursor number. Because
the SQL cursor number is a basic data type, you can pass it across call boundaries and
store it. You can use the SQL cursor number to obtain information about the SQL
statement that you are executing. You also use the DBMS_SQL package to execute a
dynamic SQL statement that has an unknown number of input or output variables.
Different situations may require you to use one method over the other. It is recommended
that you use native dynamic SQL except when you cannot.
You use DBMS_SQL when you do not know the SELECT list at compile time. You also use
it when you do not know how many columns a SELECT statement will return, or what their
data types are.
You use native dynamic SQL when the dynamic SQL statement retrieves rows into
records. You can also use it when you want to use the %FOUND, %ISOPEN, %NOTFOUND,
or %ROWCOUNT SQL cursor attributes after issuing a dynamic SQL statement that is an
INSERT, UPDATE, DELETE, or single-row SELECT statement.
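The cursor-attribute case can be sketched with native dynamic SQL as follows. The statement text and bind values are illustrative, and a CUSTOMERS table like the one used later in this topic is assumed:

```sql
DECLARE
  sql_stmt VARCHAR2(200) :=
    'UPDATE customers SET credit_limit = credit_limit + :b1
     WHERE credit_limit < :b2';
BEGIN
  EXECUTE IMMEDIATE sql_stmt USING 100, 1000;
  -- SQL%ROWCOUNT reflects the dynamic UPDATE just issued
  DBMS_OUTPUT.PUT_LINE(SQL%ROWCOUNT || ' rows updated.');
END;
/
```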
Prior to the Oracle Database 11g release, each of these methods had functional
limitations. In Oracle Database 11g, functionality is added to both methods to make them
more complete.
Interoperability between native dynamic SQL and DBMS_SQL is supported. This means
that SQL statements larger than 32 KB are allowed in native dynamic SQL, and
DBMS_SQL.PARSE() is overloaded for CLOBs. A REF CURSOR can be converted to a
DBMS_SQL cursor and vice versa to support interoperability. Also, DBMS_SQL supports the
full range of data types including collections and object types.
You can now use the CLOB data type when specifying dynamic SQL statements. This
removes the previous length constraint of 32 KB. In the example, the DYNAMIC_PL
variable is set up with the CLOB data type. It is assigned a string that is passed in, which
can now hold more than 32 KB.
This is useful if you are dynamically building a PL/SQL block that might be very large. It is
powerful because any valid code can be passed to the procedure and executed.
CREATE OR REPLACE PROCEDURE gen_pl
  (p_pgm CLOB)
IS
  dynamic_pl CLOB := p_pgm;
BEGIN
  EXECUTE IMMEDIATE dynamic_pl;
  -- next line is for learning purposes only
  DBMS_OUTPUT.PUT_LINE ('Just executed the following code: ' ||
                        dynamic_pl);
END gen_pl;
In this example, you can pass any PL/SQL block as a parameter to the procedure.
EXECUTE gen_pl('begin dbms_output.put_line
(''put any code here''); end;')
put any code here
Just executed the following code: begin dbms_output.put_line('put any code here'); end;
PL/SQL procedure successfully completed.
This is an example of passing a call to the DBMS_OUTPUT procedure. You can pass in any
string to represent a PL/SQL block.


In this example, an anonymous block is passed to the GEN_PL procedure.
EXECUTE gen_pl('begin null; end;')
Just executed the following code: begin null; end;
PL/SQL procedure successfully completed.
In this example, the "Hello World" program is passed to the GEN_PL procedure.
EXECUTE gen_pl('begin dbms_output.put_line(''hello world'');
end;')
hello world
Just executed the following code: begin dbms_output.put_line('hello world'); end;
PL/SQL procedure successfully completed.
The DBMS_SQL package is extended so that it accepts CLOBs too.
PROCEDURE parse (c IN INTEGER, statement IN CLOB,
language_flag IN INTEGER);
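A minimal sketch of the CLOB overload in use follows. The statement here is tiny, but the same call accepts text beyond the former 32 KB limit:

```sql
DECLARE
  c        INTEGER;
  big_stmt CLOB := 'BEGIN NULL; END;';  -- could be assembled to any length
  r        INTEGER;
BEGIN
  c := DBMS_SQL.OPEN_CURSOR;
  DBMS_SQL.PARSE(c, big_stmt, DBMS_SQL.NATIVE);
  r := DBMS_SQL.EXECUTE(c);
  DBMS_SQL.CLOSE_CURSOR(c);
END;
/
```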

Question
Which statements accurately describe dynamic SQL?
Options:
1.

A dynamic SQL statement is a string literal, string variable, or string expression

2.

PL/SQL programs can use dynamic SQL with the DBMS_SQL package only

3.

The full text of a dynamic SQL statement might be unknown until run time

4.

The syntax of a dynamic SQL statement is checked at compile time

Answer
A dynamic SQL statement is a string literal, string variable, or string expression.
And the full text of a dynamic SQL statement might be unknown until run time.
Option 1 is correct. A dynamic SQL statement is a string literal, string variable, or
string expression. Using dynamic SQL, you can make your PL/SQL programs
more general and flexible because you do not need to know the full text of the
dynamic SQL statement until run time.

Option 2 is incorrect. A PL/SQL program can use dynamic SQL with either native
dynamic SQL or the DBMS_SQL package.
Option 3 is correct. The full text of a dynamic SQL statement might be unknown
until run time. Your program may build the SQL statement based on different
scenarios within an application.
Option 4 is incorrect. Because a full dynamic SQL statement may not be known
until run time, its syntax is checked at run time instead of at compile time.

Question
When should native dynamic SQL be used in place of the DBMS_SQL package?
Options:
1.

When the dynamic SQL statement retrieves rows into records

2.

When you do not know how many columns a SELECT statement will return

3.

When you do not know the SELECT list at compile time

4.

When you want to use the %FOUND cursor attributes after issuing a dynamic SQL
INSERT statement

Answer
You should use native dynamic SQL when the dynamic SQL statement retrieves
rows into records. And you should use it when you want to use the %FOUND cursor
attributes after issuing a dynamic SQL INSERT statement.
Option 1 is correct. Native dynamic SQL should be used when a SQL statement
retrieves rows into records. It is recommended that you always use native
dynamic SQL, except when you cannot.
Option 2 is incorrect. The DBMS_SQL package, instead of native dynamic SQL,
should be used when you do not know how many columns a SELECT statement
will return, or what their data types are.
Option 3 is incorrect. The DBMS_SQL package, instead of native dynamic SQL,
should be used when you do not know the SELECT list at compile time.
Option 4 is correct. You should use native dynamic SQL when you want to use the
%FOUND, %NOTFOUND, %ISOPEN, or %ROWCOUNT SQL cursor attributes after
issuing a dynamic SQL statement that is an INSERT, UPDATE, DELETE, or single-row SELECT statement.

2. Interoperability
You can switch between the DBMS_SQL package and native dynamic SQL by using the
two new functions that are added into the DBMS_SQL package as of Oracle Database
11g.
The DBMS_SQL.TO_REFCURSOR function converts a SQL cursor number to a weakly
typed variable of the PL/SQL data type REF CURSOR. You can use the REF CURSOR
variable in native dynamic SQL statements.
The DBMS_SQL.TO_CURSOR_NUMBER function converts a REF CURSOR variable, either
strongly or weakly typed, to a SQL cursor number. You can pass the SQL cursor number
to DBMS_SQL subprograms.
The two new functions have specific code associated with them:

DBMS_SQL.TO_REFCURSOR
DBMS_SQL.TO_REFCURSOR
(cursor_number IN INTEGER)
RETURN SYS_REFCURSOR;

DBMS_SQL.TO_CURSOR_NUMBER
DBMS_SQL.TO_CURSOR_NUMBER
(rc IN OUT SYS_REFCURSOR)
RETURN INTEGER;
It is best practice to avoid having any client programs use SQL directly. You should
implement the SQL via PL/SQL routines.
If you are doing a query with an unbounded result, you need a REF CURSOR.
If the WHERE clause is not known until run time, you can use DBMS_SQL for processing
and then switch to do BULK COLLECT.
In this example, a DBMS_SQL cursor is created, opened, parsed, and executed.
CREATE OR REPLACE PROCEDURE do_query (rep_id NUMBER)
IS
  TYPE num_list IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
  TYPE cur_type IS REF CURSOR;
  src_cur  cur_type;
  c_hndl   NUMBER;
  cust_nos num_list;
  crdt_nos num_list;
  ret      INTEGER;
  sql_stmt CLOB;
BEGIN
  c_hndl := DBMS_SQL.OPEN_CURSOR;
  sql_stmt := 'SELECT customer_id, credit_limit FROM customers
               WHERE account_mgr_id = :b1';
  DBMS_SQL.PARSE(c_hndl, sql_stmt, DBMS_SQL.NATIVE);
  DBMS_SQL.BIND_VARIABLE(c_hndl, 'b1', rep_id);
  ret := DBMS_SQL.EXECUTE(c_hndl);
  -- continued on next page
The cursor is then transformed into a PL/SQL REF CURSOR that is consumed by native
dynamic SQL.
The cursor is switched to native dynamic SQL and the fetch is performed with native
dynamic SQL.
  -- continued from previous page
  -- switch from dbms_sql to native dynamic SQL
  src_cur := DBMS_SQL.TO_REFCURSOR(c_hndl);
  -- fetch with native dynamic SQL
  FETCH src_cur BULK COLLECT INTO cust_nos, crdt_nos;
  IF cust_nos.COUNT > 0 THEN
    DBMS_OUTPUT.PUT_LINE ('Customer Credit Limit');
    DBMS_OUTPUT.PUT_LINE ('-------- ------------');
    FOR i IN 1 .. cust_nos.COUNT LOOP
      DBMS_OUTPUT.PUT_LINE(cust_nos(i) || '      ' || crdt_nos(i));
    END LOOP;
  END IF;
  CLOSE src_cur;
END do_query;
/
This example shows the execution of the DO_QUERY procedure.
EXECUTE do_query(145)
Customer Credit Limit
-------- ------------
308      1200
309      1200
310      5000
360      3600
344      2400
380      3700
...
934      600
PL/SQL procedure successfully completed.
When using the DBMS_SQL.TO_REFCURSOR function, the cursor passed in by the cursor
number parameter must be opened, parsed, and executed.
After the cursor number is transformed into a REF CURSOR

the cursor_number value is no longer accessible by any DBMS_SQL operations

you cannot use DBMS_SQL.ISOPEN to check to see if the cursor number is still open
Toggling between a REF CURSOR and a DBMS_SQL cursor number after starting to fetch
is not allowed.
You can use the DBMS_SQL.TO_CURSOR_NUMBER function to transform a REF CURSOR
into a DBMS_SQL cursor number.
In this example, a REF CURSOR is opened and transformed into a DBMS_SQL cursor.
When using the DBMS_SQL.TO_CURSOR_NUMBER function, the REF CURSOR passed in
must be opened first.
CREATE OR REPLACE PROCEDURE do_query2 (sql_stmt VARCHAR2, rep_id NUMBER)
IS
  TYPE cur_type IS REF CURSOR;
  src_cur cur_type;
  c_hndl  NUMBER;
  desctab DBMS_SQL.DESC_TAB;
  colcnt NUMBER; custid NUMBER; crdvar NUMBER;
BEGIN
  OPEN src_cur FOR sql_stmt USING rep_id;
  -- switch from native dynamic SQL to DBMS_SQL:
  c_hndl := DBMS_SQL.TO_CURSOR_NUMBER(src_cur);
  DBMS_SQL.DESCRIBE_COLUMNS(c_hndl, colcnt, desctab);
  -- define columns
  FOR i IN 1 .. colcnt LOOP
    IF desctab(i).col_type = 1 THEN
      DBMS_SQL.DEFINE_COLUMN(c_hndl, i, custid);
    ELSIF desctab(i).col_type = 2 THEN
      DBMS_SQL.DEFINE_COLUMN(c_hndl, i, crdvar);
    END IF;
  END LOOP;
  -- continued on next page
After the REF CURSOR is transformed into a DBMS_SQL cursor number, the REF CURSOR
is no longer accessible by any native dynamic SQL operations.
You cannot toggle between a REF CURSOR and a DBMS_SQL cursor number after
fetching is started.
  -- continued from previous page
  -- fetch rows
  WHILE DBMS_SQL.FETCH_ROWS(c_hndl) > 0 LOOP
    FOR i IN 1 .. colcnt LOOP
      IF desctab(i).col_type = 1 THEN
        DBMS_SQL.COLUMN_VALUE(c_hndl, i, custid);
      ELSIF desctab(i).col_type = 2 THEN
        DBMS_SQL.COLUMN_VALUE(c_hndl, i, crdvar);
      END IF;
    END LOOP;
    -- could do more processing...
  END LOOP;
  DBMS_SQL.CLOSE_CURSOR(c_hndl);
END do_query2;
/
In this example, the DO_QUERY2 procedure is created. It is then executed with a SQL
statement passed to it as the first parameter.
This SQL statement is opened as a REF CURSOR. The procedure converts the REF
CURSOR into a DBMS_SQL cursor. The rest of the processing for this procedure uses
DBMS_SQL.
EXECUTE do_query2('SELECT customer_id, credit_limit FROM customers
WHERE account_mgr_id = :b1', 148)
PL/SQL procedure successfully completed.

3. DBMS_SQL Support for abstract data types


DBMS_SQL supports abstract data types. You can use varrays, nested tables, REFs, and
opaque types with the DBMS_SQL package.

Opaque types are abstract data types. With data implemented as simply a series of
bytes, the internal representation is not exposed.
Often opaque types are provided by Oracle's supplied packages rather than being
implemented by you.
Opaque types are similar in some basic ways to object types, with similar concepts of
static methods, instances, and instance methods. Typically, only the methods supplied
with an opaque type allow you to manipulate the state and internal byte representation.
For example, XMLType, provided with Oracle Database 11g, facilitates handling XML data
natively in the database.

Note
PL/SQL data types, such as INDEX-BY tables, Booleans, and records are not
supported as bind and define variable data types in DBMS_SQL. However,
DBMS_SQL supports all SQL data types.
In this example, DBMS_SQL is used with the varray column in the CUSTOMERS table. The
PHONE_LIST_TYP object is defined in the Order Entry sample schema as
Phone_list_typ VARRAY(5) OF VARCHAR2(25).
CREATE OR REPLACE PROCEDURE update_phone_nos
  (p_new_nos phone_list_typ, p_cust_id customers.customer_id%TYPE)
IS
  some_phone_nos phone_list_typ;
  c_hndl NUMBER;
  r NUMBER;
  sql_stmt CLOB :=
    'UPDATE customers SET phone_numbers = :b1
     WHERE customer_id = :b2
     RETURNING phone_numbers INTO :b3';
BEGIN
  c_hndl := DBMS_SQL.OPEN_CURSOR;
  DBMS_SQL.PARSE(c_hndl, sql_stmt, dbms_sql.native);
  DBMS_SQL.BIND_VARIABLE (c_hndl, 'b1', p_new_nos);
  DBMS_SQL.BIND_VARIABLE (c_hndl, 'b2', p_cust_id);
  DBMS_SQL.BIND_VARIABLE (c_hndl, 'b3', some_phone_nos);
  r := DBMS_SQL.EXECUTE (c_hndl);
  DBMS_SQL.VARIABLE_VALUE(c_hndl, 'b3', some_phone_nos);
  DBMS_SQL.CLOSE_CURSOR(c_hndl);
  -- continued on next page

The UPDATE_PHONE_NOS procedure binds the PHONE_LIST_TYP object. And then it
assigns a value to the SOME_PHONE_NOS varray variable defined in the PL/SQL block.
  -- continued from previous page
  -- select the phone nos
  sql_stmt := 'SELECT phone_numbers FROM customers
               WHERE customer_id = :b2';
  c_hndl := DBMS_SQL.OPEN_CURSOR;
  DBMS_SQL.PARSE(c_hndl, sql_stmt, dbms_sql.native);
  DBMS_SQL.DEFINE_COLUMN(c_hndl, 1, some_phone_nos);
  DBMS_SQL.BIND_VARIABLE(c_hndl, 'b2', p_cust_id);
  r := DBMS_SQL.EXECUTE_AND_FETCH(c_hndl);
  DBMS_SQL.COLUMN_VALUE(c_hndl, 1, some_phone_nos);
  DBMS_SQL.CLOSE_CURSOR(c_hndl);
  FOR i IN some_phone_nos.FIRST .. some_phone_nos.LAST LOOP
    DBMS_OUTPUT.PUT_LINE('Phone number = ' || some_phone_nos(i) ||
                         ' updated.');
  END LOOP;
END update_phone_nos;
/
When the UPDATE_PHONE_NOS procedure is called, the new phone numbers for a
specific customer are passed as parameters.
The result shows that the varray abstract data type is updated.
DECLARE
new_phone_nos phone_list_typ;
BEGIN
new_phone_nos :=
phone_list_typ
('12345678', '22222222', '33333333', '44444444');
update_phone_nos(new_phone_nos, 980);
END;
/
Phone number = 12345678 updated.
Phone number = 22222222 updated.
Phone number = 33333333 updated.
Phone number = 44444444 updated.
PL/SQL procedure successfully completed.


Suppose you want to create a new procedure, called UPDATE_TRUCK_INFO, that adds
truck capacity information into the WAREHOUSES table.
To do this, you start SQL Developer and connect to the database.
Next, you examine the structure of the WAREHOUSES table. You click the expand symbol
next to the Tables node in the Object navigator, and select WAREHOUSES from the tree.
The table definition appears on the right.
You want to create a new VARRAY type, called TRUCK_CAPACITY_TYP, to store the truck
capacities for trucks at a given warehouse.
It is estimated that a warehouse will have no more than 10 trucks. The capacity
information is a variable length string, no more than 40 characters in length.
You can either run a SQL statement or create the object through the SQL Developer GUI
interface.
CREATE TYPE truck_capacity_typ
AS VARRAY (10) OF VARCHAR2(40);
You begin by right-clicking Types, and selecting New Type from the shortcut menu.
In the Create Object Type dialog box that is displayed, you enter the name
TRUCK_CAPACITY_TYP in the Name field, select Array Type from the Type drop-down
list, and click OK.
You enter the array size and the data type, and then click Compile.
CREATE OR REPLACE
TYPE TRUCK_CAPACITY_TYP AS VARRAY (10) OF VARCHAR2(40);
A confirmation message is displayed.
TRUCK_CAPACITY_TYP Compiled
The new type appears in the Object navigator under the Types node.
Next, you enter this code in the Code Editor window to add a new column called TRUCKS,
of the type TRUCK_CAPACITY_TYP, to the WAREHOUSES table.
Then you click the Run Script icon.

ALTER TABLE warehouses


ADD (trucks truck_capacity_typ);
A confirmation message is displayed.
ALTER TABLE warehouses succeeded.
You can then create the UPDATE_TRUCK_INFO procedure. This procedure accepts two
arguments:

the warehouse ID for which you are updating the trucking information

the trucking information


The trucking information is the capacity for each truck at a warehouse.
You begin by right-clicking Procedures, and selecting New Procedure from the shortcut
menu.
In the Create PL/SQL Procedure dialog box that is displayed, you enter the name
UPDATE_TRUCK_INFO in the Name field, and add the two parameters by clicking the
Add button. You then click OK.
In the Code Editor window, you enter this code and then click Compile.
CREATE OR REPLACE
PROCEDURE update_truck_info
  (p_new_truck IN TRUCK_CAPACITY_TYP,
   p_wh_id IN NUMBER
  ) AS
  some_truck_info truck_capacity_typ := truck_capacity_typ();
  c_hndl NUMBER;
  r NUMBER;
  sql_stmt CLOB :=
    'UPDATE warehouses SET trucks = :b1
     WHERE warehouse_id = :b2
     RETURNING trucks INTO :b3';
BEGIN
  c_hndl := DBMS_SQL.OPEN_CURSOR;
  DBMS_SQL.PARSE(c_hndl, sql_stmt, dbms_sql.native);

Supplement
View the full code for creating the UPDATE_TRUCK_INFO procedure.


A confirmation message is displayed.
UPDATE_TRUCK_INFO Compiled
UPDATE_TRUCK_INFO appears in the Object navigator under the Procedures node.
You right-click the UPDATE_TRUCK_INFO procedure and select Edit.
Finally, you call your procedure through an anonymous block.
You begin by right-clicking Procedures, and selecting Refresh.
Then you click the Run icon on the UPDATE_TRUCK_INFO tabbed page.
In the Run PL/SQL dialog box that appears, you enter the run parameters, and click OK.
DECLARE
P_NEW_TRUCK OE.TRUCK_CAPACITY_TYP;
P_WH_ID NUMBER;
BEGIN
-- Modify the code to initialize the variable
P_NEW_TRUCK := truck_capacity_typ ('1 ton', '2 ton', '1 ton',
'5 ton');
P_WH_ID := 9;
UPDATE_TRUCK_INFO (
P_NEW_TRUCK => P_NEW_TRUCK,
P_WH_ID => P_WH_ID
);
END;
The results are displayed on the Running-Log tabbed page.

Question
Which data types does the DBMS_SQL package support?
Options:
1.

It supports abstract data types

2.

It supports all SQL data types

3.

It supports only non-opaque types

4.

It supports PL/SQL datatypes as bind variables

Answer

The DBMS_SQL package supports abstract data types and all SQL data types.
Option 1 is correct. The DBMS_SQL package supports abstract types, including
opaque types.
Option 2 is correct. DBMS_SQL supports abstract data types, varrays, nested
tables, REFs, opaque types, and all SQL data types.
Option 3 is incorrect. Opaque types are abstract data types, which are supported
by DBMS_SQL.
Option 4 is incorrect. PL/SQL data types, such as INDEX-BY tables, Booleans,
and records are not supported as bind and define variable data types in
DBMS_SQL.

Summary
Native dynamic SQL and the DBMS_SQL package are two ways to implement a dynamic
SQL statement programmatically.
You can switch between the DBMS_SQL package and native dynamic SQL by using the
two new functions that are added into the DBMS_SQL package.
DBMS_SQL supports abstract data types. You can use varrays, nested tables, REFs, and
opaque types with the DBMS_SQL package. PL/SQL data types, such as INDEX-BY
tables, Booleans, and records are not supported as bind and define variable data types in
DBMS_SQL.

Language Usability Enhancements


Learning objective

After completing this topic, you should be able to identify the steps for using language
enhancements to improve sequence usability, control loop iterations, employ named and
mixed notation in calls to PL/SQL, and place a table in read-only mode.

1. Sequence enhancement
Prior to Oracle Database 11g, you were forced to write a SQL statement in order to use a
sequence object value in a PL/SQL subroutine.
Typically, you would write a SELECT statement to reference the pseudocolumns of

NEXTVAL and CURRVAL to obtain a sequence number. This method created a usability
problem.
In Oracle Database 11g, the limitation of forcing you to write a SQL statement to retrieve
a sequence value is lifted. With the sequence enhancement feature

sequence usability is improved

less typing is required by the developer

the resulting code is clearer


You also can use the CURRVAL and NEXTVAL pseudocolumns, qualified by a sequence
name, directly in a PL/SQL expression.
In Oracle Database 11g, you can use the NEXTVAL and CURRVAL pseudocolumns in any
PL/SQL context where an expression of NUMBER data type may legally appear.
This example assumes that you have already created a sequence named MY_SEQ using
the CREATE SEQUENCE command.
DECLARE
v_new_id NUMBER;
BEGIN
v_new_id := my_seq.NEXTVAL;
END;
/
You should try to avoid using this old syntax.
SELECT my_seq.NEXTVAL INTO v_new_id FROM dual;
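Because NEXTVAL and CURRVAL are now legal wherever a NUMBER expression can appear, they can also be used directly in assignments and arithmetic. A short sketch, assuming the MY_SEQ sequence from above:

```sql
BEGIN
  -- arithmetic with a sequence value, no SELECT ... FROM dual needed
  DBMS_OUTPUT.PUT_LINE('Next ID plus offset: ' || TO_CHAR(my_seq.NEXTVAL + 100));
  -- CURRVAL is legal here because NEXTVAL was referenced in this session
  DBMS_OUTPUT.PUT_LINE('Current value: ' || my_seq.CURRVAL);
END;
/
```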

Question
Within an anonymous PL/SQL block, you have declared a variable
v_new_orderno, with the data type NUMBER.
Which line of code would correctly set the value of the v_new_orderno variable
to the next value of the orderno_seq sequence in the executable section of your
PL/SQL block?
Options:
1.

v_new_orderno = orderno_seq.CURRVAL;

2.

v_new_orderno := orderno_seq.CURRVAL;

3.

v_new_orderno = orderno_seq.NEXTVAL;

4.

v_new_orderno := orderno_seq.NEXTVAL;

Answer
This line of code would correctly set the value of the v_new_orderno variable as
specified:
v_new_orderno := orderno_seq.NEXTVAL;
Option 1 is incorrect. To set the value of the variable to the next value of the
orderno_seq sequence, a colon is required before the equals sign. And the
pseudocolumn NEXTVAL and not CURRVAL should be used.
Option 2 is incorrect. To set the value of the variable to the next value of the
orderno_seq sequence, the pseudocolumn NEXTVAL not CURRVAL should
be used.
Option 3 is incorrect. To set the value of the variable to the next value of the
orderno_seq sequence, a colon is required before the equals sign.
Option 4 is correct. With the release of Oracle Database 11g, you can use the
CURRVAL and NEXTVAL pseudocolumns, qualified by a sequence name, directly
in a PL/SQL expression.

2. The new PL/SQL CONTINUE statement


The CONTINUE statement that is added to PL/SQL enables you to transfer control within
a loop back to a new iteration or to leave the loop. Many other programming languages
have this functionality. With Oracle Database 11g, PL/SQL also offers this functionality.
Prior to Oracle Database 11g, you could code a workaround using Boolean variables and
conditional statements to simulate the CONTINUE programmatic functionality. In some
cases, the workarounds are less efficient.
The CONTINUE statement uses parallel structure and semantics to the EXIT statement.
The CONTINUE statement offers you a simplified means to control loop iterations. It may
be more efficient than previous coding workarounds.
The CONTINUE statement is commonly used to filter data inside a loop body before the
main processing begins.
The first TOTAL assignment is executed for each of the 10 iterations of the loop.
The second TOTAL assignment is executed for the first five iterations of the loop. The
CONTINUE statement transfers control within a loop back to a new iteration. So for the
last five iterations of the loop, the second TOTAL assignment is not executed.

DECLARE
  v_total SIMPLE_INTEGER := 0;
BEGIN
  FOR i IN 1..10 LOOP
    v_total := v_total + i;
    dbms_output.put_line ('Total is: ' || v_total);
    CONTINUE WHEN i > 5;
    v_total := v_total + i;
    dbms_output.put_line ('End of Loop Total is: ' || v_total);
  END LOOP;
END;
/
The end result of the TOTAL variable is 70.
Total is: 1
End of Loop Total is: 2
Total is: 4
End of Loop Total is: 6
Total is: 9
End of Loop Total is: 12
Total is: 16
End of Loop Total is: 20
Total is: 25
End of Loop Total is: 30
Total is: 36
Total is: 43
Total is: 51
Total is: 60
Total is: 70
PL/SQL procedure successfully completed.
You can use the CONTINUE statement to jump to the next iteration of an outer loop. You
give the outer loop a label to identify where the CONTINUE statement should go.
The CONTINUE statement in the innermost loop terminates that loop whenever the WHEN
condition is true, just like the keyword EXIT.

After the innermost loop is terminated by the CONTINUE statement, control transfers to
the next iteration of the outermost loop, labeled BeforeTopLoop in this example.
CREATE OR REPLACE PROCEDURE two_loop
IS
  v_total NUMBER := 0;
BEGIN
  <<BeforeTopLoop>>
  FOR i IN 1..10 LOOP
    v_total := v_total + 1;
    dbms_output.put_line ('Total is: ' || v_total);
    FOR j IN 1..10 LOOP
      CONTINUE BeforeTopLoop WHEN i + j > 5;
      v_total := v_total + 1;
    END LOOP;
  END LOOP;
END two_loop;
Procedure created.
When this pair of loops completes, the value of the TOTAL variable is 20.
You can also use the CONTINUE statement inside an inner block of code that does not
contain a loop as long as the block is nested inside an appropriate outer loop.
--RESULTS:
EXECUTE two_loop
Total is: 1
Total is: 6
Total is: 10
Total is: 13
Total is: 15
Total is: 16
Total is: 17
Total is: 18
Total is: 19
Total is: 20
PL/SQL procedure successfully completed.
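The note above about nested blocks can be sketched like this; the CONTINUE sits inside an inner block with no loop of its own, which is legal because the block is nested in the FOR loop. Only odd values of the counter reach the PUT_LINE call:

```sql
BEGIN
  FOR i IN 1..5 LOOP
    BEGIN  -- inner block without a loop
      CONTINUE WHEN MOD(i, 2) = 0;
      DBMS_OUTPUT.PUT_LINE('Odd value: ' || i);
    END;
  END LOOP;
END;
/
```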
The CONTINUE statement gives you greater programming functionality. However, there
are some limitations for its use.

You should be careful not to use the CONTINUE statement outside of a loop or to pass
through a procedure, function, or method boundary. Doing so generates a compiler error.
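For instance, a CONTINUE placed outside any loop fails at compile time, roughly as sketched here (the exact error text may vary by release):

```sql
BEGIN
  CONTINUE;  -- not inside a loop
END;
/
-- PLS-00376: illegal EXIT/CONTINUE statement; it must appear inside a loop
```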
PL/SQL allows arguments in a subroutine call to be specified using positional, named, or
mixed notation.
Before Oracle Database 11g, only the positional notation was supported in calls from
SQL. Starting in Oracle Database 11g, named and mixed notation can be used for
specifying arguments in calls to PL/SQL subroutines from SQL statements.
The benefits of named and mixed notation from SQL are that

for long parameter lists, with most having default values, you can omit values from the optional parameters

you can avoid duplicating the default value of the optional parameter at each call site
In this example, the call to the function f within the SELECT SQL statement uses the
named notation.
Prior to Oracle Database 11g, you could not use the named or mixed notation when
passing parameters to a function from within a SQL statement. Prior to Oracle Database
11g, you received the 'ORA-00907: missing right parenthesis' error.
CREATE OR REPLACE FUNCTION f (
p1 IN NUMBER DEFAULT 1,
p5 IN NUMBER DEFAULT 5)
RETURN NUMBER
IS
v number;
BEGIN
v:= p1 + (p5 * 2);
RETURN v;
END f;
/
Function created.
SELECT f(p5 => 10) FROM DUAL;
F(P5=>10)
---------
       21
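Mixed notation is also allowed: positional arguments come first, followed by named ones. Using the same function f, the first argument is passed positionally and p5 by name, giving 2 + (10 * 2):

```sql
SELECT f(2, p5 => 10) FROM DUAL;

F(2,P5=>10)
-----------
         22
```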

Question
What must be considered when using the CONTINUE statement?

Options:
1.

It allows you to transfer control within a loop back to a new iteration

2.

It can be used to pass through a procedure boundary

3.

It cannot appear outside a loop

4.

It cannot be used to filter data inside a loop body before main processing begins

Answer
The CONTINUE statement allows you to transfer control within a loop back to a
new iteration. And it cannot appear outside a loop.
Option 1 is correct. The CONTINUE statement gives you greater programming
functionality. It allows you to transfer control within a loop back to a new iteration
or to leave the loop.
Option 2 is incorrect. You cannot use the CONTINUE statement to pass through a
procedure, function, or method boundary. This will result in a compiler error.
Option 3 is correct. The CONTINUE statement cannot appear outside of a loop.
Doing so will result in a compile error.
Option 4 is incorrect. The CONTINUE statement offers you a simplified means to
control loop iterations. It is commonly used to filter data inside the body of a loop
before the main processing begins.

3. Read-only tables
You can specify READ ONLY to place a table in read-only mode. When the table is in
read-only mode, you cannot issue any DML statements that affect the table or any
SELECT ... FOR UPDATE statements.
You can issue DDL statements as long as they do not modify any table data. Operations
on indexes associated with the table are allowed when the table is in read-only mode.
You use the ALTER TABLE syntax to put a table into read-only mode. This prevents
DML changes to the table data during table maintenance.
You specify READ WRITE to return a read-only table to read/write mode.
ALTER TABLE customers READ ONLY;
-- perform table maintenance and then
-- return table back to read/write mode

ALTER TABLE customers READ WRITE;


You can drop a table that is in read-only mode.
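For illustration, assuming the OE sample schema and a hypothetical customer_id value, DML against the read-only table fails with the ORA-12081 error:

```sql
ALTER TABLE customers READ ONLY;

UPDATE customers
SET    cust_last_name = 'Smyth'
WHERE  customer_id = 101;
-- ORA-12081: update operation not allowed on table "OE"."CUSTOMERS"

-- Return the table to read/write mode
ALTER TABLE customers READ WRITE;
```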

Note
The DROP command is executed only in the data dictionary, so access to the table
contents is not required. The space used by the table will not be reclaimed, until
the tablespace is made read/write again and the required changes can be made to
the block segment headers, and so on.
You decide to try out the various usability features introduced in Oracle Database 11g,
using the OE schema.
You start SQL Developer by double-clicking the SQL Developer 1.2 icon on your
desktop.
You right-click mydbconnection and select Connect.
You enter your password when prompted and click OK.
You create a new sequence using this code.
CREATE SEQUENCE customer_seq
START WITH 1000;
In the Object navigator, you right-click Sequences and select New Sequence from the
shortcut menu.
You then enter the sequence Name and Start Value, and click OK.
This code, which uses the SELECT statement, represents an older way of performing this
task.
CREATE OR REPLACE PROCEDURE add_customer
(p_last_name customers.cust_last_name%TYPE,
p_first_name customers.cust_first_name%TYPE)
IS
BEGIN
INSERT INTO customers(customer_id,cust_last_name,
cust_first_name)
You modify the code so that you can call this sequence directly and eliminate the SELECT
statement.

This code can be run from the Code Editor window.


CREATE OR REPLACE PROCEDURE add_customer
(p_last_name customers.cust_last_name%TYPE,
p_first_name customers.cust_first_name%TYPE)
IS
BEGIN
INSERT INTO customers(customer_id,cust_last_name,
cust_first_name)
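To illustrate the difference (a sketch only; the full courseware procedure is truncated above, so the exact column list and body are assumptions), the pre-11g version fetches the sequence value with a SELECT from DUAL, while in Oracle Database 11g NEXTVAL can appear directly in a PL/SQL expression:

```sql
-- Older approach: fetch the sequence value with SELECT ... FROM DUAL
CREATE OR REPLACE PROCEDURE add_customer
  (p_last_name  customers.cust_last_name%TYPE,
   p_first_name customers.cust_first_name%TYPE)
IS
  v_id customers.customer_id%TYPE;
BEGIN
  SELECT customer_seq.NEXTVAL INTO v_id FROM DUAL;
  INSERT INTO customers (customer_id, cust_last_name, cust_first_name)
  VALUES (v_id, p_last_name, p_first_name);
END add_customer;
/

-- Oracle Database 11g: call the sequence directly, no SELECT needed
CREATE OR REPLACE PROCEDURE add_customer
  (p_last_name  customers.cust_last_name%TYPE,
   p_first_name customers.cust_first_name%TYPE)
IS
BEGIN
  INSERT INTO customers (customer_id, cust_last_name, cust_first_name)
  VALUES (customer_seq.NEXTVAL, p_last_name, p_first_name);
END add_customer;
/
```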
You need to test the ADD_CUSTOMER procedure and pass 'Smith' for the last name
parameter and 'John' for the first name parameter.
To do this, you run the new procedure by expanding the Procedures node in the Object
navigator. Then you right-click the ADD_CUSTOMER procedure and select Run from the
shortcut menu.
Then you enter the parameter values in the Run PL/SQL window and click OK.
Verify that the row has been inserted.
Now you can try using the mixed notation calls with PL/SQL.
You start by creating a function named CALC_CHARGES that calculates the surcharge on
a product based on its weight class. To do this, you enter this code in the Code Editor
window and click the Run Script icon.
CREATE OR REPLACE FUNCTION calc_charges
(p_id NUMBER DEFAULT 0,
p_weight_class NUMBER DEFAULT 1,
p_list_price NUMBER DEFAULT 0)
RETURN NUMBER
IS
v_surcharge NUMBER;
BEGIN
IF p_id = 0 THEN
RETURN 0;
END IF;
v_surcharge :=

Supplement
Selecting the link title opens the resource in a new browser window.

Launch window

View the complete code to create a function named CALC_CHARGES.


You can use this code query to sample the data in your new function.
SELECT product_id, list_price, weight_class,
calc_charges(product_id, weight_class, list_price)
AS new_price
FROM product_information
WHERE product_id BETWEEN 1725 and 1740;
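Because CALC_CHARGES declares a default for every parameter, the call can also use mixed notation. This sketch (assuming the same OE objects) passes the first argument positionally and the last by name, letting p_weight_class fall back to its default of 1:

```sql
SELECT product_id, list_price,
       calc_charges(product_id, p_list_price => list_price) AS new_price
FROM   product_information
WHERE  product_id BETWEEN 1725 AND 1740;
```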
In this code there is an error - the CONTINUE statement is located at the top of the loop,
so v_index is never incremented. This results in an infinite loop.
CREATE OR REPLACE PROCEDURE continue_loop
(p_looper NUMBER)
IS
v_index SIMPLE_INTEGER := 0;
BEGIN
LOOP
CONTINUE WHEN v_index < p_looper;
v_index := v_index + 1;
dbms_output.put_line ('In the loop, index value is: ' ||
v_index);
EXIT WHEN v_index = p_looper;
END LOOP;
dbms_output.put_line ('Out of the loop, index value is: '
|| v_index);
END continue_loop;
/
This is the correct code for doing this. You enter it in the Code Editor window.
CREATE OR REPLACE PROCEDURE continue_loop
(p_looper NUMBER)
IS
v_index SIMPLE_INTEGER := 0;
BEGIN
LOOP
v_index := v_index + 1;
dbms_output.put_line ('In the loop before CONTINUE,
index value is: ' || v_index);
EXIT WHEN v_index = p_looper;
IF v_index < 3
THEN CONTINUE;
END IF;
dbms_output.put_line ('In the loop after CONTINUE,
index value is: ' || v_index);

END LOOP;
dbms_output.put_line ('Out of the loop,
index value is: ' || v_index);
END continue_loop;
/
To run the CONTINUE_LOOP procedure, you right-click it and select Run from the options
menu.
You enter the parameter value in the Run PL/SQL window, and then click OK.
If you execute the CONTINUE_LOOP procedure and pass it the value of 5, these are the
results.
In the loop before CONTINUE,
index value is: 1
In the loop before CONTINUE,
index value is: 2
In the loop before CONTINUE,
index value is: 3
In the loop after CONTINUE,
index value is: 3
In the loop before CONTINUE,
index value is: 4
In the loop after CONTINUE,
index value is: 4
In the loop before CONTINUE,
index value is: 5
Out of the loop,
index value is: 5
You can modify a table to be read-only by running this code in the Code Editor window.
ALTER TABLE customers READ ONLY;
You run the procedure by expanding the Procedures node in the Object navigator and
right-click the ADD_CUSTOMER procedure. You select Run from the shortcut menu.
You then enter the parameter values in the Run PL/SQL window, and then click OK.
If you try to modify the table by adding an extra customer, for example you receive an
error message.
To change the table back to being read-write, you use this code.
ALTER TABLE customers READ WRITE;

Question
Identify the statement that would correctly place a table called ORDERS into read-only mode.
Options:
1.

ALTER TABLE orders READ ONLY;

2.

ALTER TABLE orders READ-ONLY;

3.

ALTER TABLE orders READ ONLY MODE;

4.

ALTER TABLE orders READ-ONLY MODE;

Answer
This statement would correctly place a table called ORDERS into read-only mode:
ALTER TABLE orders READ ONLY;
Option 1 is correct. To place a table named ORDERS into read-only mode, the
statement ALTER TABLE orders READ ONLY; is used. In read-only mode, DML
statements that affect the table, and DDL statements that modify table data, are not permitted.
Option 2 is incorrect. This is not the correct statement for placing a table named
ORDERS into read-only mode. In the ALTER TABLE statement, there should not be
a hyphen between READ and ONLY.
Option 3 is incorrect. This is not the correct statement for placing a table named
ORDERS into read-only mode. In the ALTER TABLE statement, the word MODE
should not be used.
Option 4 is incorrect. This is not the correct statement for placing a table named
ORDERS into read-only mode. In the ALTER TABLE statement, there should not be
a hyphen between READ and ONLY, and the word MODE should not be used.

Summary
In Oracle Database 11g, you can use the NEXTVAL and CURRVAL pseudocolumns in any
PL/SQL context where an expression of NUMBER data type may legally appear.
The CONTINUE statement that is added to PL/SQL enables you to transfer control within
a loop back to a new iteration or to leave the loop. PL/SQL allows arguments in a
subroutine call to be specified using positional, named, or mixed notation.
You use the ALTER TABLE syntax to put a table into read-only mode. You specify READ
WRITE to return a read-only table to read/write mode. You can drop a table that is in
read-only mode.

Improving Performance
Learning objective

After completing this topic, you should be able to recognize the steps for improving the
performance of SQL and PL/SQL with a new compiler, a new, faster data type, inlining
for faster performance, caching, and flashback enhancements.

1. The SIMPLE_INTEGER data type


In Oracle Database 11g, the compiler translates PL/SQL source directly to the dynamic
link library (DLL) for the current hardware.
It also does the linking and loading so that the file system directories are no longer
needed. This means that a C compiler is not needed and that PL/SQL native compilation
will work out of the box.
The parameter that controls the compilation type is PLSQL_CODE_TYPE.
You can set this to either native or interpreted with the ALTER SESSION, ALTER
SYSTEM, and ALTER PROCEDURE statements. The ALTER PROCEDURE statement can
be set for a specific PL/SQL subprogram only.
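For example (my_proc is a hypothetical subprogram name):

```sql
-- Compile all subsequent PL/SQL in this session natively
ALTER SESSION SET PLSQL_CODE_TYPE = NATIVE;

-- Or switch a single subprogram, keeping its other compiler settings
ALTER PROCEDURE my_proc COMPILE
  PLSQL_CODE_TYPE = NATIVE
  REUSE SETTINGS;
```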

Note
In Oracle Database 10g, Release 1, the configuration of initialization parameters
and the command setup for native compilation were simplified. The only
parameter required was PLSQL_NATIVE_LIBRARY_DIR. The parameters related
to the compiler, linker, and make utility are obsolete:
PLSQL_NATIVE_C_COMPILER, PLSQL_NATIVE_LINKER,
PLSQL_NATIVE_MAKE_FILE_NAME, and PLSQL_NATIVE_MAKE_UTILITY. Native
compilation is turned on and off by a separate initialization parameter,
PLSQL_CODE_TYPE, rather than being one of several options in the
PLSQL_COMPILER_FLAGS parameter, which is now deprecated. The
spnc_commands file, located in your ORACLE_HOME/plsql directory, contains
the information for compiling and linking, rather than a makefile.
The SIMPLE_INTEGER data type is a predefined subtype of the PLS_INTEGER
(BINARY_INTEGER) data type and has the same numeric range as PLS_INTEGER. It
differs significantly from PLS_INTEGER in its overflow semantics.

Incrementing the largest SIMPLE_INTEGER value by one silently produces the smallest
value and decrementing the smallest value by one silently produces the largest value.
These "wrap around" semantics conform to the IEEE standard for 32-bit integer
arithmetic.
The SIMPLE_INTEGER predefined subtype has several key features. It

is a predefined subtype

has the range -2147483648 through 2147483647

does not include a null value

is allowed anywhere in PL/SQL where the PLS_INTEGER data type is allowed
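A quick anonymous block illustrates the wrap-around semantics (run with SERVEROUTPUT ON):

```sql
SET SERVEROUTPUT ON

DECLARE
  n SIMPLE_INTEGER := 2147483647;   -- largest SIMPLE_INTEGER value
BEGIN
  n := n + 1;                       -- overflows silently, no exception
  DBMS_OUTPUT.PUT_LINE(n);          -- prints -2147483648
END;
/
```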


Without the overhead of overflow checking and nullness checking, the SIMPLE_INTEGER
data type provides significantly better performance than PLS_INTEGER when the
parameter PLSQL_CODE_TYPE is set to native because arithmetic operations on the
former are performed directly in the machine's hardware.
The performance difference is less noticeable when parameter PLSQL_CODE_TYPE is set
to interpreted, but even with this setting, the SIMPLE_INTEGER type is faster than the
PLS_INTEGER type.
For example, a test procedure named p is created, which utilizes conditional compilation.
By using conditional compilation, this procedure can easily be run with the test case of
the SIMPLE_INTEGER or with that of the PLS_INTEGER. Conditional compilation
directives in the code are prefixed with the $ symbol, and the inquiry flag is
referenced with the $$ prefix.
Before running this procedure, you set the conditional compilation flag named SIMPLE to
either true or false. By using a subtype, you can improve the readability of the code and
reduce potential typographic errors.
CREATE OR REPLACE PROCEDURE p
IS
  t0 NUMBER := 0;
  t1 NUMBER := 0;
  $IF $$Simple $THEN
    SUBTYPE My_Integer_t IS SIMPLE_INTEGER;
    My_Integer_t_Name CONSTANT VARCHAR2(30) := 'SIMPLE_INTEGER';
  $ELSE
    SUBTYPE My_Integer_t IS PLS_INTEGER;
    My_Integer_t_Name CONSTANT VARCHAR2(30) := 'PLS_INTEGER';
  $END
  v00 My_Integer_t := 0;
  v01 My_Integer_t := 0;
  v02 My_Integer_t := 0;
  v03 My_Integer_t := 0;
  v04 My_Integer_t := 0;
  v05 My_Integer_t := 0;
  two CONSTANT My_Integer_t := 2;
  lmt CONSTANT My_Integer_t := 100000000;

Note
There is no difference between using the PLS_INTEGER data type and the
BINARY_INTEGER data type. Starting in Oracle Database 10g, they are exactly
the same.
The main processing of this code performs a loop where simple mathematical
computations take place. The loop is timed using DBMS_UTILITY.GET_CPU_TIME.
If you use the SIMPLE_INTEGER in a mixed operation with any other numeric types, or
pass it as a parameter or a bind, or define it where a PLS_INTEGER is expected, a
compiler warning is issued. If you violate a limitation, a compiler error is raised.
BEGIN
t0 := DBMS_UTILITY.GET_CPU_TIME();
WHILE v01 < lmt LOOP
v00 := v00 + Two;
v01 := v01 + Two;
v02 := v02 + Two;
v03 := v03 + Two;
v04 := v04 + Two;
v05 := v05 + Two;
END LOOP;
IF v01 <> lmt OR v01 IS NULL THEN
RAISE Program_Error;
END IF;
t1 := DBMS_UTILITY.GET_CPU_TIME();
DBMS_OUTPUT.PUT_LINE(
RPAD(LOWER($$PLSQL_Code_Type), 15)||
RPAD(LOWER(My_Integer_t_Name), 15)||

Supplement
Selecting the link title opens the resource in a new browser window.

Launch window
View the complete DBMS_UTILITY.GET_CPU_TIME loop.

The procedure p is executed under two different conditions: native compilation with the
SIMPLE_INTEGER data type and native compilation with the PLS_INTEGER data type.
The procedure is natively compiled and the code type is set to use the
SIMPLE_INTEGER.
ALTER PROCEDURE p COMPILE
PLSQL_Code_Type = NATIVE PLSQL_CCFlags = 'simple:true'
REUSE SETTINGS;
Procedure altered.
EXECUTE p()
native         simple_integer 51 centiseconds
PL/SQL procedure successfully completed.


The results of executing the procedure show the timing for the code running with the
SIMPLE_INTEGER data type.
The procedure is natively compiled and the code type is set to not use the
SIMPLE_INTEGER.
ALTER PROCEDURE p COMPILE
PLSQL_Code_Type = native PLSQL_CCFlags = 'simple:false'
REUSE SETTINGS;
Procedure altered.
EXECUTE p()
native         pls_integer    884 centiseconds
PL/SQL procedure successfully completed.


The results of executing the procedure show the timing for the code running with the
PLS_INTEGER data type.

2. Inlining
Procedure inlining is an optimization process that replaces procedure calls with a copy of
the body of the procedure to be called.
The copied procedure almost always runs faster than the original call because the need
to create and initialize the stack frame for the called procedure is eliminated.
The optimization can be applied over the combined text of the call context and the copied
procedure body. Propagation of constant actual arguments often causes the copied body
to collapse under optimization.

When inlining is achieved, you will notice performance gains of two to ten times.
With Oracle Database 11g, the PL/SQL compiler can automatically find calls that should
be inlined and can do that inlining correctly and quickly. There are some controls to
specify where and when the compiler should do this work, using the
PLSQL_OPTIMIZE_LEVEL database parameter, but usually, a general request is
sufficient.
When implementing inlining, it is recommended that the process should be applied to
smaller programs, and programs that execute frequently. For example, you may want to
inline small helper programs.
To help you identify which programs to inline, you can use the plstimer PL/SQL
performance tool. This tool specifically analyzes program performance in terms of time
spent in procedures and time spent from particular call sites. It is important that you
identify the procedure calls that may benefit from inlining.
You can use inlining by setting the PLSQL_OPTIMIZE_LEVEL parameter to 3. When this
parameter is set to 3, the PL/SQL compiler searches for calls that might profit by inlining
and inlines the most profitable calls.
Profitability is measured by those calls that will help the program speed up the most and
keep the compiled object program as short as possible.
You can set the PLSQL_OPTIMIZE_LEVEL parameter using an ALTER SESSION
command.
ALTER SESSION SET plsql_optimize_level = 3;
Another way to use inlining is to use PRAGMA INLINE in your PL/SQL code. This
identifies whether a specific call should be inlined or not.
Setting this pragma to "YES" will have an effect only if the optimize level is set to 2 or
higher.
When a program is noninlined, the a:=a*b assignment at the end of the loop looks like it
could be moved before the loop.
However, it cannot be because a is passed as an IN OUT parameter to the TOUCH
procedure.
The compiler cannot be certain what the procedure does to its parameters. This results in
the multiplication and assignment being completed ten times instead of only once, even
though multiple executions are not necessary.

CREATE OR REPLACE PROCEDURE small_pgm
IS
  a NUMBER;
  b NUMBER;
  PROCEDURE touch(x IN OUT NUMBER, y NUMBER)
  IS
  BEGIN
    IF y > 0 THEN
      x := x*x;
    END IF;
  END;
BEGIN
  a := b;
  FOR i IN 1..10 LOOP
    touch(a, -17);
    a := a*b;
  END LOOP;
END small_pgm;
After inlining, the loop is changed, as indicated in the code.
...
BEGIN
a := b;
FOR i IN 1..10 LOOP
IF -17 > 0 THEN
a := a*a;
END IF;
a := a*b;
END LOOP;
END small_pgm;
...
Because the insides of the procedure are now visible to the compiler, it can transform the
loop in several steps.
Instead of 11 assignments (one outside of the loop and 10 inside it) and 10
multiplications, only one assignment and one multiplication are performed. If the loop
ran a million times instead of 10, the savings would be a million assignments. For code
that contains deep loops that are executed frequently, inlining offers tremendous savings.
a := b;
FOR i IN 1..10 LOOP ...
IF false THEN
a := a*a;

END IF;
a := a*b;
END LOOP;
a := b;
FOR i IN 1..10 LOOP ...
a := a*b;
END LOOP;
a := b;
a := a*b;
FOR i IN 1..10 LOOP ...
END LOOP;

Supplement
Selecting the link title opens the resource in a new browser window.

Launch window
View the complete code of the loop inlining transformation.
To influence the optimizer to use inlining, you can set the PLSQL_OPTIMIZE_LEVEL
parameter to a value of 2 or 3. By setting this parameter, you are making a request that
inlining be used. It is up to the compiler to analyze the code and determine whether
inlining is appropriate.
Setting it to 2 means no automatic inlining is attempted. When the optimize level is set to
3, the PL/SQL compiler searches for calls that might profit by inlining and inlines the most
profitable calls.
Within a PL/SQL subroutine, you can use PRAGMA INLINE to suggest that a specific call
be inlined. When using PRAGMA INLINE, the first argument is the simple name of a
subroutine, a function name, a procedure name, or a method name. The second
argument is either the constant string "NO" or "YES". The pragma can go before any
statement or declaration. If you put it in the wrong place, you receive a syntax error
message from the compiler.
Setting the PRAGMA INLINE to "YES" strongly encourages the compiler to inline the
call. The compiler keeps track of the resources used during inlining and makes the
decision to stop inlining when the cost becomes too high.
CREATE OR REPLACE PROCEDURE small_pgm
IS
  a PLS_INTEGER;
  FUNCTION add_it(a PLS_INTEGER, b PLS_INTEGER)
    RETURN PLS_INTEGER
  IS
  BEGIN
    RETURN a + b;
  END;
BEGIN
  PRAGMA INLINE (add_it, 'YES');
  a := add_it(3, 4) + 6;
END small_pgm;
Setting the PRAGMA INLINE to "NO" always works, regardless of any other pragmas that
might also apply to the same statement. The pragma also applies at all optimization
levels, and it applies no matter how badly the compiler would like to inline a particular call.
To identify that a specific call should not be inlined, you use this code.
PRAGMA INLINE (function_name, 'NO');
Pragmas apply only to calls in the next statement following the pragma. Programs that
make use of smaller helper subroutines are good candidates for inlining.
Only local subroutines can be inlined. You cannot inline an external subroutine and cursor
functions should not be inlined.
Inlining can increase the size of a unit. However, be careful about suggesting to inline
functions that are deterministic.
The compiler inlines code automatically, provided that you are using native compilation
and have set the PLSQL_OPTIMIZE_LEVEL to 3.
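For example, the small_pgm procedure from earlier could be recompiled with these settings and then checked for inlining messages (a sketch; the warning output depends on your code):

```sql
ALTER SESSION SET PLSQL_WARNINGS = 'enable:all';

ALTER PROCEDURE small_pgm COMPILE
  PLSQL_CODE_TYPE      = NATIVE
  PLSQL_OPTIMIZE_LEVEL = 3
  REUSE SETTINGS;

SHOW ERRORS
-- Any PLW-06005 messages name the calls that were inlined
```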
If you have set PLSQL_Warnings = 'enable:all', using the SQL*Plus SHOW
ERRORS command displays the name of the code that is inlined.
The PLW-06004 compiler message tells you that a PRAGMA INLINE(, 'YES')
referring to the named procedure was found. The compiler will, if possible, inline this call.
The PLW-06005 compiler message tells you the name of the code that is inlined.
Alternatively, you can query the USER/ALL/DBA_ERRORS dictionary view.
Deterministic functions compute the same outputs for the same inputs every time they
are invoked, and they have no side effects. In Oracle Database 11g, the PL/SQL compiler can
figure out whether a function is deterministic. It may not find all of the ones that truly are,
but it will find many of them. It will never mistake a nondeterministic function for a
deterministic function.

Question

Identify the methods of using inlining.


Options:
1.

Set the PLSQL_OPTIMIZE_LEVEL parameter to 3

2.

Set the PLSQL_OPTIMIZER_LEVEL parameter to 3

3.

Use PRAGMA INLINE in your PL/SQL code

4.

Use the plstimer PL/SQL performance tool

Answer
To use inlining, you set the PLSQL_OPTIMIZE_LEVEL parameter to 3 or use
PRAGMA INLINE in your PL/SQL code.
Option 1 is correct. One of the methods for using inlining is to set the
PLSQL_OPTIMIZE_LEVEL parameter to 3. When this parameter is set to 3, the
PL/SQL compiler searches for calls that might profit by inlining and inlines the
most profitable calls.
Option 2 is incorrect. The parameter used for inlining is
PLSQL_OPTIMIZE_LEVEL and not PLSQL_OPTIMIZER_LEVEL.
Option 3 is correct. One of the methods for using inlining is to use PRAGMA
INLINE in your PL/SQL code. This identifies whether a specific call should be
inlined or not. Setting this pragma to YES will have an effect only if the optimize
level is set to 2 or higher.
Option 4 is incorrect. Using the plstimer PL/SQL performance tool isn't a
method of using inlining. It is a tool for identifying the procedure calls that might
benefit from inlining.

3. Caching
You can improve the performance of your queries by caching the results of a query in
memory and then using the cached results in future executions of the query or query
fragments.
The cached results reside in the result cache memory portion of the SGA. This feature is
designed to speed up query execution on systems with large memories.
SQL result caching is useful when your queries need to analyze a large number of rows
to return a small number of rows or a single row. Two new optimizer hints are available to
turn on and turn off SQL result caching. These are

/*+ result_cache */

/*+ no_result_cache */
These hints let you override settings of the RESULT_CACHE_MODE initialization
parameter.
You can execute DBMS_RESULT_CACHE.MEMORY_REPORT to produce a memory usage
report of the result cache.
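For example, from SQL*Plus:

```sql
SET SERVEROUTPUT ON
EXECUTE DBMS_RESULT_CACHE.MEMORY_REPORT
```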
Suppose you need to find the greatest average value of credit limit grouped by state over
the whole population.
The query results in a huge number of rows analyzed to yield a few or one row. In your
query, the data changes fairly slowly, say every hour, but the query is repeated fairly
often, say every second.
In this case, you use the new optimizer hint /*+ result_cache */ in your query.
SELECT /*+ result_cache */
AVG(cust_credit_limit), cust_state_province
FROM sh.customers
GROUP BY cust_state_province;
Starting in Oracle Database 11g, you can use the PL/SQL cross-session function result
caching mechanism. This caching mechanism provides you with a language-supported
and system-managed means for storing the results of PL/SQL functions in a shared
global area (SGA), which is available to every session that runs your application.
The caching mechanism is both efficient and easy to use, and it relieves you of the
burden of designing and developing your own caches and cache-management policies.
To enable result caching for a function, use the RESULT_CACHE clause in your PL/SQL
function.
If a result-cached function is called, the system checks the cache.
If the cache contains the result from a previous call to the function with the same
parameter values, the system returns the cached result to the caller and does not
reexecute the function body.
If the cache does not contain the result, the system executes the function body and adds
the result, for these parameter values, to the cache before returning control to the caller.
The cache can accumulate many results: one result for every unique combination of
parameter values with which each result-cached function has been called. If the system
needs more memory, it ages out (deletes) one or more cached results.

You can specify the database objects that are used to compute a cached result, so that if
any of them are updated, the cached result becomes invalid and must be recomputed.
The best candidates for result caching are functions that are called frequently but depend
on information that changes infrequently or never.
Suppose you need a PL/SQL function that derives a complex metric. The data that your
function calculates changes slowly, but the function is frequently called.
In this case, you use the new result_cache clause in your function definition.
CREATE OR REPLACE FUNCTION productName
(prod_id NUMBER, lang_id VARCHAR2)
RETURN NVARCHAR2
RESULT_CACHE RELIES_ON (product_descriptions)
IS
result VARCHAR2(50);
BEGIN
SELECT translated_name INTO result
FROM product_descriptions
WHERE product_id = prod_id AND language_id = lang_id;
RETURN result;
END;
When writing code for the PL/SQL result cache option, you need to

include the RESULT_CACHE option in the function declaration section of a package

include the RESULT_CACHE option in the function definition

optionally include the RELIES_ON clause to specify any tables or views on which the function
results depend
In the example, the productName function has result caching enabled through the
RESULT_CACHE option in the function declaration.
In this example, the RELIES_ON clause is used to identify the PRODUCT_DESCRIPTIONS
table on which the function results depend.
CREATE OR REPLACE FUNCTION productName
(prod_id NUMBER, lang_id VARCHAR2)
RETURN NVARCHAR2
RESULT_CACHE RELIES_ON (product_descriptions)
IS
result VARCHAR2(50);
BEGIN
SELECT translated_name INTO result
FROM product_descriptions

WHERE product_id = prod_id AND language_id = lang_id;


RETURN result;
END;
Finally, you execute the function.
You can also run the DBMS_RESULT_CACHE.MEMORY_REPORT to view the result cache
memory results.
EXECUTE dbms_output.put_line(productname(1792, 'US'))
Industrial 600/DQ

Question
Which statements accurately describe SQL result caching?
Options:
1.

It is enabled for a function using the CACHE_RESULT clause in the PL/SQL function

2.

It is useful when your queries need to analyze a large number of rows to return a
small number of rows or a single row

3.

It includes two new optimizer hints for turning SQL result caching on and off

4.

It requires you to design and develop your own cache-management policies

Answer

SQL result caching is useful when your queries need to analyze a large number of
rows to return a small number of rows or a single row. In addition, two new
optimizer hints are available to turn on and off SQL result caching.
Option 1 is incorrect. To enable result caching for a function, you use the
RESULT_CACHE clause in your PL/SQL code.
Option 2 is correct. SQL result caching is useful when your queries need to
analyze a large number of rows to return a small number of rows or a single row.
You can improve the performance of your queries by caching the results in
memory and then using the cached results in future executions of the query or
query fragments.
Option 3 is correct. Two new optimizer hints are available to turn SQL result
caching on and off, result_cache, and no_result_cache. These hints let you
override settings of the RESULT_CACHE_MODE initialization parameter.

Option 4 is incorrect. Result caching is efficient and easy to use, and it relieves
you of the burden of designing and developing your own caches and cache-management policies.

4. Flashback Data Archives


With Flashback Data Archives, you have the ability to track and store all transactional
changes to a table over its lifetime. With this feature, you save development resources
because you no longer need to manually build this functionality into your applications.
Furthermore, Flashback Data Archive is useful for compliance with record retention
policies and audit reports.
The process of creating a Flashback Data Archive comprises four steps. They are

create the Flashback Data Archive

specify the default Flashback Data Archive

enable the Flashback Data Archive

view Flashback Data Archive data


create the Flashback Data Archive
The first step is to create a Flashback Data Archive. A Flashback Data Archive consists of
one or more tablespaces. You can have multiple Flashback Data Archives.
specify the default Flashback Data Archive
In the second step, you can specify a default Flashback Data Archive for the system. A
Flashback Data Archive is configured with retention time. Data archived in the Flashback
Data Archive is retained for the retention time.
enable the Flashback Data Archive
In the third step, you can enable flashback archiving and then disable it again for a
table. Although flashback archiving is enabled for a table, some DDL statements are not
allowed on that table. By default, flashback archiving is off for any table.
view Flashback Data Archive data
In the final step, you can examine the Flashback Data Archives. There are static data
dictionary views that you can query for information about Flashback Data Archives.
You create a Flashback Data Archive with the CREATE FLASHBACK ARCHIVE
statement.
You can optionally specify whether this is the default Flashback Data Archive for the
system. If you omit this option, you can still make this Flashback Data Archive the default
later.

You must provide the name of the Flashback Data Archive. You must also provide the
name of the first tablespace of the Flashback Data Archive.
You can optionally specify the maximum amount of space that the Flashback Data
Archive can use in the first tablespace. The default is unlimited.
Unless your space quota on the first tablespace is also unlimited, you must specify this
value; otherwise, you receive the ORA-55621 error.
You must provide the retention time: the number of days that Flashback Data Archive
data for the table is guaranteed to be stored.
In this example of using Flashback Data Archive to access historical data, a default
Flashback Data Archive named fla1 is created that uses up to 10 GB of the tbs1
tablespace, and whose data will be retained for five years.
CONNECT sys/oracle@orcl AS sysdba
-- create the Flashback Data Archive
CREATE FLASHBACK ARCHIVE DEFAULT fla1
TABLESPACE tbs1 QUOTA 10G RETENTION 5 YEAR;
Next you specify the default Flashback Data Archive. By default, the system has no
Flashback Data Archive.
You can set it by specifying the name of an existing Flashback Data Archive in the SET
DEFAULT clause of the ALTER FLASHBACK ARCHIVE statement.
Alternatively, you can include DEFAULT in the CREATE FLASHBACK ARCHIVE statement
when you create a Flashback Data Archive.
-- Specify the default Flashback Data Archive
ALTER FLASHBACK ARCHIVE fla1 SET DEFAULT;
Next the Flashback Data Archive is enabled. By default, flashback archiving is disabled.
At any time, you can enable flashback archiving for a table.
-- Enable Flashback Data Archive
ALTER TABLE oe1.inventories FLASHBACK ARCHIVE;
ALTER TABLE oe1.warehouses FLASHBACK ARCHIVE;

Note
If Automatic Undo Management is disabled, you receive the ORA-55614 error
when you try to modify the table.

To enable flashback archiving for a table, include the FLASHBACK ARCHIVE clause in
either the CREATE TABLE or ALTER TABLE statement.
In the FLASHBACK ARCHIVE clause, you can specify the Flashback Data Archive where
the historical data for the table will be stored. The default is the default Flashback Data
Archive for the system. If a table already has flashback archiving enabled, and you try to
enable it again with a different Flashback Data Archive, an error occurs.
To disable flashback archiving for a table, specify NO FLASHBACK ARCHIVE in the
ALTER TABLE statement.
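Both forms can be sketched as follows (fda_demo is a hypothetical table; fla1 is the archive created earlier):

```sql
-- Enable archiving at creation time, naming the archive explicitly
CREATE TABLE fda_demo (id NUMBER)
  FLASHBACK ARCHIVE fla1;

-- Disable flashback archiving for the table
ALTER TABLE fda_demo NO FLASHBACK ARCHIVE;

-- Re-enable it later, this time using the system default archive
ALTER TABLE fda_demo FLASHBACK ARCHIVE;
```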
For example, you can use Flashback Data Archive to access historical data, and retrieve
the inventories of product 3108.
The initial values from the first statement are displayed.
SELECT product_id, warehouse_id, quantity_on_hand
FROM oe1.inventories
WHERE product_id = 3108;
PRODUCT_ID WAREHOUSE_ID QUANTITY_ON_HAND
---------- ------------ ----------------
      3108            8              122
      3108            9              110
      3108            2              194
      3108            4              170
      3108            6              146
Next you change the data to update the QUANTITY_ON_HAND values for product 3108 to
300.
UPDATE oe1.inventories
SET quantity_on_hand = 300
WHERE product_id = 3108;
The QUANTITY_ON_HAND values for product 3108 are updated to 300.
SELECT product_id, warehouse_id, quantity_on_hand
FROM oe1.inventories
WHERE product_id = 3108;
PRODUCT_ID WAREHOUSE_ID QUANTITY_ON_HAND
---------- ------------ ----------------
      3108            8              300
      3108            9              300
      3108            2              300
      3108            4              300
      3108            6              300
After the update occurs, you can still view the flashback data with this query.
These are the values retrieved for the inventories of product 3108 as of June 26, 2007.
This date is before the update statement was executed.
SELECT product_id, warehouse_id, quantity_on_hand
FROM oe1.inventories AS OF TIMESTAMP TO_TIMESTAMP
('2007-06-26 00:00:00', 'YYYY-MM-DD HH24:MI:SS')
WHERE product_id = 3108;
PRODUCT_ID WAREHOUSE_ID QUANTITY_ON_HAND
---------- ------------ ----------------
      3108            8              122
      3108            9              110
      3108            2              194
      3108            4              170
      3108            6              146
You can view information about your flashback archives from the dictionary views. These dictionary views are
*_FLASHBACK_ARCHIVE
*_FLASHBACK_ARCHIVE_TS
*_FLASHBACK_ARCHIVE_TABLES
*_FLASHBACK_ARCHIVE
The *_FLASHBACK_ARCHIVE view displays information about Flashback Data Archives.
*_FLASHBACK_ARCHIVE_TS
The *_FLASHBACK_ARCHIVE_TS view displays tablespaces of Flashback Data Archives.
*_FLASHBACK_ARCHIVE_TABLES
The *_FLASHBACK_ARCHIVE_TABLES view displays information about tables that are
enabled for flashback archiving.
This example provides information about tables that are enabled for flashback archiving
using the *_FLASHBACK_ARCHIVE_TABLES view.
DESCRIBE dba_flashback_archive_tables
Name                                Null?    Type
----------------------------------- -------- ---------------
TABLE_NAME                          NOT NULL VARCHAR2(30)
OWNER_NAME                          NOT NULL VARCHAR2(30)
FLASHBACK_ARCHIVE_NAME              NOT NULL VARCHAR2(255)
ARCHIVE_TABLE_NAME                           VARCHAR2(53)

SELECT * FROM dba_flashback_archive_tables;

TABLE_NAME    OWNER_NAME   FLASHBACK_ARCHIVE_NAME ARCHIVE_TABLE_NAME
------------- ------------ ---------------------- ------------------
INVENTORIES   OE           FLA1                   SYS_FBA_HIST_70355
WAREHOUSES    OE           FLA1                   SYS_FBA_HIST_70336

This example provides Flashback dictionary information using the
*_FLASHBACK_ARCHIVE view.
DESCRIBE dba_flashback_archive
Name                         Null?    Type
---------------------------- -------- ------------------
FLASHBACK_ARCHIVE_NAME       NOT NULL VARCHAR2(255)
FLASHBACK_ARCHIVE#           NOT NULL NUMBER
RETENTION_IN_DAYS            NOT NULL NUMBER
CREATE_TIME                           TIMESTAMP(9)
LAST_PURGE_TIME                       TIMESTAMP(9)
STATUS                                VARCHAR2(7)

SELECT * FROM dba_flashback_archive;

FLASHBACK_ARCHIVE_NA FLASHBACK_ARCHIVE# RETENTION_IN_DAYS
-------------------- ------------------ -----------------
CREATE_TIME
--------------------------------------------------------
LAST_PURGE_TIME                   STATUS
--------------------------------- ------
FLA1                                  1              1825
27-JUN-07 09.29.23.000000000 PM
You can use the DBMS_FLASHBACK.ENABLE and DBMS_FLASHBACK.DISABLE
procedures to enable and disable the Flashback Data Archives, respectively.
For convenience, you can use Flashback Query, Flashback Version Query, or Flashback
Transaction Query directly in the SQL code that you write.
To obtain an SCN to use later with a flashback feature, you can use the
DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER function.
To compute or retrieve a past time to use in a query, use a function return value as a time
stamp or SCN argument. For example, add or subtract an INTERVAL value to the value of
the SYSTIMESTAMP function.
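A minimal sketch of both techniques, assuming the oe1.inventories table from the earlier examples:

```sql
-- Capture the current SCN for later use with a flashback feature
SELECT DBMS_FLASHBACK.GET_SYSTEM_CHANGE_NUMBER FROM DUAL;

-- Compute a past time by subtracting an INTERVAL from SYSTIMESTAMP
SELECT product_id, quantity_on_hand
FROM oe1.inventories
  AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '10' MINUTE)
WHERE product_id = 3108;
```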

To ensure database consistency, you always perform a COMMIT or ROLLBACK operation


before querying past data.
All flashback processing uses the current session settings, such as national language and
character set, not the settings that were in effect at the time being queried.
To query past data at a precise time, you use an SCN. If you use a time stamp, the actual
time queried might be up to 3 seconds earlier than the time you specify. Oracle Database
uses SCNs internally and maps them to time stamps at a granularity of 3 seconds.
You cannot retrieve past data from a dynamic performance (V$) view. A query on such a
view always returns current data. However, you can perform queries on past data in static
data dictionary views, such as *_TABLES.
Using the ALTER TABLE DDL statement on a table enabled for Flashback Data Archive
can cause the error ORA-55610: Invalid DDL statement on history-tracked
table.
The error is raised if the DDL statement
drops, renames, or modifies a column
performs partition or subpartition operations
converts a LONG column to a LOB column
includes an UPGRADE TABLE clause, with or without an INCLUDING DATA clause
The DDL statements run on a table enabled for Flashback Data Archive that also cause
the error ORA-55610: Invalid DDL statement on history-tracked table are
the DROP TABLE statement
the RENAME TABLE statement
the TRUNCATE TABLE statement

Question
What data dictionary view enables you to view information about tables that are
enabled for flashback archiving?
Options:
1. ALL/USER/DBA_FLASHBACK_ARCHIVE
2. ALL/USER/DBA_FLASHBACK_ARCHIVE_TABLES
3. ALL/USER/DBA_FLASHBACK_ARCHIVE_TS
4. ALL/USER/DBA_FLASHBACK_ARCHIVES

Answer
The ALL/USER/DBA_FLASHBACK_ARCHIVE_TABLES data dictionary view
enables you to view information about tables that are enabled for flashback
archiving.
Option 1 is incorrect. The ALL/USER/DBA_FLASHBACK_ARCHIVE data dictionary
view displays information about Flashback Data Archives.
Option 2 is correct. The ALL/USER/DBA_FLASHBACK_ARCHIVE_TABLES data
dictionary view displays information about tables that are enabled for flashback
archiving. This view contains TABLE_NAME, OWNER_NAME,
FLASHBACK_ARCHIVE_NAME, and ARCHIVE_TABLE_NAME columns.
Option 3 is incorrect. The ALL/USER/DBA_FLASHBACK_ARCHIVE_TS data
dictionary view displays the tablespaces of Flashback Data Archives.
Option 4 is incorrect. This is not a valid data dictionary view. You would query
ALL/USER/DBA_FLASHBACK_ARCHIVE for information about Flashback Data
Archives.

Summary
In Oracle Database 11g, PL/SQL source is compiled directly to a dynamic link library (DLL
). The PL/SQL native compilation works out of the box, without requiring a C compiler on
a production box. This makes real native compilation faster than C native compilation.
Inlining is the process of replacing a call to a subroutine with a copy of the body of the
subroutine that is called. The copied procedure generally runs faster than the original and
can provide performance gains of up to ten times.
You can also use SQL query result caching and PL/SQL function result caching to
improve performance.
Flashback Data Archives provide the ability to store and track all transactional changes to
a record. This feature enables you to save development resources because you no
longer need to build this intelligence into your application.

Implementing Improved Performance


Learning objective

After completing this topic, you should be able to examine and then improve performance
in Oracle Database 11g.

Exercise overview
In this exercise, you're required to identify the correct code that uses various Oracle
Database 11g performance enhancements to examine the performance of a data type,
and examine SQL and PL/SQL result caching and inlining.
This involves the following tasks:

examining performance

using result caching and inlining


Suppose you've recently upgraded to Oracle Database 11g, and you want to examine the
performance of a data type. You also want to improve performance using result caching
and inlining.

Task 1: Examining performance


You want to use the new Oracle Database 11g performance improvements to examine
the performance of the SIMPLE_INTEGER data type.

Step 1 of 3
What statement queries the V$PARAMETER view and returns the name and value
for the PLSQL_CODE_TYPE parameter?
Options:
1. SELECT name
   FROM v$parameter
   WHERE name like 'plsql%';
2. SELECT name, value
   FROM v$parameter
   WHERE name like 'plsql%';
3. SELECT name, value
   FROM v$parameter
   WHERE name like plsql%;
4. SELECT name, value
   FROM vparameter
   WHERE name like 'plsql%';

Result
The statement, SELECT name, value FROM v$parameter WHERE name
like 'plsql%'; queries the V$PARAMETER view and returns the name and
value for the PLSQL_CODE_TYPE parameter.

Option 1 is incorrect. This statement will return the name of all parameters from
the V$PARAMETER view that begin with the string "plsql". However, it will not
return the values associated with these parameters.
Option 2 is correct. This statement will return two columns. The first column will
contain the name of any parameter that begins with the string "plsql". The
second column will contain the values associated with the parameters returned in
the first column.
Option 3 is incorrect. To successfully return a name from the V$PARAMETER view
beginning with the string "plsql", the string in the WHERE clause should be
enclosed in single quotes.
Option 4 is incorrect. This statement will result in a "table or view does not exist
error" because it queried the VPARAMETER view instead of the V$PARAMETER
view.
Next you want to store the current CPU time, in hundredths of a second, in the t_end
variable.
You have already written part of the code.
CREATE OR REPLACE PROCEDURE test_simple_integer IS
  sim_counter SIMPLE_INTEGER := 0;
  t_start     SIMPLE_INTEGER := 0;
  t_end       SIMPLE_INTEGER := 0;
  t_max       SIMPLE_INTEGER := 10000000;
BEGIN
  t_start := DBMS_UTILITY.GET_CPU_TIME();
  WHILE sim_counter < t_max LOOP
    sim_counter := sim_counter + 1;
  END LOOP;
  <required code>
  DBMS_OUTPUT.PUT_LINE((t_end - t_start) || ' centiseconds with Simple counter');
END test_simple_integer;
/

Step 2 of 3
What line of code should be used instead of the line <required code> inside
the executable section to return the required results?
Options:
1. t_end = DBMS_UTILITY.GET_CPU_TIME();
2. t_end := DBMS_UTILITY.GET_CPU_TIME();
3. t_end := DBMS_UTILITY_GET_CPU_TIME();
4. t_end := GET_CPU_TIME();

Result
To store the current CPU time, in hundredths of a second, in the t_end variable, you
should replace the line <required code> inside the executable section with the
t_end := DBMS_UTILITY.GET_CPU_TIME(); code segment.
Option 1 is incorrect. To assign the value returned by
DBMS_UTILITY.GET_CPU_TIME() to the t_end variable, the assignment
operator ":=" is required.
Option 2 is correct. The DBMS_UTILITY package contains a number of utility-related
subprograms. The GET_CPU_TIME function returns the current CPU time in hundredths
of a second.
Option 3 is incorrect. This line of code would result in an error as the package is
named DBMS_UTILITY and the function is named GET_CPU_TIME. The package
name and function name must be separated by a period.
Option 4 is incorrect. To call the GET_CPU_TIME function, you must include the
name of the package to which the function belongs. In this case, the package is
called DBMS_UTILITY.
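Putting the Step 2 answer in place, the completed procedure would look like the following sketch. The timing output varies by machine, so no result is shown.

```sql
CREATE OR REPLACE PROCEDURE test_simple_integer IS
  sim_counter SIMPLE_INTEGER := 0;
  t_start     SIMPLE_INTEGER := 0;
  t_end       SIMPLE_INTEGER := 0;
  t_max       SIMPLE_INTEGER := 10000000;
BEGIN
  t_start := DBMS_UTILITY.GET_CPU_TIME();
  WHILE sim_counter < t_max LOOP
    sim_counter := sim_counter + 1;
  END LOOP;
  t_end := DBMS_UTILITY.GET_CPU_TIME();  -- answer from Step 2
  DBMS_OUTPUT.PUT_LINE((t_end - t_start) || ' centiseconds with Simple counter');
END test_simple_integer;
/
```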

Step 3 of 3
What statement alters only a session to use native compilation?
Options:
1. ALTER SESSION SET PLSQL_CODE_TYPE = 'NATIVE';
2. ALTER SESSION SET PLSQL_CODE_TYPE_NATIVE;
3. ALTER SESSION SET PLSQL_TYPE = 'NATIVE';
4. ALTER SYSTEM SET PLSQL_CODE_TYPE = 'NATIVE';

Result
The statement ALTER SESSION SET PLSQL_CODE_TYPE = 'NATIVE'; alters
only a session to use native compilation.
Option 1 is correct. This statement will alter a session so that native compilation is
used. With PL/SQL native compilation, PL/SQL statements in a PL/SQL program
unit are compiled into native code and stored in the SYSTEM tablespace.

Option 2 is incorrect. PLSQL_CODE_TYPE_NATIVE is not a possible value to use


with an ALTER SESSION SET statement.
Option 3 is incorrect. The statement must set the value of the PLSQL_CODE_TYPE
parameter to native, and not the PLSQL_TYPE parameter.
Option 4 is incorrect. This statement enables native compilation at the system
level and not at the session level.

Task 2: Using result caching and inlining


You want to improve performance by enabling SQL result caching in a query. You also
want to modify an existing function to enable PL/SQL result caching and explore inlining.
First you want to enable SQL result caching in a query that retrieves data from the
INVENTORIES and PRODUCT_INFORMATION tables.
You have already written part of the code.
SELECT <required code> count(*),
round(avg(quantity_on_hand)) AVG_AMT,
product_id, product_name
FROM inventories natural join product_information
GROUP BY product_id, product_name;

Step 1 of 3
What should replace <required code> to achieve the desired results?

SELECT <required code> count(*),
       round(avg(quantity_on_hand)) AVG_AMT,
       product_id, product_name
FROM inventories natural join product_information
GROUP BY product_id, product_name;
Options:
1. /*+ result_cache */
2. /*+ result cache */
3. /*+ result_cache_on */
4. /*+ result cache on */

Result

To enable SQL result caching in a query that retrieves data from the
INVENTORIES and PRODUCT_INFORMATION tables, you should replace the line
<required code> inside the executable section with the /*+ result_cache
*/ code segment.
Option 1 is correct. You use the new optimizer hint /*+ result_cache */ to
turn on SQL result caching in a query.
Option 2 is incorrect. The correct optimizer hint to enable result caching is /*+
result_cache */ and not /*+ result cache */.
Option 3 is incorrect. To enable result caching, you use the optimizer hint /*+
result_cache */. Result_cache_on is not a valid optimizer hint.
Option 4 is incorrect. To enable result caching, you use the optimizer hint /*+
result_cache */, and not /*+ result cache on */.
You notice that the GET_WAREHOUSE_NAMES function is called frequently and the content
of the data returned does not frequently change. You decide that this code would benefit
from enabling PL/SQL result caching.
You have already written part of the code.
CREATE OR REPLACE TYPE list_typ IS TABLE OF VARCHAR2(35);
/
CREATE OR REPLACE FUNCTION get_warehouse_names
  RETURN list_typ
  <required code>
IS
  v_count    BINARY_INTEGER;
  v_wh_names list_typ;
BEGIN
  SELECT count(*)
  INTO v_count
  FROM warehouses;
  FOR i IN 1..v_count LOOP
    SELECT warehouse_name
    INTO v_wh_names(i)
    FROM warehouses;
  END LOOP;
  RETURN v_wh_names;
END get_warehouse_names;
/

Step 2 of 3

What statement should replace the line <required code> to turn on PL/SQL
result caching?

CREATE OR REPLACE TYPE list_typ IS TABLE OF VARCHAR2(35);
/
CREATE OR REPLACE FUNCTION get_warehouse_names
  RETURN list_typ
  <required code>
IS
  v_count    BINARY_INTEGER;
  v_wh_names list_typ;
BEGIN
  SELECT count(*)
  INTO v_count
  FROM warehouses;
  FOR i IN 1..v_count LOOP
    SELECT warehouse_name
    INTO v_wh_names(i)
    FROM warehouses;
  END LOOP;
  RETURN v_wh_names;
END get_warehouse_names;
/
Options:
1. RESULT_CACHE RELIES_ON
2. RESULT CACHE RELIES ON (warehouses)
3. RESULT_CACHE RELIES_ON warehouses
4. RESULT_CACHE RELIES_ON (warehouses)

Result
To turn on PL/SQL result caching in a query that retrieves data from the
GET_WAREHOUSE_NAMES function, you should replace the line <required
code> inside the executable section with the RESULT_CACHE RELIES_ON
(warehouses) code segment.
Option 1 is incorrect. When including the RELIES_ON clause of RESULT_CACHE,
you need to specify the table or view on which the results of the function depend.
Option 2 is incorrect. To enable PL/SQL result caching for a function, you include
the RESULT_CACHE option in the function definition. The underscore between
RESULT and CACHE is required.
Option 3 is incorrect. The RELIES_ON clause of RESULT_CACHE requires that the
table or view on which the function results depend be included inside parentheses.
Option 4 is correct. To enable result caching for a function, you use the
RESULT_CACHE clause in your PL/SQL function. The RELIES_ON clause is
optional and is used to specify any tables or views on which the function results
depend.
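With the Step 2 answer in place, the function header would read as in this fragment; the rest of the function body is unchanged from the exercise code.

```sql
CREATE OR REPLACE FUNCTION get_warehouse_names
  RETURN list_typ
  RESULT_CACHE RELIES_ON (warehouses)  -- answer from Step 2
IS
  -- declarations and body as shown in the exercise
```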

Step 3 of 3
Which statements accurately describe inlining?
Options:
1. You can control inlining by setting PLSQL_OPTIMIZE_LEVEL and PRAGMA INLINE
2. You can influence inlining
3. You cannot control inlining by setting PLSQL_OPTIMIZE_LEVEL and PRAGMA INLINE
4. You cannot influence inlining

Result
You can influence inlining, but you cannot control inlining by setting
PLSQL_OPTIMIZE_LEVEL and PRAGMA INLINE.
Option 1 is incorrect. You can only influence inlining by using the
PLSQL_OPTIMIZE_LEVEL parameter or PRAGMA INLINE. The compiler makes
the final decisions on inlining.
Option 2 is correct. You can influence inlining by setting
PLSQL_OPTIMIZE_LEVEL or by using PRAGMA INLINE. However, this is not a
guarantee that inlining will be used. The compiler makes its decision based on the
algorithms applied to the code.
Option 3 is correct. You can request that inlining should be used with the
PLSQL_OPTIMIZE_LEVEL parameter or PRAGMA INLINE. But, it is up to the
compiler to analyze the code and determine whether inlining is appropriate.
Option 4 is incorrect. By setting the PLSQL_OPTIMIZE_LEVEL parameter to 3 or
by using PRAGMA INLINE in your PL/SQL code, you can make use of inlining.
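A minimal sketch of both mechanisms, assuming a hypothetical procedure p1 that already exists in the schema:

```sql
-- Request aggressive optimization, which permits automatic inlining
ALTER SESSION SET PLSQL_OPTIMIZE_LEVEL = 3;

CREATE OR REPLACE PROCEDURE calling_proc IS
  v NUMBER := 0;
BEGIN
  FOR i IN 1..100 LOOP
    -- Ask the compiler to inline the next call to p1; it may still decline
    PRAGMA INLINE (p1, 'YES');
    p1(v);
  END LOOP;
END calling_proc;
/
```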
You have successfully examined performance of a data type in Oracle Database 11g.
You've improved performance by enabling SQL result caching in a query and modified an
existing function to enable PL/SQL caching. You've also explored inlining.

Using the PL/SQL Debugger in SQL Developer


Learning objective

After completing this topic, you should be able to identify the steps for using the PL/SQL
Debugger tool in SQL Developer.

1. Overview of the SQL Developer Debugger


Disclaimer
Although certain aspects of the Oracle 11g Database are case and spacing insensitive, a
common coding convention has been used throughout all aspects of this course.
This convention uses lowercase characters for schema, role, user, and constraint names,
and for permissions, synonyms, and table names (with the exception of the DUAL table).
Lowercase characters are also used for column names and user-defined procedure,
function, and variable names shown in code.
Uppercase characters are used for Oracle keywords and functions, for view, table,
schema, and column names shown in text, for column aliases that are not shown in
quotes, for packages, and for data dictionary views.
The spacing convention requires one space after a comma and one space before and
after operators that are not Oracle-specific, such as +, -, /, and <. There should be no
space between an Oracle-specific keyword or operator and an opening bracket, between
a closing bracket and a comma, between the last part of a statement and the closing
semicolon, or before a statement.
String literals in single quotes are an exception to all of the convention rules provided
here. Please use this convention for all interactive parts of this course.
End of Disclaimer
The PL/SQL Debugger is a powerful debugging tool that enables you to step through your
code line by line and analyze the contents of variables, arguments, loops, and branch
statements.
create or replace PROCEDURE emp_list2
( pMaxRows IN NUMBER
) AS
  CURSOR emp_cursor IS
    SELECT d.department_name, e.employee_id, e.last_name,
           e.salary, e.commission_pct
    FROM departments d, employees e
    WHERE d.department_id = e.department_id;
  emp_record emp_cursor%ROWTYPE;
  TYPE emp_tab_type IS TABLE OF emp_cursor%ROWTYPE
    INDEX BY BINARY_INTEGER;
  emp_tab emp_tab_type;
  i NUMBER := 1;
  v_city VARCHAR2(30);
BEGIN
The debugger enables you to control the execution of your program. You can control
whether your program executes a single line of code, an entire subprogram which may
be a procedure or function or an entire program block.
By manually controlling when the program should run and when it should pause, you can
quickly move over the sections that you know work correctly and concentrate on the
sections that are causing problems.
To debug a PL/SQL subprogram, a security administrator needs to grant these privileges
to the application developer:
DEBUG ANY PROCEDURE
DEBUG CONNECT SESSION
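A security administrator could grant these privileges with statements such as the following, where dev_user is a hypothetical developer account:

```sql
-- Grant the privileges required to debug PL/SQL subprograms
GRANT DEBUG ANY PROCEDURE TO dev_user;
GRANT DEBUG CONNECT SESSION TO dev_user;
```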
There are seven steps to debug a subprogram.
In step 1, you start the Edit procedure.
In step 2, you add breakpoints.
In step 3, you compile for debug.
In step 4, you debug.
In step 5, you enter a parameter value or values.
DECLARE
PMAXROWS NUMBER;
BEGIN
PMAXROWS := NULL;
EMP_LIST2(
PMAXROWS => 2
);
In step 6, you choose a debugging tool.

In step 7, you monitor the data.


FETCH emp_cursor INTO emp_record;
emp_tab(i) := emp_record;
v_city := get_location(emp_record.department_name);
dbms_output.put_line('Employee ' ||
  emp_record.last_name || ' works in ' || v_city );
END LOOP;
CLOSE emp_cursor;
FOR j IN REVERSE 1..i LOOP
  DBMS_OUTPUT.PUT_LINE(emp_tab(j).last_name);
The procedure or function Code tabbed page displays a toolbar and the text of the
subprogram, which you can edit by clicking the Edit icon on the toolbar.
Once you are in edit mode, you can set and unset breakpoints for debugging by clicking
to the left of the thin vertical line beside each statement with which you want to associate
a breakpoint. When a breakpoint is set, a red circle replaces the code row number.
The commands you access on the toolbar of the procedure or function tabbed page are
Run
Debug
Compile
Compile for Debug

Run
The Run command starts a normal execution of the function or procedure, and displays
the results on the Running - Log tabbed page.
Debug
The Debug command starts execution of the subprogram in debug mode, and displays the
Debugging - Log tabbed page, which includes the debugging toolbar for controlling the
execution.
Compile
The Compile command performs a PL/SQL compilation of the subprogram.
Compile for Debug
The Compile for Debug command performs a PL/SQL compilation so that it can be
debugged.
The commands on the toolbar of the Debugging - Log tabbed page are
Find Execution Point
Resume
Step Over
Step Into
Step Out
Step to End of Method
Pause
Terminate
Find Execution Point
The Find Execution Point command navigates to the next execution point.
Resume
The Resume command continues execution.
Step Over
The Step Over command bypasses the next subprogram and goes to the next statement
after the subprogram, provided that the subprogram does not have any breakpoint
elsewhere.
Step Into
The Step Into command executes a single program statement at a time. If the execution
point is located on a call to a subprogram, the Step Into command steps into that
subprogram and places the execution point on the subprogram's first statement.
Step Out
The Step Out command leaves the current subprogram and goes to the next statement
with a breakpoint.
Step to End of Method
The Step to End of Method command goes to the last statement in the current
subprogram, or to the next breakpoint if there are any in the current procedure.
Pause
The Pause command stops execution but does not exit.
Terminate
The Terminate command stops and exits the execution.
The debugging tabbed pages are
Data
Watches
Data
The Data tabbed page is located under the code text area and displays information about
all variables.
Watches
The Watches tabbed page is located under the code text area and displays information
about watchpoints you have entered.
If you cannot see some of the debugging tabs, you can redisplay such tabs using the
View - Debugger menu option.

Question
What PL/SQL Debugger command halts program execution but does not exit the
debugger?
Options:
1. Pause
2. Resume
3. Step to End of Method
4. Terminate

Answer
The Pause PL/SQL Debugger command halts program execution without exiting
the debugger.
Option 1 is correct. The Pause command of the PL/SQL Debugger is used to halt
the execution of the debugger without exiting the debugger.
Option 2 is incorrect. The Resume command of the PL/SQL Debugger is used to
resume execution of the debugger once it has been paused or stopped.
Option 3 is incorrect. The Step to End of Method command of the PL/SQL
Debugger goes to the last statement in the current subprogram or to the next
breakpoint if there are any in the current subprogram.
Option 4 is incorrect. The Terminate command of the PL/SQL Debugger is used to
halt execution and exit the debugger.

Question
What privileges must an application developer have to be able to debug a PL/SQL
subprogram?
Options:
1. DEBUG ALL PROCEDURES
2. DEBUG ANY PROCEDURE
3. DEBUG CONNECT SESSION
4. DEBUG SESSION CONNECT

Answer
An application developer must have DEBUG ANY PROCEDURE and DEBUG
CONNECT SESSION privileges to be able to debug a PL/SQL subprogram.
Option 1 is incorrect. The correct privilege required to debug a PL/SQL
subprogram is DEBUG ANY PROCEDURE and not DEBUG ALL PROCEDURES.
The DEBUG CONNECT SESSION privilege is also required.
Option 2 is correct. The DEBUG ANY PROCEDURE privilege is required to debug
a PL/SQL subprogram. This privilege is the equivalent to the DEBUG privilege
granted on all objects in the database.
Option 3 is correct. The DEBUG CONNECT SESSION privilege is required to
debug a PL/SQL subprogram. When the debugger becomes connected to a
session, the session login user and the currently enabled session-level roles are
fixed as the privilege environment for that debugging connection.
Option 4 is incorrect. The correct privilege required to debug a PL/SQL
subprogram is DEBUG CONNECT SESSION and not DEBUG SESSION
CONNECT. The DEBUG ANY PROCEDURE privilege is also required.

2. Debugging a procedure: An example


The emp_list2 procedure gathers employee information, such as the employee's
department name, ID, name, salary, and commission percent. The procedure creates a
record to store the employee's information. The procedure also creates a table that can
hold multiple records of employees. "i" is a counter.
The code opens the cursor and fetches the employee's records. The code also checks
whether or not there are more records to fetch, or whether the number of records fetched
so far is less than the number of records that you specify. The code eventually prints out
the employee's information.
You need to make sure that you are displaying the procedure code in edit mode. To edit
the procedure code, you click the Edit icon on the procedure's toolbar.
CREATE or REPLACE PROCEDURE emp_list2
( pmaxrows IN NUMBER
) AS
  CURSOR emp_cursor IS
    SELECT d.department_name, e.employee_id, e.last_name,
           e.salary, e.commission_pct
    FROM departments d, employees e
    WHERE d.department_id = e.department_id;
  emp_record emp_cursor%ROWTYPE;
  TYPE emp_tab_type IS TABLE OF emp_cursor%ROWTYPE
    INDEX BY binary_integer;
  emp_tab emp_tab_type;

Supplement
Selecting the link title opens the resource in a new browser window.

Launch window
View the code used to create a new emp_list2 procedure.
The emp_list2 procedure calls the get_location function, which returns the name of
the city in which an employee works.
For debugging purposes, you can set breakpoints in the procedure emp_list2. Here
emp_list2 is displayed in edit mode, and three breakpoints have been added.
To compile the emp_list2 procedure for debugging, you right-click the code, and then
select Compile for Debug from the shortcut menu.
Once completed, the Messages tabbed page displays the message that the procedure
was compiled.
You now compile the get_location function for debug mode. You do this by displaying
the function in edit mode.
To compile the function for debugging, you right-click the code, and then select Compile
for Debug from the shortcut menu.
The Messages tabbed page displays the message that the function was compiled.
Next you want to debug the emp_list2 procedure. You click the Debug icon on the
procedure's toolbar.
An anonymous PL/SQL block displays in the Debug PL/SQL dialog box, and you are
prompted to enter the parameters for the procedure. The procedure emp_list2 has one
parameter pMaxRows which specifies the number of records to return. You replace
the second pMaxRows with a number such as 2, and then you click OK.
DECLARE
PMAXROWS NUMBER;
BEGIN
PMAXROWS := NULL;

EMP_LIST2(
PMAXROWS => 2
);
END;
The debugging program stops at the first breakpoint.
  i NUMBER := 1;
  v_city VARCHAR2(30);
BEGIN
  OPEN emp_cursor;
  FETCH emp_cursor INTO emp_record;
  emp_tab(i) := emp_record;
  WHILE (emp_cursor%FOUND) AND (i <= pMaxRows) LOOP
    i := i + 1;
    FETCH emp_cursor INTO emp_record;
    emp_tab(i) := emp_record;
    v_city := get_location(emp_record.department_name);
    dbms_output.put_line('Employee ' ||
      emp_record.last_name || ' works in ' || v_city );
  END LOOP;
  CLOSE emp_cursor;
  FOR j IN REVERSE 1..i LOOP
The Debugging - Log tabbed page opens.
Finished processing prepared classes.
Source breakpoint occurred at line 16 of EMP_LIST2.pls.
The Step Into command executes a single program statement at a time. If the execution
point is located on a call to a subprogram, the Step Into command steps into that
subprogram and places the execution point on the subprogram's first statement. If the
execution point is located on the last statement of a subprogram, choosing Step Into
causes the debugger to return from the subprogram, placing the execution point on the
line of code that follows the call to the subprogram you are returning from.
The term single stepping refers to using Step Into to run successively through the
statements in your program code.
There is more than one way to step into a subprogram. You can
select Debug - Step Into from the menu
press F7 on your keyboard
click the Step Into icon on the toolbar of the Debugging - Log tabbed page
When you press F7 again, program control moves to the first breakpoint in the code. The
arrow next to the breakpoint indicates that this is the line of code that will be executed
next.
Various tabs display below the code window.

Note
The Step Into and Step Over commands offer the simplest way of moving through
your program code. Although the two commands are very similar, they each offer
a different way to control code execution.
Selecting the Step Into command executes a single program statement at a time. If the
execution point is located on a call to a subprogram, the Step Into command steps into
that subprogram and places the execution at the first statement in the subprogram.
For example, pressing F7 executes the line of code at the first breakpoint. In this case,
program control is transferred to the section where the cursor is defined.
You can view your data while you are debugging your code. You can use the Data tabbed
page to display and modify the variables. You can also set watches to monitor a subset of
the variables displayed in the Data tabbed page.
To display or hide the Data, Smart Data, and Watches tabbed pages, you select View -
Debugger, and then you select the tabs you want to display or hide.
For example, the Data tabbed page displays the values of the variables when i = 1.
When the value of i changes to 2, the values of the variables change accordingly.
You can modify the variables while debugging the code. To modify the value of a variable
in the Data tabbed page, you right-click the variable name, and then select Modify Value
from the shortcut menu.
The Modify Value dialog box displays the current value of the variable. You can enter a
new value in the second text box, and then click OK.
The Step Over debugging command, like Step Into, enables you to execute program
statements one at a time. However, if you issue the Step Over command when the
execution point is located on a subprogram call, the debugger runs that subprogram
without stopping, instead of stepping into it, and then positions the execution point on
the statement that follows the subprogram call.
If the execution point is located on the last statement of a subprogram, choosing Step
Over causes the debugger to return from the subprogram, placing the execution point on
the line of code that follows the call to the subprogram you are returning from.

To step over a subprogram you can
select Debug - Step Over from the menu
press F8 on your keyboard
click the Step Over icon on the toolbar of the Debugging - Log tabbed page
In the emp_list2 procedure, stepping over will execute the open cursor line without
transferring program control to the cursor definition, as was the case with the Step Into
option example.
The Step Out option leaves the current subprogram and goes to the next statement
after the subprogram call.
With program control at the first breakpoint, you click the Debug icon to display the
Debug PL/SQL dialog box.
You change the pMaxRows parameter to 4, for example.
If you now press Shift+F7, the program control leaves the emp_list2 procedure.
Control now goes to the next statement in the anonymous block.
You continue to press Shift+F7, which takes you to the next anonymous block that prints
the contents of the SQL buffer.
The SQL buffer contains the locations of the first four employees.
When stepping through your application code in the debugger, you may want to run to a
particular location without having to single step or set a breakpoint. To run to a specific
program location, in a subprogram editor, position your text cursor on the line of code
where you want the debugger to stop.
To run to the cursor location you can

right-click the line of code and choose Run to Cursor in the procedure editor

choose the Debug - Run to Cursor option from the main menu

press F4 on your keyboard


When running to the cursor location, any of these conditions may result:

your program executes without stopping, until the execution reaches the location marked by the
text cursor in the source editor

if your program never actually executes the line of code where the text cursor is, the Run to
Cursor command causes your program to run until it encounters a breakpoint or until the
program finishes
The Step to End of Method debugging command moves control to the last statement in
the current subprogram, or to the next breakpoint if there are any in the current
subprogram.
You display the Debugging window again and click the Step to End of Method icon on
the debugger toolbar.
If there is a second breakpoint, the Step to End of Method debugging tool will transfer
control to that breakpoint.
Selecting Step to End of Method again will go through the iterations of the WHILE loop
first, and then it will transfer the program control to the next executable statement in the
anonymous block.

Question
When using the PL/SQL Debugger Step Over command, what occurs when the
execution point is located on a subprogram call?
Options:
1.

The debugger runs that subprogram without stopping and then positions the
execution point on the statement that follows the subprogram call

2.

The debugger runs until it encounters a breakpoint or until the program finishes

3.

The debugger steps into the subprogram and places the execution point at the first
statement in the subprogram

4.

The debugger will transfer the program control to the next executable statement in a
block

Answer
When using the PL/SQL Debugger Step Over command when the execution point
is located on a subprogram call, the debugger runs that subprogram without
stopping and then positions the execution point on the statement that follows the
subprogram call.
Option 1 is correct. When using the Step Over command, if the execution point is
located on a subprogram call, the debugger will run that subprogram without
stopping instead of stepping into it and will then position the execution point on
the statement that follows the subprogram call.

Option 2 is incorrect. This occurs when using the Run to Cursor command, not the
Step Over command. When using the Run to Cursor command, if a program
never executes the line of code where the text cursor is, the debugger will run the
program until it encounters a breakpoint or until the program finishes.
Option 3 is incorrect. This does not occur when you use the Step Over command,
but the Step Into command, which executes a single program statement at a time.
When the execution point is located on a call to a subprogram, this command
steps into that subprogram and places the execution point on the subprogram's
first statement.
Option 4 is incorrect. When the execution point is located on the last statement of
a subprogram, the Step Over command causes the debugger to return from the
subprogram and place the execution point on the line of code that follows the call
to the subprogram you are returning from.

Summary
The PL/SQL Debugger is a powerful debugging tool that enables you to step through your
code line by line and analyze the contents of variables, arguments, loops, and branch
statements. By setting breakpoints, you can manually control when the program should
run and when it should pause. This allows you to quickly move over the sections that you
know work correctly and concentrate on the sections that are causing problems. The
debugging tools at your disposal are: Find Execution Point, Resume, Step Over, Step
Into, Step Out, Step to End of Method, Pause, and Terminate.
You create the emp_list2 procedure to gather employee information, which it stores in
a record. The emp_list2 procedure calls the function get_location, which
returns the name of the city in which an employee works. You use the debugger tools to
step through the code, modify parameters, and view program output.

Creating a New emp_list2 Procedure


 1  CREATE OR REPLACE PROCEDURE emp_list2 (
 2    pMaxRows IN NUMBER
 3  ) AS
 4    CURSOR emp_cursor IS
 5      SELECT d.department_name, e.employee_id, e.last_name,
 6             e.salary, e.commission_pct
 7        FROM departments d, employees e
 8       WHERE d.department_id = e.department_id;
 9    emp_record emp_cursor%ROWTYPE;
10    TYPE emp_tab_type IS TABLE OF emp_cursor%ROWTYPE
11      INDEX BY BINARY_INTEGER;
12    emp_tab emp_tab_type;
13    i NUMBER := 1;
14    v_city VARCHAR2(30);
15  BEGIN
16    OPEN emp_cursor;
17    FETCH emp_cursor INTO emp_record;
18    emp_tab(i) := emp_record;
19    WHILE (emp_cursor%FOUND) AND (i <= pMaxRows) LOOP
20      i := i + 1;
21      FETCH emp_cursor INTO emp_record;
22      emp_tab(i) := emp_record;
23      v_city := get_location(emp_record.department_name);
24      DBMS_OUTPUT.PUT_LINE('Employee ' ||
25        emp_record.last_name || ' works in ' || v_city);
26    END LOOP;
27    CLOSE emp_cursor;
28    FOR j IN REVERSE 1..i LOOP
29      DBMS_OUTPUT.PUT_LINE(emp_tab(j).last_name);
30    END LOOP;
31  END emp_list2;
Copyright 2008 SkillSoft. All rights reserved.
SkillSoft and the SkillSoft logo are trademarks or registered trademarks
of SkillSoft in the United

Debugging a Procedure
Learning objective

After completing this topic, you should be able to debug a procedure using the PL/SQL
Debugger.

Exercise overview
In this exercise, you're required to correctly identify the results of using various debugging
tools.
This involves the following tasks:

stepping into and over code

stepping through and to the end of code


As database administrator (DBA), you are responsible for the administration of the
HR-ORCL database. You have created a new procedure, HR.EMP_LIST2@HR.ORCL,
that gathers employee information such as the employee's department, name, ID,
salary, and commission percent.
The procedure also creates a table that can hold multiple records of employees.
The procedure calls the function get_location, which returns the name of the city in

which an employee works.


You want to use the PL/SQL Debugger tool in SQL Developer to analyze the code and its
contents.

Task 1: Stepping into and over code


You want to debug the emp_list2 procedure. The debugging procedure requires that
you first enter parameters for emp_list2. The procedure only has one parameter,
PMAXROWS, for which you've entered a value using the anonymous PL/SQL block.
You have set three breakpoints at code lines 16, 20, and 28.

Supplement
Selecting the link title opens the resource in a new browser window.

Launch window
View the code for creating a new emp_list2 procedure.

Step 1 of 2
You now want to debug the procedure by stepping into the code.
When stepping into code, what does the debugger do if the execution point is
located on the last statement of a subprogram?
Options:
1.

It bypasses the next subprogram and goes to the next statement after that
subprogram

2.

It returns from the subprogram and places the execution point on the line of code
that follows the original call

3.

It steps into the subprogram and places the execution point on the first statement

Result
When stepping into code, the debugger returns from the subprogram and places
the execution point on the line of code that follows the original call if the execution
point is located on the last statement of a subprogram.
Option 1 is incorrect. The Step Over command, not the Step Into command, bypasses
the next subprogram and goes to the next statement after the subprogram, provided
that the subprogram does not have a breakpoint elsewhere.

Option 2 is correct. When using the Step Into command, if the execution point is
located on the last statement of a subprogram, the debugger returns from the
subprogram and places the execution point on the line of code that follows the call
to the subprogram you are returning from.
Option 3 is incorrect. The Step Into command executes one program statement at
a time. If the execution point is located on a call to a subprogram, the Step Into
command steps into that subprogram and places the execution point on the first
statement.

Step 2 of 2
You now want to use the Step Over command to analyze the code.
What is the result of the debugger stepping over the provided code in line 16?
Options:
1.

It leaves the current subprogram and goes to the next statement

2.

It executes the open cursor line and transfers program control to the cursor definition

3.

It executes the open cursor line without transferring program control to the cursor
definition

4.

It goes to the last statement in the current subprogram or to the next breakpoint

Result
The result of stepping over the provided code in the debugger at line 16 is that the
debugger will execute the open cursor line without transferring program control to
the cursor definition.
Option 1 is incorrect. Using the Step Out command would cause the debugger to
leave the current subprogram and go to the next statement after the subprogram call.
Option 2 is incorrect. Stepping into the code would cause the debugger to execute
the open cursor line and would transfer program control to the cursor definition.
Option 3 is correct. Stepping over will execute the open cursor line without
transferring program control to the cursor definition. This differs from stepping into
the code because program control is not transferred to the cursor definition.
Option 4 is incorrect. Using Step to End of Method would cause the debugger to
go to the last statement in the current subprogram or to the next breakpoint.

Task 2: Stepping through and to the end of code

You continue debugging the emp_list2 procedure using some of the other debugging
methods at your disposal.

Step 1 of 2
You've positioned the cursor on the line of code where you want the debugger to
stop. What must you do next to run to a particular location without having to single
step or set a breakpoint?
Options:
1.

Run to the cursor

2.

Step into the code

3.

Step out of the code

4.

Step to the end of the method

Result
When debugging, to run to a particular location without having to single step or set
a breakpoint you must position the cursor on the line of code where you want the
debugger to stop and then run to the cursor.
Option 1 is correct. To run to a specific program location, in a subprogram editor,
you position your text cursor on the line of code where you want the debugger to
stop. You can then run to the cursor location by choosing Run To Cursor from the
shortcut menu or Debug menu, or by pressing F4.
Option 2 is incorrect. To run to a particular location without having to single step or
set a breakpoint, you use the Run to Cursor method. Stepping into code executes
a single program statement at a time.
Option 3 is incorrect. Using Run to Cursor, not Step Out, enables you to run to a
particular location without having to single step or set a breakpoint. Stepping out
leaves the current subprogram and goes to the next statement after the subprogram call.
Option 4 is incorrect. Stepping to the end of the method does not allow you to run
to a particular location without having to single step or set a breakpoint. Instead,
the Step to End of Method command goes to the last statement in the current
subprogram or to the next breakpoint if there are any in the current subprogram.

Step 2 of 2
What command causes the debugger to go to the last statement in the current
subprogram or to the next breakpoint if there are any in the current subprogram?
Options:

1.

Run to Cursor

2.

Step Into

3.

Step Over

4.

Step to End of Method

Result
The Step to End of Method command causes the debugger to go to the last
statement in the current subprogram or to the next breakpoint in the current
subprogram.
Option 1 is incorrect. Run to Cursor allows you to run to a particular location in
your code without having to single step or set a breakpoint.
Option 2 is incorrect. Step Into executes a single program statement at a time.
Option 3 is incorrect. Step Over runs a subprogram call without stepping into it and
then positions the execution point on the statement that follows the call.
Option 4 is correct. Step to End of Method goes to the last statement in the current
subprogram. However, if any breakpoints exist before the end of the subprogram,
the debugger will stop at the next one.

Summary
A procedure has been debugged using the Step Into, Step Over, Run to Cursor, and Step
to End of Method commands.

Working with Collections


Learning objective

After completing this topic, you should be able to recognize the steps for making effective
use of collections in PL/SQL and deciding which is the best collection to use in a given
scenario.

1. Understanding collections
A collection is a group of elements, all of the same type. Each element has a unique
subscript that determines its position in the collection.
Collections work like the arrays found in most third-generation programming languages.
They can store instances of an object type and, conversely, can be attributes of an object
type.

Collections can also be passed as parameters. You can use them to move columns of
data into and out of database tables or between client-side applications and stored
subprograms. Object types are used not only to create object relational tables, but also to
define collections.
You can use any of the three categories of collections:

nested tables

varrays

associative arrays
Nested tables can have any number of elements. Varrays are an ordered collection of
elements. And associative arrays, known as "index-by tables" in earlier Oracle releases,
are sets of key-value pairs, where each key is unique and is used to locate a
corresponding value in the array. The key can be an integer or a string.
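Associative arrays are described here but not demonstrated later in this section, so the following is a minimal sketch of a string-indexed associative array; the type name, keys, and values are invented for illustration.

```sql
DECLARE
  -- hypothetical string-indexed associative array
  TYPE pop_tab_type IS TABLE OF NUMBER INDEX BY VARCHAR2(30);
  v_population pop_tab_type;
  v_city       VARCHAR2(30);
BEGIN
  v_population('Tokyo')   := 13960000;  -- invented values
  v_population('Toronto') := 2930000;
  -- traverse the keys in sorted order using FIRST and NEXT
  v_city := v_population.FIRST;
  WHILE v_city IS NOT NULL LOOP
    DBMS_OUTPUT.PUT_LINE(v_city || ': ' || v_population(v_city));
    v_city := v_population.NEXT(v_city);
  END LOOP;
END;
/
```

Because the index is a string, the keys need not be consecutive, and FIRST and NEXT visit them in key order rather than insertion order.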
Two of these collection types can also be stored in the database:

nested tables

varrays
nested tables
A nested table holds a set of values. That is, it is a table within a table. Nested tables are
unbounded, meaning the size of the table can increase dynamically.
Nested tables are available in both PL/SQL and the database. Within PL/SQL, nested
tables are like one-dimensional arrays whose size can increase dynamically. Within the
database, nested tables are column types that hold sets of values.
varrays
Variable-size arrays, or varrays, are also collections of homogeneous elements that hold a
fixed number of elements, although you can change the number of elements at run time.
They use sequential numbers as subscripts.
You can define equivalent SQL types, allowing varrays to be stored in database tables.
They can be stored and retrieved through SQL, but with less flexibility than nested tables.
The Oracle database stores the rows of a nested table in no particular order. When you
retrieve a nested table from the database into a PL/SQL variable, the rows are given
consecutive subscripts starting at 1. This gives you array-like access to individual rows.
Nested tables are initially dense, but they can become sparse through deletions and
can, therefore, have nonconsecutive subscripts.

You can use varrays to reference the individual elements for array operations, or
manipulate the collection as a whole.
Varrays are always bounded and never sparse. You can specify the maximum size of the
varray in its type definition. Its index has a fixed lower bound of 1 and an extensible upper
bound.
A varray can contain a varying number of elements from zero (when empty) to the
maximum specified in its type definition. To reference an element, you can use the
standard subscripting syntax.
If you already have code or business logic that uses some other language, you can
usually translate that language's array and set types directly to PL/SQL collection types:

arrays in other languages become varrays in PL/SQL

sets and bags in other languages become nested tables in PL/SQL

hash tables and other kinds of unordered lookup tables in other languages become associative
arrays in PL/SQL
If you are writing original code or designing business logic from the start, you should
consider the strengths of each collection type and decide which is appropriate.
You use varrays when

the number of elements is known in advance

the elements are usually all accessed in sequence


You use nested tables when

the index values are not consecutive

there is no predefined upper bound for index values

you need to delete or update some elements, but not all the elements at once

you would usually create a separate lookup table, with multiple entries for each row of the main
table, and access it through join queries
The table compares the listing characteristics of PL/SQL collection types with those of DB
collection types.

Supplement
Selecting the link title opens the resource in a new browser window.

Launch window

View the listing characteristics for collections.


There are several guidelines for using collections effectively. Because varray data is
stored inline (in the same tablespace as the parent table), retrieving and storing varrays
involves fewer disk accesses. This makes varrays more efficient than nested tables.
To store large amounts of persistent data in a column collection, you should use nested
tables. This enables the Oracle server to use a separate table to hold the collection data,
which can grow over time. For example, when a collection for a particular row contains 1
to 1,000,000 elements, a nested table is simpler to use than a varray.
If preserving the order of elements in a collection column is important for data sets that
are not very large, you use a varray. For example, if you know that in each row the
collection will not contain more than 10 elements, you can use a varray with a limit of 10.
Using a varray avoids deletions in the middle of the data. If you expect to retrieve the
entire collection simultaneously, you use a varray. However, if you need to perform
piecewise updates, you cannot use varrays.
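As a sketch of what a piecewise update looks like (and why it requires a nested table rather than a varray), the following statement uses a TABLE() expression against the items nested table column of the pOrder table created later in this topic; the order ID and product values are assumed for illustration only.

```sql
-- Update a single element of the items nested table column.
-- Order 800 and product 1234 are hypothetical values.
UPDATE TABLE(SELECT items
               FROM pOrder
              WHERE ordid = 800) itm
   SET itm.price = 99.50
 WHERE itm.prodid = 1234;
```

Because a varray is always read and written as a whole, an equivalent statement against a varray column is not allowed; you would have to replace the entire collection value.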
To create a collection, you first define a collection type, and then declare collections of
that type.
You can create a nested table or a varray data type in the database. This makes the data
type available to use in places such as columns in database tables, variables in PL/SQL
programs, and attributes of object types.
Before you can define a database table containing a nested table or varray, you must first
create the data type for the collection in the database.
This is the syntax for defining nested table and varray collection types that are persistent
in the database.
CREATE [OR REPLACE] TYPE type_name AS TABLE OF
element_datatype [NOT NULL];
CREATE [OR REPLACE] TYPE type_name AS VARRAY
(max_elements) OF element_datatype [NOT NULL];
You can also create a nested table or a varray in PL/SQL.
This is the syntax for defining a nested table collection type transient in PL/SQL.
TYPE type_name IS TABLE OF element_datatype
[NOT NULL];

Note
Collections can be nested.
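As a minimal sketch of that note, a nested table whose elements are themselves nested tables can be declared and queried like this; the type names and values are invented for illustration.

```sql
DECLARE
  TYPE num_tab_type  IS TABLE OF NUMBER;        -- inner collection
  TYPE num_tabs_type IS TABLE OF num_tab_type;  -- collection of collections
  v_matrix num_tabs_type :=
    num_tabs_type(num_tab_type(1, 2), num_tab_type(3, 4, 5));
BEGIN
  DBMS_OUTPUT.PUT_LINE('Outer elements: ' || v_matrix.COUNT);     -- 2
  DBMS_OUTPUT.PUT_LINE('Second row size: ' || v_matrix(2).COUNT); -- 3
END;
/
```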
This is the syntax for defining a varray collection type transient in PL/SQL.
TYPE type_name IS VARRAY (max_elements) OF
element_datatype [NOT NULL];
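Putting these transient declarations to work, a minimal anonymous block might declare and initialize one collection of each kind; the type names and element values are invented for illustration.

```sql
DECLARE
  TYPE name_nst_type IS TABLE OF VARCHAR2(20);  -- nested table type
  TYPE score_va_type IS VARRAY(3) OF NUMBER;    -- varray type
  v_names  name_nst_type := name_nst_type('King', 'Kochhar');
  v_scores score_va_type := score_va_type(90, 85, 70);
BEGIN
  DBMS_OUTPUT.PUT_LINE('Names stored:  ' || v_names.COUNT);   -- 2
  DBMS_OUTPUT.PUT_LINE('Scores stored: ' || v_scores.COUNT);  -- 3
END;
/
```

Both collections are initialized with a constructor call, which is discussed later in this topic; without that call, each collection would be atomically null.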
To create a table based on a nested table, you first define an object type.
You create the typ_item type, which holds the information for a single line item.
CREATE TYPE typ_item AS OBJECT  -- create object
  (prodid NUMBER(5),
   price NUMBER(7,2))
/
CREATE TYPE typ_item_nst  -- define nested table type
  AS TABLE OF typ_item
/

Note
You must create the typ_item_nst nested table type based on the previously
declared type because it is illegal to declare multiple data types in this nested
table declaration.
Next you create the typ_item_nst type, which is created as a table of the typ_item
type, and declare a column of that collection type.
CREATE TABLE pOrder (  -- create database table
  ordid NUMBER(5),
  supplier NUMBER(5),
  requester NUMBER(4),
  ordered DATE,
  items typ_item_nst)
  NESTED TABLE items STORE AS item_stor_tab
Finally you create the pOrder table. And you use the nested table type in a column
declaration, which will include an arbitrary number of items based on the typ_item_nst
type. Thus, each row of pOrder may contain a table of items.
The NESTED TABLE STORE AS clause is required to indicate the name of the storage
table in which the rows of all the values of the nested table reside. The storage table is
created in the same schema and the same tablespace as the parent table.

CREATE TABLE pOrder (  -- create database table
  ordid NUMBER(5),
  supplier NUMBER(5),
  requester NUMBER(4),
  ordered DATE,
  items typ_item_nst)
  NESTED TABLE items STORE AS item_stor_tab

Note
The USER_COLL_TYPES dictionary view holds information about collections.
The rows for all nested tables of a particular column are stored within the same segment.
This segment is called the storage table.
A storage table is a system-generated segment in the database that holds instances of
nested tables within a column. You specify the name for the storage table by using the
NESTED TABLE STORE AS clause in the CREATE TABLE statement. The storage table
inherits storage options from the outermost table.
To distinguish between nested table rows belonging to different parent table rows, a
system-generated nested table identifier unique for each outer row that encloses a
nested table is created.
Operations on storage tables are performed implicitly by the system. You should not
access or manipulate the storage table, except implicitly through its containing objects.
Privileges of the column of the parent table are transferred to the nested table.
To create a table based on a varray, you first create the typ_project type, which holds
the information for a project.
Then you create the typ_ProjectList type, which is created as a varray of the
typ_Project type. The varray contains a maximum of 50 elements.
CREATE TYPE typ_Project AS OBJECT (  -- create object
  project_no NUMBER(2),
  title VARCHAR2(35),
  cost NUMBER(7,2))
/
CREATE TYPE typ_ProjectList AS VARRAY (50) OF typ_Project
  -- define VARRAY type
/
Next you create the DEPARTMENT table and use the varray type in a column declaration.
Each element of the varray stores a project object.

CREATE TABLE department (  -- create database table
  dept_id NUMBER(2),
  name VARCHAR2(15),
  budget NUMBER(11,2),
  projects typ_ProjectList)  -- declare varray as column
/
This code creates a varray of phone numbers, and then uses it in a CUSTOMERS table.
The OE sample schema uses this definition.
CREATE TYPE phone_list_typ
AS VARRAY(5) OF VARCHAR2(25);
/
CREATE TABLE customers
(customer_id NUMBER(6)
,cust_first_name VARCHAR2(50)
,cust_last_name VARCHAR2(50)
,cust_address cust_address_typ
,phone_numbers phone_list_typ
...
);

Question
Which statements accurately describe either varrays or nested tables?
Options:
1.

Neither varrays nor nested tables allow piecewise updates

2.

Nested tables are best used when a data set is not very large and it's important to
preserve the order of elements in the collection column

3.

Using nested tables to store large amounts of persistent data allows the Oracle
server to use a separate table to hold collection data that can grow over time

4.

Varray data is stored inline, so retrieving and storing varrays involves fewer disk
accesses

Answer
Using nested tables to store large amounts of persistent data allows the Oracle
server to use a separate table to hold collection data that can grow over time.
Varray data is stored inline, so retrieving and storing varrays involves fewer disk
accesses.
Option 1 is incorrect. Although varrays do not allow piecewise updates, nested
tables do.

Option 2 is incorrect. If your data set is not very large and it is important to
preserve the order of elements in a collection column, you should use varrays and
not nested tables.
Option 3 is correct. To store large amounts of persistent data in a column
collection, you should use nested tables. This way, the Oracle server can use a
separate table to hold the collection data, which can grow over time.
Option 4 is correct. Because varray data is stored inline (in the same tablespace),
retrieving and storing varrays involves fewer disk accesses. Varrays are
therefore more efficient than nested tables.

Question
Which statements accurately describe storage tables?
Options:
1.

Operations on them must be performed manually

2.

They are named using the STORAGE TABLE AS clause of the CREATE TABLE
statement

3.

They are system-generated segments in the database that hold instances of nested
tables within a column

4.

They inherit the storage options from the outermost table

Answer
Storage tables are system-generated segments in the database that hold
instances of nested tables within a column. And they inherit the storage options
from the outermost table.
Option 1 is incorrect. Operations on storage tables are performed implicitly by the
system. You should not access or manipulate the storage table, except implicitly
through its containing objects.
Option 2 is incorrect. You specify the name for the storage table by using the
NESTED TABLE STORE AS clause in the CREATE TABLE statement.
Option 3 is correct. The rows for all nested tables of a particular column are stored
within the same segment, which is called the storage table.
Option 4 is correct. A storage table inherits storage options from the outermost
table. Privileges of the column of the parent table are transferred to the nested
table.

2. Working with collections

There are several points about collections that you must know when working with them.
You can declare collections as the formal parameters of functions and procedures so that
you can pass collections to stored subprograms and from one subprogram to another.
A function's RETURN clause can be a collection type.
Collections follow the usual scoping and instantiation rules.
In a block or subprogram, collections are instantiated when you enter the block or
subprogram and cease to exist when you exit. In a package, collections are instantiated
when you first reference the package and cease to exist when you end the database
session.
In this example, a nested table type is used as the formal parameter of a packaged
procedure: it is the data type of an IN parameter of the procedure ALLOCATE_PROJ. It is
also used as the return data type of the TOP_PROJECT function.
CREATE OR REPLACE PACKAGE manage_dept_proj AS
TYPE typ_proj_details IS TABLE OF typ_Project;
...
PROCEDURE allocate_proj
(propose_proj IN typ_proj_details);
FUNCTION top_project (n NUMBER)
RETURN typ_proj_details;
...
Until you initialize it, a collection is atomically null: the collection itself is null, not
its elements. To initialize a collection, you can

use a constructor

use a fetch

assign another collection variable directly


use a constructor
A constructor is a system-defined function with the same name as the collection type. A
constructor allows the creation of an object from an object type.
Invoking a constructor is a way to instantiate or create an object. This function constructs
collections from the elements passed to it.
use a fetch
You can read an entire collection from the database using a fetch.
assign another collection variable directly

You can assign another collection variable directly. You can copy the entire contents of one
collection to another as long as both are built from the same data type.
In this example, you pass three elements to the typ_ProjectList() constructor,
which returns a varray containing those elements.
DECLARE
-- this example uses a constructor
v_accounting_project typ_ProjectList;
BEGIN
v_accounting_project :=
typ_ProjectList
(typ_Project (1, 'Dsgn New Expense Rpt', 3250),
typ_Project (2, 'Outsource Payroll', 12350),
typ_Project (3, 'Audit Accounts Payable',1425));
INSERT INTO department
VALUES(10, 'Accounting', 123, v_accounting_project);
...
END;
/
In this example of the initialization of a collection, an entire collection from the database is
fetched into the local PL/SQL collection variable.
DECLARE
-- this example uses a fetch from the database
v_accounting_project typ_ProjectList;
BEGIN
SELECT projects
INTO v_accounting_project
FROM department
WHERE dept_id = 10;
...
END;
/
In this example, the entire contents of one collection variable are assigned to another
collection variable.
DECLARE

-- this example assigns another collection
-- variable directly
v_accounting_project typ_ProjectList;
v_backup_project typ_ProjectList;
BEGIN
SELECT projects
INTO v_accounting_project
FROM department
WHERE dept_id = 10;
v_backup_project := v_accounting_project;

END;
/
Every element reference includes a collection name and a subscript enclosed in
parentheses. The subscript determines which element is processed. To reference an
element, you can specify its subscript by using this syntax.
In the syntax, subscript is an expression that yields a positive integer. For nested tables,
the integer must lie in the range 1 to 2147483647. For varrays, the integer must lie in the
range from 1 to maximum_size that you provide.
collection_name(subscript)
The first example shows you how to reference a specific collection element. The second
example shows you how to reference a field in a collection.
v_accounting_project(1)
v_accounting_project(1).cost
You can use collection methods from procedural statements, but not from SQL
statements. The methods are

EXISTS

COUNT

LIMIT

FIRST and LAST

PRIOR and NEXT

EXTEND

TRIM

DELETE
EXISTS
The EXISTS(n) method returns TRUE if the nth element in a collection exists. Otherwise,
EXISTS(n) returns FALSE.
COUNT
The COUNT method returns the number of elements that a collection contains.
LIMIT
For nested tables, which have no maximum size, the LIMIT method returns NULL. For
varrays, LIMIT returns the maximum number of elements that a varray can contain.
FIRST and LAST
The FIRST and LAST methods return the first (smallest) and last (largest) index
numbers in a collection, respectively.
PRIOR and NEXT
The PRIOR(n) method returns the index number that precedes index n in a collection.
And the NEXT(n) method returns the index number that follows index n.
EXTEND
The EXTEND method appends one null element, EXTEND(n) appends n elements, and
EXTEND(n, i) appends n copies of the ith element.
TRIM
The TRIM method removes one element from the end of a collection, while TRIM(n)
removes n elements from the end.
DELETE
The DELETE method removes all elements from a nested table or associative array.
DELETE(n) removes the nth element, and DELETE(m, n) removes a range. DELETE
does not work on varrays.
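The examples that follow concentrate on FIRST, NEXT, COUNT, EXTEND, LAST, and EXISTS, so here is a minimal sketch of EXTEND, TRIM, and DELETE acting on a local nested table; the type name and values are invented for illustration.

```sql
DECLARE
  TYPE num_tab_type IS TABLE OF NUMBER;
  v_nums num_tab_type := num_tab_type(10, 20, 30, 40);
BEGIN
  v_nums.EXTEND;    -- append one null element; COUNT is now 5
  v_nums.TRIM(2);   -- remove the last two elements; COUNT is now 3
  v_nums.DELETE(2); -- remove element 2; the table is now sparse
  DBMS_OUTPUT.PUT_LINE('Elements left: ' || v_nums.COUNT);  -- 2
END;
/
```

After the DELETE, subscripts 1 and 3 remain while subscript 2 does not, which is why FIRST and NEXT, rather than a simple FOR loop over 1..COUNT, are the safe way to traverse a possibly sparse nested table.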
In this example, the FIRST method finds the smallest index number, and the NEXT
method traverses the collection starting at the first index. The output from this block of
code is Project too expensive: Outsource Payroll.
DECLARE
i INTEGER;
v_accounting_project typ_ProjectList;
BEGIN
v_accounting_project := typ_ProjectList(
typ_Project (1,'Dsgn New Expense Rpt', 3250),
typ_Project (2, 'Outsource Payroll', 12350),
typ_Project (3, 'Audit Accounts Payable',1425));
i := v_accounting_project.FIRST ;
WHILE i IS NOT NULL LOOP
IF v_accounting_project(i).cost > 10000 then
DBMS_OUTPUT.PUT_LINE('Project too expensive: '
|| v_accounting_project(i).title);
END IF;
i := v_accounting_project.NEXT (i);
END LOOP;
END;
/
You can use the PRIOR and NEXT methods to traverse collections indexed by any series
of subscripts. In the example, the NEXT method is used to traverse a varray.

PRIOR(n) returns the index number that precedes index n in a collection. NEXT(n)
returns the index number that succeeds index n. If n has no predecessor, PRIOR(n)
returns NULL. Likewise, if n has no successor, NEXT(n) returns NULL. PRIOR is the
inverse of NEXT.
PRIOR and NEXT do not wrap from one end of a collection to the other. When traversing
elements, PRIOR and NEXT ignore deleted elements.
DECLARE
  i INTEGER;
  v_accounting_project typ_ProjectList;
BEGIN
  v_accounting_project := typ_ProjectList(
    typ_Project(1, 'Dsgn New Expense Rpt', 3250),
    typ_Project(2, 'Outsource Payroll', 12350),
    typ_Project(3, 'Audit Accounts Payable', 1425));
  i := v_accounting_project.FIRST;
  WHILE i IS NOT NULL LOOP
    IF v_accounting_project(i).cost > 10000 THEN
      DBMS_OUTPUT.PUT_LINE('Project too expensive: '
        || v_accounting_project(i).title);
    END IF;
    i := v_accounting_project.NEXT(i);
  END LOOP;
END;
/
This code uses the COUNT, EXTEND, LAST, and EXISTS methods on the my_projects
varray.
The COUNT method reports that the projects collection holds three projects for department
10. The EXTEND method creates a fourth empty project. The LAST method reports that
four projects exist. When testing for the existence of a fifth project, the program reports
that it does not exist.
DECLARE
  v_my_projects typ_ProjectList;
  v_array_count INTEGER;
  v_last_element INTEGER;
BEGIN
  SELECT projects INTO v_my_projects FROM department
  WHERE dept_id = 10;
  v_array_count := v_my_projects.COUNT;
  DBMS_OUTPUT.PUT_LINE('The # of elements is: ' || v_array_count);
  v_my_projects.EXTEND;  -- make room for new project
  v_last_element := v_my_projects.LAST;
  DBMS_OUTPUT.PUT_LINE('The last element is: ' || v_last_element);
  IF v_my_projects.EXISTS(5) THEN
    DBMS_OUTPUT.PUT_LINE('Element 5 exists!');
  ELSE
    DBMS_OUTPUT.PUT_LINE('Element 5 does not exist.');
  END IF;
END;
/
You must use PL/SQL procedural statements to reference the individual elements of a
varray in an INSERT, UPDATE, or DELETE statement.
In this example, the stored procedure inserts a new project into a department's projects at
a given position.
CREATE OR REPLACE PROCEDURE add_project (
  p_deptno      IN NUMBER,
  p_new_project IN typ_Project,
  p_position    IN NUMBER)
IS
  v_my_projects typ_ProjectList;
BEGIN
  SELECT projects INTO v_my_projects FROM department
  WHERE dept_id = p_deptno FOR UPDATE OF projects;
  v_my_projects.EXTEND;  -- make room for new project
  /* Move varray elements forward */
  FOR i IN REVERSE p_position..v_my_projects.LAST - 1 LOOP
    v_my_projects(i + 1) := v_my_projects(i);
  END LOOP;
  v_my_projects(p_position) := p_new_project;  -- add new project
  UPDATE department SET projects = v_my_projects
  WHERE dept_id = p_deptno;
END add_project;
/
To execute the procedure, you pass
- the department number to which you want to add a project
- the project information
- the position where the project information is to be inserted
EXECUTE add_project(10, typ_Project(4, 'Information Technology', 789), 4)
SELECT * FROM department;
DEPT_ID NAME       BUDGET
------- ---------- ------
PROJECTS(PROJECT_NO, TITLE, COST)
-----------------------------------------
     10 Accounting    123
PROJECTLIST(PROJECT(1, 'Dsgn New Expense Rpt', 3250),
            PROJECT(2, 'Outsource Payroll', 12350),
            PROJECT(3, 'Audit Accounts Payable', 1425),
            PROJECT(4, 'Information Technology', 789))
In most cases, if you reference a nonexistent collection element, PL/SQL raises a
predefined exception. The exceptions are

COLLECTION_IS_NULL

NO_DATA_FOUND

SUBSCRIPT_BEYOND_COUNT

SUBSCRIPT_OUTSIDE_LIMIT

VALUE_ERROR
COLLECTION_IS_NULL
The exception COLLECTION_IS_NULL is raised when you try to operate on an
atomically null collection.
NO_DATA_FOUND
The exception NO_DATA_FOUND is raised when a subscript designates an element that
was deleted.
SUBSCRIPT_BEYOND_COUNT
The exception SUBSCRIPT_BEYOND_COUNT is raised when a subscript exceeds the
number of elements in a collection.
SUBSCRIPT_OUTSIDE_LIMIT
The exception SUBSCRIPT_OUTSIDE_LIMIT is raised when a subscript is outside the
legal range.
VALUE_ERROR
The exception VALUE_ERROR is raised when a subscript is null or not convertible to an
integer.
In the first case, the nested table is atomically null.
In the second case, the subscript is null.
In the third case, the subscript is outside the legal range.
In the fourth case, the subscript exceeds the number of elements in the table.
In the fifth case, the subscript designates a deleted element.

DECLARE
  TYPE NumList IS TABLE OF NUMBER;
  nums NumList;          -- atomically null
BEGIN
  /* Assume execution continues despite the raised
     exceptions. */
  nums(1) := 1;          -- raises COLLECTION_IS_NULL
  nums := NumList(1, 2); -- initialize table
  nums(NULL) := 3;       -- raises VALUE_ERROR
  nums(0) := 3;          -- raises SUBSCRIPT_OUTSIDE_LIMIT
  nums(3) := 3;          -- raises SUBSCRIPT_BEYOND_COUNT
  nums.DELETE(1);        -- delete element 1
  IF nums(1) = 1 THEN    -- raises NO_DATA_FOUND
  ...

Question
Which collection method removes one element from the end of a collection?
Options:
1.

COUNT

2.

EXTEND

3.

FIRST AND LAST

4.

TRIM

Answer
The TRIM collection method removes one element from the end of a collection.
Option 1 is incorrect. The COUNT method returns the number of elements that a
collection contains.
Option 2 is incorrect. The EXTEND method appends one null element. The syntax
EXTEND(n) is used to append n elements. And the syntax EXTEND(n, i) is
used to append n copies of the ith element.
Option 3 is incorrect. The FIRST and LAST methods return the smallest and
largest index numbers in a collection, respectively.
Option 4 is correct. The TRIM method removes one element from the end of a
collection. The syntax TRIM(n) removes n elements from the end of a collection.

Summary

Collections are a grouping of elements, all of the same type. The types of collections are
nested tables, varrays, and associative arrays. You can define nested tables in PL/SQL
program units and in the database. Nested tables, varrays, and associative arrays can be
used in a PL/SQL program.
When using collections in PL/SQL programs, you can access collection elements, use
predefined collection methods, and use exceptions that are commonly encountered with
collections.

Using Data Warehousing Enhancements


Learning objective

After completing this topic, you should be able to recognize the steps for using
materialized views and query rewrite enhancements to improve query execution times.

1. Using materialized view enhancements


Materialized views are schema objects that can be used to summarize, compute,
replicate, and distribute data.
Materialized views can be used most appropriately

in data warehouses

by the optimizer

in distributed environments

in mobile computing environments


in data warehouses
Materialized views can be used in data warehouses, to compute and store aggregated
data, such as sums and averages. Materialized views in these environments are typically
referred to as summaries because they store summarized data.
by the optimizer
Materialized views can be used by the optimizer to improve query performance by
automatically recognizing when a materialized view can and should be used to satisfy a
request. Queries are then directed to the materialized view and not to the underlying detail
tables or views.
in distributed environments
Materialized views can be used in distributed environments, to replicate data at distributed
sites and synchronize updates done at several sites with conflict resolution methods.
Materialized views as replicas provide local access to data that otherwise has to be
accessed from remote sites.

in mobile computing environments


Materialized views can be used in mobile computing environments, to download a subset
of data from central servers to mobile clients, with periodic refreshes from the central
servers and propagation of updates by clients back to the central servers.
Materialized views are similar to indexes in that they

consume storage space

need to be refreshed

improve performance of SQL for query rewrites

are transparent
The SH sample schema comes with a materialized view.
The view is defined with this code.
SELECT   t.week_ending_day,
         p.prod_subcategory,
         SUM(s.amount_sold) AS dollars,
         s.channel_id,
         s.promo_id
FROM     sales s,
         times t,
         products p
WHERE    s.time_id = t.time_id
AND      s.prod_id = p.prod_id
GROUP BY t.week_ending_day,
         p.prod_subcategory,
         s.channel_id,
         s.promo_id;

This example shows a sample materialized view in the SH schema.


DESCRIBE sh.FWEEK_PSCAT_SALES_MV
Name                           Null?    Type
------------------------------ -------- ------------
WEEK_ENDING_DAY                NOT NULL DATE
PROD_SUBCATEGORY               NOT NULL VARCHAR2(50)
DOLLARS                                 NUMBER
CHANNEL_ID                     NOT NULL NUMBER
PROMO_ID                       NOT NULL NUMBER

Oracle Database 11g introduces both new and enhanced catalog views for materialized
views that enable you to track the materialized view freshness.

New catalog views display

the partition change tracking (PCT) information for a given materialized view

which sections of the materialized view's data are fresh or stale


You can view the partition staleness information of the materialized view as this affects
the usability and maintainability of the materialized view.
Partition Change Tracking (PCT) was introduced in Oracle Database 9i but its related
information was not yet exposed to the user through catalog views. Therefore, users were
unable to see this valuable information, which could help them make better decisions as
to how their materialized views should be maintained.
Oracle Database 11g exposes the materialized view (MV) freshness information that
corresponds to PCT base table partitions. In this way users are kept informed as to which
ranges of the MV data are fresh and which are not.
The fresh part of MV is reliable and available for use. There are four available catalog
views:

USER/ALL/DBA_MVIEWS

USER/ALL/DBA_MVIEW_DETAIL_RELATIONS

USER/ALL/DBA_MVIEW_DETAIL_PARTITION

USER/ALL/DBA_MVIEW_DETAIL_SUBPARTITION
USER/ALL/DBA_MVIEWS
The USER/ALL/DBA_MVIEWS catalog view is extended where new columns are added to
describe the number of PCT tables, and the number of fresh and stale PCT regions.
The USER/ALL/DBA_MVIEWS extension describes all materialized views in the database.
USER/ALL/DBA_MVIEW_DETAIL_RELATIONS
The USER/ALL/DBA_MVIEW_DETAIL_RELATIONS catalog view is extended where new
columns are added to indicate whether the detail table is PCT-enabled, and to show the
numbers of fresh and stale PCT partitions.
The USER/ALL/DBA_MVIEW_DETAIL_RELATIONS extension represents the named
detail relations that are either in the FROM list of a materialized view, or that are indirectly
referenced through views in the FROM list.
USER/ALL/DBA_MVIEW_DETAIL_PARTITION
A new catalog view for PCT partition USER/ALL/DBA_MVIEW_DETAIL_PARTITION is
created to describe the freshness of each PCT partition.

The new USER/ALL/DBA_MVIEW_DETAIL_PARTITION displays the freshness
information of the materialized views, with respect to a PCT detail partition.
USER/ALL/DBA_MVIEW_DETAIL_SUBPARTITION
A new catalog view for PCT subpartition
USER/ALL/DBA_MVIEW_DETAIL_SUBPARTITION is created to describe the freshness of
each PCT subpartition.
The new USER/ALL/DBA_MVIEW_DETAIL_SUBPARTITION displays freshness
information for all materialized views in the database, with respect to a PCT detail
subpartition.
The _MVIEWS catalog view describes materialized views. You can use this view to find
the query definition of the view as well as other useful information such as whether it is
updatable, rewrite enabled, and fresh.
In the syntax for the extended DBA/ALL/USER_MVIEWS specification, ALL_MVIEWS
describes all the materialized views accessible to the current user. DBA_MVIEWS
describes all the materialized views in the database. And USER_MVIEWS describes all
materialized views owned by the current user.
SELECT mview_name, num_pct_tables,
num_fresh_pct_regions,
num_stale_pct_regions
FROM all_mviews
WHERE owner = 'SH';
This is the syntax for the extended specification for the DBA/ALL/USER_MVIEWS catalog
view. This catalog view is extended to show how many detail partitions support PCT. In
addition, two extended columns show how many fresh and stale PCT regions are present
in that MV.
In the syntax
- NUM_PCT_TABLES specifies the number of PCT detail tables
- NUM_FRESH_PCT_REGIONS specifies the number of fresh PCT partition regions
- NUM_STALE_PCT_REGIONS specifies the number of stale PCT partition regions
MVIEW_NAME           NUM_PCT_TABLES NUM_FRESH_PCT_REGIONS NUM_STALE_PCT_REGIONS
-------------------- -------------- --------------------- ---------------------
FWEEK_PSCAT_SALES_MV              1                    28                     0

The USER/ALL/DBA_MVIEW_DETAIL_RELATIONS catalog view describes the named
detail relations that are either

specified in the FROM list of the subquery that defines a materialized view accessible to the
current user
indirectly referenced through views in that FROM list
Inline views in the materialized view definition are not represented in this view or the
related views.
Three new columns are added to this view in Oracle Database 11g. The views are
extended to show whether the detail partition supports PCT with respect to a given MV. If
the detail partition does support PCT, the catalog views display how many fresh and stale
PCT partitions are present in that detail table.
The code shows the new columns that are added to the
DBA/ALL/USER_MVIEW_DETAIL_RELATIONS catalog views:
- DETAILOBJ_PCT is the detail object PCT supported
- NUM_FRESH_PCT_PARTITIONS specifies the number of fresh PCT partitions
- NUM_STALE_PCT_PARTITIONS specifies the number of stale PCT partitions
DESCRIBE all_mview_detail_relations
Name                           Null?    Type
------------------------------ -------- ---------------
OWNER                          NOT NULL VARCHAR2(30)
MVIEW_NAME                     NOT NULL VARCHAR2(30)
DETAILOBJ_OWNER                NOT NULL VARCHAR2(30)
DETAILOBJ_NAME                 NOT NULL VARCHAR2(30)
DETAILOBJ_TYPE                          VARCHAR2(9)
DETAILOBJ_ALIAS                         VARCHAR2(30)
DETAILOBJ_PCT                           VARCHAR2(1)
NUM_FRESH_PCT_PARTITIONS                NUMBER
NUM_STALE_PCT_PARTITIONS                NUMBER

You can use the new USER/ALL/DBA_MVIEW_DETAIL_PARTITION catalog view to find
the freshness information about the materialized view with respect to a PCT detail
partition.
In the syntax

OWNER is the name of the owner of the materialized view

MVIEW_NAME is the name of the materialized view


SELECT detailobj_owner, detailobj_name,
detail_partition_name,
detail_partition_position POSITION,
freshness FRESH
FROM all_mview_detail_partition
WHERE mview_name = 'FWEEK_PSCAT_SALES_MV';
The columns of this view are
- DETAILOBJ_OWNER is the name of the owner of the detail object
- DETAILOBJ_NAME is the detail object name (table or view)
- DETAIL_PARTITION_NAME is the name of the detail object partition
- DETAIL_PARTITION_POSITION is the position of the detail object partition
- FRESHNESS is the freshness state either FRESH, STALE, UNKNOWN, or NA
This is sample output from the catalog view query.
DETAILOBJ_OWNER DETAILOBJ_NAME DETAIL_PARTITION_NAME POSITION FRESH
--------------- -------------- --------------------- -------- -----
SH              SALES          SALES_1995                   1 FRESH
SH              SALES          SALES_1996                   2 FRESH
SH              SALES          SALES_H1_1997                3 FRESH
...
SH              SALES          SALES_Q1_2003               25 FRESH
SH              SALES          SALES_Q2_2003               26 FRESH
SH              SALES          SALES_Q3_2003               27 FRESH
SH              SALES          SALES_Q4_2003               28 FRESH

28 rows selected

ALL_MVIEW_DETAIL_SUBPARTITION displays freshness information of the materialized
views, with respect to a PCT detail subpartition, accessible to the current user.
DBA_MVIEW_DETAIL_SUBPARTITION displays freshness information for all materialized
views in the database, with respect to a PCT detail subpartition.
USER_MVIEW_DETAIL_SUBPARTITION displays freshness information for all
materialized views, with respect to a PCT detail subpartition, owned by the current user.
DESCRIBE all_mview_detail_subpartition
Name                           Null?    Type
------------------------------ -------- ------------
OWNER                          NOT NULL VARCHAR2(30)
MVIEW_NAME                     NOT NULL VARCHAR2(30)
DETAILOBJ_OWNER                NOT NULL VARCHAR2(30)
DETAILOBJ_NAME                 NOT NULL VARCHAR2(30)
DETAIL_PARTITION_NAME                   VARCHAR2(30)
DETAIL_SUBPARTITION_NAME                VARCHAR2(30)
DETAIL_SUBPARTITION_POSITION            NUMBER
FRESHNESS                               CHAR(5)

The refresh performance improvements reduce the time required to refresh materialized
views.
Previously, when the materialized view was being refreshed, it was implicitly disabled for
query rewrite even if its data was acceptable to the user. This was especially true when
atomic refresh was in progress and the user saw the data in the materialized view in a
transactional state of the past refresh.
In Oracle Database 11g, when the materialized view is refreshed in the atomic mode, it is
eligible for query rewrite if the rewrite integrity mode is set to STALE_TOLERATED.
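For example, an atomic fast refresh can be requested through the DBMS_MVIEW.REFRESH
procedure. This is a sketch; the materialized view name is the SH sample one, and the
parameter values shown are illustrative:

```sql
BEGIN
  -- method => 'F' requests a fast refresh; atomic_refresh => TRUE
  -- performs the refresh as a single transaction
  DBMS_MVIEW.REFRESH(list           => 'SH.FWEEK_PSCAT_SALES_MV',
                     method         => 'F',
                     atomic_refresh => TRUE);
END;
/
```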
Previously, the fast refresh was extended to support MVs that have UNION ALL
operators. However, the fast refresh of the UNION ALL MV did not apply partition-based
(PCT) refresh. In Oracle Database 11g, PCT refresh is now allowed for UNION ALL MV
fast refresh.
The materialized view with UNION ALL operators is fast refreshable but unlike most of
the fast refreshable MVs, it is not created with an index. Previously, in order to speed up
refresh execution, you needed to manually create such an index, but now it is automatic.
Summaries are aggregate views that are created to improve query execution times. In an
Oracle Database, summaries are implemented with a materialized view.
There are many well-known techniques that you can use to increase query performance.
For example, you can create additional indexes, or you can partition your data.
Many data warehouses also use a technique called summaries. The basic process
for a summary is to precompute the result of a long-running query and store this result in
a database table called a summary table, which is comparable to using a CREATE TABLE AS
SELECT (CTAS) statement.

Instead of precomputing the same query result many times, the user can directly access
the summary table. Although this approach has the benefit of enhancing query response
time, it also has many drawbacks. The user needs to be aware of the summary table's
existence in order to rewrite the query to use that table instead.
Also, the data contained in a summary table is frozen, and must be manually refreshed
whenever modifications occur on the real tables.
With Oracle Database summary management, the user no longer has to be aware of
summaries that have been defined. The DBA creates materialized views that are
automatically used by the system when rewriting SQL queries.
Using MVs offers another advantage over manually creating summaries tables, in that the
data can be refreshed automatically.
In a typical use of summary management, the database administrator creates the
materialized view or summary table. When the end user queries tables and views, the
query rewrite mechanism of the Oracle server automatically rewrites the SQL query to
use the summary table.
The use of the materialized view is transparent to the end user or application querying the
data.
The implementation of summary management in Oracle Database includes the use of
these components:

mechanisms to define materialized views and dimensions

a refresh mechanism to ensure materialized views contain the latest data

query rewrite capability to transparently rewrite a query to use a materialized view

the SQL Access Advisor that recommends materialized views and indexes to be created

the DBMS_ADVISOR.TUNE_MVIEW procedure, which shows you how to make your materialized
view fast refreshable and use general query rewrite
After your data has been transformed, staged, and loaded into the detail tables, you
invoke the summary management process by

using the SQL Access Advisor to determine how you will use materialized views

creating materialized views and design how queries will be rewritten

using DBMS_ADVISOR.TUNE_MVIEW to obtain an optimized materialized view as necessary

viewing the CREATE output results by querying USER_TUNE_MVIEW or DBA_TUNE_MVIEW


DBMS_ADVISOR.TUNE_MVIEW
  (name, 'CREATE MATERIALIZED VIEW my_mv_name
          REFRESH FAST AS
          SELECT_statement_goes_here');
The parameters for the DBMS_ADVISOR.TUNE_MVIEW procedure are

name is the task name for looking up the results in a catalog view. If not specified, the system will
generate a name and return.
mv_create_stmt is the original materialized view creation statement.
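A concrete call might look as follows. The task name, materialized view name, and
defining query here are hypothetical; the recommended statements can then be read back
from the USER_TUNE_MVIEW catalog view:

```sql
DECLARE
  v_task VARCHAR2(30) := 'tune_sales_mv_task';  -- hypothetical task name
BEGIN
  DBMS_ADVISOR.TUNE_MVIEW(v_task,
    'CREATE MATERIALIZED VIEW sales_by_month_mv
       REFRESH FAST AS
       SELECT t.calendar_month_desc, SUM(s.amount_sold) AS dollars
       FROM   sh.sales s, sh.times t
       WHERE  s.time_id = t.time_id
       GROUP BY t.calendar_month_desc');
END;
/

-- Review the advisor's rewritten CREATE statements
SELECT statement
FROM   user_tune_mview
WHERE  task_name = 'tune_sales_mv_task'
AND    script_type = 'IMPLEMENTATION'
ORDER BY action_id;
```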

Question
How are materialized views used?
Options:
1.

In data warehouses, they are used to compute and store aggregated data

2.

In distributed environments, they are used to download a subset of data from central
servers

3.

In mobile computing environments, they are used to replicate data at distributed


sites

4.

They can be used by the optimizer to improve query performance

Answer
In data warehouses, materialized views are used to compute and store
aggregated data. And they can be used by the optimizer to improve query
performance.
Option 1 is correct. In data warehouses, materialized views are used to compute
and store aggregated data, such as sums and averages. Materialized views in
these environments are typically referred to as summaries because they store
summarized data.
Option 2 is incorrect. In distributed environments, materialized views are used to
replicate data at distributed sites and synchronize updates done at several sites
with conflict resolution methods.
Option 3 is incorrect. In mobile computing environments, materialized views are
used to download a subset of data from central servers to mobile clients. This
involves periodic refreshes from the central servers and propagation of updates by
clients back to the central servers.

Option 4 is correct. The optimizer can use materialized views to improve query
performance by automatically recognizing when a materialized view can and
should be used to satisfy a request.

Question
Which catalog view, available to all users, has been extended to include new
columns showing the number of PCT tables and the number of fresh and stale PCT
regions?
Options:
1.

DBA_MVIEW_DETAIL_PARTITION

2.

DBA_MVIEW_DETAIL_RELATIONS

3.

DBA_MVIEW_DETAIL_SUBPARTITION

4.

DBA_MVIEWS

Answer
The USER/ALL/DBA_MVIEWS catalog view has been extended to include new
columns showing the number of PCT tables and the number of fresh and stale PCT
regions.
Option 1 is incorrect. The new USER/ALL/DBA_MVIEW_DETAIL_PARTITION
catalog view for PCT partition has been created to describe the freshness of each
PCT partition.
Option 2 is incorrect. The USER/ALL/DBA_MVIEW_DETAIL_RELATIONS catalog
view has been extended to include new columns that indicate whether the detail
table is PCT-enabled.
Option 3 is incorrect. The new USER/ALL/DBA_MVIEW_DETAIL_SUBPARTITION
catalog view for PCT subpartition has been created to describe the freshness of
each PCT subpartition.
Option 4 is correct. The USER/ALL/DBA_MVIEWS catalog view has been
extended to include the new columns NUM_PCT_TABLES,
NUM_FRESH_PCT_REGIONS, and NUM_STALE_PCT_REGIONS.

2. Using query rewrite enhancements


When base tables contain large amounts of data, it is expensive and time consuming to
compute the required aggregates or to compute joins between these tables. Because

materialized views contain already precomputed aggregates and joins, you can use the
process called query rewrite to quickly answer the query using materialized views.
One of the major benefits of creating and maintaining materialized views is the ability to
take advantage of query rewrite. This transforms a SQL statement expressed in terms of
tables or views into a statement accessing one or more materialized views that are
defined in the detail tables.
The transformation is transparent to the end user or application, requiring no intervention
and no reference to the materialized view in the SQL statement. Because query rewrite is
transparent, materialized views can be added or dropped just like indexes without
invalidating the SQL in the application code.
A query undergoes several checks to determine whether it is a candidate for query
rewrite. If the query fails any of the checks, then the query is applied to the detail tables
rather than the materialized view. This can be costly in terms of response time and
processing power.
The optimizer uses two different methods to recognize when to rewrite a query in terms of
a materialized view:

matching

comparing
matching
In the matching method, the optimizer matches the SQL text of the query with the SQL text
of the materialized view definition.
comparing
If the optimizer fails to match the SQL text of the query with the SQL text of the MV
definition, it uses the more general method in which it compares joins, selections, data
columns, grouping columns, and aggregate functions between the query and materialized
views.
Dimensions, constraints, and rewrite integrity levels affect whether or not a given query is
rewritten to use one or more materialized views. Additionally, query rewrite can be
enabled or disabled by REWRITE and NOREWRITE hints, and the
QUERY_REWRITE_ENABLED session parameter.
The DBMS_MVIEW.EXPLAIN_REWRITE procedure advises whether query rewrite is
possible on a query and, if so, which materialized views will be used. It also explains why
a query cannot be rewritten.
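A sketch of calling DBMS_MVIEW.EXPLAIN_REWRITE from SQL*Plus follows. It assumes
the REWRITE_TABLE output table has been created from the utlxrw.sql script shipped
with the database, and uses the SH sample materialized view; the statement ID is
arbitrary:

```sql
-- Create the REWRITE_TABLE output table once
@?/rdbms/admin/utlxrw.sql

BEGIN
  DBMS_MVIEW.EXPLAIN_REWRITE(
    query        => 'SELECT t.week_ending_day, p.prod_subcategory,
                            SUM(s.amount_sold)
                     FROM   sh.sales s, sh.times t, sh.products p
                     WHERE  s.time_id = t.time_id
                     AND    s.prod_id = p.prod_id
                     GROUP BY t.week_ending_day, p.prod_subcategory',
    mv           => 'SH.FWEEK_PSCAT_SALES_MV',
    statement_id => 'stmt1');
END;
/

SELECT message FROM rewrite_table WHERE statement_id = 'stmt1';
```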
A query is rewritten only when a certain number of conditions are met. Query rewrite must
be enabled for the session. And a materialized view must be enabled for query rewrite.

In addition, the rewrite integrity level should allow the use of the materialized view. For
example, if a materialized view is not fresh and query rewrite integrity is set to ENFORCED,
then the materialized view is not used.
And either all or part of the results requested by the query must be obtainable from the
precomputed result stored in the materialized view or views.
To test these conditions, the optimizer may depend on some of the data relationships
declared by the user through constraints and dimensions, such as hierarchies,
referential integrity, and uniqueness of key data.
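The first conditions can be sketched as session and object settings like these; the
integrity value and materialized view name are illustrative:

```sql
-- Allow query rewrite in this session, and tolerate stale
-- materialized view data when rewriting
ALTER SESSION SET QUERY_REWRITE_ENABLED = TRUE;
ALTER SESSION SET QUERY_REWRITE_INTEGRITY = STALE_TOLERATED;

-- The materialized view itself must also be rewrite-enabled
ALTER MATERIALIZED VIEW sh.fweek_pscat_sales_mv ENABLE QUERY REWRITE;
```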
Query rewrite is available only with the cost-based optimizer. The Oracle database
optimizes the input query with and without rewrite, and selects the least costly alternative.
The optimizer rewrites a query by rewriting one or more query blocks, one at a time.
If the rewrite logic has a choice between multiple materialized views to rewrite a query
block, it selects the one that can result in reading the least amount of data.
After a materialized view has been picked for a rewrite, the optimizer performs the rewrite
and then tests whether the rewritten query can be rewritten further with another
materialized view. This can be the case only when nested materialized views exist.
This process continues until no further rewrites are possible. Query rewrite is attempted
recursively to take advantage of nested materialized views.
Query rewrite operates on queries and subqueries in the following types of SQL
statements:

SELECT

CREATE TABLE ... AS SELECT

INSERT INTO ... SELECT


Query rewrite operates on subqueries in DML statements INSERT, DELETE, and
UPDATE. It also operates on subqueries in set operators UNION, UNION ALL,
INTERSECT, and MINUS.
In Oracle Database 10g, general query rewrite was supported when the user's query
contained an inline view, or a subquery in the FROM list. Query rewrite matched inline
views in the materialized view with inline views in the request query when the text of the
two inline views exactly matches. In that case, Oracle Database 10g treated the matching
inline view as it would a named view, and general rewrite processing was possible.
The query rewrite in Oracle Database 11g supports queries containing inline views. More
queries are now eligible for query rewrite, thus improving system throughput and
performance.

Oracle Database 11g supports query rewrite with inline views when

the text from the inline views in the materialized view exactly matches the text in the request query
the request query contains inline views that are equivalent to the inline views in the materialized
view
Two inline views are considered equivalent when

the SELECT lists and GROUP BY lists are equivalent

the FROM clauses contain the same or equivalent objects

the join graphs including all the selections in the WHERE clauses are equivalent

the HAVING clauses are equivalent


This example displays a materialized view that contains an inline view.
CREATE MATERIALIZED VIEW SUM_SALES_MV
ENABLE QUERY REWRITE AS
SELECT mv_iv.prod_id, mv_iv.cust_id,
sum(mv_iv.amount_sold) sum_amount_sold
FROM (SELECT prod_id, cust_id, amount_sold
FROM sales, products
WHERE sales.prod_id = products.prod_id) MV_IV
GROUP BY mv_iv.prod_id, mv_iv.cust_id;
In this example the query has an inline view the text of which matches exactly that of the
materialized view's inline view. Therefore, the query inline view will be internally replaced
with the materialized view's inline view so that the query can be rewritten.
-- The text of the IV matches exactly the text of the
-- MV; therefore, the query is rewritten with the MV
SELECT iv.prod_id, iv.cust_id,
       SUM(iv.amount_sold) sum_amount_sold
FROM  (SELECT prod_id, cust_id, amount_sold
       FROM sales, products
       WHERE sales.prod_id = products.prod_id) IV
GROUP BY iv.prod_id, iv.cust_id;
This example displays a materialized view that contains an inline view. In this case the
inline view does not have an exact text match with the inline view in the preceding
materialized view. Note that the join predicate in the query inline view is switched.
Even though this query does not textually match with that of the materialized view's inline
view, query rewrite will identify the query's inline view as equivalent to the materialized
view's inline view. As before, the query inline view will be internally replaced with the
materialized view's inline view so that the query can be rewritten.

In this code the earlier query is rewritten with SUM_SALES_MV.


CREATE MATERIALIZED VIEW SUM_SALES_MV
ENABLE QUERY REWRITE AS
SELECT mv_iv.prod_id, mv_iv.cust_id,
sum(mv_iv.amount_sold) sum_amount_sold
FROM (SELECT prod_id, cust_id, amount_sold
FROM sales, products
WHERE sales.prod_id = products.prod_id) MV_IV
GROUP BY mv_iv.prod_id, mv_iv.cust_id;
This example displays a query that has an equivalent inline view to the inline view found
in the materialized view example, but their texts do not match. It receives the same object
number and rewrite takes place.

-- The text of the IV doesn't match the text of the MV;
-- however, they are equivalent
SELECT iv.prod_id, iv.cust_id,
       SUM(iv.amount_sold) sum_amount_sold
FROM  (SELECT prod_id, cust_id, amount_sold
       FROM products, sales
       WHERE sales.prod_id = products.prod_id) IV
GROUP BY iv.prod_id, iv.cust_id;
These query examples with the matching and equivalent inline view texts to that of the
MV are first transformed as in this example.
Next the query inline view for both the examples will be internally replaced with the
materialized view's inline view so that the query can be rewritten.
SELECT iv.prod_id, iv.cust_id,
       SUM(iv.amount_sold) sum_amount_sold
FROM  (SELECT prod_id, cust_id, amount_sold
       FROM products, sales
       WHERE sales.prod_id = products.prod_id) IV
GROUP BY iv.prod_id, iv.cust_id;

Because query rewrite occurs transparently, it is not always evident that it has taken
place. The rewritten statement is not stored in the V$SQL view, nor can it be dumped in a
trace file. Of course, if the query runs faster, rewrite should have occurred, but that is not
proof.
There are two ways to confirm that the query rewrite has occurred:

use the EXPLAIN PLAN statement and check whether the OBJECT_NAME column contains the
name of a materialized view

use the DBMS_MVIEW.EXPLAIN_REWRITE procedure to see whether a query will be rewritten or
not
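The EXPLAIN PLAN check amounts to scanning the plan rows for the materialized view's name in the OBJECT_NAME column. A minimal sketch of that check, in Python rather than SQL (the plan rows shown are illustrative, not real PLAN_TABLE output; assume the rows have already been fetched into dictionaries):

```python
# Illustrative rows fetched from PLAN_TABLE after EXPLAIN PLAN FOR <query>.
# A rewritten query shows the materialized view as the accessed object.
plan_rows = [
    {"OPERATION": "SELECT STATEMENT", "OBJECT_NAME": None},
    {"OPERATION": "MAT_VIEW REWRITE ACCESS FULL", "OBJECT_NAME": "SUM_SALES_MV"},
]

def was_rewritten(rows, mv_name):
    """Return True if any plan row references the materialized view."""
    return any(r["OBJECT_NAME"] == mv_name for r in rows)

print(was_rewritten(plan_rows, "SUM_SALES_MV"))  # True
```

The same scan against a plan that never names the MV returns False, which is the "rewrite did not occur" case.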
Oracle Database 11g extends the current query rewrite capability. Query rewrite can now
rewrite queries that reference remote objects, such as tables, by using MVs that
themselves reference the remote objects.
The MV must be present at the site where the query is issued. Because any remote table
update cannot be propagated to the local site simultaneously, query rewrite will only work
in the stale_tolerated mode.

Note
Because the constraint information of the remote tables is not available at the
remote site, query rewrite will not make use of any constraint information.
Whenever a query contains columns that are not found in the MV, a join back is used to
rewrite the query. If the join back table is not found at the local site, query rewrite will not
take place.
This reduces or eliminates network round trips for the data, which are a costly
operation.
The materialized view in this example is present at the local site, but it references tables
that are all found at the remote site.
CREATE MATERIALIZED VIEW sum_sales_prod_week_mv
ENABLE QUERY REWRITE AS
SELECT p.prod_id, t.week_ending_day, s.cust_id,
SUM(s.amount_sold) AS sum_amount_sold
FROM sales@remotedbl s, products@remotedbl p, times@remotedbl t
WHERE s.time_id=t.time_id AND s.prod_id=p.prod_id
GROUP BY p.prod_id, t.week_ending_day, s.cust_id;
The query in this example contains tables that are found at a single remote site.

SELECT p.prod_id, t.week_ending_day, s.cust_id,
       SUM(s.amount_sold) AS sum_amount_sold
FROM sales@remotedbl s, products@remotedbl p,
     times@remotedbl t
WHERE s.time_id=t.time_id AND s.prod_id=p.prod_id
GROUP BY p.prod_id, t.week_ending_day, s.cust_id;
Even though the query references remote tables, it will be rewritten using the materialized
view as in this example.
SELECT prod_id, week_ending_day, cust_id, sum_amount_sold
FROM sum_sales_prod_week_mv;

Question
Which statements accurately describe the cost-based query rewrite process?
Options:
1. After the initial rewrite, the optimizer tests if the query can be rewritten further
2. It is not compatible with nested materialized views
3. It is only available with the cost-based optimizer
4. The optimizer rewrites one or more query blocks, several at a time

Answer
The cost-based query rewrite process is only available with the cost-based
optimizer which tests if the query can be rewritten further after the initial rewrite.
Option 1 is correct. After a materialized view has been picked for a rewrite, the
optimizer performs the rewrite and then tests whether the rewritten query can be
rewritten further with another materialized view. This process continues until no
further rewrites are possible.
Option 2 is incorrect. Query rewrite is attempted recursively to take advantage of
nested materialized views.
Option 3 is correct. Query rewrite is available only with the cost-based optimizer.
The Oracle database optimizes the input query with and without rewrite, and
selects the least costly alternative.
Option 4 is incorrect. The optimizer rewrites a query by rewriting one or more
query blocks one at a time and not several at a time.

Summary

Materialized views can be used to summarize, compute, replicate, and distribute data.
Oracle Database 11g introduces both new and enhanced catalog views for materialized
views that enable you to track the materialized view freshness. Summaries improve query
execution times.
When base tables contain large amounts of data, you can save time by using a process
called query rewrite to quickly answer a query using materialized views. The query rewrite
in Oracle Database 11g supports queries containing inline views.

Developing Enhanced Triggers


Learning objective

After completing this topic, you should be able to identify the steps for creating and
enabling a compound trigger and control its firing order using new trigger clauses.

1. Understanding compound triggers


Disclaimer
Although certain aspects of the Oracle 11g Database are case and spacing insensitive, a
common coding convention has been used throughout all aspects of this course.
This convention uses lowercase characters for schema, role, user, and constraint names,
and for permissions, synonyms, and table names (with the exception of the DUAL table).
Lowercase characters are also used for column names and user-defined procedure,
function, and variable names shown in code.
Uppercase characters are used for Oracle keywords and functions, for view, table,
schema, and column names shown in text, for column aliases that are not shown in
quotes, for packages, and for data dictionary views.
The spacing convention requires one space after a comma and one space before and
after operators that are not Oracle-specific, such as +, -, /, and <. There should be no
space between an Oracle-specific keyword or operator and an opening bracket, between
a closing bracket and a comma, between the last part of a statement and the closing
semicolon, or before a statement.
String literals in single quotes are an exception to all of the convention rules provided
here. Please use this convention for all interactive parts of this course.
End of Disclaimer
Starting with Oracle Database 11g, you can use a compound trigger. With a compound
trigger, you create a single trigger on a table that allows you to specify actions for each of
the four triggering timing points.


The compound trigger body supports a common PL/SQL state that the code for each
timing point can access. The common state is automatically destroyed when the firing
statement completes, even when the firing statement causes an error.
Your applications can avoid the mutating table error by allowing rows destined for a
second table, such as a history table or an audit table, to accumulate and then bulk-inserting them.
Before Oracle Database 11g, Release 1 (11.1), you needed to model the common state
with an ancillary package. This approach was both cumbersome to program and subject
to memory leaks when the firing statement caused an error and the after-statement
trigger did not fire.
Compound triggers make PL/SQL easier to use, and improve run-time performance and
scalability.
A compound trigger is a single trigger, which can be used on a table or view. It allows you
to specify actions for each of the four triggering timing points:

before the firing statement

before each row that the firing statement affects

after each row that the firing statement affects

after the firing statement


If multiple compound triggers are specified on a table, all BEFORE statement sections will
be executed at the BEFORE statement timing point, BEFORE EACH ROW sections will be
executed at the BEFORE EACH ROW timing point, and so forth.
If trigger execution order has been specified using the FOLLOWS clause, order of
execution of the compound trigger sections will be determined by the FOLLOWS clause. If
FOLLOWS is specified only for some triggers but not all triggers, the order of execution of
triggers is guaranteed only for those that are related using the FOLLOWS clause.
For tables, the code for the compound trigger is
CREATE OR REPLACE TRIGGER schema.trigger
FOR dml_event_clause ON schema.table
COMPOUND TRIGGER
-- Initial section
-- Declarations
-- Subprograms
-- Optional section

BEFORE STATEMENT IS ...;


-- Optional section
BEFORE EACH ROW IS ...;
-- Optional section
AFTER EACH ROW IS ...;
-- Optional section
AFTER STATEMENT IS ...;
For tables, the compound trigger structure has two main sections. The two main sections
are the

initial section

optional section
initial section
The initial section declares the variables and subprograms. The code in this section
executes before any of the code in the optional section.
The code for the initial section is
-- Initial section
-- Declarations
-- Subprograms
optional section
The optional section defines the code for each possible trigger point. Depending on
whether you are defining a compound trigger for a table or for a view, these triggering
points are different and in a specific order.
The code for the optional section is
-- Optional section
BEFORE STATEMENT IS ...;
-- Optional section
BEFORE EACH ROW IS ...;
-- Optional section
AFTER EACH ROW IS ...;
-- Optional section
AFTER STATEMENT IS ...;
With views, an INSTEAD OF EACH ROW clause takes the place of the BEFORE EACH
ROW and AFTER EACH ROW clauses.
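The interplay of the four timing points and the common state they share can be pictured outside the database. This is a hypothetical simulation in Python, not Oracle syntax: the "trigger" is an object whose methods are called at the same points at which Oracle would run the corresponding sections, and its state lives only for one firing statement.

```python
class CompoundTriggerSim:
    """One trigger object, four timing points, sharing common state
    that is re-created for each firing statement."""

    def before_statement(self):
        self.rows_seen = 0            # common state, fresh per statement

    def before_each_row(self, row):
        pass                          # e.g. validate or adjust :NEW here

    def after_each_row(self, row):
        self.rows_seen += 1           # accumulate per-row work

    def after_statement(self):
        return self.rows_seen         # e.g. flush accumulated work here


def fire(trigger, affected_rows):
    """Drive the trigger the way a single DML statement would."""
    trigger.before_statement()
    for row in affected_rows:
        trigger.before_each_row(row)
        # ... the row change itself happens here ...
        trigger.after_each_row(row)
    return trigger.after_statement()


print(fire(CompoundTriggerSim(), ["r1", "r2", "r3"]))  # 3
```

Firing the same object again starts from a clean state, mirroring how the common PL/SQL state is destroyed when the firing statement completes.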

A compound trigger must be implemented in PL/SQL. It must be either a PL/SQL block or
a PL/SQL procedure. It cannot be a C or Java procedure and it cannot call C or Java
procedures. The optional section cannot be enclosed in a PL/SQL block.
If a compound trigger has the INSTEAD OF EACH ROW section, that must be its only
section.
Any section can include the Inserting(), Updating(), Deleting(), and
Applying() functions.
The triggering statement of a compound trigger must be a DML statement.
If the triggering statement affects no rows, and the compound trigger has neither a
BEFORE STATEMENT section nor an AFTER STATEMENT section, the trigger never fires.
And if the triggering statement of a compound trigger is within a FORALL statement, each
execution of the triggering statement fires the compound trigger anew.
You specify the FOR clause if you are creating a compound trigger. You cannot specify the
FOR EACH ROW clause for a compound trigger.
A compound trigger must be associated with either a table or a view.
The initial section cannot include PRAGMA AUTONOMOUS_TRANSACTION.
A compound trigger body cannot have an initialization block. Therefore, it cannot have an
exception section. This is not a problem, however, because the BEFORE STATEMENT
section is always executed exactly once before any other optional sections are executed.
An exception that occurs in one section must be handled in that section. You cannot
transfer control to another section. If a section includes a GOTO statement, the target of
the GOTO statement must be in the same section.
When you are using a compound trigger, :OLD, :NEW, and :PARENT cannot appear in
the initial section, the BEFORE STATEMENT section, or the AFTER STATEMENT section.
Only the BEFORE EACH ROW section can change the value of :NEW.
If, after the compound trigger fires, the triggering statement rolls back due to a DML
exception, then

local variables declared in the compound trigger sections are reinitialized, and any values
computed thus far are lost

the side effects from firing the compound trigger are not rolled back
The firing order of compound triggers is not guaranteed. Their firing can be interleaved
with the firing of conventional triggers.

Compound triggers can be ordered using the FOLLOWS option; however, the ordering is
ignored if the target of FOLLOWS does not contain the corresponding section as source code.
A compound trigger is used to populate an audit table for any inserted rows in the
ORDERS table, and for updated rows when the update changes a value in the
ORDER_TOTAL column. It records the value of the order total, the time
stamp of when the change was made, and the user ID.
It is also used to bulk insert records into the audit table to improve performance.
The example tracks changes on the ORDER_TOTAL column in the ORDERS table to an
audit table. You can assume that a single UPDATE statement updates many rows.
Before creating the trigger, you should have three object definitions:

a sequence

the BEFORE INSERT trigger

the history audit table


a sequence
The first object definition that you should have is a sequence.
The code for the sequence is
CREATE SEQUENCE ordertotals_audit_seq START WITH 2500;
the BEFORE INSERT trigger
Another object definition that you should have is the BEFORE INSERT trigger on the
ORDERS table that is used to generate a new order ID.
The code for the BEFORE INSERT trigger is
CREATE OR REPLACE TRIGGER gen_ordertotals_audit_id_trg
BEFORE INSERT ON orders FOR EACH ROW
BEGIN
:NEW.order_id := ordertotals_audit_seq.NEXTVAL;
END gen_ordertotals_audit_id_trg;
the history audit table
The final object definition that you should have is the history audit table, which is a detail
table for the ORDERS table with a composite primary key of ORDER_ID and CHANGE_DATE.
The purpose of this table is to track changes to the ORDER_TOTAL column in the ORDERS
table. This table will capture both the old order total for order updates and the new
order total for order updates and order inserts.

The code for the history audit table is


CREATE TABLE ordertotals_audit(
order_id NUMBER NOT NULL,
change_date DATE NOT NULL,
user_id VARCHAR2(30),
old_total NUMBER(8, 2) NOT NULL,
new_total NUMBER(8, 2) NOT NULL,
CONSTRAINT order_total_PK
PRIMARY KEY (order_id, change_date),
CONSTRAINT orders_FK
FOREIGN KEY (order_id)
REFERENCES orders(order_id)
ON DELETE CASCADE);
In this example, the trigger fires on an INSERT or UPDATE of the ORDER_TOTAL column
in the ORDERS table. A THRESHOLD constant value is set to 7.
A collection called O_TOTALS is defined to hold the values of the order ID and order totals
being changed in the ORDERS table. The date of the change and user are also stored in
this record structure.
When a row is inserted into the ORDERS table or the ORDER_TOTAL column is updated in
the ORDERS table, the BEFORE STATEMENT code runs, empties the O_TOTALS
collection, and initializes an index to zero.
CREATE OR REPLACE TRIGGER maintain_ordertotals_audit_trg
FOR INSERT OR UPDATE OF order_total ON orders
COMPOUND TRIGGER
--Initial section begins
--Declarations
threshold CONSTANT SIMPLE_INTEGER := 7;
TYPE order_totals_t IS TABLE OF ordertotals_audit%rowtype
INDEX BY PLS_INTEGER;
o_totals order_totals_t;
idx SIMPLE_INTEGER := 0;
-- subprogram
PROCEDURE Flush_Array IS
n CONSTANT SIMPLE_INTEGER := o_totals.Count();
BEGIN
FORALL j IN 1..n
INSERT INTO ordertotals_audit VALUES o_totals(j);
o_totals.Delete();
idx := 0;

Supplement
Selecting the link title opens the resource in a new browser window.

Launch window
View the complete output of the compound trigger.
Your session is set up with the following settings. They enable all compiler warnings and a level of optimization that allows inlining for your session.
ALTER SESSION SET PLSQL_Warnings = 'enable:all';
ALTER SESSION SET PLSQL_Optimize_Level = 3;
ALTER SESSION SET PLSQL_Code_Type = native;
In this example, for each row that is either inserted or updated in the ORDERS table, the
AFTER EACH ROW code runs and builds the O_TOTALS collection so that it holds the
values of the order ID and order total being changed in the ORDERS table.
When the number of records in the O_TOTALS collection reaches 7, the FLUSH_ARRAY
subroutine is called.
The FLUSH_ARRAY subroutine performs a bulk insert into the ORDERTOTALS_AUDIT
table. Not more than 7 records are bulk-inserted into the table because this is what the
threshold is set to.
CREATE OR REPLACE TRIGGER maintain_ordertotals_audit_trg
FOR INSERT OR UPDATE OF order_total ON orders
COMPOUND TRIGGER
--Initial section begins
--Declarations
threshold CONSTANT SIMPLE_INTEGER := 7;
TYPE order_totals_t IS TABLE OF ordertotals_audit%rowtype
INDEX BY PLS_INTEGER;
o_totals order_totals_t;
idx SIMPLE_INTEGER := 0;
-- subprogram
PROCEDURE Flush_Array IS
n CONSTANT SIMPLE_INTEGER := o_totals.Count();
BEGIN
FORALL j IN 1..n
INSERT INTO ordertotals_audit VALUES o_totals(j);
o_totals.Delete();
idx := 0;
Finally, the AFTER STATEMENT code runs and calls the FLUSH_ARRAY subroutine.

If fewer than 7 records are modified in the ORDERS table, this code ensures
that these records are still recorded in the ORDERTOTALS_AUDIT table.
-- Optional section
BEFORE STATEMENT IS
BEGIN
o_totals.Delete();
idx := 0;
END BEFORE STATEMENT;
AFTER EACH ROW IS
BEGIN
idx := idx + 1;
o_totals(idx).order_ID := :New.order_ID;
o_totals(idx).Change_Date := SYSDATE();
o_totals(idx).user_id := sys_context('userenv',
'session_user');
o_totals(idx).old_total := :OLD.order_total;
o_totals(idx).new_total := :NEW.order_total;
IF idx >= Threshold THEN -- PLW-06005: inlining... done
Flush_Array();
END IF;
END AFTER EACH ROW;
AFTER STATEMENT IS

Supplement
Selecting the link title opens the resource in a new browser window.

Launch window
View the complete output of the AFTER STATEMENT code.
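The bulk-insert pattern the trigger implements — buffer each row, flush whenever the buffer reaches the threshold, and flush the remainder at statement end — can be sketched outside the database. This is an illustrative Python model, not Oracle code: a list stands in for the ORDERTOTALS_AUDIT table and another for the O_TOTALS collection.

```python
THRESHOLD = 7
audit_table = []   # stands in for the ORDERTOTALS_AUDIT table
buffer = []        # stands in for the O_TOTALS collection

def flush_array():
    """FORALL-style bulk insert: move the whole buffer in one step."""
    audit_table.extend(buffer)
    buffer.clear()

def after_each_row(change):
    buffer.append(change)          # AFTER EACH ROW section buffers the row
    if len(buffer) >= THRESHOLD:
        flush_array()

def after_statement():
    flush_array()                  # AFTER STATEMENT flushes the remainder

# A statement touching 10 rows: one bulk flush of 7, then a final flush of 3.
for i in range(10):
    after_each_row({"order_id": 2400 + i})
after_statement()
print(len(audit_table), len(buffer))  # 10 0
```

The point of the pattern is that no row is ever inserted individually: each flush moves a whole batch, which is what the FORALL statement achieves in the PL/SQL version.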
Because the session settings are enabled for inlining and viewing all compiler messages,
when creating this trigger, you are able to view the message that inlining has been
performed.
In SQL*Plus, you first issue the SHOW ERRORS command. The trigger is successfully
compiled, so it generates only an informational warning.
SP2-0814: Trigger created with compilation warnings

SHOW ERRORS
Errors for TRIGGER MAINTAIN_ORDERTOTALS_AUDIT:

LINE/COL ERROR
-------- -----------------------------------------------
32/7     PLW-06005: inlining of call of procedure
         'FLUSH_ARRAY' was done
39/5     PLW-06005: inlining of call of procedure
         'FLUSH_ARRAY' was done

You execute this statement to force the trigger to fire.


UPDATE orders
SET order_total = order_total * 1.05
WHERE order_status = 10;
Next, you examine the results in the audit table. In this example, all orders with an order
status of 10 are charged a 5% surcharge. An UPDATE statement is executed to accomplish
this.
The order total values for rows in the ORDERS table are changed. This results in six rows
being recorded into the ORDERTOTALS_AUDIT table.
SELECT * FROM ordertotals_audit;
ORDER_ID CHANGE_DATE USER_ID  OLD_TOTAL NEW_TOTAL
-------- ----------- -------- --------- ---------
    2432 27-JUL-07   OE1          10523  11049.15
    2433 27-JUL-07   OE1             78      81.9
    2367 27-JUL-07   OE1       144054.8 151257.54
    2368 27-JUL-07   OE1          60065  63068.25
    2386 27-JUL-07   OE1        21116.9  22172.75
    2412 27-JUL-07   OE1          66816     67140

In Oracle Database 11g, the CREATE TRIGGER statement now includes three clauses that
give you more control over triggers:

DISABLE

ENABLE

FOLLOWS
DISABLE
The DISABLE clause enables you to create a trigger in a disabled state so that you can
ensure that your code compiles successfully before you enable the trigger.
ENABLE
The ENABLE clause enables the trigger.
FOLLOWS
The FOLLOWS clause enables you to specify that the trigger you are creating fires after
certain other triggers.

Question
Which statements accurately describe the use of a compound trigger?
Options:
1. An exception that occurs in one section of a compound trigger must be handled in that section
2. It can be implemented as a Java procedure
3. It can contain an exception section
4. The firing order of compound triggers is not guaranteed

Answer
When using a compound trigger, an exception that occurs in one section of the
compound trigger must be handled in that section. And the firing order of
compound triggers is not guaranteed.
Option 1 is correct. An exception that occurs in one section of a compound trigger
must be handled in that section. You cannot transfer control to another section.
Option 2 is incorrect. A compound trigger must be implemented in PL/SQL. It must
be either a PL/SQL block or a PL/SQL procedure. It cannot be a C or Java
procedure or call such a procedure.
Option 3 is incorrect. A compound trigger body cannot have an initialization block.
Therefore, it cannot have an exception section.
Option 4 is correct. The firing order of compound triggers is not guaranteed. Their
firing can be interleaved with the firing of conventional triggers.

Question
What happens if, after a compound trigger is fired, the triggering statement rolls
back due to a DML exception?
Options:
1. Any values that have been computed are lost
2. Local variables declared in the compound trigger sections are reinitialized
3. Side effects from firing the compound trigger are rolled back
4. The trigger fires again

Answer

If the triggering statement rolls back due to a DML exception after a compound
trigger is fired, any values that have been computed are lost. And local variables
declared in the compound trigger sections are reinitialized.
Option 1 is correct. If, after the compound trigger is fired, the triggering statement
rolls back due to a DML exception, any values that have been computed are lost.
Option 2 is correct. Once a compound trigger has been fired, if the triggering
statement rolls back due to a DML exception, local variables declared in the
compound trigger sections are reinitialized.
Option 3 is incorrect. In this situation, the side effects from firing the compound
trigger would not be rolled back.
Option 4 is incorrect. This situation would not cause the trigger to fire again.
However, if the triggering statement of a compound trigger is within a FORALL
statement, each execution of the triggering statement would fire the trigger again.

2. Creating a disabled trigger


Starting with Oracle Database 11g, you can create a trigger in a disabled mode and later
enable it with the ALTER TRIGGER statement. This gives you greater flexibility in that you
can create a trigger and not use it until the data is ready.
ALTER TRIGGER gen_cust_id ENABLE;
Prior to Oracle Database 11g, you had the ability to compile a valid trigger and then
disable it. You never had the ability to compile a trigger before all of the underlying data
structures existed. If you created a trigger whose body had a PL/SQL compilation error,
DML to the table would fail.
This new feature, creating a disabled trigger, enhances the development process by
removing the constraint that prevented developers from staging triggers before all the
data structures were in place with proper access.
ORA-04098: trigger 'TRG' is invalid and failed re-validation
It is safer to create the trigger as disabled, and then enable it only when you know it
will compile without an error.
CREATE OR REPLACE TRIGGER gen_cust_id
BEFORE INSERT ON customers FOR EACH ROW
DISABLE
BEGIN
:NEW.customer_id := customer_seq.Nextval;
END;

To ensure that a trigger fires after certain other triggers defined on the same object, you
can use the FOLLOWS clause when you create the first trigger.
If two or more triggers are defined with the same timing point, and the order in which they
fire is important, you can control the firing order using the FOLLOWS clause. Without the
FOLLOWS clause, you are not guaranteed a firing order when two or more triggers of the
same type are created on an object.
If trigger execution order is specified by using the FOLLOWS clause, the order of execution
of compound trigger sections is determined by the FOLLOWS clause.
If FOLLOWS is specified only for some triggers but not all triggers, the order of execution
of triggers is guaranteed only for those that are related using the FOLLOWS clause.
The FOLLOWS clause applies to both compound and simple triggers. It enables you to
order the executions of multiple triggers relative to each other.
It can be placed in the definition of a simple trigger with a compound trigger target.
Alternatively, it can be placed in the definition of a compound trigger with a simple trigger
target.
It applies only to the section of the compound trigger with the same timing point as the
simple trigger. If the compound trigger has no such timing point, FOLLOWS is quietly
ignored.
When defining triggers that contain the FOLLOWS clause, the specified triggers must
already exist, they must be defined on the same table as the trigger being created, and
they must have been successfully compiled. You do not need to have them enabled.

Note
If it is practical, you should consider replacing the set of individual triggers for a
particular timing point with a single compound trigger that explicitly codes the
actions in the order you intend.
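The guarantee FOLLOWS gives can be pictured as a set of ordering constraints: any firing order is acceptable as long as each trigger fires after the trigger it follows, and unconstrained triggers may land anywhere. A small Python sketch of that rule (the trigger names here are invented for illustration):

```python
def order_ok(firing_order, follows):
    """follows maps a trigger to the trigger it must fire after;
    triggers without a FOLLOWS entry may fire in any position."""
    pos = {t: i for i, t in enumerate(firing_order)}
    return all(pos[t] > pos[target] for t, target in follows.items())

# Hypothetical setup: change_product FOLLOWS compute_total.
follows = {"change_product": "compute_total"}

print(order_ok(["compute_total", "change_product", "other_trg"], follows))  # True
print(order_ok(["change_product", "compute_total"], follows))               # False
```

Note that "other_trg" may appear before, between, or after the related pair: only triggers related through FOLLOWS have a guaranteed relative order.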
For example, suppose two AFTER UPDATE ... FOR EACH ROW triggers are defined on the
same table. One trigger needs to reference the :OLD value and the other trigger needs to
change it.
In this case, you can use the FOLLOWS clause to order the firing sequence.
You specify FOLLOWS to indicate that the trigger being created should fire after the
specified triggers.

CREATE OR REPLACE TRIGGER change_product


AFTER UPDATE of product_id ON order_items
FOR EACH ROW
FOLLOWS oe1.compute_total
BEGIN
dbms_output.put_line ('Do processing here');
END;
In this example, the COMPUTE_TOTAL trigger is created. Its purpose is to update an order
total whenever a price or quantity of a product in the ORDER_ITEMS table changes.
CREATE OR REPLACE TRIGGER compute_total
AFTER INSERT OR DELETE OR UPDATE OF unit_price, quantity
ON order_items FOR EACH ROW
BEGIN
IF UPDATING THEN
  UPDATE orders SET order_total = order_total
    - (:old.unit_price * :old.quantity)
    + (:new.quantity * :new.unit_price)
  WHERE order_id = :old.order_id;
ELSIF DELETING THEN
  UPDATE orders SET order_total = order_total
    - (:old.unit_price * :old.quantity)
  WHERE order_id = :old.order_id;
ELSE --inserting
  UPDATE orders
  SET order_total = order_total +
    (:new.quantity * :new.unit_price)
  WHERE order_id = :new.order_id;
END IF;
END;
To verify the data, you examine the output.
SELECT * FROM order_items
WHERE order_id = 2412;
ORDER_ID LINE_ITEM_ID PRODUCT_ID UNIT_PRICE QUANTITY
-------- ------------ ---------- ---------- --------
    2412            1       3106         46      170
    2412            2       3114         98       68
    2412            3       3123       71.5       68
    2412            4       3127        492       72
    2412            5       3134         18       75
    2412            6       3139         20       79
    2412            7       3143         16       80
    2412            8       3163         30       92
    2412            9       3167         54       94

9 rows selected.
In this example, the first statement changes the quantity of an item ordered. This causes
the COMPUTE_TOTAL trigger to fire and the order total is updated.
UPDATE order_items SET quantity = 100
WHERE order_id = 2412 AND line_item_id = 9;
1 row updated.
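The UPDATING branch of COMPUTE_TOTAL subtracts the old line amount and adds the new one. For this row (unit price 54, quantity changed from 94 to 100), the arithmetic can be checked directly; assuming the order total was 66816 before the update, as the audit output above records:

```python
old_total = 66816            # ORDER_TOTAL before the update (assumed)
unit_price = 54              # unit price of line item 9
old_qty, new_qty = 94, 100   # quantity before and after the UPDATE

# UPDATING branch: subtract the old line amount, add the new one.
new_total = old_total - unit_price * old_qty + unit_price * new_qty
print(new_total)  # 67140
```

This matches the ORDER_TOTAL of 67140 displayed in the next query.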
The value of the updated ORDER_TOTAL is displayed.
SELECT order_id, order_date,
customer_id, order_status, order_total
FROM orders WHERE customer_id = 170;
ORDER_ID ORDER_DAT CUSTOMER_ID ORDER_STATUS ORDER_TOTAL
-------- --------- ----------- ------------ -----------
    2412 29-MAR-04         170            9       67140
The value for a product ID is changed in the ORDER_ITEMS table. This causes the
CHANGE_PRODUCT trigger to fire. Because the CHANGE_PRODUCT trigger has the
FOLLOWS clause, it fires following the execution of the COMPUTE_TOTAL trigger.
UPDATE order_items SET product_id=3165
WHERE order_id = 2412 AND line_item_id = 8;
Do processing here...
1 row updated.

Question
Identify the correct statements regarding the use of the FOLLOWS clause with
triggers.
Options:
1. It applies only to the section of the compound trigger with the same timing point as the simple trigger
2. It applies to compound triggers only
3. It can be placed in the definition of a simple trigger with a compound trigger target
4. It cannot be placed in the definition of a compound trigger with a simple trigger target

Answer
The FOLLOWS clause applies only to the section of the compound trigger with the
same timing point as the simple trigger. And it can be placed in the definition of a
simple trigger with a compound trigger target.
Option 1 is correct. The FOLLOWS clause applies only to the section of the
compound trigger with the same timing point as the simple trigger. If the
compound trigger has no such timing point, the FOLLOWS clause is quietly
ignored.
Option 2 is incorrect. The FOLLOWS clause applies to both compound and simple
triggers.
Option 3 is correct. The FOLLOWS clause can be placed in the definition of a
simple trigger with a compound trigger target or in the definition of a compound
trigger with a simple trigger target.
Option 4 is incorrect. The FOLLOWS clause can be placed in the definition of a
compound trigger with a simple trigger target. It can also be placed in the
definition of a simple trigger with a compound trigger target.

3. Using the new trigger enhancements


Suppose you want to use the trigger enhancements to create a compound trigger using
the OE schema.
First, you alter your session so that you can view the compiler warnings.
Next, you create an audit table to store the information needed to track any changes in
the MIN_PRICE and LIST_PRICE columns in the PRODUCT_INFORMATION table. Whenever a
price is modified, you want to capture the date, user, product ID, old prices, and new prices.
CREATE TABLE productprice_audit(
product_id     NUMBER NOT NULL,
change_date    DATE NOT NULL,
user_id        VARCHAR2(30),
old_min_price  NUMBER,
new_min_price  NUMBER,
old_list_price NUMBER,
new_list_price NUMBER,
CONSTRAINT product_change_PK
PRIMARY KEY (product_id, change_date),
CONSTRAINT products_FK
FOREIGN KEY (product_id)
REFERENCES product_information(product_id)
ON DELETE CASCADE);
Next, you create the compound trigger. You name the trigger
MAINTAIN_PRICES_AUDIT_TRG. You build the trigger so that it checks for updates of
the MIN_PRICE and LIST_PRICE columns in the PRODUCT_INFORMATION table.
CREATE OR REPLACE TRIGGER maintain_prices_audit_trg
FOR UPDATE OF min_price, list_price ON product_information
COMPOUND TRIGGER
--Initial section begins
--Declarations
threshold CONSTANT SIMPLE_INTEGER := 7;
TYPE productprice_t IS TABLE OF productprice_audit%rowtype
INDEX BY PLS_INTEGER;
v_prices productprice_t;
idx SIMPLE_INTEGER := 0;
-- subprogram
PROCEDURE Flush_Array IS
n CONSTANT SIMPLE_INTEGER := v_prices.Count();
BEGIN

Supplement
Selecting the link title opens the resource in a new browser window.

Launch window
View the complete code for the MAINTAIN_PRICES_AUDIT_TRG trigger.
You then execute this query to analyze the data for supplier 102050.
SELECT product_id, supplier_id, min_price, list_price
FROM product_information
WHERE supplier_id = 102050;
Now suppose supplier 102050 is increasing its prices by 5%. You issue the UPDATE
statement to update the list prices and minimum prices for supplier 102050.
UPDATE product_information
SET list_price = list_price * 1.05,
min_price = min_price * 1.05
WHERE supplier_id = 102050;

You can refer to the Results table to verify that the data has changed.
SELECT product_id, supplier_id, min_price, list_price
FROM product_information
WHERE supplier_id = 102050;
Next, you examine the contents of the PRODUCTPRICE_AUDIT table.
SELECT * FROM productprice_audit;
Next you want to create a disabled trigger, and then execute a statement that would fire
the trigger if it were not disabled. Then you want to enable the trigger and observe the
results. This trigger uses the FOLLOWS clause to ensure that its firing order occurs after
the MAINTAIN_PRICES_AUDIT_TRG you created previously.
To do this, you create a trigger named INFORM_LIST_PRICE in a disabled state. This
trigger fires after the LIST_PRICE is updated and following
MAINTAIN_PRICES_AUDIT_TRG. You configure the trigger to display the message
"Warning - new list price is unknown for product: xyz," where xyz is the product number.
CREATE OR REPLACE TRIGGER inform_list_price
AFTER UPDATE OF list_price ON product_information
FOR EACH ROW
FOLLOWS maintain_prices_audit_trg
DISABLE
BEGIN
IF :new.list_price IS NULL then
dbms_output.put_line('Warning - new list price is unknown
for product: ' || :old.product_id);
END IF;
END inform_list_price;
Next, you execute the following UPDATE statement and observe whether your trigger is
fired.
UPDATE product_information
SET list_price = list_price * 1.05,
min_price = min_price * 1.05
WHERE supplier_id = 102050;
Next, you enable the trigger.
ALTER TRIGGER inform_list_price ENABLE;

You execute this UPDATE statement and observe whether your trigger is fired. You can
view any messages in the Script Output tab.
UPDATE product_information
SET list_price = list_price * 1.05,
min_price = min_price * 1.05
WHERE supplier_id = 102050;

Summary
A compound trigger enables you to create a single trigger on a table that allows you to
specify actions for each of the four triggering timing points. The DISABLE clause of the
CREATE TRIGGER statement enables you to create a trigger in a disabled state.
Creating a trigger as disabled is safer, because you can enable it only when you
know it will compile without an error. To ensure that a trigger fires after certain other
triggers defined on the same object, you use the FOLLOWS clause when you create the
first trigger.
You create a disabled trigger and then enable it. You can create a compound trigger and
build it such that it checks for specified updates.

Output of the compound trigger


CREATE OR REPLACE TRIGGER maintain_ordertotals_audit_trg
FOR INSERT OR UPDATE OF order_total ON orders
COMPOUND TRIGGER
--Initial section begins
--Declarations
threshhold CONSTANT SIMPLE_INTEGER := 7;
TYPE order_totals_t IS TABLE OF ordertotals_audit%rowtype
INDEX BY PLS_INTEGER;
o_totals order_totals_t;
idx SIMPLE_INTEGER := 0;
-- subprogram
PROCEDURE Flush_Array IS
n CONSTANT SIMPLE_INTEGER := o_totals.Count();
BEGIN
FORALL j IN 1..n
INSERT INTO ordertotals_audit VALUES o_totals(j);
o_totals.Delete();
idx := 0;
DBMS_Output.Put_Line('Flushed '||n||' rows');
END Flush_Array;
-- Initial section ends

Output of the AFTER STATEMENT code

-- Optional section
BEFORE STATEMENT IS
BEGIN
o_totals.Delete();
idx := 0;
END BEFORE STATEMENT;
AFTER EACH ROW IS
BEGIN
idx := idx + 1;
o_totals(idx).order_ID := :New.order_ID;
o_totals(idx).Change_Date := SYSDATE();
o_totals(idx).user_id := sys_context('userenv', 'session_user');
o_totals(idx).old_total := :OLD.order_total;
o_totals(idx).new_total := :NEW.order_total;
IF idx >= Threshhold THEN -- PLW-06005: inlining... done
Flush_Array();
END IF;
END AFTER EACH ROW;
AFTER STATEMENT IS
BEGIN
-- PLW-06005: inlining... done
Flush_Array();
END AFTER STATEMENT;
END maintain_ordertotals_audit_trg;

The MAINTAIN_PRICES_AUDIT_TRG trigger


CREATE OR REPLACE TRIGGER maintain_prices_audit_trg
FOR UPDATE OF min_price, list_price ON product_information
COMPOUND TRIGGER
--Initial section begins
--Declarations
threshold CONSTANT SIMPLE_INTEGER := 7;
TYPE productprice_t IS TABLE OF productprice_audit%rowtype
INDEX BY PLS_INTEGER;
v_prices productprice_t;
idx SIMPLE_INTEGER := 0;
-- subprogram
PROCEDURE Flush_Array IS
n CONSTANT SIMPLE_INTEGER := v_prices.Count();
BEGIN
FORALL j IN 1..n
INSERT INTO productprice_audit VALUES v_prices(j);
v_prices.Delete();
idx := 0;
DBMS_Output.Put_Line('Flushed '||n||' rows');
END Flush_Array;
-- Initial section ends
-- Optional section
BEFORE STATEMENT IS

BEGIN
v_prices.Delete();
idx := 0;
END BEFORE STATEMENT;
AFTER EACH ROW IS
BEGIN
idx := idx + 1;
v_prices(idx).product_id := :new.product_id;
v_prices(idx).change_date := SYSDATE();
v_prices(idx).user_id := sys_context('userenv',
'session_user');
v_prices(idx).old_min_price := :OLD.min_price;
v_prices(idx).new_min_price := :NEW.min_price;
v_prices(idx).old_list_price := :OLD.list_price;
v_prices(idx).new_list_price := :NEW.list_price;
IF idx >= threshold THEN -- PLW-06005: inlining... done
Flush_Array();
END IF;
END AFTER EACH ROW;
AFTER STATEMENT IS
BEGIN
-- PLW-06005: inlining... done
Flush_Array();
END AFTER STATEMENT;
END maintain_prices_audit_trg;

Implementing SecureFile LOBs


Learning objective

After completing this topic, you should be able to recognize the steps for implementing
SecureFile LOBs.

1. SecureFile LOBs
With SecureFile LOBs, the LOB data type is completely reengineered with dramatically
improved performance, manageability, and ease of application development.
This new implementation also offers advanced, next-generation functionality such as
intelligent compression and transparent encryption. This feature significantly strengthens
the native content management capabilities of Oracle Database.
SecureFile LOBs supplement the original BasicFile LOB implementation, which is identified by the BASICFILE keyword.

Starting with Oracle Database 11g, you have the option of using the new SecureFile
storage paradigm for LOBs.
You specify the new paradigm by using the SECUREFILE keyword in the CREATE TABLE statement. If that keyword is omitted, or the BASICFILE keyword is specified instead, the old BasicFile storage paradigm is used; this is the default behavior.
You can modify the init.ora file and change the default behavior for the storage of
LOBs by setting the DB_SECUREFILE initialization parameter. The values allowed are

ALWAYS

FORCE

PERMITTED

NEVER

IGNORE
ALWAYS
Setting the parameter to ALWAYS attempts to create all LOB files as SECUREFILES, but
creates any LOBs not in ASSM tablespaces as BASICFILE LOBs.
FORCE
Setting the parameter to FORCE means that all LOBs created in the system are created as
SECUREFILE LOBs.
PERMITTED
The PERMITTED parameter is the default. It allows SECUREFILES to be created when
specified with the SECUREFILE keyword in the CREATE TABLE statement.
NEVER
Setting the parameter to NEVER creates any LOBs that are specified as SECUREFILE
LOBs as BASICFILE LOBs.
IGNORE
Setting the parameter to IGNORE ignores the SECUREFILE keyword and all SECUREFILE
options.
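As an illustrative sketch (assuming you have ALTER SYSTEM privileges; the value shown is only an example), the parameter can also be changed at runtime rather than by editing init.ora:

```sql
-- Example only: prefer SecureFile storage for new LOBs
ALTER SYSTEM SET db_securefile = 'ALWAYS';

-- Verify the current setting (SQL*Plus)
SHOW PARAMETER db_securefile
```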
To create a column to hold a LOB that is a SecureFile, you need to

create a tablespace to hold the data


define a table that contains a LOB column data type that is used to store the data in the
SecureFile format
In this example, the code defines the sf_tbs1 tablespace. This tablespace stores the
LOB data in the SecureFile format.

When you define a column to hold SecureFile data, you must have Automatic Segment
Space Management (ASSM) enabled for the tablespace in order to support SecureFiles.
CREATE TABLESPACE sf_tbs1
DATAFILE 'sf_tbs1.dbf' SIZE 1500M REUSE
AUTOEXTEND ON NEXT 200M
MAXSIZE 3000M
SEGMENT SPACE MANAGEMENT AUTO;
In this example, the code creates the CUSTOMER_PROFILES table. The column
PROFILE_INFO will hold the LOB data in the SecureFile format because the storage
clause identifies the format.
CONNECT oe1/oe1@orcl
CREATE TABLE customer_profiles
(id NUMBER,
first_name VARCHAR2 (40),
last_name VARCHAR2 (80),
profile_info BLOB)
LOB(profile_info) STORE AS SECUREFILE
(TABLESPACE sf_tbs1);

Question
What is the default value for the DB_SECUREFILE initialization parameter?
Options:
1. ALWAYS
2. FORCE
3. IGNORE
4. PERMITTED

Answer
The default value for the DB_SECUREFILE initialization parameter is PERMITTED.
Option 1 is incorrect. The ALWAYS value attempts to create all LOB files as
SECUREFILES, but creates any LOBs not in ASSM tablespaces as BASICFILE
LOBs.
Option 2 is incorrect. When DB_SECUREFILE is set to FORCE, all LOBs created in
the system are created as SECUREFILE LOBs.

Option 3 is incorrect. This value ignores the SECUREFILE keyword and all
SECUREFILE options.
Option 4 is correct. PERMITTED is the default value of the DB_SECUREFILE
initialization parameter. This value allows SECUREFILES to be created when
specified with the SECUREFILE keyword in the CREATE TABLE statement.

2. Writing data to the SecureFile LOB


This procedure is used to load data into the LOB column.
CREATE OR REPLACE PROCEDURE loadLOBFromBFILE_proc
  (dest_loc IN OUT BLOB, file_name IN VARCHAR2)
IS
  src_loc BFILE := BFILENAME('CWD', file_name);
  amount  INTEGER := 4000;
BEGIN
  DBMS_LOB.OPEN(src_loc, DBMS_LOB.LOB_READONLY);
  amount := DBMS_LOB.GETLENGTH(src_loc);
  DBMS_LOB.LOADFROMFILE(dest_loc, src_loc, amount);
  DBMS_LOB.CLOSE(src_loc);
END loadLOBFromBFILE_proc;
/
Before running the LOADLOBFROMBFILE_PROC procedure, you need to set a directory
object that identifies where the LOB files are stored externally.
In this example, the Microsoft Word documents are stored in a directory that is external to
the database. Assume that the .doc files are placed in a folder called SECUREFILES.
CREATE OR REPLACE DIRECTORY cwd
AS '/localhost/oracle/SECUREFILES';
The LOADLOBFROMBFILE_PROC procedure is used to read the LOB data into the
PROFILE_INFO column in the CUSTOMER_PROFILES table.

Note
The LOADLOBFROMBFILE_PROC procedure can be used to read both SecureFile
and BasicFile formats.
In this example

DBMS_LOB.OPEN is used to open an external LOB in read-only mode

DBMS_LOB.GETLENGTH is used to find the length of the LOB value

DBMS_LOB.LOADFROMFILE is used to load the BFILE data into an internal LOB

DBMS_LOB.CLOSE is used to close the external LOB


Before you write data to the LOB column, you must make the LOB column non-NULL: the column must contain a locator that points to an empty or populated LOB value. You can initialize a BLOB column value by using the EMPTY_BLOB() function, either in the INSERT statement or as the column default.
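The EMPTY_BLOB() column default can be sketched as follows; the table name here is hypothetical, and the storage clause assumes the sf_tbs1 tablespace created earlier:

```sql
-- Hypothetical variant: the locator is initialized by the column default
CREATE TABLE customer_profiles_def
(id           NUMBER,
 first_name   VARCHAR2 (40),
 last_name    VARCHAR2 (80),
 profile_info BLOB DEFAULT EMPTY_BLOB())
LOB(profile_info) STORE AS SECUREFILE
(TABLESPACE sf_tbs1);
```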
CREATE OR REPLACE PROCEDURE write_lob (p_file IN VARCHAR2)
IS
  v_fn VARCHAR2(15);
  v_ln VARCHAR2(40);
  v_b  BLOB;
BEGIN
  DBMS_OUTPUT.ENABLE;
  DBMS_OUTPUT.PUT_LINE('Begin inserting rows...');
  FOR i IN 1 .. 30 LOOP
    v_fn := SUBSTR(p_file, 1, INSTR(p_file, '.') - 1);
    v_ln := SUBSTR(p_file, INSTR(p_file, '.') + 1,
                   LENGTH(p_file) - INSTR(p_file, '.') - 4);
    INSERT INTO customer_profiles
    VALUES (i, v_fn, v_ln, EMPTY_BLOB())
    RETURNING profile_info INTO v_b;
    loadLOBFromBFILE_proc(v_b, p_file);
    DBMS_OUTPUT.PUT_LINE('Row '|| i ||' inserted.');
  END LOOP;
  COMMIT;
END write_lob;
/
This code uses the INSERT statement to initialize the locator. The LOADLOBFROMBFILE
routine is then called and the LOB column value is inserted.
The write and read performance statistics for LOB storage is captured through output
messages.
When writing data to the SecureFile LOB, the Microsoft Word files are stored in the
SECUREFILES directory.
To read them into the PROFILE_INFO column in the CUSTOMER_PROFILES table, the
WRITE_LOB procedure is called and the name of the .doc files is passed as a
parameter.
set serveroutput on
set verify on
set term on
set linesize 200
timing start load_data


execute write_lob('karl.brimmer.doc');
execute write_lob('monica.petera.doc');
execute write_lob('david.sloan.doc');
timing stop

Note
This script is run in SQL*Plus because TIMING is a SQL*Plus option and is not
available in SQL Developer.
The output of the WRITE_LOB procedure is similar to this code.
timing start load_data
execute write_lob('karl.brimmer.doc');
Begin inserting rows...
Row 1 inserted.
...
PL/SQL procedure successfully completed.
execute write_lob('monica.petera.doc');
Begin inserting rows...
Row 1 inserted.
...
PL/SQL procedure successfully completed.
execute write_lob('david.sloan.doc');
Begin inserting rows...
Row 1 inserted.
...
PL/SQL procedure successfully completed.
timing stop
timing for: load_data
Elapsed: 00:00:00.96
To retrieve the records that were inserted, you can call the READ_LOB procedure.
set serveroutput on
set verify on
set term on
set linesize 200
timing start read_data


execute read_lob;
timing stop
These commands create the procedure to read back the 90 records from the
CUSTOMER_PROFILES table.
For each record, the size of the LOB value plus the first 200 characters of the LOB are
displayed on screen. A SQL*Plus timer is started to capture the total elapsed time for the
retrieval.

CREATE OR REPLACE PROCEDURE read_lob
IS
  lob_loc BLOB;
  CURSOR profiles_cur IS
    SELECT id, first_name, last_name, profile_info
    FROM customer_profiles;
  profiles_rec customer_profiles%ROWTYPE;
BEGIN
  OPEN profiles_cur;
  LOOP
    FETCH profiles_cur INTO profiles_rec;
    EXIT WHEN profiles_cur%NOTFOUND; -- exit before processing when no row was fetched
    lob_loc := profiles_rec.profile_info;
    DBMS_OUTPUT.PUT_LINE('The length is: '||
      DBMS_LOB.GETLENGTH(lob_loc));
    DBMS_OUTPUT.PUT_LINE('The ID is: '|| profiles_rec.id);
    DBMS_OUTPUT.PUT_LINE('The blob is read: '||
      UTL_RAW.CAST_TO_VARCHAR2(DBMS_LOB.SUBSTR(lob_loc, 200, 1)));
  END LOOP;
  CLOSE profiles_cur;
END read_lob;
/
The output of the commands that create the procedure to read LOBs from the table is
similar to this code.
The length is: 64000
The ID is: 1
The blob is read: > x z w
...
The length is: 37376
The ID is: 30
The blob is read: > D F C

PL/SQL procedure successfully completed.

timing stop
timing for: read_data
Elapsed: 00:00:01.09

Note
This text appears as garbage because it is a binary file.

3. Enabling deduplication and compression


You can enable deduplication and compression of SecureFiles with the ALTER TABLE
statement and the DEDUPLICATE and COMPRESS options.
The DEDUPLICATE option allows you to specify that LOB data, which is identical in two or
more rows in a LOB column, should all share the same data blocks.
The opposite of this option is KEEP_DUPLICATES. Using a secure hash index to detect
duplication, the database combines LOBs with identical content into a single copy,
reducing storage and simplifying storage management.
You can also use DBMS_LOB.SETOPTIONS to enable or disable deduplication on
individual LOBs.
ALTER TABLE tblname
MODIFY LOB (lobcolname)
(DEDUPLICATE option
COMPRESS option);
The options for the COMPRESS clause are

COMPRESS HIGH

COMPRESS MEDIUM

NOCOMPRESS
COMPRESS HIGH
The COMPRESS HIGH option provides the best compression, but incurs the most work.
COMPRESS MEDIUM
The COMPRESS MEDIUM option is the default.
NOCOMPRESS
The NOCOMPRESS option disables compression.
You can also use DBMS_LOB.SETOPTIONS to enable or disable compression on
individual LOBs.
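A minimal sketch of using DBMS_LOB.SETOPTIONS on a single LOB follows; the row selected (id = 1) is illustrative, and the option constants are assumed to be available in your release:

```sql
DECLARE
  v_lob BLOB;
BEGIN
  -- Lock the row so its LOB can be modified
  SELECT profile_info INTO v_lob
  FROM customer_profiles
  WHERE id = 1
  FOR UPDATE;
  -- Disable compression for this one LOB only
  DBMS_LOB.SETOPTIONS(v_lob, DBMS_LOB.OPT_COMPRESS,
                      DBMS_LOB.COMPRESS_OFF);
END;
/
```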

To test how efficient deduplication and compression are on SecureFiles, you decide to
1. check the space being used by the CUSTOMER_PROFILES table
2. enable deduplication and compression for the PROFILE_INFO LOB column in the
CUSTOMER_PROFILES table
3. examine the space being used after deduplication and compression are enabled
4. reclaim the space and examine the results
This procedure checks for LOB space usage.
CREATE OR REPLACE PROCEDURE check_space
IS
l_fs1_bytes NUMBER;
l_fs2_bytes NUMBER;
l_fs3_bytes NUMBER;
l_fs4_bytes NUMBER;
l_fs1_blocks NUMBER;
l_fs2_blocks NUMBER;
l_fs3_blocks NUMBER;
l_fs4_blocks NUMBER;
l_full_bytes NUMBER;
l_full_blocks NUMBER;
l_unformatted_bytes NUMBER;
l_unformatted_blocks NUMBER;
BEGIN
DBMS_SPACE.SPACE_USAGE(
segment_owner => 'OE1',
segment_name => 'CUSTOMER_PROFILES',
segment_type => 'TABLE',

The complete code for checking LOB space usage is listed at the end of this topic.
Before you enable deduplication and compression, the space usage displays.
This amount will be used as a baseline for comparison.
execute check_space
anonymous block completed
 FS1 Blocks = 0 Bytes = 0
 FS2 Blocks = 1 Bytes = 8192
 FS3 Blocks = 0 Bytes = 0
 FS4 Blocks = 4 Bytes = 32768
Full Blocks = 0 Bytes = 0
=============================================
Total Blocks = 5 || Total Bytes = 40960
To enable deduplication and compression, you run the ALTER TABLE statement with the
appropriate options.
In this example, deduplication is turned on and the compression rate is set to HIGH.
ALTER TABLE customer_profiles
MODIFY LOB (profile_info)
(DEDUPLICATE LOB
COMPRESS HIGH);
Table altered.
The total space used appears to be the same as before deduplication and compression
were enabled. This is because the free space needs to be reclaimed before it is usable
again.
execute check_space
anonymous block completed
FS1 Blocks = 0 Bytes = 0
FS2 Blocks = 1 Bytes = 8192
FS3 Blocks = 0 Bytes = 0
FS4 Blocks = 4 Bytes = 32768
Full Blocks = 0 Bytes = 0
=============================================
Total Blocks = 5 || Total Bytes = 40960
To reclaim the free space, you enable row movement, compact the segment, and then shrink it.
ALTER TABLE customer_profiles ENABLE ROW MOVEMENT;
Table altered.
ALTER TABLE customer_profiles SHRINK SPACE COMPACT;
Table altered.
ALTER TABLE customer_profiles SHRINK SPACE;
Table altered.
The first statement enables row movement so that the data can be shifted to save space. Compacting the segment requires row movement.
The second statement, SHRINK SPACE COMPACT, redistributes the rows inside the blocks. This results in more free blocks under the High Water Mark (HWM), but the HWM itself is not disturbed.
The third statement, SHRINK SPACE, returns unused blocks to the database and resets the HWM, moving it to a lower position. Lowering the HWM should result in faster full table scans.
After reclaiming, the amount of space used is about 65% less than before deduplication
and compression were enabled.
execute check_space
anonymous block completed
FS1 Blocks = 0 Bytes = 0
FS2 Blocks = 1 Bytes = 8192
FS3 Blocks = 0 Bytes = 0
FS4 Blocks = 0 Bytes = 0
Full Blocks = 0 Bytes = 8192
=============================================
Total Blocks = 1 || Total Bytes = 16384

4. Encryption
The encryption option enables you to turn LOB encryption on or off and, optionally, to select an encryption algorithm.

Encryption is performed at the block level and you can specify the encryption algorithm:

3DES168

AES128

AES192 (default)

AES256
When you specify the ENCRYPT clause, the column encryption key is derived from a password and all LOBs in the LOB column are encrypted. The DECRYPT option keeps the LOBs in cleartext. LOBs can be encrypted on a per-column or per-partition basis.
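As a sketch, encryption can also be declared when the SecureFile column is first created; the table name here is hypothetical:

```sql
CREATE TABLE customer_profiles_enc
(id           NUMBER,
 profile_info BLOB)
LOB(profile_info) STORE AS SECUREFILE
(TABLESPACE sf_tbs1
 ENCRYPT USING 'AES256');
```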
The current Transparent Data Encryption (TDE) syntax is used for extending encryption
to LOB data types. TDE enables you to encrypt sensitive data in database columns as it
is stored in the operating system files.
Transparent data encryption is a key-based access control system that enforces
authorization by encrypting data with a key that is kept secret.
There can be only one key for each database table that contains encrypted columns,
regardless of the number of encrypted columns in a given table. Each table's column
encryption key is, in turn, encrypted with the database server's master key.
No keys are stored in the database. Instead, they are stored in an Oracle wallet, which is
part of the external security module.
To enable TDE, you need to create a directory to store the TDE wallet. This is required for
the SecureFiles LOB encryption.
mkdir $ORACLE_HOME/wallet
You also need to modify the sqlnet.ora file to identify the location of the TDE wallet,
using this code.
ENCRYPTION_WALLET_LOCATION=
(SOURCE=(METHOD=FILE)(METHOD_DATA= (DIRECTORY
=/u01/app/oracle/product/11.1.0/db_1/wallet)))
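Before the ENCRYPT clause can be used, the wallet must also contain a master key and be open. A minimal sketch, with a placeholder password:

```sql
-- Create the master encryption key (this also creates and opens the wallet)
ALTER SYSTEM SET ENCRYPTION KEY IDENTIFIED BY "wallet_pwd";

-- On later instance startups, reopen the existing wallet
ALTER SYSTEM SET ENCRYPTION WALLET OPEN IDENTIFIED BY "wallet_pwd";
```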
In this example, the CUSTOMER_PROFILES table is modified so that the PROFILE_INFO
column uses encryption.
ALTER TABLE customer_profiles
MODIFY (profile_info ENCRYPT USING 'AES192');
Table altered.

You can query the USER_ENCRYPTED_COLUMNS dictionary view to see the status of
encrypted columns.
SELECT *
FROM user_encrypted_columns;
TABLE_NAME        COLUMN_NAME   ENCRYPTION_ALG    SALT
----------------- ------------- ----------------- ----
CUSTOMER_PROFILES PROFILE_INFO  AES 192 bits key  YES

Question
Identify the accurate statements regarding the use of encryption with LOBs.
Options:
1. A directory is required to store the Transparent Data Encryption wallet
2. All encryption keys are stored in the database
3. It enables you to encrypt sensitive data in database columns
4. There are multiple keys for each database table that contains encrypted columns

Answer
Encryption with LOBs requires a directory to store the Transparent Data
Encryption wallet. And it enables you to encrypt sensitive data in database
columns.
Option 1 is correct. To enable Transparent Data Encryption (TDE), you need to
create a directory to store the TDE wallet. You also need to modify the
sqlnet.ora file to identify the location of the TDE wallet.
Option 2 is incorrect. No encryption keys are stored in the database. Instead, they
are stored in an Oracle wallet, which is part of the external security module.
Option 3 is correct. TDE enables you to encrypt sensitive data in database
columns as it is stored in the operating system files. TDE is a key-based access
control system that enforces authorization by encrypting data with a key that is
kept secret.
Option 4 is incorrect. There can be only one key for each database table that
contains encrypted columns regardless of the number of encrypted columns in a
given table.

5. Migrating between formats

You may have LOB data in tables that were created before Oracle Database 11g. You can
migrate the LOB data from the BasicFile format to the SecureFile format.
In this example, you may have previously created a table with a LOB column stored in the
BasicFile format, which is the default and only choice before Oracle Database 11g.
The BasicFile format migrates to the SecureFile format through a series of steps.
connect system/oracle@orcl
CREATE TABLESPACE bf_tbs1
DATAFILE 'bf_tbs1.dbf' SIZE 800M REUSE
EXTENT MANAGEMENT LOCAL
UNIFORM SIZE 64M
SEGMENT SPACE MANAGEMENT AUTO;
connect oe1/oe1@orcl
CREATE TABLE customer_profiles
(id NUMBER,
first_name VARCHAR2 (40),
last_name VARCHAR2 (80),
profile_info BLOB)
LOB(profile_info) STORE AS BASICFILE
(TABLESPACE bf_tbs1);

Note
For this example, you need to drop the CUSTOMER_PROFILES table, and recreate it with this code:
DROP TABLE customer_profiles;
In this example, data is loaded into the PROFILE_INFO BLOB column in the
CUSTOMER_PROFILES table.
This example builds and populates the BasicFile LOB format column so that it can be
migrated to the SecureFile format.
set serveroutput on
set verify on
set term on
set linesize 200
timing start load_data
execute write_lob('karl.brimmer.doc');
execute write_lob('monica.petera.doc');
execute write_lob('david.sloan.doc');
timing stop
PL/SQL procedure successfully completed.

timing for: load_data


Elapsed: 00:00:01.68

Note
The elapsed time is much longer than loading the data in the SecureFile format.
These commands read back the 90 records from the CUSTOMER_PROFILES table. For
each record, the size of the LOB value plus the first 200 characters of the LOB are
displayed on screen.
A SQL*Plus timer is started to capture the total elapsed time for the retrieval.
Later, you can use this timing information to compare the performance between the
BasicFile format and the SecureFile format LOBs.
set serveroutput on
set verify on
set term on
set lines 200
timing start read_data
execute read_lob;
timing stop
PL/SQL procedure successfully completed.
timing for: read_data
Elapsed: 00:00:01.15
By querying the DBA_SEGMENTS view, you can see that the LOB segment subtype name
for BasicFile LOB storage is ASSM.
col segment_name format a30
col segment_type format a13
SELECT segment_name, segment_type, segment_subtype
FROM dba_segments
WHERE tablespace_name = 'BF_TBS1'
AND segment_type = 'LOBSEGMENT';

SEGMENT_NAME                   SEGMENT_TYPE       SEGME
------------------------------ ------------------ -----
SYS_LOB0000080068C00004$$      LOBSEGMENT         ASSM
The migration from BasicFile to SecureFile LOB storage format is performed online. This means that the CUSTOMER_PROFILES table continues to be accessible during the migration.
This type of operation is called online redefinition. Online redefinition requires an interim
table for data storage.
In this example, the interim table is defined with the SecureFiles LOB storage format. And
the LOB is stored in the sf_tbs1 tablespace.
After the migration is completed, the PROFILE_INFO LOB is stored in the sf_tbs1
tablespace.
CREATE TABLE customer_profiles_interim
(id NUMBER,
first_name VARCHAR2 (40),
last_name VARCHAR2 (80),
profile_info BLOB)
LOB(profile_info) STORE AS SECUREFILE
(TABLESPACE sf_tbs1);
After running this code and completing the redefinition operation, you can drop the interim
table.
connect system/oracle@orcl
DECLARE
error_count PLS_INTEGER := 0;
BEGIN
DBMS_REDEFINITION.START_REDEF_TABLE
('OE1', 'customer_profiles', 'customer_profiles_interim',
'id id, first_name first_name,
last_name last_name, profile_info profile_info',
OPTIONS_FLAG => DBMS_REDEFINITION.CONS_USE_ROWID);
DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS
('OE1', 'customer_profiles', 'customer_profiles_interim',
1, true,true,true,false, error_count);
DBMS_OUTPUT.PUT_LINE('Errors := ' || TO_CHAR(error_count));
DBMS_REDEFINITION.FINISH_REDEF_TABLE
('OE1', 'customer_profiles', 'customer_profiles_interim');
END;
/
connect oe1/oe1@orcl
DROP TABLE customer_profiles_interim;
You can then check the segment type of the migrated LOB. Note that the segment
subtype for SecureFile LOB storage is SECUREFILE. For BasicFile format, it is ASSM.

SELECT segment_name, segment_type, segment_subtype
FROM dba_segments
WHERE tablespace_name = 'SF_TBS1'
AND segment_type = 'LOBSEGMENT'
/

SEGMENT_NAME                   SEGMENT_TYPE       SEGMENT_SU
------------------------------ ------------------ ----------
SYS_LOB0000080071C00004$$      LOBSEGMENT         SECUREFILE
In all of these examples, the performance on loading and reading data in the LOB column
of the SecureFile format LOB is faster than that of the BasicFile format LOB.
You can compare the performance on loading and reading LOB columns in the
SecureFile and BasicFile formats.

Question
Which statements accurately describe migrating from BasicFile to the SecureFile
format?
Options:
1. An interim table is required for data storage
2. By querying DBA_LOBS, you can determine the LOB
3. The DBMS_REDEFINITION package is required
4. This process is known as offline redefinition

Answer
An interim table is required for data storage. And the DBMS_REDEFINITION
package is required when migrating from BasicFile to the SecureFile format.
Option 1 is correct. Online redefinition requires an interim table for data storage.
The interim table is defined with the SecureFiles LOB storage format.
Option 2 is incorrect. By querying DBA_SEGMENTS, and not DBA_LOBS, you can
determine the segment name and type for BasicFile LOB storage. The DBA_LOBS
data dictionary view can be used to view the compression and deduplication
settings for the SecureFiles LOB segment.
Option 3 is correct. The DBMS_REDEFINITION package is used to perform online
redefinition, including redefining table columns and column names.
Option 4 is incorrect. The process of migrating from BasicFile to SecureFile format
is known as online, and not offline, redefinition.
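The DBA_LOBS check mentioned in the answer can be sketched as follows; the column list is assumed for Oracle Database 11g:

```sql
SELECT table_name, column_name, securefile,
       compression, deduplication, encrypt
FROM dba_lobs
WHERE table_name = 'CUSTOMER_PROFILES';
```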

6. Using SecureFile format LOBs


Suppose you want to migrate a BasicFile format LOB to a SecureFile format LOB.
First, you need to set up several supporting structures. You drop your existing
PRODUCT_DESCRIPTIONS table and create a new one.
DROP TABLE product_descriptions;
CREATE TABLE product_descriptions
(product_id NUMBER);
Next you alter the table and add a BLOB column of BASICFILE storage type.
ALTER TABLE product_descriptions ADD
(detailed_product_info BLOB )
LOB (detailed_product_info) STORE AS BASICFILE
(tablespace bf_tbs2);
Then you create a directory object that identifies the location of your LOBs. In this code,
you need to replace vx0115 with your server name.
CREATE OR REPLACE DIRECTORY cwd
AS '/vx0115/oracle/SECUREFILES';
You create this procedure to load the LOB data into the column.
CREATE OR REPLACE PROCEDURE loadLOBFromBFILE_proc
  (dest_loc IN OUT BLOB, file_name IN VARCHAR2)
IS
  src_loc BFILE := BFILENAME('CWD', file_name);
  amount  INTEGER := 4000;
BEGIN
  DBMS_LOB.OPEN(src_loc, DBMS_LOB.LOB_READONLY);
  amount := DBMS_LOB.GETLENGTH(src_loc);
  DBMS_LOB.LOADFROMFILE(dest_loc, src_loc, amount);
  DBMS_LOB.CLOSE(src_loc);
END loadLOBFromBFILE_proc;
/
Then you create this procedure to write the LOB data.
CREATE OR REPLACE PROCEDURE write_lob (p_file IN VARCHAR2)
IS
  v_id NUMBER;
  v_b  BLOB;
BEGIN
  DBMS_OUTPUT.ENABLE;
  DBMS_OUTPUT.PUT_LINE('Begin inserting rows...');
  FOR i IN 1 .. 5 LOOP
    v_id := SUBSTR(p_file, 1, 4);
    INSERT INTO product_descriptions
    VALUES (v_id, EMPTY_BLOB())
    RETURNING detailed_product_info INTO v_b;
    loadLOBFromBFILE_proc(v_b, p_file);
    DBMS_OUTPUT.PUT_LINE('Row '|| i ||' inserted.');
  END LOOP;
  COMMIT;
END write_lob;
/
Next you execute the procedures to load the data.
If you are using SQL*Plus, you can set the timing on to observe the time. If you are using
SQL Developer, you issue only these EXECUTE statements. In SQL Developer, some of
the SQL*Plus commands are ignored.
set serveroutput on
set verify on
set term on
set lines 200
timing start load_data


execute write_lob('1726_LCD.doc');
execute write_lob('1734_RS232.doc');
execute write_lob('1739_SDRAM.doc');
timing stop
You can check the segment type in the data dictionary using this code.
SELECT segment_name, segment_type, segment_subtype
FROM dba_segments
WHERE tablespace_name = 'BF_TBS2'
AND segment_type = 'LOBSEGMENT';
Then you create an interim table using this code.
CREATE TABLE product_descriptions_interim
(product_id NUMBER,
detailed_product_info BLOB)
LOB(detailed_product_info) STORE AS SECUREFILE
(TABLESPACE sf_tbs1);

Now you connect as system and run the redefinition script. You need to replace orcl
with your sid, and replace OE1 with your ID.
connect system/oracle@orcl
DECLARE
error_count PLS_INTEGER := 0;
BEGIN
DBMS_REDEFINITION.START_REDEF_TABLE
('OE1', 'product_descriptions',
'product_descriptions_interim',
'product_id product_id, detailed_product_info
detailed_product_info',
OPTIONS_FLAG =>
DBMS_REDEFINITION.CONS_USE_ROWID);
DBMS_REDEFINITION.COPY_TABLE_DEPENDENTS
('OE1', 'product_descriptions',
'product_descriptions_interim',
1, true,true,true,false, error_count);
DBMS_OUTPUT.PUT_LINE('Errors := ' ||
TO_CHAR(error_count));
DBMS_REDEFINITION.FINISH_REDEF_TABLE
('OE1', 'product_descriptions',
'product_descriptions_interim');
END;
/
In SQL Developer, using your OE User ID, you remove the interim table using this code.
DROP TABLE product_descriptions_interim;
Next you check the segment type in the data dictionary.
SELECT segment_name, segment_type, segment_subtype
FROM dba_segments
WHERE tablespace_name = 'SF_TBS1'
AND segment_type = 'LOBSEGMENT';
Next you turn on compression and deduplication for your PRODUCT_DESCRIPTIONS
table.
You modify the table to enable deduplication and compression using this code.
ALTER TABLE product_descriptions
MODIFY LOB (detailed_product_info)
(DEDUPLICATE LOB
COMPRESS HIGH);

Finally, you alter the table to reclaim the free space using this code.
ALTER TABLE product_descriptions ENABLE ROW MOVEMENT;
ALTER TABLE product_descriptions SHRINK SPACE COMPACT;
ALTER TABLE product_descriptions SHRINK SPACE;

Summary
The new SecureFile format for LOBs dramatically improves performance, manageability,
and ease of application development. It also offers intelligent compression and
transparent encryption.
To load data into a SecureFile LOB from an external file, you use the LOADLOBFROMBFILE_PROC procedure, which the WRITE_LOB procedure calls to write data into the LOB column.
The SecureFile format offers features such as deduplication and compression. You can
test how efficient deduplication and compression are on SecureFiles.
You can also turn on or off the LOB encryption, and specify an encryption algorithm.
Encryption is performed at the block level.
You can migrate the older version BasicFile format to the SecureFile format. And the
performance of the SecureFile format LOBs is faster than the BasicFile format LOBs.
With Oracle 11g, you can migrate a BasicFile format LOB to a SecureFile format LOB.

Checking for LOB space usage


CREATE OR REPLACE PROCEDURE check_space
IS
  l_fs1_bytes          NUMBER;
  l_fs2_bytes          NUMBER;
  l_fs3_bytes          NUMBER;
  l_fs4_bytes          NUMBER;
  l_fs1_blocks         NUMBER;
  l_fs2_blocks         NUMBER;
  l_fs3_blocks         NUMBER;
  l_fs4_blocks         NUMBER;
  l_full_bytes         NUMBER;
  l_full_blocks        NUMBER;
  l_unformatted_bytes  NUMBER;
  l_unformatted_blocks NUMBER;
BEGIN
  DBMS_SPACE.SPACE_USAGE(
    segment_owner      => 'OE1',
    segment_name       => 'CUSTOMER_PROFILES',
    segment_type       => 'TABLE',
    fs1_bytes          => l_fs1_bytes,
    fs1_blocks         => l_fs1_blocks,
    fs2_bytes          => l_fs2_bytes,
    fs2_blocks         => l_fs2_blocks,
    fs3_bytes          => l_fs3_bytes,
    fs3_blocks         => l_fs3_blocks,
    fs4_bytes          => l_fs4_bytes,
    fs4_blocks         => l_fs4_blocks,
    full_bytes         => l_full_bytes,
    full_blocks        => l_full_blocks,
    unformatted_blocks => l_unformatted_blocks,
    unformatted_bytes  => l_unformatted_bytes);
  DBMS_OUTPUT.ENABLE;
  DBMS_OUTPUT.PUT_LINE(' FS1 Blocks = '||l_fs1_blocks||' Bytes = '||l_fs1_bytes);
  DBMS_OUTPUT.PUT_LINE(' FS2 Blocks = '||l_fs2_blocks||' Bytes = '||l_fs2_bytes);
  DBMS_OUTPUT.PUT_LINE(' FS3 Blocks = '||l_fs3_blocks||' Bytes = '||l_fs3_bytes);
  DBMS_OUTPUT.PUT_LINE(' FS4 Blocks = '||l_fs4_blocks||' Bytes = '||l_fs4_bytes);
  DBMS_OUTPUT.PUT_LINE('Full Blocks = '||l_full_blocks||' Bytes = '||l_full_bytes);
  DBMS_OUTPUT.PUT_LINE('=============================================');
  DBMS_OUTPUT.PUT_LINE('Total Blocks = '||
    TO_CHAR(l_fs1_blocks + l_fs2_blocks + l_fs3_blocks +
            l_fs4_blocks + l_full_blocks)||
    ' Total Bytes = '||
    TO_CHAR(l_fs1_bytes + l_fs2_bytes + l_fs3_bytes +
            l_fs4_bytes + l_full_bytes));
END;
/

Perform PIVOT and UNPIVOT Operations


Learning objective

After completing this topic, you should be able to recognize the steps for performing
PIVOT and UNPIVOT operations in various ways on the server side.

1. Pivoting and unpivoting


The new pivot functionality in Oracle Database 11g enables you to transform multiple
rows of input into fewer rows, generally with more columns.
Using pivoting operations has several benefits. Data returned by business intelligence
(BI) queries is more useful if presented in a cross-tabular format. When pivoting, an
aggregation operator is applied, enabling the query to condense large data sets into
smaller, more readable results. And data that was originally on multiple rows can be
transformed into a single row of output, enabling intra-row calculations without a SQL
JOIN operation.
By performing pivots on the server side, you can

enhance processing speed

reduce network load


enhance processing speed
By performing pivots on the server side, processing burden is removed from client
applications, simplifying client-side development and potentially enhancing processing
speed.
reduce network load
By performing pivots on the server side, network load is reduced because only aggregated
pivot results need to traverse the network and not the detail data.
You can use the PIVOT operator of the SELECT statement to write cross-tabulation
queries that rotate the column values into new columns, aggregating data in the process.
You can use the UNPIVOT operator of the SELECT statement to rotate columns into
values of a column.
An UNPIVOT operation does not reverse a PIVOT operation. Instead, it rotates data from
columns into rows.

Question
Identify the benefits of using pivoting operations.
Options:
1.

Enhanced processing speed

2.

More rows with fewer columns

3.

No need for aggregation operators

4.

Reduced network load

Answer
Benefits of using pivoting operations include enhanced processing speed and
reduced network load.

Option 1 is correct. By performing pivots on the server side the processing burden
is removed from client applications, simplifying client-side development and
potentially enhancing processing speed.
Option 2 is incorrect. Pivoting enables you to transform multiple rows of input into
fewer rows, generally with more columns.
Option 3 is incorrect. When pivoting, an aggregation operator is applied, enabling
the query to condense large data sets into smaller, more readable results.
Option 4 is correct. By performing pivots on the server side, network load is
reduced because only aggregated pivot results need to traverse the network and
not the detail data.

2. Performing PIVOT operations


This example shows two tables, which demonstrate how pivoting works. The first table
contains data about the quarterly sales figures for product sales in different countries.
The second table displays the results of pivoting on the QUARTER column. The values of
the QUARTER column, Q1, Q2, Q3, and Q4 are rotated into new columns. The number sold
is grouped by products and quarters.
To accomplish the desired result, you can use the PIVOT clause to PIVOT the QUARTER
column. That is, you turn the values of this column into separate columns and aggregate
data using a group function such as SUM on the QUANTITY_SOLD along the way for each
PRODUCT.
In the second table, the QUARTER column is pivoted. The values of the QUARTER column
namely Q1, Q2, Q3, and Q4 have been rotated into columns.
In addition, the quantity sold is calculated for each quarter for ALL products, ALL
channels, and ALL countries.
For example, the total number of shorts sold for all channels, all countries, for quarter 2 is
3,500. The total number of Kids Jeans sold for all channels, all countries, for quarter 2 is
2,000.
In this example, the first table contains six rows before the pivoting operation.
In the second table, and after pivoting the QUARTER column, only two rows are displayed.
Pivoting transforms multiple rows of input into fewer and generally wider rows.

You can use the PIVOT clause to write cross-tabulation queries that rotate rows into
columns, aggregating the data in the process of rotation.
The XML keyword is required when you use either a subquery or the wildcard ANY in the
pivot_in_clause to specify pivot values. You cannot specify XML when you specify
explicit pivot values using expressions in the pivot_in_clause.
If the XML keyword is used, the output will include grouping columns and one column of
XMLType rather than a series of pivoted columns.
table_reference PIVOT [ XML ]
( aggregate_function ( expr ) [[AS] alias ]
[, aggregate_function ( expr ) [[AS] alias ] ]...
pivot_for_clause
pivot_in_clause )
The optional AS alias enables you to specify an alias for each measure.
The aggregate_function operates on the table's data, and the result of the
computation appears in the cross-tab report. It has an implicit GROUP BY based on the
columns in the source data.
The expr argument for the aggregate function is the measure to be pivoted. It must be a
column or expression of the query_table_expression on which the PIVOT clause is
operating.
You use the pivot_for_clause to specify one or more columns whose values are to
be pivoted into columns.
pivot_for_clause =
FOR { column |( column [, column]... ) }
In the pivot_in_clause, you specify the pivot column values from the columns you
specified in the pivot_for_clause.
For expr, you specify a constant value of a pivot column. You can optionally provide an
alias for each pivot column value.
pivot_in_clause =
IN ( { { { expr | ( expr [, expr]... ) } [ [ AS] alias] }...
| subquery | { ANY | ANY [, ANY]...} } )
You can use a subquery to extract the pivot column values by way of a nested subquery.
If you specify ANY, all values of the pivot columns are pivoted into columns.

Subqueries and wildcards are useful if you do not know the specific values in the pivot
columns. However, you will need to do further processing to convert the XML output into a
tabular format.
The values evaluated by the pivot_in_clause become the columns in the pivoted
data.
pivot_in_clause =
IN ( { { { expr | ( expr [, expr]... ) } [ [ AS] alias] }...
| subquery | { ANY | ANY [, ANY]...} } )
Suppose you have recently created a new view, sales_view, in the SH schema.
You want to pivot some of the data in the sales_view view.
SQL> CREATE OR REPLACE VIEW sales_view AS
  2  SELECT
  3    prod_name AS product,
  4    country_name AS country,
  5    channel_id AS channel,
  6    SUBSTR(calendar_quarter_desc, 6, 2) AS quarter,
  7    SUM(amount_sold) AS amount_sold,
  8    SUM(quantity_sold) AS quantity_sold
  9  FROM sh.sales, sh.times, sh.customers,
 10    sh.countries, sh.products
 11  WHERE sales.time_id = times.time_id AND
 12    sales.prod_id = products.prod_id AND
 13    sales.cust_id = customers.cust_id AND
 14    customers.country_id = countries.country_id
 15  GROUP BY prod_name, country_name, channel_id,
 16    SUBSTR(calendar_quarter_desc, 6, 2);
This code displays the definition of sales_view.
SQL> DESCRIBE sales_view
Name            Null?    Type
--------------- -------- ------------
PRODUCT         NOT NULL VARCHAR2(50)
COUNTRY         NOT NULL VARCHAR2(40)
CHANNEL         NOT NULL NUMBER
QUARTER                  VARCHAR2(2)
AMOUNT_SOLD              NUMBER
QUANTITY_SOLD            NUMBER
This code displays some of the calendar_quarter_desc column values in the SH
schema.

SQL> SELECT DISTINCT calendar_quarter_desc
  2  FROM sh.times;

CALENDAR_QUARTER_DESC
---------------------
1999-02
2000-04
2002-03
2000-03
2001-04
1998-01
1999-04
2002-02
2002-04
2000-02
2001-01
. . .
20 rows selected.

Note
The two-character quarter value is extracted from the calendar_quarter_desc
column starting at position 6.
This code shows some sample data from sales_view. The query currently returns 9,502
rows.
SQL> SELECT product, country, channel, quarter, quantity_sold
  2  FROM sales_view;

PRODUCT      COUNTRY      CHANNEL  QUARTER QUANTITY_SOLD
------------ ------------ -------- ------- -------------
Y Box        Italy        4        01                 21
Y Box        Italy        4        02                 17
Y Box        Italy        4        03                 20
. . .
Y Box        Japan        2        01                 35
Y Box        Japan        2        02                 39
Y Box        Japan        2        03                 36
Y Box        Japan        2        04                 46
Y Box        Japan        3        01                 65
. . .
Bounce       Italy        2        01                 34
Bounce       Italy        2        02                 43
. . .
9502 rows selected.
If you use the QUARTER column to pivot, two key changes are performed by the PIVOT
operator:

the QUARTER column will become multiple columns, each holding one quarter

the row count will drop from 9,502 to just 71, representing the distinct products in the schema
The valid quarter values are 01, 02, 03, and 04.
This statement displays the distinct channel_id and channel_desc column values
from the CHANNELS table.
SQL> SELECT DISTINCT channel_id, channel_desc
  2  FROM sh.channels
  3  ORDER BY channel_id;

CHANNEL_ID CHANNEL_DESC
---------- --------------------
         2 Partners
         3 Direct Sales
         4 Internet
         5 Catalog
         9 Tele Sales
This code shows how to pivot on the QUARTER column in the sales_view data.
It uses an inline view (a subquery in the FROM clause). This is required because, when
you issue a SELECT * directly against the sales view, the query outputs one row for
each row of sales_view.
SQL> SELECT *
  2  FROM
  3  (SELECT product, quarter, quantity_sold
  4   FROM sales_view) PIVOT (SUM(quantity_sold)
  5   FOR quarter IN ('01', '02', '03', '04'))
  6  ORDER BY product DESC;

PRODUCT                   '01'       '02'       '03'       '04'
------------------- ---------- ---------- ---------- ----------
Y Box                     1455       1766       1716       1992
Xtend Memory              3146       4121       4122       3802
Unix/Windows 1-user       4259       3887       4601       4049
Standard Mouse            3376       1699       2654       2427
Smash up Boxing           1608       2127       1999       2110
. . .
71 rows selected.
The result set shows the product column followed by a column for each value of the
quarter specified in the IN clause.
The numbers in the pivoted output are the sum of quantity_sold for each product at
each quarter.
If you also specify an alias for each measure, the column name is a concatenation of the
pivot column value or alias, an underscore (_), and the measure alias.
For example, you can use this code to specify aliases.
SQL> SELECT *
  2  FROM (SELECT product, quarter, quantity_sold
  3        FROM sales_view) PIVOT (SUM(quantity_sold)
  4        FOR quarter IN ('01' AS Q1, '02' AS Q2, '03' AS Q3,
  5        '04' AS Q4))
  6  ORDER BY product DESC;

PRODUCT                  Q1         Q2         Q3         Q4
---------------- ---------- ---------- ---------- ----------
Y Box                  1455       1766       1716       1992
Xtend Memory           3146       4121       4122       3802
Unix/Windows           4259       3887       4601       4049
Standard Mouse         3376       1699       2654       2427
. . .
71 rows selected.
Prior to Oracle Database 11g, you could accomplish this using the CASE expression
syntax.
The advantage of the new syntax over the syntax used prior to Oracle Database 11g is
that it enables greater query optimization by the Oracle database. The query optimizer
recognizes the PIVOT keyword and, as a result, uses algorithms optimized to process it
efficiently.
SQL> SELECT product,
  2  SUM(CASE WHEN quarter = '01'
  3      THEN quantity_sold ELSE NULL END) Q1,
  4  SUM(CASE WHEN quarter = '02'
  5      THEN quantity_sold ELSE NULL END) Q2,
  6  SUM(CASE WHEN quarter = '03'
  7      THEN quantity_sold ELSE NULL END) Q3,
  8  SUM(CASE WHEN quarter = '04'
  9      THEN quantity_sold ELSE NULL END) Q4
 10  FROM (SELECT product, quarter, quantity_sold
 11        FROM sales_view)
 12  GROUP BY product;
This code shows an example of pivoting using the Order Entry (OE) schema.
The oe.orders table contains information about when an order was placed
(order_date), how it was placed (order_mode), and the total amount of the order
(order_total), as well as other information.
This example shows how to use the PIVOT clause to pivot order_mode values into
columns, aggregating order_total data in the process, to get yearly totals by order
mode.
SQL> CREATE TABLE pivot_table AS
  2  SELECT * FROM
  3  (SELECT EXTRACT(YEAR FROM order_date) year, order_mode,
  4   order_total FROM orders)
  5  PIVOT
  6  (SUM(order_total) FOR order_mode IN ('direct' AS Store,
  7   'online' AS Internet))
  8  ORDER BY year;

SQL> SELECT * FROM pivot_table ORDER BY year;

      YEAR      STORE   INTERNET
---------- ---------- ----------
      1990    61655.7
      1996     5546.6
      1997        310
      1998   309929.8   100056.6
      1999  1274078.8  1271019.5
      2000   252108.3   393349.4

6 rows selected.
This example creates a new table called pivot_table by using a subquery.
The rows inserted into the table are generated by the subquery.
Here, aliases for the direct and online pivot column values have been used.

3. Pivoting multiple columns


You can pivot on multiple columns, subject to two conditions:

a pivoting column is required to be a column of the table reference on which the pivot is operating

the pivoting column cannot be an arbitrary expression


If you need to pivot on an expression, you should alias the expression in a view before
the pivot operation.
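As a sketch of this approach (using the SH schema objects shown earlier, with a hypothetical view name), you alias the expression in a view and then pivot on the aliased column — sales_view already does exactly this with its QUARTER column:

```sql
-- Hypothetical view name: the aliased SUBSTR expression becomes an
-- ordinary column that the PIVOT clause can reference.
CREATE OR REPLACE VIEW sales_by_qtr AS
  SELECT prod_name AS product,
         SUBSTR(calendar_quarter_desc, 6, 2) AS quarter,
         quantity_sold
  FROM sh.sales, sh.times, sh.products
  WHERE sales.time_id = times.time_id
    AND sales.prod_id = products.prod_id;

SELECT *
FROM sales_by_qtr
PIVOT (SUM(quantity_sold) FOR quarter IN ('01', '02', '03', '04'));
```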
This example pivots on both the CHANNEL and QUARTER columns.
The example uses only CHANNEL values 3 (Direct Sales) and 4 (Internet), and
only the Q1 value for the QUARTER column.
SQL> SELECT *
  2  FROM
  3  (SELECT product, channel, quarter, quantity_sold
  4   FROM sales_view) PIVOT (SUM(quantity_sold) FOR (channel,
  5   quarter) IN ((3, '01') AS Direct_Sales_Q1,
  6   (4, '01') AS Internet_Sales_Q1))
  7  ORDER BY product DESC;

PRODUCT                   DIRECT_SALES_Q1 INTERNET_SALES_Q1
------------------------- --------------- -----------------
Y Box                                 771               253
Xtend Memory                         1935               350
Unix/Windows 1-user pack             2544               397
Standard Mouse                       2326               256
Smash up Boxing                      1114               129
. . .
71 rows selected.
In this example, more values for the QUARTER column have been specified.
SQL> SELECT *
  2  FROM
  3  (SELECT product, channel, quarter, quantity_sold
  4   FROM sales_view
  5  ) PIVOT (SUM(quantity_sold) FOR (channel, quarter) IN
  6   ((3, '01') AS Direct_Sales_Q1,
  7   (3, '02') AS Direct_Sales_Q2,
  8   (3, '03') AS Direct_Sales_Q3,
  9   (3, '04') AS Direct_Sales_Q4,
 10   (4, '01') AS Internet_Sales_Q1,
 11   (4, '02') AS Internet_Sales_Q2,
 12   (4, '03') AS Internet_Sales_Q3,
 13   (4, '04') AS Internet_Sales_Q4))
 14  ORDER BY product DESC;
Oracle Database 11g enables you to pivot using multiple aggregations.
The query in this example pivots SALES_VIEW on the CHANNEL column. The
amount_sold and quantity_sold measures are pivoted. The query creates column
headings by concatenating the pivot columns with the aliases of the aggregate functions,
plus an underscore.
When you use multiple aggregations, you can omit the alias for only one aggregation. If
you omit an alias, the corresponding result column name is the pivot value, or the alias
for the pivot value.
SQL> SELECT *
  2  FROM
  3  (SELECT product, channel, amount_sold, quantity_sold
  4   FROM sales_view) PIVOT (SUM(amount_sold) AS sums,
  5   SUM(quantity_sold) AS sumq
  6   FOR channel IN (3 AS Dir_Sales, 4 AS Int_Sales))
  7  ORDER BY product DESC;

PRODUCT         DIR_SALES_SUMS DIR_SALES_SUMQ INT_SALES_SUMS INT_SALES_SUMQ
--------------- -------------- -------------- -------------- --------------
Y Box               1081050.96           3552      382767.45           1339
Xtend Memory         217011.38           8562       40553.93           1878
Unix/Windows        1999882.17           9313      376071.62           1872
Standard Mouse       153199.63           6140       28768.04           1195
Smash up Boxing      174592.24           5106       27858.84            904
...
71 rows selected.
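As a sketch of the omitted-alias rule described above (an assumption-based variation on the previous query, not shown in the course), leaving SUM(amount_sold) unaliased would make its result columns take just the pivot value aliases:

```sql
-- Only SUM(quantity_sold) carries an alias; the unaliased
-- SUM(amount_sold) columns are then named DIR_SALES and INT_SALES,
-- while the aliased measure yields DIR_SALES_SUMQ and INT_SALES_SUMQ.
SELECT *
FROM (SELECT product, channel, amount_sold, quantity_sold
      FROM sales_view)
PIVOT (SUM(amount_sold),
       SUM(quantity_sold) AS sumq
       FOR channel IN (3 AS Dir_Sales, 4 AS Int_Sales))
ORDER BY product DESC;
```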

You can distinguish between NULL values that are generated from the use of PIVOT and
those that exist in the source data.
This example illustrates NULL values that PIVOT generates.
The first code example assumes an existing table named sales2.
SQL> SELECT * FROM sales2;

PROD_ID QTR AMOUNT_SOLD
------- --- -----------
    100 Q1           10
    100 Q1           20
    100 Q2
    200 Q1           50

The query in this second code example returns prod_id rows and the resulting pivot
columns Q1, Q1_COUNT_TOTAL, Q2, and Q2_COUNT_TOTAL.
For each unique value of prod_id, Q1_COUNT_TOTAL returns the total number of rows
whose QTR value is Q1, and Q2_COUNT_TOTAL returns the total number of rows whose
QTR value is Q2.
SQL> SELECT *
  2  FROM
  3  (SELECT prod_id, qtr, amount_sold
  4   FROM sales2) PIVOT (SUM(amount_sold), COUNT(*) AS count_total
  5   FOR qtr IN ('Q1', 'Q2') )
  6  ORDER BY prod_id DESC;

PROD_ID  Q1 Q1_COUNT_TOTAL  Q2 Q2_COUNT_TOTAL
------- --- -------------- --- --------------
    100  30              2                  1
    200  50              1                  0

The result set for the second code example shows that there are two sales rows for
prod_id 100 for quarter Q1, and one sales row for prod_id 100 and quarter Q2.
For prod_id 200, there is one sales row for quarter Q1 and no sales row for quarter Q2.
Using Q2_COUNT_TOTAL, you can identify that the NULL for PROD_ID 100 in Q2 is the
result of a row in the original table whose measure is of NULL value. The NULL for
PROD_ID 200 in Q2 is due to no row being present in the original table for prod_id
200 in quarter Q2.

You use the XML keyword to specify pivot values. You can do this using either of two
methods:

the ANY keyword

a subquery
the ANY keyword

If you use the ANY keyword, the XML string for each row includes only the pivot values
found in the input data for that row.
a subquery
If you use a subquery, the XML string includes all pivot values found by the subquery, even
if there are no aggregate values.
Each output row will include

the implicit Group By columns

a single column of XMLType containing an XML string for all value and measure pairs
The XML string for each row will hold aggregated data corresponding to the row's implicit
GROUP BY value. The values of the pivot column are evaluated at execution time.
The ANY keyword acts as a wildcard. If you specify ANY, all values found in the pivot
column will be used for pivoting.
When you use the ANY keyword, the XML string for each output row will include only the
pivot values found in the input data corresponding to that row.
This example shows the use of the ANY wildcard keyword.
SQL> SET LONG 1024
SQL> SELECT *
  2  FROM
  3  (SELECT product, channel, quantity_sold
  4   FROM sales_view
  5  ) PIVOT XML (SUM(quantity_sold) FOR channel IN (ANY))
  6  ORDER BY product DESC;
The XML output includes all channel values in the sales_view view. The ANY keyword is
available only in PIVOT operations as part of an XML operation. This output includes data
for cases where the channel exists in the data set.
You can use wildcards or subqueries to specify the pivot IN list members when the
values of the pivot column are not known.
PRODUCT
--------------------------------------------------
CHANNEL_XML
------------------------------------------------------------------
. . .
1.44MB External 3.5" Diskette
<PivotSet>
<item><column name = "CHANNEL">3</column><column name =
"SUM(QUANTITY_SOLD)">14189</column></item>
<item><column name = "CHANNEL">2</column><column name =
"SUM(QUANTITY_SOLD)">6455</column></item>
<item><column name = "CHANNEL">4</column><column name =
"SUM(QUANTITY_SOLD)">2464</column></item></PivotSet>

71 rows selected.
This example shows how to specify PIVOT values using a subquery.
SQL> SELECT *
  2  FROM
  3  (SELECT product, channel, quantity_sold
  4   FROM sales_view
  5  ) PIVOT XML (SUM(quantity_sold)
  6   FOR channel IN (SELECT DISTINCT channel_id
  7                   FROM sh.channels));
This code shows part of the output after running the query. The XML output includes all
channel values and the sales data corresponding to each channel and for each product.
PRODUCT
----------
CHANNEL_XML
----------------------------------------------------------------
. . .
Y Box
<PivotSet>
<item><column name = "CHANNEL">9</column><column name =
"SUM(QUANTITY_SOLD)">1</column></item>
<item><column name = "CHANNEL">2</column><column name =
"SUM(QUANTITY_SOLD)">2037</column></item>
<item><column name = "CHANNEL">5</column><column name =
"SUM(QUANTITY_SOLD)"></column></item>
<item><column name = "CHANNEL">3</column><column name =
"SUM(QUANTITY_SOLD)">3552</column></item>
. . .
Subquery-based pivots give results different from those of the ANY wildcard. In this
example, when you use a subquery, the XMLType column will show value and measure
pairs for all channels for each product even if the input data has no such
product/channel combination.
For example, the XML string in this example shows Channel 5, although it has no value
for the SUM(QUANTITY_SOLD) column. Pivots that use a subquery will, therefore, often
have longer output than queries based on the ANY keyword.

Depending on how you process the query results, subquery-style output may be more
convenient to work with than the results derived from ANY.

4. Performing UNPIVOT operations


This example shows two tables, which demonstrate how unpivoting works.
This example unpivots the QUARTER column in the first table. This turns the quarter
columns into the values of the QUARTER column in the second table.
An UNPIVOT operation does not reverse a PIVOT operation. Instead, it rotates data found
in multiple columns of a single row into multiple rows of a single column.

If you are working with pivoted data, an UNPIVOT operation cannot reverse any
aggregations that have been made by PIVOT or any other means.
In this example, the first table contains two rows before the unpivoting operation. In the
second table, the unpivoting operation on the QUARTER column displays five rows.
Unpivoting generally transforms fewer rows of input into more rows.
Data from sources such as spreadsheets and flat files is often in pivoted form. For
instance, sales data will often be stored in a separate column for each time period.
UNPIVOT can normalize such data, transforming multiple columns into a single column.
When the data is normalized with UNPIVOT, it is much more accessible to relational
database processing with SQL. By placing data in a normalized layout, queries can
readily apply SQL aggregate and analytic functions, enabling powerful analysis. Similarly,
it is more efficient to specify the WHERE clause predicates on normalized data.
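As a sketch of that efficiency point (assuming the pivotedtable created later in this topic), a single predicate on the unpivoted column covers every quarter, whereas the pivoted layout would need one predicate per quarter column:

```sql
-- One predicate filters all quarters after unpivoting; on the pivoted
-- table the same filter would need Q1 > 4000 OR Q2 > 4000 OR ...
SELECT product, quarter, quantity_sold
FROM pivotedtable
UNPIVOT (quantity_sold FOR quarter IN (Q1, Q2, Q3, Q4))
WHERE quantity_sold > 4000;
```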
The UNPIVOT clause rotates columns from a previously pivoted table or a regular table
into rows.
When using the UNPIVOT operator, you need to specify

the measure columns to be unpivoted

the names for the columns that will result from the unpivot operation

the columns that will be unpivoted back into values of the column specified in the
unpivot_for_clause
You can use an alias to map the column name to another value.
The UNPIVOT operation turns a set of value columns into one column. The data types of
all the value columns must be of the same data type, such as numeric or character.
If all the value columns are CHAR, the unpivoted column is CHAR. If any value column is
VARCHAR2, the unpivoted column is VARCHAR2. If all the value columns are NUMBER, the
unpivoted column is NUMBER.
If any value column is BINARY_DOUBLE, the unpivoted column is BINARY_DOUBLE. If no
value column is BINARY_DOUBLE, but any value column is BINARY_FLOAT, the
unpivoted column is BINARY_FLOAT.
The UNPIVOT clause rotates columns into rows.
The {INCLUDE | EXCLUDE} NULLS clause gives you the option of including or
excluding null-valued rows. INCLUDE NULLS causes the unpivot operation to
include null-valued rows; EXCLUDE NULLS eliminates null-valued rows from the
return set. If you omit this clause, the unpivot operation excludes nulls.
For column, you can specify a name for each output column that will hold measure
values, such as sales_quantity.
table_reference UNPIVOT [{INCLUDE|EXCLUDE} NULLS]
( { column | ( column [, column]... ) }
unpivot_for_clause
unpivot_in_clause )
You use the unpivot_for_clause to specify one or more names for the columns that
will result from the unpivot operation. For example, you can specify a name for each
output column that will hold descriptor values, such as quarter or product.
unpivot_for_clause =
FOR { column | ( column [, column]... ) }
In the unpivot_in_clause, you can specify the input data columns whose names will
become values in the output columns of the unpivot_for_clause. These input data
columns can have names specifying a category value, such as Q1, Q2, Q3, and Q4. The
optional alias enables you to map the column name to any desired value.
unpivot_in_clause =
( { column | ( column [, column]... ) }
[ AS { constant | ( constant [, constant]... ) } ]
[, { column | ( column [, column]... ) }
[ AS { constant | ( constant [, constant]...) } ] ]...)
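As a sketch of the NULLS option described above (assuming the pivotedtable created in the next example), INCLUDE NULLS keeps rows for product/quarter combinations whose pivoted value is null, which the default behavior would drop:

```sql
-- With INCLUDE NULLS, a product with no sales in some quarter still
-- produces a row whose QUANTITY_SOLD is NULL.
SELECT *
FROM pivotedtable
UNPIVOT INCLUDE NULLS
        (quantity_sold FOR quarter IN (Q1, Q2, Q3, Q4))
ORDER BY product DESC, quarter;
```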
This code example creates a new table named pivotedtable in the SH schema.
SQL> CREATE TABLE pivotedtable AS
  2  SELECT *
  3  FROM
  4  (SELECT product, quarter, quantity_sold
  5   FROM sales_view) PIVOT (SUM(quantity_sold)
  6   FOR quarter IN ('01' AS Q1, '02' AS Q2,
  7   '03' AS Q3, '04' AS Q4));

Table created.
SQL> SELECT * FROM pivotedtable
  2  ORDER BY product DESC;

PRODUCT                 Q1         Q2         Q3         Q4
--------------- ---------- ---------- ---------- ----------
Y Box                 1455       1766       1716       1992
Xtend Memory          3146       4121       4122       3802
. . .
71 rows selected.
This code shows how to unpivot on the QUARTER column.
In this example, the QUARTER column was formatted using the COLUMN quarter
FORMAT A7 command.
SQL> SELECT *
  2  FROM pivotedtable
  3  UNPIVOT (quantity_sold FOR quarter IN (Q1, Q2, Q3, Q4))
  4  ORDER BY product DESC, quarter;

PRODUCT                    QUARTER QUANTITY_SOLD
-------------------------- ------- -------------
Y Box                      Q1               1455
Y Box                      Q2               1766
Y Box                      Q3               1716
Y Box                      Q4               1992
Xtend Memory               Q1               3146
Xtend Memory               Q2               4121
Xtend Memory               Q3               4122
Xtend Memory               Q4               3802
Unix/Windows 1-user pack   Q1               4259
. . .
284 rows selected.

Prior to Oracle Database 11g, you could simulate the UNPIVOT syntax using these
existing SQL commands.
As with PIVOT, the UNPIVOT syntax enables more efficient query processing. The
UNPIVOT keyword alerts the query optimizer to the desired behavior. As a result, the
optimizer calls highly efficient algorithms.
SQL> SELECT product, 'Q1' AS quarter, Q1 AS quantity_sold
  2  FROM pivotedtable WHERE Q1 IS NOT NULL
  3  UNION ALL
  4  SELECT product, 'Q2' AS quarter, Q2 AS quantity_sold
  5  FROM pivotedtable WHERE Q2 IS NOT NULL
  6  UNION ALL
  7  SELECT product, 'Q3' AS quarter, Q3 AS quantity_sold
  8  FROM pivotedtable WHERE Q3 IS NOT NULL
  9  UNION ALL
 10  SELECT product, 'Q4' AS quarter, Q4 AS quantity_sold
 11  FROM pivotedtable WHERE Q4 IS NOT NULL;
The UNPIVOT clause enables you to restore a pivoted table, or a table with similar
structure, so that selected columns are pivoted into values in a single column.
This example shows how the UNPIVOT clause has been used to unpivot the
ORDER_MODE column in the OE schema.
In this example, column aliases have been used.
SQL> SELECT * FROM pivot_table
  2  UNPIVOT (yearly_total FOR order_mode IN
  3  (store AS 'direct', internet AS 'online'))
  4  ORDER BY year, order_mode;

 YEAR ORDER_MODE YEARLY_TOTAL
----- ---------- ------------
 1990 direct          61655.7
 1996 direct           5546.6
 1997 direct              310
 1998 direct         309929.8
 1998 online         100056.6
 1999 direct        1274078.8
 1999 online        1271019.5
 2000 direct         252108.3
 2000 online         393349.4

9 rows selected.

Question
What are the functions of the UNPIVOT operator?
Options:
1.

It can be used to normalize data

2.

It reverses a PIVOT operation

3.

It reverses existing aggregations

4.

It rotates data from columns into rows

Answer
The UNPIVOT operator is used to rotate data from columns into rows, or to
normalize data.

Option 1 is correct. Data from sources such as spreadsheets and flat files is often
in pivoted form. UNPIVOT can normalize such data, transforming multiple columns
into a single column.
Option 2 is incorrect. An UNPIVOT does not reverse a PIVOT operation. Instead, it
rotates data from columns into rows.
Option 3 is incorrect. If you are working with pivoted data, an UNPIVOT operation
cannot reverse any aggregations that have been made by PIVOT or any other
means.
Option 4 is correct. An UNPIVOT operation does not reverse a PIVOT operation.
Instead, it rotates data found in multiple columns of a single row into multiple rows
of a single column.

Question
Suppose the data type of one value column is BINARY_DOUBLE and the data type
of the remainder of the value columns is BINARY_FLOAT.
When unpivoting these columns, what is the resulting unpivoted column data
type?
Options:
1.

BINARY_DOUBLE

2.

BINARY_FLOAT

3.

CHAR

4.

NUMBER

5.

VARCHAR2

Answer
The data type of the resulting unpivoted column will be BINARY_DOUBLE.
Option 1 is correct. If the data type of any value column is BINARY_DOUBLE, the
resulting unpivoted column data type will be BINARY_DOUBLE.
Option 2 is incorrect. If no value column has the data type of BINARY_DOUBLE,
and the data type of any value column is BINARY_FLOAT, the resulting unpivoted
column data type will be BINARY_FLOAT.
Option 3 is incorrect. If the data type of all value columns is CHAR, the resulting
unpivoted column data type will be CHAR.

Option 4 is incorrect. If the data type of all value columns is NUMBER, the resulting
unpivoted column data type will be NUMBER.
Option 5 is incorrect. If the data type of any value column is VARCHAR2, the
resulting unpivoted column data type will be VARCHAR2.

5. Unpivoting multiple columns


You can use the UNPIVOT clause to unpivot on multiple columns.
This example shows the code used to create a pivot table, named multi_col_pivot,
using the CHANNEL and QUARTER columns in the SH schema.
The example uses only the CHANNEL values 3 (Direct Sales) and 4 (Internet), and
only the Q1 value for the QUARTER column.
The query in this example returns 71 rows.
SQL> CREATE TABLE multi_col_pivot AS
  2  SELECT *
  3  FROM
  4    (SELECT product, channel, quarter, quantity_sold
  5     FROM sales_view) PIVOT (sum(quantity_sold) FOR (channel,
  6       quarter) IN ((3, '01') AS Direct_Sales_Q1,
  7                    (4, '01') AS Internet_Sales_Q1))
  8  ORDER BY product DESC;

Table created.
SQL> SELECT *
  2  FROM multi_col_pivot;

PRODUCT               DIRECT_SALES_Q1 INTERNET_SALES_Q1
--------------------- --------------- -----------------
Y Box                             771               253
Xtend Memory                     1935               350
. . .
71 rows selected.
However, you can use other values for both columns. This code shows the structure of
the newly created table.
SQL> DESCRIBE multi_col_pivot
Name                  Null?    Type
--------------------- -------- ------------------------
PRODUCT               NOT NULL VARCHAR2(50)
DIRECT_SALES_Q1                NUMBER
INTERNET_SALES_Q1              NUMBER

This example shows the code used to unpivot the CHANNEL and QUARTER columns using
the multi_col_pivot table.
Here, explicit values have been used for the unpivoted CHANNEL and QUARTER columns.
The query in this example returns 142 rows.
SQL> SELECT *
  2  FROM multi_col_pivot
  3  UNPIVOT (quantity_sold FOR (channel, quarter) IN
  4    ( Direct_Sales_Q1 AS ('Direct', 'Q1'),
  5      Internet_Sales_Q1 AS ('Internet', 'Q1') ) )
  6  ORDER BY product DESC, quarter;

PRODUCT                   CHANNEL  QUARTER QUANTITY_SOLD
------------------------- -------- ------- -------------
Y Box                     Internet Q1                253
Y Box                     Direct   Q1                771
Xtend Memory              Internet Q1                350
Xtend Memory              Direct   Q1               1935
. . .
142 rows selected.

This example demonstrates unpivoting on the CHANNEL and QUARTER columns without
using aliases as explicit values for the unpivoted columns.
In this case, each unpivoted column uses the column name as its value.
SQL> SELECT *
  2  FROM multi_col_pivot
  3  UNPIVOT (quantity_sold FOR (channel, quarter) IN
  4    (Direct_Sales_Q1, Internet_Sales_Q1 ) );

PRODUCT      CHANNEL           QUARTER           QUANTITY_SOLD
------------ ----------------- ----------------- -------------
Y Box        DIRECT_SALES_Q1   DIRECT_SALES_Q1             771
Y Box        INTERNET_SALES_Q1 INTERNET_SALES_Q1           253
Xtend Memory DIRECT_SALES_Q1   DIRECT_SALES_Q1            1935
...
142 rows selected.

This example shows the code used to create the multi_agg_pivot table using the
CHANNEL column, and the amount_sold and quantity_sold measures in the SH
schema.
This example uses only the CHANNEL values 3 (Direct Sales) and 4 (Internet), but
you can use other values for the CHANNEL column.
In this case, the query creates column headings by concatenating the pivot columns with
the aliases of the aggregate functions, plus an underscore.
SQL> CREATE TABLE multi_agg_pivot AS
  2  SELECT *
  3  FROM
  4    (SELECT product, channel, quarter, quantity_sold, amount_sold
  5     FROM sales_view) PIVOT
  6      (sum(quantity_sold) sumq, sum(amount_sold) suma
  7       FOR channel IN (3 AS Direct, 4 AS Internet) )
  8  ORDER BY product DESC;

Table created.

SQL> SELECT * FROM multi_agg_pivot;

PRODUCT    QUARTER DIRECT_SUMQ DIRECT_SUMA INTERNET_SUMQ INTERNET_SUMA
---------- ------- ----------- ----------- ------------- -------------
. . .
Bounce     01             1000    21738.97           347       6948.76
Bounce     02             1212    26417.37           453       9173.59
Bounce     03             1746    37781.27           528      10029.99
Bounce     04             1741    38838.63           632      12592.07
. . .
283 rows selected.
When you use multiple aggregations, you can omit the alias for at most one
aggregation. If you omit an alias, the corresponding result column name is the pivot
value, or the alias for the pivot value.
This code shows the structure of the newly created multi_agg_pivot table.
SQL> DESCRIBE multi_agg_pivot
Name                Null?    Type
------------------- -------- ---------------------------
PRODUCT             NOT NULL VARCHAR2(50)
QUARTER                      VARCHAR2(8)
DIRECT_SUMQ                  NUMBER
DIRECT_SUMA                  NUMBER
INTERNET_SUMQ                NUMBER
INTERNET_SUMA                NUMBER

This example uses the newly created multi_agg_pivot table. This code unpivots the
measures amount_sold and quantity_sold.
Channels are mapped to the value "3" for Direct_sumq and Direct_suma, and to the
value "4" for Internet_sumq and Internet_suma.
The channel mapping is consistent with the values used in the pivot operation that
created the multi_agg_pivot table. However, any values could have been used for the
channel mappings.
SQL> SELECT *
  2  FROM multi_agg_pivot
  3  UNPIVOT ((total_amount_sold, total_quantity_sold)
  4    FOR channel IN ((Direct_sumq, Direct_suma) AS 3,
  5                    (Internet_sumq, Internet_suma) AS 4 ))
  6  ORDER BY product DESC, quarter, channel;

PRODUCT QUARTER CHANNEL TOTAL_AMOUNT_SOLD TOTAL_QUANTITY_SOLD
------- ------- ------- ----------------- -------------------
Bounce  01            3              1000            21738.97
Bounce  01            4               347             6948.76
Bounce  02            3              1212            26417.37
Bounce  02            4               453             9173.59
Bounce  03            3              1746            37781.27
Bounce  03            4               528            10029.99
Bounce  04            3              1741            38838.63
Bounce  04            4               632            12592.07
. . .
566 rows selected.

Summary
The new pivot functionality in Oracle Database 11g enables you to transform multiple
rows of input into fewer rows, generally with fewer columns. When pivoting, an
aggregation operator is applied, enabling the query to condense large data sets into
smaller, more readable results. Pivoting is therefore a key technique in business
intelligence (BI) queries. You can perform pivots on the server side, which can improve
processing speed and reduce network load.
You use the PIVOT clause to write cross-tabulation queries that rotate rows into columns,
aggregating the data in the process of rotation. You use the pivot_in_clause to
specify pivot values. You use the pivot_for_clause to specify one or more columns
whose values are to be pivoted into columns. When pivoting you can specify an alias for
each measure, and you can optionally provide an alias for each pivot column value.
You can pivot multiple columns. You can also pivot using multiple aggregations. Using the
XML keyword in the PIVOT syntax requires using either the ANY wildcard keyword or a
subquery. If you use the ANY keyword, the XML string for each row includes only the pivot
values found in the input data for that row. If you use a subquery, the XML string includes
all the pivot values found by the subquery, even if there are no aggregate values.
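As a sketch of the XML keyword with the ANY wildcard, using the same sales_view source as the earlier examples (actual output varies with your data):

```sql
-- PIVOT ... XML with the ANY wildcard: each result row contains an
-- XMLTYPE string holding only the channel values present for that row
SELECT *
FROM (SELECT product, channel, quantity_sold FROM sales_view)
PIVOT XML (sum(quantity_sold) FOR channel IN (ANY));
```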
An UNPIVOT operation rotates data found in multiple columns of a single row into multiple
rows of a single column. You use the unpivot_for_clause to specify one or more
names for the columns that will result from the unpivot operation. You use the
unpivot_in_clause to specify the input data columns whose names will become
values in the output columns of the unpivot_for_clause. When unpivoting, you can
use an alias to map the column name to another value.
You can use the UNPIVOT clause to unpivot on multiple columns and on multiple
aggregations.

Implementing Pivoting
Learning objective

After completing this topic, you should be able to create reports with the PIVOT operator.

Exercise overview
In this exercise, you're required to identify the code that correctly creates reports using
pivoting.
This involves the following tasks:

creating reports using pivoting

modifying reports using pivoting

Task 1: Creating reports using pivoting


You have created a view based on the SH schema called MY_SALES_VIEW. You now
want to locate the distinct MONTH_YEAR values, and then create a report that pivots on
the MONTH_YEAR column.
You are currently viewing the definition of the newly created view MY_SALES_VIEW.

DESCRIBE my_sales_view
NAME           Null?    Type
-------------- -------- ------------
PRODUCT        NOT NULL VARCHAR2(50)
COUNTRY        NOT NULL VARCHAR2(40)
CHANNEL        NOT NULL NUMBER
MONTH_YEAR              VARCHAR2(43)
AMOUNT_SOLD             NUMBER
QUANTITY_SOLD           NUMBER

6 rows selected.

Step 1 of 2
You want to view the distinct values in the MONTH_YEAR column of the
MY_SALES_VIEW view so you can use them to pivot. The results should be sorted
by MONTH_YEAR.
Which statement will return the required results?
Options:
1.

SELECT DISTINCT month_year FROM my_sales_view;

2.

SELECT DISTINCT month_year ORDER BY month_year;

3.

SELECT DISTINCT month_year FROM my_sales_view ORDER BY month_year;

4.

SELECT month_year FROM my_sales_view ORDER BY month_year;

Result
This statement will return the required results:
SELECT DISTINCT month_year FROM my_sales_view ORDER BY
month_year;
Option 1 is incorrect. This statement will return the distinct MONTH_YEAR values
from the MY_SALES_VIEW view. However, it will not sort them by MONTH_YEAR as
required.
Option 2 is incorrect. This statement will result in an error because a FROM clause
is required in the SELECT statement.
Option 3 is correct. This statement will return all of the distinct MONTH_YEAR
column values from the MY_SALES_VIEW view.

Option 4 is incorrect. To return the distinct values from a column, the DISTINCT
keyword is required in the SELECT statement.

Step 2 of 2
You want to create a report that pivots on the MONTH_YEAR column of the
MY_SALES_VIEW view. The numbers in the pivoted output should be the sum of
the QUANTITY_SOLD for each product for the first and fourth months in 2007.
Which code completes the statement that will create the required report?

SELECT *
FROM
(SELECT product, month_year, quantity_sold
<MISSING CODE>
ORDER BY product DESC;
Options:
1.

FROM my_sales_view) PIVOT (sum(quantity_sold) FOR month_year

2.

FROM my_sales_view) PIVOT (quantity_sold) FOR month_year IN ('2007-01', '2007-04' ))

3.

FROM my_sales_view) PIVOT (sum(quantity_sold) FOR month_year IN ('2007-01', '2007-04' ))

Result
To create the required report, you complete the statement using a PIVOT
statement to return the list of products.
Option 1 is incorrect. To create the required report, IN ('2007-01', '2007-04' )) must be included to complete the FOR clause.
Option 2 is incorrect. To create the required report, the PIVOT statement should
be PIVOT (sum(quantity_sold).
Option 3 is correct. This statement will return a list of products for the
MONTH_YEAR values 2007-01 and 2007-04. The numbers in the pivoted output
are the sum of the QUANTITY_SOLD for each product.
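For reference, the statement skeleton and the correct option combine into the following statement (a sketch; the formatting of the actual report output will vary):

```sql
SELECT *
FROM
  (SELECT product, month_year, quantity_sold
   FROM my_sales_view) PIVOT (sum(quantity_sold) FOR month_year
                              IN ('2007-01', '2007-04' ))
ORDER BY product DESC;
```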

Task 2: Modifying reports using pivoting

You want to create a report that pivots on both the CHANNEL and MONTH_YEAR columns
for the quantity sold.

Step 1 of 2
You want to create a report that pivots on both the CHANNEL and MONTH_YEAR
columns for the quantity sold. You only want to use the CHANNEL column values 3
(Direct Sales) and 4 (Internet), and only the 2007-01 value for the
MONTH_YEAR column.
Which code completes the statement that will create the required report?

SELECT *
FROM
  (SELECT product, channel, month_year, quantity_sold
   FROM my_sales_view) <MISSING CODE>
                       AS Direct_Sales_01_2007,
                       (4, '2007-01') AS Internet_Sales_01_2007))
ORDER BY product DESC;
Options:
1.

(sum(quantity_sold) FOR (channel, month_year) IN ((3, '2007-01')

2.

PIVOT (sum(quantity_sold) FOR (channel, month_year) IN ((3, '2007-01')

3.

PIVOT (sum(quantity_sold) FOR (channel, month_year) ((3, '2007-01')

Result
To create the required report, you complete the statement using the code
PIVOT (sum(quantity_sold) FOR (channel, month_year) IN ((3,
'2007-01')
Option 1 is incorrect. To create the required report, the PIVOT keyword is required
before (sum(quantity_sold).
Option 2 is correct. This statement creates a report that pivots on both the
CHANNEL and MONTH_YEAR columns of the MY_SALES_VIEW view for the quantity
sold. The results will include PRODUCT, DIRECT_SALES_01_2007, and
INTERNET_SALES_01_2007 columns.

Option 3 is incorrect. To create the required report, the IN keyword is required before ((3, '2007-01').
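Combined with the correct option, the complete statement reads (a sketch of the assembled query):

```sql
SELECT *
FROM
  (SELECT product, channel, month_year, quantity_sold
   FROM my_sales_view) PIVOT (sum(quantity_sold)
                              FOR (channel, month_year)
                              IN ((3, '2007-01') AS Direct_Sales_01_2007,
                                  (4, '2007-01') AS Internet_Sales_01_2007))
ORDER BY product DESC;
```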

Step 2 of 2
You want to create a report that displays the product, channel, amount sold, and
quantity sold from the MY_SALES_VIEW view. The report should pivot on the sum
of the amount sold and display the sum of the quantity sold. Only the CHANNEL
column values 3 (Direct Sales) and 4 (Internet) should be used.
Which code completes the statement that will create the required report?

SELECT *
FROM
(SELECT product, channel, amount_sold, quantity_sold
FROM my_sales_view) PIVOT
<MISSING CODE>
ORDER BY product DESC;
Options:
1.

(amount_sold) AS sums, SUM(quantity_sold) as sumq FOR channel IN (3 AS Dir_Sales, 4 AS Int_Sales))

2.

(SUM(amount_sold) AS sums, SUM(quantity_sold) as sumq FOR channel IN (3 AS Dir_Sales))

3.

(SUM(amount_sold) AS sums, SUM(quantity_sold) as sumq FOR channel IN (3 AS Dir_Sales, 4 AS Int_Sales))

Result
To create the required report, you complete the statement to pivot on the sum of
the values.
Option 1 is incorrect. To create the required report, the statement should pivot on
the sum of the values in the AMOUNT_SOLD column and not on just the
AMOUNT_SOLD column itself.
Option 2 is incorrect. To create the required report, the IN clause should specify
that the values returned be from CHANNEL 3 (Direct Sales) and 4
(Internet Sales).
Option 3 is correct. This statement will create the required report, displayed in
PRODUCT, DIR_SALES_SUMS, DIR_SALES_SUMQ, INT_SALES_SUMS, and
INT_SALES_SUMQ columns.
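Assembled with the correct option, the full statement is (a sketch of the completed query):

```sql
SELECT *
FROM
  (SELECT product, channel, amount_sold, quantity_sold
   FROM my_sales_view) PIVOT
   (SUM(amount_sold) AS sums, SUM(quantity_sold) AS sumq
    FOR channel IN (3 AS Dir_Sales, 4 AS Int_Sales))
ORDER BY product DESC;
```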

You have successfully created and modified reports using the PIVOT operator in Oracle
Database 11g.
