PL/SQL 101

Bulk Processing with BULK COLLECT and FORALL
By Steven Feuerstein
Oracle Magazine, September/October 2012

Part 9 in a series of articles on understanding and using PL/SQL

In the previous article in this series, I introduced readers to PL/SQL collections. These data structures come in very handy when implementing algorithms that manipulate lists of program data, but they are also key elements in some of the powerful performance optimization features in PL/SQL.
In this article, I will cover the two most important of these features: BULK COLLECT and FORALL.
- BULK COLLECT: SELECT statements that retrieve multiple rows with a single fetch, improving the speed of data retrieval
- FORALL: INSERTs, UPDATEs, and DELETEs that use collections to change multiple rows of data very quickly
You may be wondering what "very quickly" might mean: how much impact do these features really have? Actual results will vary, depending on the version of Oracle Database you are running and the specifics of your application logic. You can download and run the script to compare the performance of row-by-row inserting with FORALL inserting. On my laptop running Oracle Database 11g Release 2, it took 4.94 seconds to insert 100,000 rows, one at a time. With FORALL, those 100,000 rows were inserted in 0.12 seconds. Wow!
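Such a comparison script is easy to sketch. Here is a minimal, hypothetical version of the kind of timing harness described above (it assumes a parts table with a single numeric part_id column, which is not part of this article's schema; DBMS_UTILITY.get_time reports elapsed time in hundredths of a second):

```sql
DECLARE
   TYPE numlist_t IS TABLE OF PLS_INTEGER INDEX BY PLS_INTEGER;

   l_ids     numlist_t;
   l_start   PLS_INTEGER;
BEGIN
   FOR indx IN 1 .. 100000
   LOOP
      l_ids (indx) := indx;
   END LOOP;

   -- Row-by-row: one context switch per INSERT
   l_start := DBMS_UTILITY.get_time;

   FOR indx IN 1 .. l_ids.COUNT
   LOOP
      INSERT INTO parts (part_id) VALUES (l_ids (indx));
   END LOOP;

   DBMS_OUTPUT.put_line (
      'Row by row: ' || (DBMS_UTILITY.get_time - l_start));
   ROLLBACK;

   -- FORALL: all 100,000 INSERTs sent across in a single context switch
   l_start := DBMS_UTILITY.get_time;

   FORALL indx IN 1 .. l_ids.COUNT
      INSERT INTO parts (part_id) VALUES (l_ids (indx));

   DBMS_OUTPUT.put_line (
      'FORALL:     ' || (DBMS_UTILITY.get_time - l_start));
   ROLLBACK;
END;
/
```

The ROLLBACK between the two runs keeps the table in the same state for both tests, so the comparison is apples to apples.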
Given that PL/SQL is so tightly integrated with the SQL language, you might be wondering why special features would be needed to improve the performance of SQL statements inside PL/SQL. The explanation has everything to do with how the runtime engines for both PL/SQL and SQL communicate with each other: through a context switch.

Context Switches and Performance

Almost every program PL/SQL developers write includes both PL/SQL and SQL statements. PL/SQL statements are run by the PL/SQL statement executor; SQL statements are run by the SQL statement executor. When the PL/SQL runtime engine encounters a SQL statement, it stops and passes the SQL statement over to the SQL engine. The SQL engine executes the SQL statement and returns information back to the PL/SQL engine (see Figure 1). This transfer of control is called a context switch, and each one of these switches incurs overhead that slows down the overall performance of your programs.

Answer to the Challenge

The PL/SQL Challenge question in last issue's Working with Collections article tested your knowledge of iterating through the contents of a sparsely populated collection. Choice (c) is the only correct choice, and offers the simplest algorithm for accomplishing this task:

DECLARE
   l_names   DBMS_UTILITY.maxname_array;
BEGIN
   l_names (1) := 'Strawberry';
   l_names (10) := 'Blackberry';
   l_names (2) := 'Raspberry';

   DECLARE
      indx   PLS_INTEGER := l_names.FIRST;
   BEGIN
      WHILE (indx IS NOT NULL)
      LOOP
         DBMS_OUTPUT.put_line (l_names (indx));
         indx := l_names.NEXT (indx);
      END LOOP;
   END;
END;
/

http://www.oracle.com/technetwork/issue-archive/2012/12-sep/o52plsql-1709862.html 1 / 23
Figure 1: Switching between PL/SQL and SQL engines

Let's look at a concrete example to explore context switches more thoroughly and identify the reason that FORALL and BULK COLLECT can have such a dramatic impact on performance.
Suppose my manager asked me to write a procedure that accepts a department ID and a salary percentage increase and gives everyone in that department a raise by the specified percentage. Taking advantage of PL/SQL's elegant cursor FOR loop and the ability to call SQL statements natively in PL/SQL, I come up with the code in Listing 1.
Code Listing 1: increase_salary procedure with FOR loop
PROCEDURE increase_salary (
department_id_in IN employees.department_id%TYPE,
increase_pct_in IN NUMBER)
IS
BEGIN
FOR employee_rec
IN (SELECT employee_id
FROM employees
WHERE department_id =
increase_salary.department_id_in)
LOOP
UPDATE employees emp
SET emp.salary = emp.salary +
emp.salary * increase_salary.increase_pct_in
WHERE emp.employee_id = employee_rec.employee_id;
END LOOP;
END increase_salary;

Suppose there are 100 employees in department 15. When I execute this block,
BEGIN
increase_salary (15, .10);
END;

the PL/SQL engine will switch over to the SQL engine 100 times, once for each row being updated. Tom Kyte, of AskTom (asktom.oracle.com), refers to row-by-row switching like this as "slow-by-slow processing," and it is definitely something to be avoided.
I will show you how you can use PL/SQL's bulk processing features to escape from slow-by-slow processing. First, however, you should always check to see if it is possible to avoid the context switching between PL/SQL and SQL by doing as much of the work as possible within SQL.
Take another look at the increase_salary procedure. The SELECT statement identifies all the
employees in a department. The UPDATE statement executes for each of those employees,
applying the same percentage increase to all. In such a simple scenario, a cursor FOR loop is
not needed at all. I can simplify this procedure to nothing more than the code in Listing 2.
Code Listing 2: Simplified increase_salary procedure without FOR loop
PROCEDURE increase_salary (
department_id_in IN employees.department_id%TYPE,
increase_pct_in IN NUMBER)
IS
BEGIN
UPDATE employees emp
SET emp.salary =
emp.salary
+ emp.salary * increase_salary.increase_pct_in
WHERE emp.department_id =
increase_salary.department_id_in;
END increase_salary;

Now there is just a single context switch to execute one UPDATE statement. All the work is done
in the SQL engine.
Of course, in most real-world scenarios, life (and code) is not so simple. We often need to perform other steps prior to execution of our data manipulation language (DML) statements. Suppose, for example, that in the case of the increase_salary procedure, I need to check employees for eligibility for the salary increase and, if they are ineligible, send an e-mail notification. My procedure might then look like the version in Listing 3.
Code Listing 3: increase_salary procedure with eligibility checking added
PROCEDURE increase_salary (
department_id_in IN employees.department_id%TYPE,
increase_pct_in IN NUMBER)
IS
l_eligible BOOLEAN;
BEGIN
FOR employee_rec
IN (SELECT employee_id
FROM employees
WHERE department_id =
increase_salary.department_id_in)
LOOP
check_eligibility (employee_rec.employee_id,
increase_pct_in,
l_eligible);
IF l_eligible
THEN
UPDATE employees emp
SET emp.salary =
emp.salary
+ emp.salary
* increase_salary.increase_pct_in
WHERE emp.employee_id = employee_rec.employee_id;
END IF;
END LOOP;
END increase_salary;

I can no longer do everything in SQL, so am I then resigned to the fate of slow-by-slow processing? Not with BULK COLLECT and FORALL in PL/SQL.
Bulk Processing in PL/SQL
The bulk processing features of PL/SQL are designed specifically to reduce the number of
context switches required to communicate from the PL/SQL engine to the SQL engine.
Use the BULK COLLECT clause to fetch multiple rows into one or more collections with a single
context switch.
Use the FORALL statement when you need to execute the same DML statement repeatedly for
different bind variable values. The UPDATE statement in the increase_salary procedure fits this
scenario; the only thing that changes with each new execution of the statement is the employee
ID.
I will use the code in Listing 4 to explain how these features affect context switches and how you
will need to change your code to take advantage of them.
Code Listing 4: Bulk processing for the increase_salary procedure
1 CREATE OR REPLACE PROCEDURE increase_salary (
2 department_id_in IN employees.department_id%TYPE,
3 increase_pct_in IN NUMBER)
4 IS
5 TYPE employee_ids_t IS TABLE OF employees.employee_id%TYPE
6 INDEX BY PLS_INTEGER;
7 l_employee_ids employee_ids_t;
8 l_eligible_ids employee_ids_t;
9
10 l_eligible BOOLEAN;
11 BEGIN
12 SELECT employee_id
13 BULK COLLECT INTO l_employee_ids
14 FROM employees
15 WHERE department_id = increase_salary.department_id_in;
16
17 FOR indx IN 1 .. l_employee_ids.COUNT
18 LOOP
19 check_eligibility (l_employee_ids (indx),
20 increase_pct_in,
21 l_eligible);
22
23 IF l_eligible
24 THEN
25 l_eligible_ids (l_eligible_ids.COUNT + 1) :=
26 l_employee_ids (indx);
27 END IF;
28 END LOOP;
29
30 FORALL indx IN 1 .. l_eligible_ids.COUNT
31 UPDATE employees emp
32 SET emp.salary =
33 emp.salary
34 + emp.salary * increase_salary.increase_pct_in
35 WHERE emp.employee_id = l_eligible_ids (indx);
36 END increase_salary;

Lines   Description

5-8     Declare a new nested table type and two collection variables based on this type.
        One variable, l_employee_ids, will hold the IDs of all employees in the department.
        The other, l_eligible_ids, will hold the IDs of all those employees who are eligible
        for the salary increase.

12-15   Use BULK COLLECT to fetch all the IDs of employees in the specified department
        into the l_employee_ids collection.

17-28   Check for salary increase eligibility: if ineligible, an e-mail is sent. (Note:
        the implementation of check_eligibility is not included in this article.) If
        eligible, add the ID to the l_eligible_ids collection.

30-35   Use a FORALL statement to update all the rows identified by employee IDs in the
        l_eligible_ids collection.

Listing 4 also contains an explanation of the code in this new-and-improved increase_salary procedure. There are three phases of execution:
1. Fetch rows with BULK COLLECT into one or more collections. A single context switch is needed for this step.
2. Modify the contents of the collections as required (in this case, remove ineligible employees).
3. Change the table with FORALL, using the modified collections.
Rather than move back and forth between the PL/SQL and SQL engines to update each row,
FORALL bundles up all the updates and passes them to the SQL engine with a single context
switch. The result is an extraordinary boost in performance.
I will first explore BULK COLLECT in more detail, and then cover FORALL.
About BULK COLLECT
To take advantage of bulk processing for queries, you simply put BULK COLLECT before the
INTO keyword and then provide one or more collections after the INTO keyword. Here are some
things to know about how BULK COLLECT works:
- It can be used with all three types of collections: associative arrays, nested tables, and VARRAYs.
- You can fetch into individual collections (one for each expression in the SELECT list) or a single collection of records.
- The collection is always populated densely, starting from index value 1.
- If no rows are fetched, then the collection is emptied of all elements.
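One practical consequence of that last point is worth noting: unlike a SELECT ... INTO query, a BULK COLLECT query that finds no rows does not raise NO_DATA_FOUND; you test the collection's COUNT instead. A minimal sketch against the employees table:

```sql
DECLARE
   TYPE ids_t IS TABLE OF employees.employee_id%TYPE;

   l_ids   ids_t;
BEGIN
   SELECT employee_id
     BULK COLLECT INTO l_ids
     FROM employees
    WHERE 1 = 2;   -- deliberately matches no rows

   -- No exception is raised; the collection is simply empty.
   DBMS_OUTPUT.put_line ('Rows fetched: ' || l_ids.COUNT);
END;
/
```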
Listing 5 demonstrates an example of fetching values for two columns into a collection of
records.
Code Listing 5: Fetching values for two columns into a collection
DECLARE
TYPE two_cols_rt IS RECORD
(
employee_id employees.employee_id%TYPE,
salary employees.salary%TYPE
);
TYPE employee_info_t IS TABLE OF two_cols_rt;
l_employees employee_info_t;
BEGIN
SELECT employee_id, salary
BULK COLLECT INTO l_employees
FROM employees
WHERE department_id = 10;
END;

If you are fetching lots of rows, the collection that is being filled could consume too much session
memory and raise an error. To help you avoid such errors, Oracle Database offers a LIMIT clause
for BULK COLLECT. Suppose that, for example, there could be tens of thousands of employees
in a single department and my session does not have enough memory available to store 20,000
employee IDs in a collection.
Instead I use the approach in Listing 6.
Code Listing 6: Fetching up to the number of rows specified
DECLARE
   c_limit   PLS_INTEGER := 100;

   CURSOR employees_cur
   IS
      SELECT employee_id
        FROM employees
       WHERE department_id = department_id_in;

   TYPE employee_ids_t IS TABLE OF
      employees.employee_id%TYPE;

   l_employee_ids   employee_ids_t;
BEGIN
   OPEN employees_cur;

   LOOP
      FETCH employees_cur
         BULK COLLECT INTO l_employee_ids
         LIMIT c_limit;

      EXIT WHEN l_employee_ids.COUNT = 0;

      -- Process the contents of l_employee_ids here.
   END LOOP;

   CLOSE employees_cur;
END;

With this approach, I open the cursor that identifies all the rows I want to fetch. Then, inside a
loop, I use FETCH-BULK COLLECT-INTO to fetch up to the number of rows specified by the
c_limit constant (set to 100). Now, no matter how many rows I need to fetch, my session will
never consume more memory than that required for those 100 rows, yet I will still benefit from the
improvement in performance of bulk querying.
About FORALL
Whenever you execute a DML statement inside of a loop, you should convert that code to use
FORALL. The performance improvement will amaze you and please your users.
The FORALL statement is not a loop; it is a declarative statement to the PL/SQL engine: "Generate all the DML statements that would have been executed one row at a time, and send them all across to the SQL engine with one context switch."
As you can see in Listing 4, lines 30 through 35, the header of the FORALL statement looks just
like a numeric FOR loop, yet there are no LOOP or END LOOP keywords.
Here are some things to know about FORALL:
- Each FORALL statement may contain just a single DML statement. If your loop contains two updates and a delete, then you will need to write three FORALL statements.
- PL/SQL declares the FORALL iterator (indx on line 30 in Listing 4) as an integer, just as it does with a FOR loop. You do not need to (and you should not) declare a variable with this same name.
- In at least one place in the DML statement, you need to reference a collection and use the FORALL iterator as the index value in that collection (see line 35 in Listing 4).
- When using the IN low_value .. high_value syntax in the FORALL header, the collections referenced inside the FORALL statement must be densely filled. That is, every index value between the low_value and high_value must be defined.
- If your collection is not densely filled, you should use the INDICES OF or VALUES OF syntax in your FORALL header.
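To illustrate the first point above, here is a sketch of how a loop body containing both an UPDATE and a DELETE would be split into two FORALL statements; l_raise_ids and l_retired_ids are hypothetical collections of employee IDs assumed to have been populated earlier:

```sql
-- One FORALL per DML statement: the UPDATE and the DELETE
-- each get their own bulk bind.
FORALL indx IN 1 .. l_raise_ids.COUNT
   UPDATE employees emp
      SET emp.salary = emp.salary * 1.1
    WHERE emp.employee_id = l_raise_ids (indx);

FORALL indx IN 1 .. l_retired_ids.COUNT
   DELETE FROM employees emp
    WHERE emp.employee_id = l_retired_ids (indx);
```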
FORALL and DML Errors
Suppose that I've written a program that is supposed to insert 10,000 rows into a table. After
inserting 9,000 of those rows, the 9,001st insert fails with a DUP_VAL_ON_INDEX error (a
unique index violation). The SQL engine passes that error back to the PL/SQL engine, and if the
FORALL statement is written like the one in Listing 4, PL/SQL will terminate the FORALL
statement. The remaining 999 rows will not be inserted.
If you want the PL/SQL engine to execute as many of the DML statements as possible, even if
errors are raised along the way, add the SAVE EXCEPTIONS clause to the FORALL header.
Then, if the SQL engine raises an error, the PL/SQL engine will save that information in a
pseudocollection named SQL%BULK_EXCEPTIONS, and continue executing statements. When
all statements have been attempted, PL/SQL then raises the ORA-24381 error.
You can (and should) trap that error in the exception section and then iterate through the
contents of SQL%BULK_EXCEPTIONS to find out which errors have occurred. You can then write
error information to a log table and/or attempt recovery of the DML statement.
Listing 7 contains an example of using SAVE EXCEPTIONS in a FORALL statement; in this case,
I simply display on the screen the index in the l_eligible_ids collection on which the error
occurred, and the error code that was raised by the SQL engine.
Code Listing 7: Using SAVE EXCEPTIONS with FORALL
BEGIN
FORALL indx IN 1 .. l_eligible_ids.COUNT SAVE EXCEPTIONS
UPDATE employees emp
SET emp.salary =
emp.salary + emp.salary * increase_pct_in
WHERE emp.employee_id = l_eligible_ids (indx);
EXCEPTION
WHEN OTHERS
THEN
IF SQLCODE = -24381
THEN
FOR indx IN 1 .. SQL%BULK_EXCEPTIONS.COUNT
LOOP
DBMS_OUTPUT.put_line (
SQL%BULK_EXCEPTIONS (indx).ERROR_INDEX
|| ': '
|| SQL%BULK_EXCEPTIONS (indx).ERROR_CODE);
END LOOP;
ELSE
RAISE;
END IF;
END increase_salary;

From SQL to PL/SQL


This article talks mostly about the context switch from the PL/SQL engine to the SQL engine that
occurs when a SQL statement is executed from within a PL/SQL block. It is important to
remember that a context switch also takes place when a user-defined PL/SQL function is invoked
from within an SQL statement.


Suppose that I have written a function named betwnstr that returns the string between a start and
end point. Here's the header of the function:
FUNCTION betwnstr (
string_in IN VARCHAR2
, start_in IN INTEGER
, end_in IN INTEGER
)
RETURN VARCHAR2

I can then call this function as follows:
SELECT betwnstr (last_name, 2, 6)
FROM employees
WHERE department_id = 10

If the employees table has 100 rows and 20 of those have department_id set to 10, then there
will be 20 context switches from SQL to PL/SQL to run this function.
You should, consequently, pay close attention to all invocations of user-defined functions in SQL,
especially those that occur in the WHERE clause of the statement. Consider the following query:
SELECT employee_id
FROM employees
WHERE betwnstr (last_name, 2, 6) = 'MITHY'

In this query, the betwnstr function will be executed 100 timesand there will be 100 context
switches.
FORALL with Sparse Collections
If you try to use the IN low_value .. high_value syntax with FORALL and there is an undefined index value within that range, Oracle Database will raise the "ORA-22160: element at index [N] does not exist" error.
To avoid this error, you can use the INDICES OF or VALUES OF clauses. To see how these clauses can be used, let's go back to the code in Listing 4. In this version of increase_salary, I declare a second collection, l_eligible_ids, to hold the IDs of those employees who are eligible for a raise.

Next Steps
DOWNLOAD the Oracle Database 11g script for this article
TEST your PL/SQL knowledge
READ PL/SQL 101, Parts 1-8
READ more about INDICES OF and VALUES OF
Instead of doing that, I can simply remove all ineligible IDs from the l_employee_ids collection,
as follows:
FOR indx IN 1 .. l_employee_ids.COUNT
LOOP
check_eligibility (l_employee_ids (indx),
increase_pct_in,
l_eligible);
IF NOT l_eligible
THEN
l_employee_ids.delete (indx);
END IF;
END LOOP;

But now my l_employee_ids collection may have gaps in it: index values that are undefined
between 1 and the highest index value populated by the BULK COLLECT.
No worries. I will simply change my FORALL statement to the following:
FORALL indx IN INDICES OF l_employee_ids
UPDATE employees emp
SET emp.salary =
emp.salary
+ emp.salary *
increase_salary.increase_pct_in
WHERE emp.employee_id =
l_employee_ids (indx);

Now I am telling the PL/SQL engine to use only those index values that are defined in
l_employee_ids, rather than specifying a fixed range of values. Oracle Database will simply skip
any undefined index values, and the ORA-22160 error will not be raised.
This is the simplest application of INDICES OF. Check the documentation for more-complex
usages of INDICES OF, as well as when and how to use VALUES OF.
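As a taste of VALUES OF, here is a hypothetical sketch: l_index_list is assumed to be a collection of PLS_INTEGER whose element values (not its index values) name the positions in l_employee_ids to be processed:

```sql
-- VALUES OF: the iterator takes on the *values* stored in
-- l_index_list, and those values are used to index into
-- l_employee_ids.
FORALL indx IN VALUES OF l_index_list
   UPDATE employees emp
      SET emp.salary =
             emp.salary
             + emp.salary * increase_salary.increase_pct_in
    WHERE emp.employee_id = l_employee_ids (indx);
```

This lets you drive a single FORALL from any subset of a collection's elements, in any order, without copying them into a second collection.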
Bulk Up Your Code!
Optimizing the performance of your code can be a difficult and time-consuming task. It can also be a relatively easy and exhilarating experience, if your code has not yet been modified to take advantage of BULK COLLECT and FORALL. In that case, you have some low-hanging fruit to pick!

Take the Challenge

Each PL/SQL 101 article offers a quiz to test your knowledge of the information provided in it. The quiz appears below and also at PL/SQL Challenge, a Website that offers online quizzes on the PL/SQL language as well as SQL and Oracle Application Express.

I create and populate my employees table as follows:
CREATE TABLE plch_employees
(
employee_id INTEGER,
last_name VARCHAR2 (100)
)
/
BEGIN
   INSERT INTO plch_employees
        VALUES (100, 'Picasso');
   INSERT INTO plch_employees
        VALUES (200, 'Mondrian');
   INSERT INTO plch_employees
        VALUES (300, 'O''Keefe');
   COMMIT;
END;
/

Question
Which of these blocks will uppercase the last names of all employees in the table?

a.
DECLARE
TYPE ids_t IS TABLE OF plch_employees.employee_id%TYPE;
l_ids ids_t := ids_t (100, 200, 300);
BEGIN
FORALL indx IN 1 .. l_ids.COUNT
LOOP
UPDATE plch_employees
SET last_name = UPPER (last_name)
WHERE employee_id = l_ids (indx);
END LOOP;
END;
/

b.
DECLARE
   TYPE ids_t IS TABLE OF plch_employees.employee_id%TYPE;
   l_ids ids_t := ids_t (100, 200, 300);
BEGIN
FORALL indx IN 1 .. l_ids.COUNT
UPDATE plch_employees
SET last_name = UPPER (last_name)
WHERE employee_id = l_ids (indx);
END;
/

c.
BEGIN
UPDATE plch_employees
SET last_name = UPPER (last_name);
END;
/
d.
DECLARE
TYPE ids_t IS TABLE OF plch_employees.employee_id%TYPE;
l_ids ids_t := ids_t (100, 200, 300);
BEGIN
FORALL indx IN INDICES OF l_ids
UPDATE plch_employees
SET last_name = UPPER (last_name)
WHERE employee_id = l_ids (indx);
END;
/

Steven Feuerstein (steven.feuerstein@quest.com) is Quest Software's PL/SQL evangelist. He has published 10 books on Oracle PL/SQL (O'Reilly Media) and is an Oracle ACE Director. More information is available at stevenfeuerstein.com.

8 Bulk Update Methods Compared
Submitted by rleishman on Mon, 2010-11-08 17:29 (Oracle FAQ blog, SQL & PL/SQL)

What I love about writing SQL Tuning articles is that I very rarely end up publishing the findings I set out to achieve. With this one, I set out to demonstrate the advantages of PARALLEL DML, didn't find what I thought I would, and ended up testing 8 different techniques to find out how they differed. And guess what? I still didn't get the results I expected. Hey, at least I learned something.

As an ETL designer, I hate updates. They are just plain nasty. I spend an inordinate proportion of the design time of an ETL system worrying about the relative proportion of rows inserted vs. updated. I worry about how ETL tools apply updates (did you know DataStage applies updates singly, but batches inserts in arrays?), how I might cluster rows together that are subject to updates, and what I might do if I just get too many updates to handle.

It would be fair to say I obsess about them. A little bit.

The two most common forms of bulk updates are:
1. Update (almost) every row in the table. This is common when applying data patches and adding new columns.
2. Updating a small proportion of rows in a very large table.

Case 1 is uninteresting. The fastest way to update every row in the table is to rebuild the table from scratch. All of the methods below will perform worse.

Case 2 is common in Data Warehouses and overnight batch jobs. We have a table containing years' worth of data, most of which is static; we are updating selected rows that were recently inserted and are still volatile. This case is the subject of our test. For the purposes of the test, we will assume that the target table of the update is arbitrarily large, and we want to avoid things like full scans and index rebuilds.

And the nominees are...


The methods covered include both PL/SQL and SQL approaches. I want to test on a level playing field and remove special factors that unfairly favour one method, so there are some rules:
- Accumulating data for the update can be arbitrarily complex. SQL updates can have joins with grouping and sub-queries and what-not; PL/SQL can have cursor loops with nested calls to other procedures. I'm not testing the relative merits of how to accumulate the data, so each test will use pre-prepared update data residing in a Global Temporary Table.
- Some methods - such as MERGE - allow the data source to be joined to the update target using SQL. Other methods don't have this capability and must use Primary Key lookups on the update target. To make these methods comparable, the "joinable" techniques will use a Nested Loops join to most closely mimic the Primary Key lookup of the other methods. Even though a Hash join may be faster than Nested Loops for some distributions of data, that is not always the case and - once again - we're assuming an arbitrarily large target table, so a full scan is not necessarily feasible.
- Having said that we're not comparing factors outside of the UPDATE itself, some of the methods do have differences unrelated to the UPDATE. I have included these deliberately because they are reasonably common and have different performance profiles; I wouldn't want anyone to think that because their statements were *similar* to those shown here that they have the same performance profile.
The 8 methods I am benchmarking here are as follows (in rough order of complexity):
1. Explicit Cursor Loop
2. Implicit Cursor Loop
3. UPDATE with nested SET subquery
4. BULK COLLECT / FORALL UPDATE
5. Updateable Join View
6. MERGE
7. Parallel DML MERGE
8. Parallel PL/SQL
For all of the tests, the following table structures will be used:
TEST{n} (Update Source) - 100K rows:

Name                           Type
------------------------------ ------------
PK                             NUMBER
FK                             NUMBER
FILL                           VARCHAR2(40)

TEST (Update target) - 10M rows:

Name                           Type
------------------------------ ------------
PK                             NUMBER
FK                             NUMBER
FILL                           VARCHAR2(40)

The data has the following characteristics:
- TEST.PK will contain values 0 .. 9999999, but not in that order.
- TEST.PK is poorly clustered. It is generated by reversing the digits of LPAD(ROWNUM, 7, '0'). PK values of 1, 2, and 3 are adjacent in the primary key index but one million rows apart in the table.
- TEST.FK will contain values 0 .. 99, unevenly distributed to favour lower numbers.
- For the first round of testing, the column FK will be indexed with a simple b-tree index.

Method 1: Explicit Cursor Loop


Not many people code this way, but there are some Pro*C programmers out there who are used to Explicit Cursor Loops
(OPEN, FETCH and CLOSE commands) and translate these techniques directly to PL/SQL. The UPDATE portion of the code
works in an identical fashion to the Implicit Cursor Loop, so this is not really a separate "UPDATE" method as such. The
interesting thing about this method is that it performs a context-switch between PL/SQL and SQL for every FETCH; this
is less efficient. I include it here because it allows us to compare the cost of context-switches to the cost of updates.
DECLARE
CURSOR c1 IS
SELECT *
FROM test6;

rec_cur c1%rowtype;
BEGIN
OPEN c1;
LOOP
FETCH c1 INTO rec_cur;
EXIT WHEN c1%notfound;

UPDATE test
SET fk = rec_cur.fk
, fill = rec_cur.fill
WHERE pk = rec_cur.pk;
END LOOP;
CLOSE C1;
END;
/

Method 2: Implicit Cursor Loop


This is the simplest PL/SQL method and very common in hand-coded PL/SQL applications. Update-wise, it looks as
though it should perform the same as the Explicit Cursor Loop. The difference is that the Implicit Cursor internally
performs bulk fetches, which should be faster than the Explicit Cursor because of the reduced context switches.
BEGIN
FOR rec_cur IN (
SELECT *
FROM test3
) LOOP
UPDATE test
SET fk = rec_cur.fk
, fill = rec_cur.fill
WHERE pk = rec_cur.pk;
END LOOP;
END;
/

Method 3: UPDATE with nested SET subquery


This method is pretty common. I generally recommend against it for high-volume updates because the SET sub-query is
nested, meaning it is performed once for each row updated. To support this method, I needed to create an index on
TEST8.PK.
UPDATE test
SET (fk, fill) = (
SELECT test8.fk, test8.fill
FROM test8
WHERE pk = test.pk
)
WHERE pk IN (
SELECT pk
FROM test8
);

Method 4: BULK COLLECT / FORALL UPDATE


This one is gaining in popularity. Using BULK COLLECT and FORALL statements is the new de-facto standard for PL/SQL
programmers concerned about performance because it reduces context switching overheads between the PL/SQL and SQL
engines.


The biggest drawback to this method is readability. Since Oracle does not yet provide support for record collections in
FORALL, we need to use scalar collections, making for long declarations, INTO clauses, and SET clauses.
DECLARE
CURSOR rec_cur IS
SELECT *
FROM test4;

TYPE num_tab_t IS TABLE OF NUMBER(38);


TYPE vc2_tab_t IS TABLE OF VARCHAR2(4000);

pk_tab NUM_TAB_T;
fk_tab NUM_TAB_T;
fill_tab VC2_TAB_T;
BEGIN
OPEN rec_cur;
LOOP
FETCH rec_cur BULK COLLECT INTO pk_tab, fk_tab, fill_tab LIMIT 1000;
EXIT WHEN pk_tab.COUNT() = 0;

FORALL i IN pk_tab.FIRST .. pk_tab.LAST


UPDATE test
SET fk = fk_tab(i)
, fill = fill_tab(i)
WHERE pk = pk_tab(i);
END LOOP;
CLOSE rec_cur;
END;
/

Method 5: Updateable Join View


This is really a deprecated pre-9i method; the modern equivalent is the MERGE statement. This needs a unique index on
TEST1.PK in order to enforce key preservation.
UPDATE (
SELECT /*+ ordered use_nl(old)*/ new.pk as new_pk
, new.fk as new_fk
, new.fill as new_fill
, old.*
FROM test1 new
JOIN test old ON (old.pk = new.pk)
)
SET fk = new_fk
, fill = new_fill
/

Method 6: MERGE
The modern equivalent of the Updateable Join View. Gaining in popularity due to its combination of brevity and
performance, it is primarily used to INSERT and UPDATE in a single statement; we are using the update-only version
here. Note that I have included a FIRST_ROWS hint to force an indexed nested loops plan. This keeps the playing field
level when comparing to the other methods, which also perform primary key lookups on the target table. A hash join may
or may not be faster, but that's not the point: I could increase the size of the target TEST table to 500M rows and the
hash join would certainly be slower.
MERGE /*+ FIRST_ROWS*/ INTO test
USING test2 new ON (test.pk = new.pk)
WHEN MATCHED THEN UPDATE SET
fk = new.fk
, fill = new.fill
/

Here is the Explain Plan


-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------------
| 0 | MERGE STATEMENT | | 130K| 9921K| 258K (1)|
| 1 | MERGE | TEST | | | |
| 2 | VIEW | | | | |
| 3 | NESTED LOOPS | | 130K| 11M| 258K (1)|
| 4 | TABLE ACCESS FULL | TEST2 | 128K| 6032K| 172 (5)|
| 5 | TABLE ACCESS BY INDEX ROWID| TEST | 1 | 48 | 2 (0)|
| 6 | INDEX UNIQUE SCAN | TEST_PK | 1 | | 1 (0)|
-------------------------------------------------------------------------------

Method 7: Parallel DML MERGE


Now we're getting clever... This is the MERGE example on steroids. It uses Oracle's Parallel DML capability to spread the
load over multiple slave threads.
ALTER SESSION ENABLE PARALLEL DML;

MERGE /*+ first_rows parallel(test) parallel(test2) */ INTO test
USING test5 new ON (test.pk = new.pk)
WHEN MATCHED THEN UPDATE SET
fk = new.fk
, fill = new.fill
/
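Note that Parallel DML is off by default and must be enabled in each session, and the transaction must be committed (or rolled back) before the session can query the modified table again. A sketch of the session handling around the statement:

```sql
ALTER SESSION ENABLE PARALLEL DML;

-- ... run the parallel MERGE here ...

COMMIT;                              -- required before re-querying TEST
ALTER SESSION DISABLE PARALLEL DML;  -- optional: restore the default
```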

Note the differences in the Explain Plan.


--------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| TQ |IN-OUT| PQ Distrib |
--------------------------------------------------------------------------------------------------------------------
| 0 | MERGE STATEMENT | | 109K| 8325K| 1880 (1)| | | |
| 1 | PX COORDINATOR | | | | | | | |
| 2 | PX SEND QC (RANDOM) | :TQ10002 | 109K| 10M| 1880 (1)| Q1,02 | P->S | QC (RAND) |
| 3 | INDEX MAINTENANCE | TEST | | | | Q1,02 | PCWP | |
| 4 | PX RECEIVE | | 109K| 10M| 1880 (1)| Q1,02 | PCWP | |
| 5| PX SEND RANGE | :TQ10001 | 109K| 10M| 1880 (1)| Q1,01 | P->P | RANGE |
| 6| MERGE | TEST | | | | Q1,01 | PCWP | |
| 7| PX RECEIVE | | 109K| 10M| 1880 (1)| Q1,01 | PCWP | |
| 8| PX SEND HYBRID (ROWID PKEY) | :TQ10000 | 109K| 10M| 1880 (1)| Q1,00 | P->P | HYBRID (ROW|
| 9| VIEW | | | | | Q1,00 | PCWP | |
| 10 | NESTED LOOPS | | 109K| 10M| 1880 (1)| Q1,00 | PCWP | |
| 11 | PX BLOCK ITERATOR | | | | | Q1,00 | PCWC | |
| 12 | TABLE ACCESS FULL | TEST2 | 107K| 5062K| 2 (0)| Q1,00 | PCWP | |
| 13 | TABLE ACCESS BY INDEX ROWID| TEST | 1 | 48 | 0 (0)| Q1,00 | PCWP | |
| 14 | INDEX UNIQUE SCAN | TEST_PK | 1 | | 0 (0)| Q1,00 | PCWP | |
--------------------------------------------------------------------------------------------------------------------

Method 8: Parallel PL/SQL


This is much easier to do with DataStage than with native PL/SQL. The goal is to have several separate sessions applying
UPDATE statements at once, rather than using the sometimes restrictive Parallel DML alternative. It's a bit of a
kludge, but we can do this in PL/SQL using a parallel-enabled table function. Here's the function:
CREATE OR REPLACE FUNCTION test_parallel_update (
test_cur IN SYS_REFCURSOR
)
RETURN test_num_arr
PARALLEL_ENABLE (PARTITION test_cur BY ANY)
PIPELINED
IS
PRAGMA AUTONOMOUS_TRANSACTION;

test_rec TEST%ROWTYPE;
TYPE num_tab_t IS TABLE OF NUMBER(38);
TYPE vc2_tab_t IS TABLE OF VARCHAR2(4000);

pk_tab NUM_TAB_T;
fk_tab NUM_TAB_T;
fill_tab VC2_TAB_T;

cnt INTEGER := 0;
BEGIN
LOOP
FETCH test_cur BULK COLLECT INTO pk_tab, fk_tab, fill_tab LIMIT 1000;
EXIT WHEN pk_tab.COUNT() = 0;

FORALL i IN pk_tab.FIRST .. pk_tab.LAST
UPDATE test
SET fk = fk_tab(i)
, fill = fill_tab(i)
WHERE pk = pk_tab(i);

cnt := cnt + pk_tab.COUNT;
END LOOP;

CLOSE test_cur;

COMMIT;
PIPE ROW(cnt);
RETURN;
END;
/

Note that it receives its data via a Ref Cursor parameter. This is a feature of Oracle's parallel-enabled functions; they will
apportion the rows of a single Ref Cursor amongst many parallel slaves, with each slave running over a different subset of
the input data set.
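The function returns `test_num_arr`, which is not defined in the listing above; it would need to be a SQL-level (schema) collection type, created with something like this (the element type NUMBER is an assumption based on how the function pipes its row count):

```sql
CREATE OR REPLACE TYPE test_num_arr AS TABLE OF NUMBER;
/
```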
Here is the statement that calls the Parallel Enabled Table Function:
SELECT sum(column_value)
FROM TABLE(test_parallel_update(CURSOR(SELECT * FROM test7)));

Note that we are using a SELECT statement to call a function that performs an UPDATE. Yeah, I know, it's nasty. You need
to make the function an AUTONOMOUS TRANSACTION to stop it from throwing an error. But just bear with me, it is the
closest PL/SQL equivalent I can make to a third-party ETL Tool such as DataStage with native parallelism.

And on to the testing....


ROUND 1
In this test, we apply the 100K updated rows in Global Temporary Table TEST{n} to permanent table TEST. There are 3
runs:
Run 1: The buffer cache is flushed and about 1 hour of unrelated statistics gathering has been used to age out the
disk cache.
Run 2: The buffer cache is flushed and the disk cache has been aged out with about 5-10mins of indexed reads.
Timings indicate that the disk cache is still partially populated with blocks used by the query.
Run 3: The buffer cache is pre-salted with the table and blocks it will need. It should perform very little disk IO.
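Flushing the buffer cache for runs 1 and 2 can be done with a single command (requires the ALTER SYSTEM privilege; ageing out the disk cache, as described above, is hardware-dependent and has no equivalent one-liner):

```sql
ALTER SYSTEM FLUSH BUFFER_CACHE;
```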
RUN 1 RUN 2 RUN 3
----------------------------------- ----- ----- -----
1. Explicit Cursor Loop 931.3 783.2 49.3
2. Implicit Cursor Loop 952.7 672.8 40.2
3. UPDATE with nested SET subquery 941.4 891.5 31.5
4. BULK COLLECT / FORALL UPDATE 935.2 826.0 27.9
5. Updateable Join View 933.2 741.0 28.8
6. MERGE 854.6 838.5 28.4
7. Parallel DML MERGE 55.7 46.1 47.7
8. Parallel PL/SQL 28.2 27.2 6.3

The things I found interesting from these results are:


Amongst the non-parallel methods (1-6), context switches only make a significant and noticeable difference with
cached data. With uncached data, the cost of disk reads so far outweighs the context switches that they are barely
noticeable. Context switching - whilst important - is not really a game-changer. This tells me that you should avoid
methods 1 and 2 as a best practice, but it is probably not cost-effective to re-engineer existing method 1/2 code
unless your buffer cache hit ratio is 99+% (i.e. like RUN 3).
There were no significant differences between the 6 non-parallel methods; however, this is not to say the choice is
unimportant. All of these benchmarks perform primary key lookups of the updated table, but it is possible to run
methods 5 and 6 as hash joins with full table scans. If the proportion of blocks updated is high enough, the hash
join can make an enormous difference to the run time. See Appendix 1 for an example.
Parallel updates are a game changer. The reason for this is disk latency. Almost all of the time for RUN 1 and RUN
2 of the non-parallel methods is spent waiting for reads and writes on disk. The IO system of most computers is
designed to serve many requests at a time, but no ONE request can utilise ALL of the resources. So when an
operation runs serially, it only uses a small proportion of the available resources. If there are no other jobs running
then we get poor utilisation. The parallel methods 7 and 8 allow us to tap into these under-utilised resources.
Instead of issuing 100,000 disk IO requests one after the other, these methods allow (say) 100 parallel threads to
perform just 1000 sequential IO requests.
Method 8, which is the equivalent of running many concurrent versions of Method 4 with different data, is
consistently faster than Oracle's Parallel DML. This is worth exploring. Since the non-parallel equivalents (Methods
4 and 6) show no significant performance difference, it is reasonable to expect that parallelising these two
methods will yield similar results. I ran some traces (see Appendix 2) and found that the Parallel Merge was
creating too many parallel threads and suffering from latch contention. Manually reducing the number of parallel
threads made it perform similarly to the Parallel PL/SQL method. The lesson here is that too much parallelism is a
bad thing.

ROUND 2
Let's see how a Foreign Key constraint affects things. For this round, I have created a parent table and a Foreign Key on
the FK column.
For brevity, this time we'll just flush the buffer cache and run about 5 minutes worth of indexed reads to cycle the disk
cache.
RUN 1 RUN 2
----------------------------------- ----- -----
1. Explicit Cursor Loop 887.1 874.6
2. Implicit Cursor Loop 967.0 752.1
3. UPDATE with nested SET subquery 920.1 795.2
4. BULK COLLECT / FORALL UPDATE 840.9 759.2
5. Updateable Join View 727.5 851.8
6. MERGE 807.8 833.6
7. Parallel DML MERGE 26.8 29.2
8. Parallel PL/SQL 25.3 23.8

Summary of findings:
It looks as though there is a small premium associated with checking the foreign key, although it does not appear
to be significant. It's worth noting that the parent table in this case is very small and quickly cached. A very large
parent table would result in a considerably greater number of cache misses and resultant disk IO. Foreign keys are
often blamed for bad performance; whilst they can be limiting in some circumstances (e.g. direct path loads),
updates are not greatly affected when the parent tables are small.
I was expecting the Parallel DML MERGE to be slower. According to the Oracle Database Data Warehousing Guide
- 10g Release 2, INSERT and MERGE are "Not Parallelized" when issued against the child of a Foreign Key constraint,
whereas parallel UPDATE is "supported". As a test, I issued a similar MERGE statement and redundantly included
the WHEN NOT MATCHED THEN INSERT clause: it was not parallelized and ran slower. The lesson here: there may
be merit in applying an upsert (insert else update) as an update-only MERGE followed by an INSERT instead of using
a single MERGE.
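That two-statement upsert might be sketched as follows, using a hypothetical staging table NEW_ROWS (the names are illustrative, not from the benchmark; the update-only MERGE remains parallelizable even with the Foreign Key in place):

```sql
-- Step 1: update-only MERGE (can be parallelized)
MERGE INTO test
USING new_rows new ON (test.pk = new.pk)
WHEN MATCHED THEN UPDATE SET
   fk   = new.fk
 , fill = new.fill;

-- Step 2: insert only the rows that found no match
INSERT INTO test (pk, fk, fill)
SELECT new.pk, new.fk, new.fill
FROM   new_rows new
WHERE  NOT EXISTS (SELECT 1 FROM test t WHERE t.pk = new.pk);
```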

ROUND 3
The two things I hear most about Bitmap indexes are that:
They are inappropriate for tables that undergo concurrent updates, and
They are slow to update.
Surely no comparison of update methods could possibly be complete without a test of Bitmap index maintenance.
In this round, I have removed the Foreign Key used in Round 2, and included a Bitmap index on TEST.FK
RUN 1 RUN 2
----------------------------------- ----- -----
1. Explicit Cursor Loop 826.0 951.2
2. Implicit Cursor Loop 898.7 877.2
3. UPDATE with nested SET subquery 588.9 633.4
4. BULK COLLECT / FORALL UPDATE 898.0 926.7
5. Updateable Join View 547.8 687.1
6. MERGE 689.3 763.4
7. Parallel DML MERGE 30.2 28.4
8. Parallel PL/SQL ORA-00060: deadlock detected

Well, if further proof was needed that Bitmap indexes are inappropriate for tables that are maintained by multiple
concurrent sessions, surely this is it. The Deadlock error raised by Method 8 occurred because bitmap indexes are locked
at the block-level, not the row level. With hundreds of rows represented by each block in the index, the chances of two
sessions attempting to lock the same block are quite high. The very clear lesson here: don't update bitmap indexed tables
in parallel sessions; the only safe parallel method is PARALLEL DML.
The other interesting outcome is the differing impact of the bitmap index on set-based updates vs transactional updates
(SQL solutions vs PL/SQL solutions). PL/SQL solutions seem to incur a penalty when updating bitmap indexed tables. A
single bitmap index has added around 10% to the overall runtime of PL/SQL solutions, whereas the set-based (SQL-based)
solutions run faster than in the B-Tree index case (above). Although not shown here, this effect is magnified with each
additional bitmap index. Given that most bitmap-indexed tables would have several such indexes (as bitmap indexes are
designed to be of most use in combination), this shows that PL/SQL is virtually non-viable as a means of updating a large
number of rows.

SUMMARY OF FINDINGS
Context Switches in cursor loops have greatest impact when data is well cached. For updates with buffer cache hit-
ratio >99%, convert to BULK COLLECT or MERGE.
Use MERGE with a Hash Join when updating a significant proportion of blocks (not rows!) in a segment.
Parallelize large updates for a massive performance improvement.
Tune the number of parallel query servers used by looking for latch contention and thread startup waits.
Don't rashly drop Foreign Keys without benchmarking; they may not be costing very much to maintain.
MERGE statements that UPDATE and INSERT cannot be parallelised when a Foreign Key is present. If you want to
keep the Foreign Key, you will need to use multiple concurrent sessions (insert/update variant of Method 8) to
achieve parallelism.
Don't use PL/SQL to maintain bitmap indexed tables; not even with BULK COLLECT / FORALL. Instead, INSERT
transactions into a Global Temporary Table and apply a MERGE.
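That last recommendation might look something like this sketch (the staging table name and columns are illustrative, following the benchmark's TEST layout):

```sql
CREATE GLOBAL TEMPORARY TABLE test_stage (
   pk   NUMBER
 , fk   NUMBER
 , fill VARCHAR2(40)
) ON COMMIT DELETE ROWS;

-- Transactions INSERT into TEST_STAGE instead of updating TEST directly,
-- then a single set-based statement applies the lot:
MERGE INTO test
USING test_stage s ON (test.pk = s.pk)
WHEN MATCHED THEN UPDATE SET
   fk   = s.fk
 , fill = s.fill;
```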

APPENDIX 1 - Nested Loops MERGE vs. Hash Join MERGE


Although we are updating only 1% of the rows in the table, those rows are almost perfectly distributed throughout the
table. As a result, we end up updating almost 100% of the blocks. This makes it a good candidate for hash joins and full
scans to out-perform indexed nested loops. Of course, as you decrease the percentage of blocks updated, the balance will
swing in favour of nested loops; but this trace demonstrates that MERGE definitely has its place in high-volume updates.
MERGE /*+ FIRST_ROWS*/ INTO test
USING test2 new ON (test.pk = new.pk)
WHEN MATCHED THEN UPDATE SET
fk = new.fk
, fill = new.fill

-------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------------
| 0 | MERGE STATEMENT | | 95331 | 7261K| 191K (1)|
| 1 | MERGE | TEST | | | |
| 2 | VIEW | | | | |
| 3 | NESTED LOOPS | | 95331 | 8937K| 191K (1)|
| 4 | TABLE ACCESS FULL | TEST2 | 95331 | 4468K| 170 (3)|
| 5 | TABLE ACCESS BY INDEX ROWID| TEST | 1 | 48 | 2 (0)|
| 6 | INDEX UNIQUE SCAN | TEST_PK | 1 | | 1 (0)|
-------------------------------------------------------------------------------

call count cpu elapsed disk query current rows


------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.01 0.01 0 4 1 0
Execute 1 57.67 829.77 95323 383225 533245 100000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 57.68 829.78 95323 383229 533246 100000

Misses in library cache during parse: 1


Optimizer mode: FIRST_ROWS
Parsing user id: 140

Rows Row Source Operation


------- ---------------------------------------------------
1 MERGE TEST (cr=383225 pr=95323 pw=0 time=127458586 us)
100000 VIEW (cr=371028 pr=75353 pw=0 time=619853020 us)
100000 NESTED LOOPS (cr=371028 pr=75353 pw=0 time=619653018 us)
100000 TABLE ACCESS FULL TEST2 (cr=750 pr=386 pw=0 time=505310 us)
100000 TABLE ACCESS BY INDEX ROWID TEST (cr=370278 pr=74967 pw=0 time=615942540 us)
100000 INDEX UNIQUE SCAN TEST_PK (cr=200015 pr=227 pw=0 time=4528703 us)(object id 141439)

Elapsed times include waiting on following events:


Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file scattered read 37 0.20 0.72
db file sequential read 94936 0.39 781.52
buffer exterminate 1 0.97 0.97
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.05 0.05
********************************************************************************

MERGE INTO test


USING test2 new ON (test.pk = new.pk)
WHEN MATCHED THEN UPDATE SET
fk = new.fk
, fill = new.fill

---------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes |TempSpc| Cost (%CPU)|
---------------------------------------------------------------------------
| 0 | MERGE STATEMENT | | 95331 | 7261K| | 46318 (3)|
| 1 | MERGE | TEST | | | | |
| 2 | VIEW | | | | | |
| 3 | HASH JOIN | | 95331 | 8937K| 5592K| 46318 (3)|
| 4 | TABLE ACCESS FULL| TEST2 | 95331 | 4468K| | 170 (3)|
| 5 | TABLE ACCESS FULL| TEST | 10M| 458M| | 16949 (4)|
---------------------------------------------------------------------------

call count cpu elapsed disk query current rows


------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.03 0.40 1 4 1 0
Execute 1 54.50 123.48 94547 82411 533095 100000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 54.53 123.88 94548 82415 533096 100000

Misses in library cache during parse: 1


Optimizer mode: ALL_ROWS
Parsing user id: 140

Rows Row Source Operation


------- ---------------------------------------------------
1 MERGE TEST (cr=82411 pr=94547 pw=0 time=123480418 us)
100000 VIEW (cr=75424 pr=74949 pw=0 time=48081374 us)
100000 HASH JOIN (cr=75424 pr=74949 pw=0 time=47981370 us)
100000 TABLE ACCESS FULL TEST2 (cr=750 pr=335 pw=0 time=1207771 us)
9999999 TABLE ACCESS FULL TEST (cr=74674 pr=74614 pw=0 time=10033917 us)

Elapsed times include waiting on following events:


Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 19606 0.37 41.24
db file scattered read 4720 0.52 34.20
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.03 0.03

That's a pretty significant difference: the same method (MERGE) is 6-7 times faster when performed as a Hash Join.
Although the number of physical disk blocks and Current Mode Gets are about the same in each test, the Hash Join
method performs multi-block reads, resulting in fewer visits to the disk.
All 8 methods above were benchmarked on the assumption that the target table is arbitrarily large and the subset of
rows/blocks to be updated are relatively small. If the proportion of updated blocks increases, then the average cost of
finding those rows decreases; the exercise becomes one of tuning the data access rather than tuning the update.
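When the proportion of updated blocks is high, the hash-join plan can be requested explicitly rather than relying on the ALL_ROWS default; a sketch (the applicability of the USE_HASH and FULL hints to the target alias in this MERGE context is an assumption, not taken from the benchmark):

```sql
MERGE /*+ USE_HASH(test) FULL(test) */ INTO test
USING test2 new ON (test.pk = new.pk)
WHEN MATCHED THEN UPDATE SET
   fk   = new.fk
 , fill = new.fill;
```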

APPENDIX 2 - Parallel DML vs. PARALLEL PL/SQL


Why is the Parallel PL/SQL (Method 8) approach much faster than the Parallel DML MERGE (Method 7)? To shed some
light, here are some traces. Below we see the trace from the Parallel Coordinator session of Method 7:
MERGE /*+ first_rows */ INTO test
USING test5 new ON (test.pk = new.pk)
WHEN MATCHED THEN UPDATE SET
fk = new.fk
, fill = new.fill


call count cpu elapsed disk query current rows


------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.02 0.02 0 4 1 0
Execute 1 1.85 57.91 1 7 2 100000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 1.87 57.94 1 11 3 100000

Misses in library cache during parse: 1


Optimizer mode: FIRST_ROWS
Parsing user id: 140

Rows Row Source Operation


------- ---------------------------------------------------
128 PX COORDINATOR (cr=7 pr=1 pw=0 time=57912088 us)
0 PX SEND QC (RANDOM) :TQ10002 (cr=0 pr=0 pw=0 time=0 us)
0 INDEX MAINTENANCE TEST (cr=0 pr=0 pw=0 time=0 us)(object id 0)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND RANGE :TQ10001 (cr=0 pr=0 pw=0 time=0 us)
0 MERGE TEST (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND HYBRID (ROWID PKEY) :TQ10000 (cr=0 pr=0 pw=0 time=0 us)
0 VIEW (cr=0 pr=0 pw=0 time=0 us)
0 NESTED LOOPS (cr=0 pr=0 pw=0 time=0 us)
0 PX BLOCK ITERATOR (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL TEST5 (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS BY INDEX ROWID TEST (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN TEST_PK (cr=0 pr=0 pw=0 time=0 us)(object id 141439)

Elapsed times include waiting on following events:


Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
db file sequential read 1 0.02 0.02
reliable message 1 0.00 0.00
enq: RO - fast object reuse 1 0.00 0.00
os thread startup 256 0.09 23.61
PX Deq: Join ACK 7 0.00 0.00
PX Deq: Parse Reply 15 0.09 0.19
PX Deq Credit: send blkd 35 0.00 0.00
PX qref latch 5 0.00 0.00
PX Deq: Execute Reply 1141 1.96 30.30
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.05 0.05

We can see here that the Parallel Co-ordinator spent 23.61 seconds (of the 57.94 elapsed) simply starting up the parallel
threads, and 30.3 seconds waiting for them to do their stuff.
And here are the wait events for just ONE of the parallel threads from the same test case:
Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
cursor: pin S wait on X 3 0.02 0.06
PX Deq: Execution Msg 16 1.96 10.94
PX Deq: Msg Fragment 2 0.00 0.00
latch: parallel query alloc buffer 7 5.89 7.52
db file sequential read 825 0.10 12.00
read by other session 17 0.06 0.18
log buffer space 1 0.03 0.03
PX Deq Credit: send blkd 1 0.02 0.02
PX Deq: Table Q Normal 28 0.19 0.35
latch: cache buffers chains 1 0.01 0.01
db file parallel read 1 0.11 0.11

From this, we can see that of the 30.3 seconds the Co-ordinator spent waiting for the parallel threads, this one spent
7.52 waiting for shared resources (latches) held by other parallel threads, and just 12 seconds reading blocks from disk.
For comparison, here is the trace of the Co-ordinator session of a Parallel PL/SQL run:
SELECT sum(column_value)
FROM TABLE(test_parallel_update(
CURSOR(SELECT * FROM TEST7)
))

call count cpu elapsed disk query current rows


------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.05 0.13 7 87 1 0
Execute 1 0.20 12.47 0 3 0 0
Fetch 2 0.21 20.13 0 0 0 1
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 4 0.46 32.74 7 90 1 1

Misses in library cache during parse: 1


Optimizer mode: ALL_ROWS
Parsing user id: 140

Rows Row Source Operation


------- ---------------------------------------------------
1 SORT AGGREGATE (cr=3 pr=0 pw=0 time=32609316 us)
128 PX COORDINATOR (cr=3 pr=0 pw=0 time=252152371 us)
0 PX SEND QC (RANDOM) :TQ10000 (cr=0 pr=0 pw=0 time=0 us)
0 SORT AGGREGATE (cr=0 pr=0 pw=0 time=0 us)
0 VIEW (cr=0 pr=0 pw=0 time=0 us)
0 COLLECTION ITERATOR PICKLER FETCH TEST_PARALLEL_UPDATE (cr=0 pr=0 pw=0 time=0 us)
0 PX BLOCK ITERATOR (cr=0 pr=0 pw=0 time=0 us)

0 TABLE ACCESS FULL TEST7 (cr=0 pr=0 pw=0 time=0 us)

Elapsed times include waiting on following events:


Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
reliable message 1 0.00 0.00
enq: RO - fast object reuse 1 0.00 0.00
os thread startup 128 0.10 11.85
PX Deq: Join ACK 4 0.00 0.00
PX Deq: Parse Reply 46 0.00 0.10
PX Deq Credit: send blkd 128 0.00 0.06
SQL*Net message to client 2 0.00 0.00
PX Deq: Execute Reply 143 1.96 19.86
PX Deq: Signal ACK 4 0.00 0.00
enq: PS - contention 2 0.00 0.00
SQL*Net message from client 2 0.03 0.06

And the wait events for a single parallel thread:


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
PX Deq: Execution Msg 8 0.13 0.22
PX Deq: Msg Fragment 1 0.00 0.00
library cache load lock 3 0.19 0.30
db file sequential read 872 0.11 16.47
read by other session 13 0.06 0.25
latch: cache buffers chains 3 0.01 0.02

The Parallel PL/SQL spent just 11.85 seconds starting parallel threads, compared to 23.61 seconds for Parallel DML. I
noticed from the trace that Parallel DML used 256 parallel threads, whereas the PL/SQL method used just 128. Looking
more closely at the trace files, I suspect that the Parallel DML used 128 readers and 128 writers, although it is hard to be
sure. Whatever Oracle is doing here, there is certainly a significant cost to opening parallel threads.
Also, looking at the wait events for the Parallel PL/SQL slave thread, we see no evidence of resource contention as we did
in the PARALLEL DML example.
In theory, we should be able to reduce the cost of thread startup and also reduce contention by reducing the number of
parallel threads. Knowing from above that the parallel methods were 10-20 times faster than the non-parallel methods, I
suspect that the benefits of parallelism diminish after no more than 32 parallel threads. In support of that theory, here is a
trace of a Parallel DML test case with 32 parallel threads:
First the Parallel Co-ordinator:
MERGE /*+ first_rows parallel(test5 32) parallel(test 32) */ INTO test
USING test5 new ON (test.pk = new.pk)
WHEN MATCHED THEN UPDATE SET
fk = new.fk
, fill = new.fill

call count cpu elapsed disk query current rows


------- ------ -------- ---------- ---------- ---------- ---------- ----------
Parse 1 0.02 0.02 0 4 1 0
Execute 1 0.55 31.14 0 7 2 100000
Fetch 0 0.00 0.00 0 0 0 0
------- ------ -------- ---------- ---------- ---------- ---------- ----------
total 2 0.57 31.17 0 11 3 100000

Misses in library cache during parse: 1


Optimizer mode: FIRST_ROWS
Parsing user id: 140

Rows Row Source Operation


------- ---------------------------------------------------
32 PX COORDINATOR (cr=7 pr=0 pw=0 time=30266841 us)
0 PX SEND QC (RANDOM) :TQ10002 (cr=0 pr=0 pw=0 time=0 us)
0 INDEX MAINTENANCE TEST (cr=0 pr=0 pw=0 time=0 us)(object id 0)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND RANGE :TQ10001 (cr=0 pr=0 pw=0 time=0 us)
0 MERGE TEST (cr=0 pr=0 pw=0 time=0 us)
0 PX RECEIVE (cr=0 pr=0 pw=0 time=0 us)
0 PX SEND HYBRID (ROWID PKEY) :TQ10000 (cr=0 pr=0 pw=0 time=0 us)
0 VIEW (cr=0 pr=0 pw=0 time=0 us)
0 NESTED LOOPS (cr=0 pr=0 pw=0 time=0 us)
0 PX BLOCK ITERATOR (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS FULL TEST5 (cr=0 pr=0 pw=0 time=0 us)
0 TABLE ACCESS BY INDEX ROWID TEST (cr=0 pr=0 pw=0 time=0 us)
0 INDEX UNIQUE SCAN TEST_PK (cr=0 pr=0 pw=0 time=0 us)(object id 141439)

Elapsed times include waiting on following events:


Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
reliable message 1 0.00 0.00
enq: RO - fast object reuse 1 0.00 0.00
os thread startup 64 0.09 5.89
PX Deq: Join ACK 9 0.00 0.00
PX Deq: Parse Reply 18 0.04 0.06
PX Deq: Execute Reply 891 1.78 24.18
SQL*Net message to client 1 0.00 0.00
SQL*Net message from client 1 0.03 0.03


And the wait events for one of those threads:


Elapsed times include waiting on following events:
Event waited on Times Max. Wait Total Waited
---------------------------------------- Waited ---------- ------------
cursor: pin S wait on X 1 0.02 0.02
PX Deq: Execution Msg 34 0.11 0.16
db file sequential read 2419 0.30 22.21
read by other session 5 0.02 0.05
buffer busy waits 1 0.00 0.00
PX Deq Credit: send blkd 1 0.00 0.00
PX Deq: Table Q Normal 4 0.18 0.19

Note in this case:
OS thread startup of just 5.89 seconds
No more resource contention
Performance now in line with that of the Parallel PL/SQL solution
rleishman's blog
plsql - Pl/SQL Bulk Bind/ Faster Update Statements - Stack Overflow 2/19/2014


Pl/SQL Bulk Bind/ Faster Update Statements


I'm having problems using bulk binds in PL/SQL. Basically what I want is for a table (Component) to update its fieldvalue
dependent on the Component_id and fieldname. All of these are passed in as parameters (the type varchar2_nested_table
is effectively an array of strings, one element for each update statement that needs to occur). So for instance if
Component_id = 'Compid1' and fieldname = 'name' then fieldvalue should be updated to be 'new component name'.
I typed up the code below in relation to this http://www.oracle.com/technetwork/issue-archive/o14tech-plsql-l2-091157.html .
The code works, but is no faster than a simple loop that performs an update for every element in the IN parameters; so if the
parameters have 1000 elements, then 1000 update statements will be executed. I also realise I'm not using BULK COLLECT
INTO, but I didn't think I needed it as I don't need to select anything from the database, just update.
At the moment both take 4-5 seconds for 1000 updates. I assume I'm using the bulk bind incorrectly or have a
misunderstanding of the subject, as in the examples I can find people are updating 50,000 rows in 2 seconds, etc. From what I
understand, FORALL should improve performance by reducing the number of context switches. I have tried another method I
found online using cursors and bulk binds but had the same outcome. Perhaps my performance expectations are too high? I
don't think so, from seeing others' results. Any help would be greatly appreciated.
create or replace procedure BulkUpdate(sendSubject_in IN varchar2_nested_table_type,
fieldname_in IN varchar2_nested_table_type,fieldvalue_in IN varchar2_nested_table_type) is

TYPE component_aat IS TABLE OF component.component_id%TYPE
INDEX BY PLS_INTEGER;
TYPE fieldname_aat IS TABLE OF component.fieldname%TYPE
INDEX BY PLS_INTEGER;
TYPE fieldvalue_aat IS TABLE OF component.fieldvalue%TYPE
INDEX BY PLS_INTEGER;

fieldnames fieldname_aat;
fieldvalues fieldvalue_aat;
approved_components component_aat;

PROCEDURE partition_eligibility
IS
BEGIN
FOR indx IN sendSubject_in.FIRST .. sendSubject_in.LAST
LOOP
approved_components(indx) := sendSubject_in(indx);
fieldnames(indx):= fieldname_in(indx);
fieldvalues(indx) := fieldvalue_in(indx);

END LOOP;
END;

PROCEDURE update_components
IS
BEGIN
FORALL indx IN approved_components.FIRST .. approved_components.LAST
UPDATE Component
SET Fieldvalue = fieldvalues(indx)
WHERE Component_id = approved_components(indx)
AND Fieldname = fieldnames(indx);
END;

BEGIN
partition_eligibility;
update_components;
END BulkUpdate;

sql plsql bulk forall


asked Mar 8 '12 at 10:13 by user1255191
Could you add dbms_utility.get_time to measure the time before partition_eligibility, after it and after
update_components to see where the time is spent. Eggi Mar 8 '12 at 10:59
I've measured it already. It takes 25 milliseconds when you don't execute update_components. With update_components
takes 4500 milliseconds. user1255191 Mar 8 '12 at 11:10

1 Answer

There is something else going on, I suspect your individual updates are each taking a lot of time, maybe because there are
triggers or inefficient indexes. (Note that if each statement is expensive individually, using bulk updates won't save you a lot
of time since the context switches are negligible compared to the actual work).
Here is my test setup:
CREATE TABLE Component (
Component_id NUMBER,
fieldname VARCHAR2(100),
Fieldvalue VARCHAR2(100),
CONSTRAINT component_pk PRIMARY KEY (component_id, fieldname)
);

-- insert 1 million rows
INSERT INTO component
(SELECT ROWNUM, to_char(MOD(ROWNUM, 100)), dbms_random.string('p', 10)
FROM dual
CONNECT BY LEVEL <= 1e6);
CREATE OR REPLACE TYPE varchar2_nested_table_type AS TABLE OF VARCHAR2(100);
/

SET SERVEROUTPUT ON SIZE UNLIMITED FORMAT WRAPPED

DECLARE
l_id varchar2_nested_table_type;
l_names varchar2_nested_table_type;
l_value varchar2_nested_table_type;
l_time NUMBER;
BEGIN
SELECT rownum, to_char(MOD(rownum, 100)), dbms_random.STRING('p', 10)
BULK COLLECT INTO l_id, l_names, l_value
FROM dual
CONNECT BY LEVEL <= 100000;
l_time := dbms_utility.get_time;
BulkUpdate(l_id, l_names, l_value);
dbms_output.put_line((dbms_utility.get_time - l_time) || ' cs elapsed.');
END;
/

100000 rows updated in about 1.5 seconds on an unremarkable test machine. Updating the same data set row by row
takes about 4 seconds.
Can you run a similar script with a newly created table?

http://stackoverflow.com/questions/9615934/pl-sql-bulk-bind-faster-update-statements 20 / 23
plsql - Pl/SQL Bulk Bind/ Faster Update Statements - Stack Overflow 2/19/2014

answered Mar 8 '12 at 13:32


edited Mar 8 '12 at 13:45
Vincent Malgrat
Oh really? I carried out a similar test and my results were 4.3 seconds for 1,000 rows. This was on a somewhat good
spec machine too. Assuming you didn't change anything in BulkUpdate, my problem seems to be elsewhere. My
database isn't doing anything else, though. There are no triggers or indexes etc. All it's doing is inserting (which takes 0.25
seconds for 1,000 rows) and then updating. Thank you for taking the time out to test anyway. user1255191 Mar 8 '12
at 14:42
Can you run the test as I did (with a newly created table)? Vincent Malgrat Mar 8 '12 at 14:44
Thank you very much Vincent. While looking at your test I saw that you had set a primary key on (id, fieldname). I hadn't
set one, as no field was unique and I didn't think I needed one. It is now running much faster (0.049 seconds for
1,000 rows). Thank you very much for your reply, as I would not have even thought of it had I not seen your test. Thanks.
user1255191 Mar 8 '12 at 15:00
This is because Oracle automatically creates an index for primary keys. Eggi Mar 8 '12 at 17:04
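A footnote on Eggi's point: the speed-up comes from the index that backs the primary key, not from the constraint itself. If no column combination had truly been unique, a plain non-unique index on the two columns the UPDATE filters on would give the same access path (the index name below is illustrative):

```sql
-- Non-unique index on the columns the bulk UPDATE filters on.
-- Without any index here, every element of the bind arrays forces
-- a full scan of Component, which explains the original timings.
CREATE INDEX component_lookup_ix
   ON Component (Component_id, Fieldname);
```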
