
DATABASE

We live in the age of information technology, and data and databases play a key role in it. A layperson these days needs no introduction to databases: whether it is a personal telephone directory or a bank passbook, databases are omnipresent. In this session we learn about database management systems in general, with an emphasis on the relational model of the DBMS.

The conventional data processing approach is to develop a program (or many programs) for each application. This results in one or more data files for each application. Some of the data may be common between files. However, one application may require the file to be organized on a particular field, while another application may require it to be organized on a different field. A major drawback of the conventional method is that the storage and access methods are built into the program. Therefore, though the same data may be required by two applications, the data will have to be stored in two different places, because each application depends on the way the data is stored.

There are various drawbacks of the conventional data file processing environment. Some of them are listed below:

Data Redundancy:
Some data elements, like name, address, and identification code, are used in various applications. Since this data is required by multiple applications, it is stored in multiple data files. In most cases, there is a repetition of data. This is referred to as data redundancy, and it leads to various other problems.

Data Integrity Problems:
Data redundancy is one reason for the problem of data integrity. Since the same data is stored in different places, it is inevitable that some inconsistency will creep in.

Data Availability Constraints:
When data is scattered in different files, the availability of information from a combination of files is constrained to some extent.

Database Management System

A database management system (DBMS) consists of a collection of interrelated data and a set of programs to access the data. The collection of data is usually referred to as the database. A database system is designed to maintain large volumes of data. Management of data involves:

 Defining the structures for the storage of data
 Providing the mechanisms for the manipulation of the data
 Providing for the security of the data against unauthorized access
Users of the DBMS:
Broadly, there are three types of DBMS users:

 The application programmer
 The end user
 The database administrator (DBA)

The application programmer writes application programs that use the database. These programs operate on the data in the database. Such operations include retrieving information, inserting data, and deleting or changing data.

The end user interacts with the system either by invoking an application program
or by writing their queries in a database query language. The database query
language allows the end user to perform all the basic operations (retrieval, deletion,
insertion and updating) on the data.

The DBA has to coordinate the functions of collecting information about the data to
be stored, designing and maintaining the database and its security. The database
must be designed and maintained to provide the right information at the right time
to authorized people. These responsibilities belong to the DBA and his staff.

ADVANTAGES OF A DBMS
The major advantage that the database approach has over the conventional
approach is that a database system provides centralized control of data. Most
benefits accrue from this notion of centralized control.

REDUNDANCY CAN BE CONTROLLED
Unlike the conventional approach, each application does not have to maintain its own data files. Centralized control of data by the DBA avoids unnecessary duplication of data and effectively reduces the total amount of data storage required. It also eliminates the extra processing necessary to trace the required data in a large mass of data. Any redundancies that do exist in the DBMS are controlled, and the system ensures that the multiple copies are consistent.

INCONSISTENCY CAN BE AVOIDED
Since redundancy is reduced, inconsistency can also be avoided to some extent. The DBMS can guarantee that the database is never inconsistent, by ensuring that a change made to any entry is automatically applied to the other entries as well. This process is known as propagating updates.

THE DATA CAN BE SHARED
A database allows the sharing of data under its control by any number of application programs or users. Sharing of data does not merely imply that existing applications can share the data in the database; it also means that new applications can be developed to operate on the same database.

STANDARDS CAN BE ENFORCED
Since there is centralized control of data, the database administrator can ensure that standards are maintained in the representation of the stored data formats. This is particularly useful for data interchange, or migration of data between two systems.

SECURITY RESTRICTIONS CAN BE APPLIED
The DBMS guarantees that only authorized persons can access the database. The DBA defines the security checks to be carried out. Different checks can be applied to different operations on the same data. For instance, a person may have the right to query a table, but not the right to delete or update it. The DBMS allows such security checks to be established for each piece of data in the database.
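
In SQL, such checks are typically expressed with privileges. A minimal sketch, assuming a hypothetical emp table and a user named clerk (both illustrative):

    GRANT SELECT ON emp TO clerk;
    -- clerk can now query emp, but since no DELETE or UPDATE privilege
    -- has been granted, those operations are rejected by the DBMS.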

INTEGRITY CAN BE MAINTAINED
Centralized control can also ensure that adequate checks are incorporated in the DBMS to provide data integrity. Data integrity means that the data contained in the database is both accurate and consistent. Inconsistency between two entries can lead to integrity problems. However, even if there is no redundancy, the data can still be inconsistent. For example, a student may have enrolled in 10 courses in a semester when the maximum number of courses one can enroll in is 7. Another example could be that of a student enrolling in a course that is not being offered that semester. Such problems can be avoided in a DBMS by establishing certain integrity checks to be carried out whenever any update operation is done. These checks can be specified at the database level, besides in the application programs.
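
Such a check can be declared in the database itself. A minimal sketch, assuming a hypothetical enrollment table that records the number of courses per student:

    CREATE TABLE enrollment (
        student_id   NUMBER PRIMARY KEY,
        course_count NUMBER,
        CONSTRAINT chk_max_courses CHECK (course_count <= 7)
        -- an attempt to record 10 courses is rejected on insert or update
    );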

DATA INDEPENDENCE
In non-database systems, the requirements of the application dictate the way in which the data is stored and the access techniques. Besides, knowledge of the organization of the data and of the access techniques is built into the logic and code of the application. These systems are data dependent. Consider this example: suppose the university has an application that processes the student file. For performance reasons, the file is indexed on the roll number. The application would be aware of the existing index, and the internal structure of the application would be built around this knowledge. Now suppose that, for some reason, the file is to be indexed on the registration date. In this case it is impossible to change the structure of the stored data without affecting the application too. Such an application is a data dependent one.
It is desirable to have data independent applications. Suppose two applications X and Y need to access the same file, but each requires a particular field to be stored in a different format. Application X requires the field “customer balance” to be stored in decimal format, while application Y requires it to be stored in binary format. This would pose a problem in an old system. In a DBMS, differences may exist between the way data is actually stored and the way it is seen and used by a given application.

DATABASE MODELS:

HIERARCHICAL MODEL
IBM introduced this model in the Information Management System (IMS), developed in 1968. This model is like a hierarchical tree structure, used to construct a hierarchy of records in the form of nodes and branches. The data elements present in the structure have a parent-child relationship. Closely related information in the parent-child structure is stored together as a logical unit. A parent unit may have many child units, but a child is restricted to having only one parent.

In order to gain an understanding of the various database models, we take the example of a sample database consisting of suppliers, parts and shipments. In this view, data is represented by a simple tree structure, with parts superior to suppliers. The user sees four individual trees, one hierarchical occurrence for each part. Each tree consists of one part record occurrence, together with a set of subordinate supplier record occurrences, one for each supplier of the part. Each supplier occurrence includes the corresponding shipment quantity. Note that the set of supplier occurrences for a given part may contain any number of members, including zero.

The record type at the top of the tree (the part record type in our example) is usually known as the “root”. In general, the root may have any number of dependents, each of these may have any number of lower-level dependents, and so on, to any number of levels.

Such a file, however, is a more complex object than a table. In the first place, it contains several types of records, not just one: in our example there are two, one for parts and one for suppliers. Second, it also contains links connecting occurrences of these records; in our example there are links between part occurrences and supplier occurrences, representing the associated shipments.

The drawbacks of this model are:

1. The hierarchical structure is not flexible enough to represent all the relationships that occur in the real world.
2. It cannot demonstrate the overall data model for the enterprise, because of the non-availability of actual data at the time of designing the data model.
3. It cannot represent many-to-many relationships.
4. The hierarchical model is useful only when the concerned data has a clearly hierarchical character with a single root, e.g. the DOS directory structure.

NETWORK MODEL:
It is an improvement on the hierarchical model. Here multiple parent-child relationships are used. Rapid and easy access to data is possible in this model due to multiple access paths to the data elements.

In this model, as in the hierarchical approach, records and links represent data. However, a network is a more general structure than a hierarchy, because a given record occurrence may have any number of immediate superiors as well as any number of immediate dependents; we can have more than one parent, unlike the hierarchical model. The network approach thus allows us to model a many-to-many correspondence. In addition to the record types representing the suppliers and parts themselves, we introduce a third type of record, which we will call the connector.

A connector occurrence represents the association between one supplier and one part, and contains data describing the association. All connectors for a given supplier or part are placed on a chain starting at, and returning to, that supplier or part.

Connections are maintained using pointers, and having to trace these pointers is the drawback of this design.

Relational Model

 Does not maintain physical connections between relations.
 Data is organized in terms of rows and columns in a table.
 The position of a row and/or column in a table is of no importance.
 The intersection of a row and column must give a single value.
 All values appearing in a column are drawn from the underlying domain.

FEATURES OF RDBMS:

 The ability to create multiple relations and enter data into them
 An interactive query language
 Retrieval of information stored in more than one table

An RDBMS product has to satisfy at least six of Codd's 12 rules to be accepted as a full-fledged RDBMS.

The publication of the paper “A Relational Model of Data for Large Shared Data Banks” by E. F. Codd in June 1970 in the Communications of the ACM set the trend for vigorous and extensive investigation into a theoretical framework to support further work in the area of data modeling. The end result is the Relational Database Management System (RDBMS).

The model has eliminated the explicit parent-child relationships. In an RDBMS the data is organized in tables consisting of rows and columns.

Supplier

Supplier Code   Supplier Name   Supplier Status   Supplier City
SO1             Ram             15                Calcutta
SO2             Shyam           20                Chennai
SO3             Amit            25                Hyderabad
SO4             Rajesh          15                Mumbai

Parts

Pcode   Partname   Weight   City
PO1     Screw      8        Calcutta
PO2     Nut        16       Chennai
PO3     Bolt       20       Hyderabad
PO4     Nut        16       Mumbai

Shipment

Scode   Pcode   Qty
SO1     PO1     150
SO1     PO2     200
SO2     PO3     250
SO2     PO4     100
SO3     PO1     300

Consider the tables shown in the sample database. Each supplier has a unique supplier code, which uniquely identifies the entire row of the table, and exactly one name, status and city. Likewise each part has a unique Pcode and exactly one name, weight and city, and at any given time no more than one shipment exists for a given supplier and part combination.

The term “relation” is used to describe these tables; it is a more precise term than the traditional data processing terms “file” or “table”. Rows of such tables are referred to as tuples, again a more precise term than rows or records. A domain is a set of values from which the actual values appearing in a given column are drawn. For example, the values appearing in the Pcode column of both the parts and shipments tables are drawn from the underlying domain of all valid part numbers. The domain itself may not be explicitly recorded, but it will be defined in the appropriate schema and will have a name of its own. We can see that the relations supplier and parts have a domain in common; so do supplier and shipment.

An important feature of the relational data structure is that associations between tuples are represented solely by data values in columns drawn from a common domain. The fact that supplier SO3 and part PO3 are located in the same city is represented by the appearance of the same value in the city column for the two tuples concerned. It is a characteristic of the relational approach that all the information in the database, both entities and relationships, is represented in a single uniform manner, namely in the form of relations, or tables.
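
In SQL, such an association is expressed as a join over the common domain. A minimal sketch, assuming the supplier and parts tables above have columns named supplier_code, pcode and city:

    SELECT s.supplier_code, p.pcode, s.city
      FROM supplier s, parts p
     WHERE s.city = p.city;   -- pairs suppliers and parts located in the same city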

An entity type in the relational model is represented as a named table of values. Named columns represent the attributes of the entity, and each row in the table corresponds to an entity occurrence. One or more columns must contain attribute values which enable each entity occurrence (row) to be uniquely identified.

E. F. CODD’S 12 RULES:
In 1969, Dr. E. F. Codd published the first paper to define a model for databases based on the mathematical concept of relational sets. The relational model has been constantly refined since then, most notably in Dr. Codd’s 1985 paper that laid out the “12 rules” for relational databases. It represents all data in the database as simple row-column tables of data values. A DBMS that employs the relational model for designing the database is called a Relational Database Management System (RDBMS).

An RDBMS can be defined as a database management system where all data visible to the user is organized strictly as tables of data values, and where all database operations work on these tables. For any RDBMS to be accepted as a full-fledged RDBMS, it has to follow the twelve Codd rules.

The 12 Rules Are As Follows:

1. THE INFORMATION RULE:
This rule simply requires all information to be represented as data values in the rows and columns of tables. This is the basis of the relational model.
2. THE GUARANTEED ACCESS RULE:
Every data value in a relational database should be logically accessible by
specifying a combination of the table name, the primary key value and the
column name.
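
In SQL terms, every value is reachable by a query of this shape. A sketch against the supplier table shown earlier, assuming columns named supplier_name and supplier_code:

    SELECT supplier_name           -- column name
      FROM supplier                -- table name
     WHERE supplier_code = 'SO1';  -- primary key value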

3. SYSTEMATIC TREATMENT OF NULL VALUES:
The DBMS must support NULL values to represent missing or inapplicable information. They must be distinct from zero or spaces. The NULL values must be independent of data type; i.e. NULL values for all types of data are the same.

4. ACTIVE ON-LINE CATALOG BASED ON THE RELATIONAL MODEL:
The system catalog is a collection of tables that the DBMS maintains for its own use. These tables hold the description of the structure of the database. They are created, owned and maintained by the DBMS. Users can access them in the same manner as ordinary tables, depending on their privileges. System tables are read-only.
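
Because the catalog is itself relational, it is read with ordinary queries. In Oracle, for example, the data dictionary view USER_TABLES lists the tables owned by the current user:

    SELECT table_name
      FROM user_tables;   -- the same SELECT used for ordinary tables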

5. THE COMPREHENSIVE DATA SUB-LANGUAGE RULE:
This rule states that the system must support at least one language that performs all the following functions:

 Data definition
 View definition
 Data manipulation operations
 Security and integrity constraints
 Transaction management operations

6. THE VIEW UPDATION RULE:
All views that are theoretically updateable must be updateable by the system.

7. HIGH-LEVEL INSERT, UPDATE AND DELETE:
This rule states that rows should be treated as sets in insert, delete and update operations.

8. PHYSICAL DATA INDEPENDENCE:
Application programs must remain unimpaired when any changes are made in the storage representation or access methods.

9. LOGICAL DATA INDEPENDENCE:
Users and user programs should be independent of the logical structure of the database.

10. INTEGRITY INDEPENDENCE:
Integrity constraints must be storable in the system catalog.

11. DISTRIBUTION INDEPENDENCE:
The database must allow manipulation of distributed data located on other computer systems.

12. NON-SUBVERSION RULE:
The rule states that lower levels of the language cannot subvert or bypass the integrity rules and constraints. For example, assume there is an application program that takes care of the integrity of data in a database. Another user could write a program, say in a lower-level language, that bypasses the integrity constraints imposed by the application program. Such an integrity violation is unacceptable in an RDBMS. Therefore, integrity constraints should be specified at the DBMS level, not only through application programs. The DBMS must ensure that no other level can bypass the constraints, in order to maintain the integrity of the database.

DATABASE DESIGN:
Having identified all the data in the system, it is necessary to arrive at the logical
database design. Database design involves designing the conceptual model of the
database. This model is independent of the physical representation of data. Before
actually implementing the database, the conceptual model is designed using
various techniques.

The requirements of all the users are taken into account to decide the actual data that needs to be stored in the system. Once the conceptual model is designed, it can then be mapped to the DBMS/RDBMS that is actually being used. Two of the widely used approaches are entity-relationship (E/R) modeling and normalization.

The E/R model is an object based model and is based on a perception of the real
world that is made up of a collection of objects or entities and the relationships
among these. E/R modeling is generally used as a top down approach for new
systems.

ENTITY:
An entity is an object, place or event about which data can be stored in the system. A physical object can be an employee, a customer, or a machine. An abstract object can be a department or an account. An event can be a registration or an application form. A place can be a city or a state. Before a table is created, it is known as an entity. It is denoted by a rectangle in an E/R diagram.

Attribute:
An attribute describes the entity. For example, an entity employee can contain the attributes empno, ename, sal, hiredate, etc. It is represented by a circle.

RELATION
A “relation” is a two-dimensional table. It consists of “rows”, which represent records, and “columns”, which show the attributes of the entity. A relation is also called a file; it consists of a number of records, which are also called tuples. A record consists of a number of attributes, which are also known as fields or domains.

In order for a relational structure to be useful and manageable, the relation tables must first be “normalized”.
Fig 3.1 shows the components of a relation: the relation (table) itself, its attributes (the columns, e.g. Data Item 1, Data Item 2, Data Item 3), and its records (the rows).

Some of the properties of a relation are:

No duplication - No two records are identical.

Unique key - Each relation has a unique key by which it can be accessed.

Order - There is no significant order of data in the table.

In case we want the names of all the employees whose grade is 20, we can scan the employee relation, noting the rows where the grade is 20. Here the unique key is the employee number.
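
In SQL this scan is a simple query. A sketch, assuming an employee table with columns ename and grade:

    SELECT ename
      FROM employee
     WHERE grade = 20;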

NORMALIZATION
Normalization is the process of simplifying the relationships between data elements in a record. It is the transformation of complex data stores into a set of smaller, stable data structures.

Normalized data structures are simpler and more stable, and are easier to maintain.

Purpose of Normalization:
Normalization is carried out for the following four reasons:

 To structure the data so that there is no repetition of data; this helps in saving space.
 To permit simple retrieval of data in response to query and report requests.
 To simplify the maintenance of the data through updates, insertions and deletions.
 To reduce the need to restructure or reorganize data when new application requirements arise.

STEPS OF NORMALIZATION:
Systems analysts should be familiar with the steps in normalization, since the process can improve the quality of design for an application. Starting with a data store developed for the data dictionary, the analyst normalizes a data structure in three steps. Each step involves an important procedure to simplify the data structure.

Normalization consists of three basic steps:

1. First Normal Form, which decomposes all data groups into two-dimensional
records.
2. Second Normal Form, which eliminates any relationships in which data elements do not fully depend on the primary key of the record.
3. Third Normal Form, which eliminates any relationships that contain transitive dependencies.
Fig 3.2 Steps involved in the process of normalization:

User views / data stores (un-normalized relations)
    |  Step 1: Remove repeating groups; fix the record length.
    v
First Normal Form
    |  Step 2: Remove data items that are not fully dependent on the primary key.
    v
Second Normal Form
    |  Step 3: Remove transitive dependencies.
    v
Third Normal Form

The relation obtained from a data store such as the Employee Register will most likely be un-normalized. It will consist of repeating groups, and records will not be of fixed length.

Unnormalized Employee Record

Employee No, Employee Name, Employee Details (department code, grade, date of joining, exit code and exit date), Annual Salary Earned (MMYY, net paid), Bank Details (bank code, bank name, address, employee account no).

Here it is clearly seen that the employee's annual-salary-earned details, namely the month and year paid (MMYY) and the net paid, are repeated for every month. Therefore this relation is not in first normal form.

FIRST NORMAL FORM:
The basic improvement the analyst should make to such a record structure is to design it so that all records in the file are of fixed length.

A repeating group, that is, the recurrence of a data item or group of data items within a record, is actually another relation. It is removed from the record and treated as an additional record structure, or relation.
Emp. No   Name          Emp. Details   Salary Details   Bank Details   Income Tax Details
A01       Jacob Rego
A02       Suren Gupta

Emp. No   MMYY   Net Paid
A01       0195   3500
A01       0295   3800
A01       0395   3600
A01       0495   3500
A02       0495   6000

Fig 3.4 First Normal Form

It shows the normalization to first normal form for the employee record.

As mentioned above, first normal form is achieved by removing the repeating group. In this case we remove the annual-salary-earned items and place them in a new file, or relation, called the Annual Salary Earned record. Employee number is still the primary key in the employee record. A combination of employee number and MMYY is the primary key in the Annual Salary Earned record.

We thus form two record structures of fixed length:

Employee record, consisting of: Employee No, Employee Name, Employee Details (department code, grade, date of joining, exit code and exit date), Bank Details (bank code, bank name, address, employee a/c no).

Annual Salary Earned record, consisting of: Employee No, month and year (MMYY), and net paid.
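
A sketch of these two fixed-length structures in SQL (table and column names are illustrative):

    CREATE TABLE employee (
        emp_no      VARCHAR2(10) PRIMARY KEY,
        emp_name    VARCHAR2(50),
        dept_code   VARCHAR2(10),
        grade       NUMBER,
        date_join   DATE,
        exit_code   VARCHAR2(10),
        exit_date   DATE,
        bank_code   VARCHAR2(10),
        bank_name   VARCHAR2(50),
        bank_addr   VARCHAR2(100),
        emp_acct_no VARCHAR2(20)
    );

    CREATE TABLE annual_salary_earned (
        emp_no   VARCHAR2(10) REFERENCES employee (emp_no),
        mmyy     CHAR(4),
        net_paid NUMBER,
        PRIMARY KEY (emp_no, mmyy)   -- composite key: employee number + month/year
    );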

SECOND NORMAL FORM:
Second normal form is achieved when every data item in a record that is not fully dependent on the primary key of the record is removed and used to form a separate relation.

The PF department ensures that only one employee in the state is assigned a specific PF number. This is called a one-to-one relation: the PF number uniquely identifies a specific employee, and an employee is associated with one and only one PF number. Thus, if you know the employee no., you can determine the PF number. This is functional dependency. A data item is therefore functionally dependent if its value is uniquely associated with a specific data item.

Now consider our employee record example. The employee record in fig 3.4 is in first normal form.

Try the following test to check whether it is also in second normal form: if the bank code is known, will you know the employee no.? In this case, no, as a one-to-one relation does not exist. Because the bank name is dependent on the bank code and not on the employee no., and because the relation between the bank code and the employee no. is not one-to-one (the bank details are not functionally dependent on the primary key), we know that second normal form has not been achieved. Therefore, an additional record structure, called the bank record, is created, as shown in fig 3.5.

Fig 3.5 Second Normal Form

The record structures that are created are:

1. Employee record, consisting of: Employee_No, Employee_Name, employee details (department code, grade, date of joining, exit code and exit date), and bank details (bank code and employee a/c no.).

2. Annual Salary Earned record, consisting of: Employee_No, month and year (MMYY), and net paid.

3. Bank record, consisting of: Bank_Code, Bank_Name and Bank_Address. All the attributes of this relation are fully dependent on Bank_Code.

The primary keys of the record structures are underlined.
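
The new bank relation of fig 3.5, as a sketch in SQL (names illustrative):

    CREATE TABLE bank (
        bank_code VARCHAR2(10) PRIMARY KEY,
        bank_name VARCHAR2(50),
        bank_addr VARCHAR2(100)   -- every attribute depends only on bank_code
    );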


THIRD NORMAL FORM:
Third normal form is achieved when transitive dependencies are removed from a record design. Some non-key attributes may be dependent not only on the primary key but also on another non-key attribute. This is referred to as a transitive dependency.

Conversion to third normal form removes the transitive dependency by splitting the relation into two relations.

A → B
B → C

Fig 3.6 Third Normal Form

The reason for concern is that, when there is a transitive dependency, deleting A will cause the deletion of B and C as well.

As per fig 3.6, A, B and C are three data items in a record:

If C is functionally dependent on B, and
B is functionally dependent on A,
then C is functionally dependent on A.

Therefore, a transitive dependency exists.

The record in fig 3.5 is in second normal form. There are no transitive dependencies, so it is also in third normal form.
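
To make the A → B → C pattern concrete, suppose a record stored emp_no, dept_code and dept_name: dept_name depends on dept_code, which depends on emp_no. A sketch of the split (names illustrative):

    -- Before: emp_dept(emp_no, dept_code, dept_name)  -- transitive dependency
    -- After the split, each non-key attribute depends only on its own key:
    CREATE TABLE department (
        dept_code VARCHAR2(10) PRIMARY KEY,
        dept_name VARCHAR2(50)
    );

    CREATE TABLE emp_dept (
        emp_no    VARCHAR2(10) PRIMARY KEY,
        dept_code VARCHAR2(10) REFERENCES department (dept_code)
    );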
INTRODUCTION TO ORACLE
Oracle is a comprehensive operating environment that packs the power of a mainframe relational database management system into the user's microcomputer. It provides a set of functional programs that the user can use as tools to build structures and perform tasks. Because applications developed on Oracle are completely portable to other versions of the program, the programmer can create a complex application in a single-user environment and then move it to a multi-user platform. Users do not have to be experts to appreciate Oracle, but the better a user understands the program, the more productively and creatively he can use the tools it provides.

Relational Database Management System

 Oracle, the right tool
 Oracle gives you security and control
 Database management tools
 Structure of an Oracle database

An Oracle database can be described at two different levels:

 Physical structure
 Logical structure

Physical Structure:

a) One or more data files
b) Two or more log files
c) One control file

Logical Structure:

a) Tablespaces
b) Segments
c) Extents
d) Data blocks

The data files contain all the user data in terms of tables, indexes and views. The log files contain the information needed to recover the database, or to undo the changes made by a transaction (rollback).

The control file contains the physical data and media information needed to open and manage the data files. If the control file is damaged, the server will not be able to open or use the database, even if the database itself is undamaged.
THE APPLICATION DEVELOPER'S GUIDE
Fundamentals contains information that describes the features and functionality of the Oracle8i and Oracle8i Enterprise Edition products. Oracle8i and Oracle8i Enterprise Edition have the same basic features. However, several advanced features are available only with the Enterprise Edition, and some of these are optional. For example, to use object functionality, you must have the Enterprise Edition and the Objects Option. For information about the differences between Oracle8i and the Oracle8i Enterprise Edition, and the features and options that are available to you, see the Oracle8i documentation.

The PL/SQL documentation has a complete description of this high-level programming language, which is Oracle Corporation's procedural extension to SQL.

The Oracle Call Interface (OCI) can be used to build third-generation language (3GL) applications that access the Oracle server.

Oracle Corporation also provides the Pro* series of pre-compilers, which allow you to embed SQL and PL/SQL in your application programs. If you write 3GL application programs in Ada, C, C++, COBOL, or FORTRAN that incorporate embedded SQL, then refer to the corresponding pre-compiler manual; for example, if you program in C or C++, refer to the Pro*C/C++ pre-compiler manual.

Oracle Developer/2000 is a cooperative development environment that provides several tools, including a form builder, reporting tools, and a debugging environment for PL/SQL.

INTRODUCTION
This part introduces several different ways that you can write Oracle
applications. You might need to use more than one language or development
environment for a single application. Some database features are only supported,
or are easier to access from, certain languages.
"Understanding the Oracle Programmatic Environments" outlines the strengths
of the languages, development environments, and APIs that Oracle provides.

DESIGNING THE DATABASE

Before you develop an application, you need to plan the characteristics of the associated database. You must choose all the pieces that go into the database, and how they are put together. Good database design helps ensure good performance and scalability, and reduces the amount of application logic you code by making the database responsible for things like error checking and fast data access.

"Managing Schema Objects" explains how to manage objects such as tables, views, numeric sequences, and synonyms. It also discusses performance enhancements to data retrieval through the use of indexes and clusters.

"Selecting a Datatype" explains how to represent your business data in the database. The datatypes include fixed- and variable-length character strings, numeric data, dates, raw binary data, and row identifiers (ROWIDs).

"Maintaining Data Integrity" explains how to use constraints to move error-checking logic out of your application and into the database.

"Selecting an Index Strategy" and "Speeding Up Index Access with Index-Organized Tables" explain how to speed up queries.

"Processing SQL Statements" explains SQL topics such as commits, cursors, and locking that you can take advantage of in your applications.

"Dynamic SQL" describes dynamic SQL, native dynamic SQL vs. the DBMS_SQL package, and when to use dynamic SQL.

"Using Procedures and Packages" explains how to store reusable procedures in the database, and how to group procedures into packages. "External Routines" explains how to code the bodies of computationally intensive procedures in languages other than PL/SQL.

"Establishing Security Policies" explains how to move authentication logic out of your application and into the database.

THE ACTIVE DATABASE

You can include all sorts of programming logic in the database itself, making the benefits available to many applications and saving repetitious coding work.

"Using Triggers" explains how to make the database do special processing before, after, or instead of running SQL statements. You can use triggers for things like validating or transforming data, or logging database access. "Working With System Events" explains how to retrieve information, such as the user ID and database name, about the event that fires a trigger.

"Using Publish-Subscribe" introduces the Oracle model for asynchronous communication, also known as messaging or queuing.
DEVELOPING SPECIALIZED APPLICATIONS
"Developing Web Applications with PL/SQL" explains how to create dynamic
web pages and applications that work with the Internet, e-mail, and so on, using
the PL/SQL language.

"Working with Transaction Monitors with Oracle XA" describes how to connect
Oracle with a transaction monitor.

PL/SQL SUPPORT
PL/SQL is Oracle's procedural extension to SQL, the standard database access
language. An advanced 4GL (fourth-generation programming language),
PL/SQL offers seamless SQL access, tight integration with the Oracle server and
tools, portability, security, and modern software engineering features such as
data encapsulation, overloading, exception handling, and information hiding.

With PL/SQL, you can use SQL statements to manipulate Oracle data and flow-of-control statements to process the data. You can also declare constants and variables, define procedures and functions, use collections and object types, and trap run-time errors. Thus, PL/SQL combines the data manipulating power of SQL with the data processing power of procedural languages.

Applications written using any of the Oracle programmatic interfaces (Oracle Call Interface, Java, Pro*C/C++, or Pro*COBOL) can call PL/SQL stored procedures and send anonymous blocks of PL/SQL code to the server for execution. 3GL applications have full access to PL/SQL scalar and composite datatypes via host variables and implicit datatype conversion.

PL/SQL's tight integration with Oracle Developer lets you use one language to
develop the client and server components of your application, which can be
partitioned to achieve optimum performance and scalability. Also, Oracle's Web
Forms allows you to deploy your applications in a multi-tier Internet or intranet
environment without modifying a single line of code.

How Does PL/SQL Work?

A good way to get acquainted with PL/SQL is to look at a sample program. Consider the procedure below, which debits a bank account. When called, procedure debit_account accepts an account number and a debit amount. It uses the account number to select the account balance from the database table. Then, it uses the debit amount to compute a new balance. If the new balance is less than zero, an exception is raised; otherwise, the bank account is updated.
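
The original listing is not reproduced here, so the following is a minimal sketch of what such a procedure might look like, assuming a table accounts (acct_id, balance):

    CREATE OR REPLACE PROCEDURE debit_account (
        p_acct_id IN NUMBER,
        p_amount  IN NUMBER
    ) IS
        v_balance NUMBER;
        insufficient_funds EXCEPTION;
    BEGIN
        -- Use the account number to select the current balance.
        SELECT balance INTO v_balance
          FROM accounts
         WHERE acct_id = p_acct_id;

        -- Use the debit amount to compute the new balance.
        v_balance := v_balance - p_amount;

        IF v_balance < 0 THEN
            RAISE insufficient_funds;    -- new balance below zero
        ELSE
            UPDATE accounts
               SET balance = v_balance
             WHERE acct_id = p_acct_id;  -- otherwise, update the account
        END IF;
    EXCEPTION
        WHEN insufficient_funds THEN
            RAISE_APPLICATION_ERROR(-20001, 'Insufficient funds in account');
    END debit_account;
    /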
ADVANTAGES OF PL/SQL
PL/SQL is a completely portable, high-performance transaction processing
language that offers the following advantages:

FULL SUPPORT FOR SQL

PL/SQL lets you use all the SQL data manipulation, cursor control, and transaction control commands, as well as all the SQL functions, operators, and pseudocolumns. So, you can manipulate Oracle data flexibly and safely. Also, PL/SQL fully supports SQL datatypes. That reduces the need to convert data passed between your applications and the database.

PL/SQL also supports dynamic SQL, an advanced programming technique that makes your applications more flexible and versatile. Your programs can build and process SQL data definition, data control, and session control statements "on the fly" at run time.
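
For instance, with native dynamic SQL a program can build and run a DDL statement at run time. A sketch; the table being created is illustrative:

    BEGIN
        -- DDL cannot appear as static SQL inside PL/SQL, so it is
        -- built as a string and executed dynamically.
        EXECUTE IMMEDIATE 'CREATE TABLE audit_log (logged_at DATE, msg VARCHAR2(200))';
    END;
    /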

Both PL/SQL and Oracle are based on SQL, and PL/SQL supports all the SQL
datatypes. Combined with the direct access that SQL provides, these shared
datatypes integrate PL/SQL with the Oracle data dictionary.

The %TYPE and %ROWTYPE attributes further integrate PL/SQL with the data
dictionary. For example, you can use the %TYPE attribute to declare variables,
basing the declarations on the definitions of database columns. If a definition
changes, the variable declaration changes accordingly at run time. This provides
data independence, reduces maintenance costs, and allows programs to adapt as
the database changes to meet new business needs.
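
A brief sketch of these attributes, assuming a table emp with columns such as empno and sal (as in the attribute example earlier):

    DECLARE
        v_sal emp.sal%TYPE;    -- inherits the datatype of the sal column
        v_emp emp%ROWTYPE;     -- a record with one field per column of emp
    BEGIN
        SELECT sal INTO v_sal FROM emp WHERE empno = 1001;
        SELECT *   INTO v_emp FROM emp WHERE empno = 1001;
    END;
    /

If the definition of sal changes, v_sal changes with it; no code edit is needed.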

BETTER PERFORMANCE
Without PL/SQL, Oracle must process SQL statements one at a time. Each SQL
statement results in another call to Oracle and higher performance overhead. In a
networked environment, the overhead can become significant. Every time a SQL
statement is issued, it must be sent over the network, creating more traffic.

However, with PL/SQL, an entire block of statements can be sent to Oracle at one time. This can drastically reduce communication between your application and Oracle. If your application is database intensive, you can use PL/SQL blocks to group SQL statements before sending them to Oracle for execution.
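
A sketch of such a block: the statements below travel to the server as one unit rather than as three separate calls (the accounts and transfer_log tables are illustrative):

    BEGIN
        UPDATE accounts SET balance = balance - 500 WHERE acct_id = 1;
        UPDATE accounts SET balance = balance + 500 WHERE acct_id = 2;
        INSERT INTO transfer_log (from_acct, to_acct, amount, done_on)
        VALUES (1, 2, 500, SYSDATE);
        COMMIT;
    END;
    /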

PL/SQL stored procedures are compiled once and stored in executable form, so
procedure calls are quick and efficient. Also, stored procedures, which execute in
the server, can be invoked over slow network connections with a single call. This
reduces network traffic and improves round-trip response times. Executable
code is automatically cached and shared among users. That lowers memory
requirements and invocation overhead.

HIGHER PRODUCTIVITY
PL/SQL adds capabilities to non-procedural tools such as Oracle Forms and
Oracle Reports. With PL/SQL in these tools, you can use familiar procedural
constructs to build applications. For example, you can use an entire PL/SQL
block in an Oracle Forms trigger. You need not use multiple trigger steps,
macros, or user exits. Thus, PL/SQL increases productivity by putting better
tools in your hands.

PL/SQL is the same in all environments. As soon as you master PL/SQL with
one Oracle tool, you can transfer your knowledge to other tools, and so multiply
the productivity gains. For example, scripts written with one tool can be used by
other tools.

SCALABILITY
PL/SQL stored procedures increase scalability by isolating application
processing on the server. Also, automatic dependency tracking for stored
procedures aids the development of scalable applications.

The shared memory facilities of the Multithreaded Server (MTS) enable Oracle to
support 10,000+ concurrent users on a single node. For more scalability, you can
use the Net8 Connection Manager to multiplex Net8 connections.

MAINTAINABILITY
Once validated, a PL/SQL stored procedure can be used with confidence in any number of applications. If its definition changes, only the procedure is affected, not the applications that call it. This simplifies maintenance and enhancement. Also, maintaining a procedure on the server is easier than maintaining copies on various client machines.

SUPPORT FOR OBJECT-ORIENTED PROGRAMMING

OBJECT TYPES
An object type is a user-defined composite datatype that encapsulates a data
structure along with the functions and procedures needed to manipulate the
data. The variables that form the data structure are called attributes. The
functions and procedures that characterize the behavior of the object type are
called methods, which you can implement in PL/SQL.

Object types are an ideal object-oriented modeling tool, which you can use to
reduce the cost and time required to build complex applications. Besides
allowing you to create software components that are modular, maintainable,
and reusable, object types allow different teams of programmers to develop
software components concurrently.
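
A minimal sketch of an object type specification (the type and its members are illustrative):

    CREATE TYPE bank_account AS OBJECT (
        acct_id NUMBER,                             -- attributes: the data structure
        balance NUMBER,
        MEMBER PROCEDURE debit (amount IN NUMBER)   -- a method: the behavior
    );
    /

The body of the debit method would then be implemented in PL/SQL with CREATE TYPE BODY.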

PORTABILITY
Applications written in PL/SQL can run on any operating system and
hardware platform where Oracle runs. You can write portable program
libraries and reuse them in different environments.

SECURITY
PL/SQL stored procedures enable you to partition application logic between
the client and server. That way, you can prevent client applications from
manipulating sensitive Oracle data. Database triggers written in PL/SQL can
disable application updates selectively and do content-based auditing of user
queries.

Furthermore, you can restrict access to Oracle data by allowing users to manipulate it only through stored procedures that execute with their definer's privileges. For example, you can grant users access to a procedure that updates a table, but not grant them access to the table itself.
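
A sketch of this pattern, reusing the debit_account procedure and the illustrative user clerk from earlier:

    GRANT EXECUTE ON debit_account TO clerk;
    -- clerk can run the procedure, which updates accounts with its
    -- definer's privileges, but has no direct privilege on the table.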
