User’s Guide
PART ONE
Starting and Connecting to Your Database ..................... 1
1 Running the Database Server ........................................... 3
2 Connecting to a Database............................................... 31
Introduction to connections ..................................................... 32
Connecting from Sybase Central or Interactive SQL ................. 36
Simple connection examples .................................................. 39
Working with ODBC data sources .......................................... 47
Connecting to a database using OLE DB ............................... 58
Connection parameters........................................................... 60
Troubleshooting connections .................................................. 63
Using integrated logins ........................................................... 73
3 Client/Server Communications....................................... 81
Network communication concepts .......................................... 82
Real world protocol stacks ...................................................... 87
Supported network protocols .................................................. 90
Using the TCP/IP protocol ...................................................... 91
Using the SPX protocol........................................................... 94
Using the NetBIOS protocol.................................................... 96
Using Named Pipes ................................................................ 97
Troubleshooting network communications ............................. 98
PART TWO
Working with Databases ............................................... 105
Joining more than two tables ................................................ 210
Joins involving derived tables ............................................... 213
Transact-SQL outer joins ...................................................... 214
PART THREE
Relational Database Concepts ..................................... 319
PART FOUR
Adding Logic to the Database ...................................... 421
Errors and warnings in procedures and triggers................... 461
Using the EXECUTE IMMEDIATE statement in procedures .............. 470
Transactions and savepoints in procedures and triggers ........... 471
Some tips for writing procedures .......................................... 472
Statements allowed in batches ............................................. 474
Calling external libraries from procedures ............................ 475
Using JDBC to access data .................................................. 590
Using the Sybase jConnect JDBC driver .............................. 598
Creating distributed applications........................................... 602
PART FIVE
Database Administration and Advanced Use .............. 625
Using views and procedures for extra security ..................... 741
How user permissions are assessed .................................... 744
Managing the resources connections use ............................ 745
Users and permissions in the system tables ........................ 746
Join enumeration and index selection .................................. 840
Cost estimation ..................................................................... 842
Subquery caching ................................................................. 843
PART SIX
The Adaptive Server Family .......................................... 929
PART SEVEN
Appendixes .................................................................. 1005
B Property Sheet Descriptions........................................1035
Introduction to property sheets ........................................... 1037
Service properties ............................................................... 1038
Server properties ................................................................ 1041
Statistics properties............................................................. 1043
Database properties............................................................ 1044
Table properties .................................................................. 1046
Column properties............................................................... 1049
Foreign Key properties........................................................ 1052
Index properties .................................................................. 1055
Trigger properties................................................................ 1056
View properties ................................................................... 1057
Procedures and Functions properties................................. 1058
Users and Groups properties.............................................. 1059
Integrated Logins properties ............................................... 1062
Java Objects properties ...................................................... 1063
Domains properties............................................................. 1064
Events properties ................................................................ 1065
Publications properties........................................................ 1066
Articles properties ............................................................... 1067
Remote Users properties .................................................... 1069
Message Types properties.................................................. 1073
Connected Users properties ............................................... 1074
Database Space properties ................................................ 1075
Remote Servers properties ................................................. 1076
Glossary........................................................................1077
Index..............................................................................1095
About This Manual
Subject
This manual describes how to use Adaptive Server Anywhere. It includes
material needed to develop applications that work with Adaptive Server
Anywhere and material for designing, building, and administering Adaptive
Server Anywhere databases.
Audience
This manual is for all users of Adaptive Server Anywhere.
Contents
Topic Page
Related documentation xiv
Documentation conventions xv
The sample database xviii
Related documentation
Adaptive Server Anywhere is a part of SQL Anywhere Studio. For an
overview of the different components of SQL Anywhere Studio, see
Introducing SQL Anywhere Studio.
The Adaptive Server Anywhere documentation consists of the following
books:
♦ Getting Started Intended for all users of Adaptive Server Anywhere,
this book describes the following:
♦ New features in Adaptive Server Anywhere
♦ Behavior changes from previous releases
♦ Upgrade procedures
♦ Introductory material for beginning users.
♦ Programming Interfaces Guide Intended for application developers
writing programs that directly access the ODBC, Embedded SQL, or
Open Client interfaces, this book describes how to develop applications
for Adaptive Server Anywhere.
This book is not required for users of Application Development tools
with built-in ODBC support, such as Sybase PowerBuilder.
♦ Reference A full reference to Adaptive Server Anywhere. This book
describes the database server, the administration utilities, the SQL
language, and error messages.
♦ Quick Reference A handy printed booklet with complete SQL syntax
and other key reference material in a concise format.
♦ Read Me First (UNIX only) A separate booklet is provided with UNIX
versions of Adaptive Server Anywhere, describing installation and
adding some UNIX-specific notes.
The format of these books (printed or online) may depend on the product in
which you obtained Adaptive Server Anywhere. Depending on which
package you have purchased, you may have additional books describing
other components of your product.
Documentation conventions
This section lists the typographic and graphical conventions used in this
documentation.
Syntax conventions
The following conventions are used in the SQL syntax descriptions:
♦ Keywords All SQL keywords are shown in UPPER CASE. However,
SQL keywords are case insensitive, so you can enter keywords in any
case you wish; SELECT is the same as Select, which is the same as
select.
♦ Placeholders Items that must be replaced with appropriate identifiers
or expressions are shown in italics.
♦ Continuation Lines beginning with ... are a continuation of the
statements from the previous line.
♦ Repeating items Lists of repeating items are shown with an element
of the list followed by an ellipsis (three dots). One or more list elements
are allowed. If more than one is specified, they must be separated by
commas.
♦ Optional portions Optional portions of a statement are enclosed by
square brackets. For example,
RELEASE SAVEPOINT [ savepoint-name ]
indicates that the savepoint-name is optional. The square brackets should
not be typed.
♦ Options When none or only one of a list of items must be chosen, the
items are separated by vertical bars and the list enclosed in square
brackets. For example,
[ ASC | DESC ]
indicates that you can choose one of ASC, DESC, or neither. The square
brackets should not be typed.
♦ Alternatives When precisely one of the options must be chosen, the
alternatives are enclosed in curly braces. For example,
QUOTES { ON | OFF }
indicates that exactly one of ON or OFF must be provided. The braces
should not be typed.
Graphic icons
The following icons are used in this documentation:
♦ A client application.
♦ A database. In some high-level diagrams, the icon may be used to
represent both the database and the database server that manages it.
♦ A programming interface (API).
Installed files
The following terms are used throughout the manual:
♦ Installation directory The directory into which you install Adaptive
Server Anywhere.
♦ Executable directory The executables and other files for each
operating system are held in an executable subdirectory of the
installation directory. This subdirectory has the following name:
♦ Windows NT and Windows 95/98 win32
♦ UNIX bin
♦ NetWare and Windows CE The executables are held in the
Adaptive Server Anywhere installation directory itself on these
platforms.
The sample database
There is a sample database included with Adaptive Server Anywhere. Many
of the examples throughout the documentation use this sample database.
The sample database represents a small company. It contains internal
information about the company (employees, departments, and financial data)
as well as product information (products), sales information (sales orders,
customers, and contacts), and financial information (fin_code, fin_data).
The following figure shows the tables in the sample database and how they
relate to each other.
The sample database, asademo.db, contains the following tables:

product
  id <pk> integer
  name char(15)
  description char(30)
  size char(18)
  color char(6)
  quantity integer
  unit_price numeric(15,2)

sales_order_items
  id <pk,fk> integer
  line_id <pk> smallint
  prod_id <fk> integer
  quantity integer
  ship_date date

employee
  emp_id <pk> integer
  manager_id integer
  emp_fname char(20)
  emp_lname char(20)
  dept_id <fk> integer
  street char(40)
  city char(20)
  state char(4)
  zip_code char(9)
  phone char(10)
  status char(1)
  ss_number char(11)
  salary numeric(20,3)
  start_date date
  termination_date date
  birth_date date
  bene_health_ins char(1)
  bene_life_ins char(1)
  bene_day_care char(1)
  sex char(1)

customer
  id <pk> integer
  fname char(15)
  lname char(20)
  address char(35)
  city char(20)
  state char(2)
  zip char(10)
  phone char(12)
  company_name char(35)

sales_order
  id <pk> integer
  cust_id <fk> integer
  order_date date
  fin_code_id <fk> char(2)
  region char(7)
  sales_rep <fk> integer

fin_code
  code <pk> char(2)
  type char(10)
  description char(50)

fin_data
  year <pk> char(4)
  quarter <pk> char(2)
  code <pk,fk> char(2)
  amount numeric(9)

contact
  id <pk> integer
  last_name char(15)
  first_name char(15)
  title char(2)
  street char(30)
  city char(20)
  state char(2)
  zip char(5)
  phone char(10)
  fax char(10)

department
  dept_id <pk> integer
  dept_name char(40)
  dept_head_id <fk> integer

The foreign-key relationships shown in the diagram are:
  sales_order_items.prod_id references product.id (id = prod_id)
  sales_order_items.id references sales_order.id (id = id)
  sales_order.cust_id references customer.id (id = cust_id)
  sales_order.sales_rep references employee.emp_id (emp_id = sales_rep)
  sales_order.fin_code_id references fin_code.code (code = fin_code_id)
  fin_data.code references fin_code.code (code = code)
  employee.dept_id references department.dept_id (dept_id = dept_id)
  department.dept_head_id references employee.emp_id (emp_id = dept_head_id)
PART ONE
Starting and Connecting to Your Database
This part describes how to start the Adaptive Server Anywhere database
server, and how to connect to your database from a client application.
CHAPTER 1
Running the Database Server
About this chapter This chapter describes how to start and stop the Adaptive Server Anywhere
database server, and the options open to you on startup under different
operating systems.
Contents
Topic Page
Introduction 4
Starting the server 7
Some common command-line switches 9
Stopping the database server 15
Starting and stopping databases 17
Running the server outside the current session 18
Troubleshooting server startup 29
Introduction
Adaptive Server Anywhere provides two versions of the database server:
♦ The personal database server This executable does not support
client/server communications across a network. Although provided for
single-user, same-machine use (for example, as an embedded database
engine), it is also useful for development work.
On Windows 95/98 and Windows NT, the name of the personal server
executable is dbeng7.exe. On UNIX operating systems, it is dbeng7.
♦ The network database server Intended for multi-user use, this
executable supports client/server communications across a network.
On Windows 95/98 and Windows NT the name of the network server
executable is dbsrv7.exe. On Novell NetWare the name is dbsrv7.nlm,
and on UNIX operating systems it is dbsrv7.
Server differences The request-processing engine is identical in the two servers. Each supports
exactly the same SQL, and exactly the same database features. The main
differences include:
♦ Network protocol support Only the network server supports
communications across a network.
♦ Number of connections The personal server has a limit of ten
simultaneous connections. The limit for the network server depends on
your license.
♦ Number of CPUs The personal database server uses a maximum of
two CPUs for request processing. The network database server has no
set limit.
♦ Default number of internal threads You can configure the number of
requests the server can process at one time using the -gn command-line
switch. The network database server has a default of 20 threads and no
set limit, while the personal database server has a default and limit of 10
threads.
$ For information on database server command-line switches, see
"The database server" on page 14 of the book ASA Reference.
♦ Startup defaults To reflect their use as a personal server and a server
for many users, the startup defaults are slightly different for each.
First steps
You can start a personal server running on a single database very simply. For
example, you can start both a personal server and a database called test.db by
typing the following command in the directory where test.db is located:
dbeng7 test
Where to enter commands You can enter commands in several ways,
depending on your operating system. For example, you can:
♦ type the command at a system command prompt.
♦ place the command in a shortcut or desktop icon.
♦ run the command in a batch file.
♦ include the command as a StartLine parameter in a connection string.
$ For more information, see "StartLine connection parameter" on
page 59 of the book ASA Reference.
There are slight variations in the basic command from platform to platform,
described in the following section.
$ You can also start a personal server using a database file name in a
connection string. For more information, see "Connecting to an embedded
database" on page 41.
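For example, a connection string along the following lines starts a personal server automatically when the application connects (a sketch only: the file path is illustrative, and the dba/sql credentials are the defaults used elsewhere in this manual; see the ASA Reference for the exact DBF and StartLine parameter syntax):

```
UID=dba;PWD=sql;DBF=c:\asa7\test.db;StartLine=dbeng7
```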
Caution
The database file and the transaction log file must be located on
the same physical machine as the database server. Database files
and transaction log files located on a network drive can lead to
poor performance and data corruption.
♦ Database switches For each database file you start, you can provide
database switches that control certain aspects of its behavior.
$ In this section, we look at some of the more important and commonly
used options. For full reference information on each of these switches, see
"The database server" on page 14 of the book ASA Reference.
In examples throughout this chapter where there are several command-line
options, we show them for clarity on separate lines, as they could be written
in a configuration file. If you enter them directly on a command line, you
must enter them all on one line.
Case sensitivity Command-line parameters are generally case sensitive. You should enter all
parameters in lower case.
Starting the server
Listing available command-line switches
v To list the database server command-line switches:
♦ Open a command prompt, and enter the following command:
dbeng7 -?
Naming databases You may want to provide a database name that is more
meaningful to users of client applications than the file name. The database is
identified by that name until it is stopped.
If you don’t provide a database name, the default name is the root of the
database file name (the file name without the .db extension). For example, in
the following command line the first database is named asademo, and the
second sample.
dbeng7 asademo.db sample.db
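The default-naming rule above can be sketched as follows (an illustrative Python snippet, not part of the product):

```python
import os

def default_database_name(db_file: str) -> str:
    """Return the default database name: the file name with its
    directory and .db extension removed."""
    base = os.path.basename(db_file)    # strip any directory path
    root, ext = os.path.splitext(base)  # split off the extension
    return root if ext.lower() == ".db" else base

# The command "dbeng7 asademo.db sample.db" starts databases
# named asademo and sample.
print(default_database_name("asademo.db"))  # asademo
print(default_database_name("sample.db"))   # sample
```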
Naming the server You may want to provide a database server name to avoid conflicts with
other server names on your network, or to provide a meaningful name for
users of client applications. The server keeps its name for its lifetime (until it
is shut down). If you don’t provide a server name, the server is given the
name of the first database started.
You can name the server by supplying a -n switch before the first database
file. For example, the following command line starts a server on the
asademo database, and gives it the name Cambridge:
dbeng7 -n Cambridge asademo.db
If you supply a server name, you can start a database server with no database
started. The following command starts a server named Galt with no database
started:
dbeng7 -n Galt
$ For information about starting databases on a running server, see
"Starting and stopping databases" on page 17.
Case sensitivity Server names and database names are case insensitive as long as the
character set is single-byte. For more information, see "Connection strings
and character sets" on page 303.
Some common command-line switches
A database file is also arranged in pages, with a size that is specified on the
command line. Every database page must fit into a cache page. By default,
the server page size is the same as the largest page size of the databases on
the command line. Once the server starts, you cannot start a database with a
larger page size than the server.
To allow databases with larger page sizes to be started after startup, you can
force the server to start with a specified page size using the -gp option. If
you use larger page sizes, remember to increase your cache size. A cache of
the same size will accommodate only a fraction of the number of the larger
pages, leaving less flexibility in arranging the space.
The following command starts a server that reserves an 8 MB cache and can
accommodate databases with page sizes up to 4096 bytes:
dbsrv7 -gp 4096 -c 8M -n myserver
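The trade-off is easy to see with a little arithmetic (an illustrative sketch, not product code): an 8 MB cache holds 4096 pages of 2048 bytes, but only 2048 pages of 4096 bytes.

```python
def cache_pages(cache_bytes: int, page_bytes: int) -> int:
    """Number of whole pages that fit in a cache of the given size."""
    return cache_bytes // page_bytes

CACHE = 8 * 1024 * 1024  # the 8 MB cache from the example above

for page_size in (1024, 2048, 4096):
    # Doubling the page size halves the number of cache pages.
    print(page_size, cache_pages(CACHE, page_size))
```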
Available protocols for the network server The network database server
(dbsrv7.exe) supports the following protocols:
♦ Shared memory This protocol is for same-machine communications,
and always remains available. It is available on all platforms.
♦ SPX This protocol is supported on all platforms except for UNIX.
♦ TCP/IP This protocol is supported on all platforms.
♦ IPX This protocol is supported on all platforms except for UNIX.
Although IPX is still available, it is recommended that you now use SPX
instead of IPX.
♦ NetBIOS This protocol is supported on all platforms except for
NetWare and UNIX.
♦ Named Pipes Provided on Windows NT only, Named Pipes is for
same-machine communications for applications that wish to run under a
certified security environment.
$ For more information on running the server using these options, see
"Supported network protocols" on page 90.
Specifying protocols You can instruct a server to use only some of the
available network protocols when starting up, by using the -x command-line
switch. The following command starts a server using the TCP/IP and SPX
protocols:
dbsrv7 -x "tcpip,spx"
Although not strictly required in this example, the quotes are necessary if
there are spaces in any of the arguments to -x.
You can add additional parameters to tune the behavior of the server for each
protocol. For example, the following command line (entered all on one line)
instructs the server to use two network cards, one with a specified port
number.
dbsrv7 -x
"tcpip{MyIP=192.75.209.12:2367,192.75.209.32}"
path\asademo.db
$ For detailed descriptions of the available network communications
parameters that can serve as part of the -x switch, see "Network
communications parameters" on page 61 of the book ASA Reference.
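As a rough illustration of how a -x argument is put together from protocol names and their optional parameter lists, consider the following sketch (the helper function is hypothetical, not part of Adaptive Server Anywhere):

```python
def build_x_switch(protocols: dict) -> str:
    """Build the argument for the -x switch from a mapping of
    protocol name -> parameter string ("" for no parameters).
    Shell quoting is left to the caller."""
    parts = []
    for name, params in protocols.items():
        # Parameters, when present, are wrapped in braces after the name.
        parts.append(f"{name}{{{params}}}" if params else name)
    return ",".join(parts)

# Reproduces the examples in the text:
print(build_x_switch({"tcpip": "", "spx": ""}))
# tcpip,spx
print(build_x_switch({"tcpip": "MyIP=192.75.209.12:2367,192.75.209.32"}))
# tcpip{MyIP=192.75.209.12:2367,192.75.209.32}
```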
Stopping the database server
It is better to stop the database server explicitly before closing the operating
system session. On NetWare, however, shutting down the NetWare server
machine properly does stop the database server cleanly.
Examples of commands that will not stop a server cleanly include:
♦ Stopping the process in Windows NT Task Manager.
♦ Using a UNIX slay or kill command.
Starting a database on a running server You can also start databases after
starting a server, in one of the following ways:
♦ While connected to a server, connect to a database using a DBF
parameter. This parameter specifies a database file for a new connection.
The database file is started on the current server.
$ For more information, see "Connecting to an embedded database"
on page 41.
♦ Use the START DATABASE statement, or select Start Database from
the File menu in Sybase Central when you have a server selected.
$ For a description, see "START DATABASE statement" on
page 604 of the book ASA Reference.
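In SQL, such a statement might look like the following (a minimal sketch; the file path is illustrative, and the full clause syntax is given in the ASA Reference):

```sql
-- Start a database file on the current server
START DATABASE 'c:\asa7\asademo.db'
```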
Limitations ♦ The server holds database information in memory using pages of a fixed
size. Once a server has been started, you cannot start a database that has
a larger page size than the server.
♦ The -gd server command-line option determines the permissions
required to start databases.
Running the server outside the current session
Limitations of running as a standard executable When you start a
program, it runs under your Windows NT login session, which means that if
you log off the computer, the program terminates. Only one person can log
onto Windows NT (on any one computer) at a time, which restricts the use of
the computer if you wish to keep a program running much of the time, as is
commonly the case with database servers. You must stay logged onto the
computer running the database server for the database server to keep
running. This can also present a security risk, as the Windows NT computer
must be left in a logged-on state.
Advantages of services Installing an application as a Windows NT service
enables it to run even when you log off.
When you start a service, it logs on using a special system account called
LocalSystem (or another account you specify). Since the service is not tied
to the user ID of the person starting it, it keeps running even when the
person who started it logs off. You can also configure a service to start
automatically when the NT computer starts, before any user logs on.
Managing services Sybase Central provides a more convenient and comprehensive way of
managing Adaptive Server Anywhere services than the Windows NT
services manager.
Managing services
You can carry out the following service management tasks in the Services
folder in Sybase Central:
♦ Add, edit, and remove services.
♦ Start, stop, and pause services.
Adding a service
This section describes how to set up services using Sybase Central.
Notes ♦ Service names must be unique within the first eight characters.
♦ If you choose to start a service automatically, it starts whenever the
computer starts Windows NT. If you choose to start manually, you need
to start the service from Sybase Central each time. You may want to
select Disabled if you are setting up a service for future use.
♦ Enter command-line switches for the executable, without the executable
name itself, in the window. For example, if you want a network server to
run using the sample database with a cache size of 20 MB and a name of
myserver, you would enter the following in the Parameters box:
-c 20M
-n myserver c:\asa7\asademo.db
Line breaks are optional. For information on valid command-line
switches, see the description of each program in "Database
Administration Utilities" on page 71 of the book ASA Reference.
♦ Choose the account under which the service will run: the special
LocalSystem account or another user ID. For more information about
this choice, see "Setting the account options" on page 23.
♦ If you want the service to be accessible from the Windows NT desktop,
check Allow Service to Interact with Desktop. If this option is
unchecked, no icon or window appears on the desktop.
Removing a service
Removing a service removes the server name from the list of services.
Removing a service does not remove any software from your hard disk.
If you wish to re-install a service you previously removed, you need to
re-enter the command-line switches.
v To remove a service:
1 In Sybase Central, open the Services folder.
2 In the right pane, right-click the icon of the service you want to remove
and choose Delete from the popup menu.
Configuring services
A service runs a database server or other application with a set of command-
line switches. For a full description of the command-line switches for each of
the administration utilities, see "Database Administration Utilities" on
page 71 of the book ASA Reference.
In addition to the command-line switches, services accept other parameters
that specify the account under which the service runs and the conditions
under which it starts.
2 In the right pane, right-click the service you want to change and choose
Properties from the popup menu.
3 Alter the parameters as needed on the pages of the Properties dialog.
4 Click OK when finished.
Changes to a service configuration take effect next time someone starts the
service. The Startup option is applied the next time Windows NT is started.
$ For a full description of the service property sheet, see "Service
properties" on page 1038.
-c "uid=dba;pwd=sql;dbn=asademo"
$ The command-line switches for a service are the same as those for the
executable. For a full description of the command-line switches for each
program, see "The Database Server" on page 13 of the book ASA Reference.
If you choose to run the service under an account other than LocalSystem,
that account must have the "log on as a service" privilege. This can be
granted from the Windows NT User Manager application, under Advanced
Privileges.
When an icon appears on the taskbar Whether or not an icon for the
service appears on the taskbar or desktop depends on the account you select,
and on whether Allow Service to Interact with Desktop is checked, as
follows:
♦ If a service runs under LocalSystem, and Allow Service to Interact with
Desktop is checked in the service property sheet, an icon appears on the
desktop of every user logged in to NT on the computer running the
service. Consequently, any user can open the application window and
stop the program running as a service.
♦ If a service runs under LocalSystem, and Allow Service to Interact with
Desktop is unchecked in the service property sheet, no icon appears on
the desktop for any user. Only users with permissions to change the state
of services can stop the service.
♦ If a service runs under another account, no icon appears on the desktop.
Only users with permissions to change the state of services can stop the
service.
Service dependencies
In some circumstances you may wish to run more than one executable as a
service, and these executables may depend on each other. For example, you
may wish to run a server and a SQL Remote Message Agent or Log Transfer
Manager to assist in replication.
In cases such as these, the services must start in the proper order. If a SQL
Remote Message Agent service starts up before the server has started, it fails
because it cannot find the server.
You can prevent these problems using service groups, which you manage
from Sybase Central.
Before you can configure your services to ensure they start in the correct
order, you must check that your service is a member of an appropriate group.
You can check which group a service belongs to, and change this group,
from Sybase Central.
Troubleshooting server startup
Ensure that you have sufficient disk space for your temporary file
Adaptive Server Anywhere uses a temporary file to store information while
running. This file is stored in the directory pointed to by the TMP or TEMP
environment variable, typically c:\temp.
If the directory that holds the temporary file does not have sufficient disk
space available, you will have problems starting the server.
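A quick way to check where the temporary file will go, and how much space is free there, is the following sketch (illustrative Python; Adaptive Server Anywhere itself reads TMP or TEMP directly):

```python
import os
import shutil
import tempfile

# Resolve the temporary directory the way the text describes:
# TMP first, then TEMP, falling back to the system default.
tmp_dir = os.environ.get("TMP") or os.environ.get("TEMP") or tempfile.gettempdir()

# Report the free space available in that directory.
free_bytes = shutil.disk_usage(tmp_dir).free
print(f"Temporary directory: {tmp_dir}")
print(f"Free space: {free_bytes // (1024 * 1024)} MB")
```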
CHAPTER 2
Connecting to a Database
About this chapter This chapter describes how client applications connect to databases. It
contains information about connecting to databases from ODBC, OLE DB,
and embedded SQL applications. It also describes connecting from Sybase
Central and Interactive SQL.
$ For information on connecting to a database from Sybase Open Client
applications, see "Adaptive Server Anywhere as an Open Server" on
page 963.
$ For information on connecting via JDBC (if you are not working in
Sybase Central or Interactive SQL), see "Data Access Using JDBC" on
page 577.
Contents
Topic Page
Introduction to connections 32
Connecting from Sybase Central or Interactive SQL 36
Simple connection examples 39
Working with ODBC data sources 47
Connecting to a database using OLE DB 58
Connection parameters 60
Troubleshooting connections 63
Using integrated logins 73
Introduction to connections
Any client application that uses a database must establish a connection to
that database before any work can be done. The connection forms a channel
through which all activity from the client application takes place. For
example, your user ID determines permissions to carry out actions on the
database—and the database server has your user ID because it is part of the
request to establish a connection.
How connections are established To establish a connection, the client
application calls functions in one of the Adaptive Server Anywhere
interfaces. Adaptive Server Anywhere provides the following interfaces:
♦ ODBC ODBC connections are discussed in this chapter.
♦ OLE DB OLE DB connections are discussed in this chapter.
♦ Embedded SQL Embedded SQL connections are discussed in this
chapter.
♦ Sybase Open Client Open Client connections are not discussed in this
chapter. For information on connecting from Open Client applications,
see "Adaptive Server Anywhere as an Open Server" on page 963.
♦ JDBC Sybase Central and Interactive SQL have the connection logic
described in this chapter built into them. Other JDBC applications
cannot use the connection logic discussed in this chapter.
$ For general information on connecting via JDBC, see "Data Access
Using JDBC" on page 577.
The interface uses connection information included in the call from the client
application, perhaps together with information held on disk in a file data
source, to locate and connect to a server running the required database. The
following figure is a simplified representation of the pieces involved.
[Figure: the client application calls the interface library, which communicates with the database server.]
Representing connection strings This chapter has many examples of connection strings represented in the following form:
parameter1=value1
parameter2=value2
...
This is equivalent to the following connection string:
parameter1=value1;parameter2=value2
You must enter a connection string on a single line, with the parameter
settings separated by semicolons.
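The single-line form can be taken apart mechanically. As an illustration only (not part of the product), here is a short Python sketch that splits such a string into a parameter dictionary, treating parameter names case-insensitively as the interface library does:

```python
def parse_connection_string(s):
    """Split "parameter1=value1;parameter2=value2" into a dict.

    Keys are lowercased because connection parameters are case
    insensitive. If a parameter appears more than once, the last
    occurrence wins, matching the left-to-right rule described
    later in this chapter.
    """
    params = {}
    for part in s.split(";"):
        part = part.strip()
        if not part:
            continue
        key, _, value = part.partition("=")
        params[key.strip().lower()] = value.strip()
    return params

print(parse_connection_string("eng=svr_name;dbn=db_name;uid=user_id;pwd=password"))
```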
Connecting from Sybase Central or Interactive SQL
Tip
You can make subsequent connections to a given database easier and
faster by using a connection profile.
Note You do not need to enter a user ID and a password for this connection
because the data source already contains this information.
Simple connection examples
Tips
If the database is already loaded (started) on the server, you only need to
provide a database name for a successful connection. The database file is
not necessary.
You can connect using a data source (a stored set of connection
parameters) for either of the above scenarios by selecting the appropriate
data source option at the bottom of the Identification tab of the Connect
dialog. For information about using data sources in conjunction with the
JDBC driver (jConnect), see "Specifying a driver for your connection" on
page 37.
$ See also
♦ "Opening the Connect dialog" on page 36
♦ "Simple connection examples" on page 39
Using the Start parameter The following connection parameters show how you can customize the startup of the sample database as an embedded database. This is useful if you wish to use command-line options, such as the cache size:
Start=dbeng7 -c 8M
dbf=path\asademo.db
uid=dba
pwd=sql
$ See also
♦ "Opening the Connect dialog" on page 36
♦ "Simple connection examples" on page 39
The ASA 7.0 Sample data source holds a set of connection parameters,
including the database file and a Start parameter to start the database.
$ See also
♦ "Opening the Connect dialog" on page 36
♦ "Simple connection examples" on page 39
Specifying the server Adaptive Server Anywhere server names must be unique on a local domain for a given network protocol. The following connection parameters provide a simple example for connecting to a server running elsewhere on a network:
eng=svr_name
dbn=db_name
uid=user_id
pwd=password
CommLinks=all
The client library first looks for a personal server of the given name, and then
looks on the network for a server of the given name.
$ The above example finds any server started using the default port
number. However, you can start servers using other port numbers by
providing more information in the CommLinks parameter. For information,
see "CommLinks connection parameter" on page 50 of the book ASA
Reference.
Specifying the protocol If several protocols are available, you can instruct the network library which ones to use to improve performance. The following parameters use only the TCP/IP protocol:
eng=svr_name
dbn=db_name
uid=user_id
pwd=password
CommLinks=tcpip
The network library searches for a server by broadcasting over the network,
which can be a time-consuming process. Once the network library locates a
server, the client library stores its name and network address in a file, and
reuses this entry for subsequent connection attempts to that server using the
specified protocol. Subsequent connections can be many times faster than a
connection achieved by broadcast.
$ Many other connection parameters are available to assist Adaptive
Server Anywhere in locating a server efficiently over a network. For more
information see "Network communications parameters" on page 61 of the
book ASA Reference.
Tips
You can connect using a data source (a stored set of connection
parameters) by selecting the appropriate data source option at the bottom
of the Identification tab of the Connect dialog. For information about
using data sources in conjunction with the JDBC driver (jConnect), see
"Specifying a driver for your connection" on page 37.
By default, all network connections in Sybase Central and Interactive
SQL use the TCP/IP network protocol. For more information about
network protocol options, see "Network communication concepts" on
page 82.
$ See also
Default database server If more than one database is loaded on a single personal server, you can leave the server as a default, but you need to specify the database you wish to connect to:
dbn=db_name
uid=user_id
pwd=password
Default database If more than one server is running, you need to specify which server you
wish to connect to. If only one database is loaded on that server, you do not
need to specify the database name. The following connection string connects
to a named server, using the default database:
eng=server_name
uid=user_id
pwd=password
No defaults The following connection string connects to a named server, using a named
database:
eng=server_name
dbn=db_name
uid=user_id
pwd=password
$ For more information about default behavior, see "Troubleshooting
connections" on page 63.
$ For a description of command line switches for each database tool, see
the chapter "Database Administration Utilities" on page 71 of the book ASA
Reference.
For Adaptive Server Anywhere, the use of ODBC data sources goes beyond
Windows applications using the ODBC interface:
♦ Adaptive Server Anywhere client applications on UNIX can use ODBC
data sources, as well as those on Windows operating systems.
♦ Adaptive Server Anywhere client applications using the OLE DB or
embedded SQL interfaces can use ODBC data sources, as well as ODBC
applications.
♦ Interactive SQL and Sybase Central can use ODBC data sources.
Working with ODBC data sources
Before you begin This section describes how to create an ODBC data source. Before you
create a data source, you need to know which connection parameters you
want to include in it.
$ For more information, see "Simple connection examples" on page 39,
and "Connection parameters" on page 60.
ODBC Administrator On Windows 95/98 and Windows NT, you can use the Microsoft ODBC Administrator to create and edit data sources. You can work with User Data Sources, File Data Sources, and System Data Sources in this utility.
3 From the list of drivers, choose Adaptive Server Anywhere 7.0, and
click Finish. The ODBC Configuration for Adaptive Server Anywhere
window appears.
Most of the fields in this window are optional. Click the question mark
at the top right of the window and click a dialog field to find more
information about that field.
$ For more information about the fields in the dialog, see
"Configuring ODBC data sources using the ODBC Administrator" on
page 50.
4 When you have specified the parameters you need, click OK to close the
window and create the data source.
To edit a data source, find and select one in the ODBC Administrator main
window and click Configure.
Creating an ODBC data source from the command line You can create User Data Sources using the dbdsn command-line utility. You cannot create File Data Sources or System Data Sources: File and System Data Sources are limited to Windows operating systems only, and you can use the ODBC Administrator to create them.
$ For more information on the dbdsn utility, see "The Data Source
utility" on page 84 of the book ASA Reference.
ODBC tab
Data source name The Data Source Name is used to identify the ODBC
data source. You can use any descriptive name for the data source (spaces
are allowed) but it is recommended that you keep the name short, as you may
need to enter it in connection strings.
$ For more information, see "DataSourceName connection parameter" on
page 53 of the book ASA Reference.
Isolation level Select the desired isolation level for this data source:
♦ 0 Dirty reads, non-repeatable reads and phantom rows may occur. This
is the default.
♦ 1 Non-repeatable reads and phantom rows may occur. Dirty reads are
prevented.
♦ 2 Phantom rows may occur. Dirty reads and non-repeatable reads are
prevented.
♦ 3 Dirty reads, non-repeatable reads and phantom rows are prevented.
$ For more information, see "Choosing isolation levels" on page 382.
Microsoft applications (keys in SQL Statistics) Check this box if you
wish foreign keys to be returned by SQLStatistics. The ODBC
specification states that primary and foreign keys should not be returned by
SQLStatistics. However, some Microsoft applications (such as Visual Basic
and Access) assume that primary and foreign keys are returned by
SQLStatistics.
Prevent driver not capable errors The Adaptive Server Anywhere ODBC
driver returns a Driver not capable error code because it does not support
qualifiers. Some ODBC applications do not handle this error properly. Check
this box to disable this error code, allowing such applications to work.
Delay AutoCommit until statement close Check this box if you wish the
Adaptive Server Anywhere ODBC driver to delay the commit operation until
a statement has been closed.
Describe cursor behavior Select how often you wish a cursor to be redescribed when a procedure is executed or resumed.
Login tab
Use integrated login Connects using an integrated login. The User ID and
password do not need to be specified. To use this type of login, users must
have been granted integrated login permission, and the database being
connected to must be set up to accept integrated logins. Only users with
DBA access may administer integrated login permissions.
$ For more information, see "Using integrated logins" on page 73.
User ID Provides a place for you to enter the User ID for the connection.
$ For more information, see "Userid connection parameter" on page 60 of
the book ASA Reference.
Password Provides a place for you to enter the password for the
connection.
$ For more information, see "Password connection parameter" on
page 58 of the book ASA Reference.
Encrypt password Check this box if you wish the password to be stored in
encrypted form in the profile.
$ For more information, see "EncryptedPassword connection parameter"
on page 55 of the book ASA Reference.
Database tab
Server name Provides a place for you to enter the name of the Adaptive
Server Anywhere personal or network server.
$ For more information, see "EngineName connection parameter" on
page 55 of the book ASA Reference.
Start line Enter the command line used to start the server. Provide a Start
Line parameter only if you are connecting to a database server that is not
currently running. For example:
C:\Program Files\Sybase\SQL Anywhere 7\win32\dbeng7.exe -c 8m
Database name Provides a place for you to enter the name of the Adaptive
Server Anywhere database that you wish to connect to.
$ For more information, see "DatabaseName connection parameter" on
page 53 of the book ASA Reference.
Database file Provides a place for you to enter the full path and name of
the Adaptive Server Anywhere database file on the server PC. You may also
click Browse to locate the file. For example:
C:\Program Files\Sybase\SQL Anywhere 7\asademo.db
Network tab
Select the network protocols and specify any protocol-specific options
where necessary These check boxes specify what protocol or protocols
the ODBC DSN uses to access a network database server. In the adjacent
boxes, you may enter communication parameters that establish and tune
connections from your client application to a database.
$ For more information see "CommLinks connection parameter" on
page 50 of the book ASA Reference, and "Network communications
parameters" on page 61 of the book ASA Reference.
Advanced tab
Connection name The name of the connection that is being created.
Character set Lets you specify a character set (a set of 256 letters,
numbers, and symbols specific to a country or language). The ANSI
character set is used by Microsoft Windows. An OEM character set is any
character set except the ANSI character set.
Display debugging information in a log file The name of the file in which
the debugging information is to be saved.
Embedded SQL applications can also use ODBC file data sources.
Each data source itself is held in a file. The file has the same name as the
data source, with an extension of .dsn.
$ For more information about file data sources, see "Using file data
sources on Windows" on page 54.
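A file data source is a plain text file in ODBC's standard file-DSN layout. The following is a hypothetical sketch of what such a file might contain for an Adaptive Server Anywhere database; the specific attribute values (driver name, path, credentials) are illustrative assumptions, not values taken from this guide:

```ini
[ODBC]
DRIVER = Adaptive Server Anywhere 7.0
uid = dba
pwd = sql
DatabaseFile = C:\Program Files\Sybase\SQL Anywhere 7\asademo.db
```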
With the -z switch, the server writes out its IP address during startup.
The address may change if you disconnect your HPC from the network
and then re-connect it.
Connecting to a database using OLE DB
OLE DB providers
You need an OLE DB provider for each type of data source you wish to
access. Each provider is a dynamic-link library. There are two OLE DB
providers you can use to access Adaptive Server Anywhere:
♦ Sybase ASA OLE DB provider The Adaptive Server Anywhere OLE
DB provider provides access to Adaptive Server Anywhere as an OLE
DB data source without the need for ODBC components. The short
name for this provider is ASAProv.
When the ASAProv provider is installed, it registers itself. This
registration process includes making registry entries in the COM section
of the registry, so that ADO can locate the DLL when the ASAProv
provider is called. If you change the location of your DLL, you must
reregister it.
♦ Microsoft OLE DB provider for ODBC Microsoft provides an
OLE DB provider with a short name of MSDASQL.
The MSDASQL provider makes ODBC data sources appear as OLE DB
data sources. It requires the Adaptive Server Anywhere ODBC driver.
Connection parameters
The following table lists the Adaptive Server Anywhere connection
parameters.
$ For a full description of each of these connection parameters, see
"Connection and Communication Parameters" on page 43 of the book ASA
Reference. For character set issues in connection strings, see "Connection
strings and character sets" on page 303.
Notes ♦ Boolean values Boolean (true or false) arguments are either YES,
ON, 1, or TRUE if true, or NO, OFF, 0, or FALSE if false.
♦ Case sensitivity Connection parameters are case insensitive.
♦ The connection parameters used by the interface library can be obtained
from the following places (in order of precedence):
♦ Connection string You can pass parameters explicitly in the
connection string.
♦ SQLCONNECT environment variable The SQLCONNECT
environment variable can store connection parameters.
♦ Data sources ODBC data sources can store parameters.
♦ Character set restrictions The server name must be composed of the
ASCII character set in the range 1 to 127. There is no such limitation on
other parameters.
$ For more information on the character set issues, see "Connection
strings and character sets" on page 303.
♦ Priority The following rules govern the priority of parameters:
♦ The entries in a connect string are read left to right. If the same
parameter is specified more than once, the last one in the string
applies.
♦ If a string contains a data source or file data source entry, the profile
is read from the configuration file, and the entries from the file are
used if they are not already set. For example, if a connection string
contains a data source name and sets some of the parameters
contained in the data source explicitly, then in case of conflict the
explicit parameters are used.
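These rules lend themselves to a small illustration. The Python sketch below (an illustration only, not product code) shows the boolean spellings and the precedence behavior: explicit connection-string parameters override values read from a data source, while the data source fills in only what is missing:

```python
def parse_bool(value):
    """Interpret a connection-parameter boolean as described above."""
    return value.strip().upper() in ("YES", "ON", "1", "TRUE")

def merge_parameters(explicit, data_source):
    """Start from the data source entries, then let explicit
    connection-string parameters win any conflict."""
    merged = dict(data_source)
    merged.update(explicit)
    return merged

explicit = {"uid": "user_id"}
data_source = {"uid": "dba", "pwd": "sql", "integrated": "NO"}
merged = merge_parameters(explicit, data_source)
print(merged["uid"])                       # the explicit value wins
print(parse_bool(merged["integrated"]))    # NO is a false spelling
```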
Troubleshooting connections
Who needs to read this section? In many cases, establishing a connection to a database is straightforward using the information presented in the first part of this chapter.
However, if you are having problems establishing connections to a server,
you may need to understand the process by which Adaptive Server
Anywhere establishes connections in order to resolve your problems. This
section describes how Adaptive Server Anywhere connections work.
The software follows exactly the same procedure for each of the following
types of client application:
♦ ODBC Any ODBC application using the SQLDriverConnect
function, which is the common method of connection for ODBC
applications. Many application development systems, such as Sybase
PowerBuilder and Power++, belong to this class of application.
♦ Embedded SQL Any client application using Embedded SQL and
using the recommended function for connecting to a database
(db_string_connect).
The SQL CONNECT statement is available for Embedded SQL
applications and in Interactive SQL. It has two forms: CONNECT AS...
and CONNECT USING. All the database administration tools, including
Interactive SQL, use db_string_connect.
When the library is located Once the client application locates the interface library, it passes a connection string to it. The interface library uses the connection string to assemble a list of connection parameters, which it uses to establish a connection to a server.
[Flowchart: if the connection string names a data source, the interface library checks whether that data source (or a compatibility data source) exists. If it does not, the connection fails; if it does, any parameters not already specified are read from the data source, completing the connection parameters.]
Locating a server
In the next step toward establishing a connection, Adaptive Server Anywhere
attempts to locate a server. If the connection parameter list includes a server
name (ENG parameter), it carries out a search first for a local server
(personal server or network server running on the same machine) of that
name, followed by a search over a network. If no ENG parameter is supplied,
Adaptive Server Anywhere looks for a default server.
[Flowchart: if an ENG parameter is given, the software looks for a local server named ENG; if none is found, it starts the required network protocol ports and attempts to locate a server named ENG over the network using the available ports. If no ENG parameter is given, it looks for a default personal server, and if no server can be located it attempts to start a personal server.]
♦ The network search involves a search over one or more of the protocols
supported by Adaptive Server Anywhere. For each protocol, the network
library starts a single port. All connections over that protocol at any one
time use a single port.
♦ You can specify a set of network communication parameters for each
network port in the argument to the CommLinks parameter. Since these
parameters are necessary only when the port first starts, the interface
library ignores any connection parameters specified in CommLinks for
a port already started.
♦ Each attempt to locate a server (the local attempt and the attempt for
each network port) involves two steps. First, Adaptive Server Anywhere
looks in the server name cache to see if a server of that name is
available. Second, it uses the available connection parameters to attempt
a connection.
[Flowchart: if DBN is specified, the software attempts to connect to that database. If DBN is not specified but DBF is, it checks whether a database whose name is the root of DBF is already running: if so, it attempts to connect to it; if not, it loads the database and attempts to connect. If neither DBN nor DBF is specified, it attempts to connect to the default database if one is running; otherwise the connection fails.]
[Flowchart fragment: when the database must be started, the software checks whether a START parameter is present, and then whether a DBF parameter is present, to determine how to start it.]
How the cache is used When a connection specifies a server name, and a server with that name is not found, the network library looks first in the server name cache to see if the server is known. If there is an entry for the server name, an attempt is
made to connect using the link and address in the cache. If the server is
located using this method, the connection is much faster, as no broadcast is
involved.
If the server is not located using cached information, the connection string
information and CommLinks parameter are used to search for the server
using a broadcast. If the broadcast is successful, the server name entry in the
cache is overwritten.
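The cache-then-broadcast behavior can be pictured with a small sketch. The Python below is purely illustrative; the lookup callables, the cache layout, and the addresses (other than the default ASA TCP/IP port, 2638) are stand-ins for the network library's real mechanisms:

```python
def locate_server(name, cache, try_address, broadcast):
    """Find a server address, preferring the server name cache.

    ``try_address(addr)`` and ``broadcast(name)`` are stand-ins for the
    network library's connection attempt and broadcast search;
    ``broadcast`` returns an address or None.
    """
    addr = cache.get(name)
    if addr is not None and try_address(addr):
        return addr                      # fast path: no broadcast needed
    addr = broadcast(name)               # slow path: search the network
    if addr is not None:
        cache[name] = addr               # overwrite the stale cache entry
        return addr
    return None

# Example: the cached address is stale, so a broadcast refreshes it.
cache = {"Waterloo": "10.0.0.5:2638"}
found = locate_server(
    "Waterloo", cache,
    try_address=lambda a: False,          # cached address no longer answers
    broadcast=lambda n: "10.0.0.9:2638",  # broadcast finds the server
)
print(found, cache["Waterloo"])
```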
Examples The following command line tests to see if a server named Waterloo is
available over a TCP/IP connection:
dbping -c "eng=Waterloo;CommLinks=tcpip"
The following command tests to see if a default server is available on the
current machine.
dbping
$ For more information on dbping options, see "The Ping utility" on
page 114 of the book ASA Reference.
$ For more information about printing more output during connection
attempts, see "Debug connection parameter" on page 54 of the book ASA
Reference. This feature is especially useful when used in conjunction with
the "Logfile connection parameter" on page 58 of the book ASA Reference.
Using integrated logins
Caution
Integrated logins offer the convenience of a single security system but
there are important security implications which database administrators
should be familiar with.
Caution
Setting the LOGIN_MODE database option to Integrated restricts
connections to only those users who have been granted an integrated
login mapping. Attempting to connect using a user ID and password
generates an error. The only exceptions are users with DBA authority
(full administrative rights).
Example The following SQL statement sets the value of the LOGIN_MODE database
option to Mixed, allowing both standard and integrated login connections:
SET OPTION Public.LOGIN_MODE = Mixed
Example The following SQL statement allows Windows NT users fran_whitney and
matthew_cobb to log in to the database as the user DBA, without having to
know or provide the DBA user ID or password.
GRANT INTEGRATED LOGIN
TO fran_whitney, matthew_cobb
AS USER dba
Example The following SQL statement removes integrated login permission from the
Windows NT user dmelanso.
REVOKE INTEGRATED LOGIN
FROM dmelanso
$ See also
♦ "GRANT statement" on page 526 of the book ASA Reference
Interactive SQL examples For example, a connection attempt using the following Interactive SQL statement will succeed, provided the user has logged on with a user profile name that matches an integrated login mapping in a default database of a server:
CONNECT USING 'INTEGRATED=yes'
The following Interactive SQL statement...
CONNECT
...can connect to a database if all the following are true:
♦ A server is currently running.
♦ The default database on the current server is enabled to accept integrated
login connections.
♦ An integrated login mapping has been created that matches the current
user’s user profile name.
♦ If the user is prompted with a dialog box by the server for more
connection information (such as occurs when using the Interactive SQL
utility), the user clicks OK without providing more information.
Integrated logins via ODBC A client application connecting to a database via ODBC can use an integrated login by including the Integrated parameter among other attributes in its Data Source configuration.
Setting the attribute Integrated=yes in an ODBC data source causes
database connection attempts using that DSN to attempt an integrated login.
If the LOGIN_MODE database option is set to Standard, the ODBC driver
prompts the user for a database user ID and password.
Caution
Leaving the user profile Guest enabled can permit unrestricted access to a
database that is hosted by that server.
If the Guest user profile is enabled and has a blank password, any attempt to
log in to the server will be successful. It is not required that a user profile
exist on the server, or that the login ID provided have domain login
permissions. Literally any user can log in to the server using any login ID
and any password: they are logged in by default to the Guest user profile.
This has important implications for connecting to a database with the
integrated login feature enabled.
Consider the following scenario, which assumes the Windows NT server
hosting a database has a "Guest" user profile that is enabled with a blank
password.
♦ An integrated login mapping exists between the user fran_whitney and
the database user ID DBA. When the user fran_whitney connects to the
server with her correct login ID and password, she connects to the
database as DBA, a user with full administrative rights.
But anyone else attempting to connect to the server as fran_whitney will
successfully log in to the server regardless of the password they provide,
because Windows NT will default that connection attempt to the "Guest"
user profile. Having successfully logged in to the server using the
fran_whitney login ID, the unauthorized user successfully connects to
the database as DBA using the integrated login mapping.
If the database is shut down and restarted, the option value remains the same
and integrated logins are still enabled.
Changing the LOGIN_MODE option temporarily will still allow user access
via integrated logins. The following statement will change the option value
temporarily:
SET TEMPORARY OPTION Public.LOGIN_MODE = Mixed
If the permanent option value is Standard, the database will revert to that
value when it is shut down.
Setting temporary public options can be considered an additional security
measure for database access since enabling integrated logins means that the
database is relying on the security of the operating system on which it is
running. If the database is shut down and copied to another machine (such as
a user’s machine) access to the database reverts to the Adaptive Server
Anywhere security model and not the security model of the operating system
of the machine where the database has been copied.
$ For more information on using the SET OPTION statement see "SET
OPTION statement" on page 596 of the book ASA Reference.
CHAPTER 3
Client/Server Communications
About this chapter Each network environment has its own peculiarities. This chapter describes
those aspects of network communication that are relevant to the proper
functioning of your database server, and provides some tips for diagnosing
network communication problems. It describes how networks operate, and
provides hints on running the network database server under each protocol.
Contents
Topic Page
Network communication concepts 82
Real world protocol stacks 87
Supported network protocols 90
Using the TCP/IP protocol 91
Using the SPX protocol 94
Using the NetBIOS protocol 96
Using Named Pipes 97
Troubleshooting network communications 98
Network communication concepts
[Figure: matching protocol stacks on Computer A and Computer B, linked by physical transmission at the lowest layer.]
Examples of transport protocols Novell's SPX, Microsoft and IBM's NetBEUI, and Named Pipes are widely used transport protocols. The TCP/IP suite of protocols includes more than one transport layer. NetBIOS is an interface specification to the transport layer from IBM and Microsoft that is commonly (but not necessarily) paired with the NetBEUI protocol.
Adaptive Server Anywhere supports the NetBIOS interface to the transport
layer. In addition, Adaptive Server Anywhere has an interface to Named
Pipes for same-machine communications only.
Adaptive Server Anywhere applies its own checks to the data passed
between client application and server, to further ensure the integrity of data
transfer.
At the data link layer, ODI-based protocol stacks can be made compatible
with NDIS-based protocol stacks using translation drivers, as discussed in
"Working with multiple protocol stacks" on page 88.
Real world protocol stacks
Troubleshooting tip
Not all translation drivers achieve complete compatibility. Be sure to get
the latest available version of the driver you need. Although we provide
some tips concerning network troubleshooting, the primary source of
assistance in troubleshooting a particular protocol stack is the
documentation for the network communications software that you install.
Supported network protocols
Using the TCP/IP protocol
Using the SPX protocol
Using the NetBIOS protocol
Troubleshooting network communications
When you download Novell client software, it includes ODI drivers for some
network adapters in addition to the Novell software that is used for all
network adapters.
2 Start the telnet client process on the other machine, and see if you get a
connection. Again, check with your TCP/IP software for how to do this.
For command line programs, you would typically type the following
instruction:
telnet server_name
where server_name is the name or IP address of the computer running
the telnet server process.
If a telnet connection is established between these two machines, the
protocol stack is stable and the client and server should be able to
communicate using the TCP/IP link between the two computers. If a telnet
connection cannot be established, there is a problem. You should ensure that
your TCP/IP protocol stack is working correctly before proceeding.
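The same reachability test can be scripted without telnet. Here is a minimal Python sketch using only the standard library; the host name is a placeholder, and the port shown is Adaptive Server Anywhere's default TCP/IP port, 2638:

```python
import socket

def can_reach(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Equivalent in spirit to: telnet server_name 2638
print(can_reach("localhost", 2638))
```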
The default packet size for Adaptive Server Anywhere is 1024 bytes. The
maximum buffer size used by the adapter board should be greater than this to
allow for protocol information in the packet. The computer running the
database server might need more than the default number of buffers used by
a driver.
PART TWO
Working with Databases
This part of the manual describes the mechanics of carrying out common tasks
with Adaptive Server Anywhere.
CHAPTER 4
Working with Database Objects
About this chapter This chapter describes the mechanics of creating, altering, and deleting
database objects such as tables, views, and indexes.
Contents
Topic Page
Introduction 108
Working with databases 111
Working with tables 120
Working with views 134
Working with indexes 141
Introduction
With the Adaptive Server Anywhere tools, you can create a database file to
hold your data. Once this file is created, you can begin managing the
database. For example, you can add database objects, such as tables or users,
and you can set overall database properties.
This chapter describes how to create a database and the objects within it. It
includes procedures for Sybase Central, Interactive SQL, and command-line
utilities. If you want more conceptual information before you begin, see the
following chapters:
♦ "Designing Your Database" on page 321
♦ "Ensuring Data Integrity" on page 345
♦ "About Sybase Central" on page 36 of the book Introducing SQL
Anywhere Studio
♦ "Using Interactive SQL" on page 145 of the book Getting Started with
ASA
The SQL statements for carrying out the tasks in this chapter are called the
data definition language (DDL). The definitions of the database objects
form the database schema: you can think of the schema as an empty
database.
$ Procedures and triggers are also database objects, but they are
discussed in "Using Procedures, Triggers, and Batches" on page 423.
Chapter contents This chapter contains the following material:
♦ An introduction to working with database objects (this section)
♦ A description of how to create and work with the database itself
♦ A description of how to create and alter tables, views, and indexes
Questions and answers
♦ How do I create or erase a database? See "Creating a database" on page 111 and "Erasing a database" on page 114.
♦ How do I disconnect from a database? See "Disconnecting from a database" on page 115.
♦ How do I set the properties for any database object? See "Setting properties for database objects" on page 116.
♦ How do I set database options? See "Setting database options" on page 116.
♦ How do I set the consolidated database? See "Setting a consolidated database".
Working with databases
Creating a database
Adaptive Server Anywhere provides a number of ways to create a database:
in Sybase Central, in Interactive SQL, and with the command line. Creating a
database is also called initializing it. Once the database is created, you can
connect to it and build the tables and other objects that you need in the
database.
Transaction log When you create a database, you must decide where to place the transaction
log. This log stores all changes made to a database, in the order in which they
are made. In the event of a media failure on a database file, the transaction
log is essential for database recovery. It also makes your work more
efficient. By default, it is placed in the same directory as the database file,
but this is not recommended for production use.
$ For information on placing the transaction log, see "Configuring your
database for data protection" on page 645.
Database file compatibility An Adaptive Server Anywhere database is an operating system file. It can be copied to other locations just like any other file is copied.
Database files are compatible among all operating systems, except where file
system file size limitations or Adaptive Server Anywhere support for large
files apply. A database created from any operating system can be used from
another operating system by copying the database file(s). Similarly, a
database created with a personal server can be used with a network server.
Adaptive Server Anywhere servers can manage databases created with
earlier versions of the software, but old servers cannot manage newer
databases.
$ For more information about limitations, see "Size and number
limitations" on page 932 of the book ASA Reference.
Erasing a database
Erasing a database deletes all tables and data from disk, including the
transaction log that records alterations to the database. All database files are
read-only to prevent accidental modification or deletion of the database files.
In Sybase Central, you can erase a database using the Erase Database utility.
You need to connect to a database to access this utility, but the Erase
Database wizard lets you specify any database for erasing. In order to erase a
non-running database, the database server must be running.
In Interactive SQL, you can erase a database using the DROP DATABASE
statement. Required permissions can be set using the database server -gu
command-line option. The default setting is to require DBA authority.
You can also erase a database from a command line with the dberase utility.
The database to be erased must not be running when this utility is used.
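For example, the following Interactive SQL statement erases a database file; the file name is illustrative, and the database must not be in use:

```sql
-- Erase the named database file and its transaction log.
-- Requires the permission set by the server's -gu option (DBA by default).
DROP DATABASE 'c:\temp\sample2.db'
```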
Example 1 The following statement shows how to use DISCONNECT from Interactive
SQL to disconnect all connections:
DISCONNECT ALL
Tips
With the Set Options dialog, you can also set database options for specific
users and groups.
When you set options for the database itself, you are actually setting
options for the PUBLIC group in that database, because all users and
groups inherit option settings from PUBLIC.
$ For more information, see "Set Options dialog" on page 1010, and
"Database Options" on page 143 of the book ASA Reference.
In Interactive SQL, you cannot show a list of all system objects, but you can
browse the contents of a system table; for more information, see "Showing
system tables" on page 132.
Example Start the database file c:\asa7\sample_2.db as sam2 on the server named
sample.
START DATABASE 'c:\asa7\sample_2.db'
AS sam2
ON sample
$ For more information, see "START DATABASE statement" on
page 604 of the book ASA Reference.
Working with tables
Table Editor toolbar Once you have opened the Table Editor, a toolbar (shown below) provides you with the fields and buttons for common commands.
The first half of this toolbar shows the name of the current table and its
owner (or creator). For a new table, you can specify new settings in both of
these fields. For an existing table, you can type a new name but you cannot
change the owner.
With the buttons on the second half of the toolbar, you can:
♦ Add a new column. It appears at the bottom of the list of existing
columns.
♦ Delete selected columns
♦ View the Column Properties dialog for the selected column
♦ View the Advanced Table Properties for the entire table
♦ View and change the Table Editor options
♦ Add Java classes to the selected column’s data types
♦ Save the table but keep the Table Editor open
♦ Save the table and close the Table Editor in a single step
As an easy reminder of what these buttons do, you can hold your cursor over
each button to see a popup description.
$ For more information, see "Column properties" on page 1049.
Dialog components ♦ Base table Designates this table as a base table (one that permanently
holds the data until you delete it). You cannot change this setting for
existing tables; you can only set the table type when you create a new
table.
♦ on DB space Lets you select the database space used by the table.
This option is only available if you create a new table as a base
table.
♦ Global temporary table Designates this table as a global temporary
table (one that holds data for a single connection only). You cannot
change this setting for existing tables; you can only set the table type
when you create a new table.
♦ ON COMMIT Delete rows Sets the global temporary table to
delete its rows when a COMMIT is executed.
♦ ON COMMIT Preserve rows Sets the global temporary table to
preserve its rows when a COMMIT is executed.
♦ Comment Provides a place for you to type a comment (text
description) of this object. For example, you could use this area to
describe the object’s purpose in the system.
Tip
The table type and comment are also shown in the table's property sheet.
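The settings above can be combined in a single CREATE statement. Here is a sketch of a global temporary table; the table and column names are invented for illustration:

```sql
-- Rows are private to each connection; PRESERVE ROWS keeps them
-- across COMMIT, while DELETE ROWS would discard them at COMMIT.
CREATE GLOBAL TEMPORARY TABLE session_totals (
    product_id INTEGER NOT NULL,
    total_sold INTEGER,
    PRIMARY KEY ( product_id )
) ON COMMIT PRESERVE ROWS
```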
Creating tables
When a database is first created, the only tables in the database are the
system tables, which hold the database schema. You can then create new
tables to hold your actual data, either with SQL statements in Interactive
SQL or with Sybase Central.
There are two types of tables that you can create:
♦ Base table A table that holds persistent data. The table and its data
continue to exist until you explicitly delete the data or drop the table. It
is called a base table to distinguish it from temporary tables and from
views.
Altering tables
This section describes how to change the structure or column definitions of a
table. For example, you can add columns, change various column attributes,
or delete columns entirely.
In Sybase Central, you can perform these tasks using the buttons on the
Table Editor toolbar. In Interactive SQL, you can perform these tasks with
the ALTER TABLE statement.
If you are working with Sybase Central, you can also manage columns (add
or remove them from the primary key, change their properties, or delete
them) by working with menu commands when you have a column selected in
the Columns folder.
$ For information on altering database object properties, see "Setting
properties for database objects" on page 116.
$ For information on granting and revoking table permissions, see
"Granting permissions on tables" on page 722 and "Revoking user
permissions" on page 729.
Tips
You can also add columns by opening a table’s Columns folder and
double-clicking Add Column.
You can also delete columns by opening a table’s Columns folder, right-
clicking the column, and choosing Delete from the popup menu.
$ For more information, see "Using the Sybase Central Table Editor" on
page 120, and "ALTER TABLE statement" on page 380 of the book
ASA Reference.
Examples The following command adds a column to the skill table to allow space for an
optional description of the skill:
ALTER TABLE skill
ADD skill_description CHAR( 254 )
This statement adds a column called skill_description that holds up to a few
sentences describing the skill.
You can also modify column attributes with the ALTER TABLE statement.
The following statement shortens the skill_description column of the sample
database from a maximum of 254 characters to a maximum of 80:
ALTER TABLE skill
MODIFY skill_description CHAR( 80 )
Any current entries that are longer than 80 characters are trimmed to
conform to the 80-character limit, and a warning appears.
The following statement changes the name of the skill_type column to
classification:
ALTER TABLE skill
RENAME skill_type TO classification
The following statement deletes the classification column.
ALTER TABLE skill
DROP classification
The following statement changes the name of the entire table:
ALTER TABLE skill
RENAME qualification
These examples show how to change the structure of the database. The
ALTER TABLE statement can change just about anything pertaining to a
table—you can use it to add or delete foreign keys, change columns from one
type to another, and so on. In all these cases, once you make the change,
stored procedures, views and any other item referring to this column will no
longer work.
$ For more information, see "ALTER TABLE statement" on page 380 of
the book ASA Reference, and "Ensuring Data Integrity" on page 345.
Deleting tables
This section describes how to delete tables from a database. You can use
either Sybase Central or Interactive SQL to perform this task. In Interactive
SQL deleting a table is also called dropping it.
You cannot delete a table that is being used as an article in a SQL Remote
publication. If you try to do this in Sybase Central, an error appears.
Example The following DROP TABLE command deletes all the records in the skill
table and then removes the definition of the skill table from the database
DROP TABLE skill
Like the CREATE statement, the DROP statement automatically executes a
COMMIT statement before and after dropping the table. This makes
permanent all changes to the database since the last COMMIT or
ROLLBACK. The drop statement also drops all indexes on the table.
$ For more information, see "DROP statement" on page 491 of the book
ASA Reference.
v To create and edit the primary key using the Columns folder:
1 Open the Tables folder and double-click a table.
2 Open the Columns folder for that table and right-click a column.
3 From the popup menu, do one of the following:
♦ Choose Add to Primary Key if the column is not yet part of the
primary key and you want to add it.
♦ Choose Remove From Primary Key if the column is part of the
primary key and you want to remove it.
v To create and edit the primary key using the Table Editor:
1 Open the Tables folder.
2 Right-click a table and choose Edit Columns from the popup menu.
3 In the Table Editor, click the icons in the Key fields (at the far left of the
Table Editor) to add a column to the primary key or remove it from the
key.
$ For more information, see "Managing foreign keys" on page 129.
Example 1 The following statement creates the same skill table as before, except that it
adds a primary key:
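A sketch of such a statement, with the column list inferred from the skill table examples elsewhere in this chapter:

```sql
-- The PRIMARY KEY clause makes skill_id the primary key;
-- primary key columns must be declared NOT NULL.
CREATE TABLE skill (
    skill_id          INTEGER NOT NULL,
    skill_type        CHAR( 20 ) NOT NULL,
    skill_description CHAR( 254 ),
    PRIMARY KEY ( skill_id )
)
```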
Example 2 The following statement adds the columns skill_id and skill_type to the
primary key for the skill table:
ALTER TABLE skill
ADD PRIMARY KEY ( "skill_id", "skill_type" )
If a PRIMARY KEY clause is specified in an ALTER TABLE statement, the
table must not already have a primary key that was created by the CREATE
TABLE statement or another ALTER TABLE statement.
Example 3 The following statement removes all columns from the primary key for the
skill table. Before you delete a primary key, make sure you are aware of the
consequences in your database.
ALTER TABLE skill
DELETE PRIMARY KEY
After you have created foreign keys, you can keep track of them in each
table's Referenced By folder; this folder shows any foreign tables that
reference the currently selected table.
v To show which tables have foreign keys that reference a given table
(Sybase Central):
1 Open the desired table.
2 Open the Referenced By folder.
Tips
When you create a foreign key using the wizard, you can set properties for
the foreign key. To set or change properties after the foreign key is
created, right-click the foreign key and choose Properties from the popup
menu.
You can view the properties of a referencing table by right-clicking the
table and choosing Properties from the popup menu.
Example 1 You can create a table named emp_skill, which holds a description of each
employee’s skill level for each skill in which they are qualified, as follows:
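A sketch of such a statement follows; the column names and sizes are assumptions based on the description:

```sql
-- Each row pairs an employee with a skill, so the primary key
-- combines the two identifying columns. Each identifying column
-- is also a foreign key to its own table.
CREATE TABLE emp_skill (
    emp_id      INTEGER NOT NULL,
    skill_id    INTEGER NOT NULL,
    skill_level INTEGER NOT NULL,
    PRIMARY KEY ( emp_id, skill_id ),
    FOREIGN KEY ( emp_id ) REFERENCES employee ( emp_id ),
    FOREIGN KEY ( skill_id ) REFERENCES skill ( skill_id )
)
```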
Example 2 You can add a foreign key called "foreignkey" to the existing table skill and
reference this foreign key to the primary key in the table contact, as follows:
ALTER TABLE skill
ADD FOREIGN KEY "foreignkey" ("skill_id")
REFERENCES "DBA"."contact" ("id")
This example creates a relationship between the skill_id column of the table
skill (the foreign table) and the id column of the table contact (the primary
table). The "DBA" signifies the owner of the table contact.
Example 3 You can specify properties for the foreign key as you create it. For example,
the following statement creates the same foreign key as in Example 2, but it
defines the foreign key as NOT NULL, along with restrictions for update and
delete operations.
ALTER TABLE skill
ADD NOT NULL FOREIGN KEY "foreignkey" ("skill_id")
REFERENCES "DBA"."contact" ("id")
ON UPDATE RESTRICT
ON DELETE RESTRICT
In Sybase Central, you can also specify properties in the foreign key creation
wizard or on the foreign key’s property sheet.
$ For more information, see "ALTER TABLE statement" on page 380 of
the book ASA Reference, and "Managing foreign keys (Sybase Central)" on
page 129.
Example Show the contents of the table sys.systable in the Results pane.
SELECT *
FROM SYS.SYSTABLE
$ For more information, see "System Tables" on page 961 of the book
ASA Reference.
Working with views
Differences between views and permanent tables There are some differences between views and permanent tables:
♦ You cannot create indexes on views.
♦ You cannot perform UPDATE, INSERT, and DELETE operations on all
views.
♦ You cannot assign integrity constraints and keys to views.
♦ Views refer to the information in base tables, but do not hold copies of
that information. Views are recomputed each time you invoke them.
Benefits of tailoring access Views let you tailor access to data in the database. Tailoring access serves several purposes:
♦ Improved security By allowing access to only the information that is
relevant.
♦ Improved usability By presenting users and application developers
with data in a more easily understood form than in the base tables.
♦ Improved consistency By centralizing in the database the definition
of common queries.
Creating views
When you browse data, a SELECT statement operates on one or more tables
and produces a result set that is also a table. Just like a base table, a result set
from a SELECT query has columns and rows. A view gives a name to a
particular query, and holds the definition in the database system tables.
Suppose you frequently need to list the number of employees in each
department. You can get this list with the following statement:
Example Create a view called DepartmentSize that contains the results of the SELECT
statement given at the beginning of this section:
CREATE VIEW DepartmentSize AS
SELECT dept_ID, count(*)
FROM employee
GROUP BY dept_ID
Since the information in a view is not stored separately in the database,
referring to the view executes the associated SELECT statement to retrieve
the appropriate data.
On one hand, this is good because it means that if someone modifies the
employee table, the information in the DepartmentSize view is automatically
brought up to date. On the other hand, complicated SELECT statements may
increase the amount of time SQL requires to find the correct information
every time you use the view.
$ For more information, see "CREATE VIEW statement" on page 469 of
the book ASA Reference.
Using views
Restrictions on SELECT statements There are some restrictions on the SELECT statements you can use as views. In particular, you cannot use an ORDER BY clause in the SELECT query. A characteristic of relational tables is that there is no significance to the ordering of the rows or columns, and using an ORDER BY clause would impose an order on the rows of the view. You can use the GROUP BY clause, subqueries, and joins in view definitions.
To develop a view, tune the SELECT query by itself until it provides exactly
the results you need in the format you want. Once you have the SELECT
query just right, you can add a phrase in front of the query to create the view.
For example,
CREATE VIEW viewname AS
Updating views UPDATE, INSERT, and DELETE statements are allowed on some views, but not on others, depending on the view's associated SELECT statement.
You cannot update views containing aggregate functions, such as
COUNT(*). Nor can you update views containing a GROUP BY clause in
the SELECT statement, or views containing a UNION operation. In all these
cases, there is no way to translate the UPDATE into an action on the
underlying tables.
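By contrast, a view that selects plain columns from a single table through a simple WHERE clause is typically updatable, because each view row corresponds to exactly one base-table row. A sketch using the sample database (the view name is invented):

```sql
-- The view exposes unmodified columns of employee, so an UPDATE
-- on the view translates directly into an UPDATE on employee.
CREATE VIEW rd_employee AS
    SELECT emp_id, emp_fname, emp_lname
    FROM employee
    WHERE dept_id = 100;

UPDATE rd_employee
SET emp_lname = 'Smith'
WHERE emp_id = 102
```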
Copying views In Sybase Central, you can copy views between databases. To do so, select
the view in the right pane of Sybase Central and drag it to the Views folder
of another connected database. A new view is then created, and the original
view’s code is copied to it.
Note that only the view code is copied to the new view. The other view
properties, such as permissions, are not copied.
When you create a view using the WITH CHECK OPTION, any UPDATE
or INSERT statement on the view is checked to ensure that the new row
matches the view condition. If it does not, the operation causes an error and
is rejected.
♦ Create a view displaying the employees in the sales department
(second attempt) Use WITH CHECK OPTION this time.
CREATE VIEW sales_employee
AS SELECT emp_id, emp_fname, emp_lname, dept_id
FROM employee
WHERE dept_id = 200
WITH CHECK OPTION
The modified sales_employee view rejects an UPDATE statement that would
move a row out of the view, generating the following error message:
Invalid value for column 'dept_id' in table 'employee'
The check option is inherited If a view (say V2) is defined on the sales_employee view, any updates or inserts on V2 that cause the WITH CHECK OPTION criterion on sales_employee to fail are rejected, even if V2 is defined without a check option.
Modifying views
You can modify a view using both Sybase Central and Interactive SQL.
When doing so, you cannot rename an existing view directly. Instead, you
must create a new view with the new name, copy the previous code to it, and
then delete the old view.
In Sybase Central, a Code Editor lets you edit the code of views, procedures,
and functions. In Interactive SQL, you can use the ALTER VIEW statement
to modify a view. The ALTER VIEW statement replaces a view definition
with a new definition, but it maintains the permissions on the view.
Example Rename the column names of the DepartmentSize view (described in the
"Creating views" on page 134 section) so that they have more informative
names.
ALTER VIEW DepartmentSize
(Dept_ID, NumEmployees)
AS
SELECT dept_ID, count(*)
FROM Employee
GROUP BY dept_ID
Deleting views
You can delete a view in both Sybase Central and Interactive SQL.
$ For more information, see "DROP statement" on page 491 of the book
ASA Reference.
Working with indexes
Creating indexes
Indexes are created on a specified table. You cannot create an index on a
view. To create an index, you can use either Sybase Central or Interactive
SQL.
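In Interactive SQL, you use the CREATE INDEX statement, naming the index, the table, and the column or columns to index. The sketch below creates the EmpNames index that is dropped later in this section; the choice of columns is an assumption:

```sql
-- Index employee surnames and first names to speed name lookups.
CREATE INDEX EmpNames
ON employee ( emp_lname, emp_fname )
```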
Validating indexes
You can validate an index to ensure that every row referenced in the index
actually exists in the table. For foreign key indexes, a validation check also
ensures that the corresponding row exists in the primary table, and that their
hash values match. This check complements the validity checking carried out
by the VALIDATE TABLE statement.
Examples Validate an index called EmployeeIndex. If you supply a table name instead
of an index name, the primary key index is validated.
VALIDATE INDEX EmployeeIndex
Validate an index called EmployeeIndex. The -i switch specifies that each
object name given is an index.
dbvalid -i EmployeeIndex
Deleting indexes
If an index is no longer required, you can remove it from the database in
Sybase Central or in Interactive SQL.
Example The following statement removes the index from the database:
DROP INDEX EmpNames
$ For more information, see "DROP statement" on page 491 of the book
ASA Reference.
C H A P T E R 5
Queries: Selecting Data from a Table
About this chapter The SELECT statement retrieves data from the database. You can use it to retrieve a subset of the rows in one or more tables and to retrieve a subset of the columns in one or more tables.
This chapter focuses on the basics of single-table SELECT statements.
Advanced uses of SELECT are described later in this manual.
Contents
Topic Page
Query overview 146
The SELECT clause: specifying columns 149
The FROM clause: specifying tables 157
The WHERE clause: specifying rows 158
Query overview
A query requests data from the database and receives the results. This
process is also known as data retrieval. All SQL queries are expressed using
the SELECT statement.
Entering queries
In this manual, SELECT statements and other SQL statements are displayed
with each clause on a separate row, and with the SQL keywords in upper
case. This is not a requirement. You can enter SQL keywords in any case,
and you can break lines at any point.
Keywords and line breaks For example, the following SELECT statement finds the first and last names of contacts living in California from the Contact table.
SELECT first_name, last_name
FROM Contact
WHERE state = 'CA'
It is equally valid, though not as readable, to enter this statement as follows:
SELECT first_name,
last_name from contact
wHere state
= 'CA'
Case sensitivity of strings and identifiers Identifiers (that is, table names, column names, and so on) are case insensitive in Adaptive Server Anywhere databases.
Strings are case insensitive by default, so that 'CA', 'ca', 'cA', and 'Ca' are all equivalent, but if you create a database as case sensitive, then the case of strings is significant. The sample database is case insensitive.
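For example, in a case-insensitive database the following two queries return the same rows, even though the string constants differ in case:

```sql
SELECT last_name FROM contact WHERE state = 'CA';
SELECT last_name FROM contact WHERE state = 'ca'
```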
Qualifying identifiers You can qualify the names of database identifiers if there is ambiguity about which object is being referred to. For example, the sample database contains several tables with a column called city, so you may have to qualify references to city with the name of the table. In a larger database you may also have to use the name of the owner of the table to identify the table.
SELECT dba.contact.city
FROM contact
WHERE state = 'CA'
The SELECT clause: specifying columns
SELECT *
FROM department
The results look like this:
You get exactly the same results by listing all the column names in the table
in order after the SELECT keyword:
SELECT dept_id, dept_name, dept_head_id
FROM department
Like a column name, "*" can be qualified with a table name, as in the
following query:
SELECT department.*
FROM department
Rearranging the order of columns The order in which you list the column names determines the order in which the columns are displayed. The two following examples show how to specify column order in a display. Both of them find and display the department names and identification numbers from all five of the rows in the department table, but in a different order.
SELECT dept_id, dept_name
FROM department
dept_id dept_name
100 R&D
200 Sales
300 Finance
400 Marketing
500 Shipping
SELECT dept_name, dept_id
FROM department
dept_name dept_id
R&D 100
Sales 200
Finance 300
Marketing 400
Shipping 500
Using spaces and keywords in aliases The "Identifying Number" alias for dept_id is enclosed in double quotes because it is an identifier. You also use double quotes if you wish to use keywords in aliases. For example, the following query is invalid without the quotation marks:
SELECT dept_name AS Department,
dept_id AS "integer"
FROM department
If you wish to ensure compatibility with Adaptive Server Enterprise, you
should use quoted aliases of 30 bytes or less.
Department
The department's name is R&D
The department's name is Sales
The department's name is Finance
The department's name is Marketing
The department's name is Shipping
Suppose the practice is to replenish the stock of a product when there are ten
items left in stock. The following query lists the number of each product that
must be sold before re-ordering:
SELECT name, quantity - 10
AS "Sell before reorder"
FROM product
You can also combine the values in columns. The following query lists the
total value of each product in stock:
SELECT name,
quantity * unit_price AS "Inventory value"
FROM product
Arithmetic operator precedence When there is more than one arithmetic operator in an expression, multiplication, division, and modulo are calculated first, followed by subtraction and addition. When all arithmetic operators in an expression have the same level of precedence, the order of execution is left to right. Expressions within parentheses take precedence over all other operations.
For example, the following SELECT statement calculates the total value of
each product in inventory, and then subtracts five dollars from that value.
SELECT name, quantity * unit_price - 5
FROM product
To avoid misunderstandings, it is recommended that you use parentheses.
The following query has the same meaning and gives the same results as the
previous one, but some may find it easier to understand:
SELECT name, ( quantity * unit_price ) - 5
FROM product
emp_id Name
102 Fran Whitney
105 Matthew Cobb
129 Philip Chin
148 Julie Jordan
... ...
Date and time operations Although you can use operators on date and time columns, this typically involves the use of functions. For information on SQL functions, see "SQL Functions" on page 291 of the book ASA Reference.
SELECT DISTINCT city
FROM contact
NULL values are not distinct The DISTINCT keyword treats NULL values as duplicates of each other. In other words, when DISTINCT is included in a SELECT statement, only one NULL is returned in the results, no matter how many NULL values are encountered.
The WHERE clause: specifying rows
The NOT operator The NOT operator negates an expression. Either of the following two queries
will find all Tee shirts and baseball caps that cost $10 or less. However, note
the difference in position between the negative logical operator (NOT) and
the negative comparison operator (!>).
SELECT id, name, quantity
FROM product
WHERE (name = 'Tee Shirt' OR name = 'BaseBall Cap')
AND NOT unit_price > 10
SELECT id, name, quantity
FROM product
WHERE (name = 'Tee Shirt' OR name = 'BaseBall Cap')
AND unit_price !> 10
v To list all the products with prices between $10 and $15, inclusive:
♦ Enter the following query:
SELECT name, unit_price
FROM product
WHERE unit_price BETWEEN 10 AND 15
name unit_price
Tee Shirt 14.00
Tee Shirt 14.00
Baseball Cap 10.00
Shorts 15.00
You can use NOT BETWEEN to find all the rows that are not inside the
range.
v To list all the products cheaper than $10 or more expensive than
$15:
♦ Enter the following query:
SELECT name, unit_price
FROM product
WHERE unit_price NOT BETWEEN 10 AND 15
name unit_price
Tee Shirt 9.00
Tee Shirt 9.00
Visor 7.00
Visor 7.00
Sweatshirt 24.00
Sweatshirt 24.00
Symbols Meaning
% Matches any string of 0 or more characters
_ Matches any one character
[specifier] The specifier in the brackets may take the following forms:
♦ Range A range is of the form rangespec1-rangespec2, where
rangespec1 indicates the start of a range of characters, the hyphen
indicates a range, and rangespec2 indicates the end of a range of
characters
♦ Set A set can be comprised of any discrete set of values, in any order.
For example, [a2bR].
Note that the range [a-f], and the sets [abcdef] and [fcbdae] return the same
set of values.
[^specifier] The caret symbol (^) preceding a specifier indicates non-inclusion. [^a-f]
means not in the range a-f; [^a2bR] means not a, 2, b, or R.
You can match the column data to constants, variables, or other columns that
contain the wildcard characters shown in the table. When using constants,
you should enclose the match strings and character strings in single quotes.
Examples All the following examples use LIKE with the last_name column in the
Contact table. Queries are of the form:
SELECT last_name
FROM contact
WHERE last_name LIKE match-expression
The first example would be entered as
SELECT last_name
FROM contact
WHERE last_name LIKE 'Mc%'
Wildcards require LIKE Wildcard characters used without LIKE are interpreted as literals rather than as a pattern: they represent exactly their own values. The following query attempts to find any phone numbers that consist of the four characters 415% only. It does not find phone numbers that start with 415.
SELECT phone
FROM Contact
WHERE phone = '415%'
Using LIKE with date and time values You can use LIKE on date and time fields as well as on character data. When you use LIKE with date and time values, the dates are converted to the standard DATETIME format, and then to VARCHAR.
One feature of using LIKE when searching for DATETIME values is that,
since date and time entries may contain a variety of date parts, an equality
test has to be written carefully in order to succeed.
For example, if you insert the value 9:20 and the current date into a column
named arrival_time, the clause:
WHERE arrival_time = '9:20'
fails to find the value, because the entry holds the date as well as the time.
However, the clause below would find the 9:20 value:
WHERE arrival_time LIKE '%9:20%'
Using NOT LIKE
With NOT LIKE, you can use the same wildcard characters that you can use with LIKE. To find all the phone numbers in the Contact table that do not have 415 as the area code, you can use either of these queries:
SELECT phone
FROM Contact
WHERE phone NOT LIKE '415%'

SELECT phone
FROM Contact
WHERE NOT phone LIKE '415%'
v To set the quoted_identifier option off for the current user ID:
♦ Type the following command:
SET OPTION quoted_identifier = 'OFF'
Quotation marks in strings
There are two ways to specify literal quotations within a character entry. The first method is to use two consecutive quotation marks. For example, if you have begun a character entry with a single quotation mark and want to include a single quotation mark as part of the entry, use two single quotation marks:
'I don''t understand.'
With double quotation marks (quoted_identifier OFF):
"He said, ""It is not really confusing."""
The second method, applicable only with quoted_identifier OFF, is to enclose
a quotation in the other kind of quotation mark. In other words, surround an
entry containing double quotation marks with single quotation marks, or vice
versa. Here are some examples:
'George said, "There must be a better way."'
"Isn't there a better way?"
'George asked, "Isn''t there a better way?"'
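The doubled-quote method can be checked from Python with SQLite, which accepts the same single-quote doubling (used here purely as an illustration):

```python
import sqlite3

# Two consecutive single quotes inside a single-quoted SQL string
# produce one literal apostrophe in the value.
con = sqlite3.connect(":memory:")
value = con.execute("SELECT 'I don''t understand.'").fetchone()[0]
print(value)  # I don't understand.
```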
When NULLs are retrieved
When NULLs are retrieved, displays of query results in Interactive SQL show (NULL) in the appropriate position:
SELECT *
FROM department
Properties of NULL
The following list expands on the properties of NULL.
♦ The difference between FALSE and UNKNOWN Although neither
FALSE nor UNKNOWN returns values, there is an important logical
difference between FALSE and UNKNOWN, because the opposite of
false ("not false") is true. For example,
1 = 2
evaluates to false and its opposite,
1 != 2
evaluates to true. But "not unknown" is still unknown. If null values are
included in a comparison, you cannot negate the expression to get the
opposite set of rows or the opposite truth value.
♦ Substituting a value for NULLs Use the ISNULL built-in function to
substitute a particular value for nulls. The substitution is made only for
display purposes; actual column values are not affected. The syntax is:
ISNULL( expression, value )
For example, use the following statement to select all the rows from test,
and display all the null values in column t1 with the value unknown.
SELECT ISNULL(t1, 'unknown')
FROM test
♦ Expressions that evaluate to NULL An expression with an arithmetic
or bitwise operator evaluates to NULL if any of the operands are null.
For example:
1 + column1
evaluates to NULL if column1 is NULL.
♦ Concatenating strings and NULL If you concatenate a string and
NULL, the expression evaluates to the string. For example:
SELECT 'abc' || NULL || 'def'
returns the string abcdef.
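The first two NULL properties can be sketched from Python with SQLite. Note two portability caveats flagged in the comments: SQLite spells the substitution function IFNULL (COALESCE is the portable equivalent), and SQLite's || propagates NULL rather than treating it as an empty string, so the concatenation behavior described above is specific to Adaptive Server Anywhere:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# An arithmetic expression with a NULL operand evaluates to NULL
arith = con.execute("SELECT 1 + NULL").fetchone()[0]

# Substituting a display value for NULL (IFNULL is SQLite's ISNULL)
subst = con.execute("SELECT IFNULL(NULL, 'unknown')").fetchone()[0]

# Concatenation with NULL: NULL in SQLite, unlike the server behavior
# described in the text above.
concat = con.execute("SELECT 'abc' || NULL || 'def'").fetchone()[0]

print(arith, subst, concat)  # None unknown None
```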
Using OR
The OR operator also connects two or more conditions, but it returns results when any of the conditions is true. The following query searches for rows containing variants of Elizabeth in the first_name column.
SELECT *
FROM contact
WHERE first_name = 'Beth'
OR first_name = 'Liz'
Using NOT
The NOT operator negates the expression that follows it. The following query lists all the contacts who do not live in California:
SELECT *
FROM contact
WHERE NOT state = 'CA'
When more than one logical operator is used in a statement, AND operators
are normally evaluated before OR operators. You can change the order of
execution with parentheses. For example:
SELECT *
FROM contact
WHERE ( city = 'Lexington'
OR city = 'Burlington' )
AND state = 'MA'
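The effect of the parentheses can be demonstrated from Python with SQLite (rows invented): without them, AND binds tighter than OR, so the state test applies to only one city.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE contact (city TEXT, state TEXT)")
con.executemany("INSERT INTO contact VALUES (?, ?)",
                [("Lexington", "MA"), ("Lexington", "KY"),
                 ("Burlington", "MA"), ("Burlington", "VT")])

with_parens = con.execute(
    "SELECT city, state FROM contact "
    "WHERE (city = 'Lexington' OR city = 'Burlington') "
    "AND state = 'MA'").fetchall()

# Without parentheses this is: Lexington OR (Burlington AND MA)
without = con.execute(
    "SELECT city, state FROM contact "
    "WHERE city = 'Lexington' "
    "OR city = 'Burlington' AND state = 'MA'").fetchall()

print(with_parens)  # [('Lexington', 'MA'), ('Burlington', 'MA')]
print(without)      # Lexington, KY sneaks in
```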
C H A P T E R 6
Summarizing, Grouping, and Sorting Query Results
About this chapter
Aggregate functions display summaries of the values in specified columns.
You can also use the GROUP BY clause, HAVING clause, and ORDER BY
clause to group and sort the results of queries using aggregate functions, and
the UNION operator to combine the results of queries.
This chapter describes how to group and sort query results.
Contents
Topic Page
Summarizing query results using aggregate functions 170
The GROUP BY clause: organizing query results into groups 174
Understanding GROUP BY 175
The HAVING clause: selecting groups of data 179
The ORDER BY clause: sorting query results 182
The UNION operation: combining queries 185
Standards and compatibility 187
Summarizing query results using aggregate functions
To use an aggregate function, give the function name followed by an expression on whose values it will operate. The expression, such as a column name, is the function's argument and must be specified inside parentheses.
The following aggregate functions are available:
♦ avg (expression ) The mean of the supplied expression over the
returned rows.
♦ count ( expression ) The number of rows in the supplied group where
the expression is not NULL.
♦ count(*) The number of rows in each group.
♦ list (string-expr) A string containing a comma-separated list
composed of all the values for string-expr in each group of rows.
♦ max (expression ) The maximum value of the expression, over the
returned rows.
♦ min (expression ) The minimum value of the expression, over the
returned rows.
♦ sum(expression ) The sum of the expression, over the returned rows.
You can use the optional keyword DISTINCT with AVG, SUM, LIST, and
COUNT to eliminate duplicate values before the aggregate function is
applied.
The expression to which the syntax statement refers is usually a column
name. It can also be a more general expression.
For example, with this statement you can find what the average price of all products would be if one dollar were added to each price:
SELECT AVG(unit_price + 1)
FROM product
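The aggregates above can be sketched from Python with SQLite (table contents invented; SQLite has no LIST, its nearest equivalent being GROUP_CONCAT):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE product (name TEXT, unit_price REAL)")
con.executemany("INSERT INTO product VALUES (?, ?)",
                [("Tee Shirt", 9.0), ("Tee Shirt", 14.0),
                 ("Visor", 7.0), ("Sweatshirt", 24.0)])

# COUNT(*), COUNT(DISTINCT ...), AVG, MIN, MAX, SUM in one pass
row = con.execute(
    "SELECT COUNT(*), COUNT(DISTINCT name), AVG(unit_price), "
    "MIN(unit_price), MAX(unit_price), SUM(unit_price) "
    "FROM product").fetchone()
print(row)  # (4, 3, 13.5, 7.0, 24.0, 54.0)

# An aggregate over a general expression, not just a bare column
avg_plus_one = con.execute(
    "SELECT AVG(unit_price + 1) FROM product").fetchone()[0]
print(avg_plus_one)  # 14.5
```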
Chapter 6 Summarizing, Grouping, and Sorting Query Results
If no rows satisfy the WHERE clause, the aggregate function returns NULL:
SELECT AVG(unit_price)
FROM product
WHERE unit_price > 50

AVG(unit_price)
(NULL)
The GROUP BY clause: organizing query results into groups
For example, the following query lists the average price of each kind of product:
SELECT name, AVG(unit_price) AS Price
FROM product
GROUP BY name

name Price
Baseball Cap 9.500
Shorts 15.000
Sweatshirt 24.000
Tee Shirt 12.333
Visor 7.000

Without the GROUP BY clause, the same aggregate produces a single row containing the average over all products:

AVG(unit_price)
13.300000
Understanding GROUP BY
Understanding which queries are valid and which are not can be difficult
when the query involves a GROUP BY clause. This section describes a way
to think about queries with GROUP BY so that you may understand the
results and the validity of queries better.
1 Apply the WHERE clause Rows of the table that do not meet the conditions in the WHERE clause are removed, producing an intermediate result.
2 Apply the GROUP BY clause The rows of the intermediate result are partitioned into groups, one group for each distinct set of values in the GROUP BY expressions, producing a second intermediate result.
3 Apply the HAVING clause Any rows from this second intermediate
result that do not meet the criteria of the HAVING clause are removed at
this point.
4 Project out the results to display This action takes from step 3 only
those columns that need to be displayed in the result set of the query –
that is, it takes only those columns corresponding to the expressions
from the select-list.
Only the rows with id values of more than 400 are included in the groups that
are used to produce the query results.
An example
The following query illustrates the use of WHERE, GROUP BY, and
HAVING clauses in one query:
SELECT name, SUM(quantity)
FROM product
WHERE name LIKE '%shirt%'
GROUP BY name
HAVING SUM(quantity) > 100
name SUM(quantity)
Tee Shirt 157
In this example:
1 The WHERE clause includes only rows that have a name including the
word shirt (Tee Shirt, Sweatshirt).
2 The GROUP BY clause collects the rows with a common name.
3 The SUM aggregate calculates the total quantity of products available
for each group.
4 The HAVING clause excludes from the final results the groups whose
inventory totals do not exceed 100.
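The four-step pipeline can be sketched from Python with SQLite (quantities invented, chosen so the Tee Shirt group sums to 157 as in the result above): WHERE filters rows first, GROUP BY forms groups, HAVING filters groups, and the select list is projected last.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE product (name TEXT, quantity INTEGER)")
con.executemany("INSERT INTO product VALUES (?, ?)",
                [("Tee Shirt", 80), ("Tee Shirt", 77),
                 ("Sweatshirt", 39), ("Sweatshirt", 32),
                 ("Visor", 200)])

rows = con.execute(
    "SELECT name, SUM(quantity) FROM product "
    "WHERE name LIKE '%shirt%' "      # Visor removed here (step 1)
    "GROUP BY name "                  # groups formed (step 2)
    "HAVING SUM(quantity) > 100"      # Sweatshirt group removed (step 3)
).fetchall()
print(rows)  # [('Tee Shirt', 157)]
```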
Using HAVING with aggregate functions
This statement is an example of simple use of the HAVING clause with an aggregate function.
v To list those products available in more than one size or color:
♦ You need a query to group the rows in the product table by name, but
eliminate the groups that include only one distinct product:
SELECT name
FROM product
GROUP BY name
HAVING COUNT(*) > 1
name
Baseball Cap
Sweatshirt
Tee Shirt
Visor
Using HAVING without aggregate functions
The HAVING clause can also be used without aggregates.
v To list all product names that start with the letter B:
♦ The following query groups the products, and then restricts the result set
to only those groups for which the name starts with B.
SELECT name
FROM product
GROUP BY name
HAVING name LIKE 'B%'
The HAVING clause: selecting groups of data
name
Baseball Cap
More than one condition in HAVING
More than one condition can be included in the HAVING clause. They are combined with the AND, OR, or NOT operators, as the following example shows.
v To list those products available in more than one size or color, for
which one version costs more than $10:
♦ You need a query to group the rows in the product table by name, but
eliminate the groups that include only one distinct product, and
eliminate those groups for which the maximum unit price is under $10:
SELECT name
FROM product
GROUP BY name
HAVING COUNT(*) > 1
AND MAX(unit_price) > 10
name
Sweatshirt
Tee Shirt
SQL extension
Some of the previous HAVING examples adhere to the SQL/92 standard, which specifies that expressions in a HAVING clause must have a single value, and must be in the select list or GROUP BY clause. However, Adaptive Server Anywhere and Adaptive Server Enterprise support extensions to HAVING that allow aggregate functions not in the select list and not in the GROUP BY clause, as in the previous example.
Outer references in aggregate functions
A column reference in an aggregate function that refers to a column of an enclosing query is called an outer reference. When an aggregate function contains an outer reference, the aggregate function must appear in a subquery of a HAVING clause. The following example illustrates this case.
v To list those products shipped during the year 1993, for which the
maximum shipped quantity is greater than the available quantity of
that product:
♦ Enter the following query:
id name
301 Tee Shirt
301 Tee Shirt
401 Baseball Cap
500 Visor
501 Visor
600 Sweatshirt
601 Sweatshirt
The ORDER BY clause: sorting query results
id name
400 Baseball Cap
401 Baseball Cap
700 Shorts
600 Sweatshirt
601 Sweatshirt
300 Tee Shirt
301 Tee Shirt
302 Tee Shirt
500 Visor
501 Visor
Sorting by more than one column
If you name more than one column in the ORDER BY clause, the sorts are nested.
The following statement sorts the shirts in the product table first by name in
ascending order, then by quantity (descending) within each name:
SELECT id, name, quantity
FROM product
WHERE name LIKE '%shirt%'
ORDER BY name, quantity DESC

id name quantity
600 Sweatshirt …
601 Sweatshirt …
302 Tee Shirt …
301 Tee Shirt …
300 Tee Shirt …
Using the column position
You can use the position number of a column in a select list instead of the column name. Column names and select-list numbers can be mixed. Both of the following statements produce the same results as the preceding one.
SELECT id, name, quantity
FROM product
WHERE name LIKE '%shirt%'
ORDER BY 2, 3 DESC

SELECT id, name, quantity
FROM product
WHERE name LIKE '%shirt%'
ORDER BY 2, quantity DESC
Most versions of SQL require that ORDER BY items appear in the select
list, but Adaptive Server Anywhere has no such restriction. The following
query orders the results by quantity, although that column does not appear in
the select list.
SELECT id, name
FROM product
WHERE name LIKE '%shirt%'
ORDER BY 2, quantity DESC
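The equivalence of ordinal and named ORDER BY items can be checked from Python with SQLite (ids, names, and quantities invented): ORDER BY 2, 3 DESC means sort by the second select-list item, then by the third descending.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE product (id INTEGER, name TEXT, quantity INTEGER)")
con.executemany("INSERT INTO product VALUES (?, ?, ?)",
                [(300, "Tee Shirt", 28), (302, "Tee Shirt", 75),
                 (600, "Sweatshirt", 39), (601, "Sweatshirt", 32)])

by_name = con.execute(
    "SELECT id, name, quantity FROM product "
    "ORDER BY name, quantity DESC").fetchall()
by_ordinal = con.execute(
    "SELECT id, name, quantity FROM product "
    "ORDER BY 2, 3 DESC").fetchall()

print(by_name == by_ordinal)  # True
print(by_name[0])             # (600, 'Sweatshirt', 39)
```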
ORDER BY and NULL
With ORDER BY, NULL comes before all other values, whether the sort order is ascending or descending.
ORDER BY and case sensitivity
The effects of an ORDER BY clause on mixed-case data depend on the database collation and case sensitivity specified when the database is created.
The following query sorts the groups by their average price:
SELECT name, AVG(unit_price)
FROM product
GROUP BY name
ORDER BY AVG(unit_price)

name AVG(unit_price)
Visor 7.000
Baseball Cap 9.500
Tee Shirt 12.333
Shorts 15.000
Sweatshirt 24.000
The UNION operation: combining queries
Standards and compatibility
♦ GROUP BY and ALL Adaptive Server Anywhere does not support the
use of ALL in the GROUP BY clause.
♦ HAVING with no GROUP BY Adaptive Server Anywhere does not
support the use of HAVING with no GROUP BY clause unless all the
expressions in the select and having clauses are aggregate functions. For
example, the following query is supported in Adaptive Server
Anywhere:
SELECT ANY(unit_price)
FROM product
HAVING COUNT(*) > 8;
♦ HAVING conditions Adaptive Server Enterprise supports extensions
to HAVING that allow non-aggregate functions not in the select list and
not in the GROUP BY clause. Only aggregate functions of this type are
allowed in Adaptive Server Anywhere.
♦ DISTINCT with ORDER BY or GROUP BY Adaptive Server
Enterprise permits the use of columns in the ORDER BY or GROUP
BY clause that do not appear in the select list, even in SELECT
DISTINCT queries. This can lead to repeated values in the SELECT
DISTINCT result set. Adaptive Server Anywhere does not support this
behavior.
♦ Column names in UNIONS Adaptive Server Enterprise permits the
use of columns in the ORDER BY clause in unions of queries. In
Adaptive Server Anywhere, the ORDER BY clause must use an integer
to mark the column by which the results are being ordered.
C H A P T E R 7
Joins: Retrieving Data from Several Tables
About this chapter
When you create a database, you normalize the data by placing information
specific to different objects in different tables, rather than in one large table
with many redundant entries.
A join operation recreates a larger table using the information from two or
more tables (or views). Using different joins, you can construct a variety of
these virtual tables, each suited to a particular task.
Before you start
This chapter assumes some knowledge of queries and the syntax of the select statement. Information about queries appears in "Queries: Selecting Data from a Table" on page 145.
Contents
Topic Page
How joins work 190
How joins are structured 192
Key joins 194
Natural joins 196
Joins using comparisons 197
Inner, left-outer, and right-outer joins 199
Self-joins and correlation names 203
Cross joins 205
How joins are processed 208
Joining more than two tables 210
Joins involving derived tables 213
Transact-SQL outer joins 214
How joins work
Chapter 7 Joins: Retrieving Data from Several Tables
How joins are structured
Key joins
The simplest way to join tables is to connect them using the foreign key
relationships built into the database. This method is particularly economical
in syntax and especially efficient.
For example, the following query answers the question, "Which orders has Beth Reiser placed?"
SELECT customer.fname, customer.lname,
sales_order.id, sales_order.order_date
FROM customer KEY JOIN sales_order
WHERE customer.fname = 'Beth'
AND customer.lname = 'Reiser'
Use the key join wherever possible. A key join is valid if and only if exactly
one foreign key is identified between the two tables. Otherwise, an error
indicating the ambiguity results. Some constraints on these joins mean that
they will not always be an available option.
♦ A foreign-key relationship must exist in the database. You cannot use a
key join to join two tables that are not related through a foreign key.
♦ Only one foreign key relationship can exist between the two tables. If
more than one such relationship exists, Adaptive Server Anywhere
cannot decide which relationship to use and generates an error indicating
the ambiguity. You cannot specify the suitable foreign key in your
statement since the syntax of the SQL language does not provide a
means to do so.
♦ A suitable foreign key relationship must exist. You may need to create a
join using particular columns. A foreign-key relationship between the
two tables may not suit your purpose.
Key joins are the default
Key join is the default join type in Adaptive Server Anywhere. The server performs a key join if you do not specify the type of join explicitly, using a keyword such as KEY or NATURAL, or by including an ON phrase.
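KEY JOIN is Adaptive Server Anywhere syntax; under the hood it is an inner join over the single foreign-key relationship. The following sketch (Python with SQLite, rows invented) spells out the explicit ON join that a key join abbreviates:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer ("
            "id INTEGER PRIMARY KEY, fname TEXT, lname TEXT)")
con.execute("CREATE TABLE sales_order ("
            "id INTEGER PRIMARY KEY, "
            "cust_id INTEGER REFERENCES customer(id), order_date TEXT)")
con.execute("INSERT INTO customer VALUES (101, 'Beth', 'Reiser')")
con.execute("INSERT INTO sales_order VALUES (2001, 101, '1993-03-04')")

# The foreign-key relationship, written out as an explicit ON clause
rows = con.execute(
    "SELECT customer.fname, customer.lname, "
    "sales_order.id, sales_order.order_date "
    "FROM customer JOIN sales_order "
    "ON sales_order.cust_id = customer.id "
    "WHERE customer.fname = 'Beth' AND customer.lname = 'Reiser'"
).fetchall()
print(rows)  # [('Beth', 'Reiser', 2001, '1993-03-04')]
```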
Natural joins
A natural join matches the rows from two tables by comparing the values
from columns, one in each table, that have the same name. It restricts the
results by comparing the values of columns in the two tables with the same
column name. An error results if there is no common column name.
For example, you can join the employee and department tables using a
natural join because they have only one column name in common, namely
the dept_id column.
SELECT emp_fname, emp_lname, dept_name
FROM employee NATURAL JOIN department
ORDER BY dept_name, emp_lname, emp_fname
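SQLite also supports NATURAL JOIN, so the idea can be sketched from Python (rows invented; only dept_id is shared between the two tables). Note that a misspelled or accidentally shared column name silently changes the join condition, which is one reason to prefer an explicit ON clause:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (emp_fname TEXT, dept_id INTEGER)")
con.execute("CREATE TABLE department (dept_id INTEGER, dept_name TEXT)")
con.executemany("INSERT INTO employee VALUES (?, ?)",
                [("Fran", 100), ("Matthew", 200)])
con.executemany("INSERT INTO department VALUES (?, ?)",
                [(100, "R & D"), (200, "Sales")])

# Matches rows on the single common column name, dept_id
rows = con.execute(
    "SELECT emp_fname, dept_name "
    "FROM employee NATURAL JOIN department "
    "ORDER BY dept_name").fetchall()
print(rows)  # [('Fran', 'R & D'), ('Matthew', 'Sales')]
```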
Joins using comparisons
Because inner joins are the default, you obtain the same result using the
following clause.
FROM customer JOIN sales_order
By contrast, an outer join contains rows whether or not a row exists in the
opposite table to satisfy the join condition. Use the keywords LEFT or
RIGHT to identify the table that is to appear in its entirety.
♦ A LEFT OUTER JOIN contains every row in the left-hand table.
♦ A RIGHT OUTER JOIN contains every row in the right-hand table.
For example, the outer join
SELECT fname, lname, order_date
FROM customer
KEY LEFT OUTER JOIN sales_order
ORDER BY order_date
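The preserved-table behavior can be sketched from Python with SQLite (rows invented; the ASA KEY LEFT OUTER JOIN above implies the ON condition written out here). Every customer row survives, with NULL supplied for the missing order columns:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (id INTEGER, fname TEXT)")
con.execute("CREATE TABLE sales_order (id INTEGER, cust_id INTEGER)")
con.executemany("INSERT INTO customer VALUES (?, ?)",
                [(101, "Ann"), (102, "Beth")])
con.execute("INSERT INTO sales_order VALUES (2001, 102)")

# Ann has no orders, so her order id comes back as NULL (Python None)
rows = con.execute(
    "SELECT c.fname, o.id "
    "FROM customer c LEFT OUTER JOIN sales_order o "
    "ON o.cust_id = c.id "
    "ORDER BY c.id").fetchall()
print(rows)  # [('Ann', None), ('Beth', 2001)]
```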
Inner, left-outer, and right-outer joins
The keywords INNER, LEFT OUTER, and RIGHT OUTER may appear as
modifiers in key joins, natural joins, and joins that use a comparison. These
modifiers do not apply to cross joins.
Self-joins and correlation names
Using correlation names
Choose short, concise correlation names to make your statements easier to read. In many cases, names only one or two characters in length suffice. While you must use correlation names for a self-join to distinguish multiple instances of a table, they can make many other statements more readable too.
For example, the statement
SELECT customer.fname, customer.lname,
sales_order.id, sales_order.order_date
FROM customer KEY JOIN sales_order
WHERE customer.fname = 'Beth'
AND customer.lname = 'Reiser'
becomes more compact if you use the correlation name c for customer and
so for sales_order:
SELECT c.fname, c.lname, so.id, so.order_date
FROM customer AS c KEY JOIN sales_order AS so
WHERE c.fname = 'Beth'
AND c.lname = 'Reiser'
For brevity, you can even eliminate the keyword AS. It is redundant because
the syntax of the SQL language identifies the correlation names: they are
separated from the corresponding table name by only a space, not a comma.
SELECT c.fname, c.lname, so.id, so.order_date
FROM customer c KEY JOIN sales_order so
WHERE c.fname = 'Beth'
AND c.lname = 'Reiser'
Cross joins
As for other types of joins, each row in a cross join is a combination of one
row from the first table and one row from the second table. Unlike other
joins, a cross join contains no restrictions. All possible combinations of rows
are present.
Each row of the first table appears exactly once with each row of the second
table. Hence, the number of rows in the join is the product of the number of
rows in the individual tables.
Inner and outer modifiers do not apply to cross joins
Except in the presence of additional restrictions in the WHERE clause, all rows of both tables always appear in the result. Thus, the keywords INNER, LEFT OUTER, and RIGHT OUTER are not applicable to cross joins.
The query
SELECT *
FROM table1 CROSS JOIN table2
has a result set as follows:
♦ As long as table1 is not the same table as table2:
♦ A row in the result set includes all columns in table1 and all columns in table2.
♦ There is one row in the result set for each combination of a row in table1 and a row in table2. If table1 has n rows and table2 has m rows, the query returns n x m rows.
♦ If table1 is the same table as table2, and neither is given a correlation
name, the result set is simply the rows of table1.
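The n x m row count can be verified from Python with SQLite (tiny invented tables):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t1 (a INTEGER)")
con.execute("CREATE TABLE t2 (b INTEGER)")
con.executemany("INSERT INTO t1 VALUES (?)", [(1,), (2,), (3,)])
con.executemany("INSERT INTO t2 VALUES (?)", [(10,), (20,)])

# A cross join with no restrictions: every combination of rows
n = con.execute("SELECT COUNT(*) FROM t1 CROSS JOIN t2").fetchone()[0]
print(n)  # 3 rows x 2 rows = 6
```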
Since the employee table has 75 rows, this join contains 75 x 75 = 5625 rows, including rows that list each employee with themselves.
If the order of the names is not important, you can produce a list of the
(75 x 74)/2 = 2775 unique pairs.
SELECT a.emp_fname, a.emp_lname,
b.emp_fname, b.emp_lname
FROM employee AS a CROSS JOIN employee AS b
WHERE a.emp_id < b.emp_id
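The a.emp_id < b.emp_id trick can be sketched from Python with SQLite on an invented four-row table, giving (4 x 3)/2 = 6 unique pairs:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE employee (emp_id INTEGER, emp_fname TEXT)")
con.executemany("INSERT INTO employee VALUES (?, ?)",
                [(1, "Fran"), (2, "Matthew"),
                 (3, "Philip"), (4, "Julie")])

# The < condition keeps each pair once and drops self-pairings
pairs = con.execute(
    "SELECT a.emp_fname, b.emp_fname "
    "FROM employee AS a CROSS JOIN employee AS b "
    "WHERE a.emp_id < b.emp_id").fetchall()
print(len(pairs))  # (4 * 3) / 2 = 6
```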
How joins are processed
Tips
Adaptive Server Anywhere accepts a wide range of syntax. This
flexibility means that most queries result in an answer, but sometimes not
the one you intended. The following precautions may help you avoid this
peril.
1 Always use correlation names.
2 Try eliminating a WHERE clause when testing a new statement.
3 Avoid mixing inner joins with left-outer or right-outer joins.
4 Examine the plan for your query—does it include all the tables?
Performance considerations
Generally, Adaptive Server Anywhere prefers to process joins by selecting
information in one table, then performing an indexed look-up to get the rows
it needs from another. Adaptive Server Anywhere carefully optimizes each of your statements before executing it. As long as your statement correctly identifies the information you want, it usually doesn’t matter what syntax you use.
In particular, Adaptive Server Anywhere is free to reconstruct your statement
to any form that is semantically equivalent. It almost always does so, to help
compute your result efficiently. You can determine the result of a statement
using the above methods, but Adaptive Server Anywhere usually obtains the result by another means.
Adaptive Server Anywhere uses indexes whenever doing so improves performance. Columns that are part of a primary or secondary key are indexed automatically. Other columns are not. Creating
additional indexes on columns involved in a join, either as part of a join
condition or in a where clause, can improve performance dramatically.
$ For further performance tips, see "Monitoring and Improving
Performance" on page 777.
Joining more than two tables
When you want to join a number of tables sequentially, the above syntax
makes a lot of sense. However, sometimes you need to join a single table to
several others that surround it.
Star joins
Some joins must join a single table to several others around it. This type of
join is called a star join.
As an example, create a list of the names of the customers who have placed orders with Rollin Overbey.
SELECT c.fname, c.lname, o.order_date
FROM sales_order AS o KEY JOIN customer AS c,
sales_order AS o KEY JOIN employee AS e
WHERE e.emp_fname = 'Rollin' AND e.emp_lname = 'Overbey'
ORDER BY o.order_date
Notice that one of the tables in the FROM clause, employee, does not
contribute any columns to the results. Nor do any of the columns that are
joined—such as customer.id or employee.id—appear in the results.
Nonetheless, this join is possible only using the employee table in the FROM
clause.
The following statement uses a star join around the sales_order table. The
result is a list showing all the customers and the total quantity of each type of
product they have ordered. Some customers have not placed orders, so the
other values for these customers are NULL. In addition, it shows the name of
the manager of the sales person through whom they placed the orders.
SELECT c.fname, p.name, SUM(i.quantity), m.emp_fname
FROM sales_order o
KEY LEFT OUTER JOIN sales_order_items i
KEY LEFT OUTER JOIN product p, customer c
KEY LEFT OUTER JOIN sales_order o,
sales_order o
KEY LEFT OUTER JOIN employee e
LEFT OUTER JOIN employee m
ON e.manager_id = m.emp_id
WHERE c.state = 'CA'
GROUP BY c.fname, p.name, m.emp_fname
ORDER BY SUM(i.quantity) DESC, c.fname
Note the following details of this statement:
♦ The join centers on the sales_order table.
♦ The keyword AS is optional and has been omitted.
♦ All joins must be outer joins to keep in the result set the customers who
haven’t placed any orders.
♦ The condition e.manager_id = m.emp_id must be placed in the ON clause instead of the WHERE clause. The result of this statement would be an inner join if this condition were moved into the WHERE clause.
♦ The query is syntactically correct in Adaptive Server Anywhere only if
the EXTENDED_JOIN_SYNTAX option is ON.
$ For more information about the EXTENDED_JOIN_SYNTAX option,
see "Database Options" on page 143 of the book ASA Reference.
The statement produces the results partially shown in the table below.
Transact-SQL outer joins
Preserved and null-supplying tables
For an outer join, a table is either preserved or null-supplying. If the join operator is *=, the second table is the null-supplying table; if the join operator is =*, the first table is the null-supplying table.
In addition to the outer-join condition itself, you can compare a column from the preserved table to a constant. For example, you can use the following statement to find information about customers in California.
SELECT fname, lname, order_date
FROM customer, sales_order
WHERE customer.state = 'CA'
AND customer.id *= sales_order.cust_id
ORDER BY order_date
However, the null-supplying table in a Transact-SQL outer join cannot also
participate in another regular or outer join.
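The *= operator is Transact-SQL, not ANSI; the ANSI equivalent is a LEFT OUTER JOIN with the join condition in the ON clause. The sketch below (Python with SQLite, rows invented) also shows why placement matters: a test on the null-supplying table in the WHERE clause discards the NULL-extended rows and quietly turns the outer join into an inner join.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customer (id INTEGER, state TEXT)")
con.execute("CREATE TABLE sales_order (id INTEGER, cust_id INTEGER)")
con.executemany("INSERT INTO customer VALUES (?, ?)",
                [(101, "CA"), (102, "CA")])
con.execute("INSERT INTO sales_order VALUES (2001, 101)")

# Condition on the preserved table (customer) in WHERE: still an outer join
outer = con.execute(
    "SELECT c.id, o.id FROM customer c "
    "LEFT OUTER JOIN sales_order o ON o.cust_id = c.id "
    "WHERE c.state = 'CA' ORDER BY c.id").fetchall()

# Extra condition on the null-supplying table (sales_order) in WHERE:
# NULL = 101 is unknown, so the NULL-extended row is dropped.
accidental_inner = con.execute(
    "SELECT c.id, o.id FROM customer c "
    "LEFT OUTER JOIN sales_order o ON o.cust_id = c.id "
    "WHERE c.state = 'CA' AND o.cust_id = 101 "
    "ORDER BY c.id").fetchall()

print(outer)             # [(101, 2001), (102, None)]
print(accidental_inner)  # [(101, 2001)]
```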
Bit columns
Since bit columns do not permit NULL values, a value of 0 appears in an outer join when the bit column is in the null-supplying table and that table supplies NULLs for a row.
C H A P T E R 8
Using Subqueries
About this chapter
When you create a query, you use WHERE and HAVING clauses to restrict
the rows the query displays.
Sometimes, the rows you select depend on information stored in more than
one table. A subquery in the WHERE or HAVING clause allows you to
select rows from one table according to specifications obtained from another
table. Additional ways to do this can be found in "Joins: Retrieving Data from Several Tables" on page 189.
Before you start
This chapter assumes some knowledge of queries and the syntax of the select statement. Information about queries appears in "Queries: Selecting Data from a Table" on page 145.
Contents
Topic Page
What is a subquery? 218
Using Subqueries in the WHERE clause 219
Subqueries in the HAVING clause 220
Subquery comparison test 222
Quantified comparison tests with ANY and ALL 224
Testing set membership with IN conditions 227
Existence test 229
Outer references 231
Subqueries and joins 232
Nested subqueries 235
How subqueries work 237
What is a subquery?
A relational database stores information about different types of objects in
different tables. For example, you should store information particular to
products in one table, and information that pertains to sales orders in another.
The product table contains the information about the various products. The
sales order items table contains information about customers’ orders.
In general, only the simplest questions can be answered using only one table.
For example, if the company reorders products when there are fewer than 50
of them in stock, then it is possible to answer the question "Which products
are nearly out of stock?" with this query:
SELECT id, name, description, quantity
FROM product
WHERE quantity < 50
However, if "nearly out of stock" depends on how many items of each type
the typical customer orders, the number "50" will have to be replaced by a
value obtained from the sales_order_items table.
Structure of the subquery
A subquery is structured like a regular query, and appears in the main query’s WHERE or HAVING clause. In the above example, for instance, you can
use a subquery to select the average number of items that a customer orders,
and then use that figure in the main query to find products that are nearly out
of stock. The following query finds the names and descriptions of the
products which number less than double the average number of items of each
type that a customer orders.
SELECT name, description
FROM product
WHERE quantity < 2 * (
SELECT avg(quantity)
FROM sales_order_items
)
SQL subqueries always appear in the WHERE or HAVING clauses of the
main query. In the WHERE clause, they help select the rows from the tables
listed in the FROM clause that appear in the query results. In the HAVING
clause, they help select the row groups, as specified by the main query’s
GROUP BY clause, that appear in the query results.
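The scalar-subquery pattern above can be sketched from Python with SQLite (quantities invented, chosen so the average order quantity is 20): the subquery is evaluated once, and its single value feeds the outer WHERE clause.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE product (name TEXT, quantity INTEGER)")
con.execute("CREATE TABLE sales_order_items (quantity INTEGER)")
con.executemany("INSERT INTO product VALUES (?, ?)",
                [("Tee Shirt", 28), ("Visor", 200)])
con.executemany("INSERT INTO sales_order_items VALUES (?)",
                [(10,), (20,), (30,)])   # AVG(quantity) = 20

# 2 * 20 = 40, so only the product with 28 in stock qualifies
low = [r[0] for r in con.execute(
    "SELECT name FROM product "
    "WHERE quantity < 2 * "
    "(SELECT AVG(quantity) FROM sales_order_items)")]
print(low)  # ['Tee Shirt']
```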
Chapter 8 Using Subqueries
Subqueries in the HAVING clause
name avg(quantity)
Baseball Cap 62.000000
Shorts 80.000000
Tee Shirt 52.333333
prod_id line_id
300 1
401 2
500 1
501 2
600 1
… …
In this example, the subquery must produce the in-stock quantity of the
product corresponding to the row group being tested by the HAVING clause.
The subquery selects records for that particular product, using the outer
reference sales_order_items.prod_id.
A subquery with a comparison returns a single value
This query uses the comparison ">", suggesting that the subquery must return exactly one value. In this case, it does. Since the id field of the product table is a primary key, there is only one record in the product table corresponding to any particular product id.
Subquery tests
The chapter "Queries: Selecting Data from a Table" on page 145 describes
simple search conditions you can use in the HAVING clause. Since a
subquery is just an expression that appears in the WHERE or HAVING
clauses, the search conditions on subqueries may look familiar.
They include:
♦ Subquery comparison test Compares the value of an expression to a
single value produced by the subquery for each record in the table(s) in
the main query.
♦ Quantified comparison test Compares the value of an expression to
each of the set of values produced by a subquery.
♦ Subquery set membership test Checks if the value of an expression
matches one of the set of values produced by a subquery.
♦ Existence test Checks if the subquery produces any rows.
Subquery comparison test
Quantified comparison tests with ANY and ALL
id cust_id
2006 105
2007 106
2008 107
2009 108
… …
In executing this query, the main query tests the order date of each order against the shipping dates of every product of order #2005. If an order date is greater than the shipping date for one shipment of order #2005, then that id and customer id from the sales_order table are part of the result set.
The ANY test is thus analogous to the OR operator: the above query can be
read, "Was this sales order placed after the first product of the order #2005
was shipped, or after the second product of order #2005 was shipped, or…"
Understanding the ANY operator
The ANY operator can be a bit confusing. It is tempting to read the query as "Return those orders placed after any products of order #2005 were shipped". But this means the query will return the order ID’s and customer ID’s for the orders placed after all products of order #2005 were shipped – which is not what the query does!
Instead, try reading the query like this: "Return the order and customer ID's
for those orders placed after at least one product of order #2005 was
shipped." Using the keyword SOME may provide a more intuitive way to
phrase the query. The following query is equivalent to the previous query.
SELECT id, cust_id
FROM sales_order
WHERE order_date > SOME (
SELECT ship_date
FROM sales_order_items
WHERE id=2005)
The keyword SOME is equivalent to the keyword ANY.
Notes about the ANY operator
There are two additional important characteristics of the ANY test:
♦ Empty subquery result set If the subquery produces an empty result
set, the ANY test returns FALSE. This makes sense, since if there are no
results, then it is not true that at least one result satisfies the comparison
test.
♦ NULL values in subquery result set Assume that there is at least one
NULL value in the subquery result set. If the comparison test is FALSE
for all non-NULL data values in the result set, the ANY search returns
FALSE. This is because in this situation, you cannot conclusively state
whether there is a value for the subquery for which the comparison test
holds. There may or may not be a value, depending on the "correct"
values for the NULL data in the result set.
id cust_id
2002 102
2003 103
2004 104
2005 101
… …
In executing this query, the main query tests the order dates for each order
against the shipping dates of every product of order #2001. If an order date is
greater than the shipping date for every shipment of order #2001, then the id
and customer id from the sales_order table are part of the result set. The
ALL test is thus analogous to the AND operator: the above query can be
read, "Was this sales order placed before the first product of order #2001 was
shipped, and before the second product of order #2001 was shipped, and…"
Notes about the ALL operator
There are three additional important characteristics of the ALL test:
♦ Empty subquery result set If the subquery produces an empty result
set, the ALL test returns TRUE. This makes sense, since if there are no
results, then it is true that the comparison test holds for every value in
the result set.
♦ NULL values in subquery result set Assume that there is at least one
NULL value in the subquery result set. If the comparison test is FALSE
for all non-NULL data values in the result set, the ALL search returns
FALSE. In this situation you cannot conclusively state whether the
comparison test holds for every value in the subquery result set; it may
or may not, depending on the "correct" values for the NULL data.
♦ Negating the ALL test The following expressions are not equivalent.
NOT a = ALL (subquery)
a <> ALL (subquery)
$ This is explained in detail in "Quantified comparison test" on
page 239.
emp_fname emp_lname
Mary Anne Shea
Jose Martinez
Testing set membership with IN conditions
Negation of the set membership test
You can also use the subquery set membership test to extract those rows whose column values are not equal to any of those produced by a subquery.
To negate a set membership test, insert the word NOT in front of the
keyword IN.
Example
This query returns the first and last names of the employees who are not heads of the Finance or Shipping departments.
SELECT emp_fname, emp_lname
FROM employee
WHERE emp_id NOT IN (
SELECT dept_head_id
FROM department
WHERE (dept_name=’Finance’ OR dept_name =
’Shipping’))
Existence test
Subqueries used in the subquery comparison test and set membership test
both return data values from the subquery table. Sometimes, however, you
may be more concerned with whether the subquery returns any results, rather
than which results. The existence test (EXISTS) checks whether a subquery
produces any rows of query results. If the subquery produces one or more
rows of results, the EXISTS test returns TRUE. Otherwise, it returns FALSE.
Example
Here is an example of a request expressed using a subquery: "Which
customers placed orders after July 13, 1994?"
SELECT fname, lname
FROM customer
WHERE EXISTS (
SELECT *
FROM sales_order
WHERE (order_date > ’1994-07-13’) AND (customer.id =
sales_order.cust_id))
fname lname
Grover Pendelton
Ling Ling Andrews
Bubba Murphy
Almen de Joie
Explanation of the existence test
Here, for each row in the customer table, the subquery checks if that customer ID corresponds to one that has placed an order after July 13, 1994.
If it does, the query extracts the first and last names of that customer from
the main table.
The EXISTS test does not use the results of the subquery; it just checks if the subquery produces any rows. So the existence test applied to the following two subqueries returns the same results:
SELECT *
FROM sales_order
WHERE (order_date > ’1994-07-13’) AND (customer.id =
sales_order.cust_id)
SELECT ship_date
FROM sales_order
WHERE (order_date > ’1994-07-13’) AND (customer.id =
sales_order.cust_id)
It does not matter which columns from the sales_order table appear in the
SELECT statement, though by convention, the "SELECT *" notation is used.
Negating the existence test
You can reverse the logic of the EXISTS test using the NOT EXISTS form. In this case, the test returns TRUE if the subquery produces no rows, and FALSE otherwise.
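For example, negating the earlier query yields the customers who placed no orders after July 13, 1994 (a sketch obtained simply by swapping EXISTS for NOT EXISTS):

```sql
SELECT fname, lname
FROM customer
WHERE NOT EXISTS (
   SELECT *
   FROM sales_order
   WHERE (order_date > '1994-07-13') AND (customer.id =
   sales_order.cust_id))
```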
Correlated subqueries
You may have noticed that the subquery contains a reference to the id column from the customer table. References to columns or expressions in
the main table(s) are called outer references and the subquery is said to be
correlated. Conceptually, SQL processes the above query by going through
the customer table, and performing the subquery for each customer. If the
order date in the sales_order table is after July 13, 1994, and the customer
ID in the customer and sales_order tables match, then the first and last
names from the customer table appear. Since the subquery references the
main query, the subquery in this section, unlike those from previous sections,
returns an error if you attempt to run it by itself.
Outer references
Within the body of a subquery, it is often necessary to refer to the value of a
column in the active row of the main query. Consider the following query:
SELECT name, description
FROM product
WHERE quantity < 2 * (
SELECT avg(quantity)
FROM sales_order_items
WHERE product.id = sales_order_items.prod_id)
This query extracts the names and descriptions of the products whose in-
stock quantities are less than double the average ordered quantity of that
product — specifically, the product being tested by the WHERE clause in the
main query. The subquery does this by scanning the sales_order_items
table. But the product.id column in the WHERE clause of the subquery
refers to a column in the table named in the FROM clause of the main query
— not the subquery. As SQL moves through each row of the product table,
it uses the id value of the current row when it evaluates the WHERE clause
of the subquery.
Description of an outer reference
The product.id column in this subquery is an example of an outer reference. A subquery that uses an outer reference is a correlated
subquery. An outer reference is a column name that does not refer to any of
the columns in any of the tables in the FROM clause of the subquery.
Instead, the column name refers to a column of a table specified in the
FROM clause of the main query. As the above example shows, the value of a
column in an outer reference comes from the row currently being tested by
the main query.
Subqueries and joins
order_date sales_rep
1994-01-05 1596
1993-01-27 667
1993-11-11 467
1994-02-04 195
1994-02-19 195
1994-04-02 299
1993-11-09 129
1994-01-29 690
1994-05-25 299
The subquery yields a list of customer ID’s that correspond to the two
customers whose names are listed in the WHERE clause, and the main query
finds the order dates and sales representatives corresponding to those two
people’s orders.
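The subquery form of this query falls on the previous page and is not shown above. Reconstructed from the description, it would look like this:

```sql
-- Reconstructed sketch: order dates and sales reps for Clarke's and Suresh's orders
SELECT order_date, sales_rep
FROM sales_order
WHERE cust_id IN (
   SELECT id
   FROM customer
   WHERE lname = 'Clarke' OR fname = 'Suresh' )
```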
Replacing a subquery with a join
The same question can be answered using joins. Here is an alternative form of the query, using a two-table join:
SELECT order_date, sales_rep
FROM sales_order, customer
WHERE cust_id=customer.id AND (lname = ’Clarke’ OR fname
= ’Suresh’)
This form of the query joins the sales_order table to the customer table to
find the orders for each customer, and then returns only those records for
Suresh and Mrs. Clarke.
Some joins cannot be written as subqueries
Both of these queries find the correct order dates and sales representatives, and neither is more right than the other. Many people will find the subquery form more natural, because the request doesn’t ask for any information about
customer ID’s, and because it might seem odd to join the sales_order and
customer tables together to answer the question.
If, however, the request changes to include some information from the
customer table, the subquery form no longer works. For example, for the request "When did Mrs. Clarke and Suresh place their orders, by which representatives, and what are their full names?", it is necessary to include the customer table in the main query:
SELECT fname, lname, order_date, sales_rep
FROM sales_order, customer
WHERE cust_id=customer.id AND (lname = ’Clarke’ OR fname
= ’Suresh’)
Some subqueries cannot be written as joins
Similarly, there are cases where a subquery will work but a join will not. For example:
SELECT name, description, quantity
FROM product
WHERE quantity < 2 * (
SELECT avg(quantity)
FROM sales_order_items)
In this case, the inner query is a summary query and the outer query is not, so
there is no way to combine the two queries by a simple join.
$ For more information on joins, see "Queries: Selecting Data from a
Table" on page 145.
Nested subqueries
As we have seen, subqueries always appear in the HAVING clause or the
WHERE clause of a query. A subquery may itself contain a WHERE clause
and/or a HAVING clause, and, consequently, a subquery may appear in
another subquery. Subqueries inside other subqueries are called nested
subqueries.
Examples
List the order IDs and line IDs of those orders shipped on the same day that any order with the 'Fees' financial code was placed.
SELECT id, line_id
FROM sales_order_items
WHERE ship_date = ANY (
SELECT order_date
FROM sales_order
WHERE fin_code_id IN (
SELECT code
FROM fin_code
WHERE (description = ’Fees’)))
id line_id
2001 1
2001 2
2001 3
2002 1
2002 2
… …
Explanation of the nested subqueries
♦ In this example, the innermost subquery produces a column of financial codes whose descriptions are "Fees":
SELECT code
FROM fin_code
WHERE (description = ’Fees’)
♦ The next subquery finds the order dates of the items whose codes match
one of the codes selected in the innermost subquery:
SELECT order_date
FROM sales_order
WHERE fin_code_id IN (subquery)
♦ Finally, the outermost query finds the order ID’s and line ID’s of the
orders shipped on one of the dates found in the subquery.
Nested subqueries can also have more than three levels. Though there is no
maximum number of levels, queries with three or more levels take
considerably longer to run than do smaller queries.
Correlated subqueries
In a simple query, the database server evaluates and processes the query’s
WHERE clause once for each row of the query. Sometimes, though, the
subquery returns only one result, making it unnecessary for the database
server to evaluate it more than once for the entire result set.
Uncorrelated subqueries
Consider this query:
SELECT name, description
FROM product
WHERE quantity < 2 * (
SELECT avg(quantity)
FROM sales_order_items)
In this example, the subquery calculates exactly one value: the average
quantity from the sales_order_items table. In evaluating the query, the
database server computes this value once, and compares each value in the
quantity field of the product table to it to determine whether to select the
corresponding row.
Correlated subqueries
When a subquery contains an outer reference, you cannot use this shortcut. For instance, the subquery in the query
SELECT name, description
FROM product
WHERE quantity < 2 * (
SELECT avg(quantity)
FROM sales_order_items
WHERE product.id=sales_order_items.prod_id)
returns a value dependent upon the active row in the product table. Such
subqueries are called correlated subqueries. In these cases, the subquery
might return a different value for each row of the outer query, making it
necessary for the database server to perform more than one evaluation.
How subqueries work
Comparison operators
A subquery that follows a comparison operator (=, <>, <, <=, >, >=) must
satisfy certain conditions if it is to be converted into a join. Subqueries that
follow comparison operators in general are valid only if they return exactly
one value for each row of the main query. In addition to this criterion, a
subquery is converted to a join only if the subquery
♦ does not contain a GROUP BY clause
♦ does not contain the keyword DISTINCT
♦ is not a UNION query
♦ is not an aggregate query
Example
Suppose the request "When were Suresh’s products ordered, and by which sales representative?" were phrased as the subquery
SELECT order_date, sales_rep
FROM sales_order
WHERE cust_id = (
SELECT id
FROM customer
WHERE fname = ’Suresh’)
This query satisfies the criteria, and therefore, it would be converted to a
query using a join:
SELECT order_date, sales_rep
FROM sales_order, customer
WHERE cust_id=customer.id AND fname = ’Suresh’
However, the request, "Find the products whose in-stock quantities are less
than double the average ordered quantity" cannot be converted to a join, as
the subquery contains the aggregate function avg:
SELECT name, description
FROM product
WHERE quantity < 2 * (
SELECT avg(quantity)
FROM sales_order_items)
A query with an IN operator can be converted to one with an ANY operator
The conditions that must be fulfilled for a subquery that follows the IN keyword and the ANY keyword to be converted to a join are identical. This is not a coincidence, and the reason for this is that the expression
WHERE column-name IN (subquery)
is logically equivalent to the expression
WHERE column-name = ANY (subquery)
So the query
SELECT emp_fname, emp_lname
FROM employee
WHERE emp_id IN (
SELECT dept_head_id
FROM department
WHERE (dept_name=’Finance’ or dept_name = ’Shipping’))
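By the stated equivalence, the query above is processed exactly as its ANY form would be. That equivalent form (reconstructed, as the original sentence is cut off at a page break) is:

```sql
SELECT emp_fname, emp_lname
FROM employee
WHERE emp_id = ANY (
   SELECT dept_head_id
   FROM department
   WHERE (dept_name='Finance' OR dept_name = 'Shipping'))
```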
Existence test
A subquery that follows the keyword EXISTS is converted to a join only if it
satisfies the following conditions:
♦ The main query does not contain a GROUP BY clause, and is not an
aggregate query, or the subquery returns exactly one value.
♦ The conjunct ’EXISTS (subquery)’ is not negated.
♦ The subquery is correlated; that is, it contains an outer reference.
Example
Therefore, the request, "Which customers placed orders after July 13,
1994?", which can be formulated by this query whose non-negated subquery
contains the outer reference customer.id = sales_order.cust_id, could be
converted to a join.
CHAPTER 9   Adding, Changing, and Deleting Data
About this chapter
This chapter describes how to modify the data in a database.
Most of the chapter is devoted to the INSERT, UPDATE, and DELETE
statements, as well as statements for bulk loading and unloading.
Contents
Topic Page
Data modification statements 248
Adding data using INSERT 249
Changing data using UPDATE 253
Deleting data using DELETE 255
Data modification statements
Chapter 9 Adding, Changing, and Deleting Data
Notes
♦ Enter the values in the same order as the column names in the original
CREATE TABLE statement, that is, first the ID number, then the name,
then the department head ID.
♦ Surround the values by parentheses.
♦ Enclose all character data in single quotes.
♦ Use a separate INSERT statement for each row you add.
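Following these notes, a full-row insert into the department table might look like this (the ID values here are hypothetical, chosen only for illustration):

```sql
-- Values listed in CREATE TABLE order: ID, name, department head ID
INSERT INTO department
VALUES ( 702, 'Eastern Sales', 902 )
```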
Adding data using INSERT
Adding data in only two columns, for example, dept_id and dept_name,
requires a statement like this:
INSERT INTO department (dept_id, dept_name)
VALUES ( 703, ’Western Sales’ )
The dept_head_id column has no default, but can allow NULL. A NULL is
assigned to that column.
The order in which you list the column names must match the order in which
you list the values. The following example produces the same results as the
previous one:
INSERT INTO department (dept_name, dept_id )
VALUES (’Western Sales’, 703)
Values for unspecified columns
When you specify values for only some of the columns in a row, one of four things can happen to the columns with no values specified:
♦ NULL entered NULL appears if the column allows NULL and no
default value exists for the column.
♦ A default value entered The default value appears if a default exists
for the column.
♦ A unique, sequential value entered A unique, sequential value
appears if the column has the AUTOINCREMENT default or the
IDENTITY property.
♦ INSERT rejected, and an error message appears An error message
appears if the column does not allow NULL and no default exists.
By default, columns allow NULL unless you explicitly state NOT NULL in
the column definition when creating tables. You can alter the default using
the ALLOW_NULLS_BY_DEFAULT option.
Restricting column data using constraints
You can create constraints for a column or domain. Constraints govern the kind of data you can or cannot add.
$ For information on constraints, see "Using table and column
constraints" on page 355.
Explicitly inserting NULL
You can explicitly insert NULL into a column by entering NULL. Do not enclose this in quotes, or it will be taken as a string.
For example, the following statement explicitly inserts NULL into the
dept_head_id column:
INSERT INTO department
VALUES (703, ’Western Sales’, NULL )
Using defaults to supply values
You can define a column so that, even though the column receives no value, a default value automatically appears whenever a row is inserted. You do this by supplying a default for the column.
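For example, defaults can be declared when the table is created. This sketch uses a hypothetical table, not one from the sample database:

```sql
CREATE TABLE sales_note (
   id         INTEGER NOT NULL DEFAULT AUTOINCREMENT, -- unique, sequential value
   note_date  DATE DEFAULT CURRENT DATE,              -- supplied when no value is given
   contents   VARCHAR(254)
)
```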
FROM product
WHERE name = ’Tee Shirt’
The SET clause
The SET clause specifies the columns to be updated, and their new values.
The WHERE clause determines the row or rows to be updated. If you do not
have a WHERE clause, the specified columns of all rows are updated with
the values given in the SET clause.
Changing data using UPDATE
You can provide any expression of the correct data type in the SET clause.
The WHERE clause
The WHERE clause specifies the rows to be updated. For example, the following statement replaces the One Size Fits All Tee Shirt with an Extra Large Tee Shirt:
UPDATE product
SET size = ’Extra Large’
WHERE name = ’Tee Shirt’
AND size = ’One Size Fits All’
The FROM clause
You can use a FROM clause to pull data from one or more tables into the table you are updating.
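As a sketch only (Transact-SQL-style syntax; the exact form depends on your server), an UPDATE with a FROM clause might reduce stock quantities using data from another table:

```sql
-- Hypothetical example: subtract the quantities shipped on order 2001 from stock
UPDATE product
SET quantity = product.quantity - sales_order_items.quantity
FROM product, sales_order_items
WHERE product.id = sales_order_items.prod_id
  AND sales_order_items.id = 2001
```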
The WHERE clause
Use the WHERE clause to specify which rows to remove. If no WHERE clause appears, the DELETE statement removes all rows in the table.
The FROM clause
The FROM clause in the second position of a DELETE statement is a special
feature allowing you to select data from a table or tables and delete
corresponding data from the first-named table. The rows you select in the
FROM clause specify the conditions for the delete.
Example
This example uses the sample database. To execute the statements in the
example, you should set the option WAIT_FOR_COMMIT to OFF. The
following statement does this for the current connection only:
SET TEMPORARY OPTION WAIT_FOR_COMMIT = ’OFF’
This allows you to delete rows even if they contain primary keys referenced
by a foreign key, but does not permit a COMMIT unless the corresponding
foreign key is deleted also.
The following view displays products and the value of that product that has
been sold:
CREATE VIEW ProductPopularity as
SELECT product.id,
SUM(product.unit_price * sales_order_items.quantity)
as "Value Sold"
FROM product JOIN sales_order_items
ON product.id = sales_order_items.prod_id
GROUP BY product.id
Using this view, you can delete those products which have sold less than
$20,000 from the product table.
DELETE
FROM product
FROM product NATURAL JOIN ProductPopularity
WHERE "Value Sold" < 20000
You should roll back your changes when you have completed the example:
ROLLBACK
Deleting data using DELETE
CHAPTER 10   Using SQL in Applications
About this chapter
Previous chapters have described SQL statements as you execute them in
Interactive SQL or in some other interactive utility.
When you include SQL statements in an application there are other questions
you need to ask. For example, how does your application handle query result
sets? How can you make your application efficient?
While many aspects of database application development depend on your
application development tool, database interface, and programming
language, there are some common problems and principles that affect
multiple aspects of database application development. This chapter describes
some principles common to most or all interfaces and provides a few
pointers for more information. It does not provide a detailed guide for
programming using any one interface.
Contents
Topic Page
Executing SQL statements in applications 258
Preparing statements 260
Introduction to cursors 263
Types of cursor 266
Working with cursors 269
Describing result sets 275
Controlling transactions in applications 277
Executing SQL statements in applications
$ For more information
For detailed information on how to include SQL in your application, see your development tool documentation. If you are using ODBC or JDBC, consult the software development kit for those interfaces.
Chapter 10 Using SQL in Applications
Preparing statements
Each time a statement is sent to a database, the server must first prepare the
statement. Preparing the statement can include:
♦ Parsing the statement and transforming it into an internal form.
♦ Verifying the correctness of all references to database objects by
checking, for example, that columns named in a query actually exist.
♦ Causing the query optimizer to generate an access plan if the statement involves joins or subqueries.
♦ Executing the statement after all these steps have been carried out.
Reusing prepared statements can improve performance
If you find yourself using the same statement repeatedly, for example, inserting many rows into a table, repeatedly preparing the statement causes a significant and unnecessary overhead. To remove this overhead, some database programming interfaces provide ways of using prepared
statements. Generally, using these methods requires the following steps:
1 Prepare the statement In this step you generally provide the
statement with some placeholder character instead of the values.
2 Repeatedly execute the prepared statement In this step you supply
values to be used each time the statement is executed. The statement
does not have to be prepared each time.
3 Drop the statement In this step you free the resources associated with
the prepared statement. Some programming interfaces handle this step
automatically.
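In Embedded SQL, for instance, the three steps might be sketched roughly as follows. This is illustrative only; the statement name and host-variable names are hypothetical, and the exact calls differ from interface to interface:

```sql
-- 1. Prepare once, with ? placeholders instead of values
EXEC SQL PREPARE ins_stmt FROM 'INSERT INTO department VALUES ( ?, ?, ? )';

-- 2. Execute repeatedly, supplying new values each time
EXEC SQL EXECUTE ins_stmt USING :dept_id, :dept_name, :dept_head_id;

-- 3. Drop the statement to free its resources
EXEC SQL DROP STATEMENT ins_stmt;
```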
Do not prepare statements that are used only once
In general, you should not prepare statements if you’ll only execute them once. There is a slight performance penalty for separate preparation and execution, and it introduces an unnecessary complexity into your application.
In some interfaces, however, you do need to prepare a statement to associate
it with a cursor. For information about cursors, see "Introduction to cursors"
on page 263.
The calls for preparing and executing statements are not a part of SQL, and
they differ from interface to interface. Each of the Adaptive Server
Anywhere programming interfaces provides a method for using prepared
statements.
Introduction to cursors
When you execute a query in an application, the result set consists of a
number of rows. In general, you do not know how many rows you are going
to receive before you execute the query. Cursors provide a way of handling
query result sets in applications.
The way you use cursors, and the kinds of cursors available to you, depend
on the programming interface you use. JDBC 1.0 provides rudimentary
handling of result sets, while ODBC and Embedded SQL have many
different kinds of cursors. Open Client cursors can only move forward
through a result set.
$ For information on the kinds of cursors available through different
programming interfaces, see "Availability of cursors" on page 267.
What is a cursor?
A cursor is a symbolic name associated with a SELECT statement or stored
procedure that returns a result set. It consists of a cursor result set (the set of
rows resulting from the execution of a query associated with the cursor) and
a cursor position (a pointer to one row within the cursor result set).
A cursor is like a handle on the result set of a SELECT statement. It enables
you to examine and possibly manipulate one row at a time. In Adaptive
Server Anywhere, cursors support forward and backward movement through
the query results.
[Diagram: cursor positions in an n-row result set. Counting from the start, rows are numbered 1 through n, with n + 1 denoting the position after the last row; counting from the end, the same rows are numbered -n through -1, with 0 denoting the position after the last row.]
Prefetching rows
In some cases, the interface library may carry out performance optimizations under the covers (such as prefetching results), so these steps in the client application may not correspond exactly to software operations.
Types of cursor
You can choose from several kinds of cursors in Adaptive Server Anywhere
when you declare the cursor. Cursors have the following properties:
♦ Unique or non-unique Declaring a cursor to be unique forces the
query to return all the columns required to uniquely identify each row.
Often this means returning all the columns in the primary key. Any
columns required but not specified are added to the result set. The
default cursor type is non-unique.
♦ Read only or updatable A cursor declared as read only may not be used in an UPDATE (positioned) or a DELETE (positioned) operation. The default cursor type is updatable.
♦ Scrollability You can declare cursors to behave in different ways as you move through the result set.
♦ No Scroll Declaring a cursor NO SCROLL restricts fetching
operations to fetching the next row or the same row again. You
cannot rely on prefetches with no scroll cursors, so performance
may be compromised.
♦ Dynamic scroll cursors With DYNAMIC SCROLL cursors you
can carry out more flexible fetching operations. You can move
backwards and forwards in the result set, or move to an absolute
position.
♦ Scroll cursors Similar to DYNAMIC SCROLL cursors,
SCROLL cursors behave differently when the rows in the cursor are
modified or deleted after the first time the row is read. SCROLL
cursors have more predictable behavior when other connections
make changes to the database.
♦ Insensitive cursors Also called STATIC cursors in ODBC, a cursor declared INSENSITIVE has its membership fixed when it is opened, and a temporary table is created with a copy of all the original rows. Fetching from an INSENSITIVE cursor does not see
original rows. Fetching from an INSENSITIVE cursor does not see
the effect of any other operation from a different cursor. It does see
the effect of operations on the same cursor. Also, ROLLBACK or
ROLLBACK TO SAVEPOINT do not affect INSENSITIVE
cursors; these operations do not change the cursor contents.
It is easier to write an application using INSENSITIVE cursors,
since you only have to worry about changes you make explicitly to
the cursor. You do not have to worry about actions taken by other
users or by other parts of your application.
Availability of cursors
Not all interfaces provide support for all kinds of cursors.
♦ JDBC 1.1 does not use cursors, although the ResultSet object does have
a next method that allows you to scroll through the results of a query in
the client application. JDBC 2 does provide cursor operations.
♦ ODBC supports all kinds of cursors.
ODBC provides a cursor type called a BLOCK cursor. When you use a
BLOCK cursor, you can use SQLFetchScroll or SQLExtendedFetch
to fetch a block of rows, rather than a single row. Block cursors behave
identically to ESQL ARRAY fetches.
♦ Embedded SQL supports all the kinds of cursors.
♦ Sybase Open Client supports only NO SCROLL cursors. Also, a severe performance penalty results when using updatable, non-unique cursors.
♦ UNION ALL
♦ DISTINCT
♦ GROUP BY
♦ A subselect in the select list
♦ A subquery in the WHERE or HAVING clause
Example
For example, an application could remember that Cobb is the second row in the cursor for the following query:
SELECT emp_lname
FROM employee
If someone deletes the first employee (Whitney) while the SCROLL cursor is
still open, a FETCH ABSOLUTE 2 still positions on Cobb while FETCH
ABSOLUTE 1 returns an error. Similarly, if the cursor is on Cobb, FETCH
PREVIOUS will return the Row Not Found error.
In addition, a fetch on a SCROLL cursor returns the warning
SQLE_ROW_UPDATED (104) if the row has changed since last reading.
The warning only happens once. Subsequent fetches of the same row do not
produce the warning.
Similarly, an UPDATE (positioned) or DELETE (positioned) statement on a
row modified since it was last fetched returns the
SQLE_ROW_UPDATED_SINCE_READ error. An application must fetch the
row again for the UPDATE or DELETE on a SCROLL cursor to work.
An update to any column causes the warning/error, even if the column is not
referenced by the cursor. For example, a cursor on a query returning
emp_lname would report the update even if only the salary column were
modified.
Cursor positioning
A cursor can be positioned at one of three places:
♦ On a row
♦ Before the first row
♦ After the last row
[Diagram: cursor positions in an n-row result set. Counting from the start, rows are numbered 1 through n, with n + 1 denoting the position after the last row; counting from the end, the same rows are numbered -n through -1, with 0 denoting the position after the last row.]
When a cursor is opened, it appears before the first row. You can move the
cursor position using the FETCH command (see "FETCH statement" on
page 509 of the book ASA Reference) to an absolute position from the start or
the end of the query results (using FETCH ABSOLUTE, FETCH FIRST, or
FETCH LAST), or to a position relative to the current cursor position (using
FETCH RELATIVE, FETCH PRIOR, or FETCH NEXT). The NEXT
keyword is the default qualifier for the FETCH statement.
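For example, in Embedded SQL the positioning operations just described might be sketched as follows (the cursor and host-variable names are hypothetical, and the exact syntax is interface-dependent):

```sql
EXEC SQL DECLARE emp_cursor CURSOR FOR
   SELECT emp_lname FROM employee;
EXEC SQL OPEN emp_cursor;
EXEC SQL FETCH FIRST emp_cursor INTO :lname;       -- absolute: first row
EXEC SQL FETCH RELATIVE 2 emp_cursor INTO :lname;  -- relative: two rows forward
EXEC SQL FETCH LAST emp_cursor INTO :lname;        -- absolute: last row
EXEC SQL CLOSE emp_cursor;
```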
Working with cursors
The number of row positions you can fetch in a cursor is governed by the
size of an integer. You can fetch rows numbered up to 2147483646,
which is one less than the value that can be held in an integer. When using
negative numbers (rows from the end) you can fetch down to one more than
the largest negative value that can be held in an integer.
You can use special positioned versions of the UPDATE and DELETE
statements to update or delete the row at the current position of the cursor. If
the cursor is positioned before the first row or after the last row, a No current
row of cursor error will be returned.
Prefetching rows
Prefetches and multiple-row fetches are different. Prefetches can be carried
out without explicit instructions from the client application. Prefetching
retrieves rows from the server into a buffer on the client side, but does not
make those rows available to the client application until the application
fetches the appropriate row.
By default, the Adaptive Server Anywhere client library prefetches multiple
rows whenever an application fetches a single row. The Adaptive Server
Anywhere client library stores the additional rows in a buffer.
Prefetching assists performance by cutting down on client/server traffic, and
increases throughput by making many rows available without a separate
request to the server for each row or block of rows.
$ For information on controlling prefetches, see "PREFETCH option" on
page 193 of the book ASA Reference.
Controlling prefetching from an application
♦ The PREFETCH option controls whether or not prefetching occurs. You
can set the PREFETCH option to ON or OFF for a single connection. By
default it is set to ON.
♦ In Embedded SQL, you can control prefetching on a per-cursor basis
when you open a cursor, or on an individual FETCH operation, using the
BLOCK clause.
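As a rough analogy (not the actual ASA client library), prefetching can be sketched in Python using sqlite3: a wrapper pulls rows from the "server" in blocks with fetchmany and serves them to the application one at a time from a local buffer, so fewer round trips are needed. The `PrefetchingCursor` class and its counter are illustrative inventions:

```python
import sqlite3

# Sketch of client-side prefetching: rows are pulled in blocks and served
# to the application one at a time from a local buffer.
class PrefetchingCursor:
    def __init__(self, cursor, block_size=4):
        self._cursor = cursor
        self._block_size = block_size
        self._buffer = []
        self.round_trips = 0            # how many block requests were made

    def fetch_one(self):
        if not self._buffer:
            self._buffer = self._cursor.fetchmany(self._block_size)
            self.round_trips += 1
        return self._buffer.pop(0) if self._buffer else None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (emp_id INTEGER)")
conn.executemany("INSERT INTO employee VALUES (?)", [(i,) for i in range(8)])
cur = PrefetchingCursor(conn.execute(
    "SELECT emp_id FROM employee ORDER BY emp_id"))
rows = []
while (row := cur.fetch_one()) is not None:
    rows.append(row[0])
assert rows == list(range(8))
assert cur.round_trips == 3  # two full blocks of 4, plus one empty probe
```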
Which table are rows deleted from?
If you attempt a positioned delete on a cursor, the table from which rows are
deleted is determined as follows:
1 If no FROM clause is included in the delete statement, the cursor must
be on a single table only.
2 If the cursor is for a joined query (including using a view containing a
join), then the FROM clause must be used. Only the current row of the
specified table is deleted. The other tables involved in the join are not
affected.
3 If a FROM clause is included, and no table owner is specified, the table-
spec value is first matched against any correlation names.
$ For more information, see the "FROM clause" on page 518 of the
book ASA Reference.
4 If a correlation name exists, the table-spec value is identified with the
correlation name.
5 If a correlation name does not exist, the table-spec value must be
unambiguously identifiable as a table name in the cursor.
6 If a FROM clause is included, and a table owner is specified, the table-
spec value must be unambiguously identifiable as a table name in the
cursor.
7 The positioned DELETE statement can be used on a cursor open on a
view as long as the view is updateable.
Describing result sets
Implementation notes
♦ In Embedded SQL, a SQLDA (SQL Descriptor Area) structure holds the
descriptor information.
$ For more information, see "The SQL descriptor area (SQLDA)" on
page 44 of the book ASA Programming Interfaces Guide.
♦ In ODBC, a descriptor handle allocated using SQLAllocHandle
provides access to the fields of a descriptor. You can manipulate these
fields using SQLSetDescRec, SQLSetDescField, SQLGetDescRec,
and SQLGetDescField.
Alternatively, you can use SQLDescribeCol and SQLColAttributes to
obtain column information.
♦ In Open Client, you can use ct_dynamic to prepare a statement and
ct_describe to describe the result set of the statement. However, you can
also use ct_command to send a SQL statement without preparing it
first, and use ct_results to handle the returned rows one by one. This is
the more common way of operating in Open Client application
development.
♦ In JDBC, the java.sql.ResultSetMetaData class provides information
about result sets.
♦ You can also use descriptors for sending data to the engine (for example,
with the INSERT statement), however, this is a different kind of
descriptor than for result sets.
$ For more information about input and output parameters of the
DESCRIBE statement, see the "DESCRIBE statement" on page 486 of
the book ASA Reference.
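As a loose analogue of these interfaces, Python's sqlite3 module exposes result-set metadata through `cursor.description`, much as JDBC does through java.sql.ResultSetMetaData; a minimal sketch:

```python
import sqlite3

# Illustration only: obtain column information for a result set, in the
# spirit of the describe mechanisms listed above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employee (emp_id INTEGER, emp_lname TEXT)")
cur = conn.execute("SELECT emp_id, emp_lname FROM employee")
# Each entry of cursor.description describes one column of the result set.
column_names = [col[0] for col in cur.description]
assert column_names == ["emp_id", "emp_lname"]
```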
Controlling transactions in applications
C H A P T E R 1 1
International Languages and Character Sets
About this chapter This chapter describes how to configure your Adaptive Server Anywhere
installation to handle international language issues.
Contents
Topic Page
Introduction to international languages and character sets 280
Understanding character sets in software 283
Understanding locales 288
Understanding collations 294
Understanding character set translation 302
Collation internals 305
International language and character set tasks 310
Introduction to international languages and character sets
Chapter 11 International Languages and Character Sets
Understanding character sets in software
Code pages
Many languages have few enough characters to be represented in a single-
byte character set. In such a character set, each character is represented by a
single byte: a two-digit hexadecimal number.
At most, 256 characters can be represented in a single byte. No single-byte
character set can hold all of the characters used internationally, including
accented characters. This problem was addressed by the development of a set
of code pages, each of which describes a set of characters appropriate for
one or more national languages. For example, code page 869 contains the
Greek character set, and code page 850 contains an international character
set suitable for representing many characters in a variety of languages.
Upper and lower pages
With few exceptions, characters 0 to 127 are the same for all the single-byte
code pages. The mapping for this range of characters is called the ASCII
character set. It includes the English language alphabet in upper and lower
case, as well as common punctuation symbols and the digits. This range is
often called the seven-bit range (because only seven bits are needed to
represent the numbers up to 127) or the lower page. The characters from 128
to 255 are called extended characters, or upper code-page characters, and
vary from code page to code page.
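This invariance can be checked directly against Python's codec tables (an illustration, not part of the product):

```python
# Sketch: the seven-bit (ASCII) range decodes identically in two
# single-byte code pages, while the upper page differs between them.
lower_same = all(
    bytes([b]).decode("cp437") == bytes([b]).decode("cp850")
    for b in range(128)
)
upper_differs = any(
    bytes([b]).decode("cp437") != bytes([b]).decode("cp850")
    for b in range(128, 256)
)
assert lower_same and upper_differs
```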
Problems with code page compatibility are rare if the only characters used
are from the English alphabet, as these are represented in the ASCII portion
of each code page (0 to 127). However, if other characters are used, as is
generally the case in any non-English environment, there can be problems if
the database and the application use different code pages.
Example
Suppose a database holding French language strings uses code page 850, and
the client operating system uses code page 437. The character À (upper case
A grave) is held in the database as character \xB7 (decimal value 183). In
code page 437, character \xB7 is a graphical character. When the client
application receives this byte and the operating system displays it on the
screen, the user sees a graphical character instead of an A grave.
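The example can be reproduced with Python's codec tables (illustrative only): byte \xB7 is À in code page 850 but a box-drawing character in code page 437.

```python
# The code-page mismatch from the example above, checked against
# Python's cp850 and cp437 codecs.
raw = b"\xb7"
assert raw.decode("cp850") == "\u00c0"  # LATIN CAPITAL LETTER A WITH GRAVE
assert raw.decode("cp437") == "\u2556"  # a graphical (box-drawing) character
```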
Understanding locales
Both the database server and the client library recognize their language and
character set environment using a locale definition.
Introduction to locales
The application locale, or client locale, is used by the client library when
making requests to the database server, to determine the character set in
which results should be returned. If character-set translation is enabled, the
database server compares its own locale with the application locale to
determine whether character set translation is needed. Different databases on
a server may have different locale definitions.
$ For information on enabling character-set translation, see "Starting a
database server using character set translation" on page 314.
The locale consists of the following components:
♦ Language The language is a two-character string using the ISO-639
standard values: DE for German, FR for French, and so on. Both the
database server and the client have language values for their locale.
The database server uses the locale language to determine the following
behavior:
♦ Which language library to load.
♦ The language is used together with the character set to determine
which collation to use when creating databases, if no collation is
explicitly specified.
The client library uses the locale language to determine the following
behavior:
♦ Which language library to load.
♦ Which language to request from the database.
$ For more information, see "Understanding the locale language" on
page 289.
♦ Character set The character set is the code page in use. The client and
server both have character set values, and they may differ. If they differ,
character set translation may be required to enable interoperability.
For machines that use both OEM and ANSI code pages, the ANSI code
page is the value used here.
Language label values
The following table shows the valid language label values, together with the
equivalent ISO 639 labels:
Character set labels
The following table shows the valid character set label values, together with
the equivalent IANA labels and a description:
Understanding collations
This section describes the supplied collations, and provides suggestions as to
which collations to use under certain circumstances.
$ For information on how to create a database with a specific collation,
see "Creating a database with a named collation" on page 313. For
information on changing a database from one collation to another, see
"Changing a database from one collation to another" on page 317.
Supplied collations
The following collations are supplied with Adaptive Server Anywhere. You
can obtain this list by entering the following command at a system command
line:
dbinit /l
ANSI or OEM?
Adaptive Server Anywhere collations are based on code pages that are
designated as either ANSI or OEM. In most cases, use of an ANSI code page
is recommended.
If you choose to use an ANSI code page, you must not use the ODBC
translation driver in the ODBC data source configuration window.
If you choose to use an OEM code page, you must do the following:
♦ Choose a code page that matches the OEM code pages on your users’
client machines.
♦ When setting up data sources for Windows-based ODBC applications,
do choose the Adaptive Server Anywhere translation driver in the
ODBC data source configuration.
The translation driver converts between the OEM code page on your
machine and the ANSI code page used by Windows. If the database
collation is a different OEM code page than the one on your machine, an
incorrect translation will be applied.
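What such a translation driver does can be sketched with Python codecs (an analogy, not the driver itself): the same character occupies different byte values in an OEM code page (cp437 here) and the ANSI code page (cp1252), so bytes must be re-encoded in transit.

```python
# Sketch of OEM-to-ANSI translation: re-encode a byte from the OEM
# code page (cp437) into the ANSI code page (cp1252).
oem_bytes = "é".encode("cp437")                     # OEM representation
ansi_bytes = oem_bytes.decode("cp437").encode("cp1252")
assert oem_bytes == b"\x82"    # é in code page 437
assert ansi_bytes == b"\xe9"   # é in the Windows ANSI code page
```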
Both Interactive SQL and Sybase Central detect whether the database
collation is ANSI or OEM by checking the first few characters, and either
enable or disable translation as needed.
$ For more information about code page translation in Interactive SQL,
see "CHAR_OEM_TRANSLATION option" on page 164 of the book ASA
Reference.
The 1252LATIN1 collation
This collation is the same as WIN_LATIN1 (see below), but includes the
euro currency symbol and several other characters (Z-with-caron and
z-with-caron). For single-byte Windows operating systems, this is the
recommended collation in most cases.
Windows NT service patch 4 changes the default character set in many
locales to a new 1252 character set on which 1252LATIN1 is based. If you
have this service patch, you should use this collation instead of
WIN_LATIN1.
The euro symbol sorts with the other currency symbols.
The WIN_LATIN1 collation
WIN_LATIN1 is similar to ISO_1, except that Windows has defined
characters in places where ISO_1 says "undefined", specifically the range
\x80-\xBF. The differences from Adaptive Server Enterprise’s ISO_1 are as
follows:
♦ The upper case and lower case Icelandic Eth (\xD0 and \xF0) is sorted
with D in Adaptive Server Anywhere, but after all other letters in
Adaptive Server Enterprise.
♦ The upper case and lower case Icelandic Thorn (\xDE and \xFE) is sorted
with T in Adaptive Server Anywhere, but after all other letters in
Adaptive Server Enterprise.
♦ The upper-case Y-diaeresis (\x9F) is sorted with Y in Adaptive Server
Anywhere, and case converts with lower-case Y-diaeresis (\xFF). In
Adaptive Server Enterprise it is undefined and sorts after \x9E.
♦ The lower case letter sharp s (\xDF) sorts with the lower case s in
Adaptive Server Anywhere, but after ss in Adaptive Server Enterprise.
♦ Ligatures are two characters combined into a single character. The
ligatures corresponding to AE and ae (\xC6 and \xE6) sort after A and a
respectively in Adaptive Server Anywhere, but after AE and ae in
Adaptive Server Enterprise.
♦ The ligatures corresponding to OE and oe (\x8C and \x9C) sort with O
in Adaptive Server Anywhere, but after OE and oe in Adaptive Server
Enterprise.
♦ The upper case and lower case letter S with caron (\x8A and \x9A) sorts
with S in Adaptive Server Anywhere, but is undefined in Adaptive
Server Enterprise, sorting after \x89 and \x99.
The ISO1LATIN1 collation
This collation is the same as ISO_1, but with sorting for values in the range
\xA0-\xBF. For compatibility with Adaptive Server Enterprise, the ISO_1
collation has no characters for \xA0-\xBF. However, the ISO Latin 1
character set on which it is based does have characters in these positions. The
ISO1LATIN1 collation reflects the characters in these positions.
If you are not concerned with Adaptive Server Enterprise compatibility,
ISO1LATIN1 is generally recommended instead of ISO_1.
The ISO9LATIN1 collation
This collation is the same as ISO1LATIN1, but it includes the euro currency
symbol and the other new characters included in the 1252LATIN1 collation.
If your machine uses the ISO Latin 9 character set, then you should use this
collation.
Understanding character set translation
[Diagram: a client using the Shift-JIS character set communicates with a
language library and database using EUC-JIS; client requests are translated
to EUC-JIS, and results are translated back to the client character set.]
A further character set translation is carried out if the database server -ct
command-line option is used, and if the client character set is different from
that used in the database collation.
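The need for translation can be illustrated with Python codecs (an analogy to, not an implementation of, the server's -ct machinery; Python's euc_jp codec stands in for the EUC-JIS encoding named above): the same katakana character has different byte representations in the two encodings.

```python
# Sketch: the same character encodes to different bytes in Shift-JIS
# and EUC-JP, so a Shift-JIS client and an EUC database need translation.
ch = "\u30a2"                              # katakana A
sjis = ch.encode("shift_jis")
euc = sjis.decode("shift_jis").encode("euc_jp")   # translate in transit
assert sjis == b"\x83\x41"
assert euc == b"\xa5\xa2"
```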
Also, recall that client/server character set translation takes place only if the
database server is started using the -ct command-line switch.
Collation internals
This section describes internal technical details of collations, including the
file format of collation files.
$ This section is of particular use if you want to create a database using a
custom collation. For information on the steps involved, see "Creating a
custom collation" on page 315, and "Creating a database with a custom
collation" on page 317.
You can create a database using a collation different from the supplied
collations. This section describes how to build databases using such a
custom collation.
In building multibyte custom collations, you can specify which ranges of
values for the first byte signify single- and double-byte (or more) characters,
and which specify space, alpha, and digit characters. However, all first bytes
of value less than \x40 must be single-byte characters, and no follow bytes
may have values less than \x40. This restriction is satisfied by all supported
encodings.
Collation files may include the following elements:
♦ Comment lines, which are ignored by the database.
♦ A title line.
♦ A collation sequence section.
♦ An Encodings section.
♦ A Properties section.
Comment lines
In the collation file, spaces are generally ignored. Comment lines start with
either the percent sign (%) or two dashes (--).
For example, the Shift-JIS collation file contains the following collation line,
with label SJIS and name (Japanese Shift-JIS Encoding):
Collation SJIS (Japanese Shift-JIS Encoding)
Other syntax notes
For databases using case-insensitive sorting and comparison (no -c specified
on the dbinit command line), the lower case and upper case mappings are
used to find the lower case and upper case characters that will be sorted
together.
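The effect of a case-insensitive collation can be sketched in Python by sorting with and without a case-folding key (illustrative only, not the collation machinery):

```python
# Sketch: a raw byte-value sort separates upper and lower case, while a
# case-insensitive collation maps the two cases together.
names = ["banana", "Apple", "apple", "Banana"]
byte_order = sorted(names)                        # byte-value comparison
case_insensitive = sorted(names, key=str.lower)   # cases sorted together
assert byte_order == ["Apple", "Banana", "apple", "banana"]
assert case_insensitive == ["Apple", "apple", "banana", "Banana"]
```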
For multibyte character sets, the first byte of a character is listed in the
collation sequence, and all characters with the same first byte are sorted
together, and ordered according to the value of the following bytes. For
example, the following is part of the Shift-JIS collation file:
: \xfb
: \xfc
: \xfd
In this collation, all characters with first byte \xfc come after all characters
with first byte \xfb and before all characters with first byte \xfd. The two-
byte character \xfc \x01 would be ordered before the two-byte character \xfc
\x02.
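This first-byte ordering can be illustrated with plain byte-sequence comparison in Python (a sketch of the principle, not of the collation machinery):

```python
# Sketch: characters sharing a first byte sort together and are
# sub-ordered by their follow bytes; lexicographic byte comparison
# shows the same effect.
assert b"\xfb\xff" < b"\xfc\x01"   # every \xfb character precedes \xfc
assert b"\xfc\x01" < b"\xfc\x02"   # the follow byte breaks the tie
assert b"\xfc\xff" < b"\xfd\x01"   # and every \xfc character precedes \xfd
```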
Any characters omitted from the collation are added to the end of the
collation. The tool that processes the collation file issues a warning.
space: [\x09-\x0d,\x20]
digit: [\x30-\x39]
alpha: [\x41-\x5a,\x61-\x7a,\x81-\x9f,\xe0-\xef]
This indicates that characters with first bytes \x09 to \x0d, as well as \x20,
are to be treated as space characters, digits are found in the range \x30 to
\x39 inclusive, and alphabetic characters in the four ranges \x41-\x5a, \x61-
\x7a, \x81-\x9f, and \xe0-\xef.
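A sketch of how such ranges classify a lead byte, in Python (the `classify` helper is a hypothetical illustration built from the ranges quoted above):

```python
# Sketch: classify a lead byte using the space/digit/alpha ranges from
# the example collation file.
def classify(b):
    if b in range(0x09, 0x0E) or b == 0x20:
        return "space"
    if 0x30 <= b <= 0x39:
        return "digit"
    if any(lo <= b <= hi for lo, hi in
           [(0x41, 0x5A), (0x61, 0x7A), (0x81, 0x9F), (0xE0, 0xEF)]):
        return "alpha"
    return "other"

assert classify(0x20) == "space"
assert classify(0x35) == "digit"
assert classify(0x81) == "alpha"
assert classify(0x3A) == "other"
```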
International language and character set tasks
Setting locales
You can use the default locale on your operating system, or explicitly set a
locale for use by the Adaptive Server Anywhere components on your
machine.
The translation driver carries out a mapping between the OEM code page in
use in the "DOS box" and the ANSI code page used in the Windows
operating system. If your database uses the same code page as the OEM code
page, the characters are translated properly. If your database does not use the
same code page as your machine’s OEM code page, you will still have
compatibility problems.
Embedded SQL does not provide any such code page translation mechanism.
P A R T T H R E E
Relational Database Concepts
This part describes key concepts and strategies for effective use of Adaptive
Server Anywhere.
C H A P T E R 1 2
Designing Your Database
About this chapter This chapter introduces the basic concepts of relational database design and
gives you step-by-step suggestions for designing your own databases. It uses
the expedient technique known as conceptual data modeling, which focuses
on entities and the relationships between them.
Contents
Topic Page
Introduction 322
Database design concepts 323
The design process 329
Designing the database table properties 342
Introduction
While designing a database is not a difficult task for small and medium sized
databases, it is an important one. Bad database design can lead to an
inefficient and possibly unreliable database system. Because client
applications are built to work on specific parts of a database, and rely on the
database design, a bad design can be difficult to revise at a later date.
$ This chapter covers database design in an elementary manner. For more
advanced information, you may wish to consult the DataArchitect
documentation. DataArchitect is a component of Sybase PowerDesigner, a
database design tool.
$ You may also wish to consult an introductory book such as A Database
Primer by C. J. Date. If you are interested in pursuing database theory,
C. J. Date’s An Introduction to Database Systems is an excellent textbook on
the subject.
Java classes and database design
The addition of Java classes to the available data types extends the relational
database concepts on which this chapter is based. Database design involving
Java classes is not discussed in this chapter.
$ For information on designing databases that take advantage of Java
class data types, see "Java database design" on page 569.
Chapter 12 Designing Your Database
Entities
An entity is the database equivalent of a noun. Distinguishable objects such
as employees, order items, departments and products are all examples of
entities. In a database, a table represents each entity. The entities that you
build into your database arise from the activities for which you will be using
the database, whether that be tracking sales calls, maintaining employee
information, or some other activity.
Attributes and identifiers
Each entity contains a number of attributes. Attributes are particular
characteristics of the things that you would like to store. For example, in an
employee entity, you might want to store an employee ID number, first and
last names, an address, and other particular information that pertains to a
particular employee. Attributes are also known as properties.
Database design concepts
You depict an entity using a rectangular box. Inside, you list the attributes
associated with that entity.
[Diagram: an Employee entity box listing the attributes Employee Number,
First Name, Last Name, and Address.]
Relationships
A relationship between entities is the database equivalent of a verb. An
employee is a member of a department, or an office is located in a city.
Relationships in a database may appear as foreign key relationships between
tables, or may appear as separate tables themselves. You will see examples
of each in this chapter.
The relationships in the database are an encoding of rules or practices that
govern the data in the entities. If each department has one department head,
you can create a one-to-one relationship between departments and employees
to identify the department head.
Once a relationship is built into the structure of the database, there is no
provision for exceptions. There is nowhere to put a second department head.
Duplicating the department entry would involve duplicating the department
ID, which is the identifier. Duplicate identifiers are not allowed.
Tip
Strict database structure can benefit you, because it can eliminate
inconsistencies, such as a department with two managers. On the other
hand, you as the designer should make your design flexible enough to
allow some expansion for unforeseen uses. Extending a well-designed
database is usually not too difficult, but modifying the existing table
structure can render an entire database and its client applications obsolete.
Cardinality of relationships
There are three kinds of relationships between tables. These correspond to
the cardinality (number) of the entities involved in the relationship.
♦ One-to-one relationships You depict a relationship by drawing a line
between two entities. The line may have other markings on it such as the
two little circles shown. Later sections explain the purpose of these
marks.
[Diagram: a one-to-one Management Relationship between Department and
Employee.]
[Diagram: a one-to-many relationship between Office and Telephones.]
[Diagram: a many-to-many Storage Relationship between Parts and
Warehouses.]
One warehouse can hold many different parts and one type of part can
be stored at many warehouses.
Roles
You can describe each relationship with two roles. Roles are verbs or
phrases that describe the relationship from each point of view. For example,
a relationship between employees and departments might be described by the
following two roles.
1 An employee is a member of a department.
2 A department contains an employee.
[Diagram: the Employee entity (Employee Number, First Name, Last Name,
Address) related to the Department entity (Department ID, Department
Name), labeled with the roles "is a member of" and "contains".]
Roles are very important because they afford you a convenient and effective
means of verifying your work.
Tip
Whether reading from left-to-right or from right-to-left, the following rule
makes it easy to read these diagrams: Read the
1 name of the first entity,
2 role next to the first entity,
3 cardinality from the connection to the second entity, and
4 name of the second entity.
Mandatory elements
The little circles just before the end of the line that denotes the relation serve
an important purpose. A circle means that an element can exist in the one
entity without a corresponding element in the other entity.
If a cross bar appears in place of the circle, that entity must contain at least
one element for each element in the other entity. An example will clarify
these statements.
[Diagram: Publisher (ID Number, Publisher Name) publishes / is published
by Books (ID Number, Title); Books is written by / writes Authors
(ID Number, First Name, Last Name).]
Tip
Think of the little circle as the digit 0 and the cross bar as the number one.
The circle means at least zero. The cross bar means at least one.
Reflexive relationships
Sometimes, a relationship will exist between entries in a single entity. In this
case, the relationship is called reflexive. Both ends of the relationship attach
to a single entity.
[Diagram: a reflexive relationship on the Employee entity (Employee
Number, First Name, Last Name, Address) with the roles "manages" and
"reports to".]
[Diagram: a many-to-many relationship between Parts (Part Number,
Description) and Warehouse (Warehouse ID, Address) with the roles
"stored at" and "contains".]
Suppose, however, that you wish to record the quantity of each part stored at
each location. This attribute can only be associated with the relationship.
Each quantity depends on both the parts and the warehouse involved. To
represent this situation, you can redraw the diagram as follows:
[Diagram: the many-to-many relationship is replaced by an Inventory entity
(Quantity) connecting Parts (Part Number, Description) and Warehouse
(Warehouse ID, Address) with the roles "stored at" and "contains".]
The design process
Identify the entities and relationships
Identify the entities (subjects) and the relationships (roles) that connect
them. Create a diagram based on the description and high-level activities.
Use boxes to show entities and lines to show relationships. Use the two roles
to label each relationship. You should also identify those relationships that
are one-to-many, one-to-one, and many-to-many using the appropriate
annotation.
Below is a rough entity-relationship diagram. It will be refined throughout
the chapter.
[Diagram: a rough entity-relationship diagram connecting Skill, Department,
Employee, and Office, with roles such as "is acquired by" / "is capable of"
(Skill-Employee), "is headed by" / "manages" and "contains" /
"is a member of" (Department-Employee), "contains" / "works out of"
(Office-Employee), and a reflexive "manages" / "reports to" on Employee.]
Break down the high-level activities
The following lower-level activities are based on the high-level activities
listed above:
♦ Add or delete an employee.
♦ Add or delete an office.
♦ List employees for a department.
♦ Add a skill to the skill list.
♦ Identify the skills of an employee.
♦ Identify an employee’s skill level for each skill.
♦ Identify all employees that have the same skill level for a particular skill.
♦ Change an employee’s skill level.
These lower-level activities can be used to identify if any new tables or
relationships are needed.
Identify business rules
Business rules often identify one-to-many, one-to-one, and many-to-many
relationships.
The kind of business rules that may be relevant include the following:
♦ There are now five offices; expansion plans allow for a maximum of ten.
♦ Employees can change department or office.
♦ Each department has one department head.
♦ Each office has a maximum of three telephone numbers.
♦ Each telephone number has one or more extensions.
Identify supporting data
The supporting data you identify will become the names of the attributes of
the entity. For example, the data below might apply to the Employee entity,
the Skill entity, and the Expert In relationship.
If you make a diagram of this data, it will look something like this picture:
[Diagram: Employee (Employee ID, First name, Last name, Home address)
"is capable of" / Skill (Skill ID, Skill name, Skill description)
"is acquired by".]
Observe that not all of the attributes you listed appear in this diagram. The
missing items fall into two categories:
1 Some are contained implicitly in other relationships; for example,
Employee department and Employee office are denoted by the relations
to the Department and Office entities, respectively.
2 Others are not present because they are associated not with either of
these entities, but rather with the relationship between them. The above
diagram is inadequate.
The first category of items will fall naturally into place when you draw the
entire entity-relationship diagram.
You can add the second category by converting this many-to-many
relationship into an entity.
[Diagram: the many-to-many relationship is converted into an Expert In
entity (Skill level, Date acquired) connecting Employee (Employee ID,
First name, Last name, Home address) and Skill (Skill ID, Skill name,
Skill description), with the roles "is capable of" and "is acquired by".]
The new entity depends on both the Employee and the Skill entities. It
borrows its identifiers from these entities because it depends on both of them.
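A sketch of this structure in SQL, via Python's sqlite3 (table and column names are illustrative, not taken from any sample database): the resolved entity's primary key is the pair of identifiers borrowed from the two entities it connects.

```python
import sqlite3

# Sketch: a many-to-many relationship resolved into an entity whose
# identifier is borrowed from both connected entities.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, last_name TEXT);
CREATE TABLE skill (skill_id INTEGER PRIMARY KEY, skill_name TEXT);
CREATE TABLE expert_in (
    emp_id INTEGER REFERENCES employee,
    skill_id INTEGER REFERENCES skill,
    skill_level INTEGER,
    date_acquired TEXT,
    PRIMARY KEY (emp_id, skill_id)  -- identifier borrowed from both
);
""")
conn.execute("INSERT INTO employee VALUES (1, 'Whitney')")
conn.execute("INSERT INTO skill VALUES (10, 'SQL')")
conn.execute("INSERT INTO expert_in VALUES (1, 10, 3, '2001-01-15')")
row = conn.execute("""
    SELECT e.last_name, s.skill_name, x.skill_level
    FROM expert_in x
    JOIN employee e ON e.emp_id = x.emp_id
    JOIN skill s ON s.skill_id = x.skill_id
""").fetchone()
assert row == ("Whitney", "SQL", 3)
```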
Things to remember
♦ When you are identifying the supporting data, be sure to refer to the
activities you identified earlier to see how you will access the data.
For example, you may need to list employees by first name in some
situations and by last name in others. To accommodate this requirement,
create a First Name attribute and a Last Name attribute, rather than a
single attribute that contains both names. With the names separate, you
can later create two indexes, one suited to each task.
♦ Choose consistent names. Consistency makes it easier to maintain your
database and easier to read reports and output windows.
For example, if you choose to use an abbreviated name such as
Emp_status for one attribute, you should not use a full name, such as
Employee_ID, for another attribute. Instead, the names should be
Emp_status and Emp_ID.
♦ At this stage, it is not crucial that the data be associated with the correct
entity. You can use your intuition. In the next section, you’ll apply tests
to check your judgment.
Why normalize?
The goals of normalization are to remove redundancy and to improve
consistency. For example, if you store a customer’s address in multiple
locations, it is difficult to update all copies correctly should he move.
Data and identifiers
Before you begin to normalize (test your design), simply list the data and
identify a unique identifier for each table. The identifier can be made up of
one piece of data (attribute) or several (a compound identifier).
The identifier is the set of attributes that uniquely identifies each row in an
entity. The identifier for the Employee entity is the Employee ID attribute.
The identifier for the Works In relationship consists of the Office Code and
Employee ID attributes. You can make an identifier for each relationship in
your database by taking the identifiers from each of the entities that it
connects. In the following table, the attributes identified with an asterisk are
the identifiers for the entity or relationship.
Putting data in first normal form
♦ To test for first normal form, look for attributes that can have repeating
values.
♦ Remove attributes when multiple values can apply to a single item.
Move these repeating attributes to a new entity.
In the entity below, Phone number can repeat—an office can have more than
one telephone number.
Remove the repeating attribute and make a new entity called Telephone. Set
up a relationship between Office and Telephone.
(Diagram: the Office entity (Office code, Office address) connected to the
new Telephone entity (Phone number) by the relationships "has" and
"is located at".)
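In SQL, the before and after states of this step might look as follows. This is a sketch only: the table and column names are illustrative and are not part of the sample database.

```sql
-- Unnormalized: repeating phone columns cap each office at two numbers.
-- CREATE TABLE office (
--     office_code    CHAR(4) PRIMARY KEY,
--     office_address CHAR(35),
--     phone_1        CHAR(12),
--     phone_2        CHAR(12)
-- );

-- First normal form: the repeating attribute moves to its own table,
-- so an office can have any number of telephone numbers.
CREATE TABLE office (
    office_code    CHAR(4) PRIMARY KEY,
    office_address CHAR(35)
);
CREATE TABLE telephone (
    phone_number CHAR(12) PRIMARY KEY,
    office_code  CHAR(4) REFERENCES office
);
```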
Putting data in second normal form
♦ Remove data that does not depend on the whole key.
♦ Look only at entities and relationships whose identifier is composed of
more than one attribute. To test for second normal form, remove any
data that does not depend on the whole identifier. Each attribute should
depend on all of the attributes that comprise the identifier.
In this example, the identifier of the Employee and Department entity is
composed of two attributes. Some of the data does not depend on both
identifier attributes; for example, the department name depends on only one
of those attributes, Department ID, and Employee first name depends only on
Employee ID.
Move the identifier Department ID, which the other employee data does not
depend on, to an entity of its own called Department. Also move any attributes
that depend on it. Create a relationship between Employee and Department.
(Diagram: the Employee entity (Employee ID, Employee first name,
Employee last name) connected to the new Department entity
(Department ID, Department name) by the relationships "works in" and
"contains".)
Putting data in third normal form
♦ Remove data that doesn't depend directly on the key.
♦ To test for third normal form, remove any attributes that depend on other
attributes, rather than directly on the identifier.
In this example, the Employee and Office entity contains some attributes that
depend on its identifier, Employee ID. However, attributes such as Office
location and Office phone depend on another attribute, Office code. They do
not depend directly on the identifier, Employee ID.
Remove Office code and those attributes that depend on it. Make another
entity called Office. Then, create a relationship that connects Employee with
Office.
(Diagram: the Employee entity (Employee ID, Employee first name,
Employee last name) connected to the new Office entity (Office code,
Office location, Office phone) by the relationships "works out of" and
"houses".)
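Sketched in SQL with illustrative names (not the sample database's actual definitions), the office attributes end up in a table of their own, and Employee keeps only the Office code:

```sql
-- Office location and phone depend on office_code, not on emp_id,
-- so they move to an Office table of their own.
CREATE TABLE office (
    office_code     CHAR(4) PRIMARY KEY,
    office_location CHAR(35),
    office_phone    CHAR(12)
);
CREATE TABLE employee (
    emp_id      INTEGER PRIMARY KEY,
    emp_fname   CHAR(20),
    emp_lname   CHAR(20),
    office_code CHAR(4) REFERENCES office
);
```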
Resolving relationships that do not carry data
In order to implement relationships that do not carry data, you define foreign
keys. A foreign key is a column or set of columns that contains primary key
values from another table. The foreign key allows you to access data from
more than one table at one time.
A database design tool such as the DataArchitect component of Sybase
PowerDesigner can generate the physical data model for you. However, if
you’re doing it yourself there are some basic rules that help you decide
where to put the keys.
♦ One to many A one-to-many relationship always becomes an entity
and a foreign key relationship.
(Diagram: the Employee entity (Employee Number, First Name, Last Name,
Address) connected to the Department entity (Department ID,
Department Name) by the relationships "is a member of" and "contains".)
Notice that entities become tables. Identifiers in entities become (at least
part of) the primary key in a table. Attributes become columns. In a one-
to-many relationship, the identifier in the one entity will appear as a new
foreign key column in the many table.
(Diagram: the resulting tables Employee (Employee Number <pk>,
Department ID <fk>, First Name, Last Name, Address) and Department
(Department ID <pk>, Department Name), joined on Department ID.)
(Diagram: the Vehicle entity (Vehicle ID, Model, Price) connected to the
Truck entity (Weight rating) by the relationships "may be" and
"is a type of". The resulting tables are Vehicle (Vehicle ID <pk>, Model,
Price) and Truck (Vehicle ID <fk>, Weight rating), joined on Vehicle ID.)
(Diagram: the Parts entity (Part Number, Description) connected to the
Warehouse entity (Warehouse ID, Address) by the relationships "stored at"
and "contains".)
The new Storage Location table relates the Parts and Warehouse tables.
(Diagram: the resulting tables Parts (Part Number <pk>, Description),
Storage Location (Part Number <pk,fk>, Warehouse ID <pk,fk>), and
Warehouse (Warehouse ID <pk>, Address), joined on Part Number and
Warehouse ID.)
Resolving relationships that carry data
Some of your relationships may carry data. This situation often occurs in
many-to-many relationships.
(Diagram: the Parts entity (Part Number, Description) connected to the
Warehouse entity (Warehouse ID, Address) through the Inventory
relationship, which carries the Quantity attribute.)
If this is the case, each entity resolves to a table. Each role becomes a foreign
key that points to another table.
(Diagram: the resulting tables Parts (Part Number <pk>, Description),
Inventory (Part Number <pk,fk>, Warehouse ID <pk,fk>, Quantity), and
Warehouse (Warehouse ID <pk>, Address), joined on Part Number and
Warehouse ID.)
The Inventory entity borrows its identifiers from the Parts and Warehouse
tables, because it depends on both of them. Once resolved, these borrowed
identifiers form the primary key of the Inventory table.
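A sketch of the Inventory table in SQL (illustrative names, not the sample database's actual definitions):

```sql
-- Inventory borrows its identifiers from Parts and Warehouse; together
-- the two foreign keys form its primary key, and the relationship's
-- own data (quantity) rides along as an ordinary column.
CREATE TABLE inventory (
    part_number  INTEGER REFERENCES parts,
    warehouse_id INTEGER REFERENCES warehouse,
    quantity     INTEGER,
    PRIMARY KEY ( part_number, warehouse_id )
);
```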
Tip
A conceptual data model simplifies the design process because it hides a
lot of details. For example, a many-to-many relationship always generates
an extra table and two foreign key references. In a conceptual data model,
you can usually denote all of this structure with a single connection.
(Diagram: the complete conceptual model. The Skill entity (ID Number,
Skill name, Skill description) is connected to Employee through the
"is acquired by"/"is capable of" relationship Expert In, which carries
Skill Level and Date Acquired. The Department entity (Department ID,
Department name) is connected to Employee (Employee ID, First name,
Last name, Home address) by "is headed by"/"manages" and
"is a member of"/"contains", the Office entity (ID Number, Office name,
Address) is connected to Employee by "works out of"/"houses", and
Employee is connected to itself by "manages"/"reports to".)
(Diagram: the corresponding physical model. The tables are Skill
(ID Number <pk>, Skill name, Skill description), Expert In
(ID Number <pk,fk>, Employee ID <pk,fk>, Skill Level, Date Acquired),
Department (Department ID <pk>, Employee ID <fk>, Department name),
Department/Employee (Department ID <pk,fk>, Employee ID <pk,fk>),
Employee (Employee ID <pk>, ID Number <fk>, Emp_Employee ID <fk>,
First name, Last name, Home address), and Office (ID Number <pk>,
Office name, Address).)
Designing the database table properties
NULL and NOT NULL
If the column value is mandatory for a record, you define the column as
being NOT NULL. Otherwise, the column is allowed to contain the NULL
value, which represents no value. The default in SQL is to allow NULL
values, but you should explicitly declare columns NOT NULL unless there is
a good reason to allow NULL values.
$ For a complete description of the NULL value, see "NULL value" on
page 247 of the book ASA Reference. For information on its use in
comparisons, see "Search conditions" on page 226 of the book ASA
Reference.
Choosing constraints
Although the data type of a column restricts the values that are allowed in
that column (for example, only numbers or only dates), you may want to
further restrict the allowed values.
You can restrict the values of any column by specifying a CHECK
constraint. You can use any valid condition that could appear in a WHERE
clause to restrict the allowed values. Most CHECK constraints use either the
BETWEEN or IN condition.
$ For more information about valid conditions, see "Search conditions"
on page 226 of the book ASA Reference. For more information about
assigning constraints to tables and columns, see "Ensuring Data Integrity"
on page 345.
Example
The sample database has a table called Department, which has columns
named dept_id, dept_name, and dept_head_id. Its definition is as follows:
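A definition along the following lines fits that description. This is a sketch; the exact definition in the sample database may differ.

```sql
-- dept_id and dept_name are mandatory; dept_head_id may be NULL.
CREATE TABLE department (
    dept_id      INTEGER NOT NULL,
    dept_name    CHAR(40) NOT NULL,
    dept_head_id INTEGER NULL,
    PRIMARY KEY ( dept_id )
)
```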
If you specify NOT NULL, a column value must be supplied for every row
in the table.
CHAPTER 13  Ensuring Data Integrity
About this chapter
Building integrity constraints right into the database is the surest way to
make sure your data stays in good shape. This chapter describes the facilities
in Adaptive Server Anywhere for ensuring that the data in your database is
valid and reliable.
You can enforce several types of integrity constraints. For example, you can
ensure individual entries are correct by imposing constraints and CHECK
conditions on tables and columns. Setting column properties by choosing an
appropriate data type or setting special default values assists this task.
The SQL statements in this chapter use the CREATE TABLE and ALTER
TABLE statements, basic forms of which were introduced in "Working with
Database Objects" on page 107.
Contents
Topic Page
Data integrity overview 346
Using column defaults 350
Using table and column constraints 355
Using domains 359
Enforcing entity and referential integrity 362
Integrity rules in the system tables 367
Data integrity overview
Duplicated data
♦ Two different people add the same new department (with dept_id 200) to
the department table of the organization's database.
Foreign key relations invalidated
♦ The department identified by dept_id 300 closes down, and one
employee record inadvertently remains unassigned to a new department.
Chapter 13 Ensuring Data Integrity
In contrast, constraints built into client applications are vulnerable every time
the software changes, and may need to be imposed in several applications, or
in several places in a single client application.
Using column defaults
Current timestamp
The current timestamp is similar to the current date default, but offers greater
accuracy. For example, a user of a contact management application may have
several contacts with a single customer in one day: the current timestamp
default would be useful to distinguish these contacts.
Since it records a date and the time down to a precision of millionths of a
second, you may also find the current timestamp useful when the sequence of
events is important in a database.
$ For more information about timestamps, times, and dates, see "SQL
Data Types" on page 251 of the book ASA Reference.
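For example, a contact table could record entry times automatically. This is a sketch; the table and column names are illustrative, not part of the sample database.

```sql
-- entered_at is filled in automatically on each INSERT.
CREATE TABLE contact (
    contact_id  INTEGER PRIMARY KEY,
    customer_id INTEGER,
    notes       CHAR(254),
    entered_at  TIMESTAMP DEFAULT CURRENT TIMESTAMP
)
```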
You can retrieve the most recent value inserted into an autoincrement
column using the @@identity global variable. For more information, see
"@@identity global variable" on page 244 of the book ASA Reference.
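For example, assuming a table whose primary key has the AUTOINCREMENT default (such as the customer table defined later in this chapter), you could retrieve the key of a newly inserted row as follows:

```sql
-- Let AUTOINCREMENT supply the primary key, then read it back.
INSERT INTO customer ( name ) VALUES ( 'New customer' );
SELECT @@identity;
```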
Autoincrement and negative numbers
Autoincrement is intended to work with positive integers.
The initial autoincrement value is set to 0 when the table is created. If the
inserts that are done explicitly insert only negative values into the column,
this value remains the highest value assigned. An insert where no value is
supplied then causes AUTOINCREMENT to generate a value of 1, forcing
any other generated values to be positive. However, if only negative values
were inserted and the database is stopped and restarted, the server
recalculates the highest value for the column and then starts generating
negative values.
In UltraLite applications, the autoincrement value is not set to 0 when the
table is created, and AUTOINCREMENT generates negative numbers when
a signed data type is used for the column.
You should define AUTOINCREMENT columns as unsigned to prevent
negative values from being used.
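For example (an illustrative sketch, not a sample-database table):

```sql
-- UNSIGNED rules out negative values entirely, so AUTOINCREMENT
-- always generates positive keys, even after a restart.
CREATE TABLE sales_order (
    order_id   UNSIGNED INTEGER DEFAULT AUTOINCREMENT PRIMARY KEY,
    order_date DATE
)
```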
Autoincrement and the IDENTITY column
$ A column with the AUTOINCREMENT default is referred to in
Transact-SQL applications as an IDENTITY column. For information on
IDENTITY columns, see "The special IDENTITY column" on page 947.
Default strings and numbers are useful when there is a typical entry for a
given column. For example, if an organization has two offices, the
headquarters in city_1 and a small office in city_2, you may want to set a
default entry for a location column to city_1, to make data entry easier.
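A sketch of such a default (the table and column names are illustrative):

```sql
-- Most employees work at headquarters, so default the location.
CREATE TABLE employee_location (
    emp_id   INTEGER PRIMARY KEY,
    location CHAR(20) DEFAULT 'city_1'
)
```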
Caution
Altering tables can interfere with other users of the database. Although
you can execute the ALTER TABLE statement while other connections are
active, you cannot execute the ALTER TABLE statement if any other
connection is using the table you want to alter. For large tables, ALTER
TABLE is a time-consuming operation, and all other requests referencing
the table being altered are prohibited while the statement is processing.
This section describes how to use constraints to help ensure the accuracy of
data entered in the table.
Example 2
♦ You can ensure that the entry matches one of a limited number of
values. For example, to ensure that a city column only contains one of a
certain number of allowed cities (say, those cities where the organization
has offices), you could use a constraint such as:
ALTER TABLE office
MODIFY city
CHECK ( city IN ( 'city_1', 'city_2', 'city_3' ) )
Using table and column constraints
Example 3
♦ You can ensure that a date or number falls in a particular range. For
example, you may require that the start_date column of an employee
table must be between the date the organization was formed and the
current date, using the following constraint:
ALTER TABLE employee
MODIFY start_date
CHECK ( start_date BETWEEN '1983/06/27'
   AND CURRENT DATE )
♦ You can use several date formats. The YYYY/MM/DD format in this
example has the virtue of always being recognized regardless of the
current option settings.
Column CHECK tests only fail if the condition returns a value of FALSE. If
the condition returns a value of UNKNOWN, the change is allowed.
An ALTER TABLE statement with the DELETE CHECK clause deletes all
CHECK conditions from the table definition, including those inherited from
domains.
$ For information on domains, see "Domains" on page 274 of the book
ASA Reference.
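For example, the following statement removes every CHECK condition from the office table used in the earlier examples:

```sql
ALTER TABLE office
DELETE CHECK
```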
Using domains
A domain is a user-defined data type that, together with other attributes, can
restrict the range of acceptable values or provide defaults. A domain extends
one of the built-in data types. The range of permissible values is usually
restricted by a check constraint. In addition, a domain can specify a default
value and may or may not allow nulls.
You can define your own domains for a number of reasons.
♦ A number of common errors can be prevented if inappropriate values
cannot be entered. A constraint placed on a domain ensures that all
columns and variables intended to hold values in a desired range or
format can hold only the intended values. For example, a data type can
ensure that all credit card numbers entered into the database contain the
correct number of digits.
♦ Domains can make it much easier to understand applications and the
structure of a database.
♦ Domains can prove convenient. For example, you may intend that all
table identifiers are positive integers that, by default, auto-increment.
You could enforce this restriction by entering the appropriate constraints
and defaults each time you define a new table, but it is less work to
define a new domain, then simply state that the identifier can take only
values from the specified domain.
$ For more information about domains, see "SQL Data Types" on
page 251 of the book ASA Reference.
2 Right-click the desired column and choose Properties from the popup
menu.
3 On the Data Type tab of the column’s property sheet, assign a domain.
$ For more information, see "Property Sheet Descriptions" on page 1035.
Example 1: Simple domains
Some columns in the database are to be used for people's names and others
are to store addresses. You might then define the following domains.
CREATE DOMAIN persons_name CHAR(30)
CREATE DOMAIN street_address CHAR(35)
Having defined these domains, you can use them much as you would the
built-in data types. For example, you can use these definitions to define a
table as follows.
CREATE TABLE customer (
   id      INT DEFAULT AUTOINCREMENT PRIMARY KEY,
   name    persons_name,
   address street_address
)
Example 2: Default values, check constraints, and identifiers
In the above example, the table's primary key is specified to be of type
integer. Indeed, many of your tables may require similar identifiers. Instead
of specifying that these are integers, it is much more convenient to create an
identifier domain for use in these applications.
When you create a domain, you can specify a default value and provide a
check constraint to ensure that no inappropriate values are entered into any
column of this type.
Integer values are commonly used as table identifiers. A good choice for
unique identifiers is to use positive integers. Since such identifiers are likely
to be used in many tables, you could define the following domain.
CREATE DOMAIN identifier INT
DEFAULT AUTOINCREMENT
CHECK ( @col > 0 )
This check constraint uses the variable @col. Using this definition, you can
rewrite the definition of the customer table, shown above.
CREATE TABLE customer (
   id      identifier PRIMARY KEY,
   name    persons_name,
   address street_address
)
Example 3: Built-in domains
Adaptive Server Anywhere comes with some domains pre-defined. You can
use these pre-defined domains as you would a domain that you created
yourself. For example, the following monetary domain has already been
created for you.
CREATE DOMAIN MONEY NUMERIC(19,4)
NULL
$ For more information, see "CREATE DOMAIN statement" on
page 421 of the book ASA Reference.
Deleting domains
You can use either Sybase Central or a DROP DOMAIN statement in
Interactive SQL to delete a domain.
Only the user DBA or the user who created a domain can drop it. In addition,
since a domain cannot be dropped if any variable or column in the database
is an instance of the domain, you need to first drop any columns or variables
of that type before you can drop the domain.
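For example, to drop the street_address domain defined earlier, you would first remove (or alter) any columns of that type:

```sql
-- The address column uses the street_address domain, so drop it first.
ALTER TABLE customer
DELETE address;
DROP DOMAIN street_address;
```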
Enforcing entity and referential integrity
Example 2
Suppose the database also contained an office table listing office locations.
The employee table might have a foreign key for the office table that
indicates which city the employee's office is in. The database designer can
choose to leave an office location unassigned at the time the employee is
hired, for example, either because they haven't been assigned to an office yet,
or because they don't work out of an office. In this case, the foreign key can
allow NULL values, and is optional.
Integrity rules in the system tables
CHAPTER 14  Using Transactions and Isolation Levels
About this chapter
You can group SQL statements into transactions, which have the property
that either all statements are executed or none is executed. You should design
each transaction to perform a task that changes your database from one
consistent state to another.
This chapter describes transactions and how to use them in applications. It
also describes how you can set isolation levels in Adaptive Server Anywhere
to limit the interference among concurrent transactions.
Contents
Topic Page
Introduction to transactions 370
Isolation levels and consistency 374
Transaction blocking and deadlock 380
Choosing isolation levels 382
Isolation level tutorials 386
How locking works 401
Particular concurrency issues 414
Replication and concurrency 416
Summary 418
Introduction to transactions
To ensure data integrity it is essential that you can identify states in which
the information in your database is consistent. The concept of consistency is
best illustrated through an example:
Consistency: an example
Suppose you use your database to handle financial accounts, and you wish to
transfer money from one client's account to another. The database is in a
consistent state both before and after the money is transferred; but it is not in
a consistent state after you have debited money from one account and before
you have credited it to the second. During a transferal of money, the database
is in a consistent state when the total amount of money in the clients’
accounts is as it was before any money was transferred. When the money has
been half transferred, the database is in an inconsistent state. Either both or
neither of the debit and the credit must be processed.
Transactions are logical units of work
A transaction is a logical unit of work. Each transaction is a sequence of
logically related commands that accomplish one task and transform the
database from one consistent state into another. The nature of a consistent
state depends on your database.
The statements within a transaction are treated as an indivisible unit: either
all are executed or none is executed. At the end of each transaction, you
commit your changes to make them permanent. If for any reason some of the
commands in the transaction do not process properly, then any intermediate
changes are undone, or rolled back. Another way of saying this is that
transactions are atomic.
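The transfer described earlier can be sketched as a single transaction. The account table and its columns are illustrative, not part of the sample database.

```sql
-- Both changes succeed together or not at all.
UPDATE account SET balance = balance - 100 WHERE account_id = 1;
UPDATE account SET balance = balance + 100 WHERE account_id = 2;
COMMIT;
-- Had anything gone wrong partway through, a ROLLBACK would have
-- undone the partial transfer instead.
```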
Grouping statements into transactions is key both to protecting the
consistency of your data even in the event of media or system failure, and to
managing concurrent database operations. Transactions may be safely
interleaved and the completion of each transaction marks a point at which the
information in the database is consistent.
In the event of a system failure or database crash during normal operation,
Adaptive Server Anywhere performs automatic recovery of your data when
the database is next started. The automatic recovery process recovers all
completed transactions, and rolls back any transactions that were
uncommitted when the failure occurred. The atomic character of transactions
ensures that databases are recovered to a consistent state.
$ For more information about database backups and data recovery, see
"Backup and Data Recovery" on page 627.
$ For further information about concurrent database usage, see
"Introduction to concurrency" on page 372.
Chapter 14 Using Transactions and Isolation Levels
Using transactions
Adaptive Server Anywhere expects you to group your commands into
transactions. Knowing which commands or actions signify the start or end of
a transaction lets you take full advantage of this feature.
Starting transactions
Transactions start with one of the following events:
♦ The first statement following a connection to a database
♦ The first statement following the end of a transaction
Options in Interactive SQL
Interactive SQL lets you control when and how transactions from your
application terminate:
♦ If you set the option AUTO_COMMIT to ON, Interactive SQL
automatically commits your results following every successful statement
and automatically performs a ROLLBACK after each failed statement.
Introduction to concurrency
Concurrency is the ability of the database server to process multiple
transactions at the same time. Were it not for special mechanisms within the
database server, concurrent transactions could interfere with each other to
produce inconsistent and incorrect information.
Example
A database system in a department store must allow many clerks to update
customer accounts concurrently. Each clerk must be able to update the status
of the accounts as they assist each customer: they cannot afford to wait until
no one else is using the database.
Who needs to know about concurrency
Concurrency is a concern to all database administrators and developers. Even
if you are working with a single-user database, you must be concerned with
concurrency if you wish to process instructions from multiple applications or
even from multiple connections from a single application. These applications
and connections can interfere with each other in exactly the same way as
multiple users in a network setting.
Transaction size affects concurrency
The way you group SQL statements into transactions can have significant
effects on data integrity and on system performance. If you make a
transaction too short and it does not contain an entire logical unit of work,
then inconsistencies can be introduced into the database. If you write a
transaction that is too long and contains several unrelated actions, then there
is a greater chance that a ROLLBACK will unnecessarily undo work that
could have been committed quite safely into the database.
If your transactions are long, they can lower concurrency by preventing other
transactions from being processed concurrently.
There are many factors that determine the appropriate length of a transaction,
depending on the type of application and the environment.
Isolation levels and consistency
Cursor instability
Another significant inconsistency is cursor instability. When this
inconsistency is present, a transaction can modify a row that is being
referenced by another transaction's cursor.
Example
Transaction A reads a row using a cursor. Transaction B modifies that row.
Not realizing that the row has been modified, Transaction A modifies it,
rendering the affected row's data incorrect.
Eliminating cursor instability
Adaptive Server Anywhere achieves cursor stability at isolation levels 1, 2,
and 3. Cursor stability ensures that no other transactions can modify
information that is contained in the present row of your cursor. The
information in a row of a cursor may be the copy of information contained in
a particular table or may be a combination of data from different rows of
multiple tables. More than one table will likely be involved whenever you
use a join or sub-selection within a SELECT statement.
$ For information on programming SQL procedures and cursors, see
"Using Procedures, Triggers, and Batches" on page 423.
$ Cursors are used only when you are using Adaptive Server Anywhere
through another application. For more information, see "Using SQL in
Applications" on page 257.
Changing an isolation level via ODBC
You can change the isolation level of your connection via ODBC by using
the function SQLSetConnectOption in the library ODBC32.dll.
String                        Value
SQL_TXN_ISOLATION             108
SQL_TXN_READ_UNCOMMITTED      1
SQL_TXN_READ_COMMITTED        2
SQL_TXN_REPEATABLE_READ       4
SQL_TXN_SERIALIZABLE          8
Example
The following function call sets the isolation level of the connection
MyConnection to isolation level 2:
SQLSetConnectOption( MyConnection.hDbc, SQL_TXN_ISOLATION,
   SQL_TXN_REPEATABLE_READ )
ODBC uses the isolation feature to support assorted database lock options.
For example, in PowerBuilder you can use the Lock attribute of the
transaction object to set the isolation level when you connect to the database.
The Lock attribute is a string, and is set as follows:
SQLCA.lock = "RU"
The Lock option is honored only at the moment the CONNECT occurs.
Changes to the Lock attribute after the CONNECT have no effect on the
connection.
You may also wish to change the isolation level in mid transaction if, for
example, just one table or group of tables requires serialized access.
$ For an example in which the isolation level is changed in the middle of
a transaction, see "Tutorial 3 – A phantom row" on page 393.
Transaction blocking and deadlock
Transaction blocking
When a transaction attempts to carry out an operation, but is forbidden by a
lock held by another transaction, a conflict arises and the progress of the
transaction attempting to carry out the operation is impeded or blocked.
$ "Two-phase locking" on page 410 describes deadlock, which occurs
when two or more transactions are blocked by each other in such a way that
none can proceed.
$ Sometimes a set of transactions arrive at a state where none of them can
proceed. For more information see "Deadlock" on page 381.
Deadlock
Transaction blocking can lead to deadlock, a situation in which a set of
transactions arrive at a state where none of them can proceed.
Reasons for deadlocks
A deadlock can arise for two reasons:
♦ A cyclical blocking conflict Transaction A is blocked on transaction
B, and transaction B is blocked on transaction A. Clearly, more time will
not solve the problem, and one of the transactions must be canceled,
allowing the other to proceed. The same situation can arise with more
than two transactions blocked in a cycle.
♦ All active database threads are blocked When a transaction becomes
blocked, its database thread is not relinquished. If the database is
configured with three threads and transactions A, B, and C are blocked
on transaction D which is not currently executing a request, then a
deadlock situation has arisen since there are no available threads.
Adaptive Server Anywhere automatically cancels the last transaction that
became blocked (eliminating the deadlock situation), and returns an error to
that transaction indicating which form of deadlock occurred.
$ The number of database threads that the server uses depends on the
individual database’s setting. For information on setting the number of
database threads, see "THREAD_COUNT option" on page 200 of the book
ASA Reference and "–ge command-line option" on page 26 of the book ASA
Reference.
Determining who is blocked
You can use the sa_conn_info system procedure to determine which
connections are blocked on which other connections. This procedure returns
a result set consisting of a row for each connection. One column of the result
set lists whether the connection is blocked, and if so which other connection
it is blocked on.
$ For more information, see "sa_conn_info system procedure" on
page 936 of the book ASA Reference.
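For example, from Interactive SQL:

```sql
-- One row per connection; the blocked-on column identifies the
-- connection, if any, that this connection is waiting for.
CALL sa_conn_info()
```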
Choosing isolation levels
Serializable schedules
To process transactions concurrently, the database server must execute some
component statements of one transaction, then some from other transactions,
before continuing to process further operations from the first. The order in
which the component operations of the various transactions are interleaved is
called the schedule.
Applying transactions concurrently in this manner can result in many
possible outcomes, including the three particular inconsistencies described in
the previous section. Sometimes, the final state of the database also could
have been achieved had the transactions been executed sequentially, meaning
that one transaction was always completed in its entirety before the next was
started. A schedule is called serializable whenever executing the
transactions sequentially, in some order, could have left the database in the
same state as the actual schedule.
Typical level 1 transactions
Isolation level 1 is particularly useful in conjunction with cursors, because
this combination ensures cursor stability without greatly increasing locking
requirements. Adaptive Server Anywhere achieves this benefit through the
early release of read locks acquired for the present row of a cursor. These
locks must persist until the end of the transaction at either level 2 or level 3
in order to guarantee repeatable reads.
For example, a transaction that updates inventory levels through a cursor is
particularly suited to this level, because each of the adjustments to inventory
levels as items are received and sold would not be lost, yet these frequent
adjustments would have minimal impact on other transactions.
Typical level 2 transactions
At isolation level 2, rows that match your criterion cannot be changed by
other transactions. You can thus employ this level when you must read rows
more than once and rely on the rows contained in your first result set not
changing.
Because of the relatively large number of read locks required, you should use
this isolation level with care. As with level 3 transactions, careful design of
your database and indexes reduce the number of locks acquired and hence
can improve the performance of your database significantly.
Typical level 3 transactions
Isolation level 3 is appropriate for transactions that demand the most in
security. The elimination of phantom rows lets you perform multi-step
operations on a set of rows without fear that new rows will appear partway
through your operations and corrupt the result.
For all the integrity it provides, isolation level 3 should be used
sparingly on large systems that are required to support a large number of
concurrent transactions. Adaptive Server Anywhere places more locks at this
level than at any other, raising the likelihood that one transaction will impede
the progress of many others.
Isolation level tutorials
id name unit_price
300 Tee Shirt 104.00
301 Tee Shirt 109.00
302 Tee Shirt 109.00
400 Baseball Cap 9.00
401 Baseball Cap 10.00
500 Visor 7.00
501 Visor 7.00
600 Sweatshirt 24.00
601 Sweatshirt 24.00
700 Shorts 15.00
You observe immediately that you should have entered 0.95 instead
of 95, but before you can fix your error, the Accountant accesses the
database from another office.
6 The company’s Accountant is worried that too much money is tied up in
inventory. As the Accountant, execute the following commands to
calculate the total retail value of all the merchandise in stock:
SELECT SUM( quantity * unit_price )
AS inventory
FROM product
The result is:
inventory
21453.00
id name unit_price
300 Tee Shirt 9.95
301 Tee Shirt 14.95
302 Tee Shirt 14.95
400 Baseball Cap 9.00
401 Baseball Cap 10.00
500 Visor 7.00
501 Visor 7.00
600 Sweatshirt 24.00
601 Sweatshirt 24.00
700 Shorts 15.00
8 The Accountant does not know that the amount he calculated was in
error. You can see the correct value by executing his SELECT statement
again in his window.
SELECT SUM( quantity * unit_price )
AS inventory
FROM product;
inventory
6687.15
9 Finish the transaction in the Sales Manager’s window. She would enter a
COMMIT statement to make her changes permanent, but you may wish
to enter a ROLLBACK instead, to avoid changing the copy of the
demonstration database on your machine.
ROLLBACK;
On the Advanced tab, enter the following string to make the window
easier to identify:
ConnectionName=Accountant
Click OK to connect.
5 Set the isolation level to 1 for the Accountant’s connection by executing
the following command.
SET TEMPORARY OPTION ISOLATION_LEVEL = 1;
6 Set the isolation level to 1 in the Sales Manager’s window by executing
the following command:
SET TEMPORARY OPTION ISOLATION_LEVEL = 1;
7 The Accountant decides to list the product prices. As the Accountant,
execute the following command:
SELECT id, name, unit_price FROM product
id name unit_price
300 Tee Shirt 9.00
301 Tee Shirt 14.00
302 Tee Shirt 14.00
400 Baseball Cap 9.00
401 Baseball Cap 10.00
500 Visor 7.00
501 Visor 7.00
8 The Sales Manager decides to introduce a new sale price for the plastic
visor. As the Sales Manager, execute the following commands:
SELECT id, name, unit_price FROM product
WHERE name = ’Visor’;
UPDATE product
SET unit_price = 5.95 WHERE id = 501;
COMMIT;
id name unit_price
500 Visor 7.00
501 Visor 5.95
9 Compare the price of the visor in the Sales Manager window with the
price for the same visor in the Accountant window. The Accountant
window still shows the old price, even though the Sales Manager has
entered the new price and committed the change.
This inconsistency is called a non-repeatable read, because if the
Accountant did the same select a second time in the same transaction,
he wouldn’t get the same results. Try it for yourself. As the Accountant,
execute the select command again. Observe that the Sales Manager’s
sale price now displays.
SELECT id, name, unit_price
FROM product;
id name unit_price
300 Tee Shirt 9.00
301 Tee Shirt 14.00
302 Tee Shirt 14.00
400 Baseball Cap 9.00
401 Baseball Cap 10.00
500 Visor 7.00
501 Visor 5.95
UPDATE product
SET unit_price = 7.00
WHERE id = 501
The database server must guarantee repeatable reads at isolation level 2.
To do so, it places a read lock on each row of the product table that the
Accountant reads. When the Sales Manager tries to change the price
back, her transaction must acquire a write lock on the plastic visor row
of the product table. Since write locks are exclusive, her transaction
must wait until the Accountant’s transaction releases its read lock.
12 The Accountant is finished looking at the prices. He doesn’t want to risk
accidentally changing the database, so he completes his transaction with
a ROLLBACK statement.
ROLLBACK
Observe that as soon as the database server executes this statement, the
Sales Manager’s transaction completes.
id name unit_price
500 Visor 7.00
501 Visor 7.00
13 The Sales Manager can finish now. She wishes to commit her change to
restore the original price.
COMMIT
Types of locks and different isolation levels
When you upgraded the Accountant’s isolation from level 1 to level 2, the
database server used read locks where none had previously been acquired. In
general, each isolation level is characterized by the types of locks needed and
by how locks held by other transactions are treated.
At isolation level 0, the database server needs only write locks. It makes use
of these locks to ensure that no two transactions make modifications that
conflict. For example, a level 0 transaction acquires a write lock on a row
before it updates or deletes it, and inserts any new rows with a write lock
already in place.
Level 0 transactions perform no checks on the rows they are reading. For
example, when a level 0 transaction reads a row, it doesn’t bother to check
what locks may or may not have been acquired on that row by other
transactions. Since no checks are needed, level 0 transactions are particularly
fast. This speed comes at the expense of consistency. Whenever they read a
row which is write locked by another transaction, they risk returning dirty
data.
At level 1, transactions check for write locks before they read a row.
Although one more operation is required, these transactions are assured that
all the data they read is committed. Try repeating the first tutorial with the
isolation level set to 1 instead of 0. You will find that the Accountant’s
computation cannot proceed while the Sales Manager’s transaction, which
updates the price of the tee shirts, remains incomplete.
When the Accountant raised his isolation to level 2, the database server
began using read locks. From then on, it acquired a read lock for his
transaction on each row that matched his selection.
Transaction blocking
In step 10 of the above tutorial, the Sales Manager window froze during the
execution of her UPDATE command. The database server began to execute
her command, then found that the Accountant’s transaction had acquired a
read lock on the row that the Sales Manager needed to change. At this point,
the database server simply paused the execution of the UPDATE. Once the
Accountant finished his transaction with the ROLLBACK, the database
server automatically released his locks. Finding no further obstructions, it
then proceeded to complete execution of the Sales Manager’s UPDATE.
In general, a locking conflict occurs when one transaction attempts to acquire
an exclusive lock on a row on which another transaction holds a lock, or
attempts to acquire a shared lock on a row on which another transaction
holds an exclusive lock. One transaction must wait for another transaction to
complete. The transaction that must wait is said to be blocked by another
transaction.
When the database server identifies a locking conflict which prohibits a
transaction from proceeding immediately, it can either pause execution of the
transaction, or it can terminate the transaction, roll back any changes, and
return an error. You control which behavior occurs by setting the
BLOCKING option. When BLOCKING is set to ON, the second transaction
waits, as in the above tutorial.
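For example, a connection that prefers an immediate error over waiting could turn blocking off. This is only a sketch; the option name is taken from the text above.

```sql
-- Report locking conflicts as errors instead of waiting.
-- (Use SET OPTION instead to change the setting permanently.)
SET TEMPORARY OPTION BLOCKING = OFF;

UPDATE product
SET unit_price = 6.95
WHERE id = 501;
-- If another transaction holds a conflicting lock, this UPDATE now
-- fails immediately with an error, and the application can choose to
-- retry or roll back rather than wait.
```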
$ For further information regarding the blocking option, see "The
BLOCKING option" on page 380.
The new row that appears is called a phantom row because, from the
Accountant’s point of view, it appears like an apparition, seemingly from
nowhere. The Accountant is connected at isolation level 2. At that level,
the database server acquires locks only on the rows that he is using.
Other rows are left untouched and hence there is nothing to prevent the
Sales Manager from inserting a new row.
6 The Accountant would prefer to avoid such surprises in future, so he
raises the isolation level of his current transaction to level 3. Enter the
following commands for the Accountant.
SET TEMPORARY OPTION ISOLATION_LEVEL = 3
SELECT *
FROM department
ORDER BY dept_id
7 The Sales Manager would like to add a second department to handle a
sales initiative aimed at large corporate partners. Execute the following
command in the Sales Manager’s window.
INSERT INTO department
(dept_id, dept_name, dept_head_id)
VALUES(700, ’Major Account Sales’, 902)
The Sales Manager’s window will pause during execution because the
Accountant’s locks block the command. Click the Interrupt the SQL
Statement button on the toolbar (or click Stop in the SQL menu) to
interrupt this entry.
8 To avoid changing the demonstration database that comes with Adaptive
Server Anywhere, you should roll back the insertion of the new
departments. Execute the following command in the Sales Manager's
window:
ROLLBACK
When the Accountant raised his isolation to level 3 and again selected all
rows in the department table, the database server placed anti-insert locks on
each row in the table, and one extra phantom lock to avoid insertion at the
end of the table. When the Sales Manager attempted to insert a new row at
the end of the table, it was this final lock that blocked her command.
Notice that the Sales Manager’s command was blocked even though the
Sales Manager is still connected at isolation level 2. The database server
places anti-insert locks, like read locks, as demanded by the isolation level
and statements of each transaction. Once placed, these locks must be
respected by all other concurrent transactions.
$ For more information on locking, see "How locking works" on
page 401.
3 The Sales Manager notices that a big order sold by Philip Chin was not
entered into the database. Philip likes to be paid his commission
promptly, so the Sales Manager enters the missing order, which was
placed on April 25.
In the Sales Manager’s window, enter the following commands. The
sales order and its items are entered in separate tables because one
order can contain many items. You must create the entry for the sales
order before you add items to it. To maintain referential integrity, the
database server allows a transaction to add items to an order only if that
order already exists.
INSERT into sales_order
VALUES ( 2653, 174, ’1994-04-22’, ’r1’,
’Central’, 129);
INSERT into sales_order_items
VALUES ( 2653, 1, 601, 100, ’1994-04-25’ );
COMMIT;
4 The Accountant has no way of knowing that the Sales Manager has just
added a new order. Had the new order been entered earlier, it would
have been included in the calculation of Philip Chin’s April sales.
In the Accountant’s window, calculate the April sales totals again. Use
the same command, and observe that Philip Chin’s April sales total
changes to $4560.00.
Imagine that the Accountant now marks all orders placed in April to
indicate that commission has been paid. The order that the Sales
Manager just entered might be found in the second search and marked as
paid, even though it was not included in Philip’s total April sales!
5 At isolation level 3, the database server places anti-insert locks to ensure
that no other transactions can add a row which matches the criterion of a
search or select.
First, roll back the insertion of Philip’s missing order: Execute the
following statement in the Sales Manager’s window.
ROLLBACK
6 In the Accountant’s window, execute the following two statements.
ROLLBACK;
SET TEMPORARY OPTION ISOLATION_LEVEL = 3;
7 In the Sales Manager’s window, execute the following statements to
remove the new order.
DELETE
FROM sales_order_items
WHERE id = 2653;
DELETE
FROM sales_order
WHERE id = 2653;
COMMIT;
8 In the Accountant’s window, execute the same query as before.
SELECT emp_id, emp_fname, emp_lname,
SUM(sales_order_items.quantity * unit_price)
AS "April sales"
FROM employee
KEY JOIN sales_order
KEY JOIN sales_order_items
KEY JOIN product
WHERE ’1994-04-01’ <= order_date
AND order_date < ’1994-05-01’
GROUP BY emp_id, emp_fname, emp_lname
Because you set the isolation to level 3, the database server will
automatically place anti-insert locks to ensure that the Sales Manager
can’t insert April order items until the Accountant finishes his
transaction.
9 Return to the Sales Manager’s window. Again attempt to enter Philip
Chin’s missing order.
INSERT INTO sales_order
VALUES ( 2653, 174, ’1994-04-22’,
’r1’,’Central’, 129)
The Sales Manager’s window will hang; the operation will not complete.
Click the Interrupt the SQL Statement button on the toolbar (or click
Stop in the SQL menu) to interrupt this entry.
10 The Sales Manager can’t enter the order in April, but you might think
that she could still enter it in May.
Change the date of the command to May 05 and try again.
INSERT INTO sales_order
VALUES ( 2653, 174, ’1994-05-05’, ’r1’,
’Central’, 129)
The Sales Manager’s window will hang again. Click the Interrupt the
SQL Statement button on the toolbar (or click Stop in the SQL menu) to
interrupt this entry. Although the database server places no more locks
than necessary to prevent insertions, these locks have the potential to
interfere with a large number of other transactions.
The database server places locks in table indices. For example, it places
a phantom lock in an index so a new row cannot be inserted immediately
before it. However, when no suitable index is present, it must lock every
row in the table.
In some situations, anti-insert locks may block some insertions into a
table, yet allow others.
11 The Sales Manager wishes to add a second item to order 2651. Use the
following command.
INSERT INTO sales_order_items
VALUES ( 2651, 2, 302, 4, ’1994-05-22’ )
All goes well, so the Sales Manager decides to add the following item to
order 2652 as well.
INSERT INTO sales_order_items
VALUES ( 2652, 2, 600, 12, ’1994-05-25’ )
The Sales Manager’s window will hang. Click the Interrupt the SQL
Statement button on the toolbar (or click Stop in the SQL menu) to
interrupt this entry.
12 Conclude this tutorial by undoing any changes to avoid changing the
demonstration database. Enter the following command in the Sales
Manager’s window.
ROLLBACK
Enter the same command in the Accountant’s window.
ROLLBACK
You may now close both windows.
How locking works
Row orderings
You can use an index to order rows based on a particular criterion
established when the index was constructed.
When there is no index, Adaptive Server Anywhere orders rows by their
physical placement on disk; in the case of a sequential scan, the specific
ordering is defined by the internal workings of the database server. You
should not rely on the order of rows in a sequential scan. From the point of
view of scanning the rows, however, Adaptive Server Anywhere treats the
request similarly to an indexed scan, albeit using an ordering of its own
choosing. It can place locks on positions in the scan as it would were it using
an index.
By locking a scan position, a transaction prevents some actions by
other transactions relating to a particular range of values in that ordering of
the rows. Insert and anti-insert locks are always placed on scan positions.
For example, a transaction might delete a row, hence deleting a particular
primary key value. Until this transaction either commits the change or rolls it
back, it must protect its right to do either. In the case of a deleted row, it
must ensure that no other transaction can insert a row using the same primary
key value, hence making a rollback operation impossible. A lock on the scan
position this row occupied reserves this right while having the least impact
on other transactions.
Each of these locks has a separate purpose, and they all work together. Each
prevents a particular set of inconsistencies that could occur in their absence.
Depending on the isolation level you select, the database server will use
some or all of them to maintain the degree of consistency you require.
The above types of locks have the following uses:
♦ A transaction acquires a write lock whenever it inserts, updates, or
deletes a row. No other transaction can obtain either a read or a write
lock on the same row when a write lock is set. A write lock is an
exclusive lock.
Only one transaction should change any one row at one time. Otherwise, two
simultaneous transactions might try to change one value to two different new
ones. Hence, it is important that a write lock be exclusive.
By contrast, no difficulty arises if more than one transaction wants to read a
row. Since neither is changing it, there is no conflict of interest. Hence, read
locks may be shared.
You may apply similar reasoning to anti-insert and insert locks. Many
transactions can prevent the insertion of a row in a particular scan position by
each acquiring an anti-insert lock. Similar logic applies for insert locks.
When a particular transaction requires exclusive access, it can obtain it by
acquiring both an anti-insert and an insert lock on the same scan position.
These locks do not conflict when they are held by the same transaction.
Which specific locks conflict?
The following table identifies the combinations of locks that conflict.

              read        write       anti-insert  insert
read                      conflict
write         conflict    conflict
anti-insert                                        conflict
insert                                conflict
These conflicts arise only when the locks are held by different transactions.
For example, one transaction can obtain both anti-insert and insert locks on a
single scan position to obtain exclusive access to a location.
The first difference in operation has nothing to do with acquiring locks, but
rather with respecting them. At isolation level 0, a transaction is free to read
any row, whether or not another transaction has acquired a write lock on it.
By contrast, before reading each row an isolation level 1 transaction must
check whether a write lock is in place. It cannot read past any write-locked
rows because doing so might entail reading dirty data.
The second difference in operation creates cursor stability. Cursor stability is
achieved by acquiring a read lock on the current row of a cursor. This read
lock is released when the cursor is moved. More than one row may be
affected if the contents of the cursor are the result of a join. In this case, the
database server acquires read locks on all rows which have contributed
information to the cursor’s current row and removes all these locks as soon as
another row of the cursor is selected as current. A read lock placed to ensure
cursor stability is the only type of lock that does not persist until the end of a
transaction.
SELECT statements at isolation level 2
At isolation level 2, Adaptive Server Anywhere modifies its procedures to
ensure that your reads are repeatable. If your SELECT command returns
values from every row in a table, then the database server acquires a read
lock on each row of the table as it reads it. If, instead, your SELECT contains
a WHERE clause, or another condition that restricts the rows selected,
then the database server reads each row, tests the values in the row
against your criterion, and acquires a read lock on the row only if it meets
your criterion.
As at all isolation levels, the locks acquired at level 2 include all those set at
levels 1 and 0. Thus, cursor stability is again ensured and dirty reads are not
permitted.
SELECT statements at isolation level 3
When operating at isolation level 3, Adaptive Server Anywhere is obligated
to ensure that all schedules are serializable. In particular, in addition to the
requirements imposed at each of the lower levels, it must eliminate phantom
rows.
To accommodate this requirement, the database server uses read locks and
anti-insert locks. When you make a selection, the database server acquires a
read lock on each row that contributes information to your result set. Doing
so ensures that no other transactions can modify that material before you
have finished using it.
This requirement is similar to the procedures that the database server uses at
isolation level 2, but differs in that a lock must be acquired for each row
read, whether or not it meets any attached criteria. For example, if you select
the names of all employees in the sales department, then the server must lock
all the rows which contain information about a sales person, whether the
transaction is executing at isolation level 2 or 3. At isolation level 3,
however, it must also acquire read locks on each of the rows of employees
which are not in the sales department. Otherwise, someone else accessing the
database could potentially transfer another employee to the sales department
while you were still using your results.
The fact that a read lock must be acquired on each row whether or not it
meets your criteria has two important implications.
♦ The database server may need to place many more locks than would be
necessary at isolation level 2.
♦ The database server can operate a little more efficiently: it can
immediately acquire a read lock on each row as it reads it, since the
locks must be placed whether or not the information in the row is
accepted.
The number of anti-insert locks the server places can vary greatly and
depends upon your criteria and on the indexes available in the table. Suppose
you select information about the employee with Employee ID 123. If the
employee ID is the primary key of the employee table, then the database
server can economize its operations. It can use the index, which is
automatically built for a primary key, to locate the row efficiently. In
addition, there is no danger that another transaction could change another
Employee’s ID to 123 because primary key values must be unique. The
server can guarantee that no second employee is assigned that ID number
simply by acquiring a read lock on only the one row containing information
about the employee with that number.
By contrast, the database server would acquire more locks were you instead
to select all the employees in the sales department. Since any number of
employees could be added to the department, the server will likely have to
read every row in the employee table and test whether each person is in sales.
If this is the case, both read and anti-insert locks must be acquired for each
row.
Uniqueness
You can ensure that all values in a particular column, or combination of
columns, are unique. The database server always performs this task by
building an index on the unique column or columns, even if you do not
explicitly create one.
In particular, all primary key values must be unique. The database server
automatically builds an index for the primary key of every table. Thus, you
should not ask the database server to create an index on a primary key;
such an index would be redundant.
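For example, both constraints in the following sketch cause the server to build an index automatically; the table and column names are invented for illustration.

```sql
CREATE TABLE badge (
  badge_id   INTEGER PRIMARY KEY,   -- index built automatically
  emp_email  VARCHAR(80) UNIQUE     -- enforced through an automatic index
);

-- Redundant: an index on badge_id already exists for the primary key.
-- CREATE INDEX badge_id_index ON badge ( badge_id );
```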
Orphans and referential integrity
A foreign key is a reference to a primary key, usually in another table. When
that primary key doesn’t exist, the offending foreign key is called an orphan.
Adaptive Server Anywhere automatically ensures that your database contains
no orphans. This process is referred to as verifying referential integrity.
The database server verifies referential integrity by counting orphans.
WAIT_FOR_COMMIT
You can ask the database server to delay verifying referential integrity until
the end of your transaction. In this mode, you can insert one row that
contains a foreign key, then insert a second row that contains the missing
primary key. You must perform both operations in the same transaction;
otherwise, the database server will not allow your operations.
To request that the database server delay referential integrity checks until
commit time, set the value of the option WAIT_FOR_COMMIT to ON. By
default, this option is OFF. To turn it on, issue the following command:
SET OPTION WAIT_FOR_COMMIT = ON;
Before committing a transaction, the database server verifies that referential
integrity is maintained by checking the number of orphans your transaction
has created. At the end of every transaction, that number must be zero.
Even if the necessary primary key exists at the time you insert the row, the
database server must ensure that it still exists when you commit your results.
It does so by placing a read lock on the target row. With the read lock in
place, any other transaction is still free to read that row, but none can delete
or alter it.
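A sketch of the out-of-order insertion described above, using the demonstration tables; the order number 3000 is invented for illustration.

```sql
SET OPTION WAIT_FOR_COMMIT = ON;

-- Insert the child row first; the transaction's orphan count rises to one.
INSERT INTO sales_order_items
VALUES ( 3000, 1, 601, 10, '1994-06-01' );

-- Insert the missing parent row; the orphan count drops back to zero.
INSERT INTO sales_order
VALUES ( 3000, 174, '1994-06-01', 'r1', 'Central', 129 );

COMMIT;  -- succeeds because no orphans remain
```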
Anti-insert locks
The database server must ensure that a DELETE operation can be rolled
back. It does so in part by acquiring anti-insert locks. These locks are not
exclusive; however, they deny other transactions the right to insert rows that
would make it impossible to roll back the DELETE operation. For example,
the deleted row may have contained a primary key value, or another unique
value. Were another transaction allowed to insert a row with the same value,
the DELETE could not be undone without violating the uniqueness property.
Two-phase locking
Often, the general information about locking provided in the earlier sections
will suffice to meet your needs. There are times, however, when you may
benefit from more knowledge of what goes on inside the database server
when you perform basic types of operations. This knowledge will provide
you with a better basis from which to understand and predict potential
problems that users of your database may encounter.
Two-phase locking is important in the context of ensuring that schedules are
serializable. The two-phase locking protocol specifies a procedure each
transaction follows.
In other words, if all transactions follow the two-phase locking protocol, then
none of the inconsistencies mentioned above are possible.
This protocol defines the operations necessary to ensure complete
consistency of your data, but you may decide that some types of
inconsistencies are permissible during some operations on your database.
Eliminating all inconsistency often means reducing the efficiency of your
database.
Write locks are placed on modified, inserted, and deleted rows regardless of
isolation level. They are always held until commit or rollback.
Read locks at different isolation levels

Isolation level   Read locks
0                 None
1                 On rows that appear in the result set; held only while a
                  cursor is positioned on the row
2                 On rows that appear in the result set; held until the user
                  executes a COMMIT or a ROLLBACK
3                 On all rows read and all insertion points crossed in the
                  computation of a result set
Special optimizations
The previous sections describe the locks acquired when all transactions are
operating at a given isolation level. For example, when all transactions are
running at isolation level 2, locking is performed as described in the
appropriate section, above.
Particular concurrency issues
Replication and concurrency
Summary
Transactions and locking are perhaps second in importance only to the
relations between tables. The integrity and performance of any database can
benefit from the judicious use of locking and careful construction of
transactions.
Both are essential to creating databases that must execute a large number of
commands concurrently.
Transactions group SQL statements into logical units of work. You may end
each by either rolling back any changes you have made or by committing
these changes and so making them permanent.
Transactions are essential to data recovery in the event of system failure.
They also play a pivotal role in interweaving statements from concurrent
transactions.
To improve performance, multiple transactions must be executed
concurrently. Each transaction is composed of component SQL statements.
When two or more transactions are to be executed concurrently, the database
server must schedule the execution of the individual statements. Concurrent
transactions have the potential to introduce new, inconsistent results that
could not arise were these same transactions executed sequentially.
Many types of inconsistencies are possible, but four typical types are
particularly important because they are mentioned in the ISO SQL/92
standard and the isolation levels are defined in terms of them.
♦ Dirty read One transaction reads data modified, but not yet committed,
by another.
♦ Non-repeatable read A transaction reads the same row twice and gets
different values.
♦ Phantom row A transaction selects rows, using a certain criterion,
twice and finds new rows in the second result set.
♦ Lost update One transaction’s changes to a row are completely lost
because another transaction is allowed to save an update based on earlier
data.
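The last case might be illustrated by the following sketch, in which two connections interleave at a low isolation level; the starting quantity of 28 is assumed for illustration.

```sql
-- Connection A reads the quantity, planning to decrement it by one.
SELECT quantity FROM product WHERE id = 601;   -- suppose this returns 28

-- Connection B reads the same value ...
-- SELECT quantity FROM product WHERE id = 601;   -- also 28

-- ... and both write back a value computed from the now-stale read:
UPDATE product SET quantity = 27 WHERE id = 601;   -- connection A
COMMIT;
-- UPDATE product SET quantity = 27 WHERE id = 601;   -- connection B
-- COMMIT;  -- A's decrement is lost; the stored value should be 26
```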
A schedule is called serializable whenever the effect of executing the
statements according to the schedule is the same as could be achieved by
executing each of the transactions sequentially. Schedules are said to be
correct if they are serializable. A serializable schedule will cause none of the
above inconsistencies.
PART FOUR
This part describes how to build logic into your database using SQL stored
procedures, triggers, and Java. Storing logic in the database makes it available
automatically to all applications, providing consistency, performance, and
security benefits. The combined Java/Stored Procedure debugger is a powerful
tool for debugging all kinds of logic.
CHAPTER 15 Using Procedures, Triggers, and Batches
About this chapter
Procedures and triggers store procedural SQL statements in the database for
use by all applications. They enhance the security, efficiency, and
standardization of databases. User-defined functions are one kind of
procedure that returns a value to the calling environment for use in queries
and other SQL statements. Batches are sets of SQL statements submitted to
the database server as a group. Many features available in procedures and
triggers, such as control statements, are also available in batches.
$ For many purposes, server-side JDBC provides a more flexible way to
build logic into the database than SQL stored procedures. For information on
JDBC, see "Data Access Using JDBC" on page 577.
Contents
Topic Page
Procedure and trigger overview 424
Benefits of procedures and triggers 425
Introduction to procedures 426
Introduction to user-defined functions 433
Introduction to triggers 437
Introduction to batches 444
Control statements 446
The structure of procedures and triggers 449
Returning results from procedures 453
Using cursors in procedures and triggers 458
Errors and warnings in procedures and triggers 461
Using the EXECUTE IMMEDIATE statement in procedures 470
Transactions and savepoints in procedures and triggers 471
Some tips for writing procedures 472
Statements allowed in batches 474
Calling external libraries from procedures 475
Procedure and trigger overview
Chapter 15 Using Procedures, Triggers, and Batches
425
Introduction to procedures
To use procedures, you need to understand how to:
♦ Create procedures
♦ Call procedures from a database application
♦ Drop or remove procedures
♦ Control who has permissions to use procedures
This section discusses the above aspects of using procedures, as well as some
different applications of procedures.
Creating procedures
Adaptive Server Anywhere provides a number of tools that let you create a
new procedure.
In Sybase Central, you can use a wizard to provide necessary information
and then complete the code in a generic code editor. Sybase Central also
provides procedure templates (located in the Procedures & Functions folder)
that you can open and modify.
In Interactive SQL, you use the CREATE PROCEDURE statement to create
procedures. However, you must have RESOURCE authority. Where you
enter the statement depends on which tool you use.
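For example, a CREATE PROCEDURE statement for the new_dept procedure used later in this section might look like the following sketch (the parameter names and department column names are assumptions):

```sql
CREATE PROCEDURE new_dept(
   IN  id      INT,
   IN  name    CHAR( 40 ),
   IN  head_id INT )
BEGIN
   -- insert the new department row; column names are assumptions
   INSERT INTO department ( dept_id, dept_name, dept_head_id )
   VALUES ( id, name, head_id );
END
```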
Tip
You can also create a remote procedure by right-clicking a remote server
in the Remote Servers folder and choosing Add Remote Procedure from
the popup menu.
Altering procedures
You can modify an existing procedure using either Sybase Central or
Interactive SQL. You must have DBA authority or be the owner of the
procedure.
In Sybase Central, you cannot rename an existing procedure directly. Instead,
you must create a new procedure with the new name, copy the previous code
to it, and then delete the old procedure.
In Interactive SQL, you can use an ALTER PROCEDURE statement to
modify an existing procedure. You must include the entire new procedure in
this statement (in the same syntax as in the CREATE PROCEDURE
statement that created the procedure). You must also reassign user
permissions on the procedure.
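For example, changing the body of a procedure means restating it in full, along these lines (a sketch; the parameter list shown for new_dept is an assumption):

```sql
ALTER PROCEDURE new_dept(
   IN  id      INT,
   IN  name    CHAR( 40 ),
   IN  head_id INT )
BEGIN
   -- the entire, possibly revised, body must be restated here
   INSERT INTO department ( dept_id, dept_name, dept_head_id )
   VALUES ( id, name, head_id );
END
```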
$ For information on altering database object properties, see "Setting
properties for database objects" on page 116.
$ For information on granting or revoking permissions for procedures,
see "Granting permissions on procedures" on page 726 and "Revoking user
permissions" on page 729.
Calling procedures
CALL statements invoke procedures. Procedures can be called by an
application program, or by other procedures and triggers.
$ For more information, see "CALL statement" on page 398 of the book
ASA Reference.
The following statement calls the new_dept procedure to insert an Eastern
Sales department:
CALL new_dept( 210, 'Eastern Sales', 902 );
After this call, you may wish to check the department table to see that the
new department has been added.
All users who have been granted EXECUTE permissions for the procedure
can call the new_dept procedure, even if they have no permissions on the
department table.
$ For more information about EXECUTE permissions, see "EXECUTE
statement" on page 500 of the book ASA Reference.
Deleting procedures
Once you create a procedure, it remains in the database until someone
explicitly removes it. Only the owner of the procedure or a user with DBA
authority can drop the procedure from the database.
Example
The following statement removes the procedure new_dept from the database:
DROP PROCEDURE new_dept
Employee ID Salary
102 45700.000
105 62000.000
160 57490.000
243 72995.000
247 48023.690
Interactive SQL can only return multiple result sets if you have this option
enabled on the Commands tab of the Options dialog. For more information,
see "Returning multiple result sets from procedures" on page 456.
Note
If you are using a tool other than Interactive SQL or Sybase Central, you
may need to change the command delimiter from the semicolon to something
else before entering the CREATE FUNCTION statement.
Introduction to user-defined functions
fullname( 'Jane', 'Smith' )
Jane Smith
Any user who has been granted EXECUTE permissions for the function can
use the fullname function.
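A definition along the following lines would produce the result shown above (a sketch of one possible definition):

```sql
CREATE FUNCTION fullname( firstname CHAR( 30 ),
                          lastname  CHAR( 30 ) )
RETURNS CHAR( 61 )
BEGIN
   DECLARE name CHAR( 61 );
   -- concatenate the two parts with a single space between them
   SET name = firstname || ' ' || lastname;
   RETURN ( name );
END
```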
Example
The following user-defined function illustrates local declarations of
variables.
The customer table includes some Canadian customers sprinkled among
those from the USA, but there is no country column. The user-defined
function nationality uses the fact that the US zip code is numeric while the
Canadian postal code begins with a letter to distinguish Canadian and US
customers.
CREATE FUNCTION nationality( cust_id INT )
RETURNS CHAR( 20 )
BEGIN
   DECLARE natl CHAR( 20 );
   -- a US zip code starts with a digit; a Canadian postal code with a letter
   SELECT IF LEFT( zip, 1 ) BETWEEN '0' AND '9'
          THEN 'USA' ELSE 'CDN' ENDIF
   INTO natl
   FROM customer
   WHERE id = cust_id;
   RETURN ( natl );
END
Notes
While this function is useful for illustration, it may perform very poorly if
used in a SELECT involving many rows. For example, if you used the
SELECT query on a table containing 100 000 rows, of which 10 000 are
returned, the function would be called 10 000 times. If you used it in the
WHERE clause of the same query, it would be called 100 000 times.
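For example (illustrative; the return value tested is an assumption):

```sql
-- nationality() is evaluated once for every row the WHERE clause tests
SELECT fname, lname
FROM customer
WHERE nationality( id ) = 'CDN';
```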
For example, the creator of the function fullname could allow another_user
to use fullname with the statement:
GRANT EXECUTE ON fullname TO another_user
The following statement revokes permissions to use the function:
REVOKE EXECUTE ON fullname FROM another_user
Introduction to triggers
You use triggers whenever referential integrity and other declarative
constraints are insufficient.
$ For information on referential integrity, see "Ensuring Data Integrity"
on page 345 and "CREATE TABLE statement" on page 453 of the book ASA
Reference.
You may want to enforce a more complex form of referential integrity
involving more detailed checking, or you may want to enforce checking on
new data but allow legacy data to violate constraints. Another use for triggers
is in logging the activity on database tables, independent of the applications
using the database.
Action                  Description
INSERT                  Invokes the trigger whenever a new row is inserted
                        into the table associated with the trigger.
DELETE                  Invokes the trigger whenever a row of the associated
                        table is deleted.
UPDATE                  Invokes the trigger whenever a row of the associated
                        table is updated.
UPDATE OF column-list   Invokes the trigger whenever a row of the associated
                        table is updated such that a column in the
                        column-list has been modified.
If an error occurs while a trigger is executing, the operation that fired the
trigger fails. INSERT, UPDATE, and DELETE are atomic operations (see
"Atomic compound statements" on page 447). When they fail, all effects of
the statement (including the effects of triggers and any procedures called by
triggers) revert to their pre-operation state.
Creating triggers
You create triggers using either Sybase Central or Interactive SQL. In
Sybase Central, you can compose the code in a Code Editor. In Interactive
SQL, you can use a CREATE TRIGGER statement. For both tools, you must
have DBA or RESOURCE authority to create a trigger and you must have
ALTER permissions on the table associated with the trigger.
The body of a trigger consists of a compound statement: a set of semicolon-
delimited SQL statements bracketed by a BEGIN and an END statement.
You cannot use COMMIT and ROLLBACK and some ROLLBACK TO
SAVEPOINT statements within a trigger.
$ For more information, see the list of cross-references at the end of this
section.
CREATE TRIGGER check_birth_date
AFTER INSERT ON employee
REFERENCING NEW AS new_employee
FOR EACH ROW
BEGIN
   DECLARE err_user_error EXCEPTION
      FOR SQLSTATE '99999';
   IF new_employee.birth_date > 'June 6, 1994' THEN
      SIGNAL err_user_error;
   END IF;
END
This trigger fires after any row is inserted into the employee table. It detects
and disallows any new rows that correspond to birth dates later than June 6,
1994.
The phrase REFERENCING NEW AS new_employee allows statements in
the trigger code to refer to the data in the new row using the alias
new_employee.
Signaling an error causes the triggering statement, as well as any previous
effects of the trigger, to be undone.
For an INSERT statement that adds many rows to the employee table, the
check_birth_date trigger fires once for each new row. If the trigger fails for
any of the rows, all effects of the INSERT statement roll back.
You can specify that the trigger fires before the row is inserted rather than
after by changing the first line of the example to:
CREATE TRIGGER check_birth_date BEFORE INSERT ON employee
The REFERENCING NEW clause refers to the inserted values of the row; it
is independent of the timing (BEFORE or AFTER) of the trigger.
You may find it easier in some cases to enforce constraints using declarative
referential integrity or CHECK constraints, rather than triggers. For example,
implementing the above example with a column check constraint proves
more efficient and concise:
CHECK ( @col <= 'June 6, 1994' )
Example 2: A row-level DELETE trigger example
The following CREATE TRIGGER statement defines a row-level DELETE
trigger:
CREATE TRIGGER mytrigger BEFORE DELETE ON employee
REFERENCING OLD AS oldtable
FOR EACH ROW
BEGIN
...
END
The REFERENCING OLD clause enables the delete trigger code to refer to
the values in the row being deleted using the alias oldtable.
You can specify that the trigger fires after the row is deleted rather than
before, by changing the first line of the example to:
CREATE TRIGGER mytrigger AFTER DELETE ON employee
Executing triggers
Triggers execute automatically whenever an INSERT, UPDATE, or
DELETE operation is performed on the table named in the trigger. A row-
level trigger fires once for each row affected, while a statement-level trigger
fires once for the entire statement.
When an INSERT, UPDATE, or DELETE fires a trigger, the order of
operation is as follows:
1 BEFORE triggers fire.
2 Referential actions are performed.
3 The operation itself is performed.
4 AFTER triggers fire.
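The row-level triggers shown earlier use FOR EACH ROW. A statement-level variant, which fires once per statement, could be sketched as follows (the update_log table is hypothetical):

```sql
CREATE TRIGGER log_employee_updates
AFTER UPDATE ON employee
FOR EACH STATEMENT
BEGIN
   -- one log row per UPDATE statement, however many rows it touches
   INSERT INTO update_log ( table_name, logged_at )
   VALUES ( 'employee', CURRENT TIMESTAMP );
END
```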
Altering triggers
You can modify an existing trigger using either Sybase Central or Interactive
SQL. You must be the owner of the table on which the trigger is defined,
have DBA authority, or have ALTER permissions on the table together with
RESOURCE authority.
In Sybase Central, you cannot rename an existing trigger directly. Instead,
you must create a new trigger with the new name, copy the previous code to
it, and then delete the old trigger.
In Interactive SQL, you can use an ALTER TRIGGER statement to modify
an existing trigger. You must include the entire new trigger in this statement
(in the same syntax as in the CREATE TRIGGER statement that created the
trigger).
$ For information on altering database object properties, see "Setting
properties for database objects" on page 116.
Dropping triggers
Once you create a trigger, it remains in the database until someone explicitly
removes it. You must have ALTER permissions on the table associated with
the trigger to drop the trigger.
Example
The following statement removes the trigger mytrigger from the database:
DROP TRIGGER mytrigger
$ For more information, see "DROP statement" on page 491 of the book
ASA Reference.
Also, user_1 must have permissions to carry out the operations specified in
the trigger.
Introduction to batches
A simple batch consists of a set of SQL statements, separated by semicolons
or delimited by a line containing just the word go. The use of go is
recommended. For example, the following set of statements forms a batch,
which creates an Eastern Sales department and transfers all sales reps from
Massachusetts to that department.
INSERT
INTO department ( dept_id, dept_name )
VALUES ( 220, 'Eastern Sales' )
go
UPDATE employee
SET dept_id = 220
WHERE dept_id = 200
AND state = 'MA'
go
COMMIT
go
You can include this set of statements in an application and execute them
together.
Many statements used in procedures and triggers can also be used in batches.
You can use control statements (CASE, IF, LOOP, and so on), including
compound statements (BEGIN and END), in batches. Compound statements
can include declarations of variables, exceptions, temporary tables, or
cursors inside the compound statement.
The following batch creates a table only if a table of that name does not
already exist:
IF NOT EXISTS (
   SELECT * FROM SYSTABLE
   WHERE table_name = 't1' ) THEN
   CREATE TABLE t1 (
      firstcol INT PRIMARY KEY,
      secondcol CHAR( 30 )
   );
ELSE
   MESSAGE 'Table t1 already exists' TO CLIENT;
END IF
go
Control statements
There are a number of control statements for logical flow and decision
making in the body of the procedure or trigger, or in a batch. Available
control statements include:
All noncompound SQL statements are atomic. You can make a compound
statement atomic by adding the keyword ATOMIC after the BEGIN
keyword.
BEGIN ATOMIC
UPDATE employee
SET manager_ID = 501
WHERE emp_ID = 467;
UPDATE employee
SET birth_date = 'bad_data';
END
In this example, the two update statements are part of an atomic compound
statement. They must either succeed or fail as one. The first update statement
would succeed. The second one causes a data conversion error since the
value being assigned to the birth_date column cannot be converted to a date.
The atomic compound statement fails and the effect of both UPDATE
statements is undone. Even if the currently executing transaction is
eventually committed, neither statement in the atomic compound statement
takes effect.
You cannot use COMMIT and ROLLBACK and some ROLLBACK TO
SAVEPOINT statements within an atomic compound statement (see
"Transactions and savepoints in procedures and triggers" on page 471).
There is a case where some, but not all, of the statements within an atomic
compound statement are executed. This happens when an exception handler
within the compound statement deals with an error.
$ For more information, see "Using exception handlers in procedures and
triggers" on page 466.
The structure of procedures and triggers
Name
Fran Whitney
Matthew Cobb
Philip Chin
Julie Jordan
Robert Breault
...
Returning results from procedures
Using the SET statement
The following somewhat artificial procedure returns a value in an OUT
parameter assigned using a SET statement:
CREATE PROCEDURE greater ( IN a INT,
                           IN b INT,
                           OUT c INT )
BEGIN
   IF a > b THEN
      SET c = a;
   ELSE
      SET c = b;
   END IF;
END
Using single-row SELECT statements
Single-row queries retrieve at most one row from the database. This type of
query uses a SELECT statement with an INTO clause. The INTO clause
follows the select list and precedes the FROM clause. It contains a list of
variables to receive the value for each select list item. There must be the
same number of variables as there are select list items.
When a SELECT statement executes, the server retrieves the results of the
SELECT statement and places the results in the variables. If the query results
contain more than one row, the server returns an error. For queries returning
more than one row, you must use cursors. For information about returning
more than one row from a procedure, see "Returning result sets from
procedures" on page 455.
If the query results in no rows being selected, a row not found warning
appears.
The following procedure returns the results of a single-row SELECT
statement in the procedure parameters.
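A procedure matching this description could be sketched as follows (the table and column names assume the sample database):

```sql
CREATE PROCEDURE OrderCount( IN  customer_id INT,
                             OUT orders      INT )
BEGIN
   -- count this customer's orders into the OUT parameter
   SELECT COUNT( * )
   INTO orders
   FROM sales_order
   WHERE cust_id = customer_id;
END
```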
You can test this procedure in Interactive SQL using the following
statements, which show the number of orders placed by the customer with ID
102:
CREATE VARIABLE orders INT;
CALL OrderCount ( 102, orders );
SELECT orders;
Company Value
Chadwicks 8076
Overland Army Navy 8064
Martins Landing 6888
Sterling & Co. 6804
Carmel Industries 6780
... ...
Notes
♦ The number of variables in the RESULT list must match the number of
the SELECT list items. Automatic data type conversion is carried out
where possible if data types do not match.
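For reference, a procedure that returns a result set declares the set in a RESULT clause, along these lines (a sketch; the ListPeople procedure tested below would have this general shape):

```sql
CREATE PROCEDURE ListPeople()
RESULT ( lname CHAR( 36 ), fname CHAR( 36 ) )
BEGIN
   -- the SELECT list must match the RESULT clause
   SELECT emp_lname, emp_fname
   FROM employee;
END
```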
END
Notes
♦ To test this procedure in Interactive SQL, enter the following statement
in the SQL Statements pane:
CALL ListPeople ()
Using cursors in procedures and triggers
Errors and warnings in procedures and triggers
Default error handling
Generally, if a SQL statement in a procedure or trigger fails, the procedure or
trigger terminates execution and control returns to the application program
with an appropriate setting for the SQLSTATE and SQLCODE values. This
is true even if the error occurred in a procedure or trigger invoked directly or
indirectly from the first one. In the case of a trigger, the operation causing
the trigger is also undone and the error is returned to the application.
The following demonstration procedures show what happens when an
application calls the procedure OuterProc, and OuterProc in turn calls the
procedure InnerProc, which then encounters an error.
CREATE PROCEDURE OuterProc()
BEGIN
   MESSAGE 'Hello from OuterProc.' TO CLIENT;
   CALL InnerProc();
   MESSAGE 'SQLSTATE set to ',
      SQLSTATE, ' in OuterProc.' TO CLIENT
END
CREATE PROCEDURE InnerProc()
BEGIN
   DECLARE column_not_found
      EXCEPTION FOR SQLSTATE '52003';
   MESSAGE 'Hello from InnerProc.' TO CLIENT;
   SIGNAL column_not_found;
   MESSAGE 'SQLSTATE set to ',
      SQLSTATE, ' in InnerProc.' TO CLIENT;
END
♦ SET VARIABLE
The following example illustrates how this works.
Drop the procedures
Remember to drop both the InnerProc and OuterProc procedures by entering
the following commands in the command window before continuing with the
tutorial:
DROP PROCEDURE OuterProc;
DROP PROCEDURE InnerProc
The following demonstration procedures show what happens when an
application calls the procedure OuterProc; and OuterProc in turn calls the
procedure InnerProc, which then encounters an error. These demonstration
procedures are based on those used earlier in this section:
CREATE PROCEDURE OuterProc()
ON EXCEPTION RESUME
BEGIN
   DECLARE res CHAR( 5 );
   MESSAGE 'Hello from OuterProc.' TO CLIENT;
   CALL InnerProc();
   SET res = SQLSTATE;
   IF res = '52003' THEN
      MESSAGE 'SQLSTATE set to ',
         res, ' in OuterProc.' TO CLIENT;
   END IF
END;
END
CREATE PROCEDURE InnerProc()
BEGIN
   DECLARE column_not_found
      EXCEPTION FOR SQLSTATE '52003';
   MESSAGE 'Hello from InnerProc.' TO CLIENT;
   SIGNAL column_not_found;
   MESSAGE 'Line following SIGNAL.' TO CLIENT;
EXCEPTION
   WHEN column_not_found THEN
      MESSAGE 'Column not found handling.' TO CLIENT;
   WHEN OTHERS THEN
      RESIGNAL;
END
The EXCEPTION statement declares the exception handler itself. The lines
following the EXCEPTION statement do not execute unless an error occurs.
Each WHEN clause specifies an exception name (declared with a
DECLARE statement) and the statement or statements to be executed in the
event of that exception. The WHEN OTHERS THEN clause specifies the
statement(s) to be executed when the exception that occurred does not appear
in the preceding WHEN clauses.
In this example, the statement RESIGNAL passes the exception on to a
higher-level exception handler. RESIGNAL is the default action if WHEN
OTHERS THEN is not specified in an exception handler.
The following statement executes the OuterProc procedure:
CALL OuterProc();
The Interactive SQL Messages window then displays the following:
Hello from OuterProc.
Hello from InnerProc.
Column not found handling.
SQLSTATE set to 00000 in OuterProc.
Notes
♦ The EXCEPTION statements execute, rather than the lines following the
SIGNAL statement in InnerProc.
♦ As the error encountered was a column not found error, the MESSAGE
statement included to handle the error executes, and SQLSTATE resets
to zero (indicating no errors).
♦ After the exception handling code executes, control passes back to
OuterProc, which proceeds as if no error was encountered.
Exception handling and atomic compound statements
When an exception is handled inside a compound statement, the compound
statement completes without an active exception and the changes before the
exception are not reversed. This is true even for atomic compound
statements. If an error occurs within an atomic compound statement and is
explicitly handled, some but not all of the statements in the atomic
compound statement are executed.
Using the EXECUTE IMMEDIATE statement in procedures
Some tips for writing procedures
You can minimize the inconvenience of long fully qualified names by using
a correlation name to provide a convenient name to use for the table within a
statement. Correlation names are described in "FROM clause" on page 518
of the book ASA Reference.
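For example (illustrative):

```sql
SELECT e.emp_lname, e.emp_fname
FROM DBA.employee AS e          -- e is a correlation name for the table
WHERE e.dept_id = 200;
```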
Statements allowed in batches
Calling external libraries from procedures
Syntax
You can create a procedure that calls a function function_name in the DLL
library.dll as follows:
CREATE PROCEDURE dll_proc ( parameter-list )
EXTERNAL NAME 'function_name@library.dll'
If you call an external DLL from a procedure, the procedure cannot carry out
any other tasks; it just forms a wrapper around the DLL.
An analogous CREATE FUNCTION statement is as follows:
CREATE FUNCTION dll_func ( parameter-list )
RETURNS data-type
EXTERNAL NAME 'function_name@library.dll'
Notes
Calling get_value on an OUT parameter returns the data type of the
argument, and returns data as NULL.
The get_piece function for any given argument can only be called
immediately after the get_value function for the same argument.
To return NULL, set data to NULL in an_extfn_value.
The append field of set_value determines whether the supplied data replaces
(false) or appends to (true) the existing data. You must call set_value with
append=FALSE before calling it with append=TRUE for the same argument.
The append field is ignored for fixed length data types.
The header file itself contains some additional notes.
$ For information about passing parameters to external functions, see
"Passing parameters to external functions" on page 479.
Implementing cancel processing
An external function that expects to be canceled must inform the database
server by calling the set_cancel API function. You must export a special
function to enable external operations to be canceled. This function must
have the following form:
void an_extfn_cancel( void * cancel_handle )
If the DLL does not export this function, the database server ignores any user
interrupts for functions in the DLL. In this function, cancel_handle is a
pointer that the function being cancelled supplies to the database server,
through the set_cancel API function listed in the an_extfn_api structure
above, on each call to the external function.
You cannot use date or time data types, and you cannot use exact numeric
data types.
To provide values for INOUT or OUT parameters, use the set_value API
function. To read IN and INOUT parameters, use the get_value API
function.
Passing NULL
You can pass NULL as a valid value for all arguments. Functions in external
libraries can supply NULL as a return type for any data type.
External function return types
The following table lists the supported return types, and how they map to the
return type of the SQL function or procedure.
If a function in the external library returns NULL, and the SQL external
function was declared to return CHAR(), then the return value of the SQL
external function is NULL.
CHAPTER 16
Automating Tasks Using Schedules and Events
About this chapter
This chapter describes how to use scheduling and event handling features of
Adaptive Server Anywhere to automate database administration and other
tasks.
Contents
Topic Page
Introduction 482
Understanding schedules 484
Understanding events 486
Understanding event handlers 490
Schedule and event internals 492
Scheduling and event handling tasks 494
Introduction
Many database administration tasks are best carried out systematically. For
example, a regular backup procedure is an important part of proper database
administration.
You can automate routine tasks in Adaptive Server Anywhere by adding an
event to a database, and providing a schedule for the event. Whenever one
of the times in the schedule passes, a sequence of actions called an event
handler is executed by the database server.
Database administration also requires taking action when certain conditions
occur. For example, it may be appropriate to e-mail a notification to a system
administrator when a disk containing the transaction log is filling up, so that
the administrator can handle the situation. These tasks too can be automated
by defining event handlers for one of a set of system events.
Chapter contents
This chapter contains the following material:
♦ An introduction to scheduling and event handling (this section).
♦ Concepts and background information to help you design and use
schedules and event handlers:
♦ "Understanding schedules" on page 484.
♦ "Understanding events" on page 486.
♦ A discussion of techniques for developing event handlers:
♦ "Developing event handlers" on page 490.
♦ Internals information:
♦ "Schedule and event internals" on page 492.
♦ Step by step instructions for how to carry out automation tasks.
♦ "Scheduling and event handling tasks" on page 494.
Questions and answers
To answer the question...          Consider reading...
What is a schedule?                "Understanding schedules" on page 484
What is a system event?            "Understanding events" on page 486
What is an event handler?          "Understanding event handlers" on page 490
How do I debug event handlers?     "Developing event handlers" on page 490
Chapter 16 Automating Tasks Using Schedules and Events
Understanding schedules
By scheduling activities you can ensure that a set of actions is executed at a
set of preset times. The scheduling information and the event handler are
both stored in the database itself.
You can define complex schedules by associating more than one schedule
with a named event.
The following examples give some ideas for scheduled actions that may be
useful.
Examples
Carry out an incremental backup daily at 1:00 am:
create event IncrementalBackup
schedule
start time '1:00 AM' every 24 hours
handler
begin
   backup database directory 'c:\\backup'
   transaction log only
   transaction log rename match
end
Summarize orders at the end of each business day:
create event Summarize
schedule
start time '6:00 pm'
on ( 'Mon', 'Tue', 'Wed', 'Thu', 'Fri' )
handler
begin
   insert into dba.OrderSummary
   select max( date_ordered ),
          count( * ),
          sum( amount )
   from dba.Orders
   where date_ordered = current date
end
Defining schedules
Schedule definitions have several components to them, to permit flexibility:
♦ Name Each schedule definition has a name. You can assign more than
one schedule to a particular event, which can be useful in designing
complex schedules.
♦ Start time You can define a start time for the event, which is the time
that it is first executed.
Understanding events
The database server tracks several kinds of system events. Event handlers
are triggered when the database server checks a system event and finds that
it satisfies the trigger condition you provide.
By defining event handlers to execute when a chosen system event occurs
and satisfies a trigger condition that you define, you can improve the security
and safety of your data, and help to ease administration.
$ For information on the available system events, see "Choosing a system
event" on page 486. For information on trigger conditions, see "Defining
trigger conditions for events" on page 487.
♦ SQL errors When an error is raised, you can use the RAISERROR
event type to take action.
♦ Idle time The database server has been idle for a specified time. You
may want to use this event type to carry out routine maintenance
operations at quiet times.
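An event tied to one of these system events might be declared along the following lines (a sketch; the event name and the 20 percent threshold are assumptions):

```sql
create event LowDiskSpace
type DBDiskSpace
where event_condition( 'DBFreePercent' ) < 20
handler
begin
   -- notify the server console when free space falls below the threshold
   message 'Database disk space is running low' to console
end
```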
Understanding event handlers
Schedule and event internals
Scheduling and event handling tasks
If you are developing event handlers, you can add schedules or system events
to control the triggering of an event later, either using Sybase Central or the
ALTER EVENT statement.
$ See also:
♦ For information on triggering events, see "Triggering an event handler"
on page 496.
♦ For information on altering events, see "ALTER EVENT statement" on
page 375 of the book ASA Reference.
CHAPTER 17
Welcome to Java in the Database
About this chapter
This chapter provides motivation and concepts for using Java in the database.
Adaptive Server Anywhere is a runtime environment for Java. Java provides
a natural extension to SQL, turning Adaptive Server Anywhere into a
platform for the next generation of enterprise applications.
Contents
Topic Page
Introduction to Java in the database 500
Java in the database Q & A 503
A Java seminar 509
The runtime environment for Java in the database 519
A Java in the database exercise 527
Introduction to Java in the database
The SQLJ proposed standard
The Adaptive Server Anywhere Java implementation is based on the SQLJ
Part 1 and SQLJ Part 2 proposed standards. SQLJ Part 1 provides
specifications for calling Java static methods as SQL stored procedures and
user-defined functions. SQLJ Part 2 provides specifications for using Java
classes as SQL domains.
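For example, SQLJ Part 1-style wrapping of a Java static method as a SQL procedure looks roughly like this (a sketch; the class and method names are hypothetical):

```sql
CREATE PROCEDURE show_greeting( IN name CHAR( 50 ) )
EXTERNAL NAME 'Greeter.greet (Ljava/lang/String;)V'
LANGUAGE JAVA
```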
Chapter 17 Welcome to Java in the Database
Title                                  Purpose
"Welcome to Java in the Database"      Java concepts and how to apply them in
on page 499 (this chapter)             Adaptive Server Anywhere.
"Using Java in the Database"           Practical steps to using Java in the
on page 535                            database.
"Data Access Using JDBC"               Accessing data from Java classes,
on page 577                            including distributed computing.
"Debugging Logic in the Database"      Testing and debugging Java code running
on page 607                            in the database.
Adaptive Server Anywhere Reference     The Reference Manual includes material
                                       on the SQL extensions that support Java
                                       in the database.
Reference guide to Sun's Java API      Online guide to Java API classes, fields,
                                       and methods. Available as Windows Help
                                       only.
Thinking in Java by Bruce Eckel        Online book that teaches how to program
                                       in Java. Supplied in Adobe PDF format in
                                       the jxmp subdirectory of your Adaptive
                                       Server Anywhere installation directory.
Java in the database Q & A
You then install these compiled classes into a database. Once installed, you
can execute these classes in the database server.
Adaptive Server Anywhere is a runtime environment for Java classes, not a
Java development environment. You need a Java development environment,
such as Sybase PowerJ or the Sun Microsystems Java Development Kit, to
write and compile Java.
Why Java?
Java provides a number of features that make it ideal for use in the database:
♦ Thorough error checking at compile time.
♦ Built-in error handling with a well-defined error handling methodology.
♦ Built-in garbage collection (memory recovery).
♦ Elimination of many bug-prone programming techniques.
♦ Strong security features.
♦ Java code is interpreted, so no operation is executed without first being accepted as safe by the VM.
For example, the SQL function PI(*) returns the value for pi. The Java API
class java.lang.Math has a parallel field named PI returning the same value.
But java.lang.Math also has a field named E that returns the base of the
natural logarithms, as well as a method that computes the remainder
operation on two arguments as prescribed by the IEEE 754 standard.
Other members of the Java API offer even more specialized functionality. For
example, java.util.Stack provides a last-in, first-out container for
ordered lists; java.util.Hashtable maps keys to values;
java.util.StringTokenizer breaks a string of characters into individual word
units.
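To make these behaviors concrete, here is a small standalone sketch (the ApiDemo class and its tokenLengths method are illustrative names, not part of the manual's samples):

```java
import java.util.Hashtable;
import java.util.Stack;
import java.util.StringTokenizer;

class ApiDemo {
    // Tokenize a sentence, push each word on a stack, then pop the
    // words back off (last-in, first-out) while recording each
    // word's length in a hash table keyed by the word itself.
    static Hashtable tokenLengths(String sentence) {
        Stack words = new Stack();
        StringTokenizer tokens = new StringTokenizer(sentence);
        while (tokens.hasMoreTokens()) {
            words.push(tokens.nextToken());
        }
        Hashtable lengths = new Hashtable();
        while (!words.empty()) {
            String word = (String) words.pop();
            lengths.put(word, Integer.valueOf(word.length()));
        }
        return lengths;
    }
}
```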
A Java seminar
This section introduces key Java concepts. After reading this section you
should be able to examine Java code, such as a simple class definition or the
invocation of a method, and understand what is taking place.
Subclasses in Java
You can define classes as subclasses of other classes. A class that is a
subclass of another class can use the fields and methods of its parent: this is
called inheritance. You can define additional methods and fields that apply
only to the subclass, and you can redefine the meaning of inherited fields and
methods.
Java is a single-hierarchy language, meaning that all classes you create or use
eventually inherit from one class. This means the low-level classes (classes
further up in the hierarchy) must be present before higher-level classes can
be used. The base set of classes required to run Java applications is called the
runtime Java classes, or the Java API.
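As a sketch of subclassing (the Employee and Manager classes are illustrative names, not part of the manual's samples):

```java
// A parent class: its fields and methods are inherited by subclasses.
class Employee {
    public String name;
    public double baseSalary() {
        return 40000;
    }
}

// Manager inherits the name field and baseSalary method from Employee,
// adds a field of its own, and redefines (overrides) an inherited method.
class Manager extends Employee {
    public double bonus;
    public double baseSalary() {
        return super.baseSalary() + bonus;
    }
}
```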
Class constructors
You create an object by invoking a class constructor. Constructors are
methods that have the following properties:
♦ A constructor method has the same name as the class, and has no
declared data type. For example, a simple constructor for the Product
class would be declared as follows:
Product () {
...constructor code here...
}
♦ If you include no constructor method in your class definition, a default
constructor provided by the Java base object class is used.
♦ You can supply more than one constructor for each class, with different
numbers and types of arguments. When a constructor is invoked, the one
with the proper number and type of arguments is used.
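A sketch of a class with more than one constructor (a simplified version of the Product class, not the full asademo sample):

```java
class Product {
    public String name;
    public double unit_price;

    // Default constructor: used when no arguments are supplied
    Product() {
        name = "Unknown";
        unit_price = 10.00;
    }

    // Overloaded constructor: used when a String and a double
    // are supplied, because its parameter list matches those arguments
    Product(String inName, double inPrice) {
        name = inName;
        unit_price = inPrice;
    }
}
```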
Understanding fields
There are two categories of Java fields:
♦ Instance fields Each object has its own set of instance fields, created
when the object was created. They hold information specific to that
instance. For example, a lineItem1Description field in the Invoice class
holds the description for a line item on a particular invoice. You can
access instance fields only through an object reference.
♦ Class fields A class field holds information that is independent of any
particular instance. A class field is created when the class is first loaded,
and no further copies are created no matter how many objects are
created. Class fields can be accessed either through the class name or
through an object reference.
To declare a field in a class, state its type, then its name, followed by a
semicolon. To declare a class field, use the static Java keyword in the
declaration. You declare fields in the body of the class and not within a
method; declaring a variable within a method makes it a part of the method,
not of the class.
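The difference can be sketched as follows (the Counter class is an illustrative name):

```java
class Counter {
    // Instance field: each Counter object gets its own copy
    public int ownTally;

    // Class field: one copy, shared by the whole class no matter
    // how many objects are created
    public static int sharedTally;

    Counter() {
        ownTally = ownTally + 1;       // affects this object only
        sharedTally = sharedTally + 1; // affects the single shared copy
    }
}
```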
Examples The following declaration of the class Invoice has four fields, corresponding
to information that might be contained on two line items on an invoice, and
one method.
public class Invoice {
  // Fields
  public String lineItem1Description;
  public double lineItem1Cost;
  public String lineItem2Description;
  public double lineItem2Cost;
  // A method
  public double totalSum() {
    double runningsum;
    runningsum = lineItem1Cost + lineItem2Cost;
    runningsum = runningsum * 1.15;
    return runningsum;
  }
}
Within the body of the totalSum method, a variable named runningsum is
declared. It first holds the subtotal of the first and second line item costs.
This subtotal is then multiplied by 1.15 to add the 15 per cent tax and so
determine the total sum.
The local variable (as it is known within the method body) is then returned to
the calling code. When you invoke the totalSum method, it returns the
sum of the two line item cost fields plus the tax on those two items.
Understanding methods
There are two categories of Java methods:
♦ Instance methods A totalSum method in the Invoice class could
calculate and add the tax, and return the sum of all costs, but it is only
useful when called in conjunction with an Invoice object, one that has
values for its line item costs. The calculation can only be performed
for an object, since the object (not the class) contains the line items of
the invoice.
♦ Class methods A class method carries out a task that does not depend
on any particular object, and can be invoked using the name of the class
alone.
Example The parseInt method of the java.lang.Integer class, which is supplied with
Adaptive Server Anywhere, is one example of a class method. When given a
string argument, the parseInt method returns the integer version of the
string.
For example, given the string value "1", the parseInt method returns 1, the
integer value, without requiring an instance of the java.lang.Integer class to
first be created, as illustrated by this Java code fragment:
int i = java.lang.Integer.parseInt( "1" );
Example The following version of the Invoice class now includes both an instance
method and a class method. The class method named rateOfTaxation
returns the rate of taxation used by the class to calculate the total sum of the
invoice.
The advantage of making the rateOfTaxation method a class method (as
opposed to an instance method or field) is that other classes and procedures
can use the value returned by this method without having to create an
instance of the class first. Only the name of the class and method is required
to return the rate of taxation used by this class.
Making rateOfTaxation a method, as opposed to a field, allows the
application developer to change how the rate is calculated without adversely
affecting any objects, applications, or procedures that use its return value.
Future versions of Invoice could make the return value of the
rateOfTaxation class method based on a more complicated calculation
without affecting other methods that use its return value.
public class Invoice {
// Fields
public String lineItem1Description;
public double lineItem1Cost;
public String lineItem2Description;
public double lineItem2Cost;
// An instance method
public double totalSum() {
double runningsum;
double taxfactor = 1 + Invoice.rateOfTaxation();
runningsum = lineItem1Cost + lineItem2Cost;
runningsum = runningsum * taxfactor;
return runningsum;
}
// A class method
public static double rateOfTaxation() {
double rate;
rate = .15;
return rate;
}
}
A Java glossary
The following items outline some of the details regarding Java classes. It is
by no means an exhaustive source of knowledge about the Java language but
may aid in the use of Java classes in Adaptive Server Anywhere.
$ For a thorough examination of the Java language, see the online book
Thinking in Java, by Bruce Eckel, included with Adaptive Server Anywhere
in the file jxmp\Tjava.pdf.
Packages A package is a grouping of classes that share a common purpose or
category. Members of a package have special privileges to access data and
methods in other members of the same package (see the protected and
package access modifiers below).
A package is the Java equivalent of a library. It is a collection of classes,
which can be made available using the import statement. The following Java
statement imports the utility library from the Java API:
import java.util.*;
Packages are typically held in JAR files, which have the extension .jar or .zip.
Public versus private An access modifier (essentially the public, private, or
protected keyword used in front of a declaration) determines the visibility of
a field, method, or class to other Java objects.
♦ A public class, method, or field is visible everywhere.
♦ A private class, method, or field is visible only in methods defined
within that class.
♦ A protected method or field is visible to methods defined within that
class, within subclasses of the class, or within other classes in the same
package.
♦ The default visibility, known as package, means that the method or field
is visible within the class and to other classes in the same package.
Constructors A constructor is a special method of a Java class that is called when an
instance of the class is created.
Classes can define their own constructors, including multiple, overloaded
constructors. The arguments used in creating the object determine which
constructor is used: when the type, number, and order of arguments used to
create an instance of the class match one of the class's constructors, that
constructor is used when creating the object.
Garbage collection Garbage collection automatically removes any object with no references to
it, with the exception of objects stored as values in a table.
There is no such thing as a destructor method in Java (as there is in C++).
Java classes can define their own finalize method for clean up operations
when an object is discarded during garbage collection.
Interfaces Java classes can inherit from only one class. Java uses interfaces instead of
multiple inheritance. A class can implement multiple interfaces; each
interface defines a set of methods and method profiles that must be
implemented by the class for the class to be compiled.
An interface defines what methods and static fields the class must declare.
The implementation of the methods and fields declared in an interface is
located within the class that uses the interface: the interface defines what the
class must declare, it is up to the class to determine how it is implemented.
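A minimal sketch of an interface and an implementing class (the Taxable and SalesOrder names are illustrative, not from the ASA samples):

```java
// The interface declares what the class must provide...
interface Taxable {
    double rateOfTaxation();
}

// ...and the implementing class decides how it is provided. The class
// does not compile unless every method of the interface is implemented.
class SalesOrder implements Taxable {
    public double rateOfTaxation() {
        return 0.15;
    }
}
```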
Java version
The Sybase Java VM executes a subset of JDK version 1.1.6.
Between release 1.0 of the Java Developer’s Kit (JDK) and release 1.1,
several new APIs were introduced. As well, a number were deprecated: the
use of certain APIs is no longer recommended, and support for them
may be dropped in future releases.
A Java class file that uses deprecated APIs generates a warning when compiled,
but still executes on a Java virtual machine built to release 1.1 standards,
such as the Sybase VM.
The internal JDBC driver supports JDBC version 1.1.
$ For more information on the JDK 1.1 APIs that are supported, please
see "Supported Java packages" on page 276 of the book ASA Reference.
The runtime environment for Java in the database
User-defined classes
User-defined classes are installed into a database using the INSTALL
statement. Once installed, they become available to other classes in the
database. If they are public classes, they are available from SQL as domains.
$ For information on installing classes, see "Installing Java classes into a
database" on page 544.
The dot in Java In Java, the dot is an operator that invokes methods or accesses fields
of a Java class or object. It is also part of an identifier, used to qualify class
names, as in the fully qualified class name java.util.Hashtable.
In the following Java code fragment, the dot is part of an identifier on the
first line of code. On the second line of code, it is an operator.
java.util.Random rnd = new java.util.Random();
int i = rnd.nextInt();
Invoking Java methods from SQL In SQL, the dot operator can be replaced with the
double right angle bracket (>>). The dot operator is more Java-like, but can
lead to ambiguity with respect to existing SQL names. The use of >> removes
this ambiguity.
SeLeCt java.lang.Math.random();
Data types When you use a Java class as a data type for a column, it is a user-defined
SQL data type. Unlike other SQL data types, however, it is case sensitive. This
convention prevents ambiguities between Java classes whose names differ only in case.
Keyword conflicts
SQL keywords can conflict with the names of Java classes, including API
classes. This occurs when a class name such as Date, a member of the
java.util package, is referenced. SQL reserves the word
Date for use as a keyword, even though it is also the name of a Java class.
When such ambiguities appear, you can use double quotes to identify that
you are not using the word in question as the SQL reserved word. For
example, the following SQL statement causes an error because Date is a
keyword and SQL reserves its use.
-- This statement is incorrect
CREATE VARIABLE dt java.util.Date
However the following two statements work correctly because the word Date
is within quotation marks.
CREATE VARIABLE dt java.util."Date";
SET dt = NEW java.util."Date"( 97, 10, 22, 16, 11, 01 )
The variable dt now contains the date: November 22, 1997, 4:11 p.m. (The
java.util.Date constructor takes the year less 1900 and a zero-based month,
so 97 and 10 represent 1997 and November.)
Classes further up in the hierarchy must also be installed A class referenced by
another class, either explicitly with a fully qualified name or implicitly using
an import statement, must also be installed in the database.
The import statement works as intended within compiled classes. However,
within the Adaptive Server Anywhere runtime environment, no equivalent to
the import statement exists. All class names used in SQL statements or stored
procedures must be fully qualified. For example, to create a variable of type
String, you would reference the class using the fully qualified name:
java.lang.String.
Public fields
It is a common practice in object oriented programming to define class fields
as private and make their values available only through public methods.
Many of the examples used in this documentation render fields public to
make examples more compact and easier to read. Using public fields in
Adaptive Server Anywhere also offers a performance advantage over
accessing public methods.
The general convention followed in this documentation is that a user-created
Java class designed for use in Adaptive Server Anywhere exposes its main
values in its fields. Methods contain the computational logic that acts on
these fields.
Case sensitivity
Java is case sensitive, so the portions of the following examples in this
section pertaining to Java syntax are written using the correct case. SQL
syntax is rendered in upper case.
public class Invoice {
  // Fields
  public String lineItem1Description;
  public double lineItem1Cost;
  public String lineItem2Description;
  public double lineItem2Cost;
  // An instance method
  public double totalSum() {
    double runningsum;
    double taxfactor = 1 + Invoice.rateOfTaxation();
    runningsum = lineItem1Cost + lineItem2Cost;
    runningsum = runningsum * taxfactor;
    return runningsum;
  }
  // A class method
  public static double rateOfTaxation() {
    double rate;
    rate = .15;
    return rate;
  }
}
Notes ♦ At this point no Java operations have taken place. The class has been
installed into the database and is ready for use as the data type of a
variable or column in a table.
♦ Changes made to the class file from now on are not automatically
reflected in the copy of the class in the database. You must re-install the
classes if you want the changes reflected.
$ For more information on installing classes, and for information on
updating an installed class, see "Installing Java classes into a database" on
page 544.
A Java in the database exercise
Invoking methods The Invoice class has one instance method, which you can invoke once
you create an Invoice object.
The following SQL statement invokes the totalSum() method of the object
referenced by the variable Inv. It returns the sum of the two cost fields plus
the tax charged on this sum.
SELECT Inv.totalSum();
Calling methods versus referencing fields Method names are always followed by
parentheses, even when they take no arguments. Field names are not
followed by parentheses.
The totalSum() method takes no arguments but returns a value. The
parentheses are used because a Java operation is being invoked, even though
the method takes no arguments.
Field access is faster than method invocation. Accessing a field does not
require the Java VM to be invoked, while invoking a method requires the
VM to execute the method.
As indicated by the Invoice class definition outlined at the beginning of this
section, the totalSum instance method makes use of the class method
rateOfTaxation.
You can access this class method directly from a SQL statement.
SELECT Invoice.rateOfTaxation();
Notice the name of the class is used, not the name of a variable containing a
reference to an Invoice object. This is consistent with the way Java handles
class methods, even though it is being used in a SQL statement. A class
method can be invoked even if no object based on that class has been
instantiated.
Class methods do not require an instance of the class to work properly, but
they can still be invoked on an object. The following SQL statement yields
the same results as the previously executed SQL statement.
SELECT Inv.rateOfTaxation();
Once an object has been added to the table T1, you can issue select
statements involving the fields and methods of the objects in the table.
For example, the following SQL statement returns the value of the field
lineItem1Description for all the objects in the table T1 (right now, there
should be only one object in the table).
SELECT ID, JCol.lineItem1Description
FROM T1;
You can execute similar select statements involving other fields and methods
of the object.
A second method for creating a Java object and adding it to a table involves
the following expression, which always creates a Java object and returns a
reference to it:
NEW Javaclassname()
You can use this expression in a number of ways. For example, the following
SQL statement creates a Java object and inserts it into the table T1.
INSERT INTO T1
VALUES ( 2, NEW Invoice() );
The following SQL statement verifies that these two objects have been saved
as values of column JCol in the table T1.
SELECT ID, JCol.totalSum()
FROM T1
The results of the JCol column (the second row returned by the above
statement) should be 0, because the fields in that object have no values and
the totalSum method is a calculation of those fields.
This is consistent with the way SQL variables are currently handled: the
variable Inv contains a reference to a Java object. The value in the table that
was the source of the variable’s reference is not altered until an UPDATE
statement is executed.
CHAPTER 18 Using Java in the Database
About this chapter This chapter describes how to add Java classes and objects to your database,
and how to use these objects in a relational database.
Contents
Topic Page
Overview of using Java 536
Java-enabling a database 539
Installing Java classes into a database 544
Creating columns to hold Java objects 549
Inserting, updating, and deleting Java objects 551
Querying Java objects 556
Comparing Java fields and objects 558
Special features of Java classes in the database 561
How Java objects are stored 566
Java database design 569
Using computed columns with Java classes 572
Configuring memory for Java 575
Before you begin To run the examples in this chapter, first run the file jdemo.sql, included in
the jxmp subdirectory of your installation directory.
$ For full instructions, see "Setting up the Java examples" on page 536.
Overview of using Java
Tip
You can also start Interactive SQL and connect to the ASA 7.0 Sample
data source from the command line:
dbisql -c "dsn=ASA 7.0 Sample"
♦ The runtime Java classes When you create a database, a set of Java
classes becomes available to the database. Java applications in the
database require these runtime classes to work properly.
Management tasks for Java To provide a runtime environment for Java, you need to
carry out the following tasks:
♦ Java-enable your database This task involves ensuring the
availability of built-in classes and the upgrading of the database to
Version 7 standards.
$ For more information, see "Java-enabling a database" on page 539.
♦ Install other classes your users need This task involves ensuring
that classes other than the runtime classes are installed and up to date.
$ For more information, see "Installing Java classes into a database"
on page 544.
♦ Configure your server You must configure your server to make the
necessary memory available to run Java tasks.
$ For more information, see "Configuring memory for Java" on
page 575.
Tools for managing Java You can carry out all these tasks from Sybase Central or
from Interactive SQL.
Java-enabling a database
The Adaptive Server Anywhere Runtime environment for Java requires a
Java VM and the Sybase runtime Java classes. The Java VM is always
available as part of the database server, but you need to Java-enable a
database for it to be able to use the runtime Java classes.
Java is a single-hierarchy language, meaning that all classes you create or use
eventually inherit from one class. This means the low-level classes (classes
further up in the hierarchy) must be present before you can use higher-level
classes. The base set of classes required to run Java applications are the
runtime Java classes, or the Java API.
When not to Java-enable a database Java-enabling a database adds many entries
to the system tables. This adds to the size of the database and, more
significantly, adds about 200K to the memory requirements for running the
database, even if you do not use any Java functionality.
If you are not going to use Java, and if you are running in a limited-memory
environment, you may wish to not Java-enable your database.
Where the runtime classes are held The Sybase runtime Java classes are held on
disk rather than stored in the database like other classes.
When you Java-enable a database, you also update the system tables with a
list of available classes from the system JAR files. You can then browse the
class hierarchy from Sybase Central, but the classes themselves are not
present in the database.
JAR files The database stores the runtime class names under the following JAR files:
Database initialization utility You can create databases using the dbinit.exe
command-line database initialization utility. The utility has a –j switch that
controls whether or not the Sybase runtime Java classes are installed in the
newly-created database. Using the –j switch prevents the Sybase runtime
Java classes from being installed; omitting the switch installs the Java
classes by default.
The same option is available when creating databases using Sybase Central.
You can upgrade existing databases created with Sybase SQL Anywhere
Version 5 or earlier using the command-line database upgrade utility or from
Sybase Central.
Database upgrade utility You can upgrade databases to Adaptive Server
Anywhere Version 7 standards using the dbupgrad.exe command-line utility.
Using the –j switch prevents the installation of the Sybase runtime Java
classes; omitting the switch installs the Java classes by default.
Installing Java classes into a database
Creating a class
Although the details of each step may differ depending on whether you are
using a Java development tool such as Sybase PowerJ, the steps involved in
creating your own class generally include the following:
v To create a class:
1 Define your class Write the Java code that defines your class. If you
are using the Sun Java SDK then you can use a text editor. If you are
using a development tool such as Sybase PowerJ, the development tool
provides instructions.
2 Name and save your class Save your class declaration (Java code) in
a file with the extension .java. Make certain the name of the file is the
same as the name of the class and that the case of both names is
identical.
For example, a class called Utility should be saved in a file called
Utility.java.
3 Compile your class This step turns your class declaration containing
Java code into a new, separate file containing byte code. The name of
the new file is the same as the Java code file but has an extension of
.class. You can run a compiled Java class in a Java runtime
environment, regardless of the platform you compiled it on or the
operating system of the runtime environment.
The Sun JDK contains a Java compiler, javac.exe.
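As a sketch of the three steps for a hypothetical Utility class (the class and its plus method are illustrative, not one of the supplied samples):

```java
// Steps 1 and 2: define the class and save it as Utility.java.
// In a real source file this class would be declared public, and the
// file name must match the class name, including case.
class Utility {
    // A static method that could later be called from SQL as a
    // user-defined function once the compiled class is installed
    public static int plus(int a, int b) {
        return a + b;
    }
}
// Step 3: compile it with the JDK compiler, producing Utility.class:
//   javac Utility.java
```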
Installing a class
To make your Java class available within the database, you install the class
into the database either from Sybase Central, or using the INSTALL
statement from Interactive SQL or another application. You must know the
path and file name of the class you wish to install.
You require DBA authority to install a class.
Installing a JAR
It is useful and common practice to collect sets of related classes together in
packages, and to store one or more packages in a JAR file. For information
on JAR files and packages, see the accompanying online book, Thinking in
Java, or another book on programming in Java.
You install a JAR file the same way as you install a class file. A JAR file can
have the extension JAR or ZIP. Each JAR file must have a name in the
database. Usually, you use the same name as the JAR file, without the
extension. For example, if you install a JAR file named myjar.zip, you would
generally give it a JAR name of myjar.
Tip
You can also update a Java class or JAR file by clicking Update Now on
the General tab of its property sheet.
Creating columns to hold Java objects
Case sensitivity Unlike other SQL data types, Java data types are case sensitive. You must
supply the proper case of all parts of the data type.
Java columns and defaults You can use any function of the proper data type
(that is, of the same class as the column), or any preset default, as a default
value for Java columns.
Java columns and NULL Java columns can allow NULL. If a nullable
column with a Java data type has no default value, the column contains NULL.
If a Java value is not set, it has a Java null value. This Java null maps onto
the SQL NULL, and you can use the IS NULL and IS NOT NULL search
conditions against the values. For example, suppose the description of a
Product Java object in a column named JProd was not set; you can query all
products with a null description as follows:
SELECT *
FROM product
WHERE JProd>>description IS NULL
A sample class
This section describes a class that is used in examples throughout the
following sections.
The Product.java class definition, included in the jxmp\asademo directory
under your installation directory, is reproduced in part below:
package asademo;
public class Product {
  // public fields
  public String name ;
  public String description ;
  public String size ;
  public String color;
  public int quantity ;
  public java.math.BigDecimal unit_price ;
  // Default constructor
  Product () {
    unit_price = new java.math.BigDecimal( 10.00 );
    name = "Unknown";
    size = "One size fits all";
  }
  // Constructor using all available fields
  Product ( String inColor,
            String inDescription,
            String inName,
            int inQuantity,
            String inSize,
            java.math.BigDecimal inUnit_price
  ) {
    color = inColor;
    description = inDescription;
    name = inName;
    quantity = inQuantity;
    size = inSize;
    unit_price = inUnit_price;
  }
}
Notes ♦ The Product class has several public fields that correspond to some of
the columns of the dba.Product table that will be collected together in
this class.
♦ The toString method is provided for convenience. When you include an
object name in a select-list, the toString method is executed and its
return string displayed.
♦ Some methods are provided to set and get the fields. It is common to use
such methods in object-oriented programming rather than to address the
fields directly. Here the fields are public for convenience in tutorials.
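The partial listing above omits the toString method mentioned in the notes; a sketch of what such a method could look like (an assumption about the asademo source, not a reproduction of it):

```java
class Product {
    public String name = "Tee Shirt";
    public String size = "Small";
    // The string constructor preserves the scale, so the price
    // prints with two decimal places
    public java.math.BigDecimal unit_price = new java.math.BigDecimal("9.00");

    // Interactive SQL displays this string when the object name
    // appears in a select list
    public String toString() {
        return size + " " + name + ": " + unit_price;
    }
}
```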
Inserting, updating, and deleting Java objects
The use of SQL variables is typical of stored procedures and other uses of
SQL to build programming logic into the database. Java provides a more
powerful way of accomplishing this task. You can use server-side Java
classes together with JDBC to insert objects into tables.
Updating the entire object You can update the object in much the same way as
you insert objects:
♦ From SQL, you can use a constructor to update the object to a new
object as the constructor creates it. You can then update individual fields
if you need to.
♦ From SQL, you can use a SQL variable to hold the object you need, and
then update the row to hold the variable.
♦ From JDBC, you can use a prepared statement and the
PreparedStatement.setObject method.
Updating fields of the object Individual fields of an object have data types that
correspond to SQL data types, using the SQL to Java data type mapping
described in "Java / SQL data type conversion" on page 282 of the book
ASA Reference.
You can update individual fields using a standard UPDATE statement:
UPDATE Product
SET JProd.unit_price = 16.00
WHERE ID = 302
In the initial release of Java in the database, it was necessary to use a special
function (EVALUATE) to carry out updates. This is no longer necessary.
To update a Java field, the Java data type of the field must map to a SQL
type, and the expression on the right-hand side of the SET clause must match
this type. You may need to use the CAST function to cast the data types
appropriately.
$ For information on data type mappings between Java and SQL, see
"Java / SQL data type conversion" on page 282 of the book ASA Reference.
Using set methods It is common practice in Java programming not to address fields directly, but
to use methods to get and set the value. It is also common practice for these
methods to return void. You can use set methods in SQL to update a column:
UPDATE jdba.Product
SET JProd.setName( 'Tank Top' )
WHERE id=302
Using methods is slower than addressing the field directly, because the Java
VM must run.
$ For more information, see "Return value of methods returning void" on
page 562.
Querying Java objects
Retrieving the entire object From SQL, you can create a variable of the
appropriate type, and select the value from the object into that variable.
However, the obvious place in which you may wish to make use of the entire
object is in a Java application.
You can retrieve an object into a server-side Java class using the getObject
method of the ResultSet of a query. You can also retrieve an object to a
client-side Java application.
$ For a description of retrieving objects using JDBC, see "Queries using
JDBC" on page 593.
Retrieving fields of the object Individual fields of an object have data types that
correspond to SQL data types, using the SQL to Java data type mapping
described in "Java / SQL data type conversion" on page 282 of the book
ASA Reference.
♦ You can retrieve individual fields by including them in the select-list of
a query, as in the following simple example:
SELECT JProd>>unit_price
FROM product
WHERE ID = 400
♦ If you use methods to set and get the values of your fields, as is common
in object oriented programming, you can include a getField method in
your query:
SELECT JProd>>getName()
FROM Product
WHERE ID = 401
$ For information on using objects in the WHERE clause and other issues
in comparing objects, see "Comparing Java fields and objects" on page 558.
Performance tip
Getting a field directly is faster than invoking a method that gets the field,
because method invocations require starting the Java VM.
The results of SELECT column-name: You can list the column name in a query select list, as in the following query:
SELECT JProd
FROM jdba.product
This query returns the Sun serialization of the object to the client application.
When you execute a query that retrieves an object in Interactive SQL, it
displays the return value of the object’s toString method. For the Product
class, the toString method lists, in one string, the size, name, and unit price
of the object. The results of the query are as follows:
JProd
Small Tee Shirt: 9.00
Medium Tee Shirt: 14.00
One size fits all Tee Shirt: 14.00
One size fits all Baseball Cap: 9.00
One size fits all Baseball Cap: 10.00
One size fits all Visor: 7.00
One size fits all Visor: 7.00
Large Sweatshirt: 24.00
Large Sweatshirt: 24.00
Medium Shorts: 15.00
Comparing Java fields and objects
Ways of comparing Java objects: Sorting and ordering rows, whether in a query or in an index, implies a
comparison between values on each row. If you have a Java column, you can
carry out comparisons in the following ways:
♦ Compare on a public field: You can compare on a public field in the
same way you compare on a regular row. For example, you could
execute the following query:
SELECT name, JProd.unit_price
FROM Product
ORDER BY JProd.unit_price
You can use this kind of comparison in queries, but not for indexes and
key columns.
♦ Compare using a compareTo method: You can compare Java objects
that have implemented a compareTo method. The Product class on
which the JProd column is based has a compareTo method that
compares objects based on the unit_price field. This permits the
following query:
SELECT name, JProd.unit_price
FROM Product
ORDER BY JProd
The comparison needed for the ORDER BY clause is automatically
carried out based on the compareTo method.
Example: The Product class installed into the sample database with the example
classes has a compareTo method as follows:
public int compareTo( Product anotherProduct ) {
    // Compare first on the basis of price
    // and then on the basis of toString()
    int lVal = unit_price.intValue();
    int rVal = anotherProduct.unit_price.intValue();
    if ( lVal > rVal ) {
        return 1;
    }
    else if ( lVal < rVal ) {
        return -1;
    }
    else {
        return toString().compareTo(
            anotherProduct.toString() );
    }
}
This method compares the unit price of each object. If the unit prices are the
same, then the names are compared (using Java string comparison, not the
database string comparison). Only if both the unit price and the name are the
same are the two objects considered the same when comparing.
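This price-then-name comparison can be sketched in plain Java, using a hypothetical SampleProduct class (not the installed asademo.Product) with an int price for simplicity:

```java
// Sketch: compare on price first, then fall back to Java string
// comparison of toString() when prices tie, as described above.
class SampleProduct implements Comparable<SampleProduct> {
    String name;
    String size;
    int unitPrice;

    SampleProduct(String size, String name, int unitPrice) {
        this.size = size;
        this.name = name;
        this.unitPrice = unitPrice;
    }

    public String toString() {
        // Same shape as the Product listing: size, name, and price
        return size + " " + name + ": " + unitPrice;
    }

    public int compareTo(SampleProduct other) {
        if (unitPrice > other.unitPrice) return 1;
        if (unitPrice < other.unitPrice) return -1;
        // Prices tie: Java string comparison, not database collation
        return toString().compareTo(other.toString());
    }
}
```

Two objects with the same price but different descriptions therefore compare as unequal, which is exactly what makes the ORDER BY and DISTINCT behavior described here well defined.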
Make toString and compareTo compatible: When you include a Java column in the select list of a query, and execute it
in Interactive SQL, the value of the toString method is displayed. When
comparing columns, the compareTo method is used. If the toString and
compareTo methods are not implemented consistently with each other, you
can get inappropriate results, such as DISTINCT queries that appear to return
duplicate rows.
For example, suppose the Product class in the sample database had a
toString method that returned the product name, and a compareTo method
based on the price. Then the following query, executed in Interactive SQL,
would display duplicate values:
SELECT DISTINCT JProd
FROM product
JProd
Tee Shirt
Tee Shirt
Baseball Cap
Visor
Sweatshirt
Shorts
Supported classes
You cannot use all classes from the JDK. The runtime Java classes available
for use in the database server belong to a subset of the Java API.
$ For a list of all supported packages, see "Supported Java packages" on
page 276 of the book ASA Reference.
Special features of Java classes in the database
When a method returns void, however, the value this is returned to SQL; that
is, the object itself. The feature only affects calls made from SQL, not from
Java.
This feature is particularly useful in UPDATE statements, where set methods
commonly return void. You can use the following UPDATE statement in the
sample database:
update jdba.product
set JProd = JProd.setName( 'Tank Top' )
where id=302
The setName method returns void, and so implicitly returns the product
object to SQL.
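A minimal sketch of such a set method, using a hypothetical NamedItem class: from Java the call returns nothing, while ASA, when the method is invoked from SQL, implicitly returns the modified object itself.

```java
// Sketch of a set method that returns void (hypothetical class).
class NamedItem {
    private String name;

    // From Java this returns nothing; invoked from SQL, ASA
    // implicitly returns "this" (the modified object) instead.
    public void setName(String newName) {
        name = newName;
    }

    public String getName() {
        return name;
    }
}
```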
Example: The following simple class has a single method, which executes a query and
passes the result set back to the calling environment.
import java.sql.*;
public class MyResultSet {
    // signature reconstructed from the EXTERNAL NAME string below
    public static void return_rset( ResultSet[] rset1 )
    throws SQLException {
Connection conn = DriverManager.getConnection(
"jdbc:default:connection" );
Statement stmt = conn.createStatement();
ResultSet rset =
    stmt.executeQuery(
        "SELECT CAST( JName.lastName " +
        "AS CHAR( 50 ) ) " +
        "FROM jdba.contact " );
rset1[0] = rset;
}
}
You can expose the result set using a CREATE PROCEDURE statement that
indicates the number of result sets returned from the procedure and the
signature of the Java method.
A CREATE PROCEDURE statement indicating a result set could be defined
as follows:
CREATE PROCEDURE result_set()
DYNAMIC RESULT SETS 1
EXTERNAL NAME
'MyResultSet.return_rset ([Ljava/sql/ResultSet;)V'
LANGUAGE JAVA
You can open a cursor on this procedure just as you can with any ASA
procedure returning result sets.
The string ([Ljava/sql/ResultSet;)V is a Java method signature, which is a
compact character representation of the number and type of the parameters
and return value.
$ For more information about Java method signatures, see "CREATE
PROCEDURE statement" on page 440 of the book ASA Reference.
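To show how such signature strings are composed, here is a hypothetical helper (not part of ASA or the JDK) that builds the descriptor for a few common types: primitives use single letters, classes use L followed by the slash-separated name and a semicolon, and arrays prefix [.

```java
// Hypothetical helper that assembles JNI-style method signatures,
// covering only the types used in this chapter's examples.
class SignatureSketch {
    static String descriptor(String type) {
        if (type.endsWith("[]")) {
            // Arrays: prefix "[" to the element type's descriptor
            return "[" + descriptor(type.substring(0, type.length() - 2));
        }
        switch (type) {
            case "void":    return "V";
            case "boolean": return "Z";
            case "int":     return "I";
            // Classes: "L" + slash-separated name + ";"
            default:        return "L" + type.replace('.', '/') + ";";
        }
    }

    static String signature(String[] params, String returnType) {
        StringBuilder sb = new StringBuilder("(");
        for (String p : params) sb.append(descriptor(p));
        return sb.append(")").append(descriptor(returnType)).toString();
    }
}
```

For a method taking a ResultSet array and returning void, this produces the same ([Ljava/sql/ResultSet;)V string used in the CREATE PROCEDURE statement above.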
The following procedure uses the testOut method:
CREATE PROCEDURE sp_testOut ( OUT p INTEGER )
EXTERNAL NAME 'TestClass/testOut ([I)Z'
LANGUAGE JAVA
The string ([I)Z is a Java method signature, indicating that the method has a
single parameter, which is an array of integers, and returns a boolean. You
must define the method so that the method parameter you wish to use as an
OUT or INOUT parameter is an array of a Java data type that corresponds to
the SQL data type of the OUT or INOUT parameter.
$ For details of the syntax, including the method signature, see "CREATE
PROCEDURE statement" on page 440 of the book ASA Reference.
$ For more information, see "Java / SQL data type conversion" on
page 282 of the book ASA Reference.
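The array convention works because of Java's parameter-passing rules: references are passed by value, so assigning to a parameter has no effect on the caller, but writing into an array element does. A minimal sketch (the testOut body here is illustrative, since the original listing is not reproduced above):

```java
// Sketch: a one-element array acts as the OUT parameter.
class OutParamSketch {
    public static boolean testOut(int[] param) {
        param[0] = 123;   // visible to the caller: the OUT value
        return true;      // ordinary return value ("Z" in the signature)
    }
}
```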
How Java objects are stored
1 Install a Java class into a database.
2 Create a table using that class as the data type for a column.
3 Insert rows into the table.
4 Install a new version of the class.
How will the existing rows work with the new version of the class?
Accessing rows when a class is updated: Adaptive Server Anywhere provides a form of class versioning to allow the
new class to work with the old rows. The rules for accessing these older
values are as follows:
♦ If a serializable field is in the old version of the class, but is either
missing or not serializable in the new version, the field is ignored.
♦ If a serializable field is in the new version of the class, but was either
missing or not serializable in the old version, the field is initialized to a
default value. The default value is 0 for primitive types, false for
Boolean values, and NULL for object references.
♦ If there was a superclass of the old version that is not a superclass of the
new version, the data for that superclass is ignored.
♦ If there is a superclass of the new version that was not a superclass of
the old version, the data for that superclass is initialized to default
values.
♦ If a serializable field changes type between the older version and the
newer version, the field is initialized to a default value. Type
conversions are not supported; this is consistent with Sun Microsystems
serialization.
When objects are inaccessible: A serialized object is inaccessible if the class of the object or any of its
superclasses has been removed from the database, at any time. This behavior
is consistent with Sun Microsystems serialization.
Moving objects across databases: These changes make cross-database transfer of objects possible even when
the versions of classes differ. Cross-database transfer can occur as follows:
♦ Objects are replicated to a remote database.
♦ A table of objects is unloaded and reloaded into another database.
♦ A log file containing objects is translated and applied against another
database.
When the new class is used: Each connection’s VM loads the class definition for each class the first time
that class is used.
When you INSTALL a class, the VM on your connection is implicitly
restarted. Therefore, you have immediate access to the new class.
For connections other than the one that carries out the INSTALL, the new
class loads the next time a VM accesses the class for the first time. If the
class is already loaded by a VM, that connection does not see the new class
until the VM is restarted for that connection (for example, with a STOP
JAVA and START JAVA).
Java database design
Using computed columns with Java classes
Adding computed columns to tables: The following statement alters the product table by adding another
computed column:
ALTER TABLE product
ADD inventory_Value INTEGER
COMPUTE ( JProd.quantity * JProd.unit_price )
Modifying the expression for computed columns: You can change the expression used in a computed column using the
ALTER TABLE statement. The following statement changes the expression
that a computed column is based on.
Managing memory: You can control memory use in the following ways:
♦ Set the overall cache size: You must use a cache size sufficient to
meet all the requirements for non-relocatable memory.
The cache size is set when the server is started using the -c
command-line switch.
Configuring memory for Java
Starting and stopping the VM: In addition to setting memory parameters for Java, you can unload the VM
when Java is not in use using the STOP JAVA statement. Only a user with
DBA authority can execute this statement. The syntax is simply:
STOP JAVA
The VM loads whenever a Java operation is carried out. If you wish to
explicitly load it in readiness for carrying out Java operations, you can do so
by executing the following statement:
START JAVA
CHAPTER 19
About this chapter: This chapter describes how to use JDBC to access data.
JDBC can be used both from client applications and inside the database. Java
classes using JDBC provide a more powerful alternative to SQL stored
procedures for incorporating programming logic in the database.
Contents
Topic Page
JDBC overview 578
Establishing JDBC connections 583
Using JDBC to access data 590
Using the Sybase jConnect JDBC driver 598
Creating distributed applications 602
JDBC overview
JDBC provides a SQL interface for Java applications: if you want to access
relational data from Java, you do so using JDBC calls.
Rather than providing a thorough guide to the JDBC database interface, this
chapter offers some simple examples to introduce JDBC and illustrate how
you can use it inside and outside the server. It also provides more detail on
the server-side use of JDBC, running inside the database server.
$ The examples illustrate the distinctive features of using JDBC in
Adaptive Server Anywhere. For more information about JDBC
programming, see any JDBC programming book.
JDBC and Adaptive Server Anywhere: You can use JDBC with Adaptive Server Anywhere in the following ways:
♦ JDBC on the client: Java client applications can make JDBC calls to
Adaptive Server Anywhere. The connection takes place through the
Sybase jConnect JDBC driver or through the JDBC-ODBC bridge.
In this chapter, the phrase client application applies both to applications
running on a user’s machine and to logic running on a middle-tier
application server.
♦ JDBC in the server: Java classes installed into a database can make
JDBC calls to access and modify data in the database, using an internal
JDBC driver.
The focus in this chapter is on server-side JDBC.
JDBC resources:
♦ Required software: You need TCP/IP to use the Sybase jConnect
driver.
The Sybase jConnect driver may already be available, depending on
your installation of Adaptive Server Anywhere.
$ For more information about the jConnect driver and its location,
see "The jConnect driver files" on page 598.
♦ Example source code: You can find source code for the examples in
this chapter in the file JDBCExamples.java in the jxmp subdirectory
under your Adaptive Server Anywhere installation directory.
$ For instructions on how to set up the Java examples, including the
JDBCExamples class, see "Setting up the Java examples" on page 536.
JDBC 2.0 restrictions: The following classes are part of the JDBC 2.0 core interface, but are not
available in the sybase.sql.ASA package:
♦ java.sql.Blob
♦ java.sql.Clob
♦ java.sql.Ref
♦ java.sql.Struct
♦ java.sql.Array
♦ java.sql.Map
The following JDBC 2.0 core functions are not available in the
sybase.sql.ASA package:
Chapter 19 Data Access Using JDBC
jConnect required
Depending on the package in which you received Adaptive Server
Anywhere, Sybase jConnect may or may not be included. You must have
jConnect to use JDBC from external applications. You can use
internal JDBC without jConnect.
Establishing JDBC connections
String machineName;
if ( args.length != 1 ) {
machineName = "localhost";
} else {
machineName = new String( args[0] );
}
try{
serializeVariable();
serializeColumn();
serializeColumnCastClass();
}
catch( Exception e ) {
System.out.println( "Error: " + e.getMessage() );
e.printStackTrace();
}
}
}
private static void ASAConnect( String UserID,
                                String Password,
                                String Machinename ) {
    // uses global Connection variable
    Class.forName("com.sybase.jdbc.SybDriver").newInstance();
conn = DriverManager.getConnection(
temp.toString() , _props );
}
catch ( Exception e ) {
System.out.println("Error: " + e.getMessage());
e.printStackTrace();
}
}
The main method: Each Java application requires a class with a method named main, which is
the method invoked when the program starts. In this simple example,
JDBCExamples.main is the only method in the application.
The JDBCExamples.main method carries out the following tasks:
1 Processes the command-line argument, using the machine name if
supplied. By default, the machine name is localhost, which is
appropriate for the personal database server.
2 Calls the ASAConnect method to establish a connection.
3 Executes several methods that scroll data to your command line.
The ASAConnect method: The JDBCExamples.ASAConnect method carries out the following tasks:
1 Connects to the default running database using Sybase jConnect.
The application requires only one of the libraries (JDBC) imported in the
first line of the JDBCExamples.java class. The others are for external
connections. The package named java.sql contains the JDBC classes.
The InternalConnect() method carries out the following tasks:
1 Connects to the default running database using the current connection:
♦ DriverManager.getConnection establishes a connection using a
connection string of jdbc:default:connection.
2 Prints Hello World to the current standard output, which is the server
window. System.out.println carries out the printing.
3 If there is an error in the attempt to connect, an error message appears in
the server window, together with the place where the error occurred.
The try and catch instructions provide the framework for the error
handling.
4 The class terminates.
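The try/catch pattern described in step 3 can be sketched as follows, using a hypothetical ConnectSketch class rather than the JDBCExamples code itself. Note that the jdbc:default:connection URL only resolves inside the database server; run anywhere else, getConnection throws, and the catch block reports the error just as the server window would.

```java
import java.sql.*;

// Sketch of the connect-and-report-errors pattern described above.
class ConnectSketch {
    public static String attempt(String url) {
        try {
            Connection conn = DriverManager.getConnection(url);
            return "connected";
        } catch (Exception e) {
            // Inside the server this message would appear
            // in the server window
            return "Error: " + e.getMessage();
        }
    }
}
```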
You can also install the class using Sybase Central. While connected to
the sample database, open the Java Objects folder and double-click Add
Class. Then follow the instructions in the wizard.
4 You can now call the InternalConnect method of this class just as you
would a stored procedure:
CALL JDBCExamples>>InternalConnect()
The first time a Java class is called in a session, the internal Java virtual
machine must be loaded. This can take a few seconds.
5 Confirm that the message Hello World prints on the server screen.
Using JDBC to access data
You can also install the class using Sybase Central. While connected to
the sample database, open the Java Objects folder and double-click Add
Java Class or JAR. Then follow the instructions in the wizard.
♦ The integer return type converts to an Integer object. The Integer class
is a wrapper around the basic int data type, providing some useful
methods such as toString().
♦ The Integer IRows converts to a string to be printed. The output goes to
the server window.
Notes:
♦ The two arguments are the department id (an integer) and the
department name (a string). Here, both arguments pass to the method as
strings, because they are part of the SQL statement string.
♦ The INSERT is a static statement and takes no parameters other than the
SQL itself.
♦ If you supply the wrong number or type of arguments, you receive the
Procedure Not Found error.
The following code fragment illustrates how queries can be handled within
JDBC. The code fragment finds the maximum unit price among the matching
products and returns it in a variable named max_price. The product name is
held in the String variable prodname. This example is available as the Query
method of the JDBCExamples class.
The example assumes an internal or external connection has been obtained
and is held in the Connection object named conn.
public static int Query () {
    int max_price = 0;
    try {
        conn = DriverManager.getConnection(
            "jdbc:default:connection" );
        // (reconstructed) create the statement and run the
        // query described in the notes below
        Statement stmt = conn.createStatement();
        ResultSet result = stmt.executeQuery(
            "SELECT quantity, unit_price FROM product" +
            " WHERE name = '" + prodname + "'" );
        while( result.next() ) {
            int price = result.getInt(2);
            System.out.println( "Price is " + price );
            if( price > max_price ) {
                max_price = price;
            }
        }
    }
    catch( Exception e ) {
        System.out.println("Error: " + e.getMessage());
        e.printStackTrace();
    }
    return max_price;
}
Running the example: Once you have installed the JDBCExamples class into the sample database,
you can execute this method using the following statement in Interactive
SQL:
select JDBCExamples>>Query()
Notes:
♦ The query selects the quantity and unit price for all products named
prodname. These results are returned into the ResultSet object named
result.
♦ There is a loop over each of the rows of the result set. The loop uses the
next method.
♦ For each row, the value of each column is retrieved into an integer
variable using the getInt method. ResultSet also has methods for other
data types, such as getString, getDate, and getBinaryString.
The argument for the getInt method is an index number for the column,
starting from 1.
Data type conversion from SQL to Java is carried out according to the
information in "SQL-to-Java data type conversion" on page 283 of the
book ASA Reference.
♦ Adaptive Server Anywhere supports bidirectional scrolling cursors.
However, JDBC provides only the next method, which corresponds to
scrolling forward through the result set.
♦ The method returns the value of max_price to the calling environment,
and Interactive SQL displays it in the Results pane.
+ "VALUES ( ? , ? )" ;
stmt.setInt(1, id);
stmt.setString(2, name );
Integer IRows = new Integer(
stmt.executeUpdate() );
Running the example: Once you have installed the JDBCExamples class into the sample database,
you can execute this example by entering the following statement:
call JDBCExamples>>InsertPrepared(
202, 'Eastern Sales' )
The string argument is enclosed in single quotes, which is appropriate for
SQL. If you invoked this method from a Java application, you would use
double quotes to delimit the string.
Retrieving objects
You can retrieve objects and their fields and methods by:
Inserting objects
From a server-side Java class, you can use the JDBC setObject method to
insert an object into a column with Java class data type.
You can insert objects using a prepared statement. For example, the
following code fragment inserts an object of type MyJavaClass into a column
of table T:
java.sql.PreparedStatement ps =
conn.prepareStatement("insert T values( ? )" );
ps.setObject( 1, new MyJavaClass() );
ps.executeUpdate();
An alternative is to set up a SQL variable that holds the object and then to
insert the SQL variable into the table.
Using the Sybase jConnect JDBC driver
set classpath=%classpath%;path\java\jdbcdrv.zip
Importing the jConnect classes: The classes in jConnect are all in the com.sybase package. The client
application needs to access classes in com.sybase.jdbc. For your application
to use jConnect, you must import these classes at the beginning of each
source file:
import com.sybase.jdbc.*;
Tip
You can also use a command prompt to add the jConnect system objects
to a Version 6 database. At the command prompt, type:
dbisql -c "uid=user;pwd=pwd" path\scripts\jcatalog.sql
where user and pwd identify a user with DBA authority, and path is your
Adaptive Server Anywhere installation directory.
Creating distributed applications
if ( conn != null ) {
Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery(
"SELECT JContactInfo FROM jdba.contact"
);
while ( rs.next() ) {
ci = ( asademo.ContactInfo )rs.getObject(1);
System.out.println( "\n\tStreet: " + ci.street +
                    "\n\tCity: " + ci.city +
                    "\n\tState: " + ci.state +
                    "\n\tPhone: " + ci.phone +
                    "\n" );
}
}
}
The getObject method is used in the same way as in the internal Java case.
Older method
In this section we describe how one of these examples works. You can study
the code for the other examples.
Serializing and deserializing query results: Here is the serializeColumn method of an old version of the
JDBCExamples class.
private static void serializeColumn() throws Exception {
    Statement stmt;
    ResultSet rs;
    byte arrayb[];
    asademo.ContactInfo ci;
    String name;
    if ( conn != null ) {
        stmt = conn.createStatement();
        rs = stmt.executeQuery(
            "SELECT sybase.sql.ASAUtils.toByteArray(" +
            " JName.getName() ) AS Name," +
            " sybase.sql.ASAUtils.toByteArray(" +
            " jdba.contact.JContactInfo )" +
            " FROM jdba.contact" );
        while ( rs.next() ) {
            arrayb = rs.getBytes("Name");
            name = (String)sybase.sql.ASAUtils.fromByteArray( arrayb );
            arrayb = rs.getBytes(2);
            ci = (asademo.ContactInfo)
                sybase.sql.ASAUtils.fromByteArray( arrayb );
            System.out.println( "Name: " + name +
                "\n\tStreet: " + ci.street +
                "\n\tCity: " + ci.city +
                "\n\tState: " + ci.state +
                "\n\tPhone: " + ci.phone +
                "\n" );
        }
        System.out.println( "\n\n" );
    }
}
Here is how the method works:
1 A connection already exists when the method is called. The connection
object is checked, and as long as it exists, the code executes.
2 A SQL query is constructed and executed. The query is as follows:
SELECT
    sybase.sql.ASAUtils.toByteArray( JName.getName() ) AS Name,
    sybase.sql.ASAUtils.toByteArray( jdba.contact.JContactInfo )
FROM jdba.contact
This statement queries the jdba.contact table. It gets information from
the JName and the JContactInfo columns. Instead of just retrieving each
column itself, or a method of the column, the
sybase.sql.ASAUtils.toByteArray function converts the values to byte
streams so they can be serialized.
3 The client loops over the rows of the result set. For each row, the value
of each column is deserialized into an object.
4 The output (System.out.println) shows that the fields and methods of
the object can be used as they could in their original state.
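The toByteArray/fromByteArray pair behaves like standard Java serialization; the internals of ASAUtils are assumed here, but the same round trip can be sketched with only the JDK:

```java
import java.io.*;

// Sketch: serialize an object graph to a byte array and rebuild it,
// the same round trip the ASAUtils helpers perform for query results.
class SerializeSketch {
    public static byte[] toByteArray(Object obj) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(obj);     // serialize the object graph
        out.close();
        return bytes.toByteArray();
    }

    public static Object fromByteArray(byte[] data)
            throws IOException, ClassNotFoundException {
        ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(data));
        return in.readObject();   // rebuild the object on the client
    }
}
```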
CHAPTER 20
About this chapter: This chapter describes how to use the Sybase debugger to assist in
developing Java classes, SQL stored procedures, triggers, and event handlers.
Contents
Topic Page
Introduction to debugging in the database 608
Tutorial 1: Connecting to a database 610
Tutorial 2: Debugging a stored procedure 614
Tutorial 3: Debugging a Java class 617
Common debugger tasks 622
Writing debugger scripts 624
Introduction to debugging in the database
Debugger features
You can carry out many tasks with the Sybase debugger, including the
following:
♦ Debug Java classes: You can debug Java classes that are stored in the
database.
♦ Debug procedures and triggers: You can debug SQL stored
procedures and triggers.
♦ Debug event handlers: Event handlers are an extension of SQL stored
procedures. The material in this chapter about debugging stored
procedures applies equally to debugging event handlers.
♦ Browse classes and stored procedures: You can browse through the
source code of installed classes and SQL procedures.
♦ Trace execution: Step line by line through the code of a Java class or
stored procedure running in the database. You can also look up and
down the stack of functions that have been called.
♦ Set breakpoints: Run the code until you hit a breakpoint, and stop at
that point in the code.
♦ Set break conditions: Breakpoints include lines of code, but you can
also specify conditions when the code is to break. For example, you can
stop at a line the tenth time it is executed, or only if a variable has a
particular value. You can also stop whenever a particular exception is
thrown in a Java application.
♦ Inspect and modify local variables: When execution is stopped at a
breakpoint, you can inspect the values of local variables and alter their
value.
♦ Inspect and break on expressions: When execution is stopped at a
breakpoint, you can inspect the value of a wide variety of expressions.
♦ Inspect and modify row variables: Row variables are the OLD and
NEW values of row-level triggers. You can inspect and set these values.
Chapter 20 Debugging Logic in the Database
Debugger information
This chapter contains three tutorials to help you get started using the
debugger. Task-based help and information about each window is available
in the debugger online Help.
$ For information on the debugger windows, see "Debugger windows" on
page 1027.
Tutorial 1: Connecting to a database
Notes on URL values: The URL in the debugger connection window has the following default
behavior:
♦ Full URL: You can provide a full URL of the form
jdbc:sybase:Tds:machine-name:port.
♦ machine-name:port: You can omit the jdbc:sybase:Tds portion
of the URL.
♦ Default port: If you do not specify a port number, 2638 is used as
the default. This is the default TCP/IP port used by the database
server.
♦ Default machine name: If you do not specify a machine name,
localhost is used as the default.
♦ Default URL: You can omit the URL entirely, and the above
defaults are used to construct a default URL of
jdbc:sybase:Tds:localhost:2638.
$ For more information on valid URL entries, see "Supplying a URL for
the server" on page 600.
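These defaulting rules can be sketched as a small helper (hypothetical, not part of the debugger):

```java
// Sketch: apply the default URL rules described above to
// whatever the user typed in the connection window.
class UrlDefaults {
    static final String PREFIX = "jdbc:sybase:Tds:";

    public static String complete(String entry) {
        if (entry == null || entry.length() == 0) {
            entry = "localhost";              // default machine name
        }
        if (!entry.startsWith(PREFIX)) {
            entry = PREFIX + entry;           // add the protocol portion
        }
        // Add the default port if none was given after the machine name
        String tail = entry.substring(PREFIX.length());
        if (tail.indexOf(':') < 0) {
            entry = entry + ":2638";          // server's default TCP/IP port
        }
        return entry;
    }
}
```

An empty entry thus becomes jdbc:sybase:Tds:localhost:2638, while an entry of just a machine name picks up the protocol portion and the default port.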
Tutorial 2: Debugging a stored procedure
Set a breakpoint
You can set a breakpoint in the body of the sp_customer_products
procedure. When the procedure is executed, execution stops at the
breakpoint.
Id sum(sales_order....
301 60
700 48
... ...
Inspecting trigger row variables: In addition to local variables, you can display other variables, such as
row-level trigger OLD and NEW values, in the debugger Row Variables window.
Tutorial 3: Debugging a Java class
Notes on locating Java source code: The Source Path window holds a list of directories in which the
debugger looks for Java source code. Java rules for finding packages
apply. The debugger also searches the current CLASSPATH for source
code.
For example, if you add the paths c:\asa7\jxmp and c:\Java\src to the
source path, and the debugger is trying to find a class called
asademo.Product, it looks for the source code in
c:\asa7\jxmp\asademo\Product.java and
c:\Java\src\asademo\Product.java.
Set a breakpoint
You can set a breakpoint at the beginning of the Query() method. When the
method is invoked, execution stops at the breakpoint.
Following the previous section, the debugger should have stopped execution
of JDBCExamples.Query() at the first statement in the method.
Examples: Here are some example steps you can try:
1 Step to the next line: Choose Run➤Step Over, or press F7 to step to
the next line in the current method. Try this two or three times.
2 Run to a selected line: Select the following line using the mouse, and
choose Run➤Run To Selected, or press F6 to run to that line and break:
max_price = price;
The red arrow moves to the line.
3 Set a breakpoint and execute to it: Select the following line (line
292) and press F9 to set a breakpoint on that line:
return max_price;
An asterisk appears in the left hand column to mark the breakpoint.
Press F5 to execute to that breakpoint.
4 Experiment: Try different methods of stepping through the code. End
with F5 to complete the execution.
When you have completed the execution, the Interactive SQL Data
window displays the value 24.
Options: The complete set of options for stepping through source code is displayed
on the Run menu. You can find more information in the debugger online
Help.
SELECT JDBCExamples.Query()
The query executes only as far as the breakpoint.
3 Press F7 to step to the next line. The max_price variable has now been
declared and initialized to zero.
4 If the Local Variables window is not displayed, choose Window➤Local
Variables to display it.
The Local Variables window shows that there are several local
variables. The max_price variable has a value of zero. All others are
listed as variable not in scope, which means they are not yet initialized.
5 In the Local Variables window, double-click the Value column entry for
max_price, and type in 45 to change the value of max_price to 45.
The value 45 is larger than any other price. Instead of returning 24, the
query will now return 45 as the maximum price.
6 In the Source window, press F7 repeatedly to step through the code. As
you do so, the values of the variables appear in the Local Variables
window. Step through until the stmt and result variables have values.
7 Expand the result object by clicking the icon next to it, or setting the
cursor on the line and pressing ENTER. This displays the values of the
fields in the object.
8 When you have experimented with inspecting and modifying variables,
press F5 to complete the execution of the query and finish the tutorial.
Inspecting static variables: In addition to local variables, you can display class-level variables (static
variables) in the debugger Statics window, and inspect their values in the
Inspection window. For more information, see the debugger online Help.
Common debugger tasks
Writing debugger scripts
sybase.asa.procdebug.DebugScript class
You can write scripts to control debugger behavior. Scripts are classes that
extend the DebugScript class. For more information on scripts, see
"Writing debugger scripts" on page 624.
The DebugScript class is as follows:
sybase.asa.procdebug.IDebugAPI interface
You can write scripts to control debugger behavior. Scripts are Java classes
that use the IDebugAPI interface to control the debugger. For more
information on scripts, see "Writing debugger scripts" on page 624.
The IDebugAPI interface is as follows:
sybase.asa.procdebug.IDebugWindow interface
You can write scripts to control debugger behavior. In scripts, the debugger
window is represented by the IDebugWindow interface. For more
information on scripts, see "Writing debugger scripts" on page 624.
The IDebugWindow interface is as follows:
PART FIVE
CHAPTER 21  Backup and Data Recovery
About this chapter This chapter describes how to protect your data against operating system
crashes, file corruption, disk failures, and total machine failure.
The chapter describes how to make backups of your database, how to restore
data from a backup, and how to run your server so that performance and data
protection concerns are addressed.
Contents
Topic Page
Introduction to backup and recovery 628
Understanding backups 633
Designing backup procedures 636
Configuring your database for data protection 645
Backup and recovery internals 649
Backup and recovery tasks 656
Introduction to backup and recovery
Chapter 21 Backup and Data Recovery
Media failure The database file and/or the transaction log become
unusable. This may occur because the file system or the device storing the
database file becomes unusable, or it may be because of file corruption.
For example:
♦ The disk drive holding the database file or the transaction log file
becomes unusable.
♦ The database file or the transaction log file becomes corrupted. This can
happen because of hardware or software problems.
Backups protect your data against media failure.
$ For more information, see "Understanding backups" on page 633.
♦ Offline backup The above examples are all online backups, executed
against a running database. You can make offline backups by copying
the database files when the database is not running.
Notes You must have DBA authority or REMOTE DBA authority to make backups
of a database.
Understanding backups
To understand what files you need to back up, and how you restore databases
from backups, you need to understand how the changes made to the database
are stored on disk.
The transaction log is a key component of backup and recovery, and is also
essential for data replication using SQL Remote or the Replication Agent.
By default, all databases use transaction logs. Using a transaction log is
optional, but you should always use a transaction log unless you have a
specific reason not to. Running a database with a transaction log provides
much greater protection against failure, better performance, and the ability to
replicate data.
$ For information on how to use a transaction log to protect against
media failure, see "Protecting against media failure on the database file" on
page 645.
When changes are forced to disk Like the database file, the transaction log is organized into pages: fixed-size
areas of memory. When a change is recorded in the transaction log, it is
made to a page in memory. The change is forced to disk when the earlier of
the following happens:
♦ The page is full.
♦ A COMMIT is executed.
In this way, completed transactions are guaranteed to be stored on disk,
while performance is improved by avoiding a write to the disk on every
operation.
$ Configuration options are available to allow advanced users to tune the
precise behavior of the transaction log. For more information, see
"COOPERATIVE_COMMITS option" on page 168 of the book ASA
Reference, and "DELAYED_COMMITS option" on page 172 of the book
ASA Reference.
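As a sketch, both options are set with ordinary SET OPTION statements (the values shown are illustrative; check the ASA Reference entries cited above for the defaults and exact effects):

```sql
-- Illustrative only: tune how transaction log pages are written to disk.
-- See COOPERATIVE_COMMITS and DELAYED_COMMITS in the ASA Reference.
SET OPTION PUBLIC.COOPERATIVE_COMMITS = 'OFF';
SET OPTION PUBLIC.DELAYED_COMMITS = 'ON';
```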
Transaction log mirrors A transaction log mirror is an identical copy of the transaction log,
maintained at the same time as the transaction log. If a database has a
mirrored transaction log, every database change is written to both the
transaction log and the transaction log mirror. By default, databases do not
have transaction log mirrors.
A transaction log mirror provides extra protection for critical data. It enables
complete data recovery in the case of media failure on the transaction log. A
mirrored transaction log also enables a database server to carry out automatic
validation of the transaction log on database startup.
$ For more information, see "Protecting against media failure on the
transaction log" on page 645.
Media failure on the database file If your database file is not usable, but
your transaction log is still usable, you can recover all committed changes to
the database as long as you have a proper backup procedure in place. All
information since the last backed up copy of the database file is held in
backed up transaction logs, or in the online transaction log.
$ For information on how to configure your database system, see
"Protecting against media failure on the database file" on page 645.
Media failure on the transaction log file Unless you use a mirrored
transaction log, you cannot recover information entered between the last
database checkpoint and a media failure on the transaction log. For this
reason, it is recommended that you use a mirrored transaction log in setups
such as SQL Remote consolidated databases, where loss of the transaction
log can lead to loss of key information, or the breakdown of a replication
system.
$ For more information, see "Protecting against media failure on the
transaction log" on page 645.
Designing backup procedures
Types of backup
This section assumes you are familiar with basic concepts related to backups.
$ For more information about concepts related to backups, see
"Introduction to backup and recovery" on page 628, and "Understanding
backups" on page 633.
Backups can be categorized in several ways:
♦ Full backup and incremental backup A full backup is a backup of
both the database file and of the transaction log. An incremental
backup is a backup of the transaction log only. Typically, full backups
are interspersed with several incremental backups.
$ For information on making backups, see "Making a full backup"
on page 656, and "Making an incremental backup" on page 657.
♦ Server-side backup and client-side backup You can execute an
online backup from a client machine using the Backup utility. To
execute a server-side backup, you execute the BACKUP statement; the
database server then carries out the backup.
You can easily build server-side backup into applications because it is a
SQL statement. Also, server-side backup is generally faster because the
data does not have to be transported across the client/server
communications system.
Instructions for server-side and client-side backups are given together
for each backup procedure.
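As a sketch, the two kinds of backup look like this as server-side BACKUP statements (the directory path is illustrative; see "BACKUP statement" in the ASA Reference for the full syntax):

```sql
-- Full backup: copies both the database file and the transaction log.
BACKUP DATABASE DIRECTORY 'c:\backup';

-- Incremental backup: copies the transaction log only.
BACKUP DATABASE DIRECTORY 'c:\backup'
TRANSACTION LOG ONLY;
```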
Scheduling backups
Most backup schedules involve periodic full backups interspersed with
several incremental backups of the transaction log. There is no simple rule
for deciding how often to make backups of your data: the frequency depends
on the importance of your data, how often it changes, and other factors. A
common starting point is to carry out a weekly full backup, with daily
incremental backups of the transaction log. Both full and incremental
backups can be carried out online (while the database is running) or offline,
server-side or client-side. Archive backups are always full backups.
The kinds of failure against which a backup schedule protects you depend
not only on how often you make backups, but on how you operate your
database server.
$ For more information, see "Configuring your database for data
protection" on page 645.
You should always keep more than one full backup. If you make a backup on
top of a previous backup, a media failure in the middle of the backup leaves
you with no backup at all. You should also keep some of your full backups
offsite to protect against fire, flood, earthquake, theft, or vandalism.
You can use the event scheduling features of Adaptive Server Anywhere to
perform online backups automatically at scheduled times.
$ For information on scheduling operations such as backups, see
"Automating Tasks Using Schedules and Events" on page 481.
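For example, a scheduled event along the following lines could run a nightly incremental backup (the event name, schedule name, time, and path are illustrative; see "Automating Tasks Using Schedules and Events" for the exact syntax):

```sql
-- Sketch: run an incremental backup every night at 1:00 AM.
CREATE EVENT NightlyBackup
SCHEDULE nightly_backup
    START TIME '1:00AM' EVERY 24 HOURS
HANDLER
BEGIN
    BACKUP DATABASE DIRECTORY 'c:\backup'
    TRANSACTION LOG ONLY;
END;
```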
[Diagram: making a backup while continuing to use the original transaction log. Before the backup, db_name.db and db_name.log sit in the database and log directories; after the backup, copies of both files exist in the backup directory, and the original db_name.db and db_name.log remain in use.]
$ For information on how to carry out backups of this type, see "Making
a backup, continuing to use the original transaction log" on page 660.
[Diagram: making a backup, deleting the original transaction log. Before the backup, db_name.db and db_name.log sit in the database and log directories; after the backup, copies of both files exist in the backup directory, and the online transaction log is truncated.]
Deleting the transaction log after each incremental backup makes recovery
from a media failure on the database file a more complex task, as there may
then be several different transaction logs since the last full backup. Each
transaction log needs to be applied in sequence to bring the database up to
date.
You can use this kind of backup for a database that is operating as a MobiLink
consolidated database, as MobiLink does not rely on the transaction log. If
you are running SQL Remote or the MobiLink dbsync.exe application, you
must use a scheme that preserves old transaction logs, as in the
following section.
$ For information on how to carry out a backup of this type, see "Making
a backup, deleting the original transaction log" on page 661.
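A backup of this kind can be sketched with the BACKUP statement's TRUNCATE clause (the path is illustrative; confirm the clause in the ASA Reference):

```sql
-- Sketch: back up the transaction log, then delete its contents.
BACKUP DATABASE DIRECTORY 'c:\backup'
TRANSACTION LOG ONLY
TRANSACTION LOG TRUNCATE;
```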
[Diagram: making a backup, renaming the original transaction log. Before the backup, db_name.db and db_name.log sit in the database and log directories; after the backup, copies of both files exist in the backup directory, the old online log is renamed to YYMMDDnn.log, and a new db_name.log transaction log is started.]
$ For information on how to carry out a backup of this kind, see "Making
a backup, renaming the original transaction log" on page 662.
Offline transaction logs In addition to backing up the transaction log, the backup operation renames
the online transaction log to a filename of the form YYMMDDnn.log. This file
is no longer used by the database server, but is available for the Message
Agent and the Replication Agent. It is called an offline transaction log. A
new online transaction log is started with the same name as the old online
transaction log.
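A backup of this kind can be sketched with the BACKUP statement's RENAME clause (the path is illustrative; confirm the clause in the ASA Reference):

```sql
-- Sketch: back up the transaction log, rename the online log to
-- YYMMDDnn.log, and start a new online transaction log.
BACKUP DATABASE DIRECTORY 'c:\backup'
TRANSACTION LOG ONLY
TRANSACTION LOG RENAME;
```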
There is no Year 2000 issue with the two-digit year in the YYMMDDnn.log
filenames: the names are used only to distinguish the files, not to order them.
For example, the renamed log file from the first backup on December 10,
2000, is named 00121000.log. The first two digits indicate the year, the
second two digits indicate the month, the third two digits indicate the day of
the month, and the final two digits distinguish among different backups made
on the same day.
The Message Agent and the Replication Agent can use the offline copies to
provide the old transactions as needed. If you set the DELETE_OLD_LOGS
database option to ON, then the Message Agent and Replication Agent delete
the offline files when they are no longer needed, saving disk space.
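Setting the option is a one-line statement; for example:

```sql
-- Allow the Message Agent and Replication Agent to delete offline
-- transaction logs once they are no longer needed.
SET OPTION PUBLIC.DELETE_OLD_LOGS = 'ON';
```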
You can carry out direct backup to a tape drive using an archive backup.
Archive backups are always full backups. An archive backup makes copies
of both the database file and the transaction log, but these copies are placed
into a single file.
$ You can make archive backups using the BACKUP statement. For
information, see "Backing up a database directly to tape" on page 665, and
"BACKUP statement" on page 389 of the book ASA Reference.
$ You can restore the backup using the RESTORE statement. For
information, see "Restoring an archive backup" on page 670, and
"RESTORE statement" on page 577 of the book ASA Reference.
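As a sketch, an archive backup and its restore might look as follows (the tape device name and file path are illustrative; see the BACKUP and RESTORE statements in the ASA Reference for the exact syntax on your platform):

```sql
-- Archive backup: database file and transaction log in a single image.
BACKUP DATABASE TO '\\.\tape0'
ATTENDED OFF;

-- Restore the archive to a database file.
RESTORE DATABASE 'c:\asa\mydb.db'
FROM '\\.\tape0';
```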
Adding more files to the recovery scenario increases the number of places
where recovery can fail. As your backup and recovery strategy develops,
you should test your recovery plan.
$ For information on how to implement a backup and recovery plan, see
"Implementing a backup and recovery plan" on page 656.
Tip
You can ensure that no transactions are in progress when you make a
backup by using the BACKUP statement with the WAIT BEFORE
START clause.
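For example:

```sql
-- Sketch: do not start the backup while any transaction is in progress.
BACKUP DATABASE DIRECTORY 'c:\backup'
WAIT BEFORE START;
```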
If a base table in the database file is corrupt, you should treat the situation as
a media failure, and recover from your previous backup. If an index is
corrupt, you may want to unload the database without indexes, and reload.
$ For instructions, see "Validating a database" on page 658, and
"Validating a transaction log" on page 659.
$ For information on read-only databases, see "–r command-line option"
on page 33 of the book ASA Reference.
Where to store the transaction log mirror There is a performance penalty for using a mirrored log, as each database log
write operation must be carried out twice. The performance penalty depends
on the nature and volume of database traffic and on the physical
configuration of the database and logs.
A transaction log mirror should be kept on a separate device from the
transaction log. This improves performance. Also, if either device fails, the
other copy of the log keeps the data safe for recovery.
Alternatives to a transaction log mirror Alternatives to a mirrored transaction log are to use a disk controller that
provides hardware mirroring, or operating system-level software mirroring,
as provided by Windows NT and NetWare. Generally, hardware mirroring is
more expensive, but provides better performance.
For more information Live backups provide additional protection that has some similarities to
transaction log mirroring. For more information, see "Differences between
live backups and transaction log mirrors" on page 646.
For information on creating a database with a mirrored transaction log, see
"The Initialization utility" on page 92 of the book ASA Reference.
For information on changing an existing database to use a mirrored
transaction log, see "The Transaction Log utility" on page 120 of the book
ASA Reference.
Live backups and regular backups The live backup of the transaction log is always the same length as or shorter
than the active transaction log. When a live backup is running and another
backup restarts the transaction log (dbbackup -r or dbbackup -x), the live
backup automatically truncates the live backup log and restarts the live
backup at the beginning of the new transaction log.
$ For information on how to make a live backup, see "Making a live
backup" on page 667.
Configuring your database for data protection
If a primary key does not exist, the engine looks for a UNIQUE NOT NULL
index on the table (or a UNIQUE constraint). A UNIQUE index that allows
NULL is not sufficient.
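A minimal sketch of an acceptable substitute for a primary key (table and column names are illustrative): a UNIQUE index over a column declared NOT NULL.

```sql
-- No primary key, but rows can still be identified uniquely:
CREATE TABLE ticket (
    code CHAR(10) NOT NULL,
    info VARCHAR(200)
);
CREATE UNIQUE INDEX ticket_code ON ticket ( code );
```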
Backup internals
When you issue a backup instruction, the database may be in use by many
people. If you later need to use your backup to restore your database, you
need to know what information has been backed up, and what has not.
The database server carries out a backup as follows:
1 Issue a checkpoint. Further checkpoints are disallowed until the backup
is complete. While the backup is taking place, any pages modified by
other connections are saved before modification in the temporary file,
instead of the database file, so that the backup image is made as of the
checkpoint.
2 Make a backup of the database file, if the backup instruction is for a full
backup.
3 Make a backup of the transaction log.
The backup includes all operations recorded in the transaction log before
the final page of the log is read. This may include instructions issued
after the backup instruction was issued.
The backup copy of the transaction log is generally smaller than the
online transaction log. The database server allocates space to the online
transaction logs in multiples of 64K, so the transaction log file size
generally includes empty pages. However, only the non-empty pages are
backed up.
4 If the backup instruction requires the transaction log to be truncated or
renamed, then wait until there are no uncommitted transactions before
truncating or renaming the log file.
If the database is busy, this wait may be significant.
$ For information on renaming and truncating the transaction log,
see "Designing backup procedures" on page 636.
5 Mark the backup image of the database to indicate that recovery is
needed. This causes any operations that happened since the start of the
backup to be applied. It also causes operations that were incomplete at
the checkpoint to be undone, if they were not committed.
Backup and recovery internals
[Diagram: a database page A is read from the database file into the cache.]
Changes made to the page are applied to the copy in the cache. For
performance reasons they are not written immediately to the database file on
disk.
[Diagram: the page is changed to B in the cache; the database file still holds A, and the change A->B is recorded in the transaction log.]
When the cache is full, the changed page may get written out to disk. The
copy in the checkpoint log remains unchanged.
[Diagram: the changed page B is written to the database file; the checkpoint log keeps the original copy of page A, and the transaction log records the change A->B.]
At a checkpoint, all the data in the database is held on disk in the database
file. The information in the database file matches that in the transaction log.
The checkpoint represents a known state of the database on disk. During
recovery, the database is first recovered to the most recent checkpoint, and
then changes since that checkpoint are applied.
$ For more information, see "How the database server decides when to
checkpoint" on page 653.
[Diagram: the dirty page is overwritten in the database file; the checkpoint log holds a copy of the original page, and the transaction log file records the change A->B.]
There are two database options that allow you to control the frequency of
checkpoints. CHECKPOINT_TIME controls the maximum desired time
between checkpoints and RECOVERY_TIME controls the maximum desired
time for recovery in the event of system failure.
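Both options are set like other database options; the values below are illustrative (see the ASA Reference for the defaults and units):

```sql
-- Sketch: checkpoint at least every 60 minutes, and aim for recovery
-- after a system failure to take no more than 2 minutes.
SET OPTION PUBLIC.CHECKPOINT_TIME = 60;
SET OPTION PUBLIC.RECOVERY_TIME = 2;
```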
The writing of dirty pages to disk is carried out by a task within the server
called the idle I/O task. This task shares processing time with other database
tasks.
There is a threshold for the number of dirty pages, below which writing of
database pages does not take place.
When the database is busy, the urgency is low, and the cache only has a few
dirty pages, the idle I/O task runs at a very low priority and no writing of
dirty pages takes place.
Once the urgency exceeds 30%, the priority of the idle I/O task is increased.
At intervals, the priority is increased again. As the urgency becomes high,
the engine shifts its primary focus to writing dirty pages until the number
gets below the threshold again. However, the engine only writes out pages
during the idle I/O task if the number of dirty pages is greater than the
threshold.
If, because of other activity in the database, the number of dirty pages falls to
zero, and if the urgency is 50% or more, then a checkpoint takes place
automatically, since it is a convenient time.
Both the checkpoint urgency and recovery urgency values increase in value
until the checkpoint occurs, at which point they drop to zero. They do not
decrease otherwise.
If the check finds that the log and the mirror are different in the body of the
shorter of the two, one of the two files is corrupt. In this case, the database
does not start, and an error message is generated saying that the transaction
log or its mirror is invalid.
Backup and recovery tasks
$ For information on how to carry out the backup operation, see the
following:
♦ "Making a backup, continuing to use the original transaction log" on
page 660.
♦ "Making a backup, deleting the original transaction log" on
page 661.
♦ "Making a backup, renaming the original transaction log" on
page 662.
Notes Validity checking requires exclusive access to entire tables in your database.
For more information and alternative approaches, see "Ensuring your
database is valid" on page 644.
If you validate your backup copy of the database, make sure you do so in
read-only mode. Start the database server with the –r command-line option
to use read-only mode.
Notes The backup copies of the database file and transaction log file have the same
names as the online versions of these files. For example, if you make a
backup of the sample database, the backup copies are called asademo.db and
asademo.log. When you repeat the backup statement, choose a new backup
directory to avoid overwriting the backup copies.
$ For information on how to make a repeatable incremental backup
command, by renaming the backup copy of the transaction log, see
"Renaming the backup copy of the transaction log during backup" on
page 665.
Validating a database
Validating a database is a key part of the backup operation. For information,
see "Ensuring your database is valid" on page 644.
For an overview of the backup operation, see "Making a full backup" on
page 656.
$ For more information, see "The Validation utility" on page 136 of the
book ASA Reference.
Notes If you are checking the validity of a backup copy, you should run the
database in read-only mode so that it is not modified in any way. You can
only do this when there were no transactions in progress during the backup.
$ For information on running databases in read-only mode, see "–r
command-line option" on page 33 of the book ASA Reference.
Notes If you do have errors reported, you can drop all of the indexes and keys on a
table and recreate them. Any foreign keys to the table will also need to be
recreated. Another solution to errors reported by VALIDATE TABLE is to
unload and reload your entire database. You should use the -u option of
dbunload so that it does not try to use a possibly corrupt index to order the
data.
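The repair sequence described above can be sketched as follows (table, column, and index names are illustrative):

```sql
-- Check the table; if errors are reported, drop and recreate the
-- suspect index.
VALIDATE TABLE customer;
DROP INDEX customer.ix_customer_name;
CREATE INDEX ix_customer_name ON customer ( name );
```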
Notes The backup copy of the transaction log is named YYMMDDnn.log, where YY
is the year, MM is the month, DD is the day of the month, and nn increments
when more than one backup is made on the same day. There is no Year 2000
issue with the two-digit year in the YYMMDDnn.log filenames: the names are
used only to distinguish the files, not to order them.
Caution
This command should only be used when the database is not
participating in a SQL Remote or Replication Server replication
system. If your database is a consolidated database in a SQL
Remote replication system, you may have to re-extract the remote
databases.
Without the -f switch, the server reports the lack of a transaction log as
an error. With the switch, the server restores the database to the most
recent checkpoint and then rolls back any transactions that were not
committed at the time of the checkpoint. A new transaction log is then
created.
CHAPTER 22  Importing and Exporting Data
About this chapter Transferring large amounts of data into and out of your database may be
necessary in several situations. For example:
♦ Importing an initial set of data into a new database
♦ Exporting data from your database for use with other applications, such
as spreadsheets
♦ Building new copies of a database, perhaps with a modified structure
♦ Creating extractions of a database for replication or synchronization
This chapter describes how to import data to and export data from databases,
both in text form and in other formats.
Contents
Topic Page
Introduction to import and export 676
Understanding importing and exporting 678
Designing import procedures 683
Designing export procedures 687
Designing rebuild and extract procedures 691
Import and export internals 695
Import tasks 697
Export tasks 701
Rebuild tasks 709
Extract Tasks 714
Introduction to import and export
Chapter 22 Importing and Exporting Data
Understanding importing and exporting
Importing/Exporting
Importing and exporting are administrative tasks that involve reading data
into your database, or writing data out of your database. This data may be
coming from or destined for database systems or programs other than
Adaptive Server Anywhere.
You can import individual tables or portions of tables, from other database
file formats, or from ASCII files. Depending on the format of the data you
are inserting, there is some flexibility as to whether you create the table
before the import, or during the import. You may find importing a useful tool
if you need to add large amounts of data to your database at a time.
You can export individual tables and query results in ASCII format, or in a
variety of formats supported by other database programs. You may find
exporting a useful tool if you need to share large portions of your database,
or extract portions of your database according to particular criteria.
Although Adaptive Server Anywhere import and export procedures work on
one table at a time, you can create scripts that automate the import or export
procedure, allowing you to move data into or out of a number of tables
consecutively.
Loading/Unloading
Loading and unloading are very similar to importing and exporting in that
they involve copying data into and out of your database. They are different,
however, in that loading and unloading usually involve importing or
exporting the entire database, and are usually intended for reuse within an
Adaptive Server Anywhere database.
Loading and unloading are most useful for improving performance,
reclaiming fragmented space, or upgrading your database to a newer version
of Adaptive Server Anywhere.
Rebuilding a database
Tools
Import Tools The following tools are available for importing:
♦ Interactive SQL import wizard
♦ INPUT statements
Temporary Tables
Temporary tables, whether local or global, serve the same purpose:
temporary storage of data. The difference between the two, and the
advantages of each, however, lies in the duration each table exists.
A local temporary table exists only for the duration of a connection or, if
defined inside a compound statement, for the duration of the compound
statement. It is useful when you need to load a set of data only once.
The definition of the global temporary table remains in the database
permanently, but the rows exist only within a given connection. When you
close the database connection, the data in the global temporary table
disappears. However, the table definition remains with the database for you
to access when you open your database next time. Global temporary tables
are useful when you need to load a set of data repeatedly, or when you need
to merge tables with different structures.
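A sketch of each kind (table and column names are illustrative):

```sql
-- Local temporary table: exists only for this connection.
DECLARE LOCAL TEMPORARY TABLE load_once (
    id  INTEGER,
    val VARCHAR(100)
);

-- Global temporary table: definition is permanent, rows are
-- per-connection.
CREATE GLOBAL TEMPORARY TABLE load_often (
    id  INTEGER,
    val VARCHAR(100)
) ON COMMIT PRESERVE ROWS;
```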
Internal/external
The Interactive SQL INPUT and OUTPUT commands are external to the
database (client-side). If ISQL is being run on a different machine than the
database server, paths to files being read or written are relative to the client.
An INPUT is recorded in the transaction log as a separate INSERT statement
for each row read. As a result, INPUT is considerably slower than LOAD
TABLE. This also means that ON INSERT triggers fire during an
INPUT. Missing values are inserted as NULL on nullable columns, as 0
(zero) on non-nullable numeric columns, and as an empty string on non-
nullable non-numeric columns. The OUTPUT statement is useful when
compatibility is an issue, since it can write out the result set of a SELECT
statement in any one of a number of file formats.
The LOAD TABLE, UNLOAD TABLE, and UNLOAD statements, on the
other hand, are internal to the database (server-side). Paths to files being
written or read are relative to the database server. Only the command travels
to the database server, where all processing happens. A LOAD TABLE
statement is recorded in the transaction log as a single command. The data
file must contain the same number of columns as the table to be loaded.
Missing values on columns with a default value are inserted as NULL,
zero, or an empty string if the DEFAULTS option is set to OFF (the default), or
as the default value if the DEFAULTS option is set to ON. Internal importing
and exporting provides access only to text and BCP formats, but it is a faster
method.
Data formats
Interactive SQL supports the following import and export file formats:
Designing import procedures
Choose the Interactive SQL INPUT statement when you want to import data
into one or more tables, when you want to automate the import process using
a command file, or when you want to import data in a format other than text.
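As a sketch (the table name, path, and format are illustrative; the path is interpreted on the client machine):

```sql
-- Client-side import through Interactive SQL.
INPUT INTO employee_list
FROM 'c:\data\input.txt'
FORMAT ASCII;
```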
LOAD TABLE statement You execute the LOAD TABLE statement from the SQL Statements pane of
the Interactive SQL window. It allows you to import data only, into a table,
in an efficient manner, in text/ASCII/FIXED formats. The table must exist
and have the same number of columns as the input file has fields, defined on
compatible data types. The LOAD TABLE statement imports with one row
per line, with values separated by a delimiter.
To use the LOAD TABLE statement, the user must have ALTER permission
on the table. For more information about controlling who can use the LOAD
TABLE statement, see "–gl command-line option" on page 27 of the book
ASA Reference.
Choose the LOAD TABLE statement when you want to import data in text
format. If you have a choice between using the INPUT statement or the
LOAD TABLE statement, choose the LOAD TABLE statement for better
performance.
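A minimal sketch (the table name and path are illustrative; the path is interpreted on the server machine):

```sql
-- Server-side import of comma-delimited text into an existing table.
LOAD TABLE employee_list
FROM 'c:\data\input.txt'
DELIMITED BY ',';
```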
INSERT statement You execute the INSERT statement from the SQL Statements pane of the
Interactive SQL window. Since you include the data you want to place in
your table directly in the INSERT statement, it is considered interactive
input. File formats are not an issue. You can also use the INSERT statement
with remote data access to import data from another database rather than a
file.
Choose the INSERT statement when you want to import small amounts of
data into a single table.
Sybase Central Sybase Central does not provide for importing data. It does provide wizards
for rebuilding (loading or unloading entire databases) or extracting
databases, which are specialized cases of importing and exporting.
Choose Sybase Central when you want to use a wizard to rebuild or extract a
database.
Proxy Tables You can import data directly from another database. Using the Adaptive
Server Anywhere remote data access feature, you can create a proxy table,
which represents a table from the remote database, and then use an INSERT
statement with a SELECT clause to insert data from the remote database into
a permanent table in your database.
$ For more information about remote data access, see "Accessing Remote
Data" on page 867.
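A hedged sketch of the sequence (the server, owner, and table names are hypothetical):

```sql
-- Create a proxy table that maps to a table in the remote
-- database, then copy its rows into a local permanent table.
CREATE EXISTING TABLE remote_employee
AT 'remote_asa..dba.employee';

INSERT INTO local_employee
SELECT * FROM remote_employee
```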
Chapter 22 Importing and Exporting Data
Designing import procedures
Designing export procedures
UNLOAD TABLE statement
You execute the UNLOAD TABLE statement from the SQL Statements pane of the Interactive SQL window. It allows you to export data only, in an efficient manner, in text/ASCII/FIXED formats. The UNLOAD TABLE statement exports one row per line, with values separated by a comma delimiter. The data exports in order by primary key values to make reloading quicker.
To use the UNLOAD TABLE statement, the user must have ALTER or
SELECT permission on the table. For more information about controlling
who can use the UNLOAD TABLE statement, see "–gl command-line
option" on page 27 of the book ASA Reference.
Choose the UNLOAD TABLE statement when you want to export entire tables in text format. If you have a choice among the OUTPUT statement, the UNLOAD statement, and the UNLOAD TABLE statement, choose the UNLOAD TABLE statement for performance reasons.
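For example, a sketch (the table name and output path are hypothetical):

```sql
-- Export every row of a table, ordered by primary key,
-- to a comma-delimited text file on the server machine.
UNLOAD TABLE employee_list
TO 'c:\\temp\\employee_list.txt'
```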
UNLOAD statement
The UNLOAD statement is similar to the OUTPUT statement in that both export query results to a file, and you execute both from the SQL Statements pane of the Interactive SQL window. The UNLOAD statement, however, allows you to export data in a more efficient manner, and in text/ASCII/FIXED formats only. The UNLOAD statement exports one row per line, with values separated by a comma delimiter.
To use the UNLOAD statement, the user must have ALTER or SELECT
permission on the table. For more information about controlling who can use
the UNLOAD statement, see "–gl command-line option" on page 27 of the
book ASA Reference.
Choose the UNLOAD statement when you want to export query results if
performance is an issue, and if output in text format is acceptable. The
UNLOAD statement is also a good choice when you want to embed an
export command in an application.
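A sketch of exporting query results this way (the table, columns, and path are hypothetical):

```sql
-- Export the result set of a query, rather than a whole table.
UNLOAD
SELECT emp_id, emp_lname
FROM employee_list
WHERE state = 'CA'
TO 'c:\\temp\\ca_employees.txt'
```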
Dbunload utility
The dbunload utility and Sybase Central are graphically different but functionally equivalent. You can use either one interchangeably to produce the same results. These tools differ from the Interactive SQL statements in that they can operate on several tables at once. In addition to exporting table data, both tools can also export table schema.
If you want to rearrange your tables in the database, you can use dbunload to
create the necessary command files and modify them as needed. Sybase
Central provides wizards and a GUI interface for unloading one, many or all
of the tables in a database. Tables can be unloaded with structure only, data
only or both structure and data. To unload fewer than all of the tables in a
database, a connection must be established beforehand.
You can also extract one or many tables with or without command files.
These files can be used to create identical tables in different databases.
Choose Sybase Central or the dbunload utility when you want to export in
text format, when you need to process large amounts of data quickly, when
your file format requirements are flexible, or when your database needs to be
rebuilt or extracted.
For more information about exporting entire databases, rebuilding databases
or creating extractions from databases, see "Designing rebuild and extract
procedures" on page 691, or "Export tasks" on page 701.
What is rebuilding?
With importing and exporting, data moves either into or out of your database. Importing reads data into your database; exporting writes data out of it. Often the information is coming from, or going to, another non-Adaptive Server Anywhere database.
Rebuilding, however, combines two functions: unloading and loading. Unloading takes data and schema out of an Adaptive Server Anywhere database, and loading then places the data and schema back into an Adaptive Server Anywhere database. The unloading procedure produces fixed format data
files and a reload.sql file which contains table definitions required to recreate
the table exactly. Running the reload.sql script recreates the tables and loads
the data back into them.
Rebuilding a database can be a time-consuming operation, and can require a
large amount of disk space. As well, the database is unavailable for use while
being unloaded and reloaded. For these reasons, rebuilding a database is not
advised in a production environment unless you have a definite goal in mind.
Designing rebuild and extract procedures
What is extracting?
Extracting creates a remote Adaptive Server Anywhere database from a consolidated Adaptive Server Enterprise or Adaptive Server Anywhere database.
You can use the Sybase Central Extraction wizard or the extraction utility to extract databases. The extraction utility is the recommended way of creating and synchronizing remote databases from a consolidated database.
$ For more information about extraction tools and how to perform
extractions, see "The Database Extraction utility" on page 531 of the book
Replication and Synchronization Guide or "Using the extraction utility" on
page 419 of the book Replication and Synchronization Guide.
You can use the Sybase Central Unload wizard or the dbunload utility to
unload an entire database in ASCII comma-delimited format and to create
the necessary Interactive SQL command files to completely recreate your
database. This may be useful for creating extractions, creating a backup of
your database, or building new copies of your database with the same or a
slightly modified structure. The dbunload utility and Sybase Central are
useful for exporting Adaptive Server Anywhere files intended for reuse
within Adaptive Server Anywhere.
Choose Sybase Central or the dbunload utility when you want to rebuild or extract from your database, when you want to export in text format, when you need to process large amounts of data quickly, or when your file format requirements are flexible.
Upgrading a database
New versions of the ASA database server can be used without upgrading your database. If you want to use features of the new version that require access to new system tables or database options, you must use the upgrade utility to upgrade your database. The upgrade utility does not unload or reload any data.
If you want to use features of the new version that rely on changes in the
database file format, you must unload and reload your database. You should
back up your database after rebuilding the database.
To upgrade your database file, use the new version of Adaptive Server
Anywhere.
For more information about upgrading your database, see "The Upgrade
utility" on page 133 of the book ASA Reference.
Performance tips
Although loading large volumes of data into a database can be very time-consuming, there are a few things you can do to save time:
♦ If you use the LOAD TABLE statement, then bulk mode is not
necessary.
♦ If you are using the INPUT command, run Interactive SQL or the client
application on the same machine as the server. Loading data over the
network adds extra communication overhead. This might mean loading
new data during off hours.
♦ Place data files on a separate physical disk drive from the database. This
could avoid excessive disk head movement during the load.
♦ If you are using the INPUT command, start the server with the -b switch
for bulk operations mode. In this mode, the server does not keep a
rollback log or a transaction log, it does not perform an automatic
COMMIT before data definition commands, and it does not lock any
records.
Without a rollback log, you cannot use savepoints and aborting a
command always causes transactions to roll back. Without automatic
COMMIT, a ROLLBACK undoes everything since the last explicit
COMMIT.
Without a transaction log, there is no log of the changes. You should
back up the database before and after using bulk operations mode
because, in this mode, your database is not protected against media
failure. For more information, see "Backup and Data Recovery" on
page 627.
The server allows only one connection when you use the -b switch.
If you have data that requires many commits, running with the -b option
may slow database operation. At each COMMIT, the server carries out a
checkpoint; this frequent checkpointing can slow the server.
Import and export internals
Import tasks
This section collects together instructions for how to import a variety of data
types using a variety of tools. All tasks assume the database is up and
running unless otherwise specified.
Importing a database
You can use either the Interactive SQL wizard or the INPUT statement to
create a database by importing one table at a time. You can also create a
script that automates this process. However, for more efficient results,
consider reloading a database whenever possible.
$ For more information about importing a database that was previously
unloaded, see "Reloading a Database" on page 709.
Importing data
Importing a table
4 Click the Create a new table with the following name option and enter a
name for the new table in the field.
5 Click Finish to import your data.
If the import is successful, the Messages pane displays the amount of time it took to import the data, and what execution plan was used. If the import is unsuccessful, a message appears indicating the import was unsuccessful.
You can use the CREATE TABLE statement to create the global
temporary table.
2 Use the LOAD TABLE statement to load your data into the global
temporary table.
When you close the database connection, the data in the global temporary table disappears. However, the table definition remains with the database for you to use the next time you open your database.
3 Use the INSERT statement with a FROM SELECT clause to extract and
summarize data from the temporary table and put it into one or more of
the permanent database tables.
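Taken together, the three steps might be sketched as follows (the table, column, and file names are hypothetical):

```sql
-- 1. Create the global temporary table.
CREATE GLOBAL TEMPORARY TABLE staging_sales (
    region  CHAR(10),
    amount  NUMERIC(10,2)
);

-- 2. Bulk-load the text file into it.
LOAD TABLE staging_sales
FROM 'c:\\temp\\sales.txt'
DELIMITED BY ',';

-- 3. Summarize into a permanent table.
INSERT INTO sales_summary
SELECT region, SUM( amount )
FROM staging_sales
GROUP BY region
```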
Export tasks
This section collects together instructions for tasks related to exporting. All
tasks assume that you have your database up and running.
$ For information about additional command-line switches you can apply
to the dbunload utility, see "The dbunload command-line utility" on
page 127 of the book ASA Reference.
$ For more information about connection parameter switches that allow
you to export from a non-running database, see "Connection and
Communication Parameters" on page 43 of the book ASA Reference.
Exporting a database
8 Specify what portion of the database you want to unload, and then click
Next.
You can choose:
♦ Extract structure and data
♦ Extract structure only
♦ Extract data only
Choose the Extract structure and data option to unload the entire
database.
9 Specify whether you want to do an Internal unload or an External unload, and click Next.
10 Specify the number of levels of view dependency.
Specifying levels of view dependency allows you to recreate views based upon other views. For example, if you have one view based upon existing tables, you would enter the number 1 in this field. View 1 is independent and can be recreated from the tables alone. If, however, you have a second view based upon the first view, you would enter the number 2 in this field. View 2 is dependent on view 1, and cannot be created until view 1 is created first.
11 Specify whether you want to order the data.
Exporting the data in an ordered format means that the data will be
reloaded in an ordered format. This is useful if you want to improve
performance of your database, or bypass a corrupted index.
12 Specify the path and location for the unloaded data and click Next.
You can use the Browse button to locate the directory you want to save
the database to. If the directory you choose does not exist, you will be
prompted to confirm creation of the directory.
13 Confirm that the information you specified in the wizard is correct and
click Finish to unload the database.
The Unload/Extract Database Message Window replaces the Unload an
Adaptive Server Anywhere Database wizard, and displays messages
about which files contain which tables in your database.
14 Close the Unload/Extract Database Message Window.
Exporting tables
In addition to the methods described below, you can also export a table by
selecting all the data in a table and exporting the query results. For more
information, see "Exporting query results" on page 705.
Tip
You can export views just as you would export tables.
Tips
You can combine the APPEND and VERBOSE clauses to append both results and messages to an existing file. For example, type OUTPUT TO 'c:\filename.sql' APPEND VERBOSE. For more information about APPEND and VERBOSE, see the "OUTPUT statement" on page 559 of the book ASA Reference.
The OUTPUT TO statement and its APPEND and VERBOSE clauses are equivalent to the >#, >>#, >&, and >>& operators of earlier versions of the software. You can still use these operators to redirect data, but the new Interactive SQL statements allow for more precise output and easier-to-read code.
4 Click Make Permanent if you want the changes to become the default, or
click OK if you want the changes to be in effect only for this session.
Rebuild tasks
Rebuilding a database involves unloading and reloading your entire database.
You can carry out this operation from Sybase Central or using the command-line utilities.
There are additional switches available for the dbunload utility that allow
you to export variations of the following types of data, as well as connection
parameter switches that allow you to specify a running or non-running
database and database parameters.
It is good practice to make backups of your database before rebuilding.
Reloading a Database
5 Click on the connection you want to use in the box, and then click Next.
6 Specify the path and location for the unloaded database command file
and click Next.
You can use the Browse button to locate the directory you want to save
the command file to. The command file has the .sql extension and is
necessary to rebuild your database from the data files you unload.
7 Specify how you would like to rebuild your database, and then click
Next.
You can choose to:
♦ Reload into a new database. If you choose this option, you must
also specify the name and location of the new database.
♦ Reload into an existing database. If you choose this option, the
Connect dialog box appears. You must specify a database and
connection parameters to continue with the rebuild.
♦ Replace the original database. If you choose this option, you must
also specify where to place the old log file.
$ For more information about the Do not reload the data option, see
"Exporting a database" on page 701.
8 Specify what portion of the database you want to rebuild, and then click
Next.
You can choose:
♦ Extract structure and data
♦ Extract structure only
♦ Extract data only
9 Specify whether you want to do an Internal unload or an External unload, and click Next.
10 Specify the number of levels of view dependency.
Specifying levels of view dependency allows you to recreate views based upon other views. For example, if you have one view based upon existing tables, you would enter the number 1 in this field. View 1 is independent and can be recreated from the tables alone. If, however, you have a second view based upon the first view, you would enter the number 2 in this field. View 2 is dependent on view 1, and cannot be created until view 1 is created first.
11 Specify whether you want to order the data.
Exporting the data in an ordered format means that the data will be
reloaded in an ordered format. This is useful if you want to improve
performance of your database, or bypass a corrupted index.
12 Specify the path and location for the unloaded data and click Next.
You can use the Browse button to locate the directory you want to save
the database to. If you choose a location that does not exist, you will be
prompted to confirm creation of the new directory.
13 Confirm that the information you specified in the wizard is correct and
click Finish to unload the database.
The Adaptive Server Anywhere Tools Console replaces the Unload an
Adaptive Server Anywhere Database wizard, and displays messages
about which files contain which tables in your database. When the
unload finishes, the message ’Complete’ appears.
14 Close the Unload/Extract Database Message Window.
8 Use dblog on the new database with the ending offset noted in step 3 as
the -z parameter, and also set the relative offset to zero.
dblog -x 0 -z 137829 database-name.db
9 When you run the Message Agent, provide it with the location of the
original off-line directory on its command-line.
10 Start the database. You can now allow user access to the reloaded
database.
Extract Tasks
For more information on how to perform database extractions, see
♦ "The Database Extraction utility" on page 531 of the book Replication
and Synchronization Guide
♦ "Extraction utility options" on page 534 of the book Replication and
Synchronization Guide
♦ "Extracting groups" on page 423 of the book Replication and
Synchronization Guide
♦ "Extracting the remote databases" on page 144 of the book Replication
and Synchronization Guide
♦ "Extracting a remote database in Sybase Central" on page 531 of the
book Replication and Synchronization Guide
CHAPTER 23
Managing User IDs and Permissions
About this chapter
Each user of a database must have a name they type when connecting to the database, called a user ID. This chapter describes how to manage user IDs.
Contents
Topic Page
Database permissions overview 716
Setting user and group options 719
Managing individual user IDs and permissions 720
Managing connected users 731
Managing groups 732
Database object names and prefixes 739
Using views and procedures for extra security 741
How user permissions are assessed 744
Managing the resources connections use 745
Users and permissions in the system tables 746
Database permissions overview
Chapter 23 Managing User IDs and Permissions
Adding new users
The DBA has the authority to add new users to the database. As the DBA
adds users, they are also granted permissions to carry out tasks on the
database. Some users may need to simply look at the database information
using SQL queries, others may need to add information to the database, and
others may need to modify the structure of the database itself. Although
some of the responsibilities of the DBA may be handed over to other user
IDs, the DBA is responsible for the overall management of the database by
virtue of the DBA authority.
The DBA has authority to create database objects and assign ownership of
these objects to other user IDs.
Permission    Description
ALTER         Permission to alter the structure of a table or create a trigger on a table
DELETE        Permission to delete rows from a table or view
INSERT        Permission to insert rows into a table or view
REFERENCES    Permission to create indexes on a table, and to create foreign keys that reference a table
SELECT        Permission to look at information in a table or view
UPDATE        Permission to update rows in a table or view. This may be granted on a set of columns in a table only
ALL           All the above permissions
Managing individual user IDs and permissions
Initial permissions for new users
By default, new users are not assigned any permissions beyond connecting to the database and viewing the system tables. To access tables in the database,
they need to be assigned permissions.
The DBA can set the permissions granted automatically to new users by
assigning permissions to the special PUBLIC user group, as discussed in
"Special groups" on page 737.
Example
Add a new user to the database with the user ID of M_Haneef and a
password of welcome.
GRANT CONNECT TO M_Haneef
IDENTIFIED BY welcome
$ See also
♦ "GRANT statement" on page 526 of the book ASA Reference
Changing a password
Changing a user's password
Using the GRANT statement, you can change your password or, if you have DBA authority, that of another user. For example, the following
command changes the password for user ID M_Haneef to new_password:
GRANT CONNECT TO M_Haneef
IDENTIFIED BY new_password
Changing the DBA password
The default password for the DBA user ID for all databases is SQL. You should change this password to prevent unauthorized access to your
database. The following command changes the password for user ID DBA to
new_password:
GRANT CONNECT TO DBA
IDENTIFIED BY new_password
Notes
♦ Only the DBA may grant DBA or RESOURCE authority to database
users.
♦ DBA authority is very powerful, since anyone with this authority can carry out any action on the database and has access to all the information in it. It is wise to grant DBA authority to only a few people.
♦ Consider giving users with DBA authority two user IDs, one with DBA
authority and one without, so they connect as DBA only when
necessary.
♦ RESOURCE authority allows the user to create new database objects,
such as tables, views, indexes, procedures, or triggers.
Tips
Legend for the columns on the Permissions page: A=Alter, D=Delete,
I=Insert, R=Reference, S=Select, U=Update
You can also assign permissions from the user/group property sheet. To
assign permissions to many users and groups at once, use the table’s
property sheet. To assign permissions to many tables at once, use the
user’s property sheet.
Example 1
All table permissions are granted in a very similar fashion. You can grant
permission to M_Haneef to delete rows from the table named sample_table
as follows:
1 Connect to the database as a user with DBA authority, or as the owner of
sample_table.
2 Type and execute the following SQL statement:
GRANT DELETE
ON sample_table
TO M_Haneef
Example 2
You can grant permission to M_Haneef to update the column_1 and
column_2 columns only in the table named sample_table as follows:
1 Connect to the database as a user with DBA authority, or as the owner of
sample_table.
2 Type and execute the following SQL statement:
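(The statement itself falls on a page break in the original; the following is a sketch of the column-level grant described above, with the syntax assumed from the GRANT statement reference.)

```sql
GRANT UPDATE ( column_1, column_2 )
ON sample_table
TO M_Haneef
```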
Table and view permissions are limited in that they apply to all the data in a table or view (except for the UPDATE permission, which may be restricted to a set of columns). You can fine-tune user permissions by creating procedures that carry out actions on tables, and then granting users the permission to execute the procedure.
$ See also
♦ "GRANT statement" on page 526 of the book ASA Reference
Tip
You can also assign permissions from the user/group property sheet. To
assign permissions to many users and groups at once, use the view’s
property sheet. To assign permissions to many views at once, use the
user’s property sheet.
Behavior change
There was a behavior change with Version 5 of the software concerning
the permission requirements. Previously, permissions on the underlying
tables were required to grant permissions on views.
$ See also
♦ "GRANT statement" on page 526 of the book ASA Reference
$ See also
♦ "GRANT statement" on page 526 of the book ASA Reference
Tip
You can also assign permissions from the user/group property sheet. To
assign permissions to many users and groups at once, use the view’s
property sheet. To assign permissions to many views at once, use the
user’s property sheet.
Execution permissions of procedures
Procedures execute with the permissions of their owner. Any procedure that updates information in a table will execute successfully only if the owner of the procedure has UPDATE permission on the table.
As long as the procedure owner has the proper permissions, the procedure
executes successfully when called by any user assigned permission to
execute it, whether or not they have permissions on the underlying table.
You can use procedures to allow users to carry out well-defined activities on
a table, without having any general permissions on the table.
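The pattern might be sketched as follows (the procedure, table, and parameter names are hypothetical):

```sql
-- The owner creates a procedure that performs one controlled
-- update, then grants only EXECUTE permission on it.
CREATE PROCEDURE raise_salary( IN emp INT, IN pct NUMERIC(5,2) )
BEGIN
    UPDATE employee
    SET salary = salary * ( 1 + pct / 100 )
    WHERE emp_id = emp;
END;

GRANT EXECUTE ON raise_salary TO M_Haneef
```

M_Haneef can now call raise_salary without holding any UPDATE permission on the employee table itself.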
$ See also
Tips
You cannot delete users when you select them within a group container.
Managing groups
Once you understand how to manage permissions for individual users (as
described in the previous section), working with groups is straightforward. A
user ID identifies a group, just as it does a single user. However, a group user ID additionally has the permission to have members.
DBA, RESOURCE, and GROUP permissions
When you grant permissions to a group or revoke permissions from a group for tables, views, and procedures, all members of the group inherit those changes. The DBA, RESOURCE, and GROUP permissions are not inherited: you must assign them individually to each individual user ID requiring them.
A group is simply a user ID with special permissions. You grant permissions
to a group and revoke permissions from a group in exactly the same manner
as any other user, using the commands described in "Managing individual
user IDs and permissions" on page 720.
You can construct a hierarchy of groups, where each group inherits
permissions from its parent group. That means that a group can also be a
member of a group. As well, each user ID may belong to more than one
group, so the user-to-group relationship is many-to-many.
The ability to create a group without a password enables you to prevent
anybody from signing on using the group user ID. For more information
about this security feature, see "Groups without passwords" on page 736.
$ For information on altering database object properties, see "Setting
properties for database objects" on page 116.
$ For information about granting remote permissions for groups, see
"Granting and revoking remote permissions" on page 728.
Creating groups
You can create a new group in both Sybase Central and Interactive SQL.
You need DBA authority to create a new group.
The GROUP permission, which gives the user ID the ability to have
members, is not inherited by members of a group. Otherwise, every user ID
would automatically be a group as a consequence of its membership in the
special PUBLIC group.
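In Interactive SQL, the sequence might be sketched as follows (the group and user names are hypothetical):

```sql
-- Create a user ID for the group, give it the ability to
-- have members, then add an existing user to it.
GRANT CONNECT TO personnel IDENTIFIED BY grp_pwd;
GRANT GROUP TO personnel;
GRANT MEMBERSHIP IN GROUP personnel TO M_Haneef
```

Omitting the IDENTIFIED BY clause creates the group without a password, so nobody can sign on as the group user ID.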
$ See also
♦ "GRANT statement" on page 526 of the book ASA Reference
♦ "Creating new users" on page 720
$ See also
♦ "GRANT statement" on page 526 of the book ASA Reference
♦ "Creating new users" on page 720
♦ "Users and Groups properties" on page 1059
Tip
You can perform this action by opening the Users & Groups folder, right-
clicking the user or group that is currently a member of another group,
and choosing Leave Group.
$ See also
♦ "REVOKE statement" on page 581 of the book ASA Reference
♦ "Creating new users" on page 720
♦ "Users and Groups properties" on page 1059
♦ "Deleting users from the database" on page 730
♦ "Deleting groups from the database" on page 737
Permissions of groups
You may grant permissions to groups in exactly the same way as to any other
user ID. Permissions on tables, views, and procedures are inherited by
members of the group, including other groups and their members. Some complexities exist in group permissions that database administrators need to keep in mind.
Notes
Members of a group do not inherit the DBA, RESOURCE, and GROUP
permissions. Even if the personnel user ID has RESOURCE permissions,
the members of personnel do not have RESOURCE permissions.
Ownership of database objects is associated with a single user ID and is not
inherited by group members. If the user ID personnel creates a table, then
the personnel user ID is the owner of that table and has the authority to
make any changes to the table, as well as to grant privileges concerning the
table to other users. Other user IDs who are members of personnel are not
the owners of this table, and do not have these rights. Only granted permissions are inherited. For example, if the DBA or the personnel user ID explicitly grants SELECT permission on the table to the personnel user ID, all group members do have SELECT access to the table.
Creating a group to own the tables
A good practice that allows everyone to access the tables without qualifying names is to create a group whose only purpose is to own the tables. Do not grant any permissions to this group, but make all users members of the group. You can then create permission groups and grant users membership in these permission groups as warranted. For an example, see the section "Database object names and prefixes" on page 739.
Special groups
When you create a database, the SYS and PUBLIC groups are also
automatically created. Neither of these groups has a password, so it is not possible to connect to the database as either SYS or PUBLIC. However,
the two groups serve important functions in the database.
The SYS group
The SYS group owns the system tables and views for the database, which
contain the full description of database structure, including all database
objects and all user IDs.
$ For a description of the system tables and views, together with a
description of access to the tables, see the chapters "System Tables" on
page 961 of the book ASA Reference, and also "System Views" on page 1021
of the book ASA Reference.
The PUBLIC group
The PUBLIC group has CONNECT permissions to the database and
SELECT permission on the system tables. As well, the PUBLIC group is a
member of the SYS group, and has read access for some of the system tables
and views, so any user of the database can find out information about the
database schema. If you wish to restrict this access, you can REVOKE
PUBLIC’s membership in the SYS group.
Any new user ID is automatically a member of the PUBLIC group and
inherits any permissions specifically granted to that group by the DBA. You
can also REVOKE membership in PUBLIC for users if you wish.
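These two revocations might be sketched as follows (the user name is hypothetical):

```sql
-- Remove PUBLIC's read access to the SYS-owned system tables:
REVOKE MEMBERSHIP IN GROUP SYS FROM PUBLIC;

-- Remove an individual user from the PUBLIC group:
REVOKE MEMBERSHIP IN GROUP PUBLIC FROM M_Haneef
```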
$ See also
♦ "REVOKE statement" on page 581 of the book ASA Reference
♦ "Revoking user permissions" on page 729
♦ "Deleting users from the database" on page 730
Database object names and prefixes
Note
Joe and Sally do not have any extra permissions because of their membership
in the company group. The company group has not been explicitly granted
any table permissions. (The company user ID has implicit permission to
look at tables like Salaries because it created the tables and has DBA
authority.) Thus, Joe and Sally still get an error executing either of these
commands:
SELECT *
FROM Salaries;
SELECT *
FROM company.Salaries
In either case, Joe and Sally do not have permission to look at the Salaries
table.
Using views and procedures for extra security
CONNECT "dba" IDENTIFIED BY sql;
GRANT CONNECT
TO SalesManager
IDENTIFIED BY sales
2 Define a view which only looks at sales employees as follows:
CREATE VIEW emp_sales AS
SELECT emp_id, emp_fname, emp_lname
FROM "dba".employee
WHERE dept_id = 200
The table must be identified as "dba".employee, with the owner of the table explicitly specified, for the SalesManager user ID to be able to use the view. Otherwise, when SalesManager uses the view, the SELECT statement refers to a table that user ID does not recognize.
3 Give SalesManager permission to look at the view:
GRANT SELECT
ON emp_sales
TO SalesManager
You use exactly the same command to grant permission on views and on
tables.
Example 2 The next example creates a view which allows the Sales Manager to look at a
summary of sales orders. This view requires information from more than one
table for its definition:
1 Create the view.
CREATE VIEW order_summary AS
SELECT order_date, region, sales_rep, company_name
FROM "dba".sales_order
KEY JOIN "dba".customer
2 Grant permission for the Sales Manager to examine this view.
GRANT SELECT
ON order_summary
TO SalesManager
3 To check that the process has worked properly, connect to the
SalesManager user ID and look at the views you created:
CONNECT SalesManager
IDENTIFIED BY sales ;
SELECT *
FROM "dba".emp_sales ;
SELECT *
FROM "dba".order_summary ;
Other permissions on views The previous example shows how to use views to tailor SELECT permissions. You can grant INSERT, DELETE, and UPDATE permissions on views in the same way.
$ For information on allowing data modification on views, see "Using
views" on page 136.
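For example, to let the SalesManager user ID change employee names through the view created above (a sketch; assumes the view is updatable):

```sql
GRANT UPDATE
ON emp_sales
TO SalesManager
```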
How user permissions are assessed
Users and permissions in the system tables
In addition to these, there are tables and views that contain information about
each object in the database.
C H A P T E R 2 4
Keeping Your Data Secure
About this chapter This chapter describes Adaptive Server Anywhere features that help make
your database secure.
Many of these features are described in more detail elsewhere in the
documentation, and for such features, pointers to the relevant places are
provided.
Database administrators are responsible for data security. In this chapter,
unless otherwise noted, you require DBA authority to carry out the tasks
described.
$ User IDs and permissions are major security-related topics. For
information on these topics, see "Managing User IDs and Permissions" on
page 715.
Contents
Security features overview
Security tips
Controlling database access
Controlling the tasks users can perform
Auditing database activity
Running the database server in a secure fashion
Security features overview
This chapter describes auditing, and presents overviews of the other security
features, providing pointers to where you can find more detailed information.
Chapter 24 Keeping Your Data Secure
Security tips
As database administrator, there are many actions you can take to improve
the security of your data. For example, you can:
♦ Change the default user ID and password The default user ID and
password for a newly created database are DBA and SQL. You should
change this password before deploying the database.
♦ Require long passwords You can set the
MIN_PASSWORD_LENGTH public option to disallow short (and
therefore easily guessed) passwords.
$ For information, see "MIN_PASSWORD_LENGTH option" on
page 186 of the book ASA Reference.
♦ Restrict DBA authority Since DBA authority is very powerful, you
should grant it only to users who absolutely require it. Users with
DBA authority can see and do anything in the database.
You may consider giving users who need DBA authority two user IDs:
one with DBA authority and one without, so they connect as DBA only
when necessary.
♦ Drop external system functions The following external functions
present possible security risks: xp_cmdshell, xp_startmail, xp_sendmail,
and xp_stopmail.
The xp_cmdshell procedure allows users to execute operating system
commands or programs.
The e-mail commands allow users to have the server send e-mail
composed by the user. Malicious users could use either the e-mail or
command shell procedures to perform operating-system tasks with
authorities other than those they have been given by the operating
system. In a security-conscious environment, you should drop these
functions.
$ For information on dropping procedures, see "DROP statement" on
page 491 of the book ASA Reference.
♦ Protect your database files You should protect the database file, log
files, dbspace files, and write files from unauthorized access. Do not
store them within a shared directory or volume.
♦ Protect your database software You should similarly protect
Adaptive Server Anywhere software. Only give users access to the
applications, DLLs, and other resources they require.
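Several of the tips above can be carried out with simple SQL statements. The following is a sketch only (the option and procedure names are as documented; the new password and minimum length are arbitrary examples):

```sql
-- Change the default DBA password.
GRANT CONNECT TO DBA IDENTIFIED BY a_new_password;

-- Require passwords of at least 8 characters.
SET OPTION PUBLIC.MIN_PASSWORD_LENGTH = 8;

-- Drop the external system functions that present security risks.
DROP PROCEDURE xp_cmdshell;
DROP PROCEDURE xp_startmail;
DROP PROCEDURE xp_sendmail;
DROP PROCEDURE xp_stopmail;
```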
Controlling database access
Auditing database activity
Turning on auditing
The database administrator can turn on auditing to add security-related
information to the transaction log.
Auditing is off by default. To enable auditing on a database, the DBA must
set the value of the public option AUDITING to ON. Auditing then remains
enabled until explicitly disabled, by setting the value of the AUDITING
option to OFF. You must have DBA permissions to set this option.
v To turn on auditing:
1 Ensure that your database is upgraded to at least version 6.0.2.
2 If you had to upgrade your database, create a new transaction log.
3 Execute the following statement:
SET OPTION PUBLIC.AUDITING = 'ON'
An auditing example
This example shows how the auditing feature records attempts to access
unauthorized information.
1 As database administrator, turn on auditing.
You can do this from Sybase Central as follows:
♦ Connect to the ASA 7.0 Sample data source. This connects you as
the DBA user.
♦ Right-click the asademo database icon and choose Set Options
from the popup menu.
♦ Select Auditing from the list of options, and enter the value ON in
the Public Setting box. Click Set Permanent Now to set the option
and Close to exit.
Alternatively, you can use Interactive SQL. Connect to the sample
database from Interactive SQL as user ID DBA with password SQL and
execute the following statement:
SET OPTION PUBLIC.AUDITING = 'ON'
2 Add a user to the sample database, named BadUser, with password
BadUser. You can do this from Sybase Central. Alternatively, you can
use Interactive SQL and enter the following statement:
GRANT CONNECT TO BadUser
IDENTIFIED BY 'BadUser'
3 Using Interactive SQL, connect to the sample database as BadUser and
attempt to access confidential information in the employee table with
the following query:
SELECT emp_lname, salary
FROM dba.employee
Running the database server in a secure fashion
C H A P T E R 2 5
Working with Database Files
About this chapter This chapter describes how to create and work with databases and their
associated files.
Contents
Overview of database files
Using additional dbspaces
Working with write files
Using the utility database
Overview of database files
Additional files Other files can also become part of a database system, including:
♦ Additional database files You can spread your data over several
separate files. These additional files are called dbspaces.
$ For information on dbspaces, see "CREATE DBSPACE
statement" on page 419 of the book ASA Reference.
♦ Transaction log mirror files For additional security, you can create a
mirror copy of the transaction log. This file typically has the extension
.mlg.
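A transaction log mirror is typically specified when the database is created, or later with the dblog utility. As an illustration only (file names are hypothetical, and the exact clause syntax may vary by version):

```sql
-- Create a database with a transaction log and a mirror copy
-- of the log kept on a separate drive.
CREATE DATABASE 'c:\asa\company.db'
TRANSACTION LOG ON 'c:\asa\company.log'
MIRROR 'e:\mirror\company.mlg'
```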
Chapter 25 Working with Database Files
Chapter goals This chapter describes how to create, name, and delete different types of files
found in a database system.
Using additional dbspaces
When you initialize a database, it contains one database file. This first
database file is called the main file. All database objects and all data are
placed, by default, in the main file.
Each database file has a maximum allowable size of 256M database pages.
For example, a database file created with a database page size of 4 KB can
grow to a maximum size of one terabyte (256M*4KB). However, in practice,
the maximum file size allowed by the physical file system in which the file is
created affects the maximum allowable size significantly.
While many commonly employed file systems restrict file size to a
maximum of 2GB, some, such as the NT file system, allow you to exploit the
full database file size. In scenarios where the amount of data placed in the
database exceeds the maximum file size, it is necessary to divide the data
into more than one database file. As well, you may wish to create multiple
dbspaces for reasons other than size limitations, for example to cluster
related objects.
Splitting existing databases If you wish to split existing database objects among several dbspaces, you need to unload your database and modify the generated command file for rebuilding the database. To do so, add IN clauses to specify the dbspace for each table you do not wish to place in the main file.
$ See also
♦ "UNLOAD TABLE statement" on page 619 of the book ASA Reference
♦ "Setting properties for database objects" on page 116
Creating a dbspace
You create a new database file, or dbspace, either from Sybase Central, or
by using the CREATE DBSPACE statement. The database file for a new
dbspace may be on the same disk drive as the main file or on another disk
drive. You must have DBA authority to create dbspaces.
For each database, you can create up to twelve dbspaces, including the main
dbspace.
Placing tables in dbspaces A newly created dbspace is empty. When you create a new table, you can place it in a specific dbspace with an IN clause in the CREATE TABLE statement. If you don't specify an IN clause, the table appears in the main dbspace.
Each table is entirely contained in the dbspace it is created in. By default,
indexes appear in the same dbspace as their table, but you can place them in
a separate dbspace by supplying an IN clause.
Example The following command creates a new dbspace called library in the file
library.db in the same directory as the main file:
CREATE DBSPACE library
AS 'library.db'
The following command creates a table LibraryBooks and places it in the
library dbspace.
CREATE TABLE LibraryBooks (
title char(100),
author char(50),
isbn char(30)
) IN library
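Similarly, an index can be placed in its own dbspace by supplying an IN clause on the CREATE INDEX statement. A sketch, assuming the library dbspace created above (the index name is hypothetical):

```sql
CREATE INDEX book_title
ON LibraryBooks (title)
IN library
```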
$ See also
♦ "CREATE DBSPACE statement" on page 419 of the book ASA
Reference
♦ "Creating tables" on page 122
♦ "CREATE INDEX statement" on page 435 of the book ASA Reference
Deleting a dbspace
You can delete a dbspace using either Sybase Central or Interactive SQL.
Before you can delete a dbspace, you must delete all tables that use it.
You must have DBA authority to delete a dbspace.
Performance Tip
Running a disk defragmentation utility after pre-allocating disk space
helps ensure that the database file is not fragmented over many disjoint
areas of the disk drive. Performance can suffer if there is excessive
fragmentation of database files.
2 Right-click the desired dbspace and choose Properties from the popup
menu.
3 On the General tab of the property sheet, click Add Pages.
4 Enter the number of pages to add to the dbspace.
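From Interactive SQL, the equivalent of the Add Pages step is an ALTER DBSPACE statement, for example (a sketch; the dbspace name and page count are illustrative):

```sql
-- Pre-allocate 200 additional pages to the library dbspace.
ALTER DBSPACE library ADD 200
```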
Working with write files
INSERT
INTO department (dept_id, dept_name)
VALUES (202, 'Eastern Sales')
If you committed this change, it would be written to the asademo.wlg
transaction log, and when the database checkpoints, the changes are
written to the asademo.wrt write file.
If you now query the department table, the results come from the write
file and the database file.
6 Try deleting a row. Set the WAIT_FOR_COMMIT option to avoid
referential integrity complaints here:
SET TEMPORARY OPTION wait_for_commit = 'on' ;
DELETE
FROM department
WHERE dept_id = 100
If you committed this change, the deletion would be marked in the write
file. No changes occur to the database file.
For some purposes, it is useful to use a write file with a shared database. If,
for example, you have a read-only database on a network server, each user
could have their own write file. In this way, they could add local
information, which would be stored on their own machine, without affecting
the shared database. This approach can be useful for application development
also.
Deleting a write file You can use the dberase utility to delete a write file and its associated
transaction log.
Using the utility database
$ For more information on the database server -gu command-line switch,
see "-gu command-line option" on page 30 of the book ASA Reference.
Examples ♦ To prevent the use of the file administration statements, start the
database server using the none permission level of the –gu switch. The
following command starts a database server and names it TestSrv. It
loads the sample database, but prevents anyone from using that server to
create or delete a database, or execute any other file administration
statement regardless of their resource creation rights, or whether or not
they can load and connect to the utility database.
dbsrv7.exe -n TestSrv -gu none asademo.db
♦ To permit only the users knowing the utility database password to
execute file administration statements, start the server at the command
line with the following command.
dbsrv7 -n TestSrv -gu utility_db
Assuming the utility database password has been set during installation
to asa, the following command starts the Interactive SQL utility as a
client application, connects to the server named TestSrv, loads the
utility database and connects the user.
dbisql -c
"uid=dba;pwd=asa;dbn=utility_db;eng=TestSrv"
Having executed the above statement successfully, the user connects to
the utility database, and can execute file administration statements.
C H A P T E R 2 6
Monitoring and Improving Performance
About this chapter This chapter describes how to monitor and improve the performance of your
database.
Contents
Top performance tips
Using the cache to improve performance
Using keys to improve query performance
Using indexes to improve query performance
Search strategies for queries from more than one table
Sorting query results
Temporary tables used in query processing
How the optimizer works
Monitoring database performance
Top performance tips
Tip
Always use a transaction log. It helps protect your data and it greatly
improves performance.
If you can store the transaction log on a different physical device than the
one containing the main database file, you can further improve performance.
The extra drive head does not generally have to seek to get to the end of the
transaction log.
Chapter 26 Monitoring and Improving Performance
On UNIX, Windows NT, and other 32-bit Windows operating systems, the
database server dynamically changes cache size as needed. However, the
cache is still limited by the amount of memory physically available, and by
the amount used by other applications.
On Windows CE and Novell NetWare, the size of the cache is set on the
command line when you launch the database server. Be sure to allocate as
much memory to the database cache as possible, given the requirements of
the other applications and processes that run concurrently. In particular,
databases using Java objects benefit greatly from larger cache sizes. If you
use Java in your database, consider a cache of at least 8 MB.
Tip
Increasing the cache size can often improve performance dramatically,
since retrieving information from memory is many times faster than
reading it from disk. You may even find it worthwhile to purchase more
RAM to allow a larger cache.
Tip
Place the transaction log mirror file (if you use one) on a physically
separate drive. Not only do you gain better protection against disk failure,
but also Adaptive Server Anywhere runs faster because it can efficiently
write to the log and log mirror files. Use the dblog transaction log utility
to specify the location of the transaction log and transaction log mirror
files.
Adaptive Server Anywhere may need more space than is available to it in the
cache for such operations as sorting and forming unions. When it needs this
space, it generally uses it intensively. The overall performance of your
database becomes heavily dependent on the speed of the device containing
the fourth type of file, the temporary file.
Tip
Make sure Adaptive Server Anywhere places its temporary file on a fast
device, physically separate from the one holding the database file.
Adaptive Server Anywhere will run faster because many of the operations
that necessitate using the temporary file also require retrieving a lot of
information from the database. Placing the information on two separate
disks allows the operations to take place simultaneously.
Using the cache to improve performance
Using keys to improve query performance
Information in the Messages pane If you execute a query to look at every row in the employee table:
SELECT *
FROM employee
three lines appear in the Messages pane:
PLAN> employee (seq)
75 rows in query (I/O estimate 14)
Execution time: 0.359 seconds
The first line summarizes the execution plan for the query: the tables
searched and any indexes used to search through a table.
♦ The letters seq inside parentheses mean that the server looked at the
employee table sequentially (that is, one page at a time, in the order that
the rows appear on the pages).
♦ The second line indicates the number of rows in the query. Sometimes
the database knows exactly, as in this case where there are 75 rows.
Other times it estimates the number of rows. The line also indicates an
internal I/O estimate of how many times the server will have to look at
the database on your hard disk to examine the entire employee table.
♦ The third line shows how long it took for your query to be executed. In
this case, it took exactly 0.359 seconds.
Setting the level of plan detail The amount and type of information in the Messages pane depends on Interactive SQL settings. Interactive SQL provides three levels of detail.
On the Messages tab of the Options dialog (accessed by choosing
Tools➤Options), you have the following plan options:
♦ None No information about an execution appears in the Messages
pane.
♦ Short plan Basic information about an execution appears in one line
in the Messages pane. This line can show the table(s) accessed and
whether the rows are read sequentially or accessed through an index.
This plan is the default.
♦ Long plan Detailed information about an execution appears in
multiple lines in the Messages pane.
On this tab of the dialog, you can also specify whether to include the
execution time in the Messages pane.
Resetting statistics The Messages pane may contain estimates that differ from what appears
here. The optimizer maintains statistics as it evaluates queries and uses these
statistics to optimize subsequent queries. These statistics can be reset by
executing the following statement:
DROP OPTIMIZER STATISTICS
Statistic window information The Messages pane contains the following two lines:
Estimated 1 rows in query (I/O estimate 2)
PLAN> employee (employee)
Whenever the name inside the parentheses in the Messages pane PLAN
description is the same as the name of the table, the primary key for the
table is being used to improve performance. Here, the Messages pane also
shows that the database optimizer estimates there will be one row in the
query and that it will have to go to the disk twice to get the data.
In version 7.0, indexes are still created automatically for primary and
foreign keys, but they are now created as separate indexes.
Using indexes to improve query performance
How indexes are used Once you create an index, it is automatically kept up to date and used to improve performance whenever possible.
You could create an index for every column of every table in the database,
but that would make data modifications slow, since all indexes affected by
the change have to be updated. Further, each index requires space in the
database. For these reasons, only create indexes you intend to use frequently.
If you will not be using this index again, delete it with the following
statement:
DROP INDEX lname
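For example, the lname index referred to above could have been created as follows (a sketch; it assumes an index on the employee surname column of the sample database):

```sql
CREATE INDEX lname
ON employee (emp_lname)
```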
Index page structure The query processor uses modified B+ trees. Each index page is a node in the tree, and each node holds many index entries. Leaf-page index entries each reference a row of the indexed table. Indexes remain of uniform depth, and pages remain close to full.
All leaf index entries are at a uniform depth, but the trees are not necessarily
balanced. In particular, deletions can unbalance an index.
An index lookup An index lookup starts with the root page. The index entries on a nonleaf
page determine which child page has the correct range of values. The index
lookup moves down to the appropriate child page, continuing until it reaches
a leaf page. An index with N levels requires N reads for index pages and one
read for the data page containing the actual row. Index pages tend to be
cached due to the frequency of use.
The leaf nodes of the index are linked together. Once a row has been looked
up, the rows of the table can be scanned in index order. Scanning all rows
with a given value requires only one index lookup, followed by scanning the
leaf nodes of the index until the value changes. This occurs when you have a
WHERE clause that filters out rows with a certain value or a range of values.
It also occurs when joining rows in a one-to-many relationship.
Recommended page sizes By default, index pages have a hash size of ten bytes: they store approximately the first 10 bytes of data for each index entry. This allows for a fan-out of roughly 200 using 4 KB pages, meaning that each index page references about 200 rows, or 40,000 rows with a two-level index. Each new level of an index allows for a table 200 times larger. Page size can significantly affect fan-out, in turn affecting the depth of index required for a table. Large databases should have 4 KB pages.
You can find the number of levels in any index in the database using the
sa_index_levels system procedure.
$ For more information, see "sa_index_levels system procedure" on
page 940 of the book ASA Reference.
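For example (a sketch; run from Interactive SQL):

```sql
CALL sa_index_levels()
```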
Configuring index hash sizes If a query needs access to more than the hashed data, it must fetch it from the underlying row, and this may affect index performance. You can explicitly configure the amount of data stored in the index by specifying WITH HASH SIZE in the CREATE INDEX statement.
Increasing the hash size can reduce the number of times the underlying row
needs to be accessed. However, this is at the expense of a decreased index
fanout. For optimal performance, you should not increase hash size for all
indexes, only for indexes for which the first ten bytes of data does not
provide high selectivity.
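For example, to store the first 20 bytes of each entry instead of the default 10 (a sketch; the index name and column are illustrative, using the sample database's product table):

```sql
CREATE INDEX prod_desc
ON product (description)
WITH HASH SIZE 20
```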
How large a hash size do I need? Your hash size should be large enough that most index entries can be uniquely identified based on the information stored in the index itself, without needing to access the underlying row.
The hash size that you need depends on the following factors:
♦ The data types of the index columns Each data type has its own
storage requirements. Here is a summary of the index storage
requirements for commonly indexed data types. For the storage
requirements for each data type, see "SQL Data Types" on page 251 of
the book ASA Reference.
Search strategies for queries from more than one table
The order in which the tables are examined is chosen per query: the
Adaptive Server Anywhere built-in query optimizer estimates the cost of
different possible execution plans, and chooses the plan with the least
estimated cost.
For some more complicated examples, try the following commands that each
join four tables. The Interactive SQL Messages pane shows that each query
is processed in a different order.
Example 1
v To list the customers and the sales reps they have dealt with:
♦ Type the following:
SELECT customer.lname, employee.emp_lname
FROM customer
KEY JOIN sales_order
KEY JOIN sales_order_items
KEY JOIN employee
lname emp_lname
Colburn Chin
Smith Chin
Sinnot Chin
Piper Chin
Phipps Chin
Example 2 The following command restricts the results to list all sales reps the customer
named Piper has dealt with:
SELECT customer.lname, employee.emp_lname
FROM customer
KEY JOIN sales_order
KEY JOIN sales_order_items
KEY JOIN employee
WHERE customer.lname = 'Piper'
The plan for this query is as follows:
PLAN> customer (ix_cust_name), sales_order (ky_so_customer),
employee (employee), sales_order_items (id_fk)
Example 3 The third example shows all customers who have dealt with a sales
representative who has the same name they have:
Temporary tables used in query processing
Internal temporary table cost estimates Rows from internal temporary tables must be read during processing, and during the search for efficient access plans, the query optimizer must estimate the cost of accessing temporary tables along with other costs.
Any indexes associated with internal temporary tables are assigned a hash
size based on distribution estimates, with an upper limit of 20 bytes. You
can extend this limit, or restrict it further, by setting the
MAX_WORK_TABLE_HASH_SIZE database option. Setting this option to
a value of 10 restores the behavior of the software before version 7.0.
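For example, to restore the pre-7.0 behavior (a sketch):

```sql
SET OPTION PUBLIC.MAX_WORK_TABLE_HASH_SIZE = 10
```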
How the optimizer works
Optimizer estimates
The optimizer uses heuristics (educated guesses) to help decide the best
strategy.
For each table in a potential execution plan, the optimizer estimates the
number of rows that will form part of the results. The number of rows
depends on the size of the table and the restrictions in the WHERE clause or
the ON clause of the query.
In many cases, the optimizer uses more sophisticated heuristics. For
example, the optimizer uses default estimates only in cases where better
statistics are unavailable. As well, the optimizer makes use of indexes and
keys to improve its guess of the number of rows. Here are a few
single-column examples:
Single-column examples ♦ Equating a column to a value: estimate one row when the column has a
unique index or is the primary key.
♦ A comparison of an indexed column to a constant: use the index to
estimate the percentage of rows that satisfy the comparison.
♦ Equating a foreign key to a primary key (key join): use relative table
sizes in determining an estimate. For example, if a 5000 row table has a
foreign key to a 1000 row table, the optimizer guesses that there are five
foreign rows for each primary row.
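For example, the first heuristic applies to a query such as the following, where the optimizer estimates exactly one row because emp_id is the primary key of the employee table (a sketch against the sample database):

```sql
SELECT *
FROM employee
WHERE emp_id = 105
```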
Monitoring database performance
♦ The following query returns the filename for the root file of the current
database:
SELECT db_property( 'file' )
Improving query efficiency For better performance, a client application monitoring database activity should use the property_number function to identify a named property, and then use the number to repeatedly retrieve the statistic. The following set of statements illustrates the process from Interactive SQL:
CREATE VARIABLE propnum INT ;
CREATE VARIABLE propval INT ;
SET propnum = property_number( 'cacheread' );
SET propval = property( propnum )
Property names obtained in this way are available for many different
database statistics, from the number of transaction log page write operations
and the number of checkpoints carried out, to the number of reads of index
leaf pages from the memory cache.
You can view many of these statistics in graph form from the Sybase Central
database management tool.
Note
The Performance Monitor only graphs statistics that you have added to it
ahead of time.
$ See also
♦ "Adding and removing statistics" on page 809
♦ "Configuring the Sybase Central Performance Monitor" on page 810
♦ "Resizing the Sybase Central Performance Monitor legend" on page 810
♦ "Monitoring database statistics from the Windows NT Performance
Monitor" on page 811
Tip
You can also add a statistic to or remove one from the Performance
Monitor on the statistic’s property sheet.
♦ You can change the relative size of the legend by positioning your
cursor along the top border of the column headings (so that a two-sided
arrow appears) and dragging up or down.
♦ You can minimize or maximize the legend by clicking one of the two
arrow icons located immediately above the legend.
$ See also
♦ "Opening the Sybase Central Performance Monitor" on page 808
♦ "Adding and removing statistics" on page 809
♦ "Configuring the Sybase Central Performance Monitor" on page 810
♦ "Monitoring database statistics from the Windows NT Performance
Monitor" on page 811
C H A P T E R 2 7
Query Optimization
About this chapter Once each query is parsed, the optimizer analyzes it and decides on an access
plan that will compute the result using as few resources as possible. This
chapter describes the steps the optimizer goes through to optimize a query. It
begins with the assumptions that underlie the design of the optimizer, then
proceeds to discuss selectivity estimation, cost estimation, and the other steps
of optimization.
Although update, insert, and delete statements must also be optimized, the
focus of this chapter is on select queries. The optimization of these other
commands follows similar principles.
Contents
The role of the optimizer
Steps in optimization
Reading access plans
Underlying assumptions
Physical data organization and access
Indexes
Predicate analysis
Semantic query transformations
Selectivity estimation
Join enumeration and index selection
Cost estimation
Subquery caching
The role of the optimizer
The governor limits the optimizer's work The governor is the part of the optimizer that performs this limiting function. It lets the optimizer run until it has analyzed a minimum number of strategies. After considering a reasonable number of strategies, the governor cuts off further analysis.
In the case of expensive and complicated queries, the optimizer works
longer. In the case of very expensive queries, it may run long enough to
cause a discernible delay.
Steps in optimization
The steps the Anywhere optimizer follows in generating a suitable access plan are as follows.
1 The parser converts the query, expressed in SQL, into an internal
representation. In doing so, it may rewrite the query, converting it to a
syntactically different, but semantically equivalent, form. These
conversions make the statement easier to analyze.
2 Optimization proper commences at OPEN CURSOR. Unlike many other
commercial database systems, Anywhere optimizes each statement just
before executing it.
3 The optimizer performs semantic optimization on the statement, rewriting each command whenever doing so leads to better, more efficient access plans.
4 It performs join enumeration and group-by optimization for each subquery.
5 It optimizes the access order.
Because Anywhere performs just-in-time optimization of each statement, the
optimizer has access to the value of host variables and stored procedure
variables. Hence, it makes better choices because it performs better
selectivity analysis.
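Because optimization happens when the cursor is opened, the optimizer can see the actual values of host variables. For instance, in a hypothetical Embedded SQL fragment such as the following, the value of :dept is known at open time and can inform the selectivity estimate for the predicate:

SELECT emp_fname, emp_lname
FROM employee
WHERE dept_id = :dept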
Adaptive Server Anywhere optimizes each query you execute, regardless of how many times you have executed it before. Because Anywhere saves statistics each time it executes a query, the optimizer can learn from the experience of executing previous plans and can adjust its choices when appropriate.
Reading access plans
Commas separate tables within a join strategy
Join strategies in plans appear as a list of correlation names. Each correlation name is followed immediately, in brackets, by the method to be used to locate the required rows. This method can be either the word seq, which indicates sequential scanning of the table, or the name of an index. The name of a primary index is the name of the table.
The following self-join creates a list of employees and their managers.
SELECT e.emp_fname, m.emp_fname
FROM employee AS e JOIN employee AS m
ON e.manager_id = m.emp_id
PLAN> e (seq), m (employee)
To compute this result, Adaptive Server Anywhere first accesses the
employee table sequentially. For each row, it accesses the employee table
again, but this time using the primary index.
Temporary tables
Adaptive Server Anywhere must use a temporary table to compute some query results, or it may choose to use a temporary table to lower the overall cost of computing the result. When it uses a temporary table for a join strategy, the words TEMPORARY TABLE precede the description of that strategy.
SELECT DISTINCT quantity
FROM sales_order_items
PLAN> TEMPORARY TABLE sales_order_items (seq)
A temporary table is necessary in this case to compute the distinct quantities.
Colons separate join strategies
The following command contains two query blocks: the outer select statement from the sales_order and sales_order_items tables, and the subquery that selects from the product table.
SELECT *
FROM sales_order AS o
Underlying assumptions
A number of assumptions underlie the design direction and philosophy of the
Adaptive Server Anywhere query optimizer. You can improve the quality or
performance of your own applications through an understanding of the
optimizer’s decisions. These assumptions provide a context in which you
may understand the information contained in the remaining sections.
Assumptions
The list below summarizes the assumptions upon which the Adaptive Server
Anywhere optimizer is based.
Assumption: Minimal administration work
Implications:
♦ Self-tuning design that requires fewer performance controls
♦ No separate statistics-gathering utility

Assumption: Applications tend to retrieve only the first few rows of a cursor
Implications:
♦ Indices are used whenever possible
♦ Use of temporary tables is discouraged

Assumption: Selectivity statistics necessary for optimization are available in the column-statistics registry
Implications:
♦ Optimization decisions are based on prior query execution
♦ Dropping optimizer statistics makes the optimizer ineffective

Assumption: An index can be found to satisfy a join predicate in virtually all cases
Implication:
♦ Performance is poor if a suitable index cannot be found

Assumption: Virtual memory is a scarce resource
Implication:
♦ Intermediate results are not materialized unless absolutely necessary
Every query both contributes to this internal knowledge and benefits from it. Every user can benefit from knowledge that Anywhere has gained through executing another user's query.
Statistics gathering mechanisms are thus an integral part of the database
server, and require no external mechanism. Should you find an occasion
where it would help, you can provide the database server with estimates of
data distributions to use during optimization. If you encode these into a
trigger or procedure, for example, you then assume responsibility for
maintaining these estimates and updating them whenever appropriate.
Indexes
There are many situations in which creating an index improves the
performance of a database. An index provides an ordering of the rows of a
table on the basis of the values in some or all of the columns. An index
allows Anywhere to find rows quickly. It permits greater concurrency by
limiting the number of database pages accessed. An index also affords
Anywhere a convenient means of enforcing a uniqueness constraint on the
rows in a table.
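For example, an index on a frequently searched column can be created with a statement like the following sketch (the index name here is illustrative):

CREATE INDEX emp_lname_index
ON employee ( emp_lname )

With such an index in place, searches on emp_lname can use indexed retrieval rather than a sequential scan.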
Hash values
Adaptive Server Anywhere must represent values in an index to decide how
to order them. For example, if you index a column of names, then it must
know that Amos comes before Smith.
For each value in your index, Anywhere creates a corresponding hash value.
It stores the hash value in the index, rather than the actual value. Anywhere
can perform operations with the hash value. For example, it can tell when
two values are equal or which of two values is greater.
When you index a small storage type, such as an integer, the hash value that
Anywhere creates takes the same amount of space as the original value. For
example, the hash value for an integer is 4 bytes in size, the same amount of
space as required to store an integer. Because the hash value is the same size,
Anywhere can use hash values with a one-to-one correspondence to the
actual value. Anywhere can always tell whether two values are equal, or
which is greater by comparing their hash values. However, it can retrieve the
actual value only by reading the entry from the corresponding table.
When you index a column containing larger data types, the hash value will
often be shorter than the size of the type. For example, if you index a column
of string values, the hash value used is at most 9 bytes in length.
Consequently, Adaptive Server Anywhere cannot always compare two
strings using only the hash values. If the hash values are equal, Anywhere
must retrieve and compare the actual two values from the table.
For example, suppose you index the titles of books, many of which are
similar. If you wish to search for a particular title, the index may identify
only a set of possible rows. In this case, Anywhere must retrieve each of the
candidate rows and examine the full title.
Composite indexes
An index on an ordered sequence of columns is called a composite index. However, each index key in these indexes is at most a 9-byte hash value. Hence, the hash value cannot necessarily identify the correct row uniquely. When two hash values are equal, Anywhere must retrieve and compare the actual values.
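As a sketch, a composite index is simply an index declared over more than one column (the index name is illustrative):

CREATE INDEX emp_name_index
ON employee ( emp_lname, emp_fname )

Each key of this index is a hash of the combined column values, still limited to 9 bytes, so Anywhere may need to fetch the actual rows to resolve equal hash values.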
Predicate analysis
A predicate is a conditional expression that, combined with the logical
operators AND and OR, makes up the set of conditions in a WHERE or
HAVING clause. In SQL, a predicate that evaluates to UNKNOWN is
interpreted as FALSE.
A predicate that can exploit an index to retrieve rows from a table is called
sargable. This name comes from the phrase search argument-able. Both
predicates that involve comparisons with constants and those that compare
columns from two or more different tables may be sargable.
The predicate in the following statement is sargable. Adaptive Server
Anywhere can evaluate it efficiently using the primary index of the employee
table.
SELECT *
FROM employee
WHERE employee.emp_id = 123
PLAN> employee (employee)
In contrast, the following predicate is not sargable. Although the emp_id
column is indexed in the primary index, using this index does not expedite
the computation because the result contains all, or all except one, row.
SELECT *
FROM employee
WHERE employee.emp_id <> 123
PLAN> employee (seq)
Similarly, no index can assist in a search for all employees whose first name
ends in the letter "k". Again, the only means of computing this result is to
examine each of the rows individually.
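For example, the following query must be computed with a sequential scan, because the leading wildcard prevents any use of an index on the column:

SELECT *
FROM employee
WHERE emp_fname LIKE '%k'
PLAN> employee (seq)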
Examples In each of these examples, attributes x and y are each columns of a single
table. Attribute z is contained in a separate table. Assume that an index exists
for each of these attributes.
Sargable Non-sargable
x = 10 x <> 10
x IS NULL x IS NOT NULL
x > 25 x = 4 OR y = 5
x = z x = y
x IN (4, 5, 6) x NOT IN (4, 5, 6)
x LIKE ’pat%’ x LIKE ’%tern’
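One way to work around a non-sargable disjunction over different columns is to rewrite it by hand as a union of two sargable queries, as in this sketch (table t, with indexed columns x and y, stands in for the hypothetical attributes above):

SELECT * FROM t WHERE x = 4
UNION
SELECT * FROM t WHERE y = 5

Each branch contains a sargable predicate, so each can use the corresponding index; UNION removes any duplicate rows.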
Semantic query transformations
Subquery unnesting
You may express statements as nested queries, given the convenient syntax
provided in the SQL language. However, rewriting nested queries as joins
often leads to more efficient execution and more effective optimization, since
Anywhere can take better advantage of highly selective conditions in a
subquery’s WHERE clause.
Examples 1 The subquery in the following example can match at most one row for
each row in the outer block. Because of this, Anywhere recognizes that
it can convert the subquery to an inner join.
SELECT s.*
FROM sales_order_items s
WHERE EXISTS
( SELECT *
FROM product p
WHERE s.prod_id = p.id
AND p.id = 300 AND p.quantity > 20)
Following conversion, this same statement is expressed internally using
join syntax:
SELECT s.*
FROM product p JOIN sales_order_items s
ON p.id = s.prod_id
WHERE p.id = 300 AND p.quantity > 20
PLAN> p (product), s (ky_prod_id)
2 Similarly, the following query contains a conjunctive EXISTS predicate
in the subquery. This subquery can match more than one row.
SELECT p.*
FROM product p
WHERE EXISTS
( SELECT *
FROM sales_order_items s
WHERE s.prod_id = p.id
AND s.id = 2001)
Anywhere converts this query to an inner join, with a DISTINCT in the SELECT list.
SELECT DISTINCT p.*
FROM product p JOIN sales_order_items s
ON p.id = s.prod_id
WHERE s.id = 2001
PLAN> TEMPORARY TABLE s (id_fk), p (product)
3 Anywhere can also eliminate subqueries in comparisons, when the
subquery can match at most one row for each row in the outer block.
Such is the case in the following query.
SELECT *
FROM product p
WHERE p.id =
( SELECT s.prod_id
FROM sales_order_items s
WHERE s.id = 2001
AND s.line_id = 1 )
Anywhere rewrites this query as follows.
SELECT p.*
FROM product p, sales_order_items s
WHERE p.id = s.prod_id
AND s.id = 2001
AND s.line_id = 1
PLAN> s (sales_order_items), p (product)
Selectivity estimation
Selectivity is a ratio that measures how frequently a predicate is true
The selectivity of a predicate measures how often the predicate evaluates to TRUE. Selectivity is the ratio of the number of times the predicate evaluates to true, to the total number of possible instances that must be tested. Selectivity is most commonly expressed as a percentage. For example, if 2% of employees have the last name Smith, then the selectivity of the following predicate is 2%.
emp_lname = ’Smith’
Selectivity is second only to join enumeration in importance to the process of
optimization. Hence, the performance of the optimizer relies heavily on the
presence of accurate selectivity information.
Adaptive Server Anywhere can obtain estimates of selectivity from four
possible sources. It assumes no correlation between columns of a table and
so calculates the selectivity of each column independently.
♦ Column-statistics registry Each time Anywhere performs a query, it
saves selectivity information about the data in a column for future
reference.
♦ Partial index scans The optimizer examines the upper levels of an
index to obtain a selectivity estimate for a condition on an indexed
column.
♦ User-supplied values You can supply selectivity estimates in your SQL statement. If you do so, Anywhere uses them in preference to estimates from other sources, for the current statement execution only.
♦ Default values If no other source is available, Anywhere falls back on
the built-in default values.
The scan factor is the fraction of pages in a table that need to be read
The scan factor measures the fraction of pages in a table that must be read to compute the result, expressed as a percentage. For example, to find the first name of the employee with employee number 123, Anywhere may have to read two index pages and, finally, the name contained in the appropriate row. If there are 1000 pages in the employee table, then the scan factor for this query would be 0.1%, meaning 1 page out of the 1000 pages in the table has to be read to find the appropriate row.
Although the scan factor is frequently small when the selectivity is small,
this is not always the case. Consider a request to find all employees who live
on Phillip Street. Less than one percent of employees may live on this street,
yet, because street names are not indexed, Anywhere can only find the
records by examining every row in the employee table.
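For example, a request along these lines can only be computed by a full scan, even though few rows qualify (the street column is assumed from the sample employee table; the pattern is illustrative):

SELECT emp_fname, emp_lname
FROM employee
WHERE street LIKE '%Phillip%'
PLAN> employee (seq)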
Column-statistics registry
Adaptive Server Anywhere caches skewed predicate selectivity values and
column distribution statistics. It stores this information in the database.
Anywhere stores, logs, and checkpoints this information like other data.
Adaptive Server Anywhere updates these statistics automatically during
query processing.
The optimizer automatically retrieves and uses cached statistics when
processing subsequent queries. Selectivity information is available to all
transactions, regardless of the user or connection.
Adaptive Server Anywhere manages the column-statistics registry on a first-in, first-out basis. The registry is limited in size to 15,000 entries. Anywhere saves the following types of information:
♦ column distribution statistics
♦ LIKE predicate selectivity statistics
♦ equality predicate statistics
Do not give Anywhere amnesia!
You can reset the optimizer statistics using the DROP OPTIMIZER STATISTICS command. If you do so, you erase all the statistics Anywhere has accumulated.
Caution
Use the DROP OPTIMIZER STATISTICS command only when you have
made recent wholesale changes that render previous statistical
information invalid. Otherwise, avoid this command because it can cause
the optimizer to choose very inefficient access plans.
If you erase the statistics, Anywhere resorts to initial guesses about the
distribution of your data as though accessing it for the first time, losing all
performance improvements the statistics could have provided.
Subsequent queries gradually restore the statistics. In the interim, the
performance of many commands can suffer seriously. Consequently, this
command rarely improves performance and certainly never provides a long-
term solution.
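The command itself is issued as a bare statement:

DROP OPTIMIZER STATISTICS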
Partial index scans
For example, the optimizer might examine an index of dates to estimate what
proportion refer to days before a given date, such as March 3, 1998. To
obtain such an estimate, Anywhere examines the upper pages of the index
you created on that column. It locates the approximate position of the given
date, then, from its relative position in the index, estimates the proportion of
values that occur before it.
Some cost may be involved in performing such scans because some index
pages, not already available in the buffer cache, may need to be retrieved
from disk. In addition, indices for very large tables, or primary indices for
tables pointed to by a large number of foreign keys may be extremely large.
Low fan-out may mean that the optimizer could only obtain specific
estimates by examining many pages. To limit this expense, the optimizer
examines at most two levels of the index.
Naturally, this method is effective only when the column about which selectivity information is sought is the first column of the index. Should the column be the second or a later column of the index, the index is of no help because the values will be distributed throughout the index.
Similarly, estimates of LIKE selectivity values may be obtained by this
method only when the first few letters of the pattern are available. In cases
where only the middle or final sections of a word pattern appear, the
optimizer must rely on one of the other three sources of selectivity
information.
User estimates
Adaptive Server Anywhere allows you, as the user, to supply selectivity
estimates of any predicate. These estimates are expressed as a percentage and
must be supplied as a floating-point value. You may explicitly state such an
estimate for any predicate you choose.
The optimizer always uses user-supplied estimates in preference to an
estimate available from any other source. In this situation, the optimizer even
ignores cached selectivity values for that predicate. Because the optimizer
always uses any explicit estimates you provide, you can use these estimates
to guide the optimizer in its choice of access plan.
You should use explicit estimates with care. Estimates in triggers or stored
procedures are easily forgotten. Anywhere has no means to update them. For
these reasons, all responsibility for their maintenance rests with the author of
the procedure or administrator of the database. Should the distribution of
data change over time, the values may prove inappropriate and lead the
optimizer to choose access plans that are no longer optimal.
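In Adaptive Server Anywhere, a user estimate is written in parentheses alongside the predicate. The sketch below tells the optimizer that roughly 2% of rows satisfy the condition (the estimate is a percentage, supplied as a floating-point value):

SELECT *
FROM employee
WHERE ( emp_lname = 'Smith', 2.0 )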
Consider the following join between two tables:
SELECT *
FROM tablea AS a JOIN tableb AS b
ON a.x = b.y
Join selectivity for equijoins
In the case of equijoins, Anywhere calculates the selectivity of the join based on the cardinality of the individual tables according to the following formula.
selectivity = cardinality(a JOIN b) / ( cardinality(a) * cardinality(b) )
For example, if table a has 1,000 rows, table b has 500 rows, and the join produces 2,000 rows, the selectivity is 2,000 / (1,000 * 500) = 0.4%.
If the join condition involves two columns, then the optimizer uses data
distribution estimates from the column statistics registry to estimate the
cardinality of the result, and hence the selectivity of the join. Otherwise, if
the join condition involves a mathematical expression, the join predicate
selectivity estimate defaults to 5%.
Key joins: a rare case where syntax matters
The optimizer takes advantage of joins that are based on foreign key relationships. You can identify these to Adaptive Server Anywhere using the KEY JOIN syntax. When you use this syntax, the optimizer estimates selectivity accurately using special information contained in the primary index. Anywhere only takes full advantage of these relationships when you explicitly use the KEY JOIN syntax. As such, it is a rare exception to the general rule that Anywhere optimizes your commands based on their semantics, not their syntax. When estimating the selectivity of key joins, the Anywhere optimizer assumes a uniform distribution of the values in the table containing the foreign key.
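For example, the sketch below joins product to sales_order_items on their foreign key relationship. Because KEY JOIN names the relationship explicitly, the optimizer can estimate the join selectivity from the primary index (the column names are assumed from the sample database):

SELECT p.name, s.quantity
FROM product p KEY JOIN sales_order_items s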
Join enumeration and index selection
Join enumeration
In selecting a join strategy, Anywhere considers the following information.
♦ selectivity estimates of the number of rows in each intermediate result
♦ estimates of scan factor for each indexed retrieval
♦ the size of the cache—different cache sizes can lead to different join
strategies.
Anywhere begins by using selectivity information, as determined in the
previous step, to select an access order.
Next, Anywhere derives the estimates of scan factors from estimates of index fan-out. The fan-out of an index can vary greatly depending on the type of index and the page size you selected when you launched the engine or created the database. Larger fan-out is better, because it allows Anywhere to locate specific rows using fewer pages and hence fewer resources.
Cache size affects the access plan
Finally, the amount of cache space available to Anywhere can affect the outcome of the optimizer's choice of join strategy. The larger the fraction of cache space consumed by any one query, the more likely it is that pages will need to be swapped for those on disk. If Anywhere decides that a particular strategy will result in using excessive cache space, it assigns that strategy a higher cost.
The number of possible join strategies can be huge. A join of n tables allows
n! possible join orders. For example, a join of 10 tables may have
10! = 3,628,800 possible orders.
When faced with joins that involve a large number of tables, Anywhere
attempts to prune the set of possible strategies. It eliminates those that fall
into certain categories, so as to focus effort on investigating more efficient
possibilities.
Anywhere chooses plans with fewer Cartesian products
Anywhere always selects plans that minimize the number of Cartesian products required to compute the result, favoring indexed access instead.
Index selection
In addition to selecting an order, the optimizer must choose a method of
accessing each of these tables. It can choose to either scan a table
sequentially, or to access it through an index. Some tables may have a few
indexes, further increasing the number of possible strategies.
The optimizer analyzes each join strategy to determine which type of
access—indexed or sequential scan—would best suit each table in that
strategy. Although one index may be well suited to one join strategy, it can
be a poor choice for another strategy that joins the tables in a different order.
By making a custom index selection for each join order, the optimizer has
the opportunity to choose a better access plan.
Anywhere decides to use an index instead of embarking on a sequential scan
whenever an index is available and the selectivity is less than 20%.
Cost estimation
The optimizer bases its selection of access plan on the expected cost of each
plan. It uses a mix of metrics to estimate the cost of an access plan:
♦ expected number of rows
♦ use of temporary tables
♦ anticipated amount of CPU and I/O for the access plan
♦ amount of cache utilized
Anywhere gives particular weight to the fact that disk access is substantially
more time-consuming than other operations.
Associate high cost with temporary tables
In keeping with the assumption that Anywhere is to use both disk and memory efficiently, it avoids using temporary tables. To achieve this goal, the optimizer assigns significant cost to plans that use them.
Anywhere bases its estimate of the cost of temporary tables on both the row
size and the expected number of rows the table will contain. The optimizer
often pessimistically overestimates the actual cost of using a temporary table.
When few queries are competing for cache space, the actual cost of a plan
with a temporary table can be significantly less than the estimate.
Subquery caching
New to Adaptive Server Anywhere 7.0 is the ability to cache the result of
evaluating a subquery. When Anywhere processes a subquery, it caches the
result. Should it need to re-evaluate the subquery for the same value, it can
simply retrieve the result from the cache. In this way, Anywhere avoids
many repetitious and redundant computations.
At the end of each subquery, Anywhere releases the stored values. Since
values may change between queries, these values may not be reused to
process subsequent queries. For example, another transaction might modify
values in a table involved in the subquery.
As the processing of a query progresses, Anywhere monitors the frequency
with which cached subquery values are reused. If the values of the correlated
variable rarely repeat, then Anywhere needs to compute most values only
once. In this situation, Anywhere recognizes that it is more efficient to recompute occasional duplicate values than to cache numerous entries that occur only once.
Anywhere also does not cache if the size of the dependent column is more
than 255 bytes. In such cases, you may wish to rewrite your query or add
another column to your table to make such operations more efficient.
As soon as Adaptive Server Anywhere recognizes that few values are
repeated, it suspends subquery caching for the remainder of the statement
and proceeds to re-evaluate the subquery for each and every row in the outer
query block.
C H A P T E R 2 8
Deploying Databases and Applications
About this chapter This chapter describes how to deploy Adaptive Server Anywhere
components. It identifies the files required for deploying client applications,
and addresses related issues such as connection settings.
Contents
Topic Page
Deployment overview 846
Understanding installation directories and file names 848
Using a silent installation for deployment 851
Deploying client applications 854
Deploying database servers 863
Deploying embedded database applications 865
Deployment overview
When you have completed a database application, you must deploy the
application to your end users. Depending on the way in which your
application uses Adaptive Server Anywhere (as an embedded database, in a
client/server fashion, and so on) you may have to deploy components of the
Adaptive Server Anywhere software along with your application. You may
also have to deploy configuration information, such as data source names,
that enable your application to communicate with Adaptive Server
Anywhere.
Deployment models
The files you need to deploy depend on the deployment model you choose.
Here are some possible deployment models:
♦ Client deployment You may deploy only the client portions of
Adaptive Server Anywhere to your end-users, so that they can connect
to a centrally located network database server.
♦ Network server deployment You may deploy network servers to
offices, and then deploy clients to each of the users within those offices.
♦ Embedded database deployment You may deploy an application
that runs with the personal database server. In this case, both client and
personal server need to be installed on the end-user’s machine.
Chapter 28 Deploying Databases and Applications
Understanding installation directories and file names
Directory Contents
/opt/sybase/SYBSsa7/bin Executable files
/opt/sybase/SYBSsa7/lib Shared objects and libraries
/opt/sybase/SYBSsa7/res String files
Database file names
Adaptive Server Anywhere databases are composed of the following elements:
♦ Database file This is used to store information in an organized
format. This file uses a .db file extension.
♦ Transaction log file This is used to record all changes made to data
stored in the database file. This file uses a .log file extension, and is
generated by Adaptive Server Anywhere if no such file exists and a log
file is specified to be used. A mirrored transaction log has the default
extension of .mlg.
♦ Write file If your application uses a write file, it typically has a .wrt file
extension.
♦ Compressed database file If you supply a read-only compressed
database file, it typically has extension .cdb.
These files are updated, maintained and managed by the Adaptive Server
Anywhere relational database management system.
Using a silent installation for deployment
SMS Installation
Microsoft System Management Server (SMS) requires a silent install that
does not reboot the target computer. The Adaptive Server Anywhere silent
install does not reboot the computer.
Your SMS distribution package should contain the response file, the install
image and the asa7.pdf package definition file (provided on the Adaptive
Server Anywhere CD ROM in the \extras folder). The setup command line in
the PDF file contains the following options:
♦ The –s option for a silent install
♦ The –SMS option to indicate that it is being invoked by SMS.
♦ The –m option to generate a MIF file. The MIF file is used by SMS to
determine whether the installation was successful.
Deploying client applications
Notes ♦ Your end user must have a working ODBC installation, including the
driver manager. Instructions for deploying ODBC are included in the
Microsoft ODBC SDK.
♦ The IPX port library handles network communications over IPX. It is
required only if the client is working with the network database server
over an IPX network.
♦ The Connection dialog is needed if your end users are to create their
own data sources, if they need to enter user IDs and passwords when
connecting to the database, or if they need to display the Connection
dialog for any other purpose.
♦ The ODBC translator is required only if your application relies on OEM
to ANSI character set conversion.
♦ On HP-UX, all files listed with extension .so instead have extension .sl.
On AIX, the files have extension .so or .a.
$ For more information, see "Creating databases for Windows CE"
on page 296, and "Using ODBC on Windows " on page 145 of the book
ASA Programming Interfaces Guide.
Windows NT and Windows 95/98
The Adaptive Server Anywhere Setup program makes changes to the Windows NT and Windows 95/98 system Registry to identify and configure the ODBC driver. If you are building a setup program for your end users, you should make the same settings.
You can use the Windows regedit utility to inspect registry entries.
The Adaptive Server Anywhere ODBC driver is identified to the system by a
set of registry values in the following registry key:
HKEY_LOCAL_MACHINE\
SOFTWARE\
ODBC\
ODBCINST.INI\
Adaptive Server Anywhere 7.0
The values are as follows:
Third-party ODBC drivers
If you are using a third-party ODBC driver on an operating system other than Windows, consult the documentation for that driver on how to configure it.
Required and optional connection parameters
You can identify the data source name in an ODBC configuration string in this manner:
DSN=userdatasourcename
The DSN parameter identifies which user data source or system data source from the Registry is to be used for the ODBC connection.
When a DSN parameter is provided in the connection string, the Current
User data source definitions in the Registry are searched, followed by
System data sources. File data sources are searched only when FileDSN is
provided in the ODBC connection string.
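For example, a connection string that names a data source together with explicit credentials might look like the following sketch (the data source name, user ID, and password are illustrative):

DSN=My Application DSN;UID=dba;PWD=sql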
The following table illustrates the implications to the user and developer
when a data source exists and is included in the application’s connection
string as a DSN or FileDSN parameter.
Notes ♦ The network ports DLL is not required if the client is working only with
the personal database server.
♦ If the client application uses an ODBC data source to hold the
connection parameters, your end user must have a working ODBC
installation. Instructions for deploying ODBC are included in the
Microsoft ODBC SDK.
$ For more information on deploying ODBC information, see
"Deploying ODBC clients" on page 854.
♦ The Connection dialog is needed if your end users will be creating their
own data sources, if they will need to enter user IDs and passwords
when connecting to the database, or if they need to display the
Connection dialog for any other purpose.
♦ On HP-UX, all files listed with extension .so instead have extension .sl.
On AIX, the files have extension .so or .a.
Connection information
You can deploy Embedded SQL connection information in one of the
following ways:
♦ Manual Provide your end-users with instructions for creating an
appropriate data source on their machine.
♦ File Distribute a file that contains connection information in a format
that your application can read.
♦ ODBC data source You can use an ODBC data source to hold
connection information. In this case, you need a subset of the ODBC
redistributable files, available from Microsoft. For details see
"Deploying ODBC clients" on page 854.
♦ Hard coded You can hard code connection information into your
application. This is an inflexible method, which may be limiting, for
example when databases are upgraded.
Notes
♦ Depending on your situation, you should choose whether to deploy the personal database server (dbeng7) or the network database server (dbsrv7).
♦ The Java DLL (dbjava7.dll) is required only if the database server is to
use the Java in the Database functionality.
♦ The table does not include files needed to run command-line
applications such as dbbackup.
$ For information about deploying database utilities, see "Deploying
database utilities and Interactive SQL" on page 865.
♦ The zip files are required only for applications that use Java in the database, and must be installed in a location included in the user’s CLASSPATH environment variable.
Deploying databases
You deploy a database file by installing the database file onto your end user’s
disk.
As long as the database server shuts down cleanly, you do not need to deploy
a transaction log file with your database file. When your end-user starts
running the database, a new transaction log is created.
For SQL Remote applications, the database should be created in a properly
synchronized state, in which case no transaction log is needed. You can use
the Extraction utility for this purpose.
Deploying embedded database applications
Notes
♦ On HP-UX, all files listed with extension .so instead have extension .sl. On AIX, the files have extension .so or .a.
♦ The database tools are Embedded SQL applications, and you must
supply the files required for such applications, as listed in "Deploying
Embedded SQL clients" on page 860.
C H A P T E R 2 9
Accessing Remote Data
About this chapter
Adaptive Server Anywhere can access data located on different servers, both Sybase and non-Sybase, as if the data were stored on the local server. This chapter describes how to configure Adaptive Server Anywhere to access remote data.
Contents
Topic Page
Introduction 868
Basic concepts 870
Working with remote servers 872
Working with external logins 877
Working with proxy tables 879
Example: a join between two remote tables 884
Accessing multiple local databases 886
Sending native statements to remote servers 887
Using remote procedure calls (RPCs) 888
Transaction management and remote data 890
Internal operations 892
Troubleshooting remote data access 896
Introduction
Using Adaptive Server Anywhere you can:
♦ Access data in relational databases such as Sybase, Oracle, and DB2.
♦ Access desktop data such as Excel spreadsheets, MS-Access databases,
FoxPro, and text files.
♦ Access any other data source that supports an ODBC interface.
♦ Perform joins between local and remote data.
♦ Perform joins between tables in separate Adaptive Server Anywhere
databases.
♦ Use Adaptive Server Anywhere features on data sources that would
normally not have that ability. For instance, you could use a Java
function against data stored in Oracle, or perform a subquery on
spreadsheets. Adaptive Server Anywhere will compensate for features
not supported by a remote data source by operating on the data after it is
retrieved.
♦ Use Adaptive Server Anywhere to move data from one location to
another using insert-select.
♦ Access remote servers directly using passthrough mode.
♦ Execute remote procedure calls to other servers.
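For example, once a proxy table has been defined (see "Working with proxy tables"), an insert-select statement can move rows from the remote source into a local table. A sketch, with an illustrative local table name:

```sql
INSERT INTO local_employee
SELECT * FROM p_employee
```

The SELECT runs against the proxy table, so the rows are fetched from the remote server and inserted locally in a single statement.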
Adaptive Server Anywhere allows access to the following external data
sources:
♦ Adaptive Server Anywhere
♦ Adaptive Server Enterprise
♦ Oracle
♦ IBM DB2
♦ Microsoft SQL Server
♦ Other ODBC data sources
Platform availability
The remote data access features are supported on the Windows 95/98 and
Windows NT platforms only.
Chapter 29 Accessing Remote Data
Basic concepts
This section describes the basic concepts required to access remote data.
Server classes
A server class is assigned to each remote server. The server class specifies
the access method used to interact with the server. Different types of remote
servers require different access methods. The server classes provide Adaptive Server Anywhere with detailed server capability information. Adaptive
Server Anywhere adjusts its interaction with the remote server based on
those capabilities.
There are currently two groups of server classes. The first is JDBC-based;
the second is ODBC-based.
The JDBC-based server classes are:
♦ asajdbc for Adaptive Server Anywhere (version 6 and later)
♦ asejdbc for Adaptive Server Enterprise and SQL Server (version 10 and
later)
The ODBC-based server classes are:
♦ asaodbc for Adaptive Server Anywhere (version 5.5 and later)
♦ aseodbc for Adaptive Server Enterprise and SQL Server (version 10
and later)
♦ db2odbc for IBM DB2
♦ mssodbc for Microsoft SQL Server
♦ oraodbc for Oracle servers (version 8.0 and later)
♦ odbc for all other ODBC data sources
$ For a full description of remote server classes, see "Server Classes for
Remote Data Access" on page 899.
Working with remote servers
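The notes below describe a CREATE SERVER statement of the following form (a sketch reconstructed from the names used in the notes; the data source name is illustrative):

```sql
CREATE SERVER testasa
CLASS 'asaodbc'
USING 'test4'
```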
♦ testasa is the name by which the remote server is known within this
database.
♦ asaodbc specifies that the server is an Adaptive Server Anywhere server and that the connection to it uses ODBC.
♦ test4 is the ODBC data source name.
$ For a full description of the CREATE SERVER statement, see
"CREATE SERVER statement" on page 451 of the book ASA Reference.
$ See also
♦ "DROP SERVER statement" on page 497 of the book ASA Reference
Example
The following statement changes the server class of the server named ASEserver to aseodbc:
ALTER SERVER ASEserver
CLASS ’aseodbc’
The Data Source Name for the server is ASEserver.
The ALTER SERVER statement can also be used to enable or disable a
server’s known capabilities.
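For example, a statement of this form turns a capability off (a sketch; the capability name shown is illustrative, and the available names are those recorded in the syscapabilities system table):

```sql
ALTER SERVER ASEserver
CAPABILITY 'insert select' OFF
```

Turning a capability off causes Adaptive Server Anywhere to evaluate that SQL feature itself rather than passing it to the remote server.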
$ See also
♦ "Property Sheet Descriptions" on page 1035
♦ "ALTER SERVER statement" on page 378 of the book ASA Reference
Working with external logins
Example
The following statement allows the local user fred to gain access to the server ASEserver, using the remote login frederick with password banana.
CREATE EXTERNLOGIN fred
TO ASEserver
REMOTE LOGIN frederick
IDENTIFIED BY banana
$ See also
♦ "Add External Login dialog" on page 1017
♦ "Property Sheet Descriptions" on page 1035
♦ "CREATE EXTERNLOGIN statement" on page 430 of the book ASA
Reference
Example
The following statement drops the external login for the local user fred created in the example above:
DROP EXTERNLOGIN fred TO ASEserver
$ See also
♦ "DROP EXTERNLOGIN statement" on page 495 of the book ASA
Reference
Server
This is the name by which the server is known in the current database, as specified in the CREATE SERVER statement. This field is mandatory for all remote data sources.
Database
The meaning of the database field depends on the data source. In some cases this field does not apply and should be left empty. The periods are still required, however.
♦ Adaptive Server Enterprise Specifies the database where the table
exists. For example master or pubs2.
♦ Adaptive Server Anywhere This field does not apply; leave it empty.
The database name for an Adaptive Server Anywhere ODBC data
source should be specified when the data source name is defined in the
ODBC Administrator.
For jConnect-based connections, the database should be specified in the
USING clause of the CREATE SERVER statement.
For both ODBC and JDBC based connections to Adaptive Server
Anywhere, you need a separate CREATE SERVER statement for each
Adaptive Server Anywhere database being accessed.
♦ Excel, Lotus Notes, Access For these file-based data sources, the
database name is the name of the file containing the table. Since file
names can contain a period, a semicolon should be used as the delimiter
between server, database, owner, and table.
Owner
If the database supports the concept of ownership, this field represents the owner name. This field is only required when several owners have tables with the same name.
Tablename
Tablename specifies the name of the table. In the case of an Excel spreadsheet, this is the name of the "sheet" in the workbook. If the table name is left empty, the remote table name is assumed to be the same as the local proxy table name.
Examples
The following examples illustrate the use of location strings:
♦ Adaptive Server Anywhere:
’testasa..dba.employee’
♦ Adaptive Server Enterprise:
’ASEServer.pubs2.dbo.publishers’
♦ Excel:
’excel;d:\pcdb\quarter3.xls;;sheet1$’
♦ Access:
’access;\\server1\production\inventory.mdb;;parts’
Tips
You can also create a proxy table by selecting the Tables folder and then
choosing File➤New➤Proxy Table.
Proxy tables are displayed under their remote server, inside the Remote
Servers folder. They also appear in the Tables folder. They are
distinguished from other tables by a letter P on their icon.
Example 1
To create a proxy table named p_employee on the current server, mapping to a remote table named employee on the server named asademo1, use the following syntax:
CREATE EXISTING TABLE p_employee
AT ’asademo1..dba.employee’
(Diagram: the proxy table p_employee on the local server maps to the table employee on the remote server asademo1.)
Example 2
The following statement maps the proxy table a1 to the Microsoft Access file mydbfile.mdb. In this example, the AT keyword uses the semicolon (;) as a delimiter. The server defined for Microsoft Access is named access.
CREATE EXISTING TABLE a1
AT ’access;d:\mydbfile.mdb;;a1’
$ See also
♦ "CREATE EXISTING TABLE statement" on page 428 of the book ASA
Reference
$ See also
♦ "CREATE TABLE statement" on page 453 of the book ASA Reference
Example: a join between two remote tables
(Diagram: proxy tables p_employee and p_department on the local server testasa map to the tables employee and department on the remote server asademo1. The join uses the columns emp_fname, emp_lname, dept_id, and dept_name.)
Accessing multiple local databases
Example 2
The following statements show a passthrough session with the server named ASEserver:
FORWARD TO ASEserver
select * from titles
select * from authors
FORWARD TO
Using remote procedure calls (RPCs)
Tip
You can also add a remote procedure by right-clicking the remote server
and choosing Add Remote Procedure from the popup menu.
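In SQL, a remote procedure is defined with a CREATE PROCEDURE statement whose AT clause gives the remote location. A sketch, with an illustrative local name:

```sql
CREATE PROCEDURE remotewho()
AT 'ASEserver.master.dbo.sp_who'
```

The procedure can then be invoked like a local one, for example with CALL remotewho().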
Transaction management and remote data
Internal operations
This section describes the underlying operations on remote servers
performed by Adaptive Server Anywhere on behalf of client applications.
Query parsing
When a statement is received from a client, it is parsed. An error is raised if
the statement is not a valid Adaptive Server Anywhere SQL statement.
Query normalization
The next step is called query normalization. During this step, referenced
objects are verified and some data type compatibility is checked.
For example, consider the following query:
SELECT *
FROM t1
WHERE c1 = 10
The query normalization stage verifies that table t1 with a column c1 exists
in the system tables. It also verifies that the data type of column c1 is
compatible with the value 10. If the column’s data type is datetime, for
example, this statement is rejected.
Query preprocessing
Query preprocessing prepares the query for optimization. It may change the
representation of a statement so that the SQL statement Adaptive Server
Anywhere generates for passing to a remote server will be syntactically
different from the original statement.
Preprocessing performs view expansion so that a query can operate on tables
referenced by the view. Expressions may be reordered and subqueries may
be transformed to improve processing efficiency. For example, some
subqueries may be converted into joins.
Server capabilities
The previous steps are performed on all queries, both local and remote.
The following steps depend on the type of SQL statement and the
capabilities of the remote servers involved.
Each remote server defined to Adaptive Server Anywhere has a set of
capabilities associated with it. These capabilities are stored in the
syscapabilities system table. These capabilities are initialized during the first
connection to a remote server. The generic server class odbc relies strictly on
information returned from the ODBC driver to determine these capabilities.
Other server classes such as db2odbc have more detailed knowledge of the
capabilities of a remote server type and use that knowledge to supplement
what is returned from the driver.
Once syscapabilities is initialized for a server, the capability information is
retrieved only from the system table. This allows a user to alter the known
capabilities of a server.
Since a remote server may not support all of the features of a given SQL
statement, Adaptive Server Anywhere must break the statement into simpler
components to the point that the query can be given to the remote server.
SQL features not passed off to a remote server must be evaluated by
Adaptive Server Anywhere itself.
For example, a query may contain an ORDER BY clause. If a remote server cannot perform ORDER BY, the statement is sent to the remote server without it and Adaptive Server Anywhere performs the ORDER BY on the result returned, before returning the result to the user. The result is that the user can employ the full range of Adaptive Server Anywhere supported SQL without concern for the features of a particular back end.
Each time a row is found, Adaptive Server Anywhere would calculate the
new value of a and issue:
UPDATE t1
SET a = ’new value’
WHERE CURRENT OF CURSOR
If a already has a value that equals the "new value", a positioned UPDATE
would not be necessary and would not be sent remotely.
In order to process an UPDATE or DELETE that requires a table scan, the
remote data source must support the ability to perform a positioned
UPDATE or DELETE ("where current of cursor"). Some data sources do not
support this capability.
Troubleshooting remote data access
Case sensitivity
The case sensitivity setting of your Adaptive Server Anywhere database
should match the settings used by any remote servers accessed.
Adaptive Server Anywhere databases are created case insensitive by default.
With this configuration, unpredictable results may occur when selecting from
a case sensitive database. Different results will occur depending on whether
ORDER BY or string comparisons are pushed off to a remote server or
evaluated by the local Adaptive Server Anywhere.
Connectivity problems
Take the following steps to be sure you can connect to a remote server:
♦ Determine that you can connect to a remote server using a client tool
such as Interactive SQL before configuring Adaptive Server Anywhere.
♦ Perform a simple passthrough statement to a remote server to check your
connectivity and remote login configuration. For example:
FORWARD TO testasa {select @@version}
♦ Turn on remote tracing for a trace of the interactions with remote
servers.
SET OPTION cis_option = 2
C H A P T E R 3 0
Server Classes for Remote Data Access
About this chapter
This chapter describes how Adaptive Server Anywhere interfaces with different classes of servers. You will find information such as:
♦ Types of servers that each server class supports
♦ The USING clause value for the CREATE SERVER statement for each
server class
♦ Special configuration requirements
Contents
Topic Page
Overview 900
JDBC-based server classes 901
ODBC-based server classes 904
Overview
The server class you specify in the CREATE SERVER statement determines
the behavior of a remote connection. The server classes give Adaptive Server
Anywhere detailed server capability information. Adaptive Server Anywhere
formats SQL statements specific to a server’s capabilities.
There are two categories of server classes:
♦ JDBC-based server classes
♦ ODBC-based server classes
Each server class has a set of unique characteristics that database
administrators and programmers need to know about to configure the server
for remote data access.
When using this chapter, refer both to the section generic to the server class
category (JDBC-based or ODBC-based), and to the section specific to the
individual server class.
Chapter 30 Server Classes for Remote Data Access
JDBC-based server classes
ODBC-based server classes
♦ Under the Advanced tab, check the Application Using Threads box
and check the Enable Quoted Identifiers box.
♦ Under the Connection tab:
Set the charset field to match your Adaptive Server Anywhere
character set.
Set the language field to your preferred language for error
messages.
♦ Under the Performance tab:
Set Prepare Method to "2-Full."
Set Fetch Array Size as large as possible for best performance. This
increases memory requirements since this is the number of rows
that must be cached in memory. Sybase recommends using a value
of 100.
Set Select Method to "0-Cursor."
Set Packet Size as large as possible. Sybase recommends using a value of -1.
Set Connection Cache to 1.
AT ’excel;d:\work1.xls;;mywork’
To create a second sheet (or table) execute a statement such as:
CREATE TABLE mywork2 (x float, y int)
AT ’excel;d:\work1.xls;;mywork2’
You can import existing worksheets into Adaptive Server Anywhere using
CREATE EXISTING, under the assumption that the first row of your
spreadsheet contains column names.
CREATE EXISTING TABLE mywork
AT ’excel;d:\work1;;mywork’
If Adaptive Server Anywhere reports that the table is not found, you may
need to explicitly state the column and row range you wish to map to. For
example:
CREATE EXISTING TABLE mywork
AT ’excel;d:\work1;;mywork$’
Adding the $ to the sheet name indicates that the entire worksheet should be
selected.
Note that in the location string specified by AT, a semicolon is used instead of a period for field separators. This is because periods occur in the file names. Excel does not support the owner name field, so leave it blank.
Deletes are not supported. Also, some updates may not be possible, since the Excel driver does not support positioned updates.
Avoiding password prompts
Lotus Notes does not support sending a user name and password through the ODBC API. If you try to access Lotus Notes using a password-protected ID, a window appears on the machine where Adaptive Server Anywhere is running and prompts you for a password. Avoid this behavior in multi-user server environments.
C H A P T E R 3 1
Three-tier Computing and Distributed Transactions
About this chapter
This chapter describes how to use Adaptive Server Anywhere in a three-tier environment with an application server. It focuses on how to enlist Adaptive Server Anywhere in distributed transactions.
Contents
Topic Page
Introduction 918
Three-tier computing architecture 919
Using distributed transactions 923
Using Enterprise Application Server with Adaptive Server Anywhere 925
Introduction
You can use Adaptive Server Anywhere as a database server or resource
manager, participating in distributed transactions coordinated by a
transaction server.
A three-tier environment, where an application server sits between client
applications and a set of resource managers, is a common distributed-
transaction environment. Sybase Enterprise Application Server and some
other application servers are also transaction servers.
Sybase Enterprise Application Server and Microsoft Transaction Server both
use the Microsoft Distributed Transaction Coordinator (DTC) to coordinate
transactions. Adaptive Server Anywhere provides support for distributed
transactions controlled by the DTC service, so you can use Adaptive Server
Anywhere with either of these application servers, or any other product
based on the DTC model.
When integrating Adaptive Server Anywhere into a three-tier environment,
most of the work needs to be done from the Application Server. This chapter
provides an introduction to the concepts and architecture of three-tier
computing, and an overview of relevant Adaptive Server Anywhere features.
It does not describe how to configure your Application Server to work with
Adaptive Server Anywhere. For more information, see your Application
Server documentation.
Chapter 31 Three-tier Computing and Distributed Transactions
Three-tier computing architecture
(Diagram: a client system connects to an Application Server. The Application Server communicates with a resource manager and a DTC proxy on each of server system 1 and server system 2, and each server system runs its own DTC.)
In this case, a single resource dispenser is used. The Application Server asks
DTC to prepare a transaction. DTC and the resource dispenser enlist each
connection in the transaction. Each resource manager must be in contact with
both DTC and the database, so as to carry out the work and to notify DTC of
its transaction status when required.
A DTC service must be running on each machine in order to operate
distributed transactions. You can control DTC services from the Services
icon in the Windows NT control panel; the DTC service is named MSDTC.
$ For more information, see your DTC or Enterprise Application Server
documentation.
Using distributed transactions
Using Enterprise Application Server with Adaptive Server Anywhere
In the left pane, open the Servers folder. In the right pane, right-click the server you wish to configure, and select Server Properties from the drop-down menu. Click the Transactions tab, and choose Microsoft DTC as the transaction model. Click OK to complete the operation.
P A R T S I X
C H A P T E R 3 2
Transact-SQL Compatibility
About this chapter
Transact-SQL is the dialect of SQL supported by Sybase Adaptive Server Enterprise.
This chapter is a guide for creating applications that are compatible with both
Adaptive Server Anywhere and Adaptive Server Enterprise. It describes
Adaptive Server Anywhere support for Transact-SQL language elements and
statements, and for Adaptive Server Enterprise system tables, views, and
procedures.
Contents
Topic Page
An overview of Transact-SQL support 932
Adaptive Server architectures 935
Configuring databases for Transact-SQL compatibility 941
Writing compatible SQL statements 949
Transact-SQL procedure language overview 954
Automatic translation of stored procedures 957
Returning result sets from Transact-SQL procedures 958
Variables in Transact-SQL procedures 959
Error handling in Transact-SQL procedures 960
An overview of Transact-SQL support
(Diagram: the two dialects overlap as follows.)
♦ ASA-only statements: ASA control statements, CREATE PROCEDURE statement, CREATE TRIGGER statement, ...
♦ Statements allowed in both servers: SELECT, INSERT, UPDATE, DELETE, ...
♦ Transact-SQL statements: Transact-SQL control statements, CREATE PROCEDURE statement, CREATE TRIGGER statement, ...
Similarities and differences
Adaptive Server Anywhere supports a very high percentage of Transact-SQL language elements, functions, and statements for working with existing data.
For example, Adaptive Server Anywhere supports all of the numeric
functions, all but one of the string functions, all aggregate functions, and all
date and time functions. As another example, Adaptive Server Anywhere
supports Transact-SQL outer joins (using =* and *= operators) and extended
DELETE and UPDATE statements using joins.
Further, Adaptive Server Anywhere supports a very high percentage of the
Transact-SQL stored procedure language (CREATE PROCEDURE and
CREATE TRIGGER syntax, control statements, and so on) and many, but
not all, aspects of Transact-SQL data definition language statements.
There are design differences in the architectural and configuration facilities
supported by each product. Device management, user management, and
maintenance tasks such as backups tend to be system-specific. Even here,
Adaptive Server Anywhere provides Transact-SQL system tables as views,
where the tables that are not meaningful in Adaptive Server Anywhere have
no rows. Also, Adaptive Server Anywhere provides a set of system
procedures for some of the more common administrative tasks.
This chapter looks first at some system-level issues where differences are
most noticeable, before discussing data manipulation and data definition
language aspects of the dialects where compatibility is high.
Transact-SQL only
Some SQL statements supported by Adaptive Server Anywhere are part of one dialect, but not the other. You cannot mix the two dialects within a procedure, trigger, or batch. For example, Adaptive Server Anywhere supports the following statements, but as part of the Transact-SQL dialect only:
♦ Transact-SQL control statements IF and WHILE
Adaptive Server Anywhere only
Adaptive Server Enterprise does not support the following statements:
♦ control statements CASE, LOOP, and FOR
♦ Adaptive Server Anywhere versions of IF and WHILE
♦ CALL statement
♦ Adaptive Server Anywhere versions of the CREATE PROCEDURE,
CREATE FUNCTION, and CREATE TRIGGER statements.
♦ SQL statements separated by semicolons
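For example, the following batch is valid in the Adaptive Server Anywhere dialect only, because it uses the Adaptive Server Anywhere form of IF (IF ... THEN ... END IF) and semicolon-separated statements (a minimal sketch):

```sql
BEGIN
  DECLARE x INT;
  SET x = 1;
  IF x = 1 THEN
    MESSAGE 'x is one';
  END IF;
END
```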
Notes
The two dialects cannot be mixed within a procedure, trigger, or batch. This means that:
♦ You can include Transact-SQL-only statements together with statements
that are part of both dialects in a batch, procedure, or trigger.
♦ You can include statements not supported by Adaptive Server Enterprise
together with statements that are supported by both servers in a batch,
procedure, or trigger.
♦ You cannot include Transact-SQL-only statements together with
Adaptive Server Anywhere-only statements in a batch, procedure, or
trigger.
Adaptive Server architectures
File manipulation statements
Adaptive Server Anywhere does not support the Transact-SQL statements DUMP DATABASE and LOAD DATABASE. Adaptive Server Anywhere has its own CREATE DATABASE and DROP DATABASE statements with different syntax.
Device management
Adaptive Server Anywhere and Adaptive Server Enterprise use different
models for managing devices and disk space, reflecting the different uses for
the two products. While Adaptive Server Enterprise sets out a comprehensive
resource management scheme using a variety of Transact-SQL statements,
Adaptive Server Anywhere manages its own resources automatically, and its
databases are regular operating system files.
Adaptive Server Anywhere does not support Transact-SQL DISK statements,
such as DISK INIT, DISK MIRROR, DISK REFIT, DISK REINIT, DISK
REMIRROR, and DISK UNMIRROR.
$ For information on disk management, see "Working with Database
Files" on page 763
System tables
In addition to its own system tables, Adaptive Server Anywhere provides a
set of system views that mimic relevant parts of the Adaptive Server
Enterprise system tables. You’ll find a list and individual descriptions in
"Views for Transact-SQL Compatibility" on page 1047 of the book ASA
Reference, which describes the system catalogs of the two products. This
section provides a brief overview of the differences.
The Adaptive Server Anywhere system tables rest entirely within each
database, while the Adaptive Server Enterprise system tables rest partly
inside each database and partly in the master database. The Adaptive Server
Anywhere architecture does not include a master database.
In Adaptive Server Enterprise, the database owner (user ID dbo) owns the
system tables. In Adaptive Server Anywhere, the system owner (user ID
SYS) owns the system tables. A dbo user ID owns the Adaptive Server
Enterprise-compatible system views provided by Adaptive Server Anywhere.
Administrative roles
Adaptive Server Enterprise has a more elaborate set of administrative roles
than Adaptive Server Anywhere. In Adaptive Server Enterprise there is a set
of distinct roles, although more than one login account on an Adaptive
Server Enterprise can be granted any role, and one account can possess more
than one role.
Adaptive Server Enterprise roles
In Adaptive Server Enterprise, distinct roles include:
♦ System Administrator Responsible for general administrative tasks
unrelated to specific applications; can access any database object.
♦ System Security Officer Responsible for security-sensitive tasks in
Adaptive Server Enterprise, but has no special permissions on database
objects.
Database object permissions
The Adaptive Server Enterprise and Adaptive Server Anywhere GRANT and REVOKE statements for granting permissions on individual database objects
are very similar. Both allow SELECT, INSERT, DELETE, UPDATE, and
REFERENCES permissions on database tables and views, and UPDATE
permissions on selected columns of database tables. Both allow EXECUTE
permissions to be granted on stored procedures.
For example, the following statement is valid in both Adaptive Server
Enterprise and Adaptive Server Anywhere:
GRANT INSERT, DELETE
ON TITLES
TO MARY, SALES
This statement grants permission to use the INSERT and DELETE
statements on the TITLES table to user MARY and to the SALES group.
Both Adaptive Server Anywhere and Adaptive Server Enterprise support the
WITH GRANT OPTION clause, allowing the recipient of permissions to
grant them in turn, although Adaptive Server Anywhere does not permit
WITH GRANT OPTION to be used on a GRANT EXECUTE statement.
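For example, a statement of the following form is accepted by both servers (the user and table names are illustrative):

```sql
GRANT SELECT
ON TITLES
TO MARY
WITH GRANT OPTION
```

After this statement, MARY can both query TITLES and grant SELECT on it to other users.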
Database-wide permissions
Adaptive Server Enterprise and Adaptive Server Anywhere use different models for database-wide user permissions. These are discussed in "Users
and groups" on page 938. Adaptive Server Anywhere employs DBA
permissions to allow a user full authority within a database. The System
Administrator in Adaptive Server Enterprise enjoys this permission for all
databases on a server. However, DBA authority on an Adaptive Server
Anywhere database is different from the permissions of an Adaptive Server
Enterprise Database Owner, who must use the Adaptive Server Enterprise
SETUSER statement to gain permissions on objects owned by other users.
Adaptive Server Anywhere employs RESOURCE permissions to allow a
user the right to create objects in a database. A closely corresponding
Adaptive Server Enterprise permission is GRANT ALL, used by a Database
Owner.
Configuring databases for Transact-SQL compatibility
Make the database case-sensitive
By default, string comparisons in Adaptive Server Enterprise databases are case-sensitive, while those in Adaptive Server Anywhere are case-insensitive.
When building an Adaptive Server Enterprise-compatible database using
Adaptive Server Anywhere, choose the case-sensitive option.
♦ If you are using Sybase Central, this option is in the Create Database
wizard on the Choose the Database Attributes page.
♦ If you are using the dbinit command-line utility, specify the -c
command-line switch.
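For example, the following command line creates a case-sensitive database (the database file name here is illustrative):

```shell
dbinit -c newdb.db
```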
Caution
Ensure that you do not drop the dbo.syscolumns or dbo.sysindexes system views.
Set the quoted_identifier option
By default, Adaptive Server Enterprise treats identifiers and strings differently than Adaptive Server Anywhere, which matches the SQL/92 ISO standard.
The quoted_identifier option is available in both Adaptive Server Enterprise
and Adaptive Server Anywhere. Ensure the option is set to the same value in both databases so that identifiers and strings are treated in a compatible manner.
For SQL/92 behavior, set the quoted_identifier option to ON in both
Adaptive Server Enterprise and Adaptive Server Anywhere.
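For example, the following statements select the SQL/92 behavior; the first uses the Adaptive Server Anywhere SET OPTION syntax, the second the Transact-SQL SET syntax used by Adaptive Server Enterprise:

```sql
-- Adaptive Server Anywhere
SET OPTION PUBLIC.QUOTED_IDENTIFIER = 'ON'

-- Adaptive Server Enterprise
SET QUOTED_IDENTIFIER ON
```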
Case-sensitivity
Case sensitivity in databases refers to:
♦ Data The case sensitivity of the data is reflected in indexes, in the
results of queries, and so on.
♦ Identifiers Identifiers include table names, column names, and so on.
♦ User IDs and passwords The case sensitivity of user IDs and passwords is treated differently from that of other identifiers.
Case sensitivity of data
You decide the case sensitivity of Adaptive Server Anywhere data in comparisons when you create the database. By default, Adaptive Server
Anywhere databases are case-insensitive in comparisons, although data is
always held in the case in which you enter it.
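For example, assuming a hypothetical customer table with an lname column, the following query matches the values Smith, SMITH, and smith in a case-insensitive database, but only the exact value Smith in a case-sensitive one:

```sql
SELECT *
FROM customer
WHERE lname = 'Smith'
```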
Creating a Transact-SQL timestamp column in Adaptive Server Anywhere
To create a Transact-SQL timestamp column, create a column that has the (Adaptive Server Anywhere) data type TIMESTAMP and a default setting of timestamp. The column can have any name, although the name timestamp is common.
For example, the following CREATE TABLE statement includes a Transact-
SQL timestamp column:
CREATE TABLE table_name (
column_1 INTEGER,
column_2 TIMESTAMP DEFAULT TIMESTAMP
)
The following ALTER TABLE statement adds a Transact-SQL timestamp
column to the sales_order table:
ALTER TABLE sales_order
ADD timestamp TIMESTAMP DEFAULT TIMESTAMP
In Adaptive Server Enterprise a column with the name timestamp and no
data type specified automatically receives a TIMESTAMP data type. In
Adaptive Server Anywhere you must explicitly assign the data type yourself.
If you have the AUTOMATIC_TIMESTAMP database option set to ON,
you do not need to set the default value: any new column created with
TIMESTAMP data type and with no explicit default receives a default value
of timestamp. The following statement sets AUTOMATIC_TIMESTAMP to
ON:
SET OPTION PUBLIC.AUTOMATIC_TIMESTAMP='ON'
The data type of a timestamp column
Adaptive Server Enterprise treats a timestamp column as a domain that is VARBINARY(8), allowing NULL, while Adaptive Server Anywhere treats
a timestamp column as the TIMESTAMP data type, which consists of the
date and time, with fractions of a second held to six decimal places.
When fetching from the table for later updates, the variable into which the
timestamp value is fetched should correspond to the column description.
Timestamping an existing table
If you add a special timestamp column to an existing table, all existing rows have a NULL value in the timestamp column. To enter a timestamp value
(the current timestamp) for existing rows, update all rows in the table such
that the data does not change. For example, the following statement updates
all rows in the sales_order table, without changing the values in any of the
rows:
UPDATE sales_order
SET region = region
In Interactive SQL, you may need to set the TIMESTAMP_FORMAT option
to see the differences in values for the rows. The following statement sets the
TIMESTAMP_FORMAT option to display all six digits in the fractions of a
second:
SET OPTION TIMESTAMP_FORMAT='YYYY-MM-DD HH:NN:ss.SSSSSS'
If all six digits are not shown, some timestamp column values may appear to
be equal: they are not.
Using tsequal for updates
With the tsequal system function you can tell whether a timestamp column has been updated or not.
For example, an application may SELECT a timestamp column into a
variable. When an UPDATE of one of the selected rows is submitted, it can
use the tsequal function to check whether the row has been modified. The
tsequal function compares the timestamp value in the table with the
timestamp value obtained in the SELECT. Identical timestamps mean there are no changes. If the timestamps differ, the row has been changed since the
SELECT was carried out.
A typical UPDATE statement using the tsequal function looks like this:
UPDATE publishers
SET city = 'Springfield'
WHERE pub_id = '0736'
AND TSEQUAL(timestamp, '1995/10/25 11:08:34.173226')
The first argument to the tsequal function is the name of the special
timestamp column; the second argument is the timestamp retrieved in the
SELECT statement. In Embedded SQL, the second argument is likely to be a
host variable containing a TIMESTAMP value from a recent FETCH on the
column.
Writing compatible SQL statements
Compatibility of joins
In Transact-SQL, joins appear in the WHERE clause, using the following
syntax:
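In a sketch of this form, both tables are listed in the FROM clause and the join condition appears in the WHERE clause (using the publishers and titles tables that appear elsewhere in this chapter, with column names assumed from the standard pubs schema):

```sql
SELECT titles.title, publishers.pub_name
FROM titles, publishers
WHERE titles.pub_id = publishers.pub_id
```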
Transact-SQL procedure language overview
Returning result sets from Transact-SQL procedures
Error handling in Transact-SQL procedures
Value Meaning
0 Procedure executed without error
–1 Missing object
–2 Data type error
–3 Process was chosen as deadlock victim
–4 Permission error
–5 Syntax error
–6 Miscellaneous user error
–7 Resource error, such as out of space
–8 Non-fatal internal problem
–9 System limit was reached
–10 Fatal internal inconsistency
–11 Fatal internal inconsistency
–12 Table or index is corrupt
–13 Database is corrupt
–14 Hardware error
The RETURN statement can be used to return other integers, with their own
user-defined meanings.
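For example, the following sketch of a Transact-SQL procedure returns the user-defined value 1 when a hypothetical titles table is empty, and 0 otherwise:

```sql
CREATE PROCEDURE check_titles
AS
    IF NOT EXISTS ( SELECT 1 FROM titles )
        RETURN 1
    ELSE
        RETURN 0
```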
You lose intermediate RAISERROR statuses and codes after the procedure
terminates. If, at return time, an error occurs along with the RAISERROR,
then the error information is returned and you lose the RAISERROR
information. The application can query intermediate RAISERROR statuses by examining the @@error global variable at different execution points.
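For example, a Transact-SQL batch can test @@error immediately after each statement of interest (the table and column names here are illustrative):

```sql
UPDATE titles
SET price = price * 1.1
IF @@error != 0
    PRINT 'The update failed'
```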
ON EXCEPTION RESUME
BEGIN
...
END
The presence of an ON EXCEPTION RESUME clause prevents explicit
exception handling code from being executed, so avoid using these two
clauses together.
CHAPTER 33    Adaptive Server Anywhere as an Open Server
About this chapter Adaptive Server Anywhere can appear to client applications as an Open
Server. This feature enables Sybase Open Client applications to connect
natively to Adaptive Server Anywhere databases.
This chapter describes how to use Adaptive Server Anywhere as an Open
Server, and how to configure Open Client and Adaptive Server Anywhere to
work together.
$ For information on developing Open Client applications for use with
Adaptive Server Anywhere, see "The Open Client Interface" on page 163 of
the book ASA Programming Interfaces Guide.
Contents
Topic Page
Open Clients, Open Servers, and TDS 964
Setting up Adaptive Server Anywhere as an Open Server 966
Configuring Open Servers 968
Characteristics of Open Client and jConnect connections 974
Open Clients, Open Servers, and TDS
Chapter 33 Adaptive Server Anywhere as an Open Server
Setting up Adaptive Server Anywhere as an Open Server
System requirements
There are separate requirements at the client and server for using Adaptive
Server Anywhere as an Open Server.
Server-side requirements
You must have the following elements at the server side to use Adaptive Server Anywhere as an Open Server:
♦ Adaptive Server Anywhere server components You must use the
network server (dbsrv6.exe) if you want to access an Open Server over a
network. You can use the personal server (dbeng7.exe) as an Open
Server only for connections from the same machine.
♦ TCP/IP You must have a TCP/IP protocol stack to use Adaptive Server
Anywhere as an Open Server, even if you are not connecting over a
network.
Client-side requirements
You need the following elements to use Sybase client applications to connect to an Open Server (including Adaptive Server Anywhere):
♦ Open Client components The Open Client libraries provide the
network libraries your application needs to communicate via TDS, if
your application uses Open Client.
♦ jConnect If your application uses JDBC, you need jConnect and a
Java runtime environment.
♦ DSEdit You need dsedit, the directory services editor, to make server
names available to your Open Client application. On UNIX platforms,
this utility is called sybinit.
Open Client settings
To connect to this server, the interfaces file at the client machine must contain an entry specifying the machine name on which the database server is running, and the TCP/IP port it uses.
$ For details on setting up the client machine, see "Configuring Open
Servers" on page 968.
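For example, on UNIX-style systems an interfaces file entry for a server named asademo on machine elora, port 2638, might look like the following sketch (the names are illustrative, and the exact file format varies by platform):

```
asademo
    query tcp ether elora 2638
    master tcp ether elora 2638
```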
Configuring Open Servers
Starting dsedit
The dsedit executable is in the SYBASE\bin directory, which is added to your
path on installation. You can start dsedit either from the command line or
from the Windows Explorer in the standard fashion.
When you start dsedit, the Select Directory Service window appears.
❖ To open a session:
♦ Click the local name of the directory service you want to connect to, as
listed in the DS Name box, and click OK.
For Adaptive Server Anywhere, select the Interfaces Driver.
You can add, modify, or delete entries for servers, including Adaptive Server
Anywhere servers, in this window.
Server entry name need not match server command-line name
The server name entered here does not need to match the name provided on the Adaptive Server Anywhere command line. The server address identifies and locates the server, not the server name. The server name field is purely an identifier for Open Client. For Adaptive Server Anywhere, if the server has more than one database loaded, the DSEDIT server name entry identifies which database to use.
Machine name A name (or an IP address) identifies the machine on which the server is
running. On Windows and Windows NT you can find the machine name in
Network Settings, in the Control Panel.
If your client and server are on the same machine, you must still enter the
machine name. In this case, you can use localhost to identify the current
machine.
Port Number The port number you enter must match the port specified on the Adaptive
Server Anywhere database server command line, as described in "Starting
the database server as an Open Server" on page 966. The default port number
for Adaptive Server Anywhere servers is 2638. This number has been assigned to Adaptive Server Anywhere by the Internet Assigned Numbers Authority (IANA), and use of this port is recommended unless you have good reasons
for explicitly using another port.
The following are valid server address entries:
elora,2638
123.85.234.029,2638
❖ To ping a server:
1 Ensure that the database server is running.
2 Click the server entry in the Server box of the dsedit session window.
3 Choose Ping from the Server Object menu. The Ping window appears.
4 Click the address you want to ping. Click Ping.
A message box appears, notifying you whether or not the connection is
successful. A message box for a successful connection states that both
Open Connection and Close Connection succeeded.
Characteristics of Open Client and jConnect connections
Option Set to
ALLOW_NULLS_BY_DEFAULT OFF
ANSINULL OFF
ANSI_BLANKS ON
ANSI_INTEGER_OVERFLOW ON
AUTOMATIC_TIMESTAMP ON
CHAINED OFF
CONTINUE_AFTER_RAISERROR ON
DATE_FORMAT YYYY-MM-DD
DATE_ORDER MDY
ESCAPE_CHARACTER OFF
ISOLATION_LEVEL 1
FLOAT_AS_DOUBLE ON
QUOTED_IDENTIFIER OFF
TIME_FORMAT HH:NN:SS.SSS
TIMESTAMP_FORMAT YYYY-MM-DD HH:NN:SS.SSS
TSQL_HEX_CONSTANT ON
TSQL_VARIABLES ON
How the startup options are set
The default database options are set for TDS connections using a system procedure named sp_tsql_environment. This procedure sets the following options:
SET TEMPORARY OPTION TSQL_VARIABLES='ON';
SET TEMPORARY OPTION ANSI_BLANKS='ON';
SET TEMPORARY OPTION TSQL_HEX_CONSTANT='ON';
SET TEMPORARY OPTION CHAINED='OFF';
The procedure sets options only for connections that use the TDS
communications protocol. This includes Open Client and JDBC connections
using jConnect. Other connections (ODBC and Embedded SQL) have the
default settings for the database.
You can change the options for TDS connections.
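For example, a TDS connection that wants SQL/92 identifier handling could override one of the defaults for the current connection only:

```sql
SET TEMPORARY OPTION QUOTED_IDENTIFIER = 'ON'
```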
CHAPTER 34    Replicating Data with Replication Server
About this chapter This chapter describes how you can use Replication Server to replicate data
between an Adaptive Server Anywhere database and other databases. Other
databases in the replication system may be Adaptive Server Anywhere
databases or other kinds of database.
Contents
Topic Page
Introduction to replication 978
A replication tutorial 981
Configuring databases for Replication Server 991
Using the LTM 994
Before you begin Replication Server administrators who are setting up Adaptive Server
Anywhere to take part in their Replication Server installation will find this
chapter especially useful. You should have knowledge of Replication Server
documentation, and familiarity with the Replication Server product. This
chapter does not describe Replication Server itself.
$ For information about Replication Server, including design, commands,
and administration, see your Replication Server documentation.
Introduction to replication
Data replication is the sharing of data among physically distinct databases.
Changes made to shared data at any one database are copied precisely to the
other databases in the replication system.
Data replication brings some key benefits to database users.
Data availability Replication makes data available locally, rather than through potentially
expensive, less reliable and slower connections to a single, central database.
Since data is accessible locally, you always have access to it, even in the event of a long-distance network connection failure.
Response time Replication improves response times for data requests for two reasons. First,
retrieval rates are faster since requests are processed on a local server
without accessing some wide area network. Second, competition for
processor time decreases, since local processing offloads work from a central
database server.
For each replication definition, there is a primary site, where changes to the
data in the replication occur. The sites that receive the data in the replication
are called replicate sites.
Asynchronous procedure calls
The Replication Server can use asynchronous procedure calls (APC) at replicate sites to alter data at a primary site database. If you are using APCs, the above diagram does not apply. Instead, the requirements are the same as for a primary site.
Chapter 34 Replicating Data with Replication Server
A replication tutorial
This section provides a step-by-step tutorial describing how to replicate data
from a primary database to a replicate database. Both databases in the tutorial
are Adaptive Server Anywhere databases.
Replication Server assumed
This section assumes you have a running Replication Server. For more information about how to install or configure Replication Server, see the Replication Server documentation.
What is in the tutorial
This tutorial describes how to replicate only tables. For information about replicating procedures, see "Preparing procedures and functions for replication" on page 995.
The tutorial uses a simple example of a (very) primitive office news system:
a single table with an ID column holding an integer, a column holding the
user ID of the author of the news item, and a column holding the text of the
news item. The id column and the author column make up the primary key.
Before you work through the tutorial, create a directory (for example,
c:\tutorial) to hold the files you create in the tutorial.
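For example, the news table described above might be created with a statement like the following sketch (the column names and sizes are illustrative):

```sql
CREATE TABLE news (
    id        INTEGER,
    author    CHAR( 40 ),
    contents  VARCHAR( 2000 ),
    PRIMARY KEY ( id, author )
)
```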
What’s next? Next, you have to start database servers running on these databases.
What’s next? Next, you have to make entries for each of the Adaptive Server Anywhere
servers in an interfaces file, so Replication Server can communicate with
these database servers.
What’s next? Next, confirm that the Open Servers are configured properly.
Actions carried out by rssetup.sql
The rssetup.sql command file carries out the following functions:
♦ Creates a user named dbmaint, with password dbmaint and with DBA
permissions. This is the maintenance user name and password required
by Replication Server to connect to the primary site database.
♦ Creates a user named sa, with password sysadmin and with DBA
permissions. This is the user ID used by Replication Server when
materializing data.
♦ Adds sa and dbmaint to a group named rs_systabgroup.
Passwords and user IDs
While the hard-wired user IDs (dbmaint and sa) and passwords are useful for test and tutorial purposes, you should change the passwords, and perhaps also the user IDs, when running databases that require security. Users with DBA permissions have full authority in an Adaptive Server Anywhere database.
The user ID sa and its password must match that of the system administrator
account on the Replication Server. Adaptive Server Anywhere does not
currently accept a NULL password.
Permissions The rssetup.sql script carries out a number of operations, including some
permissions management. The permissions changes made by rssetup.sql are
outlined here. You do not have to make these changes yourself.
For replication, ensure that the dbmaint and sa users can access this table
without explicitly specifying the owner. To do this, the table owner user ID
must have group membership permissions, and the dbmaint and sa users
must be members of the table owner group. To grant group permissions, you
must have DBA authority.
For example, if the table is owned by user DBA, you should grant group
permissions to the DBA user ID:
GRANT GROUP
TO DBA
You should then grant the dbmaint and sa users membership in the DBA
group. To grant group membership, you must either have DBA authority or
be the group ID.
GRANT MEMBERSHIP
IN GROUP "DBA"
TO dbmaint ;
GRANT MEMBERSHIP
IN GROUP "DBA"
TO sa ;
For news to act as part of a replication primary site, you must set the
REPLICATE flag to ON for the table using an ALTER TABLE statement:
ALTER TABLE news
REPLICATE ON
go
#
# Configuration file for 'PRIMELTM'
#
SQL_server=PRIMEDB
SQL_database=primedb
SQL_user=sa
SQL_pw=sysadmin
RS_source_ds=PRIMEDB
RS_source_db=primedb
RS=your_rep_server_name_here
RS_user=sa
RS_pw=sysadmin
LTM_admin_user=dba
LTM_admin_pw=sql
LTM_charset=cp850
scan_retry=2
APC_user=sa
APC_pw=sysadmin
SQL_log_files=C:\TUTORIAL
If you have changed the user ID and password in the rssetup.sql command
file from sa and sysadmin, you should use the new user ID and password in
this configuration.
To start the Adaptive Server Anywhere LTM running on the primary site
server, enter the following command:
dbltm -S PRIMELTM -C primeltm.cfg
The connection information is in primeltm.cfg. In this command line,
PRIMELTM is the server name of the LTM.
You can find usage information about the Adaptive Server Anywhere LTM
by typing the following statement:
dbltm -?
You can run the Adaptive Server Anywhere LTM for Windows NT as an NT
service. For information on running programs as services, see "Running the
server outside the current session" on page 18.
You have now completed your installation. Try replicating data to confirm
that the setup is working properly.
Configuring the Each primary site Adaptive Server Anywhere database requires an LTM to
LTM send data to Replication Server. Each primary or replicate site Adaptive
Server Anywhere database requires an Open Server definition so that
Replication Server can connect to the database.
$ For information on configuring the LTM, see "Configuring the LTM"
on page 997.
Configuring databases for Replication Server
The maintenance user
The setup script creates a maintenance user with name dbmaint and password dbmaint. The maintenance user has DBA permissions in the Adaptive Server Anywhere database, which allows it full control over the database. For security reasons, you should change the maintenance user ID and password.
The materialization user ID
When Replication Server connects to a database to materialize the initial copy of the data in the replication, it does so using the Replication Server system administrator account.
The Adaptive Server Anywhere database must have a user ID and password
that match the Replication Server system administrator user ID and
password. Adaptive Server Anywhere does not accept a NULL password.
The setup script assumes a user ID of sa and a password of sysadmin for the
Replication Server administrator. You should change this to match the actual
name and password.
The language is one of the language labels listed in "Language label values"
on page 289.
Using the LTM
The effects of setting REPLICATE ON for a table
Setting REPLICATE ON places extra information into the transaction log whenever an UPDATE, INSERT, or DELETE action occurs on the table. The Adaptive Server Anywhere Replication Agent uses this extra information to submit the full pre-image of the row, where required, to Replication Server for replication.
Even if only some of the data in the table needs to be replicated, all changes
to the table are submitted to Replication Server. It is Replication Server’s
responsibility to distinguish the data to be replicated from that which is not.
When you update, insert, or delete a row, the pre-image of the row is the
contents of the row before the action, and the post-image is the contents of
the row after the action. For INSERTS, only the post-image is submitted (the
pre-image is empty). For DELETES, the post-image is empty and only the
pre-image is submitted. For UPDATES, both the pre-image and the updated
values are submitted.
Notes Adaptive Server Anywhere supports data of zero length that is not NULL.
However, non-null long varchar and long binary data of zero length is
replicated to a replicate site as NULL.
If a primary table has columns with unsupported data types, you can replicate
the data if you create a replication definition using a compatible supported
data type. For example, to replicate a DOUBLE column, you could define
the column as FLOAT in the replication definition.
Side effects of setting REPLICATE ON for a table
There can be a replication performance hit for heavily updated tables. You could consider using replicated procedures if you experience performance problems that may be related to replication traffic, since replicated procedures send only the call to the procedure instead of each individual action.
Since setting REPLICATE ON sends extra information to the transaction
log, this log grows faster than for a non-replicating database.
Minimal column replication definitions
The Adaptive Server Anywhere LTM supports the Replication Server replicate minimal columns feature. This feature is enabled at Replication Server.
$ For more information on replicate minimal columns, see your
Replication Server documentation.
Asynchronous procedures
Procedures called at a replicate site database to update data at a primary site
database are asynchronous procedures. The procedure carries out no action
at the replicate site, but rather, the call to the procedure is replicated to the
primary site, where a procedure of the same name executes. This is called an
asynchronous procedure call (APC). The changes made by the APC are
then replicated from the primary to the replicate database in the usual
manner.
$ For information about APCs, see your Replication Server
documentation.
The APC_user and APC support
Support for APCs in Adaptive Server Anywhere is different from that in Adaptive Server Enterprise. In Adaptive Server Enterprise, each APC
executes using the user ID and password of the user who called the
procedure at the replicate site. In Adaptive Server Anywhere, however, the
transaction log does not store the password, and so it is not available at the
primary site. To work around this difference, the LTM configuration file
holds a single user ID with associated password, and this user ID (the
APC_user) executes the procedure at the primary site. The APC_user must,
therefore, have appropriate permissions at the primary site for each APC that
may be called.
Turning off buffering
You can turn off buffering of transactions by setting the batch_ltl_cmds configuration parameter to off:
batch_ltl_cmds=off
File Description
Charset.loc Character set definition files that define the lexical properties of
each character such as alphanumeric, punctuation, operand, upper
or lower case.
*.srt Defines the sort order for alpha-numeric and special characters.
*.xlt Terminal-specific character translation files for use with utilities.
Adaptive Server Anywhere character sets are specified differently than Open
Client/Open Server character sets. Consequently, the requirement is that the
Adaptive Server Anywhere character set be compatible with the LTM
character set.
Sort order All sort orders in your replication system should be the same.
You can find the default entry for your platform in the locales.dat file in the
locales subdirectory of the Sybase release directory.
Example ♦ The following settings are valid for a Japanese installation:
LTM_charset=SJIS
LTM_language=Japanese
❖ To stop the LTM in Windows NT, when the LTM is not running as a
service:
♦ Click SHUTDOWN on the user interface.
PART SEVEN
Appendixes
These appendixes describe the dialog boxes and property sheets you may
encounter while using Sybase Central.
APPENDIX A    Dialog Box Descriptions
This chapter provides descriptions of all the dialog boxes you can access in
Sybase Central.
$ For descriptions of the Adaptive Server Anywhere property sheets, see
"Property Sheet Descriptions" on page 1035.
Contents
Topic Page
Introduction to dialog boxes 1008
Dialogs accessed through the File menu 1009
Dialogs accessed through the Tools menu 1019
Debugger windows 1027
Introduction to dialog boxes
Appendix A Dialog Box Descriptions
Dialogs accessed through the File menu
♦ Stop the database when all connections are closed Shuts down the
database when the last connection to it is closed. Note: This option is
different from the -ga server switch, which auto-stops the database
server itself.
$ See also
♦ "Running the Database Server" on page 3
♦ "Working with databases" on page 111
Java Objects
When you select a Java object in the Java Objects folder in Sybase Central,
different menu items appear in the File menu (or in a popup menu if you
right-click the object).
$ For more information about Java objects, see "Introduction to Java in
the database" on page 500 and "Using Java in the Database" on page 535.
Publications
When you select a publication in the SQL Remote folder in Sybase Central,
different menu items appear in the File menu (or in a popup menu if you
right-click the object).
Sybase Central contains one dialog related to publications that is not located
in the File menu. It can be accessed when you double-click Add Article in
the right pane of the viewer when you have a publication open in the left
pane, and lets you define new articles for a publication.
$ For descriptions of dialogs related to remote users, which are also
located in the SQL Remote folder, see "Users and groups" on page 1012.
$ For more information about publications, see "Publication design for
Adaptive Server Anywhere" on page 327 of the book Replication and
Synchronization Guide.
Table Properties
The Table Properties dialog (accessed by clicking File➤Properties when you
have a table selected in the Publications folder) lets you view and edit table
properties. This dialog is the same as for any table located in the Tables
folder; see "Table properties" on page 1046 for a description.
Remote Servers
When you select a remote database server in the Remote Servers folder in
Sybase Central, different menu items appear in the File menu (or in a popup
menu if you right-click the object).
$ For descriptions of dialogs related to local servers or databases, see
"Servers and databases" on page 1009.
♦ External login Lets you enter the login id to be used by the local user.
♦ External password Lets you enter the external login password to be
used by the local user.
♦ Confirm password Lets you confirm that you have entered the
external login password correctly.
$ See also
♦ "Working with external logins" on page 877
Connect dialog
The Connect dialog lets you define parameters for connecting to a server or
database. The same dialog is used in both Sybase Central and Interactive
SQL. In Sybase Central, you can access it by clicking Tools➤Connect. In
Interactive SQL, the dialog appears when you start a new session or open a
new window.
The Connect dialog has the following pages (or tabs).
♦ The Identification tab lets you identify yourself to the database and
specify a data source.
♦ The Database tab lets you identify a server and database to connect to.
♦ The Advanced tab lets you add additional connection parameters and
specify a driver for the connection.
In Sybase Central, after you connect successfully, the database name appears
in the left panel of the main window, under the server that it is running on.
The user that you connect as is shown in brackets after the database name.
You can then administer the database by navigating and selecting objects that
belong to the database.
In Interactive SQL, the connection information (including the database name,
your user id, and the database server) appears on a title bar above the SQL
Statements pane.
Tip
If you connect to a database using an account that does not have DBA
authority, you can only alter objects on which you have the required
permissions.
Dialogs accessed through the Tools menu
♦ Database file Lets you enter the full path, name, and extension of the
Adaptive Server Anywhere database file if the database is on the same
machine as Sybase Central or Interactive SQL. You can search for the
file by clicking the Browse button.
♦ Start database automatically Causes the database to start
automatically (if it is not already running) when you start a new Sybase
Central or Interactive SQL session.
♦ Stop database after last disconnect Causes the database to shut
down automatically after the last user has disconnected.
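The Database file and Stop database after last disconnect settings correspond
to the DBF and AutoStop connection parameters. A sketch of an equivalent
Interactive SQL connection, assuming the default DBA user and a hypothetical
file path:

```sql
-- Connect using an explicit database file; AutoStop=yes shuts the
-- database down after the last user disconnects.
CONNECT USING 'UID=DBA;PWD=SQL;DBF=c:\asa\sample.db;AutoStop=yes';
```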
$ See also
♦ "Connecting to a Database" on page 31
♦ "Connection parameters" on page 44 of the book ASA Reference
♦ "Identification dialog tab" on page 1020
♦ "Advanced dialog tab" on page 1021
Tip
You can specify a network protocol as a CommLinks connection
parameter on this tab, but the protocols available depend on the driver you
are using. For jConnect, the TCP/IP protocol is used automatically.
$ See also
♦ "Connecting to a Database" on page 31
♦ "Connection parameters" on page 44 of the book ASA Reference
♦ "Identification dialog tab" on page 1020
♦ "Database dialog tab" on page 1020
Disconnect dialog
The Disconnect dialog (accessed by clicking Tools➤Disconnect) lets you
view all connections and disconnect from the ones you choose.
Dialog components
♦ Connections list Shows all current connections for your Sybase
Central session, along with a description and corresponding plug-in for
each.
♦ Disconnect Disconnects the selected connection(s).
$ See also
♦ "Connecting to a Database" on page 31
$ See also
♦ "Connecting to a Database" on page 31
Plug-ins dialog
The Plug-ins dialog (accessed by clicking Tools➤Plug-ins) lets you
configure existing plug-in modules settings and register new ones. For more
information about using and configuring plug-ins, see the online Help for the
Sybase Central viewer.
Dialog components
♦ Plug-in list Lists and describes each plug-in currently registered with
Sybase Central. Note that Sybase Central has its own version number.
♦ Register Displays the "Register Plug-In dialog" on page 1025, which
lets you register a new plug-in by specifying a Java class or JAR file. In
this dialog, you can also type additional directory paths to add to the
plug-in’s classpath (the set of locations of the required classes for the
plug-in).
♦ Load Loads the selected plug-in for use.
Advanced tab components
♦ Load plug-in with a separate class loader Lets you specify additional
paths to add to the current classpath. When this option is enabled, the
field below becomes editable.
♦ Additional paths field Lets you type additional paths to add to the
current classpath. You can search for a path by clicking the Browse
button. If you are entering more than one path, make sure each one is on
a separate line.
Options dialog
The Options dialog for the main Sybase Central viewer (accessed by clicking
Tools➤Options) lets you configure basic viewer appearance options. This
dialog consists of two pages (or tabs): General and Chart.
General tab components
♦ Viewer look and feel Lets you configure the look and feel of the main
viewer window. Metal displays the viewer in the standard Java look;
CDE/Motif displays it in the standard Motif look; and Windows displays
it in the standard Windows look.
♦ Tab Placement Lets you configure the placement of the tabs for the
viewer’s right pane. For some plug-ins, there is only one tab. For other
plug-ins, there are multiple tabs.
♦ Viewer font options Let you configure the appearance of the viewer
display text. Options include:
♦ Font name Lets you choose the type of font for the viewer
display text.
♦ Font style Lets you choose the style of font (bold, italic, or plain)
for the viewer display text.
♦ Size Lets you choose the size of the viewer display text.
♦ Sample Shows a text sample that reflects the current font settings.
♦ Reset to Defaults Resets all settings on this tab to their defaults.
Chart tab components
♦ Update interval Lets you specify how frequently the Performance
Monitor is updated. You can move the slider to increase or decrease the
time (in seconds), or you can type an exact value in the text box.
♦ Type options Let you choose how statistics are displayed in the
Performance Monitor. Use the sample window to preview the options.
♦ Reset to Defaults Resets all settings on this tab to their defaults.
Help tab components
This tab appears only if you are running on a Windows operating system.
♦ Help options Let you choose the type of online help you want the
Sybase Central viewer to use. All of these help systems contain the same
content, so you can always access the same information even if you
choose a different type of help.
Debugger windows
This section describes the windows in the Adaptive Server Anywhere
debugger.
Breakpoints window
The Breakpoints window lets you display and manipulate breakpoints. The
window displays breakpoints for the active connection only.
Enabling breakpoints
You can enable and disable breakpoints by clicking the breakpoint icon.
The icon represents an active breakpoint.
Calls window
The Calls window shows the chain of procedures that have been called to get
to the currently executing procedure. You can change the Source code
window display to a calling procedure by double clicking on a row, or
selecting it and pressing ENTER. You can also move up and down the call
stack using the Stack menu.
Java and non-Java groups
The Calls window does not show complete information for stored procedure
execution when at a Java breakpoint. Instead, it groups all stored procedure
calls into one row and labels it non-Java.
Similarly, when at a stored procedure breakpoint, the Calls window groups
Java calls into a single row and labels it as Java.
Catch window
The debugger always traps uncaught exceptions from Java code. You can use
the Catch Window to instruct the debugger to trap exceptions caught by the
Java code. The exception specified in this window must exactly match the
thrown exception. The debugger will not trap a thrown subclass.
When an exception is thrown, execution stops at the point the exception is
thrown. You can then choose Run➤Step Over to proceed to the point where
the exception is caught.
To add an exception:
♦ In the Catch window, press INSERT, or select a class in the "Classes
window" on page 1028 and choose Break➤When exception thrown.
To clear an exception:
♦ Select a line in the Catch window and press DELETE.
Classes window
The Classes window displays all Java classes currently installed in the
database.
You can see the source for a class by double clicking on it, or by selecting it
and pressing ENTER. If source code does not appear in the Source code
window, you may have to tell the debugger where to find the source code
using the "Source Path window" on page 1031, or by choosing File➤Add
Source Path.
Note: Source code is not available for Sybase and Java API classes.
If the source for a class is displayed before the active connection has started
running Java in the database, the "Source code window" on page 1032
indicates that all lines are potential candidates for breakpoints by displaying
a breakpoint indicator on each line. This is because the Java class has not yet
been loaded and the debugger cannot determine which lines contain code. A
breakpoint set on an invalid line (for example, a comment) will be moved to
the nearest valid line when Java is started on that connection.
Connection window
The Connection window shows a list of all connections and their status.
Possible statuses include Running, Waiting, and Execution Interrupted.
You can use this window to select the connection you wish to debug (the
active connection).
Evaluate window
The Evaluate window allows you to debug individual expressions. At a
breakpoint, you may enter an expression in the evaluate window, and click
Evaluate. The value of the expression appears in the Expression Value box.
You can watch an expression by clicking on the Inspect button; this will
transfer the expression to the "Inspect window" on page 1030.
If you enter an expression that does not make sense within the context of the
procedure, then the Expression Value box will display the string ???.
Globals window
This window displays the names and values of all SQL global variables.
Inspect window
This window lets you evaluate expressions in the context of the connection
being debugged. If you are at a Java breakpoint, the expression is evaluated
as a Java expression; if you are at a stored procedure breakpoint, the
expression is evaluated as a SQL expression. If the expression is invalid, an
error message is displayed in the Value column.
Add new rows by pressing the INSERT key.
Remove rows by selecting them and pressing the DELETE key.
Change rows by selecting the Name column and typing a new expression.
You can expand Java objects by clicking the + to the left of the name, by
double clicking the name, or by selecting the name and pressing ENTER.
Methods window
This window displays the names of all the Java methods in the class
currently displayed in the "Source code window" on page 1032. You can
move the source code window to a method by double clicking on a method,
or selecting it and pressing ENTER. If the source code window is not
displaying a Java source file, no methods will be displayed.
Tip
When the methods window has focus, you can select methods by typing
the first part of the name. The list will change selections as you type.
Procedures window
This window displays all non-Java stored procedures in the database. You
can see the source code for a procedure by double clicking on it, or by
selecting it and pressing the ENTER key.
Tip
When the procedures window has focus, you can select procedures by
typing the first part of the name. The list will change selections as you
type.
Query window
This window allows you to run a SQL query in the context of the stored
procedure being debugged. For instance, you can use the query window to
inspect the contents of a temporary table used by a stored procedure.
Source Path window
To browse the disk for a directory and add it to the source path, choose Add
Source Path from the File menu.
Tip
You can search the source code for a procedure name by choosing Find
from the Search menu.
Statics window
This window displays the names of all the Java static fields in the class
currently displayed in the "Source code window" on page 1032. You can add
a field to the Inspect window by double clicking on it, or selecting it and
pressing ENTER. If the source code window is not displaying a Java source
file, no fields are displayed.
Tip
When the Statics window has focus, you can select fields by typing the
first part of the name. The list will change selections as you type.
Threads window
This window displays all active threads in the Java program being debugged.
You can double-click on a thread to cause the debugger to show its context.
A P P E N D I X B
Property Sheet Descriptions
This chapter provides descriptions of all the property sheets you can access
in Sybase Central. The sections are organized according to the object
hierarchy in the Sybase Central viewer object tree.
Contents
Topic Page
Introduction to property sheets 1037
Service properties 1038
Server properties 1041
Statistics properties 1043
Database properties 1044
Table properties 1046
Column properties 1049
Foreign Key properties 1052
Index properties 1055
Trigger properties 1056
View properties 1057
Procedures and Functions properties 1058
Users and Groups properties 1059
Integrated Logins properties 1062
Java Objects properties 1063
Domains properties 1064
Events properties 1065
Publications properties 1066
Articles properties 1067
Remote Users properties 1069
Message Types properties 1073
Connected Users properties 1074
Introduction to property sheets
Service properties
The Properties dialog for a service consists of five pages (or tabs): General,
Configuration, Account, Dependencies, and Polling. Each tab is described in
its own section.
$ For more information, see "Managing services" on page 19.
♦ Remove Removes the selected group or service from the list on this
tab. The group or service is then no longer started before the current
service.
Server properties
The Properties dialog for a server consists of one page (or tab): the General
tab.
Statistics properties
The Properties dialog for statistics consists of one page (or tab): the General
tab.
General tab components
♦ Object information The top part of the dialog shows the name and
type of the selected object.
♦ Description Gives a brief explanation of the statistic.
♦ Graph statistic in the Performance Monitor Adds this statistic to (or
removes it from) the Performance Monitor.
$ See also "Monitoring and Improving Performance" on page 777.
Database properties
The Properties dialog for a database consists of three pages (or tabs):
General, Extended Information, and SQL Remote.
$ For more information about using databases, see "Working with
databases" on page 111.
Table properties
The Properties dialog for tables consists of five pages (or tabs): General,
Columns, Constraints, Permissions and Misc. Each tab is described in its
own section.
$ For more information about using tables, see "Working with tables" on
page 120.
Column properties
The Properties dialog for columns consists of four pages (or tabs): General,
Data Type, Default, and Constraints. Each tab is described in its own section.
$ For more information about using tables, see "Working with tables" on
page 120.
Foreign Key properties
♦ Update action Lets you define the behavior of the selected table when
the user tries to update data. You have the following options:
♦ Restrict Update prevents updates of the associated primary table’s
primary key value if there are corresponding foreign keys in this
table
♦ Cascade updates the foreign key to match a new value for the
associated primary key
♦ Set NULL sets to NULL all the foreign-key values in this table that
correspond to the updated primary key of the associated primary
table. Note: To use this option, the foreign-key columns must all
have Allow Nulls set.
♦ Set Default sets to the column’s default value all the foreign-key
values in this table that correspond to the updated primary key of
the associated primary table. Note: To use this option, the foreign-
key columns must all have default values.
♦ Delete action Lets you define the behavior of the selected table when
the user tries to delete data. You have the following options:
♦ Restrict Delete prevents deletion of the associated primary table’s
primary key value if there are corresponding foreign keys in this
table.
♦ Cascade deletes the rows from this table that match the deleted
primary key of the associated primary table.
♦ Set NULL sets to NULL all the foreign-key values in this table that
correspond to the deleted primary key of the associated primary
table. Note: To use this option, the foreign-key columns must all
have Allow Nulls set.
♦ Set Default sets to the column’s default value all the foreign-key
values in this table that correspond to the deleted primary key of the
associated primary table. Note: To use this option, the foreign-key
columns must all have default values.
♦ Advanced foreign key properties Lets you set advanced properties of
the selected foreign key. You have the following options:
♦ Allows NULL determines the nullability of the foreign-key columns.
Note: To use this option, the foreign-key columns must all have
Allow Nulls set.
♦ Check on commit forces the database to wait for a COMMIT
before checking the integrity of the foreign key, overriding the
setting of the WAIT_FOR_COMMIT database option. Note: This
option can only be used with the Restrict actions.
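The update and delete actions above map onto the ON UPDATE and ON DELETE
clauses of a foreign key definition. A sketch, assuming hypothetical
sales_order and customer tables:

```sql
-- Cascade primary-key updates; set the foreign key to NULL on
-- delete (cust_id must therefore allow NULLs).
ALTER TABLE sales_order
ADD FOREIGN KEY fk_customer (cust_id)
REFERENCES customer (id)
ON UPDATE CASCADE
ON DELETE SET NULL;
```

CHECK ON COMMIT would be added as a trailing clause, and, as noted above,
only together with the Restrict actions.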
$ See also
♦ "Ensuring Data Integrity" on page 345
♦ "Managing foreign keys" on page 129
Index properties
The Properties dialog for an index consists of two pages (or tabs): General
and Columns.
General tab components
♦ Object information The top half of the dialog shows the name and type
of the object, as well as the table with which it is associated.
♦ DB Space Shows the dbspace used by the index.
♦ Is unique Shows whether values in the index must be unique. You can
set the uniqueness value when you create a new index.
♦ Comment Provides a place for you to type a text description of this
object. For example, you could use this area to describe the object’s
purpose in the system.
Columns tab components
♦ Column list Shows all of the columns in the index, along with their
order (ascending or descending). You can set the order when you create
a new index.
♦ Details Displays the Column Details dialog, which shows a summary
of the properties of the selected object.
$ See also "Working with indexes" on page 141.
Trigger properties
The Properties dialog for a trigger consists of two pages (or tabs): General
and Type Information.
General tab components
♦ Object information The top half of the tab shows the name and type
of the object, the table with which it is associated, and the SQL dialect in
which the code was last saved (Watcom-SQL or Transact-SQL).
♦ Comment Provides a place for you to type a text description of this
object. For example, you could use this area to describe the object’s
purpose in the system.
Type Information tab components
♦ Trigger Timing Determines whether the trigger executes Before or
After the event. Row-level triggers can also have SQL Remote conflict
timing, which executes before UPDATE or UPDATE OF column-lists
events.
♦ Trigger Type Determines which events cause the trigger to execute.
Events: Insert, Delete, Update, Update Columns.
♦ Trigger Level Determines whether the trigger is a row-level trigger or
a statement-level trigger.
♦ List For triggers in this table that execute for the same kind of event
with the same timing, this number determines the order in which these
triggers are fired.
$ See also "Using Procedures, Triggers, and Batches" on page 423.
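The timing, event, and level settings above correspond directly to clauses of
the CREATE TRIGGER statement. A minimal Watcom-SQL sketch (the table and
column names are hypothetical):

```sql
-- A row-level trigger that fires before each update of unit_price
-- and stamps the row with the time of the change.
CREATE TRIGGER trg_price_change
BEFORE UPDATE OF unit_price ON product
REFERENCING OLD AS old_row NEW AS new_row
FOR EACH ROW
BEGIN
    SET new_row.changed_on = CURRENT TIMESTAMP;
END;
```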
View properties
The Properties dialog for views consists of three pages (or tabs): General,
Permissions and Columns.
General tab components
♦ View information The top half of the tab shows the name and type of
the object, as well as the database user who created (and owns) this
object.
♦ Comment text box Provides a place for you to type a text description
of this object. For example, you could use this area to describe the
object’s purpose in the system.
Permissions tab components
♦ Permission list Lists the users who have permissions on the selected
view; you can add users to the list by clicking the Grant button. Click in
the fields beside each user to grant or revoke permissions; double-
clicking (so that a check mark and two '+' signs appear) gives the user
grant options.
♦ Grant Displays the Grant Permission dialog, which lets you grant
permissions to other users.
♦ Revoke Revokes permissions from the selected users and removes
them from the list.
Columns tab components
♦ Columns list Lists the columns in the selected view.
$ See also "Working with views" on page 134.
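The permissions managed on this tab can also be granted in SQL. A sketch
(the view and user names are hypothetical); WITH GRANT OPTION corresponds
to the double-clicked, grant-option state:

```sql
-- Grant SELECT on a view, allowing the recipient to pass the
-- permission on to other users.
GRANT SELECT ON sales_view TO m_haneef WITH GRANT OPTION;
```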
Procedures and Functions properties
Permissions tab components
♦ User list Shows all users who have permission to execute the
procedure. Users with permission have a check mark beside them in the
Execute column; you can click in this column to toggle between
granting or not granting permission.
♦ Grant Displays the Grant Permission dialog, which lets you choose the
users or groups for which you want to grant permission.
♦ Revoke Revokes permission from the selected user or group and
removes them from the list.
Parameters tab components
♦ Parameter list Displays the parameters for the selected procedure.
$ See also
♦ "Using Procedures, Triggers, and Batches" on page 423
♦ "Granting permissions on procedures" on page 726
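The Execute check marks on the Permissions tab correspond to GRANT EXECUTE
statements; a sketch with hypothetical procedure and group names:

```sql
-- Give the marketing group permission to call the procedure.
GRANT EXECUTE ON show_customers TO marketing;
-- Revoking removes the user or group from the list.
REVOKE EXECUTE ON show_customers FROM marketing;
```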
Users and Groups properties
Integrated Logins properties
Java Objects properties
Description tab components
♦ Class description Shows a sample of the Java class's code.
$ See also
♦ "Using Java in the Database" on page 535
Domains properties
The Properties dialog for domains consists of two pages (or tabs): General
and Check Constraint.
General tab components
♦ Object information The top half of the tab shows the name of the
object, its type, and the database user who created (and owns) this object.
♦ Built-in type Shows the pre-defined data type of the selected domain.
The format of the data type (where applicable) is listed after the type’s
name.
♦ Default value Shows the default value of the selected domain.
Columns based on this domain inherit this default value (if any), but you
can subsequently override it.
♦ Allows null Shows the nullability of columns based on this domain.
Check Constraint tab components
♦ Check constraint list Shows the check constraint for the selected
domain. A check constraint allows you to specify conditions for a
column or set of columns.
$ See also "Using domains" on page 359.
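The built-in type, default value, nullability, and check constraint shown on
these tabs together make up a CREATE DOMAIN statement. A sketch (the domain
name is hypothetical; in a domain check constraint, a variable prefixed with
@ stands for whatever column the domain is applied to):

```sql
-- Columns based on this domain inherit the default, the NOT NULL
-- setting, and the check constraint.
CREATE DOMAIN postal_code CHAR(10)
NOT NULL
DEFAULT ''
CHECK ( LENGTH(@col) >= 5 );
```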
Events properties
The Properties dialog for events consists of four pages (or tabs): General,
Misc., Conditions, and Handler.
General tab components
♦ Object information The top half of the tab shows the name of the
object, its type, and the database user who created (and owns) this object.
♦ Comment Provides a place for you to type a text description of this
object. For example, you could use this area to describe the object’s
purpose in the system.
Misc. tab components
♦ Event is enabled Lets you enable or disable the event.
♦ Restrict options Lets you specify how the event executes. You have
the following options:
♦ Execute at all locations Executes the event at all remote
locations
♦ Execute at the consolidated database only Executes the event
at the consolidated database only, and not at any of the remote
locations.
♦ Execute at the remote database only Executes the event at a
remote database only, and not at the consolidated database.
Conditions tab components
♦ Manually Executes the event only when you manually trigger it.
♦ By the following schedules Executes the event according to the
schedules that you define. You can create new schedules by clicking
Add. You can change existing schedules or remove them entirely by
clicking Edit and Remove, respectively.
♦ When the following occurs Executes the event when a circumstance
or condition is met. You can specify a circumstance or condition by
clicking Edit.
Handler tab components
♦ Event handler code Lets you edit the code of the event handler.
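A schedule and handler like those defined on these tabs can be sketched in
SQL as follows (the event name and handler body are illustrative only):

```sql
-- Run the handler every day at 23:00.
CREATE EVENT nightly_backup
SCHEDULE nightly START TIME '23:00' EVERY 24 HOURS
HANDLER
BEGIN
    BACKUP DATABASE DIRECTORY 'c:\\backup';
END;
```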
Publications properties
The Properties dialog for a publication consists of three pages (or tabs):
General, Articles, and Subscriptions.
General tab components
♦ Object information The top half of the tab shows the name of the
object, its type, and the database user who created (and owns) this object.
♦ Comment Provides a place for you to type a text description of this
object. For example, you could use this area to describe the object’s
purpose in the system.
Articles tab components
♦ Article list Shows all articles in the publication (listed by article
name, article type, and conflict trigger, if any).
Subscriptions tab components
♦ Publication list Shows all subscriptions to the selected publication
(listed by remote user name and status).
♦ Subscribe For Displays the Subscribe for User dialog, which lets you
subscribe an existing remote user to the publication.
♦ Unsubscribe Removes the subscriptions of selected remote users
from the publication.
♦ Advanced Displays the Advanced Remote Actions dialog, which lets
you start, stop, or synchronize subscriptions.
$ See also "Publication design for Adaptive Server Anywhere" on
page 327 of the book Replication and Synchronization Guide.
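Publications, articles, and subscriptions can also be created in SQL; a
sketch with hypothetical publication, table, and user names:

```sql
-- A publication containing one article: three columns of the
-- customer table, restricted by a WHERE clause.
CREATE PUBLICATION pub_contact (
    TABLE customer ( id, name, region )
    WHERE region = 'Western'
);
-- Subscribe an existing remote user to the publication.
CREATE SUBSCRIPTION TO pub_contact FOR field_user;
```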
Articles properties
The Properties dialog for a publication article consists of four pages (or
tabs): General, Table, Where Restriction and Subscribe Restriction. Each tab is
described in its own section.
$ For more information about articles, see "Publication design for
Adaptive Server Anywhere" on page 327 of the book Replication and
Synchronization Guide.
Remote Users properties
♦ DBA Grants DBA authority to the remote user; a remote user with
DBA authority can fully administer the database.
♦ Resource Grants resource authority to the remote user; a remote user
with resource authority can create database objects.
♦ Remote DBA Grants Remote DBA authority to the remote user. The
Message Agent should be run using a user ID with this type of authority
to ensure that actions can be carried out without creating security
loopholes.
$ See also "Granting and revoking remote permissions" on page 728.
♦ Publication list Shows all publications that the selected remote user is
subscribed to (listed by publication name and status).
♦ Subscribe To Displays the Subscribe to Publication dialog, which lets
you subscribe the selected remote user to any of the listed publications.
♦ Unsubscribe Removes the selected publication from the list, causing
the remote user to no longer be subscribed to that publication.
♦ Advanced Displays the Advanced Remote Actions dialog, which lets
you start, stop, or synchronize subscriptions.
♦ Message type Lets you select a message type for communicating with
the publisher.
♦ Address Provides a place for you to type the remote address of the
selected remote user.
♦ Send then close Sets the replication frequency so that the publisher’s
agent runs once, sends all pending messages to this remote group, then
shuts down. This means that the agent must be restarted each time the
publisher wants to send messages. Note: In most replication setups, this
option is not used for sending from the consolidated publisher to the
remote group.
♦ Send every Sets the replication frequency so that the publisher’s agent
runs continuously, sending messages to this remote group at the given
periodic interval.
♦ Send daily at Sets the replication frequency so that the publisher’s
agent runs continuously, sending messages to this remote group each
day at the given time.
$ See also "Granting and revoking remote permissions" on page 728.
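The message type, address, and send-frequency settings above correspond to
clauses of the GRANT REMOTE statement; a sketch with a hypothetical remote
user and the file message system:

```sql
-- Use the file message system; send pending messages daily at 22:00
-- (the equivalent of the Send daily at option above).
GRANT REMOTE TO field_user
TYPE file ADDRESS 'field_user'
SEND AT '22:00';
```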
♦ Statistics list Shows SQL Remote usage statistics for the selected
remote user (from the perspective of the consolidated database). For
example, Last send shows when the most recent replication messages
were sent from the consolidated database to this remote user.
$ See also "Granting and revoking remote permissions" on page 728.
Remote Users tab components
♦ Remote users list Lists all of the remote users that are currently using
the selected message type.
♦ Properties Displays the property sheet for the selected remote user.
$ See also
♦ "Remote Users properties" on page 1069
♦ "Working with message types" on page 442 of the book Replication and
Synchronization Guide
Connected Users properties
Contains tab components
♦ Object list Depending upon the Show options enabled, this window
displays the objects (tables and indexes) currently saved in the current
database space.
♦ Show options Lets you determine which objects appear in the list on
this tab. You can show tables or indexes by enabling their check box, or
you can remove them from the list by disabling their check box.
♦ Properties Displays the property sheet of the object selected in the list
on this tab.
$ See also "Working with Database Files" on page 763.
Remote Servers properties
External Logins tab components
♦ External login list Lists all of the external logins and local users
currently associated with the remote server.
♦ Add External Login Displays the Add External Login dialog, which
lets you define external login settings.
Proxy Tables tab components
♦ Proxy table list Displays the proxy tables of the server and their
respective creators. Proxy tables are tables in the consolidated database
that map directly to tables found in the remote database.
Remote Procedures tab components
♦ Remote procedure list Displays the remote procedures of the server
and their respective creators. Remote procedures are procedures residing
on the remote server that can be called from the local database.
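The objects listed on these tabs are created with the CREATE SERVER and
CREATE EXISTING TABLE statements. A sketch, assuming a hypothetical ODBC
data source for the remote server:

```sql
-- Define the remote server over an ODBC data source.
CREATE SERVER remsvr CLASS 'asaodbc' USING 'remote_dsn';
-- Create a proxy table that maps to the remote employee table
-- (location string: server, database, owner, table).
CREATE EXISTING TABLE employee_proxy
AT 'remsvr..DBA.employee';
```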
Glossary
article
In SQL Remote replication, an article is a database object that represents a
whole table, or a subset of the columns and rows in a table. Articles are
grouped together in publications.
authority
Determines what structural actions a user can perform in a database. While
most users will have no special authorities, a user with DBA authority can
grant other users resource authority, DBA authority, or remote DBA
authority.
backup
It is important to make regular backups of your database files in case of
media failure. A backup is a copy of the database file. You can make
backups using the backup utility or using other archiving software of your
choice.
base table
The tables that permanently hold the data in the database are sometimes
called base tables to distinguish them from temporary tables and from views.
cache
To avoid having to access a hard disk every time it needs to retrieve or write
information to the database, Adaptive Server Anywhere keeps data it may
need to access again in the computer’s memory, where access is much
quicker. The area of memory set aside for this information is called a cache.
check constraint
A check constraint allows specified conditions on a column or set of columns
in a table to be verified.
client
Client is a widely-used term with several meanings. It refers to the user’s side
of a client/server arrangement: for example, an application that addresses a
database, typically held elsewhere on a network, is called a client
application.
client/server
A software architecture where one application (the client) obtains
information from and sends information to another application (the server).
In a database context, the server is a database server, and the client is a
database client application. The two applications often reside on different
computers on a local area network.
column
All data in relational databases is held in tables, composed of rows and
columns. Each column holds a particular type of information.
command file
A text file containing SQL statements. Command files can be written
manually or generated automatically by database utilities. The
DBUNLOAD utility, for example, creates a command file consisting of the
SQL statements necessary to recreate a given database.
compress The Compression utility reads the given database file and creates a
compressed database file. Compressed databases are usually 40 to 60 per
cent of their original size.
compressed database file
A database file that has been compressed to a smaller physical size using the
database compression utility (dbshrink). To make changes to a compressed
database file, you must use an associated write file. Compressed database
files can be re-expanded into normal database files using the Uncompression
utility (dbexpand).
conflict trigger
In SQL Remote replication, a trigger that is fired when an update conflict is
detected, before the update is applied. Specifically, conflict triggers are fired
by the failure of values in the VERIFY clause of an UPDATE statement to
match the values in the database before the update. They are fired before
each row is updated.
connected database
A connected database shows its contents as an object tree under the database.
connection
When a client application connects to a database, it specifies several
parameters that govern all aspects of the connection once it is established. A
user ID, a password, and the name of the database to attach to are all
that specify the connection. All exchange of information between the client
application and the database to which it is connected is governed by the
connection.
connection ID A unique number that identifies a given connection between the user and the
database.
You can determine your own connection ID using the following SQL
statement inside that connection:
select connection_property( 'Number' )
connection profile A named set of connection parameters (user name, password, server name,
and so on).
consolidated database
In SQL Remote replication, a database that serves as the "master" database in
the replication setup. The consolidated database contains all of the data to be
replicated, while its remote databases may only contain their own subsets of
the data. In case of conflict or discrepancy, the consolidated database is
considered to have the primary copy of all data.
constraint
When tables and columns are created they may have constraints assigned to
them. A constraint ensures that all entries in the database object to which it
applies satisfy a particular condition. For example, a column may have a
UNIQUE constraint, which requires that all values in the column be
different. A table may have a foreign key constraint, which specifies how the
information in the table relates to that in some other table.
container
In a graphical user interface, a container is an object that contains other
objects. Containers can be expanded by double-clicking them.
data type
Each column in a table is associated with a particular data type. Integers,
character strings, and dates are examples of data types.
database
A relational database is a collection of tables, related by primary and foreign
keys. The tables hold the information in the database, and the tables and keys
together define the structure of the database. A database may be stored in one
or more database files, on one or more devices.
database administrator
The database administrator (DBA) is a person responsible for maintaining
the database. The DBA is generally responsible for all changes to a database
schema, and for managing users and user groups.
The role of database administrator is built in to databases as a user ID. When
a database is initialized, a DBA user ID is created. The DBA user ID has
authority to carry out any activity within the database.
database connection
All exchange of information between client applications and the database
takes place in a particular connection. A valid user ID and password are
required to establish a connection, and the actions that can be carried out
during the connection are defined by the privileges granted to the user ID.
database server
All access to information in a database goes through a database server. The
specific server you are using will depend on your operating system. Requests
for information from a database are sent to the database server, which carries
out the instructions.
database file
A database is held in one or more distinct database files. The user does not
have to be concerned with the organization of a database into files: requests
are issued to the database server about a database, and the server knows in
which file to look for each piece of required information.
Database administrators can create new database files for a database using
the CREATE DBSPACE command.
Each table must be contained in a single database file.
database name
When a database is loaded by a server, it is assigned a database name. Client
applications can connect to a database by specifying its database name.
The default database name is the root of the database file.
database object
A database is made up of tables, indexes, views, procedures, and triggers.
Each of these is a database object.
database owner
The user ID that creates a database is the owner of that database, and has the
authority to carry out any changes to that database. The database owner is
also referred to as the database administrator, or DBA. A database owner can
grant permission to other users to have access to the database and to carry
out different operations on the database, such as creating tables or stored
procedures.
DBA An abbreviation for database administrator, also called the database owner.
When a database is first created it has the single user ID DBA, with password
SQL.
DBA authority
DBA (DataBase Administrator) authority enables a user to carry out any
activity in the database (create tables, change table structures, assign
ownership of new objects, create new users, revoke permissions from users,
and so on).
The DBA user has DBA authority by default.
dbspace
A database can be held in multiple files, called dbspaces. The SQL command
CREATE DBSPACE adds a new file to the database.
Each table, together with its associated indexes, must be contained in a single
database file.
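A sketch of the statements involved (the file and table names here are hypothetical):

```sql
-- Add a second file to the database...
CREATE DBSPACE library AS 'e:\\db\\library.db';

-- ...and place a new table, with its indexes, in that file.
CREATE TABLE book (
    book_id  INTEGER PRIMARY KEY,
    title    CHAR(100)
) IN library;
```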
display properties
Display properties include the color and line style of a statistic in the
performance monitor.
Embedded SQL
The native programming interface for C programs. Adaptive Server
Anywhere embedded SQL is an implementation of the ANSI and IBM
standard.
erase
Erasing a database deletes all tables and data from disk, including the
transaction log that records alterations to the database.
external login By default, Adaptive Server Anywhere uses the names and passwords of its
clients whenever it connects to a remote server on behalf of those clients.
However, this default can be overridden by creating external logins. External
logins are alternate login names and passwords to be used when
communicating with a remote server.
extraction In SQL Remote replication, the act of synchronizing a remote database with
its consolidated database by unloading the appropriate structure and data
from the consolidated database, then reloading it into the remote database.
Extraction uses direct manipulation of ordinary files—it does not use the
SQL Remote message system.
FILE
In SQL Remote replication, a message system that uses shared files for
exchanging replication messages. This is useful for testing and for
installations without an explicit message-transport system (such as MAPI).
foreign key
Tables are related to each other by using foreign keys. A foreign key in one
table (the foreign table) contains a value corresponding to the primary key of
another table (the primary table). This relates the information in the foreign
table to that in the primary table.
foreign key constraint
A foreign key constraint restricts the values for a set of columns to match
the values in a primary key or uniqueness constraint of another table. For
example, a foreign key constraint could be used to ensure that a customer
number in an invoice table corresponds to a customer number in the
customer table. Imposing a foreign key constraint on a set of columns makes
that set the foreign key in a foreign key relationship.
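The invoice/customer example above could be written as follows (all names are illustrative):

```sql
CREATE TABLE customer (
    cust_id  INTEGER PRIMARY KEY,
    name     CHAR(50)
);

-- Each invoice row must refer to an existing customer.
CREATE TABLE invoice (
    invoice_id  INTEGER PRIMARY KEY,
    cust_id     INTEGER NOT NULL,
    FOREIGN KEY ( cust_id ) REFERENCES customer ( cust_id )
);
```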
foreign table
A foreign table is the table containing the foreign key in a foreign key
relationship.
full backup
In a full backup, a copy is made of the entire database file itself, and
optionally of the transaction log. A full backup contains all the information
in the database.
function
Also called a "user-defined function", this is a type of procedure that returns
a single value to the calling environment. A function can be used, subject to
permissions, in any place that a built-in non-aggregate function is used.
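A minimal sketch of a user-defined function (the function, table, and column names are hypothetical):

```sql
CREATE FUNCTION full_name( first_name CHAR(30), last_name CHAR(30) )
RETURNS CHAR(61)
BEGIN
    -- Concatenate the two parts with a separating space.
    RETURN ( first_name || ' ' || last_name );
END;

-- Used wherever a built-in non-aggregate function could appear:
SELECT full_name( given_name, surname ) FROM employee;
```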
global temporary table
Data in a temporary table is held for a single connection only. Global
temporary table definitions (but not data) are kept in the database until
dropped. Local temporary table definitions and data exist for the duration of
a single connection only.
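For example (the table name is hypothetical), a global temporary table is declared once, and each connection then sees only its own rows:

```sql
-- The definition is permanent; the rows are private to each
-- connection and, here, survive COMMITs within that connection.
CREATE GLOBAL TEMPORARY TABLE session_totals (
    dept_id  INTEGER,
    total    NUMERIC(12,2)
) ON COMMIT PRESERVE ROWS;
```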
grant option When a user is granted the WITH GRANT OPTION permission, they are
given the authority to pass on permissions to other users.
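For instance (the user and table names are hypothetical):

```sql
-- joe may query the employee table, and may pass
-- that SELECT permission on to other users.
GRANT SELECT ON employee TO joe WITH GRANT OPTION;
```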
group
A user group is a database user ID that has been given the permission to have
members. User groups are used to make the assignment of database
permissions simpler. Rather than assign permissions to each user ID, a user
ID is assigned to a particular group, and takes on the permissions assigned to
that group.
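A sketch of the statements involved (the group and user names are hypothetical):

```sql
-- Create a user ID, allow it to have members, and give it permissions.
GRANT CONNECT TO sales IDENTIFIED BY secret;
GRANT GROUP TO sales;
GRANT SELECT ON customer TO sales;

-- alice now inherits the group's SELECT permission on customer.
GRANT MEMBERSHIP IN GROUP sales TO alice;
```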
index
An index on one or more columns of a database table allows fast lookup of
the information in these columns, and so can greatly speed up database
queries. Specifically, indexes assist WHERE clauses in SELECT statements.
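For example (names hypothetical), an index on a frequently searched column:

```sql
-- Speeds up queries such as
--   SELECT * FROM employee WHERE surname = 'Smith'
CREATE INDEX emp_surname ON employee ( surname );
```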
InfoMaker
InfoMaker is a powerful yet easy-to-use reporting and data maintenance tool
that lets you work with data in the Windows environment. With InfoMaker
you can create sophisticated forms, reports, graphs, crosstabs, and tables, as
well as applications that use these as building blocks.
integrated login
The integrated login feature allows you to maintain a single user ID and
password for both database connections and operating system and/or network
logins.
IPX
IPX is a network-level protocol by Novell.
JAR file
A JAR file (Java archive) is a collection of one or more packages.
Java class A Java class is the main structural unit of code in Java. It is a collection of
procedures and variables that have been grouped together because they all
relate to a specific, identifiable category.
jConnect
jConnect is a 100% pure Java implementation of the JavaSoft JDBC
standard. It provides Java developers native database access in multi-tier and
heterogeneous environments.
JDBC
Java Database Connectivity (JDBC) provides a SQL interface for Java
applications: if you want to access relational data from Java, you do so using
JDBC calls.
LAN
See “local area network”.
log files
Adaptive Server Anywhere maintains a set of three log files to ensure that
the data in the database is recoverable in the event of a system or media
failure, and to assist database performance.
MAPI
Microsoft's Messaging Application Programming Interface, a message
system used in several popular e-mail systems such as Microsoft Mail.
message system
In SQL Remote replication, a protocol for exchanging messages between the
consolidated database and a remote database. Adaptive Server Anywhere
includes support for a FILE message system (using shared files) and the
MAPI message system. In most cases, a consolidated database and a remote
database(s) will send and receive messages using the same message system.
message type
In SQL Remote replication, a database object that specifies how remote users
communicate with the publisher of a consolidated database. A consolidated
database may have several message types defined for it; this allows different
remote users to communicate with it using different message systems. A
message type is named after a message system (e.g. MAPI), and includes the
publisher address for that message system (e.g. a valid MAPI address).
messages
Message-based communication between applications or computers does not
require a direct connection. Instead, a message sent by one application can
be received by another application at a later time.
NetBIOS
NetBIOS is a transport-level interface defined by IBM.
NetBEUI
NetBEUI is a transport-level protocol.
NetWare A widely used network operating system by Novell. NetWare generally
employs the IPX protocol, although the TCP/IP protocol may also be used.
network server
A database server that runs on a different PC from the client application. The
server communicates with the client using a particular network protocol.
A network server can support many users connecting from many PCs on the
network.
object tree
The object tree is the hierarchy of objects that Sybase Central can manage.
The top level of the object tree shows all products that your version of
Sybase Central supports. Each product expands to reveal its own sub tree of
objects.
ODBC
The Open Database Connectivity (ODBC) interface, defined by Microsoft
Corporation, is a standard interface to database management systems in the
Windows and Windows NT environments. ODBC is one of several
interfaces supported by Adaptive Server Anywhere.
ODBC Administrator
The ODBC Administrator is a Microsoft program included with Adaptive
Server Anywhere for setting up ODBC data sources.
owner
Each object of a database is owned by the user ID that created it. The owner
of a database object has rights to do anything with that object.
packages A collection of related Java classes. Packages are grouped together into a
JAR file.
passthrough
In SQL Remote replication, a mode by which the publisher of the
consolidated database can directly change remote databases with SQL
statements. Passthrough is set up for specific remote users (you can specify
all remote users, individual users, or those users who subscribe to given
publications). In normal passthrough mode, all database changes made at the
consolidated database are passed through to the selected remote databases. In
"passthrough only" mode, the changes are made at the remote databases, but
not at the consolidated database.
password Whenever a user connects to a database, a password must be specified. The
passwords are stored in the SYS.SYSUSERPERM system table, to which
only the DBA has access.
performance statistics
Values that reflect the performance of the database system with respect to
disk and memory usage. The CURRREAD statistic, for example, represents
the number of file reads issued by the engine which have not yet completed.
permissions
Each user has a set of permissions that govern the actions they may take
while connected to a database. Permissions are assigned by the DBA or by
the owner of a particular database object.
personal database server
A database server that runs on the same PC as the client application. A local
server is typically for a single user on a single PC, but can support several
concurrent connections from that user.
plug-in modules
A module, stored as a file, that adds support for a specific product to Sybase
Central.
Plug-ins are usually installed and registered automatically with Sybase
Central when you install the respective product.
Typically, a plug-in appears as a top-level container in the Sybase Central
main window, using the name of the product itself (for example, Sybase
Adaptive Server Anywhere).
PowerDesigner PowerDesigner is a comprehensive modeling solution that business and
systems analysts, designers, DBAs, and developers can tailor to meet their
specific needs. The flexible analysis and design features of PowerDesigner
allow a structured approach to efficiently create a database or data warehouse
without demanding strict adherence to a specific methodology.
PowerJ
PowerJ is a Java system that allows you to use JavaBeans and ActiveX
components to build, test and deploy business applications with database
connectivity.
primary table
A primary table is the table containing the primary key in a foreign key
relationship.
primary key Each table in a relational database must be assigned a primary key. The
primary key is a column, or set of columns, whose values uniquely identify
every row in the table.
primary key constraint
A primary key constraint identifies one or more columns that uniquely
identify each row in a table. Imposing a primary key constraint on a set of
columns makes that set the primary key for the table. The primary key is
usually the best identifier for a row.
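For example (names hypothetical):

```sql
-- emp_id uniquely identifies each row of the employee table.
CREATE TABLE employee (
    emp_id  INTEGER,
    name    CHAR(50),
    PRIMARY KEY ( emp_id )
);
```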
publication
In SQL Remote replication, a publication is a database object that describes
data to be replicated. A publication consists of articles (tables or subsets of
tables). Periodically, the changes made to each publication in a database are
replicated to all subscribers to that publication as publication updates.
publication update
In SQL Remote replication, a periodic batch of changes made to one or more
publications in one database. A publication update is sent as part of a
replication message to the remote database(s).
publisher
In SQL Remote replication, the single user in a database that can exchange
replication messages with other replicating databases.
remote database
In SQL Remote replication, a database that exchanges replication messages
with a consolidated database. Remote databases may contain all or some of
the data in the consolidated database.
remote permission
In SQL Remote replication, the permission to exchange replication messages
with the publishing database. Granting remote permissions to a user makes
them a remote user. This requires you to specify a message type, an
appropriate remote address, and a replication frequency. In general terms,
remote user can also refer to any user involved in SQL Remote replication
(for example, the consolidated publisher and remote publisher).
referential integrity
The tables of a relational database are related to each other by foreign keys.
Adaptive Server Anywhere provides tools that maintain the referential
integrity of the database: that is, ensure that the relations between the rows in
different tables remain valid.
remote DBA authority
The Message Agent should be run using a user ID with REMOTE DBA
authority, to ensure that actions can be carried out without creating security
loopholes.
remote user
In SQL Remote replication, a user who has been granted remote permissions
in a replication setup. When the remote database is extracted from the
consolidated database, the remote user becomes the publisher of the remote
database, able to exchange publication updates with the consolidated
database. While groups can also be granted remote permissions, note that
users in these "remote groups" do not inherit remote permissions from their
group.
replication
For databases, a process by which the changes to data in one database
(including creation, updating, and deletion of records) are also applied to the
corresponding records in other databases. Adaptive Server Anywhere
supports replication using SQL Remote or Sybase Replication Server.
replication frequency
In SQL Remote replication, a setting for each remote user that determines
how often the publisher’s message agent should send replication messages to
that remote user. The frequency can be specified as on-demand, every given
interval, or at a certain time of day.
replication message
In SQL Remote replication, a discrete communication that is sent from a
publishing database to a subscribing database. Messages can contain a
mixture of publication updates and passthrough statements (manual SQL
statements such as DDL).
row
All data in relational databases is held in tables, composed of rows and
columns. Each row holds a separate occurrence of each column. In a table of
employee information, for example, each row contains information about a
particular employee.
row-level trigger
A trigger that executes before or after each row is modified by the
triggering insert, update, or delete operation.
server In Adaptive Server Anywhere, servers are database servers—the programs
that manage the physical structure of the database and process queries on its
data. Servers can be local servers or network servers.
In Sybase Central, local servers and network servers are both called servers.
service
In the Windows NT operating system, applications set up as NT services can
run even when the user ID starting them logs off the machine.
Running a database server as a service under NT allows databases to keep
running while not tying up the machine on which they are running.
SQL
Structured Query Language (SQL) is the language used to communicate with
databases. SQL is very widely used in database applications, and in order to
ensure compatibility among databases, SQL is the subject of standards set by
several standards bodies.
SQL Remote
An asynchronous message-based replication system for two-way server-to-
laptop, server-to-desktop, and server-to-server replication between databases.
SQL statement
SQL allows several kinds of statement. Some statements modify the data in a
database (commands), others request information from the database
(queries), and others modify the database schema itself.
statement-level trigger
A trigger that executes after the entire triggering statement is completed.
statistic
Values that reflect the performance of the database system with respect to
disk and memory usage. The CURRREAD statistic, for example, represents
the number of file reads issued by the engine which have not yet completed.
stored procedure
Stored procedures are procedures kept in the database itself, which can be
called from client applications. Stored procedures provide a way of providing
uniform access to important functions automatically, as the procedure is held
in the database, not in each client application.
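A minimal sketch (the procedure, table, and column names are hypothetical):

```sql
CREATE PROCEDURE raise_salary( IN emp INTEGER, IN amount NUMERIC(10,2) )
BEGIN
    -- The logic lives in the database, not in each client.
    UPDATE employee
       SET salary = salary + amount
     WHERE emp_id = emp;
END;

-- Any client application can then invoke the same logic:
CALL raise_salary( 105, 500 );
```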
Structured Query Language
See SQL.
subscriber In SQL Remote replication, a remote user who is subscribed to one or more
of a database’s publications.
subscribing database
In a Replication Server or SQL Remote installation, a database that has
subscribed to a replication or publication receives updates of changes to the
data in that replication or publication.
subscription
In SQL Remote replication, a link between a publication and a remote user,
allowing the user to exchange updates on that publication with the
consolidated database. The user’s subscription may include an argument
(value) for the publication’s SUBSCRIBE BY parameter (if any).
synchronization
In SQL Remote replication, the process by which SQL Remote deletes all
existing rows from those tables of a remote database that form part of a
publication, and copies the publication’s entire contents from the
consolidated database to the remote database. Synchronization is performed
during the initial extraction of the remote database from the consolidated
database, and may also be necessary later if a remote database becomes
corrupt or gets out of step with the consolidated database (and cannot be
repaired using passthrough mode). Synchronization can be accomplished by
bulk extraction (the recommended method), by manually loading from files,
or by sending synchronization messages through the message system.
system object
In a database, a table, view, stored procedure, or user-defined data type.
System tables store information about the database itself, while system
views, procedures, and user-defined data types largely support Sybase
Transact-SQL compatibility.
system tables
Every database includes a set of tables called the system tables, which hold
information about the database structure itself: descriptions of the tables,
users and their permissions, and so on.
The system tables are created and maintained automatically by the database
server. They are owned by the special user ID SYS, and cannot be modified
by database users.
system views
Every database includes a set of views, which present the information held
in the system tables in a more easily understood format.
table All data in relational databases is stored in tables. Each table consists of rows
and columns. Each column carries a particular kind of information (a phone
number, a name, and so on), while each row specifies a particular entry. Each
row in a relational database table must be uniquely identifiable by a primary
key.
TCP/IP
Transmission Control Protocol/Internet Protocol (TCP/IP) is a network
protocol supported by Adaptive Server Anywhere.
template Templates, located in the right panel of Sybase Central, are special icons that
perform a specific task. Most templates help you create objects of certain
types. For example, the Add Index template opens a wizard that helps you
create an index.
temporary table
Data in a temporary table is held for a single connection only. Global
temporary table definitions (but not data) are kept in the database until
dropped. Local temporary table definitions and data exist for the duration of
a single connection only.
transaction
A transaction is a logical unit of work that should be processed in its entirety
by the database (though not necessarily at once) or not at all. Adaptive
Server Anywhere supports transaction processing, with locking features built
in to allow concurrent transactions to access the database without corrupting
the data. Transactions begin following a COMMIT or ROLLBACK
statement and end either with a COMMIT statement, which makes all the
changes to the database required by the transaction permanent, or a
ROLLBACK statement, which undoes all the changes made by the
transaction.
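As an illustration (the table is hypothetical), a funds transfer is a classic transaction: both updates must succeed, or neither:

```sql
UPDATE account SET balance = balance - 100 WHERE acct_id = 1;
UPDATE account SET balance = balance + 100 WHERE acct_id = 2;
COMMIT;      -- make both changes permanent
-- ROLLBACK; -- would instead undo both changes
```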
transaction log
A log storing all changes made to a database, in the order in which they are
made. In the event of a media failure on a database file, the transaction log is
essential for database recovery. The transaction log should therefore be kept
on a different device from the database files for optimal security.
transaction log mirror
An identical copy of the transaction log file, maintained at the same time.
Every time a database change is written to the transaction log file, it is also
written to the transaction log mirror file.
A mirror file should be kept on a separate device from the transaction log, so
that if either device fails, the other copy of the log keeps the data safe for
recovery.
trigger
A trigger is a procedure stored in the database that is executed automatically
by the database server whenever a particular action occurs, such as a row
being updated. Triggers are used to enforce complex forms of referential
integrity, or to log activity on database tables.
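A sketch of a trigger that logs activity on a table (all names are hypothetical):

```sql
-- Record each salary change in an audit table.
CREATE TRIGGER log_salary AFTER UPDATE OF salary ON employee
REFERENCING OLD AS old_row NEW AS new_row
FOR EACH ROW
BEGIN
    INSERT INTO salary_log ( emp_id, old_salary, new_salary )
    VALUES ( new_row.emp_id, old_row.salary, new_row.salary );
END;
```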
uncompress With the Uncompression utility, you can expand a compressed database file
created by the Compression utility. The Uncompression utility reads the
compressed file and creates an uncompressed database file.
The Uncompression utility uncompresses only the main database file; it does
not uncompress additional dbspace files.
unique constraint
A unique constraint identifies one or more columns that uniquely identify
each row in the table. A table may have several unique constraints.
unload
Unloading a database dumps the structure and/or data of the database to text
files (command files for the structure, ASCII comma-delimited files for the
data). This may be useful for creating extractions, creating a backup of your
database, or building new copies of your database with the same or slightly
modified structure. You can also unload the data (but not the structure) of a
particular table.
updates
In replication, each set of changes sent from one database to another is an
update to a publication or replication.
user-defined data type
A named combination of base data type, default value, check condition, and
nullability. Defining similar columns using the same user-defined data type
encourages consistency throughout the database.
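For example (the type name is hypothetical):

```sql
-- Define once, then reuse for every address column.
CREATE DATATYPE street_address CHAR(35) NOT NULL DEFAULT '';

CREATE TABLE office (
    office_id  INTEGER PRIMARY KEY,
    address    street_address
);
```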
user account
Every connection with a database requires a user account. The permissions
that a user has are tied to their user account. A user account consists of a user
ID and password.
user ID
A string of characters that identifies the user when connecting to a particular
database. The user ID, together with a password, constitutes a user account.
validate When the information in a database, or a database table, is checked for
integrity, it is validated.
view
A view is a computed table. Every time a user uses a view of a particular
table, or combination of tables, it is recomputed from the information stored
in those tables. Views can be useful for security purposes, and to tailor the
appearance of database information to make data access straightforward. As
a permanent part of the database schema, a view is a database object.
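For example (names hypothetical), a view can expose a restricted subset of a table for security purposes:

```sql
-- Users granted access to this view never see salary data.
CREATE VIEW emp_directory AS
    SELECT emp_id, name, phone
      FROM employee;
```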
write file
If a database is used with a write file, all changes made to the database do not
modify the database itself, but instead are made to the write file. Write files
are useful in applications development, so the developer can have access to
the database without interfering with it. Also, write files are used in
conjunction with compressed databases and other read-only databases.
Index

&
  UNIX command line, 18
* (asterisk)
  SELECT statement, 149
*= (asterisk equals)
  Transact-SQL outer join operator, 214
=* (equals asterisk)
  Transact-SQL outer join operator, 214
>>
  Java methods, 521
@@identity global variable, 948

A
access
  security features, 750
Access
  ODBC configuration for, 51
  remote data access, 913
access modifiers
  Java, 516
access plans
  cache size, effect of, 840
  reading, 817
accessing
  Connect dialog, 36
actions
  CASCADE, 365
  RESTRICT, 365
  SET DEFAULT, 365
  SET NULL, 365
active connection
  Connection window, 1029
Adaptive Server Enterprise
  compatibility, 931
adding
  column data, INSERT statement, 249
  external logins, 877
  JAR files, 546
  Java classes, 545
  remote procedures, 888
  statistics to the Performance Monitor, 809
administrator role
  Adaptive Server Enterprise, 937
ADO
  connecting, 59
ADO applications
  SQL statements, 258
Agent connection parameter
  about, 60
aggregate functions
  about, 170
  Adaptive Server Enterprise compatibility, 187
  ALL keyword, 170
  data types, 171
  DISTINCT keyword and, 170
  GROUP BY clause, 174
  Java columns, 559
  NULL, 172
attributes
  choosing, 332
audit trail, 671
auditing
  about, 756
  comments, 758
  dblog utility, 759
  dbtran utility, 759
  dbwrite utility, 759
  example, 758
  retrieving audit information, 757
  security features, 750, 756
  turning on, 756
authenticated server
  deploying, 863
AUTO_COMMIT option
  Interactive SQL, 371
  settings, 371
autocommit
  transactions, 371
Autocommit
  ODBC configuration, 51
autocommit mode
  JDBC, 589
  performance, 783
  transactions, 277
autoexec.ncf
  automatic loading, 6
autoincrement
  IDENTITY column, 947
AUTOINCREMENT
  default, 352
  negative numbers, 353
  signed data types, 353
  UltraLite applications, 353
  when to use, 414
AUTOMATIC_TIMESTAMP option, 943
  Open Client, 974
automation
  administration tasks, 481
  generating unique keys, 414
AutoStop connection parameter
  ODBC configuration, 53
availability
  database server, 18
  high, 667
AVG function, 170

B
B+ trees
  definition of, 323
  indexes, 795
  Java, 569
background
  running the database server, 18
  SQLCA.lock, 377
backup plans
  about, 643
backups
  concepts, 633
  databases not involved in replication, 639
  dbltm and, 640
  dbremote and, 640
  dbsync and, 640
  designing procedures, 636
  external, 631
  for remote databases, 642
  full, 656
  internal, 631
  live, 667
  MobiLink and, 640
  MobiLink consolidated databases, 639
  offline, 631
  online, 631
  planning, 637
  replication, 1001, 1002
  Replication Agent and, 640
  scheduling, 637
  SQL Remote and, 640
  SQL statement, 631
  strategies, 637
  Sybase Central, 631
  types of, 631
  unfinished, 664
  validating, 644
base table, 121
batch mode
  for LTM, 998
batches
  about, 423, 444
  control statements, 444
  data definition statements, 444
  SQL statements allowed, 474
  Transact-SQL overview, 955
  writing, 955
BEGIN TRANSACTION statement
  remote data access, 890
BETWEEN keyword
  range queries, 160
bi-directional replication, 416
creating databases
security, 760
Windows CE, 296
cross joins, 205
and self-joins, 205
CT-library, 964
current date and time defaults, 351
cursor positioning
troubleshooting, 270
cursors, 274
about, 263
and LOOP statement, 458
availability, 267
canceling, 274
choosing a type, 267
connection limit, 745
describing, 275
dynamic scroll, 266
DYNAMIC SCROLL, 270
fat, 271
fetching, 269
fetching multiple rows, 271
fetching rows, 271
in procedures, 458
insensitive, 266
introduction, 263
isolation level, 270
no scroll, 266
ODBC configuration, 51
on SELECT statements, 458
performance, 272
platforms, 267
positioning, 269
prepared statements, 265
procedures and triggers, 458
read only, 266
result sets, 263
savepoints, 278
scroll, 266
scrollable, 273
step-by-step, 264
transactions, 278
unique, 266
updating and deleting, 273
uses of, 264
using, 269
custom collations
about, 305
creating, 305
creating databases, 317
D
daemon
database server as, 18
daemon database server, 18
data
case sensitivity, 944
consistency, 374
duplicated, 346
exporting, 675, 678, 687
formats, 681
importing, 675, 678, 683, 697
integrity and correctness, 382
invalid, 346
viewing, 127
data consistency
assuring using locks, 401
correctness, 382
dirty reads, 374, 386, 404
ISO SQL/92 standard, 374
phantom rows, 374, 393, 396, 405, 412
repeatable reads, 374, 389, 404, 405
two-phase locking, 410
data definition
concurrency, 414
data definition language
about, 108
data definition statements
and concurrency, 415
data entry
and isolation levels, 383
data integrity
about, 345
column constraints, 343
constraints, 348, 355
effects of unserializable schedules on, 383
overview, 346
rules in the system tables, 367
data link layer
about, 84
troubleshooting, 99
data model normalization, 334
data modification
permissions, 248
data organization
physical, 823
data source description
ODBC, 50
data source name
ODBC, 50
dbjava7.dll
deploying, 863
dblgen7.dll, 855, 860, 866
deploying, 863
dblib7.dll, 860
DB-Library, 964
dblog utility
auditing, 759
transaction log mirrors, 674
dbmapi.dll, 866
DBN connection parameter
about, 60
dbo user ID
Adaptive Server Enterprise, 937
dbodbc7.dll, 855
dbodtr7.dll, 855
dbping utility
using, 71
dbremote.exe, 866
DBS connection parameter
about, 60
dbserv7.dll
deploying, 863
dbsmtp.dll, 866
dbspaces
creating, 766
managing, 936
maximum 12, 766
dbsrv7.exe
deploying, 863
dbstop utility
permissions, 11
using, 15
dbtool7.dll, 866
dbtran utility
auditing, 757, 759
transaction logs, 671
uncommitted changes, 671
DBUNLOAD
replication, 679
dbunload utility, 687, 692
dbupgrad utility
Java, 540, 542
dbvalid utility
using, 656
dbvim.dll, 866
dbwrite utility
auditing, 759
dbwtsp7.dll, 866
DDL
about, 108
deadlock
about, 380
transaction blocking, 381
deadlocks
reasons for, 381
Debug connection parameter
about, 60
debugging
about, 607
breakpoints, 619
compiling classes, 617
connecting, 610
event handlers, 490, 496
features, 608
getting started, 610
introduction, 608
Java, 617
local variables, 616, 620
permissions, 609
requirements, 609
stored procedures, 614
tutorial, 614, 617
DebugScript class, 624
decision support
and isolation levels, 383
DECLARE statement
compound statements, 447
procedures, 458, 462
default character set
about, 290
UNIX, 290
Windows, 290
DefaultCollation property
about, 772
defaults
AUTOINCREMENT, 352
column, 350
connection parameters, 45
constant expressions, 354
creating, 350
creating in Sybase Central, 351
current date and time, 351
INSERT statement and, 249
Java, 549
NULL, 353
string and number, 353
Transact-SQL, 936
user ID, 352
using in domains, 360
with transactions and locks, 414
optimizations
using indexes, 412
optimizer
about, 804, 813
assumptions, 819
predicate analysis, 828
role of, 814
selectivity estimation, 834
semantic subquery transformations, 830
optional foreign keys, 364
optional relationships, 326
options
BLOCKING, 380
DEFAULTS, 699
ISOLATION_LEVEL, 376
NULLS, 689, 707
Open Client, 974
setting database options, 116
setting user and group options, 719
startup settings, 974
or (|) bitwise operator
using, 167
Oracle and remote access, 909
oraodbc server class, 909
ORDER BY clause
GROUP BY, 184
Java columns, 559
limiting results, 183
performance, 801
ordering
Java objects, 558
ordering of transactions, 382
organization
of data, physical, 823
orphan and referential integrity, 407
OSI Reference Model
protocol stacks, 82
OUT parameters
Java, 564
outer joins
FROM clause, 199
join conditions, 201
restrictions, 215
Transact-SQL, 214, 952
Transact-SQL, restrictions on, 215
Transact-SQL, views and, 216
outer reference
definition, 180
output redirection, 705
OUTPUT statement, 687, 705
owners
about, 717
P
packages
installing, 546
Java, 515, 524
jConnect, 599
locating, 1031
packets
network communications, 84
page size
performance, 780
switch, 11
parentheses
in arithmetic statements, 154
UNION operators, 185
password
default, 716
Password connection parameter
about, 60
passwords
case sensitivity, 945
changing, 721
length, 751
Lotus Notes, 914
ODBC configuration, 52
security features, 754
security tips, 751
utility database, 774
performance
about, 777
autocommit, 783
bulk loading, 695
bulk operations, 784
cache, 778
cursors, 267, 272
database design, 779
disk fragmentation, 768
file fragmentation, 783
file management, 781
improving versus locks, 385
indexes, 141, 780
Java values, 566
JDBC, 595
joins and, 209
keys, 789
LTM, 998
monitoring, 807, 811
PowerBuilder
remote data access, 869
predicates
about, 158
analysis of, 828
selectivity estimation, 834
prefetch, 271, 272
PREFETCH option, 272
PREPARE statement
remote data access, 890
prepared statements
bind parameters, 260
connection limit, 745
cursors, 265
dropping, 260
Java objects, 554
JDBC, 595
using, 260
PreparedStatement class
setObject method, 554
PreparedStatement interface
about, 595
prepareStatement method, 262
preparing
to commit, 921
preserved table
outer joins, 214
primary key, 127
creating, 127
primary keys, 128
AUTOINCREMENT, 352
concurrency, 414
creating, 128
entity integrity, 363
generation, 414
Java columns, 559
modifying, 127, 128
performance, 789
transaction log, 647
primary site
adding Replication Server information, 983
Replication Server, 978
uses of LTM at, 980
primary sites
creating, 981
Replication Server, 979
print
Java, 522
println method
Java, 522
private
Java access, 516
procedure language
overview, 954
procedure not found error
Java methods, 593
procedures
about, 423, 424
adding remote procedures, 888
altering, 428
and Adaptive Server Anywhere LTM, 994
benefits of, 425
calling, 429
command delimiter, 472
copying, 429
creating, 426
cursors, 458
dates, 473
default error handling, 461
deleting, 430
deleting remote procedures, 889
error handling, 461, 960, 961
exception handlers, 466
EXECUTE IMMEDIATE statement, 470
external functions, 475
multiple result sets from, 456
parameters, 450, 451
permissions, 726
permissions for creating, 717
permissions for result sets, 455
replicating, 995, 996
result sets, 431, 455
return values, 960
returning results, 453
returning results from, 430
savepoints, 471
security, 425, 741
SQL statements allowed, 449
structure, 449
table names, 472
times, 473
tips, 472
Transact-SQL, 957
Transact-SQL overview, 954
translation, 957
using, 426
using cursors in, 458
variable result sets from, 457
verifying input, 473
warnings, 465
writing, 472
return values
procedures, 960
REVOKE statement
and concurrency, 415
Transact-SQL, 939
revoking
permissions, 729
remote permissions, 728
revoking group membership, 734
right-outer joins
FROM clause, 199
roles
Adaptive Server Enterprise, 937
definition of, 325
ROLLBACK
TO SAVEPOINT statement, 373
rollback log
about, 652
savepoints, 373
ROLLBACK statement, 373
compound statements, 447
cursors, 278
log, 652
procedures and triggers, 471
transactions, 371
triggers, 955
ROLLBACK TO SAVEPOINT statement
cursors, 278
rolling back transactions, 371
Row variables window, 1031
rows
copying with INSERT, 251
selecting, 158
RS parameter, 988
RS_pw parameter, 988
RS_source_db parameter, 988
RS_source_ds parameter, 988
RS_user parameter, 988
rssetup.sql command file, 983, 986
rules
Transact-SQL, 936
runtime classes
contents, 539
installing, 539
Java, 519
runtime environment
Java, 537
runtime Java classes, 519
S
SA_DEBUG group
debugger, 609
sample database
about, xviii
Java, 536
SAVEPOINT statement
and transactions, 373
savepoints
cursors, 278
nesting and naming, 373
procedures and triggers, 471
within transactions, 373
saving transaction results, 371
scalar aggregates, 171
scan factor, 834
scan_retry parameter, 988
schedules, 382
about, 484
defined, 482
definition of serializable, 382
effects of serializability, 383
effects of unserializable, 383
internals, 492
serializable versus early release of locks, 412
two-phase locking, 410
scheduling
about, 481, 482, 484
backups, 637, 643
scheduling of transactions, 382
schema
exporting, 704
scope
Java, 516
scripts
DebugScript class, 624
IDebugAPI interface, 624
IDebugWindow interface, 624
writing, 624
scroll cursors, 266
scrollable cursors, 273
security
about, 715, 749
auditing, 756, 757
creating databases, 760
database server, 751, 760
deleting databases, 760
encryption, 761
event example, 488
integrated logins, 77, 78, 753
deleting, 442
error handling, 461
exception handlers, 466
executing, 440
execution permissions, 442
permissions, 717, 728
permissions for creating, 717
recursion, 955
ROLLBACK statement, 955
savepoints, 471
SQL statements allowed in, 449
statement-level, 954
structure, 449
times, 473
Transact-SQL, 945, 954
using, 437
warnings, 465
troubleshooting
backups, 664
common problems, 101
cursor positioning, 270
database connections, 63
deadlocks, 381
debugging classes, 617
protocols, 98
remote data access, 896
selectivity estimates, 838
server address, 972
server startup, 29, 30
wiring problems, 101
TRUNCATE TABLE statement
about, 256
try block
Java, 517
tsequal function, 947
TSQL_HEX_CONSTANT option
Open Client, 974
TSQL_VARIABLES option
Open Client, 974
tutorials
implications of locking, 396
isolation levels, 386
non-repeatable reads, 386, 389
phantom rows, 393, 396
two-phase commit
three-tier computing, 920, 921
two-phase locking, 410
protocol, 411
two-phase locking theorem, 411
type
objects, 510
U
UID connection parameter
about, 60
UNC connection parameter
about, 60
unchained mode, 277
Unconditional connection parameter
about, 60
Unicode character sets
about, 301
UNION operation
about, 185
unique columns
Java columns, 559
unique cursors, 266
unique keys
generating and concurrency, 414
unique results
limiting, 155
UNIX
database server, 6
default character set, 290
deployment issues, 848
directory structure, 848
ODBC support, 55
TCP/IP, 91
threaded applications, 849
unknown values
about, 165
UNLOAD statement, 687
security, 760
UNLOAD table statement, 692
UNLOAD TABLE statement, 684, 687
security, 760
unloading, 676, 678
databases, 679
unloading and reloading
databases, 709, 712
unloading data
security, 760
unserializable transaction scheduling
effects of, 383
UPDATE permissions, 722
UPDATE statement, 994
Java, 554
locking during, 408
positioned, 273
set methods, 555
using, 253
using join operations, 254
upgrading databases, 693
Z
-z option
database server, 30
zip files
Java, 515