
Client Server Architecture

Department of Information Technology

CLIENT/SERVER BUILDING BLOCKS


CLIENT/SERVER: ONE SIZE FITS ALL:
The client/server model is deceptively simple, and it works well with today's
technologies. It is ideally suited for dealing with the needs of a post-scarcity computing
world, where client/server becomes the ultimate medium for sharing and collaborating.
This model is really a game of putting things together with building blocks. The
wide spectrum of client/server needs is met - from the tiny to the intergalactic - with just
three basic building blocks: a client, a server, and the slash (/) that ties the client to the
server (see below Figure). Kids of all ages will love this game. It should help you identify
some durable structures in the design of client/server systems.

Figure: The Three Basic Building Blocks of Client/Server.


The following are the various building-block arrangements that are used in four
situations:

Client/server for tiny shops and nomadic tribes is a building-block implementation that runs the client, the middleware software, and most of the business services on the same machine. It is the suggested implementation for the one-person shops, home offices, and mobile users with well-endowed laptops. This is a new opportunity area for client/server technology.

Client/server for small shops and departments is the classic Ethernet client/single-server building-block implementation. It is used in small shops, departments, and branch offices. This is the predominant form of client/server today.

Client/server for intergalactic enterprises is the multi-server building-block implementation of client/server. The servers present a single system image to the client. They can be spread throughout the enterprise, but they can be made to look like they're part of the local desktop. This implementation meets the initial needs of intergalactic client/server computing.


Client/server for a post-scarcity world transforms every machine in the world into both a client and a server. Personal agents on every machine will handle all
the negotiations with their peer agents anywhere in the universe. This dream is
almost within reach.

Many similarities can be discovered in these four arrangements. This is because they all use the same type of software, middleware, and communications infrastructure.
It is easy to run the client and server portion of an application on the same
machine. Vendors can easily package single-user versions of a client/server application
(see above figure). For example, a client/server application for a dentist's office can be
sold in a single-user package for offices consisting of a single dentist and in a multi user
package for offices with many dentists. The same client/server application covers both
cases. The only caveat is that you need to use an operating system that is robust enough
to run both the client and server sides of the application.
Figure: Client/Server for Tiny Shops and Nomadic Users.

The example of the tiny dentist's office also works for the tiny in-home business office and the mobile user on the road. In all cases, the business-critical client/server application runs on one machine and does some occasional communications with outside
servers to exchange data, refresh a database, and send or receive mail and faxes. For
example, the one-person dentist's office may need to communicate with outside servers
such as insurance company billing computers. And, of course, everyone needs to be on
the Internet, even dentists.
The client/server architecture is particularly well suited for LAN-based single-server establishments. So, it's no wonder that they account for around 80% of today's client/server installations. This is the "archetypical" Ethernet model of client/server. It consists of multiple clients talking to a local server (see above figure). This is the model used in small businesses (for example, a multi-user dentist office) and by the departments of large corporations (for example, the branch offices of a bank).
The single-server nature of the model tends to keep the middleware simple. The
client only needs to look into a configuration file to find its server's name. Security is
implemented at the machine level and kept quite simple. The network is usually relatively
easy to administer; it's a part-time job for a member of the group. There are no complex
interactions between servers, so it is easy to identify failures: they're either on the client or on the local server.
Braver souls may be using their server to interact in a very loosely-coupled way
with some enterprise server. For example, data (such as a price list) may be downloaded
once a day to refresh the local server. Or inventory data may be uploaded to an enterprise
server. Fax and mail can be sent or received any time through the mail server gateway.
Typically, the software that interacts with remote servers will reside on the departmental
server.


Figure: Client/Server for Small Shops and Departments.


Departmental servers will continue to be popular, even in large enterprises, because they provide a tremendous amount of user autonomy and control. Users feel that it is their server, and they can do anything they please with it. A departmental server's applications typically address the specific needs of the local clients first, which makes
users very happy. With fiber optics and high-speed ATM connections, it will be hard to
detect a performance difference between a local departmental server and an enterprise
server a continent away. However, the psychology (and politics) of ownership will always
provide a powerful motivator for holding on to that local server.
In summary, this implementation of client/server "uses our three building blocks
to create the classical single-server model of client/server that is so predominant in the
Ethernet era. This model works very well in small businesses and departments that
depend on single servers or on very loosely-coupled multi server arrangements.
The client/server enterprise model addresses the needs of establishments with a
mix of heterogeneous servers. This is an area that's getting a lot of industry attention as
solutions move from a few large computers to multiple servers that live on the Internet,
Intranets, and corporate backbone networks (see below figure). One of the great things
about the client/server model is that it is upwardly scalable. When more processing power
is needed for various intergalactic functions, more servers can be added (thus creating a
pool of servers), or the existing server machine can be traded up for the latest generation
of super server machine.
The servers are partitioned based on the function they provide, the resource they
control, or the database they own. In addition, we may choose to replicate servers for
fault tolerance or to boost an application's performance. There can be as many server
combinations as your budget will tolerate. Multi server capability, when properly used,
can provide an awesome amount of compute power and flexibility, in many cases rivaling
that of mainframes.


Figure: Client/Server for Intergalactic Enterprises.


To exploit the full power of multi-servers, low-cost, high-speed bandwidth and an awesome amount of middleware features (including network directory services, network security, remote procedure calls, and network time services) are needed. Middleware
creates a common view of all the services on the network called a "single system image."
Good software architecture for intergalactic enterprise client/server implementations is all about creating system "ensembles" out of modular building blocks. The creative part is finding the right way to partition the work among the servers. For example, the work may be partitioned using distributed objects. Servers should be designed so that they can delegate work to their fellow servers. A complex request may involve a task force of servers working together on the request. Preferably, the client should not be made aware of this behind-the-scenes collaboration.
Intergalactic client/server is the driving force behind middleware standards such
as distributed objects and the Internet. We're all looking for that magic bullet that will
make the distributed multivendor world as integrated as single-vendor mainframes. Tools
for creating, deploying, and managing scalable client/server applications are getting a lot
of attention. There are fortunes to be made in intergalactic client/server because nobody
has yet put all of the pieces back together.
Client/Server for a Post-Scarcity World
New systems can be created on a client/server platform when memory and hardware become incredibly affordable. Every machine is both a client and a full-function server (see below figure). This plentiful environment is the post-scarcity world. The typical post-scarcity machine is a $2000 cellular notebook powered by a 200 MHz Pentium and loaded with 100 MBytes of RAM and 100 GBytes or more of disk space.
And, of course, this machine runs all the middleware that vendors will be able to dream
of over the next few years.

Figure: Client/Server for a Post-Scarcity World


What do we do with all this power other than run middleware? What happens when every machine in the world becomes a universal server and client? Because every machine is a full-function server, we should assume it will run, at a minimum, a file server, database server, workflow agent, TP Monitor, and Web server, all connected via an ORB. This is in addition to all the client software and middleware.
The three building blocks of client/server are the client, the server, and the middleware slash (/) that ties them together. The figure below, Client/Server Software Infrastructure, peels the next layer off the onion and provides more detail about what goes into each of the building blocks.

The client building block runs the client side of the application. It runs on an
Operating System (OS) that provides a Graphical User Interface (GUI) or an
Object Oriented User Interface (OOUI) and that can access distributed services,
wherever they may be. The operating system most often passes the buck to the
middleware building block and lets it handle the non-local services. The client
also runs a component of the Distributed System Management (DSM) element.
This could be anything from a simple agent on a managed PC to the entire front end of the DSM application on a managing station.

The server building block runs the server side of the application. The server
application typically runs on top of some shrink-wrapped server software
package. The five contending server platforms for creating the next generation of
client/server applications are SQL database servers, TP Monitors, groupware
servers, object servers, and the Web. The server side depends on the operating
system to interface with the middleware building block that brings in the requests
for service. The server also runs a DSM component. This could be anything from
a simple agent on a managed PC to the entire back-end of the DSM application


(for example, it could provide a shared object database for storing system
management information).

The middleware building block runs on both the client and server sides of an
application. We broke this building block into three categories: transport stacks,
network operating systems (NOSs), and service-specific middleware. Middleware
is the nervous system of the client/server infrastructure. Like the other two
building blocks, the middleware also has a DSM software component.

The Distributed System Management application runs on every node in a client/server network. A managing workstation collects information from all its agents on
the network and displays it graphically. The managing workstation can also instruct its
agents to perform actions on its behalf. Think of management as running an autonomous
"network within a network." It is the "Big Brother" of the client/server world, but life is
impossible without it.

Figure: The Client/Server Software Infrastructure


Server-to-Server:
Middleware does not include the software that provides the actual service. It does, however, include the software that is used to coordinate inter-server interactions (see below figure). Server-to-server interactions are usually client/server in nature: servers are clients to other servers. However, some server-to-server interactions require specialized server middleware. For example, a two-phase commit protocol may be used to coordinate a transaction that executes on multiple servers. Servers on a mail backbone will use special server-to-server middleware for doing store-and-forward type messaging. But most modern software (even operating system kernels) follows the client/server paradigm.


Figure: Server-to-server Middleware Infrastructure.

MIDDLEWARE:
The following diagram shows the basic view of how a client workstation interacts
with a database server through a network.

FIGURE: A simplistic view of a client/server system.


API - Application Programming Interface
IPC - Interprocess Communications
NPS - Network Protocol Stack
When an application at the client end requires data from the server, a transaction
is sent from the application logic via SQL to the network. This transaction is passed
through an application programming interface, an interprocess communications protocol,
and a network protocol stack to the server. The application programming interface and
the interprocess communications portions of the process can be made up of middleware.
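To make the layering concrete, here is a minimal sketch in C (all function names are hypothetical, invented for illustration) of how a request descends from application logic through the API and IPC layers to the network protocol stack:

    #include <stdio.h>
    #include <string.h>

    /* NPS layer: in a real system this would be TCP/IP, SNA, or IPX. */
    static void nps_transmit(const char *frame) {
        printf("[NPS] transmitting %zu bytes to the server\n", strlen(frame));
    }

    /* IPC layer: frames the message and hands it to the protocol stack. */
    static void ipc_send(const char *msg) {
        char frame[256];
        snprintf(frame, sizeof(frame), "MSG:%s", msg);
        nps_transmit(frame);
    }

    /* API layer: the only function the application ever calls. */
    static void api_execute_sql(const char *sql) {
        ipc_send(sql);
    }

    int main(void) {
        api_execute_sql("SELECT balance FROM accounts WHERE id = 42");
        return 0;
    }

The application only ever sees the top-level call; everything below it can be swapped (TCP/IP for SNA, say) without touching application code, which is precisely the role middleware plays.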

Building the Blueprint:


The following diagram shows where middleware sits in the client/server system.
On the left side of the diagram, a business application communicates with middleware, which in turn communicates with a business server on the right side of the diagram. These
systems are physically separate and may be located anywhere.

API - Application Programming Interface
IPC - Interprocess Communications
NPS - Network Protocol Stack
FIGURE: The position of middleware in a client-server system.
In some cases, the middleware can also provide the database language vocabulary
as well. Examples of this type of middleware include Rumba Database Access and
ODBC products from Microsoft.
High-Level Middleware Communications Types:
Middleware products, at a high level, use one of three communications types:

Synchronous transaction-oriented communications

Asynchronous batch-oriented communications or message-oriented middleware (MOM)


Application-to-server communications

Synchronous Transaction-Oriented Communications:


Middleware that uses synchronous transaction-oriented communications involves
back and forth exchanges of information between two or more programs. For example, a
PC running ODBC retrieves host system-based information requested by the PC
application. The synchronized aspect of this communications style demands that every
program perform its task correctly; otherwise, the transaction will not be completed.
Products of this type include the following:
Products that provide APIs that allow PC programs to communicate with an AS/400 using APPC fit in this type because APPC is synchronous transaction-oriented.
Products that support TCP/IP sockets so that PC programs can communicate with
other sockets-based systems are synchronous transaction-oriented as well. This
approach is similar to the APPC approach for the AS/400. Applications that support Winsock for Microsoft Windows work in this way.
Products that provide APIs, high-level language APIs (HLLAPIs), or extended
HLLAPIs (EHLLAPIs) that let PC programs communicate with mainframe and
midrange programs through an intermediate terminal emulation program are
included in the synchronous transaction-oriented group. This kind of product is
the origin of the "screen scrape" programs. By using this technology, your
application program communicates with a terminal emulation program package
through APIs to sign on to the host computer, and then interact with the host
application as if the PC program were a display session user. Examples of these
packages include Rumba Office from Wall Data, products from Netsoft such as
Netsoft Elite, and the Attachmate series.
Microsoft Windows-oriented communications products that support the Windows
Dynamic Data Exchange (DDE) or Object Linking and Embedding (OLE)
facilities to create links between host-based information (typically again accessed
through Windows-based terminal emulation sessions) and native Windows
programs are also synchronous transaction-oriented products. With DDE and
OLE, you can create a hot link between information on a host application screen
(through terminal emulation software) and a spreadsheet handled by a native
Windows application (such as Excel or Lotus 1-2-3). Note that both applications
involved in DDE or OLE conversation must support the DDE or OLE formats.
Both Rumba and Netsoft support these formats for the AS/400.
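As a minimal sketch of the synchronous, sockets-based style in the list above (POSIX sockets are shown; Winsock code is nearly identical apart from its startup and cleanup calls, and the address, port, and request format here are hypothetical), note how the client blocks on recv() until the host answers; nothing else proceeds until the exchange completes:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void) {
        int sock = socket(AF_INET, SOCK_STREAM, 0);   /* TCP: connection-oriented */
        struct sockaddr_in host = {0};
        host.sin_family = AF_INET;
        host.sin_port = htons(5000);                  /* hypothetical service port */
        inet_pton(AF_INET, "192.168.1.10", &host.sin_addr);

        if (connect(sock, (struct sockaddr *)&host, sizeof(host)) < 0) {
            perror("connect");
            return 1;
        }

        const char *request = "GET CUSTOMER 1001\n";
        send(sock, request, strlen(request), 0);      /* send the transaction... */

        char reply[512];
        ssize_t n = recv(sock, reply, sizeof(reply) - 1, 0); /* ...block for the answer */
        if (n > 0) {
            reply[n] = '\0';
            printf("host replied: %s", reply);
        }
        close(sock);
        return 0;
    }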


Asynchronous Batch-Oriented Communications:


In the asynchronous batch-oriented communications type, messages are sent
either one at a time or in batches with no expectation of an immediate response (or
sometimes of any response at all). For example, a server database update program uses a
data queue facility to send subsets of updated records to PC programs that then update the
local client-based database. Or a PC program uses a file transfer API to send sales order
records to an AS/400 program as they're entered. This method is commonly called
message-oriented middleware and is covered in more detail in a later section of this
chapter.
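A minimal sketch of this batch style in C (the file name and record layout are hypothetical): the order-entry code appends records to a local spool and returns immediately; a separate transfer job drains the spool to the host later, and no response is expected:

    #include <stdio.h>

    /* Append one sales order to the outbound spool; the sender never waits. */
    static int queue_order(const char *record) {
        FILE *spool = fopen("orders.spool", "a");
        if (!spool) return -1;
        fprintf(spool, "%s\n", record);
        fclose(spool);
        return 0;   /* success means "queued", not "processed" */
    }

    int main(void) {
        queue_order("1001|WIDGET|25");
        queue_order("1002|SPROCKET|10");
        /* the program continues at once; delivery happens asynchronously */
        return 0;
    }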
Application-to-Server Communications:
Middleware can also link a business application with a generalized server
program that typically resides on another system. A database server, an image server, a
video server, and other general-purpose servers can communicate with an application
program through a middleware solution.
Products in the server-oriented middleware range include the following:
Products that conform to the Windows-based ODBC specification are server-oriented middleware. Under this specification, vendors provide an ODBC driver that, on one side, provides a consistent set of SQL-oriented access routines for use by Windows programs and, on the other side, manages access to the vendor's remote database. ODBC is covered in more detail in a later section of this chapter.
Vendor-specific remote access products and a handful of generic SQL-based
remote access solutions, which offer alternatives to remote database access
through ODBC, are also categorized under server-oriented middleware. Oracle's
Oracle Transparent Gateway range is an example of an SQL-based remote access
product.
On the edge of the server-oriented middleware market is the transaction-processing workhorse CICS. CICS is available for OS/2, mainframes, and the AS/400.

TYPES OF MIDDLEWARE:
Several main types of middleware can be used to build client/server systems. The
following are the well-known types:
1. DCE (Distributed Computing Environment)
2. MOM (Message-Oriented Middleware)
3. Transaction-processing monitors (TP monitors)
4. ODBC (Open Database Connectivity)


DCE:
DCE is an integrated set of services that supports the development of
distributed applications, including client/server. DCE is operating system and network
independent, providing compatibility with users' existing environments. The following
figure shows DCE's layered approach.
The architecture of DCE is a layered model that integrates a set of technologies.
The architecture is layered bottom-up from the operating system to the highest-level
applications. Security and management are essential to all layers of the environment. To
applications, the environment appears as a single logical system rather than a collection
of different services. Distributed services provide tools for software developers to create
the end-user services needed for distributed computing. These distributed services include
the following:

Remote procedure call and presentation services


Naming or directory services
Time service
Security service
Threads service
Distributed file services
PC integration service
Management service

FIGURE: The Distributed Computing Environment. (The figure shows DCE's layers from the bottom up: Operating System; Threads; RPC and Presentation Services; Time, Naming, and Other Services; Distributed File Services and PC Integration; Other Distributed Services; Applications. Security and Management span all layers.)


Remote Procedure Call (RPC):


The Remote Procedure Call (RPC) capability is based on a simple premise: make
individual procedures in an application run on a computer somewhere else within the
network. A distributed application, running as a process on one computer, makes
procedure calls that execute on another computer. Within the application, such program
calls appear to be standard local procedure calls, but these calls activate stub procedures
that interact with an RPC run-time library to carry out the necessary steps to execute the
call on the remote computer. RPC manages the network communications needed to
support these calls, even the details such as network protocols. This means that
distributed applications need little or no network-specific code, making development of
such applications relatively easy. In this way, RPC distributes application execution. RPC
extends a local procedure call by supporting direct calls to procedures on remote systems,
enabling programmers to develop distributed applications as easily as traditional, single-system programs. RPC presentation services mask the differences between data representations on different machines, allowing programs to work across multiple, mixed
systems.
RPC is used to allow applications to be processed in part on other servers, which
leaves the client workstation free to do other tasks. RPC allows clients to interact with
multiple servers and allows servers to handle multiple clients simultaneously. RPC allows
clients to identify and locate servers by name. RPC applications, integrated with the
directory services, are insulated from the details of the service. This characteristic will
allow them to take advantage of future enhancements.
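The following sketch shows the shape of the idea in C. The names are hypothetical, not the DCE API: get_price() looks like an ordinary local procedure to its caller, while a stand-in for the RPC run time hides all marshaling and networking (here it simply fakes a reply so the sketch runs):

    #include <stdio.h>
    #include <stdlib.h>

    /* Stand-in for the RPC run-time library: in a real system this would
       marshal the request, locate the server through the directory service,
       and ship the call over the network. Here it fakes a reply. */
    static void rpc_runtime_call(const char *server, const char *request,
                                 char *reply, size_t reply_len) {
        (void)server; (void)request;               /* unused in the fake */
        snprintf(reply, reply_len, "19.95");
    }

    /* The client stub: to its caller this is an ordinary local procedure. */
    static double get_price(const char *item) {
        char request[128], reply[64];
        snprintf(request, sizeof(request), "GET_PRICE %s", item);
        rpc_runtime_call("price_server", request, reply, sizeof(reply));
        return atof(reply);   /* presentation services would also convert
                                 byte order and character sets here */
    }

    int main(void) {
        printf("price of drill: %.2f\n", get_price("drill"));
        return 0;
    }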
Naming Services:
The distributed directory service provides a single naming model throughout
DCE. This model allows users to identify by name resources such as servers, files, disks,
or print queues and to gain access to these resources without needing to know where they
are located on a network. As a result, users can continue referring to a resource by one
name even when a characteristic of the resource, such as its network address, changes.
The distributed directory service seamlessly integrates the X.500 naming system
with a replicated local naming system. Developers can move transparently from
environments supporting full ISO functionality to those supporting only the local naming
service component. The service allows the transparent integration of other services, such
as distributed file services, into the directory service. The global portion of the directory
service offers full X.500 functionality through the X/Open Directory Service API and
through a standard management interface.
The directory service allows users or administrators to create multiple copies of
critical data, assuring availability in spite of communication and hardware failures. It also
provides a sophisticated update mechanism that ensures consistency. Changes to names
or their attributes are automatically propagated to all replicas. In addition, replication

allows names to be replicated near the people who use them, providing better
performance.
The directory service is fully integrated with the security service, which provides
secure communications. Sophisticated access control provides protection for entries. The
directory service can accommodate large networks as easily as small ones. The ability to
easily add servers, directories, and directory levels makes painless growth possible.
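In spirit, the directory service behaves like a replicated table that resolves a stable name to a current network address. The sketch below (hypothetical names and entries, not the DCE directory API) shows why a client is unaffected when a server's address changes - only the table entry does:

    #include <stdio.h>
    #include <string.h>

    struct entry { const char *name; const char *address; };

    /* A toy replica of the directory: when a server moves, only this table
       (and its replicas) changes; clients keep using the stable name. */
    static struct entry directory[] = {
        { "/.:/servers/print-queue", "10.0.3.7:515"  },
        { "/.:/servers/payroll-db",  "10.0.5.2:1521" },
    };

    static const char *resolve(const char *name) {
        for (size_t i = 0; i < sizeof(directory) / sizeof(directory[0]); i++)
            if (strcmp(directory[i].name, name) == 0)
                return directory[i].address;
        return NULL;
    }

    int main(void) {
        printf("payroll-db is at %s\n", resolve("/.:/servers/payroll-db"));
        return 0;
    }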
Time Service:
A time service synchronizes all system clocks of a distributed environment so that
executing applications can depend on equivalent clocking among processes. Consider
that many machines operating in many time zones may provide processes as part of a
single application solution. It's essential that they all agree on the time in order to manage scheduled events and time-sequenced events.
The distributed time service is a software-based service that synchronizes each
computer to a widely recognized time standard. This service provides precise, fault-tolerant clock synchronization for systems in both local area networks and wide area
networks. Time service software is integrated with the RPC, directory, and security
services. DCE uses a modified version of DEC's Time Synchronization Service.
Threads Service:
Developers want to exploit the computing power that is available throughout the
distributed environment. The threads service provides portable facilities that support
concurrent programming, which allows an application to perform many actions
simultaneously. While one thread executes a remote procedure call, another thread can
process user input. The threads service includes operations to create and control multiple
threads of execution in a single process and to synchronize access to global data within
an application. Because a server process using threads can handle many clients at the
same time, the threads service is well-suited to dealing with multiple clients in
client/server-based applications. A number of DCE components, including RPC, security,
directory, and time services, use the threads service.
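DCE threads was derived from a draft of the POSIX threads standard, so a small POSIX threads program conveys the idea: one server process overlaps work for several clients by giving each request its own thread. This is a sketch only; a real server would pull requests off the network rather than hard-code them:

    #include <pthread.h>
    #include <stdio.h>

    /* Each client request is served on its own thread, so one server process
       can overlap work for many clients. */
    static void *serve_client(void *arg) {
        int id = *(int *)arg;
        printf("serving client %d\n", id);
        return NULL;
    }

    int main(void) {
        pthread_t workers[3];
        int ids[3] = {1, 2, 3};
        for (int i = 0; i < 3; i++)
            pthread_create(&workers[i], NULL, serve_client, &ids[i]);
        for (int i = 0; i < 3; i++)
            pthread_join(workers[i], NULL);   /* wait for all requests to finish */
        return 0;
    }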
Security Service:
In most conventional timesharing systems, the operating system authenticates the
identity of users and authorizes access to resources. In a distributed computing
environment where activities span multiple hosts with multiple operating systems,
however, authentication and authorization require an independent security service that
can be trusted by many hosts. DCE provides such a service. The DCE security service
component is well integrated within the fundamental distributed service and data-sharing
components. It provides the network with three conventional services: authentication,
authorization, and user account management. These facilities are made available through
a secure means of communication that ensures both integrity and privacy. The security
service incorporates an authentication service based on the Kerberos system from MIT's

Project Athena. Kerberos is a trusted service that prevents fraudulent requests by validating the identity of a user or service.
After users are authenticated, they receive authorization to use resources such as
files. The authorization facility gives applications the tools they need to determine
whether a user should have access to resources. It also provides a simple and consistent
way to manage access control information.
Every computer system requires a mechanism for managing user account
information. The User Registry solves the traditional problems of user account control in
distributed, multivendor networks by providing a single, scalable system for
consolidating and managing user information. The User Registry ensures the use of
unique user names and passwords across the distributed network of systems and services,
ensures the accuracy and consistency of this information at all sites, and provides security
for updates and changes. It maintains a single, logical database of user account
information, including user and group naming information, login account information,
and general system properties and policies. It is well-integrated with Kerberos to provide
an integrated, secure, reliable user account management system.
MOM:
MOM is a class of middleware that operates on the principles of message passing
and/or message queuing. MOM is characterized by a peer-to-peer distributed computing
model supporting both synchronous and asynchronous interactions between distributed
computing processes. MOM generally provides high-level services, multiprotocol
support, and other systems management services. These services create an infrastructure
to support highly reliable, scalable, and performance-oriented client/server systems in
mixed environments.
MOM is perhaps the most visible and currently the clearest example of
middleware. One of the key attributes of middleware is that it should provide seamless
integration between different environments. MOM uses the concept of a message to
separate processes so that they can operate independently and often simultaneously. For
example, a workstation can send a request for data, which requires collection and
collation from multiple sources, while continuing with other processing.
This form of so-called asynchronous processing allows MOM to provide a rich
level of connectivity in many types of business systems. MOM can handle everything
from a simple message to download some data from a database server to an advanced
client/server system with built-in workflow. In general terms, MOM works by defining,
storing, and forwarding the messages. When a client issues a request for a service such as
a database search, it does not talk directly to that service; it talks to the middleware.
Talking to the middleware usually involves placing the message on a queue where it will
be picked up by the appropriate service when the service is available. Some MOM
products use a polling method to pick up messages instead, but the principle is the same;
the messaging middleware acts as a buffer between the client and the server. More strictly
speaking, middleware is the requester on the client and the service on the server; as with

MOM itself, there can be many instances of both requesters and services on a single
client or server. MOM insulates both the client and server applications from the complexities of network communications.
MOM ensures that messages get to their destinations and receive a response. The
queuing mechanism can be very flexible, either offering a first in, first out scheme or one
that allows priorities to be assigned to a message. The use of queues means that MOM
software can be very flexible. Like other forms of middleware, it can accommodate
straightforward one-to-one communications and many-to-one communications. Message
passing and message queuing have been around for many years as the basis for Online
Transaction Processing (OLTP) systems. The MOM software can also include system
management functions such as network integrity and disaster recovery.
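A minimal in-memory sketch of the queuing idea (the mom_put/mom_get names are hypothetical, not any vendor's API): the client only ever talks to the queue, and the drain order can be first in, first out or priority-based, as described above:

    #include <stdio.h>
    #include <string.h>

    #define MAX_MSGS 16

    struct message { int priority; char body[64]; };

    static struct message queue[MAX_MSGS];
    static int count = 0;

    /* Client side: hand the message to the middleware and return at once. */
    static int mom_put(int priority, const char *body) {
        if (count == MAX_MSGS) return -1;          /* queue full */
        queue[count].priority = priority;
        snprintf(queue[count].body, sizeof(queue[count].body), "%s", body);
        count++;
        return 0;
    }

    /* Service side: take the highest-priority message; first in, first out
       among messages of equal priority. */
    static int mom_get(struct message *out) {
        if (count == 0) return -1;                 /* nothing queued */
        int best = 0;
        for (int i = 1; i < count; i++)
            if (queue[i].priority > queue[best].priority)
                best = i;
        *out = queue[best];
        memmove(&queue[best], &queue[best + 1],
                (size_t)(count - best - 1) * sizeof(queue[0]));
        count--;
        return 0;
    }

    int main(void) {
        mom_put(1, "refresh price list");
        mom_put(5, "customer order 1001");         /* higher priority, served first */
        struct message m;
        while (mom_get(&m) == 0)
            printf("serving (%d): %s\n", m.priority, m.body);
        return 0;
    }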
MOM is similar to electronic mail systems such as Lotus cc:Mail. Although MOM uses similar mechanisms and can indeed provide the foundation for electronic mail, a key difference exists between MOM and electronic mail systems. Electronic mail
passes messages from one person to another, whereas MOM passes messages back and
forth between software processes.
MOM differs from database middleware in that database middleware vendors'
expertise and products focus on providing their customers with the integration of data
residing in multiple databases throughout the customer's enterprise. Their solutions
normally require a communications component for managing and supporting sessions
between the front-end client and one or more back-end database servers. Their designs
are normally specific to accommodate the distribution and integration of their own
DBMS on multiple platforms. Products from MOM companies who specialize in this
environment provide users with a general-purpose solution that can be more readily used
for any-to-any environments, including SQL to SQL and SQL to non-SQL (IMS, for
example), and for non-DBMS data files. MOM products provide direct process-to-process communications and are not just restricted to accessing data.
The Advantages of Using MOM:
In many modern client/server applications, there are clear advantages to using
MOM. It provides a relatively simple application programming interface (API), making it
easy for programmers to develop the necessary skills. The API is portable, so MOM
programs can be moved to new platforms easily without changing the application code.
The flexibility of the API also extends to legacy applications so that distributed
computing can be introduced gradually without incurring a massive reprogramming
exercise. MOM is a good tool to use as you begin your initial client/server development.
MOM is also a valid middleware technology on a system that uses object-oriented
technology. Objects, by their very definition, interact with one another by using
messages. Message passing and message queuing allow objects to exchange data and can
even pass objects without sharing control. Therefore, message-oriented middleware can
be a natural technology to complement and support object technology.

Problems with MOM:


The main problem with MOM is that its function is restricted to message passing.
In other words, it does not include facilities to convert data formats. If, as in many
systems, data is to be transferred from mainframes to PCs, the data conversion from
EBCDIC to ASCII formats must be handled elsewhere. The MOM software only
provides the transport and delivery mechanisms for messages; it is not concerned with the
content. As a result, the application must take responsibility for creating and decoding
messages. This additional responsibility increases the application's complexity.
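Because the application owns data conversion, a PC program receiving mainframe data must translate it itself. A minimal sketch of EBCDIC-to-ASCII translation in C, covering only digits, uppercase letters, and the space character (a production converter would use a full 256-entry table):

    #include <stdio.h>

    /* Translate one EBCDIC byte to ASCII. Only digits, uppercase letters,
       and the space character are handled here. */
    static char ebcdic_to_ascii(unsigned char c) {
        if (c >= 0xF0 && c <= 0xF9) return (char)('0' + (c - 0xF0)); /* 0-9 */
        if (c >= 0xC1 && c <= 0xC9) return (char)('A' + (c - 0xC1)); /* A-I */
        if (c >= 0xD1 && c <= 0xD9) return (char)('J' + (c - 0xD1)); /* J-R */
        if (c >= 0xE2 && c <= 0xE9) return (char)('S' + (c - 0xE2)); /* S-Z */
        if (c == 0x40) return ' ';                                   /* space */
        return '?';
    }

    int main(void) {
        /* "HELLO 42" as it arrives from the mainframe */
        unsigned char host[] = { 0xC8, 0xC5, 0xD3, 0xD3, 0xD6, 0x40, 0xF4, 0xF2 };
        for (size_t i = 0; i < sizeof(host); i++)
            putchar(ebcdic_to_ascii(host[i]));
        putchar('\n');
        return 0;
    }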
MOM's simplicity also can slow performance because messages are usually
processed from a queue one at a time. The problem can be solved by running multiple
versions of the message-processing software, although this approach is not ideal. This
particular problem means that MOM is not usually suitable for applications that require
real-time communications within applications.
Another major problem with MOM is that there is little in the way of
standardization. In 1993, a group of suppliers formed the MOM Association (MOMA) to
promote common interests. Key members include IBM, Digital, Novell, Peer Logic and
Software AG. MOMA is not a standards-making body, which means MOM products are
essentially proprietary in nature. MOMA does lobby standards bodies with an interest in
middleware. It has ties to the Open Software Foundation (OSF) and the Object
Management Group (OMG) in its work on object-oriented computing. MOM suppliers
argue with some justification that the simplicity of MOM calls means that rigid standards
are unnecessary. There are some individual initiatives aimed at promoting interworking
both between different MOM products and with non-MOM middleware such as OSF's
DCE remote procedure call (RPC) technology, which was discussed earlier in the chapter.
Many third-party products also provide links to IBM's CICS to ease the migration path
from legacy systems to client/server.
As MOM expands to resolve these problems, it will inevitably become more
complex and start to resemble other approaches to middleware, such as transaction
processing and RPC. Finally, like many other solutions to the middleware problem, tools
that help create application systems around MOM and subsequently manage them are
needed. Momentum Software offers one of the most promising solutions with its
modeling and simulation software that sits on top of Message Express.
Available MOM Products:
The leading MOM products inevitably come from the established systems
suppliers, with IBM and Digital having the highest profile. IBM's MQSeries, originally developed for IBM's main platforms (mainframe MVS, OS/400, and AIX, IBM's UNIX), now supports a wide range of non-IBM hardware platforms such as Sun Solaris,
Tandem, and AT&T GIS. MQSeries is a group of products that uses IBM's Message
Queue Interface (MQI) to provide communications between mixed computer platforms.

MQSeries accommodates all of the major computer languages (COBOL, C, Visual Basic) and network protocols (SNA, TCP/IP, DECnet, and IPX). Front-end client support covers Microsoft Windows, MS-DOS, and OS/2. MQSeries goes much further than many MOM products in providing support for transactional messaging and all of its associated benefits. This support includes features such as two-phase commit, security, and restart and recovery, which would normally be found in transaction management software.
Among the third-party suppliers, Peer Logic's Pipes is one of the leading
contenders. It supports the main platforms of DEC, IBM, and Hewlett Packard. The two
companies are working to integrate the Pipes software into IBM's Distributed System
Object Model (DSOM) and to provide bridges between Pipes and MQSeries. Momentum
Software's Message Express and X-IPC products are also widely used.
Transaction Processing Monitors:
Before client/server had developed as a concept, the concept of middleware was
very much in place within transaction processing systems. Transaction Processing (TP)
monitors were first built to cope with batched transactions. Transactions were
accumulated during the day and then passed against the company's data files overnight.
Originally, TP monitor meant teleprocessing monitor: a program that multiplexed many
terminals to a single central computer. Over time, TP monitors took on more than just
multiplexing and routing functions, and TP came to mean transaction processing.
By the 1970s, TP monitors were handling online transactions, which gave rise to
the term Online Transaction Processing that then became a part of the majority of legacy
business systems in place today. Transaction Processing systems pass messages between
programs. They operate store-and-forward queues, and they send acknowledgments.
They have advanced error trapping procedures and restart and recovery features in the
event of a breakdown that have evolved over the past 30 years from the requirements of
mainframe integrity. IBM has defined a transaction as an atomic unit of work that
possesses four properties. These properties are atomicity, consistency, isolation, and
durability. These properties are often referred to as ACID properties.
Atomicity effectively provides the transaction recovery needs. A transaction must
be completed as a whole, or the transaction is not completed at all. Therefore, the system
must have full restart and recovery capabilities such that any transaction that goes bad
can be automatically reversed. Consistency means that the results of a particular
transaction must be reproducible and predictable. The transaction obviously must always
produce the same results under the same conditions. Isolation means that no transaction
must interfere with any concurrently operating transaction. Finally, durability means that
the results of the transaction must be permanent.
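A toy sketch of atomicity in C (the tx_* helpers are hypothetical stand-ins for a real transaction manager): either both account updates become permanent, or the whole transaction is rolled back and neither does:

    #include <stdio.h>

    static double accounts[2] = { 100.0, 50.0 };
    static double snapshot[2];                 /* a crude undo log for the sketch */

    static void tx_begin(void)    { snapshot[0] = accounts[0]; snapshot[1] = accounts[1]; }
    static void tx_rollback(void) { accounts[0] = snapshot[0]; accounts[1] = snapshot[1]; }
    static void tx_commit(void)   { /* a real monitor would force its log to disk here */ }

    /* Move money between accounts as one atomic unit of work. */
    static int transfer(int from, int to, double amount) {
        tx_begin();
        accounts[from] -= amount;
        accounts[to]   += amount;
        if (accounts[from] < 0) {   /* business rule violated: undo everything */
            tx_rollback();
            return -1;
        }
        tx_commit();
        return 0;
    }

    int main(void) {
        if (transfer(0, 1, 250.0) != 0)
            printf("transfer aborted; balances unchanged: %.2f, %.2f\n",
                   accounts[0], accounts[1]);
        return 0;
    }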
As you can see from these definitions, the software required to achieve these
properties is essential for robust client/server systems, yet it is also inevitably complex.
The robustness of TP systems, as discussed earlier, has evolved over many years as

companies have demanded strong, secure mainframe systems. Client/server still has a
long way to go to match this robustness.
IBM has been in the forefront of moving TP from its mainframe roots to
client/server. IBM's CICS is perhaps one of the best examples of a Transaction
Processing system. CICS began in the late 1960s as the Customer Information Control
System (not Covered In Chocolate Sauce as was initially rumored!), a robust and reliable
piece of software with a great range of OLTP functionality. It has traditionally been used
on mainframes, yet recently it has also been ported to OS/2 as CICS OS/2 and to the RS/6000 UNIX machines as CICS/6000.
The CICS OS/2 product brings the traditional terminal emulation product and a
new External Call Interface (ECI) together at the client for processing across a network to
a TP server. IBM uses a technique called function shipping that enables TP tasks to be
moved around a network. The ECI technology is the crux of the system because it
provides a high level of communication between the client and server components of the
TP application that is required to support function shipping. Function shipping works in a
similar fashion to RPC as outlined in the DCE section. The benefit for CICS users is that
the CICS API is the same across all the platforms, so, in theory, a mainframe CICS
application could run on either CICS OS/2 or CICS RS/6000.
IBM and other TP suppliers have recognized that their products have an enormous
role to play in the new era of client/server computing. Their experience in the TP world,
coupled with the maturity of the product, can teach the client/server world significant
lessons as development goes forward. As a result, TP products such as CICS from IBM,
Tuxedo from Novell, and Top End from NCR are beginning to meet the demands of
client/server developers who need the robust, secure, and controllable features available
in these products. Without a doubt, the biggest reason for not moving to client/server is
that developers fear that the systems (sometimes rightly) do not have the integrity of the
30-year old legacy systems. In comparison to these legacy systems, client/server is a
newborn babe. Yet now more than ever, client/server systems based on workgroups and
LANs are considerably more viable than the traditional centralized mainframe processor
operating dumb terminals.
The main drawbacks of a TP system for client/server are that it is still
considerably more expensive than other forms of middleware and that TP suffers from a
lack of standards, similar to MOM. As companies diversify their client/server systems
and move from their legacy systems to client/server, they will benefit from using TP.
Queued, Conversational, and Workflow Models:
Most TP monitors have migrated from a client/server basis to a three-system
model in which the client performs data capture and local data processing and then sends
a request to a middleman called a request router. The router brokers the client request to
one or more server processes. Each server in turn executes the request and responds. This

design has evolved in three major directions: queued requests, conversational transactions, and workflow.
Queued TP is convenient for applications in which some clients produce data and
others process or consume it. E-mail, job dispatching, EDI (Electronic Data Interchange),
print spooling, and batch report generation are typical examples of queued TP. TP
monitors include a subsystem that manages transactional queues. The router inserts a
client's request into a queue for later processing by other applications. The TP monitor
may manage a pool of applications servers to process the queue. Conversely, the TP
monitor may attach a queue to each client and inform the client when messages appear in
its queue. Messaging applications are examples of queued transactions.
Simple transactions are one-message-in, one-message-out client/server
interactions, much like a simple RPC. Conversational transactions require the client and
server to exchange several messages as a single ACID unit. These relationships are
sometimes not a simple request and response, but rather small requests answered by a
sequence of responses (for example, a large database selection) or a large request (such as
sending a file to a server). The router acts as an intermediary between the client and
server for conversational transactions. Conversational transactions often invoke multiple
servers and maintain client context between interactions. Menu and forms-processing
systems are so common that TP systems have scripting tools to quickly define menus and
forms and the flows among them. The current menu state is part of the client context.
Application designers can attach server invocations and procedural logic to each menu or
form. In these cases, the TP monitor (router) manages the client context and controls the
conversation with a workflow language.
Workflow is the natural combination of conversational and queued transactions.
In its simplest form, a workflow is a sequence of ACID transactions following a
workflow script. For example, the script for a person-to-person e-mail message is
compose-deliver-receive. Typical business scripts are quite complex. Workflow systems
capture and manage individual flows. A client may advance a particular workflow by
performing a next step in the script. A developer defines workflow scripts as part of the
application design. Administrative tools report and administer the current work-in-process.
Advanced TP:
Modern database systems can maintain multiple replicas of a database. When one
replica is updated, the updates are cross-posted to the other replicas. TP monitors can
complement database replication in two ways. First, they can submit transactions to
multiple sites so that update transactions are applied to each replica, thus avoiding the
need to cross-post database updates.
Second, TP systems use database replicas in a fallback scheme, leaving the data
replication to the underlying database system. If a primary database site fails, the router
sends the transactions to the fallback replica of the database. Server failures are thus

hidden from clients, who are given the illusion of an instant switchover. Because the
router uses ACID transactions to cover both messages and database updates, each
transaction will be processed once. The main TP monitors available today are CICS, IMS,
ACMS, Pathway, Tuxedo, Encina, and Top End.
ODBC:
Open Database Connectivity (ODBC) is Microsoft's strategic interface for accessing data in a distributed environment made up of relational and nonrelational DBMSs. Based on the Call Level Interface specification of the SQL Access Group, ODBC provides an open,
supposedly vendor-neutral way of accessing data stored in a variety of proprietary
personal computer, minicomputer, and mainframe databases. ODBC alleviates the need
for independent software vendors and corporate developers to learn multiple application
programming interfaces. ODBC now provides a universal data access interface. With
ODBC, application developers can allow an application to concurrently access, view, and
modify data from multiple, diverse databases. ODBC is a core component of Microsoft
Windows Open Services Architecture (WOSA). ODBC has emerged as the industry
standard for data access for both Windows-based and Macintosh-based applications.
The key points with respect to ODBC in the client/server development environment are as follows:
ODBC is vendor-neutral, allowing access to DBMS from multiple vendors.
ODBC is open. Working with ANSI standards, the SQL Access Group (SAG), X/Open, and numerous independent software vendors, Microsoft has gained a very broad consensus on ODBC's implementation, and it is now the dominant standard.
ODBC is powerful; it offers capabilities critical to client/server online transaction
processing (OLTP) and decision support systems (DSS) applications, including
system table transparency, full transaction support, scrollable cursors,
asynchronous calling, array fetch and update, a flexible connection model, and
stored procedures for static SQL performance.
The key benefits of ODBC are the following:
It allows users to access data in more than one data storage location (for example,
more than one server) from within a single application.
It allows users to access data in more than one type of DBMS (such as DB2, Oracle, Microsoft SQL Server, DEC Rdb, and Progress) from within a single application.
It simplifies application development. It is now easier for developers to provide

access to data in multiple, concurrent DBMSs.


It is a portable application programming interface (API), enabling the same interface and access technology to be a cross-platform tool.
It insulates applications from changes to underlying network and DBMS versions.
Modifications to networking transports, servers, and DBMSs will not affect
current ODBC applications.
It promotes the use of SQL, the standard language for DBMSs, as defined in the
ANSI 1989 standard. It is an open, vendor-neutral specification based on the SAG
Call Level Interface (CLI).
It allows corporations to protect their investments in existing DBMSs and protect
developers' acquired DBMS skills. ODBC allows corporations to continue to use
existing diverse DBMSs while moving to client/server-based systems.
The ODBC Solution:
ODBC addresses the database connectivity problem by using the common
interface approach outlined previously. Application developers can use one API to access
all data sources. ODBC is based on a CLI specification, which was developed by a
consortium of over 40 companies (members of the SQL Access Group and others) and
has broad support from application and database suppliers. The result is a single API that
provides all the functionality that application developers need and an architecture that
database developers require to ensure interoperability. As a result, a very large selection
of applications use ODBC.
How ODBC Works:
ODBC defines an API. Each application uses the same code, as defined by the
API specification, to talk to many types of data sources through DBMS-specific drivers.
A driver manager sits between the applications and the drivers. In Windows, the Driver
Manager and the drivers are implemented as dynamic-link libraries (DLLs). Windows 95
and Windows NT work in a similar fashion, but as they are both 32-bit operating systems,
they can use a 32-bit version of ODBC.
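Omitting error checking for brevity, a minimal ODBC client in C looks like the following. The data source name "Accounting", the user name, and the query are hypothetical, but the calls themselves (SQLAllocHandle, SQLConnect, SQLExecDirect, SQLBindCol, SQLFetch) are the standard ODBC API:

    #include <stdio.h>
    #include <sql.h>
    #include <sqlext.h>

    int main(void) {
        SQLHENV env;
        SQLHDBC dbc;
        SQLHSTMT stmt;
        SQLCHAR name[64];
        SQLLEN len;

        SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
        SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
        SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

        /* "Accounting" is a data source name registered with the Driver Manager */
        SQLConnect(dbc, (SQLCHAR *)"Accounting", SQL_NTS,
                   (SQLCHAR *)"user", SQL_NTS, (SQLCHAR *)"secret", SQL_NTS);

        SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
        SQLExecDirect(stmt, (SQLCHAR *)"SELECT name FROM customers", SQL_NTS);
        SQLBindCol(stmt, 1, SQL_C_CHAR, name, sizeof(name), &len);
        while (SQLFetch(stmt) == SQL_SUCCESS)
            printf("%s\n", (char *)name);

        SQLFreeHandle(SQL_HANDLE_STMT, stmt);
        SQLDisconnect(dbc);
        SQLFreeHandle(SQL_HANDLE_DBC, dbc);
        SQLFreeHandle(SQL_HANDLE_ENV, env);
        return 0;
    }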

ODBC ARCHITECTURE
The ODBC architecture has four components:

Application. Performs processing and calls ODBC functions to submit SQL statements and retrieve results.

Driver Manager. Loads and unloads drivers on behalf of an application. Processes ODBC function calls or passes them to a driver.


Driver. Processes ODBC function calls, submits SQL requests to a specific data source, and returns results to the application. If necessary, the driver modifies an application's request so that the request conforms to syntax supported by the associated DBMS.

Data source. Consists of the data the user wants to access and its associated
operating system, DBMS, and network platform (if any) used to access the
DBMS.
Figure: Relationship among the four components of ODBC.

The above figure shows the relationship among ODBC components. Note the
following about this diagram. First, multiple drivers and data sources can exist, which
allows the application to simultaneously access data from more than one data source.
Second, the ODBC API is used in two places: between the application and the Driver
Manager, and between the Driver Manager and each driver. The interface between the
Driver Manager and the drivers is sometimes referred to as the service provider interface,
or SPI. For ODBC, the application programming interface (API) and the service provider
interface (SPI) are the same. That is, the Driver Manager and each driver have the same
interface to the same functions.
This section contains the following topics.

Applications

The Driver Manager


Drivers

Data Sources


APPLICATIONS:
An application is a program that calls the ODBC API to access data. Although
many types of applications are possible, most fall into three categories, which are used as
examples throughout this guide.

Generic Applications. These are also referred to as shrink-wrapped applications or off-the-shelf applications. Generic applications are designed
to work with a variety of different DBMSs. Examples include a spreadsheet
or statistics package that uses ODBC to import data for further analysis and a
word processor that uses ODBC to get a mailing list from a database.
An important subcategory of generic applications is application development
environments, such as PowerBuilder or Microsoft Visual Basic. Although
the applications constructed with these environments will probably work
only with a single DBMS, the environment itself needs to work with multiple
DBMSs.
What all generic applications have in common is that they are highly
interoperable among DBMSs and they need to use ODBC in a relatively
generic manner.

Vertical Applications. Vertical applications perform a single type of task, such as order entry or tracking manufacturing data, and work with a database
schema that is controlled by the developer of the application. For a particular
customer, the application works with a single DBMS. For example, a small
business might use the application with dBase, while a large business might
use it with Oracle.
The application uses ODBC in such a manner that the application is not tied
to any one DBMS, although it might be tied to a limited number of DBMSs
that provide similar functionality. Thus, the application developer can sell the
application independently from the DBMS. Vertical applications are
interoperable when they are developed but are sometimes modified to
include noninteroperable code once the customer has chosen a DBMS.

Custom Applications. Custom applications are used to perform a specific task in a single company. For example, an application in a large company
might gather sales data from several divisions (each of which uses a different
DBMS) and create a single report. ODBC is used because it is a common
interface and saves programmers from having to learn multiple interfaces.
Such applications are generally not interoperable and are written to specific
DBMSs and drivers.

A number of tasks are common to all applications, no matter how they use ODBC.
Taken together, they largely define the flow of any ODBC application. The tasks are:

Selecting a data source and connecting to it.

Submitting an SQL statement for execution.

Retrieving results (if any).

Processing errors.

Committing or rolling back the transaction enclosing the SQL statement.

Disconnecting from the data source.

Because most data access work is done with SQL, the primary task for which
applications use ODBC is to submit SQL statements and retrieve the results (if any)
generated by those statements. Other tasks for which applications use ODBC include
determining and adjusting to driver capabilities and browsing the database catalog.
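To make this flow concrete, here is a minimal sketch in C of the six tasks listed above, assuming an ODBC 3.x environment; the data source name "Payroll", the user ID, the password, and the employees table are illustrative assumptions, not part of any real installation.

#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

int main(void)
{
    SQLHENV  env;
    SQLHDBC  dbc;
    SQLHSTMT stmt;
    SQLCHAR  name[64];
    SQLLEN   ind;

    /* Select a data source and connect to it. */
    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);
    if (!SQL_SUCCEEDED(SQLConnect(dbc, (SQLCHAR *)"Payroll", SQL_NTS,
                                  (SQLCHAR *)"user", SQL_NTS,
                                  (SQLCHAR *)"secret", SQL_NTS))) {
        fprintf(stderr, "connect failed\n");      /* process errors */
        return 1;
    }

    /* Submit an SQL statement for execution and retrieve the results. */
    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    SQLExecDirect(stmt, (SQLCHAR *)"SELECT name FROM employees", SQL_NTS);
    SQLBindCol(stmt, 1, SQL_C_CHAR, name, sizeof(name), &ind);
    while (SQL_SUCCEEDED(SQLFetch(stmt)))         /* loop over the result set */
        printf("%s\n", name);

    /* Commit the enclosing transaction and disconnect. */
    SQLEndTran(SQL_HANDLE_DBC, dbc, SQL_COMMIT);
    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}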
THE DRIVER MANAGER:
The Driver Manager is a library that manages communication between
applications and drivers. For example, on Microsoft Windows platforms, the Driver
Manager is a dynamic-link library (DLL) that is written by Microsoft and can be
redistributed by users of the redistributable MDAC 2.8 SP1 SDK.
The Driver Manager exists mainly as a convenience to application writers and
solves a number of problems common to all applications. These include determining
which driver to load based on a data source name, loading and unloading drivers, and
calling functions in drivers.
To see why the latter is a problem, consider what would happen if the application
called functions in the driver directly. Unless the application was linked directly to a
particular driver, it would have to build a table of pointers to the functions in that driver
and call those functions by pointer. Using the same code for more than one driver at a
time would add yet another level of complexity. The application would first have to set a
function pointer to point to the correct function in the correct driver, and then call the
function through that pointer.
The Driver Manager solves this problem by providing a single place to call each
function. The application is linked to the Driver Manager and calls ODBC functions in
the Driver Manager, not the driver. The application identifies the target driver and data
source with a connection handle. When it loads a driver, the Driver Manager builds a
table of pointers to the functions in that driver. It uses the connection handle passed by
the application to find the address of the function in the target driver and calls that
function by address.
For the most part, the Driver Manager just passes function calls from the
application to the correct driver. However, it also implements some functions
(SQLDataSources, SQLDrivers, and SQLGetFunctions) and performs basic error
checking. For example, the Driver Manager checks that handles are not null pointers, that
functions are called in the correct order, and that certain function arguments are valid.
The final major role of the Driver Manager is loading and unloading drivers. The
application loads and unloads only the Driver Manager. When it wants to use a particular
driver, it calls a connection function (SQLConnect, SQLDriverConnect, or
SQLBrowseConnect) in the Driver Manager and specifies the name of a particular data
source or driver, such as "Accounting" or "SQL Server." Using this name, the Driver
Manager searches the data source information for the driver's file name, such as
Sqlsrvr.dll. It then loads the driver (assuming it is not already loaded), stores the address
of each function in the driver, and calls the connection function in the driver, which then
initializes itself and connects to the data source.
When the application is done using the driver, it calls SQLDisconnect in the
Driver Manager. The Driver Manager calls this function in the driver, which disconnects
from the data source. However, the Driver Manager keeps the driver in memory in case
the application reconnects to it. It unloads the driver only when the application frees the
connection used by the driver or uses the connection for a different driver, and no other
connections use the driver.
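As a short sketch of this by-name loading, an application might connect through SQLDriverConnect and let the Driver Manager resolve the driver from the data source name; the fragment below reuses the env and dbc handles from the earlier sketch, and the keyword values are assumptions.

SQLCHAR     out[1024];                     /* completed connection string */
SQLSMALLINT outlen;

/* "DSN=Accounting" tells the Driver Manager which driver to load;
 * it then calls that driver's own connection function. */
SQLDriverConnect(dbc, NULL,
                 (SQLCHAR *)"DSN=Accounting;UID=user;PWD=secret;", SQL_NTS,
                 out, (SQLSMALLINT)sizeof(out), &outlen,
                 SQL_DRIVER_NOPROMPT);

/* ...use the connection... */

SQLDisconnect(dbc);                        /* the Driver Manager may keep the
                                              driver loaded for a reconnect */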
DRIVERS:
Drivers are libraries that implement the functions in the ODBC API. Each is
specific to a particular DBMS; for example, a driver for Oracle cannot directly access
data in an Informix DBMS. Drivers expose the capabilities of the underlying DBMSs;
they are not required to implement capabilities not supported by the DBMS. For example,
if the underlying DBMS does not support outer joins, then neither should the driver. The
only major exception to this is that drivers for DBMSs that do not have stand-alone
database engines, such as Xbase, must implement a database engine that at least supports
a minimal amount of SQL.
This section contains the following topics:

• Driver Tasks
• Driver Architecture
Driver Tasks: Specific tasks performed by drivers include:

• Connecting to and disconnecting from the data source.
• Checking for function errors not checked by the Driver Manager.
• Initiating transactions; this is transparent to the application.
• Submitting SQL statements to the data source for execution. The driver must modify ODBC SQL to DBMS-specific SQL; this is often limited to replacing escape clauses defined by ODBC with DBMS-specific SQL (see the sketch after this list).
• Sending data to and retrieving data from the data source, including converting data types as specified by the application.
• Mapping DBMS-specific errors to ODBC SQLSTATEs.
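As a minimal sketch of the escape-clause translation named above, the application submits portable ODBC SQL through SQLExecDirect (reusing the statement handle from the first sketch), and the driver rewrites the {d ...} date escape into the target DBMS's own literal syntax; the table and column names are assumptions.

/* Portable ODBC SQL, using the ODBC date escape clause. */
SQLExecDirect(stmt,
    (SQLCHAR *)"SELECT order_id FROM orders "
               "WHERE order_date = {d '1996-01-01'}", SQL_NTS);

/* One driver might forward:
 *   ... WHERE order_date = '1996-01-01'
 * while another might forward:
 *   ... WHERE order_date = TO_DATE('1996-01-01', 'YYYY-MM-DD')
 */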
Driver Architecture: The following are the types of driver architecture:

• File-Based Drivers
• DBMS-Based Drivers
File-Based Drivers:
File-based drivers are used with data sources such as dBASE that do not provide a
stand-alone database engine for the driver to use. These drivers access the physical data
directly and must implement a database engine to process SQL statements. As a standard
practice, the database engines in file-based drivers implement the subset of ODBC SQL
defined by the minimum SQL conformance level.
In comparing file-based and DBMS-based drivers, file-based drivers are harder to
write because of the database engine component, less complicated to configure because
there are no network pieces, and less powerful because few people have the time to write
database engines as powerful as those produced by database companies.
The following illustration shows two different configurations of file-based
drivers, one in which the data resides locally and the other in which it resides on a
network file server.
DBMS-Based Drivers
DBMS-based drivers are used with data sources such as Oracle or SQL Server
that provide a stand-alone database engine for the driver to use. These drivers access the
physical data through the stand-alone engine; that is, they submit SQL statements to and
retrieve results from the engine.
Because DBMS-based drivers use an existing database engine, they are usually
easier to write than file-based drivers. Although a DBMS-based driver can be easily
implemented by translating ODBC calls to native API calls, this results in a slower driver.
A better way to implement a DBMS-based driver is to use the underlying data stream
protocol, which is usually what the native API does. For example, a SQL Server driver
should use TDS (the data stream protocol for SQL Server) rather than DB Library (the
native API for SQL Server). An exception to this rule is when ODBC is the native API.
For example, Watcom SQL is a stand-alone engine that resides on the same machine as
the application and is loaded directly as the driver.
DBMS-based drivers act as the client in a client/server configuration where the
data source acts as the server. In most cases, the client (driver) and server (data source)
reside on different machines, although both could reside on the same machine running a
multitasking operating system. A third possibility is a gateway, which sits between the
driver and data source. A gateway is a piece of software that causes one DBMS to look
like another. For example, applications written to use SQL Server can also access DB2
data through the Micro Decisionware DB2 Gateway; this product causes DB2 to look like
SQL Server.
The following illustration shows three different configurations of DBMS-based
drivers. In the first configuration, the driver and data source reside on the same machine.
In the second, the driver and data source reside on different machines. In the third, the
driver and data source reside on different machines and a gateway sits between them,
residing on yet another machine.
DATA SOURCES:
A data source is simply the source of the data. It can be a file, a particular database on a DBMS, or even a live data feed. The data might be located on the same computer as the program, or on another computer somewhere on a network. For example, a data source might be an Oracle DBMS running on an OS/2 operating system, accessed by Novell Netware; an IBM DB2 DBMS accessed through a gateway; a collection of Xbase files in a server directory; or a local Microsoft Access database file.
The purpose of a data source is to gather all of the technical information needed to access the data (the driver name, network address, network software, and so on) into a single place and hide it from the user. The user should be able to look at a list that includes Payroll, Inventory, and Personnel, choose Payroll from the list, and have the application connect to the payroll data, all without knowing where the payroll data resides or how the application got to it.
The term data source should not be confused with similar terms. In this manual,
DBMS or database refers to a database program or engine. A further distinction is made
between desktop databases, designed to run on personal computers and often lacking in
full SQL and transaction support, and server databases, designed to run in a client/server
situation and characterized by a stand-alone database engine and rich SQL and
transaction support. Database also refers to a particular collection of data, such as a
collection of Xbase files in a directory or a database on SQL Server. It is generally
equivalent to the term catalog, used elsewhere in this manual, or the term qualifier in
earlier versions of ODBC.
Types of Data Sources:
There are two types of data sources: machine data sources and file data sources.
Although both contain similar information about the source of the data, they differ in the
way this information is stored. Because of these differences, they are used in somewhat
different manners. The types are:

• Machine Data Sources
• File Data Sources
Machine Data Sources:
Machine data sources are stored on the system with a user-defined name. Associated with the data source name is all of the information the Driver Manager and driver need to connect to the data source. For an Xbase data source, this might be the name of the Xbase driver, the full path of the directory containing the Xbase files, and some options that tell the driver how to use those files, such as single-user mode or read-only. For an Oracle data source, this might be the name of the Oracle driver, the server where the Oracle DBMS resides, the SQL*Net connection string that identifies the SQL*Net driver to use, and the system ID of the database on the server.
File Data Sources:
File data sources are stored in a file and allow connection information to be used
repeatedly by a single user or shared among several users. When a file data source is
used, the Driver Manager makes the connection to the data source using the information
in a .dsn file. This file can be manipulated like any other file. A file data source does not
have a data source name, as does a machine data source, and is not registered to any one
user or machine.
A file data source streamlines the connection process, because the .dsn file
contains the connection string that would otherwise have to be built for a call to the
SQLDriverConnect function. Another advantage of the .dsn file is that it can be copied
to any machine, so identical data sources can be used by many machines as long as they
have the appropriate driver installed. A file data source can also be shared by
applications. A shareable file data source can be placed on a network and used
simultaneously by multiple applications.
A .dsn file can also be unshareable. An unshareable .dsn file resides on a single
machine and points to a machine data source. Unshareable file data sources exist mainly
to allow the easy conversion of machine data sources to file data sources so that an
application can be designed to work solely with file data sources. When the Driver
Manager is sent the information in an unshareable file data source, it connects as
necessary to the machine data source that the .dsn file points to.
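For illustration, a shareable payroll.dsn file might hold keyword-value pairs like the sketch below; the driver, server, and database names are assumptions. An application could then connect by passing a string such as "FILEDSN=payroll.dsn;UID=user;PWD=secret;" to SQLDriverConnect.

; payroll.dsn - a hypothetical shareable file data source.
[ODBC]
DRIVER = SQL Server
SERVER = acctsrv
DATABASE = payroll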
OPERATING SYSTEM SERVICES:
In distributed computing environments, operating system functions are either
base or extended services. The base services are part of the standard operating system,
while the extended services are add-on modular software components that are layered on
top of the base services. Functionally equivalent extended services are usually provided
by more than one vendor. There is no hard rule that determines what gets bundled in the
base operating system and what goes into the extensions. Today's extensions are usually
good candidates for tomorrow's base system services.
BASE SERVICES:
It should be apparent from the previous description that server programs exhibit a
high level of concurrency. Ideally, a separate task will be assigned to each of the clients
the server is designed to concurrently support. Task management is best done by a
multitasking operating system. Multitasking is the natural way to simplify the coding of
complex applications that can be divided into a collection of discrete and logically
distinct, concurrent tasks. It improves the performance, throughput, modularity, and
responsiveness of server programs. Multitasking also implies the existence of
mechanisms for intertask coordination and information exchanges.
Servers also require a high level of concurrency within a single program. Server
code will run more efficiently if tasks are allocated to parts of the same program rather
than to separate programs (these tasks are called coroutines or threads). Tasks within the
same program are faster to create, faster to context switch, and have easier access to
shared information. The figure below shows the type of support that servers require from
their operating system. The following are the server operating system requirements.
Figure: What Server Programs Expect From Their Operating System.
• Task Preemption: An operating system with preemptive multitasking must allot fixed time slots of execution to each task. Without preemptive multitasking, a task must voluntarily agree to give up the processor before another task can run. It is much safer and easier to write multitasking server programs in environments where the operating system automatically handles all the task switching.

• Task Priority: An operating system must dispatch tasks based on their priority. This feature allows servers to differentiate the level of service based on their clients' priority.

• Semaphores: An operating system must provide simple synchronization mechanisms for keeping concurrent tasks from bumping into one another when accessing shared resources. These mechanisms, known as semaphores, are used to synchronize the actions of independent server tasks and alert them when some significant event occurs.

• Interprocess Communication (IPC): An operating system must provide the mechanisms that allow independent processes to exchange and share data.

• Local or Remote Interprocess Communication: An operating system must allow the transparent redirection of interprocess calls to a remote process over a network without the application being aware of it. The extension of interprocess communications across machine boundaries is key to the development of applications where resources and processes can be easily moved across machines (i.e., they allow servers to grow bigger and fatter).

• Threads: These are units of concurrency provided within the program itself. Threads are used to create very concurrent, event-driven server programs. Each waiting event can be assigned to a thread that blocks until the event occurs. In the meantime, other threads can use the CPU's cycles productively to perform useful work (a sketch combining threads and semaphores follows this list).

• Intertask Protection: The operating system must protect tasks from interfering with each other's resources. A single task must not be able to bring down the entire system. Protection also extends to the file system and calls to the operating system.

• Multiuser High-Performance File System: The file system must support multiple tasks and provide the locks that protect the integrity of the data. Server programs typically work on many files at the same time. The file system must support a large number of open files without too much deterioration in performance.

• Efficient Memory Management: The memory system must efficiently support very large programs and very large data objects. These programs and data objects must be easily swapped to and from disk, preferably in small granular blocks.
• Dynamically Linked Run-Time Extensions: The operating system services should be extendable. A mechanism must be provided to allow services to grow at run time without recompiling the operating system.
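To make the semaphore and thread requirements concrete, here is a minimal sketch in C using POSIX threads; the four workers and ten simulated client events are arbitrary choices. Each worker blocks on a semaphore until an event occurs, leaving the CPU free for other threads, and a mutex keeps the tasks from bumping into one another on the shared counter.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

static sem_t work_ready;                   /* signals that an event occurred */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static int next_request = 0;               /* shared resource */

static void *worker(void *arg)
{
    long id = (long)arg;
    for (;;) {
        sem_wait(&work_ready);             /* block until an event occurs */
        pthread_mutex_lock(&lock);         /* protect the shared counter */
        int req = next_request++;
        pthread_mutex_unlock(&lock);
        printf("thread %ld serving request %d\n", id, req);
    }
    return NULL;
}

int main(void)
{
    pthread_t t[4];
    sem_init(&work_ready, 0, 0);
    for (long i = 0; i < 4; i++)           /* one task per concurrent client */
        pthread_create(&t[i], NULL, worker, (void *)i);
    for (int i = 0; i < 10; i++)           /* simulate ten client events */
        sem_post(&work_ready);
    pthread_join(t[0], NULL);              /* workers loop forever */
    return 0;
}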
Extended Services:
Extended services provide the advanced system software that exploits the
distributed potential of networks, provide flexible access to shared information, and make
the system easier to manage and maintain. They also make it easier for independent
software vendors (ISVs) and system integrators to create new server applications. The
following figure shows some of the extended services server programs can expect from
their operating system. We will go over these expectations, starting from the bottom layer
and working our way up. Some of these expectations read more like wish lists. They will
eventually find their way into most operating systems.
• Ubiquitous Communications. The operating system extensions must provide a rich set of communications protocol stacks that allow the server to communicate with the greatest number of client platforms. In addition, the server should be able to communicate with other server platforms in case it needs assistance in providing services.

• Network Operating System Extensions. The operating system extensions must provide facilities for extending the file and print services over the network. Ideally, the applications should be able to transparently access any remote device (such as printers and files) as if they were local.
• Binary Large Objects (BLOBs). Images, video, graphics, intelligent documents, and database snapshots are about to test the capabilities of our operating systems, databases, and networks. These large objects (affectionately called BLOBs) require operating system extensions such as intelligent message streams and object representation formats. Networks must be prepared to move and transport these large BLOBs at astronomic speeds. Databases and file systems must be prepared to store those BLOBs and provide access to them. Protocols are needed for the exchange of BLOBs across systems and for associating BLOBs with programs that know what to do when they see one.
• Global Directories and Network Yellow Pages. The operating system extensions must provide a way for clients to locate servers and their services on the network using a global directory service. Network resources must be found by name. Servers must be able to dynamically register their services with the directory provider.

• Authentication and Authorization Services. The operating system extensions must provide a way for clients to prove to the server that they are who they claim to be. The authorization system determines if the authenticated client has the permission to obtain a remote service.
• System Management. The operating system extensions must provide an integrated network and system management platform. The system should be managed as a single server or as multiple servers assigned to domains. An enterprise view that covers multiple domains must be provided for servers that play in the big leagues. System management includes services for configuring a system and facilities for monitoring the performance of all elements, generating alerts when things break, distributing and managing software packages on client workstations, checking for viruses and intruders, and metering capabilities for pay-as-you-use server resources.

• Network Time. The operating system extensions must provide a mechanism for clients and servers to synchronize their clocks. This time should be coordinated with some universal time authority.
• Database and Transaction Services. The operating system extensions must provide a robust multiuser Database Management System (DBMS). This DBMS should ideally support SQL for decision support and server-stored procedures for transaction services. The server-stored procedures are created outside the operating system by programmers. More advanced functions include a Transaction Processing Monitor (TP Monitor) for managing stored procedures (or transactions) as atomic units of work that execute on one or more servers.
• Internet Services. The Internet is a huge growth opportunity for servers. We expect that over time the more common Internet services will become standard server features, including HTTP daemons, Secure Sockets Layer (SSL), firewalls, Domain Name Service, HTML-based file systems, and electronic commerce frameworks.

• Object-Oriented Services. This is an area where extended services will flourish for a long time to come. Services are becoming more object-oriented. The operating system will provide object broker services that allow any object to interact with any other object across the network. The operating system must also provide object interchange services and object repositories. Client/server applications of the future will be between communicating objects (in addition to communicating processes).
The term extended really does mean "extended." It covers the universe of current and future services needed to create distributed client/server environments. No current operating system bundles all the extended functions, but they're moving in that direction. You can purchase most functions a la carte from more than one vendor.
SERVER SCALABILITY:
What are the upper limits of servers? The limits really depend on the type of
service required by their clients. One safe rule is that clients will always want more
services; so scalable servers are frequently an issue. The following diagram shows the
different levels of escalation in server power. It starts with a single PC server that
reaches its limits with the top-of-the-line processor and I/O power. The next level of
server power is provided by superservers populated with multiprocessors. If that is not
enough power, the client/server model allows you to divide the work among different
servers. These multiservers know no upper limits to power. But they must know how to
work together.
Multiservers (or clusters) are used in environments that require more processing
power than that provided by a single server system (either SMP or uniprocessor). The
client/server model is upwardly scalable. When you need more processing power, you
can add more servers (thus creating a pool of servers). Or, the existing server machine
can be traded up to the latest generation of PC superserver machine. Multiservers
remove any upward limits to the growth of server power. Ordinary servers can provide
this power by working in all kinds of ensembles. For example, network operating system
extensions like the Distributed Computing Environment (DCE), and TP Monitors like
CICS, Encina, and Tuxedo provide the plumbing needed to create cooperating server
ensembles. Eventually, ORBs will also play in this arena.
Figure: The PC Server Scalability Story (from PC server to asymmetric multiprocessing superserver to symmetric multiprocessing superserver).
If you need more server power, you'll be looking at a new generation of superservers. These are fully-loaded machines; they include multiprocessors, high-speed
disk arrays for intensive I/O, and fault-tolerant features. Operating systems can enhance
the server hardware by providing direct support for multiprocessors. With the proper
division of labor, multiprocessors should improve job throughput and server application
speeds. A multiprocessor server is upwardly scalable. Users can get more performance out
of their servers by simply adding more processors instead of additional servers.
Multiprocessing comes in two flavors: asymmetric and fully symmetric (see above figure).
Asymmetric multiprocessing imposes hierarchy and a division of labor among
processors. Only one designated processor, the master, can run the operating system at
any one time. The master controls (in a tightly-coupled arrangement) slave processors
dedicated to specific functions such as disk I/O or network I/O. A coprocessor is an extreme
form of codependency; one processor completely controls a slave processor through
interlocked special-purpose instructions. The coprocessor has unique special-purpose
hardware that is not identical to the main processor. An example is a graphic coprocessor.
Symmetric Multiprocessing (SMP) treats all processors as equals. Any processor can do the work of any other processor. Applications are divided into threads that
can run concurrently on any available processor. Any processor in the pool can run the
operating system kernel and execute user-written threads. Symmetric multiprocessing
improves the performance of the application itself, as well as the total throughput of the
server system. Ideally, the operating system should support symmetric multiprocessing by
supplying three basic functions: a reentrant OS kernel, a global scheduler that assigns
threads to available processors, and shared I/O structures. Symmetric multiprocessing
requires multiprocessor hardware with some form of shared memory and local instruction
caches. Most importantly, symmetric multiprocessing requires new applications that can
exploit multithreaded parallelism. The few applications on the market that exploit SMP
are SQL database managers such as Oracle 7 and Sybase.