Client/server for tiny shops and nomadic tribes is a building-block implementation that runs the client, the middleware software, and most of the business
services on the same machine. It is the suggested implementation for one-person shops, home offices, and mobile users with well-endowed laptops. This is
a new opportunity area for client/server technology.
A. Meiappane
The client building block runs the client side of the application. It runs on an
Operating System (OS) that provides a Graphical User Interface (GUI) or an
Object Oriented User Interface (OOUI) and that can access distributed services,
wherever they may be. The operating system most often passes the buck to the
middleware building block and lets it handle the non-local services. The client
also runs a component of the Distributed System Management (DSM) element.
This could be anything from a simple agent on a managed PC to the entire front-end of the DSM application on a managing station.
The server building block runs the server side of the application. The server
application typically runs on top of some shrink-wrapped server software
package. The five contending server platforms for creating the next generation of
client/server applications are SQL database servers, TP Monitors, groupware
servers, object servers, and the Web. The server side depends on the operating
system to interface with the middleware building block that brings in the requests
for service. The server also runs a DSM component. This could be anything from
a simple agent on a managed PC to the entire back-end of the DSM application
(for example, it could provide a shared object database for storing system
management information).
The middleware building block runs on both the client and server sides of an
application. We broke this building block into three categories: transport stacks,
network operating systems (NOSs), and service-specific middleware. Middleware
is the nervous system of the client/server infrastructure. Like the other two
building blocks, the middleware also has a DSM software component.
The Distributed System Management application runs on every node in a client/server network. A managing workstation collects information from all its agents on
the network and displays it graphically. The managing workstation can also instruct its
agents to perform actions on its behalf. Think of management as running an autonomous
"network within a network." It is the "Big Brother" of the client/server world, but life is
impossible without it.
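The collect-and-display loop of a managing workstation can be sketched in miniature. This is an illustrative sketch only; the Agent and Manager classes and the metric names are invented here and belong to no particular DSM product:

```python
# Minimal sketch of a DSM-style managing workstation polling its agents.
# Class, node, and metric names are illustrative assumptions.

class Agent:
    """A managed node that reports simple status on request."""
    def __init__(self, node, cpu_pct):
        self.node = node
        self.cpu_pct = cpu_pct

    def report(self):
        return {"node": self.node, "cpu_pct": self.cpu_pct}

    def perform(self, action):
        # The manager can instruct agents to act on its behalf.
        if action == "restart_service":
            return f"{self.node}: service restarted"
        return f"{self.node}: unknown action"

class Manager:
    """Collects reports from every agent and flags overloaded nodes."""
    def __init__(self, agents):
        self.agents = agents

    def collect(self):
        return [a.report() for a in self.agents]

    def overloaded(self, threshold=90):
        return [r["node"] for r in self.collect() if r["cpu_pct"] > threshold]

mgr = Manager([Agent("pc-01", 35), Agent("srv-02", 97)])
print(mgr.overloaded())                        # ['srv-02']
print(mgr.agents[1].perform("restart_service"))
```

A real DSM product would gather far richer data over a management protocol, but the agent/manager division of labor is the same.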
MIDDLEWARE:
The following diagram shows the basic view of how a client workstation interacts
with a database server through a network.
Application-to-server communications
TYPES OF MIDDLEWARE:
Several main types of middleware can be used to build client/server systems. The
following are the well-known types:
1. Remote Procedure Calls (RPCs), as in DCE
2. Message-Oriented Middleware (MOM)
3. Transaction Processing (TP) monitors
4. Database middleware, such as ODBC
DCE:
DCE (the Distributed Computing Environment) is an integrated set of services that supports the development of
distributed applications, including client/server. DCE is operating system and network
independent, providing compatibility with users' existing environments. The following
figure shows DCE's layered approach.
The architecture of DCE is a layered model that integrates a set of technologies.
The architecture is layered bottom-up from the operating system to the highest-level
applications. Security and management are essential to all layers of the environment. To
applications, the environment appears as a single logical system rather than a collection
of different services. Distributed services provide tools for software developers to create
the end-user services needed for distributed computing. These distributed services include
the following:
Figure. DCE architecture: a layered model from the operating system up to applications, with security and management spanning all layers.

Naming services
Time services
Distributed file services
PC integration
Security services
Management services
Other distributed services
Naming Service:
The directory service allows names to be replicated near the people who use them, providing better
performance.
The directory service is fully integrated with the security service, which provides
secure communications. Sophisticated access control provides protection for entries. The
directory service can accommodate large networks as easily as small ones. The ability to
easily add servers, directories, and directory levels makes painless growth possible.
Time Service:
A time service synchronizes all system clocks of a distributed environment so that
executing applications can depend on equivalent clocking among processes. Consider
that many machines operating in many time zones may provide processes as part of a
single application solution. It's essential that they all agree on the time in order to manage
scheduled events and time-sequenced events.
The distributed time service is a software-based service that synchronizes each
computer to a widely recognized time standard. This service provides precise, fault-tolerant clock synchronization for systems in both local area networks and wide area
networks. Time service software is integrated with the RPC, directory, and security
services. DCE uses a modified version of DEC's Time Synchronization Service.
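DCE's actual Distributed Time Service is considerably more elaborate (it works with time intervals and consults several servers), but the core idea of software clock synchronization can be sketched as a simple offset estimate in the style of Cristian's algorithm. The function and the sample timestamps below are illustrative assumptions:

```python
# Sketch of software clock synchronization: a client estimates its offset
# from a time server by assuming the server's reply arrived halfway
# through the round trip. DCE's DTS is far more sophisticated than this.

def estimate_offset(t_request, t_server, t_reply):
    """t_request and t_reply are client clock readings taken just before
    sending and just after receiving; t_server is the timestamp carried
    in the server's reply. All times are in seconds."""
    round_trip = t_reply - t_request
    # Assume the server stamped the reply midway through the round trip.
    estimated_server_now = t_server + round_trip / 2
    return estimated_server_now - t_reply   # add this to the client clock

# Client clock read 100.0 and 100.4; the server said 103.1 in between.
offset = estimate_offset(100.0, 103.1, 100.4)
print(offset)   # about 2.9: the client is roughly 2.9 s behind the server
```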
Threads Service:
Developers want to exploit the computing power that is available throughout the
distributed environment. The threads service provides portable facilities that support
concurrent programming, which allows an application to perform many actions
simultaneously. While one thread executes a remote procedure call, another thread can
process user input. The threads service includes operations to create and control multiple
threads of execution in a single process and to synchronize access to global data within
an application. Because a server process using threads can handle many clients at the
same time, the threads service is well-suited to dealing with multiple clients in
client/server-based applications. A number of DCE components, including RPC, security,
directory, and time services, use the threads service.
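The pattern described above, one thread blocked on a remote call while others keep serving, with synchronized access to shared global data, looks much the same in any threads API. A minimal sketch using Python's standard threading module (the handler and counter are invented for illustration):

```python
import threading
import time

# Sketch of a threaded server: each client request gets its own thread,
# and a lock synchronizes access to shared global data, which is the kind
# of facility a threads service such as DCE threads provides.

request_count = 0
count_lock = threading.Lock()

def handle_client(client_id):
    global request_count
    time.sleep(0.01)            # stand-in for a blocking remote call
    with count_lock:            # synchronize access to shared data
        request_count += 1

threads = [threading.Thread(target=handle_client, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(request_count)  # 5: all clients were served concurrently
```

Without the lock, the increments could interleave and lose updates; that synchronization of global data is exactly what the threads service's primitives exist for.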
Security Service
In most conventional timesharing systems, the operating system authenticates the
identity of users and authorizes access to resources. In a distributed computing
environment where activities span multiple hosts with multiple operating systems,
however, authentication and authorization require an independent security service that
can be trusted by many hosts. DCE provides such a service. The DCE security service
component is well integrated within the fundamental distributed service and data-sharing
components. It provides the network with three conventional services: authentication,
authorization, and user account management. These facilities are made available through
a secure means of communication that ensures both integrity and privacy. The security
service incorporates an authentication service based on the Kerberos system from MIT's
Project Athena.

Message-Oriented Middleware (MOM):
With MOM, there can be many instances of both requesters and services on a single
client or server. MOM insulates both the client and server applications from the complexities of network communications.
MOM ensures that messages get to their destinations and receive a response. The
queuing mechanism can be very flexible, either offering a first in, first out scheme or one
that allows priorities to be assigned to a message. The use of queues means that MOM
software can be very flexible. Like other forms of middleware, it can accommodate
straightforward one-to-one communications and many-to-one communications. Message
passing and message queuing have been around for many years as the basis for Online
Transaction Processing (OLTP) systems. The MOM software can also include system
management functions such as network integrity and disaster recovery.
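The queuing behavior described above, FIFO delivery by default with optional message priorities, can be sketched with an in-process queue. A real MOM product persists its queues and spans machines, which this toy deliberately omits; the message bodies and priority values are invented:

```python
import queue

# Sketch of MOM-style queuing. Lower numbers dequeue first, so an urgent
# message overtakes earlier routine ones; equal priorities stay FIFO
# because the sequence number breaks ties in arrival order.
URGENT, ROUTINE = 0, 5

q = queue.PriorityQueue()
q.put((ROUTINE, 1, "update inventory"))
q.put((ROUTINE, 2, "log activity"))
q.put((URGENT, 3, "cancel order"))      # jumps ahead of the routine work

delivered = []
while not q.empty():
    priority, seq, body = q.get()
    delivered.append(body)

print(delivered)  # ['cancel order', 'update inventory', 'log activity']
```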
MOM is similar to electronic mail systems such as Lotus cc:Mail. Although
MOM uses similar mechanisms and can indeed provide the foundation for electronic
mail; a key difference exists between MOM and electronic mail systems. Electronic mail
passes messages from one person to another, whereas MOM passes messages back and
forth between software processes.
MOM differs from database middleware in that database middleware vendors'
expertise and products focus on providing their customers with the integration of data
residing in multiple databases throughout the customer's enterprise. Their solutions
normally require a communications component for managing and supporting sessions
between the front-end client and one or more back-end database servers. Their designs
are normally specific to accommodate the distribution and integration of their own
DBMS on multiple platforms. Products from MOM companies who specialize in this
environment provide users with a general-purpose solution that can be more readily used
for any-to-any environments, including SQL to SQL and SQL to non-SQL (IMS, for
example), and for non-DBMS data files. MOM products provide direct process-to-process communications and are not just restricted to accessing data.
The Advantages of Using MOM:
In many modern client/server applications, there are clear advantages to using
MOM. It provides a relatively simple application programming interface (API), making it
easy for programmers to develop the necessary skills. The API is portable, so MOM
programs can be moved to new platforms easily without changing the application code.
The flexibility of the API also extends to legacy applications so that distributed
computing can be introduced gradually without incurring a massive reprogramming
exercise. MOM is a good tool to use as you begin your initial client/server development.
MOM is also a valid middleware technology on a system that uses object-oriented
technology. Objects, by their very definition, interact with one another by using
messages. Message passing and message queuing allow objects to exchange data and can
even pass objects without sharing control. Therefore, message-oriented middleware can
be a natural technology to complement and support object technology.
MQSeries accommodates all of the major computer languages (COBOL, C, Visual Basic)
and network protocols (SNA, TCP/IP, Decnet, and IPX). Front-end client support covers
Microsoft Windows, MS-DOS, and OS/2. MQSeries goes much further than many MOM
products in providing support for transactional messaging and all of its associated
benefits. This support includes features such as two-phase commit, security, and restart
and recovery, which would normally be found in transaction management software.
Among the third-party suppliers, Peer Logic's Pipes is one of the leading
contenders. It supports the main platforms of DEC, IBM, and Hewlett Packard. The two
companies are working to integrate the Pipes software into IBM's Distributed System
Object Model (DSOM) and to provide bridges between Pipes and MQSeries. Momentum
Software's Message Express and X-IPC products are also widely used.
Transaction Processing Monitors:
Before client/server had developed as a concept, the concept of middleware was
very much in place within transaction processing systems. Transaction Processing (TP)
monitors were first built to cope with batched transactions. Transactions were
accumulated during the day and then passed against the company's data files overnight.
Originally, TP monitor meant teleprocessing monitor: a program that multiplexed many
terminals to a single central computer. Over time, TP monitors took on more than just
multiplexing and routing functions, and TP came to mean transaction processing.
By the 1970s, TP monitors were handling online transactions, which gave rise to
the term Online Transaction Processing (OLTP), which then became a part of the majority of legacy
business systems in place today. Transaction Processing systems pass messages between
programs. They operate store-and-forward queues, and they send acknowledgments.
They have advanced error trapping procedures and restart and recovery features in the
event of a breakdown that have evolved over the past 30 years from the requirements of
mainframe integrity. IBM has defined a transaction as an atomic unit of work that
possesses four properties. These properties are atomicity, consistency, isolation, and
durability. These properties are often referred to as ACID properties.
Atomicity effectively provides the transaction recovery needs. A transaction must
be completed as a whole, or the transaction is not completed at all. Therefore, the system
must have full restart and recovery capabilities such that any transaction that goes bad
can be automatically reversed. Consistency means that the results of a particular
transaction must be reproducible and predictable. The transaction obviously must always
produce the same results under the same conditions. Isolation means that no transaction
must interfere with any concurrently operating transaction. Finally, durability means that
the results of the transaction must be permanent.
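Atomicity in particular is easy to demonstrate with any transactional database. A sketch using Python's built-in sqlite3 module; the accounts, amounts, and the simulated mid-transaction failure are invented for illustration:

```python
import sqlite3

# Sketch of atomicity: a funds transfer either completes as a whole or,
# on failure, is rolled back so that no partial update survives.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE account (name TEXT PRIMARY KEY, balance INT)")
con.executemany("INSERT INTO account VALUES (?, ?)",
                [("alice", 100), ("bob", 50)])
con.commit()

def transfer(con, src, dst, amount, fail_midway=False):
    try:
        con.execute("UPDATE account SET balance = balance - ? WHERE name = ?",
                    (amount, src))
        if fail_midway:
            # Simulate a crash between the debit and the credit.
            raise RuntimeError("server crashed before credit")
        con.execute("UPDATE account SET balance = balance + ? WHERE name = ?",
                    (amount, dst))
        con.commit()
    except Exception:
        con.rollback()   # automatic reversal: the transaction never happened

transfer(con, "alice", "bob", 30, fail_midway=True)
balances = dict(con.execute("SELECT name, balance FROM account"))
print(balances)  # balances unchanged: alice 100, bob 50
```

The debit ran, but because the transaction never committed, the rollback erased it; that is the "completed as a whole, or not at all" guarantee a TP monitor extends across multiple servers.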
As you can see from these definitions, the software required to achieve these
properties is essential for robust client/server systems, yet it is also inevitably complex.
The robustness of TP systems, as discussed earlier, has evolved over many years as
companies have demanded strong, secure mainframe systems. Client/server still has a
long way to go to match this robustness.
IBM has been in the forefront of moving TP from its mainframe roots to
client/server. IBM's CICS is perhaps one of the best examples of a Transaction
Processing system. CICS began in the late 1960s as the Customer Information Control
System (not Covered In Chocolate Sauce as was initially rumored!), a robust and reliable
piece of software with a great range of OLTP functionality. It has traditionally been used
on mainframes, yet recently it has also been ported to OS/2 as CICS OS/2 and the
RS/6000 UNIX machines as CICS/6000.
The CICS OS/2 product brings the traditional terminal emulation product and a
new External Call Interface (ECI) together at the client for processing across a network to
a TP server. IBM uses a technique called function shipping that enables TP tasks to be
moved around a network. The ECI technology is the crux of the system because it
provides a high level of communication between the client and server components of the
TP application that is required to support function shipping. Function shipping works in a
similar fashion to RPC as outlined in the DCE section. The benefit for CICS users is that
the CICS API is the same across all the platforms, so, in theory, a mainframe CICS
application could run on either CICS OS/2 or CICS RS/6000.
IBM and other TP suppliers have recognized that their products have an enormous
role to play in the new era of client/server computing. Their experience in the TP world,
coupled with the maturity of the product, can teach the client/server world significant
lessons as development goes forward. As a result, TP products such as CICS from IBM,
Tuxedo from Novell, and Top End from NCR are beginning to meet the demands of
client/server developers who need the robust, secure, and controllable features available
in these products. Without a doubt, the biggest reason for not moving to client/server is
that developers fear that the systems (sometimes rightly) do not have the integrity of the
30-year old legacy systems. In comparison to these legacy systems, client/server is a
newborn babe. Yet now more than ever, client/server systems based on workgroups and
LANs are considerably more viable than the traditional centralized mainframe processor
operating dumb terminals.
The main drawbacks of a TP system for client/server are that it is still
considerably more expensive than other forms of middleware and that TP suffers from a
lack of standards, similar to MOM. As companies diversify their client/server systems
and move from their legacy systems to client/server, they will benefit from using TP.
Queued, Conversational, and Workflow Models:
Most TP monitors have migrated from a client/server basis to a three-system
model in which the client performs data capture and local data processing and then sends
a request to a middleman called a request router. The router brokers the client request to
one or more server processes. Each server in turn executes the request and responds. This
hidden from clients, who are given the illusion of an instant switchover. Because the
router uses ACID transactions to cover both messages and database updates, each
transaction will be processed exactly once. The main TP monitors available today are CICS, IMS,
ACMS, Pathway, Tuxedo, Encina, and Top End.
ODBC:
Open database connectivity (ODBC) is Microsoft's strategic interface for
accessing data in a distributed environment made up of relational and nonrelational
DBMSs. Based on the Call
Level Interface specification of the SQL Access Group, ODBC provides an open,
supposedly vendor-neutral way of accessing data stored in a variety of proprietary
personal computer, minicomputer, and mainframe databases. ODBC alleviates the need
for independent software vendors and corporate developers to learn multiple application
programming interfaces. ODBC now provides a universal data access interface. With
ODBC, application developers can allow an application to concurrently access, view, and
modify data from multiple, diverse databases. ODBC is a core component of Microsoft
Windows Open Services Architecture (WOSA). ODBC has emerged as the industry
standard for data access for both Windows-based and Macintosh-based applications.
The key salient points with respect to ODBC in the client/server development
environment are as follows:
ODBC is vendor-neutral, allowing access to DBMS from multiple vendors.
ODBC is open. Working with ANSI standards, the SQL Access Group (SAG), X/Open,
and numerous independent software vendors, Microsoft has gained a very
broad consensus on ODBC's implementation, and it is now the dominant standard.
ODBC is powerful; it offers capabilities critical to client/server online transaction
processing (OLTP) and decision support systems (DSS) applications, including
system table transparency, full transaction support, scrollable cursors,
asynchronous calling, array fetch and update, a flexible connection model, and
stored procedures for static SQL performance.
The key benefits of ODBC are the following:
It allows users to access data in more than one data storage location (for example,
more than one server) from within a single application.
It allows users to access data in more than one type of DBMS (such as DB2,
Oracle, Microsoft SQL Server, DEC Rdb, and Progress) from within a single application.
It simplifies application development. It is now easier for developers to provide access to data in multiple, diverse data sources.
ODBC ARCHITECTURE
The ODBC architecture has four components:
Application. Performs processing and calls ODBC functions to submit SQL statements and retrieve results.
Driver Manager. Loads and unloads drivers on behalf of an application, and processes ODBC function calls or passes them to a driver.
Driver. Processes ODBC function calls, submits SQL requests to a specific data
source, and returns results to the application. If necessary, the driver modifies an
application's request so that the request conforms to syntax supported by the
associated DBMS.
Data source. Consists of the data the user wants to access and its associated
operating system, DBMS, and network platform (if any) used to access the
DBMS.
Figure. Relationship among the four components of ODBC
The above figure shows the relationship among ODBC components. Note the
following about this diagram. First, multiple drivers and data sources can exist, which
allows the application to simultaneously access data from more than one data source.
Second, the ODBC API is used in two places: between the application and the Driver
Manager, and between the Driver Manager and each driver. The interface between the
Driver Manager and the drivers is sometimes referred to as the service provider interface,
or SPI. For ODBC, the application programming interface (API) and the service provider
interface (SPI) are the same. That is, the Driver Manager and each driver have the same
interface to the same functions.
This section contains the following topics:
Applications
The Driver Manager
Drivers
Data Sources
APPLICATIONS:
An application is a program that calls the ODBC API to access data. Although
many types of applications are possible, most fall into three categories, which are used as
examples throughout this guide.
A number of tasks are common to all applications, no matter how they use ODBC.
Taken together, they largely define the flow of any ODBC application. The tasks are:
Selecting a data source and connecting to it.
Submitting an SQL statement for execution.
Retrieving results (if any).
Processing errors.
Committing or rolling back the transaction that encloses the SQL statement.
Disconnecting from the data source.
Because most data access work is done with SQL, the primary task for which
applications use ODBC is to submit SQL statements and retrieve the results (if any)
generated by those statements. Other tasks for which applications use ODBC include
determining and adjusting to driver capabilities and browsing the database catalog.
THE DRIVER MANAGER:
The Driver Manager is a library that manages communication between
applications and drivers. For example, on Microsoft Windows platforms, the Driver
Manager is a dynamic-link library (DLL) that is written by Microsoft and can be
redistributed by users of the redistributable MDAC 2.8 SP1 SDK.
The Driver Manager exists mainly as a convenience to application writers and
solves a number of problems common to all applications. These include determining
which driver to load based on a data source name, loading and unloading drivers, and
calling functions in drivers.
To see why the latter is a problem, consider what would happen if the application
called functions in the driver directly. Unless the application was linked directly to a
particular driver, it would have to build a table of pointers to the functions in that driver
and call those functions by pointer. Using the same code for more than one driver at a
time would add yet another level of complexity. The application would first have to set a
function pointer to point to the correct function in the correct driver, and then call the
function through that pointer.
The Driver Manager solves this problem by providing a single place to call each
function. The application is linked to the Driver Manager and calls ODBC functions in
the Driver Manager, not the driver. The application identifies the target driver and data
source with a connection handle. When it loads a driver, the Driver Manager builds a
table of pointers to the functions in that driver. It uses the connection handle passed by
the application to find the address of the function in the target driver and calls that
function by address.
For the most part, the Driver Manager just passes function calls from the
application to the correct driver. However, it also implements some functions
(SQLDataSources, SQLDrivers, and SQLGetFunctions) and performs basic error
checking. For example, the Driver Manager checks that handles are not null pointers, that
functions are called in the correct order, and that certain function arguments are valid.
The final major role of the Driver Manager is loading and unloading drivers. The
application loads and unloads only the Driver Manager. When it wants to use a particular
driver, it calls a connection function (SQLConnect, SQLDriverConnect, or
SQLBrowseConnect) in the Driver Manager and specifies the name of a particular data
source or driver, such as "Accounting" or "SQL Server." Using this name, the Driver
Manager searches the data source information for the driver's file name, such as
Sqlsrvr.dll. It then loads the driver (assuming it is not already loaded), stores the address
of each function in the driver, and calls the connection function in the driver, which then
initializes itself and connects to the data source.
When the application is done using the driver, it calls SQLDisconnect in the
Driver Manager. The Driver Manager calls this function in the driver, which disconnects
from the data source. However, the Driver Manager keeps the driver in memory in case
the application reconnects to it. It unloads the driver only when the application frees the
connection used by the driver or uses the connection for a different driver, and no other
connections use the driver.
DRIVERS:
Drivers are libraries that implement the functions in the ODBC API. Each is
specific to a particular DBMS; for example, a driver for Oracle cannot directly access
data in an Informix DBMS. Drivers expose the capabilities of the underlying DBMSs;
they are not required to implement capabilities not supported by the DBMS. For example,
if the underlying DBMS does not support outer joins, then neither should the driver. The
only major exception to this is that drivers for DBMSs that do not have stand-alone
database engines, such as Xbase, must implement a database engine that at least supports
a minimal amount of SQL.
This section contains the following topics:
Driver Tasks
Driver Architecture

Driver Tasks:
Specific tasks performed by the driver include the following:
Submitting SQL statements to the data source for execution. The driver
must modify ODBC SQL to DBMS-specific SQL; this is often limited to
replacing escape clauses defined by ODBC with DBMS-specific SQL.
Sending data to and retrieving data from the data source, including
converting data types as specified by the application.

Driver Architecture:
Driver architecture falls into two categories:
File-Based Drivers
DBMS-Based Drivers
File-Based Drivers:
File-based drivers are used with data sources such as dBASE that do not provide a
stand-alone database engine for the driver to use. These drivers access the physical data
directly and must implement a database engine to process SQL statements. As a standard
practice, the database engines in file-based drivers implement the subset of ODBC SQL
defined by the minimum SQL conformance level.
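To make "implementing a database engine inside the driver" concrete, here is a toy engine that handles only SELECT <column> FROM <table>. This is far below even the minimum SQL conformance level, and the table name and rows are invented, but it shows where the work goes when no stand-alone engine exists behind the driver:

```python
import re

# Toy "file-based driver" engine: with no real DBMS behind it, the driver
# itself must parse and execute the SQL. The rows are inlined here where a
# real driver would read them from dBASE/Xbase files on disk.

TABLES = {
    "customer": [{"name": "Ada", "city": "Paris"},
                 {"name": "Bob", "city": "Oslo"}],
}

def execute(sql):
    m = re.fullmatch(r"\s*SELECT\s+(\w+)\s+FROM\s+(\w+)\s*",
                     sql, re.IGNORECASE)
    if not m:
        raise ValueError("unsupported SQL: " + sql)
    col, table = m.group(1), m.group(2)
    return [row[col] for row in TABLES[table.lower()]]

print(execute("SELECT name FROM customer"))  # ['Ada', 'Bob']
```

Every feature beyond this single statement shape (WHERE clauses, joins, updates, transactions) would have to be written into the driver by hand, which is why the text calls file-based drivers harder to write and less powerful.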
In comparing file-based and DBMS-based drivers, file-based drivers are harder to
write because of the database engine component, less complicated to configure because
there are no network pieces, and less powerful because few people have the time to write
database engines as powerful as those produced by database companies.
The following illustration shows two different configurations of file-based
drivers, one in which the data resides locally and the other in which it resides on a
network file server.
DBMS-Based Drivers
DBMS-based drivers are used with data sources such as Oracle or SQL Server
that provide a stand-alone database engine for the driver to use. These drivers access the
physical data through the stand-alone engine; that is, they submit SQL statements to and
retrieve results from the engine.
Because DBMS-based drivers use an existing database engine, they are usually
easier to write than file-based drivers. Although a DBMS-based driver can be easily
implemented by translating ODBC calls to native API calls, this results in a slower driver.
A better way to implement a DBMS-based driver is to use the underlying data stream
protocol, which is usually what the native API does. For example, a SQL Server driver
should use TDS (the data stream protocol for SQL Server) rather than DB Library (the
native API for SQL Server). An exception to this rule is when ODBC is the native API.
For example, Watcom SQL is a stand-alone engine that resides on the same machine as
the application and is loaded directly as the driver.
DBMS-based drivers act as the client in a client/server configuration where the
data source acts as the server. In most cases, the client (driver) and server (data source)
reside on different machines, although both could reside on the same machine running a
multitasking operating system. A third possibility is a gateway, which sits between the
driver and data source. A gateway is a piece of software that causes one DBMS to look
like another. For example, applications written to use SQL Server can also access DB2
data through the Micro Decisionware DB2 Gateway; this product causes DB2 to look like
SQL Server.
The following illustration shows three different configurations of DBMS-based
drivers. In the first configuration, the driver and data source reside on the same machine.
In the second, the driver and data source reside on different machines. In the third, the
driver and data source reside on different machines and a gateway sits between them,
residing on yet another machine.
DATA SOURCES:
A data source is simply the source of the data. It can be a file, a particular
database on a DBMS, or even a live data feed. The data might be located on the same
computer as the program, or on another computer somewhere on the network. For
example, a data source might be an Oracle DBMS running on a remote server, a
collection of Xbase files in a server directory, or a local database file.

Database software can be divided roughly into two categories: desktop databases,
designed to run on personal computers and often lacking
full SQL and transaction support, and server databases, designed to run in a client/server
situation and characterized by a stand-alone database engine and rich SQL and
transaction support. Database also refers to a particular collection of data, such as a
collection of Xbase files in a directory or a database on SQL Server. It is generally
equivalent to the term catalog, used elsewhere in this manual, or the term qualifier in
earlier versions of ODBC.
Types of Data Sources:
There are two types of data sources: machine data sources and file data sources.
Although both contain similar information about the source of the data, they differ in the
way this information is stored. Because of these differences, they are used in somewhat
different manners.

Machine data sources are stored on the system and are local to that machine. File
data sources, by contrast, are stored in a text file with a .dsn extension. A file data source does not necessarily
have a data source name, as does a machine data source, and is not registered to any one
user or machine.
A file data source streamlines the connection process, because the .dsn file
contains the connection string that would otherwise have to be built for a call to the
SQLDriverConnect function. Another advantage of the .dsn file is that it can be copied
to any machine, so identical data sources can be used by many machines as long as they
have the appropriate driver installed. A file data source can also be shared by
applications. A shareable file data source can be placed on a network and used
simultaneously by multiple applications.
A .dsn file can also be unshareable. An unshareable .dsn file resides on a single
machine and points to a machine data source. Unshareable file data sources exist mainly
to allow the easy conversion of machine data sources to file data sources so that an
application can be designed to work solely with file data sources. When the Driver
Manager is sent the information in an unshareable file data source, it connects as
necessary to the machine data source that the .dsn file points to.
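A file data source is just a short INI-style text file. A sketch of reading the connection keywords out of a hypothetical .dsn follows; the keywords shown (DRIVER, SERVER, DATABASE) are typical ODBC connection-string keywords, but the values and server names are invented:

```python
import configparser
import io

# A file data source (.dsn) is an INI-style file whose [ODBC] section
# holds the connection keywords. The contents below are invented.
dsn_text = """\
[ODBC]
DRIVER = SQL Server
SERVER = accounting-host
DATABASE = ledger
"""

parser = configparser.ConfigParser()
parser.read_file(io.StringIO(dsn_text))

# Rebuild the connection string that SQLDriverConnect would receive.
# (configparser lowercases keys, so restore the conventional casing.)
conn_str = ";".join(f"{k.upper()}={v}" for k, v in parser["ODBC"].items())
print(conn_str)  # DRIVER=SQL Server;SERVER=accounting-host;DATABASE=ledger
```

Because everything needed to connect lives in this one portable file, copying it to another machine with the same driver installed reproduces the data source there, which is exactly the sharing behavior described above.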
BASE SERVICES:
It should be apparent from the previous description that server programs exhibit a
high level of concurrency. Ideally, a separate task will be assigned to each of the clients
the server is designed to concurrently support. Task management is best done by a
multitasking operating system. Multitasking is the natural way to simplify the coding of
complex applications that can be divided into a collection of discrete and logically
distinct, concurrent tasks. It improves the performance, throughput, modularity, and
responsiveness of server programs. Multitasking also implies the existence of
mechanisms for intertask coordination and information exchanges.
Servers also require a high level of concurrency within a single program. Server
code will run more efficiently if tasks are allocated to parts of the same program rather
than to separate programs (these tasks are called coroutines or threads). Tasks within the
same program are faster to create, faster to context switch, and have easier access to
shared information. The below figure shows the type of support that servers require from
their operating system. The following are the server operating system requirements.
Dynamically Linked Run-Time Extension: The operating system services should be
extendable. A mechanism must be provided to allow services to grow at run time
without recompiling the operating system.
Extended Services:
Extended services provide the advanced system software that exploits the
distributed potential of networks, provide flexible access to shared information, and make
the system easier to manage and maintain. They also make it easier for independent
software vendors (ISVs) and system integrators to create new server applications. The
following figure shows some of the extended services server programs can expect from
their operating system. We will go over these expectations, starting from the bottom layer
and working our way up. Some of these expectations read more like wish lists. They will
eventually find their way into most operating systems.
Binary Large Objects (BLOBs). Images, video, graphics, intelligent documents, and database snapshots are about to test the capabilities of our operating
systems, databases, and networks. These large objects (affectionately called
BLOBs) require operating system extensions such as intelligent message streams
and object representation formats. Networks must be prepared to move and
transport these large BLOBs at astronomical speeds. Databases and file systems
must be prepared to store those BLOBs and provide access to them. Protocols are
needed for the exchange of BLOBs across systems and for associating BLOBs
with programs that know what to do when they see one.
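A minimal sketch of a database storing and serving a BLOB, using an in-memory SQLite table; the "image" bytes and the file name are fabricated:

```python
import sqlite3

# Storing and retrieving a BLOB through a database, as the text says
# databases and file systems must support. The "image" bytes are fake.
blob = bytes(range(256)) * 4                 # stand-in for image/video data

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE assets (name TEXT PRIMARY KEY, data BLOB)")
conn.execute("INSERT INTO assets VALUES (?, ?)", ("logo.png", blob))

(stored,) = conn.execute(
    "SELECT data FROM assets WHERE name = ?", ("logo.png",)
).fetchone()
print(len(stored), stored == blob)
```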
Network Time. The operating system extensions must provide a mechanism for
clients and servers to synchronize their clocks. This time should be coordinated
with some universal time authority.
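Clock synchronization between a client and a time server typically rests on a four-timestamp offset estimate, as in the NTP family of protocols. The timestamp values below are invented for illustration:

```python
# Estimating clock offset between client and time server from four
# timestamps, as NTP-style protocols do. The values are invented.
t0 = 10.000   # client sends request   (client clock)
t1 = 10.150   # server receives it     (server clock)
t2 = 10.152   # server sends reply     (server clock)
t3 = 10.062   # client receives reply  (client clock)

offset = ((t1 - t0) + (t2 - t3)) / 2   # estimated server-minus-client skew
delay = (t3 - t0) - (t2 - t1)          # round-trip network delay
print(offset, delay)
```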
Transaction Services. The operating system must provide a Transaction Processing Monitor (TP Monitor) for managing stored procedures (or
transactions) as atomic units of work that execute on one or more servers.
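The atomic unit-of-work idea can be sketched against a local database; a real TP Monitor would coordinate the same guarantee across several servers. The account names and amounts are made up:

```python
import sqlite3

# An atomic unit of work in miniature: both updates commit together or
# not at all. A TP Monitor extends this guarantee across servers.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (name TEXT PRIMARY KEY, balance INT)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

def transfer(amount: int) -> None:
    try:
        with conn:                   # one transaction: commit or rollback
            conn.execute("UPDATE accounts SET balance = balance - ? "
                         "WHERE name = 'alice'", (amount,))
            if amount > 100:
                raise ValueError("insufficient funds")   # forces rollback
            conn.execute("UPDATE accounts SET balance = balance + ? "
                         "WHERE name = 'bob'", (amount,))
    except ValueError:
        pass

transfer(500)                        # fails: neither update is applied
transfer(40)                         # succeeds: both updates are applied
balances = dict(conn.execute("SELECT name, balance FROM accounts"))
print(balances)
```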
Object-Oriented Services. This is an area where extended services will
flourish for a long time to come. Services are becoming more object-oriented. The
operating system will provide object broker services that allow any object to
interact with any other object across the network. The operating system must also
provide object interchange services and object repositories. Client/server
applications of the future will be between communicating objects (in addition to
communicating processes).
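A toy, in-process sketch of broker-mediated object interaction, with invented class and registry names; a real ORB would marshal the call across the network instead of dispatching locally:

```python
# A toy object broker: objects register under a name, and any client can
# invoke a method on any registered object through the broker without
# holding a direct reference. Names here are made up.
class Broker:
    def __init__(self):
        self._registry = {}

    def register(self, name, obj):
        self._registry[name] = obj

    def invoke(self, name, method, *args):
        # A real ORB would marshal this call across the network.
        return getattr(self._registry[name], method)(*args)

class Thermostat:
    def __init__(self):
        self.setting = 20
    def set_target(self, degrees):
        self.setting = degrees
        return self.setting

broker = Broker()
broker.register("office/thermostat", Thermostat())
result = broker.invoke("office/thermostat", "set_target", 23)
print(result)
```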
The "extended" in extended services really does mean extended. It covers the universe of current and future
services needed to create distributed client/server environments. No current operating
system bundles all the extended functions, but they're moving in that direction. You can
purchase most functions a la carte from more than one vendor.
SERVER SCALABILITY:
What are the upper limits of servers? The limits really depend on the type of
service required by their clients. One safe rule is that clients will always want more
services, so server scalability is a recurring concern. The following diagram shows the
different levels of escalation in server power. It starts with a single PC server that
reaches its limits with the top-of-the-line processor and I/O power. The next level of
server power is provided by superservers populated with multiprocessors. If that is not
enough power, the client/server model allows you to divide the work among different
servers. These multiservers know no upper limits to power. But they must know how to
work together.
Multiservers (or clusters) are used in environments that require more processing
power than a single server system, either SMP or uniprocessor, can provide. The
client/server model is upwardly scalable. When you need more processing power, you
can add more servers (thus creating a pool of servers). Or, the existing server machine
can be traded up to the latest generation of PC superserver machine. Multiservers
remove any upward limits to the growth of server power. Ordinary servers can provide
this power by working in all kinds of ensembles. For example, network operating system
extensions like the Distributed Computing Environment (DCE), and TP Monitors like
CICS, Encina, and Tuxedo provide the plumbing needed to create cooperating server
ensembles. Eventually, ORBs will also play in this arena.
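The divide-the-work idea behind multiservers can be sketched as round-robin dispatch over a server pool; adding a server raises capacity without changing the dispatch logic. The server names are illustrative:

```python
from itertools import cycle

# Dividing work among a pool of servers, the multiserver idea in
# miniature: growing the pool raises capacity without changing the
# dispatch logic. Server names are illustrative.
def dispatch(requests, servers):
    assignment = {}
    ring = cycle(servers)               # simple round-robin balancing
    for req in requests:
        assignment.setdefault(next(ring), []).append(req)
    return assignment

requests = [f"req-{i}" for i in range(6)]
print(dispatch(requests, ["srv-a", "srv-b"]))
print(dispatch(requests, ["srv-a", "srv-b", "srv-c"]))   # pool grows
```

Real multiserver ensembles add what this sketch omits: shared state, failure handling, and the coordination plumbing that DCE and TP Monitors supply.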
(Figure: escalation in server power, from a PC server to an asymmetric multiprocessing superserver to a symmetric multiprocessing superserver.)