
INDEX

1. Abstract
2. Client Server Architecture
2.1 Evolution
2.2 Evolution of Architectures
2.2.1 Mainframe architecture
2.2.2 File sharing architecture
2.2.3 Client/server architecture
2.3 Client Server Architecture Models
2.3.1 Fat vs. Thin
2.3.2 Two-tier and Three-tier Architectures
2.4 Characteristics
2.5 General Issues in Client-Server Computing
3. Distributed System Architecture
3.1 Remote Procedure Call (RPC)
3.2 Object Management Architecture (OMA)
3.3 Distributed Resource Architecture
3.3.1 Distributed Data Architecture
3.3.2 Distributed Server Architecture
3.3.3 Distributed Computing Architecture
4. Distributed Architecture Requirements
5. Conclusions
6. Bibliography
1. ABSTRACT

The evolution of networking technology has necessitated simpler and
more efficient technologies for sharing resources such as data, hardware,
and computing power. Two technologies that enabled this are the
client/server and distributed systems architectures. The client/server
architecture, introduced in the early 1970s, became a subset of the newer
and emerging distributed systems architecture. The following pages trace
this evolutionary path while discussing the technologies involved, their
advantages, limitations, applications, and future scope. They also discuss
the software engineering concepts applied to these technologies.

2. Client Server Architecture

2.1 Evolution:
The term client/server was first used in the 1980s in reference to
personal computers (PCs) on a network. The client/server model itself
started gaining acceptance in the late 1980s. The client/server software
architecture is a versatile, message-based, and modular infrastructure
intended to improve usability, flexibility, interoperability, and scalability
compared with centralized, mainframe, time-sharing computing.

A client is defined as a requester of services and a server as a
provider of services. A single machine can be both a client and a server,
depending on the software configuration.
2.2 Evolution of Architectures:

2.2.1 Mainframe architecture: With mainframe software
architectures, all intelligence resides in the central host computer. Users
interact with the host through a terminal that captures keystrokes and
sends that information to the host. Mainframe software architectures are
not tied to a hardware platform; user interaction can be done through PCs
and UNIX workstations. A limitation of mainframe software architectures is
that they do not easily support graphical user interfaces or access to
multiple databases from geographically dispersed sites. In recent years,
mainframes have found a new use as servers in distributed client/server
architectures.

2.2.2 File sharing architecture: The original PC networks were
based on file sharing architectures, in which the server downloads files
from the shared location to the desktop environment. The requested user
job is then run (including logic and data) in the desktop environment. File
sharing architectures work when shared usage, update contention, and the
volume of data to be transferred are all low. In the 1990s, PC LAN (local
area network) computing changed: the capacity of file sharing was strained
as the number of online users grew (it can only satisfy about 12 users
simultaneously), and graphical user interfaces (GUIs) became popular
(making mainframe and terminal displays appear out of date). PCs are now
used in client/server architectures.

2.2.3 Client/server architecture: The client/server architecture
emerged as a result of the limitations of file sharing architectures. The
client is the software component that requests a service; the server is the
component that provides it. In a file sharing network, for example, the
workstation contains the client and the file server contains the server. The
service is file access, and includes read and write access as well as file
locking. Communication between the client and server occurs over the
network. Software that carries the communication between client and
server is sometimes called middleware. Client/server does not necessarily
imply a network: a client/server architecture can exist on a single machine.
An example of this in Windows is the Print Manager, a server that exists on
most Windows computers. Client/server does imply a many-to-one
relationship between clients and a server: one server offers specialized
services to multiple clients. In the Print Manager example, the clients are
all applications that perform printing.
A client/server network is a network that uses a client/server architecture,
with servers existing on specialized, usually dedicated, computers. The
servers run in a software environment that is specially designed to support the
needs of the server component.
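The request/reply pattern described above can be sketched in a few lines of Python. This is a minimal illustration, not tied to any real product: the "server" answers a single request over a local TCP socket and the "client" blocks until the reply arrives; the echo service and message are invented for the example.

```python
import socket
import threading

def serve_once(sock):
    # Server component: accept one connection, read one request,
    # provide the service (here, a trivial echo), and reply.
    conn, _ = sock.accept()
    with conn:
        request = conn.recv(1024).decode()
        conn.sendall(("echo: " + request).encode())

def request_service(port, message):
    # Client component: connect to the server, send a request,
    # and wait for the reply over the network connection.
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(message.encode())
        return c.recv(1024).decode()

# Bind to port 0 so the OS picks any free port.
server_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_sock.bind(("127.0.0.1", 0))
server_sock.listen(1)
port = server_sock.getsockname()[1]

t = threading.Thread(target=serve_once, args=(server_sock,))
t.start()
reply = request_service(port, "hello")
t.join()
server_sock.close()
```

Because both roles are just software components, the same machine runs the client and the server here, which mirrors the point that client/server does not necessarily imply a network.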

2.3 Client Server Architecture Models:

Client/server systems can also be classified by the amount of
processing that happens on the client and server machines and by the
method of processing.

2.3.1 Fat vs. Thin (classification by processing location):
Clients and servers may be "fat" or "thin". Fat clients use up space
on each machine they run on and may have complex installation
requirements (e.g. Netscape 3.0). Thin clients have reduced functionality
but are easier to manage; the network computer model favours thin clients.
A fat client is a client that contains a lot of software and does a lot of
processing. An example of a fat client is a workstation running Microsoft
Word, editing a document that is stored on a file server. Although a server
is involved, most of the software and processing reside on the client
workstation.
A fat server is a server that runs a lot of software and does a lot of
processing. A classic mainframe application is an example of a fat server:
almost all the processing occurs on the server, while the client simply
displays data and accepts user input. (In mainframe applications, even
input editing is done on the server.)
2.3.2 Two-tier and Three-tier Architectures (classification by
processing method):
While "fat" and "thin" refer to how much occurs on the client or
server, "two-tier" and "three-tier" refer more specifically to how
functionality is split.
In a two-tier architecture, software functionality is split into two
components, one of which runs on the client and the other on the server.
The two-tier client/server architecture is a good solution for distributed
computing when work groups of a dozen to 100 people interact on a LAN
simultaneously. It does, however, have a number of limitations. When the
number of users exceeds 100, performance begins to deteriorate, because
the server maintains a connection via "keep-alive" messages with each
client even when no work is being done. A second limitation is that
implementing processing management services with vendor-proprietary
database procedures restricts flexibility and the choice of DBMS for
applications. Finally, current implementations of the two-tier architecture
provide limited flexibility in moving (repartitioning) program functionality
from one server to another without manually regenerating procedural
code.

In a three-tier architecture, the software functionality is split into
three components: the user interface, the application logic, and resource
(data) management. The phrase "three-tier" has had several different
meanings. It has been used to describe partitioning an application between
(1) a PC client, (2) a departmental server, and (3) an enterprise server, and
also partitioning between (1) a client component, (2) a local database, and
(3) an enterprise database. The most basic type of three-tier architecture
has a middle layer consisting of Transaction Processing (TP) monitor
technology. The TP monitor is a message queuing, transaction scheduling,
and prioritization service: the client connects to the TP monitor (middle tier)
instead of the database server. The transaction is accepted by the monitor,
which queues it and then takes responsibility for managing it to completion,
thus freeing up the client. When this capability is provided by third-party
middleware vendors it is referred to as "TP Heavy" because it can service
thousands of users. When it is embedded in the DBMS (and so could be
considered a two-tier architecture), it is referred to as "TP Lite" because
experience has shown performance degradation when more than 100
clients are connected. TP monitor technology also provides

• the ability to update multiple different DBMSs in a single transaction
• connectivity to a variety of data sources, including flat files,
non-relational DBMSs, and the mainframe
• the ability to attach priorities to transactions
• robust security

Using a three-tier client/server architecture with TP monitor technology
results in an environment that is considerably more scalable than a two-tier
architecture with direct client-to-server connections.
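The queuing and prioritization role of a TP monitor can be sketched as follows. This is a toy model, not a real TP monitor: the transaction IDs, the priority scale (lower number = more urgent), and the stand-in worker function are all invented for the illustration.

```python
import queue

class TPMonitor:
    """Toy middle tier: accepts transactions, queues them by priority,
    and takes responsibility for running each to completion."""
    def __init__(self, worker):
        self.q = queue.PriorityQueue()  # lowest priority number served first
        self.results = {}
        self.worker = worker            # stand-in for the resource tier

    def submit(self, txn_id, payload, priority=5):
        # The client hands the transaction to the monitor and is
        # freed immediately; the monitor now owns the transaction.
        self.q.put((priority, txn_id, payload))

    def run_all(self):
        # The monitor drains the queue in priority order and records
        # each result, managing every transaction to completion.
        while not self.q.empty():
            _, txn_id, payload = self.q.get()
            self.results[txn_id] = self.worker(payload)

monitor = TPMonitor(worker=lambda amount: amount * 2)
monitor.submit("t1", 10, priority=9)
monitor.submit("t2", 5, priority=1)   # urgent: processed first
monitor.run_all()
```

Note how the client's only job is `submit`; scheduling, prioritization, and completion are the monitor's responsibility, which is what decouples the client from the database server in the three-tier design.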

2.4 Characteristics
Client/server is a software model of computing, not a hardware
definition. Because the client/server environment is typically heterogeneous,
the hardware platform and operating system of the client and server are not
usually the same. In such cases, the communications mechanism may be
further extended through a well-defined set of standard application program
interfaces (APIs) and remote procedure calls (RPCs). This architecture has
the following characteristics.

• A client-user relies on the desktop workstation for all computing
needs. Whether the application runs totally on the desktop or uses
services provided by one or more servers (be they powerful PCs or
mainframes) is irrelevant.
• Effective client/server computing is fundamentally platform-
independent. The user of an application wants the business
functionality it provides; the computing platform provides access to this
business functionality. There is no benefit, yet considerable risk, in
exposing this platform to its user.
• Changes in platform and underlying technology should be transparent
to the user. Training costs, business processing delays and errors, staff
frustration, and staff turnover result from the confusion generated by
changes in environments where the user is sensitive to the technology
platform.

2.5 General Issues in Client-Server Computing

• Wise use of existing investments
Successful client/server solutions integrate with existing
applications and provide a gradual migration to new platforms and
business models.
• Connectivity and management of distributed data resources
Information must be made available to the data creators and
maintainers by providing connectivity and distributed management of
enterprise databases and applications. The technology of client/server
computing should support the movement of information processing to the
direct creators and users of information.
• Online Transaction Processing (OLTP)
OLTP applications have traditionally been used in insurance,
financial, government, and sales-related organizations. These applications
are characterized by their need for highly reliable platforms that guarantee
that transactions will be handled correctly, no data will be lost, response
times will be extremely low (less than three seconds is a good rule of
thumb), and only authorized users will have access to an application. The
IS industry understands OLTP on traditional mainframe-centered platforms
but not yet on distributed client/server platforms.
Other considerations are:
• Systems administration
• Availability
• Reliability
• Serviceability
• Software distribution
• Performance
• Network management
• Remote systems management
• Security
There are a number of tradeoffs to be made in selecting the
appropriate client/server architecture. These include business strategic
planning, potential growth in the number of users, cost, and the
homogeneity of the current and future computational environment.

3. Distributed System Architecture

"A distributed system is a system designed to support the development of
applications and services which can exploit a physical architecture
consisting of multiple, autonomous processing elements that do not share
primary memory but cooperate by sending asynchronous messages over
a communication network." – Blair & Stefani

In any system, two aspects may be distributed: (a) components of the
application running on the system, and (b) system resources. Client/server
technology thus becomes a general case of (a), which may be rephrased
as distributed component computing or distributed object computing.
These are basically client/server models with middleware; the various
types of middleware are described below.

3.1 Remote Procedure Call (RPC):

This has the following characteristics:
a. It uses the well-known procedure call semantics.
b. The caller makes a procedure call and then waits. If it is a local
procedure call, it is handled normally; if it is a remote procedure, it is
handled as a remote procedure call.
c. Caller semantics is a blocked send; callee semantics is a blocked
receive to get the parameters and a non-blocked send at the end to
transmit results.
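These call semantics can be demonstrated with Python's standard-library XML-RPC modules, one common RPC implementation. In this sketch the exposed procedure name `add` and the loopback address are chosen for the example; the client-side proxy call looks exactly like a local procedure call and blocks until the result returns.

```python
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy
import threading

# Callee side: register an ordinary procedure with an RPC server.
# Binding to port 0 lets the OS pick any free port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
port = server.server_address[1]

# The server blocks in a receive until a request (the parameters)
# arrives, runs the procedure, then sends the results back.
threading.Thread(target=server.handle_request, daemon=True).start()

# Caller side: the proxy makes the remote call look local.
# The caller performs a blocked send and waits for the result.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.add(2, 3)

server.server_close()
```

The marshalling of parameters and results over the network is handled entirely by the middleware, which is exactly what hides the remote/local distinction from the caller.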

3.2 Object Management Architecture (OMA):

This is an object-oriented architecture consisting of the following modules:
• Object Request Broker: directs requests and answers between objects
• Object Services: basic functions for object management (e.g., a name
service)
• Common Facilities: generic object-oriented tools for various
applications (e.g., a class browser)
• Application Objects: classes specific to an application domain (e.g., a
CASE tool)
Here we may define an object and a request as follows: an object is an
abstraction with a state and a set of operations; a request is an operation
call with one or more parameters, any of which may identify an object
(multi-targeting). Arguments and results are passed by value.
The following are the features of CORBA:
• a communications substrate
• a specific programmer interface (no implementation)
• multi-vendor ORBs that interoperate (CORBA 2)
• layering on top of other communication substrates (RPC, byte
streams, IPC, ...)
• language mappings (C, C++, Ada, Smalltalk, Java, COBOL)
• an ORB interface
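The broker's role of directing requests between objects can be sketched in Python. This is only a conceptual toy, far simpler than a real ORB: the registry doubles as a name service, and the `Account` class, object name, and operations are invented for the example.

```python
class ObjectRequestBroker:
    """Toy ORB: directs requests (operation calls with arguments,
    passed by value) to registered objects and returns their answers."""
    def __init__(self):
        self.registry = {}            # object name -> implementation

    def register(self, name, obj):
        self.registry[name] = obj     # plays the role of a name service

    def request(self, target, operation, *args):
        obj = self.registry[target]            # locate the target object
        return getattr(obj, operation)(*args)  # invoke, return by value

class Account:
    # An "application object": an abstraction with a state
    # and a set of operations.
    def __init__(self, balance):
        self.balance = balance
    def deposit(self, amount):
        self.balance += amount
        return self.balance

orb = ObjectRequestBroker()
orb.register("acct-1", Account(100))
new_balance = orb.request("acct-1", "deposit", 25)
```

The client never holds a direct reference to the `Account` object; it names the target and the operation, and the broker does the routing, which is the essential OMA idea.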
3.3 Distributed Resource Architecture:
The term distributed can be applied to any resource that can be
shared; this includes data, services, hardware, and computing power. Thus
there are three types of distributed architectures:
1. Distributed data architecture
2. Distributed server architecture
3. Distributed computing architecture

3.3.1 Distributed Data Architecture:

This consists of distributed file systems and distributed databases.
Distributed file systems give the application an integrated logical view of
data spread over several computer systems. They use the operating
system's native system call mechanisms, including RPC, to achieve this
effect. An example is the Coda file system, a state-of-the-art experimental
system developed at CMU. It has been ported to Linux, NetBSD, and
FreeBSD. A large portion of Coda has also been ported to Windows 95,
and efforts are being made to assess the feasibility of porting it to NT.
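The "integrated logical view" idea can be illustrated with a toy name-resolution step: one logical namespace is mapped onto whichever system actually stores each subtree. The mount points and server names below are invented, and real distributed file systems such as Coda do far more (caching, replication, disconnection handling); this shows only the naming aspect.

```python
# Toy mount table: logical path prefix -> storage location.
mount_table = {
    "/home":    "serverA",
    "/project": "serverB",
}

def resolve(path):
    # Pick the longest matching mount prefix, the way a distributed
    # file system maps a logical path onto the system holding it.
    best = max((p for p in mount_table if path.startswith(p)), key=len)
    return mount_table[best], path[len(best):] or "/"
```

An application only ever sees paths like `/project/report.txt`; which machine serves the bytes is decided by resolution, not by the caller.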

Distributed Database: This is a set of databases stored on multiple
computers at different locations that appears to the user as a single
database. The computers are in a network environment, and the user (or
application) can access, update, or modify the data in several databases
through the network. The locations of the distributed database may be
spread over a large area around the world, or over a small area such as
one building.

One of the major objectives of a distributed database is to provide users
with easy access to data at many different locations. Several reasons
encourage the use of a distributed database:
• Distribution and autonomy of business units: divisions, departments,
and facilities of modern organizations are geographically distributed.
• Data sharing: data sharing will always be there, so it must be
convenient and consolidated.
• Data communications costs and reliability: transferring large amounts
of data across the network can be very costly and affects network
performance; dependence on data communication can also be risky.
An example of a distributed database is Oracle; this concept is described
below.
Oracle Client/Server Concept: The software that manages the
database is called the database server, and the application that requests
information from that server is the client (or a node). A client can connect
to the database server either directly or indirectly. For example, client A
can connect to server B directly and to server C indirectly through server
B; server C is then the remote site for the data.
Overview of the Oracle Distributed Database System:
The network connection: Net8 is Oracle's network software that
provides inter-database communication across the network. It connects
clients and servers through the network in a distributed database system.
Net8 performs all its operations independently of the network operating
system (NOS).
Database replication: This is the process of storing a copy of the
database at each location of the distributed database system. It has
several advantages:
• Reliability: if one site containing the database fails, a copy can always
be accessed at another site.
• Fast response time: each site has a local copy of the database, so
queries can be executed faster.
• Node decoupling: transactions may proceed without coordination
across the network.
• Improved performance, by minimizing network traffic at prime time.
The disadvantages are:
• Storage requirements: each site must have the storage capacity to
hold a copy of the database.
• Complexity and cost of updating: when the database is updated, all
sites must be updated.
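The trade-off above can be made concrete with a toy in-memory model, assuming full replication and synchronous updates (real replication schemes also handle conflicts and partial failure); the site names and data are invented for the example.

```python
class ReplicatedStore:
    """Toy replicated database: every site keeps a full copy.
    Reads are local and fast; writes must touch every site."""
    def __init__(self, sites):
        self.copies = {site: {} for site in sites}

    def write(self, key, value):
        # Complexity and cost of updating: every site's copy
        # must be updated for each write.
        for copy in self.copies.values():
            copy[key] = value

    def read(self, site, key):
        # Fast response and node decoupling: each site answers
        # from its local copy, with no network coordination.
        return self.copies[site][key]

store = ReplicatedStore(["london", "tokyo"])
store.write("rate", 1.25)
local = store.read("tokyo", "rate")
```

The asymmetry is visible directly in the code: `read` touches one dictionary, `write` touches all of them, which is why replication favours read-heavy workloads.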

Heterogeneous Distributed Database

When one of the database systems is not an Oracle database system, the
result is called a heterogeneous distributed database system. Oracle uses
a component within the software called Heterogeneous Services to
provide the common architecture and administrative tools for this type of
distributed database. Communication between the non-Oracle system and
Heterogeneous Services is handled by Oracle software called Oracle
Gateway.

3.3.2 Distributed Server Architecture:

Modern networking services require high availability (HA) and high
reliability; these may be provided by system-level distributed servers such
as Piranha and the Linux Virtual Server architecture, described below.

Piranha is strictly a software HA implementation. Piranha is the name
of the cluster package, and also of the GUI administrative tool that serves
as the interface to the entire cluster system. The clustering system is quite
modular in nature and can be completely configured and run from a
text-mode, command-line interface. Requests for service from a Piranha
cluster are sent to a virtual server address: a unique triple consisting of a
protocol, a floating IP address, and a port number. Depending on the role it
performs, a computer in a Piranha cluster is either a router or a real server.
A router receives job requests and redirects them to the real servers that
actually process the requests. This architecture thus ensures fail-over and
hence reliability.
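The router's behaviour can be sketched as follows. This is a simplified model of the idea, not of Piranha itself: the virtual address triple, the server names, and the round-robin-with-failover policy are assumptions made for the illustration.

```python
import itertools

class VirtualServerRouter:
    """Toy router: a virtual server address (protocol, floating IP,
    port) fronts a pool of real servers; requests are redirected
    round-robin, skipping servers marked as failed (fail-over)."""
    def __init__(self, virtual_address, real_servers):
        self.virtual_address = virtual_address  # e.g. ("tcp", "10.0.0.1", 80)
        self.real_servers = real_servers
        self.alive = set(real_servers)
        self._cycle = itertools.cycle(real_servers)

    def mark_down(self, server):
        self.alive.discard(server)   # health check noticed a failure

    def route(self):
        # Redirect the request to the next live real server.
        for _ in range(len(self.real_servers)):
            server = next(self._cycle)
            if server in self.alive:
                return server
        raise RuntimeError("no real servers available")

router = VirtualServerRouter(("tcp", "10.0.0.1", 80),
                             ["rs1", "rs2", "rs3"])
first, second = router.route(), router.route()
router.mark_down("rs3")
third = router.route()   # the failed rs3 is skipped
```

Clients only ever see the virtual address; which real server handles a given request, and what happens when one fails, is entirely the router's decision.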

3.3.3 Distributed Computing Architecture:

The complexity and size of software are increasing at a rapid rate,
which increases build times and execution times. Cluster computing is
proving to be an effective and economical way to reduce them. Currently,
most available cluster computing software tools achieve load balancing by
process migration schemes; they share processing power, memory, and
network resources. Software such as MOSIX enables this, and this system
is described below.

MOSIX is software specifically designed to enhance the Linux
kernel with cluster computing capabilities. It is a tool consisting of
kernel-level resource sharing algorithms that are geared for performance
scalability in a cluster computer. MOSIX supports resource sharing by
dynamic process migration. It relieves the user of the responsibility of
allocating processes to nodes by distributing the workloads dynamically.
The resource sharing algorithm of MOSIX attempts to reduce the load
differences between pairs of nodes (systems in the cluster) by migrating
processes from more loaded nodes to less loaded nodes. This is done in a
decentralized manner, i.e. all nodes execute the same algorithms and
each node performs the reduction of loads independently. MOSIX
considers only the balancing of processor loads, and responds to changes
in them as long as there is no extreme shortage of other resources such
as free memory and empty process slots.
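The pairwise load-reduction idea can be sketched numerically. This is only a caricature of such an algorithm, not MOSIX's actual one: the node names, initial loads, fixed pairing order, and one-unit migration step are all assumptions made for the illustration.

```python
def balance_pair(loads, here, peer):
    """One decentralized step of load reduction: if this node is
    noticeably more loaded than its peer, migrate one unit of work
    (one process) toward the peer."""
    if loads[here] > loads[peer] + 1:   # only act on a real imbalance
        loads[here] -= 1
        loads[peer] += 1

# Each node would run the same algorithm independently against
# peers; here we simply iterate a few rounds over fixed pairs.
loads = {"n1": 8, "n2": 2, "n3": 5}
for _ in range(10):
    balance_pair(loads, "n1", "n2")
    balance_pair(loads, "n1", "n3")
    balance_pair(loads, "n3", "n2")
```

No node needs a global view: each pairwise step only compares two local load values, yet repeated steps drive the whole cluster toward an even distribution.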

4. Distributed Architecture Requirements

The following are the requirements of a distributed architecture:
1. Enable client application transparency
2. Enable server application transparency
3. Support dynamic client operation request patterns
4. Maximize scalability and equalize dynamic load distribution
5. Increase system dependability
6. Support administrative tasks
7. Incur minimal overhead
8. Provide interoperability and portability
9. Support application-defined load metrics and balancing policies

5. CONCLUSION

Client-server architecture, distributed object architectures, and
distributed resource architectures are not independent but interrelated
concepts; this can be seen in systems such as the Coda file system, which
uses the OS-level RPC mechanism. Newer versions of CORBA are being
released, the latest being CORBA 3, and current versions of MOSIX are
focusing on thread migration and interoperability between various
operating systems. Other products, such as the Globus Toolkit, focus on
standardizing some distributed service protocols such as FTP.
Development in distributed systems is thus growing, while client-server
architecture is taking on newer dimensions.
