
Middleware Technologies

MOM vs. RPC
| Feature | MOM | RPC |
| --- | --- | --- |
| Metaphor | Post-office-like | Telephone-like |
| Client/server time relationship | Asynchronous. Clients and servers may operate at different times and speeds. | Synchronous. Clients and servers must run concurrently; servers must keep up with clients. |
| Client/server sequencing | No fixed sequence. | Servers must first come up before clients can talk to them. |
| Style | Queued | Call-return |
| Partners need to be available | No | Yes |
| Load balancing | A single queue can be used to implement a FIFO or priority-based policy. | Requires a separate TP Monitor. |
| Transactional support | Yes (some products). The message queue can participate in the commit synchronization. | No. Requires a transactional RPC. |
| Message filtering | Yes | No |
| Performance | Slow. An intermediate hop is required. | Fast |
| Asynchronous processing | Yes. Queues and triggers are required. | Limited. Requires threads and tricky code for managing them. |

Messaging and Queuing: The MOM Middleware


MOM is a key piece of middleware that is absolutely essential for a class of client/server products. If the application can tolerate a certain level of time-independent responses, MOM provides the easiest path for creating enterprise and inter-enterprise client/server systems. For example, a mobile client can accumulate outgoing transactions in a queue and do a bulk upload when a connection can be established with the office server. MOM allows general-purpose messages to be exchanged in a client/server system using message queues. Applications communicate over networks by simply putting messages in queues and getting messages from queues.

MOM hides all the nasty communications plumbing from applications and typically provides a very simple, high-level API to its services. A MOM consortium was formed in mid-1993 with the goal of creating standards for messaging middleware. Its members are product providers, including:
o IBM MQSeries
o Covia Communications Integrator
o Peerlogic PIPES
o Horizon Strategies Message Express
o System Strategies ezBridge
MOM's messaging and queuing allow clients and servers to communicate across a network without being linked by a private, dedicated, logical connection. The clients and servers can run at different times. Everybody communicates by putting messages on queues and by taking messages from queues.

MOM products provide their own NOS services, including hierarchical naming, security, and a layer that isolates applications from the network. They use virtual memory on the local OS to create their queues. Most messaging products allow the sender to specify the name of the reply queue. The products also include some type of format field that tells the recipient how to interpret the message data. MOM-enabled programs do not talk to each other directly, so either program can be busy, unavailable, or simply not running at the same time. The target program can even be started several hours later.

Many clients can send requests to one server queue. The messages are picked off the queue by multiple instances of the server program that are concurrently servicing the clients. The server instances can take messages off the queue either on a FIFO basis or according to some priority or load-balancing scheme; the message queue can be concurrently accessed. The servers can also use message filters to throw away the messages they don't want to process, or pass them on to other servers. MOM products provide persistent (logged on disk) and non-persistent (in memory) message queues. Persistent messages are slower, but they can be recovered after a system restart in case of power failures. A message queue can be local to the machine or remote. A minimal sketch of this single-queue, multiple-server model follows.
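Here is a minimal sketch of that model, using Python's standard `queue` and `threading` modules as a stand-in for a real MOM product; an actual product (e.g., MQSeries) would provide a persistent, networked queue rather than an in-process object, and all names here are illustrative.

```python
import queue
import threading

# One shared request queue; in a real MOM product this would be a
# persistent, networked queue, not an in-process object.
requests = queue.Queue()

def server_instance(worker_id):
    # Multiple server instances concurrently take messages off the
    # same queue (FIFO here), which gives load balancing for free.
    while True:
        msg = requests.get()
        if msg is None:          # shutdown signal
            break
        print(f"server {worker_id} processed: {msg}")
        requests.task_done()

workers = [threading.Thread(target=server_instance, args=(i,)) for i in range(3)]
for w in workers:
    w.start()

# Clients simply put messages on the queue; they never talk to a
# server directly and need not run at the same time as one.
for n in range(6):
    requests.put(f"request {n}")

requests.join()                  # wait until all messages are consumed
for _ in workers:
    requests.put(None)
for w in workers:
    w.join()
```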

System administrators can usually specify the number of messages a queue can hold and the maximum message size.

Remote Procedure Call


RPCs are not procedure calls at all; they are really process invocations. The invoked program runs across the wire in a different resource domain.

A client process calls a function on a remote server and suspends itself until it gets back the results. Parameters are passed as in any ordinary procedure call. The RPC, like an ordinary procedure call, is synchronous: the process (or thread) that issues the call waits until it gets the results. Under the covers, the RPC run-time software collects values for the parameters, forms a message, and sends it to the remote server. The server receives the request, unpacks the parameters, calls the procedure, and sends the reply back to the client. While RPCs make life easier for the programmer, they pose a challenge for the NOS designers who supply the development tools and run-time environments. The common issues are:

How are the Server functions located and started?


The server starts a process when a remote invocation is received, runs the requested function with the supplied parameters, and returns the response to the client. What happens when multiple clients invoke the same function? An environment is then needed to start and stop servers, prioritize requests, perform security checks, and provide some form of load balancing. Each incoming request could invoke a new thread on the server side, but it is better to create a server loop that manages a pool of threads waiting for work rather than create a thread for each incoming request (see the sketch below). TP Monitors, which provide more functions than a NOS, are really needed on the server side.
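A minimal sketch of that server-loop pattern, using Python's standard thread pool; the `handle_request` function and the request strings are hypothetical stand-ins for real RPC dispatch.

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(request):
    # Hypothetical handler: perform security checks, run the remote
    # procedure, and build the response for the client.
    return f"result of {request}"

# A fixed pool of worker threads waits for work, instead of creating
# a new thread for every incoming request.
with ThreadPoolExecutor(max_workers=8) as pool:
    incoming = [f"call-{n}" for n in range(20)]   # stand-in for network requests
    for response in pool.map(handle_request, incoming):
        print(response)
```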

How are parameters defined and passed between the client and the server?

The better NOSs provide an Interface Definition Language (IDL) for describing the functions and parameters that a server exports to its clients. An IDL compiler takes these descriptions and produces source code stubs (and header files) for both the client and the server. These stubs can then be linked with the client and server code. The client stub packages the parameters in an RPC packet, converts the data, calls the RPC run-time library, and waits for the server's reply.

On the server side, the server stub unpacks the parameters, calls the remote procedure, packages the results, and sends the reply to the client.
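This stub machinery can be seen in miniature with Python's standard `xmlrpc` module; this is not DCE IDL, but the division of labor is the same. The port number is illustrative.

```python
# server.py: the "server stub" side; it unpacks the parameters, calls
# the procedure, and packages the result for the client.
from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):
    return a + b

server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)
server.register_function(add)
server.serve_forever()
```

On the client side, the proxy object plays the client-stub role: it marshals the arguments, sends the request, and blocks until the reply arrives.

```python
# client.py
import xmlrpc.client

proxy = xmlrpc.client.ServerProxy("http://localhost:8000")
print(proxy.add(2, 3))   # looks like an ordinary (synchronous) call
```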

How are failures handled?


Because both sides of the RPC can fail separately, it is important for the software to be able to handle all the possible failure combinations. If the server does not respond, the client side will normally block, time out, and retry the call. The server side must guarantee only-once semantics to make sure that a duplicate request is not re-executed. If the client unexpectedly dies after issuing a request, the server must be able to undo the effects of that transaction.
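A sketch of the client-retry and only-once pattern; the `send` transport function and the `work` callable are hypothetical, and a real RPC run time would keep the duplicate-detection table on the server side of the wire.

```python
import time
import uuid

processed = {}   # server-side table: request ID -> cached reply

def server_execute(request_id, work):
    # Only-once semantics: if a retry carries an ID we have already
    # seen, return the cached reply instead of re-executing the work.
    if request_id in processed:
        return processed[request_id]
    result = work()                      # hypothetical procedure body
    processed[request_id] = result
    return result

def call_with_retry(send, work, retries=3, timeout=2.0):
    # Client side: block, time out, and retry with the SAME request ID
    # so the server can detect duplicates.
    request_id = str(uuid.uuid4())
    for attempt in range(retries):
        try:
            return send(request_id, work, timeout)
        except TimeoutError:
            time.sleep(2 ** attempt)     # back off before retrying
    raise RuntimeError("server did not respond")
```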

How is security handled by the RPC?


Modern NOSs, like DCE, make it easy to automatically incorporate their security features into the RPC. All you need to do is specify the level of security required; the RPC and the security features then cooperate to make it happen.

How does the client find its server?


The association of a client with a server is called binding. The binding information may be hardcoded in the client, or the client can find its server by consulting a configuration file or an environment parameter. A client can also find its server at run time through the network directory services; the servers must, of course, advertise their services in the directory. The process of using the directory to find a server at run time is called dynamic binding. The RPC machinery itself can also be used to find a server: the RPC client stub will locate a server from a list of servers that support the interface. This is called automatic binding.
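Dynamic binding in miniature: the in-memory dictionary below is a hypothetical stand-in for a real network directory service, and the interface and host names are made up.

```python
# A toy directory service: servers advertise their interfaces, and
# clients look up a server address at run time (dynamic binding).
directory = {}   # interface name -> list of (host, port)

def advertise(interface, host, port):
    directory.setdefault(interface, []).append((host, port))

def bind(interface):
    # Automatic binding: the stub just picks one server from the list
    # of servers that support the interface.
    servers = directory.get(interface)
    if not servers:
        raise LookupError(f"no server advertises {interface}")
    return servers[0]

advertise("InventoryService", "server1.example.com", 9000)
advertise("InventoryService", "server2.example.com", 9000)
print(bind("InventoryService"))   # ('server1.example.com', 9000)
```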

How is data representation across systems handled?


The problem here is that different CPUs represent data structures differently (e.g., big-endian vs. little-endian). To maintain machine independence, the RPC must provide some level of data format translation across systems.

Example: Sun RPC requires that clients convert their data to a neutral canonical format using the External Data Representation (XDR) APIs. With Sun, the client does the translation, which makes life easy for the server: all clients look the same to the server ("the client makes it right"). In contrast, DCE's Network Data Representation (NDR) service is multicanonical, meaning that it supports multiple data format representations. The client chooses one of these formats, tags the data with the chosen format, and then leaves it up to the server to transform the data into a format it understands. In other words, "the server makes it right."
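The byte-order problem is easy to demonstrate with Python's `struct` module; the comments relate the two packing choices to the XDR and NDR approaches.

```python
import struct

value = 0x12345678
big    = struct.pack(">I", value)   # big-endian byte order
little = struct.pack("<I", value)   # little-endian byte order
print(big.hex())     # 12345678
print(little.hex())  # 78563412

# A canonical-format scheme (like Sun XDR) makes the CLIENT convert to
# one agreed order before sending; a multicanonical scheme (like DCE
# NDR) tags the data and lets the RECEIVER convert only if needed.
(received,) = struct.unpack(">I", big)
assert received == value
```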

Datagrams vs. Sessions

Connection-oriented protocols, also known as session-based protocols, virtual circuits, or sequenced packet exchanges, provide a reliable two-way connection service over a session. Each packet of information gets exchanged over a session, and duplicate packets are detected and discarded by the session services. There is overhead associated with creating and managing the session, and if a session is lost, one of the parties must reestablish it. This can be a problem for fault-tolerant servers that require automatic switchovers to a backup server if the primary server fails: the backup server needs to reestablish all the outstanding sessions with clients.

Datagrams, also known as connectionless or "transmit and pray" protocols, provide a simple but unreliable form of exchange. The more powerful datagram protocols, such as NetBIOS, provide broadcast capabilities. NetBIOS allows you to send datagrams to a named entity, to a select group of entities (multicast), or to all entities on a network (broadcast). Datagrams are unreliable in the sense that they are not acknowledged or tracked through a sequence number, although some stacks (e.g., LAN Server's Mailslots) provide an acknowledged datagram service.

Datagrams are very useful in "discovery" types of situations, where you discover things about your network environment by broadcasting queries and learning who is out there from the responses. Broadcast can be used to obtain bids for services or to advertise the availability of new services; broadcast datagrams provide the capability of creating "electronic bazaars." The alternative to broadcast is to use a network directory service.

Datagrams are also very useful in situations where there is a need to send a quick or important message. Example: all the systems have to send an "I am alive" message periodically to the network manager.
o The session-based method would need a session per computer (say, 500 sessions), which is not practical.
o Instead, each system can simply send a datagram to the manager (see the sketch below).
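A sketch of the "I am alive" heartbeat as a fire-and-forget UDP datagram; the port number and manager address are illustrative.

```python
import socket

# "I am alive": fire-and-forget heartbeat datagrams; no session is set
# up, and delivery is neither acknowledged nor sequence-tracked.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
sock.sendto(b"I am alive", ("255.255.255.255", 9999))  # broadcast to all
sock.sendto(b"I am alive", ("192.0.2.10", 9999))       # or to one manager
sock.close()
```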

Peer-to-Peer Communications

Most early client/server applications were implemented using low-level, conversational, peer-to-peer protocols such as sockets, TLI, CPI-C/APPC, NetBIOS, and Named Pipes. These low-level protocols are hard to code and maintain, so programmers now use RPCs, MOMs, and ORBs instead, which provide higher-level abstractions. The term peer-to-peer indicates that the two sides of a communication link use the same protocol interface to conduct a networked conversation. The protocol is symmetrical, and it is sometimes called program-to-program. The peer-to-peer interfaces do not fully mask the underlying network from the programmer: the programmer has to handle transmission timeouts, race conditions, and other network errors. The peer-to-peer protocols started out as stack-specific APIs.

a) Sockets

Sockets were introduced in 1981 as the UNIX BSD 4.2 generic interface that would provide Unix-to-Unix communications over networks. In 1985, SunOS introduced NFS and RPC over sockets. Sockets are supported on virtually every OS. The Windows socket API, known as WinSock, is a multivendor specification that standardizes the use of TCP/IP under Windows. In the BSD Unix system, sockets are part of the kernel and provide both a standalone and a networked IPC service.

Socket = Net_ID . Host_ID . Port_ID = IP Address + Port Address.

The three most popular socket types are:
o Stream
o Datagram
o Raw
Stream and datagram sockets interface to the TCP and UDP protocols, respectively, and raw sockets interface to the IP protocol. A port is an entry point to an application that resides on a host; it is represented by a 16-bit integer. Ports are commonly used to define the entry points for services provided by server applications. A minimal stream-socket exchange is sketched below.
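A minimal stream-socket (TCP) echo exchange in Python; port 5000 is illustrative. The server binds to its port, the entry point to the service.

```python
# server: waits on a well-known port for one client connection.
import socket

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)  # stream socket (TCP)
server.bind(("localhost", 5000))   # IP address + port = the socket address
server.listen()
conn, addr = server.accept()       # block until a client connects
data = conn.recv(1024)
conn.sendall(b"echo: " + data)
conn.close()
server.close()
```

The client connects to the server's socket (IP address plus port) and exchanges data over the resulting session.

```python
# client
import socket

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("localhost", 5000))
client.sendall(b"hello")
print(client.recv(1024))           # b'echo: hello'
client.close()
```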

b) TLI

In 1986, AT&T introduced the Transport Layer Interface (TLI), which provides functionality similar to sockets but in a more network-independent fashion. Sockets and TLI are very similar from a programmer's perspective; TLI is just a cleaner version of sockets. It should run on IPX/SPX or TCP/IP with very few modifications. The TLI API consists of 25 API calls. It was later standardized as XTI, the X/Open Transport Interface.

c) NetBIOS:

NetBIOS is the premier protocol for LAN-based, program-to-program communications. Introduced by IBM and Sytek in 1984 for the IBM PC Network, it is used as an interface to a variety of stacks, including NetBEUI, TCP/IP, XNS, Vines, OSI, and IPX/SPX. The NetBIOS services are provided through a set of commands specified in a structure called the Network Control Block (NCB). NetBIOS does not support the routing of messages to other networks.

d) Named Pipes:

Named pipes provide highly reliable, two-way communications between clients and a server. They provide a file-like programming API that abstracts a session-based, two-way exchange of data: using named pipes, processes can exchange data as if they were writing to, or reading from, a sequential file. Named pipes are suitable for implementing server programs that require many-to-one pipelines. An important benefit is that named pipes are part of the base interprocess communication services, and the named pipes interface is identical whether the processes are running on an individual machine or distributed across the network. Named pipes run on NetBIOS, IPX/SPX, and TCP/IP stacks. They are built-in networking features in Windows NT, Windows for Workgroups, Windows 95, and Warp Server; Unix support for named pipes is provided by LAN Manager/X.
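The file-like model is easy to see with a Unix FIFO; this is a local, Unix-only sketch (Windows named pipes use a different API), and the pipe path is illustrative.

```python
import os

path = "/tmp/demo_fifo"   # hypothetical pipe name
if not os.path.exists(path):
    os.mkfifo(path)

pid = os.fork()
if pid == 0:
    # Child: acts as the client, writes a request into the pipe.
    with open(path, "w") as pipe:
        pipe.write("hello from client\n")
    os._exit(0)
else:
    # Parent: acts as the server, reads the request like a file.
    with open(path) as pipe:
        print("server received:", pipe.readline().strip())
    os.wait()
    os.remove(path)
```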

e) CPI-C/APPC:

Common Programming Interface for Communications (CPI-C) builds on top of APPC and masks its complexities and irregularities. Writing to the CPI-C API allows you to port your programs to all SNA platforms. The CPI-C API consists of about 40 calls; APPC consists of over 60 calls, most of which deal with configuration and services. Advanced Program-to-Program Communication (APPC) is a protocol that computer programs can use to communicate over a network. APPC was developed as a component of IBM's Systems Network Architecture (SNA) and is linked with the term LU 6.2. LU 6.2 (Logical Unit version 6.2) is a device-independent SNA protocol. It was developed to allow computers in IBM environments to set up their own communications sessions, rather than rely on a host computer to do so. Contrary to TCP/IP, in which both communication partners always possess a clear role, the communication partners in APPC are equal: each can act as both client and server. With the wide success of TCP/IP, APPC has declined.

RPC, Messaging, and Peer-to-Peer


Client/server applications are split across address spaces, physical machines, networks, and operating systems. All NOSs offer peer-to-peer interfaces that let applications communicate using close-to-the-wire send/receive semantics. Most NOSs provide some form of RPC middleware that hides the "wire." An alternative model, message queuing (or simply MOM), is incredibly helpful in situations where tight synchronization between the clients and servers is not needed.

[Figure: Communication stacks. The service APIs (RPC with DCE NDR, messaging, and peer-to-peer APIs such as Named Pipes, NetBIOS, Sockets, TLI, and CPI-C/APPC, with SUN XDR for data representation) sit on top of a layer of common transport semantics. Below that are the transports (TCP/IP, NetBEUI, SPX/IPX, LU6.2/APPN), the IEEE 802.2 LLC layer with NDIS or ODI driver interfaces, the MAC protocols (IEEE 802.3 Ethernet, IEEE 802.5 Token Ring, SDLC, ISDN), and the physical media (coax, fiber optic, twisted pair).]

Each layer has a well-defined set of APIs and protocols, so in principle the layers can be mixed and matched; in practice, an entire stack usually comes from a single vendor. The lowest layer of the communication software belongs to the device drivers that provide an interface to several types of communication hardware adapters. Real products don't have any notion of architectural boundaries or reference models; they just get a job done. At the lower layers, they interface to the hardware using the MAC protocols defined by the IEEE. The LLC provides a common interface to the MACs and a reliable link service for transmitting communication packets between two nodes. The transport layer provides an end-to-end delivery service.

Inside the Building Blocks

The Client Building Block


Runs the client side of the application. It runs on an OS that provides a GUI or an OOUI and that can access distributed services, wherever they may be. The client also runs a component of the Distributed System Management (DSM) element.

The Server Building Block


Runs the server side of the application. The server application typically runs on top of some shrink-wrapped server software package. The five contending server platforms for creating the next generation of client/server applications are SQL database servers, TP Monitors, groupware servers, object servers, and Web servers. The server side depends on the OS to interface with the middleware building block. The server also runs a DSM component, which may be anything from a simple agent to a shared object database.

The Middleware Building Block


Runs on both the client and server sides of an application. It breaks into three categories:
o Transport stacks
o NOS
o Service-specific middleware
Middleware is the nervous system of the client/server infrastructure. It, too, has a DSM component.

DSM

Runs on every node in the client/server network. A managing workstation collects information from all its agents on the network and displays it graphically. The managing workstation can also instruct its agents to perform actions on its behalf.

Server-to-server Middleware

Server-to-server interactions are usually client/server in nature: servers are clients to other servers. However, some server-to-server interactions require specialized middleware. For example, a two-phase commit protocol may be used to coordinate a transaction that executes on multiple servers, and servers on a mail backbone will use special server-to-server middleware for doing store-and-forward messaging. But most modern software follows the client/server paradigm.

Client/Server: A One-Size-Fits-All Model


The building blocks of client/server applications are:
1. Client
2. Middleware
3. Server

These building blocks can be rearranged for the following situations:

1. Client/Server for tiny shops and nomadic tribes: a building-block implementation that runs the client, the middleware software, and most of the business services on the same machine. It is the suggested implementation for one-person shops, home offices, and mobile users with well-endowed laptops.

2. Client/Server for small shops and departments: the classic Ethernet client/single-server, building-block implementation. It is used in small shops, departments, and branch offices. This is the predominant form of client/server today.

3. Client/Server for intergalactic enterprises: the multiserver building-block implementation of client/server. The servers present a single system image to the client. They can be spread throughout the enterprise, but they can be made to look like they are part of the local desktop. This implementation meets the initial needs of intergalactic client/server computing.

4. Client/Server for a post-scarcity world: this model transforms every machine in the world into both a client and a server. Personal agents on every machine will handle all the negotiations with their peer agents anywhere in the universe. This dream is almost within reach.

1) Client/Server for Tiny Shops and Nomadic Tribes

It is easy to run the client and server portions of an application on the same machine, so vendors can easily package single-user versions of a client/server application. The business-critical client/server application runs on one machine and does some occasional communications with outside servers to exchange data, refresh a database, and send or receive mail and faxes, for example, over the Internet.

2) Client/Server for small shops and departments


The client/server architecture is particularly well suited to LAN-based, single-server establishments: multiple clients talking to a local server. This is the model used in small businesses. The single-server nature of the model tends to keep the middleware simple; the client only needs to look in a configuration file to find its server's name. Security is implemented at the machine level and kept quite simple. The network is usually relatively easy to administer; it's a part-time job for a member of the group. There are no complex interactions between servers, so it is easy to identify failures: they're either on the client or on the local server.

3) Client/Server for Intergalactic Enterprises:


The client/server enterprise model addresses the needs of establishments with a mix of heterogeneous servers. These models are upwardly scalable: when more processing power is needed for various intergalactic functions, more servers can be added, or the existing server machine can be traded up for the latest generation of superserver machine. The servers can be partitioned based on the functions they provide, the resources they control, or the databases they own. The servers can be replicated to provide a fault-tolerant service or to boost an application's performance. Multiserver capability, when properly used, can provide an awesome amount of compute power and flexibility, in many cases rivaling that of mainframes. To exploit the full power of multiservers, we need low-cost, high-speed bandwidth and an awesome number of middleware features, including:
o network directory services
o network security
o remote procedure calls
o network time services
Middleware creates a common view of all the services on the network, called a single system image. Good software architecture for intergalactic enterprise client/server implementations is all about creating system ensembles out of modular building blocks. Intergalactic client/server is the driving force behind middleware standards such as distributed objects and the Internet.

4) Client/Server for a Post-Scarcity World


Every machine is both a client and a full-function server. Because every machine is a full-function server, it will run, at a minimum, a file server, a database server, a workflow agent, a TP Monitor, and a Web server, all connected via an ORB. This is in addition to all the client software and middleware.

In the next few years, a hundred million machines or more may be running almost all forms of client/server software. In this model, personal agents will be used instead of mobile agents.

Intergalactic Client/Server
Intergalactic client/server is a new threshold of client/server applications, driven by:
1. the exponential increase of low-cost bandwidth on wide area networks, for example, the Internet and CompuServe;
2. a new generation of network-enabled, multithreaded desktop operating systems, for example, OS/2 Warp Connect and Windows 95.
This new threshold marks the beginning of a transition from Ethernet client/server to intergalactic client/server that will result in the irrelevance of proximity. The center of gravity is shifting from single-server, 2-tier, LAN-based departmental client/server to a post-scarcity form of client/server where every machine on the global information highway can be both a client and a server.

| Application characteristics | Intergalactic Era client/server | Ethernet Era client/server |
| --- | --- | --- |
| Number of clients per application | Millions | Less than 100 |
| Number of servers per application | 100,000+ | Fewer than 5 |
| Geography | Global | Campus-based |
| Server-to-server interactions | Yes | No |
| Middleware | ORBs on top of the Internet | SQL and stored procedures |
| Client/server architecture | 3-tier (or n-tier) | 2-tier |
| Transactional updates | Pervasive | Very infrequent |
| Multimedia content | High | Low |
| Mobile agents | Yes | No |
| Client front-ends | OOUIs, compound documents, and shippable places | GUI |
| Time-frame | 1997 onwards | 1985 till present |

The major key technologies in the intergalactic client/server model are:
a) Rich transaction processing: supports nested transactions that span multiple servers, long-lived transactions that execute over long periods of time as they travel from server to server, and queued transactions that can be used in secure business-to-business dealings. Most nodes on the network participate in secured transactions; super-server nodes handle the massive transaction loads.
b) Roaming agents: the environment is populated with all types of agents. Agent technology includes cross-platform scripting engines, workflow, and Java-like mobile code environments that allow agents to live on any machine on the network.
c) Rich data management: this includes active multimedia compound documents that you can move, store, view, and edit in place anywhere on the network. Most nodes on the network provide compound document technology (for example, OLE or OpenDoc) for mobile document management.
d) Intelligent self-managing entities: the new multithreaded, high-volume, network-ready desktop operating systems have increased the workload on server operating systems. Distributed software of this type must know how to manage and configure itself and protect itself against threats.
e) Intelligent middleware: the distributed environment must provide the semblance of a single-system image across potentially millions of hybrid client/server machines. The middleware creates this illusion by making all servers on the global network appear to behave like a single computer system. Users and programs should be able to dynamically join and leave the network, and then discover each other.

2-Tier vs. 3-Tier

These terms can be used instead of "fat clients" and "fat servers." It is all about how you split client/server applications into functional units, which can then be assigned either to the client or to one or more servers. The most typical functional units are:
o the user interface
o the business logic
o the shared data
In 2-tier systems, the application logic is buried either inside the user interface on the client or within the database on the server (or both). Examples of 2-tier systems are file servers and database servers with stored procedures. In 3-tier systems, the application logic (or process) lives in the middle tier; it is separated from the data and the user interface. 3-tier systems are more scalable, robust, and flexible, and they can integrate data from multiple sources. Examples are TP Monitors, distributed objects, and the Web. The split is sketched below.
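A toy sketch of the 3-tier split; all names and the inventory data are hypothetical. Each tier only talks to the one below it, so the business logic can be moved onto a middle-tier server without touching the UI or the data.

```python
# Tier 3: shared data (a stand-in for the database server).
inventory = {"widget": 12}

def data_tier_get_stock(item):
    return inventory[item]

# Tier 2: business logic lives in the middle tier, separate from both
# the user interface and the data.
def logic_tier_can_order(item, quantity):
    return data_tier_get_stock(item) >= quantity

# Tier 1: the user interface only presents results; no business rules here.
def ui_tier():
    item, quantity = "widget", 5
    ok = logic_tier_can_order(item, quantity)
    print(f"order {quantity} x {item}: {'accepted' if ok else 'rejected'}")

ui_tier()
```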
