INTRODUCTION
A socket is one endpoint of a two-way communication link between two programs running on the network. A socket is bound to a port number so that the TCP layer can identify the application to which data is destined to be sent.
A datagram is an independent, self-contained message sent over the network
whose arrival, arrival time, and content are not guaranteed.
NetWare is a network operating system (NOS) that provides transparent remote
file access and numerous other distributed network services, including printer
sharing and support for various applications such as electronic mail transfer and
database access. NetWare specifies the upper five layers of the OSI reference
model and, as such, runs on any media-access protocol.
NetBIOS is an acronym for Network Basic Input/Output System. It provides services related to the session layer of the OSI model, allowing applications on separate computers to communicate over a local area network. NetBIOS is not a networking protocol.
A named pipe is a named, one-way or duplex pipe for communication between the
pipe server and one or more pipe clients. All instances of a named pipe share the
same pipe name, but each instance has its own buffers and handles, and provides a
separate conduit for client/server communication.
RPC (Remote Procedure Call) is a powerful technique for constructing distributed, client-server based applications. It is based on extending the notion of conventional or local procedure calling, so that the called procedure need not exist in the same address space as the calling procedure.
MOM (Message Oriented Middleware) is software that resides in both portions
of client/server architecture and typically supports asynchronous calls between the
client and server applications. Message queues provide temporary storage when
the destination program is busy or not connected.
DCE (Distributed Computing Environment) is an industry-standard software
technology for setting up and managing computing and data exchange in a system
of distributed computers. DCE is typically used in a larger network of computing
systems that include different size servers scattered geographically. DCE uses the
client/server model. Using DCE, application users can use applications and data at
remote servers.
SSL uses a program layer located between the Internet's Hypertext Transfer Protocol (HTTP) and Transmission Control Protocol (TCP) layers. SSL is included as part of both the Microsoft and Netscape browsers and most Web server products.
S-HTTP (Secure HTTP) is an extension to the Hypertext Transfer Protocol
(HTTP) that allows the secure exchange of files on the World Wide Web. Each S-
HTTP file is either encrypted, contains a digital certificate, or both. For a given
document, S-HTTP is an alternative to another well-known security protocol,
Secure Sockets Layer (SSL).
Internet Protocol Security (IPsec) is a protocol suite for securing Internet
Protocol (IP) communications by authenticating and encrypting each IP packet of
a data stream. IPsec also includes protocols for establishing mutual authentication
between agents at the beginning of the session and negotiation of cryptographic
keys to be used during the session.
Firewalls make it possible to filter incoming and outgoing traffic that flows through your system. A firewall can use one or more sets of "rules" to inspect network packets as they come in or go out of your network connections, and either allow the traffic through or block it.
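The rule-matching idea can be sketched in a few lines of Python. The rule format and field names below are invented for illustration; real firewalls such as pf or iptables have their own syntax and far more match criteria:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str            # "allow" or "block"
    direction: str         # "in", "out", or "any"
    protocol: str          # "tcp", "udp", or "any"
    port: Optional[int]    # destination port, or None to match any port

def filter_packet(rules, direction, protocol, port, default="block"):
    """Return the action of the first rule that matches the packet."""
    for r in rules:
        if r.direction not in (direction, "any"):
            continue
        if r.protocol not in (protocol, "any"):
            continue
        if r.port is not None and r.port != port:
            continue
        return r.action
    return default   # no rule matched: fall back to the default policy

rules = [
    Rule("allow", "in", "tcp", 80),     # permit inbound HTTP
    Rule("allow", "in", "tcp", 443),    # permit inbound HTTPS
    Rule("block", "in", "any", None),   # block all other inbound traffic
    Rule("allow", "out", "any", None),  # permit all outbound traffic
]

print(filter_packet(rules, "in", "tcp", 443))  # allow
print(filter_packet(rules, "in", "udp", 53))   # block
print(filter_packet(rules, "out", "tcp", 25))  # allow
```

Note that rule order matters: the first matching rule wins, which is why the catch-all block rule comes after the specific allow rules.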
CONTENTS
NOS Middleware
NOSs are evolving from being a collection of independent workstations, able to
communicate via a shared file system, to becoming real distributed computing
environments that make the network transparent to users.
It includes enterprise intranets, extranets, and the Internet. In all cases, NOSs provide the illusion of a single system.
In a departmental LAN, the illusion only extends to a few hundred users. But on the Internet, it extends to hundreds of millions of users. In between, you have enterprise networks and extranets that may have thousands or sometimes hundreds of thousands of users.
(i) Transparency
Transparency means fooling everyone into thinking the client/server system is totally
seamless – the single system illusion. It really means hiding the network and its servers
from the users and even the application programmers. Some of its types are:
Location transparency
You should not be aware of the location of a resource.
Namespace transparency
You should be able to use the same naming conventions to locate any
resource on the network.
Logon transparency
You should be able to provide a single password that works on all
servers and for all services on the network.
Replication transparency
You should not be able to tell how many copies of a resource exist.
Local/Remote access transparency
You should be able to work with any resource on the network as if it
were on the local machine.
Distributed time transparency
You should not see any time differences across servers.
Failure transparency
You must be shielded from network failures.
Administration transparency
You should only have to deal with a single system management interface.
Global Directory Services (GDS)
GDS tracks all the NOS's resources and knows where everything is. It should provide a single image that can be used by all network applications, including email, system management, network inventory, file services, RPC, distributed objects, databases, authentication and security.
A directory object can have attributes. A schema describes the types of objects a directory may contain; it also describes the attributes, both mandatory and optional, associated with each object type.
A typical directory is implemented as a set of named entries and their associated
attributes. In modern NOS, the directory server is implemented as a distributed,
replicated object database. It is distributed to allow administration domains to
control their environments.
It is replicated to provide high availability and performance where needed.
Modern NOS directories have APIs and user interfaces that allow programs to locate entities on the network by querying on the name or attributes.
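As a rough illustration, a directory of named entries with attributes, queried either by name or by attribute values, might look like this in Python. The entry names and attributes are invented for the example; real NOS directories use much richer schemas:

```python
# A tiny in-memory "directory": named entries, each with attributes.
directory = {
    "cn=web1,ou=servers,o=acme": {"objectClass": "computer", "os": "linux", "site": "paris"},
    "cn=web2,ou=servers,o=acme": {"objectClass": "computer", "os": "linux", "site": "tokyo"},
    "cn=alice,ou=people,o=acme": {"objectClass": "person", "mail": "alice@acme.example"},
}

def lookup(name):
    """Locate an entry directly by its name."""
    return directory.get(name)

def search(**attrs):
    """Locate entries whose attributes match all the given values."""
    return [name for name, entry in directory.items()
            if all(entry.get(k) == v for k, v in attrs.items())]

print(lookup("cn=alice,ou=people,o=acme")["mail"])   # alice@acme.example
print(search(objectClass="computer", site="tokyo"))  # ['cn=web2,ou=servers,o=acme']
```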
Two types of synchronization schemes are used to refresh the replicas:
Immediate replication (causes any update to the master to be
immediately shadowed on all replicas)
Skulking (causes a periodic propagation to the replicas of all changes
made on the master)
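The difference between the two schemes can be sketched with a toy in-memory model. The class and method names are illustrative; "skulk" here stands for the periodic propagation scheme:

```python
class MasterDirectory:
    """Toy master with a list of replica dicts shadowing its data."""
    def __init__(self, replicas):
        self.data = {}
        self.replicas = replicas   # dicts standing in for replica servers
        self.pending = {}          # changes not yet propagated

    def update_immediate(self, key, value):
        """Immediate replication: shadow the update on all replicas at once."""
        self.data[key] = value
        for r in self.replicas:
            r[key] = value

    def update_lazy(self, key, value):
        """Queue the update until the next periodic propagation."""
        self.data[key] = value
        self.pending[key] = value

    def skulk(self):
        """Periodic propagation of all changes made on the master."""
        for r in self.replicas:
            r.update(self.pending)
        self.pending.clear()

r1, r2 = {}, {}
m = MasterDirectory([r1, r2])
m.update_immediate("printer", "floor-2")
print(r1["printer"])   # floor-2 (visible on replicas immediately)
m.update_lazy("printer", "floor-3")
print(r1["printer"])   # still floor-2 until the next skulk
m.skulk()
print(r2["printer"])   # floor-3
```

The trade-off is the usual one: immediate replication keeps replicas fresher at the cost of more update traffic, while periodic propagation batches changes but lets replicas lag briefly.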
The following APIs are used to create directory-enabled programs:
Directory-specific API and class libraries
LDAP and X.500 API
Java classes
Distributed object interfaces
Meta-directory services and scripts
Objective Type Questions
1. Which one of the following provides the illusion of a single system and makes
the network transparent to the user?
a. NOS
b. Windows OS
c. Server OS
d. Client OS
2. In which transparency should you not be able to tell how many copies of a
resource exist?
a. Logon transparency
b. Namespace transparency
c. Replication transparency
d. Location transparency
3. In which transparency should you be able to use the same naming conventions
to locate any resource on the network?
a. Logon transparency
b. Namespace transparency
c. Replication transparency
d. Location transparency
4. Which one of the following causes a periodic propagation to the replicas of
all changes made on the master?
a. Skulking
b. Immediate Replication
c. Transparency
d. None of the above
5. Which one of the following causes any update to the master to be immediately
shadowed on all replicas?
a. Skulking
b. Immediate Replication
c. Transparency
d. None of the above
Review Questions
Two Mark Questions
1. Define NOS.
2. Define Global Directory Services.
3. What is meant by transparency?
4. What is meant by Logon transparency?
5. What are the two types of synchronization schemes used to refresh the
replicas?
Big Questions
1. What are the various types of transparencies used in GDS?
X.500 Overview
The X.500 directory service is a global directory service. Its components cooperate to
manage information about objects such as countries, organizations, people, machines, and
so on in a worldwide scope. It provides the capability to look up information by name (a
white-pages service) and to browse and search for information (a yellow-pages service).
The information is held in a directory information base (DIB). Entries in the DIB are
arranged in a tree structure called the directory information tree (DIT). Each entry is a
named object and consists of a set of attributes. Each attribute has a defined attribute type
and one or more values. The directory schema defines the mandatory and optional
attributes for each class of object (called the object class). Each named object may have
one or more object classes associated with it.
The X.500 namespace is hierarchical. An entry is unambiguously identified by a
distinguished name (DN). A distinguished name is the concatenation of selected attributes
from each entry, called the relative distinguished name (RDN), in the tree along a path
leading from the root down to the named entry.
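The naming scheme can be illustrated in Python: a DN is the concatenation of the RDNs on the path from the root of the DIT down to the entry, conventionally written leaf-first. The tiny tree below is an invented example using the name discussed later in this section:

```python
# A miniature DIT: nested dicts keyed by RDN.
dit = {
    "C=US": {
        "O=Dover Beach Consulting": {
            "CN=Marshall T. Rose": {},
        },
    },
}

def entry_exists(tree, path):
    """Walk the DIT from the root, one RDN at a time."""
    node = tree
    for rdn in path:
        if rdn not in node:
            return False
        node = node[rdn]
    return True

def distinguished_name(path):
    """Concatenate the RDNs along a root-to-entry path, leaf first."""
    return "; ".join(reversed(path))

path = ["C=US", "O=Dover Beach Consulting", "CN=Marshall T. Rose"]
print(entry_exists(dit, path))   # True
print(distinguished_name(path))  # CN=Marshall T. Rose; O=Dover Beach Consulting; C=US
```

Because each RDN is unique among its siblings, the full concatenation identifies exactly one entry in the whole tree.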
The need for an industry standard directory has been articulated for many years. Many in
the industry have promoted and continue to promote the ISO/ITU X.500 standard as the
basis for this directory. Netscape, along with a number of other companies, recently
announced that they would be supporting LDAP (Lightweight Directory Access Protocol)
based directory services. This has led to a rapid growth of interest in LDAP, and has also
led to substantial confusion as to what LDAP is and what it might become.
The most important thing to understand about X.500 and LDAP is that they have more in
common than different. The things that they have in common relate to the information
model and standard services, which are absolutely central to both X.500 and LDAP. This
section describes the common core:
1. Hierarchical Names. LDAP and X.500 both define a hierarchical directory with
hierarchical names. An example name is as follows: CN=Marshall T. Rose;
O=Dover Beach Consulting; C=US. This name represents the person with
Common Name (CN) 'Marshall T. Rose', within the organization (O) 'Dover
Beach Consulting', within country (C) 'US'.
3. Object Class attribute. Typical object classes are People, Organizations, and
Computers. X.500 and LDAP share object class definitions.
4. Typed Attributes. Information within objects is held as a set of typed attributes.
For example, there may be a 'telephone number' attribute with one or more values.
Many attributes are encoded as strings. Other attributes have an encoding which is
inferred from the attribute type. This can be used for non-string data such as
pictures or to handle structured information. X.500 and LDAP share attribute type
definitions.
5. Directory Operations. X.500 and LDAP share a common set of operations to
access and manage data in the directory. These are: read; compare; search; add;
delete; modify entry; modify rdn.
X.500 and LDAP both have their origins in the ISO/ITU X.500(1988) specifications,
which were the result of work started in 1982. This section reviews the core concepts of
this specification that are not common to LDAP, or that are needed to help understand the
relationship between X.500 and LDAP:
1. Directory System Agent (DSA). The DSA is the core directory server. A single
DSA will typically hold only a part of the data available in the total directory.
2. Directory User Agent (DUA). A DUA is the client process that accesses
information in the directory. This could be a (human) user interface or embedded
in another application.
3. Directory Access Protocol (DAP). DAP is the protocol which a DUA uses to
access one or more DSAs. Thus it is the protocol which allows the client/server
model of the X.500 directory.
4. Directory System Protocol (DSP). DSP is the protocol that DSAs use to talk to
each other, and it carries the same operations as DAP, along with some DSA
control information. These interactions are governed by a set of procedures for
distributed operations, which enable a set of DSAs to provide a coherent service,
with the DUA unaware of how the directory data is distributed between DSAs.
LDAP
LDAP (Lightweight Directory Access Protocol) arose from initial experience with
deploying X.500 directory services on the Internet in 1989-91. Two DUA implementers
(Marshall Rose and Tim Howes) developed lightweight protocols to communicate
between a DUA and a gateway which mapped from LDAP onto X.500 DAP. It was clear
that this was a more general requirement, which led three people (Wengyik Yeong, Steve
Kille, and Tim Howes) to develop the LDAP standard under the aegis of the IETF OSI-
DS working group.
The goal of the original LDAP was to give simple lightweight access to an X.500
directory, to facilitate the development of X.500 DUAs and use of X.500 for a wide
variety of applications.
1. Simple protocol encoding. While the LDAP PDUs are based on those from X.500,
elements of the protocol have been modified to produce a protocol that is
significantly simpler.
2. Names and attributes use text encoding. Names and attributes are pervasive in the
protocol. In X.500, these have a complex ASN.1 encoding, whereas in LDAP they
are given a simple string encoding. This is particularly helpful for applications
that do not need to handle the detailed structure of names or attributes.
3. Mapping directly onto TCP/IP. LDAP maps directly onto TCP/IP (the Internet
transport layer), and removes the need for a non-trivial amount of OSI protocol.
4. Simple API. The University of Michigan implementation has set a de facto API
standard, published as RFC 1823. This is arguably the most important benefit, as
it enables easy implementation of directory enabled applications. The X/Open
XDS interface, which is the preferred API onto X.500, is much more complex.
5. LDAP relies on X.500 for the service definition and distributed operations.
Because LDAP is defined as an access protocol to X.500 and not as a complete
directory service, it is possible to specify LDAP very concisely. The detailed
service definitions, while often intuitive from the protocol, are formally specified
in X.500. Where the directory service is provided by more than one Directory
Server, the procedures for doing this are not defined by LDAP.
The original LDAP was very clearly a lightweight access protocol to an X.500 directory.
It is important to understand this clearly, when looking at the issues of providing a large
scale directory service.
This paper has described the history and state of the art of X.500 and LDAP. The rest of
this paper will now consider and give views as to how this will move forward. This
section considers directory access.
LDAP has an excellent future as a directory access protocol, and will become the access
mechanism for directory service in the Internet, and the leading open directory access
protocol. Reasons for this are:
1. schema discovery
A distributed computing system has many advantages but also brings with it new
problems. One of them is keeping the clocks on different nodes synchronized. In a single
system, there is one clock that provides the time of day to all applications. Computer
hardware clocks are not completely accurate, but there is always one consistent idea of
what time it is for all processes running on the system.
In a distributed system, however, each node has its own clock. Even if it were possible to
set all of the clocks in the distributed system to one consistent time at some point, those
clocks would drift away from that time at different rates. As a result, the different nodes
of a distributed system have different ideas of what time it is. This is a problem, for
example, for distributed applications that care about the ordering of events. It is difficult
to determine whether Event A on Node X occurred before Event B on Node Y because
different nodes have different notions of the current time.
The DCE Distributed Time Service (DTS) addresses this problem in two ways:
1. DTS provides a way to periodically synchronize the clocks on the different hosts in a
distributed system.
2. DTS also provides a way of keeping that synchronized notion of time reasonably close
to the correct time. (In DTS, correct time is considered to be Coordinated Universal Time
(UTC), an international standard.)
These services together allow cooperating nodes to have the same notion of what time it
is, and to also have that time be meaningful in the rest of the world.
Distributed time is inherently more complex than time originating from a single source -
since clocks cannot be continuously synchronizing, there is always some discrepancy in
their ideas of the current time as they drift between synchronizations. In addition,
indeterminacy is introduced in the communications necessary for synchronization -
clocks synchronize by sending messages about the time back and forth, but that message
passing itself takes a certain (unpredictable) amount of time. So in addition to being able
to express the time of day, a distributed notion of time must also include an inaccuracy
factor - how close the timestamp is to the real time. As a result, keeping time in a
distributed environment requires not only new synchronization mechanisms, but also a
new form of expression of time - one that includes the inaccuracy of the given time. In
DTS, distributed time is therefore expressed as a range, or interval, rather than as a single
point.
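The interval idea can be sketched as follows. This is a simplification; real DTS timestamps and their arithmetic are considerably richer, but the key point survives: two events can only be ordered with certainty when their intervals do not overlap.

```python
class Timestamp:
    """A time expressed as an interval [t - inaccuracy, t + inaccuracy]."""
    def __init__(self, seconds, inaccuracy):
        self.lo = seconds - inaccuracy
        self.hi = seconds + inaccuracy

    def definitely_before(self, other):
        """True only when every time in this interval precedes the other's."""
        return self.hi < other.lo

a = Timestamp(100.00, inaccuracy=0.05)  # event A on node X
b = Timestamp(100.20, inaccuracy=0.05)  # event B on node Y
c = Timestamp(100.08, inaccuracy=0.05)  # event C, interval overlaps A's

print(a.definitely_before(b))  # True: the intervals are disjoint
print(a.definitely_before(c))  # False: overlap, so the ordering is uncertain
```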
In a client/server system, you can't trust any of the OSs on the network to protect the server's resources from unauthorized access. To maintain the single-system illusion, every trusted user must be given transparent access to all resources. A NOS can provide the following to meet security needs on the network:
3. Audit trails -> An audit trail is a series of records of computer events, about an
operating system, an application, or user activities. It is generated by an auditing
system that monitors system activity. Audit trails have many uses in the realm of
computer security: individual accountability, reconstructing events, problem
monitoring, and intrusion detection.
(i) Integrity
NOSs provide at least two mechanisms for dealing with tampering and the confidentiality
of in-transit data:
1. Encryption
2. Cryptographic Checksums
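A cryptographic checksum can be sketched with Python's standard hmac module: the sender computes a keyed digest over the message, and the receiver recomputes it to detect any tampering in transit. The key and message contents below are illustrative:

```python
import hashlib
import hmac

key = b"shared-secret-key"   # illustrative only; a real key would be managed securely

def protect(message):
    """Compute a keyed checksum (HMAC-SHA256) over the message."""
    tag = hmac.new(key, message, hashlib.sha256).hexdigest()
    return message, tag

def verify(message, tag):
    """Recompute the checksum and compare in constant time."""
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

msg, tag = protect(b"transfer 100 to account 42")
print(verify(msg, tag))                            # True: message untouched
print(verify(b"transfer 900 to account 42", tag))  # False: message was tampered with
```

Because the checksum is keyed, an attacker who alters the message cannot forge a matching tag without the shared key; an unkeyed checksum would only catch accidental corruption.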
(ii) Non-Repudiation
Non-repudiation is the assurance that someone cannot deny something. Typically, non-
repudiation refers to the ability to ensure that a party to a contract or a communication
cannot deny the authenticity of their signature on a document or the sending of a message
that they originated. Non-Repudiation services are:
3. An action timestamp
5. Adjudicator
a. X.500
b. DAP
c. API
d. LDAP
4. Schema discovery, dynamically extensible schemas, and enhanced security
are features of
a. X.500
b. DAP
c. API
d. LDAP
5. Which one of the following provides a way to periodically synchronize
the clocks on the different hosts in a distributed system?
a. X.500
b. DAP
c. DTS
d. LDAP
6. The process by which you verify that someone is who they claim to be is called
a. Authentication
b. Authorization
c. Access Control
d. Audit Trails
7. The process of finding out if the person, once identified, is permitted to have
the resource is called
a. Authentication
b. Authorization
c. Access Control
d. Audit Trails
8. The assurance that someone cannot deny something is called
a. Authentication
b. Authorization
c. Non-Repudiation
d. Audit Trails
9. A checksum is also called a
a. Message Digest
b. Authorization
c. Non-Repudiation
d. Encryption
Review Questions
Two Mark Questions
1. Define LDAP.
2. Write short notes on X.500 service.
3. Compare LDAP and X.500
4. What is meant by DTS?
5. What are the features of LDAP?
6. Define API.
Big Questions
1. Explain in detail about X.500 services. Compare X.500 with LDAP.
2. Explain briefly about the Distributed Time Service (DTS).
3. Explain briefly about Directory Security Services.
Remote Procedure Call (RPC) is middleware that hides "the wire" and makes any server
on the network appear to be one function call away.
The Remote Procedure Call (RPC) message protocol consists of two distinct structures:
the call message and the reply message. A client makes a remote procedure call to a
network server and receives a reply containing the results of the procedure's execution.
By providing a unique specification for the remote procedure, RPC can match a reply
message to each call (or request) message.
The RPC message protocol is defined using the eXternal Data Representation (XDR) data
description, which includes structures, enumerations, and unions. When RPC messages
are passed using the TCP/IP byte-stream protocol for data transport, it is important to
identify the end of one message and the start of the next one.
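The framing problem can be illustrated with a simple length-prefix scheme in Python. ONC RPC's actual record-marking standard over TCP is similar in spirit, though it also carries a last-fragment flag in the high bit of the 4-byte header; the scheme below keeps only the length:

```python
import struct

def frame(payload: bytes) -> bytes:
    """Prefix a message with its length so it can be found in a byte stream."""
    return struct.pack(">I", len(payload)) + payload

def unframe(stream: bytes):
    """Split a byte stream back into the messages it carries."""
    messages, offset = [], 0
    while offset < len(stream):
        (length,) = struct.unpack_from(">I", stream, offset)
        offset += 4
        messages.append(stream[offset:offset + length])
        offset += length
    return messages

# Three messages concatenated into one TCP-like byte stream...
stream = frame(b"call-1") + frame(b"reply-1") + frame(b"call-2")
# ...recovered cleanly because each length prefix marks the boundary.
print(unframe(stream))  # [b'call-1', b'reply-1', b'call-2']
```

Without the prefixes, the receiver would see only an undifferentiated run of bytes and could not tell where one RPC message ends and the next begins.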
Communication Stacks

  Application        RPC (DCE, Sun), Messaging, Named Pipes, Peer-to-Peer
  Presentation       NDR, XDR
  Session            Peer-to-peer service APIs: NetBIOS, Sockets, TLI, CPI-C/APPC
                     Common Transport Semantics
  Transport/Network  TCP/IP, SPX/IPX, NetBEUI, LU6.2/APPN
  LLC                IEEE 802.2, via NDIS or ODI driver interfaces
  MAC                IEEE 802.5 (Token Ring), IEEE 802.3 (Ethernet), SDLC, ISDN
  Physical           Fiber Optic, Coax, Twisted Pair
Each layer has a well-defined set of APIs and Protocols.
The lowest layer of the communication software belongs to the device drivers that
provide an interface to several types of communication hardware adapters.
Real products don’t have any notion of architectural boundaries or reference
models – they just get a job done.
At the lower layers, they interface to the hardware using MAC protocols defined
by IEEE.
The LLC provides a common interface to the MACs and a reliable link service for
transmitting communication packets between two nodes.
The transport layer provides end-to-end delivery service.
Peer-to-peer Communications
Client/server applications were implemented using low level, conversational,
peer-to-peer protocols such as sockets, TLI, CPIC/APPC, NetBIOS and named
pipes.
These low level protocols are hard to code and maintain.
The term "peer-to-peer" indicates that the two sides of a communication link use
the same protocol interface to conduct a networked conversation. Any computer
can initiate a conversation with any other computer. The protocol is symmetric, and
it is sometimes called "program-to-program".
1. What Is a Socket?
Normally, a server runs on a specific computer and has a socket that is bound to a specific
port number. The server just waits, listening to the socket for a client to make a
connection request.
On the client-side: The client knows the hostname of the machine on which the server is
running and the port number on which the server is listening. To make a connection
request, the client tries to rendezvous with the server on the server's machine and port.
The client also needs to identify itself to the server so it binds to a local port number that
it will use during this connection. This is usually assigned by the system.
If everything goes well, the server accepts the connection. Upon acceptance, the server
gets a new socket bound to the same local port and also has its remote endpoint set to the
address and port of the client. It needs a new socket so that it can continue to listen to the
original socket for connection requests while tending to the needs of the connected client.
On the client side, if the connection is accepted, a socket is successfully created and the
client can use the socket to communicate with the server.
The client and server can now communicate by writing to or reading from their sockets.
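The sequence above can be sketched with Python's socket API on the loopback interface. The message contents are arbitrary, and port 0 asks the system for any free port:

```python
import socket
import threading

# Server side: create a socket, bind it to a port, and listen.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))       # port 0: let the system pick a free port
listener.listen()
host, port = listener.getsockname()

def serve_one():
    conn, addr = listener.accept()    # accept returns a NEW socket, so the
    with conn:                        # original can keep listening for others
        data = conn.recv(1024)
        conn.sendall(b"echo: " + data)

t = threading.Thread(target=serve_one)
t.start()

# Client side: rendezvous with the server on its host and port.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect((host, port))          # the system assigns the client's local port
client.sendall(b"hello")
reply = client.recv(1024)
print(reply)                          # b'echo: hello'
client.close()
t.join()
listener.close()
```

The server thread here stands in for a separate server process; in a real deployment the client would connect to a well-known hostname and port rather than one discovered from `getsockname()`.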
A socket address is the combination of an IP address (the location of the computer) and a
port (the entry point to an application that resides on a host) into a single identity, much like
one end of a telephone connection is the combination of a phone number and a particular
extension at that location.
2. Datagram Vs Sessions
Clients and servers that communicate via a reliable channel, such as a TCP socket, have a
dedicated point-to-point channel between themselves, or at least the illusion of one. To
communicate, they establish a connection, transmit the data, and then close the
connection. All data sent over the channel is received in the same order in which it was
sent. This is guaranteed by the channel.
In contrast, applications that communicate via datagrams send and receive completely
independent packets of information. These clients and servers do not have and do not
need a dedicated point-to-point channel. The delivery of datagrams to their destinations is
not guaranteed. Nor is the order of their arrival. A datagram is an independent, self-contained
message sent over the network whose arrival, arrival time, and content are not
guaranteed.
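A minimal datagram exchange looks like this in Python. It runs on the loopback interface, where packets normally do arrive, although UDP itself guarantees neither delivery nor order:

```python
import socket

# The receiver binds to a port but establishes no connection.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))   # port 0: any free port
addr = receiver.getsockname()

# The sender just fires self-contained packets at the address:
# no connect(), no session, no ordering guarantee from UDP.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"datagram one", addr)
sender.sendto(b"datagram two", addr)

received = []
for _ in range(2):
    data, src = receiver.recvfrom(1024)   # each call yields one whole datagram
    received.append(data)
    print(data)

sender.close()
receiver.close()
```

Compare this with the TCP example earlier: there is no `listen()`/`accept()` handshake, and each `recvfrom` returns exactly one self-contained message rather than a slice of a byte stream.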
3. NetWare
NetWare is a network operating system (NOS) that provides transparent remote file
access and numerous other distributed network services, including printer sharing and
support for various applications such as electronic mail transfer and database access.
NetWare specifies the upper five layers of the OSI reference model and, as such, runs on
any media-access protocol (Layer 2). Additionally, NetWare runs on virtually any kind of
computer system, from PCs to mainframes. This chapter summarizes the principal
communications protocols that support NetWare.
NetWare was developed by Novell, Inc., and was introduced in the early 1980s. It was
derived from Xerox Network Systems (XNS), which was created by Xerox Corporation
in the late 1970s, and is based on a client-server architecture. Clients (sometimes called
workstations) request services, such as file and printer access, from servers.
4. NetBIOS
Services
"type" similar to the use of ports in TCP/IP. In NBT, the name service operates on UDP
port 137 (TCP port 137 can also be used, but it is rarely if ever used).
Session mode lets two computers establish a connection for a "conversation", allows
larger messages to be handled, and provides error detection and recovery. In NBT, the
session service runs on TCP port 139.
computer by sending a close request. The computer that started the session will reply with
a close response which prompts the final session closed packet.
Datagram mode is "connectionless". Since each message is sent independently, they must
be smaller; the application becomes responsible for error detection and recovery. In NBT,
the datagram service runs on UDP port 138.
5. NetBEUI
In order to properly describe NetBEUI, the transport protocol sometimes used for
Microsoft networking, it is necessary to describe Microsoft networking in some detail
and the various protocols used and what network layers they support.
NetBIOS, NetBEUI, and SMB are Microsoft Protocols used to support Microsoft
Networking. There are three methods of mapping NetBIOS names to IP addresses on
small networks that don't perform routing:
3. NBNS - NetBIOS Name Server. A server that maps NetBIOS names to IP
addresses. This service is provided by the nmbd daemon on Linux.
6. Named Pipes
A named pipe is a named, one-way or duplex pipe for communication between the pipe
server and one or more pipe clients. All instances of a named pipe share the same pipe
name, but each instance has its own buffers and handles, and provides a separate conduit
for client/server communication. The use of instances enables multiple pipe clients to use
the same named pipe simultaneously.
Any process can access named pipes, subject to security checks, making named pipes an
easy form of communication between related or unrelated processes.
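On POSIX systems the idea can be sketched with `os.mkfifo`, which creates a named pipe in the filesystem; any process that knows the name can open it. Windows named pipes use a different API (`CreateNamedPipe` and friends) but follow the same server/client pattern. This sketch uses a thread to stand in for the second process:

```python
import os
import tempfile
import threading

tmpdir = tempfile.mkdtemp()
path = os.path.join(tmpdir, "demo_pipe")
os.mkfifo(path)                    # create the named pipe in the filesystem

def pipe_server():
    # Opening for writing blocks until some client opens for reading.
    with open(path, "w") as pipe:
        pipe.write("hello from the pipe server\n")

t = threading.Thread(target=pipe_server)
t.start()

# The "client": any process that knows the pipe's name can connect.
with open(path) as pipe:
    message = pipe.read().strip()
print(message)                     # hello from the pipe server

t.join()
os.remove(path)
os.rmdir(tmpdir)
```

Note that a POSIX FIFO is a single byte stream rather than the instanced, message-oriented model Windows named pipes offer; the shared-name rendezvous is the common idea.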
Any process can act as both a server and a client, making peer-to-peer communication
possible. As used here, the term pipe server refers to a process that creates a named pipe,
and the term pipe client refers to a process that connects to an instance of a named pipe.
Named pipes can be used to provide communication between processes on the same
computer or between processes on different computers across a network. If the server
service is running, all named pipes are accessible remotely. If you intend to use a named
pipe locally only, deny access to NT AUTHORITY\NETWORK or switch to local RPC.
RPC makes the client/server model of computing more powerful and easier to program.
Messaging and Queuing
MOM is software that resides in both portions of client/server architecture and typically
supports asynchronous calls between the client and server applications. Message queues
provide temporary storage when the destination program is busy or not connected. MOM
reduces the involvement of application developers with the complexity of the master-
slave nature of the client/server mechanism. MOM comprises a category of inter-
application communication software that generally relies on asynchronous message-
passing, as opposed to a request-response metaphor.
Most message-oriented middleware depends on a message queue system, but some
implementations rely on broadcast or multicast messaging systems.
(i) Advantages
Storage
Most MOM systems provide persistent storage to back up the message transfer medium.
This means that the sender and receiver do not need to connect to the network at the same
time (asynchronous delivery). This becomes particularly useful when dealing with
intermittent connections, such as unreliable networks, casual users or timed connections.
It also means that should the receiver application fail for any reason, the senders can
continue unaffected, as the messages they send will simply accumulate in the message
store for later processing when the receiver restarts.
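Store-and-forward delivery can be sketched with a toy in-memory queue. Real MOM products persist the store to disk so that messages survive restarts; the class and message format here are purely illustrative:

```python
from collections import deque

class MessageQueue:
    """Toy queue standing in for MOM's (normally persistent) message store."""
    def __init__(self):
        self.store = deque()

    def send(self, msg):
        self.store.append(msg)   # the sender never blocks on the receiver

    def receive(self):
        return self.store.popleft() if self.store else None

q = MessageQueue()
# The receiver is offline or busy: messages simply accumulate in the store.
q.send({"op": "debit", "amount": 40})
q.send({"op": "credit", "amount": 15})

# Later, the receiver (re)connects and drains whatever accumulated, in order.
drained = []
while (msg := q.receive()) is not None:
    drained.append(msg)
    print(msg)
```

This is the essence of asynchronous delivery: unlike an RPC call, `send` returns immediately whether or not anyone is there to receive.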
Routing
MOM delivers another important advantage through its ability to route messages within
the middleware layer itself. Taking things a step further, middleware messaging can
deliver a single message to more than one recipient (broadcast or multicast).
Transformation
In a message-based middleware system, the recipient's message need not replicate the
sender's message exactly. A MOM system with built-in intelligence can transform
messages en-route to match the requirements of the sender or of the recipient. In
conjunction with the routing and broadcast/multicast facilities, one application can send a
message in its own native format, and two or more other applications may each receive a
copy of the message in their own native format. Many modern MOM systems provide
sophisticated message transformation (or mapping) tools, which allow programmers to
specify transformation rules with a simple GUI drag-and-drop operation.
(ii) Disadvantages
The primary disadvantage of many message oriented middleware systems is that they
require an extra component in the architecture, the message transfer agent (Message
broker). As with any system, adding another component can lead to reductions in
performance and reliability, and can also make the system as a whole more difficult and
expensive to maintain.
situations. That said, most MOM systems have facilities to group a request and a
response as a single pseudo-synchronous transaction.
MOM Vs RPC
Objective Type Questions
1. RPC is an example of
a. Client
b. Server
c. Middleware
d. OS
2. In RPC, end-to-end delivery service is defined in
a. Physical Layer
b. Transport Layer
c. MAC Layer
d. Application Layer
3. Which one of the following provides a common interface to the MACs and a
reliable link service for transmitting communication packets between two nodes?
a. LLC Layer
b. Transport Layer
c. MAC Layer
d. Application Layer
4. The endpoint of a two-way communication link between two programs running
on the network is called
a. Session
b. Protocol
c. Socket
d. RPC
5. Every TCP connection can be uniquely identified by ----- endpoints.
a. 1
b. 2
c. 3
d. 4
6. An independent, self-contained message sent over the network whose arrival,
arrival time, and content are not guaranteed is called
a. session
b. datagram
c. packet
d. socket
7. Which one of the following provides communication between processes on the
same computer or between processes on different computers across a network?
a. socket
b. protocol
c. Named pipes
d. NetBIOS
Review Questions
Two Mark Questions
1. Define RPC.
2. What are the requirements of RPC message protocol?
3. Define Peer-to-peer.
4. Define Socket.
5. Define datagram.
6. Compare Datagram vs Session.
7. Write short notes on NetWare.
8. Write short notes on NetBIOS.
9. Write short notes on NetBEUI.
10. What is the use of named pipes?
11. Define MOM.
12. What are the advantages of MOM?
Big Questions
1. Explain in detail RPC messaging and peer-to-peer Communications.
2. Compare MOM vs RPC.
The Evolution of NOS
The NOS has evolved in three stages, each building on the services of the last:
* Department NOS: file server, print server, flat directories, password authentication.
* Enterprise NOS: global directories, distributed file server, Kerberos authentication,
RPC, DCE security, DCE networking, single logon.
* Internet NOS: standards-based directories, public key infrastructure, digital
certificates, firewalls/VPNs, SSL/IPsec, SET, distributed time,
publish-and-subscribe/MOM, IIOP, Java mobile code.
DCE
The major components of DCE include a remote procedure call (RPC) mechanism, a
directory service, a time service, an authentication service and a distributed file system
(DFS) known as DCE/DFS.
Much of DCE setup requires the preparation of distributed directories so that DCE
applications and related data can be located when they are being used. DCE includes
security support and some implementations provide support for access to popular
databases such as IBM's CICS, IMS, and DB2 databases. DCE was developed by the
Open Software Foundation (OSF) using software technologies contributed by some of its
member companies.
The largest unit of management in DCE is a cell. The highest privileges within a cell are
assigned to a role called cell administrator, normally assigned to the "user" cell_admin.
Note that this need not be a real OS-level user. The cell_admin has all privileges over all
DCE resources within the cell. Privileges can be awarded to or removed from the
following categories: user_obj, group_obj, other_obj, any_other for any given DCE
resource. The first three correspond to the owner, group member, and any other DCE
principal respectively. The last group contains any non-DCE principal. Multiple cells can
be configured to communicate and share resources with each other. All principals from
external cells are treated as "foreign" users and privileges can be awarded or removed
accordingly. In addition to this, specific users or groups can be assigned privileges on any
DCE resource, something which is not possible with the traditional UNIX filesystem,
which lacks ACLs.
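The evaluation of these four categories can be sketched as follows. This is a hypothetical model for illustration only (all names and structures below are invented); real DCE ACLs are managed with tools such as acl_edit:

```python
# Hypothetical model for illustration only -- not the DCE ACL
# implementation. A principal is classified into one of the four
# categories, and the ACL grants that category's permissions.
ACL_CATEGORIES = ("user_obj", "group_obj", "other_obj", "any_other")

def category(principal, owner, group, local_cell):
    """Classify a principal against a resource owned by owner:group."""
    if principal.get("cell") != local_cell:
        return "any_other"      # not a principal of this cell
    if principal["name"] == owner:
        return "user_obj"       # the owner
    if group in principal.get("groups", ()):
        return "group_obj"      # a member of the owning group
    return "other_obj"          # any other DCE principal in the cell

# Permissions per category for one hypothetical resource.
acl = {"user_obj": {"read", "write"}, "group_obj": {"read"},
       "other_obj": set(), "any_other": set()}

alice = {"name": "alice", "cell": "hq", "groups": ["staff"]}
cat = category(alice, owner="alice", group="staff", local_cell="hq")
print(cat)                  # user_obj
print("write" in acl[cat])  # True
```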
A DCE cell contains three main servers:
1. The Security Server, which is responsible for authentication;
2. The Cell Directory Server (CDS), which is the repository of resources and ACLs;
and
3. The Distributed Time Server, which provides an accurate clock for the proper
functioning of the entire cell.
(i) DCE/RPC
Note that DCE/RPC should not be confused with the Distributed Computing
Environment (DCE) as a whole; DCE includes, in addition to DCE/RPC, a suite of
services built on DCE/RPC that provide, amongst other things, Cell Directory Services (CDS)
and the DCE Distributed File System (DFS). It should also not be confused with RPC as
a whole, as RPC refers to a wide range of different and often incompatible technologies
such as ONC RPC and XML-RPC. It is used by Microsoft Exchange/Outlook.
The DCE directory services provide access for applications and users to a federation of
naming systems at the global, enterprise, and application levels.
* The Global Directory Service (GDS) is the DCE implementation of the CCITT
(International Telegraph and Telephone Consultative Committee) 1988 X.500
international standard. GDS is a distributed, replicated directory service that manages a
global namespace for names anywhere in the world.
* The Cell Directory Service (CDS) is a distributed, replicated directory service that
manages names within a DCE cell.
* The Global Directory Agent (GDA) is a daemon that uses global name services to help
applications access names in remote cells. GDA interacts with either X.500 services such
as GDS or Internet Domain Name System (DNS) services such as the Berkeley Internet
Name Domain (BIND) name server, named.
Through these services, DCE applications can access several interconnected namespaces,
including X.500, DNS, CDS, the DCE security namespace, and the DCE Distributed File
Service (DFS) filespace.
(ii) Distributed Time Service (DTS)
- DTS represents each timestamp as an interval that bounds its inaccuracy; given two timestamps with overlapping time intervals, neither is ‘earlier’ than the other.
• Authentication Service
- allows a process to verify the identity of another process
• Authorization (Privilege) Service
- allows a server to determine whether client access should be granted to a
resource
• Registry Service
- maintains the DCE security database
• Access Control List Facility
- allows users to grant and revoke access to resources they own
• Login Facility
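The interval-based ordering that DTS uses (noted under (ii) above) can be sketched in a few lines. This is an illustration only, not the DTS API:

```python
# Illustration only -- not the DTS API. Each timestamp is an interval
# [t - inaccuracy, t + inaccuracy]; one timestamp is "earlier" than
# another only if its interval ends before the other's begins, so
# overlapping intervals are unordered.
def earlier(a, b):
    """a and b are (low, high) intervals; True if a is strictly earlier."""
    return a[1] < b[0]

t1 = (10.0, 10.4)  # 10.2 +/- 0.2 seconds
t2 = (10.3, 10.9)  # 10.6 +/- 0.3 seconds -- overlaps t1
t3 = (11.0, 11.2)

print(earlier(t1, t3))                     # True: intervals are disjoint
print(earlier(t1, t2) or earlier(t2, t1))  # False: overlap, so unordered
```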
(v) DCE Distributed File System
The DCE Distributed File System (DCE/DFS) is the remote file access protocol used
with the Distributed Computing Environment. DCE/DFS consists of multiple
cooperating components that provide a network file system with strong file system
semantics, attempting to mimic the behavior of POSIX local file systems while taking
advantage of performance optimizations when possible. A DCE/DFS client system
maintains a locally managed cache that contains copies (or regions) of the original
file. The client system coordinates with the server system where the original copy of
the file is stored to ensure that multiple clients accessing the same file re-fetch a
cached copy of the file data when the original file has changed.
The advantage of this approach is that it provides very good performance, even over
slow network connections, because most file access is actually done against the local
cached regions of the file. If the server fails, the client can continue making changes to
the file locally, storing them back to the server when it becomes available again.
DCE/DFS also divorces the concept of logical units of management (filesets) from the
underlying volume on which a fileset is stored. In doing this, it allows administrative
control of the location of a fileset in a manner that is transparent to the end user. To
support this and other advanced DCE/DFS features, a local journaling file system was
developed to provide the full range of support options.
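The client/server cache coordination described above can be sketched as follows. All class and method names here are invented for illustration; this is not the DCE/DFS protocol itself:

```python
# Invented names, for illustration only. The client keeps a cached
# copy tagged with the version it last saw and re-fetches from the
# server only when the original file has changed.
class FileServer:
    def __init__(self, data):
        self.data, self.version = data, 1

    def write(self, data):
        self.data, self.version = data, self.version + 1

    def read(self):
        return self.data, self.version

class CachingClient:
    def __init__(self, server):
        self.server = server
        self.cache, self.cached_version = None, 0

    def read(self):
        if self.cached_version != self.server.version:  # cache is stale
            self.cache, self.cached_version = self.server.read()
        return self.cache  # otherwise serve the local cached copy

srv = FileServer(b"v1 contents")
cli = CachingClient(srv)
print(cli.read())          # fetches b'v1 contents' from the server
srv.write(b"v2 contents")
print(cli.read())          # version changed, re-fetches b'v2 contents'
```

Repeated reads between writes never touch the server, which is why most access is satisfied from the local cache.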
A multithreaded server can dedicate a separate thread of control to each client request.
Often servers manage information, requiring input/output operations to a storage
device. While one server thread is waiting for its input or output operation to finish,
another server thread can continue working, improving overall performance.
Using multiple threads puts new requirements on programmers: they must manage the
threads, synchronize threads' access to global resources, and make choices about thread
scheduling and priorities. A threads implementation must provide facilities for
programmers to perform these tasks.
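A minimal sketch of that synchronization requirement, using Python's standard threading module: several threads update one shared (global) counter, so access is serialized with a lock.

```python
import threading

# Several threads update one shared (global) counter; a lock
# synchronizes their access so no increment is lost.
counter = 0
lock = threading.Lock()

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:  # serialize access to the shared resource
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 -- deterministic only because of the lock
```

Without the lock, concurrent read-modify-write cycles could interleave and the final count could fall short.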
The latest buzzword in the computer world is the ‘Internet’. It has taken the entire
world by surprise with its cutting-edge technology to connect people and computers
throughout the world. Using the Internet, organizations all over the world can exchange
data, people can communicate with each other in a faster and more effective way, and
researchers can gather information in their respective areas of research. With the help of
video conferencing over the Internet, people can even see each other while
communicating. One can even do all of one's shopping from home, without bothering to
go to the crowded market place. Slowly, shopkeepers are also opting for electronic
commerce, which gives them greater reach and a faster way to do business over the
Internet. Don't be surprised if you come to know that the paanwalla in your locality has
started selling his paan over the Internet. The Internet is evolving into a first-class
intergalactic NOS, with global directories, system management and extensive security.
(i) Web Security
1. Encryption
2. Authentication
3. Firewalls
4. Non-Repudiation
(ii) SSL
The Secure Sockets Layer (SSL) is a commonly used protocol for managing the security
of a message transmission on the Internet. SSL has recently been succeeded by Transport
Layer Security (TLS), which is based on SSL. SSL uses a program layer located between
the Internet's Hypertext Transfer Protocol (HTTP) and Transmission Control Protocol (TCP)
layers. SSL is included as part of both the Microsoft and Netscape browsers and most
Web server products. Developed by Netscape, SSL also gained the support of Microsoft
and other Internet client/server developers as well and became the de facto standard until
evolving into Transport Layer Security. The "sockets" part of the term refers to the
sockets method of passing data back and forth between a client and a server program in a
network or between program layers in the same computer. SSL uses the public-and-
private key encryption system from RSA, which also includes the use of a digital
certificate.
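As a concrete illustration of that layering, Python's standard ssl module wraps a plain TCP socket in an SSL/TLS session, verifying the server's certificate against the system trust store. The host name below is only a placeholder, and this sketch is not tied to any particular server:

```python
import socket
import ssl

# A plain TCP socket is wrapped in an SSL/TLS session; the server
# certificate is verified against the system trust store.
# "example.org" is only a placeholder host name.
def fetch_head_over_tls(host, port=443):
    context = ssl.create_default_context()  # verifies certificates by default
    with socket.create_connection((host, port)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            request = b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n"
            tls_sock.sendall(request)
            return tls_sock.version(), tls_sock.recv(64)

# Uncomment to try against a live host:
# protocol_version, first_bytes = fetch_head_over_tls("example.org")
```

Note how the HTTP request is written to the wrapped socket: SSL/TLS sits between the application protocol and TCP, exactly as described above.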
TLS and SSL are an integral part of most Web browsers (clients) and Web servers. If a
Web site is on a server that supports SSL, SSL can be enabled and specific Web pages can
be identified as requiring SSL access. Any Web server can be enabled by using
Netscape's SSLRef program library which can be downloaded for noncommercial use or
licensed for commercial use.
TLS and SSL are not interoperable. However, a message sent with TLS can be handled
by a client that handles SSL but not TLS. Secure Sockets Layer (SSL) technology
protects your Web site and makes it easy for your Web site visitors to trust you.
(iii) S-HTTP
S-HTTP (Secure HTTP) is an extension to the Hypertext Transfer Protocol (HTTP) that
allows the secure exchange of files on the World Wide Web. Each S-HTTP file is either
encrypted, contains a digital certificate, or both. For a given document, S-HTTP is an
alternative to another well-known security protocol, Secure Sockets Layer (SSL). A
major difference is that S-HTTP allows the client to send a certificate to authenticate the
user whereas, using SSL, only the server can be authenticated. S-HTTP is more likely to
be used in situations where the server represents a bank and requires authentication from
the user that is more secure than a userid and password.
S-HTTP does not use any single encryption system, but it does support the Rivest-
Shamir-Adleman public key infrastructure encryption system. SSL works at a program
layer slightly higher than the Transmission Control Protocol (TCP) level. S-HTTP works
at the even higher level of the HTTP application. Both security protocols can be used by a
browser user, but only one can be used with a given document. Terisa Systems includes
both SSL and S-HTTP in their Internet security tool kits.
A number of popular Web servers support both S-HTTP and SSL. Newer browsers
support both SSL and S-HTTP. S-HTTP has been submitted to the Internet Engineering
Task Force (IETF) for consideration as a standard; Request for Comments (RFC) 2660
describes S-HTTP in detail.
(iv) IPsec
Internet Protocol Security (IPsec) is a protocol suite for securing Internet Protocol (IP)
communications by authenticating and encrypting each IP packet of a data stream. IPsec
also includes protocols for establishing mutual authentication between agents at the
beginning of the session and negotiation of cryptographic keys to be used during the
session. IPsec can be used to protect data flows between a pair of hosts (e.g. computer
users or servers), between a pair of security gateways (e.g. routers or firewalls), or
between a security gateway and a host. [1]
IPsec is a dual mode, end-to-end, security scheme operating at the Internet Layer of the
Internet Protocol Suite or OSI model Layer 3. Some other Internet security systems in
widespread use, such as Secure Sockets Layer (SSL), Transport Layer Security (TLS)
and Secure Shell (SSH), operate in the upper layers of these models. Hence, IPsec can be
used for protecting any application traffic across the Internet. Applications need not be
specifically designed to use IPsec. The use of TLS/SSL, on the other hand, must typically
be incorporated into the design of applications.
IPsec is a successor of the ISO standard Network Layer Security Protocol (NLSP). NLSP
was based on the SP3 protocol that was published by NIST, but designed by the Secure
Data Network System project of the National Security Agency (NSA).
(v) Firewalls
Firewalls make it possible to filter incoming and outgoing traffic that flows through your
system. A firewall can use one or more sets of “rules” to inspect network packets as
they come in or go out of your network connections and either allow the traffic through
or block it. The rules of a firewall can inspect one or more characteristics of the packets,
including but not limited to the protocol type, the source or destination host address, and
the source or destination port.
Firewalls can greatly enhance the security of a host or a network. They can be used to do
one or more of the following things:
To protect and insulate the applications, services and machines of your internal
network from unwanted traffic coming in from the public Internet.
To limit or disable access from hosts of the internal network to services of the
public Internet.
To support network address translation (NAT), which allows your internal
network to use private IP addresses and share a single connection to the public
Internet (either with a single IP address or by a shared pool of automatically
assigned public addresses).
There are several kinds of firewall:
Packet filter: Looks at each packet entering or leaving the network and
accepts or rejects it based on user-defined rules. Packet filtering is fairly effective
and transparent to users, but it is difficult to configure. In addition, it is
susceptible to IP spoofing.
Application gateway: Applies security mechanisms to specific applications,
such as FTP and Telnet servers. This is very effective, but can impose a
performance degradation.
Circuit-level gateway: Applies security mechanisms when a TCP or UDP
connection is established. Once the connection has been made, packets can flow
between the hosts without further checking.
Proxy server: Intercepts all messages entering and leaving the network. The
proxy server effectively hides the true network addresses.
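A toy packet filter illustrates the first technique above: user-defined rules inspect the protocol, source address and destination port, and the first matching rule decides the packet's fate. The rules and addresses below are invented purely for illustration:

```python
import ipaddress

# Toy rule set, invented for illustration: allow SSH only from the
# internal 10.0.0.0/8 network, reject SSH from anywhere else, and
# accept everything that no earlier rule matched.
RULES = [
    {"proto": "tcp", "dst_port": 22, "src": "10.0.0.0/8", "action": "accept"},
    {"proto": "tcp", "dst_port": 22, "action": "reject"},
    {"action": "accept"},  # default rule: no conditions, matches anything
]

def matches(rule, pkt):
    """True if every condition present in the rule holds for the packet."""
    if "proto" in rule and rule["proto"] != pkt["proto"]:
        return False
    if "dst_port" in rule and rule["dst_port"] != pkt["dst_port"]:
        return False
    if "src" in rule and ipaddress.ip_address(pkt["src"]) not in \
            ipaddress.ip_network(rule["src"]):
        return False
    return True

def filter_packet(pkt):
    """The first matching rule decides whether to accept or reject."""
    for rule in RULES:
        if matches(rule, pkt):
            return rule["action"]

print(filter_packet({"proto": "tcp", "src": "10.1.2.3", "dst_port": 22}))    # accept
print(filter_packet({"proto": "tcp", "src": "203.0.113.9", "dst_port": 22})) # reject
```

Real packet filters work the same first-match way, which is why rule ordering matters and why a spoofed source address (IP spoofing) can defeat address-based rules.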
Objective Type Questions
1. RPC, Global Directories are
a. Enterprise NOS
b. Department NOS
c. Internet NOS
d. None of the above
2. File server is one of the
a. Enterprise NOS
b. Department NOS
c. Internet NOS
d. None of the above
3. Firewalls, SSL are
a. Enterprise NOS
b. Department NOS
c. Internet NOS
d. None of the above
4. Which server is used in authentication in DCE?
a. Security Server
b. Cell Directory Server
c. Remote Server
d. Distributed Time Server
5. Which one of the following is used for managing the security of a message
transmission on the Internet?
a. HTTP
b. FTP
c. S-HTTP
d. SSL
6. SSL is located between
a. HTTP and TCP
b. HTTP and Presentation Layer
c. S-HTTP and TCP
d. Application Layer and TCP
7. ------------- is used to protect data flows between a pair of hosts, between a pair of
security gateways, or between a security gateway and a host.
a. SSL
b. HTTP
c. S-HTTP
d. IPSec
8. A firewall that looks at each packet entering or leaving the network and accepts or
rejects it based on user-defined rules is called
a. Packet filter
b. Application gateway
c. Circuit-level gateway
d. Proxy server
9. Proxy Server is one of the
a. Gateway
b. File Server
c. Web Server
d. Firewalls
10. Application gateway is one of the
a. Gateway
b. Application Server
c. Web Server
d. Firewalls
Review Questions
Two Mark Questions
1. Name any 4 Department NOS.
2. Define DCE.
3. Define DCE/RPC.
4. What are the components of DCE?
5. What are the DCE directory services?
6. What is meant by DTS?
7. Write short notes on DCE/DFS.
8. Define DCE threads.
9. Define SSL.
10. Define HTTP.
11. Define S-HTTP.
12. Write short notes on IPSec.
13. What is meant by Firewalls?
14. Define Application gateway.
15. What are the types of Firewalls?
Big Questions
1. Explain in detail about DCE.
2. Explain in detail about the Internet as NOS.
ASSIGNMENT QUESTIONS
--------------------------------------------------------------------------------------------------------