CS2363-COMPUTER NETWORKS
Unit I
Introduction to networks - Network architecture - Network performance - Direct link networks: encoding, framing, error detection, transmission - Ethernet - Rings - FDDI - Wireless networks - Switched networks: bridges
Unit II
Internetworking: IP - ARP - Reverse Address Resolution Protocol - Dynamic Host Configuration Protocol - Internet Control Message Protocol - Routing: routing algorithms - Addressing: subnetting, CIDR - Inter-domain routing - IPv6
Unit III
Transport layer: User Datagram Protocol (UDP) - Transmission Control Protocol (TCP) - Congestion control - Flow control - Queuing disciplines - Congestion avoidance mechanisms
Unit IV
Data compression: introduction to JPEG, MPEG, and MP3 - Cryptography: symmetric-key, public-key - Authentication - Key distribution - Key agreement - PGP - SSH - Transport layer security - IP security - Wireless security - Firewalls
Unit V
Domain Name System (DNS) - E-mail - World Wide Web (HTTP) - Simple Network Management Protocol (SNMP) - File Transfer Protocol (FTP) - Web services - Multimedia applications - Overlay networks
REFERENCES / WEBSITES:
[1] Larry L. Peterson and Bruce S. Davie, Computer Networks: A Systems Approach, Fourth Edition, Morgan Kaufmann Publishers (Elsevier), 2009.
[2] Andrew S. Tanenbaum, Computer Networks, Fourth Edition, PHI Learning, 2003.

CS2363 CN / Unit 1

UNIT 1 COMPUTER NETWORKS


A computer network is a collection of computers and devices connected by communication channels that facilitate communication among users and allow users to share resources with one another.
Connectivity between computers can be either private or public (e.g., the Internet). For reasons of privacy and
security, many private (corporate) networks have the explicit goal of limiting the set of machines that are
connected. In contrast, other networks (of which the Internet is the prime example) are designed to grow in a
way that allows them the potential to connect all the computers in the world. A system that is designed to
support growth to an arbitrarily large size is said to scale.
Links, Nodes, and Clouds
Network connectivity occurs at many different levels. At the lowest level, a network can consist of two or more
computers directly connected by some physical medium, such as a coaxial cable or an optical fiber. We call such
a physical medium a link, and we often refer to the computers it connects as nodes. (Sometimes a node is a
more specialized piece of hardware rather than a computer.)
Types of Connectivity:
1) Direct links
   a) Point-to-point
   b) Multiple access
2) Indirect connectivity (switched networks)
   a) Circuit switched
   b) Packet switched
Physical links are sometimes limited to a pair of nodes (such a link is said to be point-to-point), while in other
cases, more than two nodes may share a single physical link (such a link is said to be multiple access). Whether
a given link supports point-to-point or multiple access connectivity depends on how the node is attached to the
link. It is also the case that multiple-access links are often limited in size, in terms of both the geographical
distance they can cover and the number of nodes they can connect. The exception is a satellite link, which can
cover a wide geographic area.

Circuit switched networks:


e.g. Telephone networks

First, it establishes a dedicated circuit across a sequence of links and then allows the source node to
send a stream of bits across this circuit to a destination node.
Packet switched networks:

RMDEC

Nodes in this network send discrete blocks of data to each other.



Block of data -> packet / message

Uses a strategy called store-and-forward

Each node receives a complete packet over some link, stores the packet in its internal memory and then
forwards the complete packet to the next node.

More efficient than a circuit-switched network, since link capacity is not held idle by a dedicated circuit
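The store-and-forward behaviour described above can be quantified with a small sketch; the function name and the example figures are illustrative, and propagation and queuing delays are ignored:

```python
def store_and_forward_delay(packet_bits, link_bps, num_links):
    """Total delay when every node on the path must receive the complete
    packet before forwarding it (propagation and queuing ignored)."""
    # Each of the num_links links must carry the full packet in turn.
    return num_links * packet_bits / link_bps

# A 1500-byte (12,000-bit) packet crossing 3 links of 1 Mbps each:
print(store_and_forward_delay(12_000, 1_000_000, 3))  # about 0.036 seconds
```

With a circuit already established, the same packet would occupy the path for only a single transmission time; store-and-forward trades some per-packet delay for much better sharing of the links.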

Internetwork (internet):
A set of computers can also be indirectly connected. A set of independent networks (clouds) is interconnected to form an internetwork. A node that is connected to two or more networks is commonly called a router or gateway. A
router/gateway forwards messages from one network to another.
Network performance is measured using the metrics bandwidth and latency. The bandwidth (also referred to as throughput) of a network is given by the number of bits that can be transmitted over the network in a certain period of
time. The more sophisticated the transmitting and receiving technology, the narrower each bit can become, and
thus, the higher the bandwidth. Latency corresponds to the time taken by a message to travel from one end of
the network to the other and is measured in terms of time. There are situations in which it is important to know
how long it takes to send a message from one end of a network to the other and back, rather than the one-way
latency. Two-way latency is referred to as the round trip time of the network.
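The two metrics combine into a rough estimate of how long a message takes to arrive; the helper below is an illustrative sketch, not a formula from the text:

```python
def transfer_time(latency_s, size_bits, bandwidth_bps):
    """One-way delivery time: propagation latency plus the time
    needed to push all of the bits onto the link."""
    return latency_s + size_bits / bandwidth_bps

# 1 MB (8,000,000 bits) over a 10 Mbps link with 50 ms one-way latency:
print(transfer_time(0.050, 8_000_000, 10_000_000))  # about 0.85 seconds

# For a tiny request/response exchange, the round trip time dominates:
rtt = 2 * 0.050
print(rtt)  # 0.1 seconds
```

Note how, for large messages, the size/bandwidth term dominates, while for small messages the latency (and hence the RTT) dominates.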
1.1 INTRODUCTION
Data Communications is the transfer of data or information between a source and a receiver. The source
transmits the data and the receiver receives it.
The effectiveness of a data communication system depends on three characteristics:
1. Delivery
2. Accuracy
3. Timeliness
Delivery: The system must deliver data to the correct destination.
Accuracy: The system must deliver data accurately.
Timeliness: The system must deliver data in a timely manner. Timely delivery means delivering data as they are
produced, in the same order that they are produced and without significant delay. This kind of delivery is
called real time transmission.

1.1.1 Components

Source: It is the transmitter of data. Some examples are Terminal, Computer, and Mainframe.

Medium: The communications stream through which the data is being transmitted. Some examples are:
Cabling, Microwave, Fiber optics, Radio Frequencies (RF), Infrared Wireless

Receiver: The receiver of the data transmitted. Some examples are Printer, Terminal, Mainframe, and Computer.

Message: It is the data that is being transmitted from the Source/Sender to the Destination/Receiver.
Protocol: It is the set of rules to be followed for sending the data from the source as well as for interpreting the data
received.

DCE: The interface between the Source & the Medium, and the Medium & the Receiver is called the DCE
(Data Communication Equipment) and is a physical piece of equipment.
DTE: Data Terminal Equipment is the Telecommunication name given to the Source and Receiver's equipment.

1.1.2 Direction of Data Flow


Data flow is the flow of data between 2 points. The direction of the data flow can be described as Simplex, Half-Duplex and Full-Duplex.
Simplex: In this type of data communication the data flows in only one direction on the data communication line
(medium). Examples are Keyboard, Monitor, Radio and Television broadcasts. They go from the TV station to
your home television. Figure 1.3 represents the simplex data flow.


Half-Duplex: In this type of data communication the data flows in both directions but at a time in only one
direction on the data communication line. Example Conversation on walkie-talkies is a half-duplex data flow. Each
person takes turns talking. If both talk at once - nothing occurs. Figure 1.4 represents the Half-Duplex data flow.

Figure 1.4: Half-Duplex Data Flow


Full-Duplex: In this type of data communication the data flows in both directions simultaneously. Example
Telephones and Modems are configured to flow data in both directions. Figure 1.5 represents the full-duplex data
flow.

Figure 1.5: Full-Duplex Data Flow

Different Communication Modes:


1) Unicast
Unicast packets are sent from host to host. The communication is from a single host to another single host.
There is one device transmitting a message destined for one receiver.
2) Broadcast
Broadcast is when a single device is transmitting a message to all other devices in a given address range. This
broadcast could reach all hosts on the subnet, all subnets, or all hosts on all subnets. Broadcast packets have the
host (and/or subnet) portion of the address set to all ones. By design, most modern routers will block IP
broadcast traffic and restrict it to the local subnet.
3) Multicast
Multicast is a special protocol for use with IP. Multicast enables a single device to communicate with a specific
set of hosts, not defined by any standard IP address and mask combination. This allows for communication that
resembles a conference call. Anyone from anywhere can join the conference, and everyone at the conference
hears what the speaker has to say. The speaker's message isn't broadcasted everywhere, but only to those in the
conference call itself. A special set of addresses is used for multicast communication.
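The addressing rules above can be checked with Python's standard `ipaddress` module; the network and addresses chosen are examples only:

```python
import ipaddress

# A directed broadcast address has the host portion set to all ones:
net = ipaddress.ip_network("192.168.10.0/24")
print(net.broadcast_address)          # 192.168.10.255

# Multicast uses its own reserved range (224.0.0.0/4) rather than a
# standard address/mask combination:
print(ipaddress.ip_address("224.0.0.1").is_multicast)     # True
print(ipaddress.ip_address("192.168.10.7").is_multicast)  # False
```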

1.1.3 Networks
A network is a set of devices (called nodes) connected by media links. A node is any device that is capable of
sending / receiving data generated by other nodes on the network. Examples of nodes are computers and workstations.
Components of networks
The basic hardware components of computer networks are
Network Interface Card (NIC): It is the hardware that allows a computer to communicate with another computer
on an Ethernet network. If a computer has to be connected to a network, a NIC must be installed in the
computer. The NIC provides a dedicated full-time connection to the network. The NIC contains firmware that
controls the protocol, and the Ethernet controller required to support the Media Access Control (MAC)
protocol used by Ethernet. It allows systems to communicate using the physical address, also
referred to as the MAC address, which is burnt into the card.
A MAC address is a 48-bit hardware address that uniquely identifies the device on the network. A MAC address is
expressed as 12 hexadecimal digits, of which the first 6 digits are the vendor code (Organizationally
Unique Identifier) and the last 6 digits represent the interface serial number for that vendor. The NIC works at layers 1
and 2 of the OSI model.
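The vendor-code/serial split described above can be sketched as follows; `split_mac` and the sample address are made up for illustration:

```python
def split_mac(mac):
    """Split a MAC address into its 6-digit vendor code (OUI) and the
    6-digit interface serial number assigned by that vendor."""
    digits = mac.replace(":", "").replace("-", "").lower()
    if len(digits) != 12:
        raise ValueError("a MAC address has exactly 12 hex digits")
    return digits[:6], digits[6:]

oui, serial = split_mac("00:1A:2B:3C:4D:5E")
print(oui, serial)  # 001a2b 3c4d5e
```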

Figure 1.6: Network Interface Card


Repeater: It is an electronic device which is used to enhance the strength of the incoming signal so that the signal
can travel a longer distance. In computer networks we use repeaters when the distance between the networked
computers is greater than the distance over which the cable can carry the signal without degradation.
Repeaters preserve signal integrity and extend the distance over which data can
safely travel. The repeater works at layer 1 of the OSI model.
Hub: It is a network device which is used for connecting multiple Ethernet devices together; hence it is also
referred to as a concentrator. A hub is called a multiport repeater. The hub works at layer 1 of the OSI model and is
also called a layer 1 switch. It forwards a jam signal to all the devices as soon as it detects a collision.
Bridge: It is a network device which is used to connect LAN segments. It either filters or forwards a packet
depending on whether or not the destination is in the same segment as the source. It works at layer 2 of the OSI
model.
Switches: Switches were initially introduced as layer 2 devices, used to forward data at layer 2 of the
OSI reference model. Nowadays we also have switches that process data at the network layer; they are referred to as
layer 3 or multilayer switches.
Routers: A router is a network device that works at layer 3 of the OSI reference model. Routers connect two or more
logical subnets, which do not necessarily map one-to-one to the physical interfaces of the router. The term
layer 3 switch is often used interchangeably with router, but switch is really a general term without a
rigorous technical definition.


Categories of networks
The three primary categories of networks are Local Area Network (LAN), Metropolitan Area
Network (MAN), and Wide Area Network (WAN). The category into which a network falls is determined by its
size, ownership, the distance it covers, and its physical architecture.
LAN: A LAN is usually privately owned and links the devices in a single office, building or campus. A
LAN can be as simple as two PCs or it can extend throughout a company. LAN size is limited to a few
kilometers. The most widely used LAN system is the Ethernet system developed by the Xerox Corporation.

MAN: A MAN is designed to extend over an entire city. It could be a single network, such as a cable TV
network, or it could connect a number of LANs into a larger network, so that resources may be shared
LAN to LAN as well as device to device. A company can use a MAN to connect
the LANs in all its offices throughout a city. A MAN can be owned by a private company or it may be a service
provided by a public company, such as a local telephone company. Telephone companies provide a popular
MAN service called Switched Multi-megabit Data Service (SMDS).

Figure 1.9: Metropolitan Area Network


WAN: A WAN provides long-distance transmission of data, voice, image and video information over large
geographic areas. It may span a country, a continent, or even the whole world. Transmission rates are
typically 2 Mbps, 34 Mbps, 45 Mbps, 155 Mbps and 625 Mbps. WANs utilize public, leased, or private
communication equipment, usually in combination, and can therefore span an unlimited number of miles. A
WAN that is wholly owned and used by a single company is referred to as an enterprise network. The
figure compares the different types of networks.

1.1.4 Types of Connections


Point to Point: It provides a dedicated link between two devices. The entire capacity of the link is reserved for
transmission between these two devices. Figure 1.11 illustrates point to point connection

Figure 1.11: Point-to-point link


Multipoint: It is a connection in which more than two specific devices share a single link. In this
environment a single channel is shared, either spatially or temporally. If several devices can use the link at
the same time, it is said to be spatially shared. If the devices take turns to use the link, then it is referred to as
timesharing. Figure 1.12 illustrates a multipoint connection.

1.1.5 Topologies
Physical topology refers to the way in which a network is laid out physically. Two or more
links form a topology. The topology of a network is the geometric representation of the
relationship of all the links and linking devices to one another. The basic topologies are:
Mesh
Star
Bus and
Ring
Mesh: In a mesh topology each device has a dedicated point to point link to every other device. The term dedicated
means that the link carries traffic only between the two devices it connects. A fully connected mesh network
therefore has n(n-1)/2 physical channels to link n devices. To accommodate that many links, every device on the
network must have (n-1) I/O ports. Generally a partial mesh topology is implemented, in which case the
decision on which links to implement must be made appropriately.
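The n(n-1)/2 and (n-1) figures above follow directly; a quick sketch:

```python
def mesh_links(n):
    # A fully connected mesh needs one dedicated link per pair of devices.
    return n * (n - 1) // 2

def ports_per_device(n):
    # Every device needs one I/O port for each of the other devices.
    return n - 1

# Five devices already require 10 links and 4 ports per device:
print(mesh_links(5), ports_per_device(5))  # 10 4
```

The quadratic growth of `mesh_links` is exactly why full mesh becomes impractical beyond a handful of devices.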

Figure 1.13: Mesh Topology


Merits
Dedicated link guarantees that each connection can carry its own data load. This eliminates the traffic
problems that occur when the links are shared by multiple devices.
If one link becomes unusable, it does not incapacitate the entire system.

Since every message travels along a dedicated line only the intended recipient will receive the message and
hence the data is secure.
Demerits
The amount of cabling and the number of I/O ports required increase with the number of devices connected to the
network.
Installation and reconnection are difficult
The sheer bulk of the wire can require more space than is available.
The hardware required to connect each link can be prohibitively expensive.
Star topology: Each device has a dedicated point to point link only to a central controller usually called a hub.
If one device has to send data to another it sends the data to the controller, which then relays the data to the
other connected device.


Figure 1.14: Star Topology


Merits

Less expensive than a mesh topology. Each device needs only one link and I/O port to connect it to any
number of others.
Installation and reconfiguration are easy.
Robustness. If one link fails only that link is affected.
Requires less cable than a mesh.
Demerits
Requires more cable than bus and ring topologies.
Failure of the central controller incapacitates the entire network.
Bus: One long cable acts as a backbone to link all the devices in a network. Nodes are connected to the bus cable
by drop lines and taps. A drop line is a connection running between the device and the main cable. A tap is a
connector that either splices into the main cable or punctures the sheathing of the cable to create a contact with the
metallic core. As the signal travels farther and farther, it becomes weaker, so there is a limit on the number of
taps a bus can support and on the distance between those taps.
Merits

Ease of installation.
Bus uses less cabling than mesh or star topologies.
Demerits
Difficult reconnection and isolation.
Signal reflection at the taps can cause degradation in quality.
A fault or break in the bus cable stops all transmission. It also reflects signals back in the direction of
origin creating noise in both directions.

Ring: Each device has a dedicated point to point connection only with the two devices on either side of it. A
signal is passed along the ring in one direction from device to device until it reaches the destination. Each
device in the ring incorporates a repeater. When it receives a signal intended for another device, it regenerates
the bits and passes them along.

Merits:

Easy to install and reconfigure.


To add or delete a device requires changing only two connections.
The constraints are the maximum ring length and the number of devices.
If one device does not receive the signal within a specified period, it can issue an alarm that alerts the
network operator to the problem and its location.
Demerits
A break in the ring disables the entire network. It can be solved by using a dual ring or a switch capable
of closing off the break.
1.2 : NETWORK ARCHITECTURES
There are two types of network architectures:
1. ISO - OSI reference model (refer section 1.2.1)
2. Internet model, sometimes called the TCP/IP protocol suite (refer section 1.2.2)

Layers
A networking system is simpler, cheaper, and more reliable if it is implemented in terms of layers. Each
layer accepts responsibility for a small part of the functionality. A clean separation of function between layers
means that multiple layers do not need to duplicate functionality. It also means that layers are less likely to
interfere with one another. Layering provides two features. First, it decomposes the problem into more
manageable components, each of which solves one part of the problem. Second, it provides a more modular
design, and hence if a new service is to be added, only the functionality of a specific
layer needs to be modified.
Layer
Each layer provides services to the next higher layer and shields the upper layer from the details
implemented in the lower layers.
Each layer appears to be in direct (virtual) communication with its associated layer on the other
computer. Actual communication between adjacent layers takes place on one computer only.
Layering simplifies design, implementation, and testing by partitioning the overall communications process
into parts. Only the lowest layer can communicate directly with its peer.
Peer-to-Peer Processes

The processes on each machine that communicate at a given layer are called peer-to-peer processes.
At higher layers, communication must move down through the layers on device A, over to device B, and
then back up through the layers.
Each layer in the sending device adds its own information to the message it receives from the layer
just above it and passes the whole package to the layer just below, to be transferred to the receiving
device.
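The header-adding behaviour described above can be sketched with strings standing in for real headers; the layer names and header format here are purely illustrative:

```python
def encapsulate(message, layers_top_down):
    # Each successively lower layer wraps what it receives from above
    # with its own header, so the bottom layer's header ends up outermost.
    for layer in layers_top_down:
        message = f"[{layer}-hdr]{message}"
    return message

def decapsulate(frame, layers_top_down):
    # The receiver strips headers in the opposite order, bottom layer first.
    for layer in reversed(layers_top_down):
        prefix = f"[{layer}-hdr]"
        if not frame.startswith(prefix):
            raise ValueError(f"expected a {layer} header")
        frame = frame[len(prefix):]
    return frame

layers = ["transport", "network", "datalink"]
wire = encapsulate("hello", layers)
print(wire)                       # [datalink-hdr][network-hdr][transport-hdr]hello
print(decapsulate(wire, layers))  # hello
```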
Interfaces between layers

The passing of data and network information down through the layers of the sending device and
back up through the layers of the receiving device is made possible by an interface between each pair
of adjacent layers.

Each interface defines what information and services a layer must provide for the layer above it.

Well defined interfaces and functions provide modularity to a network.

1.2.1 ISO - OSI reference Model


The International Standards Organization (ISO) Open Systems Interconnection (OSI) model is a standard set of
rules describing the transfer of data between layers. Each layer has a specific function. For example, the
physical layer deals with the electrical and cable specifications. The OSI model clearly defines the interfaces
between each layer. This allows different network operating systems and protocols to work together by
having each manufacturer adhere to the standard interfaces. The application of the ISO OSI model has
allowed the modern multiprotocol networks that exist today. There are 7 layers in the OSI model:

The OSI model provides the basic rules that allow multiprotocol networks to operate. Understanding the OSI
model is instrumental in understanding how the many different protocols fit into the networking jigsaw puzzle. It
forms a valuable reference model and defines much of the language used in data communications.
Physical Layer: It coordinates the functions required to transmit a bit stream over a physical medium. It deals
with the mechanical (cable, plugs, pins etc.) and electrical (modulation, signal strength, voltage levels, bit times)
specifications of the interface and transmission media. It also defines the procedures and functions that physical
devices and interfaces have to perform for transmission to occur.
Major responsibilities of Physical layer are:
Physical characteristics of interfaces and media: It defines the characteristics of the interface between the
devices and the transmission media. Also defines the type of transmission medium.
Representation of bits: To transmit bits, they must be encoded into electrical or optical signals. The physical
layer defines the type of representation, i.e., how 0s and 1s are changed to signals.
Data rate: The number of bits sent each second is also defined by the physical layer. That is the physical
layer defines the duration for which the bit lasts.
Synchronization of bits: Sender and the receiver must be synchronized at the bit level. That is the physical
layer ensures that the sender and the receiver clocks are synchronized.
Signals: An analog signal has infinitely many levels of intensity over a period of time. A digital signal has a limited
number of defined values, generally 1 and 0. A periodic signal completes a pattern within a measurable time
frame and repeats that pattern over subsequent identical periods. The completion of one full pattern is
called a cycle. An aperiodic signal changes without exhibiting a pattern or cycle that repeats over time. Period
refers to the amount of time, in seconds, a signal needs to complete one cycle. Frequency refers to the number
of periods in one second. Period is the inverse of frequency, and frequency is the inverse of period.
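The inverse relationship between period and frequency stated above can be expressed directly:

```python
def period(frequency_hz):
    # Period (seconds per cycle) is the inverse of frequency.
    return 1 / frequency_hz

def frequency(period_s):
    # Frequency (cycles per second, Hz) is the inverse of period.
    return 1 / period_s

print(period(50))        # 0.02 -- a 50 Hz signal completes a cycle in 20 ms
print(frequency(0.001))  # a 1 ms period corresponds to 1000 Hz
```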

Data link layer: The data link layer is responsible for hop-to-hop (node-to-node) delivery. It transforms the
physical layer, a raw transmission facility, into a reliable link, making the physical layer appear error free to the
network layer.
The duties of the data link layer are:
Framing: The data link layer divides the stream of bits received from the network layer into manageable
data units called frames.
Physical Addressing: If the frames are to be distributed to different systems on the network the data link
layer adds a header to the frame to define the receiver or sender of the frame. If the frame is intended for
a system located outside the sender's network then the receiver address is the address of the connecting
device that connects the network to the next one.
Flow Control: If the rate at which the data is absorbed by the receiver is less than the rate produced at the
sender, the data link layer imposes a flow control mechanism to avoid overwhelming the receiver.
Error control: The data link layer adds reliability to the physical layer by detecting and retransmitting lost or
damaged frames, and also by preventing duplication of frames. This is achieved through a trailer added to the
end of the frame.
Access control: When two or more devices are connected to the same link it determines which device has
control over the link at any given time.
Network Layer: The network layer is responsible for source-to-destination delivery of a packet across
multiple networks. It ensures that each packet gets from its point of origin to its final destination. It does not
recognize any relationship between those packets. It treats each one independently, as though each belongs to a
separate message.
The functions of the network layer are:
> Internetworking: The logical gluing of heterogeneous physical networks together to look like a
single network to the upper layers.
> Logical Addressing: If a packet has to cross the network boundary then the header contains
information of the logical addresses of the sender and the receiver.

> Routing: When independent networks or links are connected to create an internetwork or a large
network the connective devices route the packet to the final destination.
> Packetizing: Encapsulating packets received from the upper layer protocol. Internet Protocol is used
for packetizing in the network layer.
> Fragmenting: A router has to process the incoming frame and encapsulate it as per the protocol used by
the physical network to which the frame is going.
Transport Layer: The transport layer is responsible for process-to-process delivery, i.e., source to destination
delivery of the entire message.
The responsibilities of Transport layer are:
Service-point (port) addressing: Computers run several programs at the same time. Source-to-destination delivery means delivery from a specific process on one computer to a specific process on
the other. The transport layer header therefore includes a type of address called the port address.
Segmentation and reassembly: A message is divided into segments and each segment contains a
sequence number. These numbers enable the Transport layer to reassemble the message correctly
upon arrival at the destination. Packets lost in transmission are identified and replaced.
Connection control: The transport layer can be either connectionless or connection-oriented. A
connectionless transport layer treats each segment as an independent packet and delivers it to the transport
layer at the destination machine. A connection-oriented transport layer first makes a connection with the
transport layer at the destination machine and then delivers the packets. After all the data are transferred, the
connection is terminated.
Flow control: Flow control at this layer is performed end to end.

Error Control: Error control is performed end to end. At the sending side, the transport layer
makes sure that the entire message arrives at the receiving transport layer without error. Error
correction is achieved through retransmission.
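Sequence-number-based reassembly, as described under segmentation and reassembly above, can be sketched as follows; `reassemble` is a toy helper, not a real transport implementation:

```python
def reassemble(segments, first_seq=0):
    """Rebuild a message from (sequence_number, data) pairs that may
    arrive out of order; report any missing (lost) sequence numbers."""
    by_seq = dict(segments)
    missing = [n for n in range(first_seq, first_seq + len(by_seq))
               if n not in by_seq]
    if missing:
        return None, missing          # these segments need retransmission
    return "".join(by_seq[n] for n in sorted(by_seq)), []

# Segments arriving out of order are still reassembled correctly:
print(reassemble([(2, "ld"), (0, "he"), (1, "llo wor")]))  # ('hello world', [])
# A gap in the sequence numbers exposes a lost segment:
print(reassemble([(0, "he"), (2, "ld")]))                  # (None, [1])
```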
Session Layer: Session layer is the network dialog controller. It establishes, maintains, and synchronizes the
interaction between communicating systems.
Specific responsibilities of the layer are:

Dialog Control: The session layer allows two systems to enter into a dialog. Communication
between two processes takes place either in half-duplex or full-duplex mode.

Synchronization: The session layer allows a process to add checkpoints into a stream of data.
Example: If a system is sending a file of 2000 pages, checkpoints may be inserted after every 100 pages
to ensure that each 100-page unit is received and acknowledged independently. So if a crash happens
during the transmission of page 523, retransmission begins at page 501; pages 1 to 500 need not be
retransmitted.
Presentation layer: It is concerned with the syntax and semantics of the information exchanged between two
systems.
Responsibilities of the presentation layer are

Translation: The processes in two systems are usually exchanging information in the form of
character strings, numbers, and so on. Since different computers use different encoding systems, the
presentation layer is responsible for interoperability between these different encoding methods. At
the sender, the presentation layer changes the information from its sender-dependent format into a
common format. The presentation layer at the receiving machine changes the common format into its
receiver dependent format.

Encryption: The sender transforms the original information from one form to another form and
sends the resulting message out over the network. Decryption reverses the original process to transform
the message back to its original form.
Compression: It reduces the number of bits to be transmitted. It is important in the transmission of text,
audio and video.
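Lossless compression of the kind performed at this layer can be demonstrated with Python's standard `zlib` module; the sample text is arbitrary:

```python
import zlib

# Highly repetitive data compresses well.
text = b"It reduces the number of bits to be transmitted. " * 20
compressed = zlib.compress(text)
print(len(text), len(compressed))    # the compressed form is much smaller

# Decompression restores the data exactly, so no information is lost:
restored = zlib.decompress(compressed)
print(restored == text)              # True
```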

Application Layer: It enables the user (human/software) to access the network. It provides user interfaces and
support for services such as electronic mail, remote file access and transfer, shared database management
and other types of distributed information services.
Services provided by the application layer are
Network Virtual terminal: A network virtual terminal is a software version of a physical terminal and
allows a user to log on to a remote host.

File transfer, access and management: This application allows a user to access files in a remote
computer, to retrieve files from a remote computer, and to manage or control files in a remote
computer.
Mail services: This application provides the basis for e-mail forwarding and storage.
Directory services: It provides distributed database sources and access for global information about
various objects and services.

1.2.2 TCP /IP protocol Suite


The TCP/IP protocol suite was developed prior to the OSI model. Therefore, the layers in the TCP/IP
protocol suite do not exactly match those in the OSI model. The TCP/IP protocol suite is made of five layers:
physical, data link, network, transport and application. The first four layers provide physical standards,
network interfaces, internetworking, and transport functions that correspond to the first four layers of the OSI
model. The three topmost layers in the OSI model, however, are represented in TCP/IP by a single layer
called the application layer (see Figure 1.20).
TCP/IP is a hierarchical protocol made up of interactive modules, each of which provides a specific
functionality; however, the modules are not necessarily interdependent. Whereas the OSI model
specifies which functions belong to each of its layers, the layers of the TCP/IP protocol suite contain relatively
independent protocols that can be mixed and matched depending on the needs of the system. The term
hierarchical means that each upper-level protocol is supported by one or more lower-level protocols.
At the transport layer, TCP/IP defines three protocols: Transmission Control Protocol (TCP), User Datagram
Protocol (UDP), and Stream Control Transmission Protocol (SCTP). At the network layer, the main
protocol defined by TCP/IP is the Internetworking Protocol (IP); there are also some other protocols that
support data movement in this layer.
Physical and Data Link Layers
At the physical and data link layers, TCP/IP does not define any specific protocol. It supports all the standard
and proprietary protocols. A network in a TCP/IP internetwork can be a local-area network or a wide-area
network.
Network Layer
At the network layer (or, more accurately, the internetwork layer), TCP/IP supports the Internetworking Protocol.
IP, in turn, uses four supporting protocols: ARP, RARP, ICMP, and IGMP. Each of these protocols is described
in greater detail in later chapters.
Internetworking Protocol (IP)
The Internetworking Protocol (IP) is the transmission mechanism used by the TCP/IP protocols. It is an
unreliable and connectionless protocol; it provides a best-effort delivery service. The term best effort means that IP provides
no error checking or tracking. IP assumes the unreliability of the underlying layers and does its best to get a
transmission through to its destination, but with no guarantees. IP transports data in packets called datagrams,
each of which is transported separately. Datagrams can travel along different routes and can arrive out of
sequence or be duplicated. IP does not keep track of the routes and has no facility for reordering datagrams
once they arrive at their destination. The limited functionality of IP should not be considered a weakness,
however. IP provides bare-bones transmission functions that free the user to add only those facilities necessary for
a given application and thereby allows for maximum efficiency.
Address Resolution Protocol
The Address Resolution Protocol (ARP) is used to associate a logical address with a physical address. On a
typical physical network, such as a LAN, each device on a link is identified by a physical or station address,
usually imprinted on the network interface card (NIC). ARP is used to find the physical address of the node
when its Internet address is known.
Reverse Address Resolution Protocol
The Reverse Address Resolution Protocol (RARP) allows a host to discover its Internet address when it knows
only its physical address. It is used when a computer is connected to a network for the first time or when a
diskless computer is booted.
Internet Control Message Protocol
The Internet Control Message Protocol (ICMP) is a mechanism used by hosts and gateways to send
notification of datagram problems back to the sender. ICMP sends query and error reporting messages.
Internet Group Management Protocol
The Internet Group Management Protocol (IGMP) is used to facilitate the simultaneous transmission of a message
to a group of recipients.
Transport Layer
Traditionally the transport layer was represented in TCP/IP by two protocols: TCP and UDP. IP is a host-to-host
protocol, meaning that it can deliver a packet from one physical device to another. UDP and TCP are transport
level protocols responsible for delivery of a message from a process (running program) to another process. A
new transport layer protocol, SCTP, has been devised to meet the needs of some newer applications.
User Datagram Protocol
The User Datagram Protocol (UDP) is the simpler of the two standard TCP/IP transport protocols. It is a
process-to-process protocol that adds only port addresses, checksum error control, and length information to the data
from the upper layer.
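As a sketch of how little UDP adds, the 8-byte header can be built with Python's struct module. The port numbers and payload below are made up for illustration:

```python
import struct

def udp_header(src_port, dst_port, payload, checksum=0):
    # UDP adds exactly four 16-bit fields in front of the payload:
    # source port, destination port, total length, and checksum.
    length = 8 + len(payload)          # the UDP header itself is 8 bytes
    return struct.pack("!HHHH", src_port, dst_port, length, checksum) + payload

datagram = udp_header(5000, 53, b"query")   # hypothetical ports and payload
```

Unpacking the first 8 bytes with the same "!HHHH" format string recovers the four fields.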
Transmission Control Protocol
The Transmission Control Protocol (TCP) provides full transport-layer services to applications. TCP is a
reliable stream transport protocol. The term stream, in this context, means connection-oriented: a connection must
be established between both ends of a transmission before either can transmit data. At the sending end of each
transmission, TCP divides a stream of data into smaller units called segments. Each segment includes a sequence
number for reordering after receipt, together with an acknowledgment number for the segments received.
Segments are carried across the internet inside of IP datagrams. At the receiving end, TCP collects each
datagram as it comes in and reorders the transmission based on sequence numbers.
Stream Control Transmission Protocol
The Stream Control Transmission Protocol (SCTP) provides support for newer applications such as voice
over the Internet. It is a transport layer protocol that combines the best features of UDP and TCP.
Application Layer
The application layer in TCP/IP is equivalent to the combined session, presentation, and application layers in the
OSI model. Many protocols are defined at this layer. The protocols pertaining to this layer are discussed in
detail as part of Chapter 5.

1.4 Encoding
(NRZ, NRZI, Manchester, 4B/5B)
Signals propagate over physical links. The task, therefore, is to encode the binary data that the source node wants to send into the
signals that the links are able to carry, and then to decode the signal back into the corresponding binary data at the receiving node. The
functions of encoding and decoding are performed by a network adaptor, a piece of hardware that connects a node to a link. The network adaptor
contains a signaling component that actually encodes bits into signals at the sending node and decodes signals into bits at the receiving
node. As in Figure 2.5, signals travel over a link between two signaling components, and bits flow between network adaptors.

Non Return to Zero (NRZ)


In NRZ the data value 1 is mapped onto the high signal and the data value 0 onto the low signal. Figure 2.6 shows the NRZ-encoded
signal (bottom) that corresponds to the transmission of a particular sequence of bits (top).

Demerits:
The problem with NRZ is that a sequence of several consecutive 1s means that the signal stays high on the link for an extended period
of time, and similarly, several consecutive 0s means that the signal stays low for a long time. There are two fundamental problems
caused by long strings of 1s or 0s.
The first is that it leads to a situation known as baseline wander. Specifically, the receiver keeps an average of the signal it
has seen so far, and then uses this average to distinguish between low and high signals. Whenever the signal is significantly
lower than this average, the receiver concludes that it has just seen a 0, and likewise, a signal that is significantly higher than
the average is interpreted to be a 1. The problem, of course, is that too many consecutive 1s or 0s cause this average to
change, making it more difficult to detect a significant change in the signal.

The second problem is that frequent transitions from high to low and vice versa are necessary to enable clock recovery.
Intuitively, the clock recovery problem is that both the encoding and the decoding processes are driven by a clock: every
clock cycle the sender transmits a bit and the receiver recovers a bit. The sender's and the receiver's clocks have to be
precisely synchronized in order for the receiver to recover the same bits the sender transmits. If the receiver's clock is even
slightly faster or slower than the sender's clock, then it does not correctly decode the signal.

Solution:

You could imagine sending the clock to the receiver over a separate wire, but this is typically avoided because it makes the
cost of cabling twice as high.

So instead, the receiver derives the clock from the received signal; this is the clock recovery process. Whenever the signal changes,
such as on a transition from 1 to 0 or from 0 to 1, the receiver knows it is at a clock cycle boundary, and it can
resynchronize itself. However, a long period of time without such a transition leads to clock drift. Thus, clock recovery
depends on having lots of transitions in the signal, no matter what data is being sent.

Non-Return to Zero Inverted (NRZI):


In non-return to zero inverted (NRZI), the sender makes a transition from the current signal to encode a 1 and stays at the current signal
to encode a 0. This solves the problem of consecutive 1s, but obviously does nothing for consecutive 0s. NRZI is illustrated in Figure
2.7.

Manchester Encoding:
An alternative, called Manchester encoding, does a more explicit job of merging the clock with the signal by transmitting the
exclusive-OR of the NRZ-encoded data and the clock. The Manchester encoding is also illustrated in Figure 2.7. Observe that the
Manchester encoding results in 0 being encoded as a low-to-high transition and 1 being encoded as a high-to-low transition. Because
both 0s and 1s result in a transition in the signal, the clock can be effectively recovered at the receiver.
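The three schemes can be compared with a small sketch. Representing the high and low signal levels as +1 and -1 is an assumption for illustration; the text's figures simply show high/low waveforms:

```python
def nrz(bits):
    # NRZ: 1 -> high (+1), 0 -> low (-1)
    return [1 if b else -1 for b in bits]

def nrzi(bits, level=-1):
    # NRZI: a 1 toggles the current signal level, a 0 keeps it
    out = []
    for b in bits:
        if b:
            level = -level
        out.append(level)
    return out

def manchester(bits):
    # Manchester: XOR of the NRZ data with the clock, so each bit occupies
    # two half-cells: 0 -> low-to-high, 1 -> high-to-low
    out = []
    for b in bits:
        out.extend([1, -1] if b else [-1, 1])
    return out
```

Note that manchester() produces twice as many signal samples as input bits, which is exactly the doubling of the baud rate discussed below.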

Differential Manchester Encoding:


There is also a variant of the Manchester encoding, called differential Manchester, in which a 1 is encoded with the first half of the
signal equal to the last half of the previous bit's signal and a 0 is encoded with the first half of the signal opposite to the last half of the
previous bit's signal.

Demerits:
The problem with the Manchester encoding scheme is that it doubles the rate at which signal transitions are made on the link, which
means that the receiver has half the time to detect each pulse of the signal. The rate at which the signal changes is called the link's
baud rate. In the case of the Manchester encoding, the bit rate is half the baud rate, so the encoding is considered only 50% efficient.
If the receiver had been able to keep up with the faster baud rate required by the Manchester encoding in Figure 2.7, then both NRZ
and NRZI would have been able to transmit twice as many bits in the same time period.

4B/5B:
A final encoding that we consider, called 4B/5B, attempts to address the inefficiency of the Manchester encoding without suffering
from the problem of having extended durations of high or low signals. The idea of 4B/5B is to insert extra bits into the bit stream so as
to break up long sequences of 0s or 1s. Specifically, every 4 bits of actual data are encoded in a 5-bit code that is then transmitted to
the receiver; hence the name 4B/5B. The 5-bit codes are selected in such a way that each one has no more than one leading 0 and no
more than two trailing 0s. Thus, when sent back-to-back, no pair of 5-bit codes results in more than three consecutive 0s being
transmitted. The resulting 5-bit codes are then transmitted using the NRZI encoding, which explains why the code is only concerned
about consecutive 0s; NRZI already solves the problem of consecutive 1s. Note that the 4B/5B encoding results in 80% efficiency.
Table 2.4 gives the 5-bit codes that correspond to each of the 16 possible 4-bit data symbols. Notice that since 5 bits are enough to
encode 32 different codes, and we are using only 16 of these for data, there are 16 codes left over that we can use for other purposes.
Of these, code 11111 is used when the line is idle, code 00000 corresponds to when the line is dead, and 00100 is interpreted to mean
halt. Of the remaining 13 codes, 7 of them are not valid because they violate the "one leading 0, two trailing 0s" rule, and the other 6
represent various control symbols.
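A 4B/5B encoder is just a table lookup. The sketch below uses the standard FDDI data-symbol codes, which should correspond to Table 2.4; treat the table values as an assumption to be checked against the text's table:

```python
# 4-bit data symbol -> 5-bit code; each code has at most one leading 0
# and at most two trailing 0s (standard FDDI 4B/5B data symbols)
CODES_4B5B = {
    "0000": "11110", "0001": "01001", "0010": "10100", "0011": "10101",
    "0100": "01010", "0101": "01011", "0110": "01110", "0111": "01111",
    "1000": "10010", "1001": "10011", "1010": "10110", "1011": "10111",
    "1100": "11010", "1101": "11011", "1110": "11100", "1111": "11101",
}

def encode_4b5b(bits: str) -> str:
    # split the input into 4-bit nibbles and map each to its 5-bit code
    assert len(bits) % 4 == 0
    return "".join(CODES_4B5B[bits[i:i + 4]] for i in range(0, len(bits), 4))
```

Because no code starts with two 0s or ends with three, any back-to-back pair of codes contains at most three consecutive 0s, which is the property the text relies on.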

1.5 Issues in the data link layer


There are three main issues handled in the data link layer:
1. Framing - It is the process of dividing the stream of bits received from the upper layer (network layer) into
manageable data units called frames. It also adds a header to the frame in order to define the physical addresses of the
communicating partners.
2. Error control - It is the process used to detect and retransmit damaged or lost frames and to detect duplication of
frames. It performs error control using the information passed as part of the trailer.
3. Flow control - It is the process used to avoid data loss. Flow control coordinates and decides the amount of
data that can be sent before receiving an acknowledgment.

1.5.1 Framing - It divides the stream of bits received from the upper layer (network layer) into
manageable data units called frames. It adds a header to the frame to define the physical addresses (source
address and destination address).
Blocks of data (called frames at this level), not bit streams, are exchanged between nodes. It is the network
adaptor that enables the nodes to exchange frames. When node A wishes to transmit a frame to node B, it tells
its adaptor to transmit a frame from the node's memory. This results in a sequence of bits being sent over the
link. The adaptor on node B then collects together the sequence of bits arriving on the link and deposits the
corresponding frame in B's memory. Recognizing exactly what set of bits constitutes a frame, that is,
determining where the frame begins and ends, is the central challenge faced by the adaptor.
Although the whole message could be packed in one frame it is not normally done. One
reason is that a frame can be very large, making flow and error control very inefficient.
When a message is carried in one very large frame, even a single-bit error would require the retransmission of
the whole message. When a message is divided into smaller frames, a single-bit error affects only that small
frame.

Fixed-Size Framing: Frames can be of fixed or variable size. In fixed-size framing, there is no need for
defining the boundaries of the frames; the size itself can be used as a delimiter. An example of this type of
framing is the ATM wide-area network, which uses frames of fixed size called cells.
Variable-Size Framing: In variable-size framing, we need a way to define the end of the frame and the
beginning of the next. Three approaches were used for this purpose:
Framing Protocols:
(i) Byte-oriented protocols
(ii) Bit-oriented protocols
(iii) Clock-based protocols
Byte-oriented protocols:
(i) BISYNC - Binary Synchronous Communication
(ii) PPP - Point-to-Point Protocol
(iii) DDCMP - Digital Data Communication Message Protocol
Bit-oriented protocols:
(i) HDLC - High-Level Data Link Control
Clock-based protocols:
(i) SONET - Synchronous Optical Network
1.5.1.1 Byte oriented Protocols

Binary Synchronous Communication (BISYNC): protocol developed by IBM in the late 1960s. The beginning
of a frame is denoted by sending a special SYN (synchronization) character. The data portion of the frame is
then contained between special sentinel characters: STX (start of text) and ETX (end of text). The SOH (start
of header) field serves much the same purpose as the STX field. The problem with the sentinel approach, of
course, is that the ETX character might appear in the data portion of the frame. BISYNC overcomes this
problem by "escaping" the ETX character by preceding it with a DLE (data-link-escape) character whenever it
appears in the body of a frame; the DLE character is also escaped (by preceding it with an extra DLE) in the
frame body. (C programmers may notice that this is analogous to the way a quotation mark is escaped by the
backslash when it occurs inside a string.) This approach is often called character stuffing because extra
characters are inserted in the data portion of the frame.
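The character-stuffing rule can be sketched as follows. The byte values for DLE and ETX are the conventional ASCII control codes, which the text does not list, so treat them as assumptions:

```python
DLE, ETX = 0x10, 0x03   # ASCII data-link-escape and end-of-text

def char_stuff(body: bytes) -> bytes:
    # precede every DLE or ETX occurring in the body with an extra DLE
    out = bytearray()
    for b in body:
        if b in (DLE, ETX):
            out.append(DLE)
        out.append(b)
    return bytes(out)

def char_unstuff(data: bytes) -> bytes:
    # drop each escaping DLE and keep the byte that follows it
    out, escaped = bytearray(), False
    for b in data:
        if escaped:
            out.append(b)
            escaped = False
        elif b == DLE:
            escaped = True
        else:
            out.append(b)
    return bytes(out)
```

After stuffing, any ETX the receiver sees without a preceding DLE really is the end-of-frame sentinel.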
PPP - Point-to-Point Protocol (PPP), which is commonly run over dialup modem links, is similar to
BISYNC in that it uses character stuffing. The format for a PPP frame is given in Figure 1.42. The special
start-of-text character, denoted as the Flag field in Figure 1.42, is 01111110, which is byte stuffed if it occurs
within the payload field.
The Address and Control fields usually contain default values. The Address field which is always set to the
binary value 11111111, indicates that all stations are to accept the frame. This value avoids the issue of using
data link addresses. The default value of the Control field is 00000011. This value indicates an unnumbered
frame. In other words, PPP does not provide reliable transmission using sequence numbers and
acknowledgements.
The Protocol field is used for demultiplexing: it identifies the high-level protocol such as IP or IPX (an IP-like protocol developed by Novell). The frame payload size can be negotiated, but it is 1500 bytes by
default. The Checksum field is either 2 (by default) or 4 bytes long. The PPP frame format is unusual in that
several of the field sizes are negotiated rather than fixed. This negotiation is conducted by a protocol called LCP
(Link Control Protocol). PPP and LCP work in tandem: LCP sends control messages encapsulated in PPP frames
(such messages are denoted by an LCP identifier in the PPP Protocol field) and then turns around and
changes PPP's frame format based on the information contained in those control messages. LCP is also
involved in establishing a link between two peers when both sides detect the carrier signal.

DDCMP protocol Digital Data Communication Message Protocol


The COUNT field specifies how many bytes are contained in the frame's body. One danger with this
approach is that a transmission error could corrupt the COUNT field, in which case the end of the frame would
not be correctly detected. (A similar problem exists with the sentinel based approach if the ETX field becomes
corrupted.) Should this happen, the receiver will accumulate as many bytes as the bad COUNT field indicates
and then use the error detection field to determine that the frame is bad. This is sometimes called a
framing error. The receiver will then wait until it sees the next SYN character to start collecting the bytes that
make up the next frame. It is therefore possible that a framing error will cause back-to-back frames to be
incorrectly received.
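Count-based framing is easy to sketch. The 2-byte big-endian COUNT prefix below is a hypothetical choice for illustration, not DDCMP's exact header layout:

```python
def count_frame(payload: bytes) -> bytes:
    # prefix the frame body with a 2-byte big-endian COUNT field
    return len(payload).to_bytes(2, "big") + payload

def split_frames(stream: bytes):
    # walk the byte stream, using each COUNT to locate the next frame
    frames, i = [], 0
    while i + 2 <= len(stream):
        n = int.from_bytes(stream[i:i + 2], "big")
        frames.append(stream[i + 2:i + 2 + n])
        i += 2 + n
    return frames
```

As the text notes, a single corrupted COUNT value would make split_frames mis-locate every frame boundary that follows it until resynchronization.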

1.5.1.2. Bit-Oriented Protocols (HDLC)


Unlike these byte-oriented protocols, a bit-oriented protocol is not concerned with byte boundaries; it
simply views the frame as a collection of bits. These bits might come from some character set, such as ASCII,
they might be pixel values in an image, or they could be instructions and operands from an executable file. The
Synchronous Data Link Control (SDLC) protocol developed by IBM is an example of a bit-oriented protocol;
SDLC was later standardized by the ISO as the High-Level Data Link Control (HDLC) protocol. In the following
discussion, we use HDLC as an example; its frame format is given in Figure 1.44. HDLC denotes both the
beginning and the end of a frame with the distinguished bit sequence 01111110. This sequence is also
transmitted during any times that the link is idle so that the sender and receiver can keep their clocks
synchronized. In this way, both protocols essentially use the sentinel approach. Because this sequence might
appear anywhere in the body of the frame (in fact, the bits 01111110 might cross byte boundaries), bit-oriented
protocols use the analog of the DLE character, a technique known as bit stuffing.
Bit stuffing in the HDLC protocol works as follows. On the sending side, any time five consecutive 1s
have been transmitted from the body of the message (i.e., excluding when the sender is trying to transmit the
distinguished 01111110 sequence), the sender inserts a 0 before transmitting the next bit. On the receiving side,
should five consecutive 1s arrive, the receiver makes its decision based on the next bit it sees (i.e., the bit
following the five 1s). If the next bit is a 0, it must have been stuffed, and so the receiver removes it. If the next
bit is a 1, then one of two things is true: either this is the end-of-frame marker or an error has been introduced
into the bit stream. By looking at the next bit, the receiver can distinguish between these two cases: if it sees a
0 (i.e., the last eight bits it has looked at are 01111110), then it is the end-of-frame marker; if it sees a 1 (i.e.,
the last eight bits it has looked at are 01111111), then there must have been an error and the whole frame is
discarded. In the latter case, the receiver has to wait for the next 01111110 before it can start receiving again,
and as a consequence, there is the potential that the receiver will fail to receive two consecutive frames.
Obviously, there are still ways that framing errors can go undetected, such as when an entire spurious end-of-frame pattern is generated by errors, but these failures are relatively unlikely.

An interesting characteristic of bit stuffing, as well as character stuffing, is that the size of a frame is
dependent on the data that is being sent in the payload of the frame. It is in fact not possible to make all frames
exactly the same size, given that the data that might be carried in any frame is arbitrary.
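The stuffing rule above can be sketched in a few lines, operating on strings of '0'/'1' characters for readability:

```python
def bit_stuff(body: str) -> str:
    # after five consecutive 1s, insert a 0 before the next bit
    out, run = [], 0
    for b in body:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")
            run = 0
    return "".join(out)

def bit_unstuff(stuffed: str) -> str:
    # remove the 0 that follows every run of five 1s (assumes a valid,
    # error-free stuffed body with flags already stripped)
    out, run, drop = [], 0, False
    for b in stuffed:
        if drop:                 # this is a stuffed 0: discard it
            drop, run = False, 0
            continue
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            drop, run = True, 0
    return "".join(out)
```

Because the body never carries five 1s followed by a 1, the flag pattern 01111110 can only appear at frame boundaries.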
1.5.1.3 Clock based Protocols
Clock-Based Framing (SONET) :

A third approach to framing is exemplified by the Synchronous Optical Network (SONET) standard. For lack of
a widely accepted generic term, we refer to this approach simply as clock-based framing. SONET was first
proposed by Bell Communications Research (Bellcore), and then developed under the American National
Standards Institute (ANSI) for digital transmission over optical fiber; it has since been adopted by the ITU-T.
Who standardized what and when is not the interesting issue, though. The thing to remember about SONET is
that it is the dominant standard for long-distance transmission of data over optical networks.
SONET addresses both the framing problem and the encoding problem. It also addresses a problem that is very
important for phone companiesthe multiplexing of several low-speed links onto one high-speed link.
Framing:
A SONET frame has some special information that tells the receiver where the frame starts and ends. Notably,
no bit stuffing is used, so that a frame's length does not depend on the data being sent. So the question to ask is,
How does the receiver know where each frame starts and ends? We consider this question for the lowest-speed
SONET link, which is known as STS-1 and runs at 51.84 Mbps.
An STS-1 frame is shown in Figure 2.13. It is arranged as nine rows of 90 bytes each, and the first 3 bytes of
each row are overhead, with the rest being available for data that is being transmitted over the link. The first 2
bytes of the frame contain a special bit pattern, and it is these bytes that enable the receiver to determine where
the frame starts. However, since bit stuffing is not used, there is no reason why this pattern will not occasionally
turn up in the payload portion of the frame. To guard against this, the receiver looks for the special bit pattern
consistently, hoping to see it appearing once every 810 bytes, since each frame is 9 × 90 = 810 bytes long.
When the special pattern turns up in the right place enough times, the receiver concludes that it is in sync and
can then interpret the frame correctly.


Fig. 1.13 A SONET STS-1 Frame


SONET runs across the carrier's optical network, not just over a single link. Additional complexity comes from
the fact that SONET provides a considerably richer set of services than just data transfer. For example, 64 Kbps
of a SONET link's capacity is set aside for a voice channel that is used for maintenance.
The overhead bytes of a SONET frame are encoded using Non Return to Zero (NRZ), the simple encoding
described in the previous section where 1s are high and 0s are low. However, to ensure that there are plenty of
transitions to allow the receiver to recover the sender's clock, the payload bytes are scrambled. This is done by
calculating the exclusive-OR (XOR) of the data to be transmitted with a well-known bit pattern.
The bit pattern, which is 127 bits long, has plenty of transitions from 1 to 0, so that XORing it with the
transmitted data is likely to yield a signal with enough transitions to enable clock recovery.
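The 127-bit pattern is the output of a linear feedback shift register. SONET's frame-synchronous scrambler is conventionally described by the generator polynomial x^7 + x^6 + 1 seeded with all ones; the text only says the pattern is 127 bits long, so treat the polynomial and seed as assumptions. A sketch:

```python
def scrambler_bits(n, seed=(1, 1, 1, 1, 1, 1, 1)):
    # LFSR for x^7 + x^6 + 1: each new bit is a_{n-1} XOR a_{n-7}.
    # With a nonzero seed the output repeats with period 2^7 - 1 = 127.
    reg = list(seed)                 # reg[0] is the newest bit
    out = []
    for _ in range(n):
        bit = reg[0] ^ reg[6]
        reg = [bit] + reg[:6]
        out.append(bit)
    return out

def scramble(data_bits, pattern):
    # payload scrambling (and descrambling) is just an XOR with the pattern
    return [d ^ p for d, p in zip(data_bits, pattern)]
```

XORing twice with the same pattern restores the original data, so the receiver descrambles with an identical LFSR.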

Figure 1.14 Three STS-1 Frames multiplexed onto one STS-3c frame.

Figure 1.15 SONET Frames out of phase


SONET supports the multiplexing of multiple low-speed links in the following way. A given SONET link runs
at one of a finite set of possible rates, ranging from 51.84 Mbps (STS-1) to 2488.32 Mbps (STS-48) and
beyond. Note that all of these rates are integer multiples of STS-1. The significance for framing is that a single
SONET frame can contain subframes for multiple lower-rate channels. A second related feature is that each
frame is 125 µs long. This means that at STS-1 rates, a SONET frame is 810 bytes long, while at STS-3 rates,
each SONET frame is 2430 bytes long. Notice the synergy between these two features: 3 × 810 = 2430,
meaning that three STS-1 frames fit exactly in a single STS-3 frame.
Intuitively, the STS-N frame can be thought of as consisting of N STS-1 frames, where the bytes from these
frames are interleaved; that is, a byte from the first frame is transmitted, then a byte from the second frame is
transmitted, and so on. The reason for interleaving the bytes from each STS-N frame is to ensure that the bytes
in each STS-1 frame are evenly placed; that is, bytes show up at the receiver at a smooth 51 Mbps, rather than
all bunched up during one particular 1/Nth of the 125-µs interval.
Although it is accurate to view an STS-N signal as being used to multiplex N STS-1 frames, the payload from
these STS-1 frames can be linked together to form a larger STS-N payload; such a link is denoted STS-Nc (for
concatenated). One of the fields in the overhead is used for this purpose. Figure 2.14 schematically depicts
concatenation in the case of three STS-1 frames being concatenated into a single STS-3c frame. The
significance of a SONET link being designated as STS-3c rather than STS-3 is that, in the former case, the user
of the link can view it as a single 155.25-Mbps pipe, whereas an STS-3 should really be viewed as three 51.84-Mbps links that happen to share a fiber.
Finally, the preceding description of SONET is overly simplistic in that it assumes that the payload for each
frame is completely contained within the frame. In fact, we should view the STS-1 frame just described as
simply a placeholder for the frame, where the actual payload may float across frame boundaries.
This situation is illustrated in Figure 2.15. Here we see both the STS-1 payload floating across two STS-1
frames, and the payload shifted some number of bytes to the right and, therefore, wrapped around. One of the
fields in the frame overhead points to the beginning of the payload. The value of this capability is that it
simplifies the task of synchronizing the clocks used throughout the carrier's networks, which is something that
carriers spend a lot of their time worrying about.

1.5.2 Flow Control


Communication networks need to have flow and error control mechanisms implemented at the link
level or at the end-to-end level. Transport layer is responsible for end-to-end error control. The data link layer
is responsible for link-level error control. The error and flow control is normally implemented at the
LLC layer. The implementation is essential for smooth and reliable communication between two devices
connected to the same link.
Flow control refers to the set of procedures used to restrict the amount of data that the sender can send before
waiting for ACK. The flow of data should not overwhelm the receiver. There are two methods that are used
to control the flow of data across communication links. They are stop and wait and sliding window.
1. Stop and wait (send one frame at a time):
The sender sends one frame and waits for an ACK before sending the next frame. The end of transmission is
indicated by an EOT frame. If the distance between the devices is long, the time spent waiting for the ACK is
high and the total transmission time increases, since the ACK has to travel all the way from the receiver to the sender.


2. Sliding window: the sender can transmit several frames before needing an ACK; the receiver acknowledges
only some of the frames, using a single ACK to confirm the receipt of multiple frames.
The sender window represents the number of frames the sender can still send. The sender window shrinks when a frame is sent
and expands when an ACK is received.
The receiver window represents the number of frames the receiver can still receive. As a new frame comes in, the receiver
window shrinks; as frames are acknowledged, it expands.
The sender sends the data frames that are present within its window without waiting for an ACK.
If the receiver receives a frame without any error, it slides its window and sends an ACK frame; a single ACK may
cover a group of frames. When the sender receives the ACK frame, it shifts its window by one or more
frames, depending on the sequence number in the ACK frame, and sends the next frame in its window.
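The sender-side bookkeeping described above can be sketched with a small helper class (a hypothetical class for illustration; it ignores sequence-number wraparound):

```python
class SlidingWindowSender:
    # tracks which frames may be sent before an ACK is required
    def __init__(self, window_size):
        self.window_size = window_size
        self.base = 0          # oldest unacknowledged frame
        self.next_seq = 0      # next frame number to send

    def can_send(self):
        # the window "shrinks" as frames are sent without being ACKed
        return self.next_seq < self.base + self.window_size

    def send(self):
        assert self.can_send()
        seq = self.next_seq
        self.next_seq += 1
        return seq

    def ack(self, ack_num):
        # a cumulative ACK confirms frames up to ack_num - 1; the window
        # slides forward, allowing new frames to be sent
        if ack_num > self.base:
            self.base = ack_num
```

With a window of 3, the sender may emit frames 0, 1, 2, must then pause, and resumes as soon as an ACK slides the window.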


1.5.3 Error control


Error control is the process of identifying the error and correcting the same. The error control in the data link
layer is based on automatic repeat request, which is the retransmission of data.
1.5.3.1 Stop and Wait ARQ
In this method of flow control, the sender waits for an ACK after every frame it sends. Each frame has an
alternating bit (0 or 1). This process of alternately sending and waiting repeats until the sender transmits the end-of-transmission frame. The sender sends out the message and starts the timeout counter. If the receiver gets the
message without any problems, it sends out an ACK message; otherwise the timer at the sender's end expires.
If the timer expires, the sender retransmits the frame. If it happened that the message was received fine but
the ACK got lost, then the even/odd bit in the data frame will alert the receiver to the duplicate frame.
Features of stop and wait ARQ:
The sending device keeps a copy of the last frame transmitted until it receives an acknowledgement
for that frame. Keeping a copy allows the sender to retransmit lost or damaged frames until they are
received correctly.
Both data and acknowledgement frames are numbered alternately 0 and 1. A data frame 0 is
acknowledged by an ACK 1.

A damaged or lost frame is treated in the same manner by the receiver. If the receiver detects an error
in the received frame, it simply discards the frame and sends no acknowledgement.
The sender has a control variable, which we call S, that holds the sequence number of the most recently sent frame. The
receiver has a control variable, which we call R, that holds the number of the next frame expected.
The sender starts a timer when it sends a frame. If an ACK is not received within an allotted time period
the sender assumes that the frame was lost or damaged and resends it.
The receiver sends only positive ACKs for frames received safe and sound; it is silent about frames
damaged or lost.
Four different operations namely Normal, Lost/Error data, Lost ACK and Delayed ACK are possible.
1. Normal Operation: The sender sends the data frame and the receiver receives it
without any error and sends an ACK frame. The sender receives the ACK frame
and sends the next frame. The timing diagram for this process is shown in Figure 1.47.


2. Data Lost/Error: The sender sends the data frame, which is lost during transmission. A time-out
occurs at the sender's end and hence the sender retransmits the data frame. If the data received has
an error, the receiver will not send the ACK frame; hence a time-out occurs at the sender. The sender
retransmits the data frame. The timing diagram for this process is shown in Figure 1.48.

3. ACK Lost: The sender sends the data frame and the receiver receives it without any error. The receiver
sends an ACK frame, which is lost during transmission. A time-out occurs at the sender's end, hence it
retransmits the data frame. The receiver, which is waiting for the next frame, receives the retransmitted
frame and discards it as a duplicate. It resends the ACK frame. The timing diagram for
this process is shown in Figure 1.49.


4. Delayed ACK: The sender sends the data frame and the receiver receives it without any error.
The receiver sends an ACK frame, but before it reaches the sender a time-out occurs at the sender's end and the
data frame is retransmitted. The receiver discards the retransmitted data frame as a duplicate and retransmits the
ACK frame. The timing diagram for this process is shown in Figure 1.50.
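All four cases follow one rule: retransmit on timeout, and let the alternating bit expose duplicates. A small simulation sketch (the loss model, function name, and parameters are invented for illustration):

```python
import random

def stop_and_wait(frames, loss_rate=0.0, seed=0):
    # Simulate stop-and-wait ARQ over a lossy link.  s is the sender's
    # alternating bit; r is the bit the receiver expects next.
    rng = random.Random(seed)
    delivered, s, r = [], 0, 0
    for frame in frames:
        while True:                         # resend until an ACK arrives
            if rng.random() >= loss_rate:   # data frame got through
                if s == r:                  # expected frame: accept it
                    delivered.append(frame)
                    r ^= 1
                # else: duplicate, discard it (but the ACK is still resent)
                if rng.random() >= loss_rate:  # ACK got through
                    s ^= 1
                    break
            # otherwise: timeout, retransmit with the same bit
    return delivered
```

Even with a lossy link, each frame is delivered exactly once: a retransmission whose original was already accepted arrives with the wrong alternating bit and is discarded.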


Figure 1.51: Time diagram for the stop and wait ARQ with Piggybacking
The advantage of stop-and-wait is its simplicity. The disadvantage is inefficiency: if the distance
between the devices is long, the time spent waiting for each ACK adds significantly to the total
transmission time.
It is possible to use the piggybacking mechanism if both devices have data to transmit. Piggybacking
is a method that combines a data frame with an ACK. Piggybacking can save bandwidth, since it
eliminates the overhead of a separate ACK frame.
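The four stop-and-wait cases above can be illustrated with a small simulation. This is only a sketch written for this text, not part of any standard: the function name, the 1-bit sequence number and the loss probability are assumptions made for the example.

```python
import random

def stop_and_wait(frames, loss_rate=0.3, seed=1):
    """Simulate Stop-and-Wait ARQ over a channel that may lose data
    frames or ACKs. Returns the frames delivered in order and the
    total number of transmissions used."""
    rng = random.Random(seed)
    delivered, transmissions = [], 0
    expected = 0                      # receiver: sequence number expected next
    for seq, payload in enumerate(frames):
        while True:                   # sender: retransmit until ACK arrives
            transmissions += 1
            if rng.random() < loss_rate:
                continue              # data frame lost -> time-out, resend
            # receiver: accept only the expected frame, discard duplicates
            if seq % 2 == expected:   # a 1-bit sequence number suffices here
                delivered.append(payload)
                expected ^= 1
            if rng.random() < loss_rate:
                continue              # ACK lost -> time-out, resend duplicate
            break                     # ACK received, move to the next frame
    return delivered, transmissions

data = ["f0", "f1", "f2", "f3"]
received, tx = stop_and_wait(data)
print(received == data)   # True: all frames arrive in order despite losses
```

Duplicates caused by lost or delayed ACKs are filtered out by the sequence-number check, exactly as in cases 3 and 4 above.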
1.5.3.2 Sliding Window
To improve efficiency, the sender can send more than one frame before waiting for an ACK. This
technique requires a window of frames to be maintained at the sender's end. The size of the window is
fixed. As the sender receives an ACK for a frame, the window is shifted. At any time the window shows
the frames that can be sent before an ACK is received from the receiver. Since the window keeps sliding
over the frames, the technique is referred to as sliding window. There are two variations of the sliding
window technique: Go-Back-N ARQ and Selective Repeat ARQ.
1.5.3.2.1 Go-Back-N ARQ
In the Go-Back-N scheme, upon an error, the sender retransmits the frame in error and all the frames that
came after it. For example, the sender may send frames 1, 2, 3, 4; if it does not receive the ACK for frame 2
and the timer expires, it resends frame 2 and all the frames after it. The features of Go-Back-N ARQ are:

Sequence numbers of transmitted frames are maintained in the header of the frame. If k is the number of
bits used for the sequence number, the numbering ranges from 0 to 2^k - 1. Example: if k = 3, the
sequence numbers are 0 to 7.
Sender's window covers the set of frames in the buffer waiting for ACK. As an ACK is received, the
corresponding frame leaves the window and the next frame to be sent enters it. For example, if the
sender receives ACK 4, it knows that frames up to and including frame 3 were correctly received,
and the window slides to the right of frame 3.

On the receiver side, the size of the window is always one. The receiver accepts only the frame
it is expecting; if any other frame is received, it is discarded. The receiver slides its window after
receiving the expected frame.
The sender has three control variables: S holds the sequence number of the most recently sent
frame, SF holds the sequence number of the first frame in the window, and SL holds the
sequence number of the last frame in the window. The receiver has a single variable R to hold the
sequence number of the expected frame.
The sender has a timer for each transmitted frame. The receiver does not have any timer.
The receiver responds to a frame arriving safely with a positive ACK. For damaged or lost frames
the receiver does not reply; the sender retransmits the frame when its timer elapses. The receiver
may send one ACK for several frames.
If the timer for any frame expires, the sender has to resend that frame and the subsequent frames as
well; hence the protocol is called Go-Back-N ARQ.
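The go-back behaviour can be made concrete with a short trace. This is a simplified sketch under assumed conditions (a single lost frame, instant cumulative ACKs after each burst); the function and its parameters are invented for this example.

```python
def go_back_n_trace(num_frames, window=4, lose_first_copy_of=2):
    """Trace the frames a Go-Back-N sender puts on the wire when the
    first copy of one frame is lost: everything from the lost frame
    onward is retransmitted after the time-out."""
    sent, delivered = [], []
    expected = 0          # receiver's single-frame window
    base = 0              # first unacknowledged frame
    lost_once = False
    while base < num_frames:
        # send every frame currently inside the sender's window
        for seq in range(base, min(base + window, num_frames)):
            sent.append(seq)
            if seq == lose_first_copy_of and not lost_once:
                lost_once = True      # this copy is lost in transit
                continue
            if seq == expected:       # receiver accepts only the expected frame
                delivered.append(seq)
                expected += 1
            # any other frame is discarded by the receiver
        base = expected               # cumulative ACK slides the sender window
    return sent, delivered

sent, delivered = go_back_n_trace(6)
print(sent)       # [0, 1, 2, 3, 2, 3, 4, 5] -> frames 2 and 3 go out twice
print(delivered)  # [0, 1, 2, 3, 4, 5]
```

Note that frame 3 arrived intact the first time but was discarded anyway, because the receiver's window is one frame wide; this is the bandwidth waste that Selective Repeat removes.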

Figure 1.52: Windows used in Go-Back-N ARQ


There are four possible operations in this mechanism, as discussed for the Stop-and-Wait ARQ mechanism.
1. Normal Operation: The sender sends the data frames that are present within the sender's window
without waiting for an ACK. If the receiver receives a frame without any error, it slides its window and
sends an ACK frame; one ACK may be sent for a group of frames. When the sender receives the ACK
frame, it shifts its window by one or more frames depending on the sequence number in the ACK frame,
and sends the next frames in the window. The timing diagram for this process is shown in Figure 1.53.


Figure 1.53: Time diagram for normal operation of Go-Back-N ARQ


2. Lost Frame: The sender sends the frames present within the sender's window without waiting for
ACK. If a frame is lost in transmission, a time-out occurs at the sender's end; in the meanwhile the
receiver discards all the other frames that reach it, since it is waiting for the frame that has been lost.
The sender retransmits all the frames within its window, starting from the frame for which the
time-out occurred. The timing diagram for this operation is shown in Figure 1.54.

Figure 1.54: Time diagram for Go-Back-N ARQ with lost frame


Figure 1.55: Choice of window size for Go-Back-N ARQ


3. ACK Lost: The sender sends the frames that are present within the sender's window. If the receiver
receives a frame without any error, it slides its window and sends an ACK. If the ACK is lost during
transmission, a time-out occurs at the sender's end and the frame is retransmitted. The receiver receives
the retransmitted frame and discards it, since it identifies it as a duplicate. In the meanwhile, if the next
frame has arrived, the receiver will have received it and sent its ACK.
4. Delayed ACK: The sender sends the frames that are present within the sender's window. If the receiver
receives a frame without any error, it slides its window and sends an ACK. If the ACK is delayed
during transmission, a time-out occurs at the sender's end and the frame is retransmitted. The
receiver receives the retransmitted frame and discards it, since it identifies it as a duplicate. In the
meanwhile, if the next frame has arrived, the receiver will have received it and sent its ACK.
It is very important to choose the correct window size based on the number of bits used for sequence
numbers. In the Go-Back-N ARQ mechanism the window size should be less than 2^m, where m is the
number of bits used for sequence numbers. If instead the window size is chosen equal to 2^m, data can
be erroneously accepted at the receiver's end. The timing diagram in Figure 1.55 illustrates the need to
choose the correct window size.
In Go-Back-N ARQ, even a subsequent frame received without any error is discarded, since the receiver
accepts only the frame it is expecting. Bandwidth is wasted in this case. To avoid this, the receiver can
also maintain a window that indicates the set of frames it can accept even if an earlier frame has not
yet been received without error.
1.5.3.2.2 Selective Repeat ARQ
In Selective Repeat the drawback of Go-Back-N ARQ is overcome by having a window of size greater
than one at the receiver's end as well. If the receiver receives any of the frames in its window, it accepts the
frame, but the window does not slide until the first frame in the receiver's window has been received. In
Selective Repeat both the sender and the receiver have the same window size. The features of Selective
Repeat are:

Sender's window covers the set of frames in the buffer waiting for ACK. As an ACK is received, the
corresponding frame leaves the window and the next frame to be sent enters it.
The sender has three control variables: S holds the sequence number of the most recently sent frame,
SF holds the sequence number of the first frame in the window, and SL holds the sequence number
of the last frame in the window.
The size of the window should be one half of 2^m, where m is the number of bits used for the sequence
number.
The receiver's window size must be the same as the sender's. The receiver looks for a range of
sequence numbers and has control variables RF and RL to denote the boundaries of its window.
The sender has a timer for each transmitted frame. The receiver does not have any timer.
The receiver responds to a frame arriving safely with a positive ACK. For damaged or lost frames
the receiver does not reply; the sender retransmits the frame when its timer elapses. The
receiver may send one ACK for several frames.
If the timer for any frame expires, the sender resends that frame alone.
The receiver sends a negative ACK (NAK) to the sender if it receives a frame that is not the first frame
in its window.

Figure 1.56: Windows used in Selective Repeat ARQ

The sender and the receiver window sizes are the same in Selective Repeat. There are four operations:
1. Normal Operation: The sender sends the data frames that are present within the sender's window
without waiting for an ACK. If the receiver receives a frame without any error, it slides its window and
sends an ACK frame; one ACK may be sent for a group of frames. When the sender receives the ACK
frame, it shifts its window by one or more frames depending on the sequence number in the ACK frame,
and sends the next frames in the window.
2. Lost Frame: The sender sends the frames present within the sender's window without waiting for ACK.
If a frame is lost in transmission, the receiver sends a negative ACK for that frame. The receiver accepts
the other frames that reach it, provided they fall within the receiver's window. The sender retransmits only
the frame for which the NAK (negative acknowledgement) has been sent. The timing diagram for this
operation is shown in Figure 1.57.
3. Lost ACK: The sender sends the frames that are present within the sender's window. If the receiver
receives a frame without any error, it slides its window and sends an ACK. If the ACK is lost during
transmission and a time-out occurs at the sender's end, the frame is retransmitted; otherwise the ACK
of the next frame makes the sender slide its window. If the sender has retransmitted the frame, the
receiver discards it, since it identifies it as a duplicate. In the meanwhile, if the receiver has received
the next frame, it will have sent the ACK for that frame.

Figure 1.57: Time diagram for Selective Repeat ARQ with lost frame
4. Delayed ACK: The sender sends the frames that are present within the sender's window. If the receiver
receives a frame without any error, it slides its window and sends an ACK. If the ACK is delayed
during transmission and a time-out occurs at the sender's end, the frame for which the time-out occurred
is retransmitted. The receiver receives the resent frame and discards it, since it identifies it as a
duplicate. In the meanwhile, if the next frame has arrived, the receiver will have received it and sent
its ACK.


Figure 1.58: Choice of window size for Selective Repeat


As in Go-Back-N ARQ, the choice of window size is essential in Selective Repeat. The window size is
the same at the sender's and the receiver's end. If the size is chosen equal to 2^m - 1, there is a chance
of a frame being accepted erroneously; this is illustrated in Figure 1.58. As with Stop-and-Wait and
Go-Back-N ARQ, Selective Repeat can also use piggybacking in order to improve the efficiency of
transmission.

1.5.4 Error detection and correction


Networks must be able to transfer data from one device to another with complete accuracy. Whenever bits
flow from one point to another, they are subject to unpredictable changes because of interference, which
can change the shape of the signal. When, during transmission, a 0 bit changes to 1 or a 1 bit changes
to 0, an error is said to have occurred. These errors may occur as isolated single bits or in bursts.
Single-bit error: When an n-bit data segment is communicated and the receiver receives the segment
with one of the n bits in error, a single-bit error is said to have occurred. Suppose the 8-bit data
segment 11011010 is sent and the receiver receives 10011010. The second bit has been received as
0 instead of 1 while the other bits are received as sent, so a single-bit error has occurred.
Burst error: When an n-bit data segment is communicated and the receiver receives the segment with
more than one bit in error, a burst error is said to have occurred. Suppose the 8-bit data segment
11011010 is sent and the receiver receives 10111010. The second bit has been received as 0 instead of 1
and the third bit as 1 instead of 0, while the other bits are received as sent, so a burst error has occurred.
The error bits need not be in consecutive positions. Consider another example: the segment
1101001001000110 is sent and received as 1101101101010110. Bits 5, 8 and 12 have been received in
error, so a burst error has occurred. Burst length is the distance between the first bit in error and the last
bit in error. In this example the first bit in error is bit 5 and the last is bit 12, so the burst length is 8
(12 - 5 + 1). In general, if Bf is the first bit position in error and Bl is the last bit position in error, then:
Burst length (B) = Bl - Bf + 1
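As a quick check of the formula, a few lines of Python (a hypothetical helper written for this example) compute the burst length of the 16-bit example above directly:

```python
def burst_length(sent: str, received: str) -> int:
    """Burst length B = Bl - Bf + 1, where Bf and Bl are the first and
    last bit positions (1-indexed) that differ between sent and received."""
    errors = [i + 1 for i, (s, r) in enumerate(zip(sent, received)) if s != r]
    if not errors:
        return 0        # no bits in error
    return errors[-1] - errors[0] + 1

# Bits 5, 8 and 12 are in error, so B = 12 - 5 + 1 = 8.
print(burst_length("1101001001000110", "1101101101010110"))  # 8
```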

1.5.4.1 Detection
Error detection is the ability to detect the presence of errors caused by noise or other impairments during
transmission from the transmitter to the receiver. The technique of including extra information in the
transmission for error detection is called redundancy. There are various techniques available for error
detection. The well-known methods are: parity, cyclic redundancy check (CRC), longitudinal redundancy
check (LRC) and checksum.
Modulo-2 Arithmetic
Modulo-2 arithmetic is performed digit by digit on binary numbers. Each digit is considered
independently from its neighbours. Digits are not carried or borrowed.

Adding:       0 + 0 = 0    0 + 1 = 1    1 + 0 = 1    1 + 1 = 0
Subtracting:  0 - 0 = 0    0 - 1 = 1    1 - 0 = 1    1 - 1 = 0

A   B   A XOR B
0   0      0
0   1      1
1   0      1
1   1      0

Figure 1.59: XOR operation

Addition: Modulo-2 addition is performed using an exclusive OR (XOR) operation on the corresponding
binary digits of each operand. The XOR operation is described in Figure 1.59. We can add two binary
numbers, X and Y, as follows:
(X) 10110100
(Y) 00101010 +
(Z) 10011110

Division: Modulo-2 division can be performed in a manner similar to arithmetic long division: subtract
the divisor from the leading bits of the dividend, then proceed along the dividend until its end is
reached, remembering that modulo-2 subtraction is used. For example, we can divide 100100111 by
10011 as follows:
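The long division just described can be sketched in code; `mod2_divide` is an illustrative helper written for this example, not a standard routine:

```python
def mod2_divide(dividend: str, divisor: str):
    """Long division in modulo-2 arithmetic (XOR instead of subtraction).
    Returns (quotient, remainder) as bit strings."""
    rem = 0
    quotient = []
    top = 1 << (len(divisor) - 1)      # value of the divisor's leading bit
    div = int(divisor, 2)
    for bit in dividend:
        rem = (rem << 1) | int(bit)    # bring down the next dividend bit
        if rem & top:                  # leading bit set -> subtract (XOR)
            rem ^= div
            quotient.append("1")
        else:
            quotient.append("0")
    q = "".join(quotient).lstrip("0") or "0"
    r = format(rem, f"0{len(divisor) - 1}b")
    return q, r

print(mod2_divide("100100111", "10011"))   # ('10001', '0100')
```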


Dividing 100100111 by 10011 in this way gives a quotient of 10001 and a remainder of 0100.

Subtraction: Modulo-2 subtraction provides the same results as addition. This can be illustrated by
adding the numbers X and Z from the addition example:

(X) 10110100
(Z) 10011110 +
(Y) 00101010

The addition example shows us that X + Y = Z, so Y = Z - X. However, the subtraction example
shows us that Y = Z + X. As neither Z nor X is zero, the addition and subtraction operators must behave
in the same way.

1.5.4.1.1 Parity
A parity bit is a redundant bit used in an error detection mechanism that can detect all single-bit errors
and all burst errors involving an odd number of errors. The parity mechanism can use either odd parity
(odd number of 1s in the data stream) or even parity (even number of 1s in the data stream). The
mechanism of adding one parity bit to a data stream is referred to as simple parity. The mechanism that
adds one parity bit to every data stream plus a block parity for a group of data streams is referred to as
two-dimensional parity.
Simple Parity: The stream of data is broken up into blocks of bits, and the number of 1 bits is counted.
The parity bit is:
o Set (1) if the number of 1 bits is odd and even parity is used.
-- Data (without parity): 1100100 Processed data: 11001001
o Cleared (0) if the number of 1 bits is even and even parity is used.
-- Data (without parity): 1100101 Processed data: 11001010
o Set (1) if the number of 1 bits is even and odd parity is used.
-- Data (without parity): 1100101 Processed data: 11001011
o Cleared (0) if the number of 1 bits is odd and odd parity is used.
-- Data (without parity): 1100100 Processed data: 11001000
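The four cases above reduce to a single rule: append whichever bit makes the total count of 1s even (or odd). A small sketch, with the helper name assumed for this example:

```python
def add_parity(data: str, even: bool = True) -> str:
    """Append a parity bit so the total number of 1s is even (or odd)."""
    ones = data.count("1")
    bit = ones % 2 if even else (ones + 1) % 2
    return data + str(bit)

# The four cases from the text:
print(add_parity("1100100", even=True))    # 11001001
print(add_parity("1100101", even=True))    # 11001010
print(add_parity("1100101", even=False))   # 11001011
print(add_parity("1100100", even=False))   # 11001000
```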

A parity scheme can detect all single-bit errors, but a parity bit is only guaranteed to detect an odd
number of bit errors. If an even number of bits is in error, the parity bit appears to be correct even
though the data is corrupt.
2-D parity: The process of arranging data in a table and associating a parity bit with each row and column
is referred to as 2-D parity.
Example: Suppose the data to be sent is 1101010011010000101111011101 and a parity bit is added for
every seven bits, using even parity. The parity generator performs the action shown in Figure 1.60.
Advantage: This increases the likelihood of detecting burst errors; a redundancy of n bits can detect a
burst error of up to n bits.
Disadvantage: If two bits in one data unit are damaged and two bits in exactly the same positions in
another data unit are also damaged, the 2-D parity checker will not be able to detect the error.
Original Data

1101010011010000101111011101

Data transmitted: 11010100 01101001 00101110 10111011 00101000


Figure 1.60: 2-Dimensional parity generation process
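The generation process in Figure 1.60 can be reproduced in a few lines. This is an illustrative sketch; the helper name and string-based representation are assumptions made for the example:

```python
def two_d_parity(data: str, width: int = 7) -> str:
    """Split data into rows of `width` bits, append an even-parity bit to
    each row, then append a column-parity row over the resulting bytes."""
    rows = [data[i:i + width] for i in range(0, len(data), width)]
    rows = [r + str(r.count("1") % 2) for r in rows]          # row parity
    col = "".join(str(sum(int(r[c]) for r in rows) % 2)       # column parity
                  for c in range(width + 1))
    return " ".join(rows + [col])

print(two_d_parity("1101010011010000101111011101"))
# 11010100 01101001 00101110 10111011 00101000
```

The output matches the transmitted data shown in Figure 1.60: four data bytes, each with its row parity bit, followed by the column-parity byte.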

1.5.4.1.2 CRC
More powerful error detection methods make use of the properties of finite fields and polynomials over
such fields. The cyclic redundancy check treats a block of data as the coefficients of a polynomial and
divides it (using modulo-2 division) by a fixed, predetermined polynomial. The coefficients of the
remainder of the division are taken as the redundant data bits, the CRC. On reception, one recomputes
the CRC from the payload bits and compares it with the CRC that was received. A mismatch indicates
that an error occurred.
Example: Suppose the bit sequence to be sent is 11010011101100 and the predefined 4-bit divisor 1011
is used. Since the divisor is 4 bits long, the CRC is 3 bits long. Append three zeros to the data and
perform modulo-2 division. The appended zeros are replaced by the remainder, 100, and hence the
transmitted data with CRC is 11010011101100100.

Figure 1.61: CRC generation process


Steps performed at the sender's end (generator):
1. Consider a divisor of n + 1 bits, given that the size of the CRC is n bits.
2. Append n zero bits to the data.
3. Perform a modulo-2 division on the data with zeros appended.
4. The zeros appended at the end are replaced by the remainder obtained.
The CRC checker performs the same modulo-2 division on the CRC-appended data; if the remainder
obtained is 0, the CRC is dropped and the data is accepted. If the remainder is a non-zero value, the
data is in error and hence dropped.
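The generator and checker steps can be sketched as follows. The function names are assumptions made for this example; the division is the same modulo-2 long division described earlier.

```python
def mod2_remainder(bits: str, divisor: str) -> str:
    """Remainder of a modulo-2 (XOR) division, as a bit string of
    length len(divisor) - 1."""
    rem, top, div = 0, 1 << (len(divisor) - 1), int(divisor, 2)
    for b in bits:
        rem = (rem << 1) | int(b)
        if rem & top:
            rem ^= div
    return format(rem, f"0{len(divisor) - 1}b")

def crc_generate(data: str, divisor: str) -> str:
    """Sender: append n zeros, divide, replace the zeros by the remainder."""
    n = len(divisor) - 1
    return data + mod2_remainder(data + "0" * n, divisor)

def crc_check(frame: str, divisor: str) -> bool:
    """Receiver: a CRC-appended frame must leave a zero remainder."""
    return int(mod2_remainder(frame, divisor), 2) == 0

frame = crc_generate("11010011101100", "1011")
print(frame)                                    # 11010011101100100
print(crc_check(frame, "1011"))                 # True
print(crc_check("11011011101100100", "1011"))   # bit 5 flipped -> False
```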

The divisor used in the CRC is often represented as a polynomial. The polynomial should be selected in
such a way that it is not divisible by x but is divisible by x + 1, where x is the base of the number system
being used; in our case it is 2, as we use binary digits.

The condition that the polynomial is not divisible by x guarantees that all burst errors of length less
than or equal to the degree of the polynomial are detected.

The condition that the polynomial is divisible by x + 1 guarantees that all burst errors affecting an
odd number of bits are detected.

Some popular polynomials used for standard protocols are CRC-8 (x^8 + x^2 + x + 1),
CRC-16 (x^16 + x^15 + x^2 + 1), CRC-CCITT (x^16 + x^12 + x^5 + 1) and CRC-32 (IEEE 802).

CRC can detect all burst errors that affect an odd number of bits. It can detect all burst errors of length
less than or equal to the degree of the polynomial. With a high probability, it can also detect burst errors
of length greater than the degree of the polynomial.

1.5.4.1.3 Checksum
A checksum of a message is an arithmetic sum of message code words of a certain word length, for
example byte values, together with their carry value. The sum is negated by means of one's complement
and stored or transferred as an extra code word extending the message. On the receiver side, a new
checksum is calculated from the extended message. If the new checksum is not 0, an error is detected.
Checksum schemes include parity bits, check digits, and longitudinal redundancy checks.
The checksum generator performs the following steps:
1. It divides the data into k segments of n bits each.
2. It adds all the k segments using one's complement addition to get an n-bit sum.
3. The checksum is obtained by complementing the sum calculated in the previous step.
The checksum is sent along with the data. The checksum checker performs the following steps:
1. The data with the checksum is divided into k + 1 segments of n bits each.
2. All the segments are added using one's complement addition to get an n-bit sum.
3. The sum obtained in the previous step is complemented.
If the result obtained is zero, the data is accepted as there is no error. If the complemented result is a
non-zero value, the data is discarded as it is corrupted.
Example: Suppose the bit sequence to be sent is 110100111011000100001010 and n is 8. The data is
divided into three segments of eight bits each. The one's complement sum of the three segments is
10001111. The complement of the sum, 01110000, is the checksum; it is appended to the data and
transmitted.
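The example can be reproduced with a short sketch of one's complement addition (the helper name is an assumption made for this example):

```python
def ones_complement_checksum(data: str, n: int = 8) -> str:
    """Split data into n-bit segments, add them with end-around carry
    (one's complement addition) and return the complemented sum."""
    total = 0
    for i in range(0, len(data), n):
        total += int(data[i:i + n], 2)
        while total >> n:                     # wrap the carry back in
            total = (total & ((1 << n) - 1)) + (total >> n)
    return format(total ^ ((1 << n) - 1), f"0{n}b")   # one's complement

data = "110100111011000100001010"
print(ones_complement_checksum(data))           # 01110000

# Receiver: the segments plus the checksum complement to all zeros.
print(ones_complement_checksum(data + "01110000"))   # 00000000
```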

1.5.4.1.4 LRC
Longitudinal redundancy check (LRC), or horizontal redundancy check, is a form of redundancy check
that is applied independently to each of a parallel group of bit streams. The data must be divided into
transmission blocks, to which the additional check data is added. The term usually applies to a single
parity bit per bit stream, although it could also be used to refer to a larger Hamming code. While simple
longitudinal parity can only detect errors, it can be combined with additional error control coding, such
as a transverse redundancy check, to correct errors.
A longitudinal redundancy check for a sequence of characters may be computed in software by the
following algorithm:
Set LRC = 0
for each character c in the string do
    set LRC = LRC XOR c
end do
An 8-bit LRC such as this is equivalent to a cyclic redundancy check using the polynomial x^8 + 1, but
the independence of the bit streams is less clear when looked at that way. An example of a protocol that
uses such an XOR-based longitudinal redundancy check character is the IEC 62056-21 standard for
electrical meter reading.
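The pseudocode above translates directly to Python; the sample block b"HELLO" is an arbitrary message chosen for this example:

```python
def lrc(message: bytes) -> int:
    """XOR-based longitudinal redundancy check over a byte stream."""
    check = 0
    for c in message:
        check ^= c
    return check

block = b"HELLO"
print(format(lrc(block), "08b"))            # the 8-bit check character
# Appending the LRC byte makes the running XOR of the block zero:
print(lrc(block + bytes([lrc(block)])))     # 0
```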

1.5.4.2 Error correction


Error correction is the additional ability to reconstruct the original, error-free data. The two most common
error correction mechanisms are retransmission and forward error correction. In the first mechanism the
transmitter retains the data until it gets an acknowledgement from the receiver; if the acknowledgement is
not received within a predefined time, the sender retransmits the data. In this mechanism, even if a single
bit is in error the entire data unit is retransmitted, which wastes time and resources. Error correction by
retransmission is used together with flow control and is discussed in section 1.5.3. There are three
mechanisms used for retransmission: Stop-and-Wait ARQ, Go-Back-N ARQ and Selective Repeat ARQ.
The most popular forward error correction method is the Hamming code. This method corrects all single-bit
errors. In this method the redundant bits included in the data help in identifying the bit in which the error
has occurred. To calculate the number of redundancy bits r required to correct a given number of data
bits m, we must find the relationship between m and r. When there are m bits of data and r redundant
bits, the resulting data size is m + r. r must be able to indicate m + r + 1 different states in order to
identify the bit in error (or the absence of an error). r bits can indicate 2^r states, hence
2^r >= m + r + 1. The value of r can be calculated by substituting the value of m.
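The relationship 2^r >= m + r + 1 can be solved for r by a simple search (an illustrative helper written for this example):

```python
def redundancy_bits(m: int) -> int:
    """Smallest r with 2**r >= m + r + 1 (Hamming's bound)."""
    r = 1
    while 2 ** r < m + r + 1:
        r += 1
    return r

print(redundancy_bits(7))   # 4 redundant bits for 7 data bits (an 11-bit code)
```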

1.5.4.2.1 Hamming code

r1 checks bit positions 1, 3, 5, 7, 9 and 11
r2 checks bit positions 2, 3, 6, 7, 10 and 11
r4 checks bit positions 4, 5, 6 and 7
r8 checks bit positions 8, 9, 10 and 11

Figure 1.62: Placement of redundant bits

It is a code that can be applied to data of any length. For example, consider the 7-bit ASCII code. It
requires 4 redundant bits to be added in order to satisfy the condition 2^r >= m + r + 1. The redundant
bits can be added to the end of the data or interspersed with the original data. The bits are placed at
positions 1, 2, 4 and 8 (the powers of 2) and are numbered by position: r1, r2, r4 and r8. Figure 1.62
shows the placement of the data and the redundant bits.
In the Hamming code each r bit is the parity bit for a combination of the data bits, as shown in Figure
1.62. Each data bit is included in at least two sets. The parity value for each combination of bits is the
value of the corresponding r bit. Suppose the data 1001101 is to be transmitted. The Hamming code that
would be transmitted by the sender is 10011100101; its generation is shown in Figure 1.63. The number
of 1s in bit positions 3, 5, 7, 9 and 11 is 3, hence r1 is set to 1. The number of 1s in bit positions 3, 6, 7,
10 and 11 is 4, hence r2 is set to 0. The number of 1s in bit positions 5, 6 and 7 is 2, hence r4 is set to 0.
The number of 1s in bit positions 9, 10 and 11 is 1, hence r8 is set to 1.
Now suppose the code word 10011100101 is sent. The receiver recalculates the bits r1, r2, r4 and r8
over the sets given in Figure 1.62. There are two situations:
Situation 1: The data is received (10011100101) without any error. The calculation of the r bits is
shown in Figure 1.64. The calculation shows that all the parity checks are 0 and hence the data is without
any error. The parity bits are dropped and the data is retrieved as 1001101.
Situation 2: The data is received (10011110101) with an error in one bit. The calculation of the r bits
gives, ordered in descending bit order, the value 0101. The decimal equivalent of 0101 is 5, hence it is
detected that bit 5 is in error and the 5th bit is flipped. The corrected code is 10011100101 and the data
is retrieved as 1001101 by dropping the parity bits.
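Both situations can be sketched in code for the 11-bit code used above. This is an illustrative implementation written for this example (function names and the string representation are assumptions); it reproduces the worked values from the text.

```python
def hamming_encode(data: str) -> str:
    """Place the 7 data bits at positions 3, 5, 6, 7, 9, 10, 11 and compute
    even-parity bits r1, r2, r4, r8 at the power-of-two positions.
    Positions are 1-indexed; the code word is written position 11 ... 1."""
    code = [0] * 12                       # indices 1..11 are used
    data_positions = [11, 10, 9, 7, 6, 5, 3]
    for pos, bit in zip(data_positions, data):
        code[pos] = int(bit)
    for r in (1, 2, 4, 8):
        covered = [p for p in range(1, 12) if p & r]
        code[r] = sum(code[p] for p in covered) % 2
    return "".join(str(code[p]) for p in range(11, 0, -1))

def hamming_correct(word: str) -> str:
    """Recompute the parities; the syndrome gives the erroneous position."""
    code = [0] + [int(b) for b in reversed(word)]   # code[p] = bit at pos p
    syndrome = 0
    for r in (1, 2, 4, 8):
        if sum(code[p] for p in range(1, 12) if p & r) % 2:
            syndrome += r
    if syndrome:
        code[syndrome] ^= 1               # flip the bit in error
    return "".join(str(code[p]) for p in range(11, 0, -1))

print(hamming_encode("1001101"))          # 10011100101
print(hamming_correct("10011110101"))     # bit 5 corrected -> 10011100101
```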

1.6. Ethernet
Any system that is connected via a LAN to the internet needs all five layers of the Internet model. The
data link layer is divided into the logical link control (LLC) and medium access control (MAC)
sublayers. LAN standards define the physical media and the connectors used to connect devices to the
media at the physical layer of the OSI reference model. The IEEE Ethernet data link layer has two
sublayers, namely LLC and MAC.
Traditional Ethernet was designed to operate at 10 Mbps. Access to the network by a device is through a
contention method, Carrier Sense Multiple Access with Collision Detection (CSMA/CD). The medium is
shared between all stations. The MAC sublayer governs the operation of the medium access. It also
frames data received from the upper layer and passes it to the Physical Layer Signaling sublayer for
encoding.
1.6.1 IEEE 802.3
Ethernet protocols refer to the family of local-area network (LAN) technologies covered by IEEE 802.3.
In the Ethernet standard there are two modes of operation: half-duplex and full-duplex. In half-duplex
mode, data are transmitted using the popular Carrier-Sense Multiple Access with Collision Detection
(CSMA/CD) protocol on a shared medium. The main disadvantages of half-duplex operation are its
efficiency and distance limitation: the link distance is limited by the minimum MAC frame size. This
restriction reduces the efficiency drastically for high-rate transmission. Therefore, the carrier extension
technique is used to ensure a minimum frame size of 512 bytes in Gigabit Ethernet to achieve a
reasonable link distance.
Four data rates are currently defined for operation over optical fiber and twisted-pair cables:
10 Mbps - 10Base-T Ethernet (IEEE 802.3)
100 Mbps - Fast Ethernet (IEEE 802.3u)
1000 Mbps - Gigabit Ethernet (IEEE 802.3z)
10 Gbps - 10-Gigabit Ethernet (IEEE 802.3ae)
Figure 1.17 shows the IEEE 802.3 logical layers and their relationship to the OSI reference model. As
with all IEEE 802 protocols, the ISO data link layer is divided into two IEEE 802 sublayers, the Media
Access Control (MAC) sublayer and the MAC-client sublayer. The IEEE 802.3 physical layer
corresponds to the ISO physical layer.

Figure 1.17: Ethernet's Logical Relationship to the ISO Reference Model


The MAC-client sublayer may be one of the following:
Logical Link Control (LLC), if the unit is a DTE. This sublayer provides the interface between the Ethernet
MAC and the upper layers in the protocol stack of the end station. The LLC sublayer is defined by IEEE 802.2
standards.
Bridge entity, if the unit is a DCE. Bridge entities provide LAN-to-LAN interfaces between LANs that use
the same protocol (for example, Ethernet to Ethernet) and also between different protocols (for example,
Ethernet to Token Ring). Bridge entities are defined by IEEE 802.1 standards.
Because specifications for LLC and bridge entities are common for all IEEE 802 LAN protocols, network
compatibility becomes the primary responsibility of the particular network protocol.
The MAC layer controls the node's access to the network media and is specific to the individual protocol. All
IEEE 802.3 MACs must meet the same basic set of logical requirements, regardless of whether they include one
or more of the defined optional protocol extensions. The only requirement for basic communication between
two network nodes is that both MACs must support the same transmission rate.
The physical layer of IEEE 802.3 is specific to the transmission data rate, the signal encoding, and the
type of media interconnecting the two nodes. Gigabit Ethernet, for example, is defined to operate over
either twisted-pair or optical fiber cable, but each specific type of cable or signal-encoding procedure
requires a different physical layer implementation.
The MAC sublayer has two primary responsibilities:
Data encapsulation, including frame assembly before transmission and frame parsing/error detection
during and after reception.
Media access control, including initiation of frame transmission and recovery from transmission failure.
The Basic Ethernet Frame Format
The IEEE 802.3 standard defines a basic data frame format that is required for all MAC implementations,
plus several additional optional formats that are used to extend the protocol's basic capability. The basic
data frame format contains the seven fields shown in Figure 1.18.
Preamble (PRE): It consists of 7 bytes. The PRE is an alternating pattern of ones and zeros that tells
receiving stations that a frame is coming, and that provides a means to synchronize the frame-reception
portions of receiving physical layers with the incoming bit stream.
Start-of-frame delimiter (SFD): It is a 1-byte field. The SFD is an alternating pattern of ones and zeros,
ending with two consecutive 1 bits (10101011), indicating that the next bit is the left-most bit in the
left-most byte of the destination address.
Destination address (DA): It is 6 bytes in length. The DA field identifies which station(s) should receive
the frame. The left-most bit in the DA field indicates whether the address is an individual address
(indicated by a 0) or a group address (indicated by a 1). The second bit from the left indicates whether
the DA is globally administered (indicated by a 0) or locally administered (indicated by a 1). The
remaining 46 bits are a uniquely assigned value that identifies a single station, a defined group of
stations, or all stations on the network.
Source addresses (SA): It is a 6 byte field. The SA field identifies the sending station. The SA is always
an individual address and the left-most bit in the SA field is always 0.
Length/Type: It consists of 2 bytes. This field indicates either the number of MAC-client data bytes that
are contained in the data field of the frame, or the frame type ID if the frame is assembled using an
optional format. If the Length/Type field value is less than or equal to 1500, the number of LLC bytes in
the Data field is equal to the Length/Type field value. If the Length/Type field value is greater than or
equal to 1536, the frame is an optional type frame, and the Length/Type field value identifies the
particular type of frame being sent or received.
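The Length/Type rule above can be sketched as a small helper. This is only an illustration of the rule; the function and its return strings are not part of the standard.

```python
def interpret_length_type(value: int) -> str:
    """Classify an 802.3 Length/Type field value (sketch of the rule above)."""
    if value <= 1500:
        return f"length: {value} LLC data bytes"
    if value >= 1536:
        return f"type: EtherType 0x{value:04X}"
    return "undefined (values 1501-1535 are not used)"

print(interpret_length_type(46))      # length: 46 LLC data bytes
print(interpret_length_type(0x0800))  # type: EtherType 0x0800
```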
Data: It is a sequence of n bytes of any value, where n is less than or equal to 1500. If the length of the
Data field is less than 46, the Data field must be extended by adding a filler (a pad), sufficient to bring
the Data field length to 46 bytes.
Pre (7) | SFD (1) | DA (6) | SA (6) | Length/Type (2) | Data + pad (46-1500) | FCS (4)

Figure 2.18: The Basic IEEE 802.3 MAC Data Frame Format (field widths in bytes)
Frame check sequence (FCS): It consists of 4 bytes. This sequence contains a 32-bit cyclic redundancy
check (CRC) value, which is created by the sending MAC and is recalculated by the receiving MAC to
check for damaged frames. The FCS is generated over the DA, SA, Length/Type, and Data fields.
Individual addresses are known as unicast addresses because they refer to a single MAC and are assigned by
the NIC manufacturer from a block of addresses allocated by the IEEE. Group addresses, known as multicast
addresses, identify the end stations in a workgroup and are assigned by the network manager. A special group
address, called the broadcast address, indicates all stations on the network and has all its bits set to 1.
Whenever an end station MAC receives a transmit-frame request with the accompanying address and data
information from the LLC sublayer, the MAC begins the transmission sequence by transferring the LLC
information into the MAC frame buffer.
The preamble and start-of-frame delimiter are inserted in the PRE and SFD fields.
The destination and source addresses are inserted into the address fields.
The LLC data bytes are counted, and the number of bytes is inserted into the Length/Type field.
The LLC data bytes are inserted into the Data field. If the number of LLC data bytes is less than 46, a
pad is added to bring the Data field length up to 46.
An FCS value is generated over the DA, SA, Length/Type, and Data fields and is appended to the end of
the Data field.
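The assembly steps above can be sketched in a few lines of Python. This is a simplified illustration, not a full MAC implementation: the preamble and SFD are omitted (they are inserted at transmit time), and Python's zlib.crc32, which uses the same CRC-32 polynomial as Ethernet, stands in for the FCS hardware.

```python
import struct
import zlib

def assemble_frame(dst: bytes, src: bytes, llc_data: bytes) -> bytes:
    """Build the DA/SA/Length/Data+pad/FCS portion of an 802.3 frame."""
    assert len(dst) == 6 and len(src) == 6 and len(llc_data) <= 1500
    length = struct.pack("!H", len(llc_data))          # byte count of LLC data
    if len(llc_data) < 46:                             # pad Data field to 46 bytes
        llc_data += b"\x00" * (46 - len(llc_data))
    body = dst + src + length + llc_data
    return body + struct.pack("<I", zlib.crc32(body))  # append 4-byte FCS

frame = assemble_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", b"hello")
print(len(frame))  # 6 + 6 + 2 + 46 + 4 = 64 bytes
```

Note how a 5-byte payload is padded out to 46 bytes, so the frame reaches the 64-byte minimum (excluding preamble and SFD).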
After the frame is assembled, actual frame transmission will depend on whether the MAC is operating in half-duplex or full-duplex mode.
The IEEE 802.3 standard currently requires that all Ethernet MACs support half-duplex operation, in which the
MAC can be either transmitting or receiving a frame, but it cannot be doing both simultaneously. Full-duplex
operation is an optional MAC capability that allows the MAC to transmit and receive frames simultaneously.
MAC Frame with Gigabit Ethernet Carrier Extension (IEEE 802.3z)
1000Base-X has a minimum frame size of 416 bytes, and 1000Base-T has a minimum frame size of 520 bytes.
The Extension is a non-data variable extension field to frames that are shorter than the minimum length.
Pre (7) | SFD (1) | DA (6) | SA (6) | Length/Type (2) | Data + pad (variable) | FCS (4) | Ext (variable)

Figure 2.19: IEEE 802.3z MAC Data Frame Format (field widths in bytes)


2.4.1 IEEE 802.4 Token Bus
IEEE 802.4 is a standard which is meant for the Token bus networks. In this type of system the nodes are
physically connected as a bus, but they form a logical ring with token passed around to determine the turns for
sending. Token bus supports four distinct priority levels: 0, 2, 4 and 6, where 0 is the lowest priority and 6 is
the highest. Token bus defines the following timers:
Token holding time (THT): the maximum amount of time for which a node holding the token can send priority 6 data.
Token rotation time for class 4/2/0 data: the maximum time a token can take to circulate and still allow transmission of class 4/2/0 data.
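The access rule implied by these timers can be sketched as follows. The function name and the numeric target rotation times are illustrative assumptions, not values from the standard: class 6 traffic is bounded by the token holding time, while a lower class may transmit only if the token returned sooner than that class's target rotation time.

```python
def may_transmit(data_class: int, measured_trt: float, target_trt: dict) -> bool:
    """Token-bus access check (sketch). Class 6 is limited only by the THT;
    classes 4/2/0 send only if the token came back early enough."""
    if data_class == 6:
        return True
    return measured_trt < target_trt[data_class]

# Hypothetical target rotation times in milliseconds (not from the standard).
targets = {4: 40.0, 2: 60.0, 0: 80.0}
print(may_transmit(4, measured_trt=35.0, target_trt=targets))  # True
print(may_transmit(0, measured_trt=90.0, target_trt=targets))  # False
```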

Figure 2.20: Format of IEEE 802.4 frame


Preamble: This field is used to synchronize the receiver's clock.
Starting Delimiter (SD): This field is used to mark the starting of the frame. It contains analog encoding of
symbols other than 1 or 0 so that they cannot occur accidentally in the user data. Hence no length field is
needed.
Frame Control (FC): This field is used to distinguish data frames from control frames. For data frames, it carries
the frame's priority as well as a bit which the destination can set as an acknowledgement. For control frames,
the Frame Control field is used to specify the frame type. The allowed types include token passing and various
ring maintenance frames.
Destination Address: This field may be 2 bytes (for a local address) or 6 bytes (for a global address) in length
and indicates the physical address of the destination station.

Frame Control Field  Name                 Meaning
00000000             Claim_token          Claim token during ring maintenance
00000001             Solicit_successor_1  Allow stations to enter the ring
00000010             Solicit_successor_2  Allow stations to enter the ring
00000011             Who_follows          Recover from lost token
00000100             Resolve_contention   Used when multiple stations want to enter
00001000             Token                Pass the token
00001100             Set_successor        Allow stations to leave the ring

Figure 2.21: Control frames used for ring maintenance


Source Address: This field may be 2 bytes (for a local address) or 6 bytes (for a global address) in length and
indicates the physical address of the source station.
Data: The Data field carries the actual data; it may be up to 8182 bytes when 2-byte addresses are used and up to
8174 bytes for 6-byte addresses.
Checksum: A 4-byte checksum calculated for the data. Used for error detection.
End Delimiter (ED): This field is used to mark the ending of the frame. It contains analog encoding of symbols
other than 1 or 0 so that they cannot occur accidentally in the user data.
2.4.2 IEEE 802.5 Token Ring

Figure 2.22: Token Ring Network


IEEE 802.5 is a standard which is meant for the Token Ring networks. Token-passing networks move a small
frame, called a token, around the network. Possession of the token grants the right for a station to transmit data
on to the network. If a node receiving the token has no information to send, it passes the token to the next
station. If a station possessing the token has information to communicate it seizes the token, converts the token
into start-of-frame sequence, appends the information to be sent and passes the information to the next station.
There won't be a token in the network when a station is transmitting information, hence other stations won't be
able to transmit data and thus collision will not happen in the token ring network. The information frame will
circulate once in the ring and will be removed when it reaches the sending station. Each station can hold the
token for a maximum period of time.
IEEE 802.5 supports two basic frame types namely tokens and data/command frames. Tokens are three bytes in
length and contain three fields namely start delimiter, access control and end delimiter. Data/command frame
vary in size and depend on the size of the information field. Data frames carry information for the upper layer
protocols and command frames contain control information.

Figure 2.23: Format for Token frame of IEEE 802.5

Figure 2.24: Format for Data/command frame of IEEE 802.5


Start delimiter: It signals the arrival of a token or a data/command frame. It includes an identification symbol and
violates the encoding scheme used in the rest of the frame in order to be differentiated from other frame fields.
Access control: The first three bits indicate the priority of the frame. The fourth bit is the token bit that differentiates
the token frame from the data/command frame. A station wishing to transmit information sets the
token bit to 1. The fifth bit is the monitor bit, which is used by the active monitor to check whether the frame is
circling the ring endlessly. The last three bits are the reservation bits.
Frame control: It indicates whether the frame contains data or control information. If the frame is control frame,
this byte specifies the type of control information present in the frame.
Source address: It specifies the physical address of the source of the frame.
Destination address: It specifies the physical address of the destination of the frame.
Data: The length of the data is limited by the maximum time a station can hold the token.
FCS: Frame Check Sequence is filled by the source depending on the content of the frame. The destination
recalculates it to check for data integrity. The frame is discarded if the frame is found damaged (FCS does not
match).
End delimiter: It signals the end of token or data/command frame and contains damage indicator and last frame
indicator.
Frame status: This is the last field of the data/command frame and includes the address-recognized (A) and
frame copied (C) indicators. The receiving station sets the A bit to 1 when it detects its address in the destination
address field. The receiving station also sets the C bit to 1 when it copies the frame.
Token Ring Media Access Control
When none of the stations connected to the ring has anything to send, the token circulates around the ring. Here
one station is designated as the monitor station. The operation of the monitor is described in more detail below.
As the token circulates around the ring, any station that has data to send may seize the token, that is, drain it
off the ring and begin sending data. In 802.5 networks, the seizing process involves simply modifying 1 bit in
the second byte of the token; the first 2 bytes of the modified token now become the preamble for the subsequent data
packet. Once a station has the token, it is allowed to send one or more packets.
Each transmitted packet contains the destination address of the intended receiver. It may also contain a multicast
(or broadcast) address if it is intended to reach more than one (or all) receivers. As the packet flows past each
node on the ring, each node looks inside the packet to see if it is the intended recipient. If so, it copies the packet
into a buffer as it flows through the network adaptor, but it does not remove the packet from the ring. The
sending station has the responsibility of removing the packet from the ring. For any packet that is longer than
the number of bits that can be stored in the ring, the sending station will be draining the first part of the packet
from the ring while still transmitting the latter part.
One issue we must address is how much data a given node is allowed to transmit each time it possesses the
token, or said another way, how long a given node is allowed to hold the token. We call this the token holding
time (THT).
Before putting each packet onto the ring, the station must check that the amount of time it would take to
transmit the packet would not cause it to exceed the token holding time. This means keeping track of how long
it has already held the token, and looking at the length of the next packet that it wants to send. From the token
holding time we can derive another useful quantity, the token rotation time (TRT), which is the amount of time
it takes a token to traverse the ring as viewed by a given node. It is easy to see that TRT ≤ ActiveNodes × THT
+ RingLatency, where RingLatency denotes how long it takes the token to circulate around the ring when no
one has data to send, and ActiveNodes denotes the number of nodes that have data to transmit. The 802.5
protocol provides a form of reliable delivery using 2 bits in the packet trailer, the A and C bits. These are both 0
initially. When a station sees a frame for which it is the intended recipient, it sets the A bit in the frame. When it
copies the frame into its adaptor, it sets the C bit. If the sending station sees the frame come back over the ring
with the A bit still 0, it knows that the intended recipient is not functioning or absent. If the A bit is set but not
the C bit, this implies that for some reason (e.g., lack of buffer space) the destination could not accept the frame.
Thus, the frame might reasonably be retransmitted later in the hope that buffer space had become available.
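The bound on the token rotation time given above can be made concrete with a short worked example; the function name and the numeric values are hypothetical, chosen only to illustrate the inequality.

```python
def max_trt(active_nodes: int, tht: float, ring_latency: float) -> float:
    """Worst-case token rotation time: every active node holds the token
    for its full THT, plus the latency of the ring itself."""
    return active_nodes * tht + ring_latency

# e.g. 20 active stations, 10 ms THT, 2 ms ring latency (hypothetical values)
print(max_trt(20, 10.0, 2.0))  # 202.0 ms
```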
Another detail of the 802.5 protocol concerns the support of different levels of priority. The token contains a 3-bit
priority field, so we can think of the token as having a certain priority n at any time. Each device that wants to
send a packet assigns a priority to that packet, and the device can only seize the token to transmit a packet if the
packet's priority is at least as great as the token's. The priority of the token changes over time by using
reservation bits in the frame header. For example, a station X waiting to send a priority n packet may set these
bits to n if it sees a data frame going past and the bits have not already been set to a higher value. This causes
the station that currently holds the token to elevate its priority to n when it releases it. Station X is responsible
for lowering the token priority to its old value when it is done.
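The reservation rule described above reduces to a single comparison; this one-line sketch (names are my own, not from the standard) shows how a waiting station decides whether to overwrite the reservation bits.

```python
def updated_reservation(current_bits: int, my_priority: int) -> int:
    """A waiting station raises the reservation bits only if its packet's
    priority exceeds the value already reserved (sketch of the 802.5 rule)."""
    return max(current_bits, my_priority)

print(updated_reservation(3, 5))  # 5: reservation raised
print(updated_reservation(6, 4))  # 6: a higher reservation is already present
```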
2.5 Fiber Distributed Data Interface (FDDI)
FDDI networks are typically used as backbones for wide-area networks. FDDI rings are normally constructed in
the form of a "dual ring of trees". A small number of devices, typically infrastructure devices such as routers and
concentrators rather than host computers, are connected to both rings - these are referred to as "dual-attached".
Host computers are then connected as single-attached devices to the routers or concentrators. The dual ring in
its most degenerate form is simply collapsed into a single device. In any case, the whole dual ring is typically
contained within a computer room.
This network topology is required because the dual ring actually passes through each connected device and
requires each such device to remain continuously operational (the standard actually allows for optical bypasses
but these are considered to be unreliable and error-prone). Devices such as workstations and minicomputers that
may not be under the control of the network managers are not suitable for connection to the dual ring.
As an alternative to a dual-attached connection, the same degree of resilience is available to a workstation
through a dual-homed connection which is made simultaneously to two separate devices in the same FDDI ring.
One of the connections becomes active while the other one is automatically blocked. If the first connection fails,
the backup link takes over with no perceptible delay.
An extension to FDDI, called FDDI-2, supports the transmission of voice and video information as well as data.
Another variation of FDDI called FDDI Full Duplex Technology (FFDT) uses the same network infrastructure
but can potentially support data rates up to 200 Mbps.
FDDI's four specifications are the Media Access Control (MAC), Physical Layer Protocol (PHY), Physical-Medium Dependent (PMD), and Station Management (SMT) specifications. The MAC specification defines
how the medium is accessed, including frame format, token handling, addressing, algorithms for calculating
cyclic redundancy check (CRC) value, and error-recovery mechanisms. The PHY specification defines data
encoding/decoding procedures, clocking requirements, and framing, among other functions. The PMD
specification defines the characteristics of the transmission medium, including fiber-optic links, power levels,
bit-error rates, optical components, and connectors. The SMT specification defines FDDI station configuration,
ring configuration, and ring control features, including station insertion and removal, initialization, fault
isolation and recovery, scheduling, and statistics collection.
The FDDI frame format is similar to the format of a Token Ring frame. This is one of the areas in which FDDI
borrows heavily from earlier LAN technologies, such as Token Ring. FDDI frames can be as large as 4,500
bytes. Figure 2.25 shows the frame format of an FDDI data frame and token. The following descriptions
summarize the FDDI data frame and token fields illustrated in Figure 2.25.
Preamble: It is a unique sequence that prepares each station for an upcoming frame.
Start delimiter: It indicates the beginning of a frame by employing a signaling pattern that differentiates
it from the rest of the frame.
Frame control: It indicates the size of the address fields and whether the frame contains asynchronous or
synchronous data, among other control information.
Destination Address: It contains a unicast (singular), multicast (group), or broadcast (every station)
address. As with Ethernet and Token Ring addresses, FDDI destination addresses are 6 bytes long.
Source Address: It identifies the single station that sent the frame. As with Ethernet and Token Ring
addresses, FDDI source addresses are 6 bytes long.
Data: It contains either information destined for an upper-layer protocol or control information.
Frame check sequence (FCS): It is filled in by the source station with a cyclic redundancy check
value calculated over the frame contents. The destination station recalculates the value to determine whether the frame
was damaged in transit. If so, the frame is discarded.
End delimiter: It contains unique symbols that cannot be data symbols to indicate the end of the frame.
Frame status: It allows the source station to determine whether an error occurred and whether the
frame was recognized and copied by a receiving station.

Figure 2.25: FDDI Frame


Dual Ring: FDDI's primary fault-tolerant feature is the dual ring. If a station on the dual ring fails or is powered
down, or if the cable is damaged, the dual ring is automatically wrapped (doubled back onto itself) into a single
ring. When the ring is wrapped, the dual-ring topology becomes a single-ring topology. Data continues to be
transmitted on the FDDI ring without performance impact during the wrap condition. Figure 2.26 illustrates the
effect of a ring wrapping in FDDI.


a) from station failure


b) from link failure
Figure 2.26: A Ring Recovers by Wrapping
When a single station fails, as shown in Figure 2.26a, devices on either side of the failed (or powered-down)
station wrap, forming a single ring. Network operation continues for the remaining stations on the ring. When a
cable failure occurs, as shown in Figure 2.26b, devices on either side of the cable fault wrap. Network operation
continues for all stations. It should be noted that FDDI truly provides fault tolerance against a single failure
only. When two or more failures occur, the FDDI ring segments into two or more independent rings, which are
incapable of communicating with each other.
2.6 Wireless LANs
Wireless communication is one of the fastest growing technologies. The demand for connecting devices without
cable is increasing everywhere. Wireless LANs are found on college campuses, office buildings and public
areas. At home, a wireless LAN can connect roaming devices to the Internet. IEEE has defined a specification
for a wireless LAN, called IEEE 802.11.
2.6.1 IEEE 802.11
IEEE 802.11 Wireless LAN specifies two kinds of services. They are basic service set (BSS) and extended
service set (ESS), shown in Figure 2.27 and Figure 2.28 respectively. The BSS is the building block of a wireless
LAN. A BSS is made of stationary or mobile wireless stations and a possible central base station, known as the
Access Point (AP). The BSS without an AP is a stand-alone network and cannot send data to other networks.
ESS is made up of two or more BSSs with APs. The BSSs are connected through distribution system, which is
usually a wired LAN. The distribution system connects the APs in the BSSs. When BSSs are connected, we
have an infrastructure network. In this network the stations within reach of one another can communicate with
one another without the use of an AP. However, communication between two stations in different BSSs usually
occurs via APs.

Figure 2.27: Basic Service Set (a BSS without an AP, and a BSS with an AP)

Figure 2.28: Extended Service Set


The stations are categorized based on the type of mobility. They are:
No-Transition mobility: This station is either stationary or moves only within a BSS.
BSS-Transition mobility: This station can move from one BSS to another, but within one ESS.
ESS-Transition mobility: This station can move from one ESS to another.
Physical Layer:
IEEE 802.11 defines specifications for the conversion of bits to a signal in the physical layer. There are five
specifications for the radio frequencies. They are:
1. IEEE 802.11 FHSS: It describes Frequency Hop Spread Spectrum (FHSS) method for signal generation
in a 2.4 GHz ISM band. In this method the sender sends the data on one carrier frequency for a short
amount of time, hops to another for the same amount of time and so on. This process makes it difficult
for unauthorized person to make sense of the data being transmitted.
2. IEEE 802.11 DSSS: It describes the Direct Sequence Spread Spectrum (DSSS) method for signal
generation in a 2.4 GHz ISM band. In DSSS each bit sent is replaced by a chip code. The time needed to
send one chip code must be the same as the time needed to send one original bit. The modulation
technique used is PSK at 1 Mbaud/s. The system allows 1 or 2 bits/baud (BPSK or QPSK), which results
in a data rate of 1 or 2 Mbps.
3. IEEE 802.11a OFDM: It describes Orthogonal Frequency-Division Multiplexing (OFDM) method for
signal generation in a 5 GHz ISM band. OFDM is similar to FDM except that all the sub-bands
are used by one source at a given time. OFDM uses PSK and QAM for modulation.
4. IEEE 802.11b HR-DSSS: It describes the High-rate DSSS method for signal generation in a 2.4 GHz
ISM band. HR-DSSS is similar to DSSS except for the encoding method, which is complementary code
keying (CCK). CCK encodes 4 or 8 bits to one CCK symbol. It defines four data rates: 1, 2, 5.5 and 11 Mbps.
The first two data rates use BPSK and QPSK. The 5.5-Mbps version uses BPSK and transmits at 1.375
Mbaud/s with 4-bit CCK encoding. The 11-Mbps version uses QPSK and transmits at 1.375 Mbaud/s with
8-bit CCK encoding.
5. IEEE 802.11g OFDM: This is a relatively new specification that uses OFDM in the 2.4 GHz ISM band.
The complex modulation technique achieves a 54-Mbps data rate.
MAC Layer:
IEEE 802.11 defines two MAC sub layers: the distribution coordination function (DCF) and point coordination
function (PCF). PCF is an optional and complex access method that can be implemented in an infrastructure
network. Wireless LAN implements CSMA/CA (Collision Avoidance) and is shown in Figure 2.29.
o Before sending a frame, the source station senses the medium by checking the energy level at the carrier
frequency.
o The station uses a persistence strategy with backoff until the channel is found idle.
o After the channel is found idle, the station waits for a period of time, called the distributed inter-frame space (DIFS); then the station sends a control frame, called request to send (RTS).
o After receiving the RTS and waiting a short period of time, called short inter-frame space (SIFS), the
destination station sends a control frame, called clear to send (CTS), to the source station. This control
frame indicates that the destination station is ready to receive data.
o The source station sends data after waiting an amount of time equal to SIFS.
o The destination station, after waiting for an amount of time equal to SIFS, sends an acknowledgement to
show that the frame has been received. Acknowledgement is needed in this protocol because the station
does not have any means to check for the successful arrival of its data at the destination.

Figure 2.29: CSMA/CA and NAV


When a station sends an RTS frame, it includes the duration of the time that it needs to occupy the channel. The
stations that are affected by this transmission create a timer called a network allocation vector (NAV) that shows
how much time must pass before these stations are allowed to check the channel for idleness. Each time a
station accesses the system and sends an RTS frame, other stations start their NAV. Before sensing the physical
medium to see if it is idle, each station first checks its NAV to see if it has expired.
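The NAV bookkeeping described above can be sketched with a small class. This is an illustration only; the class, method, and attribute names are my own and not part of any standard API.

```python
class Station:
    """Minimal NAV bookkeeping for 802.11 virtual carrier sensing (sketch)."""

    def __init__(self):
        self.nav_expiry = 0.0  # time until which the medium is assumed busy

    def on_overheard_rts(self, now: float, duration: float) -> None:
        # The RTS duration field says how long the exchange occupies the channel.
        self.nav_expiry = max(self.nav_expiry, now + duration)

    def may_sense_medium(self, now: float) -> bool:
        # Physical carrier sensing is attempted only after the NAV has expired.
        return now >= self.nav_expiry

s = Station()
s.on_overheard_rts(now=0.0, duration=2.5)
print(s.may_sense_medium(1.0))  # False: NAV still counting down
print(s.may_sense_medium(3.0))  # True
```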
IEEE 802.11 is a set of standards for the WLAN. The basic access mechanism used in wireless networks is
CSMA/CA. In wireless networks the packets should preferably be small as the probability of a packet getting
corrupted increases with the packet size. There are three main frame types, namely Data (data
transmission), Control (medium access control) and Management (exchange of management information). Each of
these frames is subdivided according to their specific function. The 802.11 frames are composed of four fields
as shown in Figure 2.30.

Figure 2.30: Frame format of IEEE 802.11


Preamble: It is dependent on the PHY and includes Synch and SFD. Synch is an 80-bit sequence of alternating zeros
and ones, which is used by the PHY circuitry to select the appropriate antenna and to reach steady-state
frequency-offset correction and synchronization with the received packet timing. SFD contains a 16-bit
sequence of 0000110010111101, which is used to define the frame timing.
PLCP Header: It contains logical information that will be used by the PHY to decode the frame and consist of
PLCP PDU length (no. of bytes in packet), PLCP signaling (rate information) and Header error check (16 bit
CRC).
MAC Data: The general MAC frame format is shown in Figure 2.31.
Frame Control: This field is two bytes in length and contains the following subfields:
Protocol Version: It is a two-bit field which is used to accommodate future versions. The current value for this field
is 0.
Type: The next two bits indicate the type of the frame. It is set to 00 for management frames, 01 for control
frames and 10 for data frames.
Subtype: The next four bits indicate the subtype of the frame. Figure 2.32 shows the values of the type and
subtype fields along with their descriptions.
ToDS: This bit is set to 1 when the frame is addressed to the Access Point for forwarding it to the distribution
system. The bit is set to 0 in all other frames.
FromDS: This bit is set to 1 when the frame is coming from the distribution system.
More flag: This bit is set to 1 when more fragments belonging to the same frame follow the current
frame.

Figure 2.31: General MAC frame format


Retry: It indicates that this fragment is a retransmission of a previously transmitted fragment. It would be
helpful in identifying the duplicate frames.
Power management: This bit is used by the stations which are changing state either from Power save to Active
or vice versa. This bit indicates the power management mode in which the station will be after the transmission
of the current frame.
More data: This bit is also meant for power management and used by the Access Point to indicate that there are
more frames buffered to this station. This information could be used by the station to continue polling or change
the mode to active.
WEP: This bit indicates that the frame body is encrypted using the WEP algorithm.
Order: This bit is used to indicate that the frame is being sent using the Strictly-Ordered service class.
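The subfield layout listed above (2-bit version, 2-bit type, 4-bit subtype, then the eight one-bit flags) can be decoded with simple bit masks. This is a sketch; the function and dictionary key names are my own.

```python
def parse_frame_control(fc: int) -> dict:
    """Unpack a 16-bit 802.11 Frame Control value following the
    subfield order described above (illustrative field names)."""
    return {
        "protocol_version": fc & 0b11,
        "type":            (fc >> 2) & 0b11,
        "subtype":         (fc >> 4) & 0b1111,
        "to_ds":           (fc >> 8) & 1,
        "from_ds":         (fc >> 9) & 1,
        "more_frag":       (fc >> 10) & 1,
        "retry":           (fc >> 11) & 1,
        "power_mgmt":      (fc >> 12) & 1,
        "more_data":       (fc >> 13) & 1,
        "wep":             (fc >> 14) & 1,
        "order":           (fc >> 15) & 1,
    }

# An RTS frame: version 0, type 01 (control), subtype 1011.
fields = parse_frame_control((0b1011 << 4) | (0b01 << 2))
print(fields["type"], bin(fields["subtype"]))  # 1 0b1011
```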
Type  Type Description  Subtype     Subtype Description
00    Management        0000        Association Request
00    Management        0001        Association Response
00    Management        0010        Reassociation Request
00    Management        0011        Reassociation Response
00    Management        0100        Probe Request
00    Management        0101        Probe Response
00    Management        0110-0111   Reserved
00    Management        1000        Beacon
00    Management        1001        ATIM
00    Management        1010        Disassociation
00    Management        1011        Authentication
00    Management        1100        Deauthentication
00    Management        1101-1111   Reserved
01    Control           0000-1001   Reserved
01    Control           1010        PS-Poll
01    Control           1011        RTS
01    Control           1100        CTS
01    Control           1101        ACK
01    Control           1110        CF-End
01    Control           1111        CF-End + CF-ACK
10    Data              0000        Data
10    Data              0001        Data + CF-ACK
10    Data              0010        Data + CF-Poll
10    Data              0011        Data + CF-ACK + CF-Poll
10    Data              0100        Null Function (no data)
10    Data              0101        CF-ACK (no data)
10    Data              0110        CF-Poll (no data)
10    Data              0111        CF-ACK + CF-Poll (no data)
10    Data              1000-1111   Reserved

Figure 2.32: Description for the frame type and subtype field combinations
Duration/ID: It indicates the station ID in the case of Power-Save Poll messages, and the duration value used for
Network Allocation Vector calculation in all other frames.
Address Fields: Depending on the ToDS and FromDS bits, a frame may contain up to four addresses as
follows:
Address 1: It always indicates the recipient address: the address of the AP if the ToDS bit is set,
and the end-station address otherwise.
Address 2: It always indicates the transmitter address: the address of the AP if the FromDS bit
is set, and the station address otherwise.
Address 3: In most cases it carries the remaining address: the original source address if
FromDS is set to 1, and the original destination address otherwise.
Address 4: It is used in the special case where a wireless distribution system is used and the frame is
being transmitted from one AP to another. In this case both the ToDS and FromDS bits are set, so the
frame must carry both the original destination and the original source addresses.
Sequence Control: It is used to represent the order of different fragments belonging to the same frame and to
recognize duplicates. It consists of two fields namely Fragment number and the Sequence number.
Frame Body: Data or information that is being transmitted
Cyclic Redundancy Check: It is a 32-bit field containing the Cyclic Redundancy Check.
A wireless LAN defined by IEEE 802.11 has three categories of frames namely management, control and
data frames. Management frames are used for initial communication between stations and access points.
Control frames are used for accessing the channel and acknowledging frames. Data frames are used for
carrying data and control information. There are three control frames, namely RTS, CTS and ACK; their
format is shown in Figure 2.33.
RTS frame:        FC | D | Address 1 | Address 2 | FCS
CTS or ACK frame: FC | D | Address 1 | FCS

Figure 2.33: Control Frames

Four different cases of addressing mechanisms are supported by IEEE 802.11. The interpretation of four
addresses in the MAC frame depends on the value of ToDS and FromDS flags of the FC field and is shown
in Figure 2.34. Figures 2.35 to 2.38 represent the addressing mechanisms for the four cases.
ToDS  FromDS  Address 1            Address 2       Address 3            Address 4       Meaning
0     0       Destination station  Source station  BSS ID               N/A             Frame is neither going to nor coming from the distribution system
0     1       Destination station  Sending AP      Source station       N/A             Frame is coming from the distribution system
1     0       Receiving AP         Source station  Destination station  N/A             Frame is going to the distribution system
1     1       Receiving AP         Sending AP      Destination station  Source station  The distribution system is wireless

Figure 2.34: Interpretation of Addresses

Figure 2.35: Addressing Mechanism for ToDS = 0 & FromDS = 0

Figure 2.36: Addressing mechanism for ToDS = 0 & FromDS = 1

Figure 2.37: Addressing mechanism for ToDS = 1 & FromDS = 0

Figure 2.38: Addressing mechanism for ToDS = 1 & FromDS = 1
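The interpretation summarized in Figure 2.34 is effectively a lookup table keyed by the (ToDS, FromDS) pair, and can be encoded directly; the function name and role strings are illustrative.

```python
def address_roles(to_ds: int, from_ds: int):
    """Return the roles of Address 1-4 for a (ToDS, FromDS) combination,
    following the interpretation table above (sketch)."""
    table = {
        (0, 0): ("destination station", "source station", "BSS ID", None),
        (0, 1): ("destination station", "sending AP", "source station", None),
        (1, 0): ("receiving AP", "source station", "destination station", None),
        (1, 1): ("receiving AP", "sending AP", "destination station",
                 "source station"),
    }
    return table[(to_ds, from_ds)]

print(address_roles(1, 0))
# ('receiving AP', 'source station', 'destination station', None)
```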


2.7 Bridges
A network bridge connects multiple network segments at the data link layer (layer 2) of the OSI model, and
the term layer 2 switch is often used interchangeably with bridge. Bridges are similar to repeaters or network
hubs, devices that connect network segments at the physical layer; however, a bridge works by using
bridging where traffic from one network is managed rather than simply rebroadcast to adjacent network
segments. In Ethernet networks, the term "bridge" formally means a device that behaves according to the
IEEE 802.1D standard; this is most often referred to as a network switch in marketing literature.
Since bridging takes place at the data link layer of the OSI model, a bridge processes the information from
each frame of data it receives. In an Ethernet frame, this provides the MAC address of the frame's source and
destination.

Figure 2.39: Bridge connecting LAN segments


Bridges use two methods to resolve the network segment that a MAC address belongs to.
Transparent bridging: This method uses a forwarding database to send frames across network
segments. The forwarding database is initially empty, and entries in the database are built as the
bridge receives frames. If an address entry is not found in the forwarding database, the frame is
rebroadcast to all ports of the bridge, forwarding the frame to all segments except the one on which it arrived.
By means of these broadcast frames, the destination network will respond and a route will be created.
Along with recording the network segment to which a particular frame is to be sent, bridges may also
record a bandwidth metric to avoid looping when multiple paths are available. Devices that have this
transparent bridging functionality are also known as adaptive bridges.
Source route bridging: With source route bridging, two frame types are used in order to find the
route to the destination network segment. Single-Route (SR) frames comprise most of the network
traffic and have set destinations, while All-Route (AR) frames are used to find routes. Bridges send
AR frames by broadcasting on all network branches; each step of the followed route is registered by
the bridge performing it. Each frame has a maximum hop count, which is determined to be greater
than the diameter of the network graph, and is decremented by each bridge. Frames are dropped
when this hop count reaches zero, to avoid indefinite looping of AR frames. The first AR frame
which reaches its destination is considered to have followed the best route, and the route can be used
for subsequent SR frames; the other AR frames are discarded. This method of locating a destination
network can allow for indirect load balancing among multiple bridges connecting two networks. The
more a bridge is loaded, the less likely it is to take part in the route finding process for a new
destination as it will be slow to forward packets. A new AR packet will find a different route over a
less busy path if one exists. This method is very different from transparent bridge usage, where
redundant bridges will be inactivated; however, more overhead is introduced to find routes, and
space is wasted to store them in frames. A switch with a faster backplane can be just as good for
performance, if not for fault tolerance.
A system that is equipped with transparent bridges must meet the following three criteria:
1. Frames must be forwarded from one station to another
2. The forwarding table is automatically made by learning frame movements in the network
3. Loops in the system must be prevented.
A bridge has filtering capability, i.e. it checks the destination address of a frame and decides if the frame
should be forwarded or dropped. If the frame is to be forwarded, the decision must specify the port. The
forwarding table in the bridge maps addresses to ports.
A bridge is said to be either in the learning stage (Figure 2.40) or the forwarding stage (Figure 2.41).
When the bridge looks at the source MAC address of a frame and makes an entry in the forwarding table
because no entry for that address is present, it is said to be learning. When forwarding, the bridge looks at
the destination MAC address: if an entry for that address is found in the forwarding table, it decides whether
to forward the frame (destination in a network segment other than that of the source) or filter it (destination
and source in the same network segment). If the forwarding table has no entry for the destination, the bridge
broadcasts the frame on all ports except the source port. Transparent bridges work fine as long as there are
no redundant bridges in the system. Redundant bridges help make the system more reliable but can create
loops, which are undesirable. To solve the looping problem, bridges use the spanning tree algorithm to
create a loop-free topology.
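The learning and forwarding behaviour just described can be sketched as a small simulation (the class, port numbers and single-letter MAC addresses are illustrative, not any standard API):

```python
class LearningBridge:
    """Minimal transparent-bridge sketch: learn source addresses,
    then forward, filter, or flood based on the destination."""

    def __init__(self, ports):
        self.ports = ports          # e.g. [1, 2, 3]
        self.table = {}             # MAC address -> port

    def receive(self, src, dst, in_port):
        # Learning stage: record which port the source station is on.
        self.table[src] = in_port
        # Forwarding stage: consult the table for the destination.
        out = self.table.get(dst)
        if out is None:
            # Unknown destination: flood to all ports except the source port.
            return [p for p in self.ports if p != in_port]
        if out == in_port:
            return []               # filter: same segment as the source
        return [out]                # forward: a different segment

bridge = LearningBridge(ports=[1, 2, 3])
print(bridge.receive("A", "D", 1))  # D unknown: flood ports 2 and 3
print(bridge.receive("E", "A", 3))  # A learned on port 1: forward there
print(bridge.receive("B", "A", 1))  # A on the source port: filter
```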

The forwarding table is originally empty; after A sends a frame to D, the entry (Address A, Port 1) is added.
Figure 2.40: Learning Bridge

After E sends a frame to A, the table holds (A, 1) and (E, 3); after B sends a frame to C, it holds (A, 1), (E, 3) and (B, 1).
Figure 2.41: Forwarding Bridge


A spanning tree is a graph that does not have any loop. In a bridged LAN, the spanning tree refers to the topology in
which each LAN can be reached from any other LAN through one path only. This process involves the
following steps:
1. Every bridge has a built-in ID. The one with the smallest ID is chosen as the root bridge.
2. Make one port of each bridge, except for the root bridge, as the root port. A root port is a port with
the least cost path from the bridge to the root bridge. If two or more ports have the same least-cost
then any one of them is chosen.
3. Choose a designated bridge for each LAN segment. A designated bridge has the least cost path
between the LAN and the root bridge. Make the corresponding port the designated port. If two
bridges have the same least-cost value, choose the one with the smaller ID.

4. Mark the root port and the designated port as the forwarding ports and all the other ports as blocking
ports. A forwarding port forwards the frame that the bridge receives. Blocking port does not forward
the frame.
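Steps 1 and 2 above (root-bridge election and root-port selection) can be sketched as follows; the link encoding, the unit costs, and the use of Dijkstra's algorithm to find least-cost paths are illustrative assumptions:

```python
import heapq

def spanning_tree_roots(links):
    """links: list of (bridge_a, port_a, bridge_b, port_b, cost).
    Step 1: the bridge with the smallest ID becomes the root bridge.
    Step 2: every other bridge's root port is the port on its
    least-cost path toward the root (computed here with Dijkstra)."""
    adj = {}
    for a, pa, b, pb, cost in links:
        # From a we can reach b; b's port facing a is pb (and vice versa).
        adj.setdefault(a, []).append((b, pb, cost))
        adj.setdefault(b, []).append((a, pa, cost))
    root = min(adj)                        # step 1: smallest ID wins
    dist, root_port = {root: 0}, {}
    heap = [(0, root)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist[node]:
            continue
        for nbr, nbr_port, cost in adj[node]:
            if d + cost < dist.get(nbr, float("inf")):
                dist[nbr] = d + cost
                root_port[nbr] = nbr_port  # port on nbr toward the root
                heapq.heappush(heap, (dist[nbr], nbr))
    return root, root_port

# Hypothetical topology: three bridges in a triangle, unit link costs.
root, ports = spanning_tree_roots([
    ("B1", 1, "B2", 1, 1),
    ("B1", 2, "B3", 1, 1),
    ("B2", 2, "B3", 2, 1),
])
print(root, ports)  # B1 is root; B2 and B3 each use port 1 toward it
```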
Consider the architecture in Figure 2.42 after applying the steps of spanning tree algorithm the
representation is shown in Figure 2.43. In figure 2.44 forwarding links are represented as solid lines and
dotted lines represent blocking links.
Figure 2.42: Connections prior to spanning tree application (bridges B1-B5 interconnecting LAN 1 through LAN 4)
Figure 2.43: Applying spanning tree


The arrows in the Figure 2.43 indicate the designated link for the LAN. ** indicate the designated port for
the LAN. * indicate the root port for a bridge.
Figure 2.44: Forwarding and blocking links


Advantages of bridges
Self configuring
Primitive bridges are often inexpensive
Reduce size of collision domain by micro segmentation in non switched networks
Transparent to protocols above the MAC layer
Allows the introduction of management - performance information and access control
LANs interconnected are separate and physical constraints such as number of stations, repeaters and
segment length don't apply
Disadvantages of bridges
Does not limit the scope of broadcasts
Does not scale to extremely large networks
Buffering introduces store-and-forward delays; on average, traffic destined for the bridge will be related
to the number of stations on the rest of the LAN
Bridging of different MAC protocols introduces errors
Because bridges do more than repeaters by viewing MAC addresses, the extra processing makes
them slower than repeaters
Bridges are more expensive than repeaters
To translate between two segments types, a bridge reads a frame's destination MAC address and decides to
either forward or filter. If the bridge determines that the destination node is on another segment on the
network, it forwards it (retransmits) the packet to that segment. If the destination address belongs to the
same segment as the source address, the bridge filters (discards) the frame. As nodes transmit data through
the bridge, the bridge establishes a filtering database (also known as a forwarding table) of known MAC
addresses and their locations on the network. The bridge uses its filtering database to determine whether a
packet should be forwarded or filtered.
2.8 Switches

In the simplest terms, a switch is a mechanism that allows us to interconnect links to form a larger network.
A switch is a multi-input, multi-output device, which transfers packets from an input to one or more outputs.
Thus, a switch adds the star topology (see Figure 2.45) to the point-to-point link, bus (Ethernet), and ring
(802.5 and FDDI) topologies established in the last chapter. A star topology has several attractive properties:
Even though a switch has a fixed number of inputs and outputs, which limits the number of hosts that
can be connected to a single switch, large networks can be built by interconnecting a number of
switches.
We can connect switches to each other and to hosts using point-to-point links, which typically means
that we can build networks of large geographic scope.
Adding a new host to the network by connecting it to a switch does not necessarily mean that the
hosts already connected will get worse performance from the network.
For example, it is impossible for two hosts on the same Ethernet to transmit continuously at 10 Mbps
because they share the same transmission medium. Every host on a switched network has its own link to the
switch, so it may be entirely possible for many hosts to transmit at the full link speed (bandwidth), provided
that the switch is designed with enough aggregate capacity. Providing high aggregate throughput is one of
the design goals for a switch; In general, switched networks are considered more scalable (i.e., more capable
of growing to large numbers of nodes) than shared-media networks because of this ability to support many
hosts at full speed.

Figure 2.45: A Switch Providing Star Topology

Figure 2.46: Example protocol running on a Switch

Figure 2.47: Example Switch with three input and output ports
A switch is connected to a set of links and, for each of these links, runs the appropriate data link protocol to
communicate with the node at the other end of the link. A switch's primary job is to receive incoming
packets on one of its links and to transmit them on some other link. This function is sometimes referred to as
either switching or forwarding, and in terms of the OSI architecture, it is the main function of the network
layer. Figure 2.46 shows the protocol graph that would run on a switch that is connected to two T3 links and

one STS-1 SONET link. A representation of this same switch is given in Figure 2.47. In this figure, we have
split the input and output halves of each link, and we refer to each input or output as a port. (In general, we
assume that each link is bidirectional, and hence supports both input and output.) In other words, this
example switch has three input ports and three output ports.
There are three common switching approaches:
(i) The datagram (or connectionless) approach.
(ii) The virtual circuit (or connection-oriented) approach.
(iii) Source routing, which is less common than the other two, but is simple to explain and does have some
useful applications.
One thing that is common to all networks is that we need to have a way to identify the end nodes. Such
identifiers are usually called addresses. We have already seen examples of addresses in the previous chapter,
for example, the 48-bit address used for Ethernet. The only requirement for Ethernet addresses is that no two
nodes on a network have the same address. This is accomplished by making sure that all Ethernet cards are
assigned a globally unique identifier.

Figure 2.48: Datagram Forwarding Network


The idea behind datagrams is incredibly simple: It is necessary to ensure that every packet contains enough
information to enable any switch to decide how to get it to its destination. That is, every packet contains the
complete destination address. Consider the example network illustrated in Figure 2.48, in which the hosts
have addresses A, B, C, and so on. To decide how to forward a packet, a switch consults a forwarding table
(sometimes called a routing table), an example of which is depicted in Table 3.1.
Destination | Port
A | 3
B | 0
C | 3
D | 3
E | 2
F | 1
G | 0
H | 0
Table 3.1: Forwarding Table for Switch 2 illustrated in Figure 2.48
Table 3.1 shows the forwarding information that switch 2 needs to forward datagrams in the example
network. It is pretty easy to figure out such a table when you have a complete map of a simple network like
that depicted here; we could imagine a network operator configuring the tables statically. It is a lot harder to
create the forwarding tables in large, complex networks with dynamically changing topologies and multiple
paths between destinations.
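The lookup a datagram switch performs can be sketched as follows, using the entries of Table 3.1 for switch 2 (the dictionary representation of the packet and table is an illustrative assumption):

```python
# Forwarding table for switch 2, as in Table 3.1 (destination -> output port).
FORWARDING_TABLE = {
    "A": 3, "B": 0, "C": 3, "D": 3,
    "E": 2, "F": 1, "G": 0, "H": 0,
}

def forward(packet):
    """Look up the complete destination address carried in the packet
    and return the output port; None means no route (the packet is dropped)."""
    return FORWARDING_TABLE.get(packet["dest"])

print(forward({"dest": "F", "payload": "hello"}))  # port 1
print(forward({"dest": "Z", "payload": "oops"}))   # None: unknown destination
```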

Connectionless (datagram) networks have the following characteristics:


A host can send a packet anywhere at any time, since any packet that turns up at a switch can be
immediately forwarded (assuming a correctly populated forwarding table). As we will see, this
contrasts with most connection-oriented networks, in which some connection state needs to be
established before the first data packet is sent.
When a host sends a packet, it has no way of knowing if the network is capable of delivering it or if
the destination host is even up and running. Each packet is forwarded independently of previous
packets that might have been sent to the same destination. Thus, two successive packets from host A
to host B may follow completely different paths (perhaps because of a change in the forwarding table
at some switch in the network).
A switch or link failure might not have any serious effect on communication if it is possible to find
an alternate route around the failure and to update the forwarding table accordingly.
A widely used technique for packet switching, which differs significantly from the datagram model, uses the
concept of a virtual circuit (VC). This approach, which is also called a connection-oriented model, requires
that we first set up a virtual connection from the source host to the destination host before any data is sent.
To understand how this works, consider Figure 2.49, where host A again wants to send packets to host B. We
can think of this as a two-stage process. The first stage is connection setup. The second is data transfer. We
consider each in turn. In the connection setup phase, it is necessary to establish connection state in each of
the switches between the source and destination hosts. The connection state for a single connection consists
of an entry in a VC table in each switch through which the connection passes. One entry in the VC table
on a single switch contains

Figure 2.49: An Example Virtual Circuit Network


A virtual circuit identifier (VCI) that uniquely identifies the connection at this switch and that will be
carried inside the header of the packets that belong to this connection
An incoming interface on which packets for this VC arrive at the switch
An outgoing interface in which packets for this VC leave the switch
A potentially different VCI that will be used for outgoing packets
The semantics of one such entry is as follows: If a packet arrives on the designated incoming interface and
that packet contains the designated VCI value in its header, then that packet should be sent out the specified
outgoing interface with the specified outgoing VCI value first having been placed in its header. Note that the
combination of the VCI of packets as they are received at the switch and the interface on which they are
received uniquely identifies the virtual connection.
There may of course be many virtual connections established in the switch at one time. Also, we observe that
the incoming and outgoing VCI values are generally not the same. Thus, the VCI is not a globally significant
identifier for the connection; rather, it has significance only on a given link; that is, it has link-local scope.
Whenever a new connection is created, we need to assign a new VCI for that connection on each link that

the connection will traverse. We also need to ensure that the chosen VCI on a given link is not currently in
use on that link by some existing connection.
There are two broad classes of approach to establishing connection state. One is to have a network
administrator configure the state, in which case the virtual circuit is permanent. A permanent virtual circuit
(PVC) is a long-lived or administratively configured VC. Alternatively, a host can send messages into the
network to cause the state to be established. This is referred to as signaling, and the resulting virtual circuits
are said to be switched. The salient characteristic of a switched virtual circuit (SVC) is that a host may set up
and delete such a VC dynamically without the involvement of a network administrator. An SVC could be more
accurately called a signaled VC, since it is the use of signaling (not switching) that distinguishes an SVC
from a PVC. Let us assume that a network administrator wants to manually create a new virtual connection
from host A to host B. First, the administrator needs to identify a path through the network from A to B. In
the example network of Figure 2.49, there is only one such path, but in general this may not be the case. The
administrator then picks a VCI value that is currently unused on each link for the connection. For the
purposes of our example, let us suppose that the VCI value 5 is chosen for the link from host A to switch 1,
and that 11 is chosen for the link from switch 1 to switch 2. In that case, switch 1 needs to have an entry in
its VC table configured as shown in the second row of Table 2.2.
Similarly, suppose that the VCI of 7 is chosen to identify this connection on the link from switch 2 to
switch 3, and that a VCI of 4 is chosen for the link from switch 3 to host B. In that case, switches 2 and 3
need to be configured with VC table entries as shown in rows three and four of Table 2.2. Note that the
outgoing VCI value at one switch is the incoming VCI value at the next switch. Once the VC tables
have been set up, the data transfer phase can proceed, as illustrated in Figure 2.50.
Switch Number | Incoming Interface | Incoming VCI | Outgoing Interface | Outgoing VCI
1 | 2 | 5 | 1 | 11
2 | 3 | 11 | 2 | 7
3 | 0 | 7 | 1 | 4
Table 2.2: Virtual Circuit Table Entries
For any packet that it wants to send to host B, A puts the VCI value of 5 in the header of the packet and
sends it to switch 1. Switch 1 receives any such packet on interface 2, and it uses the combination of the
interface and the VCI in the packet header to find the appropriate VC table entry. As shown in Table 2.2, the
table entry in this case tells switch 1 to forward the packet out of interface 1 and to put the VCI value 11 in
the header when the packet is sent. Thus, the packet will arrive at switch 2 on interface 3 bearing VCI 11.
Switch 2 looks up interface 3 and VCI 11 in its VC table (as shown in Table 2.2) and sends the packet on to
switch 3 after updating the VCI value in the packet header appropriately, as shown in Figure 2.51.
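The per-hop lookup-and-rewrite just described can be sketched as follows, with the VC tables populated from Table 2.2 (the function and data-structure names are illustrative):

```python
# One VC table per switch: (incoming interface, incoming VCI) ->
# (outgoing interface, outgoing VCI), taken from Table 2.2.
VC_TABLES = {
    1: {(2, 5): (1, 11)},
    2: {(3, 11): (2, 7)},
    3: {(0, 7): (1, 4)},
}

def switch_packet(switch, in_iface, vci):
    """Look up the (interface, VCI) pair and return the output
    interface and the rewritten VCI, as a VC switch does."""
    return VC_TABLES[switch][(in_iface, vci)]

# Host A sends with VCI 5; trace the packet through switches 1, 2 and 3.
hops = [(1, 2, 5), (2, 3, 11), (3, 0, 7)]
for sw, iface, vci in hops:
    out_iface, out_vci = switch_packet(sw, iface, vci)
    print(f"switch {sw}: in ({iface}, VCI {vci}) -> out ({out_iface}, VCI {out_vci})")
# The packet arrives at host B carrying VCI 4.
```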

Figure 2.50: Packet sent into Virtual Circuit Network

Figure 2.51: Packet passing through a Virtual Circuit Network


This process continues until it arrives at host B with the VCI value of 4 in the packet. To host B, this
identifies the packet as having come from host A. In real networks of reasonable size, the burden of
configuring VC tables correctly in a large number of switches would quickly become excessive using the
above procedures. Thus, some sort of signaling is almost always used, even when setting up permanent
VCs. In the case of PVCs, signaling is initiated by the network administrator, while SVCs are usually set up
using signaling by one of the hosts.
Let us consider how the same VC just described could be set up by signaling from the host. To start the
signaling process, host A sends a setup message into the network, that is, to switch 1. The setup message
contains, among other things, the complete destination address of host B. The setup message needs to get all
the way to B to create the necessary connection state in every switch along the way. We can see that getting
the setup message to B is a lot like getting a datagram to B, in that the switches have to know which output
to send the setup message to so that it eventually reaches B.
Let us assume that the switches know enough about the network topology to figure out how to do
that, so that the setup message flows on to switches 2 and 3 before finally reaching host B. When switch 1
receives the connection request, in addition to sending it on to switch 2, it creates a new entry in its virtual
circuit table for this new connection. This entry is exactly the same as shown previously in Table 2.2. The
main difference is that now the task of assigning an unused VCI value on the interface is performed by the
switch. In this example, the switch picks the value 5. The virtual circuit table now has the following
information: When packets arrive on port 2 with identifier 5, send them out on port 1. Another issue is
that, somehow, host A will need to learn that it should put the VCI value of 5 in packets that it wants to send
to B; we will see how that happens below.
When switch 2 receives the setup message, it performs a similar process; in this example it picks the
value 11 as the incoming VCI value. Similarly, switch 3 picks 7 as the value for its incoming VCI. Each
switch can pick any number it likes, as long as that number is not currently in use for some other connection
on that port of that switch. As noted above, VCIs have link local scope; that is, they have no global
significance.
Finally, the setup message arrives at host B. Assuming that B is healthy and willing to accept a connection
from host A, it too allocates an incoming VCI value, in this case 4. This VCI value can be used by B to
identify all packets coming from host A. Now, to complete the connection, everyone needs to be told what
their downstream neighbor is using as the VCI for this connection. Host B sends an acknowledgment of the
connection setup to switch 3 and includes in that message the VCI that it chose (4). Now switch 3 can
complete the virtual circuit table entry for this connection, since it knows the outgoing value must be 4.
Switch 3 sends the acknowledgment on to switch 2, specifying a VCI of 7. Switch 2 sends the message on to
switch 1, specifying a VCI of 11. Finally, switch 1 passes the acknowledgment on to host A, telling it to use
the VCI of 5 for this connection. At this point, everyone knows all that is necessary to allow traffic to flow
from host A to host B. Each switch has a complete virtual circuit table entry for the connection. Furthermore,
host A has a firm acknowledgment that everything is in place all the way to host B. At this point, the
connection table entries are in place in all three switches just as in the administratively configured example
above, but the whole process happened automatically in response to the signaling message sent from A.
The data transfer phase can now begin and is identical to that used in the PVC case. When host A no longer
wants to send data to host B, it tears down the connection by sending a teardown message to switch 1. The

switch removes the relevant entry from its table and forwards the message on to the other switches in the
path, which similarly delete the appropriate table entries. At this point, if host A were to send a packet with a
VCI of 5 to switch 1, it would be dropped as if the connection had never existed.
There are several things to note about virtual circuit switching:
Since host A has to wait for the connection request to reach the far side of the network and return
before it can send its first data packet, there is at least one RTT of delay before data is sent.
While the connection request contains the full address for host B (which might be quite large, being a
global identifier on the network), each data packet contains only a small identifier, which is only
unique on one link. Thus, the per-packet overhead caused by the header is reduced relative to the
datagram model.
If a switch or a link in a connection fails, the connection is broken and a new one will need to be
established. Also, the old one needs to be torn down to free up table storage space in the switches.
The issue of how a switch decides which link to forward the connection request on has been glossed
over. In essence, this is the same problem as building up the forwarding table for datagram
forwarding, which requires some sort of routing algorithm.
A third approach to switching is known as source routing. The name derives from the fact that all the
information about network topology that is required to switch a packet across the network is provided by the
source host. There are various ways to implement source routing. One would be to assign a number to each
output of each switch and to place that number in the header of the packet. The switching function is then
very simple: For each packet that arrives on an input, the switch would read the port number in the header
and transmit the packet on that output.

Figure 2.52: Source Routing in a switched network
However, since there will in general be more than one switch in the path between the sending and the
receiving host, the header for the packet needs to contain enough information to allow every switch in the
path to determine which output the packet needs to be placed on. One way to do this would be to put an
ordered list of switch ports in the header and to rotate the list so that the next switch in the path is always at
the front of the list. Figure 2.52 illustrates this idea. In this example, the packet needs to traverse three
switches to get from host A to host B. At switch 1, it needs to exit on port 1, at the next switch it needs to
exit at port 0, and at the third switch it needs to exit at port 3. Thus, the original header when the packet
leaves host A contains the list of ports (3, 0, 1), where we assume that each switch reads the rightmost
element of the list. To make sure that the next switch gets the appropriate information, each switch rotates
the list after it has read its own entry. Thus, the packet header as it leaves switch 1 en route to switch 2 is
now (1, 3, 0); switch 2 performs another rotation and sends out a packet with (0, 1, 3) in the header.
Although not shown, switch 3 performs yet another rotation, restoring the header to what it was when host A
sent it. There are several things to note about this approach.

First, it assumes that host A knows enough about the topology of the network to form a header that
has all the right directions in it for every switch in the path. This is somewhat analogous to the problem of
building the forwarding tables in a datagram network or figuring out where to send a setup packet in a
virtual circuit network.
Second, observe that we cannot predict how big the header needs to be, since it must be able to hold
one word of information for every switch on the path. This implies that headers are probably of variable
length with no upper bound, unless we can predict with absolute certainty the maximum number of switches
through which a packet will ever need to pass.
Third, there are some variations on this approach. For example, rather than rotate the header, each
switch could just strip the first element as it uses it. Rotation has an advantage over stripping, however: Host
B gets a copy of the complete header, which may help it figure out how to get back to host A. Yet another
alternative is to have the header carry a pointer to the current next port entry, so that each switch just
updates the pointer rather than rotating the header; this may be more efficient to implement. These three
approaches are illustrated in Figure 2.53. In each case, the entry that this switch needs to read is A, and the
entry that the next switch needs to read is B. Source routing can be used in both datagram networks and
virtual circuit networks. For example, the Internet Protocol, which is a datagram protocol, includes a source
route option that allows selected packets to be source routed, while the majority is switched as conventional
datagrams.
Source routing is also used in some virtual circuit networks as the means to get the initial setup
request along the path from source to destination. Finally, we note that source routing suffers from a scaling
problem. In any reasonably large network, it is very hard for a host to get the complete path information it
needs to construct correct headers.
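The three header-handling variants described above (rotation, stripping and a pointer) can be sketched as follows; the list encoding of the port sequence is an illustrative assumption:

```python
def rotate(header):
    """Read the rightmost port, then rotate it to the front so the
    next switch's port is again at the right-hand end."""
    port = header[-1]
    return port, [port] + header[:-1]

def strip(header):
    """Read the rightmost port and remove it entirely."""
    return header[-1], header[:-1]

def pointer(header, ptr):
    """Read the entry at the pointer and advance the pointer,
    leaving the header itself unchanged."""
    return header[ptr], ptr + 1

# Host A emits (3, 0, 1); each switch reads the rightmost element.
hdr = [3, 0, 1]
for switch in (1, 2, 3):
    port, hdr = rotate(hdr)
    print(f"switch {switch}: exit port {port}, header now {hdr}")
# After switch 3 the header is back to [3, 0, 1], as host A sent it.
```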

Figure 2.53 Approaches of Source Routing

PART A
Questions & Answers
1. What are the three criteria necessary for an effective and efficient network?
The most important criteria are performance, reliability and security.
Performance of the network depends on number of users, type of transmission medium, and the capabilities
of the connected h/w and the efficiency of the s/w.
Reliability is measured by frequency of failure, the time it takes a link to recover from the failure and the
networks robustness in a catastrophe.
Security issues include protecting data from unauthorized access and viruses.
2. Group the OSI layers by function?
The seven layers of the OSI model belonging to three subgroups.
Physical, data link and network layers are the network support layers; they deal with the physical aspects of
moving data from one device to another.
Session, presentation and application layers are the user support layers; they allow interoperability among
unrelated software systems.
The transport layer ensures end-to-end reliable data transmission.
3. What are header and trailers and how do they get added and removed?

Each layer in the sending machine adds its own information to the message it receives from the layer
just above it and passes the whole package to the layer just below it. This information is added in the form of
headers or trailers. Headers are added to the message at the layers 6,5,4,3, and 2. A trailer is added at layer2.
At the receiving machine, the headers or trailers attached to the data unit at the corresponding sending layers
are removed, and actions appropriate to that layer are taken.
4. What are the features provided by layering?
Two nice features:
It decomposes the problem of building a network into more manageable components.
It provides a more modular design.
5. Why are protocols needed?
In networks, communication occurs between the entities in different systems. Two entities cannot just
send bit streams to each other and expect to be understood. For communication, the entities must agree on a
protocol. A protocol is a set of rules that govern data communication.
6. What are the two interfaces provided by protocols?
Service interface
Peer interface
Service interface- defines the operations that local objects can perform on the protocol.
Peer interface- defines the form and meaning of messages exchanged between protocol peers to implement
the communication service.
7. Mention the different physical media?
Twisted pair(the wire that your phone connects to)
Coaxial cable(the wire that your TV connects to)
Optical fiber(the medium most commonly used for high-bandwidth, long-distance
links)
Space(the stuff that radio waves, microwaves and infra red beams propagate through)
8. Define Signals?
Signals are actually electromagnetic waves traveling at the speed of light. The speed of light is,
however, medium dependent; electromagnetic waves traveling through copper and fiber do so at about two-thirds the speed of light in a vacuum.
9. What is a wave's wavelength?
The distance between a pair of adjacent maxima or minima of a wave, typically measured in meters,
is called the wave's wavelength.
10. Define Modulation?
Modulation -varying the frequency, amplitude or phase of the signal to effect the transmission of
information. A simple example of modulation is to vary the power (amplitude) of a single wavelength.
11. Explain the two types of duplex?
Full duplex-two bit streams can be simultaneously transmitted over the links at the
same time, one going in each direction.
Half duplex-it supports data flowing in only one direction at a time.
12. What is CODEC?
A device that encodes analog voice into a digital ISDN link is called a CODEC, for coder/decoder.

13. What is spread spectrum and explain the two types of spread spectrum?
Spread spectrum is to spread the signal over a wider frequency band than normal in such a way as to
minimize the impact of interference from other devices.
Frequency Hopping
Direct sequence
14. What are the different encoding techniques?
NRZ
NRZI
Manchester
4B/5B
15. How does NRZ-L differ from NRZ-I?
In the NRZ-L sequence, positive and negative voltages have specific meanings: positive for 0 and
negative for 1. in the NRZ-I sequence, the voltages are meaningless.
Instead, the receiver looks for changes from one level to another as its basis for recognition of 1s.
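The transition rule above can be sketched in a few lines of Python (hypothetical helper name, levels represented as 0/1):

```python
def nrzi_encode(bits):
    """NRZ-I: invert the signal level on every 1, hold it on every 0."""
    level = 0                   # assumed initial signal level
    out = []
    for b in bits:
        if b == 1:
            level = 1 - level   # a 1 causes a transition
        out.append(level)       # a 0 leaves the level unchanged
    return out

print(nrzi_encode([0, 1, 0, 1, 1, 0]))  # [0, 1, 1, 0, 1, 1]
```

Note that the receiver can decode by comparing each level with the previous one: a change means 1, no change means 0.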
16. What are the responsibilities of data link layer?
Specific responsibilities of data link layer include the following. a) Framing b) Physical addressing c)
Flow control d) Error control e) Access control.
17. What are the ways to address the framing problem?
Byte-Oriented Protocols(PPP)
Bit-Oriented Protocols(HDLC)
Clock-Based Framing(SONET)
18. Distinguish between a peer-to-peer relationship and a primary-secondary relationship.
Peer-to-peer relationship: All the devices share the link equally.
Primary-secondary relationship: One device controls traffic and the others must transmit through it.
19. Mention the types of errors and define the terms?
There are 2 types of errors
Single-bit error.
Burst-bit error.
Single bit error: The term single bit error means that only one bit of a given data unit (such as a byte,
character or packet) is changed from 1 to 0 or from 0 to 1.
Burst error: Means that 2 or more bits in the data unit have changed from 1 to 0 or from 0 to 1.
20. List out the available detection methods.
Four types of redundancy checks are used in data communication.
Vertical redundancy checks (VRC).
Longitudinal redundancy checks (LRC).
Cyclic redundancy checks (CRC).
Checksum.
21. Write short notes on VRC.
The most common and least expensive mechanism for error detection is the vertical redundancy
check (VRC), often called a parity check. In this technique a redundant bit, called a parity bit, is appended to
every data unit so that the total number of 1s in the unit (including the parity bit) becomes even.
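As a minimal sketch (hypothetical helper names), even parity can be generated and checked like this:

```python
def add_even_parity(data_bits):
    """Append a parity bit so the total number of 1s is even."""
    parity = sum(data_bits) % 2
    return data_bits + [parity]

def check_even_parity(unit):
    """Accept the unit only if the count of 1s (parity bit included) is even."""
    return sum(unit) % 2 == 0

unit = add_even_parity([1, 1, 0, 0, 1])   # three 1s -> parity bit 1
print(unit, check_even_parity(unit))      # [1, 1, 0, 0, 1, 1] True
```

Flipping any single bit of the unit makes the check fail, which is exactly the class of error VRC detects.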

22. Write short notes on LRC.


In longitudinal redundancy check (LRC), a block of bits is divided into rows and a redundant row of
bits is added to the whole block.
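A sketch of the idea (hypothetical helper name): the redundant LRC row is the even parity of each column of the block.

```python
def lrc(block, width=8):
    """Split the block into rows of `width` bits and return the
    even-parity bit of each column as the redundant LRC row."""
    rows = [block[i:i + width] for i in range(0, len(block), width)]
    return [sum(col) % 2 for col in zip(*rows)]

# two rows of 4 bits each
print(lrc([1, 0, 1, 1,  0, 1, 1, 0], width=4))  # [1, 1, 0, 1]
```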
23. Write short notes on CRC.
The third and most powerful of the redundancy checking techniques is the cyclic redundancy checks
(CRC) CRC is based on binary division. Here a sequence of redundant bits, called the CRC remainder is
appended to the end of data unit.
24. Write short notes on CRC checker.
A CRC checker functions exactly like a generator. After receiving the data appended with the CRC, it
does the same modulo-2 division. If the remainder is all 0s, the CRC is dropped and the data accepted.
Otherwise, the received stream of bits is discarded and the data are resent.
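The generator and checker described above can be sketched with one modulo-2 division routine (hypothetical helper name; the generator polynomial here is an arbitrary example, x^3 + x + 1):

```python
def mod2_div(bits, divisor):
    """Modulo-2 division: XOR the divisor in wherever the leading bit is 1,
    then return the remainder (one bit shorter than the divisor)."""
    bits = list(bits)
    for i in range(len(bits) - len(divisor) + 1):
        if bits[i] == 1:
            for j, d in enumerate(divisor):
                bits[i + j] ^= d
    return bits[-(len(divisor) - 1):]

divisor = [1, 0, 1, 1]                      # hypothetical generator x^3 + x + 1
data = [1, 0, 1, 1, 0, 1]
crc = mod2_div(data + [0, 0, 0], divisor)   # generator: append r zeros, keep remainder
codeword = data + crc
# checker: repeat the division over the received unit; all-zero remainder -> accept
print(mod2_div(codeword, divisor))          # [0, 0, 0]
```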
25. Define checksum.
The error detection method used by the higher layer protocol is called checksum. Checksum is based
on the concept of redundancy.
26. What are the steps followed in checksum generator?
The sender follows these steps a) the units are divided into k sections each of n bits. b) All sections
are added together using 2s complement to get the sum. c) The sum is complemented and become the
checksum. d) The checksum is sent with the data.
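A sketch of those steps, assuming Internet-style 16-bit sections and one's-complement addition (hypothetical helper name and sample values):

```python
def checksum16(sections):
    """One's-complement sum of 16-bit sections, then complemented."""
    total = 0
    for s in sections:
        total += s
        total = (total & 0xFFFF) + (total >> 16)   # wrap the end-around carry
    return ~total & 0xFFFF

data = [0x4500, 0x0073]                            # hypothetical 16-bit sections
cksum = checksum16(data)

# the receiver adds everything, checksum included; all 1s (0xFFFF) means accept
total = 0
for s in data + [cksum]:
    total += s
    total = (total & 0xFFFF) + (total >> 16)
print(hex(total))                                  # 0xffff
```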
27. Mention the types of error correcting methods.
There are 2 error-correcting methods.
Single bit error correction
Burst error correction.
28. Write short notes on error correction?
It is the mechanism to correct the errors and it can be handled in 2 ways.
When an error is discovered, the receiver can have the sender retransmit the entire
data unit.
A receiver can use an error correcting coder, which automatically corrects certain
errors.
29. What is the purpose of hamming code?
A hamming code can be designed to correct burst errors of certain lengths. So the simple strategy
used by the hamming code to correct single bit errors must be redesigned to be applicable for multiple bit
correction.
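To illustrate the single-bit case mentioned above, here is a minimal Hamming(7,4) sketch (hypothetical helper names): the recomputed parity bits form a syndrome whose value is the 1-based position of the flipped bit.

```python
def hamming74_encode(d):
    """Hamming(7,4): data [d1,d2,d3,d4] -> codeword [p1,p2,d1,p3,d2,d3,d4]."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # covers positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4          # covers positions 2,3,6,7
    p3 = d2 ^ d3 ^ d4          # covers positions 4,5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_correct(c):
    """Recompute the parities; a nonzero syndrome names the bad position."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3
    if syndrome:
        c[syndrome - 1] ^= 1   # flip the bit the syndrome points at
    return c

code = hamming74_encode([1, 0, 1, 1])
corrupted = list(code)
corrupted[4] ^= 1              # introduce a single-bit error
print(hamming74_correct(corrupted) == code)  # True
```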
30. What is redundancy?
It is an error detecting mechanism in which a shorter group of bits or extra bits is
appended to each data unit so that errors can be detected at the destination.
31. Define flow control?
Flow control refers to a set of procedures used to restrict the amount of data the sender can send
before waiting for an acknowledgment.
32. Mention the categories of flow control?
Two methods have been developed to control the flow of data across communication links: a)
Stop and wait: send one frame at a time. b) Sliding window: send several frames at a time.
33. What is a buffer?

Each receiving device has a block of memory called a buffer, reserved for storing incoming data until
they are processed.
34.What is the difference between a passive and an active hub?
An active hub contains a repeater that regenerates the received bit patterns before sending them out.
A passive hub provides a simple physical connection between the attached devices.
35. For n devices in a network, what is the number of cable links required for a
mesh and ring topology?
Mesh topology: n(n-1)/2
Ring topology: n
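The two formulas can be checked directly (hypothetical helper names):

```python
def mesh_links(n):
    """Mesh: every pair of devices gets a dedicated link."""
    return n * (n - 1) // 2

def ring_links(n):
    """Ring: each device links only to its neighbor, so n links close the loop."""
    return n

print(mesh_links(6), ring_links(6))  # 15 6
```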
36. Group the OSI layers by function. (MAY/JUNE2007)
The seven layers of the OSI model belonging to three subgroups. Physical, data link and network
layers are the network support layers; they deal with the physical aspects of moving data from one device to
another. Session, presentation and application layers are the user support layers; they allow interoperability
among unrelated software systems. The transport layer ensures end-to-end reliable data transmission.
37. We have a channel with a 1 MHz bandwidth. The SNR for this channel is 63; what are the appropriate bit
rate and signal level?
First, we use the Shannon formula to find our upper limit:
C = B log2(1 + SNR) = 10^6 log2(1 + 63) = 10^6 log2(64) = 6 Mbps
For better performance we choose something lower than the limit, 4 Mbps. Then we use the Nyquist formula to find the
number of signal levels:
4 Mbps = 2 x 1 MHz x log2 L, so L = 4
38. List the channelization protocols.
Frequency Division Multiple Access (FDMA): the total bandwidth is divided into channels.
Time Division Multiple Access (TDMA): the bandwidth is one channel that is time-shared among stations.
Code Division Multiple Access (CDMA): one channel carries all transmissions simultaneously.
39. What is a protocol? What are its key elements? (NOV/DEC 2007)
A protocol is a set of rules that govern data communication. The key elements are:
i) Syntax ii) Semantics iii) Timing
40. What is meant by half duplex?
Each Station can both transmit and receive but not at the same time. When one device is sending the
other can only receive and vice versa.
[Figure: two stations connected by one channel; data flows in one direction at time 1 and in the opposite direction at time 2]
In a half duplex transmission, the entire capacity of a channel is taken over by whichever of the two
devices is transmitting at the time.

41. List down the Network Criteria Parameters?
Performance
Reliability
Security
42. List the advantages of star topology?
1. Less expensive than mesh topology.
2. Easy to install and reconfigure: each device needs only one link and one I/O port to connect it to any
number of others.
3. Robustness.
4. Easy fault identification and fault isolation.
43. Which OSI layers are network support layers and which are user support layers?
Sol: The physical, data link and network layers are the network support layers; the session, presentation and
application layers are the user support layers. The transport layer links the network support and user support layers.

44. Write the advantages of optical fiber over twisted pair and coaxial cable.
Sol: Noise resistance of optical fiber is very high.
Signal attenuation is low: an optical signal can run for many miles without requiring regeneration.
Bandwidth is very high.

45. What is a peer-to-peer process?
Sol: Layer x on one machine can communicate with layer x on another machine. The processes on each
machine that communicate at a given layer are called peer-to-peer processes.
46. What does the IEEE 10 Base 5 standard signify?
It is an Ethernet standard.
The number 10 signifies the data rate of 10 Mbps and the number 5 signifies the maximum
cable length of 500 meters.
The word Base specifies a baseband (digital) signal with Manchester encoding.
47. What is CSMA/CD?
CSMA/CD is the access method used in an Ethernet.
It stands for Carrier Sense Multiple Access with Collision Detection.
Collision: Whenever multiple users have unregulated access to a single line, there is a danger
of signals overlapping and destroying each other. Such overlaps, which turn the signals into
unusable noise, are called collisions.
In CSMA/CD, a station wishing to transmit first listens to make certain the link is free, then
transmits its data, then listens again. During the data transmission, the station checks the line
for the extremely high voltages that indicate a collision.
If a collision is detected, the station quits the current transmission and waits a predetermined
amount of time for the line to clear, then sends its data again.

48. What is token ring?
Token ring is a LAN protocol standardized by IEEE and numbered IEEE 802.5.
In a token ring network, the nodes are connected into a ring by point-to-point links.
It supports data rates of 4 and 16 Mbps.
Each station in the network transmits during its turn and sends only one frame during each
turn. The mechanism that coordinates this rotation is called token passing.
A token is a simple placeholder frame that is passed from station to station around the ring. A
station may send data only when it has possession of the token.

49. Differentiate 1000 Base SX and 100 Base FX.
S.No  Feature        1000 Base SX       100 Base FX
1     Type           Gigabit Ethernet   Fast Ethernet
2     Data rate      1 Gbps             100 Mbps
3     Medium         Optical fiber      Optical fiber
4     Signal         Short-wave laser   Laser
5     Max. distance  550 m              2000 m
6     Encoding       8B/10B             4B/5B

50. Why is there no AC field in the 802.3 frame?
The access control (AC) field in the token ring frame specifies a priority level for each station, so that
each station sends data during its turn.
The CSMA/CD access method does not assign priority levels to stations, so there is no
AC field in the 802.3 frame.

51. What are the functions of bridges?
A bridge operates at the data link layer. It connects two or more LAN segments and filters traffic: it
reads the destination address of each frame and forwards the frame only onto the segment where the
destination resides, regenerating the signal as it does so.

52. What is the advantage of FDDI over a basic token ring?
FDDI uses two counter-rotating rings, so the network can recover from a failed link or station by
wrapping onto the second ring. It also runs at 100 Mbps over optical fiber, far faster and over longer
distances than the 4/16 Mbps of a basic token ring.
Part - B
1. Explain the mesh and star topologies of the Network in detail with diagram.
2. Explain the bus, ring and hybrid topologies of the Network in detail with diagram.
3. Explain the following:
a. Protocols and standards
b. Line configuration
4. Explain the Different categories of networks in details with diagram?
a) LAN b) WAN c) MAN
5. List the layers of OSI model and Explain.
6. Write about Guided and Unguided Transmission media.
7. Explain different types line encoding in data communication.
8. Explain error correction and recovery in detail.
9. Explain the different framing protocols.
10. Explain SONET.
11. Explain the different flow control techniques in detail.
12. Explain the various media access protocols.
13. Explain Fiber Distributed data Interface (FDDI) in detail.
14. Explain the frame format, operation and logical ring maintenance feature of IEEE 802.4
protocol.
15. How does a Token Ring LAN Operate? Explain with neat diagram.
16. Explain the access method and frame format of IEEE 802.3 and IEEE 802.5 in detail.
17. Explain the classification of ETHERNET.
18. Explain the functioning of wireless LAN in detail.
