
M.E.T ENGINEERING COLLEGE
DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
CS6551- COMPUTER NETWORKS
UNIT I
Define computer network.
A computer network is a collection of autonomous computers and network devices connected for:
Resource sharing (data/devices) in an efficient manner
Communication amongst them.
Compare simplex and duplex communication with example.
In simplex mode, the communication is unidirectional (Eg: keyboard, monitor).
In half-duplex mode, each station can both transmit and receive, but not
simultaneously (Eg. walkie-talkie).
In full-duplex (also called duplex), both stations can transmit and receive
simultaneously (Eg. telephone network).

List the criteria based on which a network can be assessed.


Performance is based on its throughput (no. of packets delivered)
and delay. Reliability is how much the network is fault tolerant.
Security includes preventing unauthorized access and recovery from breaches.
What are the two types of line configuration?
A point-to-point connection provides a dedicated link between two nodes.
In a multipoint connection, more than two nodes share a single link, i.e.,
bandwidth is shared amongst the nodes.

State any two topologies in which a network can be organized.

Mesh

Star

Bus

Ring

Mesh: Each device has a dedicated point-to-point link to every other device. It is
robust and secure. Installation is difficult and expensive, requiring n(n-1)/2 duplex links for n nodes.
Star: Each device has a dedicated point-to-point link only to a central controller
called a hub. All communication goes via the hub. It is less expensive and robust.
A failure in the hub makes the network non-functional. Eg; LAN
Bus: It is multipoint, and the signal gets weaker as it travels through the long cable that
acts as the backbone. A fault in the bus stops the entire transmission.


Ring: Each device has a dedicated point-to-point connection with the devices on either
side of it. A break in the ring can disable the entire network due to unidirectional traffic.

Classify LAN, WAN, MAN, SAN and Internetwork.


Local Area Network (LAN) is privately owned and links the devices in a single office,
building, or campus. The LAN size is limited to a few kilometers. A LAN will use only
one type of transmission medium. The speed of a LAN is in the range 10 to 1000 Mbps.
Wide Area Network (WAN) provides long-distance transmission of data, image, audio,
and video over large geographic areas that may comprise a country / continent.

Metropolitan Area Network (MAN) is a network with a size between a LAN and a
WAN. It normally covers the area inside a town or a city. It is designed for
customers who need a high-speed connectivity to the Internet, and have
endpoints spread over a city or part of city.
When two or more independent networks are connected, it becomes an
internetwork or internet.
Storage area network (SAN) is confined to a single room and connects the various
components of a large computing system. For example, fiber channel is used to
connect high-performance computing systems to storage servers.
List the advantages of layering.
It decomposes the problem of building a network into more manageable components.
It provides a more modular design. To add a new service, it is only necessary to
modify the functionality at one layer, reusing the functions at all the other layers.
Uses abstraction to hide complexity of network from application.
Define protocol.
The abstract objects that make up the layers of a network system are called
protocols. Each protocol defines two different interfaces.
o Service interface that specifies the set of operations
o Peer-to-peer interface for messages to be exchanged amongst peers
Protocol is a set of rules that govern communications between devices.

What is a protocol graph?


The suite of protocols that make up a network system is represented as a protocol
graph. The nodes correspond to protocols and edges represent a depends-on relation.

Define network architecture.


Set of rules governing form and content of protocol graph is called network architecture.
Network architecture guides the design and implementation of computer networks.

Two commonly used architecture are


o OSI Architecture
o Internet or TCP/IP architecture
What purpose do header and trailer serve?
A layer communicates control information to its peer, instructing it how to handle
the message when it is received by attaching a header in front of the message.
The trailer usually contains error control information.
A header/trailer is a small data structure consisting of a few bytes.

Brief the terms unicast, multicast and broadcast.


The different types of addressing are unicast (one-to-one communication), multicast
(communicating to all members of a group) and broadcast (sending to all nodes on the network).

What is encapsulation?
As data passes through a layer, it attaches its header and then passes it to the next
layer. For the next layer, the data and header of the previous layer is encapsulated as
a unit. It then attaches its header and passes to the next layer and so on.

Discuss in detail about the layers of OSI model with a neat diagram.
The ISO defined a common way to connect computers, called the Open Systems
Interconnection (OSI) architecture. (eg. public X.25 network).
It defines partitioning of network functionality into seven layers.
The bottom three layers, i.e., physical, data link and network are implemented on
all nodes on the network including switches.

Physical Layer
It coordinates the functions required to carry a bit stream over a physical medium.
Representation of bits: To be transmitted, bits must be encoded into signals,
electrical or optical. The physical layer defines the type of encoding.
Data rate: It defines the transmission rate (number of bits sent per second).
Physical topology: It defines how devices are connected (mesh, star, ring, bus
or hybrid) to make a network.
Transmission mode: The physical layer also defines the direction of
transmission between two devices: simplex, half-duplex, or full-duplex.

Data Link Layer


The data link layer transforms a raw transmission facility to a reliable link.

Framing: The bit stream is divided into manageable data units called frames.
Physical addressing: A header is added to contain the physical address of the sender
and receiver of the frame.
Flow control: If the receiving rate is less than the transmission rate, the data link layer
imposes a flow control mechanism to avoid overwhelming the receiver.
Error control: Redundant information is added as a trailer to detect and retransmit
damaged/lost frames and to recognize duplicate frames.
Access control: When two or more devices are connected to the same link, link
layer protocols determine which device has control over the link at any given time.

Network Layer
It is responsible for source-to-destination delivery of a data unit called packet.
Logical addressing: The packet is identified across the network using the logical
addressing system provided by the network layer and is used to identify the end systems.
Routing: The connecting devices (routers or switches) prepare routing tables to
send packets to their destination.
Transport Layer
Transport layer is responsible for process-to-process delivery of the entire message.
Service-point addressing: It includes a service-point address or port address so that a
process on one computer can communicate with a specific process on the other computer.
Segmentation and reassembly: A message is divided into transmittable segments, each
containing a sequence number. These numbers enable the transport layer to reassemble the
message correctly at the destination and to identify segments that were lost or corrupted.
Connection control: Protocols can be either connectionless or connection-oriented.


Session Layer
It establishes, maintains, and synchronizes interaction among communicating systems.
Dialog control: It allows two systems to enter into a dialog, and communication
between two processes can take place in either half-duplex or full-duplex mode.
Synchronization: The session layer allows a process to add checkpoints to a
stream of data. In case of a crash, data is retransmitted from the last checkpoint.
Binding: It binds together the different streams that are part of a single application. For
example, audio and video streams are combined in a teleconferencing application.

Presentation Layer
It is concerned with syntax and semantics of the information exchanged between peers.
Translation: Because different computers use different encoding systems, the
presentation layer is responsible for interoperability between these encoding methods.
Encryption: To carry sensitive information, a system ensures privacy by
encrypting the message before sending and decrypting it at the receiver end.
Compression: Data compression reduces the number of bits contained in the
information. It is particularly important in multimedia transmission.

Application Layer
The application layer enables the user, whether human or software, to access the network. It
provides user interfaces and support for services such as electronic mail, remote file
access, shared database management and several types of distributed services.


Explain the layers of TCP/IP architecture in detail.

Features
Internet architecture is a four layered model, also known as TCP/IP architecture.
It evolved out of a packet-switched network called ARPANET.
TCP/IP does not enforce strict layering, i.e., applications are free to bypass
transport layer and can directly use IP or any of the underlying networks.
IP layer serves as focal point in the architecture.
o It defines a common method for exchanging packets to any type of network
o Segregates host-to-host delivery from process-to-process delivery.
For any protocol to be added to the architecture, it must also be accompanied by at
least one working implementation of the specification. Thus efficiency is ensured.
Layers
Subnetwork: TCP/IP does not define any specific protocol for the lowest level.
o All standard and proprietary protocols such as Ethernet, FDDI, etc. are supported.
o The protocols are generally implemented by a combination of hardware/software.
IP: The major protocol in TCP/IP is the Internetworking Protocol (IP).

o It supports the interconnection of multiple networking technologies into a


logical internetwork.
o It is an unreliable and connectionless protocol.
o IP sends data in packets called datagrams, each of which is transported
separately and independently.
o Other protocols supported in this layer are ARP, RARP, ICMP and IGMP.
Transport layer is responsible for delivery of a message from one process to
another process. The two protocols supported in this layer are:
o Transmission Control Protocol (TCP) for connection-oriented reliable bytestream channel.
o User Datagram Protocol (UDP) for connectionless unreliable datagram
delivery channel.
Application layer supports a wide range of protocols such as FTP, TFTP, Telnet (remote
login), SMTP, etc., that enable the interoperation of popular applications.

Explain how framing is done using bit and byte oriented protocols.
Framing enables the message to reach the destination by adding physical
address of sender and destination.
When a message is divided into smaller frames, error affects only that small
frame. In fixed-size framing, there is no need for defining frame boundary.
In variable-size framing, receiver should be able to determine where a frame starts/ends.

BYTE-ORIENTED PROTOCOLS
The two different approaches are sentinel and byte-counting.
Sentinel approach
Binary Synchronous Communication (BISYNC) protocol developed by IBM.

SYN: special synchronization bits indicating the beginning of the frame
SOH: special sentinel character that indicates start of header
Header: contains physical address of source, destination and other information
STX: special sentinel character that indicates start of text/body
ETX: special sentinel character that indicates end of text/body
CRC: 16-bit CRC code used to detect transmission errors

Character stuffing
The problem with the sentinel approach is that the ETX character might appear in
the data. In such a case, ETX is preceded with a DLE (data-link-escape) character.
If the data portion contains the escape character itself, it is preceded by another DLE.
The insertion of DLE characters into the data is known as character stuffing.
The receiver removes the additional escape characters and correctly interprets the frame. If
ETX field is corrupted, then it is known as framing error. Such frames are discarded.
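A small illustration of this rule (not part of BISYNC itself; the byte values and buffer sizes below are assumed purely for the example): a C sketch that stuffs a DLE before every DLE or ETX found in the frame body.

#include <stdio.h>

#define DLE 0x10   /* data-link-escape sentinel (ASCII value) */
#define ETX 0x03   /* end-of-text sentinel (ASCII value) */

/* Copy 'len' body bytes into 'out', preceding every DLE or ETX with a DLE.
   Returns the stuffed length; 'out' must be at least twice 'len' bytes. */
int char_stuff(const unsigned char *body, int len, unsigned char *out)
{
    int j = 0;
    for (int i = 0; i < len; i++) {
        if (body[i] == DLE || body[i] == ETX)
            out[j++] = DLE;          /* escape the sentinel character */
        out[j++] = body[i];
    }
    return j;
}

int main(void)
{
    unsigned char body[] = {0x41, ETX, 0x42, DLE, 0x43};
    unsigned char out[2 * sizeof(body)];
    int n = char_stuff(body, sizeof(body), out);
    for (int i = 0; i < n; i++)
        printf("%02X ", out[i]);     /* prints: 41 10 03 42 10 10 43 */
    printf("\n");
    return 0;
}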

Byte-Counting Approach
An alternative to detect end-of-frame is to include number of bytes in the frame
body as part of the frame header.
Digital Data Communication Message Protocol (DDCMP) uses the count approach.

The Count field specifies how many bytes are contained in the frame's body.
If the Count field is corrupted, a framing error results. The receiver comes
to know of it when it comes across the SYN field of the next frame.

BIT-ORIENTED PROTOCOL
The bit-oriented protocols such as High-Level Data Link Control (HDLC) view the
frame as a collection of bits. The frame format

The beginning and end of a frame has a distinguished bit sequence 01111110
Sequence is also transmitted when link is idle for synchronization
Bit Stuffing
To prevent the occurrence of the bit pattern 01111110 within the frame body, bit stuffing is
used. On the sending side, whenever five consecutive 1s have been transmitted from the
body, a 0 is inserted before the next bit.

This extra stuffed bit is eventually removed from the data by the receiver.
The real flag 01111110 is not stuffed by the sender and is recognized by the receiver. If a
pattern such as 01111111 arrives, then an error has occurred and the frame is discarded.
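A minimal sketch of the sender-side stuffing rule (bits are kept one per array element only for readability; a real HDLC implementation operates on a serial bit stream):

#include <stdio.h>

/* Insert a 0 after every run of five consecutive 1s in the frame body.
   'bits' holds one bit per element (0 or 1); returns the stuffed length.
   'out' must be large enough (at most len + len/5 elements). */
int bit_stuff(const int *bits, int len, int *out)
{
    int ones = 0, j = 0;
    for (int i = 0; i < len; i++) {
        out[j++] = bits[i];
        if (bits[i] == 1) {
            if (++ones == 5) {       /* five 1s in a row: stuff a 0 */
                out[j++] = 0;
                ones = 0;
            }
        } else {
            ones = 0;                /* run of 1s broken by a 0 */
        }
    }
    return j;
}

int main(void)
{
    int body[] = {0, 1, 1, 1, 1, 1, 1, 0, 1};   /* contains a run of six 1s */
    int out[16];
    int n = bit_stuff(body, 9, out);
    for (int i = 0; i < n; i++)
        printf("%d", out[i]);                   /* prints 0111110101 */
    printf("\n");
    return 0;
}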

Clock-based Framing
The Synchronous Optical Network (SONET) standard uses clock-based framing with frames of
fixed size. SONET runs on the carrier's optical network and offers a rich set of services.
A SONET STS-1 frame is arranged as nine rows of 90 bytes each, shown below

The first 3 bytes of each row are overhead, with the rest being available for data.
The first 2 bytes of the frame contain a special bit pattern indicating start of frame.
Bit stuffing is not employed here.
The receiver looks for the special bit pattern once in every 810 bytes. If not so,
the frame is discarded.
The overhead bytes of a SONET frame are encoded using NRZ encoding. To
allow the receiver to recover the sender's clock, the payload bytes are scrambled.
SONET supports the multiplexing of multiple low-speed links. The links range
from 51.84 Mbps (STS-1) to 2488.32 Mbps (STS-48).
At STS-1 rates, a frame is 810 bytes long, while at STS-3 rates, each frame is
2430 bytes long. The multiplexing of three STS-1 frames onto one STS-3 is shown.
An STS-N signal can be used to multiplex N STS-1 frames. The payloads of the
STS-1 frames can be linked together to form a single STS-N payload, denoted STS-Nc.

How errors are introduced in the data?


Bit errors are introduced into frames because of electrical interference or thermal
noise. This interference can change the shape of the signal, i.e. bit inversion.
List the types of error with an example.
The two types of error are single-bit error and burst error
Single-bit error means that only 1 bit of a given data unit is changed. Single-bit
errors are the least likely type of error in serial data transmission.

Burst error means that 2 or more bits in the data unit have changed
The length of the burst is measured from the first corrupted bit to the last corrupted bit.

Differentiate error detection and error correction.


Error detection means using redundant information (parity bits) along with data to
enable the receiver detect whether the received data is corrupted or not. Examples
are Two dimensional parity, Internet checksum, CRC, etc. When an error is detected,
the data is discarded and a retransmission is done by the sender.
In error correction, the redundant bits are used to determine which bits are
corrupted and original data is restored by the receiver. Examples are Hamming
code, Reed Solomon, etc. No retransmission is required.
What is Vertical Redundancy Check (VRC)?
It is based on simple parity, which adds one extra bit to a 7-bit code.
The 8th bit is set to 1 to make the number of 1s in the byte even, otherwise 0.
It is used to detect all odd-numbered bit errors in the block.
0110011 01100110
0110001 01100011
What is Longitudinal Redundancy Check (LRC)?
The data bits are divided into equal segments and organized as
a table. Parity bit is computed for each column.
The resulting parity byte is appended and transmitted.

Explain error detection methods in detail with example


Error detection only determines whether an error has occurred; whether it is a
single-bit error or a burst error is immaterial.

The basic idea behind any error detection scheme is to add redundant information to
a frame that can be used to determine if errors have been introduced.
An efficient system should have k redundant bits for n data bits such that k << n

Two-Dimensional Parity
Data is divided into seven byte segments.
Even parity is computed for all bytes (Vertical Redundancy Check).
Even parity is also calculated for each bit position across each of the bytes
(Longitudinal Redundancy Check).
Thus a parity byte for the entire frame, in addition to a parity bit for each byte is sent.

The receiver recomputes the row and column parities. If parity bits are correct,
the frame is accepted else discarded.
Two-dimensional parity catches all 1, 2 and 3-bit errors, and most 4-bit errors.
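A small C sketch of the scheme, assuming a block of six 7-bit characters chosen only for illustration; the 8th bit of each byte carries the row (VRC) parity and one extra byte carries the column (LRC) parity:

#include <stdio.h>

/* Count the 1 bits in a byte. */
static int ones(unsigned char b)
{
    int n = 0;
    for (; b; b >>= 1)
        n += b & 1;
    return n;
}

int main(void)
{
    unsigned char data[6] = {0x65, 0x23, 0x5A, 0x11, 0x46, 0x7C}; /* 7-bit characters */
    unsigned char column_parity = 0;

    for (int i = 0; i < 6; i++) {
        if (ones(data[i]) % 2)            /* row (VRC) parity goes into the 8th bit */
            data[i] |= 0x80;
        column_parity ^= data[i];         /* XOR accumulates column (LRC) parity */
    }

    for (int i = 0; i < 6; i++)
        printf("%02X ", data[i]);
    printf("| parity byte %02X\n", column_parity);
    return 0;
}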
Internet Checksum
The 16-bit checksum is not used at the link layer but by upper-layer protocols such as UDP.
Sender
The data is divided into 16-bit words.
The initial checksum value is 0.

All words (including the checksum) are summed using one's complement arithmetic.
Carries (if any) are wrapped around and added to the sum.
The complement of the sum is known as the checksum and is sent with the data.
Receiver
The message (including checksum) is divided into 16-bit words.
All words are added using one's complement addition.
The sum is complemented and becomes the new checksum.
If the value of the checksum is 0, the message is accepted; otherwise it is rejected.

Example (4-bit words, data 7, 11, 12, 6):
Sender:
0111 + 1011 + 1100 + 0110 + 0000 (initial checksum) = 100100
Wrapping the carry: 0100 + 10 = 0110
Checksum = complement of 0110 = 1001 (sent along with the data)
Receiver:
0111 + 1011 + 1100 + 0110 + 1001 (received checksum) = 101101
Wrapping the carry: 1101 + 10 = 1111
New checksum = complement of 1111 = 0000, so the message is accepted.
Analysis
Checksum is well suited for software implementation but is not as strong as CRC.
If the value of one word is incremented and another word is decremented by the same
amount, the errors are not detected because the sum and checksum remain the same.
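A minimal sketch of the one's complement checksum over 16-bit words, in the style described above (the data words reuse the small example, padded to 16 bits):

#include <stdio.h>
#include <stdint.h>

/* One's complement sum of 16-bit words with end-around carry;
   the complement of the sum is the checksum. */
uint16_t checksum(const uint16_t *words, int count)
{
    uint32_t sum = 0;
    for (int i = 0; i < count; i++) {
        sum += words[i];
        if (sum & 0x10000)          /* wrap the carry back into the sum */
            sum = (sum & 0xFFFF) + 1;
    }
    return (uint16_t)~sum;          /* complement of the sum */
}

int main(void)
{
    uint16_t data[4] = {7, 11, 12, 6};
    uint16_t ck = checksum(data, 4);
    printf("checksum = 0x%04X\n", ck);

    /* Receiver side: recomputing the checksum over data plus the received
       checksum yields 0, so the message is accepted. */
    uint16_t all[5] = {7, 11, 12, 6, ck};
    printf("verify   = 0x%04X\n", checksum(all, 5));
    return 0;
}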
Cyclic Redundancy Check (CRC)
CRC developed by IBM uses the concept of finite fields.
An n-bit message is represented as a polynomial of degree n - 1.
The message M(x) is represented as a polynomial by using the value of each bit in the
message as the coefficient for each term. For example, 10011010 represents x^7 + x^4 + x^3 + x.
For calculating a CRC, sender and receiver agree on a divisor polynomial C(x) of degree
k such that k ≤ n - 1.
Sender

Multiply M(x) by x^k, i.e., append k zeroes. Let the modified polynomial be M'(x).
Divide M'(x) by C(x) using XOR operations. The remainder has k bits.
Subtract the remainder from M'(x) using XOR to obtain T(x), and transmit T(x) with n + k bits.

Receiver
Divide the received polynomial T(x) by C(x) as done at the sender.
If the remainder is non-zero, the frame is discarded.
If it is zero, there are no errors and the redundant bits are removed to obtain the data.

Divisor Polynomial
The divisor polynomial C(x) should have the following error-detecting properties:
o All single-bit errors, as long as the x^k and x^0 terms have nonzero coefficients.
o Any burst error for which the length of the burst is less than k bits.
o Any odd number of errors, as long as C(x) contains the factor (x + 1)

The versions of C(x) widely used in link-level protocols are CRC-8, CRC-10,
CRC-12, CRC-16, CRC-CCITT and CRC-32.
CRC algorithm is implemented in hardware using a k-bit shift register and
XOR gates. CRC is widely used in networks such as LANs and WANs.
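A bit-at-a-time C sketch of the CRC division, using CRC-8 with the common divisor polynomial x^8 + x^2 + x + 1 (0x07); the message bytes are arbitrary and chosen only for illustration:

#include <stdio.h>
#include <stdint.h>

/* Compute an 8-bit CRC over 'len' message bytes by long division modulo 2.
   Equivalent to appending 8 zero bits and taking the remainder. */
uint8_t crc8(const uint8_t *msg, int len)
{
    uint8_t rem = 0;
    for (int i = 0; i < len; i++) {
        rem ^= msg[i];                             /* bring the next byte into the remainder */
        for (int bit = 0; bit < 8; bit++) {
            if (rem & 0x80)
                rem = (uint8_t)(rem << 1) ^ 0x07;  /* "subtract" (XOR) the divisor */
            else
                rem <<= 1;
        }
    }
    return rem;
}

int main(void)
{
    uint8_t msg[] = {0x9A, 0x55, 0x10};            /* arbitrary message bytes */
    uint8_t r = crc8(msg, 3);
    printf("CRC = 0x%02X\n", r);

    /* Receiver check: dividing the message followed by the CRC leaves remainder 0. */
    uint8_t frame[] = {0x9A, 0x55, 0x10, r};
    printf("remainder at receiver = 0x%02X\n", crc8(frame, 4));
    return 0;
}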

Define flow control.


Flow control is a set of procedures that tells the sender how much data it can
transmit before it must wait for an acknowledgment from the receiver.
It prevents a fast sender from overwhelming a slow receiver with frames.
Define acknowledgement.
An acknowledgment (ACK) is a small control frame that a protocol sends back to
the sender acknowledging the receipt of a frame.
Frames are delivered in a reliable manner using acknowledgement
What is automatic repeat request?
When a corrupt frame arrives at the receiver, it is discarded.
If the sender does not receive an acknowledgment within a specified period (timeout), it
retransmits the original frame. This is known as automatic repeat request (ARQ).

The two ARQ mechanisms are Stop-and-Wait ARQ and Sliding Window ARQ.
Explain various flow control mechanism or reliable transmission.
Stop and Wait ARQ
The sender keeps a copy of the frame and then transmits it.
The sender waits for an acknowledgment before transmitting the next frame.
If acknowledgment does not arrive before timeout, the sender retransmits the frame.

Scenarios
a) ACK is received before the timer expires. The sender sends the next frame.
b) The frame gets lost in transmission. Sender eventually times out and retransmits frame.

c) ACK frame gets lost. The sender eventually times out and retransmits the frame.
d) The sender times out too soon, before the ACK arrives, and retransmits the frame.
Sequence number
In scenarios (c) and (d), since the receiver has acknowledged the received
frame, it treats the arriving frame as the next one. This leads to duplicate frames.
To address duplicate frames, the header for a stop-and-wait protocol includes a
1-bit sequence number (0 or 1) based on modulo-2 arithmetic.
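A toy simulation of the stop-and-wait sender loop with a 1-bit sequence number; deliver_ack() below simply drops ACKs at random to trigger retransmission, standing in for the real link and timer, which are not modelled:

#include <stdio.h>
#include <stdlib.h>

/* Pretend an ACK arrives before the timeout about 70% of the time. */
static int deliver_ack(void) { return rand() % 10 >= 3; }

int main(void)
{
    int seq = 0;                     /* 1-bit sequence number: 0 or 1 */
    for (int frame = 0; frame < 4; frame++) {
        for (;;) {
            printf("send frame %d with seq=%d\n", frame, seq);
            if (deliver_ack()) {     /* ACK arrived before the timeout */
                printf("  ACK %d received\n", seq);
                break;
            }
            printf("  timeout, retransmit\n");
        }
        seq = 1 - seq;               /* toggle the sequence number (modulo-2) */
    }
    return 0;
}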

Drawbacks
It allows the sender to have only one outstanding frame on the link at a time
Inefficient if the channel has a large bandwidth and the round-trip delay is long.
Sliding window
To improve efficiency, multiple frames must be in transition while waiting for an
acknowledgment. Sliding window protocol makes this possible.

The window defines the range of sequence numbers that both sender and receiver have to deal with.
The window position changes (slides) as frames are transmitted and acknowledged.

Sender
The sender assigns a sequence number SeqNum to each frame.
A timer is associated with each frame it transmits, and the frame is retransmitted on timeout.
It maintains three state variables:
o The send window size SWS gives the upper bound on the number of
outstanding frames that the sender can transmit.
o LAR denotes the sequence number of the last acknowledgment received.
o LFS denotes the sequence number of the last frame sent.
o The invariant LFS - LAR ≤ SWS is always maintained.

When an acknowledgment arrives, the sender moves LAR to the right,


thereby allowing the sender to transmit the subsequent frames.
The sender buffers up to SWS frames (for retransmission), until they are acknowledged.
Receiver
Similarly the receiver maintains three state variables:
o The receive window size RWS gives the upper bound on the number of out-of-order
frames that the receiver is willing to accept.
o LAF denotes the acceptable frame with the largest sequence number.
o LFR denotes the sequence number of the last frame received.
o The invariant LAF - LFR ≤ RWS is always maintained.
A frame numbered SeqNum is accepted if LFR < SeqNum ≤ LAF; otherwise it is
discarded. Frames can arrive out of order and may be buffered.
Let SeqNumToAck denote the largest sequence number such that all frames with sequence
numbers up to and including it have arrived. The receiver acknowledges SeqNumToAck
(a cumulative acknowledgement). The variables updated are:
o LFR = SeqNumToAck
o LAF = LFR + RWS
Window size
SWS depends on how many frames are expected to be outstanding on the link; it
is based on the delay × bandwidth product.
RWS is either set to 1 or to the value of SWS.
o If RWS = 1, the receiver does not buffer out-of-order frames.
o If RWS = SWS, the receiver buffers out-of-order frames, but acknowledges them
only once all lower-numbered frames have arrived.

Example
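A toy sender-side trace, assuming SWS = 4, ten frames and a loss-free link (an assumed illustration, not the original figure), showing how LAR and LFS move while the invariant LFS - LAR ≤ SWS is maintained:

#include <stdio.h>

#define SWS 4   /* send window size */

int main(void)
{
    int LAR = -1;    /* last acknowledgement received */
    int LFS = -1;    /* last frame sent */
    int next_ack = 0;
    int total = 10;  /* frames to transmit */

    while (LAR < total - 1) {
        /* Send as long as the invariant LFS - LAR <= SWS holds. */
        while (LFS - LAR < SWS && LFS < total - 1) {
            LFS++;
            printf("send frame %d  (window %d..%d)\n", LFS, LAR + 1, LAR + SWS);
        }
        /* A cumulative ACK arrives and slides the window to the right. */
        LAR = next_ack++;
        printf("ACK %d received, LAR=%d\n", LAR, LAR);
    }
    return 0;
}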

Lost/Corrupt frames
When frames are lost or corrupt, there is less data in transit, since the sender
cannot advance its window without an acknowledgement.
The receiver acknowledges a frame only if all lower-numbered frames have
arrived; the acknowledgement is cumulative.
The receiver buffers out-of-order frames but does not acknowledge them immediately.
It sends a negative acknowledgement (NAK) indicating to the sender to
retransmit the expected frame.
NAK speeds up retransmission of a frame before timer expires and improves performance.

Sequence Number
Sequence numbers are modulo 2^m, where m is the size of the sequence number field in bits.
Sequence numbers wrap around, and MaxSeqNum denotes the number of available
sequence numbers.
So that the receiver can distinguish a retransmitted old frame from a new one, SWS is
constrained as
SWS < (MaxSeqNum + 1) / 2
Advantages
It delivers frames reliably across an unreliable link using timeout and acknowledgement.
It preserves the order in which frames are transmitted. The receiver ensures that it does

not pass a frame to the upper layer until all lower numbered frames are passed.
It supports flow control. The receiver through acknowledgement informs the
sender about how many frames it can still receive.
Distinguish between Stop & Wait and Sliding window protocol.
Only one frame can be outstanding in Stop-and-Wait, whereas multiple frames
can be outstanding in sliding window, i.e., improved efficiency.
The Stop-and-Wait ARQ protocol is a special case of sliding window in which the
send window size is 1.
Frames are numbered modulo 2^m in sliding window, whereas stop-and-wait uses
modulo-2 (0 and 1) sequence numbers.
What is concurrent logical channel?
Multiplexing more than one logical channel onto a single point-to-point link is known
as concurrent logical channels.
Stop-and-wait is run on each of these logical channels.
The sender maintains 3-bit state information namely whether busy, sequence
number of next frame and sequence number of next frame expected.
When a node has frame to send, it is sent on the lowest idle channel.

Explain the factors that affect performance of the network.


Bandwidth and Latency
Performance of a network is measured in terms of bandwidth and latency.
Bandwidth refers to number of bits that can be transmitted over the network
within a certain period of time (throughput).
Bandwidth also determines how long it takes to transmit each bit. For example, each
bit on a 1-Mbps link is 1 μs wide, while each bit on a 2-Mbps link is 0.5 μs wide.

The bandwidth of a logical channel also depends on how many times the software that
implements the channel has to handle (and possibly transform) each bit of data.
Latency refers to how long it takes for the message to travel to the other end
(delay). It is a factor of propagation delay, transmission time and queuing delay
Latency = Propagation + Transmit + Queue
Propagation = Distance / SpeedOfLight
Transmit = Size / Bandwidth
o Speed-of-light propagation depends on the medium (2.3 × 10^8 m/s in copper,
2.0 × 10^8 m/s in optical fiber) and the distance.
o Transmission time depends upon bandwidth and packet size.
o Queuing delay occurs at switches and routers.
Round Trip Time (RTT) is a two-way latency.

For applications that have minimal data transfer, latency dominates performance
and for bulk data transfers, bandwidth dominates performance.
Delay Bandwidth Product

Consider a pipe, in which bandwidth is given by the diameter and delay corresponds
to the length of the pipe.
The delay × bandwidth product specifies the number of bits in transit. It corresponds to
how much the sender can transmit before the first bit is received at the other end.
If the receiver signals the sender to stop, it would still receive RTT × bandwidth of data.
For example, for a cross-country fiber with 10 Gbps bandwidth and a distance of 4000
km, the RTT is 40 ms and RTT × bandwidth is 400 Mb.
High Speed Networks
High-speed networks increase the bandwidth available to applications, but latency remains fixed.
For example, a 1-MB file transmitted over a 1-Mbps link takes about 80 RTTs, whereas
the same file over a 1-Gbps link occupies less than one RTT's worth of the pipe.


Effective end-to-end throughput that can be achieved is given as
Throughput = TransferSize / TransferTime
TransferTime includes latency as well as setup time. It is computed as
TransferTime = RTT + 1/Bandwidth × TransferSize
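The formulas above can be checked with a small calculation using the example figures from this section (a 4000-km fibre, a 10-Gbps link and a 1-MB transfer):

#include <stdio.h>

int main(void)
{
    double distance  = 4000e3;       /* metres */
    double speed     = 2.0e8;        /* m/s in optical fibre */
    double bandwidth = 10e9;         /* bits per second */
    double size      = 1e6 * 8;      /* 1 MB expressed in bits */

    double propagation = distance / speed;          /* one-way propagation delay */
    double rtt         = 2 * propagation;           /* two-way latency */
    double pipe        = rtt * bandwidth;           /* delay x bandwidth product */

    double transfer_time = rtt + size / bandwidth;  /* one RTT of setup + transmit time */
    double throughput    = size / transfer_time;    /* effective end-to-end throughput */

    printf("RTT                  = %.1f ms\n", rtt * 1e3);          /* 40.0 ms   */
    printf("delay x bandwidth    = %.0f Mb\n", pipe / 1e6);          /* 400 Mb    */
    printf("effective throughput = %.0f Mbps\n", throughput / 1e6);  /* ~196 Mbps */
    return 0;
}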

Application Performance Needs


Applications generally require as much bandwidth as the network can provide.
Some applications such as video, specify an upper limit on bandwidth required.
Video streams are generally compressed, so the flow rate varies with the amount of detail
in the scene, the compression algorithm used, etc. The average bandwidth can be determined,
but instantaneous bursty traffic should be accounted for.
In some cases, the latency varies from packet to packet, known as jitter.
Suppose that the packets being transmitted over the network contain video
frames; the receiver will not be able to display a frame that arrives late. If the
receiver knows the latency that packets may experience, it can delay playing the
first frame of the video. Thus the jitter is smoothed out by buffering.

Give the format of a PPP frame and explain its fields.

The Point-to-Point Protocol (PPP) is used to carry packets over point-to-point links. The flag field contains the special character 01111110.
The protocol field is used for multiplexing.
The payload is 1500 bytes by default.

Explain the socket API for implementing network application.


Network protocols are implemented as part of the operating system, and the interface
provided is known as the network application programming interface (API).
Network APIs provide the syntax through which protocol services are invoked.
The socket interface from Berkeley Unix is widely used.
The point of abstraction in the socket interface is the socket: an endpoint of the
communication link between applications running on the network.
Operations defined are socket creation, binding socket to network, send / receive
messages and finally close the socket.
Socket Creation
Socket is created using socket interface. A handle is returned on successful
creation.
int socket(int domain, int type, int protocol)

o domain argument specifies the protocol family (PF_INET for Internet family,
PF_PACKET for direct access to the network, etc.)
o type argument specifies the semantics of communication (SOCK_STREAM for byte stream,
SOCK_DGRAM for message-oriented service, SOCK_RAW for raw sockets)
o protocol argument specifies the protocol used (default value 0).
Server Process
Server processes perform passive open, i.e., it waits for client requests by
invoking the following operations:
int bind(int socket, struct sockaddr *address, int addr_len)
int listen(int socket, int backlog)
int accept(int socket, struct sockaddr *address, int *addr_len)
o bind operation attaches the socket to the server host's IP address and port. Server
port numbers are well-known, i.e., in the range 0 to 1023 (for example, web servers use port 80).
o listen operation specifies the number of pending connections.
o accept operation blocks until a client establishes a connection.

Client Process
Client processes perform active open, i.e., it establishes connection with the
server using connect operation.
int connect(int socket, struct sockaddr *address, int addr_len)
Client knows the remote server's logical address and port number and lets the
system fill in detail such as client IP address and ephemeral port number.
Communication
Communication between server and client process takes place after connection
establishment using send and recv operation.
int send(int socket, char *message, int msg_len, int flags)
int recv(int socket, char *buffer, int buf_len, int flags)
o send operation is used to send a message over the socket, and recv operation
is used to store a message received over the socket into a buffer.

Chat Application
Chat application is a simple client/server program that uses the socket interface
to send messages over a TCP connection in half-duplex mode.
/* TCP Chat server program */
#include <stdio.h>
#include <string.h>
#include <strings.h>
#include <sys/socket.h>
#include <netinet/in.h>

int main()
{
    int sd, bd, ad;
    struct sockaddr_in servaddr, cliaddr;
    socklen_t clilen = sizeof(cliaddr);
    char buff[1024], buff1[1024];

    /* Socket address structure */
    bzero(&servaddr, sizeof(servaddr));
    servaddr.sin_family = AF_INET;
    servaddr.sin_addr.s_addr = htonl(INADDR_ANY);
    servaddr.sin_port = htons(5555);

    /* TCP socket creation */
    sd = socket(PF_INET, SOCK_STREAM, 0);

    /* Passive open: bind, listen, accept */
    bd = bind(sd, (struct sockaddr *)&servaddr, sizeof(servaddr));
    listen(sd, 5);
    printf("Server is running\n");
    ad = accept(sd, (struct sockaddr *)&cliaddr, &clilen);
    printf("Client connection established\n");

    while (1)
    {
        /* Receive message from client */
        recv(ad, buff, sizeof(buff), 0);
        printf("Received from the client: %s", buff);
        if (strcmp(buff, "bye") == 0)
            break;

        /* Send message to the client */
        printf("\nEnter the input data: ");
        fgets(buff1, sizeof(buff1), stdin);
        send(ad, buff1, strlen(buff1) + 1, 0);
    }
    printf("\nClient disconnected\n");
    return 0;
}
The header file <netinet/in.h> defines the IPv4 socket address structure sockaddr_in.
The socket address structure is constructed using a port number that is not used by any
well-known Internet service, say 5555.
The server's IP address is set to INADDR_ANY to accept connections on any of the
host's IP addresses.
After the passive open, the server exchanges messages with the client.
/* TCP Chat client program */
#include <stdio.h>
#include <string.h>
#include <strings.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main(int argc, char *argv[])
{
    int sd, cd;
    struct sockaddr_in servaddr;
    char buff[1024], buff1[1024];

    /* Socket address structure */
    bzero(&servaddr, sizeof(servaddr));
    servaddr.sin_family = AF_INET;
    if (argc == 2)
        servaddr.sin_addr.s_addr = inet_addr(argv[1]);
    else
        servaddr.sin_addr.s_addr = inet_addr("127.0.0.1");
    servaddr.sin_port = htons(5555);

    /* TCP socket creation */
    sd = socket(PF_INET, SOCK_STREAM, 0);

    /* Active open */
    cd = connect(sd, (struct sockaddr *)&servaddr, sizeof(servaddr));
    printf("Enter \"bye\" to quit\n");

    while (1)
    {
        /* Send message to the server */
        printf("Enter the input data: ");
        fgets(buff, sizeof(buff), stdin);
        buff[strcspn(buff, "\n")] = '\0';   /* strip the newline so "bye" compares cleanly */
        send(sd, buff, strlen(buff) + 1, 0);
        if (strcmp(buff, "bye") == 0)
            break;

        /* Receive message from the server */
        recv(sd, buff1, sizeof(buff1), 0);
        printf("Received from the server: %s", buff1);
    }
    printf("Connection Terminated\n");
    return 0;
}

The client builds the data structure required for the socket interface and opens an active
connection with the server.
If the server's IP address is not given as a command-line argument, the server
process is assumed to be running on the same host (localhost).
The client uses a sentinel value (bye) to terminate the connection.

Discuss the requirements for building a computer network.


Perspectives
An application programmer lists the services based on application needs. For
example, a guarantee that each message will be delivered without error or within
a certain time or to allow graceful switching in a mobile environment.
A network operator lists the characteristics of a system that is easy to administer and
manage. For example, fault isolation, adding new devices, easy to account for usage, etc.

A network designer lists the properties of a cost-effective design. For example,


efficient utilization of network resources, fair allocation to users, etc.
Scalable Connectivity
A system that is designed to support growth to an arbitrarily large size is scalable.
Physical medium is referred to as link, and devices that connect to the link are nodes.

Link could be either dedicated point-to-point between nodes or shared amongst


nodes with multiple access.
End nodes can be connected through a set of forwarding nodes called switches.
Switching could be either circuit or packet switching.
Packet-switched networks use the store-and-forward method, i.e., the switch
receives a packet, stores it in its buffer and later forwards it onto another link.
Independent networks are connected to form internetwork or internet. A node that
connects two or more networks is known as router.
The process of systematically determining how to forward messages toward the destination based on its address is known as routing.
A node can also send messages to a group of nodes (multicasting) or to all
nodes on the network (broadcasting).
Each node on the network is assigned a unique address.

Cost-Effective Resource Sharing


Hosts can share network resources using the concept of multiplexing. For
example, multiple flows can be multiplexed onto a single physical link.
Synchronous time-division multiplexing (STDM) divides time into equal slots and
flows use the slots in a round-robin manner.
Frequency-division multiplexing (FDM) transmits each flow at different frequency.
In statistical multiplexing, link is shared over time as in STDM, but packets are
transmitted from each flow on demand, rather than on predetermined slot.
Packets multiplexed at one switch are demultiplexed at the other end.

Switch decides which packet is to be transmitted from the packets queued up,
according to queuing discipline such as FIFO.

Support for Common Services


Since applications share a common set of service needs, it is apt for the network designer to identify
and implement a common set of services for the application designer to build upon.

The network provides logical channels and the set of services required for process-to-process communication.
Functionalities may include guaranteed delivery, in-order delivery, privacy, etc.
File access program such as FTP / NFS or sophisticated digital library application
require read and write operation performed either by client / server.
Two types of communication channels that could be provided are request/reply
and message stream channel
Request/reply channel guarantees delivery of message and ensures privacy and
integrity of data required in case of FTP or digital library.
Message stream channel does not guarantee delivery of all data but assures in-order delivery, required in applications like video conferencing.
Reliability
Reliability is an important characteristic to be provided by the network, i.e., it
should be possible for the network to recover from errors.
Single bit/ burst errors may occur during data transmission due to interference.
Such errors can be detected and retransmission sought for.
Packets can be dropped due to congestion or wrongly routed.
Links can fail or node can crash. In case of failed link, it should be possible to
route the packet along alternate path.
Manageability
Network needs troubleshooting to adapt to increase in traffic or to improve performance.
Managing network devices on the internet to work correctly is a challenging one.
Automating network management tasks is needed for scalability and cost-effectiveness.
Networks are now commonplace and are often managed by consumers with little networking skill.

UNIT II
What is CSMA?
In Carrier Sense Multiple Access (CSMA), each station first checks state of the
medium using one of the persistence methods before sending.
The possibility of collision still exists because of propagation delay. When a
station sends a frame, it takes time for the first bit to reach every station.
Define persistent methods 1-persistent, non-persistent and P-persistent.
1-Persistent
When a station finds the line idle, sends its frame immediately (with probability 1).
This method has the highest chance of collision because two or more stations
may find the line idle and send frames immediately.
Non-persistent
When a station senses the line to be idle, it sends immediately.
If the line is not idle, it waits a random amount of time and then senses the line again.
This reduces collisions, since it is unlikely that two stations will wait the same amount of time and retry.
It is less efficient because the medium may remain idle when there are frames to send.

p-Persistent
This method is used if the channel has time slots with a duration equal to or greater than
the propagation time. It reduces collisions and improves efficiency.
With probability p the station transmits the frame; otherwise it waits for the next time slot
and checks again. If the line becomes busy, the back-off procedure is adopted.
What is CSMA/CD mechanism?
Carrier Sense Multiple Access with Collision Detection (CSMA/CD) method
handles collisions detected over a wired medium.
Station monitors the medium after it sends a frame
If a collision is detected during transmission, the station transmits a brief jamming
signal to alert all stations about collision and aborts transmission.
It waits for a random amount of time and attempts retransmission.
Explain IEEE 802.3 standard or Ethernet in detail.
Ethernet is standardized as IEEE 802.3
Standard Ethernet is the most successful LAN technology with a data rate of 10 Mbps. It
has evolved to Fast Ethernet (100 Mbps), Gigabit Ethernet (1 Gbps) and Ten-Gigabit

Ethernet (10 Gbps).


Physical Properties

Hosts are tapped on to the Ethernet segment, each at least 2.5 m apart.

Transceiver is responsible for transmitting/receiving frames and collision detection.
Protocol logic is implemented in the adaptor.
Ethernet can support a maximum of 1024 hosts.
Maximum length of an Ethernet is 2500 m.

Ethernet segments are connected using repeater.


Manchester encoding scheme is used and digital signaling (baseband) at 10 Mbps.
Various forms of Standard Ethernet are 10Base5, 10Base2, 10Base-T & 10Base-F.
o 10Base5 or Thick Ethernet uses thick coax cable (up to 500 m) with bus topology.
o 10Base2 or Thin Ethernet uses thin coax cable (up to 200 m) with bus topology.
o 10BaseT (Twisted-Pair) uses CAT-5 cable (up to 100 m) with star topology.
o 10BaseF (Fiber) uses fiber-optic cable (up to 2000 m) with star topology.

Access Protocol
Medium Access Control (MAC) regulates access to the shared Ethernet link.
Frame Format

Preamble: contains alternating 0s and 1s that alerts the receiving node.
Destination address: contains the physical address of the destination host.
Source address: contains the physical address of the sender.
Type/Length: may contain either the type of the upper-layer protocol or the frame length.
Data: carries data (46 to 1500 bytes) encapsulated from the upper-layer protocols.
CRC: contains error detection information (CRC-32).


Addressing
Each host on the Ethernet network has its own network interface card (NIC).
NIC provides a globally unique 6-byte physical address (in Hex for readability).
If the LSB of the first byte in a destination address is 0, the address is unicast;
otherwise it is multicast. In the broadcast address, all bits are 1s.
Transmitter
Ethernet is a working example of CSMA/CD.
Minimum frame length (64 bytes) is required for operation of CSMA/CD.
Signals placed on the ethernet propagate in both directions and is broadcasted.
Ethernet is said to be a 1-persistent protocol. When the adaptor has a frame to send:
o If the line is idle, it transmits the frame immediately.
o If the line is busy, it waits for the line to go idle and then transmits immediately.
It is possible for two (or more) adaptors to begin transmitting at the same time.

o In such cases, frames collide


o A 96-bit runt frame (64-bit preamble + 32-bit jamming sequence) is
sent and transmission is aborted.
Retransmission is attempted after a back-off procedure (a delay of k × 51.2 μs, with k
chosen at random from an exponentially growing range). After 16 attempts,
retransmission is given up.
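A sketch of the back-off delay calculation only (not a real adaptor); the 51.2-μs slot and the 16-attempt limit are from the text, and the usual doubling of the range, capped at 2^10 slots, is assumed:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const double slot_us = 51.2;            /* one slot = 512 bit times at 10 Mbps */

    for (int attempt = 1; attempt <= 16; attempt++) {
        /* After the n-th collision, k is drawn from 0 .. 2^min(n,10) - 1. */
        int exp = attempt < 10 ? attempt : 10;
        int k = rand() % (1 << exp);
        printf("collision %2d: wait k=%4d slots = %8.1f us\n",
               attempt, k, k * slot_us);
    }
    printf("after 16 attempts the adaptor gives up\n");
    return 0;
}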
Receiver
Each frame transmitted on an Ethernet is received by every adaptor on that network.

The adaptor accepts the frame if the destination address matches its own address, the
broadcast address (all 1s), or a multicast address of a group it belongs to.
Frames are discarded if they are not meant for that host.
It accepts all frames if configured to run in promiscuous mode.
Ethernet does not acknowledge received frames.
List the advantages and disadvantages of Ethernet.
It is easy to administer and maintain, and relatively inexpensive.
It performs well only under lightly loaded conditions.
It is an unreliable medium.
Why the minimum frame length in Ethernet should be at least 64 bytes?
Consider the following worst case scenario in which hosts A and B are at either ends.

Host A begins transmitting a frame at time t.


It takes link latency d for the frame to reach host B. Thus, the first bit of A's frame
arrives at B at time t + d.
Suppose an instant before host A's frame arrives, host B senses an idle line and
begins to transmit its own frame.
B's frame will immediately collide with A's frame, and this collision will be
detected by host B. Host B will send the 32-bit jamming sequence.
Host A will not know that the collision occurred until B's frame reaches it at time t + 2d. On
a maximally configured Ethernet, the round-trip delay is 51.2 μs, i.e., 512 bit times (64 bytes).

List the function of a repeater?


A repeater is a device that connects LAN segments and extends the length of the LAN.
A repeater reconstructs the received weak signal to its original strength and
forwards it on all outgoing segments.
At most four repeaters can be placed between a pair of hosts.
It operates in the physical layer.
What is token ring?
Token ring was developed by IBM and later became the standard IEEE 802.5.
Stations are connected in a ring-based topology.
A small frame called a token circulates around the ring in a unidirectional way.
A station can send data only if it possesses the token. Eventually the sender removes its
data frame and inserts a new token onto the ring.
Twisted pair is used as the physical medium with differential Manchester encoding.
A maximum of 250 stations can be included in the ring, with a data rate of 4 / 16 Mbps.

Compare Ethernet and Token ring.


There is bidirectional data flow in Ethernet, whereas in token ring it is unidirectional.
The medium in both Ethernet and Token ring is shared.
All stations see all frames in both Ethernet and Token ring.
Topology in ethernet can be either bus or star, whereas token ring uses ring topology
Medium access by a station is controlled in Token ring, whereas in Ethernet it is random.

List the properties of FDDI.


Fiber Distributed Data Interface (FDDI) network consists of two independent rings,
primary and secondary, designed to transmit data in opposite directions.
The secondary ring is used to transmit data only if the primary fails.
FDDI is a 100-Mbps network with fiber-optic cable as the physical medium.
An FDDI network can have up to 500 stations.
FDDI uses 4B/5B encoding.
Why collision detection cannot be used in a wireless environment?
Signal fading can prevent a station at one end from hearing a collision at the other end.
Collision detection requires costly stations and increased bandwidth.
Collisions may not be detected because of the hidden station problem.
What is spread spectrum technique.
In spread spectrum, the signal is spread over a wider frequency band to minimize
interference. The two types are frequency hopping and direct sequence.
In FHSS, signal is transmitted over a set of frequencies computed by a pseudorandom
generator. Receiver uses the same algorithm and seed, thereby synchronized.

In DSSS for each data bit, the sender transmits the XOR of that bit and n random
bits (chipping code) from pseudorandom number generator.
Only the intended receiver can interpret the signal. For other nodes, it appears as a noise.

What is adhoc network?


An ad hoc or wireless mesh network has no base station. All nodes are peers.
Messages are forwarded through peers in a chain formation.
Mesh topology is very robust and fault-tolerant.
Explain the functioning of wireless LAN or IEEE 802.11 in detail
Wireless LAN or WLAN or WiFi is designed for use in a limited geographical area
(office, campus, building, etc).
Physical Properties
WLAN runs over physical media based on FHSS (frequency hopping over 79 1-MHz-wide frequency bands) and DSSS (direct sequence with an 11-bit chipping sequence), both at 2 Mbps.

802.11b runs in the 2.4-GHz frequency band with a data rate of 11 Mbps.
802.11a (5-GHz band) and 802.11g (2.4-GHz band) use orthogonal FDM with a maximum
rate of 54 Mbps. 802.11n is the latest of these standards; it uses multiple antennas and
offers up to 100 Mbps.
802.11 defines the maximum bit rate for a/b/g/n. The optimal bit rate for transmission is
chosen based on the signal-to-noise ratio in the environment.
Collision Avoidance
Collision detection is not feasible, since all nodes are not within the reach of each other.

Hidden Node
Suppose node B is sending data to A. At the same time, node C also wishes to
send to A. Since node B is not within the range of C, C finds the medium free and
transmits to A. Frames from nodes B and C sent to A collide with each other.
Thus nodes B and C are hidden from each other.
Exposed Node
Suppose node A is transmitting to node B, and node C has some data to be sent to node D.
Node C finds the medium busy, since it hears the transmission from node A, and refrains
from sending to node D, even though its transmission to D would not interfere.
Thus node C is exposed to the transmission from node A to B.
Multiple Access with Collision Avoidance (MACA)
In MACA, sender and receiver exchange short control frames to reserve access,
so that nearby nodes avoids transmission during duration of the data frame.
Control frames used to avoid collision are Request to Send (RTS) and Clear to Send (CTS).
The sender sends an RTS frame to the receiver containing the sender/receiver addresses
and the transmission duration.
Nodes that receive the RTS frame are close to the sender and wait for the CTS to be transmitted back.
The receiver acknowledges by sending a CTS frame containing the sender address and duration.
Nodes that receive the CTS remain silent for the upcoming data transmission.
A node that receives the RTS but not the CTS is away from the receiver and is free to transmit.
The receiver sends an ACK frame to the sender after successfully receiving a data frame.
If RTS frames from two or more nodes collide, they do not receive a CTS. Each node
waits for a random amount of time and then tries to send the RTS again (back-off procedure).

Solution for hidden node

Solution for exposed node

Distribution System
In a wireless network, nodes are mobile and the set of reachable nodes changes with time.
Mobile nodes are connected to a wired network infrastructure through access points (AP).
Access points are connected to each other by a distribution system (DS) such as Ethernet.
Nodes communicate directly with each other if they are reachable (e.g., A and C).
Communication between two nodes under different APs occurs via the two APs (e.g., A and E).
The technique for selecting an AP is called active scanning. It is done whenever
a node joins a network or switches over to another AP.
o The node sends a Probe frame.
o All APs within reach reply with a Probe Response frame.
o The node selects one of the APs and sends an Association Request frame.
o The AP replies with an Association Response frame.
APs also periodically send a Beacon frame that advertises their features such as
the transmission rate. This is known as passive scanning.
Frame Format

Control contains subfields that include a 6-bit frame type, i.e., management,
control (RTS, CTS, ACK) or data, and a pair of 1-bit fields ToDS and FromDS.
Duration specifies the duration of the frame transmission.
Addresses
The four address fields depend on the values of the ToDS and FromDS subfields.

ToDS  FromDS  Addr1         Addr2        Addr3        Addr4    Description
0     0       Destination   Source       -            -        Sent directly from one node to another
0     1       Destination   Sending AP   Source       -        Frame is coming from a distribution system
1     0       Receiving AP  Source       Destination  -        Frame is going to a distribution system
1     1       Receiving AP  Sending AP   Destination  Source   Frame is going from one AP to another AP

Sequence Control defines the sequence number of the frame to be used in flow control.
Payload contains a maximum of 2312 bytes.
CRC contains a CRC-32 error detection sequence.
Write short notes on Bluetooth
Bluetooth technology, standardized as IEEE 802.15.1, is a personal area network (PAN).
It is used for short-range wireless communication (maximum 10 m) between mobile
phones, PDAs, notebooks and other peripheral devices.
It uses low-power transmission, operates in the 2.45-GHz band, with data rates up to 3 Mbps.
Bluetooth Special Interest Group has specified a set of profiles for a range of application.
Bluetooth network is known as piconet. A piconet can have up to eight stations, one of
which is called the master and the rest are called slaves.
Slaves do not directly communicate with each other, but via the master.
Bluetooth uses FHSS and synchronous TDM for transmission. The master transmits in
even-numbered slots, whereas a slave transmits to the master in odd-numbered slots.
Slaves in the parked (inactive) state cannot communicate until activated by the
master. A maximum of 255 devices can be in the parked state.

Piconet
Compare the different wireless technologies.
Bluetooth (IEEE 802.15.1): link length 10 m; bandwidth 2.1 Mbps (shared); used to link a peripheral to a computer.
WiFi (IEEE 802.11): link length 100 m; bandwidth 54 Mbps (shared); used to link a computer to a wired base.
WiMax (IEEE 802.16): link length 10 km; bandwidth 70 Mbps; used to link a building to a wired tower.
3G: link length tens of km; bandwidth 384 Kbps; used to link a cell phone to a wired tower.

What is a switch and its function?


A switch is a multi-input, multi-output device that receives packets on one of its links and
transmits them on one or more other links. This is known as switching or forwarding.
Large networks can be built by interconnecting a number of switches.
Hosts are connected to a switch using point-to-point links.

Distinguish between circuit and packet switching.


Circuit switching vs. packet switching:
o The source and destination hosts are physically connected in circuit switching; no such physical connection exists in packet switching.
o Switching takes place at the physical layer in circuit switching; at the network layer (datagram) or data link layer (VCN) in packet switching.
o Resources such as bandwidth, switch buffers and processing time are allocated in advance in circuit switching; they are allocated on demand in packet switching.
o Resources remain allocated for the entire duration of the communication in circuit switching; in packet switching, idle resources can be reallocated, i.e., improved efficiency.
o There is no delay during data transfer in circuit switching; in packet switching, delay exists at each switch.
o Data is transferred as a continuous flow of signal between the two stations in circuit switching; as discrete packets in packet switching.
o Example: telephony (circuit switching); the Internet (packet switching).

Explain the different switching techniques in detail.


Datagram
Datagram switching is referred to as a connectionless network model, with switching done at
the network layer. The message is divided into packets.
Resources such as bandwidth are not reserved for a packet but are allocated on
demand. The lack of reservation creates delay.
Each packet is routed independently. Packets belonging to the same message may
travel different paths to reach their destination.
Packets can arrive out of order or may be dropped due to lack of resources.
A switch or link failure does not have an adverse effect.
Routing table
Each switch has a routing table that contains destination address and output port.
When a switch examines a packet, the destination address is looked-up in the routing
table to determine the corresponding port, onto which the packet is forwarded.

Routing table is dynamic and is updated periodically.

Routing table for Switch 2
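A minimal sketch of how a datagram switch might consult such a table; the destination names and port numbers below are invented for illustration:

#include <stdio.h>
#include <string.h>

/* One row of a (static, illustrative) routing table: destination -> output port. */
struct route {
    char dest[4];
    int  port;
};

static struct route table[] = {
    {"A", 3}, {"B", 0}, {"C", 3}, {"D", 1}   /* invented entries for a switch */
};

/* Return the output port for a destination, or -1 if there is no entry. */
int lookup(const char *dest)
{
    for (size_t i = 0; i < sizeof(table) / sizeof(table[0]); i++)
        if (strcmp(table[i].dest, dest) == 0)
            return table[i].port;
    return -1;
}

int main(void)
{
    printf("packet for C forwarded on port %d\n", lookup("C"));
    printf("packet for X forwarded on port %d (no entry)\n", lookup("X"));
    return 0;
}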


Virtual Circuit Switching
A virtual-circuit network (VCN) is a connection-oriented model. A virtual
connection from source to the destination is established before any data is sent.
Each switch contains VC table with each entry containing incoming port,
incoming VCI, outgoing port and outgoing VCI.
A Virtual Circuit Identifier (VCI) uniquely identifies a connection. It is a small
number with link-local scope. The incoming and outgoing VCIs are in general distinct.
The combination of a packet's VCI and the interface on which it was received
uniquely identifies the virtual connection.
The connection state can be set either by the network administrator (permanent virtual
circuit) or by the hosts through signalling (switched virtual circuit).
VCN is implemented in the data link layer.

Setup Request
Switch 1 receives a connection setup request frame from host A.
o It knows that frames for host B should be forwarded on port 3.
o The switch creates an entry in its VC table for the new connection with
incoming port = 1 and outgoing port = 3.
o It chooses an unused VCI for frames to host B, say 14, as the incoming VCI.
o The outgoing VCI is unknown (left blank) and the frame is forwarded to switch 2.
Similarly, entries are made at other switches as the frame is forwarded to the destination.

Destination B accepts the setup request frame if it is ready to receive frames
from host A. It assigns an unused VCI, say 77, for frames that come from host A.
Acknowledgment
Host B sends an acknowledgment to switch 3.
o The ACK frame carries the source & destination addresses and the VCI chosen by host B.
o Switch 3 uses this VCI, i.e., 77, as the outgoing VCI and completes its VC table entry.
Finally switch 1 sends an acknowledgment to source host A containing the VCI 14.

Source host A uses 14 as its outgoing VCI for data frames to be sent to destination B.
Source Routing
All information about network topology that is required to route a packet across
the network to the destination is provided by the source host.
The header contains an ordered list of the intermediate switches through which the packet must traverse.
For each packet, the header carries a pointer to the current next port entry, with each

switch just updates the pointer.


Source routing can be used in both datagram and virtual circuit networks.

Define Permanent Virtual Circuit


Virtual circuit table at each switch is configured by the network administrator.
If a packet arrives on incoming interface with designated VCI value in its header, then
the packet is sent on outgoing interface with outgoing VCI value as specified in VC table.

PVC is used for small-sized network only.


Define bridge.
Bridge is a two-layered switch used to forward frames between shared-media
LANs such as Ethernet.

A bridge is a multi-input, multi-output node between two LANs that runs in promiscuous
mode, accepts frames transmitted from either side and forwards them to the other.
Bridge implements collision detection mechanism on all its interfaces.
LANs connected by one or more bridges is called extended LAN.

What is static bridge?


Bridge is configured with a forwarding table during setup by the administrator
manually. When a frame arrives, the bridge performs a look-up on the table.
Outgoing port for the destination is obtained and the frame is sent on that
port. Table must be updated manually when stations are added or removed.
How does a learning (transparent) bridge build its forwarding table dynamically? Give an example.

A learning bridge builds its forwarding table gradually by learning from frame
movements. The forwarding table is empty when the bridge boots up.
The bridge uses the source address to add entries and the destination address to
forward frames. The source address and incoming port are appended to the table
if an entry does not exist. The table is then looked up for the destination address:
o If source and destination are from the same LAN, then the frame is dropped,
since the destination host would have already received the frame.
o If an entry exists, then the frame is forwarded on the corresponding port.
o Otherwise, the frame is flooded on all other ports.
The learning process continues as the bridge forwards frames and optimizes forwarding
decisions. The table is discarded periodically and rebuilt.
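A minimal sketch of this learning and forwarding logic, with illustrative station names and a hypothetical 4-port bridge:

# Sketch of a learning bridge: the table maps source address -> port it was seen on.
forwarding_table = {}
NUM_PORTS = 4

def handle_frame(src, dst, in_port):
    # Learn: remember which port the source address lives on.
    forwarding_table[src] = in_port

    out_port = forwarding_table.get(dst)
    if out_port is None:
        # Unknown destination: flood on all ports except the one it arrived on.
        return [p for p in range(1, NUM_PORTS + 1) if p != in_port]
    if out_port == in_port:
        # Source and destination are on the same LAN segment: drop the frame.
        return []
    return [out_port]

print(handle_frame("A", "D", 1))   # flood: [2, 3, 4]
print(handle_frame("E", "A", 2))   # forward only to port 1: [1]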
Example

Bridged network
Forwarding Table
When station A sends a frame to station D:
o The bridge has no entry for either station D or A.
o From the source address, the bridge learns that station A is located on the LAN
connected to port 1, i.e., frames destined for A must be sent out through port 1.
o The bridge appends the entry to the table and floods the frame on all other ports.
When station E sends a frame to station A:
o The bridge has an entry for station A, so it forwards the frame only to port 1.
o It adds the source address of the frame, i.e., E, to the table.
When station B sends a frame to station C:
o The bridge has no entry for station C.
o It floods the network and adds one more entry to the table.

When does learning bridge fail?


Learning bridge works fine as long as there is no loop.
Loops are formed when redundant bridges are introduced to improve reliability.
When a loop exists, multiple copies of the frame exist as they are flooded by bridges.
IEEE 802.1 mandates bridges to use spanning tree algorithm to create loop-less topology.

Explain the working of spanning tree algorithm with an example.


Extended LAN is represented as a graph, which may contain loops.
Spanning tree algorithm creates a sub-graph that has no loops, i.e., each LAN
can be reached from any other LAN through one path only.
Each bridge decides the ports on which it is willing to forward frames.
o Some ports are removed, reducing the extended LAN to an acyclic graph.
Spanning tree algorithm is dynamic, i.e., bridges reconfigure the spanning tree
due to some failure or additions or deletions.
Algorithm
Each bridge has a unique identifier.
Bridges exchange configuration messages (Y, d, X) with each other, known as
bridge protocol data units (BPDU), to decide on the root/designated bridge, where:
o Y is the id of the root bridge according to the sending bridge.
o d is the distance in hops from the sending bridge to the root bridge.
o X is the id of the bridge that is sending the message.
The system stabilizes in a while, with the selection of the root bridge and designated bridges.
o Thereafter, the root bridge alone generates configuration messages.
o The designated bridges forward those messages.
Root Bridge
Initially each bridge considers itself to be the root and broadcasts configuration
message with distance 0.
When a bridge receives a BPDU, it compares it with its own. It discards its own and
saves the received BPDU if the received BPDU has (a sketch of this comparison follows):
o a root with a smaller id, or
o a root with an equal id but a shorter distance, or
o an equal root id and distance, but the sending bridge has a smaller id.
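A minimal sketch of the comparison rule above; bridge ids and distances are illustrative integers:

# Sketch of the BPDU comparison rule: a message (root_id, distance, sender_id)
# is "better" than the currently saved one if it names a smaller root, or the same
# root at a shorter distance, or the same root and distance from a smaller sender.
def better(received, current):
    r_root, r_dist, r_sender = received
    c_root, c_dist, c_sender = current
    # Python tuple comparison is lexicographic, which matches the rule order exactly.
    return (r_root, r_dist, r_sender) < (c_root, c_dist, c_sender)

print(better((1, 1, 2), (3, 0, 3)))   # True: root B1 beats root B3
print(better((1, 2, 5), (1, 1, 2)))   # False: same root, longer distance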
Once a bridge receives a configuration message indicating that it is not the root, it:
o Stops generating its own messages.
o Forwards messages from other bridges after incrementing the distance-to-root field.
Eventually, the bridge with the smallest id is selected as the root bridge.
The root bridge always floods frames on all ports.
Absence of periodical configuration message from root, forces bridges to repeat
the above process to elect a new root bridge and designated bridges.
Designated Bridge
All the bridges connected to a LAN elect a designated bridge.
Each bridge computes its shortest path to the root and notes the port on that path.
Each LAN's designated bridge is the one that is closest to the root.
If two or more bridges are equally close to the root, then the bridge with the smallest id is chosen.
The designated bridge is responsible for forwarding frames to the root bridge.

When a bridge receives a configuration message that indicates it is not the designated
bridge for that port, it stops sending messages over that port.
Example

Extended LAN with loop

Loop-less topology at B3

B3 receives (B2, 0, B2). B3 accepts B2 as root, since B2 has the lower id.
B3 increments the distance advertised by B2 and sends (B2, 1, B3) towards B5.
B2 accepts B1 as root because it has the lower id, and sends (B1, 1, B2) to B3.
B5 accepts B1 as root and sends (B1, 1, B5) to B3.
B3 accepts B1 as root, and knows that both B2 and B5 are closer to the root than itself.
B3 stops forwarding messages on both its interfaces.
B2 and B5 are chosen as the designated bridges for LAN A and C respectively.
List the advantages of bridge.
It increases the total bandwidth of the network. For example, while a single Ethernet
segment can carry only 10 Mbps of total traffic, an Ethernet bridge can carry as
much as 10n Mbps, where n is the number of ports on the bridge.
Another advantage of bridge is separation of the collision domain as few stations
contend for access to the medium. Thus the probability of collision is reduced.
Networks can be connected without the end hosts having to run any additional protocols.

List the limitations of a bridge.


Bridges lack on issues of scalability and heterogeneity.
No provision to impose hierarchy on the extended LAN.

Bridges forward all broadcast frames, which does not scale well in a large extended LAN.
Bridges support networks that have the same address format. For example,
ethernet and token ring but not ethernet and ATM.
Write short notes on VLAN.
Virtual LAN (VLAN) increases the scalability of extended LAN.
VLAN partitions a single extended LAN into several separate LANs.
VLAN is defined as a local area network configured by software, not by physical wiring.
VLANs group stations belonging to one or more physical LANs into broadcast domains.
Stations in a VLAN communicate with one another as though they belonged to the same
physical segment. Each VLAN is a workgroup in the organization.
In a VLAN, it is possible to change the logical topology without moving any wires or
changing any addresses. Changes are made in the bridge configuration.

Each VLAN is assigned an identifier and packets can only travel from one
segment to another if both segments have the same identifier.
Example

Hosts W and X are configured as VLAN 100, hosts Y and Z as VLAN 200.
When a packet sent by host X arrives at bridge B2, the bridge inserts a VLAN
header between Ethernet header and its payload with VLAN ID as 100.
The bridge forwards the packet only on interfaces that are part of VLAN 100.
The packet is forwarded to bridge B1, which forwards it to host W but not to Y.
List the advantages using VLAN.
Cost and Time Reduction - VLANs reduce the migration cost of stations moving
from one group to another. Physical reconfiguration takes time and is costly.
Creating Virtual Work Groups - VLANs can be used to create virtual work groups.
This can reduce traffic if the multicasting capability of IP was previously used.
Security - in VLANs, people belonging to the same group can send broadcast messages with
the guaranteed assurance that users in other groups will not receive these messages.

Why repeaters are not used instead of bridges?


Only a limited number of repeaters (at most four, per the classic Ethernet rules) are
allowed between any pair of hosts.
A repeater does not have filtering ability.
Differentiate LAN and extended LAN (bridge).
If a bridge becomes congested, it drops frames, whereas Ethernet does not drop frames
The latency between any pair of hosts on an extended LAN becomes both larger
and more highly variable than in Ethernet
Frame order is not shuffled in ethernet, whereas reordering is possible in extended LAN.
Define internetwork.
Internetwork is interconnection of different physical networks to provide host-to-host
packet delivery service. It is a logical network built on a collection of physical networks.

Node that interconnects different networks is known as router.


What is the drawback of class-based addressing in IPv4?
In classful addressing, a large part of the available addresses were wasted,
since Class A and B blocks were too large for most organizations.
Class C is suited only for small organizations, and therefore Class B was sought after.
Reserved addresses were sparingly used.
What is the use of option field in IPv4.
If HLen > 5 then options are specified (up to 40 bytes). Some options are:

o Record Route - used to record the routers that handle the datagram.
o Strict Source Route - used by the source to predetermine a route for the datagram.
Discuss Internetworking Protocol in detail.
Internet Protocol (IP) is used to build scalable, heterogeneous internetworks.
Ability of IP to run over any networking technology is its strength.

The IP Service model has two parts


o Datagram (connectionless) model of data delivery
o Addressing scheme to identify all hosts uniquely in the internetwork.

Internetwork with different networks connected by routers


Datagram Delivery
Best-effort, connectionless service is adopted by IP to deliver datagrams.
Packets may be lost or corrupted.
Packets can also be delivered out of order.
IP provides neither error control nor flow control. It is an unreliable service.
Packet Format
An IPv4 datagram is a variable-length packet consisting of two parts, header and data.
The header is 20-60 bytes and contains information essential to routing and delivery.
Minimum packet length is 20 bytes and maximum 65,535 bytes.

Version specifies version of the IPv4 protocol, i.e. 4.

HLen defines length of the datagram header in 4-byte words. When there are no
options, the value is 5 (5 × 4 = 20 bytes).
TOS allows packets to be placed on separate queues based on parameters
delay, throughput, reliability and cost.
Length specifies total packet length (header + data), which is restricted to 65,535 bytes.
Ident is a 16-bit identifier that uniquely identifies a datagram packet.
Flags It is a 3-bit field. The first bit is reserved. The second bit D is called the do
not fragment bit. The third bit M is called the more fragment bit.
Offset shows relative position of this fragment with respect to the whole datagram. It
is offset of the data in the original datagram measured in units of 8 bytes.
TTL defines lifetime of the datagram (default value 64) in hops. Each router
decrements TTL by 1 before forwarding. If TTL is zero, the datagram is
discarded.
Protocol specifies the higher-level protocol (e.g., 6 for TCP, 17 for UDP, 1 for ICMP).
Checksum contains the 16-bit internet checksum for the packet header.
SourceAddr/ DestinationAddr 32-bit address of the source and destination host.
Fragmentation and Reassembly
Each physical network has Maximum Transmission Unit, i.e., largest IP datagram
that can be contained in a frame. For example, MTU for Ethernet is 1500, etc.
If the datagram payload is greater than MTU, then it is fragmented by the router to fit the
link-layer frame. The fragmented packets are each of size MTU, except the last one.
If D flag bit is set, then datagram is not fragmented and if exceeding MTU, it is discarded.
When the router fragments a datagram (each fragment of MTU size, except the last one), the fields affected are:
o Copies the Ident field contents to all fragments.
o Sets the M bit in the Flags field for all fragments, except the last one.
o The Offset field contains the 8-byte count (the first fragment is set to 0).
o Sets Length equal to the number of bytes in each fragment.
Reassembly is done by the destination host using the Ident field.
IP does not attempt to recover from missing fragments; it discards all fragments that arrived.

Example

Fragmentation at R2
Suppose host H5 sends a datagram to host H8 with a payload of 1400 bytes.

Datagram goes through Ethernet and FDDI network without any fragmentation.
When the packet arrives at router R2, which has an MTU of 532 bytes, it is
fragmented with a maximum payload of 512 bytes (plus 20 bytes for the IP header).
o The Ident field value x is copied onto all fragments.
o The first fragment has its Offset field set to 0 and M bit set to 1.
o The second fragment has its Offset field set to 64 (64 × 8 = 512) and M bit set to 1.
o The third fragment has its Offset set to 128 (128 × 8 = 1024) and M bit set to 0.

Three fragments are forwarded by router R3 through Ethernet to the destination host.
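A minimal sketch that reproduces the offsets of this example (1400-byte payload, 512 data bytes per fragment); field handling is simplified to just the Offset and M bit:

# Sketch of how the router splits the example 1400-byte payload into fragments
# that fit a 532-byte MTU (512 bytes of data + 20-byte IP header per fragment).
def fragment(payload_len, frag_data_size=512):
    fragments = []
    offset_bytes = 0
    while offset_bytes < payload_len:
        size = min(frag_data_size, payload_len - offset_bytes)
        more = (offset_bytes + size) < payload_len       # M bit
        fragments.append({"offset": offset_bytes // 8,   # Offset is in 8-byte units
                          "bytes": size,
                          "M": int(more)})
        offset_bytes += size
    return fragments

for f in fragment(1400):
    print(f)
# {'offset': 0, 'bytes': 512, 'M': 1}
# {'offset': 64, 'bytes': 512, 'M': 1}
# {'offset': 128, 'bytes': 376, 'M': 0}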
Global Addressing
IP addresses are hierarchical, i.e., they correspond to the hierarchy in the
internetwork. An IP address consists of two parts, network id and host id.
Network id identifies physical network to which the host is attached. Hosts
attached to a network have the same network id in their IP address.
Host id is used to uniquely identify a host on that network.
Router that connects networks has a unique address on each of its interface.
IPv4 uses 32-bit addresses, i.e., approximately 4 billion addresses (2^32).

An IPv4 address in human-readable form is expressed as four octets (each in the range
0-255) in dotted decimal notation (e.g., 172.16.15.161).
IPv4 Classful Addressing
In classful addressing, the address space is divided into five classes: A, B, C, D and E. IP
address class is identified by MSBs in binary or first byte in decimal representation.

Class   Binary   Decimal    Addressing   No. of Hosts   Application
A       0        0-127      Unicast      2^24 - 2       WAN
B       10       128-191    Unicast      65,534         Campus Network
C       110      192-223    Unicast      254            LAN
D       1110     224-239    Multicast    -              -
E       1111     240-255    Reserved     -              -
Classes A, B and C are used for unicast addressing.


Class D was designed for multicasting and class E is reserved.
Classes A, B and C reserve certain bits for the network part and the rest for the host part,
i.e., the number of networks in a class and the number of hosts attached to each are fixed.

Class A

Class B

Class C
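A minimal sketch that determines the class of an address from its first octet, per the decimal ranges in the table above:

# Sketch: determine the IPv4 address class from the first octet.
def address_class(ip):
    first = int(ip.split(".")[0])
    if first <= 127:
        return "A"
    if first <= 191:
        return "B"
    if first <= 223:
        return "C"
    if first <= 239:
        return "D"
    return "E"

print(address_class("10.5.1.9"))       # A
print(address_class("172.16.15.161"))  # B
print(address_class("224.0.0.5"))      # D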

Datagram Forwarding
The destination address is used by routers to forward packets in a connectionless
manner. The forwarding table at a router is a list of (NetworkNum, NextHop) pairs.
Algorithm
if (NetworkNum of destination = NetworkNum of any of its interfaces) then
    deliver packet to destination over that interface
else if (NetworkNum of destination is in forwarding table) then
    deliver packet to NextHop router
else
    deliver packet to default router
Example
Suppose H5 wants to send a datagram to H8; forwarding proceeds as follows:
o H5 sends the datagram to its default router R1, since it cannot deliver directly.
o R1 sends the datagram over the token ring to its default router R2, since H8's
network id does not match any of its interfaces.
o R2 forwards the datagram to R3 based on its forwarding table.
o R3 forwards the datagram directly to H8, since both are on the same network.

R2 Forwarding Table
Detail the process of determining the physical address of a destination host (ARP).
To send an IP datagram, a host or router needs to know both the logical and
physical address of the destination.
Address Resolution Protocol (ARP) enables a source host to know the physical
address of another node when the logical address is known.
ARP relies on broadcast support from physical networks such as ethernet, token ring, etc.
ARP enables each host on a network to build up a mapping table between IP address and

physical address.
Packet Format
The ARP packet fields are: HardwareType, ProtocolType, HLen, PLen, Operation,
SourceHardwareAddr, SourceProtocolAddr, TargetHardwareAddr, TargetProtocolAddr.
HardwareType defines the type of the physical network (e.g., 1 for Ethernet).
ProtocolType specifies the upper-layer protocol (e.g., 0x0800 for IPv4).
HLen specifies the length of the physical address in bytes (e.g., 6 for an Ethernet address).
PLen specifies the length of the logical address in bytes (e.g., 4 for an IPv4 address).
Operation defines the type of ARP message (1 for request, 2 for reply).
SourceHardwareAddr contains the physical address of the sender.
SourceProtocolAddr contains the logical address of the sender.
TargetHardwareAddr contains the physical address of the target.
TargetProtocolAddr contains the logical address of the target.

Address Translation
The host checks its ARP table with destination IP address. If an entry exists, then
corresponding physical address is used to send a datagram.
Otherwise, source host finds physical address of the destination using ARP.

The source host creates an ARP request packet with:
o Operation field set to 1.
o Target hardware address field unknown and filled with 0s.
The ARP request is encapsulated in a link-layer frame whose destination is the
broadcast address, and is broadcast over the physical network.

Each host takes note of the sender's logical and physical address. All nodes except
the destination host discard the packet.
The destination host constructs an ARP reply packet with the Operation field set to 2.
The ARP reply is unicast back to the sender.
The sender stores the target's logical-physical address pair in its ARP table from the reply packet.
If the target node is not on the same network, the source instead resolves the physical
address of its default router and hands the datagram to it; each subsequent router repeats
the resolution for its own next hop until the destination network is reached.
Define RARP.
A diskless workstation booted from its ROM or newly booted workstation does
not know its IP address as it is assigned by the network administrator.
Reverse Address Resolution Protocol (RARP) allows a host to find its IP address
using a RARP request (broadcast) and a RARP reply.
RARP is replaced by protocols such as BOOTP and DHCP.
Discuss the automatic configuration of IP address to hosts using DHCP.
Operating systems allow system administrator to manually configure IP address,
which is tedious and error-prone.
Dynamic Host Configuration Protocol (DHCP) enables auto configuration of IP
address to hosts using DHCP server.
DHCP is derived from Bootstrap Protocol (BOOTP) and is connectionless.
DHCP messages are carried over UDP; the server uses port 67 and the client uses port 68.
DHCP provides both static (manual) and dynamic (automatic) address allocation.

It is difficult to identify a malfunctioning host.


Packet Format
Operation specifies the type of DHCP packet.
HType gives the type of the physical network (e.g., 1 for Ethernet).
HLen gives the length of the physical address in bytes (e.g., 6 for an Ethernet address).
Xid specifies the transaction id.
ciaddr specifies the client IP address, in the case of DHCPREQUEST.
yiaddr is known as "your IP address", filled in by the DHCP server.
siaddr contains the IP address of the DHCP server.
giaddr contains the IP address of the gateway or relay agent.
chaddr contains the hardware (physical) address of the client.
options contains information such as lease duration, default route, DNS server, etc.

Dynamic Address Allocation


Dynamic allocation is required when a host moves from one network to another
or when connected to a network.
The administrator provides DHCP server, a range of unassigned addresses to be
assigned to hosts on demand.
To contact the DHCP server, a newly booted/attached host creates a DHCP packet with its
physical address placed in the chaddr field.
The client broadcasts a DHCPDISCOVER message with destination IP address 255.255.255.255.
The DHCP server checks its static database first. If the lookup is successful, the
corresponding IP address is returned.
Otherwise, the server selects an unassigned IP address for yiaddr field and
adds an entry to dynamic database along with client's physical address.
DHCP server sends DHCPOFFER message containing client's IP and physical
address, server IP address and options.
The client sends a DHCPREQUEST message, requesting the offered address.
Based on transaction id, the DHCP server acknowledges with a DHCPACK
message containing the requested configuration.
When lease period expires, client attempts to renew. The server can accept or reject it.
DHCP relay
DHCP is an application layer protocol, i.e., server/client need not be on the same network
When the DHCP server is on a different network, a DHCP relay agent receives the broadcast
message from the client.
o The DHCP relay knows the IP address of the DHCP server.
o It stores the relay address in giaddr before sending the message to the DHCP server.

DHCP server response is sent to relay agent, which is sent back to the client.

Write short notes on error reporting using ICMP.


IP has no error-reporting mechanism and lacks mechanism for host/management queries.
Internet Control Message Protocol (ICMP) is used to report errors to source host or to

diagnose network problems.


ICMP message is encapsulated within an IP packet.
Debugging tools such as ping and traceroute use ICMP messages internally.
Error reporting
Destination Unreachable - When a router cannot route a datagram, it discards the
datagram and sends a destination-unreachable message to the source host.
Source Quench - When a router or host discards a datagram due to congestion, it sends a
source-quench message to the source host. This message acts as flow control.
Time Exceeded - A router discards a datagram when the TTL field becomes 0, and a
time-exceeded message is sent to the source host.
Parameter Problem - If a router discovers an ambiguous or missing value in any field of the
datagram, it discards the datagram and sends a parameter-problem message to the source.
Redirection - Redirect messages are sent by the default router to inform the source
host to update its forwarding table when the packet is routed on a wrong path.

Query Messages
Echo Request & Reply - The combination of echo-request and echo-reply
messages determines whether two systems can communicate at the IP level.
Timestamp Request & Reply - Two machines can use the timestamp-request and
timestamp-reply messages to determine the round-trip time (RTT).
Address Mask Request & Reply - To obtain its subnet mask, a host sends an address-mask
request message to the router, which responds with an address-mask reply message.
Router Solicitation & Advertisement - A host broadcasts a router-solicitation
message to learn about routers. A router then broadcasts its routing
information with a router-advertisement message.
What are subnets and why are they required? Explain routing using subnets.
Consider a large campus with a set of internal networks and each need to be
connected to the internet.
For networks with more than 254 hosts, a Class B address is required.
Class B addresses are sought after, anticipating that more than 254 hosts may be added in
future. Over four billion hosts cannot be connected using the available 2^14 Class B networks.

IPv4 address space is exhausted by assigning an IP address for each physical network.
At most 253 addresses can go unused in a class C network whereas over 64,000 addresses
can go unused in a class B network, i.e., inefficient usage of available address space.

An increase in the number of network numbers increases the size of forwarding tables
and degrades router performance.
Subnetting
Each physical network is referred to as a subnet. Subnets should be located close to each
other, since from outside they appear as a single network.
All nodes on a subnet are configured with a subnet mask. It introduces additional
level of hierarchy into IP address.
For example, subnet mask 255.255.255.0 implies 24-bit network part and last 8
bits for host part.
The subnet mask need not be aligned on a byte boundary, and in principle the 1s need not
even be contiguous (though in practice they are).
Bitwise AND of IP address and its subnet mask gives the subnet number.

Thus all nodes have the same subnet number, i.e., hosts on different physical
network share a single network number.
Subnetted IP address contains 3 parts namely network, subnet and host.

Subnetting reduces total number of network numbers assigned.


Subnets are hidden from external routers, so only one entry is needed in the forwarding
table of the router that connects the organization to the external world.
Connecting router routes packet to the correct subnet internally.
Example

H1 with IP address 128.96.34.15 is configured with a subnet mask of 255.255.255.128.
The subnet number for H1 is AND(128.96.34.15, 255.255.255.128) = 128.96.34.0.
All hosts on the same network as H1 have the same subnet number.

Routing
Entries in routing table are of the form (SubnetNumber, SubnetMask, NextHop)
When a host wants to send a packet to another host, it performs a bitwise AND
between its own subnet mask and the destination IP address.
If the result equals its own subnet number, then the packet is delivered directly over the subnet.
Otherwise, a lookup is done by ANDing the destination address with the SubnetMask of
each entry, and the packet is forwarded to the corresponding NextHop router.


Forwarding Algorithm
D = destination IP address
for each forwarding table entry (SubnetNumber, SubnetMask, NextHop)
    D1 = SubnetMask & D
    if D1 = SubnetNumber
        if NextHop is an interface
            deliver datagram directly to destination
        else
            deliver datagram to NextHop
For the above subnetted network, R1's forwarding table is given below.
SubnetNumber    SubnetMask        NextHop
128.96.34.0     255.255.255.128   Interface 0
128.96.34.128   255.255.255.128   Interface 1
Suppose H1 sends a packet to H2 (128.96.34.139); R1 ANDs the destination address with the
subnet mask of each entry.
The second entry matches, so R1 delivers the packet to H2 over Interface 1.
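A minimal sketch of this lookup using Python's ipaddress module and the R1 table above:

import ipaddress

# R1's forwarding table from the example: (SubnetNumber, SubnetMask, NextHop).
table = [
    ("128.96.34.0",   "255.255.255.128", "Interface 0"),
    ("128.96.34.128", "255.255.255.128", "Interface 1"),
]

def lookup(dest):
    d = int(ipaddress.IPv4Address(dest))
    for subnet, mask, nexthop in table:
        # Bitwise AND of the destination with the entry's mask, compared to SubnetNumber.
        if d & int(ipaddress.IPv4Address(mask)) == int(ipaddress.IPv4Address(subnet)):
            return nexthop
    return "default router"

print(lookup("128.96.34.139"))   # Interface 1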
Write short notes on CIDR or Supernetting.
Subnetting helps in address assignment, but exhaustion of address space
centers on exhaustion of Class B address.
Address efficiency in Class C can be as low as 0.78% (2/255) and in Class B can
be as low as 0.39% (256/65,535).
If Class C addresses were given, then number of entries in routing tables gets larger.
Classless Interdomain Routing (CIDR) tries to balance between minimize the
number of routing table entries and handling addresses space efficiently.
CIDR aggregates routes, by which an entry in the forwarding table is used to
reach multiple networks.
CIDR aims to collapse the multiple addresses that would be assigned to a single
AS onto one address, i.e., supernetting.
Example
Consider an autonomous system (AS) with 16 Class C networks.
Instead of providing 16 Class C addresses at random, a block of contiguous Class C
addresses is given, for example, from 192.4.16 to 192.4.31.
Bitwise analysis shows that the 20 MSBs (11000000 00000100 0001) are the same
for that block, i.e., a 20-bit network id.
A 20-bit network number supports a number of hosts between that of a Class C and a Class B network.

Thus higher address efficiency is achieved by providing chunks of address space
smaller than a Class B network, and a single network prefix is used in the forwarding table.
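A minimal sketch of this aggregation using Python's ipaddress module, showing the 16 contiguous /24 networks of the example collapsing into a single /20 prefix:

import ipaddress

# The 16 contiguous Class C (/24) networks 192.4.16.0 through 192.4.31.0 ...
nets = [ipaddress.ip_network(f"192.4.{i}.0/24") for i in range(16, 32)]

# ... collapse into a single 20-bit prefix, so one forwarding table entry suffices.
print(list(ipaddress.collapse_addresses(nets)))   # [IPv4Network('192.4.16.0/20')]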

Restrictions
Addresses in a block must be contiguous.
Number of addresses in a block must be a power of 2.
First address must be evenly divisible by the number of addresses.
CIDR uses a new type of notation to represent network numbers or prefixes as
/x. Prefix can be of any length. For example, 192.4.16/24
Protocol such as BGP is required to support classless addressing.
Example

Consider an ISP that provides internet connectivity to customers, who are assigned
adjacent 24-bit network prefixes.
The customers' prefixes all start with the same 21 bits, since they are reachable
through the same provider network.
Thus a single route is advertised with the common 21-bit prefix that all customers share.
List the disadvantage of virtual circuit network.
One RTT delay before data is sent due to setup request and acknowledgement.
Overhead for a data packet is less, since VCI is a small number.

If a switch or link fails, the connection is teardown and a new one is setup.

UNIT III
Distinguish between forwarding and routing table.
A forwarding table contains mapping between network number and outgoing
interface as well as physical address of the next hop.
A routing table contains mapping between network number and logical address
of next hop. It is built by routing algorithm.
Define autonomous system or domain.
A domain or autonomous system is an internetwork in which all routers are under
a single administrative control (eg. University campus, Service provider network).
Routing within a domain is known as intra-domain routing whereas routing
between domains is known as inter-domain routing.
Distance vector routing (RIP) and Link state routing (OSPF) are intra-domain
routing protocols, whereas Path vector (BGP) is inter domain routing.
Explain distance vector routing (or) Routing Information Protocol with an example.
Distance vector routing is distributed, i.e., algorithm is run on all nodes.
Each node knows the distance (cost) to each of its directly connected neighbors.
Infinite cost is assigned if link is down.
Each node constructs a vector (Destination, Cost, NextHop) to reach all other
nodes and distributes the vector to its neighbors.
Nodes compute routing table of minimum distance to every other node via
NextHop using information obtained from its neighbors.
Initial State

In given network, cost of each link is 1 hop.


Each node sets a distance of 1 (hop) to its immediate neighbors and a cost of 0 to itself.
The distance to non-neighbors is marked as unreachable with value ∞ (infinity).
For node A, nodes B, C, E and F are reachable, whereas nodes D and G are unreachable.

Destination  Cost  NextHop
A            0     A
B            1     B
C            1     C
D            ∞     -
E            1     E
F            1     F
G            ∞     -
Node A's initial table

Destination  Cost  NextHop
A            1     A
B            1     B
C            0     C
D            1     D
E            ∞     -
F            ∞     -
G            ∞     -
Node C's initial table

Destination  Cost  NextHop
A            1     A
B            ∞     -
C            ∞     -
D            ∞     -
E            ∞     -
F            0     F
G            1     G
Node F's initial table

Sharing & Updation


Each node sends its initial table (distance vector) to neighbors and receives their estimate.
Node A sends its table to nodes B, C, E & F and receives tables from nodes B, C, E & F.
Each node updates its routing table by comparing with each of its neighbor's table

For each destination, Total Cost is computed as:
Total Cost = Cost(Node to Neighbor) + Cost(Neighbor to Destination)
o If Total Cost < Cost then
      Cost = Total Cost
      NextHop = Neighbor
Node A learns from C's table how to reach node D, and from F's table how to reach node G.
o Total Cost to reach node D via C = Cost(A to C) + Cost(C to D) = 1 + 1 = 2.
  Since 2 < ∞, the entry for destination D in A's table is changed to (D, 2, C).
o Total Cost to reach node G via F = Cost(A to F) + Cost(F to G) = 1 + 1 = 2.
  Since 2 < ∞, the entry for destination G in A's table is changed to (G, 2, F).
Each node builds complete routing table after few exchanges with its neighbors.

Destination  Cost  NextHop
A            0     A
B            1     B
C            1     C
D            2     C
E            1     E
F            1     F
G            2     F
Node A's final routing table
System stabilizes when all nodes have complete routing information, i.e., convergence.
Routing tables are exchanged periodically (every 30 sec.) and in case of triggered update.
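A minimal sketch of the update (merge) step described above, applied to node A's initial table and the vector received from C; ∞ is modelled as float('inf'):

INF = float("inf")

# Node A's view: destination -> (cost, next hop). Unreachable nodes have cost INF.
table_A = {"A": (0, "A"), "B": (1, "B"), "C": (1, "C"), "D": (INF, None),
           "E": (1, "E"), "F": (1, "F"), "G": (INF, None)}

# Distance vector received from neighbour C (destination -> cost as seen by C).
vector_C = {"A": 1, "B": 1, "C": 0, "D": 1, "E": INF, "F": INF, "G": INF}

def merge(table, neighbour, neighbour_vector, link_cost):
    # For each destination, adopt the neighbour's route if going via it is cheaper.
    for dest, cost_via_neighbour in neighbour_vector.items():
        total = link_cost + cost_via_neighbour
        if total < table[dest][0]:
            table[dest] = (total, neighbour)

merge(table_A, "C", vector_C, link_cost=1)
print(table_A["D"])   # (2, 'C')  -- A now reaches D via C in 2 hops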

Triggered Update
Link failure is assumed, if a node does not receive periodic updates from its neighbor.
When a node's routing table changes, it updates its neighbors, neighbors update
their neighbors and so on. This is known as triggered update.
Assume that node F detects that its link to G has failed.
o Node F sets its distance to G as ∞ and shares its table with A.
o Node A updates its distance to G as ∞.
o Meanwhile, node A receives a periodic update from C with distance to G as 2 hops.
o Node A updates its distance to G as 3 hops via C.
o Eventually node F is updated to reach G via A in 4 hops.
Loop Instability
Suppose the link from node A to E goes down.
Node A advertises a distance of ∞ to E; meanwhile B and C advertise a distance of 2 to E.
o Node B, updated by C, concludes that E can be reached in 3 hops via C.
o Node B advertises to A a distance of 3 hops to reach E.
o Node A in turn updates C with a distance of 4 hops to E, and so on.
Thus nodes update each other until the cost to E reaches infinity.
Convergence does not occur. This problem is called loop instability or count to infinity.
It is avoided by redefining infinity to a small number or by using poison reverse.

Routing Information Protocol (RIP)


RIP is an intra-domain routing protocol based on distance-vector algorithm.
Routers advertise the cost of reaching networks, instead of the cost of reaching other routers.
The cost of each link is 1; the metric in RIP is hop count.
Routers update cost and next-hop information for each network number.
Infinity is defined as 16, i.e., any route within an autonomous system cannot
have more than 15 hops. RIP is thus limited to small-sized networks.
Routers send their advertisements every 30 seconds or in case of a triggered update.
The RIPv2 packet format contains (network address, distance) pairs.

List the solutions for count-to-infinity or loop instability problem.


Infinity is redefined to a small number. Most implementations define 16 as
infinity. Thus distance vector routing cannot be used in large networks.
When a node updates its neighbors, it does not send those routes it learned from
each neighbor back to that neighbor. This is known as split horizon.
Split horizon with poison reverse allows nodes to advertise back to the sender
but with a warning message.
Explain link state routing (or) OSPF with an example.
Each node knows the state of link to its neighbors and the cost
involved. Link-state routing protocols rely on two mechanisms:
o Reliable dissemination of link-state information to all nodes
o Route calculation from the accumulated link-state knowledge
Each node creates an update packet called a link-state packet (LSP) that includes:
o ID of the node
o List of neighbors for that node and the associated cost of the link to each
o Sequence number
o Time to live

Sequence number and Time to live fields are used in flooding whereas the other
two fields are used for route calculation.
Reliable Flooding
Each node sends its LSP out on each of its directly connected links.
Transmission of LSPs between adjacent routers is made reliable using acknowledgment.
When a node receives LSP of another node, checks if it has one for that node.
o If not, it stores and forwards the LSP on all other links except the incoming one.

o Otherwise, if the received LSP has a bigger sequence number, then it is


stored and forwarded. The older one is discarded.

Thus recent LSP of a node eventually reaches all nodes, i.e., reliable flooding.
LSP is generated either periodically or when there is a change in the topology.

Link-state routing stabilizes quickly without generating much traffic and is dynamic.
The amount of information stored at each node (an LSP for every other node) is large.

Flooding of LSP in a small network


Reducing Overhead
Flooding creates traffic and overhead for the network. Mechanisms to reduce this are:
o Timer - long timers, in terms of hours, are used for periodic LSP generation.
o Sequence number - a 64-bit sequence number does not wrap around soon and is used to discard
old LSPs, since a node increments its sequence number for each new LSP it creates.
o Time to live - when the TTL of a stored LSP reaches 0, the node re-floods that LSP, which
signals other nodes to delete their stored LSP for that ID.
Route Calculation
Each node knows the entire topology, once it has LSP from every other node.
Routing table is determined from the LSPs using a variation of Dijkstra algorithm
called forward search algorithm
Each node maintains two lists namely Tentative and Confirmed with entries of
the form (Destination, Cost, NextHop).
Forward Search algorithm
1. Initialize the Confirmed list with an entry for the Node itself with Cost = 0.
2. The node just added to the Confirmed list is called Next; select its LSP.
3. For each neighbor of Next, calculate the cost to reach that neighbor as
   Cost(Node to Next) + Cost(Next to Neighbor).
   o If Neighbor is currently on neither the Confirmed nor the Tentative list, then
     add (Neighbor, Cost, NextHop) to the Tentative list.
   o If Neighbor is currently on the Tentative list, and the Cost is less than the currently
     listed cost for Neighbor, then replace the entry with (Neighbor, Cost, NextHop).
4. If the Tentative list is empty then stop; otherwise select the least-cost entry from the
   Tentative list, move it to the Confirmed list, and go to Step 2.
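A minimal sketch of the forward search algorithm, using the link costs implied by the worked example that follows (D-C: 2, D-B: 11, C-B: 3, C-A: 10, B-A: 5):

# Each node's LSP: neighbour -> link cost (values inferred from the example below).
lsp = {
    "D": {"B": 11, "C": 2},
    "C": {"D": 2, "B": 3, "A": 10},
    "B": {"D": 11, "C": 3, "A": 5},
    "A": {"C": 10, "B": 5},
}

def forward_search(start):
    confirmed = {start: (0, start)}           # destination -> (cost, next hop)
    tentative = {}
    nxt = start
    while True:
        cost_to_next = confirmed[nxt][0]
        for neighbour, link_cost in lsp[nxt].items():
            if neighbour in confirmed:
                continue
            new_cost = cost_to_next + link_cost
            # The first hop is the neighbour itself when leaving the start node.
            hop = neighbour if nxt == start else confirmed[nxt][1]
            if neighbour not in tentative or new_cost < tentative[neighbour][0]:
                tentative[neighbour] = (new_cost, hop)
        if not tentative:
            return confirmed
        best = min(tentative, key=lambda n: tentative[n][0])
        confirmed[best] = tentative.pop(best)
        nxt = best

print(forward_search("D"))
# {'D': (0, 'D'), 'C': (2, 'C'), 'B': (5, 'C'), 'A': (10, 'C')}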

For the given network, the process of building the routing table for node D is tabulated below.

Step  Confirmed                           Tentative
1     (D,0,-)                             -
      Comment: D is moved to the Confirmed list initially.
2     (D,0,-)                             (B,11,B) (C,2,C)
      Comment: Based on D's LSP, its immediate neighbors B and C are added to the Tentative list.
3     (D,0,-) (C,2,C)                     (B,11,B)
      Comment: The lowest-cost member C of the Tentative list is moved onto the Confirmed list. C's LSP is examined next.
4     (D,0,-) (C,2,C)                     (B,5,C) (A,12,C)
      Comment: The cost to reach B through C is 5, so the entry (B,11,B) is replaced. C's neighbor A is also added to the Tentative list.
5     (D,0,-) (C,2,C) (B,5,C)             (A,12,C)
      Comment: The lowest-cost member B is moved to the Confirmed list. B's LSP is examined next.
6     (D,0,-) (C,2,C) (B,5,C)             (A,10,C)
      Comment: Since A can be reached via B at a lower cost than the existing one, the Tentative entry (A,12,C) is replaced with (A,10,C).
7     (D,0,-) (C,2,C) (B,5,C) (A,10,C)    -
      Comment: The lowest-cost and only member A is moved to the Confirmed list. Processing is over.

Open Shortest Path First Protocol (OSPF)


OSPF is a non-proprietary widely used link-state routing
protocol. Features added to link state routing include:
o Authentication of routing messages - a malicious host can collapse a network by
advertising that it can reach every host with cost 0. Such disasters are averted by
mandating that routing updates be authenticated.
o Additional hierarchy - the domain is partitioned into areas, i.e., a router need not
know the complete network, only its own area.
o Load balancing - allows multiple routes to the same place to be assigned the
same cost, so that traffic is distributed evenly.
OSPF Header

o Version represents the current version, i.e., 2.
o Type represents the type (1-5) of the OSPF message.
  o Type 1, known as the hello message, is used to find out whether neighbors are alive.
  o The other types are used to request, send and acknowledge link-state messages.
o SourceAddr identifies the sender.
o AreaId is a 32-bit identifier of the area in which the node is located.
o Checksum is a 16-bit checksum.
o Authentication type has value 0 if no authentication is used, 1 for a simple
password and 2 for a cryptographic authentication checksum.
o Authentication contains the password or cryptographic checksum.
Link State Advertisement

LS Age is incremented at each node until it reaches a defined maximum.
Type defines the type of LSA. Type 1 LSAs advertise the cost of links between routers.
Link-state ID is a 32-bit identifier that identifies the router.
LS sequence number is used to detect old or duplicate packets.
LS checksum covers all fields except LS Age.
Length is the length of the LSA in bytes.
Link ID and Link Data identify a link.
Metric specifies the cost of the link.
Link Type specifies the type of link (for example, point-to-point).
TOS allows OSPF to choose different routes based on the value in the TOS field.
Discuss Interdomain routing (or) Border Gateway Protocol.
Internet is divided into autonomous systems (AS), since no routing protocol can
update the routing tables of all routers.
Interdomain routing involves sharing set of IP address reachable through an AS
with other autonomous systems.
Goal of interdomain routing should be reachability and not optimality.
Border Gateway Protocol has replaced EGP as the major interdomain routing protocol.
Challenges
Each autonomous system has an intradomain routing protocol, its own policy and
metric. For example, an AS may refuse to carry transit traffic.

An internet backbone must be able to route packets to the destination along a path that
complies with the policies of the autonomous systems along the way and is loop-free.
Service providers have a trust deficit and may not trust route advertisements made by other AS.
Internet Structure

The Internet consists of multiple backbone networks and sites connected to each other.
Providers connect at peering points.
Traffic on the internet is of two types:
o Traffic within an autonomous system is called local traffic.
o Traffic that passes through an autonomous system is called transit traffic.
Autonomous Systems (AS) are classified as:
o Stub AS - connected to only one other autonomous system and carries local
traffic only (e.g., a small corporation).
o Multihomed AS - has connections to multiple autonomous systems but
refuses to carry transit traffic (e.g., a large corporation).
o Transit AS - has connections to multiple autonomous systems and is designed
to carry both transit and local traffic (e.g., a backbone service provider).
Border Gateway Protocol (BGP-4)
BGP views internet as a set of autonomous systems interconnected arbitrarily.

Each AS have a border router (gateway), by which packets enter and leave that
AS. In above figure, R3 and R4 are border routers.
One of the nodes in each autonomous system is designated as BGP speaker.
BGP Speaker exchange reachability information with other BGP speakers,
known as external BGP session.
BGP advertises complete path as enumerated list of AS (path vector) to reach a
particular network. Paths must be without any loop, i.e., AS list is unique.
o For example, backbone network advertises that networks 128.96, 192.4.153,
192.4.32, and 192.4.3 can be reached along the path <AS1, AS2, AS4>.

o AS3, on receiving the advertisement from AS1, advertises the path <AS3, AS1,
AS2, AS4> to AS2. Since AS2 is already part of that path (a loop), AS2 does not use it.

If there are multiple routes to a destination, BGP speaker chooses one based on
policy. Speakers need not advertise any route to a destination, even if one exists.
Advertised paths can be cancelled, if a link/node on the path goes down. This
negative advertisement is known as withdrawn route.
Attributes in a path can be well known or optional.
Designed for classless addressing with prefix of any
length. TCP is used by BGP to ensure reliability.
Routes are not repeatedly sent. If there is no change, keep alive messages are sent.

BGP-4 update packet format


Policies
Provider-Customer - the provider advertises the routes it knows to the customer and
advertises the routes learnt from the customer to everyone.
Customer-Provider - the customer advertises its own prefixes and the routes learned from its
customers to the provider, advertises routes learned from the provider to its customers,
but does not advertise routes learned from one provider to another provider.
Peer - two providers give access to each other's customers without having to pay.
Integrating inter and intra domain
Any network that has not been explicitly advertised in the intradomain protocol is
reachable through the border router.
Variant of BGP known as interior BGP (iBGP) is used by routers to update routing
information learnt from other speakers to routers inside the autonomous system.
iBGP enables router in an AS to learn the best border router to use.
Each router in an AS knows how to reach each border router using the intradomain protocol.
Combining this information, each router in the AS is able to determine the appropriate
next hop for all prefixes.

Discuss the notation, representation and address space of IPv6.


CIDR and subnetting could not solve the address space exhaustion faced by IPv4.
IPv6 evolved to solve the address space problem and offers a set of services that include:
o Support for real-time services
o Security support
o Auto configuration
o Support for mobile hosts
Address Space Allocation
IPv6 provides a 128-bit address space to handle up to 3.4 × 10^38 nodes.


IPv6 uses classless addressing, but classification is based on MSBs.

Prefix               Usage
00...0 (128 bits)    Unspecified
00...1 (128 bits)    Loopback
1111 1111            Multicast addresses
1111 1110 10         Link-local use addresses
1111 1110 11         Site-local use addresses
Everything else      Global unicast

Global unicast addresses serve the role of IPv4's classes A, B and C; the currently
assigned unicast IPv6 addresses have the 001 prefix.
The multicast address range serves the purpose of the class D address.
Large chunks (87%) of address space are left unassigned for future use.
IPv6 defines local addresses for private networks. They are classified into:
o Link local - enables a host to construct an address that need not be globally unique.
o Site local - allows a valid local address for use in an isolated site with several subnets.
Reserved addresses start with a prefix of eight 0s. They are classified into:
o unspecified address - used when a host does not know its own address.
o loopback address - used for testing purposes before the host is connected to the network.
o compatible address - used when IPv6 hosts communicate through an IPv4 network.
o mapped address - used when an IPv6 host communicates with an IPv4 host.

Address Notation
The standard representation of IPv6 is x:x:x:x:x:x:x:x, where each x is a 16-bit value in
hexadecimal, separated by colons (:). For example,
47CD:1234:4422:AC02:0022:1234:A456:0124
An IPv6 address with contiguous 0 bytes can be written compactly. For example,
47CD:0000:0000:0000:0000:0000:A456:0124 is written as 47CD::A456:0124
An IPv4 address can be mapped to an IPv6 address by prefixing the 32-bit IPv4 address
with 2 bytes of 1s and then zero-extending the result to 128 bits. For example,
128.96.33.81 maps to ::FFFF:128.96.33.81
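A minimal sketch of these notations using Python's ipaddress module (note that it also drops leading zeros within a group):

import ipaddress

# Zero compression: a run of all-zero 16-bit groups collapses to "::".
addr = ipaddress.IPv6Address("47CD:0000:0000:0000:0000:0000:A456:0124")
print(addr)                  # 47cd::a456:124

# IPv4-mapped IPv6 address (::FFFF:<IPv4 address>); recover the embedded IPv4 part.
mapped = ipaddress.IPv6Address("::FFFF:128.96.33.81")
print(mapped.ipv4_mapped)    # 128.96.33.81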
Address Aggregation
Goal of IPv6 address allocation plan is to provide aggregation of routing
information to reduce the burden on routers.
Aggregation is done by assigning prefixes at continental level.

For example, if all addresses in Europe have a common prefix, then routers in
other continents would need one routing table entry for all networks in Europe.
Format for provider-based unicast address aggregation is:

o RegistryID contains the identifier assigned to the continent. It is either
INTERNIC (North America), RIPNIC (Europe) or APNIC (Asia and Pacific).
o ProviderID identifies the provider for Internet access, such as an ISP.
o SubscriberID specifies the assigned subscriber identifier.
o SubnetID defines a specific subnet under the territory of the subscriber.
o InterfaceID contains the link-level or physical address.
Packet Format
IPv6 base header is 40 bytes long.

Version - specifies the IP version, i.e., 6.
TrafficClass - defines the priority of the packet with respect to traffic congestion. It is
either congestion-controlled or non-congestion-controlled.
FlowLabel - designed to provide special handling for a particular flow of data.
The router handles a flow with the help of a flow table.
PayloadLen - gives the length of the packet, excluding the IPv6 header.
NextHeader - options (if any) are specified as a header following the IP header,
with a pointer contained in the NextHeader field.
HopLimit - serves the same purpose as the TTL field in IPv4.
SourceAddress and DestinationAddress - contain the 16-byte addresses of the
source and destination host respectively.
Extension Headers
Extension header provides greater functionality to IP.
Base header can be followed by up to six extension headers in the specified order.
Each extension header contains a NextHeader field to identify the header
following it. Last extension header is followed by transport layer header.
1. Hop-by-Hop - the source host passes information to all routers visited by the packet.
2. Source Routing - routing information (strict/loose) provided by the source host.
3. Fragmentation - in IPv6, only the source host can fragment. The source uses a
path MTU discovery technique to find the smallest MTU on the path.
4. Authentication - used to validate the sender and ensure data integrity.
5. Encrypted Security Payload - provides confidentiality against eavesdropping.
6. Destination - information from the source host that is passed only to the destination.

Advanced Capabilities
Longer address format helps in providing auto or stateless configuration of IP
address to hosts without the need for a server.
Anycast addressing in IPv6 is used to specify topological entity such as backbone
provider. Packet with anycast address is delivered to only one member of anycast group.

Enhanced routing support for mobile hosts is provided by using anycast


addressing in the routing header.
Advantages
Address space - IPv6 has a 128-bit address space. The address space is so huge that
1500 addresses can be allocated for each square foot of the earth's surface.
Header format - in IPv6, options are separated from the base header. Each router
thus need not process unwanted additional information.
Options - IPv6 has new options to allow additional functionalities such as
enhanced routing functionality, support for mobile hosts, etc.
Extensible - IPv6 is designed to allow extension of the protocol, if required by new
technologies or applications.
Resource allocation - in IPv6, the flow label has been added to enable the source to
request special handling of the packet, such as real-time audio and video.
Security - the encryption and authentication options in IPv6 provide confidentiality
and integrity of the packet.
Auto configuration - IPv6 allows a host to be connected to a network without the
help of a DHCP server.
State the drawbacks of IPv4?
Despite all short-term solutions, such as subnetting, classless addressing, and
NAT, address depletion is still a long-term problem.
Internet must accommodate real-time audio and video transmission that requires
minimum delay strategies and reservation of resources.
Internet must provide encryption and authentication of data for some applications.
Define dual-stack operation and tunneling.
In dual-stack, nodes run both IPv6 and IPv4, uses Version field to decide which
stack should process an arriving packet.
An IPv6 packet is encapsulated within an IPv4 packet as it travels through an IPv4 network. This
is known as tunneling, and the IPv4 packet carries the tunnel endpoint as its destination address.

How NAT helps to solve address space depletion?


Network Address Translation (NAT) enables hosts to use Internet without the
need to have globally unique addresses.
Enables organization to have a set of addresses internally and one address externally.
Addresses reserved for internal use are 10.0.0.0 - 10.255.255.255,
172.16.0.0 - 172.31.255.255 and 192.168.0.0 - 192.168.255.255.
For NAT to work, the organization must have single connection to the Internet
through a router that runs the NAT software.
What is the function of IGMP or MLD?
Hosts communicate their desire to join / leave a multicast group to a router using Internet
Group Message Protocol (IGMP) in IPv4 or Multicast Listener Discovery (MLD) in IPv6
Provides multicast routers with information about the membership status of hosts

connected to the network.


Enables a multicast router to create and update list of loyal members for each group.
Explain Multicast routing protocols in detail.
To support multicasting, routers additionally have multicast forwarding tables.
Multicast forwarding table is a tree structure, known as multicast distribution trees.
Internet multicast is implemented on physical networks that support broadcasting
by extending forwarding functions.
Multicast routing is the process by which multicast distribution trees are
determined. Prominent multicast routing protocols are:
o Distance-Vector Multicast (DVMRP) o
Protocol Independent Multicast (PIM)

Distance-Vector Multicast (DVMRP)


Distance vector routing for unicast is extended to support multicast routing.
Each router maintains a table of (Destination, Cost, NextHop) for all
destination through exchange of distance vectors.
Multicasting is added to distance-vector routing in two stages.
o Reverse Path Broadcast mechanism that floods packets to other networks
o Reverse Path Multicasting that prunes end networks that do not have
hosts belonging to a multicast group.
o DVMRP is also known as flood-and-prune protocol.
Reverse-Path Broadcasting
A router, on receiving a multicast packet from source S, checks whether it arrived from the
NextHop it would itself use to reach S (i.e., over the shortest path back to S); if so, it
forwards the packet on all of its other outgoing links.
The packet is flooded but not looped back to S. The drawbacks are:
o It floods a network even if it has no members for that group.
o Packets are forwarded by each router connected to a LAN (duplicate flooding).
Duplicate flooding is avoided by:
o Designating a router as the parent router for each link.
o The router that has the shortest path to source S is selected as the parent router.
o Only the parent router forwards multicast packets from source S to that LAN.
Routers maintain a bit indicating whether they are the parent for that source/link pair or not.

Reverse-Path Multicasting
Multicasting is achieved by pruning networks that do not have members for a
group G. This is done in two stages.
Step 1: Identify a leaf network, which has only one router (its parent).
o The leaf network is monitored to determine whether it has any members for group G,
by having hosts periodically announce which groups they belong to.
o The router uses information from the hosts to decide whether or not to forward
packets addressed to group G over that LAN.
Step 2: Propagate the information "no members of G here" up the shortest-path tree.
o Routers augment the (Destination, Cost) pairs they send to their neighbors with the
set of groups for which the leaf network is interested in receiving multicast packets.
o This information is propagated amongst routers so that a router knows for
which groups it should forward packets on each of its links.
Including all this information in a routing update is expensive.

Example internet with members of group G in color


Protocol Independent Multicast (PIM)
Existing multicast routing protocols such as DVMRP did not scale well.
PIM divides multicast routing problem into sparse and dense mode.

PIM sparse mode (PIM-SM) is a widely used multicast routing protocol. PIM does
not rely on any particular unicast routing protocol, hence the name protocol independent.
Routers explicitly join and leave multicast group using PIM Join and Prune
messages.
A router is designated as rendezvous point (RP) for each group in a domain to
receive PIM messages.
Routers in the domain know the IP address of RP for each group.
A multicast forwarding tree is built as a result of routers sending Join messages to RP.
Initially the tree is shared by multiple senders and depending on traffic it may be
source-specific to a sender.
Shared Tree
When a router sends a Join message for group G to the RP, it passes through a set of routers.
o The Join message is wildcarded (*), i.e., it is applicable to all senders.
o Each router on the way creates an entry (*, G) in its forwarding table for the shared tree.
o The interface on which the Join arrived is marked to forward packets for that group.
o The Join is forwarded towards the RP.
Eventually, the message arrives at the RP. Thus a shared tree with the RP as root is formed.

Example

Join from R4

Join from R5

Multicast message to group G

Router R4 sending Join message for group G to rendezvous router RP.


Join message is received by router R2. R2 makes an entry (*, G) in its table and
forwards the message to RP.
As routers send Join message for a group, branches are added to the tree, i.e., shared.
When R5 sends a Join message for group G, R2 does not forward the Join. It
adds an outgoing interface to the forwarding table entry already created for that group.
Hosts create a packet with the multicast address and send it to the designated router for
their network. Suppose router R1 receives a message addressed to group G:
o R1 has no state for group G.
o It encapsulates the multicast packet in a PIM Register message addressed to the RP.
o The multicast packet is tunneled along the way to the RP.


RP decapsulates the packet and sends multicast packet onto the shared tree, towards R2.
R2 forwards the multicast packet to routers R4 and R5 that have members for group G.

Source-specific tree.
RP can force routers to know about group G, by sending Join message to the
sending host, so that tunneling can be avoided.
Intermediary routers create sender-specific entry (S, G) in their tables. Thus a
source-specific route from R1 to RP is formed.
If there is high rate of packets sent from a sender to a group G, then shared-tree is
replaced by source-specific tree with sender as root.
Example

Source-specific Join from RP

Routers switch to Source tree

The rendezvous router RP sends a Join message to the host router R1.
Router R3 learns about group G through the message sent by RP.
Router R4 sends a source-specific Join due to the high rate of packets from the source.
Router R2 learns about group G through the message sent by R4.

Eventually a source-specific tree is formed with R1 as root.

Analysis
PIM is protocol independent because tree maintenance is based on Join messages that simply follow whatever unicast shortest path is in use.
Shared trees are more scalable than source-specific trees.
Source-specific trees enable more efficient routing than shared trees.
PIM-SM protocol is used within a domain, not across domains.

Define IP multicasting.
IP multicast supports both source-specific multicast (one-to-many) and any source
multicast (many-to-many), where each group has its own IP multicast address.
Hosts that are members of a group receive copies of any packet sent to that
group's multicast address.
IP multicast allows any host to send multicast traffic; the sender need not even be a
member of the group. IP multicast is more scalable because it eliminates redundant traffic.
What is multicast addressing?
Range of IP address is reserved for multicasting (Class D in IPv4).
Multicast addresses are associated with an abstract group, whose membership is dynamic.
Without multicast addressing, a host would have to send a separate packet with
identical data to each member of the group.


In IPv4, there are 28 bits of possible multicast addresses, ignoring the prefix.
Write short notes on routing areas.
An area is a set of routers configured to exchange link-state
information. Areas introduce an additional level of hierarchy.

Special area known as backbone area is denoted as area 0. Routers R1, R2 and
R3 are part of backbone area.
Routers in backbone area are also part of non-backbone areas. Such routers
are known as area border routers (ABR).
Link-state advertisements (LSAs) are exchanged between routers within a non-backbone area, but routers do not see LSAs of other areas. For example, routers in area 1 are not aware of LSAs in area 3.
An ABR advertises routing information from its own area to the other areas. For example, R2 advertises area 2 routes, which spread to the other areas through the ABRs.
Thus, all routers learn how to reach all networks in the domain.
When a packet is to be sent to a network in another area, it goes through the
backbone area via an ABR and then reaches the destination area.
Areas improve scalability but packets may not necessarily travel on the shortest path.

Briefly explain the functioning of a Switch.

Basic Switch with 3 network interfaces


A packet is moved from the network interface directly into main memory using
direct memory access (DMA).
The CPU examines the packet header to determine the interface on which the packet is
to be sent, and the packet is moved out to that interface, again using DMA.
Each packet crosses the I/O bus twice because of the read/write to main memory.
The average throughput of the switch is therefore half the bandwidth of the main memory or the I/O bus, whichever is smaller.
Throughput = pps × BitsPerPacket
For example, for short packets (say 64 bytes) at a processing rate of 2 million
packets per second, throughput is about 1 Gbps. If the switch has 20 ports, the data
rate is roughly 50 Mbps per port.
Well designed switches move data from input to output interfaces in parallel, thus
throughput increases.
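The throughput figures quoted above can be checked with simple arithmetic; the short sketch below only reproduces that calculation, with the packet size and packet rate taken from the example.

    # Back-of-the-envelope check: throughput = pps x bits per packet.
    pps = 2_000_000                 # packets processed per second
    packet_bytes = 64               # short packet
    throughput_bps = pps * packet_bytes * 8
    print(throughput_bps / 1e9, "Gbps")                   # ~1.02 Gbps
    ports = 20
    print(throughput_bps / ports / 1e6, "Mbps per port")  # ~51 Mbps, i.e. roughly 50 Mbps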
Ports

4 × 4 switch

Ports contain buffers to hold packets before they are forwarded.
A fabric that switches packets using information in the packet header is known as self-routing.
The input port receives a stream of packets, analyzes the headers, determines the
output port and passes each packet on to the fabric.
Simple input buffering, such as FIFO, can lead to head-of-line blocking:
o If packets at several input ports are destined for a single output port, only one of them can be forwarded at a time.
o Packets at the front of a queue can therefore prevent packets queued behind them from being forwarded.
o To avoid this problem, the majority of switches use pure output buffering.

Head-of-line blocking
Buffer space also determines the QoS characteristics of a switch.
A switch fabric moves packets from input ports to output ports with minimal
delay and meets the throughput goals of the switch.
The fabric may be a shared bus, shared memory, crossbar, or self-routing fabric.
Control processor is responsible for running the algorithm to build forwarding tables.
Write short notes on routing metrics in ARPANET.
Assigning a uniform cost to all links (say 1 hop) has the following drawbacks:
o Latency on a link is not considered. For example, links with latencies of 250 ms and 1 ms are not distinguished.
o Bandwidth of the link is not considered. For example, links with capacities of 10 Kbps and 45 Mbps are treated in the same manner.
o Current load is not considered, i.e., routing around overloaded links is impossible.
The original ARPANET routing metric measured the queue length on each link.
o A higher cost was assigned to a link with a long queue than to one with a short queue.
o Thus packets were moved towards short queues, not towards the destination.
o Bandwidth and latency were still not considered.
ARPANET's new routing mechanism was as follows:
o The ArrivalTime and DepartTime of each packet at a router were recorded.
o When the node receives an ACK, it computes the delay as
  Delay = (DepartTime - ArrivalTime) + TransmissionTime + Latency
o A weight was assigned to each link based on its average delay.
o Link bandwidth and latency were thus taken into account.
o Links advertise a high cost when congested and a low cost when idle.
In the revised ARPANET routing metric,
o Changes in cost are smoothed, so the tendency of nodes to abandon a route because of a small change in cost is largely reduced.
o The dynamic range is compressed by feeding the measured utilization, link type, and link speed into a function.

Routing metric vs Link utilization


In the real world, metrics are static and changed only by the network administrator.

UNIT IV

Distinguish between network and transport layer


Network layer                               Transport layer
Responsible for host-to-host delivery       Responsible for process-to-process delivery
Host address is required for delivery       Host IP address and port number are required for delivery
Flow control is not done                    Flow control is performed (e.g., TCP's sliding window)
Multicasting capability is not inbuilt      Support for multicasting is embedded

List the features desired of a transport layer protocol.
Guaranteed and in-order delivery of messages.
Support for multiple application processes on each host.
Synchronization between the sender and the receiver.
Flow control applied by the receiver to the sender.
Transport layer protocols include UDP, TCP and RTP.


Define process and port number.
Processes are programs in execution on hosts; they may be either server or
client processes. Processes are identified by a unique 16-bit port number on that host.
Server processes operate at well-known ports (0-1023), assigned by IANA.
Client processes are assigned ephemeral ports (49152-65535) by the operating system.
Write short notes on simple demultiplexer or UDP.
User Datagram Protocol (UDP) is a connectionless, unreliable transport protocol.
It adds process-to-process communication to the best-effort service provided by IP.
This simple demultiplexer allows multiple processes on each host to communicate.
It does not provide flow control or reliable, ordered delivery.
Messages are delivered to the correct recipient process using the port number; a checksum guards against corruption.
UDP is suitable for a process that requires simple request-response
communication with little concern for flow control or error control.
Message Queue

< port, host > pair is used as key for demultiplexing


Ports are implemented as a message queue.
o When a message arrives, UDP appends it to the end of the queue.
o When the queue is full, the message is discarded.
o When a message is read, it is removed from the queue.
Well-known UDP ports include 7 (Echo), 53 (DNS), 111 (RPC), 161 (SNMP), etc.

UDP Header
UDP packets, known as user datagrams, have a fixed-size header of 8 bytes.
SrcPort and DstPort: source and destination port numbers of the message.
Length: a 16-bit field giving the total length of the user datagram, i.e., header plus data.
Checksum: computed over the UDP header, the data and a pseudo header. The checksum is optional in IPv4 but mandatory in IPv6.
The pseudo header consists of three IP fields (Protocol, SourceAddr and
DestinationAddr) and the UDP Length field.
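As a rough illustration of the checksum calculation over the pseudo header, here is a minimal sketch assuming IPv4 addresses; the function names are made up for this example and the 16-bit one's-complement sum is the standard Internet checksum.

    import struct, socket

    def internet_checksum(data: bytes) -> int:
        """16-bit one's-complement sum used by UDP/TCP/IP."""
        if len(data) % 2:                              # pad to an even number of bytes
            data += b'\x00'
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)   # fold carries back in
        return ~total & 0xFFFF

    def udp_checksum(src_ip, dst_ip, src_port, dst_port, payload: bytes) -> int:
        length = 8 + len(payload)                      # UDP header + data
        pseudo = socket.inet_aton(src_ip) + socket.inet_aton(dst_ip) + \
                 struct.pack('!BBH', 0, 17, length)    # zero, protocol=17 (UDP), UDP length
        header = struct.pack('!HHHH', src_port, dst_port, length, 0)  # checksum field = 0
        return internet_checksum(pseudo + header + payload)

    print(hex(udp_checksum('10.0.0.1', '10.0.0.2', 5000, 53, b'hello')))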
Applications
Used for management processes such as SNMP.
Used for route updating protocols such as RIP.
It is a suitable transport protocol for multicasting.
UDP is suitable for a process with its own internal flow and error control
mechanisms, such as the Trivial File Transfer Protocol (TFTP).
With a neat architecture, explain TCP in detail.
Transmission Control Protocol (TCP) offers a connection-oriented, byte-stream service.
It guarantees reliable, in-order delivery of messages.
TCP is a full-duplex protocol.
Like UDP, TCP provides process-to-process communication.
It has a built-in congestion-control mechanism.
It ensures flow control; the sliding window forms the heart of TCP operation.
Some well-known TCP ports are 21 (FTP), 23 (TELNET), 25 (SMTP), 80 (HTTP), etc.
The sending TCP buffers bytes in a send buffer and transmits data units called segments.
Segments are stored in a receive buffer at the other end for the application to read.
TCP's demux key is the 4-tuple < SrcPort, SrcIPAddr, DstPort, DstIPAddr >.

Segment Format
Data units exchanged between TCP peers are called segments.
SrcPort and DstPort: port numbers of the source and destination processes.
SequenceNum: sequence number of the first byte of data carried in the segment. The 32-bit sequence number space is required to be at least twice the maximum window size, and it wraps around when exhausted.
Acknowledgment: the byte number the receiver expects to receive next.
HdrLen: length of the TCP header in 4-byte (32-bit) words.

The Flags field contains six control bits:
o URG: indicates that the segment contains urgent data.
o ACK: indicates that the value of the acknowledgment field is valid.
o PUSH: indicates that the sender has invoked the push operation.
o RESET: signifies that the receiver wants to abort the connection.
o SYN: synchronizes sequence numbers during connection establishment.
o FIN: terminates the TCP connection.

AdvertisedWindow: the receiver's window size, used for flow control.
Checksum: computed over the TCP header, the data, and a pseudo header
containing IP fields (Length, SourceAddr and DestinationAddr). In TCP the
checksum is mandatory.
UrgPtr: indicates where the normal (non-urgent) data in the segment begins, used when the URG bit is set.
Options: up to 40 bytes of optional information can be added to the header.
Connection Establishment
Connection establishment in TCP is asymmetric: the server does a passive
open, whereas the client does an active open. It is a three-way handshake
(a socket-level sketch follows the steps below).
1. The client sends a SYN segment to the server containing its initial sequence
number (Flags = SYN, SequenceNum = x).
2. The server responds with a single segment that acknowledges the client's
sequence number (Flags = ACK, Ack = x + 1) and specifies its own initial
sequence number (Flags = SYN, SequenceNum = y).
3. Finally, the client responds with a segment that acknowledges the
server's sequence number (Flags = ACK, Ack = y + 1).
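The sketch below only shows how passive and active opens appear at the socket API; the kernel performs the SYN / SYN+ACK / ACK exchange itself. The loopback address and port 5001 are arbitrary values chosen for the example.

    import socket, threading

    ready = threading.Event()

    def server():
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(('127.0.0.1', 5001))
        srv.listen(1)                      # passive open -> LISTEN state
        ready.set()
        conn, _ = srv.accept()             # returns once the handshake completes
        conn.close(); srv.close()

    threading.Thread(target=server, daemon=True).start()
    ready.wait()

    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(('127.0.0.1', 5001))       # active open: SYN sent, blocks until ESTABLISHED
    print("connection ESTABLISHED")
    cli.close()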

Connection Termination
Connection termination or teardown is symmetric. It can be done in two ways.
Three-way close: both client and server close at about the same time.
o The client sends a FIN segment; the FIN segment can carry the last chunk of data.
o The server responds with a FIN + ACK segment to announce its own close.
o Finally, the client sends an ACK segment.
Half-close: one end stops sending while still receiving data.
o The client half-closes the connection by sending a FIN segment.
o The server accepts the half-close by sending an ACK segment. Data transfer from client to server stops.
o After sending all its remaining data, the server sends a FIN segment to the client, which is acknowledged by the client.
State Transition Diagram
The states involved in opening and closing a connection are shown above and
below the ESTABLISHED state, respectively.
The operation of the sliding window is hidden inside the ESTABLISHED state.
Events that trigger a state transition are:
o segments that arrive from the peer, and
o operations the application process invokes on TCP.

Opening
1. The server invokes a passive open on TCP, which causes TCP to move to the LISTEN state.
2. Later, the client does an active open, which causes its end of the connection to send a SYN segment to the server and to move to the SYN_SENT state.
3. When the SYN segment arrives at the server, it moves to the SYN_RCVD state and responds with a SYN + ACK segment.
4. Arrival of the SYN + ACK segment causes the client to move to the ESTABLISHED state and to send an ACK to the server.
5. When the ACK arrives, the server finally moves to the ESTABLISHED state.
6. Even if the client's ACK is lost, the server will still move to the ESTABLISHED state when the first data segment from the client arrives.


Closing
A process on either side of the connection can close its half of the connection
independently, or both sides can close simultaneously. The transitions from
ESTABLISHED to CLOSED are:
One side closes first:   ESTABLISHED -> FIN_WAIT_1 -> FIN_WAIT_2 -> TIME_WAIT -> CLOSED
Other side closes first: ESTABLISHED -> CLOSE_WAIT -> LAST_ACK -> CLOSED
Simultaneous close:      ESTABLISHED -> FIN_WAIT_1 -> CLOSING -> TIME_WAIT -> CLOSED
How is urgent data delivered in TCP?
A process may send urgent data. For example, abort a process by Ctrl + C keystroke.
The sending TCP inserts the urgent data at the beginning of the segment and sets the URG flag.
When TCP receives a segment with the URG bit set, it delivers the urgent data out
of band to the receiving application.
What is push operation in TCP?

Receiving TCP buffers the data and delivers when process is ready.
When a process issues Push operation, the sending TCP sets the PUSH
flag, which forces the TCP to create a segment and send it immediately.
When TCP receives a segment with PUSH flag set, it is delivered immediately.
Distinguish between connection-less and connection-oriented protocol in transport layer.

UDP (connection-less)                       TCP (connection-oriented)
Datagram model (connection-less)            Byte-stream service (connection-oriented)
Unreliable delivery                         Reliable delivery using acknowledgements
No flow control                             Supports flow control
No congestion control                       Built-in congestion control mechanism
Light overhead                              Heavy overhead
Data is delivered in order of receipt       Segments are ordered using sequence numbers

Explain TCP flow control or adaptive flow control in detail.


TCP uses a variant of sliding window known as adaptive flow control that:
o guarantees reliable and in-order delivery of data, and
o enforces flow control at the sender.
The receiver advertises its window size to the sender using the AdvertisedWindow field.
The sender therefore cannot have more unacknowledged data in flight than the AdvertisedWindow.

Reliable / Ordered Delivery

Send buffer and receive buffer (figure)

Send Buffer
The sending TCP maintains a send buffer, divided into three parts: acknowledged data,
unacknowledged data, and data not yet transmitted.
The send buffer maintains three pointers, LastByteAcked, LastByteSent and
LastByteWritten, such that
LastByteAcked ≤ LastByteSent ≤ LastByteWritten
A byte can be sent only after it has been written, and only a sent byte can be acknowledged.
Bytes to the left of LastByteAcked need not be kept, as they have already been acknowledged.

Receive Buffer
The receiving TCP maintains a receive buffer to hold data even if it arrives out of order.
The receive buffer maintains three pointers, LastByteRead,
NextByteExpected, and LastByteRcvd, such that
LastByteRead < NextByteExpected ≤ LastByteRcvd + 1
A byte cannot be read until that byte and all preceding bytes have been received.
If data is received in order, then NextByteExpected = LastByteRcvd + 1.
Bytes to the left of LastByteRead are not buffered, since they have already been read by
the application.
Flow Control
The sizes of the send and receive buffers are MaxSendBuffer and MaxRcvBuffer,
respectively. The sending TCP prevents overflow of the send buffer by maintaining
LastByteWritten - LastByteAcked ≤ MaxSendBuffer
The receiving TCP avoids overflowing its receive buffer by maintaining
LastByteRcvd - LastByteRead ≤ MaxRcvBuffer
The receiver throttles the sender by advertising a window no larger than the
amount of free space it can buffer:
AdvertisedWindow = MaxRcvBuffer - ((NextByteExpected - 1) - LastByteRead)
The sending TCP adheres to the AdvertisedWindow by computing an
EffectiveWindow that limits how much data it may still send:
EffectiveWindow = AdvertisedWindow - (LastByteSent - LastByteAcked)
When data arrives, LastByteRcvd moves to the right (is incremented) and the
AdvertisedWindow shrinks.
The receiver acknowledges data only if all preceding bytes have arrived.
The AdvertisedWindow expands when data is read by the application.
o If data is read as fast as it arrives, then AdvertisedWindow = MaxRcvBuffer.
o If data is read slowly, the AdvertisedWindow eventually shrinks to 0.
The AdvertisedWindow field is designed to allow the sender to keep the pipe full;
the 16-bit field must be large enough to hold the delay × bandwidth product.
(A small numeric sketch of this window arithmetic follows.)
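The following minimal sketch just plugs example byte counts into the formulas above; the buffer size and pointer values are invented for illustration.

    # All values in bytes; the pointer names follow the text above.
    MaxRcvBuffer = 4096

    # Receiver side (data received in order so far)
    LastByteRead, NextByteExpected, LastByteRcvd = 1000, 1501, 1500
    AdvertisedWindow = MaxRcvBuffer - ((NextByteExpected - 1) - LastByteRead)
    print("AdvertisedWindow =", AdvertisedWindow)     # 4096 - 500 = 3596

    # Sender side
    LastByteAcked, LastByteSent = 1200, 1500
    EffectiveWindow = AdvertisedWindow - (LastByteSent - LastByteAcked)
    print("EffectiveWindow  =", EffectiveWindow)      # 3596 - 300 = 3296 bytes may still be sent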
Fast Sender vs Slow Receiver
If a fast sender transmits at a higher rate than the receiver can absorb, the receiver's
buffer fills up and the AdvertisedWindow shrinks, eventually to 0.
The receiver then advertises a window of size 0, and the sender is blocked from transmitting.
When the receiving process reads some data, those bytes are acknowledged and the
AdvertisedWindow expands again.
When an acknowledgement arrives for x bytes, LastByteAcked is incremented by x
and the corresponding send buffer space is freed, so further data can be sent.
What is silly window syndrome? When should TCP transmit a segment?
TCP sends a segment if:
o Maximum Segment Size (MSS) bytes are ready, where the MSS is the largest segment that fits in an MTU-sized IP datagram,
o the sending process invokes a push operation, or
o a timeout occurs.
If AdvertisedWindow < MSS, TCP may aggressively decide to transmit a small segment,
since delay hurts interactive applications.
The receiver acknowledges those bytes, and the small segments (< MSS) thus introduced
remain in the system indefinitely, since they never get combined with adjacent segments.
This strategy of taking advantage of any available window leads to many tiny segments,
called the silly window syndrome.
The receiver can help prevent a small AdvertisedWindow by delaying and
combining acknowledgements.

Nagle's Algorithm
Nagle suggested an elegant self-clocking solution that provides a simple,
unified rule for deciding when TCP should transmit data:

    When the application produces data to send
        if both the available data and the window >= MSS
            send a full segment
        else
            if there is unACKed data in flight
                buffer the new data until an ACK arrives
            else
                send all the new data now

TCP transmits a full segment if the available data and the AdvertisedWindow are at least MSS.
TCP also transmits a smaller segment if there is no unacknowledged data in flight.
If there is unacknowledged data, the sender must wait for an ACK before
transmitting the next (small) segment.
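The decision rule above can be written as a tiny function; this is only an illustrative sketch of the rule, with made-up parameter names, not an implementation of TCP's sender.

    def nagle_should_send(available_data, window, mss, unacked_in_flight):
        """Return (send_now, bytes_to_send) following the rule sketched above."""
        if available_data >= mss and window >= mss:
            return True, mss                      # a full segment can go out
        if unacked_in_flight:
            return False, 0                       # buffer until an ACK arrives (self-clocking)
        return True, min(available_data, window)  # idle connection: send what we have

    print(nagle_should_send(available_data=200, window=4000, mss=1460, unacked_in_flight=True))
    print(nagle_should_send(available_data=200, window=4000, mss=1460, unacked_in_flight=False))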
What is adaptive retransmission? Explain the algorithms used.
TCP guarantees reliability by retransmitting a segment if its ACK does not arrive before a timeout.
The timeout is based on the RTT, which is highly variable for any two hosts on the
Internet. An appropriate timeout is chosen using adaptive retransmission.
Original Algorithm
In the original TCP, a running average of the RTT is maintained and the timeout is
computed as a function of this RTT.
SampleRTT is the time between sending a segment and the arrival of its ACK.
EstimatedRTT is computed as a weighted average of the previous estimate and the current
sample:
EstimatedRTT = α × EstimatedRTT + (1 - α) × SampleRTT
where α is a smoothing factor with a value between 0.8 and 0.9.
The timeout is set to twice the EstimatedRTT:
TimeOut = 2 × EstimatedRTT
Karn/Partridge Algorithm
A flaw discovered in the original algorithm after years of use was that
an ACK segment acknowledges the receipt of data, not a particular transmission.
When an ACK arrives after a retransmission, it is impossible to decide
whether to associate it with the original transmission or the retransmission when
taking a SampleRTT.
o If the ACK is associated with the original transmission, SampleRTT becomes too large.
o If the ACK is associated with the retransmission, SampleRTT becomes too small.
Karn and Partridge proposed that SampleRTT be taken only for segments that are
sent exactly once, i.e., for segments that are not retransmitted.
Each time TCP retransmits, the timeout is doubled, since loss of segments is
mostly due to congestion and the TCP source should therefore be conservative.
Jacobson/Karels Algorithm
Jacobson and Karels discovered that the problem with the original algorithm was
that the variance in SampleRTT was not taken into account.
If the variation among samples is small, EstimatedRTT can be trusted; otherwise the
timeout must not be tightly coupled to EstimatedRTT.
The mean RTT and the variation in that mean are calculated as follows:
Difference = SampleRTT - EstimatedRTT
EstimatedRTT = EstimatedRTT + (δ × Difference)
Deviation = Deviation + δ × (|Difference| - Deviation)
where δ is a fraction between 0 and 1.
TCP computes the TimeOut as a function of both EstimatedRTT and Deviation:
TimeOut = μ × EstimatedRTT + φ × Deviation, where μ = 1 and φ = 4
When the variance is small, TimeOut is close to EstimatedRTT; otherwise
Deviation dominates the calculation.
TCP samples the round-trip time only once per RTT (rather than once per packet) because of the cost.
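A minimal sketch of the Jacobson/Karels update, assuming δ = 0.125 and the μ = 1, φ = 4 values given above; the class name and the way the first sample is handled are choices made only for this example.

    class RTTEstimator:
        """Jacobson/Karels timeout computation sketched above."""
        def __init__(self, delta=0.125, mu=1.0, phi=4.0):
            self.delta, self.mu, self.phi = delta, mu, phi
            self.estimated_rtt = None
            self.deviation = 0.0

        def sample(self, sample_rtt):
            if self.estimated_rtt is None:            # first measurement
                self.estimated_rtt = sample_rtt
                self.deviation = sample_rtt / 2
            else:
                diff = sample_rtt - self.estimated_rtt
                self.estimated_rtt += self.delta * diff
                self.deviation += self.delta * (abs(diff) - self.deviation)
            return self.timeout()

        def timeout(self):
            return self.mu * self.estimated_rtt + self.phi * self.deviation

    est = RTTEstimator()
    for rtt in [100, 110, 95, 300, 120]:              # sample RTTs in milliseconds
        print(round(est.sample(rtt), 1))              # timeout grows when variance grows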
Define Congestion.
Congestion occurs if load on the network (the number of packets sent) is greater
than capacity of the network (the number of packets a network can handle).
When load is less than network capacity, throughput increases proportionally.
When the load exceeds capacity, the queues become full and the routers
discard some packets and throughput declines sharply.
TCP uses mechanisms to control or avoid congestion.
Distinguish between flow control and congestion control.
Flow control prevents a fast sender from overrunning the capacity of slow receiver.
Congestion control prevents too much data from being injected into the network,
thereby overloading switches or links beyond their capacity.
Flow control is an end-to-end issue, whereas congestion control concerns the
interaction between hosts and the network.
Explain TCP congestion control mechanisms in detail.
Each source determines the available capacity of the network, so that it can send
packets without loss.
TCP uses ACKs to pace the transmission of further packets, i.e., it is self-clocking.
TCP maintains a state variable CongestionWindow for each connection.
A source is thus not allowed to send faster than the network or the destination host can accept:
MaxWindow = MIN(CongestionWindow, AdvertisedWindow)
EffectiveWindow = MaxWindow - (LastByteSent - LastByteAcked)
Congestion control mechanisms are:
1. Additive Increase / Multiplicative Decrease (AIMD)
2. Slow Start
3. Fast Retransmit and Fast Recovery
Additive Increase/Multiplicative Decrease (AIMD)
Initially, the TCP source sets CongestionWindow based on the level of
congestion it perceives in the network.
The source increases CongestionWindow when the level of congestion goes down
and decreases it when the level of congestion goes up.
TCP interprets timeouts as a sign of congestion and reduces its rate of transmission.
On a timeout, the source halves its CongestionWindow. This is known as
multiplicative decrease. For example, if CongestionWindow = 16 packets, after a
timeout it is set to 8.
Irrespective of the level of congestion, CongestionWindow is never reduced below one MSS.
When an ACK arrives for a packet, CongestionWindow is incremented
marginally. This is known as additive increase (roughly one MSS per RTT):
Increment = MSS × (MSS / CongestionWindow)
CongestionWindow += Increment
CongestionWindow increases and decreases throughout the lifetime of the connection.
When CongestionWindow is plotted as a function of time, a saw-tooth pattern
results.
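A minimal sketch of the two AIMD rules above; the MSS value and the event sequence are assumptions for illustration only.

    MSS = 1460  # bytes

    def on_ack(cwnd):
        """Additive increase: roughly one MSS per RTT worth of ACKs."""
        return cwnd + MSS * (MSS / cwnd)

    def on_timeout(cwnd):
        """Multiplicative decrease: halve the window, never below one MSS."""
        return max(cwnd / 2, MSS)

    cwnd = 16 * MSS
    cwnd = on_timeout(cwnd)          # 16 segments -> 8 segments
    print(cwnd / MSS)
    for _ in range(8):               # ACKs for one full window of segments
        cwnd = on_ack(cwnd)
    print(round(cwnd / MSS, 2))      # roughly one segment larger after an RTT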

Additive Increase

CongestionWindow Trace

Analysis
AIMD decreases CongestionWindow aggressively but increases it conservatively.
A small CongestionWindow results in a lower probability of packets being
dropped, so the congestion control mechanism is stable.
Since a timeout indicates congestion, TCP needs an accurate timeout mechanism.
AIMD is appropriate only when the source is operating close to network capacity.

Slow Start
Slow start is used to increase CongestionWindow exponentially from a cold start.
The source TCP starts by setting CongestionWindow to one packet.
TCP effectively doubles the number of packets sent every RTT while transmission is successful:
o When the ACK for the first packet arrives, TCP adds 1 packet to
CongestionWindow and sends two packets.
o When the two ACKs arrive, TCP increments CongestionWindow by 2
packets and sends four packets, and so on.
Initially TCP has no idea about the level of congestion, so it increases
CongestionWindow rapidly until there is a timeout.
On a timeout (as sketched below):
o TCP decreases CongestionWindow by half (multiplicative decrease).
o CongestionThreshold is assigned that halved value of CongestionWindow.
o CongestionWindow is reset to 1 packet.
Slow start is then repeated until CongestionWindow reaches
CongestionThreshold; thereafter the window grows by only 1 packet per RTT.
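The sketch below combines slow start, the timeout reaction, and additive increase beyond the threshold; the MSS, initial threshold, and event list are invented for the example.

    MSS = 1460

    def tcp_window_sketch():
        """Slow start / congestion avoidance toy model following the rules above."""
        cwnd, ssthresh = 1 * MSS, 64 * 1024          # start with one packet
        for event in ['ack', 'ack', 'ack', 'timeout', 'ack', 'ack']:
            if event == 'timeout':
                ssthresh = cwnd / 2                  # remember half the window
                cwnd = 1 * MSS                       # restart from one packet
            elif cwnd < ssthresh:
                cwnd += MSS                          # slow start: +1 MSS per ACK (doubles per RTT)
            else:
                cwnd += MSS * (MSS / cwnd)           # additive increase beyond the threshold
            print(event, round(cwnd / MSS, 2))

    tcp_window_sketch()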
Example
In the example trace, the initial slow start increases CongestionWindow up to 34 KB;
congestion occurs at about 0.4 seconds and packets are lost.
ACKs stop arriving, so the trace of CongestionWindow becomes flat.
A timeout occurs at about 2 seconds; CongestionThreshold is then set to 17 KB and
CongestionWindow to 1 packet.
Slow start is used up to 17 KB and additive increase thereafter, until congestion occurs again.

Exponential Increase

CongestionWindow Trace

Analysis
Slow start provides exponential growth and is designed to avoid dumping a bursty
full window onto the network at once.
TCP loses more packets initially because it tries to learn the available bandwidth
quickly through exponential increase.
When the connection goes dead while waiting for a timer to expire, slow start is
used to restart the flow, but only up to the target (threshold) value of CongestionWindow.
Fast Retransmit and Fast Recovery
The coarse-grained implementation of TCP timeouts led to long periods of time
during which the connection went dead while waiting for a timer to expire.
Fast retransmit is a heuristic that triggers the retransmission of a dropped packet
sooner than the regular timeout mechanism.
Fast retransmit does not replace regular timeouts.
When a packet arrives out of order, the receiving TCP resends the same
acknowledgment (a duplicate ACK) it sent last time.
When a duplicate ACK arrives, the sender infers that an earlier packet may have been
lost due to congestion.
The sending TCP waits for three duplicate ACKs to confirm that a packet is lost before
retransmitting it. This retransmission before the regular timeout is called fast retransmit.
When packet loss is detected via fast retransmit, the slow start phase is replaced by
additive increase / multiplicative decrease. This is known as fast recovery.
Instead of setting CongestionWindow to one packet, this method uses the ACKs that
are still in the pipe to clock the sending of packets.
Slow start is then used only at the beginning of a connection and after a regular
timeout; at other times the window follows a pure AIMD pattern.
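A tiny sketch of the duplicate-ACK counting that triggers fast retransmit; the function names and the ACK sequence mirror the example that follows and are otherwise invented.

    def make_ack_handler():
        """Counts duplicate ACKs and signals fast retransmit after the third one."""
        last_ack, dup_count = None, 0
        def on_ack(ack_no):
            nonlocal last_ack, dup_count
            if ack_no == last_ack:
                dup_count += 1
                if dup_count == 3:
                    print("fast retransmit: resend segment after", ack_no)
            else:
                last_ack, dup_count = ack_no, 0   # new data acknowledged, reset the count
        return on_ack

    on_ack = make_ack_handler()
    for ack in [2, 2, 2, 2, 6]:     # ACKs for packet 2 repeat while packet 3 is lost
        on_ack(ack)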
Example
In the example, packets 1 and 2 are received, whereas packet 3 is lost.
o The receiver sends a duplicate ACK for packet 2 when packet 4 arrives.
o After receiving three duplicate ACKs (having sent up to packet 6), the sender retransmits packet 3.
o When packet 3 is received, the receiver sends a cumulative ACK up to packet 6.
In the example trace, slow start is used at the beginning and after the timeout at 2 seconds.
o Fast recovery avoids slow start between 3.8 and 4 seconds.
o CongestionWindow is reduced by half, from 22 KB to 11 KB, and additive increase resumes thereafter.

Duplicate ACK

CongestionWindow Trace

Analysis
Long periods with flat congestion window and no packets sent are eliminated.
TCP's fast retransmit can detect up to three dropped packets per window.

Fast retransmit and fast recovery increase throughput by roughly 20%.


Explain in detail about TCP congestion avoidance algorithms.
Congestion avoidance mechanisms prevent congestion before it actually occurs.
TCP itself creates packet loss in order to determine the available bandwidth of the connection.
In congestion avoidance, routers help the end nodes by indicating when congestion is likely to occur.

Congestion-avoidance mechanisms are:


1. DECbit
2. Random Early Detection (RED)
3. Source-based congestion avoidance
DECbit
Each router monitors the load it is experiencing and explicitly notifies the end
nodes when congestion is likely to occur by setting a binary congestion bit,
called the DECbit, in packets that flow through it.
The destination host copies the DECbit into the ACK and sends it back to the source.
The source eventually reduces its transmission rate and congestion is avoided.
A single congestion bit is added to the packet header.
A router sets this bit in a packet if its average queue length is greater than or equal to 1.
The average queue length is measured over a time interval that spans the last
busy + idle cycle plus the current busy cycle.
The router calculates the average queue length by dividing the area under the queue-length curve by the time interval.
The source counts how many ACKs have the DECbit set for the previous window's worth of packets.
If less than 50% of the ACKs have the DECbit set, the source increases its congestion
window by 1 packet; otherwise it decreases the congestion window to 87.5% of its previous value.
The "increase by 1, multiply by 0.875" rule is an instance of AIMD and keeps the
mechanism stable. (A small sketch of the source-side rule follows.)
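A minimal sketch of the source-side DECbit reaction described above; the window is counted in packets and the example numbers are arbitrary.

    def decbit_adjust(window, acks_with_bit, total_acks):
        """Source-side DECbit rule: <50% marked ACKs -> +1 packet,
        otherwise multiply the window by 0.875."""
        if acks_with_bit < 0.5 * total_acks:
            return window + 1
        return window * 0.875

    print(decbit_adjust(window=16, acks_with_bit=3, total_acks=16))   # 17
    print(decbit_adjust(window=16, acks_with_bit=10, total_acks=16))  # 14.0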

Random Early Detection (RED)


The router notifies the source that congestion is likely to occur by dropping
packets before its buffer space is exhausted (early drop), rather than being forced to
drop them later when congestion has set in.
The source is implicitly notified by a timeout or by duplicate ACKs.
Each incoming packet is dropped with a probability, known as the drop probability,
when the average queue length exceeds a drop level. This is called early random drop.
The average queue length is computed as a weighted running average:
AvgLen = (1 - Weight) × AvgLen + Weight × SampleLen
where 0 < Weight < 1 and SampleLen is the queue length at the time of the sample
measurement.
The two queue length thresholds used by RED are MinThreshold and
MaxThreshold.
When a packet arrives, the gateway compares the current AvgLen with these
thresholds and decides whether to queue or drop the packet as follows:

    if AvgLen <= MinThreshold
        queue the packet
    if MinThreshold < AvgLen < MaxThreshold
        calculate probability P
        drop the arriving packet with probability P
    if AvgLen >= MaxThreshold
        drop the arriving packet

When AvgLen exceeds MinThreshold, a small percentage of packets are dropped, which forces
TCP connections to reduce their window sizes; this in turn reduces the rate at which
packets arrive at the router. Thus AvgLen decreases and congestion is avoided.
The drop probability P is computed as a function of both AvgLen and the time (count of
packets queued) since the last packet was dropped:
TempP = MaxP × (AvgLen - MinThreshold) / (MaxThreshold - MinThreshold)
P = TempP / (1 - count × TempP)
The drop probability increases slowly while AvgLen is between the two thresholds.
On reaching MaxP at the upper threshold, it jumps to unity.
(A sketch of this per-packet decision follows.)
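The per-packet RED decision above can be sketched as follows; the threshold values and the sequence of average queue lengths are arbitrary example inputs.

    import random

    def red_arrival(avg_len, min_th, max_th, max_p, count):
        """RED decision for one arriving packet.
        Returns ('queue' or 'drop', updated count of packets queued since the last drop)."""
        if avg_len <= min_th:
            return 'queue', count + 1
        if avg_len >= max_th:
            return 'drop', 0
        temp_p = max_p * (avg_len - min_th) / (max_th - min_th)
        p = temp_p / (1 - count * temp_p)        # probability grows the longer we go without a drop
        if random.random() < p:
            return 'drop', 0
        return 'queue', count + 1

    count = 0
    for avg in [10, 40, 60, 90, 120]:            # sample average queue lengths
        action, count = red_arrival(avg, min_th=30, max_th=100, max_p=0.02, count=count)
        print(avg, action)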

RED thresholds

Drop probability function

MaxThreshold is typically set to twice MinThreshold, which works well for bursty
Internet traffic.
Because RED drops packets randomly, the probability that RED drops a particular
flow's packets is roughly proportional to that flow's share of the bandwidth.
Source-Based Congestion Avoidance
Source looks for signs of congestion, for instance, increase in RTT indicates
queuing at a router.
Some mechanisms
1. Every two round-trip delays, TCP checks whether the current RTT is greater
than the average of the minimum and maximum RTTs seen so far. If so, the congestion
window is decreased by one-eighth; otherwise the normal increase is applied.
2. Every RTT, TCP increases the window size by one packet and compares the
throughput achieved with the throughput when the window was one packet smaller. If the
difference is less than half the throughput achieved earlier, the window is decreased by one packet.

TCP Vegas
Throughput increases as the congestion window increases. Increasing the window
beyond the available bandwidth only results in packets queuing at the bottleneck router.
TCP Vegas' goal is to measure and control the right amount of extra data in transit.
Extra data refers to the data that the source would have had to refrain from sending
in order not to exceed the available bandwidth.
A flow's BaseRTT is set to the RTT of a packet sent when the flow is not congested:
BaseRTT = MIN(all measured RTTs)
CongestionWindow is taken to be the total number of bytes in transit.
The expected throughput, without overflowing the connection, is
ExpectedRate = CongestionWindow / BaseRTT
The ActualRate, i.e., the current sending rate, is calculated by recording the
bytes transmitted during one SampleRTT:
ActualRate = BytesTransmitted / SampleRTT
The difference between ExpectedRate and ActualRate is then calculated:
Diff = ExpectedRate - ActualRate
Two thresholds α and β are defined, corresponding to too little and too much
extra data in the network, such that α < β.
TCP Vegas uses this difference in rates to adjust CongestionWindow accordingly:
o If Diff < α, CongestionWindow is increased linearly during the next RTT.
o If Diff > β, CongestionWindow is decreased linearly during the next RTT.
o If α < Diff < β, CongestionWindow is left unchanged.
When the actual and expected rates differ significantly, it indicates congestion in
the network; the β threshold triggers a decrease in sending rate.
When the actual and expected rates are almost the same, available bandwidth is being
wasted; the α threshold triggers an increase in sending rate.
The overall goal of TCP Vegas is to keep between α and β extra bytes in the network.
(A per-RTT sketch of this comparison follows.)
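The following sketch performs one round of the Vegas comparison. It approximates ActualRate as CongestionWindow / SampleRTT, and the α, β and MSS values are assumptions chosen only for the example.

    def vegas_adjust(cwnd, base_rtt, sample_rtt, alpha=1, beta=3, mss=1460):
        """One RTT of the Vegas comparison above; alpha/beta are in packets."""
        expected = cwnd / base_rtt                 # bytes per second if nothing is queued
        actual = cwnd / sample_rtt                 # approximation of the measured rate
        diff_packets = (expected - actual) * base_rtt / mss   # extra data in transit, in packets
        if diff_packets < alpha:
            return cwnd + mss                      # too little extra data: increase linearly
        if diff_packets > beta:
            return cwnd - mss                      # queue building at the bottleneck: decrease
        return cwnd                                # between the thresholds: leave it alone

    cwnd = 20 * 1460
    print(vegas_adjust(cwnd, base_rtt=0.100, sample_rtt=0.101) / 1460)  # little queuing -> 21 pkts
    print(vegas_adjust(cwnd, base_rtt=0.100, sample_rtt=0.140) / 1460)  # queue building -> 19 pkts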

Black line (actual throughput), color line (expected throughput) and threshold (shaded region)

List the difference between DECbit and RED.


In DECbit, an explicit notification about congestion is sent to the source, whereas
RED implicitly notifies the source by dropping a few packets.
DECbit may lead to a tail drop policy, whereas RED drops packets randomly,
based on a drop probability.
What is explicit congestion notification?
Dropping packets early, as RED does, to signal congestion is not acceptable to
applications that are intolerant of delay or packet loss.
Instead, a bit in the TOS field can be set by routers along the path when congestion
is encountered; the bit is echoed back to the source. This is known as explicit
congestion notification (ECN).
Define QoS.
Certain applications are not satisfied with the best-effort service offered by the network.
o Multimedia applications require a minimum bandwidth.
o Real-time applications require timeliness rather than only correctness.
A network that supports different levels of service based on application
requirements offers Quality of Service (QoS).
QoS is defined as a set of attributes pertaining to the performance of a
connection. The attributes may be either user oriented or network oriented.
List the approaches to improve QoS.
Approaches to improve QoS are classified as either fine-grained or coarse-grained.
Fine-grained approaches provide QoS to individual applications or flows. Integrated
Services, the QoS architecture used with RSVP, belongs to this category.
Coarse-grained approaches provide QoS to large classes of data or aggregated traffic.
Differentiated Services belongs to this category.


Briefly explain taxonomy or classification of applications.
Applications are classified as real time and non-real time or elastic.

Applications such as Telnet, FTP, email, Web browsing, etc., that can work
without timely delivery are termed as elastic.
Real-time applications are classified based on how they handle packet loss.
Robot control program can malfunction (intolerant) due to loss of a packet,
whereas loss of an audio sample will have less effect on audio quality (tolerant).
Real-time applications can also be classified based on their adaptability.
An audio application adapts to delay experienced in the network by buffering,
whereas video coding algorithms are rate adaptive with quality based on bandwidth.

Explain how QoS is provided through integrated services or RSVP.


Integrated Services (IntServ) is a flow-based QoS model: the user creates a flow
from source to destination and informs all routers of the resource requirements.
The service classes defined in IntServ are:
o Guaranteed service, for intolerant applications, wherein the network assures
that delay will not exceed some maximum as long as the flow stays within its TSpec.
o Controlled load service, for tolerant, adaptive applications such as file
transfer and e-mail, which require low loss or no loss.
The flow specification (flowspec), i.e., the information given to the network about a flow,
has two parts, TSpec and RSpec:
o TSpec defines the traffic characterization of the flow.
o RSpec defines the resources that the flow needs to reserve (buffer, bandwidth, etc.).

TSpec
The bandwidth of a real-time application varies constantly and may exceed its average rate.
The varying bandwidth characteristics of a flow are described using a token bucket filter.
A token bucket is used to control the amount and rate of traffic sent to the
network. The two parameters used by the filter are the token rate r and the bucket depth B.
A token is required to send a byte of data.
The host can accumulate tokens at rate r per second, but never more than B tokens.
Sustained traffic of more than r bytes per second is not permitted; bursty data is
smoothed out by being spread over a longer interval.
The token bucket thus allows bursty traffic, but only up to a regulated maximum burst.
(A small token bucket sketch follows.)
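A minimal token bucket sketch following the r / B description above; the rate and depth values in the example are assumptions (roughly 1 Mbps and 1 Mbit expressed in bytes).

    import time

    class TokenBucket:
        """Token bucket with rate r (bytes/s) and depth B (bytes), one token per byte."""
        def __init__(self, rate, depth):
            self.rate, self.depth = rate, depth
            self.tokens = depth
            self.last = time.monotonic()

        def allow(self, nbytes):
            now = time.monotonic()
            # accumulate tokens at rate r, but never more than the bucket depth B
            self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if nbytes <= self.tokens:
                self.tokens -= nbytes
                return True
            return False                    # exceeds the regulated burst: delay or drop

    tb = TokenBucket(rate=125_000, depth=125_000)   # ~1 Mbps, ~1 Mbit of depth (assumed)
    print(tb.allow(100_000), tb.allow(100_000))     # the second burst must wait for new tokens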

Flows with different rates and bucket descriptions

Flow A generates data at a steady rate of 1 Mbps. It is described by a token bucket with
rate r = 1 Mbps and depth B = 1 byte; flow A needs essentially no depth in which to store tokens.
Flow B sends at 0.5 Mbps for 2 seconds and then at 2 Mbps for 1 second. It is described by a
token bucket with rate r = 1 Mbps and depth B = 1 MB. While sending at only 0.5 Mbps,
flow B saves tokens at the unused rate (2 × 0.5 = 1 Mb worth) and then spends those
tokens to cover the 2 Mbps burst.


Resource Reservation Protocol (RSVP)
Resource Reservation Protocol (RSVP) is a signaling protocol that helps IP
create a flow and make the required resource reservations.
RSVP is designed to support multicast flows (e.g., multimedia) and provides
resource reservations for all kinds of traffic.
Since IP is stateless, RSVP provides robustness by keeping soft state in the routers.
Soft state times out (roughly every 30 seconds) unless refreshed; explicit deletion is not required.
RSVP is a receiver-oriented approach that lets receivers track their own requirements.

RSVP Messages
The sender sends a PATH message containing its TSpec to all receivers, roughly every 30 seconds.
A PATH message carries the information downstream receivers need.
Each router uses the PATH message to determine the reverse path for sending
reservations from the receiver back to the sender.
The receiver sends a reservation request, a RESV message, back towards the sender
(upstream), containing the sender's TSpec and the receiver's requirement, the RSpec.
Each router on the path looks at the RESV request, tries to allocate the
necessary resources, and passes the RESV on to the next router.
If allocation is not possible, the router sends an error message to the receiver.
In case of a link failure, a new path is discovered between the sender and the receiver.
Routers keep resources reserved as long as they keep receiving RESV messages; otherwise the reservation is released.

Reservation Merging on a multicast tree


In RSVP, resources are not reserved separately for each receiver in a flow; reservations are merged.
When a RESV message travels from a receiver up the multicast tree, it is likely to
reach a router where a reservation has already been made for another receiver of the
same flow.
If the new requirements can be met with the existing reservation, no new reservation is made.
A router that satisfies multiple requests with one reservation is known as a merge point.
Reservation merging meets the needs of all receivers downstream of the merge point.
For example, if receiver B has already requested 3 Mbps and A comes along
with a new request for 2 Mbps, no new reservation is made.

Admission Control
Admission control refers to the mechanism a router uses to accept or reject a
flow based on its flow specification.
When a flow requests a level of service, its TSpec and RSpec are examined.
A flow is admitted if the desired service can be provided with the currently available
resources without degrading the service of previously admitted flows; otherwise it is denied.
Admission control may also be based on policy. For example, a network administrator
may allow the CEO to make reservations and forbid requests from other employees.
Weighted fair queuing, or a combination of queuing disciplines, is used for scheduling.

Issues
Scalability: IntServ requires routers to maintain per-flow information, which is
not feasible given today's Internet growth.
Service type limitation: only two types of service are provided; certain
applications may require more than the offered services.
Discuss Differentiated Services QoS mechanism.
Differentiated Services (DiffServ) is a class-based QoS model designed for IP.
The default best-effort model is augmented with a new class called premium.

Premium packets have bits set in the header by the gateway or ISP router.

IETF has defined a set of behaviors for routers known as per-hop behaviors (PHB).
The TOS / TrafficClass field is replaced with a 6-bit DiffServ Code Point (DSCP).

DSCP can be used to define 64 PHB that could be applied to a packet.


Three PHBs defined are Default, Expedited Forwarding and Assured Forwarding.
Default is same as best-effort delivery.
Expedited Forwarding (EF)
Packets marked as EF should be forwarded by the router with minimal delay
and loss by ensuring required bandwidth.
Routers can guarantee EF only if the arrival rate of EF packets is less than the forwarding rate.
Rate limiting of EF packets is achieved by configuring the edge routers of a domain not
to admit EF traffic beyond the bandwidth of the slowest link.
Queuing can be either strict priority or weighted fair queuing.
o In strict priority, EF packets are preferred over all others.
o With WFQ, EF packets can be dropped if there is excessive EF traffic.
Assured Forwarding (AF)
Assured Forwarding is based on the RED with In and Out (RIO) algorithm.
In RIO, the drop probability increases as the average queue length increases.
RIO has two classes, named in and out.
The out curve has a lower MinThreshold than the in curve, so under low
levels of congestion only packets marked out are discarded.
If the average queue length exceeds Min_in, packets marked in are dropped as well.
For an agreement such as "customer X is allowed to send up to y Mbps of assured
traffic", in and out are used as follows:
o If the customer sends packets at less than y Mbps, the packets are marked in.
o When the customer exceeds y Mbps, the excess packets are marked out.
The combination of a profile meter at the edge router and RIO in all routers assures (but
does not guarantee) the customer that packets within the profile will be delivered.
RIO does not change the delivery order of in and out packets.
RIO generalized to more than two drop probabilities is known as weighted RED (WRED).
The DSCP code can also be used to classify packets for a WFQ scheduler. The weight for
the premium queue determines its bandwidth share:
Bpremium = Wpremium / (Wpremium + Wbest-effort)
For example, if Wpremium = 1 and Wbest-effort = 4, then Bpremium = 0.2, i.e., 20% of the bandwidth.
How limitations of integrated services are handled in differentiated services?
1. The main processing was moved from the core of the network to the edge of the network
(scalability). Routers therefore need not store per-flow information; the applications
indicate the type of service they need each time a packet is sent.
2. Per-flow service is changed to per-class service. The router routes the packet
based on the class of service marked in the packet, not the flow. The different
classes (services) are based on the needs of applications.
Define equation-based congestion control.
TCP's congestion-control algorithm is not appropriate for real-time applications.
A smooth transmission rate is obtained by ensuring that the flow's behavior
adheres to an equation that models TCP's behavior.
To be TCP-friendly, the transmission rate must be inversely proportional to the
RTT and to the square root of the loss rate ρ:
Rate ∝ 1 / (RTT × √ρ)
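The one-liner below only illustrates the proportionality stated above; the constant factor and the exact TCP throughput equation are omitted, so the numbers are indicative rather than exact.

    from math import sqrt

    def tcp_friendly_rate(mss, rtt, loss_rate):
        """Simplified TCP-friendly rate: inversely proportional to RTT and sqrt(loss)."""
        return mss / (rtt * sqrt(loss_rate))

    print(round(tcp_friendly_rate(mss=1460, rtt=0.1, loss_rate=0.01) / 1000, 1), "KB/s")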
List QoS service classes in ATM network.
1. Constant bit rate: sources send data at a constant rate, i.e., peak rate = average rate.
2. Variable bit rate: divided into two sub-classes, real-time and non-real-time. The
varying traffic is described using a token bucket.
3. Unspecified bit rate: best-effort delivery without any guarantee.
4. Available bit rate: defines a set of congestion-control mechanisms.

List the issues in sliding window algorithm when used over internet.
A single link always connects the same two hosts, whereas TCP allows
processes running on any two hosts to be connected.
TCP connections have varying RTTs, whereas over a single link the RTT is fixed.
Packets may be reordered in the Internet, whereas on a single link reordering is impossible.
On a single link the sender can size its window from the known delay × bandwidth product,
whereas TCP must learn the receiver's buffering capacity through flow control (the advertised window).
Congestion is not possible on a single link, whereas a TCP connection does not know
in advance what links it will traverse or their capacities.

UNIT V

Distinguish between application programs and application protocol.


Application programs are client processes that run on hosts, such as Chrome, Firefox, etc.
Application protocols govern client/server communication, such as FTP, HTTP, SMTP.
Different applications might use the same protocol. For example, web
browsers use HTTP to retrieve web pages from a web server.
Discuss the components of an email system and the protocols used.

Alice sends a mail to Bob


User Agent
A user agent (UA) is software (e.g. Microsoft Outlook, Netscape) that facilitates:
o Compose: create a message using a template with a built-in editor.
o Read: read mail and show sender, subject and flag (read, new) information.
o Reply: allows the user to reply (send a message) back to the sender.
o Forward: facilitates forwarding a message to a third party.
o Mailboxes: two mailboxes for each user, namely inbox and outbox.
Message Format
RFC 822 defines an email message with two parts, a header and a body.
Each header line contains a type and a value separated by a colon (:). Some are:
o From: identifies the sender of the message.
o To: mail address(es) of the recipient(s).
o Subject: states the purpose of the message.
o Date: timestamp of when the message was transmitted.
An e-mail address has the form userid@domain, where domain is the host name of the mail server.
The body contains the actual message. The header is separated from the body by a blank line.
Multipurpose Internet Mail Extension (MIME)
The email system was designed to send messages only in NVT 7-bit ASCII format.
o Languages such as French, German, Chinese, Japanese, etc. are not supported.
o Image, audio and video files cannot be sent.
MIME is a supplementary protocol that converts non-ASCII data to NVT ASCII
before sending and reconverts it to non-ASCII data at the receiver.
Headers defined in MIME are:
o MIME-Version: specifies the current version, i.e., 1.1.
o Content-Type: defines the message type, such as text/html, image/jpeg,
video/mpeg, application/word, etc. For messages with multiple parts it is multipart/mixed.
o Content-Transfer-Encoding: defines how the message is encoded (default
7-bit). base64 encoding is used to encode binary data into ASCII.
o Content-Id: unique identifier for the whole message when it contains multiple parts.
o Content-Description: describes the type of the message body.
Example
MIME-Version: 1.1
Content-Type: multipart/mixed; boundary="-------417CA6-------"
From: Alice Smith <alice@cisco.com>
To: bob@cs.princeton.edu
Subject: photo
Date: Mon, 07 Sep 1998 19:45

---------417CA6-------
Content-Type: text/plain
Content-Transfer-Encoding: 7bit

PFA my photo

---------417CA6-------
Content-Type: image/jpeg
Content-Transfer-Encoding: base64
Message Transfer Agent (MTA)
The MTA is a mail daemon (e.g. sendmail) that runs on each host to send email.
Simple Mail Transfer Protocol (SMTP) defines client/server MTA communication
over TCP on port 25.
Mail is transferred store-and-forward through intermediate mail gateways until it
reaches the recipient's mail server.
Commands are sent from the client MTA to the server MTA. Some are:

Command     Argument
HELO        Sender's host name
MAIL FROM   Sender of the message
RCPT TO     Recipient of the message
DATA        Body of the mail
QUIT        (none; terminates the session)
Responses are sent from the server MTA to the client MTA. Some are:

Code   Description
221    Service closing transmission channel
250    Request completed
354    Start mail input

Example
In each exchange, the client posts a command and the server responds with a code
(a library-level sketch follows the dialogue).

HELO cs.princeton.edu
250 Hello daemon@mail.cs.princeton.edu
MAIL FROM:<bob@cs.princeton.edu>
250 OK
RCPT TO:<alice@cisco.com>
250 OK
DATA
354 Start mail input
See u at conference
.
250 OK
QUIT
221 Closing connection
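In practice the command/response dialogue above is usually driven by a library. The sketch below uses Python's standard smtplib, which issues HELO/EHLO, MAIL FROM, RCPT TO, DATA and QUIT on our behalf; the relay host name is an assumption, and the code will only run where such a relay accepts mail on port 25.

    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg['From'] = 'bob@cs.princeton.edu'
    msg['To'] = 'alice@cisco.com'
    msg['Subject'] = 'conference'
    msg.set_content('See u at conference')

    # 'mail.cs.princeton.edu' is a hypothetical relay used for illustration.
    with smtplib.SMTP('mail.cs.princeton.edu', 25) as client:
        client.send_message(msg)       # library performs the SMTP command exchange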
Message Access Agent (MAA) / Mail Readers
The MAA or mail reader allows a user to retrieve messages in the mailbox from a
remote host, so that the user can perform actions such as reply, forward, etc.
The MAA protocols used are the Post Office Protocol (POP) and the Internet Message Access Protocol (IMAP).
SMTP is a push protocol, whereas POP and IMAP are pull protocols.

Internet Message Access Protocol (IMAP)


IMAP is a client/server protocol running over TCP. The current version is IMAP4.
The client is authenticated in order to access the mailbox.
LOGIN, AUTHENTICATE, SELECT, EXAMINE, CLOSE, LOGOUT, FETCH,
STORE, DELETE, etc., are some of the commands the client can issue
(a library-level sketch follows the list of flags below).
Server responses include OK, NO (no permission), BAD (incorrect command), etc.
When the user asks to FETCH a message, the server returns it in MIME format and the
mail reader decodes it.
Message attributes such as size are also exchanged.
Flags (Seen, Answered, Deleted, Recent) are used by the client to report user actions.
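The sketch below maps a few of the listed commands onto Python's standard imaplib; the server name and credentials are hypothetical and will need real values to run.

    import imaplib

    with imaplib.IMAP4_SSL('imap.example.com') as mbox:   # LOGOUT is issued on exit
        mbox.login('alice', 'secret')                     # LOGIN
        mbox.select('INBOX')                              # SELECT a mailbox
        status, data = mbox.search(None, 'UNSEEN')        # look for new messages
        for num in data[0].split():
            status, msg_data = mbox.fetch(num, '(RFC822)')  # FETCH returns the MIME message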

IMAP4 State transition


List the functions of POP3
POP is simple and limited in functionality. The current version is POP3.
The POP client is installed on the recipient's computer and the POP server on the mail server.
The client opens a connection to the server using TCP on port 110.
The client sends a username and password to access the mailbox and retrieve messages.
POP works in two modes, delete mode and keep mode.
o In delete mode, mail is deleted from the mailbox after retrieval.
o In keep mode, mail is kept in the mailbox after reading, for later retrieval.
List the advantages of IMAP over POP.
IMAP is more powerful and more complex than POP.
User can check the e-mail header prior to downloading.
User can search e-mail for a specific string of characters prior to downloading.
User can download mail partially, which is very useful when bandwidth is limited.
User can create, delete, or rename mailboxes on the mail server.


What is hypertext?
Hypertext is a text that contains embedded URL known as links.
When hypertext is clicked, browser opens a new connection, retrieves file from
the server and displays the file.
Explain WWW or HTTP protocol or URL in detail.
WWW is a distributed client/server service in which a client (a browser such as IE,
Firefox, etc.) accesses services at a server (a web server such as IIS or Apache).
HyperText Transfer Protocol (HTTP) is a stateless request/response protocol that
governs client/server communication, using TCP on port 80.
A Uniform Resource Locator (URL) gives the location of a page on the Web:
http://www.domain_name/filename
When the user enters a URL, the browser forms a request message and sends it to the server.
The web server retrieves the requested URL and sends back a response message.
The web browser renders the response as HTML or in the appropriate format.

Request Message
Request Line
Request Header : Value
Body (optional)
Request Line
The request line contains three fields: request type, URL, and HTTP version.
The HTTP version field specifies the current version of the protocol, i.e., 1.1.
The request type specifies the method that operates on the URL. Some are:

Method     Description
GET        retrieve the document specified by the URL
HEAD       retrieve meta-information about the URL document
PUT        store a document under the specified URL
TRACE      loopback request message (echoing)
DELETE     delete the specified URL
CONNECT    used by proxies

Request Header
Headers defined for request messages include:

Request Header      Description
Authorization       specifies what permissions the client has
From                specifies the e-mail address of the user
Host                specifies the host name and port number of the server
If-Modified-Since   the server sends the URL only if it is newer than the specified date
Referrer            specifies the URL of the linked document
User-agent          specifies the name of the browser

For example, a request message to retrieve the file index.html on host www.cs.princeton.edu is
(a socket-level sketch of this request follows):
GET /index.html HTTP/1.1
Host: www.cs.princeton.edu
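The sketch below sends the request shown above over a plain TCP connection to port 80 and prints the status line; the actual response from that host may of course be a redirect or an error.

    import socket

    request = (
        "GET /index.html HTTP/1.1\r\n"
        "Host: www.cs.princeton.edu\r\n"
        "Connection: close\r\n"
        "\r\n"
    )
    with socket.create_connection(('www.cs.princeton.edu', 80)) as sock:
        sock.sendall(request.encode('ascii'))
        response = b''
        while chunk := sock.recv(4096):
            response += chunk
    print(response.split(b'\r\n', 1)[0].decode())   # status line, e.g. HTTP/1.1 200 OK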
Response Message
Status Line
Response Header : Value
Body
Status Line
The status line contains three fields: HTTP version, status code and status phrase.
The 3-digit status code classifies the HTTP result by its leading digit (1xx informational,
2xx success, 3xx redirection, 4xx client error and 5xx server error).
The status phrase gives a brief description of the status code. Some are:

Code  Phrase                 Description
100   Continue               Initial request received; client may continue
200   OK                     Request is successful
301   Moved Permanently      Requested URL is no longer in use
404   Not Found              Document not found
500   Internal Server Error  An error, such as a crash, at the server site
Response Header
Provides additional information to the client. Some are:

Response Header    Description
Content-Encoding   specifies the encoding scheme
Content-Type       specifies the media type
Expires            gives the date and time up to which the document is valid
Last-Modified      gives the date and time when the document was last updated
Location           specifies the location of the created or moved document

For example, the response for a moved page is:
HTTP/1.1 301 Moved Permanently
Location: http://www.princeton.edu/cs/index.html
TCP Connection
HTTP 1.1 uses persistent connections, i.e., the client and server exchange multiple
messages over the same TCP connection. The advantages are:
o It eliminates connection setup overhead and additional load on the server.
o The congestion window is used efficiently, since the slow start phase is not repeated for each page.

Caching
Caching enables the client to retrieve documents faster and reduces the load on the server.
Caching is implemented at places such as the ISP router, a proxy server, and the browser.
The server sets an expiration date (Expires header) beyond which the page is not cached.
A cached document is checked to see whether it is still a recent copy using the If-Modified-Since header.
A page must not be cached if the no-cache directive is specified.
Define Uniform Resource Identifiers (URI).
A URI is a string that identifies a resource such as a document, image, service, etc.
It has the form scheme:scheme-specific.
The scheme identifies the resource type, such as mailto for a mail address or file for a file
name, and the scheme-specific part identifies the resource. An example is
mailto:skvijaianand@gmail.com

URI identifies a resource, whereas URL is used to locate a resource.


What is non-persistent TCP connection?
With non-persistent connections, a separate TCP connection is opened for each data item retrieved from the server.
For example, a page with text and a dozen graphics requires 13 separate TCP connections.
Two RTTs are incurred to set up each connection, and at least another couple of RTTs
in sending the request and retrieving the response.


Define proxy server.
A proxy server keeps copies of responses sent by the server to recent requests.
The client's request is intercepted by the proxy, which responds if it has a cached copy of the
document, or else forwards the request to the server.
Write a note on web services.
Web services are architectures that offer remotely accessible services with which client
applications can form network applications, such as business-to-business (B2B)
and enterprise application integration (EAI).
An example is an application at Amazon.com tracking the shipping of a book
order by interacting with an application at Fedex.com.
The two web services architectures are WSDL/SOAP (custom) and REST (generic):
WSDL and SOAP are XML-based frameworks for specifying and implementing
application protocols and transport protocols, customized to each network application.
REST treats individual web services as WWW resources, identified by URIs and
accessed via HTTP.
Web Service Description Language (WSDL)
WSDL uses an operation model: a web interface is a set of named operations
that represent interactions between the client and the web service.
Each operation specifies a message exchange pattern (MEP) that gives the
sequence in which the messages are to be transmitted.
Commonly used MEPs are In-Only (a single message from client to service) and
In-Out (a request from the client and a corresponding reply from the service).
Message formats are defined as an abstract data model using XML Schema.
A concrete part specifies how MEPs are mapped onto transports (binding). Predefined
bindings exist for HTTP and SOAP-based protocols.
The specification of a web service may consist of multiple WSDL documents, and these
documents can be reused by other web services.
Each WSDL document specifies the URI of its target XML namespace.
A WSDL document can incorporate components of another by:
o including the second document, if both share the same target namespace, or
o importing it, if the target namespaces differ.
SOAP
SOAP is used to define transport protocols with exactly the features required to support a
particular application protocol.
A SOAP feature specification includes:
o a URI that identifies the feature,
o the state information and processing required at each SOAP node to implement it,
o the information to be relayed to the next node, and
o if the feature is an MEP, the life cycle and temporal relationships of the messages exchanged.
SOAP is bound to an underlying protocol from which it derives its features; this is known as layering.
For example, a request-response protocol is obtained by binding SOAP to HTTP.
A SOAP message consists of an envelope, which contains a header made up of
header blocks, and a body that contains the data.

SOAP message structure


A SOAP header block contains information that corresponds to a particular feature.
A SOAP module is a specification of the syntax and semantics of one or more header blocks.
A given message may be processed not only by the sender and receiver, but also by
intermediate SOAP-aware nodes, according to the SOAP role.


For example, the role next implies that any node can process the block, whereas the role ultimateReceiver specifies that only the final receiver can process it.

Each header block specifies a role. A node processes header blocks that specify
a role assumed by the node and forwards the message.
A SOAP fault is generated if a node does not understand the blocks it should process.
REpresentational State Transfer (REST)
REST web services architecture is based on re-applying the model underlying
the WWW architecture.
In the REST model, complexity is shifted from the protocol to the payload.
The payload is a representation of the abstract state of a resource. For example, a GET returns a representation of the current state of the resource.
Message size is reduced by transmitting parts of the state by reference, i.e., as a URI.
XML and JSON are widely used as presentation languages to define document structure.
REST uses the infrastructure already deployed to support the Web. For example, proxies can enforce security mechanisms.
The Web supports intermediary nodes, as SOAP does. For example, since GET is read-only, nodes can cache the response.
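A minimal sketch of a REST-style GET in Python follows; api.example.com and the resource path are placeholders, and the service is assumed to return JSON.

    import http.client
    import json

    conn = http.client.HTTPConnection("api.example.com")
    conn.request("GET", "/orders/42")   # GET returns a representation of the resource state
    resp = conn.getresponse()
    state = json.loads(resp.read())     # JSON payload carries the representation
    conn.close()
    print(state)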
Compare SOAP/WSDL and REST protocol.
WSDL/SOAP integrates applications via protocols customized to each application, whereas REST adopts a generic approach by using the WWW architecture.
WSDL has user-defined operations, whereas REST uses the HTTP methods GET and POST.
Interoperability in SOAP depends on agreement on the underlying protocols, whereas in REST there is no interoperability problem.


Interfaces of legacy applications map more easily onto WSDL operations than onto REST states.
What is a profile?
A profile is a set of guidelines that narrow choices in standards like WSDL, SOAP, etc.
A widely used profile is the WS-I Basic Profile. It requires that WSDL be bound exclusively to SOAP, and that SOAP be bound exclusively to HTTP using the HTTP POST method.
The WS-I Basic Security Profile adds security constraints to the Basic Profile by specifying how the SSL/TLS layer is to be used.
Explain the role of DNS on a computer network.
A naming service maps user-friendly names to router-friendly addresses, i.e., it acts as middleware.
Domain Hierarchy
Domain Name System (DNS) includes:
o a namespace to define domain names without any collision,
o bindings of domain names to IP addresses, and
o name servers to look up the IP address for a given name.
DNS implements hierarchical name space for domains in the Internet.
DNS names are processed from right to left and use periods (.) as separator.
DNS hierarchy is represented as a tree, where each node is a domain and leaves are hosts.
Six top-level domains (TLDs) are .edu, .com, .gov, .mil, .org and .net. A TLD also exists for each country, e.g., .fr (France), .in (India), etc.


The domain hierarchy is partitioned into zones. Each zone acts as the central authority for that part of the sub-tree. For example, in the .edu domain, princeton is a zone.
Zones can be further sub-divided, such as the CS department under Princeton University.

Part of domain hierarchy


Name Servers (NS)
Root NS is maintained by NIC and TLD name servers are managed by ICANN.
Name servers contain zone information as resource records that bind names to values.
Name servers receive a query and return either the IP address for the domain name or the address of another NS to the client.
A resource record is a 5-tuple with fields <Name, Value, Type, Class, TTL>
o Name field specifies the domain/zone name. It is used as the primary search key.
o Type field indicates what kind of record it is. Commonly used types are:
  NS - Value field contains the address of a name server
  CNAME - Value field contains an alias name for the host
  MX - Value field contains a mail server
  A - Value field contains an IP address
o Class field is always IN for Internet domain names.
o TTL field gives an indication of how long the resource record is valid.

Hierarchy of CS Nameserver
Resource Records
The root name server contains an NS record for each TLD name server and an A record that translates the TLD into the corresponding IP address.
< edu, a3.nstld.com, NS, IN >
< a3.nstld.com, 192.5.6.32, A, IN >

Each TLD name server has an NS record for each zone-level name server and an A record that translates the zone name into the corresponding IP address.
Resource records for the TLD edu name server look like:
< princeton.edu, dns.princeton.edu, NS, IN >
< dns.princeton.edu, 128.112.129.15, A, IN >

Zone name server princeton.edu resolves some queries directly (email.princeton.edu), whereas it redirects others to a server at another layer in the hierarchy (cs.princeton.edu).

< email.princeton.edu, 128.112.198.35, A, IN >


< cs.princeton.edu, dns1.cs.princeton.edu, NS, IN >
< dns1.cs.princeton.edu, 128.112.136.10, A, IN >

Third-level name server cs.princeton.edu contains A records for all hosts on that network.

< penguins.cs.princeton.edu, 128.112.155.166, A, IN >

Name Resolution

1. The client does not know the address of the root name server, so it sends a query about penguins.cs.princeton.edu to the local name server.
2. The local NS sends a query containing penguins.cs.princeton.edu to the root server.
3. The root server finds no exact match for the query. The best match is the NS record for edu, which points to the TLD server a3.nstld.com, so it returns the A record for a3.nstld.com to the local NS.
4. The local NS resends the query to 192.5.6.32, since it has not yet obtained an IP address for the query.
5. The TLD edu server returns the A record (128.112.129.15) for the best zone match, princeton.edu.
6. The local NS resends the query to the zonal NS princeton.edu and receives the A record (128.112.136.10) for cs.princeton.edu.
7. Finally the local NS resends the query to cs.princeton.edu and gets the A record (128.112.155.166) for penguins.cs.princeton.edu.
8. The local NS caches the response and sends it to the client. The client uses the IP address to communicate with the server.
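In practice an application simply asks its stub resolver, which hands the query to the local name server to perform the steps above. A minimal Python sketch is shown below; the host name is the example from these notes and may not resolve on a real network.

    import socket

    # The local resolver/name server walks the hierarchy and returns the A record.
    ip = socket.gethostbyname("penguins.cs.princeton.edu")
    print(ip)    # e.g. 128.112.155.166 in the example above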

What is the role of hosts.txt?


NIC maintained a table of name-to-address bindings called hosts.txt.
Any host that joined the Internet mailed its name and IP address to NIC.
NIC updated hosts.txt and mailed it to all hosts. Thus a host came to know the IP addresses of other hosts.
The Internet grew in the 1980s, after which the hosts.txt approach failed to scale and DNS evolved.
Distinguish between host name and IP address.
A host on a network is uniquely identified by its IP address. It is numeric, of fixed length, and suitable for processing by routers.
Host names are of variable length and mnemonic. They are easier to remember than IP addresses, but do not help in locating a host on the network.
Explain how SNMP is used to manage nodes on the network
Simple Network Management Protocol (SNMP) is an application layer protocol
that monitors and manages routers distributed over a network.
SNMP uses the concept of manager and agent. Manager is a host that runs SNMP
client program (GUI), whereas agent is a router that runs SNMP server program.
SNMP uses services of UDP on two well-known ports: 161 (agent) and 162 (manager).
SNMP is supported by two protocols: Structure of Management Information (SMI) and Management Information Base (MIB).


SMI Object Identifier
SMI defines rules for naming objects using Abstract Syntax Notation One (ASN.1).
Basic Encoding Rules (BER) encoding is used to transmit data over the network.
Object identifiers are hierarchical: they begin at the root and use lexicographic ordering.

MIB Groups
Each agent has its own MIB, which is a collection of objects to be managed.
SNMP objects are located under the mib-2 object, with identifiers beginning with 1.3.6.1.2.1.
MIB-II (version 2) classifies objects under ten groups. Some are:
o sys (system): information about the node such as name, location, lifetime, etc.
o if (interface): information about the interfaces attached to the node such as physical address, packets sent and received on each interface, etc.
o at (address translation): information about the ARP table.
o ip: information about IP such as the routing table, datagrams forwarded/dropped, etc.
o tcp: information related to TCP such as the connection table, time-out value, number of TCP packets sent/received, etc.
o udp: information on UDP traffic such as the number of UDP packets sent/received.

mib-2 Groups
MIB variables
MIB variables are of two types, namely simple and table.
Simple variables are accessed using the group-id (1.3.6.1.2.1.7) followed by the variable-id and 0 (instance suffix). For example, udpInDatagrams is accessed as 1.3.6.1.2.1.7.1.0
Tables are ordered as column-row rules, i.e., column by column from top to bottom.
Only leaf elements are accessible in a table type, with the group-id followed by the table-id, leaf element id and instance suffix. For example, the udpLocalAddress id is 1.3.6.1.2.1.7.5.1.1.3
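The identifiers above can be assembled as shown in this small sketch; the helper names are illustrative only, not part of SNMP or any library.

    UDP_GROUP = "1.3.6.1.2.1.7"            # udp group under mib-2

    def simple_oid(group, variable_id):
        # simple variable: group-id + variable-id + ".0" instance suffix
        return f"{group}.{variable_id}.0"

    def table_oid(group, table_id, entry_id, column_id, instance):
        # table variable: group-id + table-id + entry + column + instance suffix
        return f"{group}.{table_id}.{entry_id}.{column_id}.{instance}"

    print(simple_oid(UDP_GROUP, 1))          # 1.3.6.1.2.1.7.1.0     -> udpInDatagrams
    print(table_oid(UDP_GROUP, 5, 1, 1, 3))  # 1.3.6.1.2.1.7.5.1.1.3 -> udpLocalAddress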

UDP variables
Protocol Data Unit (PDU)
SNMP is a request/reply protocol that supports various operations using PDUs:
o GET: used by the manager to retrieve the value of an agent variable.
o GET-NEXT: used by the manager to retrieve the next entry in an agent's table.
o SET: used by the manager to set the value of an agent's variable.
o RESPONSE: sent from an agent to the manager in response to GET/GET-NEXT; it contains the value of the variable(s).
o TRAP: sent from an agent to the manager to report an event such as a reboot.
When the administrator selects a piece of information, the manager puts the identifier of the MIB variable into a request message and sends it to the agent.
The agent maps the identifier to the variable, retrieves its value, and sends the encoded value back to the manager.
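A conceptual sketch of that GET/RESPONSE exchange follows; the dictionary stands in for an agent's MIB, the value is made up for illustration, and no real SNMP encoding is performed.

    agent_mib = {"1.3.6.1.2.1.7.1.0": 524301}   # udpInDatagrams, fabricated value

    def snmp_get(oid):
        # The manager sends a GET PDU naming the OID over UDP port 161; the agent
        # looks the variable up in its MIB and returns a RESPONSE PDU with the
        # value. Here the whole exchange is reduced to a dictionary lookup.
        return agent_mib[oid]

    print(snmp_get("1.3.6.1.2.1.7.1.0"))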
