
Topics: UDP, TCP, Adaptive Flow Control, Adaptive Retransmission, Congestion Control, Congestion Avoidance, QoS

TCP

Stands for Transmission Control Protocol.

TCP provides a connection-oriented, reliable, byte-stream service. The term connection-oriented means the two applications using TCP must establish a TCP connection with each other before they can exchange data. It is a full-duplex protocol, meaning that each TCP connection supports a pair of byte streams, one flowing in each direction. TCP includes a flow-control mechanism for each of these byte streams that allows the receiver to limit how much data the sender can transmit. TCP also implements a congestion-control mechanism.
TCP frame format
TCP data is encapsulated in an IP datagram. The figure shows the
format of the TCP header. Its normal size is 20 bytes unless options
are present. Each of the fields is discussed below:

Source port (16 bits) identifies the sending port.

Destination port (16 bits) identifies the receiving port.

Sequence number (32 bits) identifies the byte in the stream of data from the sending TCP to the receiving TCP that the first byte of data in this segment represents.

Acknowledgment number (32 bits) if the ACK flag is set, then the value of this field is the next sequence number that the receiver is expecting.

Header length (4 bits) specifies the size of the TCP header in 32-bit words.

Reserved (4 bits) for future use and should be set to zero


URG (1 bit) indicates that the Urgent pointer field is significant
ACK (1 bit) indicates that the Acknowledgment field is
significant. All packets after the initial SYN packet sent by the
client should have this flag set.
PSH (1 bit) Push function. Asks to push the buffered data to
the receiving application.
RST (1 bit) Reset the connection
SYN (1 bit) Synchronize sequence numbers. Only the first
packet sent from each end should have this flag set. Some other
flags change meaning based on this flag, and some are only
valid for when it is set, and others when it is clear.
FIN (1 bit) No more data from sender
Window (16 bits) the size of the receive window, which
specifies the number of bytes (beyond the sequence number in
the acknowledgment field) that the receiver is currently willing to
receive
Checksum (16 bits) The 16-bit checksum field is used for
error-checking of the header and data
Urgent pointer (16 bits) if the URG flag is set, then this 16-bit
field is an offset from the sequence number indicating the last
urgent data byte
Options (Variable 0-320 bits, divisible by 32) The length of this
field is determined by the data offset field. Options 0 and 1 are a
single byte (8 bits) in length
The data portion of the TCP segment is optional.
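To make the header layout concrete, here is a minimal Python sketch (not part of the original notes) that unpacks the fixed 20-byte header using the field sizes listed above:

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Unpack the fixed 20-byte TCP header described above."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    header_len = (offset_flags >> 12) * 4   # header length field is in 32-bit words
    flags = offset_flags & 0x3F             # low 6 bits: URG, ACK, PSH, RST, SYN, FIN
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack, "header_len": header_len,
        "URG": bool(flags & 0x20), "ACK": bool(flags & 0x10),
        "PSH": bool(flags & 0x08), "RST": bool(flags & 0x04),
        "SYN": bool(flags & 0x02), "FIN": bool(flags & 0x01),
        "window": window, "checksum": checksum, "urgent_ptr": urgent,
    }
```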

TCP vs UDP

TCP (Transmission Control Protocol):
- Slower than UDP
- Reliable
- Connection-oriented
- Provides flow control and error control
- Offers error checking and guaranteed delivery
- Sends larger packets
- Ordered message delivery
- Heavyweight: when a segment arrives in the wrong order, resend requests have to be sent, and all the out-of-sequence parts have to be put back together
- If any segments are lost, the receiver sends an ACK to indicate the lost segments

UDP (User Datagram Protocol):
- Faster than TCP
- Unreliable
- Connectionless
- No flow control
- Does not offer error checking or guaranteed delivery
- Sends smaller packets
- Unordered message delivery
- Lightweight: no ordering of messages and no tracking of connections
- No acknowledgements
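The contrast shows up directly in the Berkeley sockets API. A minimal sketch with a hypothetical local server address: TCP must connect (running the handshake described below) before sending its byte stream, while UDP simply addresses each datagram:

```python
import socket

# TCP: connection-oriented; connect() runs the three-way handshake first.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.connect(("127.0.0.1", 9000))    # hypothetical server address
tcp.sendall(b"ordered, reliable byte stream")
tcp.close()

# UDP: connectionless; each datagram is addressed individually, no handshake.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"independent datagram", ("127.0.0.1", 9000))
udp.close()
```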

TCP Connection establishment and Termination


The algorithm used by TCP to establish and terminate a connection is
called a three-way handshake.
Three-way handshake for connection establishment
It involves the exchange of three messages between client and server, as illustrated by the timeline given in the figure.
1 The client sends a SYN segment with the starting sequence number (x) it plans to use.
2 The server responds with a SYN+ACK segment that acknowledges the client's sequence number (x+1) and starts its own sequence number (y).
3 Finally, the client responds with a third segment that acknowledges the server's sequence number (y+1).
Three-way handshake for connection termination
Three-way handshaking for connection termination is shown in the figure.
1 The client TCP, after receiving a close command from the client process, sends the first segment, a FIN segment. The FIN segment consumes one sequence number (x).
2 The server TCP, after receiving the FIN segment, informs its process of the situation and sends the second segment, a FIN+ACK segment, to confirm the receipt of the FIN segment from the client and at the same time to announce the closing of the connection in the other direction. The FIN+ACK segment consumes one sequence number (y).
3 The client TCP sends the last segment, an ACK segment, to confirm the receipt of the FIN+ACK segment from the TCP server. This segment contains the acknowledgment number (y+1), which is 1 plus the sequence number received in the FIN+ACK segment from the server.
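As a sketch of the sequence-number arithmetic in both handshakes (illustrative initial values x and y, printed rather than transmitted):

```python
# Illustrative trace of the two handshakes, using the x/y notation above.
x, y = 1000, 5000                     # client's and server's initial sequence numbers

# Connection establishment (three-way handshake)
print(f"client -> server: SYN, seq={x}")
print(f"server -> client: SYN+ACK, seq={y}, ack={x + 1}")
print(f"client -> server: ACK, ack={y + 1}")

# Connection termination (three-way handshake)
print(f"client -> server: FIN, seq={x}")           # FIN consumes one sequence number
print(f"server -> client: FIN+ACK, seq={y}, ack={x + 1}")
print(f"client -> server: ACK, ack={y + 1}")
```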
TCP's State Transition Diagram
TCP uses a finite state machine to keep track of all the events happening during connection establishment, termination and data transfer. The following figure shows this.

State         Description
-----         -----------
CLOSED        No connection
LISTEN        The server is waiting for a call from the client
SYN-SENT      A connection request is sent; waiting for ACK
SYN-RECD      A connection request is received
ESTABLISHED   A connection is established
FIN-WAIT1     The application has requested the closing of the connection
FIN-WAIT2     The other side has accepted the closing of the connection request
TIME-WAIT     Waiting for retransmitted segments to die
CLOSE-WAIT    The server is waiting for the application to close
LAST-ACK      The server is waiting for the last ACK

Client program
The client program can be in one of the following states: CLOSED, SYN-SENT, ESTABLISHED, FIN-WAIT1, FIN-WAIT2 and TIME-WAIT.
1 The client TCP starts in the CLOSED state.
2 The client TCP can receive an active open request from the client application program. It sends a SYN segment to the server TCP and goes to the SYN-SENT state.
3 While in this state, the client TCP can receive a SYN+ACK segment from the other TCP and go to the ESTABLISHED state. This is the data transfer state. The client remains in this state as long as it is sending and receiving data.
4 While in this state, the client can receive a close request from the client application program. It sends a FIN segment to the other TCP and goes to the FIN-WAIT1 state.
5 While in this state, the client TCP waits to receive an ACK from the server TCP. When the ACK is received, it goes to the FIN-WAIT2 state. It does not send anything. Now the connection is closed in one direction.
6 The client remains in this state, waiting for the server TCP to close the connection from the other end. If the client TCP receives a FIN segment from the other end, it sends an ACK and goes to the TIME-WAIT state.
7 When the client is in this state, it starts the time-waited timer and waits until this timer goes off. After the timeout the client goes to the CLOSED state.
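A minimal sketch of the client-side transitions listed above as a lookup table; the event labels are my own shorthand:

```python
# Client-side TCP state transitions from the steps above.
# Keys: (state, event) -> next state; event names are illustrative.
CLIENT_FSM = {
    ("CLOSED",      "active_open/send SYN"):  "SYN-SENT",
    ("SYN-SENT",    "recv SYN+ACK/send ACK"): "ESTABLISHED",
    ("ESTABLISHED", "close/send FIN"):        "FIN-WAIT1",
    ("FIN-WAIT1",   "recv ACK"):              "FIN-WAIT2",
    ("FIN-WAIT2",   "recv FIN/send ACK"):     "TIME-WAIT",
    ("TIME-WAIT",   "timer expires"):         "CLOSED",
}

state = "CLOSED"
for event in ["active_open/send SYN", "recv SYN+ACK/send ACK", "close/send FIN",
              "recv ACK", "recv FIN/send ACK", "timer expires"]:
    state = CLIENT_FSM[(state, event)]
    print(event, "->", state)
```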
Server program
The server program can be in one of the following states: CLOSED, LISTEN, SYN-RECD, ESTABLISHED, CLOSE-WAIT, LAST-ACK.
1 The server TCP starts in the CLOSED state.
2 While in this state, the server TCP can receive a passive open request from the server application program. It goes to the LISTEN state.
3 While in this state, the server TCP can receive a SYN segment from the client TCP. It sends a SYN+ACK segment to the client TCP and goes to the SYN-RECD state.
4 While in this state, the server TCP can receive an ACK segment from the client TCP. It then goes to the ESTABLISHED state. This is the data transfer state. The server remains in this state as long as it is receiving and sending data.
5 While in this state, the server TCP can receive a FIN segment from the client, which means that the client wishes to close the connection. It sends an ACK segment to the client and goes to the CLOSE-WAIT state.
6 While in this state, the server waits until it receives a close request from the server application program. It then sends a FIN segment to the client and goes to the LAST-ACK state.
7 While in this state, the server waits for the last ACK segment. When it arrives, the server goes to the CLOSED state.
TCP Flow control

Flow control defines the amount of data a source can send before receiving an ACK from the receiver. The sliding window protocol is used by TCP for flow control.

Sliding window for flow control by TCP

The sliding window serves several purposes:
(1) It guarantees the reliable delivery of data.
(2) It ensures that the data is delivered in order.
(3) It enforces flow control between the sender and the receiver.
For reliable and ordered delivery
The sending and receiving sides of TCP interact in the following
manner to implement reliable and ordered delivery:
Each byte has a sequence number.
ACKs are cumulative.
Sending side
o LastByteAcked <=LastByteSent
o LastByteSent <= LastByteWritten
o bytes between LastByteAcked and LastByteWritten must
be buffered.
Receiving side
o LastByteRead < NextByteExpected
o NextByteExpected <= LastByteRcvd + 1
o bytes between NextByteRead and LastByteRcvd must be
buffered.
For flow control
Sender buffer size : MaxSendBuffer
Receive buffer size : MaxRcvBuffer
Receiving side
o LastByteRcvd - NextByteRead <= MaxRcvBuffer
o AdvertisedWindow = MaxRcvBuffer - (LastByteRcvd - NextByteRead)
Sending side
o LastByteSent - LastByteAcked <= AdvertisedWindow
o EffectiveWindow = AdvertisedWindow - (LastByteSent - LastByteAcked)
o LastByteWritten - LastByteAcked <= MaxSendBuffer

Block sender if (LastByteWritten - LastByteAcked) + y > MaxSendBuffer, where y is the number of bytes the application is writing.

Always send ACK in response to an arriving data segment
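To make the window arithmetic concrete, here is a minimal Python sketch of the receiver's AdvertisedWindow and the sender's EffectiveWindow, using the variable names above (the buffer size is an assumed value):

```python
MAX_RCV_BUFFER = 16384   # assumed receiver buffer size in bytes

def advertised_window(last_byte_rcvd: int, next_byte_read: int) -> int:
    """Receiver: free buffer space it is willing to accept."""
    return MAX_RCV_BUFFER - (last_byte_rcvd - next_byte_read)

def effective_window(adv_window: int, last_byte_sent: int, last_byte_acked: int) -> int:
    """Sender: how much new data it may still transmit."""
    return adv_window - (last_byte_sent - last_byte_acked)

adv = advertised_window(last_byte_rcvd=9000, next_byte_read=5000)   # 12384 bytes free
eff = effective_window(adv, last_byte_sent=9000, last_byte_acked=7000)
print(adv, eff)   # the sender may transmit at most `eff` more bytes
```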


Silly Window Syndrome:
A problem can occur in the sliding window operation when either the sender creates data slowly, the receiver consumes data slowly, or both. This situation is called the silly window syndrome.
Solution for creating data slowly by the sender
1 Nagle's algorithm (sketched in code below)
Step 1: The sender sends the first segment of data even if it is only 1 byte.
Step 2: After sending the segment, the sender accumulates data in the output buffer and waits until either an ACK arrives from the receiver or enough data has accumulated to fill a maximum-size segment. At this time TCP can send the next segment.
Step 3: Step 2 is repeated for the rest of the transmission. Segment 3 is sent if an ACK for segment 2 is received or enough data has accumulated to fill a maximum-size segment.
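A simplified sketch of Nagle's send test; MSS is an assumed value, and real TCP tracks this state inside the kernel:

```python
MSS = 1460   # assumed maximum segment size, in bytes

def nagle_should_send(buffered: int, unacked_data_in_flight: bool) -> bool:
    """Send now if a full segment is ready or nothing is awaiting an ACK;
    otherwise keep accumulating data in the output buffer."""
    return buffered >= MSS or not unacked_data_in_flight

print(nagle_should_send(1, unacked_data_in_flight=False))    # True: first byte goes out
print(nagle_should_send(200, unacked_data_in_flight=True))   # False: wait for ACK or full MSS
print(nagle_should_send(1460, unacked_data_in_flight=True))  # True: full segment ready
```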
Solution for consuming data slowly by the receiver
1 Clark's algorithm
Send an ACK as soon as data arrive, but announce a window size of zero until either there is enough space to accommodate a segment of maximum size in the buffer or half of the buffer is empty.
2 Delayed acknowledgements
Delay sending the ACK. This means that when a segment arrives, it is not acknowledged immediately; the receiver waits until there is space in its buffer.
TCP Timers
The TCP uses the following timers during the data transmission
Retransmission timer
Persistence timer
Keep alive timer
Time waited timer
Retransmission timer: used to calculate the retransmission time of the next segment.
Retransmission time = 2 x RTT, where RTT is the round-trip time.
RTT = α x (previous RTT) + (1 - α) x (current RTT)

Persistence timer: when the sender receives an ACK with a window size of zero, it starts the persistence timer. When it goes off, the sender sends a special segment called a probe. The probe alerts the receiver that the ACK was lost and should be resent.
Keep alive timer: used to prevent a long idle connection between sender and receiver.
Time waited timer: used during connection termination.
Adaptive Retransmission

TCP uses multiple timers. The most important timer is the retransmission timer.
Adaptive retransmission is used for retransmission timer management.
When a segment is sent, a retransmission timer is started. If the segment is acknowledged before the timer expires, the timer is stopped. On the other hand, if a timeout occurs before the acknowledgement arrives, the segment is retransmitted.
How long should the timeout interval be?
Choosing an appropriate timeout value is not easy. The length of time we use for the retransmission timer is very important. If it is set too low, we may start retransmitting a segment too early; if we set the timer too long, we waste time, which reduces overall performance.
To address this problem, TCP uses an adaptive retransmission mechanism.
The solution is to use a dynamic algorithm that constantly adjusts the timeout interval based on continuous measurement of network performance.
1. Original Algorithm

Adaptive retransmission based on round-trip time (RTT) calculations.
Measure SampleRTT for each segment/ACK pair: every time TCP sends a data segment, it records the time. When an ACK for that segment arrives, TCP reads the time again and takes the difference between these two times as the SampleRTT.
Compute a weighted average of the RTT:
EstimatedRTT = α x EstimatedRTT + (1 - α) x SampleRTT
where α is between 0.8 and 0.9.
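As a sketch of the original algorithm's weighted average (α = 0.875 chosen here as an assumption within the stated 0.8-0.9 range):

```python
ALPHA = 0.875   # assumed smoothing factor, within the 0.8-0.9 range above

def update_estimated_rtt(estimated_rtt: float, sample_rtt: float) -> float:
    """EstimatedRTT = alpha * EstimatedRTT + (1 - alpha) * SampleRTT"""
    return ALPHA * estimated_rtt + (1 - ALPHA) * sample_rtt

est = 100.0   # ms, assumed starting estimate
for sample in [110.0, 90.0, 150.0]:
    est = update_estimated_rtt(est, sample)
    print(f"SampleRTT={sample:.0f} ms -> EstimatedRTT={est:.1f} ms, TimeOut={2 * est:.1f} ms")
```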

2. Karn/Partridge Algorithm

Used to calculate an accurate SampleRTT.
A problem occurs with the dynamic (original) algorithm when estimating the RTT. The following figure illustrates this.

Whenever a segment is retransmitted and then an ACK arrives at the sender, it is impossible to determine whether this ACK should be associated with the first or the second transmission of the segment for the purpose of measuring the sample RTT. To compute an accurate SampleRTT, it is necessary to know which transmission the ACK is associated with.
If you assume that the ACK is for the original transmission but it was really for the second, then the SampleRTT is too large, as shown in (a).
If you assume that the ACK is for the second transmission but it was actually for the first, then the SampleRTT is too small, as shown in (b).
The solution is simple:
Do not measure SampleRTT when retransmitting a segment; instead, measure SampleRTT only for segments that have been sent exactly once.
Double the timeout after each retransmission.
Set the timeout based on EstimatedRTT: TimeOut = 2 x EstimatedRTT.

3. Jacobson/Karels Algorithm

Used to compute an accurate timeout value.
According to the original algorithm, the timeout is calculated as TimeOut = 2 x EstimatedRTT. Experience showed that the constant value (2) was inflexible because it failed to respond when the variance went up.
In the new approach, the sender measures a new SampleRTT as before. It then calculates the timeout as follows:
Difference = SampleRTT - EstimatedRTT
EstimatedRTT = EstimatedRTT + (δ x Difference)
Deviation = Deviation + δ x (|Difference| - Deviation)
where δ is a fraction between 0 and 1. That is, we calculate both the mean RTT and the variation in that mean.
TCP then computes the timeout value as a function of both EstimatedRTT and Deviation as follows:
TimeOut = μ x EstimatedRTT + φ x Deviation
where, based on experience, μ is typically set to 1 and φ is set to 4.
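A minimal sketch of one Jacobson/Karels update, with δ = 0.125 assumed (a commonly cited value) and μ = 1, φ = 4 as stated above:

```python
DELTA, MU, PHI = 0.125, 1.0, 4.0   # delta is an assumed fraction in (0, 1)

def jacobson_karels(est_rtt, deviation, sample_rtt):
    """One update of EstimatedRTT, Deviation and TimeOut per the formulas above."""
    difference = sample_rtt - est_rtt
    est_rtt = est_rtt + DELTA * difference
    deviation = deviation + DELTA * (abs(difference) - deviation)
    timeout = MU * est_rtt + PHI * deviation
    return est_rtt, deviation, timeout

est, dev = 100.0, 10.0   # ms, assumed starting values
for sample in [120.0, 80.0, 200.0]:
    est, dev, timeout = jacobson_karels(est, dev, sample)
    print(f"EstimatedRTT={est:.1f}, Deviation={dev:.1f}, TimeOut={timeout:.1f}")
```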

TCP Congestion Control

Congestion occurs when the load on the network exceeds its capacity.
Congestion control refers to the mechanisms and techniques used to control congestion and keep the load below capacity: techniques and mechanisms that can either prevent congestion or remove it.
An open-loop policy is applied to prevent congestion before it happens; a closed-loop policy is applied to reduce congestion after it happens.
TCP contains four algorithms for congestion control:
1 Slow start
2 Additive Increase and Multiplicative Decrease (AIMD)
3 Fast retransmit and fast recovery
4 Equation-based congestion control
1 Slow start (SS) algorithm

The sender starts by transmitting one segment and waiting for its ACK.
When that ACK is received, the congestion window is incremented from one to two, and two segments can be sent.
When each of those two segments is acknowledged, the congestion window is increased to four.
This provides exponential growth (2^0, 2^1, 2^2, and so on).
Slow start cannot continue indefinitely; there must be a threshold to stop this phase. When the window size reaches this threshold, slow start stops and the next phase starts.

2 Additive Increase and Multiplicative Decrease (AIMD)

To avoid congestion before it happens, it is necessary to stop this exponential growth. TCP performs another algorithm called additive increase.
Additive Increase (AI)
o When the congestion window reaches the threshold value, the size of the congestion window is increased by 1.
o TCP increases the congestion window additively until a timeout occurs.
Multiplicative Decrease (MD)
o If a timeout occurs, the threshold is set to one half of the last congestion window size, and the congestion window starts from 1 again.
o The threshold is reduced to one half of the previous congestion window size each time a timeout occurs; that is, the threshold is decreased in a multiplicative manner.
The following graph illustrates TCP congestion control; a small simulation of slow start plus AIMD follows below.
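A toy simulation (not real TCP code) of the congestion window growing by slow start, switching to additive increase at the threshold, and applying multiplicative decrease on an assumed timeout:

```python
def simulate_cwnd(rounds: int, threshold: int = 16):
    """Toy trace: slow start (exponential), then additive increase,
    with multiplicative decrease of the threshold on a timeout."""
    cwnd = 1                               # congestion window, in segments
    for rtt in range(rounds):
        print(f"RTT {rtt}: cwnd={cwnd}, ssthresh={threshold}")
        if cwnd < threshold:
            cwnd *= 2                      # slow start: double per RTT
        else:
            cwnd += 1                      # additive increase: +1 per RTT
        if rtt == rounds - 2:              # pretend a timeout happens near the end
            threshold = max(cwnd // 2, 1)  # multiplicative decrease of the threshold
            cwnd = 1                       # restart from slow start
    return cwnd, threshold

simulate_cwnd(12)
```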

3 Fast retransmit and Fast Recovery


Fast retransmission

The idea of fast retransmit is straightforward. Every time a data packet arrives at the receiving side, the receiver responds with an acknowledgment.
When a packet arrives out of order (i.e., an earlier packet has not yet been received), the receiver resends the same acknowledgment it sent the last time. This second transmission of the same acknowledgment is called a duplicate ACK.
When the sender sees a duplicate ACK, it knows that the receiver must have received a packet out of order, which suggests that an earlier packet might have been lost.
The sender waits until it sees some number of duplicate ACKs and then retransmits the missing packet. In practice, TCP waits until it has seen three duplicate ACKs before retransmitting the packet.
After the retransmission of the lost segment, the receiver will send a cumulative ACK to the sender.
The following figure illustrates this.

In this example, the destination receives packets 1 and 2, but packet 3 is lost in the network. Thus, the destination will send a duplicate ACK for packet 2 when packet 4 arrives, again when packet 5 arrives, and so on. When the sender sees the third duplicate ACK for packet 2 (the one sent because the receiver had gotten packet 6), it retransmits packet 3. When the retransmitted copy of packet 3 arrives at the destination, the receiver then sends a cumulative ACK for everything up to and including packet 6 back to the source.
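A toy sketch of the duplicate-ACK behavior in this example; the receiver logic and the three-duplicate-ACK trigger follow the description above:

```python
def receiver_acks(received_packets):
    """Toy receiver: cumulatively ACK the highest in-order packet;
    out-of-order arrivals repeat the last ACK (duplicate ACKs)."""
    expected, acks, buffered = 1, [], set()
    for pkt in received_packets:
        if pkt == expected:
            expected += 1
            while expected in buffered:    # absorb any buffered in-order packets
                expected += 1
        else:
            buffered.add(pkt)
        acks.append(expected - 1)          # cumulative ACK for last in-order packet
    return acks

# Packet 3 is lost: packets 4, 5 and 6 each trigger a duplicate ACK for 2.
acks = receiver_acks([1, 2, 4, 5, 6])
print(acks)                                # [1, 2, 2, 2, 2]
dup_count = acks.count(acks[-1]) - 1       # three duplicate ACKs for packet 2
print("fast retransmit" if dup_count >= 3 else "wait")
```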
Fast recovery

When the fast retransmit mechanism signals congestion, rather than dropping the congestion window all the way back and starting slow start, it is possible to use the ACKs that are still in the pipe to clock the sending of packets. This mechanism is called fast recovery.
It removes the slow start phase, cuts the congestion window in half, and resumes additive increase.

4 Equation-based congestion control

In TCP, the data transmission rate is modeled by the following equation:

Rate ∝ 1 / (RTT x √ρ)

which says the transmission rate must be inversely proportional to the round-trip time (RTT) and to the square root of the loss rate (ρ).
TCP congestion avoidance
Congestion avoidance refers to mechanisms that predict when congestion is about to happen and reduce the rate of data transmission before packets are discarded. Congestion avoidance can be either:
1 router-centric (router-based congestion avoidance): a) DECbit and b) RED gateways
2 host-centric (source-based congestion avoidance): c) TCP Vegas

a) DECbit

Destination Experiencing Congestion bit.
The idea is to more evenly split the responsibility for congestion control between the routers and the end nodes. Each router monitors the load it is experiencing and explicitly notifies the end nodes when congestion is about to occur. This notification is implemented by setting a binary congestion bit in the packets that flow through the router; hence the name DECbit.
The destination host then copies this congestion bit into the ACK it sends back to the source. Finally, the source adjusts its sending rate so as to avoid congestion.
A bit is added to the packet header to signify congestion. A router sets this bit in a packet if its average queue length is greater than or equal to 1 at the time the packet arrives. The router monitors the average queue length over the last busy+idle cycle.
The figure shows the queue length at a router as a function of time. The router calculates the area under the curve and divides this value by the time interval to compute the average queue length.

[Figure: queue length at a router as a function of time, showing the previous cycle, the current cycle, and the averaging interval.]
The router sets the congestion bit if the average queue length > 1.
The router attempts to balance throughput against delay.
The algorithm uses a threshold of 50%. If less than 50% of the ACKs for a connection have the congestion bit set, the source increases its CongestionWindow by 1. Once more than 50% of the ACKs have the congestion bit set, the source decreases the CongestionWindow to 0.875 times the previous value.
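A sketch of the source-side DECbit adjustment described above (the 0.875 factor follows the multiplicative-decrease rule):

```python
def decbit_adjust(cwnd: float, acks_with_bit: int, total_acks: int) -> float:
    """Source-side DECbit rule: if fewer than 50% of ACKs carry the
    congestion bit, add 1; otherwise multiply the window by 0.875."""
    if total_acks and acks_with_bit / total_acks < 0.5:
        return cwnd + 1        # additive increase
    return cwnd * 0.875        # multiplicative decrease

print(decbit_adjust(10, acks_with_bit=2, total_acks=10))   # 11
print(decbit_adjust(10, acks_with_bit=7, total_acks=10))   # 8.75
```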

b) Random Early Detection (RED)

This is a proactive approach in which the router discards one or more packets before the buffer becomes completely full.
First, each time a packet arrives, the RED algorithm computes the average queue length, AvgLen:
AvgLen = (1 - Weight) x AvgLen + Weight x SampleLen
where 0 < Weight < 1 and SampleLen is the length of the queue when a sample measurement is made.
Second, RED has two queue length thresholds that trigger certain activity: MinThreshold and MaxThreshold. When a packet arrives at the gateway, RED compares the current AvgLen with these two thresholds, according to the following rules:
if AvgLen <= MinThreshold
    queue the packet
if MinThreshold < AvgLen < MaxThreshold
    calculate probability P
    drop the arriving packet with probability P
if MaxThreshold <= AvgLen
    drop the arriving packet
That is, if the average queue length is smaller than the lower threshold, no action is taken; if the average queue length is larger than the upper threshold, the packet is always dropped. If the average queue length is between the two thresholds, then the newly arriving packet is dropped with some probability P. This situation is depicted in the following figure.
The approximate relationship between P and AvgLen is shown in the following figure.
To summarize:
If AvgLen is lower than the lower threshold, congestion is assumed to be minimal or non-existent and the packet is queued.
If AvgLen is greater than the upper threshold, congestion is assumed to be serious and the packet is discarded.
If AvgLen is between the two thresholds, this might indicate the onset of congestion; the drop probability P is then calculated, as sketched below.
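A sketch of one RED arrival decision; Weight, the thresholds, and MaxP are assumed values, and the linear ramp for P between the thresholds is the standard RED choice implied by the figure:

```python
WEIGHT = 0.002                             # assumed averaging weight, 0 < Weight < 1
MIN_TH, MAX_TH, MAX_P = 5.0, 15.0, 0.1     # assumed thresholds (packets) and max probability

def red_on_arrival(avg_len: float, sample_len: int):
    """One RED decision: update AvgLen, then queue / probabilistically drop / drop."""
    avg_len = (1 - WEIGHT) * avg_len + WEIGHT * sample_len
    if avg_len <= MIN_TH:
        action = "queue"
    elif avg_len < MAX_TH:
        # drop probability grows linearly between the two thresholds
        p = MAX_P * (avg_len - MIN_TH) / (MAX_TH - MIN_TH)
        action = f"drop with probability {p:.3f}"
    else:
        action = "drop"
    return avg_len, action

print(red_on_arrival(avg_len=8.0, sample_len=12))
```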
c) TCP Vegas
TCP Vegas was the nickname used for the next version of the TCP/IP UNIX stack. This approach is host-centric. The basis of the technique is to compare the change in window size with the change in RTT:

(CurrentWindow - OldWindow) x (CurrentRTT - OldRTT)

When this value is greater than zero, assume congestion is approaching and incrementally decrease the window. When the value is negative or zero, incrementally increase the window size.
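A sketch of the window adjustment rule exactly as given in these notes (real TCP Vegas compares expected and actual throughput; the notes' simplified product rule is what is shown here):

```python
def vegas_adjust(window, old_window, rtt, old_rtt):
    """Rule from the notes: if window and RTT grew together,
    congestion is approaching, so back off; otherwise grow."""
    signal = (window - old_window) * (rtt - old_rtt)
    return window - 1 if signal > 0 else window + 1

print(vegas_adjust(window=12, old_window=10, rtt=110, old_rtt=100))  # 11: back off
print(vegas_adjust(window=12, old_window=10, rtt=95,  old_rtt=100))  # 13: grow
```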

QUALITY OF SERVICE (QoS)

There are some techniques that can be used to improve the quality of service. Four common methods are:
1 scheduling
2 traffic shaping
3 admission control
4 resource reservation
1. Scheduling

Several scheduling techniques are designed to improve the quality of service. Two of them are:
a) FIFO queuing
b) Fair queuing (FQ) / weighted fair queuing (WFQ)
a) FIFO queuing

In first-in, first-out (FIFO) queuing, packets wait in a buffer (queue) until they are processed by the node.
FIFO combines two separable ideas:
o Scheduling discipline: it determines the order in which packets are transmitted.
o Tail drop (a drop policy): it determines which packets get dropped.
If the average arrival rate is greater than the average processing rate, the queue will fill up and new packets will be discarded.

b) Fair queuing (weighted fair queuing)

A better scheduling method:
The packets are assigned to different flows and admitted to different queues.
The queues are weighted based on the priority of the queues; higher priority means a higher weight.
The system processes packets in each queue in a round-robin fashion; the number of packets selected for processing is based on the corresponding weight.
For example, if the weights are 3, 2, and 1, three packets are processed from the first queue, two from the second queue, and one from the third queue. (A round-robin sketch follows below.)
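A sketch of the weighted round-robin service order from the example above (the queue contents are made up):

```python
from collections import deque

def weighted_round_robin(queues, weights, rounds=1):
    """Serve queues in round-robin order; take up to `weight`
    packets from each queue per round, as in the example above."""
    served = []
    for _ in range(rounds):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q:
                    served.append(q.popleft())
    return served

queues = [deque(["A1", "A2", "A3"]), deque(["B1", "B2"]), deque(["C1"])]
print(weighted_round_robin(queues, weights=[3, 2, 1]))
# ['A1', 'A2', 'A3', 'B1', 'B2', 'C1']: 3 from queue 1, 2 from queue 2, 1 from queue 3
```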
2. Traffic Shaping

Traffic shaping is a mechanism to control the amount and the rate of the traffic sent to the network. Two techniques can shape traffic:
a) leaky bucket
b) token bucket

a) Leaky bucket - sends data at a regulated rate

If a bucket has a small hole at the bottom, the water leaks from the bucket at a constant rate as long as there is water in the bucket. The rate at which the water leaks does not depend on the rate at which water is poured into the bucket, unless the bucket is empty. The input rate can vary, but the output rate remains constant.
The following steps are performed:
When the host has to send a packet, the packet is thrown into the bucket.
The bucket leaks at a constant rate, meaning the network interface transmits packets at a constant rate.
Bursty traffic is converted to uniform traffic by the leaky bucket.
In practice the bucket is a finite queue that outputs at a finite rate. Whenever a packet arrives, if there is room in the queue it is queued up; if there is no room, the packet is discarded.
b) Token bucket - allows a bursty amount of data

In this algorithm the bucket holds tokens, generated at regular intervals. The steps, sketched in code below, are:
1 At regular intervals tokens are thrown into the bucket. The bucket has a maximum capacity.
2 If there is a ready packet, a token is removed from the bucket, and the packet is sent.
3 If there is no token in the bucket, the packet cannot be sent.
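A minimal token-bucket sketch following the three steps above; the rate and capacity are assumed numbers:

```python
import time

class TokenBucket:
    """Token bucket per the steps above: tokens accrue at `rate` per
    second up to `capacity`; a packet needs one token to be sent."""
    def __init__(self, rate: float, capacity: float):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def try_send(self) -> bool:
        now = time.monotonic()
        # step 1: tokens are added at regular intervals, up to the maximum
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1           # step 2: remove a token and send
            return True
        return False                   # step 3: no token, the packet must wait

bucket = TokenBucket(rate=2.0, capacity=5)    # assumed numbers
print([bucket.try_send() for _ in range(7)])  # a burst of 5 passes, then throttled
```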

Combining Token Bucket and Leaky Bucket

The two techniques can be combined to credit an idle host and at the same time regulate the traffic.
3. Admission Control

Different approaches can be followed:
The first approach: once congestion has been signaled, do not set up new connections. This type of approach is often used in normal telephone networks: when the exchange is overloaded, no new calls are established.

The second approach: allow new virtual connections, but route them carefully so that no congested router or problem area is part of the route.

The third approach: negotiate parameters between the host and the network when the connection is set up. During setup, the host specifies the volume and shape of the traffic, the quality of service, the maximum delay, and other parameters related to the traffic it will offer to the network. Once the host specifies its requirements, the needed resources are reserved along the path before the actual packets follow.
4. Resource Reservation

Several approaches have been developed to provide a range of qualities of service. These can be divided into two broad categories:
fine-grained approaches, which provide QoS to individual applications or flows
coarse-grained approaches, which provide QoS to large classes of data or aggregated traffic
Integrated Services and the associated RSVP belong to the first category; Differentiated Services lies in the second category.
INTEGRATED SERVICES
Integrated Services, sometimes called IntServ, is a flow-based QoS model, which means that a user needs to create a flow, a kind of virtual circuit, from the source to the destination and inform all routers of the resource requirement.
Integrated Services is a flow-based QoS model designed for IP.
Signaling

IP is a connectionless, datagram, packet-switching protocol. How can we implement a flow-based model over a connectionless protocol? The solution is a signaling protocol that runs over IP and provides the signaling mechanism for making a reservation. This protocol is called the Resource ReSerVation Protocol (RSVP).
Flow Specification
When a source makes a reservation, it needs to define a flow specification. A flow specification has two parts:
Rspec (resource specification) and
Tspec (traffic specification).
Rspec defines the resources that the flow needs to reserve (buffer, bandwidth, etc.).
Tspec defines the traffic characterization of the flow.
Service Classes
Two classes of services have been defined for Integrated Services:
guaranteed service and
controlled-load service.
Guaranteed Service Class
This type of service is designed for real-time traffic that needs a guaranteed
minimum end-to-end delay. The end-to-end delay is the sum of the delays in
the routers, the propagation delay in the media, and the setup mechanism.
This type of service guarantees that the packets will arrive within a certain
delivery time and are not discarded if flow traffic stays within the boundary
of Tspec. The guaranteed services are quantitative services.
Controlled-Load Service Class
This type of service is designed for applications that can accept some delay but are sensitive to an overloaded network and to the danger of losing packets. Good examples of these types of applications are file transfer, e-mail, and Internet access. The controlled-load service is a qualitative type of service.
RSVP
In the Integrated Services model, an application program needs resource reservation. In the IntServ model, the resource reservation is for a flow.

This means that if we want to use IntServ at the IP level, we need to create
a flow, a kind of virtual-circuit network, out of the IP, which was originally
designed as a datagram packet-switched network. A virtual-circuit network
needs a signaling system to set up the virtual circuit before data traffic can
start. The resource reservation protocol (RSVP) is a signaling protocol to
help IP create a flow and consequently make a resource reservation.
RSVP Messages
RSVP has two types of messages: Path and Resv.
Path Messages
In Receiver Based Reservation, the receivers, not the sender, make the
reservation. However, the receivers do not know the path traveled by
packets before the reservation is made. The path is needed for the
reservation. To solve the problem, RSVP uses Path messages. A Path
message travels from the sender and reaches all receivers in the multicast
path. On the way, a Path message stores the necessary information for the
receivers. A Path message is sent in a multicast environment; a new
message is created when the path diverges. The following figure shows path
messages.

Resv Messages
After a receiver has received a Path message, it sends a Resv message. The Resv message travels toward the sender (upstream) and makes a resource reservation on the routers that support RSVP. If a router on the path does not support RSVP, it routes the packet based on best-effort delivery methods. The following figure shows the Resv messages.

Reservation Styles

When there is more than one flow, the router needs to make a reservation to accommodate all of them. RSVP defines three types of reservation styles, as shown in the following figure.

Wild Card Filter Style: In this style, the router creates a single reservation for all senders. The reservation is based on the largest request. This type of style is used when the flows from different senders do not occur at the same time.
Fixed Filter Style: In this style, the router creates a distinct reservation for each flow. This means that if there are n flows, n different reservations are made. This type of style is used when there is a high probability that flows from different senders will occur at the same time.
Shared Explicit Style: In this style, the router creates a single reservation which can be shared by a set of flows.
Soft State
The reservation information (state) stored in every node for a flow needs to
be refreshed periodically. This is referred to as a soft state. The default
interval for refreshing is currently 30 s.
Problems with Integrated Services
There are at least two problems with Integrated Services that may prevent
its full implementation in the Internet: scalability and service-type limitation.
1 Scalability
The Integrated Services model requires that each router keep
information for each flow. As the Internet is growing every day, this is
a serious problem.

2 Service-Type Limitation
The Integrated Services model provides only two types of services, guaranteed and controlled-load. Those opposing this model argue that applications may need more than these two types of services.
DIFFERENTIATED SERVICES
Differentiated Services (DS or Diffserv) was introduced by the IETF (Internet
Engineering Task Force) to handle the shortcomings of Integrated Services.
Two fundamental changes were made:
1. The main processing was moved from the core of the network to the edge of the network. This solves the scalability problem. The routers do not have to store information about flows; the applications, or hosts, define the type of service they need each time they send a packet.

2. The per-flow service is changed to per-class service. The router routes the
packet based on the class of service defined in the packet, not the flow. This
solves the service-type limitation problem. We can define different types of
classes based on the needs of applications.
Differentiated Services is a class-based QoS model designed for IP.
DS Field
In Diffserv, each packet contains a field called the DS field. The value of this
field is set at the boundary of the network by the host or the first router
designated as the boundary router.
The DS field contains two subfields: DSCP and CU.

The DSCP (Differentiated Services Code Point) is a 6-bit subfield that


defines the per-hop behavior (PHB).
The 2-bit CU (currently unused) subfield is not currently used.
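A one-line sketch of splitting the DS byte into its subfields (0xB8 is an assumed sample value whose DSCP happens to be 46, the standard EF code point):

```python
def split_ds_field(ds_byte: int):
    """Split the 8-bit DS field into the 6-bit DSCP and 2-bit CU subfields."""
    dscp = ds_byte >> 2      # upper 6 bits: per-hop behavior code point
    cu = ds_byte & 0x03      # lower 2 bits: currently unused
    return dscp, cu

print(split_ds_field(0xB8))  # (46, 0): DSCP 46 is EF (expedited forwarding)
```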

Per-Hop Behavior
The Diffserv model defines per-hop behaviors (PHBs) for each node that receives a packet. Two PHBs are defined:
a EF PHB
b AF PHB

EF: The EF (expedited forwarding) PHB provides the following services:
Low loss
Low latency
Ensured bandwidth
AF: The AF (assured forwarding) delivers the packet with a high assurance
as long as the class traffic does not exceed the traffic profile of the node.
