ISSN: 2231-2803 http://www.ijcttjournal.org Page 1521



A Survey on Host to Host Congestion Control
Mrs. P. RADHADEVI*1, G. ANJANEYULU*2
Assistant Professor, Dept of Computer Applications, SNIST, Ghatkesar, Hyderabad, AP, India
M.C.A Student, Dept of Computer Applications, SNIST, Ghatkesar, Hyderabad, AP, India

ABSTRACT:

Congestion is one of the fundamental issues in packet-switched networks.
Congestion control refers to the mechanisms and
techniques used to control congestion and keep the
load below the network's capacity. In data networking and
queueing theory, network congestion occurs when
a link or node is carrying so much data that its
quality of service deteriorates. Typical effects
include queueing delay, packet loss, or the blocking
of new connections. A consequence of the latter
two is that incremental increases in offered load
lead either to only a small increase in network
throughput or to an actual reduction in network
throughput. This paper surveys congestion
control proposals that preserve the original host-
to-host idea of TCP, namely that neither the sender
nor the receiver relies on any explicit notification
from the network. Many host-to-host techniques
address several problems with different levels of
reliability and precision. There have been
enhancements allowing senders to detect
packet losses and route changes quickly. Other techniques
can estimate the loss rate, the
bottleneck buffer size, and the level of congestion.
The survey describes each congestion control
alternative, its strengths and its weaknesses.

KEYWORDS: CSMA/CA, Queuing Theory,
TCP Software.

I. INTRODUCTION
In data networking and queueing theory,
network congestion occurs when a link or node is
carrying so much data that its quality of service
deteriorates. Typical effects include queueing
delay, packet loss or the blocking of new
connections. A consequence of these latter two is
that incremental increases in offered load lead
either only to small increase in network
throughput, or to an actual reduction in network
throughput. Network protocols which use
aggressive retransmissions to compensate for
packet loss tend to keep systems in a state of
network congestion even after the initial load has
been reduced to a level which would not normally
have induced network congestion. Thus, networks
using these protocols can exhibit two stable states
under the same level of load. Congestion control
is the process of regulating the total amount of
data entering the network so as to keep traffic
levels at an acceptable value. This is done in
order to avoid the network reaching what is
termed congestive collapse. Modern networks use
congestion control and congestion avoidance
techniques to try to avoid collapse. These include
exponential backoff in protocols such as 802.11's
CSMA/CA and the original Ethernet, window
reduction in TCP, and fair queuing in devices such
as routers. Congestion avoidance techniques
monitor network traffic loads in an effort to
anticipate and avoid congestion at common
network bottlenecks.

II. RELATED WORK:
The standard fare in TCP implementations can
be found in RFC 2581. This reference document
specifies four standard congestion control
algorithms that are now in common use. Each of
the algorithms noted within that document was
actually designed long before the standard was
published. Their usefulness has passed the test of
time.

Slow Start, a requirement for TCP software
implementations, is a mechanism used by the
sender to control the transmission rate, otherwise
known as sender-based flow control. This is
accomplished through the return rate of
acknowledgements from the receiver. In other
words, the rate of acknowledgements returned by
the receiver determines the rate at which the
sender can transmit data.
When a TCP connection first begins, the Slow
Start algorithm initializes the congestion window to
one segment, whose size is the maximum segment size
(MSS) advertised by the receiver during the
connection establishment phase. When
acknowledgements are returned by the receiver,
the congestion window increases by one segment
for each acknowledgement returned. Thus, the
sender may transmit up to the minimum of the
congestion window and the advertised window of
the receiver, which is simply called the
transmission window.
Slow Start is actually not very slow when
the network is not congested and network
response time is good. For example, the first
successful transmission and acknowledgement of
a TCP segment increases the window to two
segments. After successful transmission of these
two segments and acknowledgements completes,
the window is increased to four segments. Then
eight segments, then sixteen segments and so on,
doubling from there on out up to the maximum
window size advertised by the receiver or until
congestion finally does occur.
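The doubling behavior described above can be sketched in a few lines of Python; the 1460-byte MSS and the one-segment-per-ACK growth rule are illustrative assumptions, not values taken from the paper:

```python
MSS = 1460  # assumed maximum segment size in bytes (illustrative)

def slow_start_rounds(rwnd, rounds):
    """Sketch of Slow Start: cwnd starts at one segment and grows by one
    segment per ACK, so it doubles every round-trip until capped by the
    receiver's advertised window (rwnd). Returns cwnd in segments per round."""
    cwnd = 1 * MSS
    history = [cwnd // MSS]
    for _ in range(rounds):
        cwnd = min(cwnd * 2, rwnd)  # every segment ACKed -> window doubles
        history.append(cwnd // MSS)
    return history

# 1, 2, 4, 8 segments, then capped by a 16-segment advertised window:
print(slow_start_rounds(16 * MSS, 5))  # -> [1, 2, 4, 8, 16, 16]
```

The cap models the receiver's advertised window; in a real connection, congestion loss would also end the exponential phase.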
At some point the congestion window may
become too large for the network or network
conditions may change such that packets may be
dropped. Packets lost will trigger a timeout at the
sender. When this happens, the sender goes into
congestion avoidance mode as described in the
next section.
During the initial data transfer phase of a TCP
connection the Slow Start algorithm is used.
However, there may be a point during Slow Start
that the network is forced to drop one or more
packets due to overload or congestion. If this
happens, Congestion Avoidance is used to slow
the transmission rate. However, Slow Start is used
in conjunction with Congestion Avoidance as the
means to get the data transfer going again so it
doesn't slow down and stay slow.
In the Congestion Avoidance algorithm a
retransmission timer expiring or the reception of
duplicate ACKs can implicitly signal the sender
that a network congestion situation is occurring.
The sender immediately sets its transmission
window to one half of the current window size
(the minimum of the congestion window and the
receiver's advertised window size), but to at least
two segments. If congestion was indicated by a
timeout, the congestion window is reset to one
segment, which automatically puts the sender into
Slow Start mode. If congestion was indicated by
duplicate ACKs, the Fast Retransmit and Fast
Recovery algorithms are invoked.
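The reaction just described can be summarized in a short sketch (a simplification of the RFC 2581 rules; the byte values in the example are illustrative):

```python
def on_congestion_signal(cwnd, rwnd, mss, signal):
    """Simplified RFC 2581 reaction: ssthresh becomes half the current
    transmission window (the minimum of cwnd and the receiver's advertised
    window), but at least two segments. A timeout additionally resets cwnd
    to one segment, putting the sender back into Slow Start; duplicate ACKs
    lead to the Fast Retransmit / Fast Recovery path instead."""
    ssthresh = max(min(cwnd, rwnd) // 2, 2 * mss)
    if signal == "timeout":
        cwnd = 1 * mss       # restart from Slow Start
    elif signal == "dup-acks":
        cwnd = ssthresh      # handled by Fast Retransmit / Fast Recovery
    return cwnd, ssthresh

# A timeout with 8 segments in flight halves ssthresh and resets cwnd:
print(on_congestion_signal(8 * 1460, 65535, 1460, "timeout"))  # -> (1460, 5840)
```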

2.1 Fast Retransmit
When a duplicate ACK is received, the sender
does not know if it is because a TCP segment was
lost or simply that a segment was delayed and
received out of order at the receiver. If the
receiver can re-order segments, it should not be
long before the receiver sends the latest expected
acknowledgement. Typically no more than one or
two duplicate ACKs should be received when
simple out of order conditions exist. If however
more than two duplicate ACKs are received by the
sender, it is a strong indication that at least one
segment has been lost. The TCP sender will
assume enough time has lapsed for all segments to
be properly re-ordered by the fact that the receiver
had enough time to send three duplicate ACKs.
When three or more duplicate ACKs are received,
the sender does not even wait for a retransmission
timer to expire before retransmitting the segment.
This process is called the Fast Retransmit
algorithm.
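A minimal sketch of the duplicate-ACK counting described above (the three-duplicate threshold follows RFC 2581; the sequence numbers are illustrative):

```python
DUP_ACK_THRESHOLD = 3  # the three-duplicate-ACK rule described above

def fast_retransmit(acks):
    """Scan a stream of cumulative ACK numbers and return the ACK value
    that accumulated three duplicates (the sender then retransmits the
    segment following it), or None if no loss is indicated. A duplicate
    ACK simply repeats the previous cumulative ACK number."""
    dup_count, last_ack = 0, None
    for ack in acks:
        if ack == last_ack:
            dup_count += 1
            if dup_count >= DUP_ACK_THRESHOLD:
                return ack  # retransmit without waiting for the RTO
        else:
            dup_count, last_ack = 0, ack
    return None

# The receiver keeps ACKing 7 because the segment after 7 was lost,
# so the segment following 7 gets fast-retransmitted:
print(fast_retransmit([5, 6, 7, 7, 7, 7]))  # -> 7
```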
2.2 Fast Recovery
Since the Fast Retransmit algorithm is used
when duplicate ACKs are being received, the TCP
sender has implicit knowledge that there is data
still flowing to the receiver. Why? The reason is
because duplicate ACKs can only be generated
when a segment is received. This is a strong
indication that serious network congestion may
not exist and that the lost segment was a rare
event. So instead of reducing the flow of data
abruptly by going all the way into Slow Start, the
sender only enters Congestion Avoidance mode.
Rather than start at a window of one segment
as in Slow Start mode, the sender resumes
transmission with a larger window, incrementing
as if in Congestion Avoidance mode. This allows
for higher throughput under the condition of only
moderate congestion.
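This window arithmetic can be sketched following the RFC 2581 Fast Recovery description (the byte values in the example are illustrative):

```python
def fast_recovery(flight_size, mss, extra_dup_acks):
    """RFC 2581-style Fast Recovery sketch: halve the amount of data in
    flight into ssthresh, set cwnd to ssthresh plus three segments (for
    the three duplicate ACKs that triggered Fast Retransmit), inflate by
    one segment per further duplicate ACK (each signals a packet that has
    left the network), and deflate back to ssthresh when the ACK for new
    data finally arrives."""
    ssthresh = max(flight_size // 2, 2 * mss)
    cwnd_during = ssthresh + (3 + extra_dup_acks) * mss
    cwnd_after = ssthresh  # Congestion Avoidance resumes from here
    return ssthresh, cwnd_during, cwnd_after

# Ten segments in flight, three dup ACKs, no further duplicates:
print(fast_recovery(10 * 1460, 1460, 0))  # -> (7300, 11680, 7300)
```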


Fig1: Congestion Control Overview
III. PROBLEM STATEMENT:
To resolve the congestion control problem, a
number of solutions have been proposed. All of
them share the same idea, namely of introducing a
network-aware rate limiting mechanism alongside
the receiver-driven flow control. For this purpose
the congestion window concept was introduced: a
TCP sender's estimate of the number of data
packets the network can accept for delivery
without becoming congested. In the special case
where the flow control limit is less than the
congestion control limit (i.e., the congestion
window), the former is considered a real bound for
outstanding data packets. Although this is a
formal definition of the real TCP rate bound, we
will only consider the congestion window as a rate
limiting factor, assuming that in most cases the
processing rate of end-hosts is several orders of
magnitude higher than the data transfer rate that
the network can potentially offer. Additionally,
we will compare different algorithms, focusing on
the congestion window dynamics as a measure of
the particular congestion control algorithm
effectiveness.

IV. EVOLUTION OF TCP
To resolve the congestion collapse problem, a
number of solutions have been proposed.
4.1 TCP TAHOE
Tahoe refers to the TCP congestion control
algorithm which was suggested by Van Jacobson.
TCP is based on a principle of conservation of
packets, i.e. if the connection is running at the
available bandwidth capacity then a packet is not
injected into the network unless a packet is taken
out as well. TCP implements this principle by
using the acknowledgements to clock outgoing
packets because an acknowledgement means that
a packet was taken off the wire by the receiver. It
also maintains a congestion window CWD to
reflect the network capacity.
However there are certain issues, which need
to be resolved to ensure this equilibrium.
1) Determination of the available
bandwidth.
2) Ensuring that equilibrium is maintained.
3) How to react to congestion.
For congestion avoidance Tahoe uses
Additive Increase Multiplicative Decrease. A
packet loss is taken as a sign of congestion and
Tahoe saves the half of the current window as a
threshold value.
The problem with Tahoe is that it takes a
complete timeout interval to detect a packet loss.
It doesn't send immediate ACKs; it sends
cumulative acknowledgements, and therefore it
follows a go-back-n approach.
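Tahoe's Additive Increase Multiplicative Decrease behavior can be sketched as follows (one additive step per loss-free RTT; the 1460-byte MSS is an illustrative assumption):

```python
def aimd_trace(events, mss=1460):
    """Tahoe-style AIMD sketch: grow cwnd by one segment per loss-free
    RTT; on a loss, save half the current window as the ssthresh
    threshold and fall back to one segment (re-entering Slow Start).
    Returns cwnd in segments after each RTT, plus the final threshold."""
    cwnd, ssthresh, trace = mss, None, []
    for lost in events:
        if lost:
            ssthresh = max(cwnd // 2, 2 * mss)  # half the window is saved
            cwnd = mss                          # Tahoe restarts from 1 MSS
        else:
            cwnd += mss                         # additive increase
        trace.append(cwnd // mss)
    return trace, (ssthresh // mss if ssthresh else None)

# Seven loss-free RTTs, then one loss:
print(aimd_trace([False] * 7 + [True]))  # -> ([2, 3, 4, 5, 6, 7, 8, 1], 4)
```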
4.2 TCP RENO
Reno retains the basic principles of Tahoe,
such as Slow Start and the coarse-grained
retransmit timer, and adds an algorithm
called Fast Retransmit: whenever the sender receives 3
duplicate ACKs, it takes this as a sign that the
segment was lost and retransmits the segment
without waiting for a timeout.
Reno performs very well when packet
losses are small. But when multiple packets are
lost in one window, Reno does not perform
well; its performance is almost the same as
Tahoe's under conditions of high packet loss.
4.3 New-Reno
New-Reno prevents many of the coarse-grained
timeouts of Reno: when a partial ACK arrives
during recovery, it retransmits the next lost
packet immediately rather than waiting for
another set of 3 duplicate ACKs or a timeout.
Its congestion avoidance mechanisms detect
incipient congestion and utilize network
resources much more efficiently. Because of its
modified congestion avoidance and Slow Start
algorithms there are fewer retransmits.
4.4 TCP SACK
TCP with Selective Acknowledgments is an
extension of TCP Reno that works around the
problems faced by TCP Reno and New-Reno,
namely detection of multiple lost packets and
retransmission of more than one lost packet per
RTT. SACK retains the Slow Start and Fast
Retransmit parts of Reno. It also has the coarse-
grained timeout of Tahoe to fall back on, in case a
packet loss is not detected by the modified
algorithm. SACK TCP requires that segments be
acknowledged not only cumulatively but also
selectively. Thus each ACK carries a
block which describes which segments are being
acknowledged.
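The per-ACK block reporting can be illustrated with a small helper that computes the ranges a SACK receiver would report (segment numbers rather than byte offsets, for readability; this is an illustration, not the RFC 2018 wire format):

```python
def sack_blocks(received):
    """Group the out-of-order segments a receiver holds into contiguous
    (start, end) ranges -- the information a SACK option conveys so the
    sender can retransmit only the gaps."""
    blocks, start = [], None
    received = sorted(received)
    for prev, cur in zip(received, received[1:] + [None]):
        if start is None:
            start = prev
        if cur is None or cur != prev + 1:
            blocks.append((start, prev))
            start = None
    return blocks

# Segments 8 and 9 are missing, so two ranges are reported:
print(sack_blocks([5, 6, 7, 10, 11, 12]))  # -> [(5, 7), (10, 12)]
```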
The biggest problem with SACK is that
selective acknowledgements are not yet
provided by every receiver; to deploy SACK
widely, receivers need to implement selective
acknowledgment, which is not a very easy task.
4.5 TCP VEGAS
Vegas is a TCP implementation which is a
modification of Reno. It builds on the observation that
proactive measures to counter congestion are
much more efficient than reactive ones. It tries to
get around the problem of coarse-grained timeouts
by suggesting an algorithm which checks for
timeouts on a much finer schedule. It also
overcomes the problem of requiring enough
duplicate acknowledgements to detect a packet
loss: it does not depend solely on packet loss as a
sign of congestion, and it detects congestion before
packet losses occur. However, it still retains the
other mechanisms of Reno and Tahoe, and a packet
loss can still be detected by the coarse-grained
timeout if the other mechanisms fail.
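Vegas's proactive detection compares expected and actual throughput; the sketch below uses the classic alpha/beta thresholds (the threshold values and the 1460-byte MSS are illustrative assumptions):

```python
def vegas_adjust(cwnd, base_rtt, current_rtt, mss=1460, alpha=1, beta=3):
    """TCP Vegas congestion-avoidance sketch: expected throughput is
    cwnd/base_rtt (using the best RTT observed), actual throughput is
    cwnd/current_rtt; their difference, scaled back to segments,
    estimates how many packets this flow has queued inside the network.
    The window is adjusted before any loss occurs."""
    expected = cwnd / base_rtt
    actual = cwnd / current_rtt
    queued_segments = (expected - actual) * base_rtt / mss
    if queued_segments < alpha:
        return cwnd + mss   # little queuing: increase linearly
    if queued_segments > beta:
        return cwnd - mss   # queue building up: back off proactively
    return cwnd             # within the target band: hold steady

# RTT doubling signals queue build-up, so the window shrinks by a segment:
print(vegas_adjust(10 * 1460, 0.1, 0.2) // 1460)  # -> 9
```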
4.6 TCP FACK
Although SACK (Section 4.4) provides the
receiver with extended reporting capabilities, it
does not define any particular congestion control
algorithms. We have informally discussed one
possible extension of the Reno algorithm utilizing
SACK information, whereby the congestion
window is not multiplicatively reduced more than
once per RTT. Another approach is the FACK
(Forward Acknowledgments) congestion control
algorithm. It defines recovery procedures which,
unlike the Fast Recovery algorithm of standard
TCP (TCP Reno), use additional information
available in SACK to handle error recovery (flow
control) and the number of outstanding packets
(rate control) in two separate mechanisms. The
flow control part of the FACK algorithm uses
selective ACKs to indicate losses. It provides a
means for timely retransmission of lost data
packets, as well. Because retransmitted data
packets are reported as lost for at least one RTT
and a loss cannot be instantly recovered, the
FACK sender is required to retain information
about retransmitted data. This information should
at least include the time of the last retransmission
in order to detect a loss using the legacy timeout
method (RTO). The rate control part, unlike
Reno's and New-Reno's Fast Recovery
algorithms, has a direct means to calculate the
number of outstanding data packets using
information extracted from SACKs.
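The "direct means" mentioned above is FACK's accounting identity: outstanding data is everything between the forward-most SACKed byte and the next new byte to send, plus retransmitted-but-not-yet-SACKed data. A one-line sketch (variable names follow the snd.nxt / snd.fack convention of the FACK proposal; the byte values in the example are illustrative):

```python
def fack_outstanding(snd_nxt, snd_fack, retran_data):
    """FACK rate-control estimate of bytes outstanding in the network:
    awnd = snd.nxt - snd.fack + retran_data, where snd_fack is the
    highest byte known (via SACK) to have reached the receiver."""
    return snd_nxt - snd_fack + retran_data

# 10000 bytes sent, receiver has SACKed up to 6000, 1460 bytes retransmitted:
print(fack_outstanding(10000, 6000, 1460))  # -> 5460
```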

V. PACKET REORDERING
All the congestion control algorithms discussed
in the previous section share the same assumption
that the network generally does not reorder
packets. This assumption has allowed the
algorithms to use a simple loss detection
mechanism without any need to modify the
existing TCP specification. The standard already
requires receivers to report the sequence number
of the last in-order delivered data packet each time
a packet is received, even if it is received out of order.
For example, in response to the data packet
sequence 5, 6, 7, 10, 11, 12, the receiver will ACK
the packet sequence 5, 6, 7, 7, 7, 7. In the idealized
case, the absence of reordering guarantees that an
out-of-order delivery occurs only if some packet
has been lost. Thus, if the sender sees several
ACKs carrying the same sequence numbers
(duplicate ACKs), it can be sure that the network
has failed to deliver some data and can act
accordingly (e.g., retransmit lost packet, infer a
congestion state, and reduce the sending rate).
Of course in reality, packets are reordered.
This means that we cannot consider a single
duplicate ACK (i.e., an ACK for an already ACKed
data packet) as a loss detection mechanism with
high reliability. To solve this problem of false loss
detection, a solution employed as a rule of thumb
establishes a threshold value for the minimal
number of duplicate ACKs required to trigger a
packet loss detection. However, there is a clear
conflict with this approach. Loss detection will be
unnecessarily delayed if the network does not
reorder packets. At the same time, the sender will
overreact (e.g., retransmit data or reduce
transmission rate needlessly) if the network does
in fact reorder packets.
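The trade-off can be made concrete with a tiny model (the run lengths here are illustrative; 3 is the conventional default threshold):

```python
def loss_declared(dup_ack_run, dupthresh=3):
    """Does a run of duplicate ACKs of this length trigger loss detection?
    Reordering of depth d produces roughly d duplicate ACKs, so dupthresh
    must exceed the typical reordering depth to avoid spurious
    retransmissions -- but every extra unit also delays real recovery."""
    return dup_ack_run >= dupthresh

# Mild reordering (2 dup ACKs) is tolerated; a real loss keeps generating
# duplicates and is eventually declared:
print(loss_declared(2))   # -> False
print(loss_declared(10))  # -> True
```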

Fig2. Evolutionary graph of TCP variants that
solve the packet reordering problem (reactive,
loss-based variants: TD-FR, Eifel, DOOR, RR, PR)
In this section we present a number of
proposed TCP modifications that try to eliminate
or mitigate reordering effects on TCP flow
performance. All of these solutions share the
following ideas: (a) they allow nonzero
probability of packet reordering, and (b) they can
detect out-of-order events and respond with an
increase in flow rate (optimistic reaction).
Nonetheless, these proposals have fundamental
differences due to a range of acceptable degrees of
packet reordering, from moderate in TD-FR to
extreme in TCP PR, and different baseline
congestion control approaches.


VI. CONCLUSION

The paper presents a study of host-to-host
congestion control and elaborates various
issues related to it. The survey highlighted the
fact that the research focus has changed with the
development of the Internet, from the basic
problem of eliminating the congestion collapse
phenomenon to problems of using available
network resources effectively in different types of
environments (wired, wireless, high-speed, long
delay, etc.). In the first part of our survey, we
showed that basic host-to-host congestion control
principles can solve not only the direct congestion
problem but also provide a simple traffic
prioritizing feature. The first proposal, Tahoe,
introduces the basic technique of gradually
probing network resources and relying on packet
loss to detect that the network limit has been
reached. Unfortunately, although this technique
solves the congestion problem, it uses the
network quite inefficiently. In the last
part of this survey, we classified and discussed
proposals that build a foundation for host-to-host
congestion control principles.
VII. REFERENCES:
[1] Alexander Afanasyev, Neil Tilley, Peter
Reiher, and Leonard Kleinrock, Host-to-Host
Congestion Control for TCP.
[2] Sapna Gupta, Nitin Kumar Sharma, K.P.
Yadav, A Survey on Congestion Control &
Avoidance.
[3] Van Jacobson, Michael J. Karels, Congestion
Avoidance and Control.

[4] J. Widmer, R. Denda, and M. Mauve, A
survey on TCP-friendly congestion control, IEEE
Network, vol. 15, no. 3, pp. 28-37, May/June
2001.
[5] J. Postel, RFC 793: Transmission Control
Protocol, RFC, 1981.
[6] S. Low, F. Paganini, and J. Doyle, Internet
congestion control.
