
Abstract

The Internet is arguably the second-largest communication service after the public switched telephone network, and it is rapidly becoming part of nearly every activity on the planet. TCP provides a best-effort service, meaning it tries to deliver data as efficiently as possible. One of the most notorious problems faced by this network of networks is congestion, which occurs when demand for resources exceeds their capacity. Enormous numbers of packets are generated and sent, so the performance of the Internet is largely governed by large fluctuations in traffic. Traffic spikes commonly exceed the capacity limit when many users transmit at their maximum rate at the same time. When these packets cannot all be sent across the medium, only two things can happen: the excess packets are either buffered or dropped. A typical router tries to place excess packets in a buffer, serving them on a first-come, first-served basis, and drops only those packets that cannot be buffered because the buffer has overflowed. The underlying assumption is either that traffic will subside long enough to drain the queue, or that enough buffer space has been reserved for a long queue. Oversized buffers, however, create problems of their own: storing packets in a long queue adds delay, so one tries to find an optimal queue length.

The other prevalent solution to this problem is congestion control. Its main goal is to use the network efficiently, achieving the highest possible throughput while keeping loss rates and latency low. Congestion should be avoided because it dictates queue length, and long queues lead to loss and delay. Congestion can be controlled at the sender through the Transmission Control Protocol (TCP), as in this dissertation. Packet loss is sensed via timeouts, interpreted as a sign of congestion, and handled with the additive-increase/multiplicative-decrease (AIMD) control law. Another technique controls congestion using precise knowledge of the round-trip time (RTT). We consider two approaches to congestion control: a loss-based model and a delay-based model. The loss-based model is window-based: it controls the transmission window, treats packet loss as an indication of congestion, and reduces the window according to the AIMD policy. The RTT-based method can be called an equation-based model. In the delay-based model (rate-based control), the sending host is aware of a specific data rate, and the receiver or router communicates to it a new rate that should not be exceeded. Delay-based control is straightforward and is considered suitable for media-streaming applications. The loss-based model is the contemporary method for dealing with congestion in the Internet because of its simple principle: a new packet is not put into the network until an old packet leaves. It is less harmful because the sender stops sending when a serious problem occurs in the network and no more feedback arrives. The disadvantage of this policy is that it can lead to traffic bursts.
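The AIMD control law described above can be sketched in a few lines. The increase step and decrease factor below (one segment per RTT, halving on loss) follow classic TCP behaviour, but the window values and the simulated loss event are purely illustrative assumptions, not results from the dissertation.

```python
# Minimal sketch of the AIMD (additive-increase/multiplicative-decrease)
# control law: grow the congestion window additively while no loss is
# seen, and cut it multiplicatively when a loss (timeout) is detected.

def aimd_step(cwnd, loss_detected, incr=1.0, decr=0.5, min_cwnd=1.0):
    """Return the next congestion window given the current one."""
    if loss_detected:
        # Multiplicative decrease: loss is interpreted as congestion.
        return max(min_cwnd, cwnd * decr)
    # Additive increase: probe for spare capacity, one segment per RTT.
    return cwnd + incr

cwnd = 10.0
trace = []
for rtt in range(6):
    loss = (rtt == 3)          # pretend a timeout occurs on RTT 3
    cwnd = aimd_step(cwnd, loss)
    trace.append(cwnd)

print(trace)  # [11.0, 12.0, 13.0, 6.5, 7.5, 8.5]
```

The sawtooth in the trace (steady climb, sharp halving) is exactly the burstiness noted above as the drawback of loss-based control.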
Congestion can also be controlled by managing the queue at the bottleneck router. Active queue management (AQM) techniques can control congestion to a significant degree. Random Early Detection (RED) is the best-known and most widely deployed method: it decides whether to drop a packet based on the average queue length and a random function. It keeps the queue as small as possible, which implies low end-to-end delay, while still allowing bursts of traffic into the queue. Many other methods, such as BLUE and CoDel, are also in use, each with its own pros and cons.
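RED's drop decision, as summarized above, can be sketched as follows. This is a simplified version that omits the packet-count adjustment of the full algorithm, and the threshold and weight values are illustrative assumptions.

```python
import random

# Simplified sketch of RED: an exponentially weighted moving average of
# the queue length is compared against two thresholds, and arriving
# packets are dropped with a probability that rises linearly between
# them. Threshold, max-probability, and weight values are illustrative.

MIN_TH, MAX_TH, MAX_P, WEIGHT = 5.0, 15.0, 0.1, 0.002

def red_update_avg(avg, current_queue_len, weight=WEIGHT):
    """EWMA of the instantaneous queue length (the 'avg' in RED)."""
    return (1 - weight) * avg + weight * current_queue_len

def red_should_drop(avg, rng=random.random):
    """Decide whether to drop the arriving packet."""
    if avg < MIN_TH:
        return False                    # queue short: never drop
    if avg >= MAX_TH:
        return True                     # queue long: always drop
    # In between: drop with probability rising linearly up to MAX_P.
    p = MAX_P * (avg - MIN_TH) / (MAX_TH - MIN_TH)
    return rng() < p

print(red_should_drop(3.0))    # False: below MIN_TH
print(red_should_drop(20.0))   # True: above MAX_TH
```

Because drops begin early and randomly, RED signals congestion to a random subset of flows before the buffer actually overflows, which is what keeps the average queue short.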
Signalling congestion explicitly, rather than implicitly through packet loss, is considered the better approach. It is realized with a single bit in the packet header: the router sets this Explicit Congestion Notification (ECN) bit when it senses congestion, the end host detects the bit, and the host then updates its sending rate according to the control rule. The obvious advantage is that this causes less loss, but a single ECN bit does not cover every case, since the buffer can still overflow in the presence of traffic bursts.
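The router-and-sender interaction just described can be sketched as a toy exchange. The queue threshold and the halving reaction are illustrative assumptions standing in for whatever marking rule and rate-update rule a real deployment uses.

```python
# Toy sketch of ECN signalling: the router marks the single ECN bit
# instead of dropping, and the sender reacts to the echoed bit by
# reducing its rate (halving here, an AIMD-style reaction). The
# threshold and the halving factor are illustrative assumptions.

ECN_THRESHOLD = 10  # mark packets once the queue exceeds this many packets

def router_forward(packet, queue_len):
    """Set the congestion-experienced bit when the queue is building up."""
    if queue_len > ECN_THRESHOLD:
        packet["ecn"] = 1
    return packet

def sender_react(rate, packet):
    """Halve the sending rate when the echoed ECN bit is set."""
    return rate / 2 if packet.get("ecn") else rate

pkt = router_forward({"ecn": 0}, queue_len=14)
print(sender_react(100.0, pkt))  # 50.0: bit was set, rate halved
```

Note that the marked packet is still delivered, which is precisely why ECN causes less loss than drop-based signalling.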
It is usually assumed that all users are fully cooperative, but the question remains how much bandwidth to allocate to each user so that a high degree of fairness is achieved. This kind of per-user prioritization is a much-studied issue, also known as Quality of Service (QoS). Jain's fairness index is a method for assessing fairness on a scale from 0 to 1, where 1 means perfectly fair and 0 means completely unfair. It is a good way to measure fairness.
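Jain's fairness index has a simple closed form, J(x) = (Σxᵢ)² / (n · Σxᵢ²), which can be computed directly; the throughput values below are illustrative.

```python
# Jain's fairness index for a set of per-user throughputs:
#   J(x) = (sum(x))^2 / (n * sum(x_i^2))
# J equals 1 when every user receives the same allocation and falls to
# 1/n when a single user takes everything.

def jain_index(allocations):
    n = len(allocations)
    total = sum(allocations)
    squares = sum(x * x for x in allocations)
    return (total * total) / (n * squares)

print(jain_index([5.0, 5.0, 5.0, 5.0]))   # 1.0: perfectly fair
print(jain_index([20.0, 0.0, 0.0, 0.0]))  # 0.25: one user hogs the link
```

The worst case is 1/n rather than exactly 0, so "0 means completely unfair" holds in the limit of many users.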
