
Project Report

NS2 Simulation

Anand Shah
CWID#10433548
We have the following data for the simulation:
• Link 3-4 is the bottleneck of the topology.
• All links have a propagation delay of 10 ms.
• All links use drop-tail queues.

TCP is a widely used, reliable, connection-oriented protocol. Using Network Simulator 2 (NS2), we examine the mechanisms that different versions of TCP use for congestion control, flow control, and reliable data transfer.

With NS2 we build the network shown in the figure and simulate it under different conditions, such as different queue sizes, window sizes, and bandwidths, and thus characterize the behavior of the TCP variants.

Unless stated otherwise, TCP Tahoe (the NS2 default) is used.
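A minimal Tcl sketch of such a setup follows, assuming a dumbbell layout with six nodes, 10 Mbps access links, and the 1.5 Mbps bottleneck on link 3-4; the node numbering and access bandwidth are illustrative, and the actual scripts are the .tcl files listed in each part.

set ns [new Simulator]
set tf [open P11.tr w]
$ns trace-all $tf

for {set i 0} {$i < 6} {incr i} { set n($i) [$ns node] }

$ns duplex-link $n(0) $n(2) 10Mb 10ms DropTail
$ns duplex-link $n(1) $n(2) 10Mb 10ms DropTail
$ns duplex-link $n(2) $n(3) 10Mb 10ms DropTail
$ns duplex-link $n(3) $n(4) 1.5Mb 10ms DropTail   ;# bottleneck link 3-4
$ns queue-limit $n(3) $n(4) 50                    ;# drop-tail queue limit
$ns duplex-link $n(4) $n(5) 10Mb 10ms DropTail

set tcp [new Agent/TCP]            ;# plain Agent/TCP is TCP Tahoe
set sink [new Agent/TCPSink]
$ns attach-agent $n(0) $tcp
$ns attach-agent $n(5) $sink
$ns connect $tcp $sink
set ftp [new Application/FTP]
$ftp attach-agent $tcp

proc finish {} {
    global ns tf
    $ns flush-trace
    close $tf
    exit 0
}
$ns at 0.0 "$ftp start"
$ns at 10.0 "finish"
$ns run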


Part 1
1
Commands used and their output:
Bytes transferred:
# $3 = source node, $6 = packet size; sum sizes over all trace events from node 0
awk '{if($3=="0") sum +=$6 } END {print sum}' P11.tr
Total number of bytes transferred: 5536040

Segments transferred:
# 40-byte packets carry TCP/IP headers only (no payload)
awk '{if($3=="0"&& $6=="40") sum +=$6 } END {print sum/40}' P11.tr
Number of segments of size 40: 3

# 1040-byte packets carry 1000 bytes of payload plus 40 bytes of headers
awk '{if($3=="0"&& $6=="1040") sum +=$6 } END {print sum/1040}' P11.tr
Number of segments of size 1040: 5323

Total Segments = 3 + 5323 = 5326

Average throughput (queue size = 50, the default):

# Sum sizes of TCP packets received ($1=="r") on link 2-3 in either direction;
# x8 converts bytes to bits, /10 s of simulated time gives bps, hence the 0.8
awk '{if((($3=="2" && $4=="3")||($3=="3" && $4=="2")) && $1=="r" && $5=="tcp")
sum +=$6} END {print sum*.8}' P11.tr
Throughput = 1466848 bps

The bottleneck capacity is 1.5 Mbps and we achieve almost 1.47 Mbps of throughput, so the flow utilizes nearly the full bottleneck link, which is a very good result.

Queue size = 5:   Throughput = 1099936 bps
Queue size = 50:  Throughput = 1466848 bps (default)
Queue size = 500: Throughput = 1466848 bps

These three observations show that at the default queue size of 50 the throughput is already at its maximum: increasing the queue size further does not raise it, while shrinking the queue lowers the average throughput, since a shorter queue drops more packets and forces TCP to back off more often.

Files : /Part1/P11.tcl
/Part1/Part11.nam
/Part1/P11.tr

2
The congestion window, RTT, estimated RTT, and RTO graphs can be viewed with these commands:
xgraph P12cwnd.tr & #congestion window
xgraph P12rtt.tr & #RTT
xgraph P12srtt.tr & #estimated (smoothed) RTT
xgraph P12rto.tr & #RTO
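These per-variable trace files can be produced by periodically sampling the TCP agent's bound variables. Below is a minimal sketch, assuming NS2's standard variable names (cwnd_ for the congestion window, rtt_ in units of tcpTick_) and hypothetical file handles fcwnd and frtt; srtt_ and the RTO can be sampled the same way. The actual recording code is in P12.tcl.

proc record {} {
    global ns tcp fcwnd frtt
    set now [$ns now]
    # xgraph expects "time value" pairs, one per line
    puts $fcwnd "$now [$tcp set cwnd_]"
    puts $frtt "$now [expr [$tcp set rtt_] * [$tcp set tcpTick_]]"
    $ns at [expr $now + 0.1] "record"   ;# re-sample every 0.1 s
}
$ns at 0.0 "record"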
Graph for Congestion Window

Graph for RTT


Graph for EstRTT

Graph for RTO


Files : /Part1/P12.tcl
/Part1/Part12.nam
/Part1/P12srtt.tr
/Part1/P12cwnd.tr
/Part1/P12rto.tr
/Part1/P12rtt.tr
Part 2
2

Graph for TCP Throughput

Graph for UDP Throughput

The reason is that the UDP flow achieves higher throughput than the elastic TCP flow: during a congestion episode, UDP keeps sending at its configured rate regardless of loss, while TCP backs off by shrinking its congestion window, so UDP captures the larger share of the bottleneck.
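A minimal sketch of how such a competing UDP/CBR flow can be added in NS2, assuming the node names from the topology sketch above and an illustrative sending rate:

set udp [new Agent/UDP]
set null [new Agent/Null]
$ns attach-agent $n(1) $udp
$ns attach-agent $n(5) $null
$ns connect $udp $null
set cbr [new Application/Traffic/CBR]
$cbr attach-agent $udp
$cbr set rate_ 1.5Mb   ;# CBR keeps sending at this rate; it never backs off
$ns at 0.0 "$cbr start"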

Files : /Part2/P22.tcl
/Part2/Part22.nam
/Part2/P21.tr
/Part2/P22.tr
3
TCP + TCP: Both flows are TCP, with a maximum window size (MWS) of 30 packets
for flow-1 and 6 packets for flow-2.

For queue size = 50


For queue size = 40

Red line in the graph indicates TCP with window size = 30


And green line indicates TCP with window size = 6

The graphs show that as the maximum window size increases, the throughput of TCP increases; when both flows run simultaneously, the flow with the larger window utilizes the channel more than the other.
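The per-flow limits are set through the TCP agent's window_ parameter, which caps the congestion window. A minimal sketch with assumed agent names tcp1 and tcp2:

$tcp1 set window_ 30   ;# flow-1: maximum window of 30 packets
$tcp2 set window_ 6    ;# flow-2: maximum window of 6 packets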

Files : /Part2/P23.tcl
/Part2/Part23.nam
/Part2/P231.tr
/Part2/P232.tr
/Part2/P233.tr
4
TCP + TCP: Both flows are TCP, with a packet size of 1000 bytes for flow-1 and 500
bytes for flow-2, respectively. Choose the same MWS for the two flows.

For queue size = 40


For queue size = 50

Green line indicates TCP with packet size = 1000


Red line indicates TCP with packet size = 500

For the default queue size of 50, both flows get almost the same throughput. But as the queue size decreases, the TCP flow with packet size 1000 utilizes the channel more than the other, as shown in the first graph.
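The per-flow packet sizes are set through the TCP agent's packetSize_ parameter. A minimal sketch with assumed agent names:

$tcp1 set packetSize_ 1000   ;# flow-1: 1000-byte payload (1040 B on the wire)
$tcp2 set packetSize_ 500    ;# flow-2: 500-byte payload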

Files : /Part2/P24.tcl
/Part2/Part24.nam
/Part2/P241.tr
/Part2/P242.tr
/Part2/P243.tr
5
TCP Tahoe + TCP Reno: Flow-1 is TCP Tahoe, and flow-2 is TCP Reno. Choose the same MWS for the two flows.

Queue size = 50
Queue size = 40

Queue size = 50 (vs time)


Queue size = 40 (vs time)
Green line indicates TCP/Reno and red line indicates TCP/Tahoe

The graphs show that with the default queue size both TCP variants get almost the same throughput, but when the queue size decreases to 40, TCP/Reno gets more throughput than TCP/Tahoe: Reno's fast recovery halves the window after a loss, whereas Tahoe falls back to slow start every time.
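In NS2 the two variants are different agent classes. A minimal sketch:

set tcp1 [new Agent/TCP]        ;# plain Agent/TCP implements Tahoe
set tcp2 [new Agent/TCP/Reno]   ;# Reno adds fast recovery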

Files : /Part2/P25.tcl
/Part2/Part25.nam
/Part2/P251.tr
/Part2/P252.tr
/Part2/P253.tr
/Part2/P254.tr
6
TCP Tahoe + TCP SACK: Flow-1 is TCP Tahoe, and TCP-2 is TCP SACK.

Queue size = 40
Queue size = 50

Queue size = 40 (vs time)


Queue size = 50 (vs time)
Green line indicates TCP/SACK and red line indicates TCP/Tahoe

The graphs show that TCP/SACK is more efficient than TCP/Tahoe: with selective acknowledgements the sender learns exactly which segments were lost and retransmits only those, so its performance and throughput are higher.
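SACK in NS2 needs a SACK-capable receiver as well as a SACK sender. A minimal sketch with assumed agent names:

set tcp2 [new Agent/TCP/Sack1]        ;# SACK sender
set sink2 [new Agent/TCPSink/Sack1]   ;# SACK-capable receiver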

Files : /Part2/P26.tcl
/Part2/Part26.nam
/Part2/P261.tr
/Part2/P262.tr
/Part2/P263.tr
/Part2/P264.tr
7
TCP Reno + TCP SACK: Read about TCP with selective acknowledgement (TCP SACK). Flow-1 is TCP Reno, and flow-2 is TCP SACK. Choose the same MWS for the two flows.

Queue size = 40
Queue size = 50

Queue size = 40 (vs time)


Queue size = 50 (vs time)
Green line indicates TCP/SACK and red line indicates TCP/Reno

The graphs show that the performance of TCP/SACK is better than that of TCP/Reno when the queue size is smaller (40), and almost the same as TCP/Reno's at the default queue size.

Files : /Part2/P27.tcl
/Part2/Part27.nam
/Part2/P271.tr
/Part2/P272.tr
/Part2/P273.tr
/Part2/P274.tr

8
As the last three comparisons show, TCP/Reno is more efficient than TCP/Tahoe, and TCP/SACK is more efficient than TCP/Reno. In decreasing order of efficiency:
TCP/SACK > TCP/Reno > TCP/Tahoe
