
Computer Networks Fundamentals

R3. For a TCP connection between Host A and Host B, the TCP segments from A to B
have source port number x and destination port number y. What are the source and
destination port numbers for the segments sent from B to A? (2 points)
Source port number y and destination port number x.
R4. Describe why an application developer might choose to run an application over UDP
rather than TCP. Give examples of such applications. (4 points)
An application developer may not want its application to use TCP's
congestion control, which can throttle the application's sending rate at
times of congestion. Designers of IP telephony and IP videoconferencing
applications often choose to run them over UDP for exactly this reason.
Also, some applications do not need the reliable data transfer provided
by TCP.
R5. Why is it that voice and video traffic is often sent over TCP rather than UDP in
today's Internet? (Hint: not because of TCP's congestion-control mechanism) (4 points)
Since most firewalls are configured to block UDP traffic, using TCP for
video and voice traffic lets the traffic through the firewalls.
[More: technically speaking, UDP would provide a better trade-off, since voice/video
applications (especially interactive ones) tolerate loss (up to ~10%) but do not tolerate
delay or jitter. TCP's congestion-control and reliability mechanisms achieve 100%
delivery but introduce variable delays (i.e., increased jitter), and hence TCP is
technically unsuitable for voice/video delivery.]
R6. Is it possible for an application to enjoy reliable data transfer even when the
application runs over UDP? If so, how? (4 points)
Yes. The application developer can put reliable data transfer into the
application layer protocol. This would require a significant amount of
work and debugging, however.
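The idea above can be sketched in a few lines. This is a minimal, hypothetical stop-and-wait illustration (all names invented) in which the application itself numbers chunks and retransmits until delivery succeeds; the lossy channel is simulated rather than a real UDP socket, and the ACK path is assumed lossless for brevity.

```python
import random

random.seed(7)  # deterministic run for the illustration

def unreliable_send(channel, pkt, loss_rate=0.3):
    """Simulated UDP-style send: the datagram may silently vanish."""
    if random.random() >= loss_rate:
        channel.append(pkt)

def stop_and_wait_transfer(data_chunks, loss_rate=0.3):
    """Application-layer reliability on top of an unreliable channel:
    each chunk carries a sequence number and is resent until it arrives."""
    received = []
    expected_seq = 0
    for seq, chunk in enumerate(data_chunks):
        while True:
            channel = []
            unreliable_send(channel, (seq, chunk), loss_rate)
            if channel:                      # datagram arrived at receiver
                rseq, rchunk = channel[0]
                if rseq == expected_seq:     # new data: deliver, then ACK
                    received.append(rchunk)
                    expected_seq += 1
                break                        # ACK assumed delivered here
            # no ACK: the sender's timeout fires and the chunk is resent
    return received

chunks = ["hel", "lo ", "wor", "ld"]
assert stop_and_wait_transfer(chunks) == chunks
```

Real application-layer reliability (sequence numbers, timers, duplicate detection over an actual UDP socket) follows the same pattern but must also handle lost ACKs and reordering, which is the "significant amount of work" the answer refers to.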
R7. A process in Host C has a UDP socket with port number 6789. Host A and B each
send a UDP segment to Host C with destination port number 6789. Will both of these
segments be directed to the same socket at Host C? How will the process at Host C know
that these two segments originated from two different hosts? (4 points)
Yes, both segments will be directed to the same socket. For each
received segment, the socket interface (operating system) provides the
process with the source IP address (and source port number), which the
process uses to determine the origin of each segment.
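This behavior can be observed directly with Python's socket API: a single UDP socket receives datagrams from any number of senders, and `recvfrom()` reports each sender's (IP, port) pair. The sketch below uses the loopback interface and an ephemeral port standing in for 6789.

```python
import socket

# One UDP socket at "Host C" bound to a single port; recvfrom() reports
# each sender's (IP, port), which is how the process tells senders apart.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))           # ephemeral port stands in for 6789
server.settimeout(2)
port = server.getsockname()[1]

a = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
b = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
a.sendto(b"from A", ("127.0.0.1", port))
b.sendto(b"from B", ("127.0.0.1", port))

msg1, addr1 = server.recvfrom(1024)     # both arrive at the same socket
msg2, addr2 = server.recvfrom(1024)
print(addr1 != addr2)                   # distinct source (IP, port) pairs
```

Note the contrast with TCP, where each connection (identified by the 4-tuple of source/destination IP and port) is demultiplexed to its own socket.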

Q. To provide reliability in a transport layer, why do we need sequence numbers? Why do
we need timers? Will a timer still be required if the RTT between sender and destination
is constant? (9 points)
Sequence numbers are required for a receiver to determine whether an
arriving packet contains new data or is a retransmission.
Timers are needed to handle losses in the channel. If the ACK for a
transmitted packet is not received before the packet's timer expires,
the packet (or its ACK or NACK) is assumed to have been lost, and the
packet is retransmitted.
A timer would still be necessary even with a constant RTT. The only
advantage of knowing the round-trip time is that when the timer expires
the sender knows for certain that either the packet or its ACK (or
NACK) was lost, whereas with a variable RTT the ACK (or NACK) might
still be on its way to the sender. To detect the loss of each packet, a
timer of constant duration is still required at the sender.
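The timer logic above can be made concrete with a toy model (all names and values invented): with a constant RTT the sender sets its timer to exactly one RTT and retransmits whenever the timer expires without an ACK.

```python
# With a constant RTT, the sender can set its timer to exactly RTT:
# if no ACK has arrived by then, the packet (or its ACK) was lost.
RTT = 2  # ticks; illustrative constant round-trip time

def deliver(attempt, lost_attempts):
    """ACK arrival time for a send attempt, or None if the attempt is lost."""
    return None if attempt in lost_attempts else RTT

def send_with_timer(lost_attempts, max_tries=10):
    """Retransmit whenever the timer (set to RTT) expires without an ACK.
    Returns the number of transmissions needed."""
    for attempt in range(max_tries):
        ack_time = deliver(attempt, lost_attempts)
        if ack_time is not None and ack_time <= RTT:
            return attempt + 1
    raise TimeoutError("gave up after max_tries transmissions")

assert send_with_timer(lost_attempts=set()) == 1      # no loss: one send
assert send_with_timer(lost_attempts={0, 1}) == 3     # two losses: two resends
```

Even in this idealized setting the timer is doing the essential work: without it, a lost packet would simply never be detected.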
R14. True or false? (14 points)
a. Host A is sending Host B a large file over a TCP connection. Assume Host B has
no data to send Host A. Host B will not send acknowledgements to Host A
because Host B cannot piggyback the acknowledgements on data.
b. The size of the TCP RcvWindow never changes throughout the duration of the
TCP connection.
c. Suppose Host A is sending Host B a large file over a TCP connection. The number
of unacknowledged bytes that A sends cannot exceed the size of the receive
buffer.
d. Suppose Host A is sending a large file to Host B over a TCP connection. If the
sequence number for a segment of this connection is m, then the sequence number
of the subsequent segment will necessarily be m+1.
e. The TCP segment has a field in its header for RcvWindow.
f. Suppose that the last SampleRTT in a TCP connection is equal to 1 sec. The
current value of TimeoutInterval for the connection will necessarily be 1 sec.
g. Suppose that Host A sends one segment with sequence number 38 and 4 bytes of
data over a TCP connection to Host B. In this same segment the
acknowledgement number is necessarily 42.
14. a) False: Host B can still send acknowledgment-only segments carrying no data.
b) False: RcvWindow varies with the free space in the receive buffer.
c) True: flow control caps the unacknowledged bytes at the receive-buffer size.
d) False: the next sequence number is m plus the number of data bytes in the
segment with sequence number m.
e) True.
f) False: TimeoutInterval is computed from EstimatedRTT and DevRTT, so it is not
necessarily 1 sec.
g) False: the acknowledgment number is the next byte A expects from B, which is
independent of A's own sequence numbers.

R16. Consider the Telnet example discussed in the figure below. A few seconds after the
user types the letter C, the user types the letter R. After typing the letter R, how many
segments are sent, and what is put in the sequence number and acknowledgement fields
of the segments? (6 points)
16. 3 segments.
First segment (A to B, carrying 'R'): seq = 43, ack = 80.
Second segment (B to A, echoing 'R'): seq = 80, ack = 44.
Third segment (A to B, acknowledging the echo): seq = 44, ack = 81.
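The seq/ack bookkeeping behind those numbers is simple byte arithmetic: each typed character is 1 byte, and an acknowledgment number is the next byte expected from the peer. A quick check of the three segments, starting from the state just before 'R' is typed:

```python
# Seq/ack bookkeeping for the Telnet echo exchange: each typed character
# is 1 byte, and an ACK number is the next byte expected from the peer.
a_seq, b_seq = 43, 80          # state just before the user types 'R'

seg1 = {"seq": a_seq, "ack": b_seq}            # A -> B, carries 'R'
seg2 = {"seq": b_seq, "ack": a_seq + 1}        # B -> A, echoes 'R'
seg3 = {"seq": a_seq + 1, "ack": b_seq + 1}    # A -> B, ACKs the echo

print(seg1, seg2, seg3)
```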

P13. Consider a reliable data transfer protocol that uses only negative acknowledgements.
Suppose the sender sends data only infrequently. Would a NAK-only protocol be
preferable to a protocol that uses ACKs? Why? Now suppose the sender has a lot of data
to send and the end-to-end connection experiences few losses. In this second case, would
a NAK-only protocol be preferable to a protocol that uses ACKs? Why? (8 points)
In a NAK-only protocol, the loss of packet x is detected by the receiver
only when packet x+1 is received. That is, the receiver receives x-1 and
then x+1, and only on receiving x+1 does it realize that x was missed.
If there is a long delay between the transmission of x and the
transmission of x+1, then it will be a long time until x can be
recovered under a NAK-only protocol.

On the other hand, if data is being sent often, then recovery under a
NAK-only scheme can happen quickly. Moreover, if errors are infrequent,
then NAKs are only occasionally sent (when needed) and ACKs are never
sent, a significant reduction in feedback in the NAK-only case over the
ACK-based case.
Q. Describe (with the aid of a graph) the different phases of network load/overload
outlining the degrees of congestion with increase of load. Indicate the point of congestion
collapse and explain why it occurs. (6 points)
Where does TCP operate on that graph? Explain for the various phases of TCP; slow
start, congestion avoidance (due to timeout), fast retransmit-fast recovery triggered by
duplicate ACKs. (6 points)

[Graph: network throughput vs. offered load, with three regions:
(I) No Congestion
(II) Moderate Congestion
(III) Severe Congestion (Collapse)]
- no congestion --> near ideal performance
- overall moderate congestion:
- severe congestion in some nodes
- dynamics of the network/routing and overhead of protocol adaptation
decreases the network Tput
- severe congestion:
- loss of packets and increased discards
- extended delays leading to timeouts
- both factors trigger re-transmissions
- leads to chain-reaction bringing the Tput down

For TCP: in slow start, the load starts from CongWin=1 (at the beginning of phase I),
then ramps up quickly (exponential growth of CongWin) until a loss is experienced (in
phase II or beginning of phase III).
After the loss, if a timeout occurs, TCP drops to CongWin=1 (the beginning of
phase I), then ramps up to roughly half the load that led to the loss (i.e., halfway
into phase I).
In congestion avoidance CongWin increases linearly, which means the load increases
slowly towards the end of phase I and into phase II, until another loss occurs.
In fast retransmit-fast recovery (triggered by duplicate ACKs), the load is cut in half
(halfway into phase I), then increases slowly (linearly) towards phase II (as in
congestion avoidance).
Q. Mention the different kinds of protocols that may be used for congestion control.
Argue for (or against) having network-assisted congestion control. (10 points)
1- Back pressure using hop by hop flow control (e.g., Go back N or selective reject)
2- Choke packet that is sent from the congested router to the source(s)
3- Explicit congestion signaling, or network-assisted congestion control (such as that
used by ATM ABR) where the switches or routers in the network indicate the
congestion occurrence in packets/cells sent to the end points (senders or
receivers). The congestion indication can be in the form of a bit (binary
indication), explicit rate, or credit.
4- Implicit congestion signaling (such as that used by TCP) where the end points use
sequence numbers, acks and delay estimates to attempt to infer congestion events
(through loss detection or extended delays).
Network-assisted congestion control can be useful to indicate actual congestion events to
the end points (instead of leaving it up to the end points to guess when congestion
occurs). This is especially helpful when loss occurs due to non-congestion events (such
as high loss rates on wireless channels). Also, the network can have a better estimate of
the load or rate it can handle, so congestion may be alleviated faster and more
effectively if network-assisted information is available to the end points.
On the other hand, network-assisted congestion control requires changing the internals
(core) of the network [routers, switches, etc.]. This means dramatic changes in the
network (from the ISP's point of view) whenever a new version of the congestion-control
mechanism is deployed. Implicit congestion signaling, by contrast, requires no such
changes in the network, only changes at the end hosts.
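The binary-indication flavor of explicit signaling (item 3 above) can be sketched as a toy model. This is not the actual ATM ABR or ECN specification, just an illustration with invented names and thresholds: a router marks packets when its queue is long, and the sender halves its rate when the mark is echoed back.

```python
# Toy binary congestion signal in the spirit of network-assisted schemes:
# a router marks packets when its queue exceeds a threshold, and the
# sender halves its rate when it sees the mark echoed back.
QUEUE_THRESHOLD = 5

def router_forward(queue_len, packet):
    packet["ce"] = queue_len > QUEUE_THRESHOLD   # congestion-experienced bit
    return packet

def sender_react(rate, echoed_ce):
    return rate / 2 if echoed_ce else rate + 1   # MD on signal, AI otherwise

rate = 10.0
pkt = router_forward(queue_len=8, packet={})     # congested router marks it
rate = sender_react(rate, pkt["ce"])
assert rate == 5.0
```

The key property illustrated: the sender reacts to an explicit signal from inside the network, rather than inferring congestion from loss or delay as implicit schemes do.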
P35. We discussed the doubling of the timeout interval after a timeout event. This
mechanism is a form of congestion control. Why does TCP need a window-based
congestion-control mechanism in addition to this doubling-timeout-interval mechanism?
(4 points)

If TCP were a stop-and-wait protocol, then the doubling of the timeout interval would
suffice as a congestion-control mechanism. However, TCP uses pipelining (and is
therefore not a stop-and-wait protocol), which allows the sender to have multiple
outstanding unacknowledged segments. The doubling of the timeout interval does not
prevent a TCP sender from sending a large number of first-time-transmitted packets into
the network, even when the end-to-end path is highly congested. Therefore a
congestion-control mechanism is needed to stem the flow of data received from the
application above when there are signs of network congestion.
Q. In TCP slow start mechanism, what is the equation used to increase CongWin and how
does the window grow according to this equation? How does the equation change for the
congestion avoidance phase and how does the window grow then?
In the details of the fast retransmit-fast recovery mechanism the window grows according
to the equation used in slow start (before the cumulative ACK is received). Does that
mean that TCP can send as many segments as it would have in slow start (for the same
window size)? Why or why not? (9 points)
In slow start: CongWin = CongWin + 1 for every ACK received
This leads to an exponential growth of the congestion window size.
In congestion avoidance: CongWin = CongWin + 1 for every RTT
This leads to a linear increase of the congestion window size.
In fast retransmit-fast recovery, although the equation used is
CongWin = CongWin + 1 for every ACK received,
the ACKs received in this case are duplicate ACKs, which do not advance the beginning
of the window (since there is a lost segment, the beginning of the window is frozen).
Hence, in general, TCP will not be able to send as many segments as it does in slow
start.
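The two growth rules can be checked numerically (a minimal sketch, function names invented). In slow start, CongWin ACKs arrive per RTT, so "+1 per ACK" doubles the window every RTT; in congestion avoidance, "+1 per RTT" grows it linearly.

```python
def slow_start(cwnd, rtts):
    """CongWin += 1 per ACK; with cwnd ACKs arriving per RTT, the window
    doubles every RTT (exponential growth)."""
    for _ in range(rtts):
        cwnd += cwnd          # cwnd increments, one per ACK, in one RTT
    return cwnd

def congestion_avoidance(cwnd, rtts):
    """CongWin += 1 per RTT: linear growth."""
    return cwnd + rtts

assert slow_start(1, 4) == 16             # 1 -> 2 -> 4 -> 8 -> 16
assert congestion_avoidance(16, 4) == 20  # 16 -> 17 -> 18 -> 19 -> 20
```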
Q. Why do we say that TCP uses an AIMD mechanism? (4 points)
In steady state, TCP increases CongWin linearly, by 1 per RTT; this is the additive
increase (AI) part.
When TCP senses the loss of a packet via duplicate ACKs (i.e., every time 3 duplicate
ACKs are received), it cuts CongWin in half; this is the multiplicative decrease (MD)
part.
Q. If you use TCP to transfer a big file, you are likely to get a fair share of the network
bandwidth (i.e., similar to that given to other long TCP connections). Describe a way in

which you can get more than your fair share. Reason about how much more bandwidth
you can get. (6 points)
By using parallel TCP sessions, each sending a chunk of the file. If the bottleneck link
(capacity R) is shared equally per flow, and M other flows cross that link, a single
connection gets R/(M+1). By opening N parallel flows instead of one, the end host gets
NR/(N+M), which approaches R as N grows.
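The gain is easy to quantify under the idealized equal-per-flow assumption (a sketch; the function name is invented):

```python
# Bandwidth share with parallel connections, assuming the bottleneck link
# (capacity R) is split equally among all TCP flows crossing it.
def share(R, n_mine, n_others):
    return R * n_mine / (n_mine + n_others)

R, M = 10.0, 9                    # 10 Mbps bottleneck, 9 competing flows
print(share(R, 1, M))             # single flow: 1.0 Mbps
print(share(R, 9, M))             # nine parallel flows: 5.0 Mbps
```

This is exactly why download accelerators and some browsers open multiple connections: the per-flow fairness of TCP becomes per-host unfairness.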
Q. (6 points) In ATM ABR congestion control the equation to decrease the rate is given
by:
Ratenew = Rateold - Rateold * RDF, where RDF is the rate decrease factor.
a. Discuss how fast/slow the sender responds to congestion for the various
values of RDF.
b. If the equation were changed to Ratenew = Rateold * alpha, do you think the
response would be better or worse, and why?

a. For high RDF the response is fast, potentially causing oscillations. If RDF is low,
then the response is slow.
b. The response is very similar to part a. Note that there is no difference in the
mechanism if we put alpha = 1 - RDF:
Ratenew = Rateold - Rateold * RDF = Rateold (1 - RDF) = Rateold * alpha
The rate of response is reversed (as compared to part a above), i.e., when alpha is
high (close to 1) the response is slow, but when alpha is low (closer to 0) the
response is fast.
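The effect of RDF on response speed can be seen by iterating the decrease rule a few times (a sketch with invented names and values):

```python
# Rate trajectory under repeated multiplicative decrease: a higher RDF
# cuts the rate toward zero in fewer congestion notifications.
def decay(rate, rdf, steps):
    for _ in range(steps):
        rate = rate - rate * rdf      # equivalently: rate *= (1 - rdf)
    return rate

assert abs(decay(100.0, 0.5, 3) - 12.5) < 1e-9    # aggressive response
assert abs(decay(100.0, 0.1, 3) - 72.9) < 1e-9    # gentler response
```

After three notifications, RDF = 0.5 has cut the rate to 12.5% of its original value, while RDF = 0.1 still leaves about 73%, illustrating the fast/oscillatory versus slow/smooth trade-off in part a.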
Q. (9 points) Argue for or against this statement (reason using examples as necessary):
Packets are lost only when network failures occur (e.g., a link goes down). But when the
network heals (e.g., the failed link comes back up again), packets do not get lost.
When the network fails (e.g., a link goes down), a number of packets that cross that link
may be lost.
When the network heals, packets/flows may cross relatively shorter paths to get to the
destination. Shorter paths have a delay-bandwidth product less than longer paths, and
hence can hold fewer bytes in the pipe (i.e., fewer segments can be in flight). Having a
TCP connection with a high CongWin go over a shorter path may also cause packet loss,
since the shorter paths will get congested, and buffers may overflow causing multiple
(sometimes severe) packet losses.
