
1. When you access a Web site through a browser, you get a message "Connecting to" followed by the IP address of the Web server. What is the use of this address to you? Is it necessary that this address be displayed at all? Discuss.
Answer
When you access a Web site through a browser, the Domain Name System (DNS) server of your Internet service provider resolves the host name in the URL to the IP address of the origin server on which the resource is located. A TCP connection is then established between the client and the origin server.
The message "Connecting to" followed by the IP address of the Web server is an indication that the DNS of your ISP has done its job. Displaying the address is not strictly necessary for the user, but it is useful feedback: if the DNS is down, you will not see this message and will not be able to access the URL. Note that the DNS may be working and yet you may still be unable to access the resource if the origin server itself is down.
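As an illustration of the resolution step described above, the short Python sketch below looks up a host name and prints the resulting IP address before opening a TCP connection; the host name example.com and port 80 are placeholders, not part of the original answer.

    import socket

    host = "example.com"  # placeholder host name

    # Step 1: DNS resolution -- this produces the address shown
    # in the browser's "Connecting to <IP>" message.
    ip_address = socket.gethostbyname(host)
    print(f"Connecting to {ip_address}")

    # Step 2: establish the TCP connection to the origin server (port 80 for HTTP).
    with socket.create_connection((ip_address, 80), timeout=5) as conn:
        print("TCP connection established to", conn.getpeername())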
2. Standardization of various protocols by the international standards bodies is done through consensus. Everyone has to accept the proposal; only then does the proposal become a standard. For standardizing the OSI architecture, there were two proposals: one based on six layers and the other based on eight layers. Finally, seven layers were accepted (just the average of six and eight, but no other reason!). Develop and suggest six-layer and eight-layer architectures and study the pros and cons of each.
Answer
A six-layer architecture for computer communication can be obtained simply by eliminating the session layer of the ISO/OSI architecture; session-layer functionality is minimal and can be absorbed by the application layer. An eight-layer architecture can add a layer above the transport layer that provides security features. The six-layer model is simpler but offers no dedicated place for dialogue control, while the eight-layer model makes security an explicit service at the cost of additional overhead and interface complexity.

3. The TCP/IP protocol suite is quite similar to the OSI reference model. Compare the two architectures.
Answer
The TCP/IP protocol suite is quite similar to the OSI reference model, and each contributed to the other. The main differences between the OSI architecture and that of TCP/IP relate to the layers above the transport layer (layer 4) and those below the network layer (layer 3). OSI has both the session layer and the presentation layer, whereas TCP/IP combines them into the application layer. Also, TCP/IP combines OSI's physical layer and data link layer into a network interface level. In reality, the TCP/IP model is agnostic to what exists below layer 3; however, it is common to see this level referred to as the network interface. The figure below shows the basic layering approach in both schemes.
4. In token ring systems, suppose that the destination station removes the
data frame and immediately sends a short acknowledgement frame to the
sender, rather than letting the original frame return to the sender. How
does this affect performance? Explain.
Answer
It may actually take longer with acknowledgment frames. If the source station removes the frame, then the time for one cycle of activity is:
Tt = Tf + T360
where:
Tt = total time for one transmission plus acknowledgment
Tf = time for the sender to transmit one frame
T360 = propagation time completely around the ring
If the destination station removes the frame, then the time for one cycle of activity is:
Tt = Tf + Tx + Tack + T(360-x)
   = Tf + T360 + Tack
where:
Tack = time for the receiver to transmit an acknowledgment frame
Tx = propagation time from sender to receiver
T(360-x) = propagation time from receiver back to sender
The difference is that, in the first case, the receiver acknowledges a frame by setting one or more bits while the frame goes by. In the second case, the receiver absorbs the entire frame and then transmits a separate acknowledgment frame, so each cycle is longer by Tack.
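A quick numeric check, under assumed values (these numbers are illustrative only, not from the original problem), shows the extra cost of the acknowledgment frame:

    # Illustrative values (assumptions, not given in the problem):
    T_f = 1.0e-3      # time to transmit one data frame: 1 ms
    T_360 = 0.1e-3    # propagation time all the way around the ring: 0.1 ms
    T_ack = 0.05e-3   # time to transmit a short acknowledgment frame: 0.05 ms

    # Case 1: source removes the frame; receiver just sets bits as it passes.
    t_cycle_source_removal = T_f + T_360

    # Case 2: destination removes the frame and sends a separate ACK frame.
    t_cycle_dest_removal = T_f + T_360 + T_ack

    print(t_cycle_source_removal)  # ~0.00110 s
    print(t_cycle_dest_removal)    # ~0.00115 s -> longer by T_ack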
5. Describe why an application developer might choose to run an application over UDP rather than TCP. Is it possible for an application to enjoy reliable data transfer even when the application runs over UDP? If so, how?
Answer:
Because the developer may want to avoid TCP's congestion control, which can throttle the application's sending rate at times of congestion. Also, some applications do not need the reliable data transfer provided by TCP.
Yes. The application developer can build reliable data transfer into the application-layer protocol itself, for example with sequence numbers, acknowledgments, and retransmission timers, as sketched below. This would require a significant amount of work and debugging, however.
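A minimal sketch of one way to do this, assuming a simple stop-and-wait scheme over UDP (the port number, timeout, and framing are arbitrary choices, not part of the original answer):

    import socket

    def reliable_send(sock, dest, payload, seq, timeout=0.5, max_retries=5):
        """Stop-and-wait sender: prepend a sequence number, retransmit until ACKed."""
        sock.settimeout(timeout)
        frame = seq.to_bytes(4, "big") + payload
        for _ in range(max_retries):
            sock.sendto(frame, dest)
            try:
                reply, _ = sock.recvfrom(16)
                if int.from_bytes(reply[:4], "big") == seq:  # matching ACK
                    return True
            except socket.timeout:
                continue  # lost packet or lost ACK: retransmit
        return False

    # Usage sketch (address and port 9999 are placeholders):
    # sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # reliable_send(sock, ("127.0.0.1", 9999), b"hello", seq=0)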
6. Describe the TCP segment format and explain why TCP is not well suited for
real-time communication.
Answer:
TCP Message (Segment) Format
The header consists of the following fields (see the figure in your exercise book):
1. Source Port: The 16-bit port number of the process that originated the TCP segment on the source device.
2. Destination Port: The 16-bit port number of the process that is the ultimate intended recipient of the message on the destination device.
3. Sequence Number: For normal transmissions, the sequence number of the first byte of data in this segment.
4. Acknowledgment Number: When the ACK bit is set, this segment is serving as an acknowledgment (in addition to other possible duties) and this field contains the sequence number the source is next expecting the destination to send.
5. Data Offset: Specifies the length of the TCP header in 32-bit words, so the receiver knows where the data begins.
6. Reserved: 6 bits reserved for future use; sent as zero.
7. Control Bits: Flags (URG, ACK, PSH, RST, SYN, FIN) that are set to indicate the communication of control information.
8. Window: Indicates the number of octets of data the sender of this segment is willing to accept from the receiver at one time.
9. Checksum: Used to protect the entire TCP segment against not just errors in transmission but also errors in delivery.
10. Urgent Pointer: Used in conjunction with the URG control bit for priority data transfer. This field points to the last byte of urgent data in the segment.
11. Options: This variable-length field specifies extra TCP options, such as the maximum segment size.
12. Padding: If the Options field is not a multiple of 32 bits in length, enough zeroes are added to pad the header so that it is a multiple of 32 bits.
13. Data: The bytes of data being sent in the segment.
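To make the field layout above concrete, here is a small sketch (the function name and dictionary keys are my own) that unpacks the fixed 20-byte portion of a TCP header from raw bytes:

    import struct

    def parse_tcp_header(segment: bytes):
        """Unpack the fixed 20-byte TCP header described in the list above."""
        (src_port, dst_port, seq, ack,
         offset_reserved_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
        data_offset = offset_reserved_flags >> 12   # header length in 32-bit words
        flags = offset_reserved_flags & 0x3F        # URG, ACK, PSH, RST, SYN, FIN
        return {
            "source_port": src_port,
            "destination_port": dst_port,
            "sequence_number": seq,
            "acknowledgment_number": ack,
            "data_offset_words": data_offset,
            "flags": flags,
            "window": window,
            "checksum": checksum,
            "urgent_pointer": urgent,
            "data": segment[data_offset * 4:],       # everything past the header
        }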

To provide reliable service, TCP uses acknowledgments and retransmissions, which introduce packet delay that cannot be tolerated by real-time traffic. For this reason, TCP is not well suited for real-time communication.
UDP, by contrast, provides a connectionless service and delivers packets quickly. In case of packet loss, UDP does not retransmit, but some degree of packet loss can be tolerated by voice and similar real-time applications.
11. How long does it take a packet of length 1,000 bytes to propagate over a link of distance 2,500 km, propagation speed 2.5 × 10^8 m/s, and transmission rate 2 Mbps? More generally, how long does it take a packet of length L to propagate over a link of distance d, propagation speed s, and transmission rate R bps? Does this delay depend on packet length? Does this delay depend on transmission rate?
Answer:
(a) The general expression for propagation delay is:
dprop = d/s
(b) The propagation delay for the given data is:
dprop = (2.5 × 10^6 m)/(2.5 × 10^8 m/s) = 0.01 s = 10 ms
(c) The propagation delay is independent of the packet length (see the equation above).
(d) The propagation delay is independent of the transmission rate used to transmit the data.
For comparison, the transmission delay is L/R = (8 bits/byte × 1,000 bytes)/2,000,000 bps = 4 ms, so the total time to deliver the packet is 4 + 10 = 14 ms.
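The same arithmetic, written as a small sketch (variable and function names are mine):

    def propagation_delay(d_meters, s_mps):
        """dprop = d / s: depends only on distance and propagation speed."""
        return d_meters / s_mps

    def transmission_delay(l_bits, r_bps):
        """dtrans = L / R: depends only on packet length and link rate."""
        return l_bits / r_bps

    d_prop = propagation_delay(2.5e6, 2.5e8)      # 2,500 km in meters -> 0.010 s = 10 ms
    d_trans = transmission_delay(1000 * 8, 2e6)   # 8,000 bits at 2 Mbps -> 0.004 s = 4 ms
    print(d_prop + d_trans)                       # 0.014 s = 14 ms total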
14. Consider an application which transmits data at a steady rate (for example, the sender generates an N-bit unit of data every k time units, where k is small and fixed). Also, when such an application starts, it will stay on for a relatively long period of time. Answer the following questions, briefly justifying your answer:
(a) Would a packet-switched network or a circuit-switched network be more appropriate for this application? Why?
(b) Suppose that a packet-switched network is used and the only traffic in this network comes from such applications as described above. Furthermore, assume that the sum of the application data rates is less than the capacity of each and every link. Is some form of congestion control needed? Why?
Answer:
(a) A circuit-switched network would be well suited to the application described, because the application involves long sessions with predictable, smooth bandwidth requirements. Since the transmission rate is known and the traffic is not bursty, bandwidth can be reserved for each application session's circuit with no significant waste. In addition, we need not worry greatly about the overhead costs of setting up and tearing down a circuit connection, which are amortized over the lengthy duration of a typical application session.
(b) Given such generous link capacities, the network needs no congestion control mechanism. In the worst (most potentially congested) case, all the applications simultaneously transmit over one or more particular network links. However, since each link offers sufficient bandwidth to handle the sum of all of the applications' data rates, no congestion (very little queueing) will occur.

15. The following question is about propagation delay and transmission delay. Consider two hosts, A and B, connected by a single link of R bps. Suppose that the two hosts are separated by m meters, and suppose the propagation speed along the link is s meters/second. Host A is to send a packet of size L bits to Host B.
(a) Express the propagation delay dprop in terms of m and s.
(b) Determine the transmission time of the packet dtrans in terms of L and R.
(c) Ignoring processing and queuing delays, obtain an expression for the end-to-end delay.
Answer:
(a) The expression for the propagation delay in terms of m and s is:
dprop = m/s
(b) The time taken to transmit L bits of data at the rate of R bps is:
dtrans = L/R
(c) We know that the end-to-end delay is the sum of all delays that the data encounters. In this case, it is the sum of the transmission and propagation delays only:
dtotal = m/s + L/R
15. Consider two hosts, A and B, connected by a single link of rate R bps. Suppose that the two hosts are separated by m meters, and suppose the propagation speed along the link is s meters/sec. Host A is to send a packet of size L bits to Host B.
(a) Express the propagation delay, dprop, in terms of m and s.
(b) Determine the transmission time of the packet, dtrans, in terms of L and R.
(c) Ignoring processing and queueing delays, obtain an expression for the end-to-end delay.
(d) Suppose Host A begins to transmit the packet at time t = 0. At time t = dtrans, where is the last bit of the packet?
(e) Suppose dprop is greater than dtrans. At time t = dtrans, where is the first bit of the packet?
(f) Suppose dprop is less than dtrans. At time t = dtrans, where is the first bit of the packet?
(g) Suppose s = 2.5 × 10^8 m/s, L = 100 bits, and R = 28 kbps. Find the distance m so that dprop equals dtrans.
Answer
a) dprop = m/s seconds.
b) dtrans = L/R seconds.
c) dend-to-end = (m/s + L/R) seconds.
d) The last bit is just leaving Host A.
e) The first bit is in the link and has not yet reached Host B.
f) The first bit has reached Host B.
g) We want dprop = dtrans, i.e. m/s = L/R, so m = Ls/R = (100 × 2.5 × 10^8)/(28 × 10^3) ≈ 893 km.
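The calculation in part (g), as a quick sketch:

    s = 2.5e8      # propagation speed in m/s
    L = 100        # packet length in bits
    R = 28e3       # transmission rate in bps

    # dprop = dtrans  =>  m / s = L / R  =>  m = L * s / R
    m = L * s / R
    print(m)       # 892857.14... m, i.e. roughly 893 km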
30. Consider a Go-Back-N protocol with a sender window size of 3 and a sequence number range of 1024. Suppose that at time t, the next in-order packet that the receiver is expecting has a sequence number of k. Assume that the medium does not reorder messages. Answer the following questions:
(a) What are the possible sets of sequence numbers inside the sender's window at time t? Justify your answer.
(b) What are all possible values of the ACK field in all possible messages currently propagating back to the sender at time t? Justify your answer.
Answer
a. Here we have a window size of N = 3. Suppose the receiver has received packet k-1, and has ACKed that and all preceding packets. If all of these ACKs have been received by the sender, then the sender's window is [k, k+N-1]. Suppose instead that none of the ACKs have been received at the sender. In this second case, the sender's window contains the N packets up to and including k-1, i.e. [k-N, k-1]. By these arguments, the sender's window is of size 3 and begins somewhere in the range [k-3, k].
b. If the receiver is waiting for packet k, then it has received (and ACKed) packet k-1 and the N-1 packets before that. If none of those N ACKs have yet been received by the sender, then ACK messages with values in [k-N, k-1] may still be propagating back. Because the sender has sent packets [k-N, k-1], it must be the case that the sender has already received an ACK for k-N-1. Once the receiver has sent an ACK for k-N-1, it will never send an ACK that is less than k-N-1. Thus the range of in-flight ACK values is from k-4 to k-1.
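A small sketch of the sender-side window bookkeeping, assuming cumulative ACKs as in the reasoning above (the function names are mine):

    # Go-Back-N sender-side window bookkeeping (illustrative sketch).
    N = 3  # window size

    def sender_window(base):
        """Sequence numbers the sender may have outstanding: [base, base + N - 1]."""
        return list(range(base, base + N))

    def on_cumulative_ack(base, ack):
        """A cumulative ACK for `ack` slides the window base to ack + 1."""
        return max(base, ack + 1)

    # Receiver expects packet k; if none of its ACKs have reached the sender yet,
    # the sender's window is still [k - N, k - 1]:
    k = 100
    base = k - N
    print(sender_window(base))                     # [97, 98, 99]

    # Once the cumulative ACK for k - 1 arrives, the window slides to [k, k + N - 1]:
    base = on_cumulative_ack(base, k - 1)
    print(sender_window(base))                     # [100, 101, 102]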

32. Host A is sending an enormous file to Host B over a TCP connection. Over this
connection there is never any packet loss and the timers never expire. Denote the
transmission rate of the link connecting Host A to the Internet by R bps. Suppose
that the process in Host A is capable of sending data into its TCP socket at a rate
of S bps, where S = 10*R. Further suppose that the TCP receive buffer is large
enough to hold the entire file, and the send buffer can hold only 1% of the file.
What would prevent the process in Host A from continuously passing data to its TCP socket at a rate of S bps? TCP flow control? TCP congestion control? Or something else? Explain.
Answer
In this situation there is no danger of overflowing the receiver, since there is no loss and acknowledgements are returned before timeouts, so TCP flow control does not limit the sender. TCP congestion control does not hold back the sender either. Instead, Host A cannot continuously pass data to the socket at rate S because the send buffer, which can hold only 1% of the file, quickly fills up; the process is then blocked until TCP drains the buffer onto the link at roughly rate R.
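A hedged sketch of what that looks like at the socket API level: with a deliberately small send buffer, a blocking send stalls once the buffer is full (the buffer size, address, and chunk size are illustrative assumptions):

    import socket

    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    # Shrink the send buffer to make the effect easy to see (illustrative value).
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 16 * 1024)

    # sock.connect(("server.example", 5000))   # placeholder address
    #
    # chunk = b"x" * 64 * 1024
    # while True:
    #     # Once the send buffer is full, this call blocks until TCP has
    #     # drained enough data onto the link at rate R -- this, not flow or
    #     # congestion control, is what throttles the application here.
    #     sock.sendall(chunk)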
39. Suppose the IEEE 802.11 RTS and CTS frames were as long as the standard
DATA and ACK frames. Would there be any advantage to using the CTS and RTS
frames? Why?
Answer
Yes, there is still some advantage: the RTS/CTS exchange continues to address the hidden terminal problem. The CTS frame sent by B ensures that a node A sending to B will not be interfered with by another node C that also wants to reach B, even if A and C cannot hear each other.
However, much of the benefit is lost. If a collision occurs while transmitting an RTS frame that is as long as a DATA frame, one could just as well have transmitted the DATA frame itself; the point of RTS/CTS is that collisions are confined to short frames, keeping them cheap. In addition, such long RTS/CTS frames introduce delay and consume channel resources.
42. What is the essential difference between Dijkstra's algorithm and the Bellman-Ford algorithm?
Answer
Dijkstra's algorithm requires all edge costs to be nonnegative, whereas the Bellman-Ford algorithm also works with negative edge costs (as long as there are no negative cycles). In a routing context, Dijkstra's algorithm is run with complete knowledge of the topology (link-state routing), while Bellman-Ford can be run iteratively and in a distributed fashion (the basis of distance-vector routing). Both are used to find shortest paths; for example, either could be used to suggest the shortest driving routes.
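A compact sketch of the Bellman-Ford relaxation loop, which tolerates negative edge weights (the example graph is made up):

    def bellman_ford(nodes, edges, source):
        """edges: list of (u, v, cost). Returns shortest distances from source."""
        dist = {n: float("inf") for n in nodes}
        dist[source] = 0
        # Relax every edge |V| - 1 times; unlike Dijkstra, negative costs are fine.
        for _ in range(len(nodes) - 1):
            for u, v, cost in edges:
                if dist[u] + cost < dist[v]:
                    dist[v] = dist[u] + cost
        return dist

    # Example graph (made up), including one negative-cost edge:
    nodes = ["A", "B", "C", "D"]
    edges = [("A", "B", 4), ("A", "C", 2), ("C", "B", -1), ("B", "D", 3)]
    print(bellman_ford(nodes, edges, "A"))  # {'A': 0, 'B': 1, 'C': 2, 'D': 4}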
43. There is one kind of adaptive routing scheme known as backward learning. As a packet is routed through the network, it carries not only the destination address but also the source address plus a running hop count that is incremented for each hop. Each node builds a routing table that gives the next node and hop count for each destination. How is the packet information used to build the table? What are the advantages and disadvantages of this technique?
Answer
When a packet arrives at a node, the node reads the packet's source address and hop count. If there is no routing-table entry for that source, or if the incoming hop count is smaller than the one currently recorded, the node records the link on which the packet arrived as the next node toward that source, together with the hop count. In effect, each node learns routes "backward" from the traffic passing through it.
Advantages: the scheme is simple, adds no extra control traffic, and adapts automatically as better routes reveal themselves; adaptive routing of this kind improves network performance and helps avoid congestion.
Disadvantages: a node only learns routes to destinations that actually send traffic through it, and it only learns improvements; if a learned route degrades or fails, the table is not corrected unless entries are periodically timed out and relearned.
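A minimal sketch of the table update rule described above (the function and field names are mine):

    # routing_table maps destination -> (next_node, hop_count)
    routing_table = {}

    def backward_learn(routing_table, source, incoming_link, hop_count):
        """Update the table from a packet's source address and running hop count."""
        entry = routing_table.get(source)
        if entry is None or hop_count < entry[1]:
            # The link the packet arrived on is the best known way back to its source.
            routing_table[source] = (incoming_link, hop_count)

    # Example: a packet from node X arrives on link 2 after 3 hops.
    backward_learn(routing_table, source="X", incoming_link=2, hop_count=3)
    print(routing_table)  # {'X': (2, 3)}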
44. Consider a system using flooding with a hop counter. Suppose that the hop
counter is originally set to the diameter of the network. When the hop count
reaches zero, the packet is discarded except at its destination. Does this always
ensure that a packet will reach its destination if there exists at least one operable
path? Why or why not?
Answer
Yes. With flooding, all possible paths are tried, so at least one copy of the packet will follow a minimum-hop path to the destination. Since the hop counter starts at the network diameter, it cannot reach zero before that copy arrives, as long as at least one operable path exists.
45. Why is it that when the load exceeds the network capacity, delay tends to infinity?
Answer
Here is a simple intuitive explanation of why delay must go to infinity. Suppose that each node in the network is equipped with buffers of infinite size and that the input load exceeds the network capacity. Under ideal conditions, the network will continue to sustain a normalized throughput of 1.0, so the rate of packets leaving the network is 1.0. Because the rate of packets entering the network is greater than 1.0, internal queue sizes grow. In the steady state, with input greater than output, these queue sizes grow without bound, and therefore queuing delays grow without bound.
46. What is the difference between backward and forward explicit congestion signaling?
Answer
Backward: Notifies the source that congestion avoidance procedures should be initiated where applicable for traffic in the opposite direction of the received notification. It indicates that the packets that the user transmits on this logical connection may encounter congested resources.
Forward: Notifies the user that congestion avoidance procedures should be initiated where applicable for traffic in the same direction as the received notification. It indicates that this packet, on this logical connection, has encountered congested resources.
49. Explain the difference between slow FHSS and fast FHSS.
Answer
Slow FHSS: multiple signal elements are transmitted per hop (the hop rate is lower than the symbol rate).
Fast FHSS: multiple hops occur per signal element (the hop rate is higher than the symbol rate).
