Handbook
Year: 2015
Prepared By: Prof. Ajay N. Upadhyaya, Asst. Prof. CE Dept, LJIET Page 2
1.0 Introduction to Computer Networks
1.1 What is Network?
In the simplest terms, a network consists of two or more computers that are
connected together to share information.
Networking may be defined as two or more computers being linked together, either
physically through a cable or through a wireless device. This link allows the
computers to share files, printers, and even Internet connections.
It can be a Computer, Workstation, Telephone handset, television and so on.
3) Destination
The Destination/Receiver is the device that receives the message.
It can be a Computer, Workstation, Telephone handset, television and so on.
4) Medium
The transmission medium is the physical path by which a message travels from
Sender to Receiver.
It can consist of twisted pair wire, Coaxial cable, Fibre optics cable, Laser, or
Radio waves.
5) Protocol
A protocol is a formal description of a set of rules and conventions that govern a
particular aspect of how devices on a network communicate.
Protocols determine the format, timing, sequencing, and error control in data
communication.
Without protocols, a computer cannot rebuild the stream of incoming bits from
another computer into its original format.
Communication between Source and Destination
End User Device
Within small organizations, the two most prevalent types of networks are “peer-to-
peer” networks and “client/server” networks.
Peer-to-Peer
Peer-to-peer networks are the simplest and least expensive type of network
available and are most suitable for organizations with fewer than 5 computers.
A peer-to-peer network will allow an organization to share files, printers and
even modems and Internet connections.
In general, a peer-to-peer network does not have a central server and consists of
2 or more computers connecting through a device called a “hub.”
The hub allows multiple computers and devices to connect via network cable.
While simpler and less expensive, peer-to-peer networks do not offer many of
the benefits of client/server networks.
And as an organization and network grows, the administration of these peer-to-
peer networks becomes more difficult and expensive.
Client/Server
Client/server networks are networks that connect individual computers, known
as “clients,” and one or more central computers, called “servers.”
There are many types of servers, the most common being a file server.
In a client/server network, the file server acts as a shared resource – a repository
for files, such as documents, spreadsheets, databases, etc.
Instead of storing these files on each individual machine, the file server permits
storage on one central computer.
In addition to the obvious advantage of reducing the possibility of multiple
iterations of a single file, it allows the organization to have one centralized point
from which to backup its files.
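The client/server idea above can be sketched with Python's standard socket module. The server below is a toy stand-in for a file server holding one central copy; the file name and contents are invented for illustration:

```python
import socket
import threading

def file_server(host="127.0.0.1", port=0):
    """A toy 'file server': answers any client with the stored bytes."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))  # port 0: let the OS pick a free port
    srv.listen(1)

    def serve():
        conn, _ = srv.accept()
        conn.sendall(b"report.txt contents")  # the shared central copy
        conn.close()

    threading.Thread(target=serve, daemon=True).start()
    return srv.getsockname()  # (host, chosen_port) for clients to use

def client_fetch(addr):
    """A client asks the central server instead of keeping a local copy."""
    cli = socket.create_connection(addr)
    data = cli.recv(1024)
    cli.close()
    return data

addr = file_server()
print(client_fetch(addr))  # b'report.txt contents'
```

Because every client fetches from one place, the organization also has one centralized point from which to back up files, as noted above.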
While some people might claim that, in theory, the hardware involved in a
network isn't extremely important, they probably haven't ever actually dealt
with setting one up.
Hardware is important. While in theory every hub should send and receive
signals perfectly, that isn't always the case. And the problem is that if you ask
two network administrators what hub they recommend, you will probably get
two entirely different, yet passionate, answers. From picking the cable (optical
fiber, coaxial, or copper) to choosing a server, you should find the most
suitable hardware for your needs.
1.8 Protocols and Standards
A protocol is a set of rules that governs data communication.
A protocol defines what is communicated, how it is communicated, and when
it is communicated.
The Key Elements of a Protocol are:
Syntax
Semantics
Timing
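As a sketch of the "syntax" element (the format of the data), the hypothetical frame layout below shows a sender and receiver agreeing on field order, sizes, and byte order. The field names and layout are invented for illustration, not taken from any real protocol:

```python
import struct

# Hypothetical frame layout (illustrative only):
# a 1-byte type field, a 2-byte big-endian length field, then the payload.
HEADER_FMT = ">BH"  # the "syntax": field order, sizes, and byte order
HEADER_LEN = struct.calcsize(HEADER_FMT)

def pack_frame(msg_type: int, payload: bytes) -> bytes:
    """Sender side: lay the fields out in the agreed format."""
    return struct.pack(HEADER_FMT, msg_type, len(payload)) + payload

def unpack_frame(frame: bytes):
    """Receiver side: rebuild the original fields from the incoming bytes."""
    msg_type, length = struct.unpack(HEADER_FMT, frame[:HEADER_LEN])
    return msg_type, frame[HEADER_LEN:HEADER_LEN + length]

frame = pack_frame(1, b"hello")
print(unpack_frame(frame))  # (1, b'hello')
```

Without this shared format, the receiver could not rebuild the incoming byte stream into the original fields, which is exactly the role of a protocol described above.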
1.9 Standards:-
A Standard provides a model for Development that makes it possible for a product to
work regardless of the individual manufacturer.
Standards Creation Committees
ISO (International Organization for Standardization)
ITU-T (International Telecommunication Union – Telecommunication
Standardization Sector)
ANSI (American National Standards Institute)
IEEE (Institute of Electrical and Electronics Engineers)
EIA (Electronics Industries Association)
Telcordia
2.0 : Basic concepts of Network
Bus Topology:-
Ring Topology:-
Connects each node to the next, and the last node back to the first.
Uses a token.
The token is passed from one node to the next.
Only the node holding the token is allowed to send data.
Advantage:- If a link is broken, that node is still reachable from the other side of the ring.
Disadvantage:- Unidirectional traffic.
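The token-passing rule described above can be illustrated with a small Python simulation (a sketch, not a real MAC protocol):

```python
# Minimal sketch of token passing on a ring.
def token_ring(nodes, rounds=1):
    """Pass the token around the ring; only the holder may send."""
    order = []
    token = 0  # index of the node currently holding the token
    for _ in range(rounds * len(nodes)):
        order.append(nodes[token])        # this node may transmit now
        token = (token + 1) % len(nodes)  # last node hands back to the first
    return order

print(token_ring(["A", "B", "C"]))  # ['A', 'B', 'C']
```

Each node transmits only in its turn, which is why traffic on the ring is orderly but unidirectional.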
Star topology:-
Tree Topology:-
It’s also known as hierarchical topology.
All the nodes are connected in a tree structure.
Advantage:-
Simple, and faults are easy to identify.
Disadvantage:-
If one link is broken, the nodes on that sub-branch cannot send data to
the other side.
Mesh topology:-
LAN Network
2.3.2 Metropolitan area network (MANs)
A MAN usually consists of two or more LANs in a common geographic area.
For example, a bank with multiple branches may utilize a MAN.
Typically, a service provider is used to connect two or more LAN sites using
private communication lines or optical services.
A MAN can also be created using wireless bridge technology by beaming
signals across public areas.
Some common MAN technologies include the following:
o ATM
o SMDS (Switched Multimegabit Data Service)
MAN Network
2.3.3 Wide area networks (WANs)
WANs interconnect LANs, which then provide access to computers or file servers
in other locations.
Because WANs connect user networks over a large geographical area, they make
it possible for businesses to communicate across great distances.
WANs allow computers, printers, and other devices on a LAN to be shared with
distant locations.
WANs provide instant communications across large geographic areas.
Collaboration software provides access to real-time information and resources and
allows meetings to be held remotely.
WANs have created a new class of workers called telecommuters. These people
never have to leave their homes to go to work.
WANs are designed to do the following:
Operate over a large and geographically separated area
Allow users to have real-time communication capabilities with other users
Provide full-time remote resources connected to local services
Provide e-mail, Internet, file transfer, and e-commerce services
Some common WAN technologies include the following:
Integrated Services Digital Network (ISDN)
Digital subscriber line (DSL)
Frame Relay
T1, E1, T3, and E3
Synchronous Optical Network (SONET)
WAN Network
2.4 Transmission modes:-
Data flow :-
Data flow is the flow of data between 2 points. The direction of the data flow can be
described as:
1) Simplex,
2) Half-Duplex,
3) Full-Duplex
1) Simplex:-
Data flows in only one direction on the data communication line (medium).
Examples are Radio and Television broadcasts. They go from the TV station to
your home television.
2) Half-Duplex:-
Data flows in both directions but only one direction at a time on the data
communication line.
Ex. Conversation on walkie-talkies is a half-duplex data flow. Each person takes
turns talking. If both talk at once - nothing occurs!
Bi-directional but only 1 direction at a time!
Half-Duplex is like the dreaded "one lane" road you may have run into at
construction sites.
Only one direction will be allowed through at a time. Railroads have to deal with
this scenario more often since it's cheaper to lay a single track.
A dispatcher will hold a train up at one end of the single track until a train going
the other direction goes through.
3) Full-Duplex:
Data flows in both directions simultaneously. Modems are configured to flow
data in both directions.
Bi-directional both directions simultaneously!
Telecommuting allows employees to perform office work at home by "Remote
Access" to the network.
Automated banking machines allow banking transactions to be performed
almost anywhere: at grocery stores, drive-in machines, etc.
Information Service Providers: provide connections to the Internet and other
information services. Examples are CompuServe, Genie, Prodigy, America On-
Line (AOL), etc...
Electronic Bulletin Boards (BBS - Bulletin Board Services) are dialup connections
(use a modem and phone lines) that offer a range of services for a fee.
Value Added Networks are common carriers such as AGT, Bell Canada, etc..
(can be private or public companies) who provide additional leased line
connections to their customers. These can be Frame Relay, ATM (Asynchronous
Transfer Mode), X.25, etc.. The leased line is the Value Added Network.
3.0 : Transmission media
3.1 Introduction
Data is represented by computers and other telecommunication devices using
signals. Signals are transmitted in the form of electromagnetic energy from one
device to another. Electromagnetic signals travel through vacuum, air, or other
transmission media from one point to another (from source to receiver).
Electromagnetic energy (includes electrical and magnetic fields) includes power,
voice, visible light, radio waves, ultraviolet light, gamma rays etc.
Transmission medium is the means through which we send our data from one place
to another. The first layer (physical layer) of the OSI seven-layer model is
dedicated to the transmission media.
STP & UTP
UTP connector
Advantages of UTP
o Cost is less
o Easy to use
o Easy to install
o Flexible
UTP is used in Ethernet and Token Ring.
STP has a metal foil or cover.
Crosstalk (the effect of one channel on the other channel) is less in STP.
STP has the same considerations as UTP.
The shield must be connected to ground.
Disadvantage of STP : cost is high
The most common connector is the barrel connector.
The most popular type is the BNC (Bayonet Neill–Concelman) connector.
Two other types are T-connectors and terminators.
T-connectors and terminators are used in bus topology.
Advantages:-
o Easy to Install.
o Inexpensive installation.
o It is better for higher distances at higher speeds than twisted pair.
o Excellent noise immunity.
Disadvantage:-
o High cost
o Harder to work with
• This consists of a central glass core, surrounded by a glass cladding of
lower refractive index, so that the light stays in the core (using Total
Internal Reflection)
• outside is covered with plastic jacket
• Many fibers may be bundled together surrounded by another plastic
cover
Refraction:-
When light travels from one medium to another, its speed and direction
change. This change is called refraction.
I - Angle of Incidence.
R- Angle of Refraction.
Critical Angle:-
As the angle of incidence increases, at some point the refracted angle reaches 90
degrees, with the refracted beam lying along the horizontal. The incident angle at
this point is known as the critical angle.
Reflection:-
When the angle of incidence becomes greater than the critical angle, a new
phenomenon occurs, called reflection.
Light traveling through a fiber-optic cable
o The source of light is usually a Light Emitting Diode (LED) or a LASER. The
light source is placed at one end of the optical fiber.
o The detector, which is placed at the other end of the fiber, is usually a Photo
Diode and it generates an electrical pulse when light falls on it.
o Hence by attaching a light source on one end of an optical fiber and a detector at
the other end, we have a unidirectional data transmission system (Simplex)
o The light source accepts an electrical signal, converts it, and transmits it as
light pulses.
o The detector at the far end reconverts the light pulses into an electrical signal to
be then interpreted as 1 or a 0.
o This limits the data rate to 1 Gb/s (1×10^9 bits/s).
Propagations Mode:-
Single Mode:-
A mono-mode (or single-mode) fiber is one that allows only a single
ray (mode) of light to pass down it.
Only single angle passes
Superior performance
High cost
High speed
Long distance (up to 100km)
Multimode Fiber
Each light ray is said to have a different mode, so a fiber that allows a
lot of rays to travel through it is called a multimode fiber.
Variety of angles of light will reflect and propagate.
Low cost
Low speed
Less distance(up to 2km)
Two different light sources – both emit light when voltage applied
o LED – Light Emitting Diode – less costly, longer life
o ILD - Injection Laser Diode – greater data rate
Introduction
Information is usually transmitted by either radio or microwave transmission.
Unguided media transport electromagnetic waves without using a physical conductor.
Signals are broadcast through air (or in a few cases, water).
Bands
o At higher frequencies (>100 MHz) radio waves tend to travel in straight lines,
bounce off obstacles, and can be absorbed by rain (e.g., in the 8 GHz range).
o At all frequencies, radio waves are subject to interference from motors and other
electrical equipment
In the very low frequency (VLF), low frequency (LF), and medium
frequency (MF) bands (<1 MHz), radio waves follow the ground.
(The maximum possible distance that these waves can travel is
approximately 1000 km.)
Terrestrial Microwave
Satellite Communication:
It’s line-of-sight microwave transmission.
– Point to Point (Ground station to satellite to ground station)
– Multipoint(Ground station to satellite to multiple receiving stations)
• Satellite orbit
– 35,786 km, to match Earth's rotation
– Stays fixed above the transmitter/receiver station as earth rotates
• Satellites need to be separated by distance
– Avoid interference
• Applications
– TV, long distance telephone(satellite phone), private business networks
• Optimum frequency range
– 1 – 10 GHz
– Below 1GHz results in noise, above 10GHz results in severe attenuation
3.4.3 Infrared:
o Infrared signals can be used for short-range communication in a closed
area using line-of-sight propagation.
o Transceivers must be within line of sight of each other or via reflection
o Does not penetrate walls like microwave
o No frequency allocation or licensing
3.4.4 Bluetooth
o Bluetooth is a wireless technology standard for exchanging data over short
distances (2.4 to 2.485 GHz) from fixed and mobile devices and building personal
area networks (PANs).
o Invented by telecom vendor Ericsson in 1994, it was originally conceived as a
wireless alternative to RS-232 data cables.
o It can connect several devices, overcoming problems of synchronization.
o Penetrates walls and other objects.
o No line-of-sight required.
Cell: for tracking a caller (customer), each cellular service area is divided into
small regions called cells.
o Cell office: Each cell contains an antenna and is controlled by a small office,
called the cell office.
o Cell size is not fixed and can be increased or decreased depending on the
population of the area.
o The typical radius of a cell is 1 to 12 miles.
o High-density areas require more, geographically smaller cells to meet traffic
demands than do lower density areas.
MTSO: Each cell office, in turn, is controlled by a switching office called a
mobile telephone switching office (MTSO).
o The MTSO coordinates communication between all of the cell offices and the
telephone central office.
o MTSO is a computerized center that is responsible for connecting calls as well as
recording call information and billing.
o The MTSO searches for the location of a mobile phone by sending a query
signal to each cell, in a process called paging.
Handoff
o During a conversation, the mobile phone may move from one cell to another.
o The call must not be terminated at this time; the MTSO hands responsibility
for the call to the MTSO of the new cell, which then handles the continuing
call.
4.0: OSI Model & TCP/IP Model
4.1 Introduction
ISO is an organization and OSI is a model.
ISO stands for International Organization for Standardization.
OSI stands for Open Systems Interconnection.
OSI model is used for understanding the concept of network architecture.
OSI consists of seven separate but related layers, each of which defines a
segment of the process of moving information across a network.
Interface:- Each interface defines what information and services a layer must
provide for the layer above it. An interface is required to transfer data between
different layers.
Headers or Trailers:- Headers and trailers are control data added to the
beginning or the end of a data parcel. A trailer is added at layer 2.
4.3 OSI Layers:
4.3.1 Physical Layer
4.3.2 Data Link Layer
Transmitting Frame
Responsible for Node to Node delivery
Makes the physical layer appear error free to the upper layer (network layer).
Framing:
o the data link layer divides the stream of bits received from the network
layer into manageable data units called frames.
Physical addressing:
o Adds the MAC address (layer 2/physical address) of the source and
receiver.
Flow control:
o Flow of data must be controlled.
o If the sender's sending rate is higher than the receiver's receiving rate,
flow control is required.
Error Control:
o Adds reliability
o Detect and Retransmit damaged or lost frames.
o Prevent duplication of frames.
o Trailer is added to the end of the frame for error controlling.
Access Control:
o Determines which node has control over the link at any time.
o Layer-2 switches operate at layer 2; hubs operate at the physical layer.
4.3.3 Network Layer:
4.3.4 Transport Layer
Responsibilities of the transport layer:
Service-point Addressing:
o Computers often run several programs at the same time.
o The transport layer header therefore must include a type of address called
a service-point address (or port address).
Segmentation and reassembly:
o A message is divided into transmittable segments.
o Each segment contains a sequence number.
o The sequence numbers allow the transport layer to reassemble the
message correctly upon arrival at the destination, and to identify and
replace packets that were lost in transmission.
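Segmentation and reassembly can be sketched in Python; the segment size and message below are arbitrary examples:

```python
def segment(message: bytes, size: int):
    """Transport-layer style segmentation: number each piece."""
    count = (len(message) + size - 1) // size  # ceiling division
    return [(i, message[i * size:(i + 1) * size]) for i in range(count)]

def reassemble(segments):
    """Sort by sequence number and rebuild the original message."""
    return b"".join(data for _, data in sorted(segments))

segs = segment(b"hello world", 4)
segs.reverse()  # segments may arrive out of order
print(reassemble(segs))  # b'hello world'
```

The sequence numbers are what make reassembly possible even when the segments arrive out of order.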
Connection Control:
o The transport layer can be either connectionless or connection-oriented.
o A connectionless transport layer treats each segment as an independent
packet and delivers it to the transport layer at the destination machine.
o A connection-oriented transport layer makes a connection with the
transport layer at the destination machine first before delivering the
packets. After all the data are transferred, the connection is terminated.
Flow control:
o It is responsible for flow control end to end, rather than across a single
link.
Error control:
o It is responsible for error control end to end, rather than across a single
link.
o The sending transport layer makes sure that entire message arrives at the
receiving transport layer without error (damage, loss or duplication).
o Error correction is usually achieved through retransmission.
4.3.5 Session Layer
Dialog control:
o The session layer allows two systems to enter into a dialog.
o It allows the communication between two processes to take place either in
half-duplex (one way at a time) or full-duplex (two ways at a time). For
example, the dialog between a terminal connected to a mainframe can be
half-duplex.
Synchronization:
o It allows a process to add checkpoints (synchronization points) into a
stream of data.
o For example, if a system is sending a file of 2000 pages, it is advisable to
insert checkpoints after every 100 pages to ensure that each 100-page unit
is received and acknowledged independently. In this case, if a crash
happens during the transmission of page 523, retransmission begins at
page 501: pages 1 to 500 need not be transmitted.
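The checkpoint arithmetic in this example can be written as a small Python helper (an illustrative sketch):

```python
def resume_page(crash_page: int, checkpoint_every: int = 100) -> int:
    """First page that must be retransmitted after a crash, given a
    checkpoint (acknowledgement) every `checkpoint_every` pages."""
    last_acked = ((crash_page - 1) // checkpoint_every) * checkpoint_every
    return last_acked + 1

print(resume_page(523))  # 501: pages 1-500 were already acknowledged
```

This matches the example above: a crash during page 523 means pages 1 to 500 are safe, and retransmission begins at page 501.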
4.3.6 Presentation Layer
Translation:
o The processes (running programs) in two systems are usually exchanging
information in the form of character strings, numbers, and so on.
o The information should be changed to bit streams before being
transmitted. Because different computers use different encoding systems,
the presentation layer is responsible for interoperability between these
different encoding methods.
o The presentation layer at the sender changes the information from its
sender-dependent format into a common format. The presentation layer at
the receiving machine changes the common format into its receiver-
dependent format.
Encryption:
o To carry sensitive information, a system must be able to assure privacy.
o Encryption means that the sender transforms the original information to
another form and sends the resulting message out over the network.
Decryption reverses the original process to transform the message back to
its original form.
Compression:
o Data compression reduces the number of bits to be transmitted.
o Data compression becomes particularly important in the transmission of
multimedia such as text, audio, and video.
4.3.7 Application Layer
The application layer enables the user, whether human or software, to access the
network.
It provides user interfaces and support for services such as electronic mail,
remote file access and transfer, shared database management, and other types
of distributed information services.
4.4 Internet Protocol
Trivial File Transfer Protocol (TFTP)
Network File System (NFS)
Simple Mail Transfer Protocol (SMTP)
Telnet
Simple Network Management Protocol (SNMP)
Domain Name System (DNS)
5.0: Transmission Signal Delay
5.1 Introduction
Type of signal communicated (analog or digital).
(1) Analog:
Those signals that vary with smooth continuous changes.
A continuously changing signal similar to that found on the speaker wires of a
high-fidelity stereo system.
(2) Digital:
Those signals that vary in steps or jumps from value to value. They are usually in
the form of pulses of electrical energy (represent 0s or 1s).
Analog vs. digital:
– Analog is inexpensive to use; digital is expensive to use.
– Analog is not complicated; digital is complicated.
– Analog can deliver better sound quality; digital does not give that much sound clarity.
– Analog does not give that much picture clarity; digital offers better picture clarity.
– Analog bandwidth is measured in Hz; digital bandwidth is measured in bps.

Bandwidth: Analog
– The difference between the highest and lowest frequencies that can be sent
over an analog link (like phone lines).
– Measurement is given in hertz (Hz).
– As an analogy, consider a dimmer switch (analog) that allows you to vary the
light in different degrees of brightness.

Bandwidth: Digital
– Number of bits per second (bps) that can be sent over a link.
– The wider the bandwidth, the more diverse kinds of information can be sent.
– As an analogy, consider a light switch that is either on or off (digital).
Transmission time = Packet Length / Transmission Rate = L/R
Ex-1 What is the propagation time if the distance between the two points is 12,000
km? Assume the propagation speed to be 2.4 × 10^8 m/s in cable.
• Propagation time = Distance / Propagation speed
• Propagation time = (12,000 × 1000) / (2.4 × 10^8) = 50 ms
Ex-2 What is the Transmission time for a 2.5KB message if the bandwidth of the
network is 1 Gbps?
Solution:
• 2.5 KB = 2.5 × 1000 bytes = 2.5 × 1000 × 8 bits = 2500 × 8 bits
• 1 Gbps = 10^9 bps
• Transmission time = (2500 × 8) / 10^9 = 0.00002 seconds
Answer = 0.02 ms.
Ex-3 A network with bandwidth of 10 Mbps can pass only an average of 12,000 frames per
minute with each frame carrying an average of 10,000 bits. What is the throughput of this network?
Solution:
Throughput = (12,000 × 10,000) / 60 = 2,000,000 bps = 2 Mbps
The throughput is almost one-fifth of the bandwidth in this case.
Ex-4 What are the propagation time and the transmission time for a 5-Mbyte message (an image) if
the bandwidth of the network is 1 Mbps? Assume that the distance between the sender and the receiver is
12,000 km and that light travels at 2.4 × 10^8 m/s.
Solution:
Propagation time = (12,000 × 1000) / (2.4 × 10^8) = 0.05 s = 50 ms
Transmission time = (5,000,000 × 8) / 10^6 = 40 s
Note that in this case, because the message is very long and the bandwidth is not
very high, the dominant factor is the transmission time, not the propagation time. The
propagation time can be ignored.
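The two formulas used in these examples can be captured as Python helpers; the default propagation speed matches the 2.4 × 10^8 m/s assumed above:

```python
def propagation_time(distance_m: float, speed_mps: float = 2.4e8) -> float:
    """Propagation time = distance / propagation speed (seconds)."""
    return distance_m / speed_mps

def transmission_time(packet_bits: float, rate_bps: float) -> float:
    """Transmission time = packet length / transmission rate (seconds)."""
    return packet_bits / rate_bps

# Ex-1: 12,000 km through cable at 2.4e8 m/s
print(propagation_time(12_000_000))     # 0.05 s = 50 ms
# Ex-4: 5-Mbyte message over a 1 Mbps link
print(transmission_time(5e6 * 8, 1e6))  # 40.0 s
```

As the note above says, for a long message on a slow link the transmission time dominates and the propagation time can often be ignored.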
6.0: Multiplexing
6.1 Multiplexing:
Multiplexing is sending multiple signals or streams of information on a carrier at
the same time in the form of a single, complex signal and then recovering the
separate signals at the receiving end.
In analog transmission, signals are commonly multiplexed using frequency-
division multiplexing (FDM), in which the carrier bandwidth is divided into sub
channels of different frequency widths, each carrying a signal at the same time in
parallel. In digital transmission, signals are commonly multiplexed using time-
division multiplexing (TDM), in which the multiple signals are carried over the
same channel in alternating time slots.
In some optical fiber networks, multiple signals are carried together as separate
wavelengths of light in a multiplexed signal using dense wavelength division
multiplexing (DWDM).
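The alternating time slots of TDM described above can be sketched in Python; the stream contents are invented for illustration, and this sketch assumes equal-length input streams:

```python
def tdm_multiplex(streams):
    """Synchronous TDM: each input stream gets a recurring time slot in turn.
    Assumes all streams have the same length."""
    frames = []
    for slot in zip(*streams):  # one frame = one unit from each stream
        frames.extend(slot)
    return frames

def tdm_demultiplex(link, n_streams):
    """Receiver: deal the slots back out to the n output streams."""
    return [link[i::n_streams] for i in range(n_streams)]

link = tdm_multiplex([["A1", "A2", "A3"], ["B1", "B2", "B3"]])
print(link)                      # ['A1', 'B1', 'A2', 'B2', 'A3', 'B3']
print(tdm_demultiplex(link, 2))  # [['A1', 'A2', 'A3'], ['B1', 'B2', 'B3']]
```

The receiving end recovers each signal by picking out every n-th slot, which is the time-domain analogue of tuning to a frequency in FDM.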
6.2 FDM:
One of FDM's most common applications is cable television. Only one cable
reaches a customer's home but the service provider can send multiple television
channels or signals simultaneously over that cable to all subscribers.
Receivers must tune to the appropriate frequency (channel) to access the desired
signal.
Multiplexing
Demultiplexing
6.3 WDM:
It is an analog multiplexing technique used to combine optical signals.
In fiber-optic communications, wavelength-division multiplexing (WDM) is a
technology which multiplexes a number of optical carrier signals onto a single
optical fiber by using different wavelengths (i.e. colors) of laser light.
This technique enables bidirectional communications over one strand of fiber, as
well as multiplication of capacity.
6.4 TDM
7.0. Error Correction and Error Detection
7.1 Introduction
Data can be corrupted during transmission.
A reliable system must therefore have a mechanism for detecting and correcting
errors.
Different factors affect the signal: unpredictable interference from heat,
magnetism, and other forms of electricity.
7.2 Types of Error
Errors
Single-bit Error:
In a single-bit error, only one bit in the data unit has changed.
00100010 changed to 00000010
So, there is a single-bit error in the data unit.
Burst Error:
Burst error means that two or more bits in the data unit have changed from
1 to 0 or from 0 to 1.
Length of burst
A burst error does not necessarily occur in consecutive bits.
So we measure the length between the first corrupted bit and the last
corrupted bit; this is called the length of the burst.
Original data:  0 0 1 0 0 0 1 0
Received data: 0 1 1 0 1 1 1 0
The corrupted bits run from position 2 to position 6, so the length of the burst
error is 5 bits.
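Measuring the length of a burst, as defined above, can be sketched in Python:

```python
def burst_length(sent: str, received: str) -> int:
    """Distance from the first to the last corrupted bit, inclusive."""
    bad = [i for i, (a, b) in enumerate(zip(sent, received)) if a != b]
    return (bad[-1] - bad[0] + 1) if bad else 0

print(burst_length("00100010", "01101110"))  # 5
```

Note that the uncorrupted bits inside the burst still count toward its length; only the first and last corrupted positions matter.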
Redundancy
Error detection uses the concept of redundancy, which means adding extra bits
for detecting errors at the destination.
A mechanism is required to detect errors in the data.
Different methods are used for error detection.
Detection methods
Blocks of bits are organized in a table format (means In a Row and Column).
Example:
We have data
01100111000111010001100100101001
The 32-bit block is divided into 4 rows and 8 columns.
01100111 00011101 00011001 00101001
Arrange it in table Format.
01100111
00011101
00011001
00101001
------------
01001010 LRC
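The LRC computed in this table (column-wise even parity) can be reproduced with a short Python sketch:

```python
def lrc(blocks):
    """LRC: even parity over each bit column of the 8-bit blocks."""
    out = 0
    for block in blocks:
        out ^= int(block, 2)  # column-wise XOR yields the even-parity bits
    return format(out, "08b")

print(lrc(["01100111", "00011101", "00011001", "00101001"]))  # 01001010
```

The LRC row is appended to the data; the receiver recomputes it and compares, catching any error that flips an odd number of bits in a column.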
Example-1
[A] Binary division in a CRC generator
Example-2
1) If the divisor (the fixed code) has n bits, append n-1 zeros to the original string.
2) If the first bit of the string is 1, put 1 in the quotient and XOR the n bits of
the string with the fixed code.
3) If the first bit of the string is 0, put 0 in the quotient and XOR the n bits of
the string with n zeros.
4) Apply the same method to the remaining bits, bringing down one bit of the
string at a time.
5) Find the remainder.
6) Append the remainder to the original string.
The original data frame and the generator can also be represented in polynomial
format.
Example:- 11011011 can be represented as:
x^7 + x^6 + x^4 + x^3 + x + 1
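The modulo-2 division steps above can be sketched in Python; the data and divisor bit strings below are arbitrary examples, not taken from the text:

```python
def crc_remainder(data: str, divisor: str) -> str:
    """Modulo-2 (XOR) division as in the steps above; bit strings in and out."""
    n = len(divisor)
    bits = list(data + "0" * (n - 1))  # step 1: append n-1 zeros
    for i in range(len(data)):
        if bits[i] == "1":             # leading 1: XOR with the divisor
            for j in range(n):
                bits[i + j] = str(int(bits[i + j]) ^ int(divisor[j]))
        # leading 0: XOR with zeros, i.e. leave the bits unchanged
    return "".join(bits[-(n - 1):])    # the remainder

print(crc_remainder("1101", "1011"))  # 001
```

The sender appends this remainder to the data; the receiver divides the received codeword by the same divisor and accepts it only if the remainder is all zeros.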
[4] Checksum:
It is an error detection method used by higher-layer protocols.
The sender follows these steps:
1) Divide the whole data unit into k sections, each containing n bits.
2) Add all sections together.
3) Complement the sum; this becomes the checksum.
4) Send the checksum with the data.
10101001
00111001
---------------------
SUM: 11100010
CHECKSUM: 00011101
At Receiver side
Data of 24 bit
10101001 00111001 00011101
10101001
00111001
00011101
---------------------
SUM: 11111111
The complement of the sum is 00000000, so the data are accepted as error-free.
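The sender and receiver steps can be sketched in Python; this version also wraps any carry back into the sum (one's-complement style), though no carry arises in the example above:

```python
def checksum(sections):
    """Checksum over 8-bit sections: add, wrap carries, complement."""
    total = 0
    for s in sections:
        total += int(s, 2)
        total = (total & 0xFF) + (total >> 8)  # end-around carry
    return format(~total & 0xFF, "08b")        # complement of the sum

send = checksum(["10101001", "00111001"])
print(send)  # 00011101
# The receiver adds the checksum too; a result of all zeros means no error.
print(checksum(["10101001", "00111001", send]))  # 00000000
```

If any section is corrupted in transit, the receiver's recomputed value will (with high probability) no longer complement to zero, and the data unit is rejected.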
2^r >= m + r + 1   (m = number of data bits, r = number of redundancy bits)
Hamming code
Example:
Redundancy bit calculation
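The redundancy-bit inequality above (2^r >= m + r + 1) can be solved for the smallest r with a short Python sketch:

```python
def redundancy_bits(m: int) -> int:
    """Smallest r satisfying 2**r >= m + r + 1 (m data bits, r check bits)."""
    r = 1
    while 2 ** r < m + r + 1:
        r += 1
    return r

print(redundancy_bits(7))  # 4 check bits for 7 data bits (Hamming(11,7))
```

For example, 7 data bits need 4 redundancy bits, since 2^4 = 16 >= 7 + 4 + 1 = 12, while 2^3 = 8 falls short.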
8.0 Ethernet
8.1 Introduction
Ethernet was first implemented by the Digital, Intel, and Xerox group (DIX).
Most of the traffic on the Internet originates and ends with an Ethernet
connection.
The success of Ethernet is due to the following factors
Simplicity and ease of maintenance
Ability to incorporate new technologies
Reliability
Low cost of installation and upgrade
Ethernet is standardized in the Institute of Electrical and Electronics Engineers (IEEE) 802.3 specification:
802.3u for Fast Ethernet
802.3z for Gigabit Ethernet over fiber
802.3ab for Gigabit Ethernet over UTP
Fast Ethernet is used
To connect backbone devices
To connect enterprise servers
8.2 Connectors
RJ-45 : a connector commonly used for terminating a twisted-pair cable.
AUI : a connector that interfaces between a computer's NIC or router interface
and an Ethernet cable.
GBIC : a device used as an interface between Ethernet and fiber-optic
systems.
The RJ-45 transparent end connector shows eight colored wires.
Four of the wires, T1 through T4, carry the voltage and are called tip.
The other four wires, R1 through R4, are grounded and are called ring
The wires in the first pair in a cable or a connector are designated as T1 and R1.
The second pair is T2 and R2, the third is T3 and R3, and the fourth is T4 and R4
8.3 UTP Connection
Use straight-through cables for the following connections:
Switch to router
Switch to PC or server
Hub to PC or server
Use crossover cables for the following connections:
Switch to switch
Switch to hub
Hub to hub
Router to router
PC to PC
Router to PC
8.4 History
The first Ethernet standard was published in 1980
1985, the IEEE standards committee for Local and Metropolitan Networks
published standards for LANs.
1995, IEEE announced a standard for a 100-Mbps Ethernet
Gigabit Ethernet in 1998 and 1999
10Base2 [Thin Ethernet]
Installation was easier because of its smaller size, lighter weight, and
greater flexibility.
It has a low cost and does not require hubs.
A T-shaped connector is used.
There may be up to 30 stations on a 10BASE2 segment.
10BaseT [Twisted-Pair Ethernet]
10BASE-T uses cheaper cabling and is easier to install
10BASE-T uses Manchester encoding
T568-A or T568-B cable pinout arrangement
100 Mbps Ethernet / Fast Ethernet
100BaseTx
Category 5 UTP cable is used
100BASE-TX uses 4B/5B encoding
Distance between Hub and Station should be less than 100 m.
100BaseFx
100BASE-FX uses NRZI encoding
Uses Optical Fiber.
Distance between Hub and Station should be less than 2000 m.
1000Mbps Ethernet / Gigabit Ethernet
IEEE 802.3ab, uses Category 5
IEEE 802.3z, specifies 1Gbps full duplex over optical fiber
Fiber-based Gigabit Ethernet uses 8B/10B encoding, which is similar to the
4B/5B concept.
Followed by the simple nonreturn to zero (NRZ) line encoding of light on
optical fiber
Switched Ethernet
When Ethernet was originally designed, computers were fairly slow and
networks were rather small. Therefore, a network running at 10 Mbps was
more than fast enough for just about any application. Nowadays,
computers are several orders of magnitude faster and networks consist of
hundreds or thousands of nodes, and the demand for bandwidth placed
on the network is often far more than it can provide.
When the load on a network is so high that it results in large numbers of
collisions and lost frames, the productivity of the users is greatly reduced.
This is called congestion and it can be solved in one of two ways: either
scrap the entire network currently in place and replace it with a faster one;
or install an Ethernet switch to create multiple small networks.
In a "traditional" Ethernet network, there is 10 Mbps of bandwidth
available. This bandwidth is shared among all of the users of the network
who wish to transmit or receive information at any one time. In a large
network, there is a very high probability that several users will make a
demand on the network at the same time, and if these demands occur
faster than the network can handle them, eventually the network seems to
slow to a crawl for all users.
Switches allow us to create a "dedicated road" between individual users
(or small groups of users) and their destination (usually a file server). The
way they work is by providing many individual ports, each running at 10
Mbps interconnected through a high speed backplane. Each frame, or
piece of information, arriving on any port has a Destination Address field
which identifies where it is going to. The switch examines each frame's
Destination Address field and forwards it only to the port which is
attached to the destination device. It does not send it anywhere else.
Several of these conversations can go through the switch at one time,
effectively multiplying the network's bandwidth by the number of
conversations happening at any particular moment.
9.0 Interconnecting Devices
9.1 Introduction
9.1.1 Different category of connecting devices
9.2 Repeater
A repeater is also known as a regenerator.
It is an electronic device.
It is used at the physical layer of the OSI model.
It is used to carry information over longer distances.
It does not work as an amplifier; it does not amplify the signal.
It regenerates the original bit patterns from the weakened signal.
So, a repeater is a regenerator, not an amplifier.
It is not an intelligent device.
9.3 Bridge
A bridge operates at both the physical layer and the data link layer.
A bridge divides a larger network into smaller segments.
It maintains the traffic for each segment.
It maintains the physical address of each node of each segment.
Addresses are stored in a look-up table.
Types of bridge
Bridge
1. Simple Bridge
o Used to connect two segments
o Least expensive
o The address of each node is entered manually
o Installation and maintenance are costly and time consuming
2. Multiport Bridge
o Used to connect more than two LANs
o Maintains the physical address of each station
o If three segments are connected, it maintains three tables
o The address of each node is entered manually
o Installation and maintenance are costly and time consuming
3. Transparent Bridge
o Also known as a learning bridge
o Maintains the physical address of each station
o Manual entry of each node is not required
o Builds its address table on its own; initially the table is empty
o After each frame it processes, it stores the details of the source node
o It is self-updating
9.4 Router
It is used at the network layer of the OSI model
It maintains the logical address of each node
It is an intelligent device
It has its own software
It determines the best path among the different available paths
It is used to connect two different networks
Manual handling is not required
We can configure the router according to our requirements
It maintains addresses on its own
Initially the table is empty
After each process it stores the details of each node
It is self-updating
9.5 Gateway
It operates in all seven layers of the OSI model
A gateway is a protocol converter
When two networks run different protocols, it accepts a packet from one
network and transmits it to the other network
It converts the packet into a suitable form
A gateway is software installed within a router.
A gateway adjusts the data rate, size, and format of each packet.
9.6.2 Brouter
It is a combination of a bridge and a router
It may be a single-protocol or multiprotocol router
It works as both a bridge and a router
It routes packets based on the network layer address
It also divides the network into smaller segments
9.6.3 Switch
There are two types of switches: layer-two and layer-three switches
A layer-two switch operates at the second layer; a layer-three switch operates at
the third layer
We can configure a layer-three switch
A layer-two switch maintains the physical address of each node
A layer-three switch handles logical addresses
No manual handling of the tables is required
The first time, the switch broadcasts the message if the destination address is not
yet in the table.
Two different switching strategies:
o Store-and-forward switch: stores the frame in the input buffer until the
whole frame has arrived.
o Cut-through switch: does not wait for the whole frame; it forwards the frame
toward the destination as soon as the destination address has been read.
9.6.4 Hub
Used at the physical layer
It is a dumb device
It does not maintain any node details
It broadcasts the packets every time
It is cheaper
Used to connect two or more computers
10.0 Framing
10.1 Framing
In the OSI model of computer networking, a frame is the protocol data unit at the data
link layer. Frames are the result of the final layer of encapsulation before the data is
transmitted over the physical layer
10.2 Byte Stuffing
• (a) A frame delimited by flag bytes.
• (b) Four examples of byte sequences before and after stuffing.
• Single ESC: part of the escape sequence.
• Doubled ESC: a single ESC is part of the data.
• The receiver removes the stuffed bytes (de-stuffing).
• Problem:
• What if the character encoding does not use 8-bit characters?
10.3 Bit Stuffing
• Allows character codes with an arbitrary number of bits per character.
• Each frame begins and ends with a special bit pattern.
• Example: 01111110.
• When the sender's data link layer finds five consecutive 1s in the data stream, it stuffs a 0.
• When the receiver sees five 1s followed by a 0, it de-stuffs the 0.
Bit Stuffing: Example
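The stuffing and de-stuffing rules can be sketched directly; this is an illustrative implementation, with a sample bit string chosen to trigger stuffing:

```python
def bit_stuff(bits: str) -> str:
    """Insert a 0 after every run of five consecutive 1s."""
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == "1" else 0
        if run == 5:
            out.append("0")   # stuffed bit
            run = 0
    return "".join(out)

def bit_destuff(bits: str) -> str:
    """Remove the 0 that follows every run of five consecutive 1s."""
    out, run, i = [], 0, 0
    while i < len(bits):
        out.append(bits[i])
        run = run + 1 if bits[i] == "1" else 0
        if run == 5:
            i += 1            # skip the stuffed 0
            run = 0
        i += 1
    return "".join(out)

data = "0111111111"
stuffed = bit_stuff(data)
print(stuffed)                          # → 01111101111
print(bit_destuff(stuffed) == data)     # → True
```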
11.0 Multiple Access Protocols
The data link layer is divided into two sub layers:
The media access control (MAC) layer and
The logical link control (LLC) layer.
The former controls how computers on the network gain access to the medium and obtain
permission to transmit data; the latter controls frame synchronization, flow control, and
error checking.
B. Random Access
channel not divided; collisions are allowed
nodes "recover" from collisions
C. “Taking turns”
tightly coordinate shared access to avoid collisions
A.1 FDMA
channel spectrum divided into frequency bands
each station assigned fixed frequency band
unused transmission time in frequency bands goes idle
example: 6-station LAN, 1,3,4 have pkt, frequency bands 2,5,6 idle
A.2 TDMA
access to channel in "rounds"
each station gets fixed length slot (length = pkt trans time) in each round
unused slots go idle
example: 6-station LAN, 1,3,4 have pkt, slots 2,5,6 Idle
A.3 CDMA
unique "code" assigned to each user; i.e., code set partitioning
used mostly in wireless broadcast channels (cellular, satellite, etc.)
all users share the same frequency, but each user has its own "chipping" sequence (i.e.,
code) to encode data
encoded signal = (original data) × (chipping sequence)
decoding: the inner product of the encoded signal and the chipping sequence allows
multiple users to "coexist" and transmit simultaneously with minimal interference
(if the codes are "orthogonal")
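The encode/decode arithmetic can be illustrated with two stations and 4-chip orthogonal (Walsh) codes; the codes and bit values below are illustrative:

```python
# Orthogonal chipping sequences for two stations (illustrative Walsh codes).
c1 = [+1, +1, +1, +1]
c2 = [+1, -1, +1, -1]

def encode(bit, code):
    """Map the data bit to +1/-1 and multiply by the chipping sequence."""
    d = 1 if bit == 1 else -1
    return [d * c for c in code]

# Both stations transmit simultaneously; the channel adds the signals.
channel = [a + b for a, b in zip(encode(1, c1), encode(0, c2))]

def decode(signal, code):
    """Inner product with a station's code recovers that station's bit."""
    ip = sum(s * c for s, c in zip(signal, code)) / len(code)
    return 1 if ip > 0 else 0

print(decode(channel, c1), decode(channel, c2))  # → 1 0
```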
B.2 Slotted ALOHA
If 2 or more nodes transmit in a slot, all nodes detect the collision
Operation:
When a node obtains a fresh frame, it transmits in the next slot
If there is no collision, the node can send a new frame in the next slot
If there is a collision, the node retransmits the frame in each subsequent slot with
probability p until success
Pros:
Single active node can continuously transmit at full rate of channel
Highly decentralized: only slots in nodes need to be in sync
Simple
Cons:
Collisions, wasting slots
Idle slots
Nodes may be able to detect collision in less than time to transmit packet
Clock synchronization
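The behaviour described above is easy to simulate: a slot succeeds only when exactly one node transmits. The node count, transmit probability, and slot count below are illustrative parameters, not from the text:

```python
import random

def slotted_aloha(n_nodes=10, p=0.2, slots=10000, seed=42):
    """Estimate throughput: a slot is a success iff exactly one node
    transmits in it; zero transmitters is an idle slot, two or more is
    a collision."""
    random.seed(seed)
    successes = 0
    for _ in range(slots):
        transmitters = sum(1 for _ in range(n_nodes) if random.random() < p)
        if transmitters == 1:
            successes += 1
    return successes / slots

# The analytical success probability is n*p*(1-p)^(n-1) ≈ 0.268 here.
print(round(slotted_aloha(), 3))
```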
B.3 CSMA (Carrier Sense Multiple Access)
Introduction:
A station senses the channel before it starts transmission
o If busy, either wait or schedule backoff (different options)
o If idle, start transmission
o Vulnerable period is reduced to tprop (due to channel capture effect)
o When collisions occur they involve entire frame transmission times
o Human analogy: don’t interrupt others!
CSMA Options:
Transmitter behavior when busy channel is sensed
o 1-persistent CSMA (most greedy)
Start transmission as soon as the channel becomes idle
Low delay and low efficiency
o Non-persistent CSMA (least greedy)
If busy, wait a backoff period, then sense carrier again
High delay and high efficiency
o p-persistent CSMA (adjustable greedy)
Wait till channel becomes idle, transmit with prob. p; or
wait one mini-slot time & re-sense with probability 1-p
Delay and efficiency can be balanced
B.4 CSMA/CD (Carrier Sense Multiple Access with Collision Detection)
Introduction:
Monitor for collisions & abort transmission
o Stations with frames to send, first do carrier sensing
o After beginning transmissions, stations continue listening to the medium
to detect collisions
If collisions detected, all stations involved abort transmission, reschedule
random backoff times, and try again at scheduled times - quickly terminating a
damaged frame saves Time & Bandwidth
Binary exponential backoff: after k collisions, a random number between 0 and 2^k
– 1 is chosen, and that number of slots is skipped
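The backoff rule can be sketched in a few lines. The cap of k at 10 and the 51.2 µs slot time follow classic 10-Mbps Ethernet; treat them as illustrative assumptions here:

```python
import random

def backoff_slots(collisions: int, slot_time=51.2e-6) -> float:
    """After k collisions, wait a random number of slots in [0, 2^k - 1].
    k is capped at 10, as in classic Ethernet (illustrative values)."""
    k = min(collisions, 10)
    return random.randint(0, 2 ** k - 1) * slot_time

random.seed(1)
print(backoff_slots(3))   # a delay between 0 and 7 slot times
```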
Assumptions:
Collisions can be detected and resolved in 2tprop
Time slotted in 2tprop slots during contention periods
Assume n busy stations, and each may transmit with probability p in each
contention time slot
Once the contention period is over (a station successfully occupies the channel),
it takes X seconds for a frame to be transmitted
It takes tprop before the next contention period starts.
B.5 CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance)
It is used in wireless environments.
C. “Taking turns”
Channel partitioning MAC protocols:
o share the channel efficiently and fairly at high load
o inefficient at low load: delay in channel access, and only 1/N of the bandwidth
is allocated even if only 1 node is active!
Random access MAC protocols:
o efficient at low load: a single node can fully utilize the channel
o high load: collision overhead
"Taking turns" protocols:
o look for the best of both worlds!
C.1 Polling:
The master node "invites" slave nodes to transmit in turn
Concerns:
o polling overhead
o latency
o single point of failure (master)
C.3 Reservation
Bit Map Protocol can be used for Reservation
A bit-map protocol:
o A contention period has exactly M slots, and station j announces that it has a
frame to send by inserting a 1 into slot j
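The bit-map contention period above can be sketched as follows (the station set and M are illustrative):

```python
def bitmap_contention(has_frame, M):
    """Stations 0..M-1; station j sets slot j to 1 if it has a frame.
    Stations then transmit in numerical order of their reserved slots."""
    slots = [1 if j in has_frame else 0 for j in range(M)]
    order = [j for j, bit in enumerate(slots) if bit == 1]
    return slots, order

slots, order = bitmap_contention({1, 3, 7}, M=8)
print(slots)   # → [0, 1, 0, 1, 0, 0, 0, 1]
print(order)   # → [1, 3, 7]: stations transmit in this order
```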
11.3 Objectives to remember:
(A) Short Question:
1. What is Single-bit Error?
In a single-bit error, only one bit in the data unit has changed.
0 0 1 0 0 0 1 0 changed to 0 0 0 0 0 0 1 0
So, there is a single-bit error in the data unit.
2. What is Burst Error?
Burst error means that two or more bits in the data unit have changed from 1 to 0 or from 0 to 1.
3. What is Length of Burst?
A burst error does not necessarily occur in consecutive bits.
So, we find the length between the first corrupted bit and the last corrupted bit, and this is
called the length of the burst.
Original data : 0 0 1 0 0 0 1 0
Received data : 0 1 1 0 1 1 1 0
                  *-------*
The length of the burst error is 5 bits.
4. What is Redundancy?
Error detection uses the concept of redundancy, which means adding extra bits for detecting errors at
the destination.
5. How is Error Correction Done?
Error correction can be done in two ways:
1) Whenever an error is detected, retransmit the entire data unit.
2) Whenever an error is detected, correct the error using a technique such as Forward Error
Correction or Burst Error Correction.
6. Why do you need error detection?
As the signal is transmitted through a media, the signal gets corrupted because of noise and distortion. In other
words, the media is not reliable. To achieve a reliable communication through this unreliable media, there is
need for detecting the error in the signal so that suitable mechanism can be devised to take corrective actions.
7. Explain different types of Errors?
The errors can be divided into two types: Single-bit error and Burst error.
• Single-bit Error : The term single-bit error means that only one bit of given data unit (such as a byte,
character, or data unit) is changed from 1 to 0 or from 0 to 1.
• Burst Error : The term burst error means that two or more bits in the data unit have changed from 0 to 1 or
vice versa. Note that a burst error does not necessarily mean that the errors occur in consecutive bits.
8. Explain the use of parity check for error detection?
In the Parity Check error detection scheme, a parity bit is added to the end of a block of data. The value of the
bit is selected so that the character has an even number of 1s (even parity) or an odd number of 1s (odd parity).
For odd parity check, the receiver examines the received character and if the total number of 1s is odd, then it
assumes that no error has occurred. If any one bit (or any odd number of bits) is erroneously inverted during
transmission, then the receiver will detect an error.
9. What is forward error correction?
The ability of the receiver to both detect and correct errors is known as forward error correction (FEC).
10. What is backward error correction?
When the receiver detects an error in the data received, it requests back the sender to retransmit the data unit.
11. List the services provided by the Link layer.
Framing.
Link access.
Reliable delivery.
Error detection and correction.
12. Where Is the Link Layer Implemented?
The link layer is implemented in a network adapter, also sometimes known as a network interface card (NIC).
13. List the 2 taking turn protocols.
Polling protocol
Token Passing protocol
(B) Give True or False. Correct False statement with Justification:
1. VRC – Vertical Redundancy Check is an Error Correction Method. (False; VRC is an error detection method.)
2. LRC – Longitudinal Redundancy Check is an Error Correction Method. (False; LRC is an error detection method.)
3. CRC – Cyclic Redundancy Check is an Error Correction Method. (False; CRC is an error detection method.)
4. Checksum is an Error Correction Method. (False; checksum is an error detection method.)
5. Hamming code is an Error Detection Method. (False; Hamming code is an error correction method.)
12.0 Fragmentation and Switching
12.1 Fragmentation
Fragmentation is when a datagram has to be broken up into smaller datagrams to fit the
frame size of a certain network. Different networks have different MTUs (maximum
transfer unit), when a datagram enters a network with a smaller MTU the
gateway/router needs to fragment this packet into smaller packets that fit the new MTU.
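The fragmentation arithmetic can be sketched as follows. This is a sketch of the idea, not a full IP implementation; the 4020-byte datagram with a 20-byte header crossing an MTU-1500 network is a common worked example, and the field names are illustrative:

```python
def fragment(total_len, mtu, header=20):
    """Split a datagram's payload to fit an MTU. IP fragment offsets are
    expressed in 8-byte units, so every fragment's payload except the
    last must be a multiple of 8 bytes."""
    payload = total_len - header
    max_data = (mtu - header) // 8 * 8     # largest multiple of 8 that fits
    frags, offset = [], 0
    while offset < payload:
        size = min(max_data, payload - offset)
        more = offset + size < payload     # MF (more fragments) flag
        frags.append({"offset": offset // 8, "len": size + header, "MF": more})
        offset += size
    return frags

for f in fragment(total_len=4020, mtu=1500):
    print(f)
# → offsets 0, 185, 370; lengths 1500, 1500, 1060; MF True, True, False
```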
12.2 IP (Internet Protocol)
IP is not concerned with the content of the packets but looks for a path to the
destination.
Four supporting protocols
ARP (Address Resolution Protocol)
RARP (Reverse Address Resolution Protocol)
ICMP (Internet Control Message Protocol)
IGMP (Internet Group Management Protocol)
IP is an unreliable and connectionless datagram protocol.
If reliability is important, then IP must be paired with a reliable protocol such as
TCP.
IP transports data in packets called datagrams, each of which is transported
separately.
A datagram is a unit of data.
12.3 Switching:
Whenever we have multiple devices, we have the problem of how to connect
them to make one-on-one communication possible. One solution is to install a
point-to-point connection between each pair of devices (a mesh topology) or
between a central device and every other device (a star topology).
These methods are impractical and wasteful when applied to very large
networks. The number and length of the links require too much infrastructure to
be cost efficient, and the majority of those links would be idle most of the time.
A better solution is switching. A switched network consists of a series of
interlinked nodes, called switches.
Switches are hardware and/or software devices capable of creating temporary
connections between two or more devices linked to the switch but not to each other. In a
switched network, some of the nodes are connected to the communicating
devices. Others are used only for routing.
There are three methods of switching: circuit switching, packet switching, and
message switching.
12.3.1 Circuit Switching: "It creates a direct physical connection between two devices
such as phones or computers."
For example, in the figure, point-to-point communication between the
three computers on the left (A, B, and C) and the four computers on the right (D, E,
F, and G) would require 12 links. Instead, computer A is connected through
switches I, II, and III to computer D. By moving the levers of the switches,
any computer on the left can be connected to any computer on the right.
A circuit switch is a device with n inputs and m outputs that creates a
temporary connection between an input link and an output link. The number
of inputs does not have to match the number of outputs.
• Much of the time a data connection (line) is idle and facilities are wasted.
• Circuit switching is inflexible. Once a circuit has been established, that circuit is
the path taken by all parts of the transmission whether or not it remains the most
efficient or available.
• Circuit switching sees all transmissions as equal.
• Data rate is fixed-Both ends must operate at the same rate
12.3.2 Packet Switching:
Circuit switching was designed for voice.
A better solution for data transmission is packet switching.
Basic Operation:
• Data transmitted in small packets
Typically 1000 octets
Longer messages split into series of packets
Each packet contains a portion of user data plus some control info
• Control info
Routing (addressing) info
• Packets are received, stored briefly (buffered), and passed on to the next node
Store and forward
Advantages
• Line efficiency
Single node to node link can be shared by many packets over time
Packets queued and transmitted as fast as possible
• Data rate conversion
Each station connects to the local node at its own speed
Nodes buffer data if required to equalize rates
• Packets are accepted even when network is busy
Delivery may slow down
• Priorities can be used
There are two popular approaches to packet switching:
(i) Datagram approach (Connectionless service)
(ii) Virtual circuit approach. (Connection oriented)
(i) Datagram
Every packet contains a complete destination address.
Switch contains a routing table
Two successive packets may follow different paths.
Each switch processes packets independently.
(ii) Virtual Circuit Switching
Based on the connection oriented model
Virtual connection has to be established first
Two stage process:- connection setup, data transfer
Two types of virtual circuits
o Permanent virtual circuits (PVC)
o Switched virtual circuit (SVC)
12.3.3 Message Switching
• A store-and-forward network where the block of transfer is a complete message.
• Since messages can be quite large, this can cause:
– buffering problems
– high mean delay times
12.3.4 Virtual circuit and Datagram network Comparison
[Figure: datagram network vs. virtual-circuit network]
13.0 Routing Algorithm
13.1 Routing:
The routing algorithm is that part of the network layer software responsible for
deciding which output line an incoming packet should be transmitted on.
Routing algorithms can be grouped into two major classes: nonadaptive and adaptive.
Nonadaptive algorithms do not base their routing decisions on measurements or
estimates of the current traffic and topology. Instead, the choice of the route to use to
get from I to J (for all I and J) is computed in advance, off-line, and downloaded to the
routers when the network is booted. This procedure is sometimes called static routing.
Adaptive algorithms, in contrast, change their routing decisions to reflect changes in the
topology, and usually the traffic as well. Adaptive algorithms differ in where they get
their information (e.g., locally, from adjacent routers, or from all routers), when they
change the routes (e.g., every T sec, when the load changes or when the topology
changes), and what metric is used for optimization (e.g., distance, number of hops, or
estimated transit time). In the following sections we will discuss a variety of routing
algorithms, both static and dynamic.
Modern computer networks generally use dynamic routing algorithms rather than the
static ones described above because static algorithms do not take the current network
load into account. Two dynamic algorithms in particular, distance vector routing and
link state routing, are the most popular. In this section we will look at the former
algorithm. In the following section we will study the latter algorithm.
Distance vector routing algorithms operate by having each router maintain a table (i.e, a
vector) giving the best known distance to each destination and which line to use to get
there. These tables are updated by exchanging information with the neighbors.
The distance vector routing algorithm is sometimes called by other names, most
commonly the distributed Bellman-Ford routing algorithm and the Ford-Fulkerson
algorithm, after the researchers who developed it (Bellman, 1957; and Ford and
Fulkerson, 1962). It was the original ARPANET routing algorithm and was also used in
the Internet under the name RIP.
In distance vector routing, each router maintains a routing table indexed by, and
containing one entry for, each router in the subnet. This entry contains two parts: the
preferred outgoing line to use for that destination and an estimate of the time or
distance to that destination. The metric used might be number of hops, time delay in
milliseconds, total number of packets queued along the path, or something similar.
The router is assumed to know the ''distance'' to each of its neighbors. If the metric is
hops, the distance is just one hop. If the metric is queue length, the router simply
examines each queue. If the metric is delay, the router can measure it directly with
special ECHO packets that the receiver just timestamps and sends back as fast as it can.
As an example, assume that delay is used as a metric and that the router knows the
delay to each of its neighbors. Once every T msec each router sends to each neighbor a
list of its estimated delays to each destination. It also receives a similar list from each
neighbor. Imagine that one of these tables has just come in from neighbor X, with Xi
being X's estimate of how long it takes to get to router i. If the router knows that the
delay to X is m msec, it also knows that it can reach router i via X in Xi + m msec. By
performing this calculation for each neighbor, a router can find out which estimate
seems the best and use that estimate and the corresponding line in its new routing table.
Note that the old routing table is not used in the calculation.
This updating process is illustrated in Fig. 13-1. Part (a) shows a subnet. The first four
columns of part (b) show the delay vectors received from the neighbors of router J. A
claims to have a 12-msec delay to B, a 25-msec delay to C, a 40-msec delay to D, etc.
Suppose that J has measured or estimated its delay to its neighbors, A, I, H, and K as 8,
10, 12, and 6 msec, respectively.
Figure 13-1. (a) A subnet. (b) Input from A, I, H, K, and the new routing table for J.
Consider how J computes its new route to router G. It knows that it can get to A in 8
msec, and A claims to be able to get to G in 18 msec, so J knows it can count on a delay
of 26 msec to G if it forwards packets bound for G to A. Similarly, it computes the delay
to G via I, H, and K as 41 (31 + 10), 18 (6 + 12), and 37 (31 + 6) msec, respectively. The
best of these values is 18, so it makes an entry in its routing table that the delay to G is
18 msec and that the route to use is via H. The same calculation is performed for all the
other destinations, with the new routing table shown in the last column of the figure.
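The per-destination calculation can be sketched as a single Bellman-Ford step, using the numbers from the example above (only destination G is shown; the dictionary layout is illustrative):

```python
# Delay vectors received by J from its neighbors (values from the example;
# only destination G is shown for brevity).
claims = {"A": 18, "I": 31, "H": 6, "K": 31}   # each neighbor's delay to G
link   = {"A": 8,  "I": 10, "H": 12, "K": 6}   # J's measured delay to each

# Bellman-Ford step: delay via neighbor X = link(J, X) + X's claimed delay.
via = {x: link[x] + claims[x] for x in claims}
best = min(via, key=via.get)
print(via)               # → {'A': 26, 'I': 41, 'H': 18, 'K': 37}
print(best, via[best])   # → H 18: J routes traffic for G via H
```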
Distance vector routing works in theory but has a serious drawback in practice:
although it converges to the correct answer, it may do so slowly. In particular, it reacts
rapidly to good news, but leisurely to bad news. Consider a router whose best route to
destination X is large. If on the next exchange neighbor A suddenly reports a short
delay to X, the router just switches over to using the line to A to send traffic to X. In one
vector exchange, the good news is processed.
To see how fast good news propagates, consider the five-node (linear) subnet of Fig. 13-
2, where the delay metric is the number of hops. Suppose A is down initially and all the
other routers know this. In other words, they have all recorded the delay to A as
infinity.
When A comes up, the other routers learn about it via the vector exchanges. For
simplicity we will assume that there is a gigantic gong somewhere that is struck
periodically to initiate a vector exchange at all routers simultaneously. At the time of
the first exchange, B learns that its left neighbor has zero delay to A. B now makes an
entry in its routing table that A is one hop away to the left. All the other routers still
think that A is down. At this point, the routing table entries for A are as shown in the
second row of Fig. 13-3(a). On the next exchange, C learns that B has a path of length 1
to A, so it updates its routing table to indicate a path of length 2, but D and E do not
hear the good news until later. Clearly, the good news is spreading at the rate of one
hop per exchange. In a subnet whose longest path is of length N hops, within N
exchanges everyone will know about newly-revived lines and routers.
Now let us consider the situation of Fig. 13-3(b), in which all the lines and routers are
initially up. Routers B, C, D, and E have distances to A of 1, 2, 3, and 4, respectively.
Suddenly A goes down, or alternatively, the line between A and B is cut, which is
effectively the same thing from B's point of view.
At the first packet exchange, B does not hear anything from A. Fortunately, C says: Do
not worry; I have a path to A of length 2. Little does B know that C's path runs through
B itself. For all B knows, C might have ten lines all with separate paths to A of length 2.
As a result, B thinks it can reach A via C, with a path length of 3. D and E do not update
their entries for A on the first exchange.
On the second exchange, C notices that each of its neighbors claims to have a path to A
of length 3. It picks one of them at random and makes its new distance to A 4, as
shown in the third row of Fig. 13-3(b). Subsequent exchanges produce the history
shown in the rest of Fig. 13-3(b).
From this figure, it should be clear why bad news travels slowly: no router ever has a
value more than one higher than the minimum of all its neighbors. Gradually, all
routers work their way up to infinity, but the number of exchanges required depends
on the numerical value used for infinity. For this reason, it is wise to set infinity to the
longest path plus 1. If the metric is time delay, there is no well-defined upper bound, so
a high value is needed to prevent a path with a long delay from being treated as down.
Not entirely surprisingly, this problem is known as the count-to-infinity problem. There
have been a few attempts to solve it (such as split horizon with poisoned reverse in RFC
1058), but none of these work well in general. The core of the problem is that when X
tells Y that it has a path somewhere, Y has no way of knowing whether it itself is on the
path.
Distance vector routing was used in the ARPANET until 1979, when it was replaced by
link state routing. Two primary problems caused its demise. First, since the delay metric
was queue length, it did not take line bandwidth into account when choosing routes.
Initially, all the lines were 56 kbps, so line bandwidth was not an issue, but after some
lines had been upgraded to 230 kbps and others to 1.544 Mbps, not taking bandwidth
into account was a major problem. Of course, it would have been possible to change the
delay metric to factor in line bandwidth, but a second problem also existed, namely, the
algorithm often took too long to converge (the count-to-infinity problem). For these
reasons, it was replaced by an entirely new algorithm, now called link state routing.
Variants of link state routing are now widely used.
Prepared By: Prof. Ajay N. Upadhyaya, Asst. Prof. CE Dept, LJIET Page 89
Figure 13-3. a) Router b) Routing Information
The idea behind link state routing is simple and can be stated as five steps. Each router
must do the following:
1. Discover its neighbors and learn their network addresses.
2. Measure the delay or cost to each of its neighbors.
3. Construct a packet telling all it has just learned.
4. Send this packet to all other routers.
5. Compute the shortest path to every other router.
In effect, the complete topology and all delays are experimentally measured and
distributed to every router. Then Dijkstra's algorithm can be run to find the shortest
path to every other router.
Link-state routing protocols were developed to alleviate the convergence and loop
issues of distance-vector protocols. Link-state protocols maintain three separate tables:
Neighbor table – contains a list of all neighbors and the interface through which each
neighbor is reached. Neighbor relationships are formed by exchanging Hello packets.
Topology table – otherwise known as the "link-state" table, contains a map of all
links within an area, including each link's status.
Shortest-Path table – contains the best routes to each particular destination
(otherwise known as the "routing" table).
13.4 Difference between Distance Vector Routing and Link State Routing
Let us begin our study of feasible routing algorithms with a technique that is widely
used in many forms because it is simple and easy to understand. The idea is to build a
graph of the subnet, with each node of the graph representing a router and each arc of
the graph representing a communication line (often called a link). To choose a route
between a given pair of routers, the algorithm just finds the shortest path between them
on the graph.
The concept of a shortest path deserves some explanation. One way of measuring path
length is the number of hops. Using this metric, the paths ABC and ABE in Fig. 13-5 are
equally long. Another metric is the geographic distance in kilometers, in which case
ABC is clearly much longer than ABE (assuming the figure is drawn to scale).
However, many other metrics besides hops and physical distance are also possible. For
example, each arc could be labelled with the mean queueing and transmission delay for
some standard test packet as determined by hourly test runs. With this graph labeling,
the shortest path is the fastest path rather than the path with the fewest arcs or
kilometers.
In the general case, the labels on the arcs could be computed as a function of the
distance, bandwidth, average traffic, communication cost, mean queue length,
measured delay, and other factors. By changing the weighting function, the algorithm
would then compute the ''shortest'' path measured according to any one of a number of
criteria or to a combination of criteria.
Figure 13-5. The first five steps used in computing the shortest path from A to D. The
arrows indicate the working node.
Several algorithms for computing the shortest path between two nodes of a graph are
known. This one is due to Dijkstra (1959). Each node is labeled (in parentheses) with its
distance from the source node along the best known path. Initially, no paths are known,
so all nodes are labeled with infinity. As the algorithm proceeds and paths are found,
the labels may change, reflecting better paths. A label may be either tentative or
permanent. Initially, all labels are tentative. When it is discovered that a label represents
the shortest possible path from the source to that node, it is made permanent and never
changed thereafter.
To illustrate how the labeling algorithm works, look at the weighted, undirected graph
of Fig. 13-5(a), where the weights represent, for example, distance. We want to find the
shortest path from A to D. We start out by marking node A as permanent, indicated by
a filled-in circle. Then we examine, in turn, each of the nodes adjacent to A (the working
node), relabeling each one with the distance to A. Whenever a node is relabeled, we also
label it with the node from which the probe was made so that we can reconstruct the
final path later. Having examined each of the nodes adjacent to A, we examine all the
tentatively labeled nodes in the whole graph and make the one with the smallest label
permanent, as shown in Fig. 13-5(b). This one becomes the new working node.
We now start at B and examine all nodes adjacent to it. If the sum of the label on B and
the distance from B to the node being considered is less than the label on that node, we
have a shorter path, so the node is relabeled.
After all the nodes adjacent to the working node have been inspected and the tentative
labels changed if possible, the entire graph is searched for the tentatively-labeled node
with the smallest value. This node is made permanent and becomes the working node
for the next round. Figure 13-5 shows the first five steps of the algorithm.
To see why the algorithm works, look at Fig. 13-5(c). At that point we have just made E
permanent. Suppose that there were a shorter path than ABE, say AXYZE. There are
two possibilities: either node Z has already been made permanent, or it has not been. If
it has, then E has already been probed (on the round following the one when Z was
made permanent), so the AXYZE path has not escaped our attention and thus cannot be
a shorter path.
Now consider the case where Z is still tentatively labeled. Either the label at Z is greater
than or equal to that at E, in which case AXYZE cannot be a shorter path than ABE, or it
is less than that of E, in which case Z and not E will become permanent first, allowing E
to be probed from Z.
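The labeling procedure described above can be sketched in Python. The graph below is hypothetical (the actual weights of Fig. 13-5 are not reproduced here); a priority queue is used to pick the tentatively labeled node with the smallest value each round.

```python
import heapq

def dijkstra(graph, source, target):
    # Every node starts with a tentative label of infinity (no known path).
    dist = {node: float('inf') for node in graph}
    prev = {}                      # records the node each probe came from
    dist[source] = 0
    pq = [(0, source)]
    permanent = set()
    while pq:
        d, node = heapq.heappop(pq)
        if node in permanent:
            continue
        permanent.add(node)        # smallest tentative label becomes permanent
        if node == target:
            break
        for neighbor, weight in graph[node].items():
            if d + weight < dist[neighbor]:   # found a shorter tentative path
                dist[neighbor] = d + weight
                prev[neighbor] = node
                heapq.heappush(pq, (dist[neighbor], neighbor))
    # Reconstruct the path by walking the predecessor labels backwards.
    path, node = [], target
    while node != source:
        path.append(node)
        node = prev[node]
    path.append(source)
    return list(reversed(path)), dist[target]

# Hypothetical weighted, undirected graph in the spirit of Fig. 13-5.
graph = {
    'A': {'B': 2, 'G': 6},
    'B': {'A': 2, 'C': 7, 'E': 2},
    'C': {'B': 7, 'D': 3, 'F': 3},
    'D': {'C': 3, 'H': 2},
    'E': {'B': 2, 'F': 2, 'G': 1},
    'F': {'C': 3, 'E': 2, 'H': 2},
    'G': {'A': 6, 'E': 1, 'H': 4},
    'H': {'D': 2, 'F': 2, 'G': 4},
}
path, cost = dijkstra(graph, 'A', 'D')
```

With these example weights the algorithm finds the path A-B-E-F-H-D at cost 10, never revisiting a node once its label is permanent.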
13.4 Delivery semantics
Routing schemes differ in their delivery semantics:
Unicast delivers a message to a single specific node. Unicast is the dominant
form of message delivery on the Internet.
Anycast delivers a message to any one out of a group of nodes, typically the one
nearest to the source.
Multicast delivers a message to a group of nodes that have expressed interest in
receiving the message
o In computer networking, multicast (one-to-many or many-to-many
distribution) is group communication where information is addressed to a
group of destination computers simultaneously. Multicast should not be
confused with physical layer point-to-multipoint communication.
o Group communication may either be application layer multicast or
network assisted multicast, where the latter makes it possible for the
source to efficiently send to the group in a single transmission. Copies are
automatically created in other network elements, such as routers, switches
and cellular network base stations, but only to network segments that
currently contain members of the group.
Geocast delivers a message to a geographic area
Broadcast delivers a message to all nodes in the network
o In telecommunication and information theory, broadcasting refers to a
method of transferring a message to all recipients simultaneously.
Broadcasting can be performed as a high level operation in a program, for
example broadcasting Message Passing Interface, or it may be a low level
networking operation, for example broadcasting on Ethernet.
13.5 Comparison of Hub, Switch & Router.
Figure: symbolic notation of a hub, a switch and a router.

Latest models:
Hub - Netgear EN104, Cisco 1358 Micro Hub, Cisco 1538 Series, Linksys NMH405
Switch - Alcatel OmniSwitch 9000, Cisco Catalyst 4500 and 6500 (10 Gbps), 3Com 7700, 7900E, 8800
Router - Linksys WRT54GL, Juniper MX and EX series, Cisco 3900, 2900, 1900

Features:
Hub - Broadcasts at all times; regenerates and transmits the data
Switch - Port range on/off; priority setting of ports; VLAN; port mirroring
Router - Firewall; VPN; dynamic handling of bandwidth

Speed:
Hub - -
Switch - In a LAN environment, an L3 switch is faster than a router (built-in switching hardware)
Router - In a different network environment (MAN/WAN), a router is faster than an L3 switch

NAT (Network Address Translation):
Hub - -
Switch - -
Router - Can perform NAT

Routing decision:
Hub - -
Switch - Takes faster routing decisions
Router - Takes more time for complicated routing decisions
Objectives to Remember
(A) Fill in the Blanks:
1. Computer networks that provide only a connection service at the network layer are called _______
while computer networks that provide only a connectionless service at the network layer are
called_________. (virtual-circuit (VC) networks, datagram networks )
2. The forwarding functions implemented by a router’s input ports, output ports, and switching
fabric are also known as _________. (router forwarding plane)
3. Count-to-Infinity problem occurs in ________. (distance vector routing)
14.2 IP Datagram
An IP packet is called a datagram.
The maximum size of an IP datagram is 65,535 bytes.
An IP datagram contains two parts: Header and Data.
The size of the header varies between 20 and 60 bytes.
The header contains the information regarding routing and delivery.
Description of each field of IP Datagram:
VER (4 BITS)
o The version field is set to the value '4' in decimal or '0100' in binary.
o The value indicates the version of IP (4 or 6, there is no version 5).
HLEN (4 BITS)
o Defines the length of the header.
o The length is expressed in multiples of four bytes.
o The four bits can represent a number between 0 and 15, which, multiplied by 4,
gives a maximum header length of 60 bytes.
Service types (8 Bits)
o Defines how the datagram should be handled.
o Defines the priority.
TOTAL LENGTH (16 BITS)
o This informs the receiver of the datagram where the end of the data in this
datagram is.
o This is why an IP datagram can be up to 65,535 bytes long, as that is the
maximum value of this 16-bit field.
IDENTIFICATION (16 bits)
o It is used for Fragmentation.
o Sometimes, a device in the middle of the network path cannot handle the
datagram at the size it was originally transmitted, and must break it into
fragments.
o If an intermediate system needs to break up the datagram, it uses this field
to aid in identifying the fragments.
FLAGS (3 BITS)
o The flags field contains single-bit flags that indicate whether the datagram
is a fragment, whether it is permitted to be fragmented, and whether the
datagram is the last fragment, or there are more fragments.
o The first bit in this field is always zero.
FRAGMENT OFFSET (13 BITS)
o When a datagram is fragmented, it is necessary to reassemble the
fragments in the correct order.
o The fragment offset numbers the fragments in such a way that they can be
reassembled correctly.
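Because the offset field counts 8-byte units, fragment boundaries must fall on 8-byte multiples. A minimal sketch of how an intermediate router might size fragments, assuming a hypothetical 1500-byte MTU and the standard 20-byte header:

```python
def fragment_offsets(payload_len: int, mtu: int, header_len: int = 20):
    # Each fragment carries at most (mtu - header_len) bytes of data,
    # rounded down to a multiple of 8 so offsets stay representable.
    max_data = (mtu - header_len) // 8 * 8
    fragments = []
    offset = 0
    while offset < payload_len:
        size = min(max_data, payload_len - offset)
        more = (offset + size) < payload_len   # MF flag: more fragments follow
        fragments.append({'offset_units': offset // 8,
                          'data_len': size,
                          'mf': more})
        offset += size
    return fragments

# 3980 bytes of data (a 4000-byte datagram minus its 20-byte header) over MTU 1500:
frags = fragment_offsets(3980, 1500)
```

This yields three fragments whose offsets, in 8-byte units, are 0, 185 and 370, with the More Fragments flag cleared only on the last one.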
TIME TO LIVE (8 BITS)
o This field determines how long a datagram will exist.
o At each hop along a network path, the datagram is opened and its time
to live field is decremented by one (or more than one in some cases).
o When the time to live field reaches zero, the datagram is said to have
'expired' and is discarded.
o This prevents the congestion that is created on the network when a datagram
cannot be forwarded to its destination.
o Most applications set the time to live field to 30 or 32 by default.
PROTOCOL (8 BITS)
o This indicates what type of protocol is encapsulated within the IP
datagram. Some of the common values seen in this field include:
Protocol   Number (Decimal)
ICMP       1
IGMP       2
TCP        6
UDP        17
HEADER CHECKSUM (16 BITS)
o The checksum allows IP to detect datagrams with corrupted headers and
discard them.
o Since the time to live field changes at each hop, the checksum must be re-
calculated at each hop.
SOURCE ADDRESS (32 BITS)
This is the IP address of the sender of the IP datagram.
DESTINATION ADDRESS (32 BITS)
This is the IP address of the intended receiver(s) of the datagram. If the host
portion of this address is set to all 1's, the datagram is an 'all hosts' broadcast.
OPTIONS & PADDING (VARIABLE)
o Various options can be included in the header by a particular vendor's
implementation of IP.
o If options are included, the header must be padded with zeroes to fill in any
unused octets so that the header is a multiple of 32 bits, and matches the count of
bytes in the Header Length (HLEN) field.
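The field layout described above can be illustrated by unpacking the fixed 20-byte portion of a header. The datagram bytes below are hypothetical, constructed only to show where each field sits:

```python
import struct

def parse_ipv4_header(raw: bytes) -> dict:
    # Unpack the fixed 20-byte portion of the header (network byte order).
    ver_hlen, tos, total_len, ident, flags_frag, ttl, proto, checksum = \
        struct.unpack('!BBHHHBBH', raw[:12])
    src, dst = raw[12:16], raw[16:20]
    return {
        'version': ver_hlen >> 4,                # VER: upper 4 bits
        'hlen_bytes': (ver_hlen & 0x0F) * 4,     # HLEN is in 4-byte words
        'total_length': total_len,
        'identification': ident,
        'flags': flags_frag >> 13,               # upper 3 bits
        'fragment_offset': flags_frag & 0x1FFF,  # lower 13 bits
        'ttl': ttl,
        'protocol': proto,                       # e.g. 6 = TCP, 17 = UDP
        'src': '.'.join(str(b) for b in src),
        'dst': '.'.join(str(b) for b in dst),
    }

# Hypothetical header: version 4, HLEN 5 (20 bytes), total length 40,
# TTL 64, protocol TCP, 192.168.0.1 -> 10.0.0.5, no fragmentation.
hdr = struct.pack('!BBHHHBBH4s4s', (4 << 4) | 5, 0, 40, 1, 0, 64, 6, 0,
                 bytes([192, 168, 0, 1]), bytes([10, 0, 0, 5]))
fields = parse_ipv4_header(hdr)
```

Note how VER and HLEN share one byte, and the flags share a 16-bit word with the 13-bit fragment offset, exactly as the bit widths above require.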
14.3 IP Addressing
14.3.1 Introduction
An IP address is used to uniquely identify a node in the network.
IPV4 version contains 32 bit address.
Divided into main two parts : Net ID and Host ID
Dotted decimal notation is used to represent an IPv4 address.
14.4.1 ARP [Address Resolution Protocol]
ARP associates an IP Address with the Physical Address. [IP to MAC
Binding]
ARP is used to find the Physical Address of the node when its internet
address is known.
Anytime a host needs to find the physical address of another host on its
network, it sends an ARP query packet that includes the IP address and
broadcasts it over the network.
Every host in the network receives the ARP packet, but only the host matching
the IP address replies with its own physical address.
ARP - is a low-level protocol used to bind addresses dynamically.
ARP allows a host to find the physical address of a target host on the same
physical network, given only its IP address.
ARP broadcasts special packets with the destination’s IP address to ALL
hosts.
The destination host (only) will respond with its physical address.
When the response is received, the sender uses the physical address of the
destination host to send all packets.
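The broadcast/reply exchange above can be illustrated with a toy simulation. All addresses here are hypothetical, and the "network" is just a dictionary standing in for the hosts on one physical segment:

```python
# Toy ARP simulation: every host sees the broadcast query, but only the
# host owning the requested IP address replies with its MAC address.
hosts = {  # hypothetical IP -> MAC bindings on one physical network
    '192.168.1.10': 'aa:bb:cc:00:00:10',
    '192.168.1.20': 'aa:bb:cc:00:00:20',
    '192.168.1.30': 'aa:bb:cc:00:00:30',
}

def arp_resolve(target_ip, cache):
    if target_ip in cache:            # answer from the local ARP cache
        return cache[target_ip]
    for ip, mac in hosts.items():     # broadcast: every host receives the query
        if ip == target_ip:           # only the matching host replies
            cache[target_ip] = mac    # sender caches the binding for next time
            return mac
    return None                       # no host on this network owns that IP

cache = {}
mac = arp_resolve('192.168.1.20', cache)
```

The cache means a second lookup for the same IP address avoids a new broadcast, which is how real ARP implementations keep broadcast traffic down.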
ARP packet
Encapsulation of ARP packet
Encapsulation of RARP packet
14.4.4 IGMP [Internet Group Management Protocol]
The IP protocol supports three types of communication: unicasting, multicasting and
broadcasting.
Unicasting is communication between one sender and one receiver; it is one-to-one
communication.
Broadcasting is a type of communication in which one sender sends a message to all nodes
available in the network.
A mechanism is also needed to send the same message to a large number of receivers
simultaneously; this is called multicasting.
Ex:-Multiple Stock brokers can simultaneously be informed of changes in price, Video on
demand.
IP addressing supports multicast addresses.
All multicast (Class D) addresses start with the bit pattern 1110; the remaining 28 bits identify the group.
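Since the first four bits of a Class D address are 1110, the first octet of any multicast address lies between 224 and 239. A quick check of that rule:

```python
def is_multicast(ip: str) -> bool:
    # Class D addresses start with the bit pattern 1110, i.e. the first
    # octet lies in 224..239; the remaining 28 bits identify the group.
    first_octet = int(ip.split('.')[0])
    return 224 <= first_octet <= 239

checks = [is_multicast('224.0.0.1'),        # well-known "all hosts" group
          is_multicast('239.255.255.255'),  # top of the Class D range
          is_multicast('192.168.1.1')]      # ordinary unicast address
```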
14.5 IPV6 address
It’s important to understand that IPv6 is much more than an extension of IPv4
addressing. IPv6, first defined in RFC 2460, is a complete implementation of the
network layer of the TCP/IP protocol stack and it covers a lot more than simple address
space extension from 32 to 128 bits (the mechanism that increases IPv6’s ability to
allocate almost unlimited addresses to all the devices in the world for years to come).
IPv6 offers many improvements over IPv4, and Table 1 compares IPv4 and IPv6
operation at a glance.
More efficient routing: IPv6 routers no longer have to fragment packets, an
overhead-intensive process that just slows a network down.
Quality of service (QoS) built in: IPv4 has no way to distinguish delay-sensitive
packets from bulk data transfers, requiring extensive workarounds, but IPv6 does.
Elimination of NAT to extend address spaces: IPv6 increases the IPv4 address size
from 32 bits (about 4 billion addresses) to 128 bits (enough for every molecule in
the solar system).
Network layer security built in (IPsec): security, always a challenge in IPv4, is an
integral part of IPv6.
Stateless address autoconfiguration for easier network administration: many IPv4
installs were complicated by manual default router and address assignment. IPv6
handles this in an automated fashion.
Improved header structure with less processing overhead: many of the fields in the
IPv4 header were optional and used infrequently. IPv6 eliminates these fields
(options are handled differently).
IPv4 vs. IPv6 at a glance:

Address size: IPv4 uses 32-bit (4-byte) addresses, supporting 4,294,967,296 addresses
(although many were lost to special purposes, like 10.0.0.0 and 127.0.0.0). IPv6 uses
128-bit (16-byte) addresses, supporting 2^128 (about 3.4 x 10^38) addresses.

Address shortages: IPv4 supports 4.3 x 10^9 (4.3 billion) addresses, which is
inadequate to give one (or more, if they possess more than one device) to every
living person. IPv6 offers a larger address space: 3.4 x 10^38 addresses, or
5 x 10^28 (50 octillion) for each of the roughly 6.5 billion people alive today.

NAT: IPv4 can use NAT to extend its address limitations; IPv6 has no NAT
support (by design).

Address assignment: IPv4 addresses are assigned to hosts by DHCP or static
configuration; IPv6 addresses are self-assigned by hosts with stateless address
autoconfiguration, or assigned by DHCPv6.

IPSec: support is optional in IPv4; required in IPv6.

Options: integrated into the IPv4 header fields; supported with extension headers
in IPv6 (simpler base header format).

Broadcast: IPv4 has broadcast addresses that reach all devices; IPv6 has no such
concept (it uses multicast groups).

Loopback address: 127.0.0.1 in IPv4; ::1 in IPv6.

Classes: IPv4 is subdivided into classes A-E; IPv6 is classless and uses a prefix
plus an interface identifier.

Header size: the IPv4 header has 20 bytes; the IPv6 header is double that, at 40
bytes (fixed).

Header fields: the IPv4 header has many fields (13); the IPv6 header has fewer
fields (8).

Masking: IPv4 uses a subnet mask; IPv6 uses a prefix length.

Security: IPv4 was never designed to be secure - it was originally designed for an
isolated military network, then adapted for a public educational and research
network. IPv6 has strong built-in security (encryption and authentication).
IPv6 packets have their own frame Ethertype value, 0x86dd, making it easy for
receivers that must handle both IPv4 and IPv6 to distinguish the frame content on the
same interface. The IPv6 header consists of the following fields:
Payload Length: A 16-bit field giving the length of the packet in bytes, excluding
the IPv6 header.
Next Header: An 8-bit field giving the type of header immediately following the
IPv6 header (this serves the same function as the Protocol field in IPv4).
Hop Limit: An 8-bit field set by the source host and decremented by 1 at each
router. Packets are discarded if Hop Limit is decremented to zero (this replaces
the IPv4 Time To Live field). Generally, implementers choose the default to use,
but values such as 64 or 128 are common.
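The three fields above sit in the first 8 bytes of the fixed 40-byte header, after the 4-bit Version, 8-bit Traffic Class and 20-bit Flow Label. A sketch with a hypothetical header (the 32 bytes of source and destination addresses are zero-filled here just to pad the header to 40 bytes):

```python
import struct

def parse_ipv6_fixed_header(raw: bytes) -> dict:
    # First 4 bytes pack Version (4 bits), Traffic Class (8) and Flow Label (20);
    # then come Payload Length (16), Next Header (8) and Hop Limit (8).
    first_word, payload_len, next_header, hop_limit = struct.unpack('!IHBB', raw[:8])
    return {
        'version': first_word >> 28,
        'payload_length': payload_len,  # bytes following the 40-byte header
        'next_header': next_header,     # e.g. 6 = TCP, 17 = UDP
        'hop_limit': hop_limit,         # decremented by 1 at each router
    }

# Hypothetical header: version 6, 1024-byte payload, TCP, hop limit 64.
hdr = struct.pack('!IHBB', 6 << 28, 1024, 6, 64) + bytes(32)
fields = parse_ipv6_fixed_header(hdr)
```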
15.0 Subnetting:
15.1 Subnetting:
Subnetting is a process to divide a network into sub-networks.
Host ID bits are borrowed and used in the Net ID.
A Network with Two Levels of Hierarchy
Masking: Masking is a process that extracts the address of the physical network
from an IP address.
15.2 Classful and Classless IP
15.4 Subnetting example for Class C
The default subnet mask of class C is 255.255.255.0. The CIDR notation of class C is /24,
which means 24 bits of the IP address are already consumed by the network portion and
we have 8 host bits to work with. When we turn host bits into subnet bits, we must do so
contiguously from left to right; we cannot skip a bit. So Class C subnet masks can only
be the following:
As discussed earlier, we must keep at least 2 host bits for assigning IP addresses to
hosts, which means we cannot use /31 and /32 for subnetting.
/25
CIDR /25 has subnet mask 255.255.255.128, and 128 is 10000000 in binary. We have used
one host bit in the network address.
/26
CIDR /26 has subnet mask 255.255.255.192, and 192 is 11000000 in binary. We have used
two host bits in the network address.
N = 2
H = 6
Total subnets (2^N): 2^2 = 4
Block size (256 - subnet mask): 256 - 192 = 64
Valid subnets (count blocks from 0): 0, 64, 128, 192
Total hosts (2^H): 2^6 = 64
Valid hosts per subnet (total hosts - 2): 64 - 2 = 62
/27
CIDR /27 has subnet mask 255.255.255.224, and 224 is 11100000 in binary. We have used
three host bits in the network address.
N = 3
H = 5
Total subnets (2^N): 2^3 = 8
Block size (256 - subnet mask): 256 - 224 = 32
Valid subnets (count blocks from 0): 0, 32, 64, 96, 128, 160, 192, 224
Total hosts (2^H): 2^5 = 32
Valid hosts per subnet (total hosts - 2): 32 - 2 = 30
/28
CIDR /28 has subnet mask 255.255.255.240, and 240 is 11110000 in binary. We have used
four host bits in the network address.
N = 4
H = 4
Total subnets (2^N): 2^4 = 16
Block size (256 - subnet mask): 256 - 240 = 16
Valid subnets (count blocks from 0):
0, 16, 32, 48, 64, 80, 96, 112, 128, 144, 160, 176, 192, 208, 224, 240
Total hosts (2^H): 2^4 = 16
Valid hosts per subnet (total hosts - 2): 16 - 2 = 14
/29
CIDR /29 has subnet mask 255.255.255.248, and 248 is 11111000 in binary. We have used
five host bits in the network address.
N = 5
H = 3
Total subnets (2^N): 2^5 = 32
Block size (256 - subnet mask): 256 - 248 = 8
Valid subnets (count blocks from 0):
0, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, 120, 128, 136, 144, 152,
160, 168, 176, 184, 192, 200, 208, 216, 224, 232, 240, 248
Total hosts (2^H): 2^3 = 8
Valid hosts per subnet (total hosts - 2): 8 - 2 = 6
/30
CIDR /30 has subnet mask 255.255.255.252, and 252 is 11111100 in binary. We have used
six host bits in the network address.
N = 6
H = 2
Total subnets (2^N): 2^6 = 64
Block size (256 - subnet mask): 256 - 252 = 4
Valid subnets (count blocks from 0):
0, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60, 64, 68, 72, 76, 80, 84,
88, 92, 96, 100, 104, 108, 112, 116, 120, 124, 128, 132, 136, 140, 144, 148, 152, 156,
160, 164, 168, 172, 176, 180, 184, 188, 192, 196, 200, 204, 208, 212, 216, 220, 224,
228, 232, 236, 240, 244, 248, 252
Total hosts (2^H): 2^2 = 4
Valid hosts per subnet (total hosts - 2): 4 - 2 = 2
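The per-prefix arithmetic above follows one pattern for every Class C mask. A sketch that reproduces it for any prefix from /25 to /30:

```python
def class_c_subnet_summary(prefix: int) -> dict:
    # N subnet bits are borrowed from the 8 host bits of a /24 network.
    n = prefix - 24
    h = 8 - n
    mask_last_octet = 256 - 2 ** h          # e.g. /26 -> 192
    block_size = 256 - mask_last_octet      # e.g. /26 -> 64
    return {
        'mask': f'255.255.255.{mask_last_octet}',
        'subnets': 2 ** n,                  # total subnets = 2^N
        'block_size': block_size,
        'valid_subnets': list(range(0, 256, block_size)),
        'hosts_per_subnet': 2 ** h - 2,     # network and broadcast excluded
    }

summary = class_c_subnet_summary(26)
```

For /26 this reproduces the figures above: mask 255.255.255.192, 4 subnets, block size 64, valid subnets 0, 64, 128, 192, and 62 usable hosts per subnet.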
1. In a block of addresses, we know the IP address of one host is 182.44.82.16/26. What
are the first address (network address) and the last address (directed broadcast
address) in this block?
Answer:
182.44.82.0 is the subnet (network) address.
182.44.82.63 is the broadcast address.
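Python's standard ipaddress module can confirm the answer: a /26 has a block size of 64, so host 182.44.82.16 falls in the block 0-63.

```python
import ipaddress

# The host 182.44.82.16/26 from the question above:
net = ipaddress.ip_interface('182.44.82.16/26').network
first = str(net.network_address)    # network (first) address
last = str(net.broadcast_address)   # broadcast (last) address
```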
16.0 Dynamic Routing Protocols
16.1 IP routing:
In general terms, routing is the process of forwarding packets between connected
networks. For TCP/IP-based networks, routing is part of Internet Protocol (IP) and is
used in combination with other network protocol services to provide forwarding
capabilities between hosts that are located on separate network segments within a
larger TCP/IP-based network.
Static Routing:
Advantages:
Minimal CPU/Memory overhead
No bandwidth overhead (updates are not shared between routers)
Granular control on how traffic is routed
Disadvantages:
Infrastructure changes must be manually adjusted
No "dynamic" fault tolerance if a link goes down
Impractical on large networks
Dynamic Routing:
Advantages:
Simpler to configure on larger networks
Will dynamically choose a different (or better) route if a link goes down
Ability to load balance between multiple links
Disadvantages:
Updates are shared between routers, thus consuming bandwidth
Routing protocols put additional load on router CPU/RAM
The choice of the "best route" is in the hands of the routing protocol, and not the
network administrator
16.2 Dynamic Routing Protocols
Scalability: Scalability defines how large a network can become, based on the routing
protocol that is deployed. The larger the network is, the more scalable the routing
protocol needs to be.
Classful or classless (use of VLSM): Classful routing protocols do not include the
subnet mask and cannot support variable-length subnet mask (VLSM). Classless routing
protocols include the subnet mask in the updates. Classless routing protocols support
VLSM and better route summarization.
Resource usage: Resource usage includes the requirements of a routing protocol such as
memory space (RAM), CPU utilization, and link bandwidth utilization. Higher resource
requirements necessitate more powerful hardware to support the routing protocol
operation, in addition to the packet forwarding processes.
Implementation and maintenance: Implementation and maintenance describes the level
of knowledge that is required for a network administrator to implement and maintain
the network based on the routing protocol deployed.
17.0 Transport Layer
17.1 Introduction
Transmission Control Protocol (TCP) supports the network at the transport layer.
Transmission Control Protocol (TCP) provides a reliable connection oriented
service.
Connection oriented means both the client and server must open the connection
before data is sent.
TCP provides:
o End to end reliability.
o Data packet resequencing.
o Flow control.
TCP relies on the IP service at the network layer to deliver data to the host. Since
IP is not reliable with regard to message quality or delivery, TCP must make
provisions to be sure messages are delivered on time and correctly
At the sending end of each transmission, TCP divides a long transmission into
smaller data units and packages each into a unit called a "segment".
The header is followed by data.
Port Address:
o Each port is defined by a positive integer address carried in the header of
the transport Layer Packet.
o The size of a port address is 16 bits.
o This allows up to 65,536 ports.
o The port address range is 0 to 65,535.
17.2 Transport Layer Services:
Service-point Addressing:
o Computers often run several programs at the same time.
o The transport layer header therefore must include a type of address called
a service-point address (or port address).
Segmentation and reassembly:
o A message is divided into transmittable segments.
o Each segment contains a sequence number.
o The sequence numbers enable the transport layer to reassemble the message
correctly upon arrival at the destination and to identify and replace packets
that were lost in transmission.
Connection Control:
o The transport layer can be either connectionless or connection-oriented.
o A connectionless transport layer treats each segment as an independent
packet and delivers it to the transport layer at the destination machine.
o A connection-oriented transport layer makes a connection with the
transport layer at the destination machine first before delivering the
packets. After all the data are transferred, the connection is terminated.
Flow control:
o It is responsible for flow control end to end, rather than across a single
link.
Error control:
o It is responsible for error control end to end, rather than across a single
link.
o The sending transport layer makes sure that entire message arrives at the
receiving transport layer without error (damage, loss or duplication).
o Error correction is usually achieved through retransmission.
Three-Way Handshaking
In TCP, connection-oriented transmission requires three phases: connection
establishment, data transfer, and connection termination.
Connection Establishment
TCP transmits data in full-duplex mode. When two TCPs in two machines are
connected, they are able to send segments to each other simultaneously. This
implies that each party must initialize communication and get approval from the
other party before any data are transferred.
The connection establishment in TCP is called three way handshaking. In our
example, an application program, called the client, wants to make a connection
with another application program, called the server, using TCP as the transport
layer protocol.
The process starts with the server. The server program tells its TCP that it is
ready to accept a connection. This is called a request for a passive open. Although
the server TCP is ready to accept any connection from any machine in the world,
it cannot make the connection itself.
The client program issues a request for an active open. A client that wishes to
connect to an open server tells its TCP that it needs to be connected to that
particular server. TCP can now start the three-way handshaking process as
shown in Figure
To show the process, we use two time lines: one at each site. Each segment has
values for all its header fields and perhaps for some of its option fields, too.
However, we show only the few fields necessary to understand each phase. We
show the sequence number, the acknowledgment number, the control flags (only
those that are set), and the window size, if not empty. The three steps in this
phase are as follows.
Step 1: The client sends the first segment, a SYN segment, in which only the SYN flag is
set. This segment is for synchronization of sequence numbers. It consumes one
sequence number. When the data transfer starts, the sequence number is incremented
by 1. We can say that the SYN segment carries no real data, but we can think of it as
containing 1 imaginary byte.
A SYN segment cannot carry data, but it consumes one sequence number.
Step 2: The server sends the second segment, a SYN +ACK segment, with 2 flag bits set:
SYN and ACK. This segment has a dual purpose. It is a SYN segment for
communication in the other direction and serves as the acknowledgment for the SYN
segment. It consumes one sequence number.
A SYN +ACK segment cannot carry data, but does consume one sequence number.
Step 3: The client sends the third segment. This is just an ACK segment. It
acknowledges the receipt of the second segment with the ACK flag and
acknowledgment number field. Note that the sequence number in this segment is the
same as the one in the SYN segment; the ACK segment does not consume any sequence
numbers.
An ACK segment, if carrying no data, consumes no sequence number.
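In the sockets API, these three steps happen inside the library calls: the server's passive open is listen(), the client's active open is connect() (which performs SYN / SYN+ACK / ACK before returning), and accept() returns once the handshake completes. A minimal loopback sketch, with hypothetical data:

```python
import socket
import threading

def run_server(server_sock, result):
    conn, addr = server_sock.accept()   # returns after SYN received,
    with conn:                          # SYN+ACK sent, and ACK received
        result.append(conn.recv(1024))  # data transfer follows the handshake

# Passive open: the server tells its TCP it is ready to accept a connection.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(('127.0.0.1', 0))           # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

received = []
t = threading.Thread(target=run_server, args=(server, received))
t.start()

# Active open: connect() performs the three-way handshake before returning.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(('127.0.0.1', port))
client.sendall(b'hello')                # data can flow only after the handshake
client.close()
t.join()
server.close()
```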
Data transfer
After connection is established, bidirectional data transfer can take place. The client and
server can both send data and acknowledgments.
In this example, after connection is established (not shown in the figure), the client
sends 2000 bytes of data in two segments. The server then sends 2000 bytes in one
segment. The client sends one more segment. The first three segments carry both data
and acknowledgment, but the last segment carries only an acknowledgment because
there are no more data to be sent. Note the values of the sequence and acknowledgment
numbers. The data segments sent by the client have the PSH (push) flag set so that the
server TCP knows to deliver data to the server process as soon as they are received. The
segment from the server, on the other hand, does not set the push flag.
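The cumulative acknowledgment numbering in the exchange above can be checked with a small sketch. The starting byte numbers (8001 for the client, 15001 for the server) are illustrative assumptions continuing from the handshake.

```python
# Cumulative ACK numbering: the ACK carries the number of the
# next byte expected. Byte numbers below are assumed examples.

def next_ack(first_byte, payload_len):
    """Number of the next byte the receiver expects."""
    return first_byte + payload_len

# Client sends bytes 8001-9000, then 9001-10000 (two 1000-byte segments).
print(next_ack(8001, 1000))   # ACK after the first segment
print(next_ack(9001, 1000))   # server acknowledges up through byte 10000
# Server sends bytes 15001-17000 in one 2000-byte segment.
print(next_ack(15001, 2000))  # client acknowledges up through byte 17000
```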
Prepared By: Prof. Ajay N. Upadhyaya, Asst. Prof. CE Dept, LJIET Page 125
Connection Termination
Prepared By: Prof. Ajay N. Upadhyaya, Asst. Prof. CE Dept, LJIET Page 126
17.4 TCP Message Format / TCP Segment Format
Prepared By: Prof. Ajay N. Upadhyaya, Asst. Prof. CE Dept, LJIET Page 127
11. Options and Padding - (variable length) Conveys additional information and pads the header for alignment purposes.
UDP Header Format
1. Source port number (16 bits) - An optional field; the address of the application program that created the message.
2. Destination port number (16 bits) - The address of the application program that will receive the message.
3. UDP length (16 bits) - Gives the total length of the user datagram, header plus data.
4. UDP checksum (16 bits) - Used in error detection.
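The four 16-bit fields above can be packed into the 8-byte UDP header with a short sketch; the port numbers and payload are illustrative assumptions.

```python
import struct

# Pack the four 16-bit UDP header fields in network byte order.
# Checksum is left as 0 here (permitted over IPv4) for simplicity.

def udp_header(src_port, dst_port, payload, checksum=0):
    length = 8 + len(payload)   # UDP length covers header + data
    return struct.pack("!HHHH", src_port, dst_port, length, checksum)

hdr = udp_header(50000, 53, b"query")
print(len(hdr))                          # header is 8 bytes
print(struct.unpack("!HHHH", hdr)[2])    # length field: 8 + 5 = 13
```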
Use of UDP
The following lists some uses of the UDP protocol:
UDP is suitable for a process that requires simple request-response
communication with little concern for flow and error control. It is not usually
used for a process such as FTP that needs to send bulk data.
UDP is suitable for a process with internal flow and error control mechanisms.
For example, the Trivial File Transfer Protocol (TFTP) process includes flow and
error control. It can easily use UDP.
UDP is a suitable transport protocol for multicasting. Multicasting capability is
embedded in the UDP software but not in the TCP software.
UDP is used for management processes such as SNMP.
UDP is used for some route updating protocols such as Routing Information
Protocol (RIP).
Features
UDP is used when acknowledgement of data does not hold any significance.
UDP is good protocol for data flowing in one direction.
UDP is simple and suitable for query based communications.
UDP is not connection oriented.
UDP does not provide congestion control mechanism.
UDP does not guarantee ordered delivery of data.
UDP is stateless.
UDP is suitable protocol for streaming applications such as VoIP, multimedia
streaming.
UDP application
Here are few applications where UDP is used to transmit data:
Domain Name Services
Simple Network Management Protocol
Trivial File Transfer Protocol
Routing Information Protocol
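UDP's connectionless exchange can be demonstrated with a minimal sketch: a datagram socket is bound to the loopback address and a "request" is sent to it with no connection establishment and no delivery guarantee.

```python
import socket

# Minimal UDP exchange over loopback: one datagram, no handshake,
# no acknowledgment. Uses an OS-assigned ephemeral port.

recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))      # OS picks a free port
recv_sock.settimeout(2)               # avoid blocking forever
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_sock.sendto(b"request", addr)    # fire-and-forget datagram

data, peer = recv_sock.recvfrom(1024)
print(data)
send_sock.close()
recv_sock.close()
```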
Multiplexing and De-multiplexing
The addressing mechanism allows multiplexing and demultiplexing by the transport
layer, as shown in the figure below:
Multiplexing
At the sender site, there may be several processes that need to send packets. However,
there is only one transport layer protocol at any time. This is a many-to-one relationship
and requires multiplexing. The protocol accepts messages from different processes,
differentiated by their assigned port numbers. After adding the header, the transport
layer passes the packet to the network layer.
De-multiplexing
At the receiver site, the relationship is one-to-many and requires demultiplexing. The
transport layer receives datagrams from the network layer. After error checking and
dropping of the header, the transport layer delivers each message to the appropriate
process based on the port number.
Two clients, using the same destination port number (80) to communicate with the same Web
server application.
Principles of Reliable Data Transfer
In this section, we will incrementally develop the sender and receiver sides of a reliable
data transfer protocol, considering increasingly complex models of the underlying
channel.
One assumption we’ll adopt throughout our discussion here is that packets will be
delivered in the order in which they were sent, with some packets possibly being lost;
that is, the underlying channel will not reorder packets. Below figure illustrates the
interfaces for our data transfer protocol. The sending side of the data transfer protocol
will be invoked from above by a call to rdt_send(). It will pass the data to be delivered
to the upper layer at the receiving side. (Here rdt stands for reliable data transfer protocol
and _send indicates that the sending side of rdt is being called.)
17.8 Congestion Control
17.8.1 Congestion:
An important issue in a packet-switched network is congestion. Congestion in a
network may occur if the load on the network-the number of packets sent to the
network-is greater than the capacity of the network-the number of packets a network can
handle. Congestion control refers to the mechanisms and techniques to control the
congestion and keep the load below the capacity.
We may ask why there is congestion in a network. Congestion happens in any system
that involves waiting. For example, congestion happens on a freeway because any
abnormality in the flow, such as an accident during rush hour, creates blockage.
Open-Loop Congestion Control
In open-loop congestion control, policies are applied to prevent congestion before it happens.
Retransmission Policy
Retransmission is sometimes unavoidable. If the sender feels that a sent packet is lost or
corrupted, the packet needs to be retransmitted. Retransmission in general may increase
congestion in the network. However, a good retransmission policy can prevent
congestion. The retransmission policy and the retransmission timers must be designed
to optimize efficiency and at the same time prevent congestion. For example, the
retransmission policy used by TCP (explained later) is designed to prevent or alleviate
congestion.
Window Policy
The type of window at the sender may also affect congestion. The Selective Repeat
window is better than the Go-Back-N window for congestion control. In the Go-Back-N
window, when the timer for a packet times out, several packets may be resent, although
some may have arrived safe and sound at the receiver. This duplication may make the
congestion worse. The Selective Repeat window, on the other hand, tries to send the
specific packets that have been lost or corrupted.
Acknowledgment Policy
The acknowledgment policy imposed by the receiver may also affect congestion. If the
receiver does not acknowledge every packet it receives, it may slow down the sender
and help prevent congestion. Several approaches are used in this case. A receiver may
send an acknowledgment only if it has a packet to be sent or a special timer expires. A
receiver may decide to acknowledge only N packets at a time. We need to know that the
acknowledgments are also part of the load in a network. Sending fewer
acknowledgments means imposing less load on the network.
Discarding Policy
A good discarding policy by the routers may prevent congestion and at the same time
may not harm the integrity of the transmission. For example, in audio transmission, if
the policy is to discard less sensitive packets when congestion is likely to happen, the
quality of sound is still preserved and congestion is prevented or alleviated.
Admission Policy
An admission policy, which is a quality-of-service mechanism, can also prevent
congestion in virtual-circuit networks. Switches in a flow first check the resource
requirement of a flow before admitting it to the network. A router can deny establishing
a virtual-circuit connection if there is congestion in the network or if there is a possibility
of future congestion.
Closed-Loop Congestion Control
Closed-loop congestion control mechanisms try to alleviate congestion after it happens.
Several mechanisms have been used by different protocols. We describe a few of them
here.
Backpressure
In the backpressure technique, a congested node stops receiving data from the immediate
upstream node or nodes. This may cause the upstream node or nodes to become
congested in turn, and they, too, reject data from their upstream nodes. In this way the
pressure propagates backward toward the source, which may then slow down.
Choke Packet
A choke packet is a packet sent by a node to the source to inform it of congestion. Note
the difference between the backpressure and choke packet methods. In backpressure,
the warning is from one node to its upstream node, although the warning may
eventually reach the source station. In the choke packet method, the warning is from the
router, which has encountered congestion, to the source station directly. The
intermediate nodes through which the packet has traveled are not warned. We have
seen an example of this type of control in ICMP. When a router in the Internet is
overwhelmed with IP datagrams, it may discard some of them, but it informs the source
host, using a source-quench ICMP message. The warning message goes directly to the
source station; the intermediate routers through which the datagram has traveled take
no action. The figure shows the idea of a choke packet.
Implicit Signaling
In implicit signaling, there is no communication between the congested node or nodes
and the source. The source guesses that there is a congestion somewhere in the network
from other symptoms. For example, when a source sends several packets and there is
no acknowledgment for a while, one assumption is that the network is congested. The
delay in receiving an acknowledgment is interpreted as congestion in the network; the
source should slow down.
Explicit Signaling
The node that experiences congestion can explicitly send a signal to the source or
destination. The explicit signaling method, however, is different from the choke packet
method. In the choke packet method, a separate packet is used for this purpose; in the
explicit signaling method, the signal is included in the packets that carry data. Explicit
signaling, as we will see in Frame Relay congestion control, can occur in either the
forward or the backward direction.
Backward Signaling A bit can be set in a packet moving in the direction opposite to the
congestion. This bit can warn the source that there is congestion and that it needs to
slow down to avoid the discarding of packets.
Forward Signaling A bit can be set in a packet moving in the direction of the congestion.
This bit can warn the destination that there is congestion. The receiver in this case can
use policies, such as slowing down the acknowledgments, to alleviate the congestion.
17.8.2 Congestion control in TCP
Congestion Window
We said that the sender window size is determined by the available buffer space in the
receiver (rwnd).
The sender has two pieces of information: the receiver-advertised window size and the
congestion window size. The actual size of the window is the minimum of these two.
Actual window size = minimum (rwnd, cwnd)
Congestion Policy
TCP's general policy for handling congestion is based on three phases:
Slow start (exponential increase),
Congestion avoidance (Additive Increase), and
Congestion detection (Multiplicative Decrease)
Slow start:
In the slow-start phase, the sender starts with a very slow rate of transmission, but
increases the rate rapidly to reach a threshold. When the threshold is reached, the data
rate is reduced to avoid congestion. Finally, if congestion is detected, the sender goes
back to the slow-start or congestion avoidance phase based on how the congestion is
detected.
Exponential Increase One of the algorithms used in TCP congestion control is called
slow start. This algorithm is based on the idea that the size of the congestion window
(cwnd) starts with one maximum segment size (MSS). The MSS is determined during
connection establishment by using an option of the same name. The size of the window
increases one MSS each time an acknowledgment is received. As the name implies, the
window starts slowly, but grows exponentially. To show the idea, let us look at Figure
We have used segment numbers instead of byte numbers (as though each segment
contains only 1 byte). We have assumed that rwnd is much higher than cwnd, so that the
sender window size always equals cwnd. We have assumed that each segment is
acknowledged individually.
Slow start, exponential increase
The sender starts with cwnd =1 MSS. This means that the sender can send only one
segment. After receipt of the acknowledgment for segment 1, the size of the congestion
window is increased by 1, which means that cwnd is now 2. Now two more segments
can be sent. When each acknowledgment is received, the size of the window is
increased by 1 MSS. When all seven segments are acknowledged, cwnd = 8.
If we start with the slow-start algorithm, the size of the congestion window increases
exponentially. To avoid congestion before it happens, one must slow down this
exponential growth. TCP defines another algorithm called congestion avoidance, which
undergoes an additive increase instead of an exponential one. When the size of the
congestion window reaches the slow-start threshold, the slow-start phase stops and the
additive phase begins. In this algorithm, each time the whole window of segments is
acknowledged (one round), the size of the congestion window is increased by 1. To
show the idea, we apply this algorithm to the same scenario as slow start, although we
will see that the congestion avoidance algorithm usually starts when the size of the
window is much greater than 1. Figure shows the idea.
In the congestion avoidance algorithm, the size of the congestion window increases
additively until congestion is detected.
If congestion occurs, the congestion window size must be decreased. The only way the
sender can guess that congestion has occurred is by the need to retransmit a segment.
However, retransmission can occur in one of two cases: when a timer times out or when
three ACKs are received. In both cases, the size of the threshold is dropped to one-half,
a multiplicative decrease. Most TCP implementations have two reactions:
1. If a time-out occurs, there is a stronger possibility of congestion; a segment has
probably been dropped in the network, and there is no news about the following
segments. In this case TCP reacts strongly:
a. It sets the value of the threshold to one-half of the current window size.
b. It sets cwnd to the size of one segment.
c. It starts the slow-start phase again.
2. If three duplicate ACKs are received, there is a weaker possibility of congestion; a
segment may have been dropped, but some segments after it may have arrived safely
since three ACKs were received. This is called fast retransmission and fast recovery. In
this case, TCP has a weaker reaction:
a. It sets the value of the threshold to one-half of the current window size.
b. It sets cwnd to the value of the threshold (some implementations add three
segment sizes to the threshold).
c. It starts the congestion avoidance phase.
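The three-phase policy above can be simulated with a small sketch. Sizes are in MSS units, per-round doubling stands in for the per-ACK increase during slow start, and the event trace is an illustrative assumption.

```python
# Sketch of TCP congestion policy: slow start (exponential),
# congestion avoidance (additive), multiplicative decrease.

def react(cwnd, ssthresh, event):
    if event == "timeout":              # strong reaction
        return 1, max(cwnd // 2, 1)     # cwnd = 1 MSS, threshold halved
    if event == "3-dup-acks":           # fast retransmission / recovery
        new_thresh = max(cwnd // 2, 1)
        return new_thresh, new_thresh   # cwnd = new threshold
    # "round": one full window acknowledged
    if cwnd < ssthresh:
        return min(cwnd * 2, ssthresh), ssthresh   # slow start
    return cwnd + 1, ssthresh                      # congestion avoidance

cwnd, ssthresh = 1, 8
trace = []
for event in ["round", "round", "round", "round", "round",
              "timeout", "round", "round"]:
    cwnd, ssthresh = react(cwnd, ssthresh, event)
    trace.append(cwnd)
print(trace)   # [2, 4, 8, 9, 10, 1, 2, 4]
```

Note how growth is exponential up to the threshold (8), additive beyond it, and how the timeout resets cwnd to 1 with a halved threshold (5).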
18. 0 Sliding window protocol
In conventional networks, data can get lost, reordered, or duplicated because of routers
and limited buffer space along the unreliable channel. The data link layer deals with
frame formation, flow control, error control, addressing, and link management; all of
these functions are performed by data link protocols. A sliding window protocol corrects
errors either by using received data that contain enough redundant bits or by requesting
a retransmission of the data.
While the physical layer deals with transmitting signals over different media, the data
link layer is responsible for frame formation, flow control, and error control over these
unreliable channels.
By placing limits on the number of packets that can be transmitted or received at any
given time, a sliding window protocol allows an unlimited number of packets to be
communicated using fixed-size sequence numbers. The term "window" on the
transmitter side represents the logical boundary of the total number of packets yet to be
acknowledged by the receiver. The receiver informs the transmitter in each
acknowledgment packet the current maximum receiver buffer size (window boundary).
The TCP header uses a 16-bit field to report the receive window size to the sender.
Therefore, the largest window that can be used is 2^16 bytes = 64 KB.
The sliding window method ensures that traffic congestion on the network is avoided.
The application layer will still be offering data for transmission to TCP without
worrying about the network traffic congestion issues as the TCP on sender and receiver
side implement sliding windows of packet buffer. The window size may vary
dynamically depending on network traffic.
In any communication protocol based on automatic repeat request for error control,
the receiver must acknowledge received packets. If the transmitter does not receive
an acknowledgment within a reasonable time, it re-sends the data.
Data in the sender buffer are sent in chunks instead of all at once. Why?
Suppose the sender buffer has a capacity of 1 MB and the receiver buffer has a capacity
of 512 KB; then up to 50% of the data could be lost at the receiver end, which would
unnecessarily cause retransmission of packets from the sender end. Therefore, the
sender sends data in chunks of less than 512 KB. This is decided with the help of the
window size, which reflects the capacity of the receiver. Flow control is a
receiver-related problem: we do not want the receiver to be overwhelmed, so to avoid
that situation we control the flow by using a window size "N".
How will the sender be aware of the size of the receiver’s buffer or what should be the
window size at the sender end?
Before data is transmitted from one host to another, a connection is first established
between the two. During this establishment, information such as the window size and
buffer size is shared between the two hosts, after which the data transmission begins.
Sliding window protocols assume full-duplex communication. They use two types of
frames: data frames and acknowledgment frames. One of the important features of all
sliding window protocols is that each outbound frame contains a sequence number,
ranging from 0 to 2^n - 1, where the value of n can be arbitrary. The sliding window
refers to imaginary boxes at the transmitter and receiver. This window provides the
upper limit on the number of frames that can be transmitted before an acknowledgment
is required. The window holds the frames that count toward this limit: frames being
transmitted fall in the sending window, and frames to be accepted are stored in the
receiving window.
The sequence numbers within the sender's window represent the frames that have been
sent but not yet acknowledged. The frames in the sender's window are stored so that
they can be retransmitted if they are damaged while traveling to the receiver.
The receiver window represents not the number of frames received but the number of
frames that may still be received before an ACK is sent. The receiver's sliding window
shrinks from the left when frames of data are received and expands to the right when an
ACK is sent. The receiver window contains (n-1) spaces for frames.
Sliding Window
• Sliding window refers to an imaginary boxes that hold the frames on both sender and
receiver side.
• It provides the upper limit on the number of frames that can be transmitted before
requiring an acknowledgment.
• Frames may be acknowledged by receiver at any point even when window is not full
on receiver side.
• Frames may be transmitted by source even when window is not yet full on sender
side.
• The windows have a specific size in which the frames are numbered modulo n,
which means they are numbered from 0 to n-1. For example, if n = 8, the frames are
numbered 0, 1, 2, 3, 4, 5, 6, 7, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1, ...
• The size of the window is n-1. In this case it is 7. Therefore, a maximum of n-1
frames may be sent before an acknowledgment.
• When the receiver sends an ACK, it includes the number of the next frame it expects
to receive. For example, in order to acknowledge the group of frames ending in frame 4,
the receiver sends an ACK containing the number 5. When the sender sees an ACK with
number 5, it knows that all the frames up to number 4 have been received.
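The modulo-n numbering and the ACK convention above can be sketched as follows (n = 8 is the example assumed in the text).

```python
# Modulo-n frame numbering with window size n-1 and cumulative ACKs.

N = 8                # sequence numbers 0..7
WINDOW = N - 1       # at most 7 unacknowledged frames

def frame_number(i):
    return i % N

# Frames 0..9 carry sequence numbers 0,1,2,3,4,5,6,7,0,1
print([frame_number(i) for i in range(10)])

def ack_for(last_received):
    """The ACK carries the number of the next frame expected."""
    return (last_received + 1) % N

# Acknowledging the group of frames ending in frame 4 sends ACK 5.
print(ack_for(4))
```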
Stop and Wait ARQ
In this case n = 1, and the stop-and-wait technique is used. The sender waits for an ACK
after each frame transmission. The operation of this protocol is based on the ARQ
(automatic repeat request) principle: the next frame is transmitted when a positive ACK
is received, and the same frame is retransmitted when a negative ACK is received.
Stop-and-wait ARQ becomes inefficient when the propagation delay is much greater
than the frame transmission time. For example, let us assume that a frame of 800 bits is
transmitted over a channel with a speed of 1 Mbps, and that the time from the start of
transmission until the ACK arrives is 30 ms. The number of bits that could be
transmitted over this channel in that time is 30,000 bits, but in stop-and-wait ARQ only
800 bits are transmitted because the sender waits for the ACK. The product of bit rate
and delay is called the delay-bandwidth product. It measures the lost opportunity in
transmitted bits.
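The numbers in the example above can be verified with a short calculation using the stated figures (800-bit frame, 1 Mbps channel, 30 ms until the ACK arrives).

```python
# Delay-bandwidth product and stop-and-wait link utilization.

bit_rate = 1_000_000          # bits per second
frame_bits = 800
cycle_time = 0.030            # seconds from first bit until ACK

delay_bw_product = bit_rate * cycle_time      # bits the channel could carry
utilization = frame_bits / delay_bw_product   # fraction actually used

print(int(delay_bw_product))   # 30000 bits
print(round(utilization, 4))   # 0.0267, i.e. under 3% utilization
```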
A Protocol Using Go Back n ARQ:
The sender in this case does not wait for the ACK signal before transmitting the next
frame. The sender transmits frames continuously so that the channel is kept busy rather
than wasted in waiting, since in the stop-and-wait protocol the system transmits
nothing while waiting and the channel remains idle for a considerable time period. In
this case the system depends on the NACK (negative acknowledgment), which signals
an error in a particular frame. Because the NACK signal takes some time to reach the
sender, the sender continues to transmit in the meantime. Example: suppose that while
frames are being transmitted, an error occurs in frame 3 and the receiver transmits a
NACK. By the time the NACK reaches the transmitter, frames up to frame 7 have
already been transmitted. On reception of the NACK, the transmitter retransmits all
frames from frame 3 onward, and the receiver discards all the frames it received after
frame 3.
An error occurs only when a transmitted frame or its acknowledgment is lost. In the
case of damaged or lost frames, the receiver transmits a NACK to the transmitter, and
the transmitter retransmits all the frames sent since the last acknowledged frame. The
disadvantage of the Go-Back-N ARQ protocol is that its efficiency decreases in a noisy
channel, because many error-free frames must be retransmitted along with each
damaged one.
The stop-and-wait ARQ mechanism does not utilize the resources at their best: until the
acknowledgement is received, the sender sits idle and does nothing. In the Go-Back-N
ARQ method, both sender and receiver maintain a window.
The sending-window size enables the sender to send multiple frames without receiving
the acknowledgement of the previous ones. The receiving-window enables the receiver
to receive multiple frames and acknowledge them. The receiver keeps track of
incoming frame’s sequence number.
When the sender sends all the frames in window, it checks up to what sequence
number it has received positive acknowledgement. If all frames are positively
acknowledged, the sender sends the next set of frames. If the sender finds that it has
received a NACK, or has not received an ACK for a particular frame, it retransmits all
the frames starting from the first one that has not been positively acknowledged.
Go-Back-N ARQ is a specific instance of the automatic repeat request (ARQ) protocol,
in which the sending process continues to send a number of frames specified by
a window size even without receiving an acknowledgement (ACK) packet from the
receiver. It is a special case of the general sliding window protocol with the transmit
window size of N and receive window size of 1. It can transmit N frames to the peer
before requiring an ACK.
The receiver process keeps track of the sequence number of the next frame it expects to
receive, and sends that number with every ACK it sends. The receiver will discard any
frame that does not have the exact sequence number it expects (either a duplicate frame
it already acknowledged, or an out-of-order frame it expects to receive later) and will
resend an ACK for the last correct in-order frame. Once the sender has sent all of the
frames in its window, it will detect that all of the frames since the first lost frame
are outstanding, and will go back to the sequence number of the last ACK it received
from the receiver process and fill its window starting with that frame and continue the
process over again.
Go-Back-N ARQ is a more efficient use of a connection than Stop-and-wait ARQ, since
unlike waiting for an acknowledgement for each packet, the connection is still being
utilized as packets are being sent. In other words, during the time that would otherwise
be spent waiting, more packets are being sent. However, this method also results in
sending frames multiple times – if any frame was lost or damaged, or the ACK
acknowledging them was lost or damaged, then that frame and all following frames in
the window (even if they were received without error) will be re-sent. To avoid
this, Selective Repeat ARQ can be used.
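The Go-Back-N retransmission behavior described above can be simulated with a small sketch. The window size and the lossy frame are illustrative assumptions.

```python
# Go-Back-N sketch: on an error in the oldest unacknowledged frame,
# the sender goes back and resends that frame and everything after it.

def go_back_n(total_frames, window, lost):
    sent, base, next_frame = [], 0, 0
    lost = set(lost)
    while base < total_frames:
        # Fill the window with new frames.
        while next_frame < base + window and next_frame < total_frames:
            sent.append(next_frame)
            next_frame += 1
        if base in lost:
            lost.discard(base)   # assume the retransmission succeeds
            next_frame = base    # go back: resend base and all that follow
        else:
            base += 1            # in-order frame acknowledged
    return sent

# Frames 0..5, window of 4, frame 3 lost once:
print(go_back_n(6, 4, {3}))   # [0, 1, 2, 3, 4, 5, 3, 4, 5]
```

Note that frames 4 and 5 are resent even though the receiver may have gotten them without error; avoiding this is exactly what Selective Repeat ARQ improves.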
In Selective-Repeat ARQ, the receiver, while keeping track of sequence numbers,
buffers the frames in memory and sends a NACK only for the frame which is missing or
damaged. The sender, in this case, retransmits only the packet for which the NACK is
received.
Piggybacking
Piggybacking is a bi-directional data transmission technique used at the data link layer
(OSI model). It makes use of the data frames traveling from receiver to sender by adding
to them the confirmation that the data frame sent by the sender was received
successfully (an ACK, or acknowledgment). In practice this means that instead of
sending an acknowledgement in an individual frame, it is piggybacked on a data frame.
19.0 Application Layer Protocols: DNS
1) Generic Domains:
The generic domains define registered hosts according to their behavior.
Label Description
com Commercial organizations
edu Educational institutions
gov Government institutions
int International organizations
mil Military groups
net Network support centers
org Nonprofit organizations
2) Country Domains:
The country domain section follows the same format as the generic domains but
uses two-character country abbreviations (e.g., "in" for India).
3) Inverse Domains:
The inverse domain is used to map an address to a name.
DNS Centralized or De-centralized?
A simple design for DNS would have one DNS server that contains all the mappings. In
this centralized design, clients simply direct all queries to the single DNS server, and the
DNS server responds directly to the querying clients. Although the simplicity of this design
is attractive, it is inappropriate for today’s Internet, with its vast (and growing) number of
hosts. The problems with a centralized design include:
• A single point of failure. If the DNS server crashes, so does the entire Internet!
• Traffic volume. A single DNS server would have to handle all DNS queries (for
all the HTTP requests and e-mail messages generated from hundreds of millions
of hosts).
• Distant centralized database. A single DNS server cannot be "close to" all the
querying clients. If we put the single DNS server in New York City, then all
queries from Australia must travel to the other side of the globe, perhaps over
slow and congested links. This can lead to significant delays.
• Maintenance. The single DNS server would have to keep records for all Internet
hosts. Not only would this centralized database be huge, but it would have to be
updated frequently to account for every new host.
• Root DNS servers. Each root "server" is actually a network of replicated servers, for
both security and reliability purposes. All together, there are 247 root servers as of fall 2011.
• Top-level domain (TLD) servers.
These servers are responsible for top-level domains such as com, org, net, edu, and gov,
and all of the country top-level domains such as uk, fr, ca, and jp. The company Verisign
Global Registry Services maintains the TLD servers for the com top-level domain, and the
company Educause maintains the TLD servers for the edu top-level domain. See [IANA
TLD 2012] for a list of all top-level domains.
• Authoritative DNS servers.
Every organization with publicly accessible hosts (such as Web servers and mail servers)
on the Internet must provide publicly accessible DNS records that map the names of those
hosts to IP addresses. An organization’s authoritative DNS server houses these DNS
records. An organization can choose to implement its own authoritative DNS server to
hold these records; alternatively, the organization can pay to have these records stored in
an authoritative DNS server of some service provider. Most universities and large
companies implement and maintain their own primary and secondary (backup)
authoritative DNS server.
DNS Message
• The first 12 bytes is the header section, which has a number of fields.
The first field is a 16-bit number that identifies the query. This identifier is copied into the
reply message to a query, allowing the client to match received replies with sent queries.
There are a number of flags in the flag field. A 1-bit query/reply flag indicates whether
the message is a query (0) or a reply (1). A 1-bit authoritative flag is set in a reply
message when a DNS server is an authoritative server for a queried name. A 1-bit
recursion-desired flag is set when a client (host or DNS server) desires that the DNS
server perform recursion when it doesn't have the record. A 1-bit recursion-available
field is set in a reply if the DNS server supports recursion. In the header, there are also
four number-of fields. These fields indicate the number of occurrences of the four types
of data sections that follow the header.
• The question section contains information about the query that is being made.
This section includes (1) a name field that contains the name that is being
queried, and (2) a type field that indicates the type of question being asked about
the name—for example, a host address associated with a name (Type A) or the
mail server for a name (Type MX).
• In a reply from a DNS server, the answer section contains the resource records
for the name that was originally queried. Recall that in each resource record there
is the Type (for example, A, NS, CNAME, and MX), the Value, and the TTL. A
reply can return multiple RRs in the answer, since a hostname can have multiple
IP addresses (for example, for replicated Web servers, as discussed earlier in this
section).
• The authority section contains records of other authoritative servers.
• The additional section contains other helpful records. For example, the answer
field in a reply to an MX query contains a resource record providing the
canonical hostname of a mail server. The additional section contains a Type A
record providing the IP address for the canonical hostname of the mail server.
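The header and question layout described above can be sketched with Python's standard struct module. This is a minimal sketch of the wire format only; the hostname and query ID are illustrative, and a real resolver would also handle name compression, truncation, and errors.

```python
import struct

def build_dns_query(hostname, qtype=1, query_id=0x1234):
    """Build a DNS query: 12-byte header followed by a question section.

    qtype 1 = Type A (host address), 15 = Type MX (mail server).
    """
    # Header: ID, flags (QR=0 for query, RD=1 for recursion desired),
    # then the four number-of fields: QDCOUNT=1, the rest zero.
    flags = 0x0100
    header = struct.pack(">HHHHHH", query_id, flags, 1, 0, 0, 0)
    # Question: each label is length-prefixed; a null byte (the root
    # label) terminates the name.
    question = b""
    for label in hostname.split("."):
        question += bytes([len(label)]) + label.encode("ascii")
    question += b"\x00"
    question += struct.pack(">HH", qtype, 1)   # QTYPE, QCLASS=IN
    return header + question

msg = build_dns_query("challenger.atc.jhda.edu", qtype=1)
```

The reply reuses the same header layout, with the query/reply flag set to 1 and the identifier copied back so the client can match it to the query.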
Name Space
To be unambiguous, the names assigned to machines must be carefully selected from a
name space with complete control over the binding between the names and IP addresses.
In other words, the names must be unique because the addresses are unique. A name space
that maps each address to a unique name can be organized in two ways: flat or
hierarchical.
Flat Name Space: In a flat name space, a name is assigned to an address. A name in this
space is a sequence of characters without structure. The names may or may not have a
common section; if they do, it has no meaning. The main disadvantage of a flat name space
is that it cannot be used in a large system such as the Internet, because it must be centrally
controlled to avoid ambiguity and duplication.
Hierarchical Name Space:
In a hierarchical name space, each name is made of several parts. The first part can define
the nature of the organization, the second part can define the name of an organization, the
third part can define departments in the organization, and so on. In this case, the authority
to assign and control the name spaces can be decentralized. A central authority can assign
the part of the name that defines the nature of the organization and the name of the
organization. The responsibility of the rest of the name can be given to the organization
itself. The organization can add suffixes (or prefixes) to the name to define its host or
resources. The management of the organization need not worry that the prefix chosen for a
host is taken by another organization because, even if part of an address is the same, the
whole address is different. For example, assume two colleges and a company call one of
their computers challenger. The first college is given a name by the central authority such as
jhda.edu, the second college is given the name berkeley.edu, and the company is given the
name smart.com. When these organizations add the name challenger to the name they have
already been given, the end result is three distinguishable names: challenger.jhda.edu,
challenger.berkeley.edu, and challenger.smart.com. The names are unique without the need for
assignment by a central authority. The central authority controls only part of the name, not
the whole.
Domain names and labels
A domain name is a sequence of labels separated by dots. If a label is terminated by a null
string (the root), the name is called a fully qualified domain name (FQDN). An FQDN
contains all labels, from the most specific to the most general, that uniquely define the
name of the host. For example, the domain name
challenger.atc.jhda.edu.
is the FQDN of a computer named challenger installed at the Advanced Technology Center
(ATC) at De Anza College. A DNS server can only match an FQDN to an address. Note
that the name must end with a null label, but because null means nothing, the label ends
with a dot (.).
If a label is not terminated by a null string, it is called a partially qualified domain name
(PQDN). A PQDN starts from a node, but it does not reach the root. It is used when the
name to be resolved belongs to the same site as the client. Here the resolver can supply the
missing part, called the suffix, to create an FQDN. For example, if a user at the jhda.edu. site
wants to get the IP address of the challenger computer, he or she can define the partial
name challenger.
The DNS client adds the suffix atc.jhda.edu. before passing the name to the DNS server.
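The resolver's suffix rule can be sketched in a few lines of Python. This is an illustrative sketch, not a real resolver: the trailing dot marks an FQDN, and the suffix atc.jhda.edu. is taken from the example above.

```python
def resolve_name(name, suffix="atc.jhda.edu."):
    """Turn a PQDN into an FQDN by appending the local suffix.

    A trailing dot (the null root label) marks a name as fully
    qualified; names without it are treated as partial.
    """
    if name.endswith("."):
        return name                      # already an FQDN, pass as-is
    return name + "." + suffix           # PQDN: supply the suffix

fqdn = resolve_name("challenger")   # -> "challenger.atc.jhda.edu."
```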
21.0 Application Layer Protocols: Email System
Email System
In this Email system we will discuss four protocols: SMTP, POP3, IMAP4, and MIME.
One of the most popular Internet services is electronic mail (e-mail). The
designers of the Internet probably never imagined the popularity of this
application program.
User Agent
The first component of an electronic mail system is the user agent (UA). It
provides service to the user to make the process of sending and receiving a
message easier.
Email Protocols
Among the services expected of an e-mail system are:
o Sending messages that include text, voice, video, or graphics.
o Sending messages to users on networks outside the Internet.
Instead of just one MTA at the sender site and one at the receiving site, other
MTAs, acting either as client or server, can relay the mail.
Specific format of E-mail Address:
Local Part: The local part defines the name of a special file, called the user mailbox,
where all the mail received for a user is stored for retrieval by the message access
agent.
Domain Name: The second part of the address is the domain name. An organization
usually selects one or more hosts to receive and send e-mail; the hosts are sometimes
called mail servers or exchangers. The domain name assigned to each mail exchanger
either comes from the DNS database or is a logical name (for example, the name of the
organization).
Mailing List
Electronic mail allows one name, an alias, to represent several different e-mail
addresses; this is called a mailing list. Every time a message is to be sent, the system
checks the recipient's name against the alias database; if there is a mailing list for the
defined alias, separate messages, one for each entry in the list, must be prepared and
handed to the MTA. If there is no mailing list for the alias, the name itself is the
receiving address and a single message is delivered to the mail transfer entity.
SMTP expects the receiving mail server to be available at all times. For this reason, it
is not practical to establish an SMTP session with a desktop computer, because
desktop computers are usually powered down at the end of the day.
In many organizations, mail is received by an SMTP server that is always on-
line. This SMTP server provides a mail-drop service.
The server receives the mail on behalf of every host in the organization.
Workstations interact with the SMTP host to retrieve messages by using a
client-server protocol such as Post Office Protocol, version 3 (POP3).
Although POP3 is used to download messages from the server, the SMTP client
is still needed on the desktop to forward messages from the workstation user to
its SMTP mail server.
Post Office Protocol, version 3 (POP3) is simple and limited in functionality. The
client POP3 software is installed on the recipient computer; the server POP3
software is installed on the mail server.
Figure shows an example of downloading using POP3
POP3 has two modes: the delete mode and the keep mode. In the delete mode,
the mail is deleted from the mailbox after each retrieval. In the keep mode, the
mail remains in the mailbox after retrieval. The delete mode is normally used
when the user is working at her permanent computer and can save and
organize the received mail after reading or replying. The keep mode is normally
used when the user accesses her mail away from her primary computer (e.g., a
laptop). The mail is read but kept in the system for later retrieval and
organizing.
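The two modes map naturally onto Python's standard poplib client. This is a sketch under stated assumptions: the host, user, and password are placeholders, not a real account, and error handling is omitted.

```python
import poplib

def fetch_mail(host, user, password, keep=True):
    """Download all messages from a POP3 mailbox.

    keep=True  -> keep mode: mail stays on the server after retrieval.
    keep=False -> delete mode: each message is deleted once retrieved.
    """
    box = poplib.POP3(host)          # POP3 uses well-known port 110
    box.user(user)
    box.pass_(password)
    count, _size = box.stat()        # message count and mailbox size
    messages = []
    for n in range(1, count + 1):
        _resp, lines, _octets = box.retr(n)
        messages.append(b"\r\n".join(lines))
        if not keep:
            box.dele(n)              # delete mode: remove after retrieval
    box.quit()
    return messages
```

A user on a laptop would call this with keep=True so the mail remains on the server for later retrieval from the primary computer.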
21.3 IMAP4
Another mail access protocol is Internet Mail Access Protocol, version 4 (IMAP4).
IMAP4 is similar to POP3, but it has more features; IMAP4 is more powerful and
more complex.
POP3 is deficient in several ways. It does not allow the user to organize her mail on
the server; the user cannot have different folders on the server. (Of course, the user
can create folders on her own computer.) In addition, POP3 does not allow the user
to partially check the contents of the mail before downloading.
IMAP4 provides the following extra functions:
o A user can check the e-mail header prior to downloading.
o A user can search the contents of the e-mail for a specific string of
characters prior to downloading.
o A user can partially download e-mail. This is especially useful if
bandwidth is limited and the e-mail contains multimedia with high
bandwidth requirements.
o A user can create, delete, or rename mailboxes on the mail server.
o A user can create a hierarchy of mailboxes in a folder for e-mail storage.
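The header-checking and partial-download functions above can be sketched with Python's standard imaplib client. The host and credentials are placeholders; BODY.PEEK[HEADER] asks the server for the headers only, without downloading the body or marking the message as read.

```python
import imaplib

def check_headers(host, user, password, n=5):
    """Fetch only the headers of the latest n messages over IMAP4."""
    m = imaplib.IMAP4_SSL(host)            # IMAP4 over SSL, port 993
    m.login(user, password)
    m.select("INBOX", readonly=True)
    _typ, data = m.search(None, "ALL")     # could also search by string
    ids = data[0].split()[-n:]
    headers = []
    for msg_id in ids:
        # Headers only: the body (possibly large multimedia) stays
        # on the server until the user decides to download it.
        _typ, msg_data = m.fetch(msg_id, "(BODY.PEEK[HEADER])")
        headers.append(msg_data[0][1])
    m.logout()
    return headers
```

POP3 offers no equivalent of this partial fetch; the whole message must be downloaded.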
21.4 MIME
Electronic mail has a simple structure. Its simplicity, however, comes at a price: it can
send messages only in NVT 7-bit ASCII format.
For example, it cannot be used for languages that are not supported by 7-bit ASCII
characters (such as French, German, Hebrew, Russian, Chinese, and Japanese). Also, it
cannot be used to send binary files or video or audio data.
MIME defines five headers that can be added to the original e-mail header section
to define the transformation parameters:
1. MIME-Version
2. Content-Type
3. Content-Transfer-Encoding
4. Content-Id
5. Content-Description
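As a sketch, Python's standard email package adds these headers automatically when a message body is not 7-bit ASCII; the French greeting and the description text here are illustrative.

```python
from email.mime.text import MIMEText

# A body that is not 7-bit ASCII. The email package adds the MIME
# headers listed above (MIME-Version, Content-Type with a charset,
# Content-Transfer-Encoding) so the text survives an ASCII-only
# mail transport.
msg = MIMEText("Bonjour, ça va ?", _subtype="plain", _charset="utf-8")
msg["Content-Description"] = "greeting in French"   # optional header
raw = msg.as_string()
# raw begins with headers such as:
#   MIME-Version: 1.0
#   Content-Type: text/plain; charset="utf-8"
#   Content-Transfer-Encoding: base64
```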
The OSI model also defines a number of specific application services for which
standardization is possible:
o Message handling system (MHS)
o File transfer, access and management (FTAM)
o Virtual terminal (VT)
o Directory system (DS)
o Common management information protocol (CMIP)
Message Handling System (MHS):
MHS is the OSI protocol that underlies electronic mail and store-and-forward
handling.
It is derived from the ITU-T X.400 series.
MHS is the system used to send any message (including copies of data and files) that
can be delivered in a store-and-forward manner.
o Store-and-forward delivery means that, instead of opening an active
channel between the sender and receiver, the protocol provides a delivery
service that forwards the message when a link becomes available.
o In most information-sharing protocols, both the sender and the receiver must
be able to participate in the exchange concurrently.
o In a store-and-forward system, the sender passes the message to the delivery
system.
o The delivery system may not be able to transmit the message
immediately, in which case it stores the message until conditions change.
o When the message is delivered, it is stored in the recipient's mailbox until it is
called for, much as in the regular postal system.
The structure of the MHS:
o The structure of the OSI message handling system is shown in figure.
o Each user communicates with a program or process called a user agent
(UA).
o The UA is unique for each user (each user receives a copy of the program
or process).
o An example of a UA is an electronic mail program associated with a specific
operating system that allows a user to type and edit messages.
o Each user has message storage (MS), which consists of disk space in a
mail storage system and is usually referred to as a mailbox.
o Message storage can be used for storing, sending, or receiving messages.
o The message storage communicates with a series of processes called
message transfer agents (MTAs).
o MTAs are like the different departments of a post office.
o The combined MTAs make up the message transfer system (MTS).
[MHS]
Message Format
o The MHS standard defines the format of a message.
o The body of the message corresponds to the material (like a letter) that goes
inside of the envelope of a conventional mailing.
o Every message can include the address (name) of the sender, the address
(name) of the recipient, the subject of the message, and a list of anyone other
than the primary recipient who is to receive a copy.
22.0 Application Layer: WWW and HTTP
22.1 WWW
The World Wide Web (WWW) is a repository of information linked together from
points all over the world. The WWW has a unique combination of flexibility,
portability, and user-friendly features that distinguish it from other services provided
by the Internet. The WWW project was initiated by CERN (European Laboratory for
Particle Physics) to create a system to handle distributed resources necessary for
scientific research.
Architecture
The WWW today is a distributed client/server service, in which a client using a browser
can access a service using a server. However, the service provided is distributed over
many locations called sites, as shown in Figure.
Each site holds one or more documents, referred to as Web pages. Each Web page can
contain a link to other pages in the same site or at other sites. The pages can be retrieved
and viewed by using browsers.
Let us go through the scenario shown in above Figure. The client needs to see some
information that it knows belongs to site A. It sends a request through its browser, a
program that is designed to fetch Web documents. The request, among other
information, includes the address of the site and the Web page, called the URL, which
we will discuss shortly. The server at site A finds the document and sends it to the
client. When the user views the document, she finds some references to other
documents, including a Web page at site B. The reference has the URL for the new site.
The user is also interested in seeing this document. The client sends another request to
the new site, and the new page is retrieved.
22.2 Other Terms to know
22.2.1 Client (Browser)
A variety of vendors offer commercial browsers that interpret and display a Web
document, and all use nearly the same architecture.
Each browser usually consists of three parts: a controller, client protocol, and
interpreters.
The controller receives input from the keyboard or the mouse and uses the client
programs to access the document. After the document has been accessed, the
controller uses one of the interpreters to display the document on the screen. The
client protocol can be one of the protocols described previously such as FTP or
HTTP (described later in the chapter). The interpreter can be HTML, Java, or
JavaScript, depending on the type of document. We discuss the use of these
interpreters, based on the document type, later in the chapter.
22.2.2 Server
The Web page is stored at the server. Each time a client request arrives, the
corresponding document is sent to the client. To improve efficiency, servers normally
store requested files in a cache in memory; memory is faster to access than disk. A
server can also become more efficient through multithreading or multiprocessing. In
this case, a server can answer more than one request at a time.
22.2.3 URL
A client that wants to access a Web page needs the address. To facilitate the access of
documents distributed throughout the world, HTTP uses locators. The uniform
resource locator (URL) is a standard for specifying any kind of information on the
Internet. The URL defines four things: protocol, host computer, port, and path.
The protocol is the client/server program used to retrieve the document. Many different
protocols can retrieve a document; among them are FTP and HTTP. The most common
today is HTTP.
The host is the computer on which the information is located, although the name of the
computer can be an alias. Web pages are usually stored in computers, and computers
are given alias names that usually begin with the characters "www". This is not
mandatory, however, as the host can be any name given to the computer that hosts the
Web page.
The URL can optionally contain the port number of the server. If the port is included, it
is inserted between the host and the path, and it is separated from the host by a colon.
Path is the pathname of the file where the information is located. Note that the path can
itself contain slashes that, in the UNIX operating system, separate the directories from
the subdirectories and files.
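The four parts can be pulled out of a URL with Python's standard urllib.parse module; the URL below is illustrative, with an explicit port inserted between the host and the path.

```python
from urllib.parse import urlparse

# An illustrative URL containing all four parts: protocol, host,
# optional port (after the colon), and path.
url = "http://www.example.com:8080/bin/images/image1.gif"
parts = urlparse(url)

protocol = parts.scheme     # "http"
host = parts.hostname       # "www.example.com"
port = parts.port           # 8080 (None when the port is omitted)
path = parts.path           # "/bin/images/image1.gif"
```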
22.2.4 Cookies
The World Wide Web was originally designed as a stateless entity. A client sends a
request; a server responds. Their relationship is over. The original design of WWW,
retrieving publicly available documents, exactly fits this purpose. Today the Web has
other functions that need to remember information about the client; some are listed here:
1. Some websites allow access only to registered clients.
2. An electronic store (e-commerce) needs to remember the items a shopper has selected.
3. A Web portal customizes the pages each user sees.
4. Advertising agencies track the browsing behavior of users.
The cookie mechanism was devised for these purposes. We now discuss the creation,
storage, and use of cookies in Web pages.
Creation and Storage of Cookies
The creation and storage of cookies depend on the implementation; however, the
principle is the same.
1. When a server receives a request from a client, it stores information about the client in
a file or a string. The information may include the domain name of the client, the
contents of the cookie (information the server has gathered about the client such as
name, registration number, and so on), a timestamp, and other information, depending
on the implementation.
2. The server includes the cookie in the response that it sends to the client.
3. When the client receives the response, the browser stores the cookie in the cookie
directory, which is sorted by the domain server name.
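The exchange in the three steps above can be sketched with Python's standard http.cookies module; the cookie name and value here are illustrative.

```python
from http.cookies import SimpleCookie

# Server side (step 2): build the Set-Cookie header included in the
# response sent to the client.
server_cookie = SimpleCookie()
server_cookie["reg_id"] = "12345"          # information about the client
header_line = server_cookie.output()        # a Set-Cookie: header line

# Client side (step 3 and later requests): the browser stores the
# cookie and sends it back unchanged; it never interprets the contents.
stored = SimpleCookie()
stored.load("reg_id=12345")
echoed = stored.output(header="Cookie:")    # what goes in the request
```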
Using Cookies
When a client sends a request to a server, the browser looks in the cookie directory to
see if it can find a cookie sent by that server. If found, the cookie is included in the
request. When the server receives the request, it knows that this is an old client, not a
new one. Note that the contents of the cookie are never read by the browser or disclosed
to the user. It is a cookie made by the server and eaten by the server. Now let us see how
a cookie is used for the four previously mentioned purposes:
1. The site that restricts access to registered clients only sends a cookie to the client
when the client registers for the first time. For any repeated access, only those clients
that send the appropriate cookie are allowed.
2. An electronic store (e-commerce) can use a cookie for its client shoppers. When a
client selects an item and inserts it into a cart, a cookie that contains information about
the item, such as its number and unit price, is sent to the browser. If the client selects a
second item, the cookie is updated with the new selection information. And so on.
When the client finishes shopping and wants to check out, the last cookie is retrieved
and the total charge is calculated.
3. A Web portal uses the cookie in a similar way. When a user selects her favorite pages,
a cookie is made and sent. If the site is accessed again, the cookie is sent to the server to
show what the client is looking for.
4. A cookie is also used by advertising agencies. An advertising agency can place
banner ads on some main website that is often visited by users. The advertising agency
supplies only a URL that gives the banner address instead of the banner itself. When a
user visits the main website and clicks on the icon of an advertised corporation, a
request is sent to the advertising agency. The advertising agency sends the banner, a
GIF file, for example, but it also includes a cookie with the ID of the user. Any future use
of the banner adds to the database that profiles the Web behavior of the user. The
advertising agency has compiled the interests of the user and can sell this information
to other parties. This use of cookies has made them very controversial. Hopefully, some
new regulations will be devised to preserve the privacy of users.
22.2.5 WEB DOCUMENTS
The documents in the WWW can be grouped into three broad categories: static,
dynamic, and active. The category is based on the time at which the contents of the
document are determined.
Static Documents
Static documents are fixed-content documents that are created and stored in a server.
The client can get only a copy of the document. In other words, the contents of the file
are determined when the file is created, not when it is used. Of course, the contents in
the server can be changed, but the user cannot change them. When a client accesses the
document, a copy of the document is sent. The user can then use a browsing program to
display the document.
Static document
Dynamic Documents
A dynamic document is created by a Web server whenever a browser requests the
document. When a request arrives, the Web server runs an application program or a
script that creates the dynamic document. The server returns the output of the program
or script as a response to the browser that requested the document. Because a fresh
document is created for each request, the contents of a dynamic document can vary
from one request to another. A very simple example of a dynamic document is the
retrieval of the time and date from a server. Time and date are kinds of information that
are dynamic in that they change from moment to moment. The client can ask the server
to run a program such as the date program in UNIX and send the result of the program
to the client.
22.2.6 HTML
Hypertext Markup Language (HTML) is a language for creating Web pages. The term
markup language comes from the book publishing industry. Before a book is typeset and
printed, a copy editor reads the manuscript and puts marks on it. These marks tell the
compositor how to format the text. For example, if the copy editor wants part of a line
to be printed in boldface, he or she draws a wavy line under that part. In the same way,
data for a Web page are formatted for interpretation by a browser.
Let us clarify the idea with an example. To make part of a text displayed in boldface
with HTML, we put beginning and ending boldface tags (marks) in the text, as shown
in Figure
Boldface tags
The two tags <B> and </B> are instructions for the browser. When the browser
sees these two marks, it knows that the text must be boldfaced. A markup
language such as HTML allows us to embed formatting instructions in the file
itself. The instructions are included with the text. In this way, any browser can
read the instructions and format the text according to the specific workstation.
One might ask why we do not use the formatting capabilities of word processors
to create and save formatted text. The answer is that different word processors
use different techniques or procedures for formatting text. For example, imagine
that a user creates formatted text on a Macintosh computer and stores it in a Web
page. Another user who is on an IBM computer would not be able to receive the
Web page because the two computers use different formatting procedures.
A Web page is made up of two parts: the head and the body. The head is the first
part of a Web page. The head contains the title of the page and other parameters
that the browser will use. The actual contents of a page are in the body, which
includes the text and the tags. Whereas the text is the actual information
contained in a page, the tags define the appearance of the document. Every
HTML tag is a name followed by an optional list of attributes, all enclosed
between less-than and greater-than symbols (< and >).
An attribute, if present, is followed by an equals sign and the value of the
attribute. Some tags can be used alone; others must be used in pairs. Those that
are used in pairs are called beginning and ending tags. The beginning tag can have
attributes and values and starts with the name of the tag. The ending tag cannot
have attributes or values but must have a slash before the name of the tag. The
browser makes a decision about the structure of the text based on the tags, which
are embedded into the text. Figure shows the format of a tag.
One commonly used tag category is the text formatting tags such as <B> and </B>,
which make the text bold; <I> and </I>, which make the text italic; and <U> and </U>,
which underline the text.
Another interesting tag category is the image tag. Nontextual information such as
digitized photos or graphic images is not a physical part of an HTML document. But we
can use an image tag to point to the file of a photo or image. The image tag defines the
address (URL) of the image to be retrieved. It also specifies how the image can be
inserted after retrieval. We can choose from several attributes. The most common are
SRC (source), which defines the source (address), and ALIGN, which defines the
alignment
of the image. The SRC attribute is required. Most browsers accept images in the GIF or
JPEG formats. For example, the following tag can retrieve an image stored as image1.gif
in the directory /bin/images:
<IMG SRC="/bin/images/image1.gif" ALIGN=MIDDLE>
A third interesting category is the hyperlink tag, which is needed to link documents
together. Any item (word, phrase, paragraph, or image) can refer to another document
through a mechanism called an anchor. The anchor is defined by <A ... > and </A> tags,
and the anchored item uses the URL to refer to another document. When the document
is displayed, the anchored item is underlined, blinking, or boldfaced. The user can click
on the anchored item to go to another document, which may or may not be stored on the
same server as the original document. The reference phrase is embedded between the
beginning and ending tags. The beginning tag can have several attributes, but the one
required is HREF (hyperlink reference), which defines the address (URL) of the linked
document. For example, the link to the author of a book can be
<A HREF="http://www.example.com/author">Author</A>
where the URL is illustrative. What appears in the text is the word Author, on which the
user can click to go to the author's Web page.
Common Gateway Interface (CGI)
The Common Gateway Interface (CGI) is a technology that creates and handles
dynamic documents. CGI is a set of standards that defines how a dynamic document is
written, how data are input to the program, and how the output result is used.
CGI is not a new language; instead, it allows programmers to use any of several
languages such as C, C++, Bourne Shell, Korn Shell, C Shell, Tcl, or Perl. The only thing
that CGI defines is a set of rules and terms that the programmer must follow.
The term common in CGI indicates that the standard defines a set of rules that is common
to any language or platform. The term gateway here means that a CGI program can be
used to access other resources such as databases, graphical packages, and so on. The
term interface here means that there is a set of predefined terms, variables, calls, and so
on that can be used in any CGI program. A CGI program in its simplest form is code
written in one of the languages supporting CGI. Any programmer who can encode a
sequence of thoughts in a program and knows the syntax of one of the abovementioned
languages can write a simple CGI program. Figure illustrates the steps in creating a
dynamic program using CGI technology.
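A minimal CGI program in the spirit of the date example can be sketched in Python. The names and output here are illustrative; what CGI actually fixes is the environment variables the program reads and the output format it writes: header lines, a blank line, then the body.

```python
#!/usr/bin/env python3
# A minimal CGI program: the server runs it and returns whatever it
# writes to standard output as the dynamic document.
import os
import time

def cgi_response():
    # CGI passes request details to the program in environment
    # variables; REMOTE_ADDR is the client's address.
    client = os.environ.get("REMOTE_ADDR", "unknown client")
    body = "<HTML><BODY>Date: %s (for %s)</BODY></HTML>" % (
        time.ctime(), client)
    # Output format fixed by CGI: headers, a blank line, then the body.
    return "Content-Type: text/html\r\n\r\n" + body

if __name__ == "__main__":
    print(cgi_response())
```

Because the script runs again on every request, each client sees a fresh date, which is exactly what makes the document dynamic.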
A few technologies have been involved in creating dynamic documents using scripts.
Among the most common are Hypertext Preprocessor (PHP), which uses the Perl
language; Java Server Pages (JSP), which uses the Java language for scripting; Active
Server Pages (ASP), a Microsoft product which uses Visual Basic language for scripting;
and ColdFusion, which embeds SQL database queries in the HTML document.
22.2.7 HTTP
The Hypertext Transfer Protocol (HTTP) is a protocol used mainly to access data on the
World Wide Web. HTTP functions as a combination of FTP and SMTP. It is similar to
FTP because it transfers files and uses the services of TCP. However, it is much simpler
than FTP because it uses only one TCP connection. There is no separate control
connection; only data are transferred between the client and the server. HTTP is like
SMTP because the data transferred between the client and the server look like SMTP
messages. In addition, the format of the messages is controlled by MIME-like headers.
Unlike SMTP, the HTTP messages are not destined to be read by humans; they are read
and interpreted by the HTTP server and HTTP client (browser). SMTP messages are
stored and forwarded, but HTTP messages are delivered immediately. The commands
from the client to the server are embedded in a request message. The contents of the
requested file or other information are embedded in a response message. HTTP uses the
services of TCP on well-known port 80.
HTTP Transaction
Figure illustrates the HTTP transaction between the client and server. Although
HTTP uses the services of TCP, HTTP itself is a stateless protocol. The client initializes
the transaction by sending a request message. The server replies by sending a response.
Messages
The formats of the request and response messages are similar; both are shown in Figure
A request message consists of a request line, a header, and sometimes a body. A
response message consists of a status line, a header, and sometimes a body.
Request and Status Lines
The first line in a request message is called a request line; the first line in a response
message is called a status line. There is one common field, as shown in Figure.
Request type. This field is used in the request message. In version 1.1 of HTTP, several
request types are defined. The request type is categorized into methods as defined in
Table
HTTP Response Format
This general format of the response message matches the previous example of a
response message. Let’s say a few additional words about status codes and their
phrases. The status code and associated phrase indicate the result of the request. Some
common status codes and associated phrases include:
• 200 OK: Request succeeded and the information is returned in the response.
• 301 Moved Permanently: Requested object has been permanently moved; the new
URL is specified in Location: header of the response message. The client software will
automatically retrieve the new URL.
• 400 Bad Request: This is a generic error code indicating that the request could not be
understood by the server.
• 404 Not Found: The requested document does not exist on this server.
• 505 HTTP Version Not Supported: The requested HTTP protocol version is not
supported by the server.
Example
Ex-1 This example retrieves a document. We use the GET method to retrieve an image
with the path /usr/bin/image1. The request line shows the method (GET), the URL, and
the HTTP version (1.1). The header has two lines that show that the client can accept
images in the GIF or JPEG format. The request does not have a body. The response
message contains the status line and four lines of header. The header lines define the
date, server, MIME version, and length of the document. The body of the document
follows the header
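The request message of Ex-1 can be written out byte for byte as a sketch, so the parts are visible: a request line, two Accept header lines, and the blank line that ends the message (no body).

```python
# The Ex-1 request assembled by hand. Lines end in CRLF, as HTTP
# requires; the blank line marks the end of the header section.
request = (
    "GET /usr/bin/image1 HTTP/1.1\r\n"   # request line
    "Accept: image/gif\r\n"              # client accepts GIF images
    "Accept: image/jpeg\r\n"             # client accepts JPEG images
    "\r\n"                               # blank line; request has no body
)
```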
Ex-2
In this example, the client wants to send data to the server. We use the POST method.
The request line shows the method (POST), URL, and HTTP version (1.1). There are
four lines of headers. The request body contains the input information. The response
message contains the status line and four lines of headers. The created document, which
is a CGI document, is included as the body
Ex-3
HTTP uses ASCII characters. A client can connect directly to a server by using TELNET
to log into port 80. The next three lines show that the connection is successful. We
then type three lines. The first shows the request line (GET method), the second is the
header (defining the host), the third is a blank, terminating the request. The server
response is seven lines starting with the status line. The blank line at the end terminates
the server response. The file of 14,230 lines is received after the blank line (not shown
here). The last line is the output by the client.
$ telnet www.mhhe.com 80
Trying 198.45.24.104 ...
Connected to www.mhhe.com (198.45.24.104).
Escape character is '^]'.
GET /engcs/compsci/forouzan HTTP/1.1
From: forouzanbehrouz@fbda.edu

HTTP/1.1 200 OK
Date: Thu, 28 Oct 2004 16:27:46 GMT
Server: Apache/1.3.9 (Unix) ApacheJServ/1.1.2 PHP/4.1.2 PHP/3.0.18
MIME-version: 1.0
Content-Type: text/html
Last-modified: Friday, 15-Oct-04 02:11:31 GMT
Content-length: 14230
Connection closed by foreign host.
Nonpersistent Connection
In a nonpersistent connection, one TCP connection is made for each request/response.
The following lists the steps in this strategy:
1) The client opens a TCP connection and sends a request.
2) The server sends the response and closes the connection.
3) The client reads the data until it encounters an end-of-file marker; it then closes the connection.
Persistent Connection
HTTP version 1.1 specifies a persistent connection by default. In a persistent connection,
the server leaves the connection open for more requests after sending a response. The
server can close the connection at the request of a client or if a time-out has been
reached. The sender usually sends the length of the data with each response. However,
there are some occasions when the sender does not know the length of the data. This is
the case when a document is created dynamically or actively. In these cases, the server
informs the client that the length is not known and closes the connection after sending
the data so the client knows that the end of the data has been reached.
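The persistent behavior described above can be demonstrated with Python's standard library: two requests reuse one TCP connection because the server announces HTTP/1.1 and sends a known Content-Length. A minimal sketch, where the throwaway local server and the "/a", "/b" paths are illustrative:

```python
# Sketch of a persistent (HTTP/1.1) connection: two requests travel
# over a single TCP connection to a throwaway local server running
# in a background thread.
import threading
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, HTTPServer

class EchoPath(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"          # persistent by default
    def do_GET(self):
        body = self.path.encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))  # length is known
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):          # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), EchoPath)   # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = HTTPConnection("127.0.0.1", server.server_port)
replies = []
for path in ("/a", "/b"):                  # two requests, one connection
    conn.request("GET", path)
    replies.append(conn.getresponse().read())
conn.close()
server.shutdown()
print(replies)                             # [b'/a', b'/b']
```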
Proxy Server
HTTP supports proxy servers. A proxy server is a computer that keeps copies of
responses to recent requests. The HTTP client sends a request to the proxy server. The
proxy server checks its cache. If the response is not stored in the cache, the proxy server
sends the request to the corresponding server. Incoming responses are sent to the proxy
server and stored for future requests from other clients. The proxy server reduces the
load on the original server, decreases traffic, and improves latency. However, to use the
proxy server, the client must be configured to access the proxy instead of the target
server.
A Web cache—also called a proxy server—is a network entity that satisfies HTTP
requests on the behalf of an origin Web server. The Web cache has its own disk storage
and keeps copies of recently requested objects in this storage.
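The cache-lookup logic a proxy follows can be sketched in a few lines. This is a toy model, not a real proxy: fetch_from_origin is a hypothetical stand-in for an upstream HTTP request, and the URL is illustrative.

```python
# Toy sketch of proxy-server caching: a response is fetched from the
# origin server only on a cache miss; repeat requests are served from
# the cache, reducing load on the origin.
cache = {}               # URL -> stored response
origin_hits = 0          # how many times the origin server was contacted

def fetch_from_origin(url):
    global origin_hits
    origin_hits += 1
    return "response for " + url          # pretend origin response

def proxy_get(url):
    if url not in cache:                  # miss: ask the origin server
        cache[url] = fetch_from_origin(url)
    return cache[url]                     # hit: origin is not contacted

first = proxy_get("http://example.com/page")
second = proxy_get("http://example.com/page")  # served from the cache
print(origin_hits)                        # 1 (origin contacted only once)
```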
Socket Programming with TCP
Unlike UDP, TCP is a connection-oriented protocol. This means that before the client
and server can start to send data to each other, they first need to handshake and
establish a TCP connection. One end of the TCP connection is attached to the client
socket and the other end is attached to a server socket. When creating the TCP
connection, we associate with it the client socket address (IP address and port number)
and the server socket address (IP address and port number). With the TCP connection
established, when one side wants to send data to the other side, it just drops the data
into the TCP connection via its socket. This is different from UDP, for which the server
must attach a destination address to the packet before dropping it into the socket.
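The handshake-then-send pattern described above can be sketched with Python's socket module. The loopback address and the greeting bytes are illustrative; port 0 asks the OS for any free port.

```python
# Sketch of TCP socket programming: the server accepts a connection
# (completing the handshake), then either side simply drops bytes into
# the connection via its socket, with no destination address attached.
import socket
import threading

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)

def serve_once():
    conn, addr = server.accept()        # server end of the connection
    data = conn.recv(1024)              # read what the client sent
    conn.sendall(data.upper())          # reply on the same connection
    conn.close()

threading.Thread(target=serve_once, daemon=True).start()

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())    # client end of the handshake
client.sendall(b"hello")                # no destination address needed
reply = client.recv(1024)
client.close()
print(reply)                            # b'HELLO'
```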
21.3 Other Application Layer Protocols
21.3.1 TELNET (TErminal NETwork):
The main task of the internet and its TCP/IP protocol suite is to provide services
for users.
For example, users want to be able to run different application programs at a
remote site and create results that can be transferred to their local site.
One way to satisfy these demands is to create a different client-server application
program (such as FTP, TFTP, or e-mail with SMTP) for each desired service.
A better solution is a general-purpose client-server program that lets a user access
any application program on a remote computer; in other words, one that allows the
user to log on to a remote computer.
After logging on, a user can use the services available on the remote computer and
transfer the results back to the local computer.
TELNET is a popular client-server application program that provides these
services.
TELNET enables the establishment of a connection to a remote system in such a
way that the local terminal appears to be a terminal at the remote system.
21.3.3 DHCP (Dynamic Host configuration protocol):
Each computer that is attached to a TCP/IP internet must know the following
information:
o Its IP address.
o Its subnet mask.
o The IP address of a router.
o The IP address of a name server.
The DHCP (Dynamic host configuration protocol) is a client-server protocol
designed to provide the four previously mentioned pieces of information for a
diskless computer or a computer that is booted for the first time.
DHCP is an extension to BOOTP (Bootstrap protocol).
BOOTP is a static configuration protocol, whereas DHCP is a dynamic configuration
protocol.
DHCP is also needed when a host moves from network to network or is connected
to and disconnected from a network.
DHCP provides temporary IP addresses for a limited period of time.
A BOOTP configuration server assigns an IP address to each client from a pool of
addresses.
BOOTP uses the User Datagram Protocol (UDP) as a transport on IPv4 networks
only.
The Dynamic Host Configuration Protocol (DHCP) is a more advanced protocol
for the same purpose and has superseded the use of BOOTP. Most DHCP servers
also function as BOOTP servers.
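The pool-based, time-limited allocation described above can be sketched as a toy allocator. This is not a real DHCP implementation (real DHCP exchanges UDP messages such as DISCOVER/OFFER/REQUEST/ACK); the addresses, client names, and lease length are illustrative.

```python
# Toy sketch of DHCP-style dynamic allocation: addresses are leased
# from a pool for a limited time, and a released address returns to
# the pool for reuse by another host.
class LeasePool:
    def __init__(self, addresses, lease_seconds):
        self.free = list(addresses)       # pool of unassigned addresses
        self.lease_seconds = lease_seconds
        self.leases = {}                  # client id -> leased address

    def request(self, client_id):
        if client_id in self.leases:      # renewing an existing lease
            return self.leases[client_id]
        addr = self.free.pop(0)           # hand out the next free address
        self.leases[client_id] = addr
        return addr

    def release(self, client_id):         # lease expired or host left
        self.free.append(self.leases.pop(client_id))

pool = LeasePool(["192.168.1.10"], lease_seconds=3600)
a = pool.request("laptop")
pool.release("laptop")                    # address goes back to the pool
b = pool.request("phone")                 # released address is reused
print(a == b)                             # True
```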
22.0 Error Correction Code: Hamming Code
Error detection allows the receiver to know that an error has occurred, but not to
locate the erroneous bit within a particular data unit.
Error Correction
Error correction can be done in two ways:
1) Whenever an error is detected, ask the sender to retransmit the entire data unit.
2) Whenever an error is detected, correct it at the receiver using a technique such as
forward error correction or burst-error correction.
To correct m data bits, the number of redundancy bits r must satisfy 2^r >= m + r + 1.
Hamming code
Example:
Redundancy bit calculation
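A minimal sketch of the idea, using the common Hamming(7,4) layout (m = 4 data bits, r = 3 redundancy bits, since 2^3 >= 4 + 3 + 1). The parity bits sit at positions 1, 2, and 4 (the powers of two); the data bits chosen below are illustrative.

```python
# Sketch of a Hamming(7,4) code: encode 4 data bits with 3 redundancy
# bits, then locate and correct a single-bit error via the syndrome.
def hamming_encode(d):                      # d: list of 4 data bits
    code = [0] * 8                          # index 0 unused; positions 1..7
    code[3], code[5], code[6], code[7] = d
    code[1] = code[3] ^ code[5] ^ code[7]   # parity over positions with bit 1 set
    code[2] = code[3] ^ code[6] ^ code[7]   # parity over positions with bit 2 set
    code[4] = code[5] ^ code[6] ^ code[7]   # parity over positions with bit 4 set
    return code[1:]

def hamming_correct(code):                  # code: list of 7 received bits
    c = [0] + list(code)                    # restore 1-based positions
    s1 = c[1] ^ c[3] ^ c[5] ^ c[7]
    s2 = c[2] ^ c[3] ^ c[6] ^ c[7]
    s4 = c[4] ^ c[5] ^ c[6] ^ c[7]
    pos = s1 + 2 * s2 + 4 * s4              # syndrome = position of the bad bit
    if pos:
        c[pos] ^= 1                         # flip the erroneous bit back
    return c[1:]

code = hamming_encode([1, 0, 1, 1])
code[4] ^= 1                                # corrupt one bit in transit
print(hamming_correct(code) == hamming_encode([1, 0, 1, 1]))  # True
```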