
Computer Network (2140709)

Hand Book
Year: 2015

Computer Engineering Department

Prepared By: Prof. Ajay N. Upadhyaya,


Asst. Prof. CE Dept, LJIET
Index
1.0 Introduction to Computer Network 3
2.0 Basic Concepts of Network 9
3.0 Transmission Media 19
4.0 OSI Model & TCP/IP Model 34
5.0 Transmission Signal Delay 43
6.0 Multiplexing 47
7.0 Error Correction and Error Detection 51
8.0 Ethernet 59
9.0 Interconnecting Devices 63
10.0 Framing 69
11.0 Multiple Access Protocols 71
12.0 Fragmentation and Switching 82
13.0 Routing Algorithm 87
14.0 IP Datagram 99
15.0 Subnetting 110
16.0 Dynamic Routing Protocols 117
17.0 Transport Layer 121
18.0 Sliding Window Protocol 146
19.0 Application Layer Protocols: DNS 155
20.0 Application Layer Protocols: Email System 162
21.0 Application Layer: WWW and HTTP 170
22.0 Error Correction Code: Hamming Code 190

1.0 Introduction to computer Network
1.1 What is Network?
 In the simplest terms, a network consists of two or more computers that are
connected together to share information.
 Networking may be defined as two or more computers linked together, either
physically through a cable or through a wireless device. This link allows the
computers to share files, printers and even Internet connections.

1.2 Data Communication:-


It is an exchange of data between two devices via some form of transmission
medium.
Effectiveness of a data Communication System depends on three fundamental
Characteristics:
 Delivery
The data sent by the Source must be delivered to the correct Destination.
 Accuracy
The data sent by the Source must be delivered to the Destination in accurate
form, without alteration.
 Timeliness
Data must be delivered from Source to Destination in a timely manner.

1.3 Components of networks:

A data communication System is made of five components.


1) Message
2) Source
3) Destination
4) Medium
5) Protocol
1) Message
The Message is the information/data to be communicated.
It consists of text, numbers, pictures, sound, or video, or any combination of
these.
2) Source
The Source/Sender is the device that sends the data message.

It can be a Computer, Workstation, Telephone handset, television and so on.
3) Destination
The Destination/Receiver is the device that receives the message.
It can be a Computer, Workstation, Telephone handset, television and so on.

4) Medium
The transmission medium is the physical path by which a message travels from
Sender to Receiver.
It can consist of twisted pair wire, Coaxial cable, Fibre optics cable, Laser, or
Radio waves.
5) Protocol
A protocol is a formal description of a set of rules and conventions that govern a
particular aspect of how devices on a network communicate.
Protocols determine the format, timing, sequencing, and error control in data
communication.
Without protocols, a computer cannot reconstruct the stream of incoming bits
from another computer into its original format.

Source sends data to destination

Communication between Source and Destination

1.4 Network criteria:


 Performance
 Number of Users.
 Type of Transmission Medium
 Hardware
 Software
 Reliability
 Frequency of Failure
 Recovery Time
 Catastrophe
 Security
 Unauthorized Access
 Viruses

1.5 End User Devices:-


 End-user devices that provide users with a connection to the network are also
referred to as hosts.
 These devices allow users to share, create, and obtain information. The host
devices can exist without a network, but without the network the host
capabilities are greatly reduced. NICs are used to physically connect host devices
to the network media.
 They use this connection to send e-mails, print reports, scan pictures, or access
databases.
 A NIC is a printed circuit board that fits into the expansion slot of a bus on a
computer motherboard. It can also be a peripheral device.
 NICs are sometimes called network adapters.

End User Device

1.6 Basic Types of Network?

Within small organizations, the two most prevalent types of networks are “peer-to-
peer” networks and “client/server” networks.

 Peer-to-Peer
 Peer-to-peer networks are the simplest and least expensive type of networks
available and are most suitable for organizations with less than 5 computers.
 A peer-to-peer network will allow an organization to share files, printers and
even modems and Internet connections.
 In general, a peer-to-peer network does not have a central server and consists of
2 or more computers connecting through a device called a "Hub."
 The hub allows multiple computers and devices to connect via network cable.
 While simpler and less expensive, peer-to-peer networks do not offer many of
the benefits of client/server networks.
 As an organization and its network grow, the administration of these peer-to-
peer networks becomes more difficult and expensive.

 Client/Server
 Client/server networks are networks that connect individual computers, known
as "clients," and one or more central computers, called "servers."
 There are many types of servers, the most common being a file server.

 In a client/server network, the file server acts as a shared resource – a repository
for files, such as documents, spreadsheets, databases, etc.
 Instead of storing these files on each individual machine, the file server permits
storage on one central computer.
 In addition to the obvious advantage of reducing the possibility of multiple
iterations of a single file, it allows the organization to have one centralized point
from which to backup its files.

1.7 Network classification


Networks are classified according to their different characteristics. A given network can
be characterized by its:
 Size:
 The geographic size of the network
 Regarding size, networks are generally lumped into three categories, Local
area networks (LANs), Metropolitan area network (MANs) and Wide area
networks (WANs).
 Security and access:
 Who can access the network and how access is controlled
 Protocol:
 The rules of communication in use on it (for example, TCP/IP, NetBEUI, or
AppleTalk)
 As stated above, the protocol of a network is the set of guidelines for inter-
computer communication. Two computers with different protocols won't be
able to communicate with one another.
 While many computers have the ability to interpret multiple protocols, it is
important to understand the different protocols available before deciding on
one that is appropriate
 Hardware:
 The types of physical links and hardware that connect the network

 While some theoretical people would claim that the hardware involved in a
network isn't extremely important, they probably haven't ever actually dealt
with setting one up.
 Hardware is important. While in theory, every hub should send and receive
signals perfectly, that isn't always the case. And the problem is that if you ask
two network administrators what hub they recommend, you will probably get
two entirely different, yet passionate answers. From picking the cable (optical
fiber, coaxial, or copper), to choosing a server, you should find the most
suitable hardware for your needs.
1.8 Protocols and Standards
 A protocol is a set of rules that governs data communication.
 A protocol defines what is communicated, how it is communicated, and when
it is communicated.
 The Key Elements of a Protocol are:
 Syntax
 Semantics
 Timing

1.9 Standards:-
A Standard provides a model for Development that makes it possible for a product to
work regardless of the individual manufacturer.
Standards Creation Committees
 ISO (International Standards Organization)
 ITU-T (International telecommunication Union-Telecommunication Standards
Sector)
 ANSI (American National Standards Institute)
 IEEE (Institute of Electrical and Electronics Engineers)
 EIA (Electronics Industries Association)
 Telcordia

2.0 : Basic concepts of Network

2.1 Line Configuration:


 Line configuration refers to the way two or more communication devices are
attached to each other via a communication link for transferring data.
 There are two possible line configurations:
 Point to Point
 Multipoint (Multidrop)

2.2 Network topologies:


Network topology defines the structure of the network.

 Bus Topology:-

 Uses a single backbone cable.


 All the Nodes connect directly to this backbone.
 Advantage:-
 Easy to Construct
 Cost is Low
 Disadvantage:-
 Traffic is High
 If the backbone link breaks, then no node can communicate with the others.
 Difficult to add new devices.
 Ring topology:-

 Connects one node to the next and the last node to the first.
 Uses a token.
 The token is passed from one node to the next.
 Only the node holding the token is allowed to send data.
 Advantage:- If a link is broken, the affected node can still be reached from the other side of the ring.
 Disadvantage:- Unidirectional traffic

 Star topology:-

 Each node is directly connected to the central device.


 We can use a hub, switch, or router as the central device.
 Advantage:-
 If one link is broken then only that particular node is
Unreachable.
 Less Collision.
 Most Efficient.
 Simple and easy to identify Fault.
 Disadvantage:-
 Traffic is higher at the central point.
 All nodes depend on the central device.

 Tree Topology:-
 It’s also known as hierarchical topology.
 All the nodes are connected in a tree structure.
 Advantage:-
 Simple and easy to identify Fault.
 Disadvantage:-
 If one link is broken, the nodes in that sub-branch cannot send data to the
other side.

 Mesh topology:-

 Two type of Mesh Topology:-


 Fully Connected
 Partially Connected
 As seen in the graphic, each host has its own connections to all other hosts.

Fully Connected Partially Connected


 Every node has multiple paths to reach any particular location.
 Advantage:-
 Reachability is high.
 Provides a high degree of protection (redundancy).
 Used in critical installations such as nuclear power plants.
 Disadvantage:-
 Cost is high.
 Very Complex to Build.
 Maintenance is also high.
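As a small illustrative sketch (not part of the original notes), the cabling cost of a fully
connected mesh can be computed directly: n nodes need n(n-1)/2 dedicated links and n-1 ports
per node, which is why cost and complexity are listed as disadvantages above.

# Sketch: link count for a fully connected mesh (assumes one dedicated
# point-to-point duplex link between every pair of nodes).
def mesh_links(n):
    return n * (n - 1) // 2

for n in (4, 8, 16):
    print(n, "nodes ->", mesh_links(n), "links,", n - 1, "ports per node")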

2.3 Network Categories/Classification:-
1) Local-area networks (LANs)
2) Metropolitan area network (MANs)
3) Wide area networks (WANs).
2.3.1 Local-area networks (LANs)
 LANs consist of the following components:
 Computers
 Network interface cards
 Peripheral devices
 Networking media
 Network devices
 LANs allow businesses to locally share computer files and printers efficiently
and make internal communications possible.
 A good example of this technology is e-mail. LANs manage data, local
communications, and computing equipment.
 Some common LAN technologies include the following:
 Ethernet
 Token Ring
 FDDI

LAN Network
2.3.2 Metropolitan area network (MANs)
 A MAN usually consists of two or more LANs in a common geographic area.
 For example, a bank with multiple branches may utilize a MAN.
 Typically, a service provider is used to connect two or more LAN sites using
private communication lines or optical services.
 A MAN can also be created using wireless bridge technology by beaming
signals across public areas.
 Some common MAN technologies include the following:
o ATM
o SMDS (Switched Multimegabit Data Service)

MAN Network

2.3.3 Wide area networks (WANs).
 WANs interconnect LANs, which then provide access to computers or file servers
in other locations.
 Because WANs connect user networks over a large geographical area, they make
it possible for businesses to communicate across great distances.
 WANs allow computers, printers, and other devices on a LAN to be shared with
distant locations.
 WANs provide instant communications across large geographic areas.
 Collaboration software provides access to real-time information and resources and
allows meetings to be held remotely.
 WANs have created a new class of workers called telecommuters. These people
never have to leave their homes to go to work.
 WANs are designed to do the following:
 Operate over a large and geographically separated area
 Allow users to have real-time communication capabilities with other users
 Provide full-time remote resources connected to local services
 Provide e-mail, Internet, file transfer, and e-commerce services
 Some common WAN technologies include the following:
 Integrated Services Digital Network (ISDN)
 Digital subscriber line (DSL)
 Frame Relay
 T1, E1, T3, and E3
 Synchronous Optical Network (SONET)

WAN Network

2.4 Transmission modes:-

Data flow :-
Data flow is the flow of data between 2 points. The direction of the data flow can be
described as:

1) Simplex,
2) Half-Duplex,
3) Full-Duplex

1) Simplex:-
 Data flows in only one direction on the data communication line (medium).
 Examples are Radio and Television broadcasts. They go from the TV station to
your home television.

 Simplex is one direction.


 A good example would be your keyboard to your CPU. The CPU never needs to
send characters to the keyboard, but the keyboard always sends characters to the
CPU.
 Computers almost always send characters to printers, but printers usually never
send characters back to computers (there are exceptions; some printers do talk
back).
 Simplex requires only one lane (in the case of serial).

2) Half-Duplex:-
 Data flows in both directions but only one direction at a time on the data
communication line.
 Ex. Conversation on walkie-talkies is a half-duplex data flow. Each person takes
turns talking. If both talk at once - nothing occurs!
 Bi-directional but only 1 direction at a time!

 Half-Duplex is like the dreaded "one lane" road you may have run into at
construction sites.
 Only one direction will be allowed through at a time. Railroads have to deal with
this scenario more often since it's cheaper to lay a single track.
 A dispatcher will hold a train up at one end of the single track until a train going
the other direction goes through.

3) Full-Duplex:
 Data flows in both directions simultaneously. Modems are configured to flow
data in both directions.
 Bi-directional both directions simultaneously!

 Full-Duplex is like the ordinary two-lane highway.


 In some cases, where traffic is heavy enough, a railroad will decide to lay a
double track to allow trains to pass in both directions.
 In communications, this is most common with networking.

2.5 Applications of DCN:-


The following lists general applications of a data communication network:
 Electronic Mail (e-mail or Email) sending mail. E-mail is the forwarding of
electronic files to an electronic post office for the recipient to pick up.
 Videotext is the capability of having a two-way transmission of picture and sound.
Examples include games like Doom and Hearts, distance education lectures, etc.
 Groupware is a newer network application; it allows user groups to share
documents, schedules, databases, etc. (e.g., Lotus Notes).
 Teleconferencing allows people in different regions to "attend" meetings using
telephone lines.

 Telecommuting allows employees to perform office work at home by "Remote
Access" to the network.
 Automated Banking Machines allow banking transactions to be performed
everywhere: at grocery stores, Drive-in machines etc..
 Information Service Providers: provide connections to the Internet and other
information services. Examples are CompuServe, Genie, Prodigy, America On-
Line (AOL), etc...
 Electronic Bulletin Boards (BBS - Bulletin Board Services) are dialup connections
(use a modem and phone lines) that offer a range of services for a fee.
 Value Added Networks are common carriers such as AGT, Bell Canada, etc.
(these can be private or public companies) that provide additional leased line
connections to their customers. These can be Frame Relay, ATM (Asynchronous
Transfer Mode), X.25, etc. The leased line is the Value Added Network.

2.6 Advantages of Network:


 Connectivity and Communication: Networks connect computers and the users
of those computers. Individuals within a building or work group can be
connected into local area networks (LANs); LANs in distant locations can be
interconnected into larger wide area networks (WANs). Once connected, it is
possible for network users to communicate with each other using technologies
such as electronic mail. This makes the transmission of business (or non-
business) information easier, more efficient and less expensive than it would be
without the network.
 Data Sharing: One of the most important uses of networking is to allow the
sharing of data. Before networking was common, an accounting employee who
wanted to prepare a report for her manager would have to produce it on her PC,
put it on a floppy disk, and then walk it over to the manager, who would transfer
the data to his or her own PC's hard disk.
 Hardware Sharing: Networks facilitate the sharing of hardware devices. For
example, instead of giving each of 10 employees in a department an expensive
color printer, one printer can be placed on the network for everyone to share.
 Internet Access: The Internet is itself an enormous network, so whenever you
access the Internet, you are using a network. The significance of the Internet on
modern society is hard to exaggerate, especially for those of us in technical
fields.
 Internet Access Sharing: Small computer networks allow multiple users to share
a single Internet connection. Special hardware devices allow the bandwidth of
the connection to be easily allocated to various individuals as they need it, and
permit an organization to purchase one high-speed connection instead of many
slower ones.
 Data Security and Management: In a business environment, a network allows
the administrators to much better manage the company's critical data. Instead of
having this data spread over dozens or even hundreds of small computers in a
haphazard fashion as their users create it, data can be centralized on shared
servers. This makes it easy for everyone to find the data, makes it possible for the
administrators to ensure that the data is regularly backed up, and also allows for
the implementation of security measures to control who can read or change
various pieces of critical information.
 Performance Enhancement and Balancing: Under some circumstances, a
network can be used to enhance the overall performance of some applications by
distributing the computation tasks to various computers on the network.
 Entertainment: Networks facilitate many types of games and entertainment. The
Internet itself offers many sources of entertainment, of course. In addition, many
multi-player games exist that operate over a local area network. Many home
networks are set up for this reason, and gaming across wide area networks
(including the Internet) has also become quite popular.

2.7 Disadvantages of Network:


 If a network file server develops a fault, then users may not be able to run application
programs
 A fault on the network can cause users to lose data (if the files being worked upon are
not saved)
 If the network stops operating, then it may not be possible to access various resources
 Users' work throughput becomes dependent upon the network and the skill of the systems
manager
 It is difficult to make the system secure from hackers, novices or industrial espionage
 Decisions on resource planning tend to become centralized, for example, what word
processor is used, what printers are bought, etc.
 Networks that have grown with little thought can be inefficient in the long term.
 As traffic increases on a network, the performance degrades unless it is designed
properly
 Resources may be located too far away from some users
 The larger the network becomes, the more difficult it is to manage.

3.0 : Transmission media
3.1 Introduction
Data is represented by computers and other telecommunication devices using
signals. Signals are transmitted in the form of electromagnetic energy from one
device to another. Electromagnetic signals travel through vacuum, air or other
transmission media to get from one point to another (from source to
receiver).
Electromagnetic energy (which includes electrical and magnetic fields) includes power,
voice, visible light, radio waves, ultraviolet light, gamma rays, etc.
The transmission medium is the means through which we send our data from one place
to another. The first layer (physical layer) of the OSI seven-layer model is dedicated
to the transmission media.

3.2 Different Types of Transmission media:

 There are two types of transmission media


o Guided – copper wires, fiber optic cable
o Unguided – Wireless (Radio Frequency / Microwave)
 Information is transmitted over:
o Copper wire by varying the voltage or current time
o Fiber optic cable by pulsing light on / off in a fiber optic cable over time
o Radio waves or Microwaves by varying the frequency or amplitude over
time
 Guided transmission basics
o To transmit a single bit down a copper wire, we must send some electrical
signal having two discrete states to represent 0 and 1
o Examples:
Voltage: +5 V = 1, 0 V = 0
Frequency: 980 Hz = 1, 1180 Hz = 0

3.3 Guided transmission media


 Guided transmission is where the signal (information or data) is sent through
some sort of cable, usually copper or optical fiber.
 There are many different types of cabling:
 Twisted Pair
 Coaxial Cable (coax)
 Fiber Optic Cable

3.3.1 Twisted Pair:

 This consists of two or more insulated wires twisted together in a shape


similar to a helix.
 Uses metallic (copper) conductors.
 The wires are twisted around each other to reduce the amount of external
interference.
 It consists of two copper conductors, each with its own colored plastic
insulation.
 This cable can be used at speeds of several Mb/s for a few kilometers.
 Used for telephone lines and lab networks.

Twisted pair Cable

STP & UTP

 UTP connector

 Categories of unshielded twisted-pair(UTP) cables

 Advantages of UTP
o Cost is less

o Easy to use
o Easy to install
o Flexible
 UTP is used in Ethernet and Token Ring.
 STP has a metal foil or shield covering the pairs.
 Crosstalk (the effect of one channel on another channel) is less in STP.
 STP has the same considerations as UTP.
 The shield must be connected to ground.
 Disadvantage of STP: cost is higher.

3.3.2 Coaxial Cable (coax)

 This consists of a copper cable inside a layer of insulating material.


 The insulating material is then inside a braided outer conductor.
 A layer of plastic is on the outermost layer.
 This type of cable was commonly used in the telephone system but has since
been replaced by fiber optics on longer routes
 This cable has also been used for Cable TV.

 Categories of coaxial cables

RG stands for Radio Government. Each cable is defined by an RG rating.


RG-8 and RG-9 are used in Thick Ethernet.
The RG number denotes a unique set of physical specifications, including the wire
gauge of the inner conductor and the impedance.
 Connector of coaxial cables

The most common connector is the barrel connector.
The most popular type is the BNC (Bayonet Network Connector).
Two other types are T-connectors and terminators.
T-connectors and terminators are used in bus topology.

 Advantages:-
o Easy to Install.
o Inexpensive installation.
o It is better for Higher Distance at Higher speed than twisted pair.
o Excellent noise immunity.

 Disadvantage:-
o High Cost
o Harder to work with

3.3.3 Fiber Optic Cable:


 Components of Fiber Optics:-
o Light Source
o Transmission Medium
o Light Detector

• This consists of a central glass core, surrounded by a glass cladding of
lower refractive index, so that the light stays in the core (using Total
Internal Reflection)
• outside is covered with plastic jacket
• Many fibers may be bundled together surrounded by another plastic
cover

 Refraction:-
When light travels from one medium to another, its speed and direction change;
this change is called refraction.
I - Angle of Incidence.
R - Angle of Refraction.

 Critical Angle:-
At some point, increasing the angle of incidence results in a refracted angle of 90
degrees, with the refracted beam lying along the horizontal. The incident angle at
this point is known as the critical angle.
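As a hedged illustration (the refractive indices below are assumed example values, not taken
from these notes), the critical angle follows from Snell's law: n1 sin(theta_c) = n2 sin(90),
so theta_c = arcsin(n2/n1).

import math

# Sketch: critical angle at a core/cladding boundary, theta_c = arcsin(n2 / n1).
n_core = 1.48       # assumed refractive index of the glass core
n_cladding = 1.46   # assumed (lower) refractive index of the cladding

theta_c = math.degrees(math.asin(n_cladding / n_core))
print("Critical angle = %.1f degrees" % theta_c)   # roughly 80.6 degrees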

 Reflection:-
When the angle of incidence becomes greater than the critical angle, a new
phenomenon occurs, called (total internal) reflection.
 Light traveling of fiber optic cable
o The source of light is usually a Light Emitting Diode (LED) or a LASER. The
light source is placed at one end of the optical fiber.
o The detector, which is placed at the other end of the fiber, is usually a Photo
Diode and it generates an electrical pulse when light falls on it.
o Hence by attaching a light source on one end of an optical fiber and a detector at
the other end, we have a unidirectional data transmission system (Simplex)
o The light source accepts an electrical signal, converts it, and transmits it as
light pulses
o The detector at the far end reconverts the light pulses into an electrical signal to
be then interpreted as a 1 or a 0.
o This limits the data rate to about 1 Gb/sec (1 x 10^9 bits/sec)

 Propagations Mode:-

 Single Mode:-
 A mono-mode (or single-mode) fiber is one that allows essentially a single
mode (ray path) of light to pass down it.
 Only light entering at a single angle passes through.
 Superior performance
 High cost
 High speed
 Long distance (up to 100km)

 Multimode Fiber
 Each light ray is said to have a different mode, so a fiber that allows a
lot of rays to travel through it is called a multimode fiber.
 Variety of angles of light will reflect and propagate.
 Low cost
 Low speed

 Less distance(up to 2km)

o Stepped Index Fiber:


• This is where the glass cladding has a lower refractive index than
the glass core. The refractive index of the glass core does not
change over the length of the optical fiber
o Graded Index Fiber:
• This is where the glass cladding has a lower refractive index than
the glass core. The refractive index of the glass core changes as you
move down the glass core.
• The light rays are redirected towards the central axis of the core as
they travel through the fiber.

 Two different light sources – both emit light when voltage applied
o LED – Light Emitting Diode – less costly, longer life
o ILD - Injection Laser Diode – greater data rate

 Advantages of Fiber Optic over Copper Cable


o Fiber can handle much higher data rates than copper(More information
can be sent in one second using fiber)
o Fiber has low loss of signal power (attenuation), so repeaters are needed
every 100km rather than every 5km for copper
o Fiber is not affected by power surges, electromagnetic interference or
power failure, or corrosive chemicals in the air
o Fibers are difficult to tap and therefore excellent for security
o Fibers are thin and lightweight, allowing more cables to fit into a given
area.
o Noise Resistance.
o 1000 twisted-pair cables 1 km long weigh about 800 kg, whereas
2 optical fiber cables 1 km long weigh only about 100 kg and allow transfer of more data

 Disadvantages of Fiber Optic over Copper Cable


o Fiber technology is relatively new and certain new skills are required in
handling it
o Optical transmission in a fiber is one way only (Simplex) – if you want
two way communication, then you must use two fibers or else use two
frequency bands on the one fiber
o Fiber optic cables and network interface cards to connect a computer to
the fiber are an order of magnitude more expensive than their
corresponding copper cable equivalents
o Cost is high
o Installation and maintenance are difficult.
o Higher Bandwidth.

 Fiber cable Connector

3.4 Unguided transmission media

Introduction

Information is usually transmitted by either radio or microwave transmission.
Unguided media transport electromagnetic waves without using a physical conductor.
Signals are broadcast through air (or in a few cases, water).

3.4.1 Radio Transmission


o Radio waves are easy to generate and can travel long distances and penetrate
buildings.
o Radio waves are omni-directional, which means that they travel in all
directions from the source.
o The transmitter and receiver do not have to be in direct line of sight

 Bands

 Radio Transmission Properties


o At low frequencies (<100MHz) radio waves pass through obstacles well but the
signal power attenuates (falls off) sharply in air

o At higher frequencies (>100MHz) radio waves tend to travel in straight lines and
bounce off obstacles and can be absorbed by rain (e.g. in the 8GHz range)
o At all frequencies, radio waves are subject to interference from motors and other
electrical equipment
 In very low frequencies (VLF), low frequencies (LF) and medium
frequency bands (MF) (<1 MHz) radio waves follow the
ground.(The maximum possible distance that these waves can
travel is approximately 1000km)

3.4.2 Microwave Transmission


o Different types of Propagation
 Surface Propagation (Ground Propagation)
 Sky Propagation.(Tropospheric Propagation, Ionospheric
Propagation, Space Propagation)
 Line-of-sight Propagation.

o Unlike radio waves, microwaves typically do not pass through solid


objects
o Some waves can be refracted due to atmospheric conditions and may
take longer to arrive than direct waves. These delayed waves can
arrive out of phase with the direct wave, causing destructive
interference and corrupting the received signal. This effect is called
multipath fading.
o Because of increased demand for more spectrum (range of frequencies
used to transmit), transmitters are using higher and higher frequencies
o Microwave communication is widely used for long distance telephone
communication and cell phones.
o Microwave signals propagate in one direction at a time, which means
two different frequencies are necessary for two-way communication.
o Transmitter is used to transmit the signal
o Receiver is used to receive the signal
o A transceiver is a piece of equipment that works as both a transmitter and a
receiver.

Terrestrial Microwave

Terrestrial Microwave Different Types of Antenna

o Two types of antenna are used for terrestrial microwave


Communication: Parabolic dish and Horn.
o Narrow beam – line of sight on towers to avoid obstacles
o Series of towers for long distance
o Repeaters: to increase the distance served by terrestrial microwave, a
system of repeaters is installed with each antenna.
o Applications:
 Long telephone line
 Voice and TV
 Short point to point between buildings
o Main Source of loss
 Attenuation – especially with rainfall
 Repeaters or amplifiers 10 to 100km
 Interference with overlapping bands

 Satellite Communication:
It is line-of-sight microwave transmission.

• Operates on a number of frequency bands known as transponders

– Point to Point (Ground station to satellite to ground station)
– Multipoint(Ground station to satellite to multiple receiving stations)
• Satellite orbit
– 35,784 Km, to match earth rotation
– Stays fixed above the transmitter/receiver station as earth rotates
• Satellites need to be separated by distance
– Avoid interference
• Applications
– TV, long distance telephone(satellite phone), private business networks
• Optimum frequency range
– 1 – 10 GHz
– Below 1GHz results in noise, above 10GHz results in severe attenuation
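A quick sketch (assuming the ~35,784 km altitude quoted above and straight-line propagation
at the speed of light) shows why satellite links have a noticeable delay:

# Sketch: one-way propagation delay to a geosynchronous satellite.
altitude_m = 35784 * 1000    # orbital altitude from the note above, in metres
c = 3.0e8                    # propagation speed, m/s

one_way = altitude_m / c
print("One-way (ground to satellite): %.0f ms" % (one_way * 1000))     # ~119 ms
print("Ground-satellite-ground:       %.0f ms" % (2 * one_way * 1000)) # ~239 ms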

 Satellites in geosynchronous orbit

 Frequency bands for satellite communication:


Each satellite sends and receives over two different bands. Transmission from the earth
to the satellite is called uplink. Transmission from the satellite to the earth is called
downlink.
Band Downlink Uplink
C 3.7 to 4.2 GHz 5.925 to 6.425 GHz
Ku 11.7 to 12.2 GHz 14 to 14.5 GHz
Ka 17.7 to 21 GHz 27.5 to 31 GHz

 Advantages of Microwave over Fiber Optics


o No need to lay cables
o This causes less disruption to the areas where the microwave transmitters
and receivers are placed
o This also means that microwave communication is less expensive than
fiber optic cable

3.4.3 Infrared:
o Infrared signals can be used for short-range communication in a closed
area using line-of-sight propagation.
o Transceivers must be within line of sight of each other or via reflection
o Does not penetrate walls like microwave
o No frequency allocation or licensing
3.4.4 Bluetooth
o Bluetooth is a wireless technology standard for exchanging data over short
distances (2.4 to 2.485 GHz) from fixed and mobile devices and building personal
area networks (PANs).
o Invented by telecom vendor Ericsson in 1994, it was originally conceived as a
wireless alternative to RS-232 data cables.
o It can connect several devices, overcoming problems of synchronization.
o Penetrates walls and other objects.
o No line-of-sight required.

3.5 Cellular Telephony:

 Provides stable communication connections between two moving devices, or
between a moving device and a stationary device.
 The service provider works as an administrator.
 The service provider keeps track of the caller's position and allocates channels.

 Cell: for tracking of caller (customer), each cellular service area is divided into
small regions called cells.
o Cell office: Each cell contains an antenna and is controlled by a small office,
called the cell office.

o Cell size is not fixed and can be increased or decreased depending on the
population of the area.
o the typical radius of a cell is 1 to 12 miles
o High-density areas require more, geographically smaller cells to meet traffic
demands than do lower density areas.
 MTSO: Each cell office, in turn, is controlled by a switching office called a
mobile telephone switching office (MTSO).
o The MTSO coordinates communication between all of the cell offices and the
telephone central office.
o MTSO is a computerized center that is responsible for connecting calls as well as
recording call information and billing.
o MTSO searches for the location of mobile phone by sending query signal to each
cell in a process called paging.

Handoff
o It may happen that during a conversation the mobile phone moves from one
cell to another cell.
o At this time the call must not be terminated. The MTSO of one cell hands over
responsibility for the call to another MTSO, which then becomes responsible for
handling the continuing call.

4.0: OSI Model & TCP/IP Model
4.1 Introduction
 ISO is an Organization and OSI is a model.
 ISO stands for International Standard Organization.
 OSI stands for Open System Interconnections.
 OSI model is used for understanding the concept of network architecture.
 OSI consists of seven separate but related layers, each of which defines a
segment of the process of moving information across a network.

[The OSI Model]

4.2 Traveling of Message from Node A to Node B

 Interface:- Each interface defines what information and services a layer must
provide for the layer above it. Interface is required to transferring data between
different layers.
 Header or Trailers: - Header or Trailers are the control data added to the
beginning or the end of a data parcel. . A trailer is added at layer 2.

4.3 OSI Layers:
4.3.1 Physical Layer

 Physical characteristic of interfaces and media


o Transmission media between Source and Destination
o It may be wired or wireless
 Representation of bits
o Bit must be encoded into signals-Electrical or Optical.
o It is define the type of encoding.
 Data rate
o Rate of data transmission / No. of bits transmitted per second
 Synchronization
o Sender and receiver must be synchronized at the bit level.
o Clocks must be synchronized.
 Line configuration
o Point-to-Point configuration & multipoint configuration
 Physical topology
o Type of topology:-Star, Mesh, Ring, Bus.
 Transmission mode
o Simplex, half-duplex or full-duplex

4.3.2 Data Link Layer

 Transmitting Frame
 Responsible for Node to Node delivery
 Makes the physical layer appear error free to the upper layer (network layer).

 Responsibilities of the data link layer

 Framing:
o the data link layer divides the stream of bits received from the network
layer into manageable data units called frames.
 Physical addressing:
o Add MAC Address (Layer 2 address/physical address) of Source and
Receiver.
 Flow control:
o Flow of data must be controlled.
o If sending rate of sender is higher than receiving rate of receiver then flow
control is required.
 Error Control:
o Adds reliability
o Detect and Retransmit damaged or lost frames.
o Prevent duplication of frames.
o Trailer is added to the end of the frame for error controlling.
 Access Control:
o Determines which node has control over the link at any given time.
o Bridges and Layer-2 switches operate at Layer 2 (a hub operates at the physical layer).

4.3.3 Network Layer:

 It is Responsible for the source-to-destination delivery of a packet.


 The network layer ensures that each packet gets from its point of origin to its final
destination.
 Specific responsibilities of the network layer include the following:
 Logical Addressing:
o Add IP Address (Layer 3 address/Logical address) of Source and Receiver.
 Routing:
o Route the packets to their final destination
o Layer-3 switches and routers operate at Layer 3.
o The routing table is maintained by the router.

4.3.4 Transport Layer


 Responsible for source-to-destination (end-to-end) delivery of the entire
message
 Network layer oversees end-to-end delivery of individual packets; it does not
recognize any relationship between those packets.
 The transport layer ensures that the whole message arrives in order.
 It manages both error control and flow control at the source-to-destination
level.

 Responsibilities of the transport layer
 Service-point Addressing:
o Computers often run several programs at the same time.
o The transport layer header therefore must include a type of address called
a service-point address (or port address).
 Segmentation and reassembly:
o A message is divided into transmittable segments.
o Each segment contains a sequence number.
o These numbers enable the transport layer to reassemble the message correctly
upon arrival at the destination and to identify and replace packets that were
lost in transmission.
 Connection Control:
o The transport layer can be either connectionless or connection-oriented.
o A connectionless transport layer treats each segment as an independent
packet and delivers it to the transport layer at the destination machine.
o A connection-oriented transport layer makes a connection with the
transport layer at the destination machine first before delivering the
packets. After all the data are transferred, the connection is terminated.
 Flow control:
o It is responsible for flow control for end to end rather than across a single
link.
 Error control:
o It is responsible for error control end to end rather than across a single
link.
o The sending transport layer makes sure that entire message arrives at the
receiving transport layer without error (damage, loss or duplication).
o Error correction is usually achieved through retransmission.

4.3.5 Session Layer

 The session layer is the network dialog controller.


 It establishes, maintains, and synchronizes the interaction between
communicating systems.
 Specific responsibilities of the session layer include the following:

 Dialog control:
o The session layer allows two systems to enter into a dialog.

o It allows the communication between two processes to take place either in
half-duplex (one way at a time) or full-duplex (two ways at a time). For
example, the dialog between a terminal connected to a mainframe can be
half-duplex.

 Synchronization:
o It allows a process to add checkpoints (synchronization points) into a
stream of data.
o For example, if a system is sending a file of 2000 pages, it is advisable to
insert checkpoints after every 100 pages to ensure that each 100-page unit
is received and acknowledged independently. In this case, if a crash
happens during the transmission of page 523, retransmission begins at
page 501: pages 1 to 500 need not be transmitted.

4.3.6 Presentation Layer


The presentation layer is concerned with the syntax and semantics of the information
exchanged between two systems.
 Specific responsibilities of the presentation layer include the following:

 Translation:
o The processes (running programs) in two systems are usually exchanging
information in the form of character strings, numbers, and so on.
o The information should be changed to bit streams before being
transmitted. Because different computers use different encoding systems,
the presentation layer is responsible for interoperability between these
different encoding methods.
o The presentation layer at the sender changes the information from its
sender-dependent format into a common format. The presentation layer at
the receiving machine changes the common format into its receiver-
dependent format.
 Encryption:
o To carry sensitive information, a system must be able to assure privacy.
o Encryption means that the sender transforms the original information to
another form and sends the resulting message out over the network.
Decryption reverses the original process to transform the message back to its
original form.
 Compression:

o Data compression reduces the number of bits to be transmitted.
o Data compression becomes particularly important in the transmission of
multimedia such as text, audio, and video.

4.3.7 Application Layer

 The application layer enables the user, whether human or software, to access the
network.
 It provides user interfaces and support for services such as electronic mail,
remote file access and transfer, shared database management, and other types
of distributed information services.

 Specific responsibilities of the application layer include the following:


 Network virtual terminal: A network virtual terminal is a software version
of a physical terminal and allows a user to log on to a remote host. To do so,
the application creates a software emulation of a terminal at the remote host.
 File transfer, access, and management (FTAM): This application allows a
user to access files in a remote computer (to make changes or read data), to
retrieve files from a remote computer; and to manage or control files in a
remote computer.
 Mail services: This application provides the basis for e-mail forwarding and
storage.
 Directory services: This application provides distributed database sources
and access to global information about various objects and services.

4.4 Internet Protocol

4.5 Comparison between TCP/IP and OSI

[The OSI model and the TCP/IP model]


 Similarities include:
o Both have layers.
o Both have application layers, though they include very different services.
o Both have comparable transport and network layers.
o Both models need to be known by networking professionals.
o Both assume packets are switched. This means that individual packets
may take different paths to reach the same destination. This is contrasted
with circuit-switched networks where all the packets take the same path.
 Differences include:
o TCP/IP combines the presentation and session layer issues into its
application layer.
o TCP/IP combines the OSI data link and physical layers into the network
access layer.
o TCP/IP appears simpler because it has fewer layers.
o TCP/IP protocols are the standards around which the Internet developed,
so the TCP/IP model gains credibility just because of its protocols.
o The OSI model is used as a guide, whereas the TCP/IP model is what is actually implemented.

4.6 TCP/IP model


4.6.1 Application Layer
 File Transfer Protocol (FTP)

 Trivial File Transfer Protocol (TFTP)
 Network File System (NFS)
 Simple Mail Transfer Protocol (SMTP)
 Telnet
 Simple Network Management Protocol (SNMP)
 Domain Name System (DNS)

4.6.2 Transport Layer


 TCP (Transmission Control Protocol )
 UDP(User Datagram Protocol)
 TCP is connection-oriented and UDP is connectionless.
 The data unit that TCP creates is called a segment; the data unit that UDP creates
is called a user datagram.
 The functions of UDP are as follows:
o Segment upper-layer application data
o Send segments from one end device to another
 The functions of TCP are as follows:
o Establish end-to-end operations
o Provide flow control through the use of sliding windows
o Ensure reliability through the use of sequence numbers and acknowledgments
4.6.3 Internet Layer
 IP stands for Internet Protocol
 IP provides connectionless, best-effort delivery routing of packets.
 IP is not concerned with the content of the packets but looks for a path to the
destination.
 Four supporting protocols
 ARP (Address Resolution Protocol)
 RARP (Reverse Address Resolution Protocol)
 ICMP (Internet Control Message Protocol)
 IGMP (Internet Group Message Protocol)

4.6.4 Network Access Layer


 Ethernet, Fast Ethernet and Gigabit Ethernet

5.0: Transmission Signal Delay
5.1 Introduction
There are two types of signals communicated: analog and digital.
(1) Analog:
Those signals that vary with smooth continuous changes.
A continuously changing signal similar to that found on the speaker wires of a
high-fidelity stereo system.
(2) Digital:
Those signals that vary in steps or jumps from value to value. They are usually in
the form of pulses of electrical energy (represent 0s or 1s).

5.2 Difference between Analog and Digital Signal

Analog Signal | Digital Signal
Analog signals are continuous. | Digital signals are discrete.
Analog signals are continuously varying. | Digital signals are based on 0's and 1's (or, as often said, on's and off's).
Analog is the process of taking an audio or video signal (in most cases, the human voice) and translating it into electronic pulses. | Digital, on the other hand, is breaking the signal into a binary format where the audio or video data is represented by a series of "1"s and "0"s.
Inexpensive to use. | Expensive to use.
Not complicated. | Complicated.
Analog can deliver better sound quality. | Does not give that much sound clarity.
Does not give that much picture clarity. | Digital offers better picture clarity.
Analog bandwidth is measured in Hz. | Digital bandwidth is measured in bps.
Bandwidth (analog): the difference between the highest and lowest frequencies that can be sent over an analog link (like phone lines); measured in hertz (Hz). | Bandwidth (digital): the number of bits per second (bps) that can be sent over a link; the wider the bandwidth, the more diverse kinds of information can be sent.
As an analogy, consider a dimmer switch (analog) that allows you to vary the light in different degrees of brightness. | As an analogy, consider a light switch that is either on or off (digital).

5.3 Different Types of Delay:


 Bandwidth
o The range of frequencies that a medium can pass. (Analog)
o Number of bits per second (bps) that can be sent over a link. (Digital)
 Throughput
o Throughput is the average rate of successful message delivery over a
communication channel.
o Bandwidth is a potential measurement of a link; the throughput is an
actual measurement of how fast we can send data.
• Processing Delay (Nodal Delay) [Dproc]: Time required to examine the packet
header and determine where to direct the packet.
• Propagation [Dprop]:
This is simply the time it takes for a packet to travel from one place to another
at the speed of light. It is a simple measurement of how long it takes for a signal
to travel along the cable being tested.
Propagation time = Distance / Propagation speed
 Queuing Delay [Dqueue]
It is a waiting time experienced by packet for transmission across the link.
• Transmission time [Dtrans] (Transmission Delay): It is the time between the
first bit leaving the sender and the last bit arriving at the receiver.
Transmission time = Message size / Bandwidth

Prepared By: Prof. Ajay N. Upadhyaya, Asst. Prof. CE Dept, LJIET Page 44
Transmission time = Packet Length/Transmission Rate=L/R

 Latency (Total Nodal Delay)


Time it takes for a packet of data to get from one designated point to another
Latency = Propagation time + Transmission time + Queuing time + Processing
delay.
Latency = Dprop + Dtrans + Dqueue + Dproc

Ex-1 What is the propagation time if the distance between the two points is 12,000
km? Assume the propagation speed to be 2.4 x 10^8 m/s in cable.

• Propagation time = Distance / Propagation speed
• Propagation time = (12,000 x 1000) / (2.4 x 10^8) = 0.05 s = 50 ms

Ex-2 What is the transmission time for a 2.5 KB message if the bandwidth of the
network is 1 Gbps?
Solution:
• 2.5 KB = 2.5 * 1000 bytes = 2.5 * 1000 * 8 bits = 2500 * 8 bits
• 1 Gbps = 10^9 bps
• Transmission time = (2500 * 8) / 10^9 = 0.00002 seconds
Answer = 0.02 ms.
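The formulas above can be checked with a small sketch (values taken from Ex-1 and Ex-2):

# Sketch: propagation and transmission time, using the formulas above.
def propagation_time(distance_m, speed_mps):
    return distance_m / speed_mps            # seconds

def transmission_time(message_bits, bandwidth_bps):
    return message_bits / bandwidth_bps      # seconds

# Ex-1: 12,000 km at 2.4 x 10^8 m/s -> 0.05 s (50 ms)
print(propagation_time(12000 * 1000, 2.4e8))

# Ex-2: 2.5 KB message over a 1 Gbps link -> 0.00002 s (0.02 ms)
print(transmission_time(2.5 * 1000 * 8, 1e9))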

Ex-3 A network with a bandwidth of 10 Mbps can pass only an average of 12,000 frames per
minute with each frame carrying an average of 10,000 bits. What is the throughput of this network?

Solution:
Throughput = (12,000 x 10,000) / 60 = 2,000,000 bps = 2 Mbps
The throughput is almost one-fifth of the bandwidth in this case.

Ex-4 What are the propagation time and the transmission time for a 5-Mbyte message (an image) if
the bandwidth of the network is 1 Mbps? Assume that the distance between the sender and the receiver is
12,000 km and that light travels at 2.4 × 10^8 m/s.

Solution:
Propagation time = (12,000 x 1000) / (2.4 x 10^8) = 0.05 s = 50 ms
Transmission time = (5,000,000 x 8) / 10^6 = 40 s

Note that in this case, because the message is very long and the bandwidth is not
very high, the dominant factor is the transmission time, not the propagation time. The
propagation time can be ignored.

6.0: Multiplexing

6.1 Multiplexing:
 Multiplexing is sending multiple signals or streams of information on a carrier at
the same time in the form of a single, complex signal and then recovering the
separate signals at the receiving end.
 In analog transmission, signals are commonly multiplexed using frequency-
division multiplexing (FDM), in which the carrier bandwidth is divided into sub
channels of different frequency widths, each carrying a signal at the same time in
parallel. In digital transmission, signals are commonly multiplexed using time-
division multiplexing (TDM), in which the multiple signals are carried over the
same channel in alternating time slots.
 In some optical fiber networks, multiple signals are carried together as separate
wavelengths of light in a multiplexed signal using dense wavelength division
multiplexing (DWDM).

6.2 FDM:

 Frequency-division multiplexing (FDM) is inherently an analog technology.


FDM achieves the combining of several digital signals into one medium by
sending signals in several distinct frequency ranges over that medium.

 One of FDM's most common applications is cable television. Only one cable
reaches a customer's home but the service provider can send multiple television
channels or signals simultaneously over that cable to all subscribers.
 Receivers must tune to the appropriate frequency (channel) to access the desired
signal.

FDM multiplexing and De-multiplexing example:

Multiplexing

Demultiplexing

6.3 WDM:
 It is an analog multiplexing technique used to combine optical signals.
 In fiber-optic communications, wavelength-division multiplexing (WDM) is a
technology which multiplexes a number of optical carrier signals onto a single
optical fiber by using different wavelengths (i.e. colors) of laser light.
 This technique enables bidirectional communications over one strand of fiber, as
well as multiplication of capacity.
6.4 TDM

 Time-division multiplexing (TDM) is a digital (or in rare cases, analog)


technology. TDM involves sequencing groups of a few bits or bytes from each
individual input stream, one after the other, and in such a way that they can be
associated with the appropriate receiver.
If this is done sufficiently quickly, the receiving devices will not detect that some of
the circuit time was used to serve another logical communication path.
 Consider an application requiring four terminals at an airport to reach a central
computer. Each terminal communicated at 2400 bit/s, so rather than acquire four
individual circuits to carry such a low-speed transmission; the airline has
installed a pair of multiplexers.
 A pair of 9600 bit/s modems and one dedicated analog communications circuit
from the airport ticket desk back to the airline data center are also installed.
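A minimal sketch of the idea (assuming simple character interleaving from four terminal
streams, as in the airport example above; the stream contents are made up for illustration):

# Sketch: synchronous TDM. One unit (here one character) is taken from each
# low-speed input stream in turn and placed into a fixed slot of each frame.
streams = ["AAAA", "BBBB", "CCCC", "DDDD"]     # assumed terminal outputs

frames = ["".join(units) for units in zip(*streams)]
print(frames)       # ['ABCD', 'ABCD', 'ABCD', 'ABCD']

# Demultiplexing: slot i of every frame belongs to terminal i.
recovered = ["".join(frame[i] for frame in frames) for i in range(len(streams))]
print(recovered)    # ['AAAA', 'BBBB', 'CCCC', 'DDDD']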

7.0. Error Correction and Error Detection
7.1 Introduction
 Data can be corrupted during transmission.
 So a reliable system must have a mechanism for detecting and correcting
errors.
 Different Factors affecting the signal: Unpredictable interference from heat,
magnetism and other forms of electricity.
7.2 Types of Error

Errors

Single bit Burst

Single-bit Error:
 In a single-bit error, only one bit in the data unit has changed.
00100010 changed to 00000010
 So, there is a single-bit error in the data unit.
Burst Error:
 Burst error means that two or more bits in the data unit have changed from
1 to 0 or 0 to 1.
Length of burst
 A burst error does not necessarily occur in the consecutive bits.
 So, we find the length between first corrupted bit and last corrupted bit and
it’s called Length of burst.
Original data  : 0 0 1 0 0 0 1 0
Received data  : 0 1 1 0 1 1 1 0
                   *-------*
 Length of the burst error is 5 bits (from the first corrupted bit to the last).
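The burst length defined above can be computed with a short sketch (using the example data):

# Sketch: burst length = distance from the first to the last corrupted bit, inclusive.
original = "00100010"
received = "01101110"

errors = [i for i, (a, b) in enumerate(zip(original, received)) if a != b]
burst_length = (errors[-1] - errors[0] + 1) if errors else 0
print(burst_length)   # 5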

7.3 Error Detection:

Redundancy
Error detection uses the concept of redundancy, which means adding extra bits
for detecting errors at the destination.

 A mechanism is required to detect errors in the data.
 Different methods are used to detect errors.
Detection methods

VRC LRC CRC Checksum

[1] VRC –Vertical Redundancy Check


 It is the least expensive method.
 It is also called parity checking, and the added bit is called the parity bit.
 This parity bit is appended to every data unit so that the total number of 1's
(including the parity bit) becomes either even or odd.
 Example:-
If data is 10101010
And if user wants data with Even Parity then Data becomes
101010100{Total number of 1 is Four},
And if user wants data with Odd Parity then Data becomes
101010101{Total number of 1 is Five}
 VRC can detect all single-bit errors.
 It can also detect burst errors, but only as long as the total number of changed bits is odd (1, 3, 5, ...); if an even number of bits change, the parity remains valid and the error goes undetected.
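The parity computation can be sketched in a few lines of Python; the function name and the bit-string representation are illustrative assumptions, not part of any standard library:

def append_parity(bits, parity="even"):
    # Append a VRC parity bit so that the total number of 1s becomes even or odd.
    ones = bits.count("1")
    if parity == "even":
        bit = "0" if ones % 2 == 0 else "1"
    else:  # odd parity
        bit = "1" if ones % 2 == 0 else "0"
    return bits + bit

print(append_parity("10101010", "even"))   # 101010100 (four 1s, as in the example)
print(append_parity("10101010", "odd"))    # 101010101 (five 1s)

The receiver recomputes the parity over the received bits; if the count of 1s no longer matches the agreed parity, an error is reported.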

[2] LRC –Longitudinal Redundancy Check

 Blocks of bits are organized in a table format (i.e., in rows and columns).
 Example:
We have data
01100111000111010001100100101001
32 bit block is divided into 4 rows and 8 columns.

01100111 00011101 00011001 00101001
Arrange it in table Format.
01100111
00011101
00011001
00101001
------------
01001010  LRC

Sender sends Original Data plus LRC to the receiver.


01100111 00011101 00011001 00101001 01001010
 It is generally used to detect burst errors.
 Sometimes the LRC checker cannot detect an error.
 Example: If the data blocks are 11110000 and 11000011, and the first and last bits in each of them are changed, the data become 01110001 and 01000010; the column parities are unchanged, so LRC does not find the error.
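The column-wise parity of the worked example can be reproduced with a short Python sketch (the helper name lrc is an assumption chosen for illustration):

def lrc(blocks):
    # Compute the LRC: even parity over each column of equal-length bit strings.
    width = len(blocks[0])
    out = []
    for col in range(width):
        ones = sum(block[col] == "1" for block in blocks)
        out.append("1" if ones % 2 else "0")
    return "".join(out)

data = ["01100111", "00011101", "00011001", "00101001"]
print(lrc(data))   # 01001010, matching the LRC computed above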

[3] CRC –Cyclic Redundancy Check


 CRC is based on binary (modulo-2) division.
 The CRC (the CRC remainder) is appended to the end of the data unit.
 The CRC remainder has exactly one bit fewer than the divisor.
 The data is also called the frame.
 The divisor is also called the generator.
 It is the most powerful of these redundancy-checking methods.

 CRC generator and checker

Example-1
[A] Binary division in a CRC generator

[B] Binary division in CRC checker

Example-2

 Steps for finding the CRC remainder:

1) If the divisor (fixed code) has n bits, append n-1 zeros to the original string.
2) If the first bit of the string is 1, put 1 in the quotient and XOR the leftmost n bits of the string with the fixed code.
3) If the first bit of the string is 0, put 0 in the quotient and XOR the leftmost n bits of the string with n zeros.
4) Repeat the same method for the remaining bits, bringing down one bit of the string at a time.
5) The final n-1 bits are the remainder.
6) Append the remainder to the original string.

 The original data frame and the generator can also be represented in polynomial format.
Example: 11011011 can be represented as
x^7 + x^6 + x^4 + x^3 + x + 1
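The division steps above can be sketched in Python as modulo-2 (XOR) long division; the 4-bit frame and generator used below are hypothetical values chosen only to keep the trace short:

def crc_remainder(data, divisor):
    # CRC remainder by modulo-2 division: append n-1 zeros, then repeatedly XOR
    # the divisor under every leading 1, exactly as in steps 1-5 above.
    n = len(divisor)
    padded = list(data + "0" * (n - 1))
    for i in range(len(data)):
        if padded[i] == "1":
            for j in range(n):
                padded[i + j] = str(int(padded[i + j]) ^ int(divisor[j]))
    return "".join(padded[-(n - 1):])

data, divisor = "1001", "1011"          # hypothetical frame and generator
rem = crc_remainder(data, divisor)      # -> "110"
print(data + rem)                       # transmitted codeword "1001110"

At the checker, dividing the received codeword by the same generator gives a zero remainder when no error has occurred; for this example crc_remainder("1001110", "1011") returns "000".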

[4] Checksum:
 It is an error detection method used by higher-layer protocols.
 The sender follows these steps:
1) Divide the whole data unit into k sections, each containing n bits.
2) Add all the sections together.
3) Complement the sum; the result becomes the checksum.
4) Send the checksum along with the data.

 The receiver follows these steps:

1) Divide the whole received unit (data plus checksum) into k sections, each containing n bits.
2) Add all the sections together.
3) Complement the sum.
4) If the result is zero, the data are accepted; otherwise the data are rejected.
 Example:
At the sender side
Data of 16 bits:
10101001 00111001
The addition of these two numbers (each containing 8 bits) is:

10101001
00111001
---------------------
SUM- 11100010
CHECKSUM:  0 0 0 1 1 1 0 1

This pattern is sent to the destination:


10101001 00111001 00011101

At the receiver side
Data of 24 bits:
10101001 00111001 00011101

10101001
00111001
00011101

---------------------
SUM-1 1 1 1 1 1 1 1

And its complement 0 0 0 0 0 0 0 0


If we get all zero then there is not any error in the data are accepted.
The receiver can detect an error but it can’t find error in particular data unit
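The worked example can be checked with a short Python sketch. It uses one's-complement addition (any carry out of the 8 bits is wrapped back in); no carry occurs in this particular example, so the result matches the sum shown above:

def checksum(words, bits=8):
    # One's-complement sum of the sections, then complemented.
    mask = (1 << bits) - 1
    total = 0
    for w in words:
        total += w
        total = (total & mask) + (total >> bits)   # wrap any carry around
    return total ^ mask

data = [0b10101001, 0b00111001]
cs = checksum(data)
print(format(cs, "08b"))                      # 00011101, the checksum sent with the data
print(format(checksum(data + [cs]), "08b"))   # 00000000 -> the receiver accepts the data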

7.4 Error Correction


Error correction can be done in two ways:
1) Whenever an error is detected, retransmit the entire data unit.
2) Whenever an error is detected, correct it at the receiver using a technique such as forward error correction or burst error correction.

Data redundancy bits

The number of redundancy bits r needed for m data bits must satisfy:

2^r >= m + r + 1

Number of data bits (m) | Number of redundancy bits (r) | Total bits (m + r)
1 | 2 | 3
2 | 3 | 5
3 | 3 | 6
4 | 3 | 7
5 | 4 | 9
6 | 4 | 10
7 | 4 | 11
Hamming code

Position of redundant bit

Redundancy bits calculation

Example:
Redundancy bit calculation

Error detection using Hamming code

8.0 Ethernet
8.1 Introduction
 first implemented by the Digital, Intel, and Xerox group (DIX)
 Most of the traffic on the Internet originates and ends with an Ethernet
connection.
 The success of Ethernet is due to the following factors
 Simplicity and ease of maintenance
 Ability to incorporate new technologies
 Reliability
 Low cost of installation and upgrade
 Institute of Electrical and Electronics Engineers (IEEE) 802.3 specification
 802.3u for Fast Ethernet,
 802.3z for Gigabit Ethernet over fiber
 802.3ab for Gigabit Ethernet over UTP
 Fast Ethernet is used
 To connect backbone devices
 To connect enterprise servers
8.2 Connectors
 RJ-45 : a connector commonly used for terminating a twisted-pair cable
 AUI : a connector that interfaces between a computer's NIC or a router interface and an Ethernet cable.
 GBIC : a device used as an interface between Ethernet and fiber-optic systems.
 The RJ-45 transparent end connector shows eight colored wires.
 Four of the wires, T1 through T4, carry the voltage and are called tip.
 The other four wires, R1 through R4, are grounded and are called ring
 The wires in the first pair in a cable or a connector are designated as T1 and R1.
The second pair is T2 and R2, the third is T3 and R3, and the fourth is T4 and R4
8.3 UTP Connection
 Use straight-through cables for the following connections:
 Switch to router
 Switch to PC or server
 Hub to PC or server
 Use crossover cables for the following connections:
 Switch to switch
 Switch to hub
 Hub to hub
 Router to router
 PC to PC
 Router to PC

8.4 History
 The first Ethernet standard was published in 1980
 In 1985, the IEEE standards committee for Local and Metropolitan Networks published standards for LANs.
 In 1995, IEEE announced a standard for 100-Mbps Ethernet
 Gigabit Ethernet standards followed in 1998 and 1999

8.5 LAN Specification

8.6 Ethernet Standards


 10Mbps Ethernet /Traditional Ethernet
 100Mbps Ethernet / Fast Ethernet
 1Gbps and 10Gbps Ethernet / Gigabit Ethernet
Other term: Switched Ethernet, which is similar to traditional Ethernet or Fast Ethernet but uses a switch as the central device.
 10 Mbps Ethernet
 10Base5 [Thick Ethernet]
 10BASE5 systems also represent a single point of failure
 10BASE5 uses Manchester encoding
 The cable is large, heavy, and difficult to install
 10Base2 [Thin Ethernet]

 Installation was easier because of its smaller size, lighter weight, and
greater flexibility
 It has a low cost and does not require hubs
 T-shaped connector is used
 There may be up to 30 stations on a 10BASE2 segment
 10BaseT [Twisted-Pair Ethernet]
 10BASE-T cabling is cheaper and easier to install
 10BASE-T uses Manchester encoding
 T568-A or T568-B cable pinout arrangement
 100 Mbps Ethernet / Fast Ethernet
 100BaseTx
 Category 5 UTP cable is used
 100BASE-TX uses 4B/5B encoding
 Distance between Hub and Station should be less than 100 m.
 100BaseFx
 100BASE-FX uses NRZI encoding
 Uses Optical Fiber.
 Distance between Hub and Station should be less than 2000 m.
 1000Mbps Ethernet / Gigabit Ethernet
 IEEE 802.3ab, uses Category 5
 IEEE 802.3z, specifies 1Gbps full duplex over optical fiber
 Fiber-based Gigabit Ethernet uses 8B/10B encoding, which is similar to the
4B/5B concept.
 Followed by the simple nonreturn to zero (NRZ) line encoding of light on
optical fiber
 Switched Ethernet
 When Ethernet was originally designed, computers were fairly slow and
networks were rather small. Therefore, a network running at 10 Mbps was
more than fast enough for just about any application. Nowadays,
computers are several orders of magnitude faster and networks consist of
hundreds or thousands of nodes, and the demand for bandwidth placed
on the network is often far more than it can provide.
 When the load on a network is so high that it results in large numbers of
collisions and lost frames, the productivity of the users is greatly reduced.
This is called congestion and it can be solved in one of two ways: either

scrap the entire network currently in place and replace it with a faster one;
or install an Ethernet switch to create multiple small networks.
 In a "traditional" Ethernet network, there is 10 Mbps of bandwidth
available. This bandwidth is shared among all of the users of the network
who wish to transmit or receive information at any one time. In a large
network, there is a very high probability that several users will make a
demand on the network at the same time, and if these demands occur
faster than the network can handle them, eventually the network seems to
slow to a crawl for all users.
 Switches allow us to create a "dedicated road" between individual users
(or small groups of users) and their destination (usually a file server). The
way they work is by providing many individual ports, each running at 10
Mbps interconnected through a high speed backplane. Each frame, or
piece of information, arriving on any port has a Destination Address field
which identifies where it is going to. The switch examines each frame's
Destination Address field and forwards it only to the port which is
attached to the destination device. It does not send it anywhere else.
Several of these conversations can go through the switch at one time,
effectively multiplying the network's bandwidth by the number of
conversations happening at any particular moment.

8.7 Ethernet Frame Structure

9.0 Interconnecting Devices

9.1 Introduction
9.1.1 Different category of connecting devices

9.1.2 Different devices are used at different layer of OSI

9.2 Repeater
 A repeater is also known as a regenerator.
 It is an electronic device.
 It is used at the physical layer of the OSI model.
 It is used to carry information over longer distances.
 It does not work as an amplifier; it does not amplify the signal.
 It regenerates the original bit patterns from the weakened signal.
 So, a repeater is a regenerator, not an amplifier.
 It is not an intelligent device.

9.3 Bridge

 A bridge operates at both the physical layer and the data link layer.
 A bridge divides a larger network into smaller segments.
 It keeps the traffic of each segment separate.
 It maintains the physical (MAC) address of each node on each segment.
 These addresses are stored in a look-up table.
 Types of bridge

Bridges are of three types: simple bridge, multiport bridge, and transparent bridge.

1. Simple Bridge
o Used to connect two segments
o Least expensive
o The address of each node is entered manually
o Installation and maintenance effort is high and time-consuming
2. Multiport Bridge
o Used to connect more than two LANs
o Maintains the physical address of each station
o If three segments are connected, it maintains three tables
o The address of each node is entered manually
o Installation and maintenance effort is high and time-consuming

3. Transparent Bridge
o It is also known as a learning bridge
o Maintains the physical address of each station
o Manual entry of each node is not required
o It builds its table on its own; at the initial stage the table is empty
o After each frame it processes, it stores the details of the node
o It is self-updating

9.4 Router
 It is used at the network layer of the OSI model
 It maintains the logical (IP) address of each node
 It is an intelligent device
 It has its own software
 It determines the best path among the available paths
 It is used to connect two different networks
 Manual handling is not required
 We can configure the router according to our requirements
 It maintains addresses on its own
 At the initial stage the table is empty
 After each packet it processes, it stores the details of each node
 It is self-updating

 Routing is classified as nonadaptive or adaptive routing


1. Nonadaptive Routing
o The path to the destination is selected in advance
o The router sends every packet through the same path
o All the packets follow the same path
o The routing decision is not based on the current condition or topology of the network
2. Adaptive Routing
o The router may select a different path for each packet
o The routing decision is based on the current condition or topology of the network
o It finds the best path among the available paths

9.5 Gateway
 It operates in all seven layers of the OSI model
 A gateway is a protocol converter
 When two networks work on different protocols, it accepts a packet from one network and transmits it to the other network
 It converts the packet into a suitable form
 A gateway is software installed within a router.
 A gateway adjusts the data rate, size, and format of each packet.

9.6 Other devices

9.6.1 Multiprotocol Router


 Operate at the network layer
 Handling more than two different protocol supported network

9.6.2 Brouter
 It is a combination of a bridge and a router
 It may be a single-protocol or a multiprotocol router
 It works as both a bridge and a router
 It routes packets based on the network layer address
 It also divides the network into smaller segments
9.6.3 Switch
 There are two types of switch: layer-two and layer-three switches
 A layer-two switch operates at the second layer and a layer-three switch operates at the third layer
 A layer-three switch can be configured
 An L2 switch maintains the physical address of each node

 An L3 switch handles logical (IP) addresses
 Manual handling of the tables is not required
 The first time, the frame is broadcast if the destination address is not available in the table.
 Two different switching strategies:
o Store-and-forward switch: stores the frame in the input buffer until the whole frame has arrived.
o Cut-through switch: does not wait for the rest of the frame; it forwards the frame toward the destination as soon as the destination address has been read.
9.6.4 Hub
 Used at the physical layer
 It is a dumb device
 It does not maintain any node details
 It broadcasts the packets every time
 Cheaper
 Used to connect two or more computers

10.0 Framing
10.1 Framing
In the OSI model of computer networking, a frame is the protocol data unit at the data
link layer. Frames are the result of the final layer of encapsulation before the data is
transmitted over the physical layer

Framing: Using Counters

(a) Without errors. (b) With one error.


10.2 Framing: Flag Byte
• Each frame starts and ends with special bytes: flag bytes.
• Two consecutive flag bytes indicate the end of one frame and the beginning of a new frame.
• Problem?
• What if flag bit pattern occurs in data?

• (a) A frame delimited by flag bytes.
• (b) Four examples of byte sequences before and after stuffing.
• Single ESC: part of the escape sequence.
• Doubled ESC: single ESC is part of data.
• De-stuffing.
• Problem:
• What if character encoding does not use 8-bit characters?
10.3 Bit Stuffing
• Bit stuffing allows character codes with an arbitrary number of bits per character.
• Each frame begins and ends with a special bit pattern.
• Example: 01111110.
• When the sender's data link layer finds five consecutive 1s in the data stream, it stuffs a 0 after them.
• When the receiver sees five 1s followed by a 0, it de-stuffs (removes) the 0.
Bit Stuffing: Example

(a) Original data.


(b) Data as they appear on the line.
(c) Data after de-stuffing.
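A minimal Python sketch of stuffing and de-stuffing (the flag value 01111110 is the one quoted above; the sample payload is hypothetical):

FLAG = "01111110"

def bit_stuff(data):
    # Insert a 0 after every run of five consecutive 1s in the payload.
    out, run = [], 0
    for bit in data:
        out.append(bit)
        run = run + 1 if bit == "1" else 0
        if run == 5:
            out.append("0")
            run = 0
    return "".join(out)

def bit_destuff(data):
    # Remove the 0 that follows every run of five consecutive 1s.
    out, run, skip = [], 0, False
    for bit in data:
        if skip:
            skip, run = False, 0
            continue
        out.append(bit)
        run = run + 1 if bit == "1" else 0
        if run == 5:
            skip = True
    return "".join(out)

payload = "011111101111101111110"     # contains flag-like runs of 1s
frame = FLAG + bit_stuff(payload) + FLAG
assert bit_destuff(bit_stuff(payload)) == payload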

11.0 Multiple Access Protocols
The data link layer is divided into two sub layers:
 The media access control (MAC) layer and
 The logical link control (LLC) layer.

The former controls how computers on the network gain access to the medium and obtain
permission to transmit data; the latter controls packet synchronization, flow control, and
error checking.

11.1 Multiple Access Protocols

11.2 Three broad classes:


A. Channel Partitioning
 divide channel into smaller ‚pieces‛ (time slots, frequency, code)
 allocate piece to node for exclusive use

B. Random Access
 channel not divided, allow collisions
 ‚recover‛ from collisions
C. “Taking turns”
 tightly coordinate shared access to avoid collisions

A. Channel Partitioning Protocol:

A.1 FDMA
 channel spectrum divided into frequency bands
 each station assigned fixed frequency band
 unused transmission time in frequency bands go idle
 example: 6-station LAN, 1,3,4 have pkt, frequency bands 2,5,6 idle

A.2 TDMA
 access to channel in "rounds"
 each station gets fixed length slot (length = pkt trans time) in each round
 unused slots go idle
 example: 6-station LAN, 1,3,4 have pkt, slots 2,5,6 Idle

A.3 CDMA
 a unique "code" is assigned to each user, i.e., code-set partitioning
 used mostly in wireless broadcast channels (cellular, satellite, etc.)
 all users share the same frequency, but each user has its own "chipping" sequence (i.e., code) to encode data
 encoded signal = (original data) x (chipping sequence)
 decoding: the inner product of the encoded signal and the chipping sequence
 this allows multiple users to "coexist" and transmit simultaneously with minimal interference (if the codes are "orthogonal")

B. Random Access Protocol:


 When a node has a packet to send:
o it transmits at the full channel data rate R
o there is no a priori coordination among nodes
 Two or more transmitting nodes result in a "collision"
 A random access MAC protocol specifies:
o how to detect collisions
o how to recover from collisions (e.g., via delayed retransmissions)
 Examples of random access MAC protocols:
o slotted ALOHA
o ALOHA
o CSMA, CSMA/CD, CSMA/CA

B.1 Slotted ALOHA


Assumptions:
 All frames same size
 Time is divided into equal size slots, time to transmit 1 frame
 Nodes start to transmit frames only at beginning of slots
 Nodes are synchronized

 If 2 or more nodes transmit in slot, all nodes detect collision

Operation:
 When node obtains fresh frame, it transmits in next slot
 No collision, node can send new frame in next slot
 If collision, node retransmits frame in each subsequent slot with prob. p until
success

Pros:
 Single active node can continuously transmit at full rate of channel
 Highly decentralized: only slots in nodes need to be in sync
 Simple

Cons:
 Collisions, wasting slots
 Idle slots
 Nodes may be able to detect collision in less than time to transmit packet
 Clock synchronization

Slotted Aloha efficiency:


 Efficiency is the long-run fraction of successful slots when there are many nodes,
each with many frames to send
 Suppose N nodes with many frames to send, each transmits in slot with
probability p
 Prob. that a given node has a success in a slot = p(1-p)^(N-1)
 Prob. that any node has a success = Np(1-p)^(N-1)
 For max efficiency with N nodes, find p* that maximizes Np(1-p)^(N-1)
 For many nodes, take the limit of Np*(1-p*)^(N-1) as N goes to infinity, which gives 1/e ≈ 0.37
 At best: channel used for useful transmissions 37% of time!
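The 37% figure can be verified numerically; a small Python sketch (the choice of N values is arbitrary):

import math

def slotted_efficiency(N, p):
    # Probability that exactly one of N nodes transmits in a slot.
    return N * p * (1 - p) ** (N - 1)

for N in (10, 100, 1000):
    print(N, round(slotted_efficiency(N, 1.0 / N), 4))   # p* = 1/N maximizes it
print(round(1 / math.e, 4))                              # limiting value 0.3679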

B.2 Pure (Un-Slotted) ALOHA


Introduction:
Un-slotted ALOHA: simpler, no synchronization
 when a frame first arrives:
o transmit it immediately
 collision probability increases:
o a frame sent at time t0 collides with any other frame sent in [t0-1, t0+1]

Difference between Slotted and Pure ALOHA:


1. Pure ALOHA is a continuous-time system, whereas slotted ALOHA is a discrete-time system.
2. Pure ALOHA does not check whether the channel is busy before transmission.
3. Slotted ALOHA sends data only at the beginning of a time slot; pure ALOHA is not divided into time slots.
4. In slotted ALOHA, a frame can be sent only at fixed times, whereas in pure ALOHA you can send at any time.
5. Pure ALOHA features a feedback property that enables a station to listen to the channel and find out whether its frame was destroyed.

Pure Aloha efficiency:


 P(success by a given node) = P(node transmits)
· P(no other node transmits in [t0-1, t0])
· P(no other node transmits in [t0, t0+1])
= p · (1-p)^(N-1) · (1-p)^(N-1)
= p · (1-p)^(2(N-1))
 Choosing the optimum p and letting N go to infinity gives 1/(2e) ≈ 0.18, even worse than slotted ALOHA!
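The same calculation for pure ALOHA, where the exponent doubles because the vulnerable period is two frame times (again a sketch; the optimum p = 1/(2N-1) follows from differentiating the expression):

import math

def pure_efficiency(N, p):
    return N * p * (1 - p) ** (2 * (N - 1))

for N in (10, 100, 1000):
    print(N, round(pure_efficiency(N, 1.0 / (2 * N - 1)), 4))
print(round(1 / (2 * math.e), 4))   # limiting value 0.1839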

B.3 CSMA (Carrier Sense Multiple Access)
Introduction:
 A station senses the channel before it starts transmission
o If busy, either wait or schedule backoff (different options)
o If idle, start transmission
o Vulnerable period is reduced to tprop (due to channel capture effect)
o When collisions occur they involve entire frame transmission times
o Human analogy: don’t interrupt others!

CSMA Options:
 Transmitter behavior when busy channel is sensed
o 1-persistent CSMA (most greedy)
 Start transmission as soon as the channel becomes idle
 Low delay and low efficiency
o Non-persistent CSMA (least greedy)
 If busy, wait a backoff period, then sense carrier again
 High delay and high efficiency
o p-persistent CSMA (adjustable greedy)
 Wait till channel becomes idle, transmit with prob. p; or
 wait one mini-slot time & re-sense with probability 1-p
 Delay and efficiency can be balanced

B.4 CSMA/CD (Carrier Sense Multiple Access with Collision Detection)
Introduction:
 Monitor for collisions & abort transmission
o Stations with frames to send, first do carrier sensing
o After beginning transmissions, stations continue listening to the medium
to detect collisions
 If collisions detected, all stations involved abort transmission, reschedule
random backoff times, and try again at scheduled times - quickly terminating a
damaged frame saves Time & Bandwidth
 Binary exponential backoff: after k collisions, a random number between 0 and 2^k - 1 is chosen, and that number of slot times is waited before retransmitting (see the sketch after this list)

 In CSMA, a collision wastes the X seconds spent transmitting an entire frame
 CSMA/CD reduces the wastage to the time needed to detect the collision and abort the transmission
 CSMA/CD can be in one of three states: contention, transmission, or idle.
 human analogy: the polite conversationalist
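A sketch of the backoff rule in Python; the 51.2 µs slot time is the classic 10 Mbps Ethernet value and the cap of 10 doublings mirrors common Ethernet practice, both assumed here for illustration:

import random

def backoff_delay(collisions, slot_time=51.2e-6):
    # After k collisions, wait a random number of slot times drawn from 0 .. 2**k - 1.
    k = min(collisions, 10)            # the exponent is capped in real Ethernet
    return random.randint(0, 2 ** k - 1) * slot_time

print(backoff_delay(3))   # e.g. between 0 and 7 slot times after the 3rd collision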

CSMA/CD reaction time:

Assumptions:
 Collisions can be detected and resolved in 2tprop
 Time slotted in 2tprop slots during contention periods
 Assume n busy stations, and each may transmit with probability p in each
contention time slot
 Once the contention period is over (a station successfully occupies the channel),
it takes X seconds for a frame to be transmitted

 It takes tprop before the next contention period starts.

Throughput for Random Access MACs

B.5 CSMA/CA
 It is used in wireless environments, where collisions cannot be reliably detected.

C. “Taking turns”
 Channel partitioning MAC protocols:
o share the channel efficiently and fairly at high load
o are inefficient at low load: there is delay in channel access, and only 1/N of the bandwidth is allocated even if only 1 node is active!
 Random access MAC protocols:
o are efficient at low load: a single node can fully utilize the channel
o at high load: collision overhead
 "Taking turns" protocols:
o look for the best of both worlds!

C.1 Polling:
 A master node "invites" slave nodes to transmit in turn
 Concerns:
o polling overhead
o latency
o single point of failure (master)

C.2 Token passing:


 Control token passed from one node to next sequentially.
 Token message
 Concerns:
o Token overhead
o Latency
o Single point of failure (token)

C.3 Reservation
Bit Map Protocol can be used for Reservation
 A bit-map protocol:
o A contention period has exactly M slots and a station j announces it has a
frame to send by inserting 1 into slot j

Reservation System Options


 Centralized or distributed system
o Centralized systems: A central controller listens to reservation information,
decides order of transmission, issues grants
o Distributed systems: Each station determines its slot for transmission from
the reservation information
 Single or Multiple Frames
o Single frame reservation: Only one frame transmission can be reserved
within a reservation cycle
o Multiple frame reservation: More than one frame transmission can be reserved within a reservation cycle

11.3 Objectives to remember:
(A) Short Question:
1. What is a Single-bit Error?
 In a single-bit error, only one bit in the data unit has changed.
 Example: 0 0 1 0 0 0 1 0 changed to 0 0 0 0 0 0 1 0.
 So, there is a single-bit error in the data unit.
2. What is a Burst Error?
 Burst error means that two or more bits in the data unit have changed from 1 to 0 or 0 to 1.
3. What is Length of Burst?
 A burst error does not necessarily occur in consecutive bits.
 The distance between the first corrupted bit and the last corrupted bit is called the length of the burst.
 Original data : 0 0 1 0 0 0 1 0
 Received data : 0 1 1 0 1 1 1 0
 The length of the burst error is 5 bits.
4. What is Redundancy?
 Error detection uses the concept of redundancy, which means adding extra bits for detecting errors at the destination.
5. How is Error Correction Done?
 Error correction can be done in two ways:
 1) Whenever an error is detected, retransmit the entire data unit.
 2) Whenever an error is detected, correct it using a technique such as Forward Error Correction or Burst Error Correction.
6. Why do you need error detection?
As the signal is transmitted through a media, the signal gets corrupted because of noise and distortion. In other
words, the media is not reliable. To achieve a reliable communication through this unreliable media, there is
need for detecting the error in the signal so that suitable mechanism can be devised to take corrective actions.
7. Explain different types of Errors?
The errors can be divided into two types: Single-bit error and Burst error.
• Single-bit Error : The term single-bit error means that only one bit of given data unit (such as a byte,
character, or data unit) is changed from 1 to 0 or from 0 to 1.
• Burst Error: The term burst error means that two or more bits in the data unit have changed from 0 to 1 or
vice versa. Note that a burst error does not necessarily mean that the errors occur in consecutive bits.
8. Explain the use of parity check for error detection?
In the Parity Check error detection scheme, a parity bit is added to the end of a block of data. The value of the
bit is selected so that the character has an even number of 1s (even parity) or an odd number of 1s (odd parity).
For odd parity check, the receiver examines the received character and if the total number of 1s is odd, then it
assumes that no error has occurred. If any one bit (or any odd number of bits) is erroneously inverted during
transmission, then the receiver will detect an error.
9. What is forward error correction?
The ability of the receiver to both detect and correct errors is known as forward error correction (FEC).
10. What is backward error correction?
When the receiver detects an error in the data received, it requests back the sender to retransmit the data unit.
11. List the services provided by the Link layer.
 Framing.
 Link access.
 Reliable delivery.
 Error detection and correction.
12. Where Is the Link Layer Implemented?
The link layer is implemented in a network adapter, also sometimes known as a network interface card (NIC).
13. List the 2 taking turn protocols.
 Polling protocol
 Token Passing protocol

(B) Give True or False. Correct False statement with Justification:
1. VRC –Vertical Redundancy Check is an Error Correction Method. (False)
2. LRC –Longitudinal Redundancy Check is an Error Correction Method. ( False )
3. CRC –Cyclic Redundancy Check is an Error Correction Method. ( False )
4. Checksum is an Error Correction Method.( False )
5. Hamming code is an Error Detection Method.(False)

12.0 Fragmentation and Switching
12.1 Fragmentation
Fragmentation is when a datagram has to be broken up into smaller datagrams to fit the
frame size of a certain network. Different networks have different MTUs (maximum
transmission units); when a datagram enters a network with a smaller MTU, the
gateway/router needs to fragment the packet into smaller packets that fit the new MTU.

 IDENTIFICATION (16 bits)


o It is used for Fragmentation.
o Sometimes, a device in the middle of the network path cannot handle the
datagram at the size it was originally transmitted, and must break it into
fragments.
o If an intermediate system needs to break up the datagram, it uses this field
to aid in identifying the fragments.
 FLAGS (3 BITS)
o The flags field contains single-bit flags that indicate whether the datagram
is a fragment, whether it is permitted to be fragmented, and whether the
datagram is the last fragment, or there are more fragments.
o The first bit in this field is always zero.
 FRAGMENT OFFSET (13 BITS)
o When a datagram is fragmented, it is necessary to reassemble the
fragments in the correct order.
o The fragment offset numbers the fragments in such a way that they can be
reassembled correctly.
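A sketch of how these fields could be filled in when a datagram is fragmented; the 4000-byte payload and 1500-byte MTU are hypothetical values, and the offset field counts 8-byte units as described above:

def fragment(payload_len, mtu, header_len=20):
    # Each fragment's data length must be a multiple of 8 bytes (except the last);
    # MF ("more fragments") is 0 only on the last fragment.
    max_data = (mtu - header_len) // 8 * 8
    frags, offset = [], 0
    while offset < payload_len:
        data = min(max_data, payload_len - offset)
        mf = 1 if offset + data < payload_len else 0
        frags.append({"offset_field": offset // 8, "data_len": data, "MF": mf})
        offset += data
    return frags

for f in fragment(4000, 1500):
    print(f)
# offset fields 0, 185, 370; data lengths 1480, 1480, 1040; MF = 1, 1, 0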
12.2 IP (Internet Protocol)
 IP stands for Internet Protocol
 IP provides connectionless, best-effort delivery routing of packets.

 IP is not concerned with the content of the packets but looks for a path to the
destination.
 Four supporting protocols
 ARP (Address Resolution Protocol)
 RARP (Reverse Address Resolution Protocol)
 ICMP (Internet Control Message Protocol)
 IGMP (Internet Group Management Protocol)
 IP is an unreliable and connectionless datagram protocol.
 If reliability is important, IP must be paired with a reliable protocol such as TCP.
 IP transports data in packets called datagrams, each of which is transported separately.
 A datagram is a unit of data.

12.3 Switching:
 Whenever we have multiple devices, we have the problem of how to connect
them to make one-on-one communication possible. One solution is to install a
point-to-point connection between each pair of devices (a mesh topology) or
between a central device and every other device (a star topology).
 These methods are impractical and wasteful when applied to very large
networks. The number and length of the links require too much infrastructure to
be cost efficient, and the majority of those links would be idle most of the time.
 A better solution is switching. A switched network consists of a series of
interlinked nodes, called switches.
 Switches are hardware and/or software devices capable of creating temporary
connections between two or more devices linked to the switch but not to each other. In a
switched network, some of the nodes are connected to the communicating
devices. Others are used only for routing.
 There are three methods of switching.

Switching methods: circuit switching, packet switching, and message switching.

12.3.1 Circuit switching: "It creates a direct physical connection between two devices
such as phones or computers."

 For example in figure, instead of point-to-point communication between the
three computers on left (A, B, and C) to the four computers on the right (D, E,
F and G), requiring 12 links. In figure, computer A is connected through
switches I, II, and III to computer D. By moving the levers of the switches,
any computer on the left can be connected to any computer on the right.
 A circuit switch is a device with n inputs and m outputs that creates a
temporary connection between an input link and an output link. The number
of inputs does not have to match the number of outputs.

 It can use either of two technologies:


(i) Space-division switching,
(ii) Time-division switches.
 Used in the telephone system: network resources in the telephone system are
reserved from your phone to the phone you call when you place the call;
they're released when you hang up.
In short, circuit switching..
• Dedicated communication path between two stations
• Three phases
— Establish
— Transfer
— Disconnect
• Must have switching capacity and channel capacity to establish connection
• Must have intelligence to work out routing
• Inefficient
— Channel capacity dedicated for duration of connection
— If no data, capacity wasted because of dedicated link.
• Set up (connection) takes time
• Once connected, transfer is transparent
• Developed for voice communication (phone)
• Resources dedicated to a particular call

• Much of the time a data connection (line) is idle and facilities are wasted.
• Circuit switching is inflexible. Once a circuit has been established, that circuit is
the path taken by all parts of the transmission whether or not it remains the most
efficient or available.
• Circuit switching sees all transmissions as equal.
• Data rate is fixed-Both ends must operate at the same rate
12.3.2 Packet Switching:
 Circuit switching was designed for voice.
So a better solution for data transmission is packet switching.
Basic Operation:
• Data transmitted in small packets
 Typically 1000 octets
 Longer messages split into series of packets
 Each packet contains a portion of user data plus some control info
• Control info
 Routing (addressing) info
• Packets are received, stored briefly (buffered) and passed on to the next node
 Store and forward
Advantages
• Line efficiency
 Single node to node link can be shared by many packets over time
 Packets queued and transmitted as fast as possible
• Data rate conversion
 Each station connects to the local node at its own speed
 Nodes buffer data if required to equalize rates
• Packets are accepted even when network is busy
 Delivery may slow down
• Priorities can be used
 There are two popular approaches to packet switching:
(i) Datagram approach (Connectionless service)
(ii) Virtual circuit approach. (Connection oriented)
(i) Datagram
 Every packet contains a complete destination address.
 Switch contains a routing table
 Two successive packets may follow different paths.
 Each switch processes packets independently.

(ii) Virtual Circuit Switching
 Based on the connection oriented model
 Virtual connection has to be established first
 Two stage process:- connection setup, data transfer
 Two type of virtual circuits
o Permanent virtual circuits (PVC)
o Switched virtual circuit (SVC)
12.3.3 Message Switching
• A store-and-forward network where the block of transfer is a complete message.
• Since messages can be quite large, this can cause:
– buffering problems
– high mean delay times
12.3.4 Virtual Circuit and Datagram Network Comparison

Issue | Datagram | Virtual Circuit
Connection setup | Not required prior to sending data | Required
Addressing | Each packet contains the full source and destination address | Each packet contains a short virtual-circuit number identifier
State information | Routers do not hold connection state | Routers hold state information for each virtual circuit
Routing | Each packet is routed independently | Route established at setup; all packets follow the same route
Effect of router failure | None, except for packets lost during the crash | All virtual circuits passing through the failed router are terminated
Congestion control | Difficult, since all packets are routed independently and resource requirements can vary | Simple, by pre-allocating enough buffers to each virtual circuit at setup, since the maximum number of circuits is fixed

13.0 Routing Algorithm
13.1 Routing:
The routing algorithm is that part of the network layer software responsible for
deciding which output line an incoming packet should be transmitted on.

Routing algorithms can be grouped into two major classes: nonadaptive and adaptive.
Nonadaptive algorithms do not base their routing decisions on measurements or
estimates of the current traffic and topology. Instead, the choice of the route to use to
get from I to J (for all I and J) is computed in advance, off-line, and downloaded to the
routers when the network is booted. This procedure is sometimes called static routing.

Adaptive algorithms, in contrast, change their routing decisions to reflect changes in the
topology, and usually the traffic as well. Adaptive algorithms differ in where they get
their information (e.g., locally, from adjacent routers, or from all routers), when they
change the routes (e.g., every T sec, when the load changes or when the topology
changes), and what metric is used for optimization (e.g., distance, number of hops, or
estimated transit time). In the following sections we will discuss a variety of routing
algorithms, both static and dynamic.

13.2 Distance Vector Routing

Modern computer networks generally use dynamic routing algorithms rather than the
static ones described above because static algorithms do not take the current network
load into account. Two dynamic algorithms in particular, distance vector routing and
link state routing, are the most popular. In this section we will look at the former
algorithm. In the following section we will study the latter algorithm.

Distance vector routing algorithms operate by having each router maintain a table (i.e, a
vector) giving the best known distance to each destination and which line to use to get
there. These tables are updated by exchanging information with the neighbors.

The distance vector routing algorithm is sometimes called by other names, most
commonly the distributed Bellman-Ford routing algorithm and the Ford-Fulkerson
algorithm, after the researchers who developed it (Bellman, 1957; and Ford and
Fulkerson, 1962). It was the original ARPANET routing algorithm and was also used in
the Internet under the name RIP.

In distance vector routing, each router maintains a routing table indexed by, and
containing one entry for, each router in the subnet. This entry contains two parts: the
preferred outgoing line to use for that destination and an estimate of the time or
distance to that destination. The metric used might be number of hops, time delay in
milliseconds, total number of packets queued along the path, or something similar.

The router is assumed to know the ''distance'' to each of its neighbors. If the metric is
hops, the distance is just one hop. If the metric is queue length, the router simply
examines each queue. If the metric is delay, the router can measure it directly with
special ECHO packets that the receiver just timestamps and sends back as fast as it can.

As an example, assume that delay is used as a metric and that the router knows the
delay to each of its neighbors. Once every T msec each router sends to each neighbor a
list of its estimated delays to each destination. It also receives a similar list from each
neighbor. Imagine that one of these tables has just come in from neighbor X, with Xi
being X's estimate of how long it takes to get to router i. If the router knows that the
delay to X is m msec, it also knows that it can reach router i via X in Xi + m msec. By
performing this calculation for each neighbor, a router can find out which estimate
seems the best and use that estimate and the corresponding line in its new routing table.
Note that the old routing table is not used in the calculation.

This updating process is illustrated in Fig. 13-1. Part (a) shows a subnet. The first four
columns of part (b) show the delay vectors received from the neighbors of router J. A
claims to have a 12-msec delay to B, a 25-msec delay to C, a 40-msec delay to D, etc.
Suppose that J has measured or estimated its delay to its neighbors, A, I, H, and K as 8,
10, 12, and 6 msec, respectively.

Figure 13-1. (a) A subnet. (b) Input from A, I, H, K, and the new routing table for J.

Consider how J computes its new route to router G. It knows that it can get to A in 8
msec, and A claims to be able to get to G in 18 msec, so J knows it can count on a delay

of 26 msec to G if it forwards packets bound for G to A. Similarly, it computes the delay
to G via I, H, and K as 41 (31 + 10), 18 (6 + 12), and 37 (31 + 6) msec, respectively. The
best of these values is 18, so it makes an entry in its routing table that the delay to G is
18 msec and that the route to use is via H. The same calculation is performed for all the
other destinations, with the new routing table shown in the last column of the figure.
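One round of this update can be sketched in Python using the numbers quoted for destination G (only the G entries of the neighbours' vectors are reproduced here):

def dv_update(cost_to_neighbour, neighbour_vectors, destination):
    # Bellman-Ford step: pick the neighbour minimising (cost to neighbour + its advertised cost).
    best_cost, best_hop = float("inf"), None
    for nbr, cost in cost_to_neighbour.items():
        total = cost + neighbour_vectors[nbr][destination]
        if total < best_cost:
            best_cost, best_hop = total, nbr
    return best_cost, best_hop

cost_to_neighbour = {"A": 8, "I": 10, "H": 12, "K": 6}
neighbour_vectors = {"A": {"G": 18}, "I": {"G": 31}, "H": {"G": 6}, "K": {"G": 31}}
print(dv_update(cost_to_neighbour, neighbour_vectors, "G"))   # (18, 'H'), as in the text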

The Count-to-Infinity Problem

Distance vector routing works in theory but has a serious drawback in practice:
although it converges to the correct answer, it may do so slowly. In particular, it reacts
rapidly to good news, but leisurely to bad news. Consider a router whose best route to
destination X is large. If on the next exchange neighbor A suddenly reports a short
delay to X, the router just switches over to using the line to A to send traffic to X. In one
vector exchange, the good news is processed.

To see how fast good news propagates, consider the five-node (linear) subnet of Fig. 13-
2, where the delay metric is the number of hops. Suppose A is down initially and all the
other routers know this. In other words, they have all recorded the delay to A as
infinity.

Figure 13-2. The Count to Infinity Problem

When A comes up, the other routers learn about it via the vector exchanges. For
simplicity we will assume that there is a gigantic gong somewhere that is struck
periodically to initiate a vector exchange at all routers simultaneously. At the time of
the first exchange, B learns that its left neighbor has zero delay to A. B now makes an
entry in its routing table that A is one hop away to the left. All the other routers still
think that A is down. At this point, the routing table entries for A are as shown in the
second row of Fig. 13-3(a). On the next exchange, C learns that B has a path of length 1
to A, so it updates its routing table to indicate a path of length 2, but D and E do not
hear the good news until later. Clearly, the good news is spreading at the rate of one

Prepared By: Prof. Ajay N. Upadhyaya, Asst. Prof. CE Dept, LJIET Page 88
hop per exchange. In a subnet whose longest path is of length N hops, within N
exchanges everyone will know about newly-revived lines and routers.

Now let us consider the situation of Fig. 13-3(b), in which all the lines and routers are
initially up. Routers B, C, D, and E have distances to A of 1, 2, 3, and 4, respectively.
Suddenly A goes down, or alternatively, the line between A and B is cut, which is
effectively the same thing from B's point of view.

At the first packet exchange, B does not hear anything from A. Fortunately, C says: Do
not worry; I have a path to A of length 2. Little does B know that C's path runs through
B itself. For all B knows, C might have ten lines all with separate paths to A of length 2.
As a result, B thinks it can reach A via C, with a path length of 3. D and E do not update
their entries for A on the first exchange.

On the second exchange, C notices that each of its neighbors claims to have a path to A
of length 3. It picks one of them at random and makes its new distance to A 4, as
shown in the third row of Fig. 13-3(b). Subsequent exchanges produce the history
shown in the rest of Fig. 13-3(b).

From this figure, it should be clear why bad news travels slowly: no router ever has a
value more than one higher than the minimum of all its neighbors. Gradually, all
routers work their way up to infinity, but the number of exchanges required depends
on the numerical value used for infinity. For this reason, it is wise to set infinity to the
longest path plus 1. If the metric is time delay, there is no well-defined upper bound, so
a high value is needed to prevent a path with a long delay from being treated as down.
Not entirely surprisingly, this problem is known as the count-to-infinity problem. There
have been a few attempts to solve it (such as split horizon with poisoned reverse in RFC
1058), but none of these work well in general. The core of the problem is that when X
tells Y that it has a path somewhere, Y has no way of knowing whether it itself is on the
path.

13.3 Link State Routing

Distance vector routing was used in the ARPANET until 1979, when it was replaced by
link state routing. Two primary problems caused its demise. First, since the delay metric
was queue length, it did not take line bandwidth into account when choosing routes.
Initially, all the lines were 56 kbps, so line bandwidth was not an issue, but after some
lines had been upgraded to 230 kbps and others to 1.544 Mbps, not taking bandwidth
into account was a major problem. Of course, it would have been possible to change the
delay metric to factor in line bandwidth, but a second problem also existed, namely, the
algorithm often took too long to converge (the count-to-infinity problem). For these
reasons, it was replaced by an entirely new algorithm, now called link state routing.
Variants of link state routing are now widely used.
Figure 13-3. a) Router b) Routing Information

The idea behind link state routing is simple and can be stated as five steps. Each router
must do the following:

1. Discover its neighbors and learn their network addresses.


2. Measure the delay or cost to each of its neighbors.
3. Construct a packet telling all it has just learned.
4. Send this packet to all other routers.
5. Compute the shortest path to every other router.

In effect, the complete topology and all delays are experimentally measured and
distributed to every router. Then Dijkstra's algorithm can be run to find the shortest
path to every other router.

Figure 13-4. The Packet Buffer for Router B

Link-state routing protocols were developed to alleviate the convergence and loop
issues of distance-vector protocols. Link-state protocols maintain three separate tables:

 Neighbor table – contains a list of all neighbors, and the interface each neighbor
is connected off of. Neighbors are formed by sending Hello packets.

 Topology table – otherwise known as the "link-state" table; contains a map of all links within an area, including each link's status.
 Shortest-Path table – contains the best routes to each particular destination (otherwise known as the "routing" table).

13.4 Difference between Distance Vector Routing and Link State Routing

Particular | Distance Vector Routing | Link State Routing
Example of protocol | RIP, IGRP | OSPF, IS-IS
Algorithm | Bellman-Ford | Shortest Path First (SPF)
Subnet support | Only classful routing | Classful, classless, VLSM, summarization
Scale | Small, limited hop count | Large
Table creation | Only a routing table | Routing table, neighbor table and topology table
Convergence time | Very slow | Fast
Updating | On broadcast | On multicast
Updating based on | Rumor (neighbors' routing tables) | Topology table
Updating time | When the periodic timer expires | Whenever a change occurs
Updating contents | Whole routing table | Only the changed information
Hop count | Limited | Unlimited
Memory needed | Less | High
CPU cycles | Less | High
Configuration | Simple | Advanced
Risk of layer-3 loop | Yes | No
Hierarchical structure | No | Yes
Open standard | Yes | Yes

13.5 Shortest Path Routing

Let us begin our study of feasible routing algorithms with a technique that is widely
used in many forms because it is simple and easy to understand. The idea is to build a
graph of the subnet, with each node of the graph representing a router and each arc of
the graph representing a communication line (often called a link). To choose a route

between a given pair of routers, the algorithm just finds the shortest path between them
on the graph.

The concept of a shortest path deserves some explanation. One way of measuring path
length is the number of hops. Using this metric, the paths ABC and ABE in Fig. 13-5 are
equally long. Another metric is the geographic distance in kilometers, in which case
ABC is clearly much longer than ABE (assuming the figure is drawn to scale).

However, many other metrics besides hops and physical distance are also possible. For
example, each arc could be labelled with the mean queueing and transmission delay for
some standard test packet as determined by hourly test runs. With this graph labeling,
the shortest path is the fastest path rather than the path with the fewest arcs or
kilometers.

In the general case, the labels on the arcs could be computed as a function of the
distance, bandwidth, average traffic, communication cost, mean queue length,
measured delay, and other factors. By changing the weighting function, the algorithm
would then compute the ''shortest'' path measured according to any one of a number of
criteria or to a combination of criteria.

Figure 13-5. The first five steps used in computing the shortest path from A to D. The
arrows indicate the working node.

Several algorithms for computing the shortest path between two nodes of a graph are
known. This one is due to Dijkstra (1959). Each node is labeled (in parentheses) with its
distance from the source node along the best known path. Initially, no paths are known,
so all nodes are labeled with infinity. As the algorithm proceeds and paths are found,
the labels may change, reflecting better paths. A label may be either tentative or
permanent. Initially, all labels are tentative. When it is discovered that a label represents
the shortest possible path from the source to that node, it is made permanent and never
changed thereafter.

To illustrate how the labeling algorithm works, look at the weighted, undirected graph
of Fig. 13-5(a), where the weights represent, for example, distance. We want to find the
shortest path from A to D. We start out by marking node A as permanent, indicated by
a filled-in circle. Then we examine, in turn, each of the nodes adjacent to A (the working
node), relabeling each one with the distance to A. Whenever a node is relabeled, we also
label it with the node from which the probe was made so that we can reconstruct the
final path later. Having examined each of the nodes adjacent to A, we examine all the
tentatively labeled nodes in the whole graph and make the one with the smallest label
permanent, as shown in Fig. 13-5(b). This one becomes the new working node.

We now start at B and examine all nodes adjacent to it. If the sum of the label on B and
the distance from B to the node being considered is less than the label on that node, we
have a shorter path, so the node is relabeled.

After all the nodes adjacent to the working node have been inspected and the tentative
labels changed if possible, the entire graph is searched for the tentatively-labeled node
with the smallest value. This node is made permanent and becomes the working node
for the next round. Figure 13-5 shows the first five steps of the algorithm.

To see why the algorithm works, look at Fig. 13-5(c). At that point we have just made E
permanent. Suppose that there were a shorter path than ABE, say AXYZE. There are
two possibilities: either node Z has already been made permanent, or it has not been. If
it has, then E has already been probed (on the round following the one when Z was
made permanent), so the AXYZE path has not escaped our attention and thus cannot be
a shorter path.

Now consider the case where Z is still tentatively labeled. Either the label at Z is greater
than or equal to that at E, in which case AXYZE cannot be a shorter path than ABE, or it
is less than that of E, in which case Z and not E will become permanent first, allowing E
to be probed from Z.
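The labelling procedure corresponds to the standard priority-queue form of Dijkstra's algorithm; a Python sketch over a small hypothetical graph (the weights of Fig. 13-5 are not reproduced here):

import heapq

def dijkstra(graph, source, target):
    # Tentative labels shrink until a node is popped from the queue (made permanent).
    dist, prev = {source: 0}, {}
    pq, permanent = [(0, source)], set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in permanent:
            continue
        permanent.add(u)
        if u == target:
            break
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u          # better tentative label found
                heapq.heappush(pq, (nd, v))
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return dist[target], path[::-1]

graph = {"A": {"B": 2, "C": 5}, "B": {"A": 2, "C": 1, "D": 4},
         "C": {"A": 5, "B": 1, "D": 2}, "D": {"B": 4, "C": 2}}
print(dijkstra(graph, "A", "D"))   # (5, ['A', 'B', 'C', 'D'])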

13.6 Delivery semantics
Routing schemes differ in their delivery semantics:
 Unicast delivers a message to a single specific node. Unicast is the dominant
form of message delivery on the Internet.
 Anycast delivers a message to anyone out of a group of nodes, typically the one
nearest to the source
 Multicast delivers a message to a group of nodes that have expressed interest in
receiving the message
o In computer networking, multicast (one-to-many or many-to-many
distribution) is group communication where information is addressed to a
group of destination computers simultaneously. Multicast should not be
confused with physical layer point-to-multipoint communication.
o Group communication may either be application layer multicast or
network assisted multicast, where the latter makes it possible for the
source to efficiently send to the group in a single transmission. Copies are
automatically created in other network elements, such as routers, switches
and cellular network base stations, but only to network segments that
currently contain members of the group.
 Geocast delivers a message to a geographic area
 Broadcast delivers a message to all nodes in the network
o In telecommunication and information theory, broadcasting refers to a
method of transferring a message to all recipients simultaneously.
Broadcasting can be performed as a high level operation in a program, for
example broadcasting Message Passing Interface, or it may be a low level
networking operation, for example broadcasting on Ethernet.

13.7 Comparison of Hub, Switch & Router.

Criteria | Hub | Switch | Router
Layer | Physical layer | Data link layer / network layer | Network layer
Device type | Networking device | Networking device | Internetworking device
Type | Active, passive | Store & forward, cut-through; layer 2 switch (L2), layer 3 switch (L3) | Simple router, multiprotocol router, wired router, wireless router
Device category | Dumb device | Intelligent device | Intelligent device
Transmission mode | Half duplex | Full duplex | Full duplex
Data transmission form | Bits | Frame (L2 switch); frame & packet (L3 switch) | Packet
Transmission type | Broadcasting | At initial level broadcast, then unicast & multicast | At initial level broadcast, then unicast & multicast
Address used for data transmission | Does not use any address (broadcasts every time) | Uses MAC addresses | Uses IP addresses
Table | Does not maintain a lookup table | Stores MAC addresses in a lookup table and maintains the table on its own | Stores IP addresses in a routing table and maintains the table on its own
Used in (LAN/MAN/WAN) | LAN | LAN | MAN, WAN
Speed | 10/100 Mbps | 10/100 Mbps, 1 Gbps | 1-10 Mbps (wireless), 100 Mbps (wired)
Bandwidth sharing | If the hub speed is 10/100 Mbps, that bandwidth is shared among all of its ports | If the switch speed is 10/100 Mbps, each of its ports gets 10/100 Mbps | Dynamic bandwidth sharing (either static or dynamic sharing can be enabled for modular cable interfaces; the default percent-value is 0, and the range is 1-96)
Collision | More | Less | Less
Ports | 4/8/16/24 | 8/16/24/48 | 2/4/8
Used for | Connecting two or more nodes | Connecting two or more nodes | Connecting two or more networks
Price (approx.) | Starts from Rs. 400 | Starts from Rs. 2,500 (L2), Rs. 15,000 (L3) | Rs. 20,000

Latest models | Netgear EN 104, Cisco 1358 Micro Hub, Cisco 1538 Series, Linksys NMH405 | Alcatel's OmniSwitch 9000, Cisco Catalyst 4500 and 6500 (10 Gbps), 3Com 7700, 7900E, 8800 | Linksys WRT54GL, Juniper MX & EX series, Cisco 3900, 2900, 1900
Features | All-time broadcasting; regenerates and transmits the data | Port range on/off, priority setting of ports, VLAN, port mirroring | Firewall, VPN, dynamic handling of bandwidth
Faster | - | In a LAN environment an L3 switch is faster than a router (built-in switching hardware) | In a different-network environment (MAN/WAN) a router is faster than an L3 switch
NAT (Network Address Translation) | - | - | Can perform NAT
Routing decision | - | Takes faster routing decisions | Takes more time for complicated routing decisions

Prepared By: Prof. Ajay N. Upadhyaya, Asst. Prof. CE Dept, LJIET Page 97
Objectives to Remember
(A) Fill in the Blanks:
1. Computer networks that provide only a connection service at the network layer are called _______
while computer networks that provide only a connectionless service at the network layer are
called_________. (virtual-circuit (VC) networks, datagram networks )
2. The forwarding functions implemented by a router’s input ports, output ports, and switching
fabric are also known as _________. (router forwarding plane)
3. Count-to-Infinity problem occurs in ________. (distance vector routing)

(B) Short Question:


1. What is Routing? (Nov- 2013) [LJIET]
The network layer must determine the route or path taken by packets as they flow from a sender to
a receiver. This process is known as routing. Or
Routing refers to the network-wide process that determines the end-to-end paths that packets
take from source to destination
2. What is the name of a network-layer packet? : Datagram
3. What is the difference between routing and forwarding?
Forwarding is about moving a packet from a router's input link to the appropriate output link.
Routing is about determining the end-to-end routes between sources and destinations.
4. What are the two most important network-layer functions in a datagram network? What are the
three most important network-layer functions in a virtual circuit network?
Datagram-based network layer: forwarding; routing. Additional function of VC based network
layer: call setup.
5. Do the routers in both datagram networks and virtual-circuit networks use forwarding tables?
Yes
6. Define signalling messages and signalling protocols
The messages that the end systems send into the network to initiate or terminate a VC, and the
messages passed between the routers to set up the VC (that is, to modify connection state in router
tables) are known as signaling messages, and the protocols used to exchange these messages are
often referred to as signaling protocols.
7. Discuss why each input port in a high-speed router stores a shadow copy of the forwarding table.
With the shadow copy, the forwarding decision is made locally, at each input port, without
invoking the centralized routing processor. Such decentralized forwarding avoids creating a
forwarding processing bottleneck at a single point within the router.
8. List and briefly describe switching fabrics.
Switching via memory; switching via a bus; switching via an interconnection network
9. Describe how packet loss can occur at input ports. Describe how packet loss at input ports can be
eliminated (without using infinite buffers).
Packet loss occurs if queue size at the input port grows large because of slow switching fabric
speed and thus exhausting router’s buffer space. It can be eliminated if the switching fabric speed
is at least n times as fast as the input line speed, where n is the number of input ports.
10. Describe how packet loss can occur at output ports. Can this loss be prevented by increasing the
switch fabric speed?
Packet loss can occur if the queue size at the output port grows large because of slow outgoing
line-speed.
11. What is HOL blocking? Does it occur in input ports or output ports?
HOL blocking – a queued packet in an input queue must wait for transfer through the fabric
because it is blocked by another packet at the head of the line. It occurs at the input port.
12. Is it necessary that every autonomous system use the same intra-AS routing algorithm? Why or
why not?
No. Each AS has administrative autonomy for routing within an AS.
14.0 IP Datagram
14.1 IP (Internet Protocol)
- Unreliable and connectionless datagram protocol.
- If reliability is important, then IP must be paired with a reliable protocol such as TCP.
- IP transports data in packets called datagrams, each of which is transported separately.
- A datagram is a unit of data.
 Datagram is a unit of data

14.2 IP Datagram
- Packets of IP are called datagrams.
- Maximum size of an IP datagram is 65,535 bytes.
- An IP datagram contains two parts: Header and Data.
- Size of the header may vary between 20 and 60 bytes.
- The header contains the information regarding routing and delivery.
Description of each field of IP Datagram:
 VER (4 BITS)
o The version field is set to the value '4' in decimal or '0100' in binary.
o The value indicates the version of IP (4 or 6, there is no version 5).
 HLEN (4 BITS)
o Defines the length of the header.
o Length is expressed in multiples of four bytes.
o The four bits can represent a number between 0 and 15, which multiplied by 4
gives a maximum header length of 60 bytes.
 Service types (8 Bits)
o Define How the Datagram should be handled.
o Define the Priority.
 TOTAL LENGTH (16 BITS)
o This informs the receiver of the datagram where the end of the data in this
datagram is.
o This is why an IP datagram can be up to 65,535 bytes long, as that is the
maximum value of this 16-bit field.
 IDENTIFICATION (16 bits)
o It is used for Fragmentation.

o Sometimes, a device in the middle of the network path cannot handle the
datagram at the size it was originally transmitted, and must break it into
fragments.
o If an intermediate system needs to break up the datagram, it uses this field
to aid in identifying the fragments.
 FLAGS (3 BITS)
o The flags field contains single-bit flags that indicate whether the datagram
is a fragment, whether it is permitted to be fragmented, and whether the
datagram is the last fragment, or there are more fragments.
o The first bit in this field is always zero.
 FRAGMENT OFFSET (13 BITS)
o When a datagram is fragmented, it is necessary to reassemble the
fragments in the correct order.
o The fragment offset numbers the fragments in such a way that they can be
reassembled correctly.
 TIME TO LIVE (8 BITS)
o This field determines how long a datagram will exist.
o At each hop along a network path, the datagram is opened and its time-to-live
field is decremented by one (or more than one in some cases).
o When the time to live field reaches zero, the datagram is said to have
'expired' and is discarded.
o This prevents the congestion on the network that is created when a datagram
cannot be forwarded to its destination.
o Most applications set the time to live field to 30 or 32 by default.
 PROTOCOL (8 BITS)
o This indicates what type of protocol is encapsulated within the IP
datagram. Some of the common values seen in this field include:
ICMP = 1, IGMP = 2, TCP = 6, UDP = 17
 HEADER CHECKSUM (16 BITS)
o The checksum allows IP to detect datagram with corrupted headers and
discard them.
o Since the time to live field changes at each hop, the checksum must be re-
calculated at each hop.
 SOURCE ADDRESS (32 BITS)
This is the IP address of the sender of the IP datagram.
 DESTINATION ADDRESS (32 BITS)
This is the IP address of the intended receiver(s) of the datagram. If the host
portion of this address is set to all 1's, the datagram is an 'all hosts' broadcast.
 OPTIONS & PADDING (VARIABLE)
o Various options can be included in the header by a particular vendor's
implementation of IP.
o If options are included, the header must be padded with zeroes to fill in any
unused octets so that the header is a multiple of 32 bits, and matches the count of
bytes in the Header Length (HLEN) field.
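To make the header layout above concrete, the following sketch (an illustrative Python fragment; the function names are my own and not part of any standard library) unpacks the fixed 20-byte portion of an IPv4 header with the struct module and verifies the header checksum using 16-bit one's-complement arithmetic.

    import struct

    def ones_complement_sum16(data):
        """Return the 16-bit one's-complement sum of data (padded if its length is odd)."""
        if len(data) % 2:
            data += b"\x00"
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]
            total = (total & 0xFFFF) + (total >> 16)    # fold any carry back into 16 bits
        return total

    def parse_ipv4_header(header):
        """Parse the fixed 20-byte IPv4 header described in this section."""
        ver_hlen, tos, total_len, ident, flags_frag, ttl, proto, checksum, src, dst = \
            struct.unpack("!BBHHHBBH4s4s", header[:20])
        return {
            "version": ver_hlen >> 4,
            "hlen_bytes": (ver_hlen & 0x0F) * 4,        # HLEN is counted in 4-byte words
            "service_type": tos,
            "total_length": total_len,
            "identification": ident,
            "flags_and_fragment_offset": flags_frag,
            "time_to_live": ttl,
            "protocol": proto,                          # 1 = ICMP, 2 = IGMP, 6 = TCP, 17 = UDP
            "checksum_valid": ones_complement_sum16(header[:20]) == 0xFFFF,
            "source": ".".join(str(b) for b in src),
            "destination": ".".join(str(b) for b in dst),
        }

A header is accepted only when the one's-complement sum over all 20 bytes, including the stored checksum, comes out as 0xFFFF; because the time-to-live field changes at every hop, each router recomputes the checksum before forwarding.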


14.3 IP Addressing
14.3.1 Introduction
- An IP address is used to uniquely identify a node in the network.
- IPv4 addresses are 32 bits long.
- An address is divided into two main parts: Net ID and Host ID.

 Classes of IPV4 Address

- Dotted decimal notation is used to represent IPv4 addresses.

 Class Ranges of Internet Addresses

14.3.2 Network with different address

14.4 Other Protocols in the N/W Layer


TCP/IP supports four other Protocols in the network Layer: ARP, RARP, ICMP, and
IGMP.

14.4.1 ARP [Address Resolution Protocol]
 ARP associates an IP Address with the Physical Address. [IP to MAC
Binding]
 ARP is used to find the Physical Address of the node when its internet
address is known.
- Anytime a host needs to find the physical address of another host on its network,
it sends an ARP query packet that includes the IP address and broadcasts it over
the network.
- Every host in the network receives the ARP packet, but only the host matching the
IP address replies back and gives its own physical address.
 ARP - is a low-level protocol used to bind addresses dynamically.
 ARP allows a host to find a physical address of a target host on the same
physical network, given only its IP address.
 ARP broadcasts special packets with the destination’s IP address to ALL
hosts.
- The destination host (only) will respond with its physical address.
 When the response is received, the sender uses the physical address of
destination host to send all packets.
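The toy simulation below (a conceptual Python sketch with made-up IP and MAC values; real ARP traffic is generated by the operating system, not by application code) mirrors the request/reply logic just described: the query is broadcast to every host on the network, only the owner of the requested IP address answers with its physical address, and the binding is cached for later use.

    # Hypothetical hosts on one physical network: IP address -> MAC address.
    hosts = {
        "192.168.1.10": "AA:BB:CC:00:00:10",
        "192.168.1.20": "AA:BB:CC:00:00:20",
        "192.168.1.30": "AA:BB:CC:00:00:30",
    }

    def arp_request(target_ip):
        """Broadcast an ARP query; only the owner of target_ip replies with its MAC."""
        for ip, mac in hosts.items():          # broadcast: every host sees the query
            if ip == target_ip:                # ...but only the matching host replies
                return mac
        return None                            # no reply: the host is not on this network

    arp_cache = {}                             # IP -> MAC bindings learned so far

    def resolve(ip):
        if ip not in arp_cache:                # consult the cache before broadcasting
            mac = arp_request(ip)
            if mac is not None:
                arp_cache[ip] = mac
        return arp_cache.get(ip)

    print(resolve("192.168.1.20"))             # AA:BB:CC:00:00:20, learned via ARP and cached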

ARP packet

Encapsulation of ARP packet

Four cases using ARP

14.4.2 RARP [Reverse Address Resolution Protocol]

 RARP associates a Physical Address with the IP Address. [MAC to IP binding]


 RARP is used to find the Internet Address of the node when its Physical address
is known.
 RARP works much like ARP but mapping address differently.
 RARP requests and responses use the same frame format as ARP.

Encapsulation of RARP packet

14.4.3 ICMP [Internet Control Message Protocol]

- The Internet Control Message Protocol (ICMP) is a mechanism used by hosts and
routers to send notifications of datagram problems back to the sender.
 It is tightly integrated with IP.
 ICMP messages, delivered in IP packets, are used for out-of-band messages
related to network operation or mis-operation.
- Some of ICMP's functions are to:
  - Announce network errors.
  - Announce network congestion.
  - Assist troubleshooting.
  - Announce timeouts.
- An ICMP message consists of an 8-byte header and a variable-size data section.

14.4.4 IGMP [Internet Group Management Protocol]
- The IP protocol can be viewed as supporting three types of communication: unicasting,
multicasting, and broadcasting.
 Unicasting is the communication between one sender and one Receiver; it is one to one
communication.
 Broadcasting is a one type of communication in which one sender send message to all nodes
available in the network.
- Some process is needed to send the same message to a large number of receivers
simultaneously. This is called multicasting.
 Ex:-Multiple Stock brokers can simultaneously be informed of changes in price, Video on
demand.
- IP addressing supports multicast addresses.
- All multicast addresses start with 1110 (Class D); the remaining 28 bits are group bits.

IGMP message type

14.5 IPV6 address

It’s important to understand that IPv6 is much more than an extension of IPv4
addressing. IPv6, first defined in RFC 2460, is a complete implementation of the
network layer of the TCP/IP protocol stack and it covers a lot more than simple address
space extension from 32 to 128 bits (the mechanism that increases IPv6’s ability to
allocate almost unlimited addresses to all the devices in the world for years to come).
IPv6 offers many improvements over IPv4, and Table 1 compares IPv4 and IPv6
operation at a glance.
- More efficient routing: IPv6 routers no longer have to fragment packets, an
overhead-intensive process that just slows a network down.
- Quality of service (QoS) built in: IPv4 has no way to distinguish delay-sensitive
packets from bulk data transfers, requiring extensive workarounds, but IPv6 does.
- Elimination of NAT to extend address spaces: IPv6 increases the IPv4 address size
from 32 bits (about 4 billion addresses) to 128 bits (enough for every molecule in
the solar system).
- Network layer security built in (IPsec): security, always a challenge in IPv4, is an
integral part of IPv6.
- Stateless address autoconfiguration for easier network administration: many IPv4
installs were complicated by manual default router and address assignment; IPv6
handles this in an automated fashion.
- Improved header structure with less processing overhead: many of the fields in the
IPv4 header were optional and used infrequently; IPv6 eliminates these fields
(options are handled differently).

IPv4 and IPv6 Comparisons:

IPv4 | IPv6
32-bit (4-byte) addresses, supporting 4,294,967,296 addresses (although many were lost to special purposes, like 10.0.0.0 and 127.0.0.0) | 128-bit (16-byte) addresses, supporting 2^128 (about 3.4 x 10^38) addresses
Address shortages: IPv4 supports 4.3 x 10^9 (4.3 billion) addresses, which is inadequate to give one (or more, if they possess more than one device) to every living person | Larger address space: IPv6 supports 3.4 x 10^38 addresses, or about 5 x 10^28 (50 octillion) for each of the roughly 6.5 billion people alive today
NAT can be used to extend address limitations | No NAT support (by design)
IP addresses assigned to hosts by DHCP or static configuration | IP addresses self-assigned to hosts with stateless address autoconfiguration or DHCPv6
IPsec support optional | IPsec support required
Options integrated in header fields | Options supported with extension headers (simpler header format)
Has broadcast addresses for all devices | No such concept in IPv6 (uses multicast groups)
Uses 127.0.0.1 as the loopback address | Uses ::1 as the loopback address
IPv4 is subdivided into classes A-E | IPv6 is classless; it uses a prefix and an interface identifier (ID)
IPv4 header has 20 bytes | IPv6 header is double that, at 40 bytes
IPv4 header has many fields (13 fields) | IPv6 header has fewer fields (8 fields)
IPv4 addresses use a subnet mask | IPv6 uses a prefix length
IPv4 has a lack of security; it was never designed to be secure (originally designed for an isolated military network, then adapted for a public educational & research network) | IPv6 has built-in strong security (encryption and authentication)

IPV6 Header Format

IPv6 packets have their own frame Ethertype value, 0x86dd, making it easy for
receivers that must handle both IPv4 and IPv6 to distinguish the frame content on the
same interface. The IPv6 header comprises the following fields:

- Version: A four-bit field for the IP version number (0x06).
- Traffic Class: An 8-bit field that identifies the major class of the packet content
(for example, voice or video packets). The default value is 0, meaning it is
ordinary bulk data (such as FTP) and requires no special handling.
- Flow Label: A 20-bit field used to label packets belonging to the same flow (those
with the same values in several TCP/IP header parameters). The flow label is
normally 0 (flows are detected in other ways).
- Payload Length: A 16-bit field giving the length of the packet in bytes, excluding
the IPv6 header.
- Next Header: An 8-bit field giving the type of header immediately following the
IPv6 header (this serves the same function as the Protocol field in IPv4).
- Hop Limit: An 8-bit field set by the source host and decremented by 1 at each
router. Packets are discarded if the Hop Limit is decremented to zero (this replaces
the IPv4 Time To Live field). Generally, implementers choose the default to use,
but values such as 64 or 128 are common.
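As an illustration of this fixed 40-byte layout, the short Python sketch below (an assumed example for study purposes only) unpacks the fields just listed with the struct module.

    import struct

    def parse_ipv6_header(header):
        """Parse the fixed 40-byte IPv6 header described above."""
        first_word, payload_len, next_header, hop_limit = struct.unpack("!IHBB", header[:8])
        src = header[8:24]                               # 128-bit source address
        dst = header[24:40]                              # 128-bit destination address
        return {
            "version": first_word >> 28,                 # should be 6
            "traffic_class": (first_word >> 20) & 0xFF,  # 8 bits
            "flow_label": first_word & 0xFFFFF,          # 20 bits
            "payload_length": payload_len,               # excludes the 40-byte header
            "next_header": next_header,                  # e.g. 6 = TCP, 17 = UDP
            "hop_limit": hop_limit,
            "source": src.hex(),
            "destination": dst.hex(),
        }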

15.0 Subnetting:

15.1 Subnetting:
- Subnetting is a process of dividing a network into sub-networks (subnets).
- Host ID bits are borrowed and used as part of the Net ID.
 A Network with Two Levels of Hierarchy

 A Network with Three Levels of Hierarchy

 Addresses with and without Subnetting

 Masking: Masking is a process that extracts the address of the physical network
from an IP address.

15.2 Classful and Classless IP

15.3 Subnetting example for Class B address


Let us take Class B, where the default mask is 255.255.0.0 (prefix /16), which allows
65,534 hosts on that one single network. Below is how it works while subnetting.
15.4 Subnetting example for Class C

The default subnet mask of Class C is 255.255.255.0. The CIDR notation of Class C is /24,
which means 24 bits of the IP address are already consumed by the network portion and we
have 8 host bits to work with. We cannot skip bits when turning them on; subnetting moves
from left to right. So Class C subnet masks can only be the following:

As we have already discussed earlier, we must keep at least 2 host bits for assigning IP
addresses to hosts, which means we cannot use /31 and /32 for subnetting.

/25
CIDR /25 has subnet mask 255.255.255.128 and 128 is 10000000 in binary. We used one
host bit in network address.

N = 1 [number of host bits used in the network portion]
H = 7 [remaining host bits]
Total subnets (2^N): 2^1 = 2
Block size (256 - subnet mask): 256 - 128 = 128
Valid subnets (count blocks from 0): 0, 128
Total hosts (2^H): 2^7 = 128
Valid hosts per subnet (total hosts - 2): 128 - 2 = 126

/26

CIDR /26 has subnet mask 255.255.255.192 and 192 is 11000000 in binary. We used two
host bits in network address.

N = 2
H = 6
Total subnets (2^N): 2^2 = 4
Block size (256 - subnet mask): 256 - 192 = 64
Valid subnets (count blocks from 0): 0, 64, 128, 192
Total hosts (2^H): 2^6 = 64
Valid hosts per subnet (total hosts - 2): 64 - 2 = 62

/27
CIDR /27 has subnet mask 255.255.255.224 and 224 is 11100000 in binary. We used three
host bits in network address.

N = 3
H = 5
Total subnets (2^N): 2^3 = 8
Block size (256 - subnet mask): 256 - 224 = 32
Valid subnets (count blocks from 0): 0, 32, 64, 96, 128, 160, 192, 224
Total hosts (2^H): 2^5 = 32
Valid hosts per subnet (total hosts - 2): 32 - 2 = 30

Sub = Subnet
/28
CIDR /28 has subnet mask 255.255.255.240 and 240 is 11110000 in binary. We used four
host bits in network address.

N = 4
H = 4
Total subnets (2^N): 2^4 = 16
Block size (256 - subnet mask): 256 - 240 = 16
Valid subnets (count blocks from 0): 0, 16, 32, 48, 64, 80, 96, 112, 128, 144, 160, 176, 192, 208, 224, 240
Total hosts (2^H): 2^4 = 16
Valid hosts per subnet (total hosts - 2): 16 - 2 = 14

/29
CIDR /29 has subnet mask 255.255.255.248 and 248 is 11111000 in binary. We used five
host bits in network address.

N = 5
H = 3
Total subnets (2^N): 2^5 = 32
Block size (256 - subnet mask): 256 - 248 = 8
Valid subnets (count blocks from 0): 0, 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112,
120, 128, 136, 144, 152, 160, 168, 176, 184, 192, 200, 208, 216, 224, 232, 240, 248
Total hosts (2^H): 2^3 = 8
Valid hosts per subnet (total hosts - 2): 8 - 2 = 6

/30
CIDR /30 has subnet mask 255.255.255.252 and 252 is 11111100 in binary. We used six
host bits in network address.

N = 6
H = 2
Total subnets (2^N): 2^6 = 64
Block size (256 - subnet mask): 256 - 252 = 4
Valid subnets (count blocks from 0): 0, 4, 8, 12, 16, 20, 24, 28, 32, 36, 40, 44, 48, 52, 56, 60,
64, 68, 72, 76, 80, 84, 88, 92, 96, 100, 104, 108, 112, 116, 120, 124, 128, 132, 136, 140, 144,
148, 152, 156, 160, 164, 168, 172, 176, 180, 184, 188, 192, 196, 200, 204, 208, 212, 216, 220,
224, 228, 232, 236, 240, 244, 248, 252
Total hosts (2^H): 2^2 = 4
Valid hosts per subnet (total hosts - 2): 4 - 2 = 2

15.5 Subnetting Solved examples

1. In a block address, we know the IP address of one host is 182.44.82.16/26. What are
the first address (network address) and the last address (limited broadcast address) in
this block?
Answer :)
182.44.82.0 is the subnet address
182.44.82.63 is the broadcast address

2. An organization is granted the block 130.56.0.0/16. The administrator wants to


create 1024 subnets.
a. Find the subnet mask.
b. Find the number of addresses in each subnet.
c. Find the first and last addresses in subnet 1.
d. Find the first and last addresses in subnet 1024.
Answer :)
a) 255.255.255.192 will be the subnet mask
b) 62 valid addresses can exist in each subnet
c) The first address in subnet 1 will be 130.56.0.1 and the last address in subnet 1 will be
130.56.0.62
d) The first address in subnet 1024 will be 130.56.255.193 and the last address in subnet
1024 will be 130.56.255.254
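The arithmetic in both solved examples can be cross-checked with Python's standard ipaddress module, as in the short sketch below (shown purely as a verification aid, not as part of the original examples).

    import ipaddress

    # Example 1: host 182.44.82.16/26
    net = ipaddress.ip_network("182.44.82.16/26", strict=False)
    print(net.network_address)        # 182.44.82.0   (first / network address)
    print(net.broadcast_address)      # 182.44.82.63  (last / broadcast address)

    # Example 2: 130.56.0.0/16 divided into 1024 subnets (10 extra bits -> /26)
    subnets = list(ipaddress.ip_network("130.56.0.0/16").subnets(new_prefix=26))
    print(len(subnets), subnets[0].netmask)       # 1024 255.255.255.192
    first = list(subnets[0].hosts())
    last = list(subnets[-1].hosts())
    print(first[0], first[-1])                    # 130.56.0.1 130.56.0.62
    print(last[0], last[-1])                      # 130.56.255.193 130.56.255.254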

16.0 Dynamic Routing Protocols
16.1 IP routing:
In general terms, routing is the process of forwarding packets between connected
networks. For TCP/IP-based networks, routing is part of Internet Protocol (IP) and is
used in combination with other network protocol services to provide forwarding
capabilities between hosts that are located on separate network segments within a
larger TCP/IP-based network.

IP routing is implemented, operated and managed by the router. It works when a


device on a local network sends a packet toward a destination node that's external to
the network. For IP routing, the external network is any network that requires the
transmission of data through one or more routers before reaching the destination. Each
network's router maintains a table of IP addresses and details of router or other
networks to which it has previously been connected. Once it receives the packet from
the local computer/network, it matches the destination IP address to its list of networks.
If a match is found, the packet is routed to the corresponding router or list of routers,
through which it must pass to reach the destination node.
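The matching step described above is, at its core, a longest-prefix lookup. The sketch below (a simplified Python illustration with a made-up forwarding table; real routers use far faster data structures) compares the destination address against the table of network prefixes and forwards on the most specific match, falling back to a default route otherwise.

    import ipaddress

    # Hypothetical forwarding table: network prefix -> next-hop router address.
    forwarding_table = {
        ipaddress.ip_network("10.0.0.0/8"):  "192.168.0.2",
        ipaddress.ip_network("10.1.0.0/16"): "192.168.0.3",
        ipaddress.ip_network("0.0.0.0/0"):   "192.168.0.1",   # default route
    }

    def next_hop(destination):
        dest = ipaddress.ip_address(destination)
        # Collect all matching prefixes and pick the most specific (longest) one.
        matches = [net for net in forwarding_table if dest in net]
        best = max(matches, key=lambda net: net.prefixlen)
        return forwarding_table[best]

    print(next_hop("10.1.2.3"))   # 192.168.0.3, the more specific /16 wins
    print(next_hop("8.8.8.8"))    # 192.168.0.1, only the default route matches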

16.1.1 Static vs. Dynamic Routing


There are two basic methods of building a routing table:
• Static Routing
• Dynamic Routing
A static routing table is created, maintained, and updated by a network administrator,
manually. A static route to every network must be configured on every router for full
connectivity. This provides a granular level of control over routing, but quickly
becomes impractical on large networks.
Routers will not share static routes with each other, thus reducing CPU/RAM overhead
and saving bandwidth. However, static routing is not fault-tolerant, as any change to the
routing infrastructure (such as a link going down, or a new network added) requires
manual intervention. Routers operating in a purely static environment cannot
seamlessly choose a better route if a link becomes unavailable.
Static routes have an Administrative Distance (AD) of 1, and thus are always preferred
over dynamic routes, unless the default AD is changed. A static route with an adjusted
AD is called a floating static route, and is covered in greater detail in another guide.
A dynamic routing table is created, maintained, and updated by a routing protocol
running on the router. Examples of routing protocols include RIP (Routing Information
Protocol), EIGRP (Enhanced Interior Gateway Routing Protocol), and OSPF (Open
Shortest Path First). Specific dynamic routing protocols are covered in great detail in
other guides.
Routers do share dynamic routing information with each other, which increases CPU,
RAM, and bandwidth usage. However, routing protocols are capable of dynamically
choosing a different (or better) path when there is a change to the routing infrastructure.
A routing protocol dynamically builds the network, topology, and next hop information
in routing tables (such as RIP, EIGRP, etc.)

Static Routing:
Advantages:
 Minimal CPU/Memory overhead
 No bandwidth overhead (updates are not shared between routers)
 Granular control on how traffic is routed
Disadvantages:
- Infrastructure changes must be manually adjusted
- No 'dynamic' fault tolerance if a link goes down
- Impractical on large networks
Dynamic Routing:
Advantages:
 Simpler to configure on larger networks
 Will dynamically choose a different (or better) route if a link goes down
 Ability to load balance between multiple links
Disadvantages:
- Updates are shared between routers, thus consuming bandwidth
- Routing protocols put additional load on the router's CPU/RAM
- The choice of the 'best route' is in the hands of the routing protocol, not the
network administrator

Dynamic versus Static Routing


Feature | Dynamic Routing | Static Routing
Configuration complexity | Generally independent of the network size | Increases with network size
Required administrator knowledge | Advanced knowledge required | No extra knowledge required
Topology changes | Automatically adapts to topology changes | Administrator intervention required
Scaling | Suitable for simple and complex topologies | Suitable for simple topologies
Security | Less secure | More secure
Resource usage | Uses CPU, memory, and link bandwidth | No extra resources needed
Predictability | Route depends on the current topology | Route to destination is always the same

16.2 Dynamic Routing Protocols

IGP and EGP Routing Protocols


An autonomous system (AS) is a collection of routers under a common administration
such as a company or an organization. An AS is also known as a routing domain.
Typical examples of an AS are a company’s internal network and an ISP’s network.
The Internet is based on the AS concept; therefore, two types of routing protocols are
required:
 Interior Gateway Protocols (IGP): Used for routing within an AS. It is also referred to
as intra-AS routing. Companies, organizations, and even service providers use an IGP
on their internal networks. IGPs include RIP, EIGRP, OSPF, and IS-IS.
 Exterior Gateway Protocols (EGP): Used for routing between autonomous systems. It is
also referred to as inter-AS routing. Service providers and large companies may
interconnect using an EGP. The Border Gateway Protocol (BGP) is the only currently
viable EGP and is the official routing protocol used by the Internet.
Routing Protocol Characteristics
Routing protocols can be compared based on the following characteristics:
 Speed of convergence: Speed of convergence defines how quickly the routers in the
network topology share routing information and reach a state of consistent knowledge.
The faster the convergence, the more preferable the protocol. Routing loops can occur
when inconsistent routing tables are not updated due to slow convergence in a changing
network.

 Scalability: Scalability defines how large a network can become, based on the routing
protocol that is deployed. The larger the network is, the more scalable the routing
protocol needs to be.
 Classful or classless (use of VLSM): Classful routing protocols do not include the
subnet mask and cannot support variable-length subnet mask (VLSM). Classless routing
protocols include the subnet mask in the updates. Classless routing protocols support
VLSM and better route summarization.
 Resource usage: Resource usage includes the requirements of a routing protocol such as
memory space (RAM), CPU utilization, and link bandwidth utilization. Higher resource
requirements necessitate more powerful hardware to support the routing protocol
operation, in addition to the packet forwarding processes.
 Implementation and maintenance: Implementation and maintenance describes the level
of knowledge that is required for a network administrator to implement and maintain
the network based on the routing protocol deployed.

Characteristic | RIPv1 | RIPv2 | IGRP | EIGRP | OSPF | IS-IS | BGP
Interior/Exterior | Interior | Interior | Interior | Interior | Interior | Interior | Exterior
Type | Distance vector | Distance vector | Distance vector | Distance vector | Link state | Link state | Path vector
Speed of convergence | Slow | Slow | Slow | Fast | Fast | Fast | Average
Scalability (size of network) | Small | Small | Small | Large | Large | Large | Large
Use of VLSM | No | Yes | No | Yes | Yes | Yes | Yes
Resource usage | Low | Low | Low | Medium | High | High | High
Implementation and maintenance | Simple | Simple | Simple | Complex | Complex | Complex | Complex
Classful/Classless | Classful | Classless | Classful | Classless | Classless | Classless | Classless
Metric | Hop count | Hop count | Composite: bandwidth and delay (can also include reliability, load, and MTU) | Composite: bandwidth and delay (can also include reliability, load, and MTU) | Cost | Cost | Multiple attributes
Update period | 30 sec | 30 sec | 90 sec | Only when changes occur | Only when changes occur | Only when changes occur | Only when changes occur
Administrative Distance (AD) | 120 | 120 | 100 | 90 (internal) / 170 (external) | 110 | 115 | 20 (eBGP) / 200 (iBGP)
Algorithm | Bellman-Ford | Bellman-Ford | Bellman-Ford | DUAL | Dijkstra | Dijkstra | Best-path algorithm
Updates | Full table | Full table | Full table | Only changes | Only changes | Only changes | Only changes
17.0 Transport Layer
17.1 Introduction
 Transmission Control Protocol (TCP) supports the network at the transport layer.
 Transmission Control Protocol (TCP) provides a reliable connection oriented
service.
 Connection oriented means both the client and server must open the connection
before data is sent.
 TCP provides:
o End to end reliability.
o Data packet re sequencing.
o Flow control.
 TCP relies on the IP service at the network layer to deliver data to the host. Since
IP is not reliable with regard to message quality or delivery, TCP must make
provisions to be sure messages are delivered on time and correctly
 At the sending end of each transmission, TCP divides long transmission into
smaller data units and Packages each into a frame called a “Segment”
 The header is followed by data.
 Port Address:
o Each port is defined by a positive integer address carried in the header of
the transport Layer Packet.
o Size of Port address is 16 bits.
o Allows up to 65,536 ports.
o The port address range is 0 to 65,535.

 The transport layer is responsible for process-to-process delivery of the entire


message.
 A process is an application program running on a host. Whereas the network
layer oversees source-to-destination delivery of individual packets, it does not
recognize any relationship between those packets.
 It treats each one independently, as though each piece belonged to a separate
message, whether or not it does. The transport layer, on the other hand, ensures
that the whole message arrives intact and in order, overseeing both error control
and flow control at the source-to-destination level.

17.2 Transport Layer Services:
 Service-point Addressing:
o Computers often run several programs at the same time.
o The transport layer header therefore must include a type of address called
a service-point address (or port address).
 Segmentation and reassembly:
o A message is divided into transmittable segments.
o Each segment contains a sequence number.
o These numbers enable the transport layer to reassemble the message correctly
upon arrival at the destination and to identify and replace packets that were lost
in transmission.
 Connection Control:
o The transport layer can be either connectionless or connection-oriented.
o A connectionless transport layer treats each segment as an independent
packet and delivers it to the transport layer at the destination machine.
o A connection-oriented transport layer makes a connection with the
transport layer at the destination machine first before delivering the
packets. After all the data are transferred, the connection is terminated.
 Flow control:
o It is responsible for flow control end to end rather than across a single
link.
 Error control:
o It is responsible for error control end to end rather than across a single
link.
o The sending transport layer makes sure that entire message arrives at the
receiving transport layer without error (damage, loss or duplication).
o Error correction is usually achieved through retransmission.

17.3 Connection Oriented and Connectionless Services


These are the two services given by the layers to layers above them. These services are :
1. Connection Oriented Service
2. Connectionless Services

Connection Oriented Services


There is a sequence of operation to be followed by the users of connection oriented
service. These are:
1. Connection is established
2. Information is sent
3. Connection is released
In connection oriented service we have to establish a connection before starting the
communication. When connection is established we send the message or the
information and then we release the connection.
Connection oriented service is more reliable than connectionless service. We can resend
the message in connection oriented service if there is an error at the receiver's end.
Example of connection oriented is TCP (Transmission Control Protocol) protocol.

Connection Less Services


It is similar to the postal services, as it carries the full address where the message (letter)
is to be carried. Each message is routed independently from source to destination. The
order of message sent can be different from the order received.
In connectionless the data is transferred in one direction from source to destination
without checking that destination is still there or not or if it prepared to accept the
message. Authentication is not needed in this. Example of Connectionless service is
UDP (User Datagram Protocol) protocol.

Difference between Connections oriented service and Connectionless service


1. In connection oriented service authentication is needed while connectionless
service does not need any authentication.
2. A connection oriented protocol makes a connection and checks whether the message is
received or not, and sends it again if an error occurs; a connectionless service protocol
does not guarantee delivery.
3. Connection oriented service is more reliable than connectionless service.
4. Connection oriented service interface is stream based and connectionless is
message based.
5. In connectionless communication there is no need to establish a connection
between the source (sender) and destination (receiver), but in connection-oriented
communication a connection must be established before data transfer.
6. Connection-oriented communication has higher overhead and places greater
demands on bandwidth, whereas connectionless communication requires far less
overhead than connection-oriented communication.

Three-Way Handshaking
 In TCP, connection-oriented transmission requires three phases: connection
establishment, data transfer, and connection termination.
Connection Establishment
 TCP transmits data in full-duplex mode. When two TCPs in two machines are
connected, they are able to send segments to each other simultaneously. This
implies that each party must initialize communication and get approval from the
other party before any data are transferred.
 The connection establishment in TCP is called three way handshaking. In our
example, an application program, called the client, wants to make a connection
with another application program, called the server, using TCP as the transport
layer protocol.
 The process starts with the server. The server program tells its TCP that it is
ready to accept a connection. This is called a request for a passive open. Although
the server TCP is ready to accept any connection from any machine in the world,
it cannot make the connection itself.
 The client program issues a request for an active open. A client that wishes to
connect to an open server tells its TCP that it needs to be connected to that
particular server. TCP can now start the three-way handshaking process as
shown in Figure

Connection establishment using three-way handshaking

 To show the process, we use two time lines: one at each site. Each segment has
values for all its header fields and perhaps for some of its option fields, too.

However, we show only the few fields necessary to understand each phase. We
show the sequence number, the acknowledgment number, the control flags (only
those that are set), and the window size, if not empty. The three steps in this
phase are as follows.

Step 1: The client sends the first segment, a SYN segment, in which only the SYN flag is
set. This segment is for synchronization of sequence numbers. It consumes one
sequence number. When the data transfer starts, the sequence number is incremented
by 1. We can say that the SYN segment carries no real data, but we can think of it as
containing 1 imaginary byte.
A SYN segment cannot carry data, but it consumes one sequence number.

Step 2: The server sends the second segment, a SYN +ACK segment, with 2 flag bits set:
SYN and ACK. This segment has a dual purpose. It is a SYN segment for
communication in the other direction and serves as the acknowledgment for the SYN
segment. It consumes one sequence number.
A SYN +ACK segment cannot carry data, but does consume one sequence number.

Step 3: The client sends the third segment. This is just an ACK segment. It
acknowledges the receipt of the second segment with the ACK flag and
acknowledgment number field. Note that the sequence number in this segment is the
same as the one in the SYN segment; the ACK segment does not consume any sequence
numbers.
An ACK segment, if carrying no data, consumes no sequence number.
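In application programs the three-way handshake is never coded explicitly; it is carried out by the TCP implementation when a client calls connect() on a listening server. The minimal Python sketch below (port 9000 is an arbitrary choice, and the two halves are meant to run as separate processes on one machine) marks where the handshake takes place.

    import socket

    # --- Server side (run first): passive open ---
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 9000))
    server.listen(1)                       # tell TCP we are ready to accept a connection
    conn, addr = server.accept()           # returns once SYN, SYN+ACK, ACK have completed
    print("connection established with", addr)
    print(conn.recv(1024))
    conn.close()
    server.close()

    # --- Client side (run in a second process): active open ---
    # client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # client.connect(("127.0.0.1", 9000))  # the three-way handshake happens inside this call
    # client.sendall(b"data sent after the handshake")
    # client.close()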

Data transfer

After connection is established, bidirectional data transfer can take place. The client and
server can both send data and acknowledgments.

In this example, after connection is established (not shown in the figure), the client
sends 2000 bytes of data in two segments. The server then sends 2000 bytes in one
segment. The client sends one more segment. The first three segments carry both data
and acknowledgment, but the last segment carries only an acknowledgment because
there are no more data to be sent. Note the values of the sequence and acknowledgment
numbers. The data segments sent by the client have the PSH (push) flag set so that the
server TCP knows to deliver data to the server process as soon as they are received. The
segment from the server, on the other hand, does not set the push flag.

Connection Termination

Connection termination using three-way handshaking

17.4 TCP Message Format / TCP Segment Format

The format of the TCP header is as follows:

1. Source port number (16 bits)


2. Destination port number (16 bits)
3. Sequence number (32 bits) - The byte in the data stream that the first byte of this
packet represents.

4. Acknowledgement number (32 bits) - Acknowledges the receipt of data, and
contains the next sequence number that the sender of the acknowledgement
expects to receive, i.e., the sequence number of the last byte received plus 1
(equivalently, the received sequence number plus the number of bytes received).
This number is used only if the ACK flag is on.
5. Header length (4 bits) – It shows the length of the header.
6. Reserved (6 bits) –Reserved for future use.
7. Control bits
a) URG (1 bit) - The urgent pointer is valid.
b) ACK (1 bit) - Makes the acknowledgement number valid.
c) PSH (1 bit) - High priority data for the application.
d) RST (1 bit) - Reset the connection.
e) SYN (1 bit) - Turned on when a connection is being established and the
sequence number field will contain the initial sequence number chosen by
this host for this connection.
f) FIN (1 bit) - The sender is done sending data.
8. Window size (16 bits) - The maximum number of bytes that the receiver is willing
to accept (this is the maximum size of the sliding window).
9. TCP checksum (16 bits) – Use to detect error.
10. Urgent pointer (16 bits) - It is only valid if the URG bit is set. The urgent mode is
a way to transmit emergency data to the other side of the connection.

11. Options and Padding - (variable length) Convey additional Information for
alignment purpose.

17.5 User Datagram Protocol

 User Datagram Protocol (UDP) supports the network at the transport


layer.
 User Datagram Protocol (UDP) is an unreliable connection-less protocol
 There is no guarantee that the data will reach its destination.
 UDP is meant to provide service with very little transmission overhead.
 It adds very little to IP data packets except for some error checking and
port direction (Remember, UDP encapsulates IP packets).
 The Packet produced by UDP is called “User Datagram”.

UDP Message Format /User Datagram Format

The UDP header includes:

1. Source port number (16 bits) - An optional field, The Address of Application
Program that has created the message.
2. Destination port number (16 bits) –The Address of Application Program that
will receive the message.
3. UDP length (16 bits) - Gives the total length of the user datagram (header plus data).
4. UDP checksum (16 bits)-Used in error detection.
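Because UDP is connectionless, no handshake precedes the data; a datagram is simply addressed and handed to IP. The minimal Python sketch below (port 9999 is an arbitrary choice; run the receiver before the sender, in two separate processes) shows the whole exchange.

    import socket

    # --- Receiver ---
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # SOCK_DGRAM selects UDP
    rx.bind(("127.0.0.1", 9999))
    data, sender = rx.recvfrom(1024)       # blocks until one user datagram arrives
    print("received", data, "from", sender)
    rx.close()

    # --- Sender (run in a second process) ---
    # tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # tx.sendto(b"query", ("127.0.0.1", 9999))   # no connection setup, no delivery guarantee
    # tx.close()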

Use of UDP
The following lists some uses of the UDP protocol:
 UDP is suitable for a process that requires simple request-response
communication with little concern for flow and error control. It is not usually
used for a process such as FTP that needs to send bulk data.
 UDP is suitable for a process with internal flow and error control mechanisms.
For example, the Trivial File Transfer Protocol (TFTP) process includes flow and
error control. It can easily use UDP.

 UDP is a suitable transport protocol for multicasting. Multicasting capability is
embedded in the UDP software but not in the TCP software.
 UDP is used for management processes such as SNMP.
 UDP is used for some route updating protocols such as Routing Information
Protocol (RIP).

Features
 UDP is used when acknowledgement of data does not hold any significance.
 UDP is good protocol for data flowing in one direction.
 UDP is simple and suitable for query based communications.
 UDP is not connection oriented.
 UDP does not provide congestion control mechanism.
 UDP does not guarantee ordered delivery of data.
 UDP is stateless.
 UDP is suitable protocol for streaming applications such as VoIP, multimedia
streaming.

UDP application
Here are few applications where UDP is used to transmit data:
 Domain Name Services
 Simple Network Management Protocol
 Trivial File Transfer Protocol
 Routing Information Protocol

17.6 Multiplexing and De-multiplexing

 Now let’s consider how a receiving host directs an incoming transport-layer


segment to the appropriate socket. Each transport-layer segment has a set of
fields in the segment for this purpose. At the receiving end, the transport layer
examines these fields to identify the receiving socket and then directs the
segment to that socket.
 This job of delivering the data in a transport-layer segment to the correct socket
is called demultiplexing.
 The job of gathering data chunks at the source host from different sockets,
encapsulating each data chunk with header information (that will later be used
in demultiplexing) to create segments, and passing the segments to the network
layer is called multiplexing.

For Simplification:
The addressing mechanism allows multiplexing and demultiplexing by the transport
layer, as shown in Figure below:

Multiplexing
At the sender site, there may be several processes that need to send packets. However,
there is only one transport layer protocol at any time. This is a many-to-one relationship
and requires multiplexing. The protocol accepts messages from different processes,
differentiated by their assigned port numbers. After adding the header, the transport
layer passes the packet to the network layer.

De-multiplexing
At the receiver site, the relationship is one-to-many and requires demultiplexing. The
transport layer receives datagrams from the network layer. After error checking and
dropping of the header, the transport layer delivers each message to the appropriate
process based on the port number.
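Conceptually, demultiplexing is a lookup keyed on the destination port number carried in each segment. The toy Python sketch below (the port numbers and handler names are hypothetical) illustrates the idea for a receiving host.

    # Hypothetical processes registered on this host, keyed by well-known port number.
    def web_server(payload):
        print("HTTP process got:", payload)

    def dns_server(payload):
        print("DNS process got:", payload)

    sockets_by_port = {80: web_server, 53: dns_server}

    def demultiplex(segment):
        """Deliver the segment's payload to the process bound to its destination port."""
        handler = sockets_by_port.get(segment["dst_port"])
        if handler is None:
            print("no process on port", segment["dst_port"], "- segment dropped")
        else:
            handler(segment["payload"])

    demultiplex({"src_port": 52000, "dst_port": 80, "payload": "GET / HTTP/1.1"})
    demultiplex({"src_port": 52001, "dst_port": 53, "payload": "query for www.example.com"})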

The inversion of source and destination port numbers

Two clients, using the same destination port number (80) to communicate with the same Web
server application.

17.7 Reliable Data Transfer Protocol

It is the responsibility of a reliable data transfer protocol to implement this service


abstraction. This task is made difficult by the fact that the layer below the reliable data
transfer protocol may be unreliable. For example, TCP is a reliable data transfer
protocol that is implemented on top of an unreliable (IP) end-to-end network layer.

In this section, we will incrementally develop the sender and receiver sides of a reliable
data transfer protocol, considering increasingly complex models of the underlying
channel.

One assumption we’ll adopt throughout our discussion here is that packets will be
delivered in the order in which they were sent, with some packets possibly being lost;
that is, the underlying channel will not reorder packets. Below figure illustrates the
interfaces for our data transfer protocol. The sending side of the data transfer protocol
will be invoked from above by a call to rdt_send(). It will pass the data to be delivered

to the upper layer at the receiving side. (Here rdt stands for reliable data transfer protocol
and _send indicates that the sending side of rdt is being called.

Reliable data transfer: Service model and service implementation

Building a Reliable Data Transfer Protocol


We now step through a series of protocols, each one becoming more complex, arriving
at a flawless, reliable data transfer protocol.
Characteristics of unreliable channel will determine complexity of reliable data transfer
protocol (rdt)

17.8 Congestion Control
17.8.1 Congestion:
An important issue in a packet-switched network is congestion. Congestion in a
network may occur if the load on the network-the number of packets sent to the
network-is greater than the capacity of the network-the number of packets a network can
handle. Congestion control refers to the mechanisms and techniques to control the
congestion and keep the load below the capacity.

We may ask why there is congestion on a network. Congestion happens in any system
that involves waiting. For example, congestion happens on a freeway because any
abnormality in the flow, such as an accident during rush hour, creates blockage.

Congestion in a network or internetwork occurs because routers and switches have


queues-buffers that hold the packets before and after processing. A router, for example,
has an input queue and an output queue for each interface.

17.8.1 Congestion Control:

Congestion Control categories

Open-Loop Congestion Control


In open-loop congestion control, policies are applied to prevent congestion before it
happens. In these mechanisms, congestion control is handled by either the source or the
destination. We give a brief list of policies that can prevent congestion.

Retransmission Policy
Retransmission is sometimes unavoidable. If the sender feels that a sent packet is lost or
corrupted, the packet needs to be retransmitted. Retransmission in general may increase

congestion in the network. However, a good retransmission policy can prevent
congestion. The retransmission policy and the retransmission timers must be designed
to optimize efficiency and at the same time prevent congestion. For example, the
retransmission policy used by TCP (explained later) is designed to prevent or alleviate
congestion.

Window Policy
The type of window at the sender may also affect congestion. The Selective Repeat
window is better than the Go-Back-N window for congestion control. In the Go-Back-N
window, when the timer for a packet times out, several packets may be resent, although
some may have arrived safe and sound at the receiver. This duplication may make the
congestion worse. The Selective Repeat window, on the other hand, tries to send the
specific packets that have been lost or corrupted.

Acknowledgment Policy
The acknowledgment policy imposed by the receiver may also affect congestion. If the
receiver does not acknowledge every packet it receives, it may slow down the sender
and help prevent congestion. Several approaches are used in this case. A receiver may
send an acknowledgment only if it has a packet to be sent or a special timer expires. A
receiver may decide to acknowledge only N packets at a time. We need to know that the
acknowledgments are also part of the load in a network. Sending fewer
acknowledgments means imposing less load on the network.

Discarding Policy
A good discarding policy by the routers may prevent congestion and at the same time
may not harm the integrity of the transmission. For example, in audio transmission, if
the policy is to discard less sensitive packets when congestion is likely to happen, the
quality of sound is still preserved and congestion is prevented or alleviated.

Admission Policy
An admission policy, which is a quality-of-service mechanism, can also prevent
congestion in virtual-circuit networks. Switches in a flow first check the resource
requirement of a flow before admitting it to the network. A router can deny establishing
a virtual-circuit connection if there is congestion in the network or if there is a possibility
of future congestion.

Closed-Loop Congestion Control
Closed-loop congestion control mechanisms try to alleviate congestion after it happens.
Several mechanisms have been used by different protocols. We describe a few of them
here.

Backpressure

The technique of backpressure refers to a congestion control mechanism in which a


congested node stops receiving data from the immediate upstream node or nodes. This
may cause the upstream node or nodes to become congested, and they, in turn, reject
data from their upstream nodes or nodes. And so on. Backpressure is a node-to-node
congestion control that starts with a node and propagates, in the opposite direction of
data flow, to the source. The backpressure technique can be applied only to virtual
circuit networks, in which each node knows the upstream node from which a flow of
data is coming. Figure shows the idea of backpressure.

Backpressure method for alleviating congestion


Node III in the figure has more input data than it can handle. It drops some packets in
its input buffer and informs node II to slow down. Node II, in turn, may be congested
because it is slowing down the output flow of data. If node II is congested, it informs
node I to slow down, which in turn may create congestion. If so, node I informs the
source of data to slow down. This, in time, alleviates the congestion. Note that the
pressure on node III is moved backward to the source to remove the congestion.

Choke Packet
A choke packet is a packet sent by a node to the source to inform it of congestion. Note
the difference between the backpressure and choke packet methods. In backpressure,
the warning is from one node to its upstream node, although the warning may
eventually reach the source station. In the choke packet method, the warning is from the
router, which has encountered congestion, to the source station directly. The
intermediate nodes through which the packet has traveled are not warned. We have

seen an example of this type of control in ICMP. When a router in the Internet is
overwhelmed with IP datagram, it may discard some of them; but it informs the source
host, using a source quench ICMP message. The warning message goes directly to the
source station; the intermediate routers, and does not take any action. Figure shows the
idea of a choke packet.

Implicit Signaling
In implicit signaling, there is no communication between the congested node or nodes
and the source. The source guesses that there is a congestion somewhere in the network
from other symptoms. For example, when a source sends several packets and there is
no acknowledgment for a while, one assumption is that the network is congested. The
delay in receiving an acknowledgment is interpreted as congestion in the network; the
source should slow down.

Explicit Signaling
The node that experiences congestion can explicitly send a signal to the source or
destination. The explicit signaling method, however, is different from the choke packet
method. In the choke packet method, a separate packet is used for this purpose; in the
explicit signaling method, the signal is included in the packets that carry data. Explicit
signaling, as we will see in Frame Relay congestion control, can occur in either the
forward or the backward direction.

Backward Signaling A bit can be set in a packet moving in the direction opposite to the
congestion. This bit can warn the source that there is congestion and that it needs to
slow down to avoid the discarding of packets.

Forward Signaling A bit can be set in a packet moving in the direction of the congestion.
This bit can warn the destination that there is congestion. The receiver in this case can
use policies, such as slowing down the acknowledgments, to alleviate the congestion.

17.8.2 Congestion control in TCP

Congestion Window

We said that the sender window size is determined by the available buffer space in the
receiver (rwnd).
The sender has two pieces of information: the receiver-advertised window size and the
congestion window size. The actual size of the window is the minimum of these two.
Actual window size = minimum (rwnd, cwnd)

Congestion Policy
TCP's general policy for handling congestion is based on three phases:
 Slow start (exponential increase),
 Congestion avoidance (Additive Increase), and
 Congestion detection (Multiplicative Decrease)

Slow start:

In the slow-start phase, the sender starts with a very slow rate of transmission, but
increases the rate rapidly to reach a threshold. When the threshold is reached, the data
rate is reduced to avoid congestion. Finally if congestion is detected, the sender goes
back to the slow-start or congestion avoidance phase based on how the congestion is
detected.
Exponential Increase One of the algorithms used in TCP congestion control is called
slow start. This algorithm is based on the idea that the size of the congestion window
(cwnd) starts with one maximum segment size (MSS). The MSS is determined during
connection establishment by using an option of the same name. The size of the window
increases one MSS each time an acknowledgment is received. As the name implies, the
window starts slowly, but grows exponentially. To show the idea, let us look at Figure

We have used segment numbers instead of byte numbers (as though each segment
contains only 1 byte). We have assumed that rwnd is much higher than cwnd, so that the
sender window size always equals cwnd. We have assumed that each segment is
acknowledged individually.

Slow start, exponential increase

The sender starts with cwnd =1 MSS. This means that the sender can send only one
segment. After receipt of the acknowledgment for segment 1, the size of the congestion
window is increased by 1, which means that cwnd is now 2. Now two more segments
can be sent. When each acknowledgment is received, the size of the window is
increased by 1 MSS. When all seven segments are acknowledged, cwnd = 8.

In the slow-start algorithm, the size of the congestion window increases exponentially
until it reaches a threshold.

Congestion Avoidance: Additive Increase

If we start with the slow-start algorithm, the size of the congestion window increases
exponentially. To avoid congestion before it happens, one must slow down this
exponential growth. TCP defines another algorithm called congestion avoidance, which
undergoes an additive increase instead of an exponential one. When the size of the
congestion window reaches the slow-start threshold, the slow-start phase stops and the
additive phase begins. In this algorithm, each time the whole window of segments is

acknowledged (one round), the size of the congestion window is increased by 1. To
show the idea, we apply this algorithm to the same scenario as slow start, although we
will see that the congestion avoidance algorithm usually starts when the size of the
window is much greater than 1. Figure shows the idea.

Congestion avoidance, additive increase

In the congestion avoidance algorithm, the size of the congestion window increases
additively until congestion is detected.

Congestion Detection: Multiplicative Decrease

If congestion occurs, the congestion window size must be decreased. The only way the
sender can guess that congestion has occurred is by the need to retransmit a segment.
However, retransmission can occur in one of two cases: when a timer times out or when
three ACKs are received. In both cases, the size of the threshold is dropped to one-half,
a multiplicative decrease. Most TCP implementations have two reactions:

1. If a time-out occurs, there is a stronger possibility of congestion; a segment has


probably been dropped in the network, and there is no news about the sent segments.

In this case TCP reacts strongly:
a. It sets the value of the threshold to one-half of the current window size.
b. It sets cwnd to the size of one segment.
c. It starts the slow-start phase again.

2. If three ACKs are received, there is a weaker possibility of congestion; a segment may
have been dropped, but some segments after it may have arrived safely since three
ACKs were received. This is called fast retransmission and fast recovery. In this case,
TCP has a weaker reaction:
a. It sets the value of the threshold to one-half of the current window size.
b. It sets cwnd to the value of the threshold (some implementations add three
segment sizes to the threshold).
c. It starts the congestion avoidance phase.

An implementation reacts to congestion detection in one of the following ways:


o If detection is by time-out, a new slow-start phase starts.
o If detection is by three ACKs, a new congestion avoidance phase starts.
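
To tie the three phases together, the following minimal Python sketch traces how cwnd
might evolve round by round. It is only an illustration of the policy described above:
the threshold, the number of rounds, and the round at which loss occurs are made-up
values, and a time-out is assumed for every loss.

def simulate_cwnd(rounds, loss_rounds, ssthresh=16):
    """Trace cwnd (in MSS) over transmission rounds."""
    cwnd = 1                                # slow start begins with one MSS
    trace = []
    for r in range(rounds):
        if r in loss_rounds:                # congestion detected (time-out assumed)
            ssthresh = max(cwnd // 2, 1)    # multiplicative decrease of the threshold
            cwnd = 1                        # go back to slow start
        elif cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)  # slow start: exponential growth per round
        else:
            cwnd += 1                       # congestion avoidance: additive increase
        trace.append(cwnd)
    return trace

print(simulate_cwnd(rounds=20, loss_rounds={12}))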

18. 0 Sliding window protocol

Over the unreliable channels of conventional networks, data can be lost, reordered, or
duplicated because of routers and limited buffer space along the path. The data link layer
deals with frame formation, flow control, error control, addressing, and link management;
all of these functions are performed by data link protocols. A sliding window protocol
either detects and corrects errors, if the received data carry enough redundant bits, or
requests a retransmission of the data.

The physical layer deals with transmitting signals over different media, while the data
link layer deals with frame formation, flow control, and error control over unreliable
channels, where transmission errors reduce efficiency. Generally, there are two
approaches to controlling such errors:

a) Forward Error Correction (FEC)


In FEC the sender adds redundant data, known as an error-correcting code, to its
message. This enables the receiver to detect and correct errors without requesting
additional data from the sender. A back channel is not required and retransmission can
often be avoided, so FEC is used where retransmission is either costly or impossible.
FEC systems are designed for simplex channels.

(b) Automatic Repeat Request (ARQ)


ARQ uses a high-rate error-detecting code together with a retransmission protocol.
When the receiver detects an error it returns negative feedback, and it returns positive
feedback when there is no error. This scheme therefore requires a feedback channel.

Sliding window protocol

By placing limits on the number of packets that can be transmitted or received at any
given time, a sliding window protocol allows an unlimited number of packets to be
communicated using fixed-size sequence numbers. The term "window" on the
transmitter side represents the logical boundary of the total number of packets yet to be
acknowledged by the receiver. The receiver informs the transmitter in each
acknowledgment packet the current maximum receiver buffer size (window boundary).
The TCP header uses a 16-bit field to report the receive window size to the sender.
Therefore, the largest window that can be advertised (without the window scale option)
is 2^16 - 1 = 65,535 bytes, roughly 64 kilobytes.

The sliding window method ensures that traffic congestion on the network is avoided.
The application layer will still be offering data for transmission to TCP without

worrying about the network traffic congestion issues as the TCP on sender and receiver
side implement sliding windows of packet buffer. The window size may vary
dynamically depending on network traffic.

In any communication protocol based on automatic repeat request for error control,
the receiver must acknowledge received packets. If the transmitter does not receive
an acknowledgment within a reasonable time, it re-sends the data.

Why are data in the sender buffer sent in chunks instead of the entire buffer at once?
Suppose the sender buffer has a capacity of 1 MB and the receiver buffer has a capacity of
512 KB; then half of the data would be lost at the receiver end, and this would
unnecessarily cause retransmission of packets from the sender. Therefore, the sender
sends data in chunks smaller than 512 KB. The chunk size is decided with the help of the
window size, which reflects the capacity of the receiver. Flow control is a receiver-related
problem: we do not want the receiver to be overwhelmed, so we control the flow by
using a window of size N.

The above figure shows the buffer at the sender end.


"Base" indicates the sequence number of the first packet that has not yet been
acknowledged.
The window slides forward when the packets starting from the base are acknowledged;
this continues until all acknowledgements have been received, which is why it is called a
sliding window protocol. When the window limit is reached, no further packets are sent
until the window slides forward again.

How will the sender know the size of the receiver's buffer, that is, what the window size
at the sender end should be?
Before data are transmitted from one host to another, a connection is first established
between the two hosts. During this establishment, information such as the window size
and buffer size is shared between the two hosts, after which the data transmission
begins.

The sliding window protocol assumes full-duplex communication. It uses two types of
frames: data frames and acknowledgment frames. An important feature of every sliding
window protocol is that each outbound frame carries a sequence number ranging from 0
to 2^n - 1, where the value of n can be chosen arbitrarily. The sliding window refers to
imaginary boxes at the transmitter and the receiver. The window provides the upper limit
on the number of frames that can be transmitted before an acknowledgment is required.
Frames that have been sent but not yet acknowledged fall within the sending window;
similarly, frames that may still be accepted are held in the receiving window.

Significance of Sender's and Receiver's Windows:

The sequence numbers within the sender's window represent frames that have been sent
but not yet acknowledged. The frames in the sender's window are stored so that they can
be retransmitted if they are damaged while travelling to the receiver.

The receiver window represents not the number of frames received but the number of
frames that may still be received before an ACK is sent. The receiver window shrinks
from the left when data frames are received and expands to the right when ACKs are
sent. The receiver window contains (n - 1) spaces for frames.

Sliding Window

• Sliding window refers to an imaginary boxes that hold the frames on both sender and
receiver side.

• It provides the upper limit on the number of frames that can be transmitted before
requiring an acknowledgment.

• Frames may be acknowledged by receiver at any point even when window is not full
on receiver side.

• Frames may be transmitted by source even when window is not yet full on sender
side.

• The windows have a specific size in which the frames are numbered modulo n, which
means they are numbered from 0 to n - 1. For example, if n = 8, the frames are numbered
0, 1, 2, 3, 4, 5, 6, 7, 0, 1, 2, 3, 4, 5, 6, 7, 0, 1, ....

• The size of the window is n - 1; in this example it is 7. Therefore, a maximum of n - 1
frames may be sent before an acknowledgment.

• When the receiver sends an ACK, it includes the number of next frame it expects to
receive. For example in order to acknowledge the group of frames ending in frame 4,
the receiver sends an ACK containing the number 5. When sender sees an ACK with
number 5, it comes to know that all the frames up to number 4 have been received.

A One Bit Sliding Window Protocol (Stop and Wait ARQ):

In this case n = 1 and the protocol uses the stop-and-wait technique. The sender waits for
an ACK after each frame transmission. The operation of this protocol is based on the
ARQ (automatic repeat request) principle: the next frame is transmitted only when a
positive ACK is received, and when a negative ACK is received the same frame is
retransmitted.

Stop-and-wait ARQ becomes inefficient when the propagation delay is much greater
than the frame transmission time. For example, assume that a frame of 800 bits is
transmitted over a channel with a speed of 1 Mbps, and that the time from the start of
transmission until the

ACK is received is 30 ms. The number of bits that could be transmitted over this channel
in that time is 30,000 bits. But in stop-and-wait ARQ only 800 bits are transmitted,
because the sender waits for the ACK. The product of bit rate and delay is called the
delay-bandwidth product; it helps in measuring the lost opportunity in transmitted bits.
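
The numbers in this example can be checked with a short calculation; this is just the
arithmetic from the paragraph above, expressed in Python.

frame_bits = 800                      # frame size from the example
rate_bps = 1_000_000                  # 1 Mbps channel
ack_delay_s = 0.030                   # 30 ms until the ACK arrives

transmit_time = frame_bits / rate_bps           # 0.0008 s to push the frame out
delay_bw_product = rate_bps * ack_delay_s       # 30,000 bits could be in flight
utilization = frame_bits / delay_bw_product     # about 0.027, i.e. roughly 2.7 %

print(transmit_time, delay_bw_product, utilization)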

The following transitions may occur in Stop-and-Wait ARQ:

o The sender maintains a timeout counter.


o When a frame is sent, the sender starts the timeout counter.
o If acknowledgement of frame comes in time, the sender transmits the next
frame in queue.
o If acknowledgement does not come in time, the sender assumes that either
the frame or its acknowledgement is lost in transit. Sender retransmits the
frame and starts the timeout counter.
o If a negative acknowledgement is received, the sender retransmits the
frame.

A Protocol Using Go Back n ARQ:

The sender in this case does not wait for an ACK before transmitting the next frame. The
sender transmits frames continuously so that the channel is kept busy rather than idle; in
stop-and-wait the system transmits nothing while it waits, so the channel remains idle for
a considerable time. In Go-Back-N the receiver relies on a NACK (negative feedback) to
indicate an error in a particular frame. Because the NACK takes some time to reach the
sender, the sender keeps transmitting in the meantime. Example: suppose an error occurs
in frame 3 and the receiver sends a NACK, but by the time the NACK reaches the sender,
frames up to frame 7 have already been transmitted. On receiving the NACK, the sender
retransmits all frames from frame 3 onwards, and the receiver discards every frame it
received after frame 3.

Errors also occur when a transmitted frame or its acknowledgement is lost. In the case of
damaged or lost frames, the receiver sends a NACK to the transmitter and the transmitter
retransmits all the frames sent since the last acknowledged frame. The

disadvantage of the Go-Back-N ARQ protocol is that its efficiency decreases on a noisy
channel, because every error forces retransmission of the damaged frame and of all the
frames sent after it.

The stop-and-wait ARQ mechanism does not utilize the resources at their best: while
waiting for the acknowledgement, the sender sits idle and does nothing. In Go-Back-N
ARQ, both sender and receiver maintain a window.

The sending-window size enables the sender to send multiple frames without receiving
the acknowledgement of the previous ones. The receiving-window enables the receiver
to receive multiple frames and acknowledge them. The receiver keeps track of
incoming frame’s sequence number.

When the sender has sent all the frames in the window, it checks up to what sequence
number it has received positive acknowledgements. If all frames are positively
acknowledged, the sender sends the next set of frames. If the sender finds that it has
received a NACK, or has not received any ACK for a particular frame, it retransmits all
the frames starting from the first one for which no positive ACK was received.

Go-Back-N ARQ is a specific instance of the automatic repeat request (ARQ) protocol,
in which the sending process continues to send a number of frames specified by
a window size even without receiving an acknowledgement (ACK) packet from the
receiver. It is a special case of the general sliding window protocol with the transmit
window size of N and receive window size of 1. It can transmit N frames to the peer
before requiring an ACK.

The receiver process keeps track of the sequence number of the next frame it expects to
receive, and sends that number with every ACK it sends. The receiver will discard any
frame that does not have the exact sequence number it expects (either a duplicate frame
it already acknowledged, or an out-of-order frame it expects to receive later) and will
resend an ACK for the last correct in-order frame. Once the sender has sent all of the
frames in its window, it will detect that all of the frames since the first lost frame
are outstanding, and will go back to the sequence number of the last ACK it received
from the receiver process and fill its window starting with that frame and continue the
process over again.

Go-Back-N ARQ is a more efficient use of a connection than Stop-and-wait ARQ, since
unlike waiting for an acknowledgement for each packet, the connection is still being
utilized as packets are being sent. In other words, during the time that would otherwise
be spent waiting, more packets are being sent. However, this method also results in

sending frames multiple times – if any frame was lost or damaged, or the ACK
acknowledging them was lost or damaged, then that frame and all following frames in
the window (even if they were received without error) will be re-sent. To avoid
this, Selective Repeat ARQ can be used.
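
The sender-side behaviour described above can be summarised in a short sketch. This is
only an illustration, not an implementation of a real link layer: the "channel" is just a
Python list, the window size is an arbitrary choice, and ACK numbers are treated as
cumulative.

class GoBackNSender:
    def __init__(self, window_size):
        self.N = window_size
        self.base = 0            # oldest unacknowledged frame
        self.next_seq = 0        # next frame to send

    def can_send(self):
        return self.next_seq < self.base + self.N

    def send(self, channel):
        channel.append(("DATA", self.next_seq))
        self.next_seq += 1

    def on_ack(self, ack_no):            # cumulative ACK: frames before ack_no received
        self.base = max(self.base, ack_no)

    def on_timeout(self, channel):       # go back: resend everything from base onwards
        for seq in range(self.base, self.next_seq):
            channel.append(("DATA", seq))

channel = []
sender = GoBackNSender(window_size=4)
while sender.can_send():
    sender.send(channel)                 # frames 0..3 go out
sender.on_ack(2)                         # frames 0 and 1 acknowledged
sender.on_timeout(channel)               # frames 2 and 3 are resent
print(channel)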

Protocol using Selective Repeat ARQ


In Go-Back-N ARQ, the receiver window is of size 1: the receiver has no buffer
space for out-of-order frames and has to process each frame as it arrives. This
forces the sender to retransmit all the frames which are not acknowledged.

In Selective-Repeat ARQ, the receiver while keeping track of sequence numbers, buffers
the frames in memory and sends NACK for only frame which is missing or damaged.

The sender in this case resends only the frame for which a NACK is received.

Piggybacking is a bidirectional data transmission technique used at the data link layer. It
makes better use of the data frames travelling from receiver to sender by attaching to
them the confirmation (ACK) that a data frame sent by the sender was received
successfully. In practice this means that, instead of sending an acknowledgement in a
separate frame, the acknowledgement is piggybacked onto a data frame.

(Explain Go Back N and selective Repeat in detail in Pipeline Protocol Question)

19.0 Application Layer Protocols: DNS

DNS (Domain Name System):


Introduction
 To identify an entity, TCP/IP protocols use the IP address, which uniquely
identifies the connection of a host to the internet.
 However, people prefer to use names instead of addresses.
 Therefore, we need a system that can map a name to an address and conversely
an address to a name. In TCP/IP, this is the Domain Name System (DNS).
 DNS is a protocol that can be used in different platforms.
 In the internet, the domain name space (tree) is divided into three different
sections:
1) Generic domains
2) Country domains
3) Inverse domain.

1) Generic Domains:
The generic domains define registered hosts according to their behavior.

Label Description
com Commercial organizations
edu Educational institutions

gov Government institutions
int International organizations
mil Military groups
net Network support centers
org Nonprofit organizations
2) Country Domains:
The country domain section follows the same format as the generic domains but
uses two-character country abbreviations (e.g., "in" for India).
3) Inverse Domains:
The inverse domain is used to map an address to a name.

Hierarchy of DNS servers

DNS Centralized or De-centralized?

A simple design for DNS would have one DNS server that contains all the mappings. In
this centralized design, clients simply direct all queries to the single DNS server, and the
DNS server responds directly to the querying clients. Although the simplicity of this design
is attractive, it is inappropriate for today’s Internet, with its vast (and growing) number of
hosts. The problems with a centralized design include:

• A single point of failure. If the DNS server crashes, so does the entire Internet!
• Traffic volume. A single DNS server would have to handle all DNS queries (for
all the HTTP requests and e-mail messages generated from hundreds of millions
of hosts).
• Distant centralized database. A single DNS server cannot be "close to" all the
querying clients. If we put the single DNS server in New York City, then all
queries from Australia must travel to the other side of the globe, perhaps over
slow and congested links. This can lead to significant delays.
• Maintenance. The single DNS server would have to keep records for all Internet
hosts. Not only would this centralized database be huge, but it would have to be
updated frequently to account for every new host.

Three classes of DNS servers:

• Root DNS servers.


In the Internet there are 13 root DNS servers (labeled A through M), most of which are
located in North America. An October 2006 map of the root DNS servers is shown in the
figure above; a list of the current root DNS servers is available via [Root-servers 2012]. Although
we have referred to each of the 13 root DNS servers as if it were a single server, each

"server" is actually a network of replicated servers, for both security and reliability
purposes. All together, there are 247 root servers as of fall 2011.
• Top-level domain (TLD) servers.
These servers are responsible for top-level domains such as com, org, net, edu, and gov,
and all of the country top-level domains such as uk, fr, ca, and jp. The company Verisign
Global Registry Services maintains the TLD servers for the com top-level domain, and the
company Educause maintains the TLD servers for the edu top-level domain. See [IANA
TLD 2012] for a list of all top-level domains.
• Authoritative DNS servers.
Every organization with publicly accessible hosts (such as Web servers and mail servers)
on the Internet must provide publicly accessible DNS records that map the names of those
hosts to IP addresses. An organization’s authoritative DNS server houses these DNS
records. An organization can choose to implement its own authoritative DNS server to
hold these records; alternatively, the organization can pay to have these records stored in
an authoritative DNS server of some service provider. Most universities and large
companies implement and maintain their own primary and secondary (backup)
authoritative DNS server.

DNS Message

DNS Message Format


Earlier in this section, we referred to DNS query and reply messages. These are the only
two kinds of DNS messages. Furthermore, both query and reply messages have the same
format, as shown in above Figure. The semantics of the various fields in a DNS message are
as follows:

• The first 12 bytes is the header section, which has a number of fields.

The first field is a 16-bit number that identifies the query. This identifier is copied into the
reply message to a query, allowing the client to match received replies with sent queries.
There are a number of flags in the flag field. A 1-bit query/reply flag indicates whether the
message is a query (0) or a reply (1). A 1-bit authoritative flag is set in a reply message
when a DNS server is an authoritative server for a queried name. A 1-bit recursion-desired
flag is set when a client (host or DNS server) desires that the DNS server perform recursion
when it doesn’t have the record. A 1-bit recursion available field is set in a reply if the DNS
server supports recursion. In the header, there are also four number-of fields. These fields
indicate the number of occurrences of the four types of data sections that follow the header.

• The question section contains information about the query that is being made.
This section includes (1) a name field that contains the name that is being
queried, and (2) a type field that indicates the type of question being asked about
the name—for example, a host address associated with a name (Type A) or the
mail server for a name (Type MX).
• In a reply from a DNS server, the answer section contains the resource records
for the name that was originally queried. Recall that in each resource record there
is the Type (for example, A, NS, CNAME, and MX), the Value, and the TTL. A
reply can return multiple RRs in the answer, since a hostname can have multiple
IP addresses (for example, for replicated Web servers, as discussed earlier in this
section).
• The authority section contains records of other authoritative servers.
• The additional section contains other helpful records. For example, the answer
field in a reply to an MX query contains a resource record providing the
canonical hostname of a mail server. The additional section contains a Type A
record providing the IP address for the canonical hostname of the mail server.
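
As an illustration of the header layout described above, the sketch below hand-builds a
query for an A record and checks the identifier and the query/reply flag in the reply. It is
a minimal example, not a full resolver: the queried name and the resolver address
(8.8.8.8) are arbitrary choices, and it assumes the host has UDP access to the Internet.

import random
import socket
import struct

def build_query(name):
    ident = random.randint(0, 0xFFFF)           # 16-bit identifier
    flags = 0x0100                              # QR=0 (query), recursion desired
    header = struct.pack("!HHHHHH", ident, flags, 1, 0, 0, 0)  # QDCOUNT=1
    question = b"".join(bytes([len(label)]) + label.encode()
                        for label in name.split(".")) + b"\x00"
    question += struct.pack("!HH", 1, 1)        # QTYPE=A, QCLASS=IN
    return ident, header + question

ident, query = build_query("www.example.com")
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3)
sock.sendto(query, ("8.8.8.8", 53))
reply, _ = sock.recvfrom(512)
# The first two bytes of the reply echo the identifier; the top bit of the
# flags word is 1, marking the message as a response rather than a query.
reply_id, reply_flags = struct.unpack("!HH", reply[:4])
print(reply_id == ident, bool(reply_flags & 0x8000))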

Name Space
To be unambiguous, the names assigned to machines must be carefully selected from a
name space with complete control over the binding between the names and IP addresses.
In other words, the names must be unique because the addresses are unique. A name space
that maps each address to a unique name can be organized in two ways: flat or
hierarchical.
Flat Name Space: In a flat name space, a name is assigned to an address. A name in this
space is a sequence of characters without structure. The names may or may not have a
common section; if they do, it has no meaning. The main disadvantage of a flat name space

is that it cannot be used in a large system such as the Internet because it must be centrally
controlled to avoid ambiguity and duplication.
Hierarchical Name Space:
In a hierarchical name space, each name is made of several parts. The first part can define
the nature of the organization, the second part can define the name of an organization, the
third part can define departments in the organization, and so on. In this case, the authority
to assign and control the name spaces can be decentralized. A central authority can assign
the part of the name that defines the nature of the organization and the name of the
organization. The responsibility of the rest of the name can be given to the organization
itself. The organization can add suffixes (or prefixes) to the name to define its host or
resources. The management of the organization need not worry that the prefix chosen for a
host is taken by another organization because, even if part of an address is the same, the
whole address is different. For example, assume two colleges and a company call one of
their computers challenger. The first college is given a name by the central authority such as
fhda.edu, the second college is given the name berkeley.edu, and the company is given the
name smart.com. When these organizations add the name challenger to the name they have
already been given, the end result is three distinguishable names: challenger.fhda.edu,
challenger.berkeley.edu, and challenger.smart.com. The names are unique without the need for
assignment by a central authority. The central authority controls only part of the name, not
the whole.
Domain names and labels

Fully Qualified Domain Name:


If a label is terminated by a null string, it is called a fully qualified domain name (FQDN).
An FQDN is a domain name that contains the full name of a host. It contains all labels,

from the most specific to the most general, that uniquely define the name of the host. For
example, the domain name
challenger.atc.fhda.edu.
is the FQDN of a computer named challenger installed at the Advanced Technology Center
(ATC) at De Anza College. A DNS server can only match an FQDN to an address. Note
that the name must end with a null label, but because null means nothing, the label ends
with a dot (.).

Partially Qualified Domain Name

If a label is not terminated by a null string, it is called a partially qualified domain name
(PQDN). A PQDN starts from a node, but it does not reach the root. It is used when the
name to be resolved belongs to the same site as the client. Here the resolver can supply the
missing part, called the suffix, to create an FQDN. For example, if a user at the fhda.edu. site
wants to get the IP address of the challenger computer, he or she can define the partial
name challenger.
The DNS client adds the suffix atc.fhda.edu. before passing the address to the DNS server.
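
A tiny sketch of this suffix rule in Python, assuming a resolver configured with the
search suffix atc.fhda.edu. from the example (the helper name qualify is hypothetical):

def qualify(name, search_suffix="atc.fhda.edu."):
    """Append the configured suffix to a PQDN; leave an FQDN untouched."""
    return name if name.endswith(".") else f"{name}.{search_suffix}"

print(qualify("challenger"))                 # challenger.atc.fhda.edu.
print(qualify("challenger.atc.fhda.edu."))   # already fully qualified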

20.0 Application Layer Protocols: Email System

Email System

In this Email system we will discuss four protocols: SMTP, POP3, IMAP and
MIME.

One of the most popular Internet services is electronic mail (e-mail). The
designers of the Internet probably never imagined the popularity of this
application program.

User Agent

The first component of an electronic mail system is the user agent (UA). It
provides service to the user to make the process of sending and receiving a
message easier.

Services Provided by a User Agent: A user agent is a software package (program)


that composes, reads, replies to, and forwards messages. It also handles
mailboxes. Figure shows the services of a typical user agent.

Email Protocols

20.1 SMTP (Simple Mail Transfer Protocol):


 One of the most popular network services is electronic mail (e-mail).
 The TCP/IP protocol that supports electronic mail on the Internet is called Simple
Mail transfer protocol (SMTP).
 It is a system for sending messages to other computer users based in e-mail
addresses.
 SMTP provides for mail exchange between users on the same or different
computers and supports:
o Sending a single message to one or more recipients.

o Sending messages that include text, voice, video, or graphics.
o Sending messages to users on networks outside the Internet.

 SMTP system is a combination of SMTP client and SMTP server.


 The SMTP client and server can each be divided into two components: the user agent
(UA) and the mail transfer agent (MTA).
 The UA prepares the message, creates the envelope, and puts the message in the
envelope.
 The MTA transfers the mail across the Internet.

 Instead of just one MTA at the sender site and one at the receiving site, other
MTAs, acting either as client or server, can relay the mail.

 Specific format of E-mail Address:

Local Part: The local part defines the name of a special file, called the user mailbox,
where all the mail received for a user is stored for retrieval by the message access
agent.
Domain Name: The second part of the address is the domain name. An organization
usually selects one or more hosts to receive and send e-mail; the hosts are sometimes
called mail servers or exchangers. The domain name assigned to each mail exchanger
either comes from the DNS database or is a logical name (for example, the name of the
organization).
Mailing List
Electronic mail allows one name, an alias, to represent several different e-mail
addresses; this is called a mailing list. Every time a message is to be sent, the system
checks the recipient's name against the alias database; if there is a mailing list for the
defined alias, separate messages, one for each entry in the list, must be prepared and
handed to the MTA. If there is no mailing list for the alias, the name itself is the
receiving address and a single message is delivered to the mail transfer entity.
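
Python's standard smtplib implements the SMTP client side. The sketch below shows a
user agent composing a message and handing it to an MTA; the server name, port, and
addresses are placeholders, not values from this handbook.

import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.org"
msg["Subject"] = "Test message"
msg.set_content("Hello over SMTP.")

# The client side of the MTA: open a TCP connection to the mail server and
# hand over the message; the envelope is built from the From/To headers.
with smtplib.SMTP("mail.example.com", 25) as client:
    client.send_message(msg)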

20.2 POP3 (Post Office Protocol version 3):


 SMTP expects the destination host, the mail server receiving the mail, to be on-
line all the time; otherwise, a TCP connection cannot be established.

 For this reason, it is not practical to establish an SMTP session with a desktop
computer because desktop computers are usually powered down at the end of
the day.
 In many organizations, mail is received by an SMTP server that is always on-
line. This SMTP server provides a mail-drop service.
 The server receives the mail on behalf of every host in the organization.
 Workstations interact with the SMTP host to retrieve messages by using a
client-server protocol such as Post office protocol version 3 (POP3).
 Although POP3 is used to download messages from the server, the SMTP client
is still needed on the desktop to forward messages from the workstation user to
its SMTP mail server
 Post Office Protocol, version 3 (POP3) is simple and limited in functionality. The
client POP3 software is installed on the recipient computer; the server POP3
software is installed on the mail server.
 Figure shows an example of downloading using POP3

 POP3 has two modes: the delete mode and the keep mode. In the delete mode,
the mail is deleted from the mailbox after each retrieval. In the keep mode, the
mail remains in the mailbox after retrieval. The delete mode is normally used
when the user is working at her permanent computer and can save and
organize the received mail after reading or replying. The keep mode is normally
used when the user accesses her mail away from her primary computer (e.g., a

laptop). The mail is read but kept in the system for later retrieval and
organizing.
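
A minimal sketch of message retrieval with Python's standard poplib, illustrating the two
modes described above; the host and the credentials are placeholders.

import poplib

mailbox = poplib.POP3("pop.example.com", 110)
mailbox.user("alice")
mailbox.pass_("secret")
count, size = mailbox.stat()                    # number of messages and total size
for i in range(1, count + 1):
    response, lines, octets = mailbox.retr(i)   # download message i
    # mailbox.dele(i)                           # uncomment for delete mode;
                                                # leaving it out gives keep mode
mailbox.quit()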

20.3 IMAP4
 Another mail access protocol is Internet Mail Access Protocol, version 4 (IMAP4).
IMAP4 is similar to POP3, but it has more features; IMAP4 is more powerful and
more complex.
 POP3 is deficient in several ways. It does not allow the user to organize her mail on
the server; the user cannot have different folders on the server. (Of course, the user
can create folders on her own computer.) In addition, POP3 does not allow the user
to partially check the contents of the mail before downloading.
 IMAP4 provides the following extra functions:
o A user can check the e-mail header prior to downloading.
o A user can search the contents of the e-mail for a specific string of
characters prior to downloading.
o A user can partially download e-mail. This is especially useful if
bandwidth is limited and the e-mail contains multimedia with high
bandwidth requirements.
o A user can create, delete, or rename mailboxes on the mail server.
o A user can create a hierarchy of mailboxes in a folder for e-mail storage.

20.4 MIME
Electronic mail has a simple structure. Its simplicity, however, comes at a price. It can
send messages only in NVT 7-bit ASCII format. In other words, it has some limitations.
For example, it cannot be used for languages that are not supported by 7-bit ASCII
characters (such as French, German, Hebrew, Russian, Chinese, and Japanese). Also, it
cannot be used to send binary files or video or audio data.

Multipurpose Internet Mail Extensions (MIME) is a supplementary protocol that allows


non-ASCII data to be sent through e-mail. MIME transforms non-ASCII data at the
sender site to NVT ASCII data and delivers them to the client MTA to be sent through
the Internet. The message at the receiving side is transformed back to the original data.
We can think of MIME as a set of software functions that transforms non-ASCII data
(stream of bits) to ASCII data and vice versa, as shown in Figure

MIME defines five headers that can be added to the original e-mail header section
to define the transformation parameters:
1. MIME-Version
2. Content-Type
3. Content-Transfer-Encoding
4. Content-Id
5. Content-Description
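
Python's standard email package can be used to see these headers being generated. A
minimal sketch follows; the subject, text, and attached bytes are placeholders.

from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.application import MIMEApplication

msg = MIMEMultipart()
msg["Subject"] = "Report with attachment"
msg.attach(MIMEText("Non-ASCII text such as résumé is fine here.", "plain", "utf-8"))

part = MIMEApplication(b"\x00\x01 binary data", _subtype="octet-stream")
part["Content-Description"] = "raw data sample"     # one of the five MIME headers
msg.attach(part)

print(msg["MIME-Version"])                  # 1.0, added automatically
print(part["Content-Transfer-Encoding"])    # base64 for the binary part
print(msg.as_string()[:200])                # headers plus the start of the body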

20.5 MHS (Message Handling system)


Introduction:
 The seventh layer of the OSI model is the application layer.
 The application layer contains whatever functions are required by the user – for
example, electronic mail – and as such, no standardization in general is possible.
 However, the ITU-T has recognized that there are several common applications:

o Message handling system (MHS)
o File transfer, access and management (FTAM)
o Virtual terminal (VT)
o Directory system (DS)
o Common management information protocol (CMIP)
These are applications for which standardization is possible.
Message Handling System (MHS):

 MHS is the OSI protocol that underlies electronic mail and store-and-forward
handling.
 It is derived from the ITU-T X.400 series.
 MHS is the system used to send any message (including copies of data and files) that
can be delivered in a store-and-forward manner.
o Store-and-forward delivery means that, instead of opening an active
channel between the sender and receiver, the protocol provides a delivery
service that forwards the message when a link becomes available.
o In most information-sharing protocols, both sender and the receiver must
be able to participate in the exchange concurrently.
o In a store-and-forward system, the sender passes the message to the delivery
system.
o The delivery system may not be able to transmit the message
immediately, in which case it stores the message until conditions change.
o When the message is delivered, it is stored in the recipient's mailbox until
it is called for, much like the regular postal system.
 The structure of the MHS:
o The structure of the OSI message handling system is shown in figure.
o Each user communicates with a program or process called a user agent
(UA).
o The UA is unique for each user (each user receives a copy of the program
or process).
o An example of a UA is electronic mail program associated with a specific
operating system that allows a user to type and edit messages.
o Each user has message storage (MS), which consists of disk space in a
mail storage system and is usually referred to as a mailbox.
o Message storage can be used for storing, sending, or receiving messages.

o The message storage communicates with a series of processes called
message transfer agents (MTAs).
o MTAs are like the different departments of a post office.
o The combined MTAs make up the message transfer system (MTS).

[MHS]
 Message Format
o The MHS standard defines the format of a message.
o The body of the message corresponds to the material (like a letter) that goes
inside of the envelope of a conventional mailing.
o Every message can include the address (name) of the sender, the address
(name) of the recipient, the subject of the message, and a list of anyone other
than the primary recipient who is to receive a copy.

[Message format in MHS]

21.0 Application Layer: WWW and HTTP
21.1 WWW
The World Wide Web (WWW) is a repository of information linked together from
points all over the world. The WWW has a unique combination of flexibility,
portability, and user-friendly features that distinguish it from other services provided
by the Internet. The WWW project was initiated by CERN (European Laboratory for
Particle Physics) to create a system to handle distributed resources necessary for
scientific research.

Architecture
The WWW today is a distributed client/server service, in which a client using a browser
can access a service using a server. However, the service provided is distributed over
many locations called sites, as shown in Figure.

Each site holds one or more documents, referred to as Web pages. Each Web page can
contain a link to other pages in the same site or at other sites. The pages can be retrieved
and viewed by using browsers.

Let us go through the scenario shown in above Figure. The client needs to see some
information that it knows belongs to site A. It sends a request through its browser, a
program that is designed to fetch Web documents. The request, among other
information, includes the address of the site and the Web page, called the URL, which
we will discuss shortly. The server at site A finds the document and sends it to the
client. When the user views the document, she finds some references to other
documents, including a Web page at site B. The reference has the URL for the new site.
The user is also interested in seeing this document. The client sends another request to
the new site, and the new page is retrieved.

21.2 Other Terms to know
21.2.1 Client (Browser)
 A variety of vendors offer commercial browsers that interpret and display a Web
document, and all use nearly the same architecture.
 Each browser usually consists of three parts: a controller, client protocol, and
interpreters.
 The controller receives input from the keyboard or the mouse and uses the client
programs to access the document. After the document has been accessed, the
controller uses one of the interpreters to display the document on the screen. The
client protocol can be one of the protocols described previously such as FTP or
HTTP (described later in the chapter). The interpreter can be HTML, Java, or
JavaScript, depending on the type of document. We discuss the use of these
interpreters based on the document type later in the chapter

21.2.2 Server
The Web page is stored at the server. Each time a client request arrives, the
corresponding document is sent to the client. To improve efficiency, servers normally
store requested files in a cache in memory; memory is faster to access than disk. A
server can also become more efficient through multithreading or multiprocessing. In
this case, a server can answer more than one request at a time.

21.2.3 URL
A client that wants to access a Web page needs the address. To facilitate the access of
documents distributed throughout the world, HTTP uses locators. The uniform
resource locator (URL) is a standard for specifying any kind of information on the
Internet. The URL defines four things: protocol, host computer, port, and path

The protocol is the client/server program used to retrieve the document. Many different
protocols can retrieve a document; among them are FTP or HTTP. The most common
today is HTTP.
The host is the computer on which the information is located, although the name of the
computer can be an alias. Web pages are usually stored in computers, and computers
are given alias names that usually begin with the characters "www". This is not
mandatory, however, as the host can be any name given to the computer that hosts the
Web page.
The URL can optionally contain the port number of the server. If the port is included, it
is inserted between the host and the path, and it is separated from the host by a colon.
Path is the pathname of the file where the information is located. Note that the path can
itself contain slashes that, in the UNIX operating system, separate the directories from
the subdirectories and files.
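
The four parts of a URL can be pulled apart with Python's standard urllib.parse; the
URL below is an illustrative example, not one taken from this handbook.

from urllib.parse import urlparse

parts = urlparse("http://www.example.com:8080/pub/notes/cn.html")
print(parts.scheme)    # protocol -> 'http'
print(parts.hostname)  # host     -> 'www.example.com'
print(parts.port)      # port     -> 8080
print(parts.path)      # path     -> '/pub/notes/cn.html'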

21.2.4 Cookies
The World Wide Web was originally designed as a stateless entity. A client sends a
request; a server responds. Their relationship is over. The original design of WWW,
retrieving publicly available documents, exactly fits this purpose. Today the Web has
other functions; some are listed here.

1. Some websites need to allow access to registered clients only.


2. Websites are being used as electronic stores that allow users to browse through the
store, select wanted items, put them in an electronic cart, and pay at the end with a
credit card.
3. Some websites are used as portals: the user selects the Web pages he wants to see.
4. Some websites are just advertising.

We now discuss their use in Web pages.

Creation and Storage of Cookies
The creation and storage of cookies depend on the implementation; however, the
principle is the same.
1. When a server receives a request from a client, it stores information about the client in
a file or a string. The information may include the domain name of the client, the
contents of the cookie (information the server has gathered about the client such as
name, registration number, and so on), a timestamp, and other information, depending
on the implementation.

2. The server includes the cookie in the response that it sends to the client.
3. When the client receives the response, the browser stores the cookie in the cookie
directory, which is sorted by the domain server name.

Using Cookies
When a client sends a request to a server, the browser looks in the cookie directory to
see if it can find a cookie sent by that server. If found, the cookie is included in the
request. When the server receives the request, it knows that this is an old client, not a
new one. Note that the contents of the cookie are never read by the browser or disclosed
to the user. It is a cookie made by the server and eaten by the server. Now let us see how
a cookie is used for the four previously mentioned purposes:

1. The site that restricts access to registered clients only sends a cookie to the client
when the client registers for the first time. For any repeated access, only those clients
that send the appropriate cookie are allowed.
2. An electronic store (e-commerce) can use a cookie for its client shoppers. When a
client selects an item and inserts it into a cart, a cookie that contains information about
the item, such as its number and unit price, is sent to the browser. If the client selects a
second item, the cookie is updated with the new selection information. And so on.
When the client finishes shopping and wants to check out, the last cookie is retrieved
and the total charge is calculated.
3. A Web portal uses the cookie in a similar way. When a user selects her favorite pages,
a cookie is made and sent. If the site is accessed again, the cookie is sent to the server to
show what the client is looking for.
4. A cookie is also used by advertising agencies. An advertising agency can place
banner ads on some main website that is often visited by users. The advertising agency
supplies only a URL that gives the banner address instead of the banner itself. When a
user visits the main website and clicks on the icon of an advertised corporation, a
request is sent to the advertising agency. The advertising agency sends the banner, a
GIF file, for example, but it also includes a cookie with the ID of the user. Any future use
of the banners adds to the database that profiles the Web behavior of the user. The
advertising agency has compiled the interests of the user and can sell this information
to other parties. This use of cookies has made them very controversial. Hopefully, some
new regulations will be devised to preserve the privacy of users.
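
The cookie exchange described above can be observed with a short sketch using Python's
http.client; the host name is a placeholder, and whether a cookie is actually set depends
entirely on the server being contacted.

import http.client

conn = http.client.HTTPConnection("www.example.com", 80)
conn.request("GET", "/")
resp = conn.getresponse()
cookie = resp.getheader("Set-Cookie")      # the server stores state on the client
resp.read()                                # finish reading before reusing the connection

if cookie:
    # On the next request the client returns the cookie, so the server can
    # recognise a returning client rather than a new one.
    conn.request("GET", "/cart", headers={"Cookie": cookie.split(";")[0]})
    print(conn.getresponse().status)
conn.close()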

21.2.5 WEB DOCUMENTS
The documents in the WWW can be grouped into three broad categories: static,
dynamic, and active. The category is based on the time at which the contents of the
document are determined.

Static Documents
Static documents are fixed-content documents that are created and stored in a server.
The client can get only a copy of the document. In other words, the contents of the file
are determined when the file is created, not when it is used. Of course, the contents in
the server can be changed, but the user cannot change them. When a client accesses the
document, a copy of the document is sent. The user can then use a browsing program to
display the document

Static document

Dynamic Documents
A dynamic document is created by a Web server whenever a browser requests the
document. When a request arrives, the Web server runs an application program or a
script that creates the dynamic document. The server returns the output of the program
or script as a response to the browser that requested the document. Because a fresh
document is created for each request, the contents of a dynamic document can vary
from one request to another. A very simple example of a dynamic document is the
retrieval of the time and date from a server. Time and date are kinds of information that
are dynamic in that they change from moment to moment. The client can ask the server
to run a program such as the date program in UNIX and send the result of the program
to the client.
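
In the spirit of the date example above, the following minimal sketch uses Python's
built-in http.server to generate a fresh document for every request; the port number is an
arbitrary choice, not one prescribed by this handbook.

from http.server import BaseHTTPRequestHandler, HTTPServer
from datetime import datetime

class DateHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The document is created when the request arrives, so its contents
        # differ from one request to the next.
        body = f"<html><body><b>Server time:</b> {datetime.now()}</body></html>"
        data = body.encode("ascii", errors="replace")
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), DateHandler).serve_forever()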

21.2.6 HTML

Hypertext Markup Language (HTML) is a language for creating Web pages. The term
markup language comes from the book publishing industry. Before a book is typeset and
printed, a copy editor reads the manuscript and puts marks on it. These marks tell the
compositor how to format the text. For example, if the copy editor wants part of a line
to be printed in boldface, he or she draws a wavy line under that part. In the same way,
data for a Web page are formatted for interpretation by a browser.
Let us clarify the idea with an example. To make part of a text displayed in boldface
with HTML, we put beginning and ending boldface tags (marks) in the text, as shown
in Figure

Boldface tags
The two tags <B> and </B> are instructions for the browser. When the browser
sees these two marks, it knows that the text must be boldfaced. A markup
language such as HTML allows us to embed formatting instructions in the file
itself. The instructions are included with the text. In this way, any browser can
read the instructions and format the text according to the specific workstation.
One might ask why we do not use the formatting capabilities of word processors
to create and save formatted text. The answer is that different word processors
use different techniques or procedures for formatting text. For example, imagine
that a user creates formatted text on a Macintosh computer and stores it in a Web
page. Another user who is on an IBM computer would not be able to receive the
Web page because the two computers use different formatting procedures.

Effect of boldface tags


HTML lets us use only ASCII characters for both the main text and formatting
instructions. In this way, every computer can receive the whole document as an
ASCII document. The main text is the data, and the formatting instructions can
be used by the browser to format the data.

A Web page is made up of two parts: the head and the body. The head is the first
part of a Web page. The head contains the title of the page and other parameters
that the browser will use. The actual contents of a page are in the body, which
includes the text and the tags. Whereas the text is the actual information
contained in a page, the tags define the appearance of the document. Every
HTML tag is a name followed by an optional list of attributes, all enclosed
between less-than and greater-than symbols (< and >).
An attribute, if present, is followed by an equals sign and the value of the
attribute. Some tags can be used alone; others must be used in pairs. Those that
are used in pairs are called beginning and ending tags. The beginning tag can have
attributes and values and starts with the name of the tag. The ending tag cannot
have attributes or values but must have a slash before the name of the tag. The
browser makes a decision about the structure of the text based on the tags, which
are embedded into the text. Figure shows the format of a tag.

Beginning and ending tags

One commonly used tag category is the text formatting tags such as <B> and </B>,
which make the text bold; <I> and </I>, which make the text italic; and <U> and </U>,
which underline the text.

Another interesting tag category is the image tag. Nontextual information such as
digitized photos or graphic images is not a physical part of an HTML document. But we
can use an image tag to point to the file of a photo or image. The image tag defines the
address (URL) of the image to be retrieved. It also specifies how the image can be
inserted after retrieval We can choose from several attributes. The most common are
SRC (source), which defines the source (address), and ALIGN, which defines the
alignment

of the image. The SRC attribute is required. Most browsers accept images in the GIF or
JPEG formats. For example, a tag of the following form can retrieve an image stored as
image1.gif in the directory /bin/images:
<IMG SRC="/bin/images/image1.gif" ALIGN=MIDDLE>

A third interesting category is the hyperlink tag, which is needed to link documents
together. Any item (word, phrase, paragraph, or image) can refer to another document
through a mechanism called an anchor. The anchor is defined by <A ... > and </A> tags,
and the anchored item uses the URL to refer to another document. When the document
is displayed, the anchored item is underlined, blinking, or boldfaced. The user can click
on the anchored item to go to another document, which may or may not be stored on the
same server as the original document. The reference phrase is embedded between the
beginning and ending tags. The beginning tag can have several attributes, but the one
required is HREF (hyperlink reference), which defines the address (URL) of the linked
document. For example, the link to the author of a book can be

What appears in the text is the word Author, on which the user can click to go to the
author's Web page.

Common Gateway Interface (CGI)

The Common Gateway Interface (CGI) is a technology that creates and handles
dynamic documents. CGI is a set of standards that defines how a dynamic document is
written, how data are input to the program, and how the output result is used.

CGI is not a new language; instead, it allows programmers to use any of several
languages such as C, C++, Bourne Shell, Korn Shell, C Shell, Tcl, or Perl. The only thing
that CGI defines is a set of rules and terms that the programmer must follow.
The term common in CGI indicates that the standard defines a set of rules that is common
to any language or platform. The term gateway here means that a CGI program can be
used to access other resources such as databases, graphical packages, and so on. The
term interface here means that there is a set of predefined terms, variables, calls, and so
on that can be used in any CGI program. A CGI program in its simplest form is code
written in one of the languages supporting CGI. Any programmer who can encode a

sequence of thoughts in a program and knows the syntax of one of the abovementioned
languages can write a simple CGI program. Figure illustrates the steps in creating a
dynamic program using CGI technology.

Dynamic document using CGI

A few technologies have been involved in creating dynamic documents using scripts.
Among the most common are Hypertext Preprocessor (PHP), which uses a Perl-like
language; Java Server Pages (JSP), which uses the Java language for scripting; Active
Server Pages (ASP), a Microsoft product which uses Visual Basic language for scripting;
and ColdFusion, which embeds SQL database queries in the HTML document.

21.2.7 HTTP

The Hypertext Transfer Protocol (HTTP) is a protocol used mainly to access data on the
World Wide Web. HTTP functions as a combination of FTP and SMTP. It is similar to
FTP because it transfers files and uses the services of TCP. However, it is much simpler
than FTP because it uses only one TCP connection. There is no separate control
connection; only data are transferred between the client and the server. HTTP is like
SMTP because the data transferred between the client and the server look like SMTP
messages. In addition, the format of the messages is controlled by MIME-like headers.
Unlike SMTP, the HTTP messages are not destined to be read by humans; they are read
and interpreted by the HTTP server and HTTP client (browser). SMTP messages are
stored and forwarded, but HTTP messages are delivered immediately. The commands

from the client to the server are embedded in a request message. The contents of the
requested file or other information are embedded in a response message. HTTP uses the
services of TCP on well-known port 80.

HTTP Transaction
Figure illustrates the HTTP transaction between the client and server. Although
HTTP uses the services of TCP, HTTP itself is a stateless protocol. The client initializes
the transaction by sending a request message. The server replies by sending a response.
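
A single HTTP transaction can be reproduced with Python's standard http.client; the
host name below is a placeholder, and the request shown is simply an illustrative GET.

import http.client

conn = http.client.HTTPConnection("www.example.com", 80)   # TCP connection to port 80
conn.request("GET", "/index.html", headers={"Accept": "text/html"})
response = conn.getresponse()                    # status line, headers, then the body
print(response.status, response.reason)          # e.g. 200 OK
print(response.getheader("Content-Type"))        # one of the header lines
body = response.read()                           # the document itself
print(len(body), "bytes received")
conn.close()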

Messages
The formats of the request and response messages are similar; both are shown in Figure
A request message consists of a request line, a header, and sometimes a body. A
response message consists of a status line, a header, and sometimes a body.

Request and response messages


Request and Status Lines The first line in a request message is called a request line;
the first line in the response message is called the status line. There is one common

field, as shown in Figure.

Request and status lines

Request type. This field is used in the request message. In version 1.1 of HTTP, several
request types are defined. The request type is categorized into methods as defined in
Table

 URL. We discussed the URL earlier in the chapter.


 Version. The most current version of HTTP is 1.1.
 Status code. This field is used in the response message. The status code field is
similar to those in the FTP and the SMTP protocols. It consists of three digits.
Whereas the codes in the 100 range are only informational, the codes in the 200
range indicate a successful request. The codes in the 300 range redirect the client
to another URL, and the codes in the 400 range indicate an error at the client site.
Finally, the codes in the 500 range indicate an error at the server site. The most
common codes are listed below.
 Status phrase. This field is used in the response message. It explains the status
code in text form.

HTTP Response Format

This general format of the response message matches the previous example of a
response message. Let’s say a few additional words about status codes and their
phrases. The status code and associated phrase indicate the result of the request. Some
common status codes and associated phrases include:

• 200 OK: Request succeeded and the information is returned in the response.
• 301 Moved Permanently: Requested object has been permanently moved; the new
URL is specified in Location: header of the response message. The client software will
automatically retrieve the new URL.
• 400 Bad Request: This is a generic error code indicating that the request could not be
understood by the server.
• 404 Not Found: The requested document does not exist on this server.
• 505 HTTP Version Not Supported: The requested HTTP protocol version is not
supported by the server.
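
As a small illustration, the status code and status phrase of a response can be read with
Python's standard http.client module (a minimal sketch; the host name is a placeholder):

import http.client

conn = http.client.HTTPConnection("example.com", 80)   # TCP connection to port 80
conn.request("GET", "/")
response = conn.getresponse()
print(response.status, response.reason)                # e.g. 200 OK or 404 Not Found
conn.close()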

Example

Ex-1 This example retrieves a document. We use the GET method to retrieve an image
with the path /usr/bin/image1. The request line shows the method (GET), the URL, and
the HTTP version (1.1). The header has two lines that show that the client can accept
images in the GIF or JPEG format. The request does not have a body. The response
message contains the status line and four lines of header. The header lines define the
date, server, MIME version, and length of the document. The body of the document
follows the header.
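
Because the figure for this example is not reproduced here, the following hedged sketch
shows roughly what the exchange might look like when driven over a raw TCP socket in
Python (the host name is a placeholder; the path /usr/bin/image1 is taken from the
example above):

import socket

# Build a GET request whose header says the client accepts GIF and JPEG images.
request = (
    "GET /usr/bin/image1 HTTP/1.1\r\n"
    "Host: www.example.com\r\n"
    "Accept: image/gif\r\n"
    "Accept: image/jpeg\r\n"
    "Connection: close\r\n"
    "\r\n"
)
with socket.create_connection(("www.example.com", 80)) as s:
    s.sendall(request.encode("ascii"))
    reply = b""
    while True:
        chunk = s.recv(4096)
        if not chunk:                      # server closed the connection
            break
        reply += chunk
# Print only the status line and the header lines of the response.
print(reply.split(b"\r\n\r\n", 1)[0].decode("iso-8859-1"))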

Ex-2
In this example, the client wants to send data to the server. We use the POST method.
The request line shows the method (POST), URL, and HTTP version (1.1). There are
four lines of headers. The request body contains the input information. The response
message contains the status line and four lines of headers. The created document, which
is a CGI document, is included as the body.
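
A hedged sketch of the same idea in Python (host, path, and form field names are
placeholders, not the handbook's actual example):

import http.client
import urllib.parse

body = urllib.parse.urlencode({"name": "alice", "roll_no": "42"})   # request body (input data)
headers = {
    "Content-Type": "application/x-www-form-urlencoded",
    "Content-Length": str(len(body)),
}
conn = http.client.HTTPConnection("www.example.com", 80)
conn.request("POST", "/cgi-bin/register", body, headers)
response = conn.getresponse()
print(response.status, response.reason)   # status line of the dynamically created reply
print(response.read()[:200])              # beginning of the CGI-generated body
conn.close()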

Ex-3
HTTP uses ASCII characters. A client can connect directly to a server by using TELNET
to log into port 80. The next three lines show that the connection is successful. We then
type three lines. The first is the request line (GET method), the second is a header line,
and the third is a blank line that terminates the request. The server response is seven
lines, starting with the status line. The blank line at the end terminates the server
response. The file of 14,230 bytes is received after the blank line (not shown here). The
last line is output by the client.

$ telnet www.mhhe.com 80
Trying 198.45.24.104 ...
Connected to www.mhhe.com (198.45.24.104).
Escape character is '^]'.
GET /engcs/compsci/forouzan HTTP/1.1
From: forouzanbehrouz@fbda.edu

HTTP/1.1 200 OK
Date: Thu, 28 Oct 2004 16:27:46 GMT
Server: Apache/1.3.9 (Unix) ApacheJServ/1.1.2 PHP/4.1.2 PHP/3.0.18
MIME-version: 1.0
Content-Type: text/html
Last-modified: Friday, 15-Oct-04 02:11:31 GMT
Content-length: 14230

Connection closed by foreign host.

Persistent Versus Nonpersistent Connection

HTTP prior to version 1.1 specified a nonpersistent connection, while a persistent
connection is the default in version 1.1.

Nonpersistent Connection
In a nonpersistent connection, one TCP connection is made for each request/response.
The following lists the steps in this strategy:

1. The client opens a TCP connection and sends a request.
2. The server sends the response and closes the connection.
3. The client reads the data until it encounters an end-of-file marker; it then closes the
connection.
In this strategy, for N different pictures in different files, the connection must be opened
and closed N times. The nonpersistent strategy imposes high overhead on the server
because the server needs N different buffers and requires a slow start procedure each
time a connection is opened.
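
The cost of the nonpersistent strategy can be seen in the following sketch, which opens
and closes a fresh TCP connection for each object (host and file names are placeholders;
HTTP/1.0 is used because it is nonpersistent by default):

import socket

def fetch_once(host, path):
    # Step 1: the client opens a TCP connection and sends a request.
    with socket.create_connection((host, 80)) as s:
        s.sendall(f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode("ascii"))
        data = b""
        while True:
            chunk = s.recv(4096)           # Step 3: read until end of file
            if not chunk:                  # Step 2: the server closed the connection
                break
            data += chunk
    return data                            # the client's end is closed here as well

for picture in ("/pic1.gif", "/pic2.gif", "/pic3.gif"):   # N = 3 objects, 3 connections
    print(picture, len(fetch_once("www.example.com", picture)), "bytes")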

Persistent Connection
HTTP version 1.1 specifies a persistent connection by default. In a persistent connection,
the server leaves the connection open for more requests after sending a response. The
server can close the connection at the request of a client or if a time-out has been
reached. The sender usually sends the length of the data with each response. However,
there are some occasions when the sender does not know the length of the data. This is
the case when a document is created dynamically or actively. In these cases, the server
informs the client that the length is not known and closes the connection after sending
the data so the client knows that the end of the data has been reached.
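
A minimal sketch of a persistent connection with Python's http.client (host and paths are
placeholders; the server is assumed to keep the HTTP/1.1 connection open):

import http.client

conn = http.client.HTTPConnection("www.example.com", 80)   # one TCP connection
for path in ("/index.html", "/pic1.gif", "/pic2.gif"):
    conn.request("GET", path)
    response = conn.getresponse()
    body = response.read()                 # the body must be consumed before reuse
    print(path, response.status, len(body), "bytes")
conn.close()                               # the single connection is closed once at the end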

Proxy Server
HTTP supports proxy servers. A proxy server is a computer that keeps copies of
responses to recent requests. The HTTP client sends a request to the proxy server. The
proxy server checks its cache. If the response is not stored in the cache, the proxy server
sends the request to the corresponding server. Incoming responses are sent to the proxy
server and stored for future requests from other clients. The proxy server reduces the
load on the original server, decreases network traffic, and reduces latency. However, to
use the proxy server, the client must be configured to access the proxy instead of the
target server.
A Web cache, also called a proxy server, is a network entity that satisfies HTTP
requests on behalf of an origin Web server. The Web cache has its own disk storage
and keeps copies of recently requested objects in this storage.

21.2.8 Socket Programming: Creating Network Applications


Sockets provide an interface for programming networks at the transport layer. Network
communication using sockets is very similar to performing file I/O; in fact, a socket
handle is treated much like a file handle, and the streams used in file I/O operations also
apply to socket-based I/O. Socket-based communication is independent of the
programming language used to implement it: a socket program written in Java can
communicate with a socket program written in a non-Java language such as C or C++.

The client-server application using UDP
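
The figure referenced by the caption above is not reproduced here; in its place, the
following is a minimal Python sketch of a UDP client-server pair (the port number, the
loopback address, and the uppercase-echo behaviour are illustrative assumptions):

import socket

def server(port=12000):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)    # UDP (datagram) socket
    s.bind(("", port))
    while True:
        message, client_addr = s.recvfrom(2048)             # datagram plus sender address
        s.sendto(message.upper(), client_addr)               # reply to that address

def client(host="127.0.0.1", port=12000):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.sendto(b"hello", (host, port))    # destination address is attached to every datagram
    reply, _ = s.recvfrom(2048)
    print(reply.decode())               # HELLO
    s.close()

Run server() in one process and client() in another to observe the exchange.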

Socket Programming with TCP

Unlike UDP, TCP is a connection-oriented protocol. This means that before the client
and server can start to send data to each other, they first need to handshake and
establish a TCP connection. One end of the TCP connection is attached to the client
socket and the other end is attached to a server socket. When creating the TCP
connection, we associate with it the client socket address (IP address and port number)
and the server socket address (IP address and port number). With the TCP connection
established, when one side wants to send data to the other side, it just drops the data
into the TCP connection via its socket. This is different from UDP, for which the server
must attach a destination address to the packet before dropping it into the socket.

The client-server application using TCP
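
Again as a stand-in for the missing figure, here is a minimal Python sketch of a TCP
client-server pair (port number, loopback address, and the uppercase-echo behaviour are
assumptions). Note that the client names the server only when it connects; after that,
data is simply written into the established connection:

import socket

def server(port=12001):
    welcome = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # welcoming (listening) socket
    welcome.bind(("", port))
    welcome.listen(1)
    while True:
        conn, addr = welcome.accept()      # a new connection socket for each client
        data = conn.recv(2048)
        conn.sendall(data.upper())
        conn.close()

def client(host="127.0.0.1", port=12001):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, port))                # handshake: TCP connection established here
    s.sendall(b"hello")                    # no destination address needed per send
    print(s.recv(2048).decode())           # HELLO
    s.close()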

21.3 Other Application Layer Protocols
21.3.1 TELNET (TErminal NETwork):
 The main task of the internet and its TCP/IP protocol suite is to provide services
for users.
 For example, users want to be able to run different application programs at a
remote site and create results that can be transferred to their local site.
 One way to satisfy these demands is to create different client-server application
programs (like, FTP & TFTP, e-mail (SMTP)) for each desired service.
 A more general solution is to let the user access any application program on a remote
computer; in other words, to allow the user to log on to a remote computer.
 After logging on, a user can use the services available on the remote computer and
transfer the results back to the local computer.

 TELNET is a popular client-server application program that provides these
services.
 TELNET enables the establishment of a connection to a remote system in such a
way that the local terminal appears to be a terminal at the remote system.

21.3.2 FTP (File Transfer Protocol):


 FTP is the standard mechanism provided by TCP/IP for copying a file from one
host to another.
 Transferring files from one computer to another is one of the most common tasks
expected from a networking or internetworking environment.
 Although transferring files from one system to another seems simple and
straightforward, some problems must be dealt with first.
 For example, two systems may use different file name conventions. Two systems
may have different ways to represent text and data.
 Two systems may have different directory structures.
 All of these problems have been solved by FTP in a very simple and elegant
approach.
 FTP differs from other client-server applications in that it establishes two
connections between the hosts. One connection is used for data transfer, and the
other is used for control information (commands and responses). This separation
of commands and data transfer makes FTP more efficient.
 Port 21 is used for the control connection, and port 20 is used for the data
connection.
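
A short hedged sketch with Python's standard ftplib module shows the two-connection
model in use (host, credentials, and file name are placeholders):

from ftplib import FTP

ftp = FTP("ftp.example.com")                 # control connection on port 21
ftp.login("anonymous", "guest@example.com")
ftp.retrlines("LIST")                        # listing arrives over a separate data connection
with open("readme.txt", "wb") as f:
    ftp.retrbinary("RETR readme.txt", f.write)   # file transfer over another data connection
ftp.quit()                                   # QUIT command sent on the control connection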

21.3.3 DHCP (Dynamic Host configuration protocol):
 Each computer that is attached to a TCP/IP internet must know the following
information:
o Its IP address.
o Its subnet mask.
o The IP address of a router.
o The IP address of a name server.
 The DHCP (Dynamic host configuration protocol) is a client-server protocol
designed to provide the four previously mentioned pieces of information for a
diskless computer or a computer that is booted for the first time.
 DHCP is an extension of BOOTP (the Bootstrap Protocol).
 BOOTP is a static configuration protocol, whereas DHCP is a dynamic configuration
protocol.
 DHCP is also needed when a host moves from network to network or is connected
and disconnected from a network.
 DHCP provides temporary IP addresses for a limited period of time.

21.3.4 BOOTP (Bootstrap Protocol):

 In computer networking, the Bootstrap Protocol (BOOTP) is a network protocol
used by a network client to obtain an IP address from a configuration server.
 BOOTP is usually used during the bootstrap process when a computer is starting
up.

 A BOOTP configuration server assigns an IP address to each client from a pool of
addresses.
 BOOTP uses the User Datagram Protocol (UDP) as a transport on IPv4 networks
only.
 The Dynamic Host Configuration Protocol (DHCP) is a more advanced protocol
for the same purpose and has superseded the use of BOOTP. Most DHCP servers
also function as BOOTP servers.

21.3.5 SNMP (Simple Network Management protocol):


 The simple network management protocol (SNMP) is a framework for managing
devices in an internet using the TCP/IP protocol suite.
 It provides a set of fundamental operations for monitoring and maintaining an
internet.
 SNMP uses the concept of manager and agent: a manager, usually a host, controls
and monitors a set of agents, usually routers.
 SNMP is an application-level protocol in which a few manager stations control a
set of agents.
 The protocol is designed at the application level so that it can monitor devices
made by different manufacturers and installed on different physical networks.
 It can be used in a heterogeneous internet made of different LANs and WANs
connected by routers or gateways made by different manufacturers.
 Management with SNMP is based on three basic ideas:
o A manager (also called Management Station) checks an agent (also called
Managed Station) by requesting information that reflects the behavior of the
agent.
o A manager (a host that runs the SNMP client program) forces an agent to
perform a task by resetting values in the agent (a router or host that runs the
SNMP server program) database.
o An agent contributes to the management process by warning the manager
of an unusual situation, using a warning message called a trap.

22.0 Error Correction Code: Hamming Code

With error detection alone, the receiver can detect that an error has occurred, but it
cannot locate the erroneous bit within the data unit. An error correction code adds
enough redundancy for the receiver to locate, and therefore correct, the error.

Error Correction
Error correction can be handled in two ways:
1) Retransmission: whenever an error is detected, the receiver asks the sender to
retransmit the entire data unit.
2) Correction at the receiver: whenever an error is detected, the receiver corrects it
itself, using a technique such as forward error correction (or burst error correction).

Data redundancy bits

To detect and correct a single-bit error in m data bits, r redundancy bits are added so that

2^r >= m + r + 1

Number of data bits (m)   Number of redundancy bits (r)   Total bits (m + r)
         1                              2                          3
         2                              3                          5
         3                              3                          6
         4                              3                          7
         5                              4                          9
         6                              4                         10
         7                              4                         11
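
The rows of this table follow directly from the inequality; a short Python sketch (an
illustration added here, not one of the original handbook examples) reproduces them:

def redundancy_bits(m):
    # smallest r satisfying 2^r >= m + r + 1
    r = 0
    while 2 ** r < m + r + 1:
        r += 1
    return r

for m in range(1, 8):
    r = redundancy_bits(m)
    print(m, r, m + r)    # prints the same (m, r, m + r) rows as the table above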

Hamming code

Position of redundant bit

Redundancy bits calculation

Example:
Redundancy bit calculation

Error detection using Hamming code
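
Since the figures above are not reproduced here, the following Python sketch illustrates
the scheme they describe, assuming even parity and redundancy bits placed at the
power-of-two positions (positions counted from 1 at the right). With the 7-bit data word
1001101 it produces the 11-bit codeword 10011100101; flipping one bit makes the check
return the position of the error:

def hamming_encode(data_bits):
    """data_bits: list of 0/1 bits, most significant first; returns the codeword."""
    m = len(data_bits)
    r = 0
    while 2 ** r < m + r + 1:          # choose r so that 2^r >= m + r + 1
        r += 1
    n = m + r
    code = [0] * (n + 1)               # index 0 unused; positions 1..n
    bits = iter(data_bits)
    for pos in range(n, 0, -1):        # data bits go into the non-power-of-two positions
        if pos & (pos - 1) != 0:
            code[pos] = next(bits)
    for i in range(r):                 # each redundancy bit: even parity over covered positions
        p = 2 ** i
        code[p] = sum(code[pos] for pos in range(1, n + 1) if pos & p) % 2
    return code[1:][::-1]              # highest position first

def hamming_check(codeword):
    """Returns 0 if no single-bit error is detected, else the erroneous position."""
    n = len(codeword)
    code = [0] + codeword[::-1]        # back to position-indexed form
    syndrome, i = 0, 0
    while 2 ** i <= n:
        p = 2 ** i
        if sum(code[pos] for pos in range(1, n + 1) if pos & p) % 2:
            syndrome += p
        i += 1
    return syndrome

word = hamming_encode([1, 0, 0, 1, 1, 0, 1])   # 7 data bits -> 11-bit codeword
print(word, hamming_check(word))               # syndrome 0: no error detected
word[5] ^= 1                                    # flip the bit at position 6 in transit
print(hamming_check(word))                      # 6: the receiver locates the error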
