
Networking Fundamentals

Objectives of Networking

Computer communication deals with the transfer of information between computer terminals
through communication links. Since the 1970s, the merger of computer science and data
communication has created a fast-growing field of system opportunities. The technologies dominating
the past decade revolve around information gathering, processing, and storage.

Nowadays, data communication is mostly digital, because digital systems offer high operating speed,
reliability, miniaturization, and precise data handling.

Interconnected computers do the job instead of a single computer serving all of a company's needs.
Such a system is called a “computer network”: an interconnected collection of autonomous computers
able to exchange information. The connection can be via copper wire, fiber optics, microwaves, or
communication satellite.

1.1 Network Structure

A network reduces the cost of input hardware and cable installation. It consists of a number of stations,
a set of nodes (IMPs), and communication channels.

Network stations may be terminals, computers, telephones, or other communication devices. They are
also called HOSTS or END SYSTEMS. The hosts are connected to a communication subnet (or simply
subnet), which carries messages between hosts and consists of switching elements and transmission
lines. Transmission lines, also called CIRCUITS, CHANNELS, or TRUNKS, move bits between machines.

The switching elements are specialized computers used to connect two or more transmission lines. The
purpose of a switching element is to choose an outgoing line and forward the data arriving on an
incoming line. All traffic to or from a host goes via its IMP. Switching elements are also known as
PACKET SWITCHING NODES, INTERMEDIATE SYSTEMS, or DATA SWITCHING EXCHANGES.

The subnet is the collection of communication lines and routers, but not the hosts. The set of nodes to
which stations attach is the boundary of the communication network. The collection of routers and
communication lines moves packets from the source host to the destination host.

Network structure can be described in terms of two concepts:

- Data terminal equipment (DTE)

- Data circuit-terminating equipment (DCE)

Most digital data processing devices have limited data transmission capacity and limited transmission
distance.
The DTE is the end-user machine and generally refers to terminals and computers.
Examples: an email terminal, a workstation, an ATM in a bank, or a sales terminal in a department
store. DTEs are not commonly connected directly to the transmission medium.

The DCE connects the DTE to the communication channel.

Example: a modem. The DCE interacts with the DTE, provides the DTE's interface to the communication
network, and transmits and receives bits one at a time over the communication channel.

Various standards and protocols have been developed to specify the exact nature of the interface
between DTE and DCE. A high degree of cooperation is essential in the DTE-DCE combination, as both
data and control information must be exchanged. DTEs and DCEs can be connected in two ways:

- Point-to-point configuration: only two DTE devices share the channel.

- Multidrop configuration: more than two devices are connected to the same communication
channel.

This module provides the basic technology concepts required for understanding networking. The lessons
below show how we have organized the networking fundamentals.

Lesson 1: Networking Basics

This lesson covers the very basics of networking. We’ll start with a little history that describes how the
networking industry evolved. We’ll then move on to a section that describes how a LAN is built:
essentially the necessary components (like NIC cards and cables). We then cover LAN topologies. And
finally we’ll discuss the key networking devices: hubs, bridges, switches, and routers.
This module is an overview only. It will familiarize you with much of the vocabulary you will hear with
regard to networking. Some of these concepts are covered in more detail in later lessons.

The Agenda

- Networking History

- How a LAN Is Built

- LAN Topologies

- LAN/WAN Devices

Networking History

Early networks

From a historical perspective, electronic communication has actually been around a long time, beginning
with Samuel Morse and the telegraph. He sent the first telegraph message May 24, 1844 from
Washington DC to Baltimore MD, 37 miles away. The message? “What hath God wrought.”

Some 30 years later, Alexander Graham Bell invented the telephone, beating a competitor to
the patent office by only a couple of hours on Valentine's Day in 1876. This led to the development of
the ultimate analog network: the telephone system.

The first bit-oriented device was Emile Baudot's printing telegraph. By bit-oriented we mean the device
sent pulses of electricity that were either positive or had no voltage at all. These machines did not use
Morse code; Baudot's five-level code sent five pulses down the wire for each character transmitted. The
machines did the encoding and decoding, eliminating the need for trained operators at both ends of the
wires. For the first time, electronic messages could be sent by anyone.

Telephone Network

But it's really the telephone network that has had the greatest impact on how businesses communicate
and connect today. Until 1984, the Bell Telephone Company, now known as AT&T, owned the telephone
network from end to end. It was a phenomenal network, the largest then and still the largest today.

Let’s take a look at some additional developments in the communications industry that had a direct
impact on the networking industry today.

Developments in Communication

In 1966, an individual named “Carter” invented a special device that attached to a telephone receiver
and allowed construction workers to talk over the telephone from a two-way radio.
The Bell telephone company had a problem with this and sued, and eventually lost.

As a result, in 1975, the Federal Communications Commission ruled that devices could attach to the
phone system, if they met certain specifications. Those specifications were approved in 1977 and
became known as FCC Part 68. In fact, years ago you could look at the underside of a telephone not
manufactured by Bell, and see the “Part 68” stamp of approval.

This ruling eventually led to the breakup of American Telephone and Telegraph in 1984, creating seven
regional Bell operating companies such as Pacific Bell, Bell Atlantic, BellSouth, and Mountain Bell.
The breakup of AT&T in 1984 opened the door for other competitors in the telecommunications market,
companies like Microwave Communications, Inc. (MCI) and Sprint. Today, when you make a phone call
across the country, it may go through three or four different carrier networks to make the connection.

Now, let’s take a look at what was happening in the computer industry about the same time.

1960's - 1970's Communication

In the 1960's and 1970's, traditional computer communications centered on the mainframe host.
The mainframe contained all the applications needed by the users, as well as file management and even
printing. This centralized computing environment used low-speed access lines that tied terminals to the
host.
These large mainframes used digital signals, pulses of electricity representing zeros and ones (what is
called binary), to pass information from the terminals to the host. The information processing in the
host was also entirely digital.

Problems faced in communication

This brought about a problem. The telephone industry wanted to use computers to switch calls faster,
and the computer industry wanted to connect remote users to the mainframe using the telephone
service. But telephone networks speak analog and computers speak digital. Let's take a closer look at
this problem.

Digital signals are ones and zeros: the signal is either on or off. Analog signals, by contrast, are like
audio tones, for example the high-pitched squeal you hear when you accidentally call a fax machine.
So, in order for the computer world to use the services of the telephone system, the signal had to be
converted.

The solution

The solution: a modulator/demodulator, or “modem.” In sending information from a desktop computer
to a host using POTS, or plain old telephone service, the modem takes the digital signals from the
computer and modulates them into analog format to go through the telephone system. On the far side,
the analog signal goes through another modem, which converts it back to digital format to be
processed by the host computer.
This helped solve some of the distance problems, at least to a certain extent.
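The modulation step can be sketched in code. The following is a toy illustration, not a real modem: it maps each bit to one of two audio tones (the 1070/1270 Hz pair used by early Bell 103 modems) and recovers the bits by counting zero crossings. The sample rate, bit rate, and function names are our own illustrative choices.

```python
import math

# Toy FSK "modem": bit 0 -> 1070 Hz tone, bit 1 -> 1270 Hz tone
# (the tone pair used by early Bell 103 modems; everything else here
# is an illustrative choice, not a real modem implementation).
RATE = 8000          # samples per second
BAUD = 100           # bits per second
F0, F1 = 1070, 1270  # tone frequencies for 0 and 1, in Hz

def modulate(bits):
    """Turn a bit list into analog-style samples, one tone per bit."""
    spb = RATE // BAUD                      # samples per bit
    samples = []
    for i, bit in enumerate(bits):
        freq = F1 if bit else F0
        for n in range(spb):
            t = (i * spb + n) / RATE
            samples.append(math.sin(2 * math.pi * freq * t))
    return samples

def demodulate(samples):
    """Recover bits by counting zero crossings in each bit period:
    the higher tone crosses zero more often."""
    spb = RATE // BAUD
    threshold = 2 * ((F0 + F1) / 2) * spb / RATE  # midway crossing count
    bits = []
    for i in range(0, len(samples), spb):
        chunk = samples[i:i + spb]
        crossings = sum(1 for a, b in zip(chunk, chunk[1:]) if a * b < 0)
        bits.append(1 if crossings > threshold else 0)
    return bits

message = [1, 0, 1, 1, 0, 0, 1, 0]
assert demodulate(modulate(message)) == message
```

A real modem additionally deals with noise, framing, and handshaking; the point here is only the digital-to-analog-and-back round trip.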

Multiplexing or muxing

Another problem is how to connect multiple terminals to a single cable. The technology solution is
multiplexing or muxing.
What we can do with multiplexing is take multiple remote terminals and connect them back to a single
mainframe at the central site, all over a single communications channel, a single line.
You will notice some new terminology in the diagram. The shared line back to the central site is
referred to as a broadband connection, because whenever we talk about broadband we are talking
about carrying multiple communications channels over a single communication pipe. Here that means
four communication channels, one for each remote terminal, over a single physical path.
Out at the end stations, between each terminal and the multiplexer, we use the term baseband: a
single communication channel per wire, so each wire leading into the multiplexer is a dedicated channel
or path.
The function of the multiplexer is to take each of those baseband paths and allocate time slots, so that
each terminal has its own time slot across the shared connection between the remote terminals and the
central mainframe site. The multiplexer allocates the time slots on one side and, on the other side, puts
the pieces back together for delivery to the mainframe.
So muxing is our fundamental concept here. Let’s look at the different ways to do our muxing.
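The time-slot idea above can be sketched in a few lines. This hypothetical code interleaves the data units from several baseband streams into repeating slots on one shared channel and then splits them back apart; the function names and frame layout are illustrative assumptions, not a real multiplexer design.

```python
# Round-robin time-division multiplexing (TDM): each terminal's baseband
# stream gets a fixed, repeating time slot on the shared channel, and the
# demultiplexer reassigns slots back to terminals by position.

def tdm_mux(streams):
    """Interleave one data unit from each stream per frame.
    Assumes equal-length streams (one unit per slot, fixed order)."""
    frames = []
    for units in zip(*streams):
        frames.extend(units)
    return frames

def tdm_demux(channel, n_streams):
    """Pick every n-th slot back out for each terminal."""
    return [channel[i::n_streams] for i in range(n_streams)]

terminals = [["A1", "A2"], ["B1", "B2"], ["C1", "C2"], ["D1", "D2"]]
line = tdm_mux(terminals)
# line == ["A1", "B1", "C1", "D1", "A2", "B2", "C2", "D2"]
assert tdm_demux(line, len(terminals)) == terminals
```

Note that a terminal's slot is reserved whether or not it has data to send, which is exactly the inefficiency later statistical multiplexers were designed to avoid.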
Baseband and broadband

You see again the terms here, Baseband and broadband.


Again, in the case of baseband we have a single communications channel per physical path.
An example of baseband technology you are probably familiar with is Ethernet: most implementations
of Ethernet use baseband technology, with a single communications channel over a single physical
cable.
On the other hand, the bottom part of the diagram shows broadband; the analogy here would be
multiple trains inside a single tunnel.
A real-world example most of us use every day is cable TV. With cable TV we have multiple channels
coming in over a single cable: we plug one cable into the back of the TV, and over that single cable we
can get 12, 20, 40, 60, or more channels. So cable TV is a good example of broadband.
Given the addition of multiplexing and the use of the modem, let’s see how we can grow our network.

How networks are growing

Example: Using all the technology available, companies were able to team up with the phone company
and tie branch offices to headquarters. Data transfer speeds were often slow and were still dependent
on the speed and capacity of the host computers at the headquarters site.

The phone company was also able to offer leased line and dial-up options. With leased-lines, companies
paid for a continuous connection to the host computer. Companies using dial-up connections paid only
for time used. Dial-up connections were perfect for the small office or branch.

Birth of the personal computer

The birth of the personal computer in 1981 really fueled the explosion of the networking marketplace.
No longer were people dependent on a mainframe for applications, file storage, processing, or printing.
The PC gave users incredible freedom and power.

The Internet 1970's - 1980's

The '70s and '80s saw the beginnings of the Internet. The Internet as we know it today began as the
ARPANET (the Advanced Research Projects Agency Network), built by a division of the Department of
Defense essentially in the mid '60s through grant-funded research by universities and companies. The
first actual packet-switched network was built by BBN. It was used by universities and the federal
government to exchange information and research. Many local area networks connected to the ARPANET
with TCP/IP. TCP/IP, which stands for Transmission Control Protocol / Internet Protocol, was developed
in 1974. The ARPANET was shut down in 1990 due to newer network technology and the need for
greater bandwidth on the backbone.
In the mid '80s the NSFNET, the National Science Foundation Network, was developed. This network
relied on supercomputers in San Diego, Boulder, Champaign, Pittsburgh, Ithaca, and Princeton. Each of
these six supercomputers had a microcomputer tied to it which spoke TCP/IP; the microcomputer
handled all of the access to the backbone of the Internet. Essentially this network was overloaded from
the word "go".
Further developments in networking led to the design of the ANSNET (Advanced Networks and Services
Network). ANSNET was a joint effort by MCI, Merit, and IBM specifically for commercial purposes.
This large network was sold to AOL in 1995. The National Science Foundation then awarded contracts to
four major network access providers: Pacific Bell in San Francisco, Ameritech in Chicago, MFS in
Washington DC and Sprint in New York City. By the mid ‘80's the collection of networks began to be
known as the “Internet” in university circles. TCP/IP remains the glue that holds it together.
In January 1992 the Internet Society was formed – a misleading name since the Internet is really a
place of anarchy. It is controlled by those who have the fastest lines and can give customers the
greatest service today.
The primary Internet-related applications used today include: Email, News retrieval, Remote Login, File
Transfer and World Wide Web access and development.

1990's Global Internetworking

With the growth and development of the Internet came the need for speed – and bandwidth. Companies
want to take advantage of the ability to move information around the world quickly. This information
comes in the form of voice, data and video – large files which increase the demands on the network. In
the future, global internetworking will provide an environment for emerging applications that will require
even greater amounts of bandwidth. If you doubt the future of global internetworking consider this – the
Internet is doubling in size about every 11 months.

How a LAN Is Built

In the previous section, we discussed how networking evolved and some of the problems involved in
the transmission of data, such as signal conversion and connecting multiple terminals. In this section,
some of the basic elements needed to build local area networks (LANs) will be described.

LAN (Local Area Network)

The term local-area network, or LAN, describes all the devices that communicate together: printers, file
servers, computers, and perhaps even a host computer. However, the LAN is constrained by distance.
The transmission technologies used in LAN applications do not operate at full speed over long distances.
LAN distances are in the range of 100 meters (m) to 3 kilometers (km). This range can change as new
technologies emerge.
For systems from different manufacturers to interoperate, be it a printer, PC, or file server, they must
be developed and manufactured according to industry-wide protocols and standards.
More details about protocols and standards will be given later, but for now, just keep in mind they
represent rules that govern how devices on a network exchange information. These rules are developed
by industry-wide special interest groups (SIGs) and standards committees such as the Institute of
Electrical and Electronics Engineers (IEEE).

Most of the network administrator’s tasks deal with LANs. Major characteristics of LANs are:

- The network operates within a building or floor of a building. As desktop devices and the
applications they run grow more powerful, the geographic scope tends toward less area per LAN.

- LANs provide multiple connected desktop devices (usually PCs) with access to high-bandwidth
media.

- An enterprise purchases the media and connections used in the LAN; the enterprise can privately
control the LAN as it chooses.

- LANs rarely shut down or restrict access to connected workstations; local services are usually
always available.

- By definition, the LAN connects physically adjacent devices on the media.

So let’s look at the components of a LAN.

Components of LAN

- Network operating system (NOS)

In order for computers to be able to communicate with each other, they must first have the networking
software that tells them how to do so. Without the software, the system will function simply as a
“standalone,” unable to utilize any of the resources on the network.
Network operating software may be installed at the factory, eliminating the need for you to purchase it
(for example, AppleTalk), or you may install it yourself.

- Network interface card (NIC)


In addition to network operating software, each network device must also have a network interface card.
These cards today are also referred to as adapters, as in “Ethernet adapter card” or “Token Ring adapter
card.”
The NIC amplifies electronic signals, which are generally very weak within the computer system itself.
The NIC is also responsible for packaging data for transmission and for controlling access to the
network cable. When the data is packaged properly and the timing is right, the NIC pushes the data
stream onto the cable.
The NIC also provides the physical connection between the computer and the transmission cable (also
called “media”). This connection is made through the connector port. Examples of network types that
run over such media are Ethernet, Token Ring, and FDDI.

- Wiring Hub

In order to have a network, you must have at least two devices that communicate with each other. In
this simple model, it is a computer and a printer. The printer also has an NIC installed (for example, an
HP Jet Direct card), which in turn is plugged into a wiring hub. The computer system is also plugged into
the hub, which facilitates communication between the two devices.
Additional components (such as a server, a few more PCs, and a scanner) may be connected to the hub.
With this connection, all network components would have access to all other network components.
The benefit of building this network is that by sharing resources a company can afford higher quality
components. For example, instead of providing an inkjet printer for every PC, a company may purchase
a laser printer (which is faster, higher capacity, and higher quality than the inkjet) to attach to a
network. Then, all computers on that network have access to the higher quality printer.

- Cables or Transmission Media

The wires connecting the various devices together are referred to as cables.

- Cable prices range from inexpensive to very costly, and cabling can comprise a significant part of
the cost of the network itself.

- Cables are one example of transmission media. Media are the various physical environments through
which transmission signals pass. Common network media include twisted-pair, coaxial cable, fiber-
optic cable, and the atmosphere (through which microwave, laser, and infrared transmission occurs).
Another term for this is “physical media.” Note that not all wiring hubs support all media types.

The other component shown in fig. 1 is the connector.

- As its name implies, the connector is the physical location where the NIC card and the cabling
connect.

- Registered jack (RJ) connectors were originally used to connect telephone lines. RJ connectors are
now used for telephone connections and for 10BaseT and other types of network connections.
Different connectors support different transmission speeds because of their design and the materials
used in their manufacture.

- RJ-11 connectors are used for telephones, faxes, and modems. RJ-45 connectors are used for NIC
cards, 10BaseT cabling, and ISDN lines.

Network Cabling

Cable is the actual physical path upon which an electrical signal travels as it moves from one component
to another.
Transmission protocols determine how NIC cards take turns transmitting data onto the cable. Remember
that we discussed how LAN cables (baseband) carry one signal, while WAN cables (broadband) carry
multiple signals. There are three primary cable types:

- Twisted-pair (or copper)

- Coaxial cable and

- Fiber-optic cable

Twisted-pair (or copper)

Unshielded twisted-pair (UTP) is a four-pair wire medium used in a variety of networks. UTP does not
require the fixed spacing between connections that is necessary with coaxial-type connections. There are
five types of UTP cabling commonly used as shown below:

- Category 1: Used for telephone communications. It is not suitable for transmitting data.

- Category 2: Capable of transmitting data at speeds up to 4 Mbps.

- Category 3: Used in 10BaseT networks and can transmit data at speeds up to 10 Mbps.

- Category 4: Used in Token Ring networks. Can transmit data at speeds up to 16 Mbps.

- Category 5: Can transmit data at speeds up to 100 Mbps.

Shielded twisted-pair (STP) is a two-pair wiring medium used in a variety of network implementations.
STP cabling has a layer of shielded insulation to reduce EMI. Token Ring runs on STP.

Using UTP and STP:


- Speed is usually satisfactory for local-area distances.

- These are the least expensive media for data communication. UTP is cheaper than STP.

- Because most buildings are already wired with UTP, many transmission standards are adapted to
use it, to avoid the costly re-wiring required for an alternative cable type.

Coaxial cable

Coaxial cable consists of a solid copper core surrounded by an insulator, a combination shield and
ground wire, and an outer protective jacket.
The shielding on coaxial cable makes it less susceptible to interference from outside sources. It requires
termination at each end of the cable, as well as a single ground connection.
Coax supports 10/100 Mbps and is relatively inexpensive, although more costly than UTP.
Coaxial can be cabled over longer distances than twisted-pair cable. For example, Ethernet can run at
full speed over approximately 100 m (about 330 feet) of twisted pair; using coaxial cable increases this
distance to 500 m.

Fiber-optic cable

Fiber-optic cable consists of glass fiber surrounded by shielding protection: a plastic shield, Kevlar
reinforcing, and an outer jacket. Fiber-optic cable is the most expensive of the three types discussed in
this section, but it supports 100+ Mbps line speeds.

There are two types of fiber cable:

- Single or mono-mode—Allows only one mode (or wavelength) of light to propagate through the
fiber; is capable of higher bandwidth and greater distances than multimode. Often used for campus
backbones. Uses lasers as the light generating method. Single mode is much more expensive than
multimode cable. Maximum cable length is 100 km.

- Multimode—Allows multiple modes of light to propagate through the fiber. Often used for workgroup
applications. Uses light-emitting diodes (LEDs) as light generating device. Maximum cable length is 2
km.

Throughput Needs

Super servers, high-capacity workstations, and multimedia applications have also fueled the need for
higher capacity bandwidths.
The examples in the image above show that the need for throughput capacity grows as a result of the
desire to transmit more voice, video, and graphics. The rate at which this information may be sent
(transmission speed) depends on how data is transmitted and the medium used for transmission. The
“how” of this equation is satisfied by a transmission protocol.
Each protocol runs at a different speed. Two terms are used to describe this speed: throughput rate and
bandwidth.

The throughput rate is the rate of information arriving at, and possibly passing through, a particular
point in a network.
In this chapter, the term bandwidth means the total capacity of a given network medium (twisted pair,
coaxial, or fiber-optic cable) or protocol.

- Bandwidth is also used to describe the difference between the highest and the lowest frequencies
available for network signals. This quantity is measured in Megahertz (MHz).

- The bandwidth of a given network medium or protocol is measured in bits per second (bps).

Some of the available bandwidth specified for a given medium or protocol is used up in overhead,
including control characters. This overhead reduces the capacity available for transmitting data.

This table shows the tremendous variation in transmission time with different throughput rates. In years
past, megabit (Mb) rates were considered fast. In today’s modern networks, gigabit (Gb) rates are
possible. Nevertheless, there continues to be a focus on greater throughput rates.
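The underlying arithmetic is simple: transmission time is the number of bits to send divided by the line rate in bits per second. A small illustration (the file size and line rates chosen here are just common examples, not figures from the table):

```python
# Serialization delay: time = bits to send / line rate in bits per second.

def transmission_time(size_bytes, rate_bps):
    """Seconds needed to clock size_bytes onto a line running at rate_bps."""
    return size_bytes * 8 / rate_bps

ONE_MB = 1_000_000  # bytes
for name, rate in [("56 kbps modem", 56_000),
                   ("10 Mbps Ethernet", 10_000_000),
                   ("1 Gbps Ethernet", 1_000_000_000)]:
    print(f"{name}: {transmission_time(ONE_MB, rate):.4f} s")
# 56 kbps modem: 142.8571 s
# 10 Mbps Ethernet: 0.8000 s
# 1 Gbps Ethernet: 0.0080 s
```

Real transfers take longer than this, since protocol overhead consumes part of the stated bandwidth, as noted above.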

LAN Topologies

You may hear the word topology used with respect to networks. “Topology” refers to the physical
arrangement of network components and media within an enterprise networking structure. There are
four primary kinds of LAN topologies: bus, tree, star, and ring.

Bus and Tree topology


Bus topology is

- A linear LAN architecture in which transmissions from network components propagate the length of
the medium and are received by all other components.
- The bus portion is the common physical signal path composed of wires or other media across which
signals can be sent from one part of a network to another. Sometimes called a highway.
- Ethernet/IEEE 802.3 networks commonly implement a bus topology.

Tree topology is

- Similar to bus topology, except that tree networks can contain branches with multiple nodes. As in
bus topology, transmissions from one component propagate the length of the medium and are
received by all other components.

The disadvantage of bus topology is that if the connection to any one user is broken, the entire network
goes down, disrupting communication between all users. Because of this problem, bus topology is rarely
used today.
The advantage of bus topology is that it requires less cabling (therefore, lower cost) than star topology.

Star topology

Star topology is a LAN topology in which endpoints on a network are connected to a common central
switch or hub by point-to-point links. Logical bus and ring topologies are often implemented physically
in a star topology.

- The benefit of star topology is that even if the connection to any one user is broken, the network
stays functioning, and communication between the remaining users is not disrupted.
- The disadvantage of star topology is that it requires more cabling (therefore, higher cost) than bus
topology.

Star topology may be thought of as a bus in a box.

Ring topology
Ring topology consists of a series of repeaters connected to one another by unidirectional transmission
links to form a single closed loop.

- Each station on the network connects to the network at a repeater.


- While logically a ring, ring topologies are most often organized physically as a closed-loop star: the
ring is wired through a central point, forming a unidirectional closed loop rather than separate
point-to-point links between neighbors.
- One example of a ring topology is Token Ring.

Redundancy is used to avoid collapse of the entire ring in the event that a connection between two
components fails.
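The repeater-to-repeater behavior can be sketched with a few lines of code. This hypothetical example only traces the unidirectional path a frame takes around the ring; the station names and function are our own, not part of any ring standard.

```python
# A frame on a unidirectional ring is passed from repeater to repeater,
# always in the same direction, until it reaches its destination.

def ring_path(stations, src, dst):
    """Return the stations a frame visits travelling from src to dst."""
    i = stations.index(src)
    path = [src]
    while path[-1] != dst:
        i = (i + 1) % len(stations)   # one direction only, wrapping around
        path.append(stations[i])
    return path

ring = ["A", "B", "C", "D"]
assert ring_path(ring, "A", "C") == ["A", "B", "C"]
# Traffic cannot go backwards: C -> B must travel almost the whole ring.
assert ring_path(ring, "C", "B") == ["C", "D", "A", "B"]
```

The second assertion shows why a single broken link matters so much on a ring: every station downstream of the break becomes unreachable, which is exactly what the redundancy mentioned above protects against.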

LAN/WAN Devices

Let’s now take a look at some of the devices that move traffic around the network.

The approach taken in this section will be simple. As networking technology continues to evolve, the
actual differences between networking devices are beginning to blur. Routers today are switching
packets faster, yielding the performance of switches. Switches, on the other hand, are being designed
with more intelligence and are able to act more like routers. Hubs, while traditionally not intelligent in
terms of the amount of software they run, are now being designed with software that allows the hub to
be “intelligent,” acting more like a switch.
In this section, we’ll keep these different types of product separate so that you can understand the
basics. Let’s start off with the hub.

Hub

Star topology networks generally have a hub in the center of the network that connects all of the
devices together using cabling. When bits hit a networking device, be they hubs, switches, or routers,
the devices will strengthen the signal and then send it on its way.
A hub is simply a multiport repeater. There is usually no software to load and no configuration required
(i.e. network administrators don’t have to tell the device what to do).

Hubs operate very much the same way as repeaters: they amplify received signals and propagate them
out all ports, with the exception of the port from which the data arrived.
For example in the above image, if system 125 wanted to print on the printer 128, the message would
be sent to all systems on Segment 1, as well as across the hub to all systems on Segment 2. System
128 would see that the message is intended for it and would process it.
Devices on the network are constantly listening for data. When a device senses a frame of information
that is addressed to it (we will talk more about addressing later), it accepts that information into
memory on the network interface card (NIC) and begins processing the data.
In fairly small networks, hubs work very well. However, in large networks the limitations of hubs create
problems for network managers. In this example, Ethernet is the standard being used. The network is
also baseband, so only one station can use the network at a time. If the applications and files being
used on this network are large, and there are many nodes on the network, contention for bandwidth
will slow the responsiveness of the network.
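The flooding behavior described above can be sketched as follows. This is an illustrative model, not real device software; the class names are our own, and the station numbers are borrowed from the example.

```python
# A hub is a multiport repeater: it regenerates an incoming frame and
# floods it out every port except the one the frame arrived on.

class Station:
    def __init__(self, address):
        self.address = address
        self.received = []

    def accept(self, frame):
        dst, payload = frame
        if dst == self.address:        # the NIC keeps only frames addressed to it
            self.received.append(payload)

class Hub:
    def __init__(self):
        self.ports = {}                # port number -> attached station

    def attach(self, port, station):
        self.ports[port] = station

    def send(self, in_port, frame):
        for port, station in self.ports.items():
            if port != in_port:        # flood everywhere except the arrival port
                station.accept(frame)

hub = Hub()
stations = {n: Station(n) for n in (125, 126, 127, 128)}
for port, address in enumerate(stations):
    hub.attach(port, stations[address])

# Station 125 (port 0) sends a print job to station 128: every other
# station hears the frame, but only 128 keeps it.
hub.send(0, (128, "print job"))
assert stations[128].received == ["print job"]
assert stations[126].received == []
```

Notice that every station's `accept` runs for every frame; that wasted work, and the shared one-at-a-time medium, is the contention problem the next sections address.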

Bridges

Bridges improve network throughput and operate at a more intelligent level than do hubs. A bridge is
considered to be a store and forward device that uses unique hardware addresses to filter traffic that
would otherwise travel from one segment to another. A bridge performs the following functions:

- Reads data frame headers and records source address/port (segment) pairs
- Reads the destination address of incoming frames and uses recorded addresses to determine the
appropriate outbound port for the frame.
- Uses memory buffers to store frames during periods of heavy transmission, and forwards them
when the medium is ready.

Let’s take a look at an example.

The bridge divides this Ethernet LAN into two segments in the above image, each connecting to a hub
and then to a bridge port. Stations 123-125 are on segment 1 and stations 126-128 are on segment 2.
When station 124 transmits to station 125, the frame goes into the hub (which repeats it and sends it
out all connected ports) and then on to the bridge. The bridge will not forward the frame because it
recognizes that stations 124 and 125 are on the same segment. Only traffic between segments passes
through the bridge. In this example, a data frame from station 123, 124, or 125 to any station on
segment 2 would be forwarded, and so would a message from any station on segment 2 to stations on
segment 1.
When one station transmits, all other stations must wait until the line is silent again before transmitting.
In Ethernet, only one station can transmit at a time, or data frames will collide with each other,
corrupting the data in both frames.
Bridges listen to the network and keep track of the stations they hear. For instance, the bridge in this
example will know that system 127 is on Segment 2, and that 125 is on Segment 1. The bridge may
even have a port (perhaps out to the Internet) where it will send all packets that it cannot identify a
destination for.

Switches

Switches use bridging technology to forward traffic between ports. They provide full dedicated
transmission rates between two stations that are directly connected to the switch ports. Switches also
build and maintain address tables just like bridges do. These address tables are known as “content
addressable memory.”

Let’s look at an example.


Replacing the two hubs and the bridge with an Ethernet switch provides the users with dedicated
bandwidth. Each station has a full 10Mbps “pipe” to the switch. With a switch at the center of the
network, combined with the 100Mbps links, users have greater access to the network.
Given the size of the files and applications on this network, additional bandwidth for access to the server
or to the corporate intranet is possible by using a switch that has both 10Mbps and 100Mbps Fast
Ethernet ports. The 10Mbps links could be used to support all the desktop devices, including the printer,
while the 100Mbps switch ports would be used for higher bandwidth needs.

Routers

A router has two basic functions: path determination using a variety of metrics, and forwarding packets
from one network to another. Routing metrics can include load on the link between devices, delay,
bandwidth, and reliability, or even hop count (i.e. the number of devices a packet must go through in
order to reach its destination).
In essence, routers will do all that bridges and switches will do, plus more. Routers have the capability
of looking deeper into the data frame and applying network services based on the destination IP
address. Destination and Source IP addresses are a part of the network header added to a packet
encapsulation at the network layer.

- SUMMARY -

* LANs are designed to operate within a limited geographic area

* Key LAN components are computers, NOS, NICs, hubs, and cables

* Common LAN topologies include bus, tree, star, and ring

* Common LAN/WAN devices are hubs, bridges, switches, and routers

Lesson 2: OSI Reference Model

This lesson covers the OSI reference model, sometimes also called the ISO or seven-layer reference model.
The model was developed by the International Organization for Standardization (ISO) in the early 1980s. It describes
the principles for interconnection of computer systems in an Open System Interconnection environment.

The Agenda

- The Layered Model

- Layers 1 & 2: Physical & Data Link Layers

- Layer 3: Network Layer

- Layers 4–7: Transport, Session, Presentation, and Application Layers

The Layered Model

The concept of layered communication is essential to ensuring interoperability of all the pieces of a
network. To introduce the process of layered communication, let’s take a look at a simple example.
In this image, the goal is to get a message from Location A to Location B. The sender doesn’t know what
language the receiver speaks – so the sender passes the message on to a translator.
The translator, while not concerned with the content of the message, will translate it into a language
that may be globally understood by most, if not all translators – thus it doesn’t matter what language
the final recipient speaks. In this example, the language is Dutch. The translator also indicates what the
language type is, and then passes the message to an administrative assistant.
The administrative assistant, while not concerned with the language, or the message, will work to
ensure the reliable delivery of the message to the destination. In this example, she will attach the fax
number, and then fax the document to the destination – Location B.

The document is received by an administrative assistant at Location B. The assistant at Location B may
even call the assistant at Location A to let her know the fax was properly received.
The assistant at Location B will then pass the message to the translator at her office. The translator will
see that the message is in Dutch. The translator, knowing that the person to whom the message is
addressed only speaks French, will translate the message so the recipient can properly read the
message. This completes the process of moving information from one location to another.

Upon closer study of the process employed to communicate, you will notice that communication took
place at different layers. At layer 1, the administrative assistants communicated with each other. At
layer 2, the translators communicated with each other. And, at layer 3 the sender was able to
communicate with the recipient.
Why a Layered Network Model?

That’s essentially the same thing that goes on in networking with the OSI model. This image illustrates the
model.

So, why use a layered network model in the first place? A layered network model does a number of
things. It reduces the complexity of the problem from one large one to seven smaller ones. It allows
the standardization of interfaces among devices. It also facilitates modular engineering, so engineers can
work on one layer of the network model without being concerned with what happens at another layer.
This modularity both accelerates the evolution of technology and simplifies teaching and learning by dividing
the complexity of internetworking into discrete, more easily learned subsets of operations.
Note that a layered model does not define or constrain an implementation; it provides a framework.
Implementations, therefore, do not conform to the OSI reference model, but they do conform to the
standards developed from the OSI reference model principles.

Devices Function at Layers

Let’s put this in some context. You are already familiar with different networking devices such as hubs,
switches, and routers. Each of these devices operates at a different level of the OSI Model.
NIC cards receive information from upper-level applications and properly package data for transmission
onto the network media. Essentially, NIC cards live at the lower two layers of the OSI Model.
Hubs, whether Ethernet, or FDDI, live at the physical layer. They are only concerned with passing bits
from one station to other connected stations on the network. They do not filter any traffic.
Bridges and switches on the other hand, will filter traffic and build bridging and switching tables in order
to keep track of what device is connected to what port.
Routers, or the technology of routing, live at layer 3.
These are the layers people are referring to when they speak of “layer 2” or “layer 3” devices.
Let’s take a closer look at the model.

Host Layers & Media Layers


Host Layers :-

The upper four layers, Application, Presentation, Session, and Transport, are responsible for accurate
data delivery between computers. The tasks or functions of these upper four layers must “interoperate”
with the upper four layers in the system being communicated with.

Media Layers :-

The lower three layers – Network, Data Link and Physical -- are called the media layers. The media
layers are responsible for seeing that the information does indeed arrive at the destination for which it
was intended.

Layer Functions

- Application Layer

If we take a look at the model from the top layer, the Application Layer, down, I think you will begin to
get a better idea of what the model does for the industry.

The applications that you run on a desktop system, such as PowerPoint, Excel, and Word, work above the
seven layers of the model.
The application layer of the model helps to provide network services to the applications. Some of the
application processes or services that it offers are electronic mail, file transfer, and terminal emulation.

- Presentation Layer

The next layer of the seven-layer model is the presentation layer. It is responsible for the overall
representation of the data from the application layer to the receiving system. It ensures that the data is
readable by the receiving system.

- Session Layer
The session layer is concerned with inter-host communication. It establishes, manages and terminates
sessions between applications.

- Transport Layer

Layer 4, the Transport layer, is primarily concerned with end-to-end connection reliability. It is concerned
with issues such as data transport, information flow, fault detection, and recovery.

- Network Layer

The network layer is layer 3. This is the layer that is associated with addressing and looking for the best
path to send information on. It provides connectivity and path selection between two systems.
The network layer is essentially the domain of routing. So when we talk about a device having layer 3
capability, we mean that that device is capable of addressing and best path selection.

- Data Link Layer

The link layer (formally referred to as the data link layer) provides reliable transit of data across a
physical link. In so doing, the link layer is concerned with physical (as opposed to network or logical)
addressing, network topology, line discipline (how end systems will use the network link), error
notification, ordered delivery of frames, and flow control.

- Physical Layer
The physical layer is concerned with binary transmission. It defines the electrical, mechanical,
procedural, and functional specifications for activating, maintaining, and deactivating the physical link
between end systems. Such characteristics as voltage levels, physical data rates, and physical
connectors are defined by physical layer specifications. Now you know the role of all 7 layers of the OSI
model.

Peer-to-Peer Communications

Let’s see how these layers work in a Peer to Peer Communications Network. In this exercise we will
package information and move it from Host A, across network lines to Host B.
Each layer uses its own layer protocol to communicate with its peer layer in the other system. Each
layer’s protocol exchanges information, called protocol data units (PDUs), between peer layers.
This peer-layer protocol communication is achieved by using the services of the layers below it. The
layer below any current or active layer provides its services to the current layer.
The transport layer ensures that data from each application is kept segmented and separate from other data. At the
network layer those segments are assembled into packets. At the data link layer those packets become
frames, and then at the physical layer those frames go out on the wires from one host to the other host
as bits.

Data Encapsulation

This whole process of moving data from host A to host B is known as data encapsulation – the data is
being wrapped in the appropriate protocol header so it can be properly received.
Let’s say we compose an email that we wish to send from system A to system B. The application we are
using is Eudora. We write the letter and then hit send. Now, the computer translates the characters into
ASCII and then into binary (1s and 0s). If the email is a long one, it is broken up and mailed in
pieces. This all happens by the time the data reaches the Transport layer.
At the network layer, a network header is added to the data. This header contains information required
to complete the transfer, such as source and destination logical addresses.

The packet from the network layer is then passed to the data link layer where a frame header and a
frame trailer are added thus creating a data link frame.

Finally, the physical layer provides a service to the data link layer. This service includes encoding the
data link frame into a pattern of 1s and 0s for transmission on the medium (usually a wire).
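The whole encapsulation path can be illustrated with a toy Python model. The field names (seq, trailer, the "FCS" placeholder) are made up for the sketch and do not reflect any real protocol format:

```python
# Toy model of data encapsulation: each layer wraps the unit from the layer
# above in its own header (and, at the data link layer, a trailer).
def transport_segment(data):
    return {"seq": 1, "payload": data}

def network_packet(segment, src_ip, dst_ip):
    # Network header carries the source and destination logical addresses.
    return {"src": src_ip, "dst": dst_ip, "payload": segment}

def datalink_frame(packet, src_mac, dst_mac):
    # Frame header and trailer wrap the network-layer packet.
    return {"hdr": {"src": src_mac, "dst": dst_mac},
            "payload": packet,
            "trailer": "FCS"}    # frame check sequence placeholder

def to_bits(frame):
    # The physical layer encodes the frame as a pattern of 1s and 0s.
    return "".join(format(b, "08b") for b in repr(frame).encode())

frame = datalink_frame(
    network_packet(transport_segment("Hello, B"), "10.0.0.1", "10.0.0.2"),
    "0000.0C12.3456", "0000.0C65.4321")
bits = to_bits(frame)
```

Reading the nested structure from the outside in retraces the layers: frame header, network header, transport header, and finally the original application data.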

Layers 1 & 2: Physical & Data Link Layers

Now let’s take a look at each of the layers in a bit more detail and with some context. For Layers 1 and
2, we’re going to look at physical device addressing, and the resolution of such addresses when they are
unknown.

Physical and Logical Addressing


Locating computer systems on an internetwork is an essential component of any network system – the
key to this is addressing.
Every NIC card on the network has its own MAC address. In this example we have a computer with the
MAC address 0000.0C12.3456. The MAC address is a hexadecimal number, so the digits in this address
don’t go just from zero to nine, but from zero to nine and then from "A" through "F".
So, there are actually sixteen digits represented in this counting system. Every type of device on a
network has a MAC address, whether it is a Macintosh computer, a Sun Work Station, a hub or even a
router. These are known as physical addresses and they don’t change.
Logical addresses exist at Layer 3 of the OSI reference model. Unlike link-layer addresses, which usually
exist within a flat address space, network-layer addresses are usually hierarchical. In other words, they
are like mail addresses, which describe a person’s location by providing a country, a state, a zip code, a
city, a street, and address on the street, and finally, a name. One good example of a flat address space
is the U.S. social security numbering system, where each person has a single, unique security number.

MAC Address

For multiple stations to share the same medium and still uniquely identify each other, the MAC sublayer
defines a hardware or data link address called the MAC address. The MAC address is unique for each
LAN interface.
On most LAN-interface cards, the MAC address is burned into ROM—hence the term, burned-in address
(BIA). When the network interface card initializes, this address is copied into RAM.
The MAC address is a 48-bit address expressed as 12 hexadecimal digits. The first 6 hexadecimal digits
of a MAC address contain a manufacturer identification (vendor code) also known as the organizationally
unique identifier (OUI). To ensure vendor uniqueness, the Institute of Electrical and Electronics Engineers
(IEEE) administers OUIs. The last 6 hexadecimal digits are administered by each vendor and often
represent the interface serial number.
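Splitting a MAC address into its IEEE-administered OUI and vendor-assigned portions is simple arithmetic on the 12 hexadecimal digits. Here is an illustrative Python helper (the address value is just an example, not a real assignment):

```python
# Split a 48-bit MAC address into its OUI (first 6 hex digits, administered
# by the IEEE) and its vendor-assigned portion (last 6 hex digits).
def split_mac(mac):
    # Accept the common colon, dot, and dash notations.
    digits = mac.replace(":", "").replace(".", "").replace("-", "").upper()
    if len(digits) != 12:
        raise ValueError("a MAC address is 12 hexadecimal digits (48 bits)")
    return digits[:6], digits[6:]

oui, serial = split_mac("00:00:0C:12:34:56")
# oui == "00000C" (the vendor code); serial == "123456" (vendor-assigned)
```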

Layer 3: Network Layer

Now let’s take a look at layer 3--the domain of routing.

Network Layer: Path Determination

Which path should traffic take through the cloud of networks? Path determination occurs at Layer 3. The
path determination function enables a router to evaluate the available paths to a destination and to
establish the preferred handling of a packet.
Data can take different paths to get from a source to a destination. At layer 3, routers really help
determine which path. The network administrator configures the router enabling it to make an intelligent
decision as to where the router should send information through the cloud.
The network layer sends packets from source network to destination network.
After the router determines which path to use, it can proceed with switching the packet: taking the
packet it accepted on one interface and forwarding it to another interface or port that reflects the best
path to the packet’s destination.
To be truly practical, an internetwork must consistently represent the paths of its media connections. As
the graphic shows, each line between the routers has a number that the routers use as a network
address. These addresses contain information about the path of media connections used by the routing
process to pass packets from a source toward a destination.
The network layer combines this information about the path of media connections–sets of links–into an
internetwork by adding path determination, path switching, and route processing functions to a
communications system. Using these addresses, the network layer also provides a relay capability that
interconnects independent networks.
The consistency of Layer 3 addresses across the entire internetwork also improves the use of bandwidth
by preventing unnecessary broadcasts which tax the system.

Addressing—Network and Node

Each device in a local area network is given a logical address. The first part is the network number – in
this example that is a single digit – 1. The second part is a node number, in this example we have nodes
1, 2, and 3. The router uses the network number to forward information from one network to another.

Protocol Addressing Variations

The two-part network addressing scheme extends across all the protocols covered in this course. How do
you interpret the meaning of the address parts? What authority allocates the addresses? The answers
vary from protocol to protocol.
For example, in the TCP/IP address, dotted decimal numbers show a network part and a host part.
Network 10 uses the first of the four numbers as the network part and the last three numbers–8.2.48–as
the host address. The mask is a companion number to the IP address. It communicates to the router the
part of the number to interpret as the network number and identifies the remainder available for host
addresses inside that network.
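The mask arithmetic can be demonstrated with Python's standard ipaddress module; the 10.8.2.48 address and the eight-bit (255.0.0.0) mask come from the example above:

```python
# The mask tells the router which bits of the address are the network
# number: bitwise-AND the address with the mask to get the network part;
# the remaining bits identify the host inside that network.
import ipaddress

iface = ipaddress.ip_interface("10.8.2.48/255.0.0.0")
network_part = iface.network.network_address           # 10.0.0.0 -> network 10
host_bits = int(iface.ip) & ~int(iface.network.netmask) & 0xFFFFFFFF
# host_bits holds the 8.2.48 portion as a single integer
```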
The Novell Internetwork Packet Exchange or IPX example uses a different variation of this two-part
address. The network address 1aceb0b is a hexadecimal (base 16) number that cannot exceed a fixed
maximum number of digits. The host address 0000.0c00.6e25 (also a hexadecimal number) is a fixed
48 bits long. This host address derives automatically from information in hardware of the specific LAN
device.
These are the two most common Layer 3 address types.

Network Layer Protocol Operations

Let’s take a look at the flow of packets through a routed network. For example’s sake, let’s say it is an
email message from you at Station X to your mother in Michigan, who is using System Y.
The message will exit Station X and travel through the corporate internal network until it gets to a point
where it needs the services of an Internet service provider. The message will bounce through their
network and eventually arrive at Mom’s Internet provider in Dearborn. Now, we have simplified this
transmission to three routers, when in actuality, it could travel through many different networks before
it arrives at its destination.
Let’s take a look, from the OSI model’s reference point, at what is happening to the message as it
bounces around the Internet on its way to Mom’s.

As information travels from Station X it reaches the network level where a network address is added to
the packet. At the data link layer, the information is encapsulated in an Ethernet frame. Then it goes to
the router – here it is Router A – and the router de-encapsulates and examines the frame to determine
what type of network layer data is being carried. The network layer data is sent to the appropriate
network layer process, and the frame itself is discarded.
The network layer process examines the header to determine the destination network.
The packet is again encapsulated in the data-link frame for the selected interface and queued for
delivery.
This process occurs each time the packet switches through another router. At the router connected to
the network containing the destination host – in this case, C -- the packet is again encapsulated in the
destination LAN’s data-link frame type for delivery to the protocol stack on the destination host, System
Y.

Multiprotocol Routing

Routers are capable of understanding address information coming from many different types of networks
and maintaining associated routing tables for several routed protocols concurrently. This capability
allows a router to interleave packets from several routed protocols over the same data links.
As the router receives packets from the users on the networks using IP, it builds a routing table
containing the addresses of the network of these IP users.
Now some Macintosh AppleTalk users are adding to the traffic on this link of the network. The router
adds the AppleTalk addresses to the routing table. Routing tables can contain address information from
multiple protocol networks.
In addition to the AppleTalk and IP users, there is also some IPX traffic from some Novell NetWare
networks.
Finally, we see some DEC traffic from the VAX minicomputers attached to the Ethernet networks.
Routers can pass traffic from these (and other) protocols across the common Internet.
The various routed protocols operate separately. Each uses routing tables to determine paths and
switches over addressed ports in a “ships in the night” fashion; that is, each protocol operates without
knowledge of or coordination with any of the other protocol operations.
Now, we have spent some time with routed protocols; let’s take some time talking about routing
protocols.

Routed Versus Routing Protocol

It is easy to confuse the similar terms routed protocol and routing protocol:

Routed protocols are what we have been talking about so far. They are any network protocol suite that
provides enough information in its network layer address to allow a packet to direct user traffic. Routed
protocols define the format and use of the fields within a packet. Packets generally are conveyed from
end system to end system. The Internet protocol IP and Novell’s IPX are examples of routed protocols.

Routing protocols support a routed protocol by providing mechanisms for sharing routing information.
Routing protocol messages move between the routers. A routing protocol allows the routers to
communicate with other routers to update and maintain tables. Routing protocol messages do not carry
end-user traffic from network to network. A routing protocol uses the routed protocol to pass
information between routers. TCP/IP examples of routing protocols are Routing Information Protocol
(RIP), Interior Gateway Routing Protocol (IGRP), and Open Shortest Path First (OSPF).

Static Versus Dynamic Routes

Routers must be aware of what links, or lines, on the network are up and running, which ones are
overloaded, or which ones may even be down and unusable. There are two primary methods routers use
to determine the best path to a destination: static and dynamic.
Static knowledge is administered manually: a network administrator enters it into the router’s
configuration. The administrator must manually update this static route entry whenever an internetwork
topology change requires an update. Static knowledge is private–it is not conveyed to other routers as
part of an update process.
Dynamic knowledge works differently. After the network administrator enters configuration commands
to start dynamic routing, route knowledge is updated automatically by a routing process whenever new
topology information is received from the internetwork. Changes in dynamic knowledge are exchanged
between routers as part of the update process.

Static Route : Uses a route that a network administrator enters into the router

Dynamic Route : Uses a route that a network protocol adjusts automatically for topology or
traffic changes

Dynamic routing tends to reveal everything known about an internetwork. For security reasons, it might
be appropriate to conceal parts of an internetwork. Static routing allows an internetwork administrator
to specify what is advertised about restricted partitions.
When an internetwork partition is accessible by only one path, a static route to the partition can be
sufficient. This type of partition is called a stub network. Configuring static routing to a stub network
avoids the overhead of dynamic routing.

Adapting to Topology Change

The internetwork shown in the graphic adapts differently to topology changes depending on whether it
uses statically or dynamically configured knowledge.
Static knowledge allows the routers to properly route a packet from network to network. The router
refers to its routing table and follows the static knowledge there to relay the packet to Router D. Router
D does the same and relays the packet to Router C. Router C delivers the packet to the destination host.

But what happens if the path between Router A and Router D fails? Obviously Router A will not be able
to relay the packet to Router D. Until Router A is reconfigured to relay packets by way of Router B,
communication with the destination network is impossible.
Dynamic knowledge offers more automatic flexibility. According to the routing table generated by Router
A, a packet can reach its destination over the preferred route through Router D. However, a second path
to the destination is available by way of Router B. When Router A recognizes the link to Router D is
down, it adjusts its routing table, making the path through Router B the preferred path to the
destination. The routers continue sending packets over this link.
When the path between Routers A and D is restored to service, Router A can once again change its
routing table to indicate a preference for the counter-clockwise path through Routers D and C to the
destination network.
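The failover behavior just described can be sketched in Python. The router names, costs, and the RouterA class are illustrative; in a real network, routing protocol updates (not manual calls) would drive the link-state changes:

```python
# Sketch of dynamic routing adapting to a topology change: Router A prefers
# the lower-cost path through Router D, but when that link goes down it
# automatically shifts to the alternate path through Router B.
class RouterA:
    def __init__(self):
        # Candidate next hops to the destination network; lower cost preferred.
        self.paths = {"D": {"cost": 1, "up": True},
                      "B": {"cost": 2, "up": True}}

    def next_hop(self):
        live = {r: p["cost"] for r, p in self.paths.items() if p["up"]}
        return min(live, key=live.get) if live else None   # None: unreachable

    def link_state(self, router, up):
        # In practice, a routing protocol update would trigger this change.
        self.paths[router]["up"] = up

a = RouterA()
first = a.next_hop()          # the preferred path, through Router D
a.link_state("D", False)      # the link to Router D fails
failover = a.next_hop()       # traffic shifts to Router B automatically
a.link_state("D", True)       # the link is restored to service
restored = a.next_hop()       # Router A again prefers the path through D
```

With static routing there would be no link_state update: Router A would keep pointing at the failed path until an administrator reconfigured it by hand.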

LAN-to-LAN Routing

Example 01:-

The next two examples will bring together many of the concepts we have discussed.

The network layer must relate to and interface with various lower layers. Routers must be capable of
seamlessly handling packets encapsulated into different lower-level frames without changing the
packets’ Layer 3 addressing.
Let’s look at an example of this in a LAN-to-LAN routing situation. Packet traffic from source Host 4 on
Ethernet network 1 needs a path to destination Host 5 on Token Ring Network 2. The LAN hosts depend
on the router and its consistent network addressing to find the best path.
When the router checks its routing table entries, it discovers that the best path to destination Network 2
uses outgoing port To0, the interface to a Token Ring LAN.

Although the lower-layer framing must change as the router switches packet traffic from the Ethernet on
Network 1 to the Token Ring on Network 2, the Layer 3 addressing for source and destination remains
the same - in this example it is Net 2, Host 5 despite the different lower-layer encapsulations.
The packet is then reframed and sent on to the destination Token Ring network.

LAN-to-WAN Routing

Now, let’s look at an example using a Wide Area Network.

Example 02:-
The network layer must relate to and interface with various lower layers for LAN-to-WAN traffic, as well.
As an internetwork grows, the path taken by a packet might encounter several relay points and a variety
of data-link types beyond the LANs. For example, in the graphic, a packet from the top workstation at
address 1.3 must traverse three data links to reach the file server at address 2.4 shown on the bottom:
The workstation sends a packet to the file server by encapsulating the packet in a Token Ring frame
addressed to Router A.

When Router A receives the frame, it removes the packet from the Token Ring frame, encapsulates it in
a Frame Relay frame, and forwards the frame to Router B.

Router B removes the packet from the Frame Relay frame and forwards the packet to the file server in a
newly created Ethernet frame.
When the file server at 2.4 receives the Ethernet frame, it extracts the packet and passes it to the
appropriate upper-layer process through the process of de-encapsulation.
The routers enable LAN-to-WAN packet flow by keeping the end-to-end source and destination
addresses constant while encapsulating the packet at the port to a data link that is appropriate for the
next hop along the path.
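This keep-the-packet, swap-the-frame behavior can be sketched in Python (the dictionary shapes are illustrative; the link types follow the example in the text):

```python
# At each hop the packet keeps its layer-3 source/destination (1.3 -> 2.4)
# while the layer-2 framing changes to suit the next data link.
packet = {"src": "1.3", "dst": "2.4", "data": "file request"}

def encapsulate(packet, link_type):
    return {"link": link_type, "payload": packet}

def hop(frame, next_link):
    # De-encapsulate the arriving frame, re-encapsulate for the outgoing link.
    return encapsulate(frame["payload"], next_link)

frame = encapsulate(packet, "token-ring")   # workstation -> Router A
frame = hop(frame, "frame-relay")           # Router A -> Router B
frame = hop(frame, "ethernet")              # Router B -> file server
# The layer-3 addresses inside the payload never changed along the way.
```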

Layers 4–7: Transport, Session, Presentation, and Application Layers

Let’s look at the upper layers of the OSI seven layer model now. Those layers are the transport, session,
presentation, and application layers.

Transport Layer

The transport layer segments and reassembles data from several upper-layer applications onto the
same transport-layer data stream.
It also establishes the end-to-end connection, from your host to another host. As the transport layer
sends its segments, it can also ensure data integrity. Essentially the transport layer opens up the
connection from your system through a network and then through a wide area cloud to the receiving
system at the other end.
- Segments upper-layer applications
- Establishes an end-to-end connection
- Sends segments from one end host to another
- Optionally, ensures data reliability

Transport Layer— Segments Upper-Layer Applications

The transport layer has several functions. First, it segments upper-layer application information. You
might have more than one application running on your desktop at a time. You might have
electronic mail open while transferring a file from the Web and running a terminal session. The
transport layer helps keep straight all of the information coming from these different applications.

Transport Layer— Establishes Connection

Another function of the transport layer is to establish the connection from your system to another
system. When you are browsing the Web and double-click on a link your system tries to establish a
connection with that host. Once the connection has been established, there is some negotiation that
happens between your system and the system that you are connected to in terms of how data will be
transferred. Once the negotiations are completed, data will begin to transfer. As soon as the data
transfer is complete, the receiving station will send you the end message and your browser will say
done. Essentially, the transport layer is responsible then for connecting and terminating sessions from
your host to another host.

Transport Layer— Sends Segments with Flow Control


Another important function of the transport layer is to send segments and maintain the sending and
receiving of information with flow control.
When a connection is established, the host begins to send segments to the receiver. When segments arrive
too quickly for a host to process, it stores them in memory temporarily. If the segments are part of a small
burst, this buffering solves the problem. If the traffic continues, the host or gateway eventually exhausts
its memory and must discard additional segments that arrive.
Instead of losing data, the transport function can issue a not ready indicator to the sender. Acting like a
stop sign, this indicator signals the sender to discontinue sending segment traffic to its peer. After the
receiver has processed sufficient segments that its buffers can handle additional segments, the receiver
sends a ready transport indicator, which is like a go signal. When it receives this indicator, the sender
can resume segment transmission.
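A toy Python model of this stop/go flow control follows; the buffer size and the "ready"/"not ready" strings are illustrative stand-ins for the transport indicators described above:

```python
# Sketch of transport-layer flow control: a receiver with a small buffer
# signals "not ready" (stop) when the buffer fills, and "ready" (go) once
# it has processed a segment and freed space.
class Receiver:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self.buffer = []

    def accept(self, segment):
        self.buffer.append(segment)
        # Tell the sender to stop once the buffer is full.
        return "ready" if len(self.buffer) < self.capacity else "not ready"

    def process_one(self):
        self.buffer.pop(0)       # a segment is processed, freeing buffer space
        return "ready"           # go signal: the sender may resume

recv = Receiver(capacity=3)
signals = [recv.accept(seg) for seg in ("s1", "s2", "s3")]
# signals == ["ready", "ready", "not ready"]: the sender pauses here,
recv.process_one()
# and the returned "ready" indicator lets transmission resume.
```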

Transport Layer— Reliability with Windowing

In the most basic form of reliable connection-oriented data transfer, a sequence of data segments must
be delivered to the recipient in the same sequence that they were transmitted. The protocol here
represents TCP. It fails if any data segments are lost, damaged, duplicated, or received in a different
order. The basic solution is to have a receiving system acknowledge the receipt of every data segment.
If the sender had to wait for an acknowledgment after sending each segment, throughput would be low.
Because time is available after the sender finishes transmitting the data segment and before the sender
finishes processing any received acknowledgment, the interval is used for transmitting more data. The
number of data segments the sender is allowed to have outstanding–without yet receiving an
acknowledgment– is known as the window.
In this scenario, with a window size of 3, the sender can transmit three data segments before expecting
an acknowledgment. Unlike this simplified graphic, there is a high probability that acknowledgments and
packets will intermix as they communicate across the network.
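The window-of-three behavior can be sketched in Python. This is a deliberate simplification of the scenario above: acknowledgments arrive only after the window fills, nothing is lost, and one cumulative acknowledgment covers all outstanding segments:

```python
# Sketch of windowing: with a window size of 3 the sender may have up to
# three unacknowledged segments outstanding before it must stop and wait.
def send_with_window(segments, window=3):
    sent, acked, log = 0, 0, []
    while acked < len(segments):
        # Fill the window with as many segments as it allows.
        while sent < len(segments) and sent - acked < window:
            log.append(f"send {segments[sent]}")
            sent += 1
        # One cumulative acknowledgment covers the outstanding segments.
        log.append(f"ack {sent}")
        acked = sent
    return log

log = send_with_window([1, 2, 3, 4, 5], window=3)
# -> ['send 1', 'send 2', 'send 3', 'ack 3', 'send 4', 'send 5', 'ack 5']
```

With a window of 1 the same function degenerates into send-and-wait, which shows why larger windows improve throughput.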

Transport Layer— An Acknowledgement Technique

Reliable delivery guarantees that a stream of data sent from one machine will be delivered through a
functioning data link to another machine without duplication or data loss. Positive acknowledgment with
retransmission is one technique that guarantees reliable delivery of data streams. Positive
acknowledgment requires a receiving system or receiver to communicate with the source, sending back
an acknowledgment message when it receives data. The sender keeps a record of each packet it sends
and waits for an acknowledgment before sending the next packet.
In this example, the sender is transmitting packets 1, 2, and 3. The receiver acknowledges receipt of the
packets by requesting packet number 4. The sender, upon receiving the acknowledgment sends packets
4, 5, and 6. If packet number 5 does not arrive at the destination, the receiver acknowledges with a
request to resend packet number 5. The sender resends packet number 5 and must receive an
acknowledgment to continue with the transmission of packet number 7.
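The exchange above can be modeled in a few lines. The `transfer` function and its loss simulation are hypothetical, but they follow the same scheme of acknowledging by requesting the next expected packet and resending anything lost:

```python
# Sketch of positive acknowledgment with retransmission. Packets are assumed
# to be numbered sequentially starting at 1, as in the example above.
def transfer(packets, lost):
    expected = 1            # next packet number the receiver is requesting
    delivered = []
    attempts = list(packets)
    while expected <= len(packets):
        for number in attempts:
            if number < expected:
                continue                # already delivered and acknowledged
            if number in lost:
                lost.discard(number)    # lost only on the first attempt
                break                   # receiver re-requests this number
            if number == expected:
                delivered.append(number)
                expected += 1           # ack = request for packet `expected`
        attempts = [n for n in packets if n >= expected]  # retransmit from gap
    return delivered

print(transfer([1, 2, 3, 4, 5, 6], lost={5}))  # [1, 2, 3, 4, 5, 6]
```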

Transport to Network Layer


The transport layer assumes it can use the network as a given “cloud” as segments cross from sender
source to receiver destination.
If we open up the functions inside the “cloud,” we reveal issues like, “Which of several paths is best for a
given route?” We see the role that routers perform in this process, and we see the segments of Layer 4
transport further encapsulated into packets.

Session Layer

- Network File System (NFS)


- Structured Query Language (SQL)
- Remote-Procedure Call (RPC)
- X Window System
- AppleTalk Session Protocol (ASP)
- DEC Session Control Protocol (SCP)

The session layer establishes, manages, and terminates sessions among applications. This layer is
primarily concerned with coordinating applications as they interact on different hosts. Some popular
session layer protocols are listed here: Network File System (NFS), Structured Query Language (SQL), and the X Window System; even the AppleTalk Session Protocol is part of the session layer.

Presentation Layer

The presentation layer is primarily concerned with the format of the data. Data and text can be
formatted as ASCII files, as EBCDIC files or can even be Encrypted. Sound may become a Midi file.
Video files can be formatted as MPEG video files or QuickTime files. Graphics and visual images can be
formatted as PICT, TIFF, JPEG, or even GIF files. So that is really what happens at the presentation
layer.

Application Layer

The application layer is the highest level of the seven layer model. Computer applications that you use
on your desktop every day, applications like word processing, presentation graphics, spreadsheets,
and database management, all sit above the application layer. Network applications and internetwork
applications allow you, as the user, to move computer application files through the network and through
the internetwork.
Examples:-

COMPUTER APPLICATIONS

- Word Processor
- Presentation Graphics
- Spreadsheet
- Database
- Design/Manufacturing
- Project Planning
- Others

NETWORK APPLICATIONS

- Electronic Mail
- File Transfer
- Remote Access
- Client-Server Process
- Information Location
- Network Management
- Others

INTERNETWORK APPLICATIONS

- Electronic Data Interchange


- World Wide Web
- E-Mail Gateways
- Special-Interest Bulletin Boards
- Financial Transaction Services
- Internet Navigation Utilities
- Conferencing (Voice, Video, Data)
- Others

- SUMMARY -

- OSI reference model describes building blocks of functions for program-to-program


communications between similar or dissimilar hosts

- Layers 4–7 (host layers) provide accurate data delivery between computers

- Layers 1–3 (media layers) control physical delivery of data over the network

The OSI reference model describes what must transpire for program-to-program communications to occur between even dissimilar computer systems. Each layer is responsible for providing information and pointers to the next higher layer in the OSI Reference Model.
The Application Layer (which is the highest layer in the OSI model) makes available network services to
actual software application programs.
The presentation layer is responsible for formatting and converting data and ensuring that the data is
presentable for one application through the network to another application.
The session layer is responsible for coordinating communication interactions between applications. The
reliable transport layer is responsible for segmenting and multiplexing information, keeping straight all
the various applications you might be using on your desktop, the synchronization of the connection, flow
control, error recovery as well as reliability through the process of windowing. The network layer is
responsible for addressing and path determination.
The link layer provides reliable transit of data across a physical link. And finally the physical layer is
concerned with binary transmission.

Lesson 3: Introduction to TCP/IP

This lesson provides an introduction to TCP/IP. I am sure you’ve heard of TCP/IP… though you may
wonder why you need to understand it. Well, TCP/IP is the language that governs communications
between all computers on the Internet. A basic understanding of TCP/IP is essential to understanding
Internet technology and how it can bring benefits to an organization.
We’re going to explain what TCP/IP is and the different parts that make it up. We’ll also discuss IP
addresses.

The Agenda

- What Is TCP/IP?

- IP Addressing

What Is TCP/IP?

TCP/IP is shorthand for a suite of protocols that run on top of IP. IP is the Internet Protocol, and TCP is
the most important protocol that runs on top of IP. Any application that can communicate over the
Internet is using IP, and these days most internal networks are also based on TCP/IP.
Protocols that run on top of IP include: TCP, UDP and ICMP. Most TCP/IP implementations support all
three of these protocols. We’ll talk more about them later.
Protocols that run underneath IP include: SLIP and PPP. These protocols allow IP to run across
telecommunications lines.
TCP/IP protocols work together to break data into packets that can be routed efficiently by the network.
In addition to the data, packets contain addressing, sequencing, and error checking information. This
allows TCP/IP to accurately reconstruct the data at the other end.
Here’s an analogy of what TCP/IP does. Say you’re moving across the country. You pack your boxes and
put your new address on them. The moving company picks them up, makes a list of the boxes, and
ships them across the country using the most efficient route. That might even mean putting different
boxes on different trucks. When the boxes arrive at your new home, you check the list to make sure
everything has arrived (and in good shape), and then you unpack the boxes and “reassemble” your
house.

- A suite of protocols
- Rules that dictate how packets of information are sent across multiple networks
- Addressing
- Error checking

IP

Let’s start with IP, the Internet Protocol.

Every computer on the Internet has at least one address that uniquely identifies it from all other computers on the Internet (aptly called its IP address!). When you send or receive data—say an email
computers on the Internet (aptly called it’s IP address!). When you send or receive data—say an email
message or web page—the message gets divided into little chunks called packets or data grams. Each of
these packets contains both the source IP address and the destination IP address.
IP looks at the destination address to decide what to do next. If the destination is on the local network,
IP delivers the packet directly. If the destination is not on the local network, then IP passes the packet
to a gateway—usually a router.
Computers usually have a single default gateway. Routers frequently have several gateways from which
to choose. A packet may get passed through several gateways before reaching one that is on a local
network with the destination.
Along the way, any router may break the IP packet into several smaller packets based on transmission
medium. For example, Ethernet usually allows packets of up to 1500 bytes, but it is not uncommon for
modem-based PPP connections to only allow packets of 256 bytes. The last system in the chain (the
destination) reassembles the original IP packet.
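The fragmentation step can be sketched as follows. Real IP fragmentation also copies and adjusts header fields (identification, offsets, flags), which this simplified model omits:

```python
# Sketch of splitting a packet's payload into pieces no larger than the
# outgoing link's MTU, with reassembly at the destination.
def fragment(payload: bytes, mtu: int):
    """Split the payload into MTU-sized fragments (last one may be shorter)."""
    return [payload[i:i + mtu] for i in range(0, len(payload), mtu)]

def reassemble(fragments):
    """The destination joins the fragments back into the original payload."""
    return b"".join(fragments)

packet = bytes(1500)                 # a full-size Ethernet payload
pieces = fragment(packet, mtu=256)   # crossing a 256-byte PPP link
print(len(pieces))                   # 6 fragments (5 x 256 bytes + 1 x 220)
assert reassemble(pieces) == packet
```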

TCP/IP Transport Layer


- 21 FTP—File Transfer Protocol
- 23 Telnet
- 25 SMTP—Simple Mail Transfer Protocol
- 37 Time
- 69 TFTP—Trivial File Transfer Protocol
- 79 Finger
- 103 X.400
- 161 SNMP—Simple Network Management Protocol
- 162 SNMPTRAP

After TCP/IP was invented and deployed, the OSI layered network model was accepted as a standard.
OSI neatly divides network protocols into seven layers; the bottom four layers are shown in this
diagram. The idea was that TCP/IP was an interesting experiment, but that it would be replaced by
protocols based on the OSI model.
As it turned out, TCP/IP grew like wildfire, and OSI-based protocols only caught on in certain segments
of the manufacturing community. These days, while everyone uses TCP/IP, it is common to use the OSI
vocabulary.

TCP/IP Applications

- Application layer

- File Transfer Protocol (FTP)


- Remote Login (Telnet)
- E-mail (SMTP)

- Transport layer

- Transport Control Protocol (TCP)


- User Datagram Protocol (UDP)

- Network layer

- Internet Protocol (IP)

- Data link & physical layer

- LAN Ethernet, Token Ring, FDDI, etc.


- WAN Serial lines, Frame Relay, X.25, etc.

Roughly, Ethernet corresponds to both the physical layer and the data link layer. Other media (T1,
Frame Relay, ATM, ISDN, analog) and other protocols (SLIP, PPP) are down here as well.
Roughly, IP corresponds to the network layer.
Roughly, TCP and UDP correspond to the transport layer.
TCP is the most important of all the IP protocols. Most Internet applications you can think of use TCP,
including: Telnet, HTTP (Web), POP & SMTP (email) and FTP (file transfer).

TCP Transmission Control Protocol

TCP stands for Transmission Control Protocol.

TCP establishes a reliable connection between two applications over the network. This means that TCP
guarantees accurate, sequential delivery of your data. If something goes wrong, TCP reports an error, so
you always know whether your data arrived at the other end.
Here’s how it works:
Every TCP connection is uniquely identified by four numbers:

- source IP address
- source port
- destination IP address
- destination port

Typically, a client will use a random port number, but a server will use a “well known” port number, e.g.
25=SMTP (email), 80=HTTP (Web) and so on. Because every TCP connection is unique, even though
many people may be making requests to the same Web server, TCP/IP can identify your packets among
the crowd.
In addition to the port information, each TCP packet has a sequence number. Packets may arrive out of
sequence (they may have been routed differently, or one may have been dropped), so the sequence
numbers allow TCP to reassemble the packets in the correct order and to request retransmission of any
missing packets.
TCP packets also include a checksum to verify the integrity of the data. Packets that fail the checksum are retransmitted.
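A sketch of how sequence numbers make reordering and gap detection possible. The dictionary-based packet format here is illustrative, not TCP's actual segment layout:

```python
# Sort received packets by sequence number and report any missing ones,
# which the receiver would then ask the sender to retransmit.
def reorder(packets):
    in_order = sorted(packets, key=lambda p: p["seq"])
    seqs = [p["seq"] for p in in_order]
    missing = [s for s in range(seqs[0], seqs[-1] + 1) if s not in seqs]
    return in_order, missing

# Packets arriving out of sequence, with one (seq 2) dropped in transit:
received = [{"seq": 3, "data": b"c"}, {"seq": 1, "data": b"a"},
            {"seq": 4, "data": b"d"}]
ordered, missing = reorder(received)
print([p["seq"] for p in ordered])   # [1, 3, 4]
print(missing)                       # [2]: request retransmission of seq 2
```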

UDP User Datagram Protocol

- Unreliable
- Fast
- Assumes application will retransmit on error
- Often used in diskless workstations

UDP is a fast, unreliable protocol that is suitable for some applications.


Unreliable means there is no sequencing, no guaranteed delivery (no automatic retransmission of lost
packets) and sometimes no checksums.
Fast means there is no connection setup time, unlike TCP. In reality, once a TCP session is established,
packets will go just as fast over a TCP connection as over UDP.
UDP is useful for applications such as streaming audio that don’t care about dropped packets and for
applications such as TFTP that inherently do their own sequencing and checksums. Also, applications
such as NFS that usually run on very reliable physical networks and which need fast, connectionless
transactions use UDP.

ICMP Ping

Ping is an example of a program that uses ICMP rather than TCP or UDP. Ping sends an ICMP echo
request from one system to another, then waits for an ICMP echo reply. It is mostly used for testing.

IPv4 Addressing

Most IP addresses today use IP version 4—we’ll talk about IP version 6 later.
IPv4 addresses are 32 bits long and are usually written in “dot” notation. An example would be
192.1.1.17.
The Internet is actually a lot of small local networks connected together. Part of an IP address identifies
which local network, and part of an IP address identifies a specific system or host on that local network.
What part of an IP address is for the “network” and what part is for the “host” is determined by the class
or the subnet.
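The relationship between the 32-bit value and its "dot" notation can be shown directly; each octet is 8 of the 32 bits. These helper functions are just for illustration:

```python
# Convert between dotted notation and the underlying 32-bit integer.
def to_dotted(value: int) -> str:
    return ".".join(str((value >> shift) & 0xFF) for shift in (24, 16, 8, 0))

def to_int(dotted: str) -> int:
    result = 0
    for octet in dotted.split("."):
        result = (result << 8) | int(octet)
    return result

print(to_int("192.1.1.17"))      # 3221291281
print(to_dotted(3221291281))     # 192.1.1.17
```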

IP Addressing—Three Classes

- Class A: NET.HOST.HOST.HOST
- Class B: NET.NET.HOST.HOST
- Class C: NET.NET.NET.HOST

Before the introduction of subnet masks, the only way to tell the network part of an IP address from the
host part was by its class.
Class A addresses have 8 bits (one octet) for the network part and 24 bits for the host part. This allows
for a small number of large networks.
Class B addresses have 16 bits each for the network and host parts.
Class C addresses have 24 bits for the network and 8 bits for the host. This allows for a fairly large
number of networks with up to 254 systems on each.
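The class rules above reduce to a comparison on the first octet; a small sketch:

```python
# Classify an IPv4 address by its first octet, following the
# pre-subnet-mask class rules described above.
def address_class(dotted: str) -> str:
    first = int(dotted.split(".")[0])
    if first < 128:
        return "A"      # 8-bit network part, 24-bit host part
    if first < 192:
        return "B"      # 16 bits each
    if first < 224:
        return "C"      # 24-bit network part, 8-bit host part
    return "D/E (multicast/reserved)"

print(address_class("10.222.135.17"))    # A
print(address_class("128.128.141.245"))  # B
print(address_class("192.150.12.1"))     # C
```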

To summarize:

IPv4 addresses are 32 bits with a network part and a host part.
Unless you are using subnets, you divide an IP address into the network and host parts based on the
address class.
The network part of an address is used for routing packets over the Internet. The host part is used for
final delivery on the local net.

IP Addressing—Class A

Here’s an example of a class A address. Any IPv4 address in which the first octet is less than 128 is by
definition a class A address.
This address is for host #222.135.17 on network #10, although the host is always referred to by its full
address.

Example:- 10.222.135.17

- Network # 10
- Host # 222.135.17
- Range of class A network IDs: 1–126
- Number of available hosts: 16,777,214

IP Addressing—Class B

Here’s an example of a class B address. Any IPv4 address in which the first octet is between 128 and
191 is by definition a class B address.

Example:- 128.128.141.245

- Network # 128.128
- Host # 141.245
- Range of class B network IDs: 128.1–191.254
- Number of available hosts: 65,534

IP Addressing—Class C

Here’s an example of a class C address. Most IPv4 addresses in which the first octet is 192 or higher are
class C addresses, but some of the higher ranges are reserved for multicast applications.

Example:- 192.150.12.1

- Network # 192.150.12
- Host # 1
- Range of class C network IDs: 192.0.1–223.255.254
- Number of available hosts: 254

IP Subnetting

As it turns out, dividing IP addresses into classes A, B and C is not flexible enough. In particular, it does
not make efficient use of the available IP addresses and it does not give network administrators enough
control over their internal LAN configurations.
In this diagram, the class B network 131.108 is split (probably into 256 subnets), and a router connects
the 131.108.2 subnet to the 131.108.3 subnet.

IP Subnet Mask

A subnet mask tells a computer or a router how to divide a range of IP addresses into the network part
and the host part.

Given:

Address = 131.108.2.160

Subnet Mask = 255.255.255.0

Subnet = 131.108.2.0

In this example, without a subnet mask the address would be treated as class B and the network
number would be 131.108. But because someone supplied a subnet mask of 255.255.255.0, the
network number is actually 131.108.2.
These days, routers and computers always use subnet masks if they are supplied. If there is no subnet
mask for an address, then the class A, B, C scheme is used.
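Applying the mask from this example is a bitwise AND of the address and the mask; Python's standard `ipaddress` module performs the same computation:

```python
# Derive the subnet number 131.108.2.0 from the address and mask above.
import ipaddress

interface = ipaddress.ip_interface("131.108.2.160/255.255.255.0")
print(interface.network)           # 131.108.2.0/24

# The same computation by hand, octet by octet:
address = [131, 108, 2, 160]
mask = [255, 255, 255, 0]
subnet = [a & m for a, m in zip(address, mask)]
print(".".join(map(str, subnet)))  # 131.108.2.0
```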

Remember that a network mask determines which portion of an IP address identifies the network and
which portion identifies the host, while a subnet mask describes which portion of an address refers to
the subnet and which part refers to the host.

IP Address Assignment

- ISPs assign addresses to customers


- IANA assigns addresses to ISPs
- CIDR block: bundle of addresses

Historically, an organization was assigned a class A, B or C address and carried that address around.
This is no longer the case.
Usually an organization is assigned IP addresses by its ISP. If an organization changes ISPs, it changes
IP addresses. This is usually not a problem, since most people refer to IP addresses using the DNS. For
example, www.acme.com might point to 192.1.1.1 today and point to 128.7.7.7 tomorrow, but nobody
other than the system administrator at acme.com has to worry about it.
IANA—the Internet Assigned Numbers Authority—assigns IP addresses to ISPs. These days no one gets
a class A or a class B network—they are pretty much all gone. Usually the IANA bundles 8 or 16 or 32
class C networks together and calls it a CIDR (pronounced “cider”) block. CIDR stands for Classless Inter-Domain Routing, and it greatly simplifies routing among the Internet backbones. CIDR blocks are
sometimes called supernets (as opposed to subnets).

IPv6 Addressing

- 128-bit addresses

- 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses

Example1:- 5F1B:DF00:CE3E:E200:0020:0800:5AFC:2B36
Example2:- 0:0:0:0:0:0:192.1.1.17

With the explosive growth of the Internet, there are not enough IPv4 addresses to go around. IPv6 is
now released, and many organizations are already migrating.
While IPv6 has a number of nice features, its biggest claim to fame is a huge number of IP addresses.
IPv4 was only 32 bits; IPv6 is 128 bits.
To ease migration, IPv6 completely contains all of IPv4, as shown in the second example above.
Most network applications will have to be modified slightly to accommodate IPv6.
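Both example addresses above can be checked with Python's standard `ipaddress` module, which also confirms the 32-bit versus 128-bit sizes:

```python
import ipaddress

v4 = ipaddress.ip_address("192.1.1.17")
v6 = ipaddress.ip_address("5F1B:DF00:CE3E:E200:0020:0800:5AFC:2B36")
embedded = ipaddress.ip_address("::192.1.1.17")  # IPv4 inside IPv6

print(v4.max_prefixlen)          # 32 bits
print(v6.max_prefixlen)          # 128 bits
print(int(embedded) == int(v4))  # True: the IPv4 value, zero-extended
```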

- SUMMARY -

- TCP/IP is a suite of protocols

- TCP/IP defines communications between computers on the Internet

- IP determines where packets are routed based on their destination address

- TCP ensures packets arrive correctly at their destination address

Lesson 4: LAN Basics

In this lesson, we will cover the fundamentals of LAN technologies. We’ll look at Ethernet, Token Ring,
and FDDI. For each one, we’ll look at the technology as well as its operations.

The Agenda

- Ethernet

- Token Ring

- FDDI

Common LAN Technologies

The three LAN technologies shown here account for virtually all deployed LANs:

The most popular local area networking protocol today is Ethernet. Most network administrators building
a network from scratch use Ethernet as a fundamental technology.

Token Ring technology is widely used in IBM networks.


FDDI networks are popular for campus LANs and are usually built to support high bandwidth needs for backbone connectivity.

Let’s take a look at Ethernet in detail.

Ethernet

Ethernet and IEEE 802.3

Ethernet was initially developed by Xerox, which was later joined by Digital Equipment Corporation (DEC) and Intel to define the Ethernet 1 specification in 1980. There have been further revisions
including the Ethernet standard (IEEE Standard 802.3) which defines rules for configuring Ethernet as
well as specifying how elements in an Ethernet network interact with one another.
Ethernet is the most popular physical layer LAN technology because it strikes a good balance between
speed, cost, and ease of installation. These strong points, combined with wide acceptance in the
computer marketplace and the ability to support virtually all popular network protocols, make Ethernet
an ideal networking technology for most computer users today.
The Fast Ethernet standard (IEEE 802.3u) has been established for networks that need higher
transmission speeds. It raises the Ethernet speed limit from 10 Mbps to 100 Mbps with only minimal
changes to the existing cable structure. Incorporating Fast Ethernet into an existing configuration
presents a host of decisions for the network manager. Each site in the network must determine the
number of users that really need the higher throughput, decide which segments of the backbone need to
be reconfigured specifically for 100BaseT and then choose the necessary hardware to connect the
100BaseT segments with existing 10BaseT segments.
Gigabit Ethernet is an extension of the IEEE 802.3 Ethernet standard. It increases speed tenfold over
Fast Ethernet, to 1000 Mbps, or 1 Gbps.

Benefits and background

- Ethernet is the most popular physical layer LAN technology because it strikes a good balance
between speed, cost, and ease of installation
- Supports virtually all network protocols
- Xerox initiated, then joined by DEC & Intel in 1980

Revisions of Ethernet specification

- Fast Ethernet (IEEE 802.3u) raises speed from 10 Mbps to 100 Mbps
- Gigabit Ethernet is an extension of IEEE 802.3 which increases speeds to 1000 Mbps, or 1 Gbps

One thing to keep in mind in Ethernet is that there are several framing variations that exist for this
common LAN technology.
These differences do not prohibit manufacturers from developing network interface cards that support the common physical layer, along with software that recognizes the differences between the data link framings.

Ethernet Protocol Names

Ethernet protocol names follow a fixed scheme. The number at the beginning of the name indicates the
wire speed. If the word “base” appears next, the protocol is for baseband applications. If the word
“broad” appears, the protocol is for broadband applications. The alphanumeric code at the end of the
name indicates the type of cable and, in some cases, the cable length. If a number appears alone, you
can determine the maximum segment length by multiplying that number by 100 meters. For example
10Base2 is a protocol with a maximum segment length of approximately 200 meters (2 x 100 meters).
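The naming rules just described can be captured in a small, illustrative parser (the regular expression and field names are assumptions, not part of any standard):

```python
# Parse Ethernet protocol names such as 10Base2 and 100BaseT using the
# scheme described above: speed, baseband/broadband, then cable code or
# a segment-length multiplier.
import re

def parse_ethernet_name(name: str):
    match = re.fullmatch(r"(\d+)(Base|Broad)(\w+)", name, re.IGNORECASE)
    speed_mbps, band, suffix = match.groups()
    info = {"speed_mbps": int(speed_mbps),
            "band": "baseband" if band.lower() == "base" else "broadband"}
    if suffix.isdigit():
        info["max_segment_m"] = int(suffix) * 100  # e.g. 10Base2 -> ~200 m
    else:
        info["cable"] = suffix                     # e.g. T for twisted pair
    return info

print(parse_ethernet_name("10Base2"))
print(parse_ethernet_name("100BaseT"))
```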

Ethernet and Fast Ethernet

This chart gives you an idea of the range of Ethernet protocols including their data rate, maximum
segment length, and medium.
Ethernet has survived as an essential media technology because of its tremendous flexibility and its
relative simplicity to implement and understand. Although other technologies have been touted as likely
replacements, network managers have turned to Ethernet and its derivatives as effective solutions for a
range of campus implementation requirements. To resolve Ethernet’s limitations, innovators (and
standards bodies) have created progressively larger Ethernet pipes. Critics might dismiss Ethernet as a
technology that cannot scale, but its underlying transmission scheme continues to be one of the
principal means of transporting data for contemporary campus applications.
The most popular today are 10BaseT and 100BaseT, running at 10 Mbps and 100 Mbps respectively over UTP wiring.

Let’s take a look at how Ethernet works.

Ethernet Operation

Example:-
Let’s say in our example here that station A is going to send information to station D. Station A will
listen through its NIC card to the network. If no other users are using the network, station A will go
ahead and send its message out onto the network. Stations B, C, and D will all receive the communication.

Each receiving station will inspect the MAC address at its data link layer. Upon inspection, station D will see that the MAC address matches its own and will process the information up through the rest of the layers of the seven-layer model.

As for stations B and C, they too will pull this packet up to their data link layers and inspect the MAC address. Upon inspection they will see that the destination MAC address does not match their own, and they will proceed to discard the packet.

Ethernet Broadcast

Broadcasting is a powerful tool that sends a single frame to many stations at the same time.
Broadcasting uses a data link destination address of all 1s. In this example, when station A transmits a frame with a destination address of all 1s, stations B, C, and D all receive it and pass the frame to their respective upper layers for further processing.
When improperly used, however, broadcasting can seriously impact the performance of stations by
interrupting them unnecessarily. For this reason, broadcasts should be used only when the MAC address
of the destination is unknown or when the destination is all stations.

Ethernet Reliability
Ethernet is known as being a very reliable local area networking protocol. In this example, A is
transmitting information and B also has information to transmit. Let’s say that A and B listen to the network, hear no traffic, and transmit at the same time. A collision occurs when these two packets crash into one another on the network. Both transmissions are corrupted and unusable.

When a collision occurs on the network, the NIC card sensing the collision, in this case, station C sends
out a jam signal that jams the entire network for a designated amount of time.

Once the jam signal has been received and recognized by all of the stations on the network, stations A and B will both back off for different amounts of time before they try to retransmit. This type of technology is known as Carrier Sense Multiple Access with Collision Detection (CSMA/CD).
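The back-off step can be sketched as follows. The doubling rule mirrors Ethernet's binary exponential backoff, though this model ignores slot timing details and the retry limit:

```python
# Sketch of CSMA/CD random backoff: after a collision, each station waits
# a random number of slot times before retrying, so simultaneous retries
# become unlikely. The cap at 10 doublings follows the classic scheme.
import random

def backoff_slots(collision_count: int) -> int:
    """Pick a random wait from 0 .. 2**n - 1 slots, capped at n = 10."""
    n = min(collision_count, 10)
    return random.randrange(2 ** n)

random.seed(1)                 # fixed seed just to make the sketch repeatable
a = backoff_slots(1)           # station A waits 0 or 1 slots
b = backoff_slots(1)           # station B waits 0 or 1 slots
print(a, b)                    # differing waits break most ties
```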

High-Speed Ethernet Options

- Fast Ethernet
- Fast EtherChannel®
- Gigabit Ethernet
- Gigabit EtherChannel
We’ve mentioned that Ethernet also has high speed options that are currently available. Fast Ethernet is
used widely at this point and provides customers with 100 Mbps performance, a ten-fold increase. Fast
EtherChannel is a Cisco value-added feature that provides bandwidth up to 800 Mbps. There is now a
standard for Gigabit Ethernet as well and Cisco provides Gigabit Ethernet solutions with 1000 Mbps
performance.

Let’s look more closely at Fast EtherChannel and Gigabit Ethernet.

What Is Fast EtherChannel?

Grouping of multiple Fast Ethernet interfaces into one logical transmission path

- Scalable bandwidth up to 800+ Mbps


- Using industry-standard Fast Ethernet
- Load balancing across parallel links
- Extendable to Gigabit Ethernet

Fast EtherChannel provides a solution for network managers who require higher bandwidth between
servers, routers, and switches than Fast Ethernet technology can currently provide.
Fast EtherChannel is the grouping of multiple Fast Ethernet interfaces into one logical transmission path
providing parallel bandwidth between switches, servers, and Cisco routers. Fast EtherChannel provides
bandwidth aggregation by combining parallel 100-Mbps Ethernet links (200-Mbps full-duplex) to provide
flexible, incremental bandwidth between network devices.
For example, network managers can deploy Fast EtherChannel consisting of pairs of full-duplex Fast
Ethernet to provide 400+ Mbps between the wiring closet and the data center, while in the data center
bandwidths of up to 800 Mbps can be provided between servers and the network backbone to provide
large amounts of scalable incremental bandwidth.
Cisco’s Fast EtherChannel technology builds upon standards-based 802.3 full-duplex Fast Ethernet. It is
supported by industry leaders such as Adaptec, Compaq, Hewlett-Packard, Intel, Micron, Silicon
Graphics, Sun Microsystems, and Xircom and is scalable to Gigabit Ethernet in the future.

What Is Gigabit Ethernet?

In some cases, Fast EtherChannel technology may not be enough.


The old 80/20 rule of network traffic (80 percent of traffic was local, 20 percent was over the backbone)
has been inverted by intranets and the World Wide Web. The rule of thumb today is to plan for 80
percent of the traffic going over the backbone.

Gigabit networking is important to accommodate these evolving needs.


Gigabit Ethernet builds on the Ethernet protocol but increases speed tenfold over Fast Ethernet, to 1000
Mbps, or 1 Gbps. It promises to be a dominant player in high-speed LAN backbones and server
connectivity. Because Gigabit Ethernet significantly leverages on Ethernet, network managers will be
able to leverage their existing knowledge base to manage and maintain Gigabit networks.

The Gigabit Ethernet spec addresses three forms of transmission media though not all are available yet:
- 1000BaseLX: Long-wave (LW) laser over single-mode and multimode fiber
- 1000BaseSX: Short-wave (SW) laser over multimode fiber
- 1000BaseCX: Transmission over balanced shielded 150-ohm 2-pair STP copper cable
- 1000BaseT: Category 5 UTP copper wiring

Gigabit Ethernet allows Ethernet to scale from 10 Mbps at the desktop, to 100 Mbps to the workgroup, to 1000 Mbps in the data center. By leveraging the current Ethernet standards as well as the installed base of Ethernet and Fast Ethernet switches and routers, network managers do not need to retrain and relearn a new technology to provide support for Gigabit Ethernet.

Token Ring (IEEE 802.5)

The Token Ring network was originally developed by IBM in the 1970s. It is still IBM’s primary LAN
technology and is second only to Ethernet in general LAN popularity. The related IEEE 802.5
specification is almost identical to and completely compatible with IBM’s Token Ring network.
Collisions cannot occur in Token Ring networks. Possession of the token grants the right to transmit. If a
node receiving the token has no information to send, it passes the token to the next end station. Each
station can hold the token for a maximum period of time.
Token-passing networks are deterministic, which means that it is possible to calculate the maximum
time that will pass before any end station will be able to transmit. This feature and several reliability
features make Token Ring networks ideal for applications where delay must be predictable and robust
network operation is important. Factory automation environments are examples of such applications.
Token Ring is more difficult and costly to implement. However, as the number of users in a network
rises, Token Ring’s performance drops very little. In contrast, Ethernet’s performance drops significantly
as more users are added to the network.

Token Ring Bandwidth

Here are some of the speeds associated with Token Ring. Note that Token Ring runs at 4 Mbps or 16
Mbps. Today, most networks operate at 16 Mbps. If a network contains even one component with a
maximum speed of 4 Mbps, the whole network must operate at that speed.
When Ethernet first came out, networking professionals believed that Token Ring would die, but this has
not happened. Token Ring is primarily used with IBM networks running Systems Network Architecture
(SNA) networking operating systems. Token Ring has not yet left the market because of the huge
installed base of IBM mainframes being used in industries such as banking.
The practical difference between Ethernet and Token Ring is that Ethernet is much cheaper and simpler.
However, Token Ring is more elegant and robust.

Token Ring Topology

The logical topology of an 802.5 network is a ring in which each station receives signals from its nearest
active upstream neighbor (NAUN) and repeats those signals to its downstream neighbor. Physically,
however, 802.5 networks are laid out as stars, with each station connecting to a central hub called a
multistation access unit or MAU. The stations connect to the central hub through shielded or unshielded
twisted-pair wire.
Typically, a MAU connects up to eight Token Ring stations. If a Token Ring network consists of more
stations than a MAU can handle, or if stations are located in different parts of a building (for example, on different floors), MAUs can be chained together to create an extended ring. When installing an extended
ring, you must ensure that the MAUs themselves are oriented in a ring. Otherwise, the Token Ring will
have a break in it and will not operate.

Token Ring Operation

Station access to a Token Ring is deterministic; a station can transmit only when it receives a special
frame called a token. One station on a Token Ring network is designated as the active monitor. The
active monitor prepares a token, a small frame with significance to each of the network interface cards
on the network, and passes it into the multistation access unit. The multistation access unit then passes
the token to the first downstream neighbor. Say, for example, that station A has something to transmit.
Station A seizes the token, appends its data, and sends the frame back to the multistation access unit.
The MAU then grabs the frame and pushes it to the next downstream neighbor. This process continues
until the token reaches the destination for which it is intended.

If a station receiving the token has no information to send, it simply passes the token to the next
station. If a station possessing the token has information to transmit, it claims the token by altering one
bit of the frame, the T bit. The station then appends the information it wishes to transmit and sends the
information frame to the next station on the Token Ring.

The information frame circulates the ring until it reaches the destination station, where the frame is
copied by the station and tagged as having been copied. The information frame continues around the
ring until it returns to the station that originated it, and is removed.
Because frames proceed serially around the ring, and because a station must claim the token before
transmitting, collisions are not expected in a Token Ring network.
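
The deterministic, token-passing access just described can be sketched as a toy simulation. This is an illustrative model only (it ignores the real 802.5 frame format, the T bit, and the active monitor's timers); it simply shows that because only the token holder may transmit, transmissions never collide.

```python
def token_rotation(stations, queued):
    """One full rotation of the token around the ring.

    stations: station names in ring order.
    queued:   dict mapping station -> pending message.
    Returns the transmissions made, in order.
    """
    sent = []
    for station in stations:            # the token visits stations in ring order
        if station in queued:           # station seizes the token and transmits
            sent.append((station, queued.pop(station)))
            # the frame circulates the ring, is copied by the destination,
            # and is removed when it returns to the sender
        # otherwise the station simply repeats the token downstream
    return sent

ring = ["A", "B", "C", "D"]
print(token_rotation(ring, {"A": "hello", "C": "report"}))
```

Exactly one station transmits at a time, which is why collisions are not expected on a Token Ring.
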
Broadcasting is supported in the form of a special mechanism known as explorer packets. These are
used to locate a route to a destination through one or more source route bridges.
- Token Ring Summary -

- Reliable transport, minimized collisions

- Token passing/token seizing

- 4- or 16-Mbps transport

- Little performance impact with increased number of users

- Popular at IBM-oriented sites such as banks and automated factories

FDDI - Fiber Distributed Data Interface

FDDI is an American National Standards Institute (ANSI) standard that defines a dual Token Ring LAN
operating at 100 Mbps over an optical fiber medium. It is used primarily for corporate and carrier
backbones.
Token Ring and FDDI share several characteristics, including token passing and a ring architecture,
which were explored in the previous section on Token Ring.
Copper Distributed Data Interface (CDDI) is the implementation of FDDI protocols over STP and UTP
cabling. CDDI transmits over relatively short distances (about 100 meters), providing data rates of 100
Mbps using a dual-ring architecture to provide redundancy.
While FDDI is fast, reliable, and handles a lot of data well, its major problem is the use of expensive
fiber-optic cable. CDDI addresses this problem by using UTP or STP. However, notice that the maximum
segment length drops significantly.
FDDI was developed in the mid-1980s to fill the needs of growing high-speed engineering workstation
capacity and network reliability. Today, FDDI is frequently used as a high-speed backbone technology
because of its support for high bandwidth and greater distances than copper.

FDDI Network Architecture

FDDI uses a dual-ring architecture. Traffic on each ring flows in opposite directions (called counter-
rotating). The dual-rings consist of a primary and a secondary ring. During normal operation, the
primary ring is used for data transmissions, and the secondary ring remains idle. The primary purpose of
the dual rings is to provide superior reliability and robustness.
One of the unique characteristics of FDDI is that multiple ways exist to connect devices to the ring. FDDI
defines three types of devices: single-attachment station (SAS) such as PCs, dual attachment station
(DAS) such as routers and servers, and a concentrator.

- Dual-ring architecture

- Primary ring for data transmissions


- Secondary ring for reliability and robustness

- Components

- Single attachment station (SAS)—PCs


- Dual attachment station (DAS)—Servers
- Concentrator

- FDDI concentrator

- Also called a dual-attached concentrator (DAC)


- Building block of an FDDI network
- Attaches directly to both rings and ensures that any SAS failure or power-down does not
bring down the ring
Example:-

An FDDI concentrator (also called a dual-attachment concentrator [DAC]) is the building block of an
FDDI network. It attaches directly to both the primary and secondary rings and ensures that the failure
or power-down of any single attachment station (SAS) does not bring down the ring. This is particularly
useful when PCs, or similar devices that are frequently powered on and off, connect to the ring.

- FDDI Summary -

- Features

- 100-Mbps token-passing network


- Single-mode fiber (100 km), multimode fiber (2 km)
- CDDI transmits at 100 Mbps over about 100 m
- Dual-ring architecture for reliability

- Optical fiber advantages versus copper

- Security, reliability, and performance are enhanced because it does not emit electrical signals
- Much higher bandwidth than copper

- Used for corporate and carrier backbones

- Summary -

- LAN technologies include Ethernet, Token Ring, and FDDI

- Ethernet

- Most widely used


- Good balance between speed, cost, and ease of installation
- 10 Mbps to 1000 Mbps

- Token Ring

- Primarily used with IBM networks


- 4 Mbps to 16 Mbps

- FDDI

- Primarily used for corporate backbones


- Supports longer distances
- 100 Mbps

Lesson 5: Understanding LAN Switching

This lesson covers an introduction to switching technology.

The Agenda
- Shared LAN Technology

- LAN Switching Basics

- Key Switching Technologies

We'll begin by looking at traditional shared LAN technologies. We'll then look at LAN switching basics,
and then some key switching technologies, such as spanning tree and multicast controls.

Let's begin our discussion by reviewing shared LAN technologies.

Shared LAN Technology

Early Local Area Networks

The earliest Local Area Network technologies that were installed widely were either thick Ethernet or thin
Ethernet infrastructures, and it's important to understand some of the limitations of these to see where
we're at today with LAN switching. Thick Ethernet installations had some important limitations, such as
distance: early thick Ethernet networks were limited to only 500 meters before the signal degraded. To
extend beyond the 500-meter distance, installers had to add repeaters to boost and amplify the signal.
There were also limitations on the number of stations and servers we could have on our network, as well
as the placement of those workstations on the network.

The cable itself was relatively expensive, and it was also large in diameter, which made it difficult to
install throughout a building as it was pulled through walls and ceilings. Adding new users, however, was
relatively simple: installers could use what was known as a non-intrusive tap to plug in a new station
anywhere along the cable. In terms of capacity, a thick Ethernet network provided 10 megabits per
second, but this was shared bandwidth, meaning that the 10 megabits was shared among all users on a
given segment.

A slight improvement on thick Ethernet was thin Ethernet technology, commonly referred to as
cheapernet. This was less expensive and required less space to install than thick Ethernet because it was
thinner in diameter, which is where the name thin Ethernet came from. It was still relatively challenging
to install, though, as it sometimes required what we call home runs, a direct run from a workstation back
to a hub or concentrator. Adding users also required a momentary interruption in the network, because
we actually had to cut, or make a break in, a cable segment in order to add a new server or workstation.
Those are some of the limitations of early thin and thick Ethernet networks. An improvement on thin and
thick Ethernet technology was adding hubs or concentrators into the network, which allowed us to use
something known as UTP cabling, or unshielded twisted-pair cabling.

As you can see indicated in the diagram on the left, Ethernet is fundamentally what we call a shared
technology: all users of a given LAN segment are fighting for the same amount of bandwidth. This is
very similar to the cars in our diagram, all trying to get onto the freeway at once, and it's really what our
frames, or packets, do as we try to make transmissions on our Ethernet network. This is exactly what
occurs on a hub: even though each device has its own cable segment connecting into the hub, all
devices are still fighting for the same fixed amount of bandwidth. Hubs are also commonly called
Ethernet concentrators or Ethernet repeaters; they're basically self-contained Ethernet segments within
a box. So while physically it looks like everybody has their own segment to their workstation, they're all
interconnected inside the hub, so it's still a shared Ethernet technology. Hubs are passive devices,
meaning that they're virtually transparent to the end users; the end users don't even know the devices
exist. Hubs play no role in forwarding decisions and provide no segmentation within the network,
basically because they work at Layer 1 of the OSI framework.

Collisions: Telltale Signs

A by-product of any Ethernet network is something called collisions, a result of the fundamental way any
Ethernet network works. In an Ethernet network, many stations share the same segment, and any one
of those stations can transmit at any given time. If two or more stations try to transmit at the same
time, the result is what we call a collision. This is one of the early telltale signs that your Ethernet
network is becoming too congested, or that we simply have too many users on the same segment. When
collisions in the network become excessive, they cause sluggish network response times, and a good
way to measure that is by the increasing number of user complaints reported to the network manager.

Other Bandwidth Consumers

It's also important to understand fundamentally how transmissions can occur in the network. There are
basically three different ways we can communicate. The most common is the unicast transmission, in
which one transmitter is trying to reach one receiver; this is by far (or hopefully) the most common form
of communication in our network.

Another way to communicate is with a mechanism known as a broadcast, in which one transmitter tries
to reach all receivers in the network. As you can see in the middle diagram, our server station is sending
out one message, and it's being received by everyone on that particular segment.

The last mechanism is what is known as a multicast. A multicast is when one transmitter tries to reach
not everyone, but a subset, or group, of the entire segment. As you can see in the bottom diagram,
we're reaching two stations, but there's one station that doesn't need to participate, so it's not in our
multicast group. Those are the three basic ways we can communicate within our Local Area Network.
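
On Ethernet, these three delivery modes are distinguished right in the destination MAC address: the all-ones address is broadcast, and any address whose first octet has its least-significant bit set (the I/G bit) is a group, or multicast, address. A small sketch:

```python
def mac_delivery_mode(mac):
    """Classify an Ethernet destination MAC as unicast/multicast/broadcast."""
    octets = [int(part, 16) for part in mac.split(":")]
    if all(o == 0xFF for o in octets):
        return "broadcast"          # one transmitter -> all receivers
    if octets[0] & 0x01:            # I/G bit set: group address
        return "multicast"          # one transmitter -> a group of receivers
    return "unicast"                # one transmitter -> one receiver

print(mac_delivery_mode("ff:ff:ff:ff:ff:ff"))  # broadcast
print(mac_delivery_mode("01:00:5e:0a:0b:0c"))  # multicast (IPv4 group range)
print(mac_delivery_mode("00:1a:2b:3c:4d:5e"))  # unicast
```
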

Broadcasts Consume Bandwidth


Now, in terms of broadcasts, it's relatively easy to broadcast in a network, and broadcasting is a
transmission mechanism that many different protocols use to communicate certain information, such as
address resolution. Address resolution is something all protocols need to do in order to map between
Layer 3 logical addresses and Layer 2 MAC addresses. For example, in an IP network we use ARP, the
Address Resolution Protocol, which allows us to map Layer 3 IP addresses down to Layer 2 MAC-layer
addresses. Routing protocol information is also distributed by way of broadcasting, and some key
network services rely on broadcast mechanisms as well.

It doesn't really matter what our protocol is; whether it's AppleTalk, Novell IPX, or TCP/IP, all of these
Layer 3 protocols rely on the broadcast mechanism. In other words, all of these protocols produce
broadcast traffic in a network.
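
The ARP behavior just described amounts to a lookup table filled in by broadcast request/reply exchanges. A highly simplified sketch follows; real ARP lives in the OS kernel and adds timeouts, retransmission, and queuing, and the addresses here are made up for illustration.

```python
# A toy ARP cache: who-has requests are broadcast, replies fill the table.
arp_cache = {}  # IP address -> MAC address

def arp_resolve(ip, hosts_on_segment):
    """Return the MAC for ip, 'broadcasting' a request if it isn't cached.

    hosts_on_segment stands in for every station hearing the ARP request
    (a broadcast frame interrupts all of them, but only the owner replies).
    """
    if ip not in arp_cache:
        if ip in hosts_on_segment:
            arp_cache[ip] = hosts_on_segment[ip]   # learn from the reply
        else:
            return None                            # nobody answered
    return arp_cache[ip]

segment = {"10.0.0.1": "00:1a:2b:3c:4d:5e", "10.0.0.2": "00:aa:bb:cc:dd:ee"}
print(arp_resolve("10.0.0.1", segment))   # cache miss: broadcast, then learn
print(arp_resolve("10.0.0.1", segment))   # cache hit: no broadcast needed
```
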

Broadcasts Consume Processor Performance

Now, in addition to consuming bandwidth on the network, another by-product of broadcast traffic is that
it consumes CPU cycles as well. Since broadcast traffic is received by all stations on the network, it must
interrupt the CPU of every station connected to the network. The diagram here shows the results of a
study performed with several different CPUs on a network, and it shows the relative level of CPU
degradation as the number of broadcasts on the network increases.

The study was based on a SPARC2 CPU, a SPARC5 CPU, and a Pentium CPU. As the number of
broadcasts increased, the number of CPU cycles consumed simply by processing and listening to that
broadcast traffic increased dramatically. The other thing to recognize is that much of the broadcast
traffic in our network is not needed by the stations that receive it. So what we have in shared LAN
technologies is broadcast traffic running throughout the network, needlessly consuming bandwidth and
needlessly consuming CPU cycles.

Hub-Based LANs

So hubs were introduced into the network as a better way to scale our thin and thick Ethernet networks.
It's important to remember, though, that these are still shared Ethernet networks, even though we're
using hubs.

Basically, what we have is an individual desktop connection for each workstation or server in the
network, which allows us to centralize all of our cabling back to a wiring closet, for example. There are
still security issues here, though: it's still relatively easy to tap in and monitor a network by way of a
hub. In fact it's even easier, because all of the resources are generally located centrally. If we need to
scale this type of network beyond the workgroup, we're going to rely on routers.

Hubs make adds, moves, and changes easier, because we can simply go to the wiring closet and move
cables around, but we'll see later that it's even easier with LAN switching. Also, in a hub- or
concentrator-based network, workgroups are determined simply by the physical hub we plug into. Once
again, we'll see later how LAN switching improves this as well.

Bridges

Another way to scale our networks is to add bridges. Scaling requires something known as
segmentation, and bridges provide a certain level of segmentation by adding a certain amount of
intelligence into the network. Bridges operate at Layer 2, while hubs operate at Layer 1, and operating at
Layer 2 provides the intelligence needed to make an intelligent forwarding decision.

That's why we say bridges are more intelligent than hubs: they can actually listen in, or eavesdrop, on
the traffic going through the bridge, look at source and destination addresses, and build a table that
allows them to make intelligent forwarding decisions.

Bridges collect and pass frames between two network segments, making intelligent forwarding decisions
as they do so. As a result, they can provide greater control of the traffic within our network.
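
The learn-then-forward behavior described above is the classic transparent learning bridge: note the source address of every frame to learn which port leads to which station, then forward, flood, or filter based on the destination. A minimal sketch (a toy model, not a full 802.1D bridge):

```python
class LearningBridge:
    """Toy transparent bridge: learns source MACs, filters known local traffic."""

    def __init__(self, ports):
        self.table = {}              # MAC address -> port it was last seen on
        self.ports = ports

    def handle_frame(self, src, dst, in_port):
        self.table[src] = in_port    # learn: src lives off in_port
        out = self.table.get(dst)
        if out is None:              # unknown destination: flood everywhere else
            return [p for p in self.ports if p != in_port]
        if out == in_port:           # same segment: filter, don't forward
            return []
        return [out]                 # known destination: forward to one port

bridge = LearningBridge(ports=[1, 2])
print(bridge.handle_frame("A", "B", 1))  # B unknown: flood to port 2
print(bridge.handle_frame("B", "A", 2))  # A already learned on port 1
print(bridge.handle_frame("A", "B", 1))  # B now known on port 2
```
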

Switches—Layer 2

To provide even better control, we're going to look to switches, which provide the most control in our
network, at least at Layer 2. As you can see in the diagram, we have improved the model of traffic going
through our network.

Getting back to our traffic analogy, looking at the highway, we've subdivided the main highway so that
each car has its own lane to drive through the network. Fundamentally, this is what we can provide in
our data networks as well. When we look at our network, physically each station has its own cable into
the network; conceptually, we can think of this as each workstation having its own lane through the
highway. This is something known as micro-segmentation, which is a fancy way of saying that each
workstation gets its own dedicated segment through the network.

Switches versus Hubs

If we compare that with a hub or a bridge, we're limited in the number of simultaneous conversations we
can have at a time. Remember that if two stations tried to communicate in a hubbed environment, that
caused something known as a collision. In a switched environment we don't expect collisions, because
each workstation has its own dedicated path through the network. In terms of bandwidth and scalability,
that means we have dramatically more bandwidth in the network: each station now has a dedicated 10
megabits per second of bandwidth.

So when we compare switches with hubs, remember that the top diagram shows a hub, where all of our
traffic was fighting for the same fixed amount of bandwidth. Looking at the bottom diagram, you can see
that we've improved traffic flow through the network because we've provided a dedicated lane for each
workstation.

The Need for Speed: Early Warning Signs

Now, how can you tell if you have congestion problems in your network? Some early warning signs to
watch for include increased delay on file transfers: if basic file transfers are taking a very long time, we
may need more bandwidth. Another thing to watch for is print jobs that take a very long time to print; if
the time from queuing a job at a workstation to actually printing it is increasing, that's an indication of
LAN congestion problems. Also, if your organization is looking to take advantage of multimedia
applications, you're going to need to move beyond basic shared LAN technologies, because shared LAN
technologies don't have the multicast controls that multimedia applications require.

Typical Causes of Network Congestion

If we're seeing those early warning signs, one cause to look for is too many users on a shared LAN
segment. Remember that shared LAN segments have a fixed amount of bandwidth; as we add users, we
proportionally degrade the amount of bandwidth per user. At a certain number of users there is simply
too much congestion: too many collisions and too many simultaneous conversations trying to occur at
the same time, which reduces performance.

Newer workstation technology matters as well. With early LAN technologies, workstations were relatively
limited in the amount of traffic they could dump onto the network. With newer, faster CPUs, faster
buses, faster peripherals, and so on, it's much easier for a single workstation to fill up a network
segment. Because we have much faster PCs, and can do more with the applications on them, we can
more quickly fill up the available bandwidth.
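
The proportional degradation is simple arithmetic: a shared segment's fixed capacity is divided among however many stations contend for it. A quick sketch:

```python
SEGMENT_MBPS = 10.0   # one shared 10-Mbps Ethernet segment

def average_bandwidth_per_user(users):
    """Average share of a fixed-capacity segment as users are added."""
    return SEGMENT_MBPS / users

for n in (1, 10, 100):
    print(f"{n:3d} users -> {average_bandwidth_per_user(n):.2f} Mbps each")
```
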

Network Traffic Impact from Centralization of Servers

The way traffic is distributed on our network can have an impact as well. A very common practice in
many networks is to build what's known as a server farm, which effectively centralizes all of the
resources on the network that need to be accessed by all of the workstations. When we do that, we
cause congestion on those centralized, or backbone, segments within the network.

Servers are gradually moving into a central area (data center) versus being located throughout the
company to:

- Ensure company data integrity


- Maintain the network and ensure operability
- Maintain security
- Perform configuration and administrative functions

More centralized servers increase the bandwidth demands on campus and workgroup backbones

Today’s LANs

- Mostly switched resources; few shared


- Routers provide scalability
- Groups of users determined by physical location

When we look at today's LANs, the ones most commonly implemented, we're looking at mostly switched
infrastructures; because of the price point of deploying switches, many companies are bypassing shared
hub technologies and moving directly to switches. Even within switched networks, at some point we still
need routers to provide scalability. We also see that groupings of users are largely determined by
physical location. That's a quick look at traditional shared LAN technologies. Now that we know their
limitations, we want to look at how we can fix some of those issues and see how we can deploy LAN
switches to take advantage of new, improved technologies.

Switching Technology: Full Duplex

Another concept in LAN switching that dramatically improves scalability is known as full-duplex
transmission, which effectively doubles the amount of bandwidth between nodes. This can be important
between high-bandwidth consumers, such as a switch-to-server connection, and it provides essentially
collision-free transmission in the network.

For a 10 megabit per second connection, full duplex provides 10 megabits of transmit capacity and 10
megabits of receive capacity, for effectively 20 megabits of capacity on a single connection. Likewise, a
100 megabit per second connection can effectively provide 200 megabits per second of throughput.
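
The doubling works out as transmit capacity plus receive capacity on the same link:

```python
def full_duplex_capacity(line_rate_mbps):
    """Full duplex: simultaneous transmit + receive with no collisions,
    so effective capacity is twice the line rate."""
    return 2 * line_rate_mbps

print(full_duplex_capacity(10))    # a 10-Mbps link: 20 Mbps effective
print(full_duplex_capacity(100))   # a 100-Mbps link: 200 Mbps effective
```
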

Switching Technology: Two Methods

Another concept in switching is that there are actually two different modes of switching. This is
important because it can affect the performance, or latency, of switching through our network.

Cut-through

First we have something known as cut-through switching. With cut-through switching, as traffic flows
through the switch, the switch simply reads the destination MAC address to find out where the traffic
needs to go. As the data flows through the switch, we don't look at all of the data; we simply look at that
destination address and then, as the name implies, cut the frame through to its destination without
reading the rest of the frame.

Store-and-forward

This improves performance over the other method, known as store-and-forward. With store-and-forward
switching, we read not only the destination address but the entire frame of data. After reading the entire
frame, we make a decision on where it needs to go and send it on its way. The obvious trade-off is that
reading the entire frame takes longer.

The reason we read the entire frame is that we can perform error detection on it, which may increase
reliability if we're having problems in our switched network. So cut-through switching is faster, but the
trade-off is that we can't do any error detection.
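
The latency trade-off can be quantified. A cut-through switch can begin forwarding after reading only the destination address (the first 6 bytes of the frame, in this idealized model), while a store-and-forward switch must buffer the whole frame first. A rough sketch that ignores processing overhead:

```python
def forwarding_delay_us(frame_bytes, rate_mbps, mode):
    """Serialization delay before the switch can start sending, in microseconds."""
    if mode == "cut-through":
        bytes_read = 6               # destination MAC only (idealized)
    elif mode == "store-and-forward":
        bytes_read = frame_bytes     # buffer the whole frame (enables FCS check)
    else:
        raise ValueError(mode)
    return bytes_read * 8 / rate_mbps   # bits / Mbps = microseconds

frame = 1518  # maximum-size Ethernet frame
print(forwarding_delay_us(frame, 10, "cut-through"))        # small and fixed
print(forwarding_delay_us(frame, 10, "store-and-forward"))  # grows with frame size
```

Cut-through delay is constant regardless of frame size; store-and-forward delay grows with the frame, which is the price paid for being able to check the frame before forwarding it.
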

Key Switching Technologies


Let's look at some key technologies within LAN switching.

- 802.1d Spanning-Tree Protocol

- Multicasting

The Need for Spanning Tree

Specifically, we'll look at the Spanning Tree Protocol and some multicasting controls in our network. As
we build out large networks, one of the problems we have at Layer 2 of the OSI model is that, if we're
making forwarding decisions only at Layer 2, we cannot have any physical-layer loops in our network.

In a simple network like the one in the diagram, any multicast, broadcast, or unknown traffic will create
storms of traffic that loop endlessly through the network. To prevent that situation, we need to cut out
the loops.

802.1d Spanning-Tree Protocol (STP)

The Spanning Tree Protocol, or STP, is an industry standard defined by the IEEE standards committee
and known as the 802.1d Spanning Tree Protocol. It allows us to have physical redundancy in the
network while logically disconnecting the loops.

It's important that the loops are disconnected logically, because that allows us to dynamically
re-establish a connection if we need to, in the event of a failure within the network. Switches, and
bridges as well, do this simply by communicating back and forth by way of a protocol, basically
exchanging little hello messages.

If they stop hearing a given communication from a certain device on the network, they know that a
network device has failed, and when a network failure occurs they re-establish a link in order to
maintain redundancy. Technically, these little exchanges are known as BPDUs, or Bridge Protocol Data
Units.

Spanning Tree Protocol works just fine, but one issue is that it can take anywhere from half a minute to
a full minute for the network to fully converge, that is, for all devices to know the status of the network.
To improve on this, Cisco has introduced refinements such as PortFast and UplinkFast, which allow
Spanning Tree to converge even faster.
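
One concrete piece of what those BPDU exchanges accomplish is root-bridge election: every bridge advertises its bridge ID (a priority value concatenated with its MAC address), and the lowest ID wins. A simplified sketch of just the election step (real 802.1d also computes path costs and port roles; the switch names and MACs below are made up):

```python
def elect_root(bridges):
    """bridges: dict of name -> (priority, mac). Lowest (priority, mac) wins.
    Comparing the tuple mirrors how the bridge ID is compared: priority
    first, MAC address as the tiebreaker."""
    return min(bridges, key=lambda name: bridges[name])

switches = {
    "SW1": (32768, "00:0a:0a:0a:0a:0a"),
    "SW2": (4096,  "00:0b:0b:0b:0b:0b"),   # lower priority wins regardless of MAC
    "SW3": (32768, "00:01:01:01:01:01"),
}
print(elect_root(switches))
```
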

Multicasting

Now, another issue in Layer 2, or switched, networks is control of multicast traffic. There are many new
applications emerging today, such as video-based applications and desktop conferencing, that take
advantage of multicasting.

But without special controls in the network, multicasting will quickly congest the network, so what we
need is to add intelligent multicasting.

Multipoint Communications
Now, let's understand that there are a few fundamental ways to achieve multipoint communications,
because that's effectively what we're trying to do with video-based applications, or any multimedia-type
applications that use this mechanism.

One way is to broadcast our traffic, which effectively sends our messages everywhere. The obvious
downside is that not everybody necessarily needs to hear these communications; while broadcasting will
get the job done, it's not the most efficient way. The better way is multicasting.

With multicasting, applications use a special group address to communicate with only those stations, or
groups of stations, that need to receive the transmissions. That's what we mean by multipoint
communications, and it's the more effective way to do it.

Multicast

This also needs to be done dynamically, because multicast groups change over time. To do this, we need
some special protocols in the network. First, in the Wide Area, we need multicast routing protocols. We
already have routing protocols in the Wide Area, such as RIP (the Routing Information Protocol), OSPF,
and IGRP, but we need to add multicast extensions so that these routing protocols understand how to
handle multicast groups.

An example of a multicast routing protocol is PIM, or Protocol Independent Multicast, which is simply an
extension of the existing routing protocols in our network. Another protocol is IGMP, the Internet Group
Management Protocol, which allows us to identify the group membership of the IP stations that want to
participate in a given multicast conversation.

As indicated by the red traffic in our network, channel #1 is being multicast through the network, and by
way of IGMP the workstations can signal back to the originating video servers that they want to
participate. Once the multicast routing protocols are added, we can efficiently deliver our traffic in the
Wide Area. Another challenge is that once our traffic gets to the Local Area Network, or the switch, that
traffic is by default flooded to all stations in the network.

End-to-End Multicast

That flooding happens because IGMP works at Layer 3, but our LAN switch works at Layer 2, so the
switch has no concept of Layer 3 group membership. What we need to do is add some intelligence to the
switch. The intelligence we're going to add is a protocol such as CGMP, the Cisco Group Management
Protocol. A similar technology we could add instead is IGMP snooping, which has the same effect in the
Local Area Network.

That effect, as you see in the diagram, is to limit our multicast traffic to only those stations that want to
participate in the group. Now the red channel, or channel #1, is delivered only to station #1 and
station #3.

Station #2 does not receive this content because it does not wish to participate. The advantage of
adding protocols such as IGMP, CGMP, IGMP snooping, and Protocol Independent Multicast into our
network is that they achieve bandwidth savings for our multicast traffic.
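
The effect of CGMP or IGMP snooping can be sketched as a per-group port list maintained by the switch: multicast frames are forwarded only to ports where a member has been heard, instead of flooded everywhere. A toy model (the group address and port numbers are illustrative):

```python
class SnoopingSwitch:
    """Toy IGMP-snooping switch: constrains multicast to member ports."""

    def __init__(self):
        self.groups = {}                       # group address -> set of ports

    def igmp_join(self, group, port):
        """Record an overheard IGMP membership report on a port."""
        self.groups.setdefault(group, set()).add(port)

    def forward_multicast(self, group, in_port):
        """Forward only to known member ports, never flood."""
        members = self.groups.get(group, set())
        return sorted(members - {in_port})

sw = SnoopingSwitch()
sw.igmp_join("239.1.1.1", port=1)   # station on port 1 joins channel #1
sw.igmp_join("239.1.1.1", port=3)   # station on port 3 joins channel #1
print(sw.forward_multicast("239.1.1.1", in_port=5))  # port 2 is spared
```
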

Why Use Multicast?

What we see indicated in red is that, without multicast controls, as we add stations to our multicast
group, the amount of bandwidth we need increases in a linear fashion. By adding multicast controls, the
amount of bandwidth is reduced dramatically, because intelligent multicast controls make better use of
the bandwidth in our network. Adding multicast controls also reduces the cost of networking, because
we've reduced the bandwidth we need, providing a dramatic improvement to our Local Area Network.
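
The linear growth corresponds to sending a separate unicast copy per receiver; with multicast, one stream serves every member. A quick comparison (the 1.5-Mbps per-viewer video rate is an assumption for illustration):

```python
STREAM_MBPS = 1.5   # assumed per-viewer video rate, for illustration only

def unicast_load(receivers):
    """One copy of the stream per receiver: bandwidth grows linearly."""
    return STREAM_MBPS * receivers

def multicast_load(receivers):
    """One shared stream regardless of group size."""
    return STREAM_MBPS if receivers else 0.0

for n in (1, 10, 100):
    print(n, "receivers:", unicast_load(n), "vs", multicast_load(n), "Mbps")
```
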

- Summary -

- Switches provide dedicated access

- Switches eliminate collisions and increase capacity

- Switches support multiple conversations at the same time

- Switches provide intelligence for multicasting

In this Lesson, we’ll discuss the WAN. We’ll start by defining what a WAN is, and then move on to talking
about basic technology such as WAN devices and circuit and packet switching.
also cover transmission options from POTS (plain old telephone service) to Frame Relay, to leased lines,
and more.
Finally, we’ll discuss wide area requirements including a section on minimizing WAN charges with
bandwidth optimization features.

The Agenda

- WAN Basics

- Transmission Options

- WAN Requirements & Solutions

WAN Basics

What Is a WAN?

So, what is a WAN? A WAN is a data communications network that serves users across a broad
geographic area and often uses transmission facilities provided by common carriers such as telephone
companies. These providers are companies like MCI, AT&T, UuNet, and Sprint. There are also many
small service providers that provide connectivity to one of the larger carriers’ networks and may even
have email servers to store clients’ mail until it is retrieved.

- Telephone service is commonly referred to as plain old telephone service (POTS).

- WAN technologies function at the lower three layers of the OSI reference model: the physical layer,
the data link layer, and the network layer.

Common WAN network components include WAN switches, access servers, modems, CSU/DSUs, and
ISDN Terminals.

WAN Devices

A WAN switch is a multiport internetworking device used in carrier networks. These devices typically
switch traffic such as Frame Relay, X.25, and SMDS and operate at the data link layer of the OSI
reference model. These WAN switches can share bandwidth among allocated service priorities, recover
from outages, and provide network design and management systems.

A modem is a device that interprets digital and analog signals, enabling data to be transmitted over
voice-grade telephone lines. At the source, digital signals are converted to analog. At the destination,
these analog signals are returned to their digital form.

An access server is a concentration point for dial-in and dial-out connections.

A channel service unit/data service unit (CSU/DSU) is a digital interface device that adapts the physical
interface on a data terminal equipment (DTE) device (such as a terminal) to the interface of a data
circuit-terminating equipment (DCE) device (such as a switch) in a switched-carrier network. The
CSU/DSU also provides signal timing for communication between these devices.

An ISDN terminal is a device used to connect ISDN Basic Rate Interface (BRI) connections to other
interfaces, such as EIA/TIA-232. A terminal adapter is essentially an ISDN modem.

WAN Terminating Equipment


The WAN physical layer describes the interface between the data terminal equipment (DTE) and the data
circuit-terminating equipment (DCE). Typically, the DCE is the service provider, and the DTE is the
attached device (the customer’s device). In this model, the services offered to the DTE are made
available through a modem or channel service unit/data service unit (CSU/DSU).
CSU/DSU (Channel Service Unit / Data Service Unit): a device that connects the end-user equipment to
the local digital telephone loop or to the service provider’s data transmission loop. The DSU adapts the
physical interface on a DTE device to a transmission facility such as T1 or E1, and is also responsible for
functions such as signal timing for synchronous serial transmissions.
Unless a company owns (literally) the lines over which they transport data, they must utilize the services
of a Service Provider to access the wide area network.

Circuit Switching

- Dedicated physical circuit established, maintained, and terminated through a carrier network for
each communication session

- Datagram and data stream transmissions

- Operates like a normal telephone call

- Example: ISDN

Service providers typically offer both circuit-switching and packet-switching services.
Circuit switching is a WAN switching method in which a dedicated physical circuit is established,
maintained, and terminated through a carrier network for each communication session. Circuit switching
accommodates two types of transmissions: datagram transmissions and data-stream transmissions.
Used extensively in telephone company networks, circuit switching operates much like a normal
telephone call. Integrated Services Digital Network (ISDN) is an example of a circuit-switched WAN
technology.

Packet Switching

Packet switching is a WAN switching method in which network devices share a single point-to-point link
to transport packets from a source to a destination across a carrier network. Statistical multiplexing is
used to enable devices to share these circuits. Asynchronous Transfer Mode (ATM), Frame Relay,
Switched Multimegabit Data Service (SMDS), and X.25 are examples of packet-switched WAN
technologies.

- Network devices share a point-to-point link to transport packets from a source to a destination across
a carrier network

- Statistical multiplexing is used to enable devices to share these circuits

- Examples: ATM, Frame Relay, SMDS, X.25

WAN Virtual Circuits

- A logical circuit ensuring reliable communication between two devices

- Switched virtual circuits (SVCs)

- Dynamically established on demand


- Torn down when transmission is complete
- Used when data transmission is sporadic

- Permanent virtual circuits (PVCs)

- Permanently established
- Save bandwidth for cases where certain virtual circuits must exist all the time

- Used in Frame Relay, X.25, and ATM

A virtual circuit is a logical circuit created to ensure reliable communication between two network
devices. Two types of virtual circuits exist: switched virtual circuits (SVCs) and permanent virtual circuits
(PVCs). Virtual circuits are used in Frame Relay, X.25, and ATM.
SVCs are dynamically established on demand and are torn down when transmission is complete. SVCs
are used in situations where data transmission is sporadic.
PVCs are permanently established. PVCs save bandwidth associated with circuit establishment and tear
down in situations where certain virtual circuits must exist all the time.

WAN Protocols

The OSI model provides a conceptual framework for communication between computers, but the model
itself is not a method of communication. Actual communication is made possible by using communication
protocols. A protocol implements the functions of one or more of the OSI layers. A wide variety of
communication protocols exist, but all tend to fall into one of the following groups:

- LAN protocols: operate at the physical and data link layers and define communication over the various
LAN media

- WAN protocols: operate at the lowest three layers and define communication over the various wide-
area media.

- Network protocols: are the various upper-layer protocols in a given protocol suite.

- Routing protocols: network-layer protocols responsible for path determination and traffic switching.

SDLC:-
Synchronous Data Link Control. IBM’s SNA data link layer communications protocol. SDLC is a bit-
oriented, full-duplex serial protocol that has spawned numerous similar protocols, including HDLC and
LAPB.

HDLC:-
High-Level Data Link Control. Bit-oriented synchronous data link layer protocol developed by ISO.
Specifies a data encapsulation method on synchronous serial links using frame characters and
checksums.

LAPB:-
Link Access Procedure, Balanced. Data link layer protocol in the X.25 protocol stack. LAPB is a bit-
oriented protocol derived from HDLC.

PPP:-
Point-to-Point Protocol. Provides router-to-router and host-to-network connections over synchronous
and asynchronous circuits with built-in security features. Works with several network layer protocols,
such as IP, IPX, & ARA.

X.25 PTP:-
Packet level protocol. Network layer protocol in the X.25 protocol stack. Defines how connections are
maintained for remote terminal access and computer communications in PDNs. Frame Relay is
superseding X.25.

ISDN:-
Integrated Services Digital Network. Communication protocol, offered by telephone companies, that
permits telephone networks to carry data, voice, and other source traffic.

Frame Relay:-
Industry-standard, switched data link layer protocol that handles multiple virtual circuits using HDLC
encapsulation between connected devices. Frame Relay is more efficient than X.25, and generally
replaces it.

Transmission Options or WAN Services

There are a number of transmission options available today. They fall either into the analog or digital
category. Next let’s take a brief look at each of these transmission types.

POTS Using Modem Dialup

Analog modems using basic telephone service are asynchronous transmission-based, and have the
following benefits:
- Available everywhere
- Easy to set up
- Dial anywhere on demand
- The lowest cost alternative of any wide-area service

Integrated Services Digital Network (ISDN)

ISDN is a digital service that can use asynchronous or, more commonly, synchronous transmission.
ISDN can transmit data, voice, and video over existing copper phone lines. Instead of leasing a
dedicated line for high-speed digital transmission, ISDN offers the option of dialup connectivity—
incurring charges only when the line is active.
ISDN provides a high-bandwidth, cost-effective solution for companies requiring light or sporadic high-
speed access to either a central or branch office.
Companies needing more permanent connections should evaluate leased-line connections.

- High bandwidth
- Up to 128 Kbps per basic rate interface
- Dial on demand
- Multiple channels
- Fast connection time
- Monthly rate plus cost-effective, usage-based billing
- Strictly digital

ISDN comes in two flavors, Basic Rate Interface (BRI) and Primary Rate Interface (PRI). BRI provides
two “B” or bearer channels of 64 Kbps each and one additional signaling channel called the “D” or delta
channel.
While it requires only one physical connection, ISDN provides two channels that remote telecommuters
use to connect to the company network.
PRI provides up to 23 bearer channels of 64 Kbps each and one D channel for signaling. That’s 23
channels but with only one physical connection, which makes it an elegant solution: there’s no wiring
mess (PRI service typically provides 30 bearer channels outside the U.S. and Canada).
You’ll want to use PRI at your central site if you plan to have many ISDN dial-in clients.
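The BRI and PRI figures above follow directly from the 64-Kbps bearer-channel size; a quick back-of-the-envelope check:

```python
# ISDN channel arithmetic from the figures above.
B_CHANNEL_KBPS = 64

# BRI: two bearer (B) channels plus one signaling (D) channel
bri_data_kbps = 2 * B_CHANNEL_KBPS
print(bri_data_kbps)  # 128 Kbps of usable data bandwidth per BRI

# PRI (U.S./Canada): 23 bearer channels plus one 64-Kbps D channel
pri_data_kbps = 23 * B_CHANNEL_KBPS
print(pri_data_kbps)  # 1472 Kbps; adding the D channel (64) and T1 framing (8) gives 1544 Kbps
```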

Leased Line

Leased lines are most cost-effective if a customer’s daily usage exceeds four to six hours. Leased lines
offer predictable throughput with bandwidth typically 56 Kbps to 1.544 Mbps. They require one
connection per physical interface (namely, a synchronous serial port).
- One connection per physical interface
- Bandwidth: 56 kbps–1.544 Mbps
- T1/E1 and fractional T1/E1
- Cost effective at 4–6 hours daily usage
- Dedicated connections with predictable throughput
- Permanent
- Cost varies by distance

Frame Relay

Frame Relay provides a standard interface to the wide-area network for bridges, routers, front-end
processors (FEPs), and other LAN devices. A Frame Relay interface is designed to act like a wide-area
LAN: it relays data frames directly to their destinations at very high speeds. Frame Relay frames travel
over predetermined virtual circuit paths, are self-routing, and arrive at their destination in the correct
order.
Frame Relay is designed to handle bursty, LAN-type traffic efficiently.
The guaranteed bandwidth (known as committed information rate or CIR) is typically between 56 Kbps
and 1.544 Mbps.
The cost is normally not distance-sensitive.

Connecting Offices with Frame Relay

Companies that require office-to-office communications usually choose between a dedicated leased-line
connection and a packet-based service, such as Frame Relay or X.25. As a rule, higher connect times
make leased-line solutions more cost-effective.
Like ISDN, Frame Relay requires only one physical connection to the Frame Relay network, but can
support many Permanent Virtual Circuits, or PVCs.

Frame Relay service is often less expensive than leased lines, and the cost is based on:
- The committed information rate (CIR), which can be exceeded up to the port speed when the
capacity is available on your carrier’s network.
- Port speed
- The number of permanent virtual circuits (PVCs) you require; a benefit to users who need reliable,
dedicated connections to resources simultaneously.

X.25

X.25 networks implement the internationally accepted ITU-T standard governing the operation of packet
switching networks. Transmission links are used only when needed. X.25 was designed almost 20 years
ago when network link quality was relatively unstable. It performs error checking along each hop from
source node to destination node.
The bandwidth is typically between 9.6 Kbps and 64 Kbps.
X.25 is widely available in many parts of the world including North America, Europe, and Asia.
There is a large installed base of X.25 devices.

Digital Subscriber Line (xDSL)

- DSL is a pair of “modems” on each end of a copper wire pair


- DSL converts ordinary phone lines into high-speed data conduits
- Like dial, cable, wireless, and T1, DSL by itself is a transmission technology, not a complete solution
- End-users don’t “buy” DSL, they “buy” services, such as high-speed Internet access, intranet, leased
line, voice, VPN, and video on demand
- Service is limited to certain geographical areas

Digital subscriber line (DSL) technology is a high-speed service that, like ISDN, operates over ordinary
twisted-pair copper wires supplying phone service to businesses and homes in most areas. DSL is often
more expensive than ISDN in markets where it is offered today.
Using special modems and dedicated equipment in the phone company's switching office, DSL offers
faster data transmission than either analog modems or ISDN service, plus, in most cases, simultaneous
voice communications over the same lines. This means you don't need to add lines to supercharge your
data access speeds. And since DSL devotes a separate channel to voice service, phone calls are
unaffected by data transmissions.

DSL Modem Technology


DSL has several flavors. ADSL delivers asymmetrical data rates (for example, data moves faster on the
way to your PC than it does on the way out to the Internet). Other DSL technologies deliver symmetrical
data rates (the same speed traveling in and out of your PC).
The type of service available to you will depend on the carriers operating in your area. Because DSL
works over the existing telephone infrastructure, it should be easy to deploy over a wide area in a
relatively short time. As a result, the pursuit of market share and new customers is spawning
competition between traditional phone companies and a new breed of firms called competitive local
exchange carriers (CLECs).

Asynchronous Transfer Mode (ATM)

ATM is short for Asynchronous Transfer Mode, a technology capable of transferring voice, video, and
data through private and public networks. It uses VLSI technology to segment data at high speeds into
units called cells. Basically, it carves up Ethernet or Token Ring packets and creates cells out of them.

Each cell contains 5 bytes of header information and 48 bytes of payload, for 53 bytes total in every cell.
Each cell contains identifiers that specify the data stream to which it belongs. ATM is capable of T3
speeds (E3 speeds in Europe) as well as fiber speeds such as SONET (Synchronous Optical Network) at
OC-1 and up. ATM technology is primarily used in enterprise backbones or in WAN links.
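The 5-byte header / 48-byte payload structure described above can be illustrated with a toy segmentation routine. This is only a sketch: a real ATM header carries VPI/VCI and other fields, and the real adaptation layers handle padding differently; here the header is reduced to a single circuit identifier.

```python
def segment_into_cells(packet: bytes, circuit_id: int) -> list[bytes]:
    """Carve a packet into 53-byte ATM-style cells: a toy 5-byte header
    (just a circuit identifier plus padding) followed by a 48-byte payload,
    zero-padded on the final cell."""
    cells = []
    header = circuit_id.to_bytes(4, "big") + b"\x00"  # toy 5-byte header
    for i in range(0, len(packet), 48):
        payload = packet[i:i + 48].ljust(48, b"\x00")  # pad the last cell
        cells.append(header + payload)
    return cells

cells = segment_into_cells(b"x" * 100, circuit_id=42)
print(len(cells), len(cells[0]))  # 3 53 -> three cells, each 53 bytes
```

A 100-byte packet needs three cells (48 + 48 + 4 bytes of payload), which also shows the padding overhead cell-based transport can introduce.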

How to choose Service?

Analog services are the least expensive type of service. ISDN costs somewhat more but improves
performance over even the fastest current analog offerings. Leased lines are the costliest of these three
options, but offer dedicated, digital service for more demanding situations. Which is right?
You’ll need to answer a few questions:

- Will employees use the Internet frequently?


- Will the Internet be used for conducting business (for example, inventory management, online
catalog selling or account information or bidding on new jobs)?
- Do you anticipate a large volume of traffic between branch offices of the business?
- Is there a plan to use videoconferencing or video training between locations?
- Who will use the main office’s connection to the Internet - individual employees at the central office,
telecommuting workers dialing in from home, mobile workers dialing in from the road?

The more times the answer is “yes”, the more likely that leased line services are required. It is also
possible to mix and match services. For example, small branch offices or individual employees dialing in
from home might connect to the central office using ISDN, while the main connection from the central
office to the Internet can be a T1.
Which service you select also depends on what the Internet Service Provider (ISP) is using. If the ISP’s
maximum line speed is 128K, as with ISDN, it wouldn’t make sense to connect to that ISP with a T1
service. It is important to understand that as the bandwidth increases, so do the charges, both from the
ISP and the phone company. Keep in mind that rates for different kinds of connections vary from
location to location.

Let’s compare our technology options, assuming all services are available in our region. To summarize:
- A leased-line service provides a dedicated connection with a fixed bandwidth at a flat rate. You pay
the same monthly fee regardless how much or how little you use the connection.

- A packet-switched service typically provides a permanent connection with specific, guaranteed
bandwidth (Frame Relay). Temporary connections (such as X.25) may also be available. The cost of
the line is typically a flat rate, plus an additional charge based on actual usage.

- A circuit-switched service provides a temporary connection with variable bandwidth, with cost
primarily based on actual usage.

Wide-Area Network Requirements

- Minimize bandwidth costs


- Maximize efficiency
- Maximize performance
- Support new/emerging applications
- Maximize availability
- Minimize management and maintenance

Manage Bandwidth to Control Cost

Because transmission costs are by far the largest portion of a network’s cost, there are a number of
bandwidth optimization features you should be aware of that enable the cost-effective use of WAN links.
These include dial-on-demand routing, bandwidth-on-demand, snapshot routing, IPX protocol spoofing,
and compression.
Dial-on-demand ensures that you’re only paying for bandwidth when it’s needed for switched services
such as ISDN and asynchronous modem (and switched 56Kb in the U.S. and Canada only).
Bandwidth-on-demand gives you the flexibility to add additional WAN bandwidth when it’s needed to
accommodate heavy network loads such as file transfers. Snapshot routing prevents unnecessary
transmissions: it keeps your switched network from being dialed solely for the purpose of exchanging
routing updates at short intervals (e.g., every 30 seconds). Many of you are familiar with compression,
which is also a good method of optimization.

Let’s take a closer look at a few features that will keep your WAN costs down.

- Dial-on-Demand Routing

Dial-on-demand routing allows a router to automatically initiate and close a circuit-switched session.
With dial-on-demand routing, the router dials up the WAN link only when it senses “interesting” traffic.
Interesting traffic might be defined as any traffic destined for the remote network, or only traffic related
to a specific host address or service.
Equally important, dial-on-demand routing enables the router to take down the connection when it is no
longer needed, ensuring that the user will not have unnecessary WAN usage charges.
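The notion of “interesting” traffic is essentially a filter over packet destinations; a minimal sketch (the destination prefixes here are invented for illustration, not taken from any real configuration):

```python
import ipaddress

# Hypothetical destinations worth dialing the WAN link for.
INTERESTING = [
    ipaddress.ip_network("10.1.0.0/16"),     # remote office network (example)
    ipaddress.ip_network("192.168.5.0/24"),  # specific server subnet (example)
]

def is_interesting(dst: str) -> bool:
    """Return True if a packet to dst should bring the dial-up link up."""
    addr = ipaddress.ip_address(dst)
    return any(addr in net for net in INTERESTING)

print(is_interesting("10.1.4.7"))    # True: dial the link
print(is_interesting("172.16.0.1"))  # False: leave the link down
```

In a real router the same idea is expressed with access lists; the filter can match whole remote networks or only specific hosts and services, exactly as the text describes.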

- Bandwidth-on-Demand

Bandwidth-on-demand works in a similar way.


When the router senses that the traffic level on the primary link has reached a certain threshold—say,
when a user starts a large file transfer—it automatically dials up additional bandwidth through the PSTN
to accommodate the increased load.
For example, if you’re using ISDN, you may decide that when the first B channel reaches 75% saturation
for more than one minute, your router will automatically dial up a second B channel. When the traffic
load on the second B channel falls below 40%, the channel is automatically dropped.
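The 75%/40% example above is a simple hysteresis rule; sketched in Python (the one-minute hold-down timer from the example is omitted for brevity):

```python
def adjust_channels(utilization: float, second_channel_up: bool) -> bool:
    """Hysteresis from the example above: bring up a second B channel when
    the first exceeds 75% load, drop it again when load falls below 40%.
    Returns the new state of the second channel."""
    if not second_channel_up and utilization > 0.75:
        return True   # dial up the second B channel
    if second_channel_up and utilization < 0.40:
        return False  # drop the second B channel
    return second_channel_up  # between the thresholds: no change

print(adjust_channels(0.80, False))  # True: saturation triggers a second channel
print(adjust_channels(0.35, True))   # False: light load drops it again
print(adjust_channels(0.60, True))   # True: stays up between the two thresholds
```

The gap between the bring-up and drop thresholds prevents the link from flapping when traffic hovers around a single cutoff.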

- Snapshot Routing

By default, routing protocols such as RIP exchange routing tables every 30 seconds. If each of these
routine updates places a call, they will drive up WAN costs unnecessarily; Snapshot Routing limits these
calls to the remote site.
A remote router with this feature only requests a routing update when the WAN link is already up for the
purpose of transferring user application data.
Without Snapshot Routing, your ISDN connection would be dialed every 30 seconds; this feature
ensures that the remote router always has the most up-to-date routing information but only when
needed.

- IPX Protocol Spoofing

Protocol spoofing allows the user to improve performance while providing the ability to use lower line
speeds over the WAN.

- Compression

Compression reduces the space required to store data, thus reducing the bandwidth required to
transmit. The benefit of these compression algorithms is that users can utilize lower line speeds if
needed to save costs. Compression also provides the ability to move more data over a link than it would
normally bear.
- Three types: header, link, and payload

- Van Jacobson header compression


RFC 1144
Reduces header from 40 to ~5 bytes
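The benefit of shrinking the 40-byte TCP/IP header to roughly 5 bytes (RFC 1144) is largest for small interactive packets; a back-of-the-envelope calculation for a one-byte payload:

```python
# Wire overhead of a full 40-byte TCP/IP header vs. the ~5-byte compressed
# header (RFC 1144) for a small interactive packet, e.g. a 1-byte keystroke.
payload = 1
plain = 40 + payload
compressed = 5 + payload
print(plain, compressed)             # 41 6 -> bytes on the wire per packet
print(round(compressed / plain, 3))  # 0.146 -> roughly a 7x reduction
```

For large data packets the header is a small fraction of the total, so header compression matters most on slow links carrying many small packets.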

- Dial Backup

Dial backup addresses a customer’s need for reliability and guaranteed uptime. Dial backup capability
offers users protection against WAN downtime by allowing them to configure a backup serial line via a
circuit-switched connection such as ISDN. When the software detects the loss of a signal from the
primary line device or finds that the line protocol is down, it activates the secondary line to establish a
new session and continue the job of transmitting traffic over the backup line.

- Summary -

- The network operates beyond the local LAN’s geographic scope. It uses the services of carriers like
regional bell operating companies (RBOCs), Sprint, and MCI.

- WANs use serial connections of various types to access bandwidth over wide-area geographies.

- An enterprise pays the carrier or service provider for connections used in the WAN; the enterprise can
choose which services it uses; carriers are usually regulated by tariffs.

- WANs rarely shut down, but since the enterprise must pay for services used, it might restrict access to
connected workstations. Not all WAN services are available in all locations.

The objective of this lesson is to explain routing. We’ll start by first defining what routing is. We’ll follow
that with a discussion on addressing.
There is a section on routing terminology which covers subjects like routed vs. routing protocols and
dynamic and static routing.
Finally, we’ll talk about routing protocols.

The Agenda

- What Is Routing?

- Network Addressing

- Routing Protocols

What Is Routing?
Routing is the process of finding a path to a destination host and of moving information across an
internetwork from a source to a destination. Along the way, at least one intermediate node typically is
encountered. Routing is very complex in large networks because of the many potential intermediate
destinations a packet might traverse before reaching its destination host.
A router is a device that forwards packets from one network to another and determines the optimal path
along which network traffic should be forwarded. Routers forward packets from one network to another
based on network layer information. Routers are occasionally called gateways (although this definition of
gateway is becoming increasingly outdated).

Routers—Layer 3

A router is a more sophisticated device than a hub or a switch. It determines the appropriate network
path to send the packet along by keeping an up-to-date network topology in memory, its routing table.

A router keeps a table of network addresses and knows which path to take to get to each network.
Routers keep track of each other’s routes by alternately listening, and periodically sending, route
information. When a router hears a routing update, it updates its routing table. Routing is often
contrasted with bridging, which might seem to accomplish precisely the same thing to the casual
observer. The primary difference between the two is that bridging occurs at Layer 2 (the data link layer)
of the OSI reference model, whereas routing occurs at Layer 3 (the network layer). This distinction
provides routing and bridging with different information to use in the process of moving information
from source to destination, so that the two functions accomplish their tasks in different ways.
In addition, bridges can’t block a broadcast (where a data packet is sent to all nodes on a network).
Broadcasts can consume a great deal of bandwidth. Routers are able to block broadcasts, so they
provide security and assist in bandwidth control.
You might ask, if bridging is faster than routing, why do companies move from a bridged/switched
network to a routed network?
There are many reasons, but LAN segmentation is a key reason. Also, routers increase scalability and
control broadcast transmissions.

Where are Routers Used?

A router can perform LAN-to-LAN routing through its ability to route packet traffic from one network to
another. It checks its routing table entries to determine the best path to the destination network.
A router can perform LAN-to-WAN and remote access routing through its ability to route packet traffic
from one network to another while handling different WAN services in between. Popular WAN service
options include Integrated Services Digital Network, or ISDN, leased lines, Frame Relay, and X.25.
Let’s look at routing in more detail.

Routing Tables

To aid the process of path determination, routing algorithms initialize and maintain routing tables, which
contain route information. Route information varies depending on the routing algorithm used. Routing
algorithms fill routing tables with a variety of information. Two examples are
destination/next hop associations and path desirability.

- Destination/next hop associations tell a router that a particular destination is linked to a particular
router representing the “next hop” on the way to the final destination. When a router receives an
incoming packet, it checks the destination address and attempts to associate this address with a
next hop.
- With path desirability, routers compare metrics to determine optimal routes. Metrics differ
depending on the routing algorithm used. A metric is a standard of measurement, such as path
length, that is used by routing algorithms to determine the optimal path to a destination.
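A destination/next-hop table with metric-based path desirability can be sketched as follows (the networks, next-hop addresses, and metric values are invented for illustration):

```python
# Toy routing table: destination network -> list of (next_hop, metric) pairs.
routing_table = {
    "10.0.0.0/8":    [("192.168.1.2", 2), ("192.168.1.3", 5)],
    "172.16.0.0/16": [("192.168.1.3", 1)],
}

def next_hop(dest_network: str) -> str:
    """Associate a destination with a next hop, preferring the
    lowest (most desirable) metric, as described above."""
    candidates = routing_table[dest_network]
    return min(candidates, key=lambda entry: entry[1])[0]

print(next_hop("10.0.0.0/8"))  # 192.168.1.2 (metric 2 beats metric 5)
```

A real router would also perform longest-prefix matching on the destination address rather than an exact dictionary lookup; this sketch shows only the next-hop/metric association.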

Routers communicate with one another and maintain their routing tables through the transmission of a
variety of messages.

- Routing update messages may include all or a portion of a routing table. By analyzing routing
updates from all other routers, a router can build a detailed picture of network topology.

- Link-state advertisements inform other routers of the state of the sender’s link so that routers can
maintain a picture of the network topology and continuously determine optimal routes to network
destinations.

Routing Algorithm Goals

Routing tables contain information used by software to select the best route. But how, specifically, are
routing tables built? What is the specific nature of the information they contain? How do routing
algorithms determine that one route is preferable to others?
Routing algorithms often have one or more of the following design goals:

Optimality - the capability of the routing algorithm to select the best route, depending on the metrics
and metric weightings used in the calculation. For example, one algorithm may use both hop count
and delay, but weight delay more heavily in the calculation.

Simplicity and low overhead - efficient routing algorithm functionality with a minimum of software
and utilization overhead. Particularly important when routing algorithm software must run on a
computer with limited physical resources.

Robustness and stability - routing algorithm should perform correctly in the face of unusual or
unforeseen circumstances, such as hardware failures, high load conditions, and incorrect
implementations. Because of their locations at network junctions, failures can cause extensive
problems.

Rapid convergence - Convergence is the process of agreement, by all routers, on optimal routes.
When a network event causes changes in router availability, recalculations are needed to reestablish
optimal routes. Routing algorithms that converge slowly can cause routing loops or network outages.

Flexibility - routing algorithm should quickly and accurately adapt to a variety of network
circumstances. Changes of consequence include router availability, changes in network bandwidth,
queue size, and network delay.

Routing Metrics

Routing algorithms have used many different metrics to determine the best route. Sophisticated routing
algorithms can base route selection on multiple metrics, combining them in a single (hybrid) metric. All
the following metrics have been used:

Path length - The most common metric. Either the sum of an assigned cost per network link, or the
hop count, a metric specifying the number of passes through network devices between source and
destination.

Reliability - dependability (bit-error rate) of each network link. Some network links might go down
more often than others. Also, some links may be easier or faster to repair after a failure.

Delay - The length of time required to move a packet from source to destination through the
internetwork. Depends on bandwidth of intermediate links, port queues at each router, network
congestion, and physical distance. A common and useful metric.

Bandwidth - available traffic capacity of a link.

Load - Degree to which a network resource, such as a router, is busy (measured, for example, by
CPU utilization or packets processed per second).

Communication cost - operating expenses of network links (private versus public lines).
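A hybrid metric of the kind described under Optimality above might combine hop count and delay with different weights; a sketch (the weight values are invented for illustration):

```python
# Hybrid metric weighting delay more heavily than hop count,
# as in the Optimality example above. Weights are illustrative only.
def hybrid_metric(hops: int, delay_ms: float,
                  w_hops: float = 1.0, w_delay: float = 3.0) -> float:
    return w_hops * hops + w_delay * delay_ms

route_a = hybrid_metric(hops=2, delay_ms=20)  # 1*2 + 3*20 = 62
route_b = hybrid_metric(hops=5, delay_ms=5)   # 1*5 + 3*5  = 20
print(min(("A", route_a), ("B", route_b), key=lambda r: r[1])[0])  # B
```

Note that route B wins despite having more hops, because the heavier delay weighting dominates the calculation.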
Now let’s talk a little about network addressing.

Network Addressing

Network and Node Addresses

Each network segment between routers is identified by a network address. These addresses contain
information about the path used by the router to pass packets from a source to a destination.
For some network layer protocols, a network administrator assigns network addresses according to
some preconceived internetwork addressing plan. For other network layer protocols, assigning addresses
is partially or completely dynamic.
Most network protocol addressing schemes also use some form of a node address. The node address
refers to the device’s port on the network. The figure in this slide shows three nodes sharing network
address 1 (Router 1.1, PC 1.2, and PC 1.3). For LANs, this port or device address can reflect the real
Media Access Control or MAC address of the device.
Unlike a MAC address, which has a preestablished and usually fixed relationship to a device, a network
address expresses a logical relationship within the network topology.
The hierarchy of Layer 3 addresses across the entire internetwork improves the use of bandwidth by
preventing unnecessary broadcasts. Broadcasts invoke unnecessary process overhead and waste
capacity on any devices or links that do not need to receive the broadcast. By using consistent end-to-
end addressing to represent the path of media connections, the network layer can find a path to the
destination without unnecessarily burdening the devices or links on the internetwork with broadcasts.

Examples:-

For TCP/IP, dotted decimal numbers show a network part and a host part. Network 10 uses the first of
the four numbers as the network part and the last three numbers (8.2.48) as the host address. The mask
is a companion number to the IP address. It tells the router which part of the number to interpret as the
network number and identifies the remainder as available for host addresses inside that network.
For Novell IPX, the network address 1aceb0b is a hexadecimal (base 16) number that cannot exceed a
fixed maximum number of digits. The host address 0000.0c00.6e25 (also a hexadecimal number) is a
fixed 48 bits long. This host address derives automatically from information in the hardware of the
specific LAN device.

Subnetwork Addressing
Subnetworks or subnets are networks arbitrarily segmented by a network administrator in order to
provide a multilevel, hierarchical routing structure while shielding the subnetwork from the addressing
complexity of attached networks.
Subnetting allows single routing entries to refer either to the larger block or to its individual
constituents. This permits a single, general routing entry to be used through most of the Internet, more
specific routes only being required for routers in the subnetted block.
A subnet mask is a 32-bit number that determines how an IP address is split into network and host
portions, on a bitwise basis. For example, 255.255.0.0 is the standard Class B subnet mask: applied to a
Class B address such as 131.108.0.0, the first two bytes identify the network and the last two bytes
identify the host.
A subnet mask is a 32-bit address mask used in IP to indicate which bits of an IP address are being
used for the subnet address. It is sometimes referred to simply as the mask. The term derives from the
fact that the host portion of the address is "masked out" by the 0 bits of the mask.
Subnetting helps to organize the network, allows rules to be developed and applied to the network, and
provides security and shielding. Subnetting also enables scalability by controlling the size of links to a
logical grouping of nodes that have reason to communicate with each other (such as within Human
Resources, R&D, or Manufacturing).
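The idea that one routing entry can refer to the larger block while more specific routes cover its constituents can be shown with Python's standard `ipaddress` module; the 131.108.0.0 block echoes the Class B example above, and the /24 split is an arbitrary illustration.

```python
import ipaddress

# Sketch: subnetting the Class B block 131.108.0.0/16 into /24 subnets.
# One routing entry for the /16 covers the whole block from outside;
# more specific /24 routes are needed only inside the subnetted network.
block = ipaddress.ip_network("131.108.0.0/16")
subnets = list(block.subnets(new_prefix=24))

print(len(subnets))                    # 256 subnets
print(subnets[0])                      # 131.108.0.0/24
print(block.supernet_of(subnets[5]))   # True: the /16 still covers each /24
```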

Routing Algorithm Types

Routing algorithms can be classified by type. Key differentiators include:

- Single-path versus multi-path: Multi-path routing algorithms support multiple paths to the same
destination and permit traffic multiplexing over multiple lines. Multi-path routing algorithms can
provide better throughput and reliability.

- Flat versus hierarchical: In a flat routing system, all routers are peers of one another. In a hierarchical
routing system, some routers form what amounts to a routing backbone. In hierarchical systems,
some routers in a given domain can communicate with routers in other domains, while others can
communicate only with routers in their own domain.

- Host-intelligent versus router-intelligent: In host-intelligent routing algorithms, the source end node
determines the entire route and routers act simply as store-and-forward devices. In router-intelligent
routing algorithms, hosts are assumed to know nothing about routes, and routers determine the optimal
path.

- Intradomain versus interdomain: Some routing algorithms work only within domains; others work
within and between domains.

- Static versus dynamic - this classification will be discussed in the following two slides.

- Link state versus distance vector: will be discussed after static versus dynamic routing.

Static Routing

Static routing knowledge is administered manually: a network administrator enters it into the router’s
configuration. The administrator must manually update this static route entry whenever an internetwork
topology change requires an update. Static knowledge is private—it is not conveyed to other routers as
part of an update process.
Static routing has several useful applications when it reflects a network administrator’s special
knowledge about network topology.
When an internetwork partition is accessible by only one path, a static route to the partition can be
sufficient. This type of partition is called a stub network. Configuring static routing to a stub network
avoids the overhead of dynamic routing.
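A router's forwarding decision over static entries amounts to a longest-prefix-match lookup. The sketch below uses a stub network plus a default route, as described above; all prefixes and next-hop addresses are invented for illustration.

```python
import ipaddress

# Sketch: a static routing table as (prefix, next-hop) entries. A stub network
# reachable by only one path needs just one static entry; everything else
# can fall through to a default route. Prefixes and next hops are made up.
STATIC_ROUTES = [
    (ipaddress.ip_network("192.168.50.0/24"), "10.0.0.2"),  # stub network
    (ipaddress.ip_network("0.0.0.0/0"), "10.0.0.1"),        # default route
]

def next_hop(dst):
    """Longest-prefix match over the static table."""
    matches = [(net, hop) for net, hop in STATIC_ROUTES
               if ipaddress.ip_address(dst) in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("192.168.50.7"))  # 10.0.0.2 (stub route)
print(next_hop("8.8.8.8"))       # 10.0.0.1 (default)
```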

Dynamic Routing

After the network administrator enters configuration commands to start dynamic routing, route
knowledge is updated automatically by a routing process whenever new topology information is received
from the internetwork. Changes in dynamic knowledge are exchanged between routers as part of the
update process.
Dynamic routing tends to reveal everything known about an internetwork. For security reasons, it might
be appropriate to conceal parts of an internetwork. Static routing allows an internetwork administrator
to specify what is advertised about restricted partitions.
In the illustration above, the preferred path between routers A and C is through router D. If the path
between Router A and Router D fails, dynamic routing determines an alternate path from A to C.
According to the routing table generated by Router A, a packet can reach its destination over the
preferred route through Router D. However, a second path to the destination is available by way of
Router B. When Router A recognizes that the link to Router D is down, it adjusts its routing table,
making the path through Router B the preferred path to the destination. The routers continue sending
packets over this link.
When the path between Routers A and D is restored to service, Router A can once again change its
routing table to indicate a preference for the counterclockwise path through Routers D and C to the
destination network.
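The failover behavior described above, where Router A prefers the path through D but switches to B when the A-D link fails, can be sketched as recomputing the lowest-cost next hop over whichever links are currently up. The topology and costs are hypothetical.

```python
# Sketch: Router A chooses its next hop toward C from the candidate paths
# that are currently up, as in the failover example above. Costs are made up.
paths_from_A = {
    "D": {"cost": 1, "up": True},   # preferred path A -> D -> C
    "B": {"cost": 2, "up": True},   # backup path A -> B -> C
}

def best_next_hop(paths):
    """Pick the lowest-cost next hop among links that are up."""
    live = {hop: p["cost"] for hop, p in paths.items() if p["up"]}
    return min(live, key=live.get) if live else None

print(best_next_hop(paths_from_A))   # D while the A-D link is up
paths_from_A["D"]["up"] = False      # link to D fails
print(best_next_hop(paths_from_A))   # routing adapts: B becomes preferred
```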

Distance Vector versus Link State

Distance vector versus link state is another possible routing algorithm classification.

- Link state algorithms (also known as shortest path first algorithms) flood routing information about
their own links to all network nodes. The link-state (also called shortest path first) approach recreates
the exact topology of the entire internetwork (or at least the partition in which the router is situated).

- Distance vector algorithms send all or some portion of their routing table only to neighbors. The
distance vector routing approach determines the direction (vector) and distance to any network in the
internetwork.

- A third classification in this course, called hybrid, combines aspects of these two basic algorithms.

There is no single best routing algorithm for all internetworks. Network administrators must weigh
technical and non-technical aspects of their network to determine what’s best.
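A minimal distance-vector exchange, where each router learns only from its neighbors' advertised tables, can be sketched as Bellman-Ford-style relaxation. The four-node topology and link costs below are invented.

```python
# Sketch of distance-vector routing: each node repeatedly learns distances
# from its neighbors' tables (Bellman-Ford relaxation). Topology is made up.
links = {  # symmetric link costs
    ("A", "B"): 1, ("B", "C"): 2, ("A", "D"): 5, ("C", "D"): 1,
}
nodes = {"A", "B", "C", "D"}
neighbors = {n: {} for n in nodes}
for (u, v), cost in links.items():
    neighbors[u][v] = cost
    neighbors[v][u] = cost

# Each node starts knowing only itself, then exchanges tables with neighbors.
dist = {n: {n: 0} for n in nodes}
for _ in range(len(nodes) - 1):               # enough rounds to converge
    for u in nodes:
        for v, cost in neighbors[u].items():
            for dest, d in dist[v].items():   # v advertises its table to u
                if cost + d < dist[u].get(dest, float("inf")):
                    dist[u][dest] = cost + d

print(dist["A"]["C"])  # 3: A->B->C (1+2) beats A->D->C (5+1)
```

Note that no node ever sees the full topology, only its neighbors' distance tables; that is the essential contrast with the link-state approach.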

Routing Protocols

Routed versus Routing Protocols

Confusion often exists between the similar terms routing protocol and routed protocol.
A routed protocol is any network protocol that provides enough information in its network layer
address to allow a packet to be forwarded from host to host (that is, to direct user traffic). Routed
protocols define the format and use of the fields
within a packet. Packets generally are conveyed from end system to end system. The Internet IP
protocol and Novell’s IPX are examples of routed protocols. Other examples include DECnet, AppleTalk,
Novell NetWare, Open Systems Interconnect (OSI), Banyan VINES, and Xerox Network System (XNS).
A routing protocol supports a routed protocol by providing mechanisms for sharing routing information.
Routing protocol messages move between the routers. A routing protocol allows the routers to
communicate with other routers to update and maintain tables. Routing protocol messages do not carry
end-user traffic from network to network. A routing protocol uses the routed protocol to pass
information between routers. TCP/IP examples of routing protocols are the Routing Information Protocol
(RIP), Interior Gateway Routing Protocol (IGRP), Open Shortest Path First (OSPF), Border Gateway
Protocol (BGP), and Enhanced IGRP (EIGRP).

Routing Protocol Evolutions

Distance Vector

RIP - Routing Information Protocol. The most common IGP in the Internet. RIP uses hop count as a
routing metric.

IGRP - Interior Gateway Routing Protocol. IGP developed by Cisco to address the issues associated with
routing in large, heterogeneous networks.

Link State

OSPF - Open Shortest Path First. Link-state, hierarchical IGP routing algorithm proposed as a successor
to RIP in the Internet community. OSPF features include least-cost routing, multipath routing, and load
balancing. OSPF was derived from an early version of the IS-IS protocol.

NLSP - NetWare Link Services Protocol. Link-state routing protocol based on IS-IS.

IS-IS - Intermediate System-to-Intermediate System. OSI link-state hierarchical routing protocol based
on DECnet Phase V routing, whereby ISs (routers) exchange routing information based on a single
metric, to determine network topology.

Hybrid

EIGRP - Enhanced Interior Gateway Routing Protocol. Advanced version of IGRP developed by Cisco.
Provides superior convergence properties and operating efficiency, and combines the advantages of link
state protocols with those of distance vector protocols.

RIP and IGRP

RIP takes the path with the fewest hops, but it does not account for the speed of the links; it only
counts hops, and its limit is 15 hops. This creates a scalability issue when routing in large,
heterogeneous networks.
IGRP was developed by Cisco and works only with Cisco products (although it has been licensed to some
other vendors). It accounts for the varying speeds of each link. Additionally, IGRP can handle far more
hops than RIP (up to 255, depending on the IOS version). However, IGRP supports only IP.
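The contrast above, RIP counting only hops versus IGRP weighing link speed, can be shown by scoring the same two candidate paths under each style of metric. The bandwidth-based cost below is a deliberate simplification, not Cisco's actual IGRP formula, and the link speeds are invented.

```python
# Sketch: hop-count vs. bandwidth-aware metrics on two candidate paths.
# Path 1: two fast 100-Mbps hops; path 2: one slow 1-Mbps hop.
RIP_INFINITY = 16  # RIP treats 16 hops as unreachable

paths = {
    "two_fast_hops": [100, 100],  # link bandwidths in Mbps, made-up values
    "one_slow_hop": [1],
}

def rip_metric(path):
    """RIP's metric: just the hop count, capped at 'infinity' (16)."""
    hops = len(path)
    return hops if hops < RIP_INFINITY else RIP_INFINITY

def bandwidth_metric(path):
    # Simplified bandwidth-based cost (NOT the real IGRP formula):
    # lower is better, dominated by the slowest link on the path.
    return 10_000 // min(path)

# RIP prefers the slow single hop; the bandwidth-aware metric does not.
print(min(paths, key=lambda p: rip_metric(paths[p])))        # one_slow_hop
print(min(paths, key=lambda p: bandwidth_metric(paths[p])))  # two_fast_hops
```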

OSPF and EIGRP

As described above, OSPF is the link-state, hierarchical IGP proposed as RIP's successor in the Internet
community, offering least-cost routing, multipath routing, and load balancing.
EIGRP is Cisco's advanced, hybrid version of IGRP, providing superior convergence properties and
operating efficiency by combining the advantages of link state and distance vector protocols.
- Summary -

- Routers move data across networks from a source to a destination

- Routers determine the optimal path for forwarding network traffic

- Routing protocols communicate reachability information between routers

This lesson covers virtual LANs or VLANs. We’ll start by defining what a VLAN is and then explaining how
it works. We’ll conclude the lesson by talking about some key VLAN technologies such as ISL and VTP.

The Agenda

- What Is a VLAN?

- VLAN Technologies

What Is a VLAN?

Well, the reality of the work environment today is that personnel are always changing. Employees move
between departments; they switch projects. Keeping up with these changes can consume significant
network administration time. VLANs address the end-to-end mobility needs that businesses require.
Traditionally, routers have been used to limit the broadcast domains of workgroups. While routers
provide well-defined boundaries between LAN segments, they introduce the following problems:

- Lack of scalability (e.g., restrictive addressing on subnets)

- Lack of security (e.g., within shared segments)

- Inefficient bandwidth use (e.g., extra traffic results when segmentation of the network is based on
physical location rather than on workgroups or communities of interest)

- Lack of flexibility (e.g., costly reconfigurations are required when users are moved)

Virtual LAN, or VLAN, technology solves these problems because it enables switches and routers to
configure logical topologies on top of the physical network infrastructure. Logical topologies allow any
arbitrary collection of LAN segments within a network to be combined into an autonomous user group,
appearing as a single LAN.

Virtual LANs

A VLAN can be defined as a logical LAN segment that spans different physical LANs. VLANs provide
traffic separation and logical network partitioning.
VLANs logically segment the physical LAN infrastructure into different subnets (broadcast domains for
Ethernet) so that broadcast frames are switched only between ports within the same VLAN.
A VLAN is a logical grouping of network devices (users) connected to the port(s) on a LAN switch. A
VLAN creates a single broadcast domain and is treated like a subnet.
Unlike a traditional segment or workgroup, you can create a VLAN to group users by their work
functions, departments, the applications used, or the protocols shared irrespective of the users’ work
location (for example, an AppleTalk network that you want to separate from the rest of the switched
network).
VLAN implementation is most often done in the switch software.

Remove the Physical Boundaries

Conceptually, VLANs provide greater segmentation and organizational flexibility. VLAN technology allows
you to group switch ports and the users connected to them into logically defined communities of
interest. These groupings can be coworkers within the same department, a cross-functional product
team, or diverse users sharing the same network application or software (such as Lotus Notes users).
Grouping these ports and users into communities of interest—referred to as VLAN organizations—can be
accomplished within a single switch, or more powerfully, between connected switches within the
enterprise. By grouping ports and users together across multiple switches, VLANs can span single
building infrastructures or interconnected buildings. As shown here, VLANs completely remove the
physical constraints of workgroup communications across the enterprise.
Additionally, the role of the router evolves beyond the more traditional role of firewalls and broadcast
suppression to policy-based control, broadcast management, and route processing and distribution.
Equally as important, routers remain vital for switched architectures configured as VLANs because they
provide the communication between logically defined workgroups (VLANs). Routers also provide VLAN
access to shared resources such as servers and hosts, and connect to other parts of the network that
are either logically segmented with the more traditional subnet approach or require access to remote
sites across wide-area links. Layer 3 communication, either embedded in the switch or provided
externally, is an integral part of any high-performance switching architecture.

VLAN Benefits

VLANs provide many internetworking benefits that are compelling.


Reduced administrative costs—Members of a VLAN group can be geographically dispersed. Members
might be related because of their job functions or type of data that they use rather than the physical
location of their workspace.

- The power of VLANs comes from the fact that adds, moves, and changes can be achieved simply by
configuring a port into the appropriate VLAN. Expensive, time-consuming recabling to extend
connectivity in a switched LAN environment, or host reconfiguration and re-addressing is no longer
necessary, because network management can be used to logically “drag and drop” a user from one
VLAN group to another.

Better management and control of broadcast activity—A VLAN solves the scalability problems often
found in a large flat network by breaking a single broadcast domain into several smaller broadcast
domains or VLAN groups. All broadcast and multicast traffic is contained within each smaller domain.

Tighter network security with establishment of secure user groups:

- High-security users can be placed in a separate VLAN group so that non-group members do not
receive their broadcasts and cannot communicate with them.
- If inter-VLAN communication is necessary, a router can be added, and the traditional security and
filtering functions of a router can be used.
- Workgroup servers can be relocated into secured, centralized locations.
Scalability and performance—VLAN groups can be defined based on any criteria; therefore, you can
determine a network’s traffic patterns and associate users and resources logically. For example, an
engineer making intensive use of a networked CAD/CAM server can be put into a separate VLAN group
containing just the engineer and the server. The engineer does not affect the rest of the workgroup. The
engineer’s dedicated LAN increases throughput to the CAD/CAM server and helps performance for the
rest of the group by not affecting its work.

VLAN Components

There are five key components within VLANs:

Switches — For determining VLAN membership. This is where users/systems attach to the network.

Trunking — For exchanging VLAN information throughout the network. This is essential for larger
environments that comprise several switches, routers, and servers.

Multiprotocol routing — For supporting inter-VLAN communications. Remember that while all
members within the same VLAN can communicate directly with one another, routers are required for
exchanging information between different VLANs.

Servers — Servers are not required within VLAN environments specifically; however, they are a staple
within any network. Within a VLAN environment, users can utilize servers in several different ways, and
we’ll discuss them momentarily. Because VLANs are used throughout the network, users from multiple
VLANs will most likely need their services.

Management — For security, control, and administration within the network. Effective management
and administration is essential within any network environment, and it becomes even more imperative
for networks using VLANs. The network management system must appropriately recognize and
administer logical segments within the switched network.
Let’s look at some of these components in more detail.

Establishing VLAN Membership

Switches provide the means for users to access a network and join a VLAN. Various approaches exist for
establishing VLAN membership, and each method has its positive and negative points.

Membership by Port

Let’s look at the first method for determining or assigning VLAN membership:

Port-based — In this case, the port is assigned to a specific VLAN independent of the user or system
attached to the port. This VLAN assignment is typically done by the network administrator and is not
dynamic. In other words, the port cannot be automatically changed to another VLAN without the
personal supervision and processing of the network administrator.
This approach is quite simple and fast, in that no complex lookup tables are required to achieve this
VLAN segregation. If this port-to-VLAN association is done via ASICs, the performance is very good.
This approach is also very easy to manage, and a graphical user interface (GUI) illustrating the
VLAN-to-port association is normally intuitive for most users.
As in other VLAN approaches, the packets within this port-based method do not leak into other VLAN
domains on the network. The port is assigned to one and only one VLAN at any time, and no other
packets from other VLANs will “bleed” into or out of this port.
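Port-based assignment and its broadcast containment can be sketched as a simple port-to-VLAN table: a broadcast arriving on a port is flooded only to the other ports in the same VLAN. The port numbers and VLAN IDs below are invented.

```python
# Sketch: port-based VLAN membership. Each switch port is statically assigned
# to exactly one VLAN, and broadcasts are flooded only within that VLAN.
# Port numbers and VLAN IDs are made up.
port_vlan = {1: 10, 2: 10, 3: 20, 4: 20, 5: 10}

def flood_broadcast(ingress_port):
    """Return the ports that receive a broadcast arriving on ingress_port."""
    vlan = port_vlan[ingress_port]
    return sorted(p for p, v in port_vlan.items()
                  if v == vlan and p != ingress_port)

print(flood_broadcast(1))  # [2, 5]: VLAN 10 only; nothing leaks into VLAN 20
```

No lookup beyond this single table is needed per broadcast, which is why the port-based method is fast and easy to implement in ASICs.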

Membership by MAC Addresses

The other methods for determining VLAN membership provide more flexibility and are more “user-
centric” than the port-based model. However, these methods are conducted with software in the switch
and require more processing power and resources within the switches and the network. These solutions
require a packet-by-packet lookup method that decreases the overall performance of the switch.
(Software solutions do not run as fast as hardware/ASIC-based solutions.)
In the MAC-based model, the VLAN assignment is linked to the physical media address, or MAC address,
of the system accessing the network. This approach provides enhanced security over the more "open"
port-based approach, because all MAC addresses are unique.
From an administrative aspect, the MAC-based approach requires slightly more work, because a VLAN
membership table must be created for all of the users within each VLAN on the network. As a user
attaches to a switch, the switch must verify and confirm the MAC address with a central/main table and
place it into the proper VLAN.
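The central membership table described above can be sketched as a MAC-to-VLAN lookup consulted as each station attaches; unknown stations fall back to a default VLAN here, which is an assumption of the sketch. All addresses and VLAN IDs are invented.

```python
# Sketch: MAC-based VLAN membership. The switch consults a central table
# when a station attaches and places it into that station's VLAN.
# MAC addresses and VLAN IDs are made up.
MAC_VLAN_TABLE = {
    "00:00:0c:00:6e:25": 10,
    "00:00:0c:11:22:33": 20,
}
DEFAULT_VLAN = 1  # assumption: unknown stations land in a default VLAN

def vlan_for_station(mac):
    """Look up the attaching station's VLAN, falling back to the default."""
    return MAC_VLAN_TABLE.get(mac.lower(), DEFAULT_VLAN)

print(vlan_for_station("00:00:0C:00:6E:25"))  # 10, regardless of port
print(vlan_for_station("aa:bb:cc:dd:ee:ff"))  # 1 (unknown -> default VLAN)
```

Because this lookup happens per attaching station (and in software), it illustrates why the MAC-based model costs more processing than the port-based one.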
The network address and user ID approaches are also more flexible than the port-based approach, but
they also require even more overhead than the MAC-based method, because tables must exist
throughout the network for all the relevant network protocols, subnets, and user addresses. With the
user ID method, another large configuration/policy table must exist containing all authorized user login
IDs. Within both of these methods, the switches typically do not have enough resources (CPU, memory)
to accommodate such large tables. Therefore, these tables must exist within servers located elsewhere
in the network. Additionally, the latencies resulting from the lookup process would be more significant in
these approaches.
From an administrative aspect, the network and user ID-based approaches require more resources
(memory and bandwidth) to use distributed tables on several switches or servers throughout the
network. These two approaches also require slightly more bandwidth to share this information between
switches and servers.

Multiple VLANs per Port

When addressing these various methods for implementing VLANs, customers always question the use of
multiple VLANs per switch port. Can this be done? Does this make sense?
The means for implementing this type of design is based on using shared hubs off of switch ports.
Members using the hub belong to different VLANs, and thus, the switch port must also support multiple
VLANs.
While this method does offer the flexibility of having VLANs completely port independent, it also
violates one of the general principles of implementing VLANs: broadcast containment. An incoming
broadcast on any VLAN would be sent to all hub ports — even though they may belong to a different
VLAN. The switch, hub, and all endstations will have to process this broadcast even if it belongs to a
different VLAN. This “bleeding” of VLAN information does not provide true segmentation nor does it
effectively use resources.

Communicating Between VLANs

Another key component of VLANs is the router. Routers provide inter-VLAN communications and are
essential for sharing VLAN information in large environments. The Layer 3 routing capabilities provide
additional security between networks (access lists, protocol filtering, and so on).

In general, there are two approaches to using routers as communication points for VLANs:

- Logical connection method — Using ISL within the router, a trunk can be established between the
switch and the router. One high-speed port is used, and information for multiple VLANs runs across this
trunk link. (We'll explain ISL in just a minute.)

- Physical connection method — Multiple independent links are used between the router and the switch.
Each link carries its own VLAN. This scenario does not require ISL to be implemented on the router
and also allows lower-speed links to be used.
The proper method to implement depends on the customer’s needs and requirements. (Does the
customer need to conserve router and switch ports? Does the customer need a high-speed ISL port?) In
both instances, the router still supports inter-VLAN communication.

Server Connectivity

The network server is another key component of VLANs. Servers provide file, print, and storage services
to users throughout the network regardless of VLANs.
To optimize their network environments, many customers deploy centralized server farms in their
networks.

This eases administration of the servers and Network Operating System, or NOS, significantly. These
server farms contain servers that support the entire network, but each server supports a specific VLAN
or number of VLANs.

As in the use of routers within VLANs, there are two approaches to using servers as common access
within a VLAN environment:

Logical connection method


Using a server adapter (NIC) running ISL, a trunk can be established between the switch and the server.
One high-speed port is used, and information for multiple VLANs runs across this trunk link. This method
offers greater flexibility as well as a high-performance solution that is easy to administer (that is, one
NIC to set up and monitor). Note: ISL is now supported in several vendors' server NIC cards, such as
Intel and CrossPoint. These adapters support up to 64 VLANs per port and cost approximately US$500.

Physical Connection method


Multiple independent links are used between the server and the switch. Each link contains its own VLAN.
This method does not require ISL to be implemented on the server and also allows lower-speed links to
be used.

The proper method to implement depends on the customer’s needs and requirements. (Does the
customer need to conserve switch ports? Does the customer need a high-speed ISL port? Does the
customer want to use ISL server adapters?) In both methods, the server still supports multiple VLANs.

VLAN Technologies

Let’s take a look at some technologies that are essential for VLAN implementations.

Inter-Switch Link

Cisco developed the Inter-Switch Link, or ISL, mechanism to support high-speed trunking between
switches and other switches, routers, or servers in Fast Ethernet environments.
Cisco’s Inter-Switch Link protocol (ISL) enables VLAN traffic to cross LAN segments. ISL is used for
interconnecting multiple switches and maintaining VLAN information as traffic goes between switches.
ISL uses “packet tagging” to send VLAN packets between devices on the network without impacting
switching performance or requiring the use and exchange of complex filtering tables. Each packet is
tagged depending on the VLAN to which it belongs.

The benefits of packet tagging include manageable broadcast domains that span the campus; bandwidth
management functions such as load distribution across redundant backbone links and control over
spanning tree domains; and a substantial cost reduction in the number of physical switch and router
ports required to configure multiple VLANs.

The ISL protocol enables in excess of 1000 VLANs concurrently without requiring any fragmentation or
reassembly of the packets.
Additionally, ISL wraps a 30-byte "envelope" (a 26-byte header and a 4-byte frame check sequence)
around the packet that handles processing, priority, and quality-of-service, or QoS, features. ISL is not
limited to Ethernet/Fast Ethernet packet sizes (1518 bytes) and can even accommodate large packet
sizes up to 16000 bytes, which is appropriate for Token Ring. It is important to understand that ISL and
802.1Q (a format used by some other vendors) are both just packet-tagging formats; neither sets up a
standard for administration.
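The packet-tagging idea can be made concrete with the open 802.1Q tag format (ISL itself is proprietary, so the standardized tag is shown instead): a 4-byte tag carrying a fixed type value plus a priority and a 12-bit VLAN ID. The VLAN ID and priority below are arbitrary example values.

```python
import struct

# Sketch: building an 802.1Q VLAN tag. The 4-byte tag is the TPID 0x8100
# followed by 3 bits of priority, 1 bit CFI/DEI, and a 12-bit VLAN ID
# (so VLAN IDs run 0-4095). This tag is inserted into the Ethernet frame.
TPID = 0x8100

def make_dot1q_tag(vlan_id, priority=0):
    assert 0 <= vlan_id <= 0xFFF and 0 <= priority <= 7
    tci = (priority << 13) | vlan_id     # tag control information
    return struct.pack("!HH", TPID, tci)

tag = make_dot1q_tag(vlan_id=100, priority=5)
print(tag.hex())  # 8100a064
```

A switch receiving a tagged frame reads the VLAN ID out of these 4 bytes, which is why no complex filtering tables need to be exchanged between devices.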

VLAN Standardization

While Cisco was first to market with its revolutionary packet tagging schemes for Fast Ethernet and
FDDI, they are proprietary solutions. Other vendors implemented their own unique methods for sharing
VLAN information across the network. As a result, a standards body was created within the IEEE to
provide one common VLAN communication standard. This ultimately benefits customers using switches
from various vendors in the marketplace.

Within the 802.1Q standard, packet tagging is the exchange vehicle for VLAN information.

Because ISL is so widely deployed in our installed customer base, Cisco will continue to support both ISL
and 802.1Q. It is important to note that Cisco’s dual mode support of both methods will be implemented
via hardware ASICs, which will provide tremendous performance.

VLAN Standard Implementation


This diagram illustrates a typical customer implementation of the 802.1Q VLAN standard. This scenario
is based upon a customer network composed of two separate campuses based on different vendors’
technology (Cisco and vendor X).
If the customer already has Cisco switches deployed, it can maintain its use of ISL. Also, it can maintain
its use of the VLAN trunking scheme used by vendor X. However, the new joined network must use the
802.1Q standard to share VLAN information between switches within the campus.

VLAN Trunking Protocol (VTP)

In addition to the ISL packet-tagging method, Cisco also created the VLAN Trunking Protocol, or VTP,
for dynamically configuring VLAN information across the network regardless of media type (for example,
Fast Ethernet, ATM, FDDI, and so on).
The VTP protocol is the software that makes ISL usable.

VTP enables VLAN communication from a centralized network management platform, thus minimizing
the amount of administration that is required when adding or changing VLANs anywhere within the
network. VTP completely eliminates the need to administer VLANs on a per-switch basis, an essential
characteristic as the number of a network’s switches and VLANs grows and reaches a point where
changes can no longer be reliably administered on individual components. VTP allows for greater
scalability because it eliminates complex VLAN administration tasks across every switch.

Conceptually, VTP works like this: When you add a new VLAN to the network, let's say VLAN 1, VTP
automatically goes out and configures the trunk interfaces across the backbone for that VLAN. This
includes the mapping of ISL to LANE or to 802.1Q.
Adding a second VLAN is just as easy. VTP sends out new advertisements and maps the VLAN across the
appropriate interfaces. The important thing to remember about this second VLAN is that VTP keeps
track of the VLANs that already exist and eliminates any cross-configuration between the two, something
that could easily occur if the configuration were done manually.

- Summary -

- VLANs enable logical (instead of physical) groups of users on a switch


- VLANs address the needs for mobility and flexibility

- VLANs reduce administrative overhead, improve security, and provide more efficient bandwidth
utilization

QoS is important to many network applications. Voice/data integration is not possible without it; nor is
effective multimedia, or even VPNs. In this module, we'll discuss what QoS is and some of its building
blocks. We'll also look at some specific examples of how QoS can be used.

The Agenda

- What Is QoS?

- QoS Building Blocks

- QoS in Action

What Is Quality of Service (QoS)?

Basically, QoS comprises the mechanisms that give network managers the ability to control the mix of
bandwidth, delay, variances in delay (jitter), and packet loss in the network in order to deliver a network
service such as voice over IP; define different service-level agreements (SLAs) for divisions,
applications, or organizations; or simply prioritize traffic across a WAN.

QoS provides the ability to prioritize traffic and allocate resources across the network to ensure the
delivery of mission-critical applications, especially in heavily loaded environments. Traffic is usually
prioritized according to protocol.
So what does this really mean...

An analogy is the carpool lane on the highway. For business applications, we want to give high priority
to mission-critical applications. All other traffic can receive equal treatment.

Mission-critical applications are given the right of way at all times. Multimedia applications take a lower
priority. Bandwidth-consuming applications, such as file transfers, can receive an even lower priority.

What Is Driving the Need for QoS?

There are two broad application areas that are driving the need for QoS in the network:

- Mission-critical applications need QoS to ensure delivery and that their traffic is not impacted by
misbehaving applications using the network.

- Real-time applications such as multimedia and voice need QoS to guarantee bandwidth and minimize
jitter. This ensures the stability and reliability of existing applications when new applications are
added.

Voice and data convergence is the first compelling application requiring delay-sensitive traffic handling
on the data network. The move to save costs and add new features by converging the voice and data
networks--using voice over IP, VoFR, or VoATM--has a number of implications for network management:

- Users will expect the combined voice and data network to be as reliable as the voice network:
99.999% availability

- To even approach such a level of reliability requires a sophisticated management capability; policies
come into play again

So what are mission critical applications?

Enterprise Resource Planning (ERP) applications

- Order entry
- Finance
- Manufacturing
- Human resources
- Supply-chain management
- Sales-force automation

What else is mission critical?

- SNA applications
- Selected physical ports
- Selected hosts/clients

QoS Benefits

QoS provides tremendous benefits. It allows network managers to understand and control which
resources are being used by applications, users, and departments.

It ensures the WAN is being used efficiently by the mission-critical applications and that other
applications get “fair” service, but take a back seat to mission-critical traffic.

It also provides an infrastructure that delivers the service levels needed by new mission-critical
applications, and lays the foundation for the “rich media” applications of today and tomorrow.

Where Is QoS Important?

QoS is required wherever there is congestion. QoS has been a critical requirement for the WAN for
years. Bandwidth, delay, and delay variation requirements are at a premium in the wide area.
LAN QoS requirements are emerging with the increased reliance on mission-critical applications and the
growing popularity of voice over the LAN and WAN.

The importance of end-to-end QoS is increasing due to the rapid growth of intranet and extranet
applications that have placed increased demands on the entire network.

QoS Example

Hopefully, this image provides a little context. It demonstrates a real example of how QoS could be used
to manage network applications.

QoS Building Blocks

Let’s now take a look at some of the building blocks of QoS.

There are a wide range of QoS services. Queuing, traffic shaping, and filtering are essential to traffic
prioritization and congestion control, determining how a router or switch handles incoming and outgoing
traffic.
QoS signaling services determine how network nodes communicate to deliver the specific end-to-end
service required by applications, flows, or sets of users.

Let’s take a look at a few of these.

Classification

- IP Precedence
- Committed Access Rate (CAR)
- Diff-Serv Code Point (DSCP)
- IP-to-ATM Class of Service
- Network-Based Application Recognition (NBAR)
- Resource Reservation Protocol (RSVP)
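
Several of the markings on this list live in the same IPv4 header field: IP Precedence uses the top three bits of the TOS byte, and the Diff-Serv Code Point redefines the top six bits of that same byte. As an illustrative sketch (not any particular product's implementation), a classifier could recover both like this:

```python
def classify_tos(tos_byte):
    """Split the IPv4 TOS byte into its QoS markings.

    IP Precedence is the top 3 bits of the TOS byte; the DSCP
    (Diff-Serv Code Point) reuses the top 6 bits of the same byte.
    """
    precedence = (tos_byte >> 5) & 0x07   # bits 7-5
    dscp = (tos_byte >> 2) & 0x3F         # bits 7-2
    return precedence, dscp

# Example: DSCP 46 (Expedited Forwarding) is carried as TOS byte 0xB8,
# which older IP Precedence equipment reads as precedence 5 (critical).
prec, dscp = classify_tos(0xB8)
print(prec, dscp)  # 5 46
```

Because DSCP simply extends the old precedence bits, the two marking schemes interoperate: a DSCP-aware router and a precedence-only router read compatible values from the same byte.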

Policing

- Committed Access Rate (CAR)


Congestion Management

- Class-Based Weighted Fair Queuing (CBWFQ)
- Weighted Fair Queuing (WFQ)

Shaping

- Generic Traffic Shaping (GTS)
- Distributed Traffic Shaping (DTS)
- Frame Relay Traffic Shaping (FRTS)
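
These shaping mechanisms are variations on a token-bucket model: traffic that conforms to a configured rate is sent immediately, and excess traffic is delayed (shaped) rather than dropped. A toy Python sketch of the underlying idea follows; the class name and parameters are illustrative, not an actual Cisco feature:

```python
import time

class TokenBucketShaper:
    """Toy token-bucket shaper (the model behind GTS/FRTS-style shaping).

    Tokens accumulate at `rate` bytes per second up to a depth of
    `burst` bytes; a packet may be sent only when enough tokens exist.
    """
    def __init__(self, rate, burst):
        self.rate = rate              # fill rate in bytes/second
        self.burst = burst            # bucket depth in bytes
        self.tokens = burst           # bucket starts full
        self.last = time.monotonic()

    def allow(self, packet_bytes):
        now = time.monotonic()
        # Refill tokens for the elapsed interval, capped at bucket depth.
        self.tokens = min(self.burst,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True               # conforms: transmit now
        return False                  # exceeds: hold in queue (shape)

shaper = TokenBucketShaper(rate=1_000_000, burst=8000)  # ~8 Mbps, 8 KB burst
print(shaper.allow(1500))  # True: the bucket starts full
```

The same bucket arithmetic, with a "drop" action instead of "queue," is essentially what policing mechanisms such as CAR do.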

Congestion Avoidance

- Weighted Random Early Detection (WRED)
- Flow-Based WRED (Flow RED)

Congestion Management—Fancy Queuing

Weighted fair queuing is another queuing mechanism that ensures high priority for sessions that are
delay sensitive, while ensuring that other applications also get fair treatment.

For instance, in the Cisco network, Oracle SQL*Net traffic, which consumes relatively little bandwidth,
jumps straight to the head of the queue, while video and HTTP are serviced as well. This works out very
well because these applications do not require a lot of bandwidth as long as they meet their delay
requirements.

A sophisticated algorithm looks at the size and frequency of packets to determine whether a specific
session has a heavy traffic flow or a light traffic flow. It then treats the respective queues of each
session accordingly.

Weighted fair queuing is self-configuring and dynamic. It is also turned on by default when routers are
shipped.
Other options include:

- Priority queuing assigns different priority levels to traffic according to traffic types or source and
destination addresses. Priority queuing does not allow any traffic of a lower priority to pass until
all packets of high priority have passed. This works very well in certain situations. For instance, it
has been very successfully implemented in Systems Network Architecture (SNA) environments,
which are very sensitive to delay.

- Custom queuing provides a guaranteed level of bandwidth to each application, in the same way
that a time-division multiplexer (TDM) divides bandwidth among channels. The advantage of
custom queuing is that if a specific application is not using all the bandwidth it is allotted, other
applications can use it. This assures that mission-critical applications receive the bandwidth they
need to run efficiently, while other applications do not time out either.

This has been implemented especially effectively where SNA leased lines have been replaced, providing
guaranteed transmission times for very time-sensitive SNA traffic. What does “no bandwidth wasted”
mean? Traffic loads are redirected when and if space becomes available: if there is space and there is
traffic, the bandwidth is used.
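
The strict-priority behavior described earlier, where lower-priority traffic waits until every higher-priority packet has passed, can be sketched with a few queues. This is a toy illustration of the scheduling idea, not router code:

```python
from collections import deque

class PriorityQueuing:
    """Toy strict-priority queuing with four levels, as in classic
    priority queuing: a lower-priority queue is serviced only when
    every higher-priority queue is empty."""

    LEVELS = ("high", "medium", "normal", "low")

    def __init__(self):
        self.queues = {level: deque() for level in self.LEVELS}

    def enqueue(self, packet, level="normal"):
        self.queues[level].append(packet)

    def dequeue(self):
        for level in self.LEVELS:     # always scan from highest priority
            if self.queues[level]:
                return self.queues[level].popleft()
        return None                   # all queues empty

pq = PriorityQueuing()
pq.enqueue("web transfer", "low")
pq.enqueue("SNA frame", "high")
print(pq.dequeue())  # SNA frame
```

The sketch also makes the known drawback visible: if the high queue never drains, the low queue starves, which is exactly why custom queuing's guaranteed-bandwidth approach exists as an alternative.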

Random Early Detection (RED)

Random Early Detection (RED) is a congestion avoidance mechanism designed for packet-switched
networks that aims to control the average queue size by indicating to the end hosts when they should
temporarily stop sending packets. RED takes advantage of TCP’s congestion control mechanism. By
randomly dropping packets prior to periods of high congestion, RED tells the packet source to decrease
its transmission rate.

Assuming the packet source is using TCP, it will decrease its transmission rate until all the packets reach
their destination, indicating that the congestion is cleared. You can use RED as a way to cause TCP to
back off traffic. TCP not only pauses, but it also restarts quickly and adapts its transmission rate to the
rate that the network can support.

RED distributes losses in time and maintains normally low queue depth while absorbing spikes. When
enabled on an interface, RED begins dropping packets when congestion occurs at a rate you select
during configuration.
RED is recommended only for TCP/IP networks. RED is not recommended for protocols, such as
AppleTalk or Novell NetWare, that respond to dropped packets by retransmitting the packets at the
same rate.
VPNs are a common topic today. Just about everyone is talking about implementing one. This module
explains what a VPN is and covers the basic VPN technology. We’ll also go through some examples of
VPNs including a return on investment analysis.

The Agenda

- What Are VPNs?

- VPN Technologies

- Access, Intranet, and Extranet VPNs

- VPN Examples

What Are VPNs?

Simply defined, a VPN is an enterprise network deployed on a shared infrastructure employing the same
security, management, and throughput policies applied in a private network.

A VPN can be built on the Internet or on a service provider’s IP, Frame Relay, or ATM infrastructure.
Businesses that run their intranets over a VPN service enjoy the same security, QoS, reliability, and
scalability as they do in their own private networks.

VPNs based on IP can naturally extend the ubiquitous nature of intranets over wide-area links, to remote
offices, mobile users, and telecommuters. Further, they can support extranets linking business partners,
customers, and suppliers to provide better customer satisfaction and reduced manufacturing costs.
Alternatively, VPNs can connect communities of interest, providing a secure forum for common topics of
discussion.

Virtual Private Networks

Building a virtual private network means you use the “public” Internet (or a service provider’s network)
as your “private” wide-area network.

Since it’s generally much less expensive to connect to the Internet than to lease your own data circuits,
a VPN may allow you to connect remote offices or employees who wouldn’t ordinarily justify the cost of
a regular WAN connection.
VPNs may be useful for conducting secure transactions, or transferring highly confidential data between
offices that have a WAN connection.

Some of the technologies that make VPNs possible are:

- Tunneling
- Encryption
- QoS
- Comprehensive security
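
Tunneling, the first technology on this list, simply wraps one packet inside another so it can cross the shared network; intermediate routers forward on the outer header only. A schematic sketch of the idea follows (the field names are illustrative, not those of a specific tunneling protocol):

```python
def encapsulate(inner_packet: bytes, tunnel_src: str, tunnel_dst: str) -> dict:
    """Wrap an inner packet in an outer 'delivery' header.

    En route, routers see only the outer addresses; the original packet
    (which may itself be encrypted) rides inside as opaque payload.
    """
    return {"outer_src": tunnel_src,
            "outer_dst": tunnel_dst,
            "payload": inner_packet}

def decapsulate(outer: dict) -> bytes:
    """At the far tunnel endpoint, strip the outer header and recover
    the original packet for delivery on the private network."""
    return outer["payload"]

# Private addressing (10.0.0.0/8) crosses the public network hidden
# inside a packet addressed between the two tunnel endpoints.
frame = encapsulate(b"private 10.0.0.0/8 traffic", "192.0.2.1", "198.51.100.7")
print(decapsulate(frame))  # b'private 10.0.0.0/8 traffic'
```

Combining this encapsulation with encryption of the payload is what turns a tunnel across the public Internet into a *private* virtual link.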

Why Build a VPN?

Why should customers consider a VPN?

- Company information is secured

- VPNs allow vital company information to be secure against unwanted intrusion

- Reduce costs

- Internet-based VPNs offer low-cost connectivity from anywhere in the world, and can be considered
a viable replacement for leased-line or Frame Relay services

Using the Internet as a replacement for expensive WAN services can cut costs by as much as 60
percent, according to Forrester Research

- Also lower remote-access costs by connecting mobile users over the Internet (often referred to as
virtual private dial-up networking, or VPDN)

- Wider connectivity options for users

- A VPN can provide more connectivity options (for example, over cable, DSL, telephone, or Ethernet)

- Increased speed of deployment

- Extranets can be created more easily (you don’t wait for suppliers). This keeps the customer in
control of their own destiny.

However, for an Internet-based VPN to be considered as a viable replacement for leased-line or Frame
Relay service, it must be able to offer a comparable level of security, quality of service, and reliability.

In this lesson, we’re going to discuss the Internet. We’ll cover how the Internet has created a new
business model that’s changing how companies do business today. We’ll look at intranets, extranets,
and e-commerce. Finally, we’ll look at the technology implications of the new Internet applications such
as the need for higher bandwidth technologies and security.

The Agenda

- What Is the Internet?

- The New Business Model

- Intranets

- Extranets

- E-Commerce

- Technology Implications of Internet Applications

The Internet: A Network of Networks


What is the Internet? The Internet is the following:

- A flock of independent networks flying in loose formation, owned by no one and connecting an
unknown number of users
- A grass roots cultural phenomenon started 30 years ago by a group of graduate students in tie-dyed
shirts and ponytails
- Ma Bell’s good old telephone networks dressed up for the 1990s
- A new way to transmit information that is faster and cheaper than a phone call, fax, or the post
office

Some Internet facts:

- The number of hosts (or computers) connected to the Internet has grown from a handful in 1989 to
hundreds of millions today.
- The MIT Media Lab says that the size of the World Wide Web is doubling every 50 days, and that a
new home page is created every 4 seconds.

Internet Hierarchy

The Internet has three components: information, wires, and people.

- The “wires” are arranged in a loose hierarchy, with the fastest wires located in the middle of the
cloud on one of the Internet’s many “backbones.”
- Regional networks connect to the Internet backbone at one of several Network Access Points (NAPs),
including MAE-EAST, in Herndon, Virginia; and MAE-WEST, in Palo Alto, California.
- Internet service providers (ISPs) administer or connect to the regional networks, and serve
customers from one or more points of presence (POPs).
- Dynamic adaptive routing allows Internet traffic to be automatically rerouted around circuit failures.
- Dataquest estimates that up to 88 percent of all traffic on the Internet touches a Cisco router at
some point.

The New Business Model

The Internet Is Changing the Way Everyone Does Business

From simple electronic mail to extensive intranets that include online ordering and extranet services, the
Internet is changing the way everyone does business. Small and medium-sized companies seeking to
remain competitive into the next century must leverage the Internet as a business asset.

The Internet is forcing companies to adopt technology faster. You’ll discover several themes that are
driving the new Internet economy, as follows.

Compression—Everything happens faster: business cycles are shorter, and time and distances are less
relevant to your customers.

Time—Some companies have reported a 92 percent reduction in processing time when an item is
ordered via an online system.

Distance—Using networked commerce, BankAmerica has widened its customer base so that now 30
percent of customers are outside the traditional geographic reach.
Business cycles—Adaptec, a manufacturing firm in California, used networked commerce to reduce
their manufacturing cycle from 12 to 8 weeks, slashing their inventory costs by $10 million a year.

Market turbulence—Customers suddenly have more choices. They can shop farther afield in search of
good values. You have to compete even harder to retain customers.

Networked business—Many deem that networked commerce applications will “make or break”
companies in the next century. The ability to solicit and sustain business relationships with customers,
employees, partners, and suppliers using networked commerce applications is critical to success.

Rapid transformation—Building relationships, business processes, and operating models that can
quickly adjust to accommodate shifting market forces is essential. This requires an infrastructure that
provides the ability to change rapidly.

Forces Driving Change

Shorter product life cycles are required to stay competitive.

Industry and geographical borders are changing rapidly:

- Companies today must be able to swiftly “go to market” in new and expanded locations.
- Moreover, the rigid boundaries of manufacturers are changing: manufacturers are becoming
retailers and distributors.

The need to “do more with less” is essential to accommodate narrowing margins, intensifying
competition, and industry convergence. The network must raise the productivity of the workforce.

Traditional Business Model Versus New Business Model

The Internet is transforming the way companies can use information and information systems.
Historically, businesses have “protected” company information and allowed limited sharing of systems.

Creating these “silos” of information has meant that each “link” of the “extended” traditional business
has lacked access to relevant information to make profit maximizing decisions. That means your
employees, suppliers, customers, and partners were kept from information, not always by intention, but
because limited access created barriers to sharing it. The result was:

- Closely held knowledge base


- Limited access to relevant and timely information
- Costly duplication of effort
- Limited transaction hours to conduct business

The Internet and networked applications have changed all that. They allow all companies, no matter the
size, to break the information barriers—to “let loose the power of information.”

Now we are experiencing a transition to a new business paradigm. In order to compete effectively in this
rapidly expanding Internet economy, we must reshape our business practices.

Companies today are now:


- Sharing knowledge with suppliers and partners
- Ensuring that relevant and timely information is available to all employees
- Removing redundancies
- Conducting business 24 hours a day, 7 days a week (24x7)

Accelerating this shift is the explosive growth and rapid adoption of Internet usage.

Today’s Internet Business Solutions

Let’s take a look at some of the Internet business solutions that companies are driven to implement in
order to improve their productivity and stay competitive. These include:

- Intranets
- Extranets
- E-commerce

Intranets

What Is an Intranet?

An intranet is an internal network based on Internet and World Wide Web technology that delivers
immediate, up-to-date information and services to networked employees anytime, anywhere.

Whether providing capabilities to download the latest sales presentation, arrange travel, or report a
defective disk drive to the technical assistance center, an intranet offers a common, platform-
independent interface that is consistent, easy to implement, and easy to use.

Initially, organizations used intranets almost exclusively as publishing platforms for delivering up-to-the-
minute information to employees worldwide. Increasingly, however, organizations are broadening the
scope of their intranets to encompass interactive services that streamline business processes and reduce
the time employees spend on routine, paper-based tasks.

Intranet applications are platform-independent, so they are less costly to deploy than traditional
client/server applications, and they bear no installation and upgrade costs since employees access them
from the network using a standard Web browser. Finally, and perhaps most important, intranets
enhance employees’ productivity by equipping them with powerful, consistent tools.

Typical Intranet Applications

Most companies can benefit from an intranet. Here are some sample applications:

Employee self-service—Employee self-service provides your employees with the ability to access
information at any time from anywhere they want. It enables employees to independently access vital
company information. Employee self-service allows companies to save on labor costs as well as increase
employee productivity and communication. We’ll look at this in more detail.
Distance learning—Employee training becomes more accessible through distance learning over the
data network, which can draw employees from many sites into a single virtual classroom, saving them
travel time and keeping them more productive.

Technical support—Companies with limited IS staff can deploy an intranet server to answer frequently
asked technical questions, house software that users can download, and provide documentation on a
variety of subjects. Users gain instant access to key technical assistance, while IS staff can concentrate
on other matters.

Videoconferencing—A proven way to bring team members together without calling for travel, video
conferencing is now possible over a data network, bypassing the need for an expensive parallel network.
Intranets can make videoconferences easier to set up and use.

Example: Employee Self-Service

These are some of the employee self-service applications.

Let’s take a look at one in detail. By posting HR benefits information on an intranet, employees can look
up routine information without taking up the time of a benefits administrator, thus reducing total
headcount requirements. By giving employees the ability to look this information up anytime they wish,
they are not confined to making their inquiries during regular business hours. And, they don’t have to
wait on hold while another employee is being assisted, resulting in saved time.

In addition, by posting general benefits information on the internal Web site, HR is able to spend their
time in more productive, strategic ways that ultimately benefit the company, as well as reduce the costs
of having an administrator available on the phone all day.

Another example is corporate travel. Many employees travel frequently. New intranet applications that
store an employee’s travel preferences can make it easy for employees to request or even book travel
arrangements at any time of the day or night, enabling companies to provide this vital service at a lower
cost.
As you can see, intranet applications are a win/win for both employees and the company.

Benefits of Intranets

Intranets are rapidly gaining wide acceptance because they make network applications much easier to
access and use. Intranets enable self-service.
Intranets allow you to:

- Improve design productivity and compress time-to-market, for example, by providing engineers with
immediate access to online parts information and requisitions.

- Increase productivity through greater employee collaboration.

- Share or access vital information at any time, from any location. For example, you can extend
intranets around the world, for instance, to sales offices in London and Tokyo. Now sales teams or
manufacturing plants in Asia can quickly access information on servers at the central office in the
United States—and it’s easier to use.

- Minimize downtime and cut maintenance costs by providing work teams with complete electronic
work packages.

- Lower administrative costs by automating common tasks, such as forms and benefit paperwork.

Extranets

What Is an Extranet?

An extranet allows you to extend your company intranet to your supply chain.

Extranets are an extension of the company network—a collaborative Internet connection to customers
and trading partners designed to provide access to specific company information, and facilitate closer
working relationships.
The way you extend your company network to your extranet partners can vary. For instance, you can
use a private network for real-time communication. Or you can leverage virtual private networks (VPNs)
over the Internet for cost savings. You can also use a combination of both. However, it’s important to
realize that each solution has different benefits and security implications.

A typical extranet solution requires a router at each end, a firewall, authentication software, a server,
and a dedicated WAN line or VPN over the Internet.

Typical Extranet Applications

- Supply-chain management
- Customer communications
- Distributor promotions
- Online continuing education/training
- Customer service
- Order status inquiry
- Inventory inquiry
- Account status inquiry
- Warranty registration
- Claims
- Online discussion forums

Extranet applications are as varied as intranet applications. Some examples are listed above. Extranets
are advantageous anywhere that day-to-day operations processes that are being done by hand can be
automated. Companies can save time and money in development, production, order processing, and
distribution. Improving productivity increases customer satisfaction, which drives business growth.

Example: Supply Chain Management

The traditional business fulfillment model is linear, with communication flowing from supplier to
manufacturer in a step-by-step process. Communication does not extend down the supply chain,
resulting in inefficiencies and time-consuming processes.

Effectively managing the supply chain is more critical now than ever. Customers today are looking for a
total solution—they want ease of purchase and implementation, they want customized products, and
they want them yesterday.

Today, in order to better service and retain customers, companies realize that they need to improve
their business processes in order to deliver products to customers in reduced time. One effective way to
do this is to improve the system processes that make up the overall supply chain.

With an extranet, companies can:

- Enable suppliers to see real-time market demand and inventory levels, thus providing them with the
necessary information to alter their production mix accordingly.

- Give suppliers access to customer order information, so they can fulfill those orders directly without
having to route product through you.

- Update demand forecasts in real time over the network, and allow any member of the supply chain
to query manufacturing line status and product fulfillment.
- Use the network to hold online meetings where product design teams work together with suppliers to
discuss prototype development, resulting in reduced cycle times.

Benefits of Extranets

What are the benefits of using extranets?

- Decrease inventories and cycle times, while improving on-time delivery.

- Increase customer satisfaction and, at the same time, more effectively manage the supply chain.

- Improve sales channel performance by providing dealers and distributors with product and
promotional information online, while it’s hot.

- Reduce costs by automating everyday processes.

- Improve customer satisfaction by streamlining processes and improving productivity.

E-Commerce

E-Commerce Market Growing Rapidly

When we think of e-commerce, most of us think of business-to-consumer e-commerce, for example,
Amazon.com.

However, the revenues that business-to-consumer companies are realizing are just the tip of the
iceberg. The bulk of business on the Internet is actually business-to-business e-commerce which, as you
can see by this chart, is skyrocketing.

In the last two years alone, the amount of business conducted over the Internet has gone from $1 billion
to $30 billion, with an 80 to 20 business-to-business and business-to-consumer mix. The projections for
the next two years and beyond are even more dramatic. Internet commerce will likely reach from $350
to $400 billion in 2002. Some estimates are even more aggressive and place the size of Internet
commerce by 2002 at almost a trillion dollars.

And, most of us generally think that only big businesses are conducting e-commerce. In fact, over 97
percent of businesses conducting electronic commerce are companies with 499 employees or fewer, and
71 percent of those companies have fewer than 49 employees. As you can see, e-business has become a
critical component of many businesses.

Typical E-Commerce Applications

Now let’s take a look at what you can do with e-commerce.

A few examples of e-commerce are:

- Online catalog
- Order entry
- Configuration
- Pricing
- Order verification
- Credit authorization
- Invoicing
- Payment and receivables

For example, by allowing customers to do their own online ordering, long-distance phone and fax service
can be reduced. In addition, fewer people are required to take customer orders and do timely order
entry. Finally, online electronic order forms eliminate data entry and shipment errors.

Benefits of E-Commerce

E-commerce can expand and improve business. When we think of e-commerce, we immediately think of
selling online. We quickly realize the benefits of increasing revenue by supplying customers and
prospects with valuable information at any time and providing them the opportunity to purchase online.

We also recognize how online ordering can cut costs significantly by reducing the staff needed to man an
800 number or physically write up orders.

Additionally, we understand that the Internet allows companies to extend their reach and sell into new
markets without incurring global headcount costs.

What most of us don’t realize is that these are only a few of the benefits of e-commerce.

Let’s take a look at two more compelling benefits:

- You can manage your inventory levels better. For example, an automobile manufacturer has its
suppliers linked via the Web for online ordering. A supplier can place an order directly and can see
immediately if the part is in stock or will need to be back ordered.

- By putting valuable information on your Web site, customers can get answers quickly to most of
their questions at any time of the day, from any location. Customer satisfaction soars when
customers can get critical information at any time, from any location. It allows them to do business
when they want to, not during the traditional 8 to 5 business day.

Technology Implications of Internet Applications

There are real technology implications to these new Internet applications.

First is the need for increased bandwidth. The Internet, intranets, and extranets have totally reversed
the 80/20 rule, so that now 80 percent of the traffic goes over the backbone and only 20 percent is
local. Everyone is clamoring for Fast Ethernet and even Gigabit Ethernet connections.

The need for security is obvious once a company is connected to the Internet. You cannot read the
paper without hearing about the latest hacking job.

The Internet makes VPNs possible. And finally, EDI enables electronic commerce.

We’ll look at each of these briefly.

Applications Need Bandwidth

The type of connection necessary depends on the bandwidth required:

- Individual users connecting to the Internet for e-mail or casual Web browsing can usually get by
using a simple modem.

- Power users or small offices should consider ISDN or Frame Relay.

- Larger offices or businesses that expect high levels of Internet traffic should look into Frame Relay or
leased lines.

- New technologies like asymmetric digital subscriber line (ADSL) and high-data-rate digital subscriber
line (HDSL) will make high-speed Internet access even more affordable in the future.

Internet Security Solutions

One of the most vulnerable points in a customer’s network is its connection to the Internet. To secure
the communication between corporate headquarters and the Internet, a customer needs all the security
tools at its disposal. These tools include firewalls, Network Address Translation (NAT), encryption,
token cards, and others.

Virtual Private Network

Virtual Private Networks (VPNs) can bring the power of the Internet to the local enterprise network. Here
is where the distinction between Internet and intranet starts to blur. By building a VPN, an enterprise
can use the “public” Internet as its own “private” WAN.

Because it is generally much less expensive to connect to the Internet than it is to lease data circuits, a
VPN may allow companies to connect remote offices or employees when they could not ordinarily justify
the cost of a regular WAN connection.

Some of the technologies that make VPNs possible are:

- Tunneling
- Encryption
- Resource Reservation Protocol (RSVP)

Electronic Data Interchange (EDI)


Electronic commerce can streamline regular business activities in new ways. Have any of you used a fax
machine to send purchase orders to vendors?

A fax machine turns your PO into bits, transmits them across a network, and then turns them back into
atoms on the other end. The disadvantage is that the atoms on the other end can only be read by a
human being, who probably has to retype the data into another computer.

EDI provides a way for many companies to reduce their operating costs by eliminating the atoms and
keeping the bits.

What advantages does EDI provide your customer?

- Ensures accurate data transmission


- Provides fast customer response
- Enables automatic data transfer—no need to re-key

For example, RJR Nabisco reduced PO processing costs from $70 to 93 cents by replacing its paper-
based system with EDI.

Public key/private key encryption is implemented by programs such as PGP (Pretty Good Privacy). The
program creates a public key and a private key. Anyone can encrypt a file with your public key, but only
you can decrypt the file. To ensure security, an enterprise may issue its public key to its customers, but
only the enterprise will be able to decrypt a message, using its private key.
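
The asymmetry described above can be demonstrated with a toy RSA-style key pair. The primes here are deliberately tiny for readability; real systems, PGP included, use far larger keys plus padding schemes that this sketch omits entirely:

```python
# Toy RSA-style public/private key pair, purely to illustrate that
# anyone holding the public key can encrypt, while only the holder
# of the private key can decrypt.
p, q = 61, 53
n = p * q                      # modulus, shared by both keys
phi = (p - 1) * (q - 1)
e = 17                         # public exponent: (e, n) is the public key
d = pow(e, -1, phi)            # private exponent: (d, n) is the private key

def encrypt(m, public=(e, n)):
    """Anyone can run this with the published public key."""
    return pow(m, public[0], public[1])

def decrypt(c, private=(d, n)):
    """Only the enterprise holding the private key can run this."""
    return pow(c, private[0], private[1])

ciphertext = encrypt(42)
print(decrypt(ciphertext))  # 42
```

Note that knowing `(e, n)` does not reveal `d` unless you can factor `n`; with realistically large primes, that factoring step is what makes recovering the private key infeasible.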

- SUMMARY -

The Internet has created the capability for almost ANY computer system to communicate with any other.
With Internet business solutions, companies can redefine how they share relevant information with the
key constituents in their business—not just their internal functional groups, but also customers,
partners, and suppliers.

This “ubiquitous connectivity” created by Internet business solutions creates tighter relationships across
the company’s “extended enterprise,” and can be as much of a competitive advantage for the company
as its core products and services. For example, by allowing customers and employees access to self-
service tools, businesses can cost effectively scale their customer support operations without having to
add huge numbers of support personnel. Collaborating with suppliers on new product design can
improve a company’s competitive agility, accelerate time-to-market for its products, and lower
development costs. And perhaps most importantly, integrating customers so that they have access to
on-time, relevant information can increase their levels of satisfaction significantly. Recapping:

- Internet access can take a business into new markets, decrease costs, and increase revenue through
e-commerce applications. It can attract retail customers by providing them with company
information and the ability to order online.

- Intranets can provide your employees with access to information and help compress business cycles.

- Extranets enable effective management of your supply chain and transform relationships with key
partners, suppliers, and customers.

- Voice/data integration can save companies significant amounts of money and, at the same time,
enable new applications.

- All of these applications reduce costs and increase revenue.
