
COMPUTER NETWORKS

Prepared by: Engr. Jerwin V. Obmerga
CPE Department
Prelim

Part 1: Introduction to Computer Networks

What is a network?
• A collection of devices and end systems
• Consists of computers, servers, and network devices such as switches and routers that can communicate with each other

What is an internetwork?
• Consists of two or more networks that are connected together via a router
Simple Network
Actual Connection
Network Topology
- defines how computers, printers, network devices and other devices are connected.

Physical topology - defines the physical components of the network: cables, network devices, and computers.

Logical topology - defines the data path of the network.
Bus Topology
- linear bus
- connects all the devices using a single cable
- the main cable segment must end with a terminator that absorbs the signal when it reaches the end of the line or wire.
Star Topology
- the most commonly used physical topology in Ethernet LANs
- made up of a central connection point, a device such as a hub, switch, or router, where all cabling segments meet.
Ring Topology
- hosts are connected in the form of a ring or circle
- a frame travels around the ring, stopping at each node
Full Mesh Topology
- connects all devices (nodes) to each
other for redundancy and fault
tolerance
- expensive
The Cisco Three-Layer Hierarchical Model
The Core Layer
The core layer is literally the core of the network. At the top of the hierarchy,
the core layer is responsible for transporting large amounts of traffic both
reliably and quickly. The only purpose of the network’s core layer is to switch
traffic as fast as possible. The traffic transported across the core is common
to a majority of users.
The Distribution Layer
The distribution layer is sometimes referred to as the workgroup layer and
is the communication point between the access layer and the core. The
primary functions of the distribution layer are to provide routing, filtering,
and WAN access and to determine how packets can access the core, if
needed.
The Access Layer
The access layer controls user and workgroup access to internetwork
resources. The access layer is sometimes referred to as the desktop layer.
The network resources most users need will be available locally because
the distribution layer handles any traffic for remote services.
Your CEO wants to know the stability and
availability of your company’s network for
the past year. During the past year, the
network was down for 30 minutes. What
was the total availability for the network?

Answer : 99.994%.

Computation (525,600 = the number of minutes in one year):
([525,600 – 30] / 525,600) × 100 = 99.994%
Part 2: Networking Model
A networking model, sometimes also called either a networking
architecture or networking blueprint, refers to a comprehensive
set of documents. Individually, each document describes one
small function required for a network; collectively, these
documents define everything that should happen for a computer
network to work. Some documents define a protocol, which is
a set of logical rules that devices must follow to communicate.
Other documents define some physical requirements for
networking. For example, a document could define the voltage
and current levels used on a particular cable when transmitting
data.
Similarly, you could build your own network—write your own
software, build your own networking cards, and so on—to create
a network. However, it is much easier to simply buy and use
products that already conform to some well-known networking
model or blueprint. Because the networking product vendors build
their products with some networking model in mind, their
products should work well together.
History Leading to TCP/IP
Today, the world of computer networking uses one networking
model: TCP/IP (Transmission Control Protocol/Internet Protocol).
However, the world has not always been so simple. Once upon a
time, networking protocols didn’t exist, including TCP/IP. Vendors
created the first networking protocols; these protocols supported
only that vendor’s computers. For example, IBM published its
Systems Network Architecture (SNA) networking model in 1974.
Other vendors also created their own
proprietary networking models. As a result, if your company bought
computers from three vendors, network engineers often had to
create three different networks based on the networking models
created by each company, and then somehow connect those
networks, making the combined networks much more complex.
Although vendor-defined proprietary networking models often
worked well, having an open, vendor-neutral networking model
would aid competition and reduce complexity. The International
Organization for Standardization (ISO) took on the task to create
such a model, starting as early as the late 1970s, beginning work on
what would become known as the Open Systems Interconnection
(OSI) networking model. ISO had a noble goal for the OSI model: to
standardize data networking protocols to
allow communication between all computers across the entire planet.
ISO worked toward this ambitious and noble goal, with participants
from most of the technologically developed nations on Earth
participating in the process.
A second, less formal effort to create an open, vendor-neutral, public
networking model sprouted forth from a U.S. Department of Defense
(DoD) contract. Researchers at various universities volunteered to
help further develop the protocols surrounding the original DoD
work. These efforts resulted in a competing open networking model
called TCP/IP.
During the 1990s, companies began adding OSI, TCP/IP, or both
to their enterprise networks. However, by the end of the 1990s,
TCP/IP had become the common choice, and OSI fell away.
Six reasons why the OSI model was created

• To ensure that different vendors' products can work together
• To create standards to enable ease of interoperability by defining standards for the operations at each level
• To clarify the general functions of internetworking
• To divide the complexity of networking into smaller, more manageable sublayers
• To simplify troubleshooting
• To enable developers to modify or improve components at one layer without having to rewrite an entire protocol stack
Users interact with the computer at the Application layer and also
that the upper layers are responsible for applications communicating
between hosts. None of the upper layers knows anything about
networking or network addresses because that’s the responsibility of
the four bottom layers.
Application Layer (Layer 7)
Provides the interface between the network and application software by supplying protocols whose actions are meaningful to the application. Also includes authentication services.
Application Layer Protocols

File Transfer
• TFTP – Trivial File Transfer Protocol is a connectionless service that uses UDP. It is
used on routers and switches to transfer configuration files and Cisco IOS
Software images, and to transfer files between systems that support TFTP
• FTP – File Transfer Protocol is designed to download files (received from the Internet) and upload files (sent to the Internet)
• NFS – Network File System allows a user on a client computer to access files over
a computer network much like local storage is accessed.
Email
• SMTP – Simple Mail Transfer Protocol transports (sends and receives) e-mail messages in ASCII format using TCP
• POP3 - The Post Office Protocol - Version 3 is intended to permit a workstation to
dynamically access a maildrop on a server host in a useful fashion. Usually, it is
used to allow a workstation to retrieve mail that the server is holding for it.
Application Layer Protocols
Network Management
• SNMP – Simple Network Management Protocol facilitates the exchange of management
information between network devices. Enables network administrators to manage network
performance, find and solve network problems and plan for network growth.
• DHCP Dynamic Host Configuration Protocol- is a client/server protocol that automatically
provides an Internet Protocol (IP) host with its IP address and other related configuration
information such as the subnet mask and default gateway

Name Management
• DNS – A Domain Name System server is a device on a network that responds to requests from clients to translate a domain name into its associated IP address.

Client/Server
• HTTP/HTTPS – Hypertext Transfer Protocol (HTTP) works with the World Wide Web, which is the fastest-growing and most used part of the Internet. Defines how web browsers pull the content of a web page from a web server.

Remote Login
• Telnet – Terminal Emulation software provides the capability to remotely access another
computer.
• SSH – Secure Socket Shell, is a network protocol that provides administrators with a secure
way to access a remote computer. It also refers to the suite of utilities that implement the
protocol. It provides strong authentication and secure encrypted data communications
between two computers connecting over an insecure network such as the Internet.
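To make the name-management and client/server ideas above concrete, here is a minimal sketch using Python's standard library. The hostname example.com is only an illustration, and the calls assume a working DNS resolver and Internet access.

```python
import socket
import http.client

# DNS: ask the resolver to translate a domain name into its associated IP address.
host = "example.com"               # illustrative hostname
ip = socket.gethostbyname(host)
print(f"{host} resolves to {ip}")

# HTTP: pull the content of a web page from a web server over TCP port 80.
conn = http.client.HTTPConnection(host, 80, timeout=5)
conn.request("GET", "/")
response = conn.getresponse()
print(response.status, response.reason)   # e.g. 200 OK
conn.close()
```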
TFTP
FTP
NFS
SNMP and POP3
HTTP Protocol Mechanism
Presentation Layer (Layer 6)
Defines the format and organization of data. Includes
encryption.
Presentation Data Formats

ASCII/EBCDIC - American Standard Code for Information Interchange / Extended Binary Coded Decimal Interchange Code
JPEG/MP3 - Joint Photographic Experts Group / MPEG Audio Layer 3 (Moving Picture Experts Group)
Presentation Layer Protocol
SSL - (Secure Sockets Layer) is the standard security technology for establishing an encrypted link between a web server and a browser. This link ensures that all data passed between the web server and browser remains private and intact.
TLS - Transport Layer Security protocol provides communications
security over the Internet. The protocol allows client/server applications
to communicate in a way that is designed to prevent eavesdropping,
tampering, or message forgery.
Session Layer (Layer 5)
Establishes and maintains end-to-end bidirectional
flows between endpoints. Includes managing
transaction flows.

Sets up, manages, and tears down sessions between presentation layer entities
Provides dialogue control between nodes

Session Protocols
Network File System
Operating System
Scheduling
Transport Layer (Layer 4)
The Transport layer segments and reassembles data into a
single data stream. Services located at this layer take all the
various data received from upper-layer applications, then
combine it into the same, concise data stream. These protocols
provide end-to-end data transport services and can establish a
logical connection between the sending host and destination
host on an internetwork.

The Transport layer is responsible for providing mechanisms for multiplexing upper-layer applications, establishing sessions, and tearing down virtual circuits. It can also hide the details of network-dependent information from the higher layers as well as provide transparent data transfer.

Transport Layer Protocols
TCP – Connection-oriented
UDP – Connectionless
Connection-Oriented Communication
For reliable transport to occur, a device that wants to transmit must first establish
a connection-oriented communication session with a remote device—its peer
system—known as a call setup or a three-way handshake. Once this process is
complete, the data transfer occurs, and when it’s finished, a call termination takes
place to tear down the virtual circuit.
Here's a summary of the steps in the connection-oriented session—that three-way handshake.
• The first “connection agreement” segment is a
request for synchronization (SYN).
• The next segments acknowledge (ACK) the request
and establish connection parameters—the rules—
between hosts. These segments request that the
receiver’s sequencing is synchronized here as well
so that a bidirectional connection can be formed.
• The final segment is also an acknowledgment,
which notifies the destination host that the
connection agreement has been accepted and that
the actual connection has been established. Data
transfer can now begin.
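The handshake described above is what happens under the hood when an ordinary TCP socket is opened. Below is a minimal sketch: the SYN, SYN-ACK, ACK exchange is performed by the operating system inside connect() and accept(). Port 5000 on the loopback interface is only an illustrative choice.

```python
import socket
import threading

# Minimal TCP demo: the three-way handshake (SYN, SYN-ACK, ACK) happens inside
# connect()/accept(). Port 5000 on the loopback interface is only illustrative.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 5000))
srv.listen(1)                            # ready to answer the client's SYN

def accept_once():
    conn, addr = srv.accept()            # handshake completes here on the server side
    with conn:
        print("server received:", conn.recv(1024).decode())

t = threading.Thread(target=accept_once)
t.start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 5000))     # client sends SYN, gets SYN-ACK, replies ACK
    cli.sendall(b"hello after handshake")

t.join()
srv.close()
```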
Flow Control
Since floods and losing data can both be tragic, we have a fail-safe
solution in place known as flow control. Its job is to ensure data integrity
at the Transport layer by allowing applications to request reliable data
transport between systems. Flow control prevents a sending host on one
side of the connection from overflowing the buffers in the receiving host.
Reliable data transport employs a connection-oriented communications
session between systems, and the
protocols involved ensure that the following will be achieved:
• The segments delivered are acknowledged back to the sender upon
their reception.
• Any segments not acknowledged are retransmitted.
• Segments are sequenced back into their proper order upon arrival at
their destination.
• A manageable data flow is maintained in order to avoid congestion,
overloading, or worse, data loss.
The types of flow control are :
• Buffering
• Windowing
• Congestion Avoidance
Buffering
The sender's transport layer must worry about overwhelming both the
network and the receiver. The network may exceed the carrying capacity,
and the receiver may run out of buffers.
Buffers are statically allocated kernel memory so that storing received
TPDUs (Transport Protocol Data Unit) can be done quickly. If the network
layer is reliable the transport layer need not buffer transmitted data, since it
relies on the network layer to get the data through.
If the network layer is unreliable, then the sending transport entity has to
buffer all TPDUs until they are acknowledged. This gives the receiving
transport entity the choice of buffering. If it does not, it knows the sender will
eventually resend, though the time spent transmitting and receiving the
TPDU has been wasted. Why might the receiver not buffer? It might not have
a buffer to put the received TPDU in. Remember that the transport entities
handle many connections simultaneously. The buffer pool available to the
transport entity may be exhausted by other connections.
Windowing
TCP implements flow control by using a window
concept that is applied to the amount of data that
can be outstanding and awaiting
acknowledgment at any one point in time. The
window concept lets the receiving host tell the
sender how much data it can receive right now,
giving the receiving host a way to make the
sending host slow down or speed up. The
receiver can slide the window size up and
down—called a sliding window or dynamic
window— to change how much data the sending
host can send.
The quantity of data segments, measured in
bytes, that the transmitting machine is allowed
to send without receiving an acknowledgment
is called a window.
Begin with the first segment, sent by the server to the PC. The Acknowledgment field
should be familiar by now: it tells the PC that the server expects a segment with
sequence number 1000 next. The new field, the window field, is set to 3000. Because
the segment flows to the PC, this value tells the PC that the PC can send no more than
3000 bytes over this connection before receiving an acknowledgment. So, as shown on
the left, the PC realizes it can send only 3000 bytes, and it stops sending, waiting on an
acknowledgment, after sending three 1000-byte TCP segments.

Continuing the example, the server not only acknowledges receiving the data (without any loss), but also decides to slide the window size a little higher. Note the second message flowing right-to-left in the figure, this time with a window of 4000. Once the PC receives this TCP segment, it realizes it can send another 4000 bytes (a slightly larger window than the previous value).
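The accounting the sender does in this example can be sketched as below; the 1000-byte segments and the advertised windows of 3000 and 4000 bytes are taken from the example above.

```python
# Sender-side window accounting: the receiver advertises a window, and the sender
# may have at most that many unacknowledged bytes in flight.
segment_size = 1000          # bytes per TCP segment in the example
window = 3000                # first window advertised by the server
in_flight = 0                # bytes sent but not yet acknowledged

# The PC keeps sending 1000-byte segments until the window is full.
while in_flight + segment_size <= window:
    in_flight += segment_size
    print(f"sent a segment, {in_flight} bytes now unacknowledged")

print("window full, waiting for an acknowledgment")

# The server's ACK both acknowledges the 3000 bytes and advertises a larger window.
in_flight = 0                # acknowledged data leaves the window
window = 4000                # new, slightly larger window
print(f"ACK received, sender may now send up to {window} more bytes")
```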
Congestion control
If routers in the subnet can exchange x packets per
second on direct links, and there are k hops between
sender and receiver, then the most data that can be
sent is k*x packets per second (store and forward
network). Anything more than this causes congestion
in the network. One scheme is to have the sender
monitor the carrying capacity of the network by
measuring the time required for sending and
receiving an ACK for a TPDU. Then, with a capacity of
C TPDUs/second, and a round trip time of r seconds
per TPDU, the sender should be allowed a window of
C * r bytes. This keeps the pipe full. Since the network
capacity may change rapidly due to congestion, the
estimates of C and r must be continually updated.
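As a rough illustration of the window-sizing rule just described (allow about capacity × round-trip time of data to be outstanding so the pipe stays full), here is a sketch with invented values for C, r, and the TPDU size.

```python
# Bandwidth-delay style window sizing: with a measured capacity C (TPDUs per
# second) and round-trip time r (seconds), allow roughly C * r TPDUs to be
# outstanding. All numbers below are made up for illustration.
C = 500           # estimated capacity, TPDUs per second
r = 0.04          # measured round-trip time, seconds
tpdu_size = 1000  # bytes per TPDU (illustrative)

window_tpdus = C * r
print(f"window ≈ {window_tpdus:.0f} TPDUs ≈ {window_tpdus * tpdu_size:.0f} bytes")
```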
Acknowledgement
Positive Acknowledgment with Retransmission — a technique that requires
a receiving machine to communicate with the transmitting source by
sending an acknowledgment message back to the sender when it receives
data.
TCP and UDP

TCP – A connection-oriented, reliable protocol that provides flow control by providing sliding windows and offers reliability by providing sequence numbers and acknowledgements. TCP resends anything that is not acknowledged and supplies a virtual circuit between end-user applications. The advantage of TCP is that it provides guaranteed delivery of segments.

UDP – A connectionless and unreliable protocol that is responsible for transmitting messages but provides no software checking for segment delivery. The advantage that UDP provides is speed. Because UDP provides no acknowledgements, less control traffic is sent across the network, making the transfer faster.
Ports

Ports are used in TCP and UDP communications to name the ends of logical connections that transfer data. Ports were created to provide services to unknown clients.

List of Well-Known Ports

Port numbers range from 0 to 65535, but only port numbers 0 to 1023 are reserved for privileged services and designated as well-known ports. The following list of well-known port numbers specifies the port used by the server process as its contact port.

Ports
• Well-known ports range from 0 through 1023.
• Registered ports are 1024 to 49151.
• Dynamic ports (also called private ports) are 49152 to 65535.

The registered port numbers are the port numbers that companies and
other users register with the Internet Corporation for Assigned Names
and Numbers (ICANN) for use by the applications that communicate
using the Internet's Transmission Control Protocol (TCP) or the User
Datagram Protocol (UDP). In most cases, these applications run as
ordinary programs that can be started by nonprivileged users. The
registered port numbers are in the range from 1024 through 49151.

Examples of applications with registered port numbers include Sun's NEO Object Request Broker (port numbers 1047 and 1048) and Shockwave (port number 1626).
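The three ranges listed above are easy to check programmatically; a small sketch:

```python
# Classify a port number into the three ranges described above.
def port_range(port: int) -> str:
    if not 0 <= port <= 65535:
        raise ValueError("port numbers range from 0 to 65535")
    if port <= 1023:
        return "well-known"
    if port <= 49151:
        return "registered"
    return "dynamic/private"

for p in (80, 1626, 50000):
    print(p, "->", port_range(p))   # 80 -> well-known, 1626 -> registered, 50000 -> dynamic/private
```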
Well known ports
Network Layer (Layer 3)
This layer defines logical addressing, routing(forwarding),
and the routing protocols used to learn routes

Network Layer Protocols

IP – Provides addressing and connectionless, best-effort delivery routing of datagrams; it is not concerned with the content of the datagrams, but looks for a way to move the datagrams to their destination
ICMP – Internet Control Message Protocol, provides control and messaging capabilities
ARP – Address Resolution Protocol, determines the data link layer (MAC) address for a known IP address
RARP – Reverse Address Resolution Protocol, determines the network address when the data link layer (MAC) address is known.
IPSec - is a technology standard for implementing security features in Internet
Protocol (IP) networking. IPsec network protocols support encryption and
authentication. IPsec is most commonly used in so-called "tunnel mode" with a
Virtual Private Network (VPN)
ICMP
Internet Control Message Protocol (ICMP) - is a component of the TCP/IP protocol stack that addresses IP's failure to ensure data delivery. ICMP does not overcome the unreliability limitation that exists in IP; it simply sends error messages to the sender of the data, indicating problems with data delivery.

Functions of ICMP control messages


- ICMP redirect/change request message
- ICMP clock synchronization and transit time estimation messages
- ICMP information request and reply messages
- ICMP address mask request and reply messages
- ICMP router-discovery message
- ICMP router-solicitation message
- ICMP congestion and flow-control message
ICMP
How ARP Works?
Data Link Layer (Layer 2)
Formats data into frames appropriate for transmission onto
some physical medium. Defines rules for when the
medium can be used. Defines means by which to
recognize transmission errors.

Creates frames from bits of data, uses MAC addresses to access endpoints, and provides error detection but no correction

Data Link Layer Protocols
802.3
802.2
HDLC
Frame Relay
Data Link Layer

The data link layer defines how data is formatted for transmission and how access to the physical media is controlled. This layer also typically includes error detection to help ensure reliable delivery of data. The data link layer
translates messages from the network layer into bits for the physical layer, and
it enables the network layer to control the interconnection of data circuits
within the physical layer. Its specifications define different network and
protocol characteristics, including physical addressing, error notification,
network topology, and sequencing of frames.
Data-link protocols provide the delivery across individual links and are
concerned with the different media types, such as 802.2 and 802.3. The data
link layer is responsible for putting 1s and 0s into a logical group. These 1s
and 0s are then put on the physical wire. Some examples of data link layer
implementations are IEEE 802.2/802.3, IEEE 802.5/802.2, packet trailer (for
Ethernet, frame check sequence [FCS], or cyclic redundancy check [CRC]),
Fiber Distributed Data Interface (FDDI), High-
Level Data Link Control (HDLC), and Frame Relay.
Data Link Layer two sublayers

The Logical Link Control (802.2) sublayer is responsible for identifying different network layer protocols and then encapsulating them to be transferred across the network. Two types of LLC frames exist: Service Access Point (SAP) and Subnetwork Access Protocol (SNAP). An LLC header tells the data link layer what to do with a packet after it is received.

The MAC sublayer specifies how data is placed and transported over the physical wire. It controls access to the physical medium.
The LLC sublayer communicates with the network layer, but
the MAC sublayer communicates downward directly to the
physical layer. Physical addressing (MAC addresses), network
topologies, error notification, and delivery of frames are
defined at the MAC sublayer.
MAC Address
Media Access Control (MAC) address –an address burned into each and every
Ethernet network interface card (NIC). The MAC, physical, or hardware, address is
a 48-bit (6-byte) address written in a hexadecimal format.

How to Determine the MAC address of your PC?


Parts of a MAC Address

The high-order bit is the Individual/Group (I/G) bit. When it has a value of 0, we can
assume that the address is the MAC address of a device and that it may well appear in
the source portion of the MAC header. When it’s a 1, we can assume that the address
represents either a broadcast or multicast address in Ethernet.

The next bit is the global/local bit, sometimes called the G/L bit or U/L bit, where U
means universal. When set to 0, this bit represents a globally administered address, as
assigned by the IEEE, but when it’s a 1, it represents a locally governed and
administered address.
Parts of a MAC Address

The organizationally unique identifier (OUI) is assigned by the


IEEE to an organization. It’s composed of 24 bits, or 3 bytes,
and it in turn assigns a globally administered address also
made up of 24 bits, or 3 bytes, that’s supposedly unique to each
and every adapter an organization manufactures.

The low-order 24 bits of an Ethernet address represent a locally


administered or manufacturer-assigned code. This portion
commonly starts with 24 0s for the first card made and
continues in order until there are 24 1’s for the last
(16,777,216th) card made. You’ll find that many manufacturers
use these same six hex digits as the last six characters of their
serial number on the same card
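The fields described above can be pulled out of a MAC address with a few lines of code. The address used here is an arbitrary illustration; note that in the canonical written form the I/G and U/L bits are the two low-order bits of the first octet, which Ethernet happens to transmit first on the wire (which is why the text describes them as the leading bits).

```python
# Split a MAC address into its OUI and vendor-assigned halves, and read the
# I/G and U/L bits of the first octet. The address below is an arbitrary example.
mac = "00:1A:2B:3C:4D:5E"

octets = [int(part, 16) for part in mac.split(":")]
first = octets[0]

ig_bit = first & 0b00000001          # 0 = individual (unicast), 1 = group (broadcast/multicast)
ul_bit = (first & 0b00000010) >> 1   # 0 = globally administered (IEEE OUI), 1 = locally administered

oui = ":".join(f"{o:02X}" for o in octets[:3])              # high-order 24 bits
vendor_assigned = ":".join(f"{o:02X}" for o in octets[3:])  # low-order 24 bits

print(f"OUI: {oui}, vendor-assigned: {vendor_assigned}, I/G: {ig_bit}, U/L: {ul_bit}")
```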
Network Interface Card
Physical Layer (Layer 1)
The physical layer defines the physical medium. It defines the
media type, the connector type, and the signaling type
(baseband versus broadband). This includes voltage levels,
physical data rates, and maximum cable lengths. The physical
layer is responsible for converting frames into electronic bits
of data, which are then sent or received across the physical
medium. Twisted-pair, coaxial, and fiber-optic cable operate at
this level. Other implementations at this layer are repeaters/
hubs.

Physical Layer
EIA/TIA
V.35
Most common IEEE Ethernet standards
10Base-T (IEEE 802.3) 10 Mbps using category 3 unshielded twisted pair (UTP) wiring
for runs up to 100 meters. Unlike with the 10Base-2 and 10Base-5 networks, each
device must connect into a hub or switch, and you can have only one host per segment
or wire. It uses an RJ45 connector (8-pin modular connector) with a physical star
topology and a logical bus.

100Base-TX (IEEE 802.3u) 100Base-TX, most commonly known as Fast Ethernet, uses
EIA/TIA category 5, 5E, or 6 UTP two-pair wiring. One user per segment; up to 100
meters long. It uses an RJ45 connector with a physical star topology and a logical bus.

100Base-FX (IEEE 802.3u) Uses fiber cabling 62.5/125-micron multimode fiber. Point
to-point topology; up to 412 meters long. It uses ST and SC connectors, which are
media interface connectors.

1000Base-CX (IEEE 802.3z) Copper twisted-pair, called twinax, is a balanced coaxial pair that can run only up to 25 meters and uses a special 9-pin connector known as the High Speed Serial Data Connector (HSSDC). This is used in Cisco's new Data Center technologies.

1000Base-T (IEEE 802.3ab) Category 5, four-pair UTP wiring up to 100 meters long
and up to 1 Gbps.
Most common IEEE Ethernet standards

1000Base-SX (IEEE 802.3z) The implementation of 1 Gigabit Ethernet running over multimode fiber-optic cable instead of copper twisted-pair cable, using a short-wavelength laser. Multimode fiber (MMF) using 62.5- and 50-micron core; uses an 850 nanometer (nm) laser and can go up to 220 meters with 62.5-micron, 550 meters with 50-micron.

1000Base-LX (IEEE 802.3z) Single-mode fiber that uses a 9-micron core and 1300 nm laser and can go from 3 kilometers up to 10 kilometers.

1000Base-ZX (Cisco standard) 1000BaseZX, or 1000Base-ZX, is a Cisco-specified standard for Gigabit Ethernet communication. 1000BaseZX operates on ordinary single-mode fiber-optic links with spans up to 43.5 miles (70 km).

10GBase-T (IEEE 802.3an) 10GBase-T is a standard proposed by the IEEE 802.3an committee to provide 10 Gbps connections over conventional UTP cables (category 5e, 6, or 7 cables). 10GBase-T allows the conventional RJ45 used for Ethernet LANs and can support signal transmission at the full 100-meter distance specified for LAN wiring.
Encapsulation
Encapsulation wraps data with the necessary protocol information before
network transmission.

A PDU can include different information as it goes up or down the OSI model.
It is given a different name according to the information it is carrying (the layer
where it is located). When the transport layer receives upper-layer data, it
adds a TCP header to the data; this is called a segment. The segment is then
passed to the network layer, and an IP header is added; thus, the data
becomes a packet. The packet is passed to the data link layer,
thus becoming a frame. This frame is then converted into bits and is passed
across the network medium.

Application layer: Data
Transport layer: Segment
Network layer: Packet
Data link layer: Frame
Physical layer: Bits
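A toy sketch of this header-wrapping process is shown below, just to show the data → segment → packet → frame nesting. All header fields and addresses are made up for illustration; real headers are binary structures, not dictionaries.

```python
# Toy encapsulation: wrap application data with a (greatly simplified) TCP header,
# then an IP header, then an Ethernet header. All field values are made up.
data = b"GET / HTTP/1.1"

segment = {"tcp": {"src_port": 49152, "dst_port": 80}, "payload": data}
packet  = {"ip":  {"src": "192.168.1.10", "dst": "203.0.113.5"}, "payload": segment}
frame   = {"eth": {"src": "00:1A:2B:3C:4D:5E", "dst": "00:5E:4D:3C:2B:1A"}, "payload": packet}

# At the physical layer the frame would be serialized into bits.
print(frame)
```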
Ethernet
Ethernet is a contention-based media access method that allows all hosts
on a network to share the same link’s bandwidth. Some reasons it’s so
popular are that Ethernet is really pretty simple to implement and it makes
troubleshooting fairly straightforward as well. Ethernet is so readily scalable,
meaning that it eases the process of integrating new technologies into an
existing network infrastructure, like upgrading from Fast Ethernet to Gigabit
Ethernet.
Collision and Broadcast Domain

A collision domain refers to a network scenario wherein one device sends a frame out on a physical network segment, forcing every other device on the same segment to pay attention to it. This is bad because if two devices on a single physical segment just happen to transmit simultaneously, it will cause a collision and require these devices to retransmit. Think of a collision event as a situation where each device's digital signals totally interfere with one another on the wire.

A broadcast domain refers to a group of devices on a specific network segment that hear all the broadcasts sent out on that specific network segment.
But even though a broadcast domain is usually a boundary delimited by physical media like switches and routers, it can also refer to a logical division of a network segment, where all hosts can communicate via a Data Link layer, hardware address broadcast.

A broadcast is a data packet that is sent to all nodes on a network.
The definition of a collision domain is a set of LAN devices whose frames could collide with one another. This happens with hubs, bridges, repeaters and wireless access points, as only one device can send and receive at a time. If more than one device tries sending or receiving, the information is lost and irrecoverable, and it will need to be resent. This can slow down network performance and also makes it a security threat.

A hub is considered a layer one device of the OSI model; all it does is send frames out on all ports, including the port on which the frame was received. This creates a collision domain because only one device can send at the same time. It also shares the bandwidth among all devices connected to that collision domain. These devices can use that bandwidth inefficiently because of the CSMA/CD and jamming signals that occur when a collision happens.
A switch operates at layer two of the OSI model, so the switch uses MAC addresses to send frames to the correct device. Rather than sending a frame out all ports, a switch sends the frame out a single port if it has the destination MAC address in its MAC address table. If not, the switch sends the frame out all ports except for the port on which the frame was received. Switches provide separate collision domains on each port; this provides dedicated bandwidth to that device and allows simultaneous conversations between devices on different ports. Each port can operate at full duplex, so the device can send and receive information at the same time.
CSMA/CD

Ethernet networking uses a protocol called Carrier Sense Multiple Access with Collision Detection (CSMA/CD), which helps devices share the bandwidth evenly while preventing two devices from transmitting simultaneously on the same network medium. CSMA/CD was actually created to overcome the problem of the collisions that occur when packets are transmitted from different nodes at the same time.

When a collision occurs on an Ethernet LAN, the following happens:
1. A jam signal informs all devices that a collision occurred.
2. The collision invokes a random back-off algorithm.
3. Each device on the Ethernet segment stops transmitting for a short time until its back-off timer expires.
4. All hosts have equal priority to transmit after the timers have expired.
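The random back-off in step 2 is commonly implemented as truncated binary exponential back-off; a rough sketch follows. The 512-bit-time slot value is the classic 10/100 Mbps Ethernet figure and is used here only for illustration.

```python
import random

# Truncated binary exponential back-off, a rough sketch of step 2 above.
# After the n-th consecutive collision, a station waits a random number of slot
# times chosen from 0 .. 2^min(n, 10) - 1. A slot time of 512 bit times is the
# classic value for 10/100 Mbps Ethernet (illustrative here).
SLOT_BIT_TIMES = 512

def backoff_slots(collision_count: int) -> int:
    k = min(collision_count, 10)
    return random.randint(0, 2 ** k - 1)

for n in range(1, 5):
    slots = backoff_slots(n)
    print(f"after collision {n}: wait {slots} slots ({slots * SLOT_BIT_TIMES} bit times)")
```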
A broadcast domain is like a collision domain; however, the difference is that a broadcast domain belongs to a set of devices in the same layer two domain. These tend to blur together between a layer three broadcast message and a layer two broadcast message (FFFF:FFFF:FFFF). A layer two broadcast goes to every host in the same LAN/VLAN. To make it a little more fun, there are two types of layer three broadcast messages 🙂

Limited/Local Broadcast – (255.255.255.255) is often used when the host really has no idea what network it is on and waits for a DHCP server to respond back, as well as when a host needs to know the MAC address of another host on the same LAN/VLAN. This broadcast goes to every host on the same LAN/VLAN and is the most common type of broadcast message.
Directed Broadcast – A directed IP packet whose destination is a valid broadcast address on a network that the host is not currently a part of. A router would forward this on to the correct network; however, this is usually disabled by default. Example: 192.168.1.255/24
Also keep in mind when you send a layer three broadcast you’ll also send a
layer two broadcast regardless of what type of layer three broadcast message is
sent. This also works the other way, when you send a layer two broadcast
message you’ll also send a layer three broadcast message. (We work up and
down the OSI model)
If devices are in the same IP network they will send and receive broadcast messages, and having smaller broadcast domains can improve network performance as well as help against security attacks. The more PCs and network devices connected to a single broadcast domain, the more broadcast messages you will have. A broadcast message goes to every PC that's on the LAN/VLAN. An example is when the router gets a packet from a different network and that packet is destined to a host (192.168.1.124): it will send the packet if the router already knows the MAC address of 192.168.1.124. If the router does not have the MAC address of 192.168.1.124 in its ARP table, it will send an ARP (Limited/Local Broadcast) request before delivering the packet. In this ARP request the router is basically saying "who is 192.168.1.124? Tell me your MAC address." That broadcast message goes to every PC and network device in the broadcast domain. Each PC and network device that is in the 192.168.1.0/24 network has to look at the frame and then discard it if it's not 192.168.1.124. The PC that is 192.168.1.124 will respond back to the ARP request with an ARP reply. So a broadcast message can behave much like a collision domain and affect network performance.
Part 3: Routing Protocols and Concepts

Reference: Routing Protocols and Concepts, CCNA Exploration Companion Guide
What is a router?
A router is a computer and has many of the common hardware
components found on other types of computers. A router also
includes an operating system. Routers are used to connect
multiple networks. The router is responsible for the delivery of
packets across different networks.

Other services that a router provides


■ Ensuring 24/7 (24 hours a day, 7 days a week) availability to
help guarantee network reachability using alternate paths in
case the primary path fails
■ Providing integrated services of data, video, and voice over
wired and wireless networks using quality of service (QoS)
prioritization of IP packets to ensure that real-time traffic, such as
voice and video or critical data, is not dropped or delayed
■ Mitigating the impact of worms, viruses, and other attacks on
the network by permitting or denying the forwarding of packets
Routing Table
A routing table is a data file in RAM that is used to store route information about directly
connected and remote networks. The routing table contains network/next-hop
associations that tell a router that a particular destination can be optimally reached by
sending the packet to a particular router representing the “next hop” on the way to the
final destination. The next-hop association can also be the outgoing or exit interface to
the final destination.

A directly connected network is a network that is directly attached to one of the router
interfaces. When a router’s interface is configured with an IP address and subnet mask,
the interface becomes a host on that attached network. The network address and subnet
mask of the interface, along with the interface type and number, are entered into the
routing table as a directly connected network. When a router forwards a packet to a
host such as a web server, that host is on the same network as a router’s directly
connected network. A remote network is a network that is not directly connected to
the router. In other words, a remote network is a network that can only be reached by
sending the packet to another router. Remote networks are added to the routing table
using a dynamic routing protocol or by configuring static routes.

Dynamic routes are routes to remote networks that were learned automatically by the router, using a dynamic routing protocol. Static routes are routes to networks that a network administrator manually configured.
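A routing table lookup can be sketched with Python's ipaddress module. The networks and next hops below are invented for illustration, and real routers of course do much more, but the longest-prefix-match idea is the same: the most specific matching entry wins, and the default route catches everything else.

```python
import ipaddress

# Toy routing table: (destination network, next hop or exit interface).
# Networks and next hops are invented for illustration only.
routing_table = [
    (ipaddress.ip_network("192.168.1.0/24"), "directly connected, GigabitEthernet0/0"),
    (ipaddress.ip_network("10.0.0.0/8"),     "via 192.168.1.2"),
    (ipaddress.ip_network("10.1.1.0/24"),    "via 192.168.1.3"),
    (ipaddress.ip_network("0.0.0.0/0"),      "default via 192.168.1.1"),
]

def lookup(destination: str) -> str:
    dst = ipaddress.ip_address(destination)
    # Longest-prefix match: among all routes that contain the destination,
    # pick the most specific one (largest prefix length).
    matches = [(net, hop) for net, hop in routing_table if dst in net]
    net, hop = max(matches, key=lambda entry: entry[0].prefixlen)
    return f"{destination} -> {net} {hop}"

print(lookup("10.1.1.55"))     # most specific match wins: 10.1.1.0/24
print(lookup("8.8.8.8"))       # falls through to the default route
```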
Static Routing
Static routing is a type of network routing technique. Static
routing is not a routing protocol; instead, it is the manual
configuration and selection of a network route, usually
managed by the network administrator.
Stub Network
Static routes are commonly used when routing from a network to a stub
network. A stub network is a network accessed by a single route.
Configuring Static Routes
Sample Configuration of Static Routing
Verifying a Static Route
Summary and Default Static Routes
Verifying a Default Static Route
Dynamic Routing
Dynamic Routing Protocols – is a set of processes,
algorithms, and messages that are used to exchange
information and populate the routing table with routing
protocol’s choice of best path.
The purpose of a routing protocol includes:
• Discovering remote networks
• Maintaining up-to-date routing information
• Choosing the best path to destination networks
• Having the ability to find a new path if the current path is no longer available.
Dynamic Routing

The main components of dynamic routing protocols include:


•Data structures: Routing protocols typically use tables or
databases for their operations. This information is kept in
RAM.
•Routing protocol messages: Routing protocols use various
types of messages to discover neighboring routers,
exchange routing information, and perform other tasks to
learn and maintain accurate information about the network.
•Algorithm: An algorithm is a finite list of steps used to
accomplish a task. Routing protocols use algorithms for
facilitating routing information and for best path
determination.
Dynamic Routing
Operations of a dynamic routing protocol
• The router sends and receives routing messages on its
interfaces.
• The router shares routing messages and routing
information with other routers that are using the same
routing protocol
• Routers exchange routing information to learn about
remote networks
• When a router detects a topology change, the routing
protocol can advertise this change to other routers
Dynamic Routing
Feature                          | Dynamic Routing                             | Static Routing
Configuration complexity         | Generally independent of the network size   | Increases with network size
Required administrator knowledge | Advanced knowledge required                 | No extra knowledge required
Topology change                  | Automatically adapts to topology changes    | Administrator intervention required
Scaling                          | Suitable for simple and complex topologies  | Suitable for simple topologies
Security                         | Less secure                                 | More secure
Resource usage                   | Uses CPU, memory, and link bandwidth        | No extra resources needed
Predictability                   | Route depends on the current topology       | Route to destination is always the same

Dynamic Routing
Classifying Routing Protocols (3.1.4.1)

Routing protocols can be classified into different groups according to their characteristics. Specifically, routing protocols can be classified by their:
• Purpose: Interior Gateway Protocol (IGP) or Exterior Gateway Protocol (EGP)
• Operation: Distance vector protocol, link-state protocol, or path-vector protocol
• Behavior: Classful (legacy) or classless protocol
For example, IPv4 routing protocols are classified as follows:
• RIPv1 (legacy): IGP, distance vector, classful protocol
• IGRP (legacy): IGP, distance vector, classful protocol developed by Cisco
(deprecated from 12.2 IOS and later)
• RIPv2: IGP, distance vector, classless protocol
• EIGRP: IGP, distance vector, classless protocol developed by Cisco
• OSPF: IGP, link-state, classless protocol
• IS-IS: IGP, link-state, classless protocol
• BGP: EGP, path-vector, classless protocol

The classful routing protocols, RIPv1 and IGRP, are legacy protocols and are only
used in older networks. These routing protocols have evolved into the classless
routing protocols, RIPv2 and EIGRP, respectively. Link-state routing protocols are
classless by nature.
Dynamic Routing
hierarchical view of dynamic routing protocol
classification.
Dynamic Routing
IGP and EGP Routing Protocols (3.1.4.2)
An autonomous system (AS) is a collection of routers under a
common administration such as a company or an organization.
An AS is also known as a routing domain. Typical examples of
an AS are a company’s internal network and an ISP’s network.
The Internet is based on the AS concept; therefore, two types
of routing protocols are required:
•Interior Gateway Protocols (IGP): Used for routing within an
AS. It is also referred to as intra-AS routing. Companies,
organizations, and even service providers use an IGP on their
internal networks. IGPs include RIP, EIGRP, OSPF, and IS-IS.
•Exterior Gateway Protocols (EGP): Used for routing between
autonomous systems. It is also referred to as inter-AS routing.
Service providers and large companies may interconnect
using an EGP. The Border Gateway Protocol (BGP) is the only
currently viable EGP and is the official routing protocol used
by the Internet.
Dynamic Routing
five individual autonomous systems in the
scenario:
Dynamic Routing
Routing Protocol Characteristics (3.1.4.7)
Routing protocols can be compared based on the following characteristics:
•Speed of convergence: Speed of convergence defines how quickly the routers in
the network topology share routing information and reach a state of consistent
knowledge. The faster the convergence, the more preferable the protocol. Routing
loops can occur when inconsistent routing tables are not updated due to slow
convergence in a changing network.
•Scalability: Scalability defines how large a network can become, based on the
routing protocol that is deployed. The larger the network is, the more scalable the
routing protocol needs to be.
•Classful or classless (use of VLSM): Classful routing protocols do not include the
subnet mask and cannot support variable-length subnet mask (VLSM). Classless
routing protocols include the subnet mask in the updates. Classless routing protocols
support VLSM and better route summarization.
•Resource usage: Resource usage includes the requirements of a routing protocol
such as memory space (RAM), CPU utilization, and link bandwidth utilization. Higher
resource requirements necessitate more powerful hardware to support the routing
protocol operation, in addition to the packet forwarding processes.
•Implementation and maintenance: Implementation and maintenance describes
the level of knowledge that is required for a network administrator to implement and
maintain the network based on the routing protocol deployed.
Dynamic Routing
Routing Protocol Metrics (3.1.4.8)
There are cases when a routing protocol learns of more than
one route to the same destination. To select the best path, the
routing protocol must be able to evaluate and differentiate
between the available paths. This is accomplished through the
use of routing metrics.
A metric is a measurable value that is assigned by the routing
protocol to different routes based on the usefulness of that
route. In situations where there are multiple paths to the same
remote network, the routing metrics are used to determine the
overall “cost” of a path from source to destination. Routing
protocols determine the best path based on the route with the
lowest cost.
Different routing protocols use different metrics. The metric
used by one routing protocol is not comparable to the metric
used by another routing protocol. Two different routing
protocols might choose different paths to the same destination.
Dynamic Routing
Metric Parameters
Hop Count: A simple metric that counts the number of routers a packet must traverse
Bandwidth: Influences path selection by preferring the path with the highest bandwidth
Load: Considers the traffic utilization of a certain link
Delay: Considers the time a packet takes to traverse a path
Reliability: Assesses the probability of a link failure, calculated from the interface error count or previous link failures
Cost: A value determined either by the IOS or by the network administrator to indicate preference for a route. Cost can represent a metric, a combination of metrics, or a policy
Dynamic Routing

Metric for each routing protocol

RIP: Hop count. The best path is chosen by the route with the lowest hop count.
IGRP and EIGRP: Bandwidth, delay, reliability, and load. The best path is chosen by the route with the smallest composite metric value calculated from these multiple parameters. By default, only bandwidth and delay are used.
IS-IS and OSPF: Cost. The best path is chosen by the route with the lowest cost. The Cisco implementation of OSPF uses bandwidth to determine the cost.
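As an illustration of that last point, Cisco's OSPF cost is commonly described as reference bandwidth divided by interface bandwidth, with a default reference of 100 Mbps; a small sketch under those assumptions:

```python
# Cisco-style OSPF cost: cost = reference_bandwidth / interface_bandwidth,
# rounded down but never below 1. The default reference bandwidth is 100 Mbps
# (10^8 bps); the interface speeds below are illustrative.
REFERENCE_BW_BPS = 100_000_000

def ospf_cost(interface_bw_bps: int) -> int:
    return max(1, REFERENCE_BW_BPS // interface_bw_bps)

for name, bw in [("10 Mbps Ethernet", 10_000_000),
                 ("100 Mbps FastEthernet", 100_000_000),
                 ("1 Gbps GigabitEthernet", 1_000_000_000)]:
    print(f"{name}: cost {ospf_cost(bw)}")
# 10 Mbps -> 10, 100 Mbps -> 1, 1 Gbps -> 1 (one reason the reference is often raised)
```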
Dynamic Routing
Administrative Distance
Before the routing process can determine which route to use when forwarding a packet, it must first determine which routes to include in the routing table. There can be times when a router learns a route to a remote network from more than one routing source. The routing process will need to determine which routing source to use. Administrative distance is used for this purpose.
Purpose of Administrative Distance
Administrative Distance (AD) defines the preference of a
routing source. Each routing source - including specific
routing protocols, static routes, and even directly connected
network - is prioritized in order of most to least preferable
using an administrative distance value. Cisco routers use the
AD feature to select the best path when they learn about the
same destination network from two or more different routing
sources.
Dynamic Routing
Administrative distance is an integer value from 0 to 255. The lower the value, the more preferred the route source. An administrative distance of 0 is the most preferred. Only a directly connected network has an administrative distance of 0, which cannot be changed.

Default Administrative Distances

Route Source          AD    Route Source      AD
Connected              0    OSPF             110
Static                 1    IS-IS            115
EIGRP summary route    5    RIP              120
External BGP          20    External EIGRP   170
Internal EIGRP        90    Internal BGP     200
IGRP                 100
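Route-source selection can be sketched as picking the candidate with the lowest administrative distance (ties would then fall back to the routing protocol's own metric). The AD values are the defaults from the table above; the candidate routes for 10.1.1.0/24 are invented for illustration.

```python
# Pick the routing source with the lowest administrative distance for a prefix.
# AD values are the Cisco defaults from the table above; the candidate routes
# are invented for illustration.
DEFAULT_AD = {"connected": 0, "static": 1, "ebgp": 20, "eigrp": 90,
              "ospf": 110, "isis": 115, "rip": 120}

candidates = [
    {"prefix": "10.1.1.0/24", "source": "ospf",   "next_hop": "192.168.1.2"},
    {"prefix": "10.1.1.0/24", "source": "rip",    "next_hop": "192.168.1.3"},
    {"prefix": "10.1.1.0/24", "source": "static", "next_hop": "192.168.1.4"},
]

best = min(candidates, key=lambda r: DEFAULT_AD[r["source"]])
print(f"installing {best['prefix']} via {best['next_hop']} "
      f"({best['source']}, AD {DEFAULT_AD[best['source']]})")
# -> the static route (AD 1) wins over OSPF (110) and RIP (120)
```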
Distance Vector Routing Protocols

Cold Start (3.1.3.2)


All routing protocols follow the same patterns of
operation. When a router powers up, it knows nothing
about the network topology. It does not even know that
there are devices on the other end of its links. The only
information that a router has is from its own saved
configuration file stored in NVRAM. After a router boots
successfully, it applies the saved configuration. If the IP
addressing is configured correctly, then the router initially
discovers its own directly connected networks.
Distance Vector Routing Protocols

Network Discovery (3.1.3.3)


After initial boot up and discovery, the routing table is
updated with all directly connected networks and the
interfaces those networks reside on. If a routing protocol is
configured, the next step is for the router to begin exchanging
routing updates to learn about any remote routes. The router
sends an update packet out all interfaces that are enabled on
the router. The update contains the information in the routing
table, which currently comprises all directly connected
networks. At the same time, the router also receives and
processes similar updates from other connected routers. Upon
receiving an update, the router checks it for new network
information. Any networks that are not currently listed in the
routing table are added.
Distance Vector Routing Protocols
Initial Routing Table Before Exchange
Initial Convergence
Routing Table After Initial Exchange
Routing Table After Convergence
Dynamic Routing
Routing Table Maintenance

After the routers have initially learned about remote networks, routing protocols must
maintain the routing tables so that they have the most current routing information.

Even though none of the routers have new information to share, periodic updates
are sent anyway. The term periodic updates refers to the fact that a router sends the
complete routing table to its neighbors at a predefined interval. For RIP, these
updates are sent every 30 seconds as a broadcast (255.255.255.255), whether or
not there has been a topology change. This 30-second interval is a route update
timer that also aids in tracking the age of routing information in the routing table.

Reasons for Periodic Update


• Failure of a link
• Introduction of a new link
• Failure of a router
• Change of link parameters
Dynamic Routing
Maintaining the Routing Table

• periodic updates refers to the fact that a router sends the complete routing table
to its neighbors at a predefined interval. For RIP, these updates are sent every 30
seconds as a broadcast (255.255.255.255), whether or not there has been a
topology change. This 30- second interval is a route update timer that also aids
in tracking the age of routing information in the routing table.

• Bounded Updates
EIGRP uses updates that are
■ Nonperiodic, because they are not sent out on a regular basis
■ Partial, because they are sent only when there is a change in topology that influences routing information
■ Bounded, meaning that the propagation of partial updates is automatically bounded so that only those routers that need the information are updated

• Triggered updates are sent when one of the following events occurs:
■ An interface changes state (up or down).
■ A route has entered (or exited) the unreachable state.
■ A route is installed in the routing table.


Dynamic Routing
LAN
Ethernet
Ethernet Frame
MAC Address
Unicast, Multicast, Broadcast
Duplex
Network Hub
Network Bridge