Definition of a network
Sometimes small offices and home offices are grouped under the term
SOHO (Small Office, Home Office).
The rules.
The messages.
The media.
The devices.
Characteristics of a Network.
The main characteristics of a network are as follows:
Network Media
<SPEED>BASE-<MEDIA/CABLE TYPE>
<Speed>. Refers to the actual speed of the network in Mbps or
Gbps.
<Media/Cable Type>. Refers to the type of cable and media
used by the network.
Routers:
Routers are used to interconnect networks (LANs and WANs).
There are different key factors to be considered when selecting a
router:
Tip. If you were wondering why at home you use a straight-through
cable to connect to your broadband router instead of a
crossover, the answer is not so obvious. In reality, the ports used to
connect your home computers to a broadband router act as
switch ports and, therefore, they have the same pin-out as a switch.
As we have learnt, the straight-through cable is the most appropriate.
- Computers.
- Interconnections: Network Interface Cards (NICs) and the media.
- Networking devices (hubs, switches and routers).
- Protocols (Ethernet, IP, TCP, etc.).
Teleworkers
Access Layer.
Routers.
Telephone/VPN Concentrators.
Switches with the following characteristics: port security,
VLAN support, different speed interfaces (Fast/Gigabit Ethernet),
Power over Ethernet (IP phones), link aggregation, Quality of
Service (QoS).
Distribution Layer.
Core Layer.
Enterprise Architecture
Cisco has developed what is called an Enterprise Architecture. An
Enterprise Architecture is a generic representation of the most
common layout for enterprise networks showing the different layers
and the structured approach towards providing network connectivity
for companies.
Section Summary
When two systems are able to communicate because they use the
same set of protocols we usually say they can interoperate.
Protocols add additional information to the user data in such a way
that the delivery of the information from the source to the destination
can be accomplished. This additional information is protocol specific (for
example, a message sequence number) and is called overhead.
There are two types of overhead:
ISO wanted the OSI model to become the open standard that
everyone would follow in a world where all computers and systems
could communicate. In the end, TCP/IP became the de facto standard
that everyone follows, but the OSI model is still used to illustrate how
protocols work in a layered architecture.
The process of passing data down the stack and adding headers and
trailers is called encapsulation.
By means of encapsulation peer layers (layers at the same OSI level
at source and destination) interchange control information and critical
parameters for the communication.
For simplicity we have not illustrated any trailer except the FCS.
At the receiver side, the opposite process takes place. The receiver
de-encapsulates the information received through the media:
Each layer of the OSI model at the source communicates with its peer
layer at the destination in order for the data to travel across the
network. This form of communication is called peer-to-peer
communication.
When data travels down the OSI model stack, each layer encapsulates
the data received from the upper layer, building a Protocol Data Unit
(PDU) by adding headers/trailers whose information is only relevant to
its peer layer at the destination side.
Each layer offers services to the upper layer that are achieved
by means of the headers/trailers being added to the data received
from the upper layer.
For example, the transport layer takes data from the session layer and
fragments this data if required. Once the data is fragmented, the
corresponding header is added, in which each fragment is identified. In
this example, the transport layer offers the fragmentation service to
the session layer and adds a header that will only be understood and
used by the peer transport layer at the destination side. The transport
layer at the destination side will use the information contained in the
header to identify each fragment and send the complete data upwards
to the receiving session layer.
Application Layer:
Transport Layer:
Network Layer:
OSI Model and TCP/IP Model are two different layered architectures
defining how hosts deliver data end to end through a set of protocols
providing different functions required for the communication to take
place.
Both are based on the concept of encapsulation, i.e. data travelling
down the stack and headers/trailers being added on each layer to
support layer specific communication related functions.
Modern LAN and WAN technologies are packet switched, where
the media is shared by different flows; hence the need for a
data link or network access layer determining how the media is
accessed and shared.
Section Summary
The most relevant TCP/IP layer is the network layer. The TCP/IP
network layer has been critical to the success of the stack and, what
is more important, to the success and growth of the Internet.
TCP/IP Stack
IP Layer header
Two hosts on the same network share the Network ID part of the IP
address.
The address hierarchy is as follows:
IP Address Format
11000000.10101000.00000001.00000001 = 192.168.1.1
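As a quick illustration, the conversion from the 32-bit binary form to dotted-decimal notation can be sketched in Python (a small example, not part of the original text):

```python
# Split the 32-bit binary IP address into four octets and convert each
# octet to decimal, joining the results with dots.
bits = "11000000101010000000000100000001"

octets = [bits[i:i + 8] for i in range(0, 32, 8)]
dotted = ".".join(str(int(octet, 2)) for octet in octets)
print(dotted)  # 192.168.1.1
```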
The full IP address space has been divided into what are called classes.
Classes are used to accommodate different network sizes by:
Class A
1 octet (byte) used for the Network ID
First bit of the network ID set to 0 (range of the first octet
varies from 00000000 (0) to 01111111 (127)).
The network IDs 0 and 127 are reserved and cannot be used.
Therefore there is a total of 126 available networks: Network IDs (1-
126).
The total number of hosts that can be part of a network is 2^24 - 2,
because there are 24 bits available for the Host ID and we need to
exclude the 2 reserved IP addresses per network (Network Address
and Broadcast Address).
Class B
2 octets (bytes) used for the Network ID
First bits of the network ID set to 10 (range of the first
octet varies from 10000000 (128) to 10111111 (191)).
The total number of hosts that can be part of a network is 2^16 - 2,
because there are 16 bits available for the Host ID and we need to
exclude the 2 reserved IP addresses per network (Network Address
and Broadcast Address).
Class C
3 octets (bytes) used for the Network ID
First bits of the network ID set to 110 (range of the first
octet varies from 11000000 (192) to 11011111 (223)).
The total number of hosts that can be part of a network is 2^8 - 2,
because there are 8 bits available for the Host ID and we need to
exclude the 2 reserved IP addresses.
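The host counts above all follow the same 2^n - 2 formula; a minimal Python sketch:

```python
# Usable hosts per network = 2**host_bits - 2, since the network address
# and the broadcast address are reserved in every network.
host_bits = {"A": 24, "B": 16, "C": 8}
usable_hosts = {cls: 2 ** bits - 2 for cls, bits in host_bits.items()}

for cls, hosts in usable_hosts.items():
    print(f"Class {cls}: {hosts} hosts per network")
```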
Class D
Multicast addresses. In this class, the first octet ranges from
11100000 (224) to 11101111 (239) and the remaining bits specify the
multicast group.
Class E
Class  Leading  Size of network   Size of rest  Number of    Addresses    Start address  End address
       bits     number bit field  bit field     networks     per network
A      0        8                 24            128          16,777,216   0.0.0.0        127.255.255.255
B      10      16                 16            16,384       65,536       128.0.0.0      191.255.255.255
C      110     24                 8             2,097,152    256          192.0.0.0      223.255.255.255
D      1110    not defined        not defined   not defined  not defined  224.0.0.0      239.255.255.255
E      1111    not defined        not defined   not defined  not defined  240.0.0.0      255.255.255.255
Network A:
There are 16,777,214 possible Host IDs (if we exclude the reserved
network and broadcast addresses)
Network B:
Network B has three interfaces attached to it: two host interfaces and
one router interface.
There are 65,534 possible Host IDs (if we exclude the reserved
network and broadcast addresses)
Network C:
Network C has three interfaces attached to it: two host interfaces and
one router interface.
There are 254 possible Host IDs (if we exclude the reserved network
and broadcast addresses)
It is also true that there are networks that will always be isolated
(never connected to other networks). Why can't we use, on those
isolated networks, IP addresses that are already used in other
locations? Why can't we reuse IP addresses on isolated networks?
In order to allow reuse of IP addresses (duplication), the IP space has
been divided into two different groups of addresses:
A subnet mask is a 32 bit vector where there are as many top-left bits
set to one as the number of bits that are part of the Network ID.
As for the IP addresses, and for simplicity and readability, the subnet
mask is also translated into decimal octet by octet.
Let us perform the AND operation between the IP address and the
subnet mask:

192.168.1.5     = 11000000.10101000.00000001.00000101
AND
255.255.255.128 = 11111111.11111111.11111111.10000000
Equals            11000000.10101000.00000001.00000000
                = 192.168.1.0

For this subnet the Network address is built by setting all the Host ID
bits to 0:
11000000.10101000.00000001.00000000 = 192.168.1.0

For this subnet the Broadcast address is built by setting all the Host ID
bits to 1:
11000000.10101000.00000001.01111111 = 192.168.1.127
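The same AND operation can be verified with Python's standard ipaddress module (a quick sketch, not part of the original text):

```python
import ipaddress

# Bitwise AND of the address and the mask yields the subnet's network address.
address = int(ipaddress.IPv4Address("192.168.1.5"))
mask = int(ipaddress.IPv4Address("255.255.255.128"))
print(ipaddress.IPv4Address(address & mask))  # 192.168.1.0

# The module can also derive the network and broadcast addresses directly.
net = ipaddress.IPv4Network("192.168.1.5/255.255.255.128", strict=False)
print(net.network_address)    # 192.168.1.0
print(net.broadcast_address)  # 192.168.1.127
```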
Subnetting:
If we add one additional bit to the natural mask (/25), we split the
network into two subnetworks (one having bit 25 set to 0 and one
having bit 25 set to 1).
As we have added 1 bit to the natural mask, we have a total of 2^1 = 2
subnets available after the split.
In this case, there are 7 bits left for the Host ID, meaning each of the
subnets will have a total of 2^7 - 2 = 126 valid hosts.
Subnet 1:
Network ID: 192.168.1.00000000=192.168.1.0 (bit 25=0)
Subnet Mask: 255.255.255.128 (/25)
Network Address: 192.168.1.00000000=192.168.1.0 (all Host ID
bits = 0)
Broadcast Address: 192.168.1.01111111=192.168.1.127 (all
Host ID bits = 1)
Valid range of host addresses: 192.168.1.1 to 192.168.1.126
(126 hosts)
Subnet 2:
Network ID: 192.168.1.10000000=192.168.1.128 (bit 25=1)
Subnet Mask: 255.255.255.128 (/25)
Network Address: 192.168.1.10000000=192.168.1.128 (all Host
ID bits = 0)
Broadcast Address: 192.168.1.11111111=192.168.1.255 (all
Host ID bits = 1)
Valid range of host addresses: 192.168.1.129 to 192.168.1.254
(126 hosts)
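The split above can be reproduced with the ipaddress module; this sketch derives both /25 subnets and their usable host counts:

```python
import ipaddress

# Adding one bit to the /24 natural mask splits the network into two /25s.
network = ipaddress.IPv4Network("192.168.1.0/24")
subnets = list(network.subnets(prefixlen_diff=1))

for subnet in subnets:
    usable = subnet.num_addresses - 2  # exclude network and broadcast addresses
    print(subnet, subnet.broadcast_address, usable)
```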
Summarizing
Summarizing is the opposite of subnetting. By summarizing, the
network mask length is reduced and a number of networks are grouped
under the umbrella of a bigger network called a summary or
supernet.
The summary network groups all the combinations of subnets having
a longer network mask, as it includes all the addresses contained in
them.
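As a sketch of summarization with the ipaddress module, the two /25 subnets from the earlier example collapse back into their /24 summary:

```python
import ipaddress

# Two adjacent /25 subnets are grouped under the shorter-mask /24 supernet.
subnets = [ipaddress.IPv4Network("192.168.1.0/25"),
           ipaddress.IPv4Network("192.168.1.128/25")]
summary = list(ipaddress.collapse_addresses(subnets))
print(summary)  # [IPv4Network('192.168.1.0/24')]
```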
Routing
As described before, one of the functions of the TCP/IP network layer
is routing packets, i.e. moving packets across the
network until they reach the destination network/host.
This movement is based on knowledge of the network topology and the
location of the different hosts based on their IP addresses.
Each router has what is called a routing table, where each known
network has an entry containing information about the network,
such as the cost to reach the network, the next router interface to
visit in order to reach the network (next hop), the interface to be
used to reach the network, etc. By means of the routing table,
packets are forwarded among routers and can reach the destination
systems/hosts.
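A routing table lookup can be sketched as a longest-prefix match; the networks, next hops and interface names below are made up for illustration:

```python
import ipaddress

# Hypothetical routing table: destination network -> (next hop, interface).
routes = {
    ipaddress.IPv4Network("10.0.0.0/8"):  ("192.168.0.2", "E0"),
    ipaddress.IPv4Network("10.1.0.0/16"): ("192.168.0.3", "E1"),
    ipaddress.IPv4Network("0.0.0.0/0"):   ("192.168.0.1", "E0"),  # default route
}

def lookup(destination):
    """Return the route entry whose prefix matches most specifically."""
    dest = ipaddress.IPv4Address(destination)
    matches = [net for net in routes if dest in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routes[best]

print(lookup("10.1.2.3"))    # ('192.168.0.3', 'E1') -- /16 beats /8
print(lookup("172.16.0.1"))  # ('192.168.0.1', 'E0') -- falls to the default route
```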
Section Summary
Based on the Network Access Layer rules (you can only communicate
with hosts/systems within your own link/network) any communication
between A and B will involve two hops.
Looking at the layered architecture, we can extract the following
conclusions:
The diagram shows the ARP Tables and Routing Tables on the
different network elements:
- ARP Tables: Contain the mapping between the IP addresses
and the MAC/Physical addresses on each of the network
elements. In the case of Frame Relay networks, the ARP table
looks a little bit different: We will discuss this in detail, but the
table contains the Frame Relay circuit required to reach each of
the destination networks.
- Routing Tables: Contain the next hop IP and designated
interface to reach each of the known networks. They also contain
the default route, i.e. the next hop interface to be used in
case the destination network is not listed within the routing
table.
Other elements to pay attention to within the diagram are:
- MAC addresses of all network interfaces.
- Interface names of all network interfaces: Ethernet interfaces
are called En (n being the interface number) and Frame Relay
interfaces are called Sn (n being the interface number)
We will now follow the end to end communication process where a
web session will be established and maintained between the
web client (Host A) and the web server (Host B).
1. The user opens the web browser application and types the link
of the web page he wants to visit. This web client
application/process needs to communicate with the web server
containing the page described by the link.
2. In order to establish the connection to the web server, the IP
address must be obtained. This is achieved by means of the
DNS application layer protocol.
3. Once the destination IP is known, we are ready to establish a
connection at transport level. Web browsing uses the HTTP
application protocol which runs on top of the TCP
transport protocol. TCP is a reliable connection oriented protocol
requiring initial session establishment through the three
way handshake process (SYN, SYN-ACK, ACK). TCP
segments sent from A to B will use a random source TCP port
(1124) and a well-known destination port (80) in order for
Host B to direct the traffic to the correct process (web server).
Using ports allows traffic to be multiplexed.
4. After establishing the connection, the application will request
the web page from the server; the server will find the
page in its repository and deliver it to the client. The whole
request and delivery process is achieved
through the HTTP protocol. HTTP messages are interchanged
on top of the TCP session. During the whole interchange,
the error recovery and flow control mechanisms at
TCP level will make sure the information is interchanged in a
reliable manner.
5. Once the content of the web page is received, it will get
displayed by the browser application.
6. Finally, once the page has been received, the connection will
be released using the TCP four way connection release
mechanism.
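The steps above can be sketched with Python sockets; the handshake and release happen inside connect(), accept() and close(), and a loopback server on an ephemeral port stands in for the real web server on port 80 (the request and reply bytes are made up):

```python
import socket
import threading

def serve_once(listener):
    conn, _ = listener.accept()          # three-way handshake completed here
    conn.recv(1024)                      # read the HTTP-style request
    conn.sendall(b"HTTP/1.0 200 OK\r\n\r\nhello")
    conn.close()                         # triggers the connection release

listener = socket.socket()
listener.bind(("127.0.0.1", 0))          # ephemeral port stands in for port 80
listener.listen(1)
threading.Thread(target=serve_once, args=(listener,), daemon=True).start()

client = socket.socket()
client.connect(listener.getsockname())   # SYN, SYN-ACK, ACK
client.sendall(b"GET / HTTP/1.0\r\n\r\n")
reply = client.recv(1024)
client.close()
print(reply.decode())
```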
In the following sections we will examine how a packet is built and
sent on all the interfaces of the networks it will traverse until reaching
its final IP destination (Host B)
Section Summary
Ethernet standards define the behavior of the two lower layers of the
OSI Model: data link and physical.
The data link layer in Ethernet is further divided into two sublayers:
FDDI and Token Ring are no longer used due to the lower cost and
higher performance of Ethernet.
CSMA/CD
CSMA/CD stands for Carrier Sense Multiple Access/Collision Detection
and is the protocol used by Ethernet Medium Access Control (MAC)
sublayer to gain access and transmit frames on a shared media.
Each station/host willing to RECEIVE frames from the media will listen
to the information flow (frames on the cable) and will read and process
only those frames containing its unicast Ethernet address, a multicast
Ethernet address representing a group to which the station is
subscribed, or the Ethernet broadcast address.
At this point in time, although the station sensed the media to use
it only when the media is free, it could still be the case that
multiple stations/hosts transmit at the same time based on the fact
that all of them found the media unused.
In case multiple stations/hosts transmit at the same time, the
signals will get mixed in the media, making the frames
unrecognizable, in the same way multiple conversations get mixed up
when several people speak loudly at the same time. This phenomenon
is known as a collision. When a collision happens, all the frames being
transmitted need to be discarded and re-sent into the media. This is
an important circumstance that has to be monitored by all the
transmitting stations (Collision Detect).
The figure shows the typical formats of an Ethernet frame under two
different standards:
Both frame formats are very similar. The main difference is as follows:
Ethernet II uses a field called Type to indicate the network layer
protocol being carried; In the case of IEEE 802.3, this field does not
exist and there is another field (with the same size) called Length
indicating the size in bytes of the network layer packet being
carried. In addition, when IEEE 802.3 format is used, an additional
LLC sublayer header (IEEE 802.2) is added before the payload to
perform specific functions related to the operation of the LLC protocol.
In 1997, the IEEE 802.3 standard was reviewed to include the original
Ethernet format in such a way that networks using IEEE 802.3
standard could indistinctly use the Type field or the combination of
Length + LLC header (IEEE 802.2).
ADVANCED NOTE:
As the maximum payload size of an Ethernet frame is 1500 bytes,
the Length field of the frame can have a maximum value of 1500, i.e.
0x05DC in hexadecimal. In order for the receiving station to easily find
out whether the field carries a length or a type, the protocol type field
always uses values of 0x0600 (1536) and above, so that the receiving
station will consider the field a length when the value is less than or
equal to 0x05DC and will consider it a protocol type when the value is
0x0600 or above. For a full list of protocol values refer to the following
link:
http://www.iana.org/assignments/ethernet-numbers
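A receiver's decision can be sketched as follows; note that per the IEEE convention, Type values start at 0x0600 (1536), leaving the narrow range between 1501 and 1535 undefined:

```python
# Classify the 16-bit field that follows the source MAC address.
def classify(value):
    if value <= 0x05DC:       # 1500 or less: IEEE 802.3 Length
        return "length"
    if value >= 0x0600:       # 1536 or more: Ethernet II Type
        return "type"
    return "undefined"        # 1501-1535 is not assigned

print(classify(1200))    # length (an 802.3 frame with a 1200-byte payload)
print(classify(0x0800))  # type (Ethernet II carrying IPv4)
```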
Ethernet Addressing
A MAC address is 48 bits long and is divided into the fields depicted
by the figure. The first 24 bits are known as the Organizational Unique
Identifier (OUI) while the remaining 24 bits are called the vendor-
assigned station address.
The Organizational Unique Identifier (OUI) is further divided into
the following:
<SPEED>BASE-<MEDIA/CABLE TYPE>
<Speed>: Refers to the speed of the network in Mbps
<Media/Cable Type>: Refers to the type of media and
connector being used
Example:
100BASE-TX: 100Mbps over twisted pair cable with a maximum reach
of 100m
The diagram shows the layout of the UTP cable. The different
conductors are colored in a predefined fashion, allowing them
to be distinguished without ambiguity at both ends of the cable:
There are two ways of assigning the TX and RX signals to the
conductors: T568A and T568B. The difference is that the TX and RX
signals are swapped between the T568A and T568B standards,
meaning the conductors acting as TX in T568A are RX in T568B and
vice versa.
Host: T568A
Router: T568A
Hub, Bridge or Switch: T568B
Hubs replicate the electrical signal received in one of its ports into the
others and, therefore, devices connected to them are virtually within
the same cable, i.e. collision domain.
When a frame is received within one of the switch ports, the switch
must decide among one of the following options:
In order to forward the frame to a given port, the switch uses a MAC
address table stored in its RAM. This table maps each
connected station to the port where the station is connected. The
switch identifies each station based on its MAC address.
In order to build the MAC address Table, the switch listens to the
Ethernet traffic passing through. When a frame is received in one of
the ports the switch extracts the SOURCE MAC address of the
transmitting station. If a given MAC address is used as source MAC in
a port, it basically means the station identified by this MAC is
connected to the port. Following this logic, the switch builds a MAC to
port table (MAC address table) that is used to make forwarding
decisions. This technique is known as Dynamic MAC Learning.
When the MAC addresses are dynamically learnt, they stay in the MAC
address table for a configurable period of time known as the aging time.
Once the aging time has expired, the MAC address is removed from
the MAC address table.
Once the MAC address table is complete, the switch uses the
following forwarding logic:
A switch never forwards a frame to the same port where the frame
was received.
When multiple switches are connected, they build a Layer 2 loop
free topology by means of the STP (Spanning Tree Protocol)
avoiding frames to be forwarded multiple times into the same
network segment.
The MAC table is also known as switching table, bridging table or CAM
(Content Addressable Memory) table.
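The learning and forwarding logic described above can be sketched as follows (aging and STP are omitted; the MAC addresses are shortened for readability):

```python
# MAC address table: MAC -> port, built by dynamic MAC learning.
mac_table = {}
BROADCAST = "ff:ff:ff:ff:ff:ff"

def handle_frame(in_port, src_mac, dst_mac, all_ports):
    mac_table[src_mac] = in_port          # learn: src_mac lives behind in_port
    if dst_mac != BROADCAST and dst_mac in mac_table:
        out_ports = {mac_table[dst_mac]}  # known unicast: forward to one port
    else:
        out_ports = set(all_ports)        # broadcast/unknown unicast: flood
    out_ports.discard(in_port)            # never forward out the receiving port
    return out_ports

ports = [1, 2, 3]
print(handle_frame(1, "aa:aa", BROADCAST, ports))  # {2, 3}: flooded
print(handle_frame(2, "bb:bb", "aa:aa", ports))    # {1}: aa:aa was learnt
```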
Example 1:
1.1: A sends a frame to C
1.2: The frame reaches port number 1
1.3: The switch analyses the Ethernet header:
Looks at the source MAC address to find out whether the
source station is already within the MAC table. In this case
there is a matching entry for the MAC address and the
port.
Looks at the destination MAC address to find out
whether the destination station is contained within the
MAC table. In this case there is an entry pointing to port 3,
where the frame will be forwarded.
1.4: The frame gets forwarded to port 3 as per the MAC address table.
1.5: C receives the frame
Example 2:
2.1: B sends a broadcast frame to all the stations within its broadcast
domain
2.2: The frame reaches port 2
2.3: The switch analyses the frame, discovers it is a broadcast
frame and, therefore, forwards it to all the stations belonging to
the broadcast domain
2.4: The frame reaches all the stations within the broadcast domain: A
and C
Memory Buffering: When storing frames a switch can use one of the
following strategies:
Shared Memory: The switch uses a common memory
repository to store all the frames no matter which port they
were received into.
Port Memory: Each of the switch ports has its own memory
repository for frame storage.
VLAN Trunks
VLANs can span multiple switches, in such a way that two stations
connected to different switches belong to the same VLAN. These
VLANs require layer 2 connectivity (across switches), including the
forwarding of broadcasts along all the ports belonging to the VLAN no
matter the physical location.
Frames from different VLANs are carried on the trunk link. Switches at
both ends of the trunk need to identify which frames belong to
each of the VLANs in order to perform the right forwarding actions
as well as to keep the inter VLAN isolation. VLAN separation on a
trunk is achieved by means of a VLAN Tag added into the existing L2
header of each of the frames traversing the trunk.
There are two protocols describing the format and use of the VLAN
Tag:
Based on the size of the VLAN ID field, we can deduce that only 4096
VLAN IDs exist. VLAN numbers 0 and 4095 are reserved.
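A sketch of how the 4-byte 802.1Q tag is built: a 16-bit TPID (0x8100) followed by the Tag Control Information, whose 12-bit VLAN ID field is what limits the domain to 2^12 = 4096 VLAN numbers:

```python
import struct

def build_dot1q_tag(vlan_id, priority=0, dei=0):
    # TCI layout: 3-bit priority | 1-bit DEI | 12-bit VLAN ID.
    assert 0 <= vlan_id <= 4095
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", 0x8100, tci)  # TPID 0x8100 marks a tagged frame

tag = build_dot1q_tag(vlan_id=100, priority=5)
print(tag.hex())  # 8100a064
```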
Each of the switch members of a VTP domain can act as one of the
following roles:
Server: This is the default mode. VLANs can be configured and
updated on a server switch. Changes are propagated to all
members of the VTP domain.
Transparent: VLANs can be configured and updated on a
transparent switch. Changes are not propagated to the VTP
domain and have only local impact. A VTP transparent switch
does not sync up its VLAN database with any other switch.
Client: VLANs cannot be configured or updated on a client
switch. A VTP client switch syncs up its VLAN database with
other servers and clients.
VTP Operation:
Every time the VLAN configuration of a VTP domain is changed, a new
VLAN database is created. Each database has a revision number; the
one with the highest revision number is the most up to date.
In a VTP transparent switch, the VLAN database revision number is
always zero.
VTP configured switches send periodic updates over all their trunk
interfaces using VLAN 1 (known as the default VLAN) at predefined
intervals (default is 5 mins.). Server switches send, in addition,
asynchronous updates whenever their VLAN configuration has been
modified. Updates use a multicast frame and contain the revision
number of the database operating on the switch sending the update.
In the diagram we can see there are multiple physical paths between
Host A and Host B. Two of them are highlighted (green path and
purple path). In the event of failure of the link connecting switch 2 to
switch 3, the green path won't be available any more, but the network
topology offers a number of alternate paths, the purple path being one
of them.
At this point we can already see that Switch 2 and Switch 3 have
received the broadcast frame twice due to the broadcast
operation in Ethernet networks.
In diagram A, we can see that once the STP protocol has
converged:
A spanning tree rooted at S1 has been built. Only purple
links are active while the remaining links are not utilized.
All the trunk ports are set as (F) Forwarding or (B)
Blocked. Only the links where both ports are in (F) state will
be active at a given point in time.
Imagine the link connecting S2 and S4 goes down. Initially, this event
results in switch S4 being isolated from the switching domain, as the
only active link (S2-S4) failed. STP will automatically recalculate a
new spanning tree by changing the state of the trunk ports in order to
activate or deactivate links.
STP Operation:
1) Every switch in the topology sends a multicast frame called
Bridge Protocol Data Unit (BPDU) at regular intervals (2 sec.
by default). The BPDU frame contains the Bridge ID of the
switch.
2) Switches receiving a BPDU from other switches flood it to all
their connected interfaces, leaving the original Bridge ID
contained in the frame untouched.
3) Switches in the network follow certain steps in order to build
the loop-free spanning tree:
1) Switches select the Root Switch: As BPDU frames
(carrying Bridge IDs) are forwarded by all the switches
into their connected interfaces, after a period of time
every switch in the layer 2 domain is aware of all the
existing switches. If a switch has not seen any other
switch with a lower Bridge ID, it elects itself as root
switch. In case two switches have the same priority,
the one with the lowest MAC address becomes root.
The root switch will transition all its ports into Forwarding state, i.e.
all its ports will be Designated Ports (DPs).
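Root election can be sketched as picking the minimum Bridge ID, with the MAC address acting as tie breaker for equal priorities (the switch names, priorities and MACs below are made up):

```python
# Bridge ID = (priority, MAC address); Python tuple comparison applies
# the MAC tie breaker automatically when priorities are equal.
switches = [
    {"name": "S1", "priority": 32768, "mac": "00:0a:00:00:00:01"},
    {"name": "S2", "priority": 32768, "mac": "00:0a:00:00:00:02"},
    {"name": "S5", "priority": 4096,  "mac": "00:0a:00:00:00:05"},
]

root = min(switches, key=lambda s: (s["priority"], s["mac"]))
print(root["name"])  # S5: lowest priority wins
```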
Once the STP protocol has converged, the network will have:
A loop-free tree topology
A single Root Switch
All the ports in the Root Switch configured as DPs in Forwarding
(F) State
All the remaining switches selecting a single RP as the one
closest to the Root Switch
A single DP on each of the network segments (excluding the
root ports)
Let us look at the depicted topology and follow the Spanning Tree
Algorithm process:
1) Selection of root switch:
In this switching domain S5 has the lowest switch priority
and therefore will become the Root switch.
All the ports of the Root switch are designated ports
and configured in Forwarding (F) state.
2) Selection of Root Ports:
All the non-root switches select as root port the one
having the lowest path distance towards the root switch,
based on the link costs (LC) defined by the standard. In
the majority of cases there is a single port connecting
to the lowest cost path.
In the case of S1, the root switch can be reached through
two different paths having the same cost (S1-S2-S5 and
S1-S3-S5). The Port Priority will be used as tie breaker. In
this case Port 1 becomes the root port as it is the one
having the lowest priority value.
3) Selection of designated ports on non-root switches:
On each of the segments the selected designated port
belongs to the switch closest to the root switch:
Link S1-S3: Closest switch is S3 → S3 Port 1 becomes DP.
Link S1-S2: Closest switch is S2 → S2 Port 1 becomes DP.
Link S2-S4: Closest switch is S4 → S4 Port 1 becomes DP.
Link S2-S3: Closest switch is S3 → S3 Port 2 becomes DP.
Link S4-S3: S4 and S3 are at the same distance → tie
breaker is Bridge ID. S3 Port 3 becomes DP as it
has the lower priority (32768 < 65535).
Once the DP have been selected on each of the segments, all ports
not selected as DP or RP will go into Blocking state.
Convergence Time is the time required for the STP protocol to reach
a stable loop-free logical topology where all the ports in the topology
are in the forwarding or blocking state. Convergence time in 802.1D is
30 to 50 seconds.
Once the network has converged, link or port failures will result in a
topology re-calculation, i.e. in another convergence cycle.
The complete logic for state transitions is depicted in the diagram and
works as follows:
Common Spanning Tree (CST): STP is run once for all the
VLANs. All the VLANs share the same logical loop-free topology.
CST does not allow root switch load sharing but reduces
CPU and memory use on the switches, as they require a single
running instance of STP.
Per-VLAN Spanning Tree + (PVST+): STP runs once per
VLAN configured in the switching environment. Every
VLAN has a different loop-free topology, and the root switch
function can be shared by different switches for different VLANs.
Switches use more CPU and memory to hold the multi-STP
information.
In essence, RSTP behaves the same as STP and follows the same
convergence process:
In order for the different topologies to be elected, new port roles are
defined (Root, Designated, Alternate, Backup, Disabled) and new port
states are also used (Discarding, Learning and Forwarding).