Multiprotocol Label Switching Overview

This chapter describes the Multiprotocol Label Switching (MPLS) distribution protocol. MPLS is a
high-performance packet forwarding technology that integrates the performance and traffic management
capabilities of data link layer (Layer 2) switching with the scalability, flexibility, and performance of
network-layer (Layer 3) routing. It enables service providers to meet challenges brought about by
explosive growth and provides the opportunity for differentiated services without necessitating the
sacrifice of existing infrastructure.
The MPLS architecture is remarkable for its flexibility:
• Data can be transferred over any combination of Layer 2 technologies
• Support is offered for all Layer 3 protocols
• Scaling is possible well beyond anything offered in today’s networks.
Specifically, MPLS can efficiently enable the delivery of IP services over an ATM switched network.
It supports the creation of different routes between a source and a destination on a purely router-based
Internet backbone. Service providers who use MPLS can save money and increase revenue and
productivity.
Procedures for configuring MPLS are provided in the “Configuring Multiprotocol Label Switching”
chapter later in this publication.

Note Label switching on a router requires that Cisco Express Forwarding (CEF) be enabled on that router.
Refer to the CEF feature documentation for configuration information. For more information on
enabling CEF, see the “Configuring Cisco Express Forwarding” chapter in this publication.

This chapter describes MPLS. It contains the following sections:


• MPLS/Tag Switching Terminology
• MPLS Commands and Saved Configurations
• MPLS/Tag Switching CLI Command Summary
• Benefits
• Label Switching Functions
• Distribution of Label Bindings
• MPLS and Routing
• MPLS Traffic Engineering
• MPLS Virtual Private Networks

• MPLS Quality of Service
• MPLS Label Switch Controller
• MPLS Egress NetFlow Accounting

MPLS/Tag Switching Terminology


Beginning with Cisco IOS Release 12.1, the Tag Switching distribution protocol has been replaced with
the MPLS distribution protocol. MPLS supports the following:
• Tag Switching features
• Tag Switching command-line interface (CLI) commands
Table 22 lists tag switching terms (found in earlier releases of this document) and the equivalent MPLS
terms used in this document.

Table 22 Equivalency Table for Tag Switching and MPLS Terms

Old Tag Switching Terminology               New MPLS Terminology
Tag Switching                               Multiprotocol Label Switching (MPLS)
Tag (short for Tag Switching)               MPLS
Tag (item or packet)                        Label
TDP (Tag Distribution Protocol)             LDP (Label Distribution Protocol)
                                            (Cisco TDP and LDP (MPLS Label Distribution Protocol)
                                            are nearly identical in function, but use incompatible
                                            message formats and some different procedures. Cisco is
                                            changing from TDP to a fully compliant LDP.)
Tag Switched                                Label Switched
TFIB (Tag Forwarding Information Base)      LFIB (Label Forwarding Information Base)
TSR (Tag Switching Router)                  LSR (Label Switching Router)
TSC (Tag Switch Controller)                 LSC (Label Switch Controller)
ATM-TSR (ATM Tag Switch Router)             ATM-LSR (ATM Label Switch Router, such as the
                                            Cisco BPX 8650 switch)
TVC (Tag VC, Tag Virtual Circuit)           LVC (Label VC, Label Virtual Circuit)
TSP (Tag Switch Path)                       LSP (Label Switch Path)
XTag ATM (extended Tag ATM port)            XmplsATM (extended MPLS ATM port)

MPLS Commands and Saved Configurations


During the transition period from tag switching to MPLS, if a configuration command has both MPLS
and tag switching forms, the tag switching version is written to saved configurations. For example, you
can configure MPLS hop-by-hop forwarding for a router POS interface by issuing the following
commands:
Router# configure terminal
Router(config)# interface POS3/0
Router(config-if)# mpls ip

In this example, the mpls ip command has a tag switching form (tag-switching ip). After you enter these
commands and save this configuration, or display the running configuration by means of the show
running-config command, the configuration commands appear as follows:
interface POS3/0
tag-switching ip

Saving the tag switching form of commands (that have both tag switching and MPLS forms) allows for
backward compatibility. You can use a new router software image to modify and write configurations,
and then later use configurations created by the new image with earlier software versions that do not
support the MPLS forms of commands.
Using the tag switching forms of the commands allows older software that supports tag switching
commands, but not new MPLS commands, to successfully interpret interface configurations.

MPLS/Tag Switching CLI Command Summary


Table 23 summarizes general-purpose MPLS commands. Except where otherwise noted, these MPLS
commands have been derived from existing tag-switching commands to preserve the familiar syntax of
existing commands that formed the basis for implementing new MPLS functionality.

Table 23 Summary of MPLS Commands Described in this Document

debug mpls adjacency (tag switching form: debug tag-switching adjacency)
  Displays changes to label switching entries in the adjacency database.

debug mpls events (tag switching form: debug tag-switching events)
  Displays information about significant MPLS events.

debug mpls lfib cef (tag switching form: debug tag-switching tfib cef)
  Prints detailed information about label rewrites being created, resolved, and deactivated as CEF
  routes are added, changed, or removed.

debug mpls lfib enc (tag switching form: debug tag-switching tfib enc)
  Prints detailed information about label encapsulations while label rewrites are created or updated
  and placed into the label forwarding information base (LFIB).

debug mpls lfib lsp (tag switching form: debug tag-switching tfib tsp)
  Prints detailed information about label rewrites being created and deleted as TSP tunnels are added
  or removed.

debug mpls lfib state (tag switching form: debug tag-switching tfib state)
  Traces what happens when label switching is enabled or disabled.

debug mpls lfib struct (tag switching form: debug tag-switching tfib struct)
  Traces the allocation and freeing of LFIB-related data structures, such as the LFIB itself, label
  rewrites, and label-info data.

debug mpls packets (tag switching form: debug tag-switching packets)
  Displays labeled packets switched by the host router.

interface atm (tag switching form: interface atm)
  Enters interface configuration mode, specifies ATM as the interface type, and enables the creation
  of a subinterface on the ATM interface.

mpls atm control-vc (tag switching form: tag-switching atm control-vc)
  Configures the VPI and VCI to be used for the initial link to the label switching peer device.

mpls atm vpi (tag switching form: tag-switching atm vpi)
  Configures the range of values to be used in the VPI field for label VCs.

mpls ip (global configuration; tag switching form: tag-switching ip)
  Enables MPLS forwarding of IPv4 packets along normally routed paths for the platform.

mpls ip (interface configuration; tag switching form: tag-switching ip)
  Enables MPLS forwarding of IPv4 packets along normally routed paths for a particular interface.

mpls ip default-route (tag switching form: tag-switching ip default-route)
  Enables the distribution of labels associated with the IP default route.

mpls ip propagate-ttl (tag switching form: tag-switching ip propagate-ttl)
  Sets the time-to-live (TTL) value when an IP packet is encapsulated in MPLS.

mpls ip ttl-expiration pop (no tag switching form)
  Forwards packets using the global IP routing table or the original label stack, depending on the
  number of labels in the packet.

mpls label range (tag switching form: tag-switching tag-range)
  Configures the range of local labels available for use downstream on packet interfaces. Note that
  the syntax of this command differs slightly from its tag-switching counterpart.

mpls mtu (tag switching form: tag-switching mtu)
  Sets the per-interface maximum transmission unit (MTU) for labeled packets.

show mpls forwarding-table (tag switching form: show tag-switching forwarding-table)
  Displays the contents of the label forwarding information base (LFIB).

show mpls interfaces (tag switching form: show tag-switching interfaces)
  Displays information about one or more interfaces that have been configured for label switching.

show mpls label range (no tag switching form)
  Displays the range of local labels available for use on packet interfaces.
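
The following is a minimal configuration sketch that ties several of these commands together on a single
router. The interface name (POS3/0) and the label range and MTU values are examples only; label
switching also requires that CEF be enabled, as noted earlier in this chapter.

Router# configure terminal
Router(config)# ip cef
Router(config)# mpls ip
Router(config)# mpls label range 200 16000
Router(config)# interface POS3/0
Router(config-if)# mpls ip
Router(config-if)# mpls mtu 1512
Router(config-if)# end
Router# show mpls interfaces
Router# show mpls forwarding-table

The two show commands verify that the interface is label switching and display the resulting LFIB
entries.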

Benefits
MPLS provides the following major benefits to service provider networks:
• Scalable support for Virtual Private Networks (VPNs)—MPLS enables VPN services to be
supported in service provider networks, thereby greatly accelerating Internet growth.
The use of MPLS for VPNs provides an attractive alternative to the building of VPNs by means of
either ATM or Frame Relay permanent virtual circuits (PVCs) or various forms of tunneling to
interconnect routers at customer sites.
Unlike the PVC VPN model, the MPLS VPN model is highly scalable and can accommodate
increasing numbers of sites and customers. The MPLS VPN model also supports “any-to-any”
communication among VPN sites without requiring a full mesh of PVCs or the backhauling
(suboptimal routing) of traffic across the service provider network. For each MPLS VPN user, the
network of the service provider appears to function as a private IP backbone over which the user can
reach other sites within the VPN organization, but not the sites of any other VPN organization.
From a user perspective, the MPLS VPN model enables network routing to be dramatically
simplified. For example, rather than needing to manage routing over a topologically complex virtual
backbone composed of many PVCs, an MPLS VPN user can generally employ the backbone of the
service provider as the default route in communicating with all of the other VPN sites.
• Explicit routing capabilities (also called constraint-based routing or traffic engineering)—Explicit
routing employs “constraint-based routing,” in which the path for a traffic flow is the shortest path
that meets the resource requirements (constraints) of the traffic flow.
In MPLS traffic engineering, factors such as bandwidth requirements, media requirements, and the
priority of one traffic flow versus another can be taken into account. These traffic engineering
capabilities enable the administrator of a service provider network to perform the following tasks:
– Control traffic flow in the network
– Reduce congestion in the network
– Make best use of network resources
Thus, the network administrator can specify the amount of traffic expected to flow between various
points in the network (thereby establishing a traffic matrix), while relying on the routing system to
perform the following tasks:
– Calculate the best paths for network traffic
– Set up the explicit paths to carry the traffic
• Support for IP routing on ATM switches (also called IP and ATM integration)—MPLS enables an
ATM switch to perform virtually all of the functions of an IP router. This capability of an ATM
switch stems from the fact that the MPLS forwarding paradigm (namely, label swapping) is exactly
the same as the forwarding paradigm provided by ATM switch hardware.
The key difference between a conventional ATM switch and an ATM label switch is the control
software used by the latter to establish its virtual channel identifier (VCI) table entries. An ATM
label switch uses IP routing protocols and the TDP to establish VCI table entries.
An ATM label switch can function as a conventional ATM switch. In this dual mode, the ATM
switch resources (such as VCI space and bandwidth) are partitioned between the MPLS control
plane and the ATM control plane. The MPLS control plane provides IP-based services, while the
ATM control plane supports ATM-oriented functions, such as circuit emulation or PVC services.
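
As a rough illustration only (not a complete procedure), label switching on an ATM interface is enabled
on an MPLS subinterface, and the control channel and VPI range can be adjusted with the mpls atm
commands listed in Table 23. The interface number, VPI/VCI values, and VPI range below are
placeholders, and the exact subinterface and range syntax can vary by release.

! Create an MPLS subinterface on the ATM interface and enable label switching on it.
interface ATM1/0.1 mpls
 ip unnumbered Loopback0
 mpls ip
 ! Initial link to the label switching peer (the usual default control VC is 0/32).
 mpls atm control-vc 0 32
 ! Range of VPI values available for label VCs.
 mpls atm vpi 2-5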

Label Switching Functions


In conventional Layer 3 forwarding mechanisms, as a packet traverses the network, each router extracts
all the information relevant to forwarding the packet from the Layer 3 header. This information is then
used as an index for a routing table lookup to determine the next hop for the packet.
In the most common case, the only relevant field in the header is the destination address field, but in some
cases other header fields might also be relevant. As a result, the header analysis must be done
independently at each router through which the packet passes. A complicated table lookup must also be
done at each router.
In label switching, the analysis of the Layer 3 header is done only once. The Layer 3 header is then
mapped into a fixed length, unstructured value called a label.


Many different headers can map to the same label, as long as those headers always result in the same
choice of next hop. In effect, a label represents a forwarding equivalence class—that is, a set of packets
that, however different they may be, are indistinguishable by the forwarding function.
The initial choice of a label need not be based exclusively on the contents of the Layer 3 packet header;
for example, forwarding decisions at subsequent hops can also be based on routing policy.
Once a label is assigned, a short label header is added at the front of the Layer 3 packet. This header is
carried across the network as part of the packet. At subsequent hops through each MPLS router in the
network, labels are swapped and forwarding decisions are made by means of MPLS forwarding table
lookup for the label carried in the packet header. Hence, the packet header need not be reevaluated during
packet transit through the network. Because the label is of fixed length and unstructured, the MPLS
forwarding table lookup process is both straightforward and fast.

Distribution of Label Bindings


Each LSR in the network makes an independent, local decision as to which label value to use to represent
a forwarding equivalence class. This association is known as a label binding. Each LSR informs its
neighbors of the label bindings it has made. This awareness of label bindings by neighboring routers is
facilitated by the following protocols:
• TDP—Used to support MPLS forwarding along normally routed paths
• Resource Reservation Protocol (RSVP)—Used to support MPLS traffic engineering
• Border Gateway Protocol (BGP)—Used to support MPLS VPNs
When a labeled packet is being sent from LSR A to the neighboring LSR B, the label value carried by
the IP packet is the label value that LSR B assigned to represent the forwarding equivalence class of the
packet. Thus, the label value changes as the IP packet traverses the network.

MPLS and Routing


A label represents a forwarding equivalence class, but it does not represent a particular path through the
network. In general, the path through the network continues to be chosen by the existing Layer 3 routing
algorithms such as OSPF, Enhanced IGRP, and BGP. That is, at each hop when a label is looked up, the
next hop chosen is determined by the dynamic routing algorithm.

MPLS Traffic Engineering


MPLS traffic engineering software enables an MPLS backbone to replicate and expand upon the traffic
engineering capabilities of Layer 2 ATM and Frame Relay networks. MPLS is an integration of Layer 2
and Layer 3 technologies. By making traditional Layer 2 features available to Layer 3, MPLS enables
traffic engineering. Thus, you can offer in a one-tier network what now can be achieved only by
overlaying a Layer 3 network on a Layer 2 network.
Traffic engineering is essential for service provider and Internet service provider (ISP) backbones. Such
backbones must support a high use of transmission capacity, and the networks must be very resilient so
that they can withstand link or node failures.
MPLS traffic engineering provides an integrated approach to traffic engineering. With MPLS, traffic
engineering capabilities are integrated into Layer 3, which optimizes the routing of IP traffic, given the
constraints imposed by backbone capacity and topology.


Why Use MPLS Traffic Engineering?


WAN connections are an expensive item in an ISP budget. Traffic engineering enables ISPs to route
network traffic to offer the best service to their users in terms of throughput and delay. By making the
service provider more efficient, traffic engineering reduces the cost of the network.
Currently, some ISPs base their services on an overlay model. In the overlay model, transmission
facilities are managed by Layer 2 switching. The routers see only a fully meshed virtual topology,
making most destinations appear one hop away. If you use the explicit Layer 2 transit layer, you can
precisely control how traffic uses available bandwidth. However, the overlay model has numerous
disadvantages. MPLS traffic engineering achieves the traffic engineering benefits of the overlay model
without running a separate network, and without needing a nonscalable, full mesh of router
interconnects.

How MPLS Traffic Engineering Works


MPLS traffic engineering automatically establishes and maintains LSPs across the backbone by using
RSVP. The path that an LSP uses is determined by the LSP resource requirements and network
resources, such as bandwidth.
Available resources are flooded by means of extensions to a link-state-based Interior Gateway Protocol
(IGP).
Traffic engineering tunnels are calculated at the LSP head based on a fit between required and available
resources (constraint-based routing). The IGP automatically routes the traffic onto these LSPs.
Typically, a packet crossing the MPLS traffic engineering backbone travels on a single LSP that connects
the ingress point to the egress point.
MPLS traffic engineering is built on the following Cisco IOS mechanisms:
• IP tunnel interfaces—From a Layer 2 standpoint, an MPLS tunnel interface represents the head of
an LSP. It is configured with a set of resource requirements, such as bandwidth and media
requirements, and priority.
From a Layer 3 standpoint, an LSP tunnel interface is the head-end of a unidirectional virtual link
to the tunnel destination.
• MPLS traffic engineering path calculation module—This calculation module operates at the LSP
head. The module determines a path to use for an LSP. The path calculation uses a link-state
database containing flooded topology and resource information.
• RSVP with traffic engineering extensions—RSVP operates at each LSP hop and is used to signal
and maintain LSPs based on the calculated path.
• MPLS traffic engineering link management module—This module operates at each LSP hop, does
link call admission on the RSVP signalling messages, and does bookkeeping of topology and
resource information to be flooded.
• Link-state IGP (Intermediate System-to-Intermediate System (IS-IS) or OSPF—each with traffic
engineering extensions)—These IGPs are used to globally flood topology and resource information
from the link management module.
• Enhancements to the SPF calculation used by the link-state IGP (IS-IS or OSPF)—The IGP
automatically routes traffic onto the appropriate LSP tunnel based on tunnel destination. Static
routes can also be used to direct traffic onto LSP tunnels.
• Label switching forwarding—This forwarding mechanism provides routers with a Layer 2-like
ability to direct traffic across multiple hops of the LSP established by RSVP signalling.
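
The following sketch shows how these mechanisms map to configuration on a head-end router. The
addresses, the IS-IS NET, the interface names, and the tunnel values are placeholders, and the sketch
assumes IS-IS with traffic engineering extensions as the IGP.

! Enable the TE machinery globally and on each core-facing link; RSVP reserves bandwidth per link.
mpls traffic-eng tunnels
!
interface POS1/0
 ip address 10.0.1.1 255.255.255.252
 mpls traffic-eng tunnels
 ip rsvp bandwidth 100000
!
! Flood TE topology and resource information through the link-state IGP (IS-IS shown here).
router isis
 net 49.0001.0000.0000.0001.00
 metric-style wide
 mpls traffic-eng router-id Loopback0
 mpls traffic-eng level-1
!
! The tunnel interface is the head of the LSP; the path is computed dynamically, and the IGP
! routes traffic onto the tunnel through autoroute.
interface Tunnel1
 ip unnumbered Loopback0
 tunnel destination 10.10.10.4
 tunnel mode mpls traffic-eng
 tunnel mpls traffic-eng bandwidth 1000
 tunnel mpls traffic-eng priority 1 1
 tunnel mpls traffic-eng path-option 1 dynamic
 tunnel mpls traffic-eng autoroute announce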


One approach to engineering a backbone is to define a mesh of tunnels from every ingress device to every
egress device. The MPLS traffic engineering path calculation and signalling modules determine the path
taken by the LSPs for these tunnels, subject to resource availability and the dynamic state of the network.
The IGP, operating at an ingress device, determines which traffic should go to which egress device, and
steers that traffic into the tunnel from ingress to egress.
A flow from an ingress device to an egress device might be so large that it cannot fit over a single link,
so it cannot be carried by a single tunnel. In this case, multiple tunnels between a given ingress and
egress can be configured, and the flow is load-shared among them.
For more information about MPLS, see the following Cisco documentation:
• Cisco IOS Switching Services Configuration Guide, “Multiprotocol Label Switching” chapter
• Cisco IOS Switching Services Command Reference, “Switching Commands Introduction” chapter

Mapping Traffic into Tunnels


This section describes how traffic is mapped into tunnels; that is, how conventional hop-by-hop
link-state routing protocols interact with MPLS traffic engineering capabilities. In particular, this section
describes how the shortest path first (SPF) algorithm, sometimes called a Dijkstra algorithm, has been
enhanced so that a link-state IGP can automatically forward traffic over tunnels that MPLS traffic
engineering establishes.
Link-state protocols, like integrated IS-IS or OSPF, use an SPF algorithm to compute a shortest path tree
from the headend node to all nodes in the network. Routing tables are derived from this shortest path
tree. The routing tables contain ordered sets of destination and first hop information. If a router does
normal hop-by-hop routing, the first hop is over a physical interface attached to the router.
New traffic engineering algorithms calculate explicit routes to one or more nodes in the network. The
originating router views these explicit routes as logical interfaces. In the context of this document, these
explicit routes are represented by LSPs and referred to as traffic engineering tunnels (TE tunnels).
The following sections describe how link-state IGPs can use these shortcuts, and how they can install
routes in the routing table that point to these TE tunnels. These tunnels use explicit routes, and the path
taken by a TE tunnel is controlled by the router that is the headend of the tunnel. In the absence of errors,
TE tunnels are guaranteed not to loop, but routers must agree on how to use the TE tunnels. Otherwise,
traffic might loop through two or more tunnels.

Enhancement to the SPF Computation


During each step of the SPF computation, a router discovers the path to one node in the network, as
follows:
• If that node is directly connected to the calculating router, the first hop information is derived from
the adjacency database.
• If the node is not directly connected to the calculating router, the node inherits the first hop
information from the parents of that node. Each node has one or more parents, and each node is the
parent of zero or more downstream nodes.
For traffic engineering purposes, each router maintains a list of all TE tunnels that originate at this head
end router. For each of those TE tunnels, the router at the tail end is known to the head end router.


During the SPF computation, the TENT (tentative) list stores paths that are possibly the best paths and
the PATH list stores paths that are definitely the best paths. When it is determined that a path is the best
possible path, the node is moved from TENT to PATH. PATH is thus the set of nodes for which the best
path from the computing router has been found. Each PATH entry consists of ID, path cost, and
forwarding direction.
The router must determine the first hop information using one of the following methods:
• Examine the list of tail-end routers directly reachable by a TE tunnel. If there is a TE tunnel to this
node, use the TE tunnel as the first hop.
• If there is no TE tunnel and the node is directly connected, use the first hop information from the
adjacency database.
• If the node is not directly connected and is not directly reachable by a TE tunnel, copy the first hop
information from the parent nodes to the new node.
As a result of this computation, traffic to nodes that are the tail end of TE tunnels flows over the
TE tunnels. Traffic to nodes that are downstream of the tail-end nodes also flows over the TE tunnels. If
there is more than one TE tunnel to different intermediate nodes on the path to destination node X, traffic
flows over the TE tunnel whose tail-end node is closest to node X.

Special Cases and Exceptions


The SPF algorithm finds equal-cost parallel paths to destinations. The enhancement previously described
does not change this behavior. Traffic can be forwarded over any of the following:
• One or more native IP paths
• One or more traffic engineering tunnels
• A combination of native IP paths and traffic engineering tunnels
A special situation occurs in the topology shown in Figure 24.

Figure 24 Sample Topology of Parallel Native Paths and Paths over TE Tunnels



If parallel native IP paths and paths over TE tunnels are available, the enhanced SPF computation
determines the first hop information as follows; you can later adjust tunnel metrics to force traffic onto
TE tunnels only or onto native IP paths only. Assume that all links have the same cost and that a
TE tunnel is set up from Router A to Router D.
• When the SPF calculation puts Router C on the TENT list, it realizes that Router C is not directly
connected. It uses the first hop information from the parent, which is Router B.
• When the SPF calculation on Router A puts Router D on the TENT list, it realizes that Router D is
the tail end of a TE tunnel. Thus Router A installs a route to Router D by the TE tunnel, and not by
Router B.
• When Router A puts Router E on the TENT list, it realizes that Router E is not directly connected,
and that Router E is not the tail end of a TE tunnel. Therefore Router A copies the first hop
information from the parents (Router C and Router D) to the first-hop information of Router E.
Traffic to Router E now load balances over the following:
• The native IP path from Router A through Router B to Router C
• The TE tunnel from Router A to Router D

Additional Enhancements to SPF Computation Using Configured Tunnel Metrics


When traffic engineering tunnels are installed as next hops for an IGP route in the Routing Information
Base (RIB), the distance or metric of the route must be calculated. Normally, you could make the metric
the same as the IGP metric over native IP paths, as if the TE tunnels did not exist. For example, Router A can reach
Router C with the shortest distance of 20. X is a route advertised in IGP by Router C. Route X is installed
in the RIB of Router A with the metric of 20. When a TE tunnel from Router A to Router C comes up,
by default the route is installed with a metric of 20, but the next hop information for X is changed.
Although the same metric scheme can work well in other situations, for some applications it is useful to
change the TE tunnel metric (for instance, when there are equal cost paths through TE tunnel and native
IP links). You can adjust TE tunnel metrics to force the traffic to prefer the TE tunnel, to prefer the native
IP paths, or to load share among them.
Suppose that multiple TE tunnels go to the same destination or different destinations. TE tunnel metrics
can force the traffic to prefer some TE tunnels over others, regardless of IGP distances to those
destinations.
Setting metrics on TE tunnels does not affect the basic SPF algorithm. It affects only two questions:
• Is the TE tunnel installed as one of the next hops to the destination routers?
• What is the metric value of the routes being installed into the RIB?
You can modify the metrics for determining the first hop information in one of the following ways:
• If the metric of the TE tunnel to the tail end routers is higher than the metric for the other TE tunnels
or native hop-by-hop IGP paths, this tunnel is not installed as the next hop.
• If the metric of the TE tunnel is equal to the metric of either other TE tunnels or native hop-by-hop
IGP paths, this tunnel is added to the existing next hops.
• If the metric of the TE tunnel is lower than the metric of other TE tunnels or native hop-by-hop IGP
paths, this tunnel replaces them as the only next hop.
In each of these cases, the IGP assigns metrics to routes associated with those tail end routers and their
downstream routers.


The SPF computation is loop free because the traffic through the TE tunnels is basically source routed.
The result of TE tunnel metric adjustment is the control of traffic load sharing. If there is only one way
to reach the destination through a single TE tunnel, then no matter what metric is assigned, the traffic
has only one way to go.
You can represent the TE tunnel metric in two different ways: as an absolute (or fixed) metric, or as a
relative (or floating) metric.
If you use an absolute metric, the routes assigned with the metric are fixed. This metric is used not only
for the routes sourced on the TE tunnel tail end router, but also for each route downstream of this tail
end router that uses this TE tunnel as one of its next hops.
For example, if you have TE tunnels to two core routers in a remote point of presence (POP), and one of
them has an absolute metric of 1, all traffic going to that POP traverses this low-metric TE tunnel.
If you use a relative metric, the actual assigned metric value of routes is based on the IGP metric. This
relative metric can be positive or negative, and is bounded by minimum and maximum allowed metric
values. For example, assume the topology shown in Figure 25.

Figure 25 Topology That Has No Traffic Engineering Tunnel (Routers A, B, C, D, and E connected in a
chain with a metric of 10 on each link; subnets x, y, and z; MPLS TE tunnel T1)

If there is no TE tunnel, Router A installs routes x, y, and z and assigns metrics 20, 30, and 40,
respectively. Suppose that Router A has a TE tunnel T1 to Router C. If the relative metric –5 is used on
tunnel T1, routes x, y, and z are installed with metrics of 15, 25, and 35. If an absolute metric of 5 is
used on tunnel T1, routes x, y, and z all have the metric 5 installed in the RIB for Router A. Assigning
no metric on the TE tunnel is a special case, equivalent to a relative metric of 0.
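
As a sketch of how these metrics are configured, the autoroute metric of a TE tunnel can be set in
relative or absolute form on the tunnel interface at the head end. The tunnel numbers and values shown
here are examples only.

interface Tunnel1
 ! Install routes reached over this tunnel with the IGP metric lowered by 5.
 tunnel mpls traffic-eng autoroute metric relative -5
!
interface Tunnel2
 ! Install routes reached over this tunnel with a fixed metric of 5.
 tunnel mpls traffic-eng autoroute metric absolute 5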

Making the Transition from an IS-IS Network to a New Technology


IS-IS includes extensions for MPLS traffic engineering and for other purposes. Running MPLS traffic
engineering over IS-IS, or taking advantage of these other extensions, requires a transition of an IS-IS
network to this new technology. This section describes these extensions and discusses two ways to
migrate an existing IS-IS network from the standard ISO 10589 protocol to IS-IS with the new extensions.

Note Running MPLS traffic engineering over an existing IS-IS network requires a transition to
incorporating extensions to IS-IS. However, running MPLS traffic engineering over OSPF does not
require any similar network transition.


New Extensions for the IS-IS Routing Protocol


New extensions for the IS-IS routing protocol serve the following purposes:
• Remove the 6-bit limit on link metrics.
• Allow interarea IP routes.
• Enable IS-IS to carry different kinds of information for traffic engineering. In the future, more
extensions might be needed.
To serve these purposes, two new type, length, and value objects (TLVs) have been defined:
• TLV 22 describes links (or rather adjacencies). It serves the same purpose as the IS neighbor option
in ISO 10589 (TLV 2).
• TLV 135 describes reachable IP prefixes. It is similar to the IP Neighbor options from RFC 1195
(TLVs 128 and 130).

Note For brevity, these two new TLVs, 22 and 135, are referred to as “new-style TLVs.”
TLVs 2, 128, and 130 are referred to as “old-style TLVs.”

Both new TLVs have a fixed length part, followed by optional sub-TLVs. The metric space in these new
TLVs has been enhanced from 6 bits to 24 or 32 bits. The sub-TLVs allow you to add new properties to
links and prefixes. Traffic engineering is the first technology to use this ability to add new properties to
a link.

The Problem in Theory


Link-state routing protocols compute loop-free routes. This is guaranteed because all routers calculate
their routing tables based on the same information from the link-state database.
There is a problem when some routers look at old-style TLVs and some routers look at new-style TLVs
because the routers can base their SPF calculations on different information. This can cause routing
loops.

The Problem in Practice


The easiest way to migrate from old-style TLVs to new-style TLVs would be to introduce a “flag day.”
A flag day means that you reconfigure all routers during a short period of time, during which service is
interrupted. If a flag day is not acceptable, a network administrator needs to find a viable migration
path for an existing network.
Network administrators have the following problems related to TLVs:
• They need to run an IS-IS network where some routers are advertising and using the new-style TLVs
and, at the same time, other routers are capable only of advertising and using old-style TLVs.
• They need to test new traffic engineering software in existing networks on a limited number of
routers. They cannot upgrade all their routers in their production networks or in their test networks
before they start testing.
The new extensions allow a network administrator to use old-style TLVs in one area, and new-style TLVs
in another area. However, this is not a solution for administrators that need or want to run their network
in one single area.
The following sections describe two solutions to the problem of the network administrator.


First Solution for Making the Transition from an IS-IS Network to a New Technology
When you migrate from old-style TLVs to new-style TLVs, you can advertise the same information
twice—once in old-style TLVs and once in new-style TLVs. This ensures that all routers can understand
what is advertised.
There are three disadvantages to using that approach:
• Size of the LSPs—During the transition, the LSPs grow to about twice their original size. This might
be a problem in networks where the link-state database is large. A link-state database might be large
for the following reasons:
– There are many routers, and thus LSPs.
– There are many neighbors or IP prefixes per router. A router that advertises substantial
information causes the LSPs to be fragmented.
• Unpredictable results—In a large network, this solution can produce unpredictable results. A large
network in transition pushes the limits regarding LSP flooding and SPF scaling. During the
transition, the following behavior might occur:
– You can expect some extra network instability.
– Traffic engineering extensions might cause LSPs to be reflooded frequently.
• Ambiguity—If a router encounters different information in the old-style TLVs and the new-style
TLVs, it may not be clear what the router should do.
These problems can be largely solved by using the following:
• All information in old-style and new-style TLVs in an LSP
• The adjacency with the lowest link metric if an adjacency is advertised more than once
The main benefit to advertising the same information twice is that network administrators can use
new-style TLVs before all routers in the network can understand them.

Transition Actions During the First Solution

When making the transition from using IS-IS with old-style TLVs to new-style TLVs, you can perform
the following actions:
• If all routers run old software, advertise and use only old-style TLVs.
• Upgrade some routers to newer software.
• Configure some routers with new software to advertise both old-style and new-style TLVs. They
accept both styles of TLVs. Configure other routers (with old software) to continue advertising and
using only old-style TLVs.
• Test traffic engineering in parts of your network; however, new-style TLVs cannot be used yet.
• If the whole network needs to migrate, upgrade and configure all remaining routers to advertise and
accept both styles of TLVs.
• Configure all routers to advertise and accept only new-style TLVs.
• Configure metrics larger than 63.
For more information about how to perform these actions, see the section “TLV Configuration
Commands.”


Second Solution for Making the Transition from an IS-IS Network to a New Technology
Routers advertise only one style of TLVs at the same time, but can understand both types of TLVs during
migration. There are two main benefits to this approach:
• LSPs stay approximately the same size during migration.
• There is no ambiguity when the same information is advertised twice inside one LSP.
This method is useful when you move the whole network (or a whole area) to use wider metrics (that is,
you want a router running IS-IS to generate and accept only new-style TLVs). For more information, see
the metric-style wide router configuration command.
The disadvantage is that all routers must understand the new-style TLVs before any router can start
advertising new-style TLVs. It does not help the second problem, where network administrators want to
use the new-style TLVs for traffic engineering, while some routers are capable of understanding only
old-style TLVs.

Transition Actions During the Second Solution

If you use the second solution, you can perform the following actions:
• If all routers run old software, advertise and use only old-style TLVs.
• Upgrade all routers to newer software.
• Configure all routers one-by-one to advertise old-style TLVs, but to accept both styles of TLVs.
• Configure all routers one-by-one to advertise new-style TLVs, but to accept both styles of TLVs.
• Configure all routers one-by-one to advertise and to accept only new-style TLVs.
• Configure metrics larger than 63.

TLV Configuration Commands


Cisco IOS software has a new router isis CLI command called metric-style. Once you are in the router
IS-IS command mode, you have the option to choose the following:
• Metric-style narrow—Enables the router to generate and accept only old-style TLVs
• Metric-style transition—Enables the router to generate and accept both old-style and new-style
TLVs
• Metric-style wide—Enables the router to generate and accept only new-style TLVs
You can use either of two transition schemes with the metric-style commands, as shown in the sketch
following this list:
• narrow, then transition, then wide
• narrow, then narrow transition, then wide transition, then wide (the narrow transition and wide
transition forms advertise one style of TLVs while accepting both)
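
For example, a sketch of the second scheme on one router might look like the following, applied one
router at a time across the network. The IS-IS NET is a placeholder, and the later steps are shown as
comments to indicate when they would be configured.

router isis
 net 49.0001.0000.0000.0002.00
 ! Step 1: advertise old-style TLVs but accept both styles.
 metric-style narrow transition
 ! Step 2 (later): advertise new-style TLVs but accept both styles.
 ! metric-style wide transition
 ! Step 3 (later): advertise and accept only new-style TLVs.
 ! metric-style wide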

Implementation in Cisco IOS Software


Cisco IOS software implements both transition solutions of moving your IS-IS network to a new
technology. Network administrators can choose the solution that suits them. For test networks, the first
solution is ideal (see the section “First Solution for Making the Transition from an IS-IS Network to a
New Technology”). For a real transition, both solutions can be used. The first solution requires fewer

steps and less configuration. Only the largest networks that do not want to double their link-state
database during transition need to use the second solution (see the “Second Solution for Making the
Transition from an IS-IS Network to a New Technology”).

MPLS Virtual Private Networks


Using MPLS VPNs in a Cisco IOS network provides the capability to deploy and administer scalable
Layer 3 VPN backbone services, including applications, data hosting, network commerce, and telephony
services, to business customers. A VPN is a secure IP-based network that shares resources on one or more
physical networks. A VPN contains geographically dispersed sites that can communicate securely over
a shared backbone.
A one-to-one relationship does not necessarily exist between customer sites and VPNs; a given site can
be a member of multiple VPNs. However, a site can associate with only one VPN routing and forwarding
instance (VRF). Each VPN is associated with one or more VPN VRFs. A VRF includes routing and
forwarding tables and rules that define the VPN membership of customer devices attached to CE routers.
A VRF consists of the following:
• IP routing table
• CEF table
• Set of interfaces that use the CEF forwarding table
• Set of rules and routing protocol parameters to control the information in the routing tables
VPN routing information is stored in the IP routing table and the CEF table for each VRF. A separate
set of routing and CEF tables is maintained for each VRF. These tables prevent information from being
forwarded outside a VPN and also prevent packets that are outside a VPN from being forwarded to a
router within the VPN.
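
A minimal sketch of defining a VRF and attaching an interface to it follows. The VRF name, route
distinguisher, route targets, interface, and addresses are hypothetical.

! Define the VRF, its route distinguisher, and the route targets it exports and imports.
ip vrf CustomerA
 rd 100:1
 route-target export 100:1
 route-target import 100:1
!
! Place the customer-facing interface into the VRF; its routes now populate the VRF tables.
interface FastEthernet1/0
 ip vrf forwarding CustomerA
 ip address 192.168.1.1 255.255.255.0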
The following sections provide more information on MPLS VPNs:
• Benefits
• VPN Operation
• Distribution of VPN Routing Information
• BGP Distribution of VPN Routing Information
• MPLS Forwarding
• MPLS VPN Cable Interfaces
• Interautonomous Systems for MPLS VPNs
• HSRP Support for MPLS VPNs

Benefits
MPLS VPNs allow service providers to deploy scalable VPNs and build the foundation to deliver
value-added services, including the following:
• Connectionless service—A significant technical advantage of MPLS VPNs is that they are
connectionless. The Internet owes its success to its basic technology, TCP/IP. TCP/IP is built on
packet-based, connectionless network paradigm. This means that no prior action is necessary to
establish communication between hosts, making it easy for two parties to communicate. To establish
privacy in a connectionless IP environment, current VPN solutions impose a connection-oriented,
point-to-point overlay on the network. Even if it runs over a connectionless network, a VPN cannot
take advantage of the ease of connectivity and multiple services available in connectionless
networks. When you create a connectionless VPN, you do not need tunnels and encryption for
network privacy, thus eliminating substantial complexity.
• Centralized service—Building VPNs in Layer 3 allows delivery of targeted services to a group of
users represented by a VPN. A VPN must give service providers more than a mechanism for
privately connecting users to intranet services. It must also provide a way to flexibly deliver
value-added services to targeted customers. Scalability is critical, because customers want to use
services privately in their intranets and extranets. Because MPLS VPNs are seen as private intranets,
you may use IP services such as the following:
– Multicast
– Quality of service (QoS)
– Telephony support within a VPN
– Centralized services including content and web hosting to a VPN
You can customize several combinations of specialized services for individual customers.
For example, a service that combines IP multicast with a low-latency service class enables
videoconferencing within an intranet.
• Scalability—If you create a VPN using connection-oriented, point-to-point overlays, Frame Relay,
or ATM virtual connections, the key deficiency of the VPN is scalability. Specifically,
connection-oriented VPNs without fully meshed connections between customer sites are not
optimal. MPLS-based VPNs instead use the peer model and Layer 3 connectionless architecture to
leverage a highly scalable VPN solution. The peer model requires a customer site to only peer with
one provider edge (PE) router as opposed to all other CPE or CE routers that are members of the
VPN. The connectionless architecture allows the creation of VPNs in Layer 3, eliminating the need
for tunnels or virtual connections.
MPLS VPNs also improve scalability by partitioning VPN routes between PE routers and by further
partitioning VPN and IGP routes between PE routers and provider (P) routers in a core network:
– PE routers must maintain VPN routes only for those VPNs of which they are members.
– P routers do not maintain any VPN routes.
This increases the scalability of the provider’s core and ensures that no one device is a scalability
bottleneck.
• Security—MPLS VPNs offer the same level of security as connection-oriented VPNs. Packets from
one VPN do not inadvertently go to another VPN. Security is provided
– At the edge of a provider network, ensuring that packets received from a customer are placed
on the correct VPN.
– At the backbone, ensuring that VPN traffic is kept separate. Malicious spoofing (an attempt to
gain access to a PE router) is nearly impossible because the packets received from customers
are IP packets. These IP packets must be received on a particular interface or subinterface to be
uniquely identified with a VPN label.
• Easy to create—To take full advantage of VPNs, it must be easy for you to create new VPNs and
user communities. Because MPLS VPNs are connectionless, no specific point-to-point connection
maps or topologies are required. You can add sites to intranets and extranets and form closed user
groups. When you manage VPNs in this manner, it enables membership of any given site in multiple
VPNs, maximizing flexibility in building intranets and extranets.


• Flexible addressing—To make a VPN service more accessible, customers of a service provider can
design their own addressing plan, independent of addressing plans for other service provider
customers. Many customers use private address spaces and do not want to invest the time and
expense of converting to public IP addresses to enable intranet connectivity. MPLS VPNs allow
customers to continue to use their present address spaces without Network Address Translation
(NAT) by providing a public and private view of the address. A NAT is required only if two VPNs
with overlapping address spaces want to communicate. This enables customers to use their own
unregistered private addresses, and to communicate freely across a public IP network.
• Integrated Quality of Service (QoS) support—QoS is an important requirement for many IP VPN
customers. It provides the ability to address two fundamental VPN requirements:
– Predictable performance and policy implementation
– Support for multiple levels of service in an MPLS VPN
Network traffic is classified and labeled at the edge of the network before traffic is aggregated
according to policies defined by subscribers and implemented by the provider and transported across
the provider core. Traffic at the edge and core of the network can then be differentiated into different
classes by drop probability or delay.
• Straightforward migration—MPLS VPNs give service providers a straightforward migration path
for deploying VPN services quickly. MPLS VPNs are unique because you can build them over multiple
network architectures, including IP, ATM, Frame Relay, and hybrid networks.
Migration for the end customer is simplified because there is no requirement to support MPLS on
the CE router and no modifications are required to the customer's intranet.
Figure 26 shows an example of a VPN with a service provider (P) backbone network, service
provider edge routers (PE), and customer edge routers (CE).

Figure 26 VPNs with a Service Provider Backbone (customer sites in VPN 1 and VPN 2; CE routers at
each site connect to PE routers at the edge of the service provider backbone, which contains P routers)

A VPN contains customer devices attached to the CE routers. These customer devices use VPNs to
exchange information between devices. Only the PE routers are aware of the VPNs.


Figure 27 shows five customer sites communicating within three VPNs. The VPNs can communicate
with the following sites:
• VPN1—Sites 2 and 4
• VPN2—Sites 1, 3, and 4
• VPN3—Sites 1, 3, and 5

Figure 27 Customer Sites within VPNs (five customer sites and three overlapping VPNs, as listed above)

Increased BGP Functionality
The following is a list of increased BGP functionality:
• Configuring BGP hub and spoke connections—Configuring PE routers in a hub and spoke
configuration allows a CE router to readvertise all prefixes containing duplicate autonomous system
numbers (ASNs) to neighboring PE routers. Using duplicate ASNs in a hub and spoke configuration
provides faster convergence of routing information within geographically dispersed locations.
• Configuring faster convergence for BGP VRF routes—Configuring scanning intervals of BGP
routers decreases import processing time of VPNv4 routing information, thereby providing faster
convergence of routing information. Routing tables are updated with routing information about
VPNv4 routes learned from PE routers or route reflectors.
• Limiting VPN VRFs—Limiting the number of routes in a VRF prevents a PE router from importing
too many routes, thus diminishing the performance of a router. This enhancement can also be used
to enforce the maximum number of members that can join a VPN from a particular site. A threshold
is set in the VRF routing table to limit the number of VRF routes imported.
• Reusing ASNs in an MPLS VPN environment—Configuring a PE router to reuse an existing ASN
allows customers to configure BGP routes with the same ASNs in multiple geographically dispersed
sites, providing better scalability between sites.
• Distributing BGP OSPF routing information—Setting a separate router ID for each interface or
subinterface on a PE router attached to multiple CE routers within a VPN provides increased
flexibility through OSPF when routers exchange routing information between sites.
Table 24 lists the MPLS VPN features and the associated BGP commands.


Table 24 MPLS VPN Features and the Associated BGP Commands

Configuring Faster Convergence for BGP VRF Routes (command: bgp scan-time import)
  Configures scanning intervals of BGP routers to decrease import processing time of routing
  information.

Limiting VRF Routes (command: maximum routes)
  Limits the number of routes in a VRF to prevent a PE router from importing too many routes.

Configuring BGP Hub and Spoke Connections (command: neighbor allowas-in)
  Configures PE routers to allow CE routers to readvertise all prefixes that contain duplicate ASNs
  to neighboring PE routers.

Reusing ASNs in an MPLS VPN Environment (command: neighbor as-override)
  Configures a PE router to reuse the same ASN on all sites within an MPLS VPN by overriding
  private ASNs.

Distributing BGP OSPF Routing Information (command: set ospf router-id)
  Sets a separate router ID for each interface or subinterface on the PE router for each directly
  attached CE router.
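
A hedged sketch of several of the Table 24 commands on a PE router follows. The AS numbers,
neighbor addresses, VRF name, and limits are placeholders, and the set ospf router-id command is not
shown.

router bgp 100
 address-family vpnv4
  ! Shorten the import scan interval for faster convergence of VPNv4 routes.
  bgp scan-time import 15
 exit-address-family
 !
 address-family ipv4 vrf CustomerA
  neighbor 192.168.1.2 remote-as 65001
  ! Hub-and-spoke: let the CE readvertise prefixes that already carry this ASN.
  neighbor 192.168.1.2 allowas-in 2
  ! Replace the customer ASN with the provider ASN so sites can reuse the same ASN.
  neighbor 192.168.1.2 as-override
  neighbor 192.168.1.2 activate
 exit-address-family
!
ip vrf CustomerA
 ! Limit the VRF to 1000 routes, with a warning at 80 percent of that limit.
 maximum routes 1000 80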

VPN Operation
Each VPN is associated with one or more VRFs. A VRF defines the VPN membership of a customer site
attached to a PE router. A VRF consists of an IP routing table, a derived CEF table, a set of interfaces
that use the forwarding table, and a set of rules and routing protocol parameters that control the
information that is included in the routing table.
A one-to-one relationship does not necessarily exist between customer sites and VPNs. A given site can
be a member of multiple VPNs, as shown in Figure 27. However, a site can associate with one (and
only one) VRF. A customer site's VRF contains all the routes available to the site from the VPNs of
which it is a member.
Packet forwarding information is stored in the IP routing table and the CEF table for each VRF.
A separate set of routing and CEF tables is maintained for each VRF. These tables prevent information
from being forwarded outside a VPN, and also prevent packets that are outside a VPN from being
forwarded to a router within the VPN.


Distribution of VPN Routing Information


The distribution of VPN routing information is controlled through the use of VPN route target
communities, implemented by BGP extended communities. Distribution of VPN routing information
works as follows:
• When a VPN route learned from a CE router is injected into BGP, a list of VPN route target extended
community attributes is associated with it. Typically the list of route target extended community
values is set from an export list of route targets associated with the VRF from which the route was
learned.
• An import list of route target extended communities is associated with each VRF. The import list
defines route target extended community attributes that a route must have in order for the route to
be imported into the VRF. For example, if the import list for a particular VRF includes route target
extended communities A, B, and C, then any VPN route that carries any of those route target
extended communities—A, B, or C—is imported into the VRF.
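
For illustration, the hypothetical route targets 100:1, 100:2, and 100:3 below stand in for the extended
communities A, B, and C in the example above; any VPN route carrying one of them would be imported
into this VRF.

ip vrf Extranet
 rd 100:10
 route-target export 100:10
 ! Import routes that carry any of these route target extended communities.
 route-target import 100:1
 route-target import 100:2
 route-target import 100:3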

BGP Distribution of VPN Routing Information


A PE router can learn an IP prefix from a CE router by static configuration, through a BGP session with
the CE router, or through the Routing Information Protocol (RIP) exchange with the CE router. The IP
prefix is a member of the IPv4 address family. After it learns the IP prefix, the PE converts it into a
VPN-IPv4 prefix by combining it with an 8-byte route distinguisher (RD). The generated prefix is a
member of the VPN-IPv4 address family. It uniquely identifies the customer address, even if the
customer site is using globally nonunique (unregistered private) IP addresses.
The RD used to generate the VPN-IPv4 prefix is specified by a configuration command associated with
the VRF on the PE router.
BGP distributes reachability information for VPN-IPv4 prefixes for each VPN. BGP communication
takes place at two levels: within IP domains, known as autonomous systems (Interior BGP or IBGP) and
between autonomous systems (Exterior BGP or EBGP). PE-PE or PE-RR (route reflector) sessions are
IBGP sessions, and PE-CE sessions are EBGP sessions.
BGP propagates reachability information for VPN-IPv4 prefixes among PE routers by means of the BGP
multiprotocol extensions, which define support for address families other than IPv4. It does this in a way
that ensures that the routes for a given VPN are learned only by other members of that VPN, enabling
members of the VPN to communicate.
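
A sketch of the corresponding BGP configuration on a PE router follows: an IBGP session to another
PE router (or route reflector) carries VPN-IPv4 prefixes with their extended communities, and an EBGP
session to the CE router runs inside the VRF. All addresses, AS numbers, and the VRF name are
placeholders.

router bgp 100
 ! IBGP session to the remote PE router, sourced from the loopback.
 neighbor 10.10.10.2 remote-as 100
 neighbor 10.10.10.2 update-source Loopback0
 !
 address-family vpnv4
  ! Carry VPN-IPv4 prefixes and their route target extended communities to the peer.
  neighbor 10.10.10.2 activate
  neighbor 10.10.10.2 send-community extended
 exit-address-family
 !
 address-family ipv4 vrf CustomerA
  ! EBGP session to the CE router inside the VRF.
  neighbor 192.168.1.2 remote-as 65001
  neighbor 192.168.1.2 activate
 exit-address-family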

MPLS Forwarding
Based on routing information stored in the VRF IP routing table and VRF CEF table, packets are
forwarded to their destination using MPLS.
A PE router binds a label to each customer prefix learned from a CE router and includes the label in the
network-layer reachability information (NLRI) for the prefix that it advertises to other PE routers. When
a PE router forwards a packet received from a CE router across the provider network, it labels the packet
with the label learned from the destination PE router. When the destination PE router receives the labeled
packet, it pops the label and uses it to direct the packet to the correct CE router. Label forwarding across
the provider backbone is based on either dynamic label switching or traffic engineered paths. A
customer data packet carries two levels of labels when traversing the backbone:
• The top label directs the packet to the correct PE router.
• The second label indicates how that PE router should forward the packet to the CE router.


MPLS VPN Cable Interfaces


Using MPLS VPN technology, service providers can create scalable and efficient private networks using
a shared hybrid fiber coaxial (HFC) network and IP infrastructure.
The cable MPLS VPN network consists of the following:
• The multiple service operator (MSO) or cable company that owns the physical infrastructure and
builds VPNs for the ISPs to move traffic over the cable and IP backbone.
• ISPs that use the HFC network and IP infrastructure to supply Internet service to cable customers.
Each ISP moves traffic to and from the PC of a subscriber, through the physical network infrastructure
of the MSO, to the network of the ISP. MPLS VPNs, created in Layer 3, provide privacy and security by
constraining the distribution of the routes of a VPN only to the routers that belong to its network. Thus,
each VPN of the ISP is insulated from other ISPs that use the same MSO infrastructure.
An MPLS VPN assigns a unique VRF instance to each VPN. A VRF instance consists of an IP routing
table, a derived forwarding table, a set of interfaces that use the forwarding table, and a set of rules and
routing protocols that determine the contents of the forwarding table.
Each PE router maintains one or more VRF tables. It looks up the IP destination address of a packet in the appropriate VRF table only if the packet arrives directly through an interface associated with that table.
MPLS VPNs use a combination of BGP and IP address resolution to ensure security. Refer to the
“Configuring Multiprotocol Label Switching” chapter later in this publication.
Figure 28 shows a cable MPLS VPN network. The routers in the network are as follows:
• Provider (P) router—Routers in the core of the provider network. P routers run MPLS switching,
and do not attach VPN labels (MPLS label in each route assigned by the PE router) to routed
packets. VPN labels are used to direct data packets to the correct egress router.
• PE router—Router that attaches the VPN label to incoming packets based on the interface or
subinterface on which they are received. A PE router attaches directly to a CE router. In the MPLS
VPN approach, each Cisco uBR7200 series router acts as a PE router.
• Customer (C) router—Router in the ISP or enterprise network.
• Customer Edge (CE) router—Edge router on the network of the ISP that connects to the PE router
on the network of the MSO. A CE router must interface with a PE router.
The MPLS network has a unique VPN, called the management VPN, that exclusively manages the devices of the MSO. It contains servers and devices that other VPNs can access. The management VPN
connects the Cisco uBR7200 series router to a PE router, which connects to management servers such
as Cisco Network Registrar (CNR) and Time of Day (ToD) servers. A PE router connects to management
servers and is a part of the management VPN. Regardless of the ISP they belong to, the management
servers serve the Dynamic Host Configuration Protocol (DHCP), DNS (Domain Name System), and ToD
requests coming from PCs or cable modems.


Figure 28 MPLS VPN Network

The figure shows the MSO provider core built around a Cisco uBR7246 connected to the HFC cable network. CE and PE routers carry the ISP-A and ISP-B VPNs to their respective VPN customers, and a management router on the management subnet attaches through the management VPN.
Cable VPN configuration involves the following:
• MSO domain that requires a direct peering link to each enterprise network (ISP), provisioning
servers for residential and commercial subscribers, and dynamic DNS for commercial users. The
MSO manages cable interface IP addressing, Data-over-Cable Service Interface Specifications
(DOCSIS) provisioning, CM host names, routing modifications, privilege levels, and usernames and
passwords.
• ISP or enterprise domain that includes the DHCP server for subscriber or telecommuter host
devices, enterprise gateway within the MSO address space, and static routes back to the
telecommuter subnets.

Note We recommend that the MSO assign all addresses to the end-user devices and gateway interfaces.
The MSO can also use split management to let the ISP configure tunnels and security.

In an MPLS VPN configuration, the MSO must configure the following:


• CMTS (Cisco uBR7200 series routers)
• P routers
• PE routers
• CE routers
• One VPN per ISP
• DOCSIS servers for all cable modem customers. The MSO must attach the DOCSIS servers to the management VPN and make them visible.
The MSO must configure Cisco uBR7200 series routers that serve the ISP, and remote PE routers
connecting to the ISP, as PE routers in the VPN.


The MSO must determine the primary IP address range: the MSO's address range for all cable modems belonging to the ISP subscribers.
The ISP must determine the secondary IP address range: the ISP's address range for its subscriber PCs.
To reduce security breaches and differentiate DHCP requests from cable modems in VPNs or under
specific ISP management, MSOs can use the cable helper-address cable interface command in Cisco
IOS software. The MSO can specify the host IP address to be accessible only in the VPN of the ISP. This
lets the ISP use its DHCP server to allocate IP addresses. Cable modem IP addresses must be accessible
from the management VPN.
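For example, the DHCP relay destination for the cable modems of a given VPN might be specified on the corresponding cable subinterface roughly as follows; the subinterface number and server address are hypothetical:

    interface Cable3/0.2
     cable helper-address 192.168.50.5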
The MPLS VPN approach of creating VPNs for individual ISPs or customers requires subinterfaces to
be configured on the cable interface or the cable interface bundle. Each ISP requires one subinterface.
The subinterfaces are tied to the VRF tables for their respective ISPs. The first subinterface must be
created on the cable interface bound to the management VPN.
To route a reply from the CNR back to the cable modem, the PE router that connects to the CNR must
import the routes of the ISP VPN into the management VPN. Similarly, to forward management requests
(such as DHCP renewal to CNR) to the cable modems, the ISP VPN must export and import the
appropriate management VPN routes.
Cisco uBR7200 series software supports the definition of logical network-layer interfaces over a
physical cable interface or a bundle of cable interfaces. You can create subinterfaces on either a physical
cable interface or a bundle of cable interfaces. Subinterfaces let service providers share one IP subnet
across multiple cable interfaces grouped into a cable interface bundle.
You can group all of the cable interfaces on a Cisco uBR7200 series router into a single bundle so that
only one subnet is required for each router. When you group cable interfaces, no separate IP subnet for each individual cable interface is required. This grouping avoids performance, memory, and security
problems in using a bridging solution to manage subnets, especially for a large number of subscribers.
Subinterfaces allow traffic to be differentiated on a single physical interface, and assigned to multiple
VPNs. You can configure multiple subinterfaces, and associate an MPLS VPN with each subinterface.
You can split a single physical interface (the cable plant) into multiple subinterfaces, where each
subinterface is associated with a specific VPN. Each ISP requires access on a physical interface and is
given its own subinterface. Create a management subinterface to support cable modem initialization
from an ISP.
Using each subinterface associated with a specific VPN (and therefore, ISP), subscribers connect to a
logical subinterface, which reflects the ISP that provides their subscribed services. When properly
configured, subscriber traffic enters the appropriate subinterface and VPN.
The CMTS MSO administrator can define subinterfaces on a cable physical interface and assign Layer
3 configurations to each subinterface, or bundle a group of physical interfaces, define subinterfaces on
the bundle master, and give each subinterface a Layer 3 configuration.
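A rough sketch of this structure follows, assuming a bundle master Cable3/0, a second bundle member Cable4/0, a management VRF named mgmt-vpn, and an ISP VRF named isp1-vpn (all names and addresses are hypothetical):

    interface Cable3/0
     cable bundle 1 master
    !
    interface Cable4/0
     cable bundle 1
    !
    interface Cable3/0.1
     description First subinterface, bound to the management VPN
     ip vrf forwarding mgmt-vpn
     ip address 10.10.1.1 255.255.255.0
    !
    interface Cable3/0.2
     description Subinterface for an ISP VPN
     ip vrf forwarding isp1-vpn
     ip address 10.10.2.1 255.255.255.0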

Benefits
MPLS VPNs with cable interfaces provide the following benefits:
• MPLS VPNs give cable MSOs and ISPs a manageable way of supporting multiple access to a cable plant. Service providers can create scalable and efficient VPNs across the core of their networks. MPLS VPNs provide system support for scalability in the cable transport infrastructure and its management.
• Each ISP can provide Internet access service from a subscriber PC, through the physical cable plant of the MSO, to the ISP network.


• MPLS VPNs allow MSOs to deliver value-added services through an ISP, and thus deliver connectivity to a wider set of potential customers. MSOs can partner with ISPs to deliver multiple services from multiple ISPs and add value within the MSO's own network using VPN technology.
• Subscribers can select combinations of services from various service providers.
• The Cisco IOS MPLS VPN cable feature sets build on CMTS DOCSIS 1.0 and DOCSIS 1.0 extensions to ensure that services are reliably and optimally delivered over the cable plant. MPLS VPNs provide system support for domain selection, per-subscriber authentication, selection of quality of service (QoS), policy-based routing (PBR), and the ability to reach behind the cable modem to subscriber end devices for QoS and billing, while preventing session spoofing.
• MPLS VPN technology ensures both secure access across the shared cable infrastructure and service
integrity.
• Cable interface bundling eliminates the need for an IP subnet on each cable interface. Instead, an IP
subnet is only required for each cable interface bundle. All cable interfaces in a Cisco uBR7200
series router can be added to a single bundle.

Interautonomous Systems for MPLS VPNs


The interautonomous system for MPLS VPNs feature allows an MPLS VPN to span service providers
and autonomous systems.
As VPNs grow, their requirements expand. In some cases, VPNs need to reside on different autonomous
systems in different geographic areas. (An autonomous system is a single network or group of networks
that is controlled by a common system administration group and that uses a single, clearly defined
routing protocol.) Also, some VPNs need to extend across multiple service providers (overlapping
VPNs). Regardless of the complexity and location of the VPNs, the connection between autonomous
systems must be seamless to the customer.
The interautonomous systems for MPLS VPNs feature provides seamless integration of autonomous
systems and service providers. Separate autonomous systems from different service providers can
communicate by exchanging IPv4 network layer reachability information (NLRI) in the form of
VPN-IPv4 addresses. The border edge routers of autonomous systems use the EBGP to exchange that
information. Then, an IGP distributes the network layer information for VPN-IPv4 prefixes throughout
each VPN and each autonomous system. Routing information uses the following protocols:
• Within an autonomous system, routing information is shared using an IGP.
• Between autonomous systems, routing information is shared using an EBGP. An EBGP allows a
service provider to set up an interdomain routing system that guarantees the loop-free exchange of
routing information between separate autonomous systems.
An MPLS VPN with interautonomous system support allows a service provider to provide to customers
scalable Layer 3 VPN services, such as web hosting, application hosting, interactive learning, electronic
commerce, and telephony service. A VPN service provider supplies a secure, IP-based network that
shares resources on one or more physical networks.
The primary function of an EBGP is to exchange network reachability information between autonomous
systems, including information about the list of autonomous system routes. The autonomous systems use EBGP border edge routers to distribute the routes, which include label switching information. Each
border edge router rewrites the next hop and MPLS labels.
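A sketch of such a border edge router configuration, with hypothetical autonomous system numbers and neighbor address, might look like this:

    router bgp 100
     neighbor 192.168.100.2 remote-as 200
     !
     address-family vpnv4
      neighbor 192.168.100.2 activate
      neighbor 192.168.100.2 send-community extended

The directly connected EBGP session carries labeled VPN-IPv4 routes between the border edge routers; each border edge router rewrites the next hop and assigns its own label before readvertising a route.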


Interautonomous system configurations supported in an MPLS VPN can include the following:
• Interprovider VPN—MPLS VPNs that include two or more autonomous systems, connected by
separate border edge routers. The autonomous systems exchange routes using EBGP. No IGP or
routing information is exchanged between the autonomous systems.
• BGP confederations—MPLS VPNs that divide a single autonomous system into multiple
subautonomous systems, and classify them as a single, designated confederation. The network
recognizes the confederation as a single autonomous system. The peers in the different autonomous
systems communicate over EBGP sessions; however, they can exchange route information as if they
were IBGP peers.
Benefits of interautonomous systems for MPLS VPNs are as follows:
• Allows a VPN to cross more than one service provider backbone—The interautonomous systems for
MPLS VPNs feature allows service providers, running separate autonomous systems, to jointly offer
MPLS VPN services to the same end customer. A VPN can begin at one customer site and traverse
different VPN service provider backbones before arriving at another site of the same customer.
Previous MPLS VPNs could only traverse a single BGP autonomous system service provider
backbone. The interautonomous system feature allows multiple autonomous systems to form a
continuous (and seamless) network between customer sites of a service provider.
• Allows a VPN to exist in different areas—The interautonomous systems for MPLS VPNs feature
allows a service provider to create a VPN in different geographic areas. Having all VPN traffic flow
through one point (between the areas) allows for better rate control of network traffic between the
areas.
• Allows confederations to optimize IBGP meshing—The interautonomous systems for MPLS VPNs
feature can make IBGP meshing in an autonomous system more organized and manageable. You can
divide an autonomous system into multiple, separate subautonomous systems and then classify them
into a single confederation (even though the entire VPN backbone appears as a single autonomous
system). This capability allows a service provider to offer MPLS VPNs across the confederation
because it supports the exchange of labeled VPN-IPv4 NLRI between the subautonomous systems
that form the confederation.

Routing Between Autonomous Systems


Figure 29 illustrates one MPLS VPN consisting of two separate autonomous systems. Each autonomous
system operates under different administrative control and runs a different IGP. Service providers
exchange routing information through EBGP border edge routers (ASBR1 and ASBR2).


Figure 29 EBGP Connection Between Two Autonomous Systems

The figure shows two service provider networks, each with a core of P routers and a route reflector (RR-1 and RR-2). PE-1, PE-2, and PE-3 attach CE routers belonging to VPN1; ASBR1 and ASBR2 exchange VPNv4 routes with label distribution over EBGP.

This configuration uses the following process to transmit information:

Step 1 The provider edge router (PE-1) assigns a label for a route before distributing that route. The PE router
uses the multiprotocol extensions of BGP to send label mapping information. The PE router distributes the route as a VPN-IPv4 address. The address label and the VPN identifier are encoded as part of the
NLRI.
Step 2 The two route reflectors (RR-1 and RR-2) reflect VPN-IPv4 internal routes within the autonomous
system. The border edge routers of autonomous systems (ASBR1 and ASBR2) advertise the VPN-IPv4
external routes.
Step 3 The EBGP border edge router (ASBR1) redistributes the route to the next autonomous system (ASBR2).
ASBR1 specifies its own address as the value of the EBGP next hop attribute and assigns a new label.
The address ensures the following:
• That the next hop router is always reachable in the service provider (P) backbone network.
• That the label assigned by the distributing router is properly interpreted. (The label associated with
a route must be assigned by the corresponding next hop router.)
Step 4 The EBGP border edge router (ASBR2) redistributes the route in one of the following ways, depending on its configuration (a configuration sketch follows the list):
• If the IBGP neighbors are configured with the neighbor next-hop-self router configuration command, ASBR2 changes the next hop address of updates received from the EBGP peer and then forwards them.
• If the IBGP neighbors are not configured with the neighbor next-hop-self router configuration
command, the next hop address does not get changed. ASBR2 must propagate a host route for the
EBGP peer through the IGP. To propagate the EBGP VPN-IPv4 neighbor host route, use the
redistribute connected subnets command. The EBGP VPN-IPv4 neighbor host route is
automatically installed in the routing table when the neighbor comes up. This is essential to establish
the label-switched path between PE routers in different autonomous systems.
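The two options might look as follows on ASBR2, assuming an IBGP neighbor 10.2.2.2, an EBGP peer 192.168.100.1, and OSPF as the IGP (all values are hypothetical):

    router bgp 200
     neighbor 10.2.2.2 remote-as 200
     !
     address-family vpnv4
      neighbor 10.2.2.2 activate
      neighbor 10.2.2.2 next-hop-self
    !
    ! Alternative: leave the next hop unchanged and advertise a host
    ! route for the EBGP peer 192.168.100.1 into the IGP instead
    router ospf 1
     redistribute connected subnets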


Exchanging VPN Routing Information

Autonomous systems exchange VPN routing information (routes and labels) to establish connections.
To control connections between autonomous systems, the PE routers and EBGP border edge routers
maintain an LFIB. The LFIB manages the labels and routes that the PE routers and EBGP border edge
routers receive during the exchange of VPN information.
Figure 30 illustrates the exchange of VPN route and label information between autonomous systems.
The autonomous systems use the following guidelines to exchange VPN routing information:
• Routing information includes:
– The destination network (N)
– The next hop field associated with the distributing router
– A local MPLS label (L)
• A route distinguisher (shown as RD1 in the figures) is part of the destination network address; it makes the VPN-IPv4 route globally unique in the VPN service provider environment.
• When a router redistributes the route, it reassigns the label value and sets the next hop field to the
address of the distributing router (next-hop-self). Each VPN-IPv4 NLRI includes an MPLS label.
When a router changes the next hop field for a route, it changes the label field to a value that is
significant to the next hop destination router.

Figure 30 Exchanging Routes and Labels Between Autonomous Systems in an Interprovider VPN
Network

The figure traces the route for network N behind CE-2: PE-1 advertises RD1:N with next hop PE-1 and label L1; ASBR1 readvertises it to ASBR2 with next hop ASBR1 and label L2; ASBR2 readvertises it within Service Provider 2 with next hop ASBR2 and label L3; PE-3 then advertises network N to CE-5.

Figure 31 illustrates the exchange of VPN route and label information between autonomous systems.
The difference between Figure 30 and Figure 31 is that ASBR2 is configured with the redistribute
connected router configuration command, which propagates the host routes to all PEs. The redistribute
connected router configuration command is necessary because ASBR2 is not configured to change the
next hop address.


Figure 31 Exchanging Routes and Labels Between Autonomous Systems in an Interprovider VPN
Network

The figure is the same as Figure 30 except that ASBR2 does not change the next hop: the route RD1:N is propagated within Service Provider 2 with next hop ASBR1 and label L2 unchanged.

Packet Forwarding

Figure 32 illustrates how packets are forwarded between autonomous systems in an interprovider
network using the following packet forwarding method.
Packets are forwarded to their destination by means of MPLS. Packets use the routing information stored
in the LFIB of each PE router and EBGP border edge router.
The service provider VPN backbone uses dynamic label switching to forward labels.
Each autonomous system uses standard multilevel labeling to forward packets between the edges of the
autonomous system routers (for example, from CE-5 to PE-3). Between autonomous systems, only a
single level of labeling is used, corresponding to the advertised route.
A data packet carries two levels of labels when traversing the VPN backbone:
• The first label (IGP route label) directs the packet to the correct PE router or EBGP border edge
router. (For example, the IGP label of ASBR2 points to the ASBR2 border edge router.)
• The second label (VPN route label) directs the packet to the appropriate PE router or EBGP border
edge router.


Figure 32 Forwarding Packets Between Autonomous Systems in an Interprovider VPN Network

The figure shows a packet destined for network N behind CE-2 entering at CE-5 and PE-3. It crosses Service Provider 2 carrying an IGP label for ASBR2 and VPN label L3, crosses the ASBR2-to-ASBR1 link with VPN label L2, crosses Service Provider 1 carrying an IGP label for PE-1 and VPN label L1, and is forwarded by PE-1 to CE-2.
Figure 33 illustrates the same packet forwarding method, except the EBGP router (ASBR1) forwards the
packet without reassigning it a new label.

Figure 33 Forwarding Packets Between Autonomous Systems in an Interprovider VPN Network

The figure shows the same topology as Figure 32, but because ASBR2 leaves the next hop unchanged, packets in Service Provider 2 carry an IGP label for ASBR1 and VPN label L2, and ASBR1 forwards them without assigning a new label.


Routing Between Subautonomous Systems in a Confederation


A VPN can span service providers running in separate autonomous systems or between multiple
subautonomous systems that have been grouped together to form a confederation.
A confederation reduces the total number of peer devices in an autonomous system. A confederation
divides an autonomous system into subautonomous systems and assigns a confederation identifier to the
autonomous systems.
In a confederation, each subautonomous system is fully meshed with other subautonomous systems. The
subautonomous systems communicate using an IGP, such as OSPF or IS-IS. Each subautonomous
system also has an EBGP connection to the other subautonomous systems. The confederation EBGP
(CEBGP) border edge routers forward next-hop-self addresses between the specified subautonomous
systems. The next-hop-self address forces the BGP to use a specified address as the next hop rather than
letting the protocol choose the next hop.
You can configure a confederation with separate subautonomous systems in two ways (a configuration sketch follows the list):
• You can configure a router to forward next-hop-self addresses between only the CEBGP border edge
routers (both directions). The subautonomous systems (IBGP peers) at the subautonomous system
border do not forward the next-hop-self address. Each subautonomous system runs as a single IGP
domain. However, the CEBGP border edge router addresses are known in the IGP domains.
• You can configure a router to forward next-hop-self addresses between the CEBGP border edge
routers (both directions) and within the IBGP peers at the subautonomous system border. Each
subautonomous system runs as a single IGP domain but also forwards next-hop-self addresses
between the PE routers in the domain. The CEBGP border edge router addresses are known in the
IGP domains.
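A rough sketch of a CEBGP border edge router in such a confederation follows; the confederation identifier 100, the subautonomous system numbers 64512 and 64513, and the neighbor address are hypothetical:

    router bgp 64512
     bgp confederation identifier 100
     bgp confederation peers 64513
     neighbor 10.5.5.2 remote-as 64513
     !
     address-family vpnv4
      neighbor 10.5.5.2 activate
      neighbor 10.5.5.2 next-hop-self
      neighbor 10.5.5.2 send-community extended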

Note Figure 30 and Figure 31 illustrate how two autonomous systems exchange routes and forward
packets. Subautonomous systems in a confederation use a similar method of exchanging routes and
forwarding packets.

Figure 34 illustrates a typical MPLS VPN confederation configuration. The following behavior occurs
in this confederation configuration:
• The two CEBGP border edge routers exchange VPN-IPv4 addresses with labels between the two
subautonomous systems.
• The distributing router changes the next hop addresses and labels and uses a next-hop-self address.
• IGP-1 and IGP-2 know the addresses of CEBGP-1 and CEBGP-2.


Figure 34 EBGP Connection Between Two Subautonomous Systems in a Confederation

The figure shows a single service provider divided into Sub-AS1 (running IGP-1) and Sub-AS2 (running IGP-2), each with its own core of P routers. CEBGP-1 and CEBGP-2 exchange VPNv4 routes with label distribution over intraconfederation EBGP, and PE-1, PE-2, and PE-3 attach the CE routers of VPN 1.
The following behavior occurs in this confederation configuration:
• CEBGP border edge routers function as neighboring peers between the subautonomous systems.
The subautonomous systems use EBGP to exchange route information.
• Each CEBGP border edge router (CEBGP-1 and CEBGP-2) assigns a label for the route before
distributing the route to the next subautonomous system. The CEBGP border edge router distributes
the route as a VPN-IPv4 address by using the multiprotocol extensions of BGP. The label and the
VPN identifier are encoded as part of the NLRI.
• Each PE and CEBGP border edge router assigns its own label to each VPN-IPv4 address prefix
before redistributing the routes. The CEBGP border edge routers exchange VPN-IPv4 addresses
with the labels. The next-hop-self address is included in the label (as the value of the EBGP next
hop attribute). Within the subautonomous systems, the CEBGP border edge router address is
distributed throughout the IBGP neighbors and the two CEBGP border edge routers are known to
both confederations.

HSRP Support for MPLS VPNs


Hot Standby Router Protocol (HSRP) can now provide transparent “first-hop IP routing” redundancy for
workstations or routers connected to interfaces within MPLS VPNs. For more information on enabling
HSRP or configuring HSRP group attributes, refer to the “Configuring IP Services” chapter in the
Cisco IOS IP Configuration Guide.
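For instance, two PE routers facing the same VPN LAN might share a virtual gateway address roughly as follows; the interface, VRF name, and addresses are hypothetical:

    interface FastEthernet1/0
     ip vrf forwarding vpn1
     ip address 10.2.0.2 255.255.255.0
     standby 1 ip 10.2.0.1
     standby 1 priority 110
     standby 1 preempt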

MPLS Quality of Service


The quality of service (QoS) feature for MPLS enables network administrators to provide differentiated
types of service across an MPLS network. Differentiated service satisfies a range of requirements by
supplying for each packet transmitted the particular kind of service specified for that packet by its QoS.
Service can be specified in different ways, for example, using the IP precedence bit settings in IP
packets.


In supplying differentiated service, MPLS QoS offers packet classification, congestion avoidance, and
congestion management. Table 25 lists these functions and their descriptions.

Table 25 QoS Services and Features

Service: Packet classification
QoS Function: Committed access rate (CAR). Packets are classified at the edge of the network before labels are assigned.
Description: Classifies packets according to input or output transmission rates. Allows you to set the MPLS experimental bits or the IP Precedence or DSCP bits (whichever is appropriate).

Service: Congestion avoidance
QoS Function: Weighted Random Early Detection (WRED). Packet classes are differentiated based on drop probability.
Description: Monitors network traffic to prevent congestion by dropping packets based on the IP Precedence or DSCP bits or the MPLS experimental field.

Service: Congestion management
QoS Function: Class-based weighted fair queueing (CBWFQ). Packet classes are differentiated based on bandwidth and bounded delay.
Description: An automated scheduling system that uses a queueing algorithm to ensure bandwidth allocation to different classes of network traffic.

Note MPLS QoS lets you duplicate Cisco IOS IP QoS (Layer 3) features as closely as possible in MPLS
devices, including label edge routers (LERs), LSRs, and ATM-LSRs. MPLS QoS functions map
nearly one-for-one to IP QoS functions on all interface types.

For more information on configuration of the QoS functions (CAR, WRED, and CBWFQ), refer to the
Cisco IOS Quality of Service Solutions Configuration Guide.
For complete command syntax information for CAR, WRED, and WFQ, refer to the Cisco IOS Quality
of Service Solutions Command Reference.
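As an informal sketch only (not the exact syntax covered in those guides), the three functions might be combined on a core-facing interface as follows, assuming a hypothetical premium class identified by MPLS experimental value 5:

    class-map premium
     match mpls experimental 5
    !
    policy-map core-qos
     class premium
      bandwidth 512
      random-detect
     class class-default
      fair-queue
    !
    interface Serial4/0
     service-policy output core-qos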

Specifying the QoS in the IP Precedence Field


When you send IP packets from one site to another, the IP Precedence field (the first three bits of the
DSCP field in the header of an IP packet) specifies the QoS. Based on the IP precedence marking, the
packet is given the desired treatment such as the latency or the percent of bandwidth allowed for that
quality of service. If the service provider network is an MPLS network, then the IP precedence bits are
copied into the MPLS EXP field at the edge of the network. However, the service provider might want
to set the QoS for an MPLS packet to a different value determined by the service offering.
This feature allows the service provider to set the MPLS experimental field instead of overwriting the
value in the IP precedence field belonging to a customer. The IP header remains available for the
customer’s use; the QoS of an IP packet is not changed as the packet travels through the MPLS network.


Figure 35 shows an MPLS network that connects two sites of an IP network belonging to a customer.

Figure 35 MPLS Network Connecting Two Sites of an IP Network Belonging to a Customer

The figure shows Host A behind CE1 in one customer IP network and Host B behind CE2 in another. CE1 attaches to PE1, the packets cross the MPLS network owned by the service provider through P1 and P2, and PE2 attaches to CE2.

Note The network is bidirectional, but for the purpose of this document the packets move left to right.

In Figure 35, the symbols have the meanings shown in Table 26:

Table 26 Device Symbols

Symbol Meaning
CE1 Customer equipment 1
PE1 Service provider edge router (ingress LSR)
P1 Service provider router within the core of the network of the service provider
P2 Service provider router within the core of the network of the service provider
PE2 Service provider edge router (egress LSR)
CE2 Customer equipment 2

Note Notice that PE1 and PE2 are at the boundaries between the MPLS network and the IP network.

In Figure 35, the following behavior occurs:


• Packets arrive as IP packets at PE1, the provider edge router (also known as the ingress label
switching router).
• PE1 sends the packets as MPLS packets.
• Within the service provider network, there is no IP Precedence field for the queueing mechanism to
look at because the packets are MPLS packets. The packets remain MPLS packets until they arrive
at PE2, the provider edge router.
• PE2 removes the label from each packet and forwards the packets as IP packets.


This MPLS QoS enhancement allows service providers to classify packets according to their type, input
interface, and other factors by setting (marking) each packet within the MPLS experimental field without
changing the IP Precedence or DSCP field. For example, service providers can classify packets with or
without considering the rate of the packets that PE1 receives. If the rate is a consideration, the service
provider marks in-rate packets differently from out-of-rate packets.
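A sketch of such rate-dependent marking using CAR on the customer-facing interface of PE1 might look like the following; the interface, rate, burst, and experimental values are hypothetical:

    interface Ethernet1/0
     rate-limit input 8000000 16000 24000 conform-action set-mpls-exp-transmit 4 exceed-action set-mpls-exp-transmit 1

Packets conforming to the configured rate are imposed with MPLS experimental value 4, and out-of-rate packets with value 1, while the IP Precedence or DSCP bits set by the customer are left untouched.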

Note The MPLS experimental bits allow you to specify the QoS for an MPLS packet. The IP
Precedence/DSCP bits allow you to specify the QoS for an IP packet.

MPLS Label Switch Controller


The MPLS LSC, combined with a slave ATM switch, supports scalable integration of IP services over an
ATM network. The MPLS LSC enables the slave ATM switch to do the following:
• Participate in an MPLS network
• Directly peer with IP routers
• Support the IP features in Cisco IOS software
The MPLS LSC supports highly scalable integration of MPLS (IP+ATM) services by using a direct peer
relationship between the ATM switch and MPLS routers. This direct peer relationship removes the
limitation on the number of IP edge routers (typical of traditional IP-over-ATM networks), allowing
service providers to meet growing demands for IP services. The MPLS LSC also supports direct and
rapid implementation of advanced IP services over ATM networks using ATM switches.
MPLS combines the performance and VC capabilities of Layer 2 (data link layer) switching with the
scalability of Layer 3 (network layer) routing capabilities. This combination enables service providers
to deliver solutions for managing growth, providing differentiated services, and leveraging existing
networking infrastructures.
The MPLS LSC architecture provides the following flexibility:
• Run applications over any combination of Layer 2 technologies
• Support any Layer 3 protocol while scaling the network to meet future needs
By deploying the MPLS LSC across large enterprise networks or wide area networks, you can achieve
the following benefits:
• Save money by using existing ATM and routing infrastructures
• Grow revenue using MPLS-enabled services
• Increase productivity through enhanced network scalability and performance

MPLS LSC Functional Description


The MPLS LSC is an LSR that is configured to control the operation of a separate ATM switch. Together,
the MPLS LSC and the controlled ATM switch function as a single ATM MPLS router (ATM-LSR).


Figure 36 shows the functional relationship between the MPLS LSC and the ATM switch that it controls.

Figure 36 MPLS Label Switch Controller and Controlled ATM Switch

The figure shows the label switch controller connected to the controlled ATM switch through the master control port/switch control port running the VSI. The controlled ATM switch presents LC-ATM interfaces, and the MPLS LSC can also have other label controlled or nonlabel controlled router interfaces.

The following routers can function as an MPLS LSC:


• Cisco 7200 series router
• Cisco 6400 Universal Access Concentrator (UAC)
The following ATM switches can function with the Cisco 7200 series router as the controlled ATM
switch:
• Cisco BPX 8600, 8650 (which includes a Cisco 7204 router), and 8680
• Cisco IGX 8410, 8420, and 8430

Note QoS is not an available feature with the IGX series ATM switches.

The MPLS LSC controls the ATM switch by means of the VSI, which runs over an ATM link connecting
the two devices.
The dotted line in Figure 36 represents the logical boundaries of the external interfaces of the MPLS
LSC and the controlled ATM switch, as discovered by the IP routing topology. The controlled ATM
switch provides one or more XTagATM interfaces at this external boundary. The MPLS LSC can
incorporate other label controlled or nonlabel controlled router interfaces.
MPLS LSC benefits are as follows:
• IP-ATM integration—Enables ATM switches to directly support advanced IP services and protocols,
thereby reducing operational costs and bandwidth requirements, while at the same time decreasing
time-to-market for new services.
• Explicit routing—Provides Layer 2 VCs to gigabit router backbones and integrated IP+ATM
environments, including support for explicit routing and provisioning of IP VPN services.
• VPNs—Supports IP-based VPNs on either a Frame Relay or ATM backbone, an integrated IP-ATM backbone, or a gigabit router backbone.


Using Controlled ATM Switch Ports as Router Interfaces


On the LSC, the ports of the controlled ATM switch are represented as a Cisco IOS interface type called extended label ATM (XTagATM). To associate these XTagATM interfaces with particular
physical interfaces on the controlled ATM switch, use the extended-port interface configuration
command.
Figure 37 shows a typical MPLS LSC configuration that controls three ATM ports on a Cisco BPX
switch: ports 6.1, 6.2, and 12.2. The corresponding XTagATM interfaces were created on the MPLS LSC and associated with those ATM ports on the Cisco BPX switch by means of the extended-port command.

Figure 37 Typical MPLS LSC and BPX Configuration

The figure shows a Cisco 7200 series Label Switch Controller with interfaces XTagATM61, XTagATM62, and XTagATM122, each associated through the extended-port command with BPX ports 6.1, 6.2, and 12.2. Interface ATM1/0 on the LSC, configured with the tag-control-protocol vsi command, is the master control port and connects to switch control port 12.1 on the controlled BPX switch over the Virtual Switch Interface.

Figure 37 shows the following:


• An additional port on the Cisco BPX switch (port 12.1) acts as the switch control port
• An ATM interface (ATM1/0) on the MPLS LSC acts as the master control port
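A configuration sketch corresponding to Figure 37 might look roughly as follows on the LSC. The ip unnumbered Loopback0 lines are illustrative assumptions, and the exact form of the extended-port arguments can vary by platform and software release, so treat this as a sketch rather than exact syntax:

    interface ATM1/0
     tag-control-protocol vsi
    !
    interface XTagATM61
     ip unnumbered Loopback0
     extended-port ATM1/0 bpx 6.1
     tag-switching ip
    !
    interface XTagATM62
     ip unnumbered Loopback0
     extended-port ATM1/0 bpx 6.2
     tag-switching ip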

Using the MPLS LSC as a Label Edge Device

Note Using the MPLS LSC as a label edge device is not recommended. Using the MPLS LSC as a label
edge device introduces unnecessary complexity to the configuration. Refer to the tag-switching atm
disable-headend-vc command in the Cisco IOS Switching Services Command Reference to disable
edge LSR functionality on the LSC.


The MPLS LSC can perform as a label edge device for the following purposes:
• Function simultaneously as a controller for an ATM switch and as a label edge device. Traffic can
be forwarded between a router interface and an interface on the controlled switch, and between two
XTagATM interfaces on the controlled switch.
• Perform label imposition and disposition and serve as the headend or tailend of a label-switched
path tunnel.
However, when the MPLS LSC acts as a label edge device, it is limited by the following factors:
• Label space for LSC-terminated VCs is limited by the number of VCs supported on the control link.
• Packets are process switched between the LSC edge and an XTagATM interface.
• Throughput depends on the following factors:
– The slave switch VSI partition configuration of the maximum cells per second for the master
control port interface and the XTagATM interface.
– SAR limitations of the ATM Lite (PA-A1) and ATM Deluxe (PA-A3) and process switching.
– CPU utilization for the LSC and edge LSR functionality.

Creating Virtual Trunks


Virtual trunks provide connectivity for Cisco WAN MPLS switches through an ATM cloud, as shown in
Figure 38. Because several virtual trunks can be configured across a given private or public physical
trunk, virtual trunks provide a cost-effective means of connecting across an entire ATM network.
The ATM equipment in the cloud must support virtual path switching and transmission of ATM cells
based solely on the VPI in the ATM cell header. The VPI is provided by the ATM cloud administrator
(that is, by the service provider).

Typical ATM Hybrid Network with Virtual Trunks


Figure 38 shows three Cisco WAN MPLS switching networks, each connected to an ATM network by a
physical line. The ATM network links all three of these subnetworks to every other subnetwork with a
fully meshed network of virtual trunks. In this example, each physical interface is configured with two
virtual trunks.


Figure 38 Typical ATM Hybrid Network Using Virtual Trunks

The figure shows three MPLS subnetworks, each attached to a central ATM network by a physical interface, with virtual trunks across the ATM network forming a full mesh among the MPLS subnetworks.

Benefits of virtual trunks are as follows:


• Reduced costs—By sharing the resources of a single physical trunk among a number of virtual
(logical) trunks, each of the virtual trunks provided by the public carrier needs to be assigned only
as much bandwidth as needed for that interface, rather than the full T3, E3, OC-3, or OC-12
bandwidth of an entire physical trunk.
• Migration of MPLS services into existing networks—VSI virtual trunks allow MPLS services to be
carried over part of a network that does not support MPLS services. The part of the network that
does not support such services may be a public ATM network, for example, that consists of switches
that are not MPLS-enabled.

Virtual Trunk Configuration


A virtual trunk number (slot number.port number.trunk number) differentiates the virtual trunks found
within a physical trunk port. In Figure 39, three virtual trunks (4.1.1, 4.1.2, and 4.1.3) are configured on
a physical trunk that connects to the port 4.1 interface of a BXM switch.


Figure 39 Virtual Trunks Configured on a Physical Trunk

The figure shows virtual trunks 4.1.1, 4.1.2, 4.1.3, and so on up to 4.1.31 carried within the physical trunk on slot 4, port 1.

These virtual trunks are mapped to the XTagATM interfaces on the LSC. On the XTagATM interface,
you configure the respective VPI value using the tag-switching atm vp-tunnel vpi interface command.
This VPI should match the VPI in the ATM network. The LVCs are generated inside this Virtual Path
(VP), and this VP carries the LVCs and their traffic across the network.
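Under those assumptions, an XTagATM interface for virtual trunk 4.1.1 carried inside VPI 10 might be sketched as follows. The interface numbers and VPI value are hypothetical, and the exact extended-port syntax varies by release:

    interface XTagATM411
     ip unnumbered Loopback0
     extended-port ATM1/0 bpx 4.1.1
     tag-switching atm vp-tunnel 10
     tag-switching ip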

Virtual Trunk Bandwidth

The total bandwidth of all the virtual trunks on one port cannot exceed the maximum bandwidth of the
port. Trunk loading (units of load) is maintained per virtual trunk, but the cumulative loading of all
virtual trunks on a port is restricted by the transmit and receive rates for the port.

Virtual Trunk Features

The maximum number of virtual trunks that can be configured per card equals the number of virtual
interfaces on the BPX or IGX switch. The following lists virtual interface support for BXM and UXM:
• The BXM supports 32 virtual interfaces; hence, it supports up to 32 virtual trunks. Accordingly, you can have interfaces ranging from XTagATM411 to XTagATM4131 on the same physical interface.
• The UXM supports 16 virtual interfaces. You can have interfaces ranging from XTagATM411 to XTagATM4116.

Using LSC Redundancy


The following sections explain how LSC redundancy works:
• LSC Redundancy Architecture
• General Redundancy Operational Modes
• How LSC Redundancy Differs from Router and Switch Redundancy
• How the LSC, ATM Switch, and VSI Work Together
• Implementing LSC Redundancy
• Reducing the Number of LVCs for LSC Redundancy


LSC Redundancy Architecture


LSC redundancy allows you to create a highly reliable IP network, one whose reliability is nearly
equivalent to that provided by Hot Standby routing. Instead of using Hot Standby routing processes to
create redundancy, this method uses a combination of LSCs, the VSI, and IP routing paths with the same cost for hot redundancy or with different costs for warm redundancy. The VSI allows multiple control
planes (MPLS, Private Network-Network Interface (PNNI), and voice) to control the same switch. Each
control plane controls a different partition of the switch.
In the LSC redundancy model, two independent LSCs control the different partitions of the switch. Thus,
two separate MPLS control planes set up connections on different partitions of the same switch. This is
where LSC redundancy differs from Hot Standby redundancy: the LSCs do not need copies of each other's internal state to create redundancy; the LSCs control the partitions of the switch independently.
A single IP network consists of switches with one LSC (or a Hot Standby pair of LSCs) and MPLS edge
LSRs.
If you change that network configuration by assigning two LSCs per switch, you form two separate
MPLS control planes for the network. You logically create two independent parallel IP subnetworks
linked at the edge.
If the two LSCs on each switch are assigned identical shares of switch resources and links, the two
subnetworks are identical. You have two identical parallel IP subnetworks on virtually the same
equipment, which would otherwise support only one network.
For example, Figure 40 shows a network of switches that each have two LSCs. MPLS edge LSRs are
located at the edge of the network, to form a single IP network. The LSCs on each switch have identical
shares of switch resources and links, which makes the networks identical. In other words, there are two
identical parallel IP subnetworks.

Figure 40 LSC Redundancy Model

The figure contrasts the physical LSC redundant network, in which one ATM switch is controlled by LSC-1 and LSC-2 and a second ATM switch is controlled by LSC-3 and LSC-4, with edge LSRs at the boundary, with its logical equivalent: two parallel paths of ATM-LSRs (ATM-LSR-1 to ATM-LSR-3, and ATM-LSR-2 to ATM-LSR-4) between the edge LSRs.


Part of the redundancy model includes edge LSRs, which link the two networks at the edge.
If the network uses OSPF or a similar IP routing protocol with an equal cost on each path, then there are
at least two equally viable paths from every edge LSR to every other edge LSR. The OSPF equal-cost
multipath distributes traffic evenly on both paths. Therefore, MPLS sets up two identical sets of
connections for the two MPLS control planes. IP traffic travels equally across the two sets of
connections.

Note The LSC redundancy model works with any routing protocol. For example, you can use OSPF or
IS-IS. Also, you can use both the TDP and the LDP.

With the LSC redundancy model, if one LSC on a switch fails, IP traffic uses the other path, without
needing to establish new links. LSC redundancy does not require the network to set up new connections
when a controller fails. Because the connections to the other paths have already been established, the
interruption to the traffic flow is negligible. The LSC redundancy model is as reliable as networks that
use Hot Standby controllers. LSC redundancy requires hardware like that used by Hot Standby
controllers. However, the controllers act independently, rather than in Hot Standby mode. For LSC
redundancy to work, the hardware must have connection capacity for doubled-up connections.
If an LSC fails and LSC redundancy is not present, IP traffic halts until other switches break their present
connections and reroute traffic around the failed controller. The stopped IP traffic results in undesirable
unreliability.

General Redundancy Operational Modes


The LSC redundancy model allows you to use the following four operational modes. Most other redundancy schemes cannot accommodate all of these modes.
• Transparent mode—The primary and secondary redundant systems have the same copies of the
image and startup configurations. When one system fails, the other takes over, and the operations
are identical. However, this mode risks software failures, because both systems use the same
algorithms. A software problem on the primary system is likely to affect the secondary system as
well.
• Upgrade mode—You can upgrade the image or configuration of the redundant system, without
rebooting the entire system. You can use this mode to change the resources between different
partitions of the slave ATM switch.
• Nontransparent mode—The primary and secondary systems have different images or configurations.
This mode is more reliable than transparent mode, which loads the same software on both
controllers. In nontransparent mode, the use of different images and configurations reduces the risk
of both systems encountering the same problem.
• Experimental mode—You load an experimental version of the image or configuration on the
secondary system. You can use experimental mode when you want to test the new images in a real
environment.

How LSC Redundancy Differs from Router and Switch Redundancy


In traditional IP router networks, network managers ensure reliability by creating multiple paths through
the network from every source to every destination. If a device or link on one path fails, IP traffic uses
an alternate path to reach its destination.


Router Redundancy

Because routers need not establish a VC to transfer data, they are inherently connectionless. When a router discovers a failed device or link, it typically requires less than 1 second to reroute traffic from one path to another.
Routers can incorporate a warm or Hot Standby routing process to increase reliability. The routing
processes share information about the routes to direct different streams of IP traffic. They need not keep
or share connection information. Routers can also include redundant switch fabrics, backplanes, power
supplies, and other components to decrease the chances of node failures.

ATM, Frame Relay, and Circuit Switch Redundancy

ATM, Frame Relay, and circuit switch networks transfer data by establishing circuits or VCs. To ensure
the transfer of data in switches, network managers incorporate redundant switch components. If any
component fails, a spare component takes over. Switches can have redundant line cards, power supplies, fans, backplanes, switch fabrics, and control cards. The following describes these redundant
components:
• The redundant backplanes include all the hardware to operate two backplanes and to switch to the
backup backplane if one fails.
• Redundant line cards protect against failed links. If a link to a line card fails, the redundant line card
takes over. To create redundant line cards, you must program the same connection information into
both line cards. This ensures that the circuits or VCs are not disrupted when the new line card takes
over.
• The redundant switch fabric must also have the same connection information as the active switch
fabric.
A software application usually monitors the state of the switches and their components. If a problem
arises, the software sets an alarm to bring attention to the faulty component.
The redundant switch hardware and software are required, because switches take some time to reroute
traffic when a failure occurs. Switches can have connection routing software, such as Cisco automatic
connection routing, PNNI, or MPLS. However, rerouting the connections in a switch takes much more
time than rerouting traffic in a router network. Rerouting connections in a switch requires calculating
routes and reprogramming some hardware for each connection. In router networks, large aggregates of
traffic can be rerouted simultaneously, with little or no hardware programming. Therefore, router
networks can reroute traffic more quickly and easily than connection oriented networks. Router networks
rely on rerouting techniques to ensure reliability. Connection-oriented networks use rerouting only as a
last resort.

General Hot/Warm Standby Redundancy in Switches

Network managers can install redundant copies of the connection routing software for ATM and Frame
Relay switches on a redundant pair of control processors.
With Hot Standby redundancy, the active process sends its state to the spare process to keep the spare
process up to date in case it needs to take over. The active process sends the state information to the spare
process or writes the state to a disk, where both processes can access the information. In either case, the
state information is shared between controllers. Because the state of the network routing tables changes
frequently, the software must perform much work to maintain consistent routing states between
redundant pairs of controllers.
With Warm Standby redundancy, the state information is not shared between the active and spare
processes. If a failure occurs, the spare process resets all of the connections and reestablishes them.
Reliability decreases when the spare resets the connections. The chance of losing data increases.


LSC Redundancy

Connecting two independent LSCs to each switch by the VSI creates two identical subnetworks.
Multipath IP routing uses both subnetworks equally. Thus, both subnetworks have identical connections.
If a controller in one subnetwork fails, the multipath IP routing diverts traffic to the other path. Because
the connections already exist in the alternate path, the reroute time is very fast. The LSC redundancy
model matches the reliability of networks with Hot Standby controllers, without the difficulty of
implementing Hot Standby redundancy.
One benefit of implementing the LSC redundancy model is that you eliminate the single point of failure
between the LSC and the ATM switch it controls. If one LSC fails, the other LSC takes over and routes
the data on the other path. The following sections explain the other benefits of LSC redundancy.

LSC Redundancy Does Not Use Shared States or Databases

In the LSC redundancy model, the LSCs do not share states or databases, which increases reliability.
Sometimes, when states and databases are shared, an error in the state or database information can cause
both controllers to fail simultaneously.
Also, new software features and enhancements do not affect LSC redundancy. Because the LSCs do not
share states or database information, you need not worry about ensuring redundancy during every step
of the update.

LSC Redundancy Allows Different Software Versions

The LSCs work independently, and there is no interaction between the controllers. They do not share controller state or databases, as other redundancy models require. Therefore, you can run different
versions of the Cisco IOS software on the LSCs, which provides the following advantages:
• You can test the features of the latest version of software without risking reliability. You can run the
latest version of the Cisco IOS software on one LSC and an older version of the Cisco IOS software
on a different LSC. If the LSC running the new Cisco IOS software fails, the LSC running the older
software takes over.
• Running different versions of the Cisco IOS software reduces the chance of having both controllers
fail. If you run the same version of the Cisco IOS software on both controllers and that version
contains a problem, it could cause both controllers to fail. Running different versions on the
controllers eliminates the possibility of each controller failing because of the same problem.

Note Using different Cisco IOS software versions on different LSCs is recommended only as a temporary
measure. Different versions of Cisco IOS software in a network could be incompatible, although it
is unlikely. For best results, run the same version of Cisco IOS software on all devices.

LSC Redundancy Allows Different Hardware

You can use different models of routers in this LSC redundancy model. For example, one LSC can be a
Cisco 7200 series router, and the other LSC can be a Cisco 7500 series router. Using different hardware
in the redundancy model reduces the chance that a hardware fault would interrupt network traffic.

LSC Redundancy Allows You to Switch from Hot to Warm Redundancy Immediately

You can implement hot or warm redundancy and switch from one model to the other. Hot redundancy
can use redundant physical interfaces, slave ATM switches with Y redundancy, and redundant LSCs to
enable parallel paths and near-instant failover. If your resources are limited, you can implement warm


redundancy, which uses only redundant LSCs. When one controller fails, the backup controller requires
some reroute time. As your network grows, you can switch from hot to warm redundancy and back,
without bringing down the entire network.
Other redundancy models require complex hardware and software configurations, which are difficult to
alter when you change the network configuration. You must manually change the connection routing
software from Hot Standby mode to Warm Standby mode.

LSC Redundancy Provides an Easy Migration from Standalone LSCs to Redundant LSCs

You can migrate from a standalone LSC to a redundant LSC and back again without affecting network
operations. Because the LSCs work independently, you can add a redundant LSC without interrupting
the other LSC.

LSC Redundancy Allows Configuration Changes in a Live Network

The hot LSC redundancy model provides two parallel, independent networks. Therefore, you can disable
one LSC without affecting the other LSC. This feature has the following benefits:
• The LSC redundancy model facilitates configuration changes and updates. After you finish
configuration changes or image upgrades on one LSC, you can add it back to the network and resume
the LSC redundancy model.
• The redundancy model protects the network during partitioning of the ATM switch. You can disable
one path and perform partitioning on that path. While you are performing the partitioning, data uses
the other path. The network is safe from the effects of the partitioning, which include breaking or
establishing LVC connections.

LSC Redundancy Provides Fast Reroute in IP+ATM Networks

The hot LSC redundancy model offers redundant paths for every destination. Therefore, reroute recovery
is very fast. Other rerouting processes in IP+ATM networks require many steps and take longer to
reroute.
In normal IP+ATM networks, the reroute process consists of the following steps:
• Detecting the failure
• Converging the Layer 3 routing protocols
• Completing label distribution for all destinations
• Establishing new connections for all destinations
After this reroute process, the new path is ready to transfer data. Rerouting data using this process takes
time.
The hot LSC redundancy method allows you to quickly reroute data in IP+ATM networks without using
the normal reroute process. When you incorporate hot LSC redundancy, you create parallel paths. Every
destination has at least one alternative path. If a device or link along the path fails, the data uses the other
path to reach its destination. The hot LSC redundancy model provides the fastest reroute recovery time
for IP+ATM networks.


How the LSC, ATM Switch, and VSI Work Together


In an LSC implementation, the LSC and slave ATM switch have the following characteristics:
• The LSC runs all of the control protocols.
• The ATM switch forwards the data.
• Each physical interface on the slave ATM switch maps to an XTagATM interface on the LSC. Each
XTagATM interface has a dedicated LDP session with a corresponding interface on the edge. The
XTagATM interfaces are mapped in the routing topology, and the ATM switch behaves as a router.
• The LSC can also function as an edge LSR. The data for the edge LSR passes through the control
interface of the router.
If a component on the LSC fails, the IP switching function of the ATM switch is disabled. The standalone
LSC is the single point of failure.
The VSI implementation includes the following characteristics:
• The VSI allows multiple, independent control planes to control a switch. The VSI ensures that the
control processes (Signaling System 7 (SS7), MPLS, PNNI, and so on) can act independently of
each other by using a VSI slave process to control the resources of the switch and apportion them
to the correct control planes.
• In MPLS, each physical interface on the slave ATM switch maps to an XTagATM interface on the
LSC through the VSI. In other words, physical interfaces are mapped to their respective logical
interfaces.
• The routing protocol on the LSC generates routing table entries. The master sends connection
requests and connection release requests to the slave.
• The slave sends the configured bandwidth parameters for the ATM switch interface to the master in
the VSI messages. The master includes the bandwidth information in the link-state topology. You
can override these bandwidth values by manually configuring the bandwidth on the XTagATM
interfaces.
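
As an illustration of this mapping, the following sketch shows how an LSC might bind an XTagATM
interface to physical switch port 6.1 through the VSI and manually override the advertised bandwidth.
The interface names, port number, and bandwidth value are assumptions, not a complete or definitive
configuration.

! Enable the VSI control protocol on the ATM interface that connects the LSC to the slave switch.
interface ATM1/0
 tag-control-protocol vsi
!
! Create a logical XTagATM interface for physical switch port 6.1 and optionally
! override the bandwidth reported by the switch.
interface XTagATM61
 extended-port ATM1/0 bpx 6.1
 ip unnumbered Loopback0
 bandwidth 155000
 tag-switching ip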

Implementing LSC Redundancy


To make an LSC redundant, you can partition the resources of the slave ATM switch, implement a
parallel VSI model, assign redundant LSCs to each switch, and create redundant LSRs. The following
sections explain these steps.

Cisco IOS Switching Services Configuration Guide


XC-124
Multiprotocol Label Switching Overview
MPLS Label Switch Controller

Partitioning the Resources of the ATM Switch

In the LSC redundancy model, two LSCs control different partitions of the ATM switch. When you
partition the ATM switch for LSC redundancy, use the following guidelines:
• Make the MPLS partitions identical. If you create two partitions, make sure both partitions have the
same amount of resources. (You can have two MPLS VSI partitions per switch.) Use the cnfrsrc
switch command to configure the partitions.
• If the partitions are on the same switch card, perform the following steps:
– Create different control VCs for each partition. For example, there can be only one (0, 32)
control VC on the XTagATM interface. To map two XTagATM interfaces on the same ATM
switch interface, use a different control VC for the second LSC. Use the tag-switching atm
control-vc interface command.
– Create the LVC on the XTagATM interfaces using nonintersecting VPI ranges. Use the
tag-switching atm vpi interface command.
• Specify the bandwidth information on the XTagATM interfaces. Normally, this information is read
from the slave ATM switch. When you specify the bandwidth on the XTagATM interface, the value
you enter takes precedence over the switch-configured interface bandwidth.
• Configure the logical channel number (LCN) ranges for each partition according to the expected
number of connections.
See the documentation on the Cisco BPX 8600 series or Cisco IGX 8400 series switches for more
information about configuring the slave ATM switch.
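
As a sketch of these guidelines, the following lines show how the second LSC might map an XTagATM
interface to the same switch port while using a different control VC and a nonintersecting VPI range.
The interface name, control VC, and VPI values are assumptions.

! On the second LSC: avoid the (0, 32) control VC already used by the first
! partition and keep the LVC VPI range separate from the first LSC's range.
interface XTagATM61
 extended-port ATM1/0 bpx 6.1
 tag-switching atm control-vc 0 33
 tag-switching atm vpi 6-10
 tag-switching ip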

Implementing the Parallel VSI Model

The parallel VSI model means that the physical interfaces on the ATM switch are shared by more than
one LSC. For instance, LSC1 in Table 26 maps VSI slave interfaces 1 to N to the ATM switch physical
interfaces 1 to N. LSC2 maps VSI slave interfaces 1 to N to the ATM switch’s physical interfaces 1 to N. LSC1
and LSC2 share the same physical interfaces on the ATM switch. With this mapping, you achieve fully
meshed independent masters.
Figure 41 shows four ATM physical interfaces mapped as four XTagATM interfaces at LSC1 and LSC2.
Neither LSC is aware that the other LSC is mapped to the same interfaces. Both LSCs are active all the
time. The ATM switch runs the same VSI protocol on both partitions.

Figure 41 XTagATM Interfaces

[Figure: the ATM switch’s physical interfaces appear as XTagATM interfaces on both LSC 1 and LSC 2;
each LSC connects to the switch through its own control port and VSI session (VSI 1 and VSI 2).]


Adding Interface Redundancy

To ensure reliability throughout the LSC redundant network, you can also implement:
• Redundant interfaces between the edge LSR and the ATM-LSR. Most edge LSRs are collocated with
the LSCs. Creating redundant interfaces between the edge LSRs and the ATM LSRs reduces the
chance of a disruption in network traffic by providing parallel paths.
• Redundant virtual trunks and VP tunnels between slave ATM switches. To ensure hot redundancy
between the ATM switches, you can create redundant virtual trunks and VP tunnels. See Figure 42.

Figure 42 Interface Redundancy

[Figure: redundant physical interfaces connect each edge LSR to its ATM switch, and redundant virtual
trunks/VP tunnels connect the slave ATM switches across the ATM networks; each ATM switch is
controlled by redundant LSCs.]

Implementing Hot or Warm LSC Redundancy

Virtually any configuration of switches and LSCs that provides hot redundancy can also provide warm
redundancy. You can also switch from warm to hot redundancy with little or no change to the links,
switch configurations, or partitions.
Hot and warm redundancy differ in the following ways:
• Hot redundancy uses both paths to route traffic. You set up both paths using equal-cost multipath
routing, so that traffic is load balanced between the two paths. As a result, hot redundancy uses twice
the number of MPLS label VCs as warm redundancy.
• Warm redundancy uses only one path at a time. You set up the paths so that one path has a higher
cost than the other. Traffic only uses one path and the other path is a backup path.
The following sections explain the two redundancy models in detail.

Implementing Hot LSC Redundancy

Hot redundancy provides near-instant failover to the other path when an LSC fails. When you set up hot
redundancy, both LSCs are active and have the same routing costs on both paths. To ensure that the
routing costs are the same, run the same routing protocols on the redundant LSCs.
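
For example, a minimal sketch of the edge LSR side of a hot-redundant setup, assuming OSPF and
hypothetical interface names, is shown below. Each tag-switching subinterface leads to one of the two
parallel LSC-controlled paths, and identical costs let OSPF install equal-cost routes over both.

! Two MPLS subinterfaces on the edge LSR, one toward each parallel path,
! left at their default (equal) routing costs.
interface ATM1/0.1 tag-switching
 ip unnumbered Loopback0
 tag-switching ip
!
interface ATM1/0.2 tag-switching
 ip unnumbered Loopback0
 tag-switching ip
!
! Allow load balancing over both equal-cost paths.
router ospf 1
 network 0.0.0.0 255.255.255.255 area 0
 maximum-paths 2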
In hot redundancy, the LSCs run parallel and independent LDPs. At the edge LSRs, when the LDP has
multiple routes for the same destination, it requests multiple labels. It also requests multiple labels when
it needs to support QoS. When one LSC fails, the labels distributed by that LSC are removed.


To achieve hot redundancy, you can implement the following redundant components:
• Redundant physical interfaces between the edge LSR and the ATM-LSR to ensure reliability in case
one physical interface fails.
• Redundant interfaces or redundant VP tunnels between the ATM switches.
• Slave ATM switches, such as the BPX 8650, can have redundant control cards and switch fabrics. If
redundant switch fabrics are used and the primary switch fabric fails, the other switch fabric takes over.
• Redundant LSCs.
• The same routing protocol running on both LSCs. (You can have different tag or label distribution
protocols.)
Figure 43 shows one example of how hot LSC redundancy can be implemented.

Figure 43 Hot LSC Redundancy

[Figure: in the physical LSC redundant network, each ATM switch is controlled by two LSCs (LSC-1/LSC-2
and LSC-3/LSC-4) and connects to the edge LSRs. The logical equivalent is two parallel paths of ATM-LSRs
(ATM-LSR-1 to ATM-LSR-3 and ATM-LSR-2 to ATM-LSR-4) between the edge LSRs.]

Implementing Warm LSC Redundancy

To achieve warm redundancy, you need only redundant LSCs. You need not run the same routing
protocols or distribution protocols on the LSCs.

Note You can use different routing protocols on parallel LSCs. However, you do not get near-instant
failover. The failover time includes the time it takes to reroute the traffic, plus the LDP bind request
time. If the primary routing protocol fails, the secondary routing protocol finds new routes and
creates new LVCs. An advantage to using different routing protocols is that the ATM switch uses
fewer resources and offers more robust redundancy.


If you run the same routing protocols, specify a higher cost for the interfaces on the backup LSC. This
ensures that data uses only the lower-cost path and saves resources on the ATM switch (the edge LSR
requests LVCs only through the lower-cost LSC). When the primary LSC fails, the edge LSR uses the
backup LSC and creates new paths to the destination. Creating new paths requires reroute time and LDP
negotiation time.
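
A minimal sketch of this approach, assuming OSPF and a hypothetical XTagATM interface name on the
backup LSC, follows; only the cost value distinguishes the backup LSC from the primary LSC.

! On the backup LSC: a higher OSPF cost makes the edge LSRs prefer the primary
! LSC's path and request LVCs only through it.
interface XTagATM61
 ip unnumbered Loopback0
 ip ospf cost 100
 tag-switching ip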
Figure 44 shows one example of how warm LSC redundancy can be implemented.

Figure 44 Warm LSC Redundancy

[Figure: in the physical LSC redundant network, each ATM switch is controlled by two LSCs (LSC-1/LSC-2
and LSC-3/LSC-4); the edge LSRs and ATM switches are interconnected by virtual trunks/VP tunnels
(4, 8, 10, 12, 16, and 20) carried over the physical interfaces. The logical equivalent is two parallel paths
of ATM-LSRs (ATM-LSR-1 to ATM-LSR-3 and ATM-LSR-2 to ATM-LSR-4) between the edge LSRs, with only
one path active at a time.]

Reducing the Number of LVCs for LSC Redundancy


By default, an LSC includes edge LSR functionality, which means that the LSC can act as a label edge
device. To provide the edge LSR functionality, the LSC creates an LSP for each destination in the routing
table.
With LSC redundancy, if 400 destinations exist in the network, each redundant LSC adds 400 headend
VCs. In hot redundancy mode, 800 headend VCs are created for the LSCs. If the LSCs do not need to act
as edge LSRs, those 800 LVCs are wasted.
The number of LVCs increases as the number of redundant LSCs increases. In the case of a VC-merged
system, the number of LVCs can be low. However, in a non-VC-merged system, the number of LVCs can
be high. To reduce the number of LVCs, disable the edge LSR functionality in the LSC. Enter the
tag-switching atm disable-headend-vc interface command to disable the edge LSR functionality on the
LSC and prevent the creation of headend VCs.


Note As an alternative to the tag-switching atm disable-headend-vc interface command, you can issue
the tag-switching request-tags for interface command with an access list to save LVC space.

For more information on reducing the number of LVCs, see the “Reducing the Number of Label Switch
Paths Created in an MPLS Network” section.

Implementation Considerations
The following sections explain items that need to be considered when implementing hot or warm LSC
redundancy in a network.

Hot LSC Redundancy Considerations

The following list explains the items you need to consider when implementing hot LSC redundancy:
• LSC hot redundancy needs parallel paths. Specifically, there must be capacity for at least two
end-to-end parallel paths from each source to each destination. Each path is controlled by
one of a pair of redundant LSCs.
• LSPs for the destinations are initiated from the edge LSR. The edge LSR initiates multiple paths for
a destination only if it has parallel paths to its next hop. Therefore, it is important to have parallel
paths from the edge LSR. You can achieve parallel paths by having two physical links from the edge
LSR or by having two separate VP tunnels on one link.
• Hot redundancy protection extends from the edge LSR only as far as parallel paths are present. So,
it is best if parallel paths are present throughout the entire network.
• Hot redundancy increases the number of VCs used in the network. Each physical link with two VSI
partitions carries twice as many VCs as it otherwise would. Various techniques can be used to reduce
VC usage. Unnumbered links (ip unnumbered in the Cisco IOS interface configuration) reduce the
number of routes in the routing table and hence the number of VCs required. On the LSCs, you can
use the tag-switching atm disable-headend-vc interface command to disable edge LSR functionality
and reduce the number of VCs used. The tag-switching request-tags for interface command with an
access list also restricts the creation of LVCs.
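
For example, using unnumbered links is a per-interface change, sketched below with an assumed
interface name.

! Borrow the loopback address instead of numbering the link; fewer connected
! routes in the routing table means fewer LVCs are requested.
interface ATM1/0.1 tag-switching
 ip unnumbered Loopback0
 tag-switching ip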

Warm LSC Redundancy Considerations

The following list explains the items you need to consider when implementing warm LSC redundancy:
• LSC warm redundancy uses a single active path between the source and destination. As in the hot
redundancy case, end-to-end parallel paths must still exist, but only one path has an active LSP for
each destination. In the event of a failure, the other path is established, with some delay due to
rerouting.
• The number of VCs in the network does not change with warm redundancy.
• Hot LSC redundancy achieves failure recovery with little loss of traffic. However, hot redundancy
doubles the VC requirements in the network. Warm LSC redundancy requires the same number of
VCs as a similar network without LSC redundancy. However, traffic loss due to a failure is greater;
traffic may be lost for a period of seconds during rerouting.


Note The precise traffic loss depends on the type of failure. If the failure is in an LSC, the LSPs controlled
by that LSC typically remain connected for some time. Traffic can still flow successfully on the
“failed” path until the edge LSRs switch all traffic to the alternate path (which might occur tens of
seconds later, depending on routing protocol configuration). The only traffic loss might occur in the
edge LSR when traffic changes to the new path, which typically takes a few milliseconds or less.

Reducing the Number of Label Switch Paths Created in an MPLS Network


You can use two methods to reduce the number of LSPs created in an MPLS network:
• Disable LSPs from being created from an edge LSR or LSC to a destination IP address. Use the
tag-switching request-tags for interface command. Specify the destination IP addresses that you
want to disable from creating LSPs. This command allows you to permit creation of some LSPs,
while preventing the creation of others.
• Disable the LSC from acting as an edge LSR by using the tag-switching atm disable headend-vc
interface command. This command removes all LSPs that originate at the MPLS LSC and disables
the LSC from acting as an edge LSR.

Using an Access List to Disable Creation of LSPs to Destination IP Addresses


You can prevent LSPs from being created between edge LSRs and LSCs to prevent the unnecessary use
of LVC resources in a slave ATM switch. Use the tag-switching request-tags for interface command
with an access list to disable the creation of the LSPs.
LSPs between some edge LSRs in an MPLS network are often unnecessary. Every time a new
destination is created, LSPs are created from all edge LSRs in the MPLS network to the new destination.
You can create an access list at an edge LSR or LSC to restrict the destinations for which a
downstream-on-demand request is issued.
For example, Figure 45 shows an MPLS ATM network that consists of the following elements:
• The PE routers in the VPN require LSPs to communicate with each other.
• All the PE routers are in network 1 (198.x.x.x).
• All the IGP IP addresses are in network 2 (192.x.x.x).
• If numbered interfaces are required (for network management or other purposes), they are placed in
network 2 (192.x.x.x).
Use tag-switching request-tags for interface commands to accomplish the following tasks:
• Allow the PE routers in network 1 to create LSPs and communicate with each other.
• Prevent LSPs from being created in network 2.
Performing these tasks reduces the number of LSPs in the MPLS ATM cloud, which reduces the VC
usage in the cloud.


Figure 45 Sample MPLS ATM Network

[Figure: CE routers attach to PE routers at the edge of the MPLS ATM network; the PE routers belong to
network 1 and the IGP addresses belong to network 2.]

Note When using access lists to prevent the creation of headend LVCs or LSPs, do not disable the LSC
from acting as an edge LSR with the tag-switching atm disable-headend-vc interface command, which
prevents all LSPs from being established.


The following examples of the tag-switching request-tags for interface command use Figure 46 as a
basis. The examples show different ways to disable the creation of LSPs from the LSC to the edge LSR,
and from the edge LSRs to the LSC.

Figure 46 Sample Configuration

[Figure: an LSC (172.16.53.1) controls an ATM switch that connects edge LSR 1 (192.168.0.1) and
edge LSR 2 (192.168.0.2).]

Using a Numbered Access List

The following examples use a numbered access list to restrict creation of LSPs.

Preventing LSPs from the LSC to the Edge LSRs


The following example prevents LSPs from being established from the LSC to all 198.x.x.x destinations.
However, transit LSPs are allowed between 198.x.x.x destinations. Add the following commands to the
LSC configuration:
tag-switching request-tags for 1
access-list 1 deny 198.0.0.0 0.255.255.255
access-list 1 permit any

Preventing LSPs from the Edge LSRs to the LSC


The following example prevents headend LVCs from being established from edge LSR 1 and edge LSR
2 to the LSC (192.x.x.x). However, transit LSPs are allowed between 198.x.x.x destinations. Add the
following commands to the edge LSR 1 and 2 configurations:
tag-switching request-tags for 1
access-list 1 deny 192.0.0.0 0.255.255.255
access-list 1 permit any

Using a Named Access List

The following examples use a named access list to perform the same tasks as in the previous examples.
Add the following commands to the LSC configuration:
tag-switching request-tags for nolervcs
ip access-list standard nolervcs
deny 198.0.0.0 0.255.255.255
permit any

Add the following commands to the edge LSR 1 and edge LSR 2 configurations:
tag-switching request-tags for nolervcs
ip access-list standard nolervcs
deny 192.0.0.0 0.255.255.255
permit any


Specifying Exact Match IP Addresses with an Access List

The following examples use exact IP addresses to perform the same tasks as in the previous examples.
Add the following commands to the LSC configuration:
tag-switching request-tags for 1
access-list 1 deny 198.5.0.1 0.0.0.0
access-list 1 deny 198.5.0.2 0.0.0.0
access-list 1 permit any

Add the following commands to the edge LSR 1 and edge LSR 2 configurations:
tag-switching request-tags for 1
access-list 1 deny 192.6.53.1 0.0.0.0
access-list 1 permit any

Instead of configuring an access list on the LSC, you can issue the tag-switching atm
disable-headend-vc interface command to disable the creation of LSPs. This command works only with
LSCs.

Disabling the LSC from Acting as an Edge LSR


To remove all LSPs from the MPLS LSC and disable its ability to function as an edge LSR, you can use
either of the following interface commands:
• tag-switching atm disable-headend-vc
• tag-switching request-tags for
Disabling the LSC from acting as an edge LSR causes the LSC to stop initiating LSPs to any destination.
Therefore, the number of LVCs used in the network is reduced. The LSC can still terminate tailend
LVCs, if required.
With downstream-on-demand label distribution, LVCs are depleted as each new node is added. These commands
save resources by disabling the LSC from setting up unwanted LSPs. The absence of those LSPs allows
traffic to follow the same path as control traffic.
The following example uses the tag-switching atm disable-headend-vc interface command to disable
the LSC from functioning as an edge LSR. The following line is added to the LSC configuration:
tag-switching atm disable-headend-vc

The following example uses the tag-switching request-tags for interface command to disable the LSC
from functioning as an edge LSR. The following lines are added to the LSC configuration:
tag-switching request-tags for dedicatedlsc
ip access-list standard dedicatedlsc
deny any

Note For a Cisco 6400 UAC with an NRP configured to function as an LSC, disable the LSC from acting
as an edge LSR. An NRP LSC should support transit label switch paths only through the controlled ATM
switch under VSI control.

Using the Cisco 6400 Universal Access Concentrator as an MPLS LSC


You can configure the Cisco 6400 UAC to operate as an MPLS LSC in an MPLS network. The hardware
that supports MPLS LSC functionality on the Cisco 6400 UAC is described in the following sections.


Note If you configure a Cisco 6400 UAC with a node resource processor (NRP) to function as an LSC,
disable MPLS edge LSR functionality. Refer to the tag-switching atm disable-headend-vc
command in the Cisco IOS Switching Services Command Reference for information on disabling
MPLS edge LSR functionality. An NRP LSC should support transit label switch paths only through
the controlled ATM switch under VSI control.

Cisco 6400 UAC Architectural Overview


A Cisco 6400 UAC can operate as an MPLS LSC if it incorporates the following components:
• Node switch processor (NSP)—The NSP incorporates an ATM switch fabric, enabling the Cisco
6400 UAC to function as an ATM-LSR in a network. The NSP manages all the external ATM interfaces
for the Cisco 6400 UAC.
• NRP—The NRP enables a Cisco 6400 UAC to function as an LSC. When you use the NRP as an
LSC, however, you must not configure the NRP to perform other functions.
The NRP contains internal ATM interfaces that enable it to be connected to the NSP.
However, the NRP cannot access the external ATM interfaces of the Cisco 6400 UAC. Only the
NSP can access the external ATM interfaces.

Note A Cisco 6400 UAC chassis can accommodate multiple NRPs, including one dedicated to
MPLS LSC functions. You cannot use an additional NRP as an MPLS LSC. However,
you can use additional NRPs to run MPLS and perform other networking services.

• ATM port adapter—The Cisco 6400 UAC uses an ATM port adapter to provide external connectivity
for the NSP.


Figure 47 shows the components that you can configure to enable the Cisco 6400 UAC to function as an
MPLS LSC.

Figure 47 Cisco 6400 UAC Configured as an MPLS LSC

[Figure: inside the Cisco 6400 UAC chassis, one NRP supports the LSC functions and the NSP supports the
ATM switching functions; the ATM port adapter provides external ATM connectivity for the NSP. PVPs n
through n+3 are cross-connected through the NSP switch fabric between the NRPs (edge LSR and LSC) and
the external interfaces. Additional NRPs can support MPLS and IP Layer 3 services.]

Configuring Permanent Virtual Circuits and Permanent Virtual Paths


The NRP controls the slave ATM switch through the VSI protocol. The VSI protocol operates over a
PVP that you configure. The PVP is dedicated to the VCs that the VSI control channel uses.
For the NRP to control an ATM switch through the VSI, cross-connect the control VCs from the ATM
switch through the NSP to the NRP. The ATM switch (BPX) uses defined control VCs for each BXM
slot of the BPX chassis, enabling the LSC to control external XTagATM interfaces through the VSI.
Table 27 defines the PVCs that must be configured on the NSP interface connected to the BPX VSI shelf.
These PVCs are cross-connected via the NSP to the NRP VSI master control port, which is running the
VSI protocol.
For an NRP that is installed in slot 3 of a Cisco 6400 UAC chassis, the master control port would be
ATM3/0/0 on the NSP. As shown in Figure 37, the BPX switch control interface is 12.1, and the NSP
ATM port connected to this interface is the ATM interface that is cross-connected to ATM3/0/0. Because
Figure 37 shows that the BXM slaves in BPX slots 6 and 12 are configured as external XTagATM ports,
the PVCs that must be cross-connected through the NSP are 0/45 for slot 6 and 0/51 for slot 12,
respectively, as outlined in Table 27.

Table 27 VSI Interface Control PVCs for BPX VSI Slave Slots

BPX VSI Slave Slot VSI Interface Control VC


1 0/40
2 0/41
3 0/42
4 0/43
5 0/44
6 0/45
7 0/46
8 0/47
9 0/48
10 0/49
11 0/50
12 0/51
13 0/52
14 0/53
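
As a sketch only, the cross-connects for BXM slots 6 and 12 with the NRP in slot 3 might be configured
on the NSP along the following lines. The NSP interface identifier facing the BPX and the VPI/VCI values
used on the NRP side are assumptions, and the exact cross-connect syntax depends on the NSP software.

! Cross-connect the VSI control PVCs from the NSP interface that faces the BPX
! (ATM1/0/0 here, an assumption) to the NRP master control port ATM3/0/0.
interface ATM1/0/0
 atm pvc 0 45 interface ATM3/0/0 0 45
 atm pvc 0 51 interface ATM3/0/0 0 51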

Figure 48 shows the functional relationships among the Cisco 6400 UAC hardware components and the
permanent virtual paths (PVPs) that you can configure to support MPLS LSC functionality.

Figure 48 Cisco 6400 UAC PVP Configuration for MPLS LSC Functions

[Figure: manually configured PVPs (VP = n) run from the slave ATM switch through the NSP to the NRP for
MPLS LSC functions, and a separate PVP carries the VSI control channel. Control VC 0/32 on slave switch
interface 6.1 maps through VC 2/35 to NRP interface xtag1, and control VC 0/32 on interface 12.2 maps
through VC 2/83 to NRP interface xtag2.]


All other MPLS LSC functions, such as routing, terminating LVCs, and LDP control VCs (default 0/32),
can be accomplished by means of a separate, manually configured PVP (see the upper shaded area in
Figure 48). The value of “n” for this manually configured PVP must be the same among all the associated
devices (the NRP, the NSP, and the slave ATM switch). Because the NSP uses VP = 0 for ATM Forum
signaling and the BPX uses VP = 1 for autoroute, the value of “n” for this PVP for MPLS LSC functions
must be greater than or equal to 2 and must not exceed the maximum VPI supported on the interfaces.
Note that some edge LSRs have ATM interfaces with limited VC space per virtual path (VP). For these
interface types, you define several VPs. For example, the Cisco ATM Port Adapter (PA-A1) and the AIP
interface are limited to VC range 33 through 1018. To use the full capacity of the ATM interface,
configure four consecutive VPs. Make sure the VPs are within the configured range of the BPX.
For internodal BPX connections, we suggest that you configure VPs 2 through 15; for edge LSRs, we
suggest that you configure VPs 2 through 5. (Refer to the BPX cnfrsrc command in the Cisco BPX 8600
Series documentation for examples of how to configure BPX service nodes.)
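
For instance, an edge LSR with a PA-A1 or AIP interface might restrict its label VCs to VPIs 2 through 5
as sketched below; the interface name is an assumption.

! Limit LVC creation to VPIs 2 through 5 on the MPLS subinterface, within the
! VPI range configured on the BPX.
interface ATM1/0.1 tag-switching
 ip unnumbered Loopback0
 tag-switching atm vpi 2-5
 tag-switching ip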

Control VC Setup for MPLS LSC Functions


After you connect the NRP, the NSP, and the slave ATM switch by means of manually configured PVPs
(as shown in Figure 48), the NRP can control the slave ATM switch as though it is directly connected to
the NRP. The NRP discovers the interfaces of the slave ATM switch and establishes the default control
VC to be used in creating MPLS VCs.
The slave ATM switch shown in Figure 48 incorporates two external ATM interfaces (labeled xtag1 and
xtag2) that are known to the NRP as XTagATM61 and XTagATM122, respectively. On interface 6.1 of
the slave ATM switch, VC = 0/32 is connected to VC 2/35 by the VSI protocol. On the NRP, VC 2/35 is
terminated on interface XTagATM61 and mapped to VC 0/32, also by means of the VSI protocol. This
mapping enables the LDP to discover MPLS LSC neighbors by means of the default control VC 0/32 on
the physical interface. On interface 12.2 of the slave ATM switch, VC 0/32 is connected to VC 2/83 by
the VSI protocol. On the NRP, VC 2/83 is terminated on interface XTagATM122 and mapped to VC
0/32.
Note that the selection of these VCs is dependent on the availability of VC space. Hence it is not
predictable which physical VC will be mapped to the external default control VC 0/32 on the XTagATM
interface. The control VC will be shown as a PVC on the LSC, as opposed to an LVC, when you enter
the Cisco IOS show xtagatm vc EXEC command.


Configuring the Cisco 6400 UAC to Perform Basic MPLS LSC Operations
Figure 49 shows a Cisco 6400 UAC containing a single NRP that has been configured to perform basic
MPLS LSC operations.

Figure 49 Typical Cisco 6400 UAC Configuration to Support MPLS LSC Functions

[Figure: LSR1 (loopback 2.2.2.2) and LSR2 (loopback 3.3.3.3) attach to interfaces 6.1 and 12.2 of the
slave ATM switch in the Cisco 6400 UAC. The NRP (loopback 1.1.1.1) carries the LDP and routing paths
between LSR1 and LSR2, and the data path between LSR1 and LSR2 for their respective networks passes
through the slave ATM switch.]

Note If the NRP incurs a fault that causes it to malfunction (in a single NRP configuration), the LVCs and
routing paths pertaining to MPLS LSC functions are lost.

Note The loopback addresses must be configured with a 32-bit mask and be included in the relevant IGP
or BGP routing protocol, as shown in the following example:
ip address 192.103.210.5 255.255.255.255
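
A sketch of one way to do this, assuming OSPF as the IGP, follows; the OSPF process number is an
assumption.

! Loopback with a 32-bit mask, advertised into OSPF.
interface Loopback0
 ip address 192.103.210.5 255.255.255.255
!
router ospf 1
 network 192.103.210.5 0.0.0.0 area 0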

Defining the MPLS Control and IP Routing Paths

In the MPLS LSC topology shown in Figure 49, the devices labeled LSR1 and LSR2 are external to the
Cisco 6400 UAC. These devices, with loopback addresses as their respective LDP identifiers, are
connected to two separate interfaces labeled 6.1 and 12.2 on the slave ATM switch. Both LSR1 and LSR2
learn each other’s routes from the NRP by means of the LDP and routing paths shown in Figure 49.
Subsequently, LVCs are established by means of LDP operations to create the data paths between LSR1
and LSR2 through the slave ATM switch.
Both LSR1 and LSR2 learn the loopback address of the NRP, and each creates a data path (LVCs) that
terminates at the NRP. These LVCs, called tailend LVCs, are not shown in Figure 49.


Disabling Edge LVCs

By default, the NRP requests LVCs for the next hop devices (the LSRs shown in Figure 49). The headend
LVCs enable the LSC to operate as an edge LSR. Because the NRP is dedicated to controlling the slave
ATM switch, the headend LVCs are not required.

Note If a Cisco 6400 UAC with an NRP is configured to function as an LSC, disable the edge LSR
functionality. An NRP LSC should support transit LSPs only through the controlled ATM switch
under VSI control. Refer to the tag-switching atm disable-headend-vc interface command in the
Cisco IOS Switching Services Command Reference to disable edge LSR functionality.

The tag-switching atm disable-headend-vc command disables the default behavior of the NRP in
setting up headend switch LVCs, thereby saving VC space.
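
A minimal sketch, assuming the XTagATM interface names used earlier in this section, follows:

! Disable headend LVC setup on each XTagATM interface of the NRP LSC.
interface XTagATM61
 tag-switching atm disable-headend-vc
!
interface XTagATM122
 tag-switching atm disable-headend-vc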

Supporting ATM Forum Protocols


You can connect the MPLS LSC to a network that is running ATM Forum protocols while the MPLS
LSC simultaneously performs its functions. However, you must connect the ATM Forum network
through a separate ATM interface (that is, not through the master control port).

MPLS Egress NetFlow Accounting


MPLS egress NetFlow accounting allows you to capture IP flow information for packets undergoing
MPLS label disposition; that is, packets that arrive on a router as MPLS and are sent as IP.
Before the introduction of the MPLS Egress NetFlow Accounting feature, you could capture NetFlow data
only for packets that arrived at the router in IP format. When an edge router performed MPLS label
imposition (received
an IP packet and sent it as an MPLS packet), NetFlow data was captured when the packet entered the
network. Inside the network, the packet was switched based only on MPLS information, and thus
NetFlow information was not captured until after the last label was removed.
One common application of the MPLS egress NetFlow accounting feature allows you to capture the
MPLS VPN IP flows that are traveling from one site of a VPN to another site of the same VPN through
the service provider backbone.
Before the introduction of the MPLS Egress NetFlow Accounting feature, you captured flows only for IP packets on
the ingress interface of a router. You could not capture flows for MPLS encapsulated frames, which were
switched through CEF from the input port. Therefore, in an MPLS VPN environment you captured flow
information as packets were received from a CE router and forwarded to the backbone. However, you
could not capture flow information as packets were sent to a CE router because those packets were
received as MPLS frames.
The MPLS egress NetFlow accounting feature lets you capture the flows on the outgoing interfaces.
Figure 50 shows a sample topology. To capture the flow of traffic going to Site 2 of VPN 1 from any
remote VPN 1 sites, you enable MPLS egress NetFlow accounting on link PE2-CE5 of provider edge
router PE2. The flows are stored in a global flow cache maintained by the router. You can use the show
ip cache flow EXEC command or other aggregation flow commands to view the egress flow data.
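
A sketch of the PE2 side of this setup follows; the interface name, addressing, and collector address are
assumptions. The mpls netflow egress interface command enables the egress flow capture, and the
ip flow-export commands send the cache entries to a collector.

! Capture egress flows on the interface toward CE5 and export them to a collector.
interface Ethernet1/0
 ip address 10.1.1.1 255.255.255.0
 mpls netflow egress
!
ip flow-export version 5
ip flow-export destination 10.10.10.10 9996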


Figure 50 Provider and Customer Networks with MPLS Egress NetFlow Accounting

[Figure: a provider backbone of P routers connects PE routers PE1 through PE4. CE routers CE1 through
CE6 attach the sites of VPN 1 (Sites 1 through 4) and VPN 2 (Sites 1 and 2). NetFlow collectors
(Collector 1 and Collector 2) and the VPN-SC application reside in the backbone. MPLS egress NetFlow
accounting is enabled on the PE2-CE5 link serving Site 2 of VPN 1.]

The PE routers export the captured flows to the configured collector devices in the provider network. The
NetFlow Analyzer or the VPN solution center (VPN-SC) application collects this information and
computes and displays site-to-site VPN traffic statistics.
Benefits of MPLS Egress NetFlow Accounting are as follows:
• Enhanced network monitoring for complete billing solution—You can now capture flows on the
egress and ingress router interfaces to provide complete end-to-end usage information on network
traffic. The accounting server uses the collected data for various levels of aggregation for accounting
reports and API accounting information, thus providing a complete billing solution.
• More accurate accounting statistics—NetFlow data statistics now account for all the packets that are
dropped in the core of the service provider network, thus providing more accurate traffic statistics
and patterns.
