Cisco EPN 4.0 Mobile Transport
Design and Implementation Guide
September 2014
The CVD program consists of systems and solutions designed, tested, and documented to facilitate faster, more reliable,
and more predictable customer deployments. For more information visit http://www.cisco.com/go/designzone.
THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR
THEIR APPLICATION OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR
OTHER PROFESSIONAL ADVICE OF CISCO, ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT
THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING THE DESIGNS. RESULTS MAY VARY
DEPENDING ON FACTORS NOT TESTED BY CISCO.
The Cisco implementation of TCP header compression is an adaptation of a program developed by the University of
California, Berkeley (UCB) as part of UCB's public domain version of the UNIX operating system. All rights reserved.
Copyright 1981, Regents of the University of California.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other
countries. To view a list of Cisco trademarks, go to this URL: www.cisco.com/go/trademarks. Third-party trademarks
mentioned are the property of their respective owners. The use of the word partner does not imply a partnership relation-
ship between Cisco and any other company. (1110R)
Any Internet Protocol (IP) addresses and phone numbers used in this document are not intended to be actual addresses
and phone numbers. Any examples, command display output, network topology diagrams, and other figures included in
the document are shown for illustrative purposes only. Any use of actual IP addresses or phone numbers in illustrative
content is unintentional and coincidental.
The Cisco Evolved Programmable Networks (EPN) System Release 4.0 continues to develop the
design of the Unified MPLS for Mobile Transport (UMMT) and Fixed Mobile Convergence (FMC)
systems as part of a multi-year ongoing development program that builds towards a flexible,
programmable, and cost-optimized network infrastructure, targeted to deliver in-demand fixed and
mobile network services.
As described in the Cisco EPN 4.0 System Concept Guide, the EPN System follows a layered design,
with each layer building on top of the previous. This guide focuses on the design and implementation
aspects of the service infrastructure layer, with specific focus on Transport Services for mobile
technologies spanning across all generations, including 2G, 3G, and 4G.
The following essential features complement transport of mobile services:
Network synchronization (physical layer and packet based)
Hierarchical quality of service (H-QoS)
Operations, administration, and maintenance (OAM)
Performance management
Fast convergence
The Cisco EPN System architecture also optimizes around advanced 4G requirements such as the
following:
Direct eNodeB-to-eNodeB communication through the X2 interface
IPv4 and IPv6 Multicast for optimized video transport based on Evolved Multimedia Broadcast
Multicast Services (eMBMS) architecture
Virtualization for Radio Access Network (RAN) sharing
Distribution of the Evolved Packet Core (EPC) gateways
Cisco developed the Cisco EPN System to simplify the end-to-end mobile transport/backhaul
architecture. EPN achieves this by decoupling the transport and service layers of the network, thereby
allowing these two distinct entities to be provisioned and managed independently. The Unified MPLS
Transport layer seamlessly interconnects the access, aggregation, and core MPLS domains of the
network infrastructure with hierarchical Label-Switched Paths (LSPs). Once this Unified MPLS
Transport is established (a task that only needs to be undertaken once), a multitude of services can be
deployed on top of it. These services can span any location in the network without topological
restrictions.
This guide focuses on delivery of mobile backhaul services across the Cisco EPN System, which
provides a comprehensive RAN backhaul solution for transport of LTE, legacy 2G Global System for
Mobile Communications (GSM), existing 3G Universal Mobile Telecommunications Service (UMTS)
services and small cells (Wi-Fi and Hybrid Radio). Transport of LTE and IP-enabled 3G/Wi-Fi services
is provided by a highly scaled MPLS L3VPN. Legacy GSM and ATM-based UMTS backhaul are
provided by pseudowire emulation edge-to-edge (PWE3)-based transport of emulated time-division
multiplexing (TDM) and ATM circuits, respectively. Enhanced Multimedia Broadcast Multicast Service
(eMBMS) transport is provided by multicast-based mechanisms, minimizing packet duplication within
the transport network.
Figure 2-1 provides an overview of the backhaul of legacy 2G/3G and LTE services across the Cisco
EPN System with an IP/MPLS-enabled access domain. All packet-based transport mechanisms originate
at the cell site gateway (CSG) in this scenario.
[Figure 2-1: Backhaul of legacy 2G/3G and LTE services over the Cisco EPN System with IP/MPLS access. TDM BTS and ATM Node B traffic is carried from the cell site gateway (ASR-901, ASR-920) over AToM pseudowires, while IP eNodeB traffic uses an MPLS VPN (v4/v6), optionally through a NID (ME-1200 ZTD), across pre-aggregation (ASR-903, ASR-9001), aggregation (ASR-9000), and core (CRS-3) nodes to the mobile packet core (SGSN, GGSN, MME, S/PGW, RNC). A v4 or v6 MPLS VPN carries S1 and X2; IP/PIM v4/v6 carries eMBMS M3/M1. S1/X2 and M1/M3 require different IP endpoints and VLAN interfaces in the eNB when IP/PIM is used for M3/M1. Access: fiber or microwave links and rings; aggregation: DWDM, fiber rings, hub-and-spoke, hierarchical topology; core: DWDM, fiber rings, mesh topology.]
Figure 2-2 provides an overview of the backhaul of legacy 2G/3G and LTE services across the Cisco
EPN System with a native Ethernet or TDM-based access domain. The MPLS VPWS and L3 VPN
transport mechanisms originate at the pre-aggregation node (PAN) in this scenario. The Ethernet-based
access domain may be based on hierarchical/multipoint microwave access or on Ethernet G.8032 access
rings over fiber or microwave connections.
[Figure 2-2: Backhaul of legacy 2G/3G and LTE services over the Cisco EPN System with native Ethernet or TDM (SDH/SONET) access. AToM pseudowires for TDM BTS and ATM Node B traffic and the MPLS VPN (v4/v6) originate at the pre-aggregation node and extend to the mobile packet core (RNC, SGSN, GGSN). Access: Ethernet/TDM microwave; aggregation: DWDM, fiber rings, hub-and-spoke, hierarchical topology; core: DWDM, fiber rings, mesh topology.]
Figure 2-3 provides an overview of the transport of eMBMS interfaces across the Cisco EPN System
with an IP/MPLS-enabled access domain. All packet-based transport mechanisms are between the cell
site gateway (CSG) and the respective service gateways in this scenario. The Unified MPLS Core and
Aggregation enable mLDP-labeled multicast transport and the access network distributes eMBMS with
PIM v4/v6 and IP multicast.
[Figure 2-3: Transport of eMBMS interfaces over the Cisco EPN System with IP/MPLS access, showing the S1-U and S11 interfaces across fiber/microwave access, hierarchical aggregation, and meshed core topologies.]
The Cisco EPN System design provides transport for both legacy and current mobile services. To
accomplish this on a single network, MPLS service virtualization is employed, which provides emulated
circuit services via L2VPN for 2G and 3G services and L3VPN services for IP-enabled 3G, 4G and
Wi-Fi services.
All the service models are outlined in this chapter, which includes the following major topics:
L3 MPLS VPN Service Model for LTE, page 3-1
Multicast Service Model for LTE eMBMS, page 3-7
L2 MPLS VPN Service Model for 2G and 3G, page 3-9
Mobile Transport Capacity Monitoring, page 3-10
[Figure 3-1: L3 MPLS VPN service model for LTE: the eNodeB connects across the mobile access and aggregation networks to MTGs in the mobile packet core, with S1-U terminating on SGWs and S1-C on the MME.]
The Mobile RAN includes cell sites with evolved NodeBs (eNBs) that are connected:
Directly in a point-to-point fashion to the PANs utilizing Ethernet fiber or microwave, OR
Through CSGs connected in G.8032-protected ring topologies over Ethernet fiber or microwave
transmission, OR
Through CSGs connected in ring topologies by using MPLS/IP packet transport over Ethernet fiber
or microwave transmission.
Furthermore, eNodeBs can be connected to a CSG or a PAN directly, or through an intermediate Ethernet
NID device, owned by the backhaul operator and located at the mobile base stations to enhance
end-to-end service visibility. The connectivity model between the CSG/PAN and the NID involves the
use of two VLANs carrying the following types of traffic:
Mobile data and control unicast traffic, for inter-eNodeB and MPC gateway communication.
Mobile data multicast traffic, for eMBMS-enabled mobile architecture.
The cell sites in the RAN access are aggregated into an MPLS/IP pre-aggregation/aggregation network,
which may consist of physical hub-and-spoke or ring connectivity, and which interfaces with the MPLS/IP
core network hosting the EPC gateways.
From the E-UTRAN backhaul perspective, the most important LTE/SAE reference points are the X2 and
S1 interfaces. The eNodeBs are interconnected with each other via the X2 interface, and towards the EPC
via the S1 interface.
The S1-c or S1-MME interface is the reference point for the control plane between E-UTRAN and
MME. The S1-MME interface is based on the S1 Application Protocol (S1AP) and is transported
over the Stream Control Transmission Protocol (SCTP). The EPC architecture supports MME
pooling to enable geographic redundancy, capacity increase, and load sharing. This requires the
eNodeB to connect to multiple MMEs. The L3 MPLS VPN service model defined by the Cisco EPN
System allows eNodeBs in the RAN access to be connected to multiple MMEs that may be
distributed across regions of the core network for geographic redundancy.
The S1-u interface is the reference point between E-UTRAN and SGW for the per-bearer user plane
tunneling and inter-eNodeB path switching during handover. The application protocol used on this
interface is GPRS Tunneling Protocol (GTP) v1-U, transported over User Datagram Protocol
(UDP). SGW locations affect u-plane latency, and the best practice for LTE is to place S/PGWs in
regions closer to the aggregation networks that they serve so that the latency budget of the eNodeBs
to which they connect is not compromised. The EPC architecture supports SGW pooling to enable
load balancing, resiliency, and signaling optimization by reducing the handovers. This requires the
eNodeB to connect to multiple SGWs. The L3 MPLS VPN service model allows eNodeBs in the
RAN access to be connected to multiple SGWs, which include ones in the core close to the local
aggregation network and SGWs that are part of the pool serving neighboring core POPs.
The X2 interface, which comprises the X2-c and X2-u reference points for the control and bearer planes,
provides direct connectivity between eNodeBs. It is used to hand over user equipment from a source
eNodeB to a target eNodeB during the inter-eNodeBs handover process. For the initial phase of LTE,
the traffic passed over this interface is mostly control plane related to signaling during handover.
This interface is also used to carry bearer traffic for a short period (<100ms) between the eNodeBs
during handovers. The stringent latency requirements of the X2 interface require that the mesh
connectivity between CSGs introduce minimal delay, on the order of less than 10 ms. The L3 MPLS
VPN service model provides shortest-path connectivity between eNodeBs so as not to introduce
unnecessary latency.
During initial deployments in regions with low uptake and smaller subscriber scale, MME and
SGW/PGW pooling can be used to reuse mobile gateways serving neighboring core POPs.
Gradually, as capacity demands and subscriber scale increases, newer gateways can be added closer
to the region. The L3 MPLS VPN service model for LTE backhaul defined by the EPN System
allows migration to newer gateways without any re-provisioning of the service model
or re-architecting of the underlying transport network.
With the distribution of the new spectrum made available for 3G and 4G services, many new SPs
have entered the mobility space. These new entrants would like to monetize the spectrum they have
acquired, but lack the national infrastructure coverage owned by the incumbents. LTE
E-UTRAN-sharing architecture allows different core network operators to connect to a shared radio
access network. The sharing of cell site infrastructure could be based on:
A shared eNodeB/shared backhaul model, where different operators are presented on different
VLANs by the eNodeB to the CSG, OR
A different eNodeB/shared backhaul model, where the foreign operator's eNodeB is connected
on a different interface to the CSG.
Regardless of the shared model, the Cisco EPN System provides per-mobile SP-based L3 MPLS VPNs
that are able to identify, isolate, and provide secure backhaul for different operator traffic over a single
converged network.
[Figure 3-2: LTE L3 MPLS VPN route-target scheme at the MTG: the MTG VRF exports the MPC RT and imports the MPC RT and Common RT.]
Each aggregation and/or each individual RAN access region in the network is assigned a unique RT
denoted as the aggregation-wide or RAN-wide AGGR RT and RAN RT, respectively. The AGGR
RT identifies all cell site routes in a given aggregation domain regardless of the RAN access domain
of origin, while the RAN-wide RT is RAN access domain specific.
For S1 communication, all CSGs import the MSE RT and export the Global RT. In the MPC, the MTGs
import the MSE RT and the Global RT and export only the MSE RT. This allows the MTGs to have
connectivity to all other gateways in the MPC, as well as to the CSGs in the RAN access regions across
the entire network. The MTGs are capable of handling large scale and learn all VPNv4 prefixes in the
LTE VPN.
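As an illustration only, the S1 import/export logic described above could be expressed with IOS-XR-style VRF configuration along the following lines (the VRF name and RT values are hypothetical placeholders, not values defined by the system):

```
! CSG: import the MSE RT, export the Global RT (example values)
vrf LTE
 address-family ipv4 unicast
  import route-target
   64512:500
  !
  export route-target
   64512:1000
!
! MTG: import the MSE and Global RTs, export only the MSE RT
vrf LTE
 address-family ipv4 unicast
  import route-target
   64512:500
   64512:1000
  !
  export route-target
   64512:500
```

With this asymmetry, the MTGs learn every CSG route while the CSGs only learn the gateway routes tagged with the MSE RT, keeping the access route scale low.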
In some cases, depending on the spread of the macro cell footprint, it might be desirable to provide X2
interfaces between CSGs located in neighboring RAN access regions.
For X2 communication, each CSG in a given access domain exports the local AGGR-wide and/or RAN
RTs and imports the AGGR or RAN RT of local aggregation or neighboring access domains according
to its route scale capabilities. In a network with low route scale-capable CSGs, limiting import of routes
to those advertised by the neighboring RAN networks through their specific RAN RTs ensures the VRF
route scale of the CSGs is kept to a minimum. VPNv4 prefixes corresponding to CSGs in other
non-neighbor RAN access regions (either in the local aggregation domain, or in RAN access regions in
remote aggregation domains across the core) are not learned. Conversely, in a network with only high
route scale-capable CSGs, importing and exporting AGGR-wide RTs achieves larger X2 communication
domains; this increases the number of VPN routes learned, but greatly reduces the operational
complexity associated with managing route filtering across the different access domains.
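For example, a low route scale-capable CSG in a RAN-3 access domain adjacent to RAN-2 and RAN-4 might carry an X2 RT policy sketched as follows (IOS-XR-style syntax; the VRF name and RT values are hypothetical examples):

```
! CSG in RAN-3: export the local RAN RT, import only the
! neighboring RAN RTs to keep the VRF route scale low
vrf LTE
 address-family ipv4 unicast
  import route-target
   64512:102
   64512:103
   64512:104
  !
  export route-target
   64512:103
```

A high route scale-capable CSG would instead import and export the single AGGR-wide RT in place of the per-region RAN RTs.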
Figure 3-3 and Figure 3-4 describe further the route filtering logic implemented in inter-AS and
multi-area transport architectures and for access networks designs based on Labeled BGP or IGP/LDP
redistribution.
[Figure 3-3: Inter-access X2 route filtering with Labeled BGP extended to the access. CSGs advertise their loopbacks in iBGP labeled-unicast with BGP communities identifying the Global RAN, aggregation, and RAN access region (for example, 10:10, 10:100, and 10:102 or 10:104); CN-ASBRs and AGN-ASBRs act as inline RRs, a vAGN-RR serves Metro-1, and the MSE BGP community is 1001:1001. Access-2 and Access-3 represent RAN access networks with low route scale-capable CSGs. S1 traffic flows to the core, while inter-access X2 traffic flows between neighboring access VRFs.]
[Figure 3-4: Inter-access X2 route filtering with IGP/LDP redistribution in the access. The PANs redistribute the RAN IGP into iBGP, marking BGP communities (for example, 10:10, 10:100, 10:103 for RAN IGP-3 and 10:10, 10:100, 10:104 for RAN IGP-4), and redistribute selected BGP communities back into the RAN IGP. Each access VRF exports its RAN RT, the AGGR-1 RT, and the Global RT, and imports the RTs of its neighboring RAN regions and the MSE RT; for example, Access-3 exports RAN-3 RT, AGGR-1 RT, and Global RT, and imports RAN-2 RT, RAN-3 RT, RAN-4 RT, and the MSE RT.]
As shown in Figure 3-3 and Figure 3-4, connectivity can easily be accomplished using the BGP
community-based coloring of prefixes used in the Unified MPLS Transport.
As described in the Cisco EPN 4.0 Transport Infrastructure Design and Implementation Guide, the
CSG loopbacks are colored in BGP labeled-unicast with a common BGP community that represents
the Global RAN community, plus up to two additional BGP communities, one unique to the RAN access
region and the other to the aggregation region. This tagging can be done when the CSGs advertise their
loopbacks in iBGP labeled-unicast, if labeled BGP is extended to the access, or at the PANs when
redistributing from the RAN IGP into iBGP, if IGP/LDP is used in the RAN access with the
redistribution approach.
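As a sketch of the redistribution option, the PAN could mark the CSG loopbacks with the Global RAN, aggregation-wide, and RAN-region communities while redistributing the RAN IGP into labeled iBGP; IOS-XR-style syntax, with the policy and instance names as hypothetical examples and the community values taken from the figures:

```
community-set RAN-COLORING
  10:10,
  10:100,
  10:102
end-set
!
route-policy MARK-CSG-LOOPBACKS
  ! Tag redistributed CSG loopbacks with the Global RAN,
  ! aggregation-wide, and RAN-region communities
  set community RAN-COLORING
  pass
end-policy
!
router bgp 64512
 address-family ipv4 unicast
  redistribute isis RAN-IGP route-policy MARK-CSG-LOOPBACKS
  allocate-label all
```

The same communities then drive the selective egress filtering or IGP redistribution described below.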
For access domains made of low route scale-capable CSG nodes, such as the Access-2 and Access-3
RAN networks shown in Figure 3-3 and Figure 3-4, the adjacent RAN access domain CSG
loopbacks can be identified at the PAN based on the unique RAN access region BGP community and
be selectively propagated into the access based on egress filtering if labeled BGP is extended to the
access or be selectively redistributed into the RAN IGP if IGP/LDP is used in the RAN access using
the redistribution approach. Please note that X2 interfaces are based on eNodeB proximity and
therefore a given RAN access domain only requires connectivity to the ones immediately adjacent.
This filtering approach allows for hierarchical-labeled BGP LSPs to be set up across neighboring
access regions while preserving the low route scale in the access. At the service level, any CSG in
a RAN access domain that needs to establish inter-access X2 connectivity will import its
neighboring CSG access region RT in addition to its own RT in the LTE MPLS VPN.
Alternatively, for access domains made of high route scale-capable CSG nodes, such as the Access-4
RAN network shown in Figure 3-3 and Figure 3-4, loopbacks of CSGs in the same aggregation
domain can be identified at the PAN based on the aggregation network-wide BGP community and
be selectively propagated into the access based on egress filtering if labeled BGP is extended to the
access. Alternatively, it can be selectively redistributed into the RAN IGP if IGP/LDP is used in the
RAN access using the redistribution approach. This filtering approach simplifies the route
filtering logic at the cost of a larger routing table on the CSG nodes. At the service level, any
CSG in a RAN access domain that needs to establish inter-access X2 connectivity with other CSGs
in the same aggregation domain will import the aggregation network-wide RT in the LTE MPLS
VPN.
The CN-ABR inline-RR applies selective NHS function using route policy in the egress direction
towards its local PAN neighbor group in order to provide shortest-path connectivity for the X2 interface
between CSGs across neighboring RAN access regions. The routing policy language (RPL) logic
involves changing the next-hop towards the PANs for only those prefixes that do not match the local
RAN access or aggregation regions based on a simple regular expression matching BGP communities.
This allows for the CN-ABR to change the BGP next-hop and insert itself in the data path for all prefixes
that originate in the core corresponding to the S1 interface, while keeping the next-hop set by the PANs
unchanged for all prefixes from local RAN regions. With this, the inter-access X2 traffic flows across
adjacent access regions along the shortest path interconnecting the two PANs without having to loop
through the inline-RR CN-ABR node.
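A minimal sketch of such a selective NHS policy on the CN-ABR inline-RR, in IOS-XR RPL (the policy name and regular expression are assumptions; the community values are illustrative):

```
! Prefixes carrying local RAN/aggregation communities keep the
! next-hop set by the PANs; everything else (e.g., core prefixes
! for S1) gets next-hop-self toward the local PAN neighbor group
community-set LOCAL-REGIONS
  ios-regex '^10:10[024]$'
end-set
!
route-policy SELECTIVE-NHS
  if community matches-any LOCAL-REGIONS then
    pass
  else
    set next-hop self
  endif
end-policy
!
router bgp 64512
 neighbor-group PAN
  address-family ipv4 labeled-unicast
   route-policy SELECTIVE-NHS out
```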
Finally, the rapid adoption of LTE and the massive increase in subscriber growth are leading to an
exponential increase in the number of cell sites deployed in the network. This is straining the pool of
IP addresses that must be assigned to the eNodeBs at the cell sites. For mobile SPs that
are running out of public IPv4 addresses, or that cannot obtain additional public IPv4 addresses
from the registries for eNodeB assignment, the Cisco EPN System enables carrying IPv6 traffic over an
IPv4 Unified MPLS Transport infrastructure using 6VPE, as defined in RFC 4659. The eNodeBs and
EPC gateways can be IPv6 only or dual stack-enabled to support IPv6 for S1 and X2 interfaces while
using IPv4 for network management functions, if desired. The dual stack-enabled eNodeBs and EPC
gateways connect to CSGs and MTGs configured with a dual stack VRF carrying VPNv4 and VPNv6
routes for the LTE MPLS VPN service. The IPv6 reachability between the eNodeBs in the cell site and
the EPC gateways in the MPC is exchanged between the CSGs and MTGs acting as MPLS VPN PEs
using the BGP address family [address family identifier (AFI)=2, subsequent address family identifier
(SAFI)=128].
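A 6VPE-enabled PE pairing, sketched in IOS-XR-style syntax (the peer address, AS number, VRF name, and RT values are hypothetical examples), simply adds the VPNv6 address family alongside VPNv4:

```
router bgp 64512
 address-family vpnv4 unicast
 !
 address-family vpnv6 unicast
 !
 neighbor 10.255.0.1
  remote-as 64512
  update-source Loopback0
  address-family vpnv4 unicast
  !
  address-family vpnv6 unicast
!
! Dual-stack VRF on the CSG/MTG for the LTE service
vrf LTE
 address-family ipv4 unicast
  import route-target
   64512:500
  export route-target
   64512:1000
 !
 address-family ipv6 unicast
  import route-target
   64512:500
  export route-target
   64512:1000
```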
This design can be scaled down to accommodate UMTS IP IuB and Wi-Fi CAPWAP transport by maintaining
only the hub-and-spoke VPN filtering configuration used for the LTE service, hence using only the
relevant MSE and Global RAN RT imports and exports across CSGs and MTGs.
[Figure 3-5: eMBMS reference architecture: the UE, eNB, MBMS-GW, and BM-SC are interconnected by the M1 (user plane) and M3 (control plane) interfaces toward the eNB, the Sm interface between the MME and MBMS-GW, and the SGmb/SGi-mb interfaces toward the BM-SC.]
The following interfaces, which are within the scope of the Cisco EPN System design, are involved in
eMBMS service delivery:
M3 interface: A unicast interface between the MME and the MCE (assumed to be integrated into the
eNB for the purposes of the Cisco EPN System), which primarily carries MBMS session management signaling.
M1 interface: A downstream user-plane interface between the MBMS Gateway (MBMS-GW) and
the eNB, which delivers content to the user endpoint. IP Multicast is used to transport the M1
interface traffic.
In the context of the Cisco EPN System design, transport of the eMBMS interfaces is conducted based
on the interface type. This is illustrated in Figure 3-6:
[Figure 3-6: Transport of the eMBMS interfaces: the M3 interface is carried in the unicast L3 MPLS VPN, while the M1 interface uses mLDP transport in the core and aggregation and IGMP/MLDv2 or PIM in the Ethernet access.]
The M3 interface is transported within the same L3 MPLS VPN as other unicast traffic, namely the
S1 and X2 interfaces. Since both the S1 and M3 interfaces are between the eNB and the MME, it
makes logical sense to carry both in the same VPN.
The M1 interface transport is handled directly via IP over mLDP transport in the core and aggregation,
and via IP Multicast with PIM SSM or IGMP/MLDv2 in the access. This transport is reused for wireline
IPTV as well, hence the need for a common, non-virtualized multicast infrastructure.
Additionally, the Cisco EPN System enables the delivery of mobile multicast traffic over IPv4 and IPv6
address families. Both address families are carried over the same IPv4-enabled LSM transport
infrastructure as defined in RFC 6514. The eNodeBs, MBMS and/or packet gateways can be IPv6-only
or dual stack-enabled to support IPv4 and IPv6 multicast forwarding at the user plane (M1), or a
combination of IPv6-only multicast at the user plane (M1) and IPv4 at the control plane (M3) interfaces.
The multicast mechanism utilized for transporting the M1 interface traffic depends upon the location in
the network:
From the MTG attached to the MBMS-GW, through the core and aggregation domains to the AGN
node, LSM is utilized to transport the M1 interface traffic, using a combination of BGP signaling
and mLDP transport. This provides efficient and resilient transport of the multicast traffic within
these regions.
From the AGN to the CSG, in the MPLS access networks, native IP Multicast is utilized to transport
the M1 interface traffic. In the Ethernet-bridged access, the transport is over a dedicated multicast
VLAN (bridge domain) shared by all eNodeBs, optionally with IGMP snooping and MLDv2 snooping
enabled, with all eNBs joining the M1 interface. This provides efficient and resilient transport
of the multicast traffic while utilizing the lowest amount of resources on these smaller nodes.
In the case of MPLS (or Layer 3) access, multicast forwarding in the access domain is based on
PIM SSM v4/v6. IPv4/v6 addresses of multicast sources are re-distributed in the ISIS IGP
process at level 2 on the AGN nodes, and are leaked in ISIS at level 1 on the PAN nodes for
distribution to the CSGs.
In the case of G.8032-enabled Ethernet access, multicast forwarding is based on IGMP or
MLDv2 for the IPv4 and IPv6 address families, respectively.
From the CSG to the eNodeB, two models are available depending on the capabilities of the CSG
and the type of access network:
For MPLS access and CSG nodes capable of leaking IPv4 multicast routes from the global
routing table into the mobile L3 MPLS VPN, a single VLAN is used to deliver all mobile interfaces
(S1, X2, M1, and M3).
For Ethernet-bridged access and all other MPLS access scenarios, two VLANs are utilized to
deliver the various interfaces to the eNB. One VLAN handles unicast interface (S1, X2, M3)
delivery, while the other handles M1 multicast traffic delivery; in the case of Ethernet access,
it is an extension toward the eNodeB of the multicast VLAN used between the AGN and the
CSG.
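The redistribution of multicast source addresses described above for the MPLS access case might be sketched as follows on a PAN, using IOS-XR-style ISIS route leaking (the prefix range, instance, and policy names are hypothetical examples):

```
prefix-set MCAST-SOURCES
  10.255.100.0/24 le 32
end-set
!
route-policy LEAK-MCAST-SOURCES
  if destination in MCAST-SOURCES then
    pass
  else
    drop
  endif
end-policy
!
router isis ACCESS
 address-family ipv4 unicast
  ! Leak multicast source routes from level 2 into level 1
  ! so the CSGs can build PIM SSM trees toward the sources
  propagate level 2 into level 1 route-policy LEAK-MCAST-SOURCES
```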
[Figure 3-7: L2 MPLS VPN service model for 2G and 3G: ATM or TDM traffic from TDM BTSs and ATM Node Bs is carried from the CSG (IP/MPLS access) or the PAN (TDM/Ethernet access) over AToM pseudowires to the MTG, which hands off the emulated TDM and ATM circuits to the BSC and RNC in the mobile packet core.]
Typical GSM (2G) deployments consist of cell sites that do not require a full E1/T1; in
such cell sites, a fractional E1/T1 is used. The operator can deploy these cell sites in a daisy-chain fashion
(for example, down a highway) or aggregate them at the BSC location. To reduce the CAPEX investment
in channelized STM-1/OC-3 ports on the BSC, the operator utilizes a digital
cross-connect to merge multiple fractional E1/T1 links into a full E1/T1. This reduces the number of T1/E1s
needed on the BSC, which results in fewer channelized STM-1/OC-3 ports being needed. Deploying
CESoPSN PWs from the CSG to the RAN distribution node supports these fractional T1/E1s and the
aggregation of them at the BSC site. In this type of deployment, the default behavior of CESoPSN for
alarm synchronization must be changed. Typically, if a T1/E1 on the access nodes goes down, the PW forwards an
alarm indication signal (AIS) through the PW to the distribution node, which then propagates the AIS
to the BSC by taking the T1/E1 down. In this multiplexed scenario, time slot (TS)
alarming must be enabled on the CESoPSN PW to propagate the AIS only on the affected time slots,
thus not affecting the other time slots (for example, other cell sites) on the same T1/E1.
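A fractional E1 carried over a CESoPSN PW from the CSG could be sketched along these lines (IOS-style syntax as used on access platforms; the controller, time slots, peer address, and VC ID are hypothetical examples):

```
controller E1 0/0
 ! Only time slots 1-8 of this E1 belong to the cell site
 cem-group 0 timeslots 1-8
!
interface CEM0/0
 cem 0
  ! CESoPSN pseudowire to the RAN distribution node
  xconnect 10.255.0.1 100 encapsulation mpls
```

At the distribution node, several such fractional PWs can then be groomed into a full E1/T1 toward the BSC.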
The same BGP-based control plane and label distribution implemented for the L3VPN services is also
used for circuit emulation services. The CSGs utilize MPLS/IP routing in this system release when
deployed in a physical ring topology. TDM and ATM PWE3 can be overlaid in either deployment model.
The CSGs, PANs, AGNs, and MTGs enforce the contracted ATM CoS SLA and mark the ATM and TDM
PWE3 traffic with the corresponding per-hop behavior (PHB) inside the access, aggregation, and core
DiffServ domains. The MTG enables multi-router automatic protection switching (MR-APS) or
single-router automatic protection switching (SR-APS) redundancy for the BSC or RNC interface, as
well as pseudowire redundancy and two-way pseudowire redundancy for transport protection.
[Figure 3-8: Mobile transport capacity monitoring: a NetFlow exporter on the MTG, on the S1-U interfaces facing the SGW, delivers flow records for eNodeB traffic to a NetFlow collection and analysis function.]
By enabling NetFlow exporter capabilities on the MTG L3VPN interfaces facing the mobile packet core
gateways, and delivering the cumulative flow records to a collector function implemented on Cisco
Prime Performance Manager, it is possible to visualize the amount of traffic sent and received by a
given cell site at configurable intervals and over customizable periods of time.
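On the MTG, the exporter function might be sketched with IOS-XR-style NetFlow configuration as follows (the collector address, port, interface, and sampling rate are hypothetical examples):

```
flow exporter-map MTG-EXPORTER
 version v9
 !
 transport udp 2055
 destination 192.0.2.50
!
flow monitor-map MTG-MONITOR
 record ipv4
 exporter MTG-EXPORTER
 cache timeout active 60
!
sampler-map MTG-SAMPLER
 random 1 out-of 1000
!
! L3VPN subinterface facing the mobile packet core gateways
interface TenGigE0/0/0/1.100
 flow ipv4 monitor MTG-MONITOR sampler MTG-SAMPLER ingress
 flow ipv4 monitor MTG-MONITOR sampler MTG-SAMPLER egress
```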
Synchronization Distribution
Every mobile technology deployment has synchronization requirements in order to enable aspects such
as radio framing accuracy, user endpoint handover between cell towers, and interference control on cell
boundaries. Some technologies only require frequency synchronization across the transport network,
while others require phase and time-of-day (ToD) synchronization as well. The Cisco EPN System
delivers a comprehensive model for providing network-wide synchronization of all three aspects with an
accuracy that exceeds the threshold requirements of any mobile technology deployed across the system.
The primary target for the current system release is to provide frequency synchronization by using the
Ethernet physical layer (SyncE), and phase and time (ToD, phase) synchronization by using IEEE
1588-2008 PTPv2, hereafter simply referred to as PTPv2. SyncE operates on a link-by-link basis and
provides a high-quality frequency reference similar to that provided by SONET and SDH networks.
SyncE is complemented by the Ethernet Synchronization Message Channel (ESMC), which allows a
quality level (QL) value to be transmitted over SyncE-enabled links, much like the synchronization status
message (SSM) in SONET and SDH. This allows a SyncE node to select a timing signal from the best
available source and helps detect timing loops, which is essential for the deployment of SyncE in ring topologies.
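Enabling SyncE with ESMC on a node might be sketched as follows in IOS-XR-style syntax (the interface and priority values are hypothetical examples):

```
frequency synchronization
 quality itu-t option 1
!
interface GigabitEthernet0/0/0/0
 frequency synchronization
  ! Treat this link as a candidate timing input; ESMC QL
  ! values drive the selection of the best available source
  selection input
  priority 10
  wait-to-restore 0
```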
Because not all links on the network may be SyncE-capable or support synchronization distribution at
the physical layer, PTPv2 may also be used for frequency distribution. IEEE 1588 packet-based
synchronization distribution is overlaid across the entire system infrastructure; third-party master and
third-party IP-NodeB client equipment are considered outside the scope of the system. The mechanism
is standards-based and can provide frequency and/or phase distribution, relying on unicast or multicast
packet-based transport. As with any packet-based mechanism, PTP traffic is subject to loss, delay, and
delay variation. However, the packet delay variation (PDV) is the main factor to control. To minimize
the effects of these factors and meet the requirements for synchronization delivery utilizing PTP, EF
PHB treatment across the network is required.
The Cisco EPN System also supports a combination of SyncE and PTP in a hybrid synchronization
architecture, aiming to improve the stability and accuracy of the phase and frequency synchronization
delivered to the client for deployments such as Time Division Duplex (TDD)-LTE eNodeBs, LTE
Advanced and eMBMS. In such an architecture, the packet network infrastructure is frequency
synchronized by SyncE. The phase signal is delivered by PTPv2. The CSG, acting as a PTPv2 ordinary
clock or as a Boundary Clock (BC), may combine the two synchronization methods, using the SyncE
input as the frequency reference clock for the PTPv2 engine. The combined recovered frequency and
phase can be delivered to clients via 1 pulse-per-second (1PPS), 10 MHz, and Building Integrated Timing
Supply (BITS) timing interfaces, as well as SyncE and PTPv2. For access networks that do not support
SyncE, the hybrid 1588 BC function may be moved to the PANs.
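On a CSG, the hybrid model might be sketched in IOS-style syntax as follows, with SyncE supplying the frequency reference and PTPv2 delivering phase (the domain, clock-port name, interface, and master address are hypothetical examples):

```
! SyncE provides the frequency reference
network-clock synchronization automatic
network-clock synchronization ssm option 1
network-clock input-source 1 interface GigabitEthernet0/0/0
!
! PTPv2 ordinary clock in hybrid mode: frequency from SyncE,
! phase/ToD recovered from the 1588 grandmaster
ptp clock ordinary domain 0 hybrid
 clock-port SLAVE slave
  transport ipv4 unicast interface Loopback0 negotiation
  clock source 10.255.1.1
```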
Figure 4-1 illustrates how synchronization distribution is achieved for mobile transport services over
both fiber and microwave access networks in the Cisco EPN System architecture.
[Figure 4-1: Synchronization distribution in the Cisco EPN System: a Global Navigation Satellite System receiver (e.g., GPS, GLONASS, GALILEO) acts as the Primary Reference Time Clock (PRTC), feeding a 1588 PMC/PRTC (TP-5000) in the core. SyncE with ESMC runs hop-by-hop across the IP/MPLS transport (ASR-901/ASR-920 CSG, ASR-903 PAN, ASR-9000 AGN, CRS-3 core); the CSG acts as a 1588 hybrid BC and provides external synchronization interfaces (ToD and phase) toward the cell site, optionally through a NID (ME-1200 ZTD). Access segments without physical-layer synchronization rely on PTPv2.]
The frequency source for the mobile backhaul network is the Primary Reference Clock (PRC), which
can be based on a free-running atomic clock (typically Cesium), a global navigation satellite system
(GNSS) receiver that derives frequency from signals received from one or more satellite systems, or a
combination of both.
The time (phase and Time of Day, ToD) source for the mobile backhaul network is the Primary Reference
Time Clock (PRTC), which is usually based on a GNSS receiver that derives time synchronization from
one or more satellite systems with traceability to Coordinated Universal Time (UTC). A PRC
provides a frequency signal of G.811/Stratum-1 quality (traceable to UTC frequency if derived
from GNSS) to the AGNs via G.703-compliant dedicated external interfaces (also known as BITS input) or a
10MHz interface. A PRTC provides time via a 1PPS signal for phase and a serial ToD interface. The DOCSIS
Timing Interface (DTI) is an alternative to the frequency, 1PPS, and ToD interfaces. A PRTC can also provide
frequency as a PRC. If required by the architecture, the IEEE 1588 Primary Master Clock (PMC) will
also derive synchronization from the PRC or PRTC. From this point, three models of synchronization
distribution are supported:
For mobile services that only require frequency synchronization, where all network nodes support
SyncE, frequency is carried to the NodeB via SyncE. The ESMC provides source traceability
between nodes through the Quality Level (QL) value, which helps select the best signal and
prevent timing loops in SyncE topologies.
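As an illustration of this model (not taken from the validated testbed configurations; the interface name and input-source priority are hypothetical), a minimal SyncE with ESMC setup on an IOS-XE based access node could look like:

!***Enable SyncE clock selection with QL-enabled operation (SSM option 1)***
network-clock synchronization automatic
network-clock synchronization ssm option 1
!***Nominate the upstream ring interface as a frequency input source***
network-clock input-source 1 interface GigabitEthernet0/0
!
interface GigabitEthernet0/0
 !***Enable synchronous Ethernet; ESMC QL exchange runs on this interface***
 synchronous mode

The QL value carried in ESMC allows each node to select the best available reference and to squelch a source back toward its origin, which is what prevents timing loops in ring topologies.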
For mobile services that require synchronization over an infrastructure that does not support
SyncE, PTPv2 is utilized for frequency synchronization distribution. The PMC generates
PTPv2 streams for each PTP slave, which are routed globally by the regional MTG to the CSG, which
then provides synchronization to the eNodeB. The PMC can be a network node that receives the frequency
source signal via the physical layer (e.g., SyncE). Proper network engineering must prevent excessive
PDV so that the timing network can provide a packet-based quality signal to the slaves.
For mobile services that require time synchronization, PTPv2 can be used in conjunction with SyncE
to provide a hybrid synchronization solution, where SyncE provides accurate and stable frequency
distribution and PTPv2 delivers phase and ToD synchronization. In this Cisco EPN
System release, the PTPv2 streams are routed globally from the regional MTG to the CSG, which,
combining them with the SyncE frequency, then provides synchronization to the eNodeBs.
In general, a packet-based timing mechanism such as PTPv2 has strict packet delay variation
requirements, which restrict the number and type of hops over which the timing recovered from the
source is still valid. With the globally routed model, strict priority queuing of the PTPv2 streams is
necessary. With a good implementation of PTPv2 BC on intermediate transit nodes, it is possible to
provide better guarantees over more hops from the PMC to the NodeB.
Scalability and reliability of PTPv2 in the Cisco EPN System is enhanced by enabling BC in some or all
of the following: the core-facing AGN, the PAN, and the CSG. Implementing BC functionality in these
nodes serves two purposes:
Increases scaling of PTPv2 phase/frequency distribution, by replicating a single stream from the
PMC to multiple destinations, thus reducing the number of PTP streams needed from the PMC.
Improves the phase stability of PTPv2, by stabilizing the frequency of the PTP servo with SyncE or
another physical frequency source as described in the hybrid synchronization architecture.
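As a sketch of the hybrid model on an IOS-XE based node acting as a boundary clock (the master clock address and PTP domain are hypothetical, not taken from the validated testbed):

!***PTPv2 boundary clock; hybrid mode uses SyncE as the frequency reference for the PTP servo***
ptp clock boundary domain 0 hybrid
 clock-port SLAVE slave
  transport ipv4 unicast interface Loopback0 negotiation
  !***Upstream master, e.g., the PMC or an upstream BC***
  clock source 100.111.15.50
 clock-port MASTER master
  transport ipv4 unicast interface Loopback0 negotiation

With the hybrid keyword, the node recovers frequency from SyncE and uses PTPv2 only for phase/ToD alignment, which improves phase stability compared to recovering both from the packet stream.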
High Availability
As highlighted in the "Redundancy and High Availability" section of the EPN 4.0 Transport
Infrastructure Design and Implementation Guide, the Cisco EPN System architecture implements high
availability at the transport network level and the service level. By utilizing these various technologies
throughout the network, the Cisco EPN design is capable of meeting the stringent Next-Generation
Mobile Network (NGMN) requirements of 200 ms recovery times for LTE real time services.
Implementation of high availability technologies at the transport layer that are common to all services is
covered in the EPN 4.0 Transport Infrastructure Design and Implementation Guide. Implementation of
the high availability technologies at the service level is covered in this chapter. Synchronization
resiliency implementation is covered in Synchronization Distribution Implementation, page 6-1.
For MPLS VPN services, BGP Edge protection and BGP FRR Edge protection mechanisms are
supported, and Virtual Router Redundancy Protocol (VRRP) is enabled on the MTGs for redundant
connectivity to the Mobile Packet Core (MPC).
For ATM and TDM pseudowire-based services, pseudowire redundancy is supported, and MR-APS
is enabled for redundant connectivity to the base station controller (BSC) or radio network controller
(RNC).
For Ethernet access based on G.8032 rings, the ring terminates on two different SE nodes and it is
closed via a routed pseudowire. VRRP is then enabled between the SEs for dual homing redundancy
in the access domain.
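For illustration, the VRRP configuration on the MTGs toward the MPC could be sketched as follows in IOS-XR (the subinterface, group number, and addresses are hypothetical):

router vrrp
 interface GigabitEthernet0/1/0/0.100
  address-family ipv4
   vrrp 1
    !***Higher priority on the primary MTG***
    priority 110
    !***Virtual gateway address used by the MPC elements***
    address 10.1.1.1

The redundant MTG would carry the same virtual address with a lower priority, so the MPC-facing gateway fails over without reconfiguration of the MPC elements.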
Quality of Service
The Cisco EPN System uses a Differentiated Services (DiffServ) QoS model across all network layers
of the transport network in order to guarantee proper treatment of all services being transported. This
QoS model guarantees the service-level agreement (SLA) requirements of all residential, business, and
mobile backhaul services across the transport network. QoS policy enforcement is accomplished with
flat QoS policies with DiffServ queuing on all Network-to-Network Interfaces (NNIs), and with H-QoS
policies with parent shaping and child queuing on the UNIs and Service Edge node interfaces.
Specific to mobile services, the QoS design aims to satisfy the SLA requirements of TDM circuits, ATM
classes of service (CoS), and various LTE QoS class identifier (QCI) values that correspond to different
traffic types (voice, video, etc.) with varying resource types (constant bit rate [CBR], variable bit rate
real time [VBR-rt], variable bit rate nonreal time [VBR-nrt], unspecified bit rate [UBR] for ATM; and
guaranteed bit rate [GBR] or non-GBR for LTE).
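As a simplified illustration of the flat NNI model (the class name, match criteria, and rates below are hypothetical; the validated policies are defined in the EPN 4.0 Transport Infrastructure Design and Implementation Guide), an egress NNI policy such as the PMAP-NNI-E referenced later in this chapter might be structured like:

class-map match-any CMAP-EF
 !***EF PHB: TDM/CEM, ATM CBR, GBR voice, and PTP timing traffic***
 match mpls experimental topmost 5
 match dscp ef
!
policy-map PMAP-NNI-E
 class CMAP-EF
  !***Strict priority with an implicit policer at 30% of link rate***
  priority percent 30
 class class-default
  bandwidth remaining percent 100

The strict-priority EF class is what satisfies the PDV requirement for PTPv2 noted in the synchronization section.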
(Figure: IP SLA probes between the CSG and MTG VRFs provide Service OAM performance monitoring for LTE and
3G IP UMTS transport; MPLS LSP OAM over the end-to-end LSP built with Unified MPLS provides Transport OAM
between the Node B, CSG, MTG, and RNC/BSC/SAE GW.)
The Cisco EPN 4.0 System divides implementation of mobile services backhaul into two major areas:
This includes deployments with MPLS access, either via labeled Border Gateway Protocol (BGP)
or Interior Gateway Protocol (IGP)/Label Distribution Protocol (LDP) control planes, where all
services originate on the CSG node. This is described in Mobile Services Implementation with
MPLS Access, page 5-1.
Deployments with non-MPLS access, such as TDM over microwave, native Ethernet (Hub & Spoke
and G8032 Ring), or native IP. All services originate on the PAN in this scenario. This is described
in Mobile Services Implementation with Non-MPLS Access, page 5-33.
See also Mobile Transport Capacity Monitoring, page 5-55.
The CSG MPLS VPN configuration is divided into two sections, depending on the type of
device used as the CSG, since their scale capabilities differ. The implementation shown covers ASR 901 and
ASR 920 routers.
Route map to export common route target (RT) 10:10 in addition to Local RAN RT 10:101
route-map ADDITIVE permit 10
!***Common RAN RT. Exported by every CSG in the entire network***
set extcommunity rt 10:10 additive
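The route map is attached to the CSG VRF as an export map. A minimal IOS-XE sketch is shown below; the RD and local RAN RT values (10:101) are illustrative and would differ per access ring:

vrf definition LTE102
 rd 10:101
 !
 address-family ipv4
  !***Local RAN RT for this access ring***
  route-target export 10:101
  route-target import 10:101
  !***MSE RT for reachability to the MPC***
  route-target import 1001:1001
  !***Route map adds the common RAN RT 10:10 on export***
  export map ADDITIVE
 exit-address-family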
This section describes the L3VPN configuration aspects for CSGs that have relatively greater scale
capability in terms of routes in the VRF, such as the ASR 920. Here, the CSGs use an aggregation-wide RT
to import/export routes in the VRF, instead of having unique RTs for different access rings across the
aggregation.
Note Both models described above can co-exist in the same domain by manipulating the RTs for
import/export. For example, the ASR 920 CSG can additionally export RAN RT-2 for import on the ASR 901
CSG, and the ASR 901 CSG can additionally export the aggregation-wide RT for import on the ASR 920 CSG,
allowing both rings to establish X2 connectivity. Please refer to Figure 5-1.
This section describes the configuration required to bridge traffic from the eNodeB to the CSG via the NID
device. To bridge traffic, one EVC with VLAN 100 on the uplink port and one EVC Control Entry (ECE) are
created on the NID. The ECE is mapped to the UNI port for all traffic by using the NID controller. An
ME 3600 is used as the NID controller.
Note The following NID-related configurations are entered from the ME3600 controller.
ProvisionEVC
addECE ece_configuration ece_id 100
addECE ece_configuration control ingress_match uni_ports GigabitEthernet_3_UNI enable
addECE ece_configuration control ingress_match outer_tag_match match_type any
addECE ece_configuration control egress_outer_tag mode enabled
addECE ece_configuration control egress_outer_tag pcp_mode fixed
addECE ece_configuration control egress_outer_tag pcp_value 0
addECE ece_configuration control actions class specific 0
addECE ece_configuration control actions tag_pop_count 0
addECE ece_configuration control actions evc_id specific 100
addece commit
exit
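On the CSG side, a matching Ethernet Flow Point terminating VLAN 100 from the NID could look like the following sketch (the physical interface, bridge-domain number, and SVI address are hypothetical):

interface GigabitEthernet0/5
 !***EVC matching the VLAN 100 EVC provisioned on the NID uplink***
 service instance 100 ethernet
  encapsulation dot1q 100
  rewrite ingress tag pop 1 symmetric
  bridge-domain 100
!
interface Vlan100
 !***L3 termination in the mobile transport VRF***
 vrf forwarding LTE102
 ip address 100.64.0.1 255.255.255.0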
Note For multicast services, a separate VLAN is required on CSG and NID. The creation of Multicast VLAN
on NID is the same as for Unicast service VLAN.
This section describes the configuration required to bridge traffic from the eNodeB to the CSG, performed
via Cisco Prime Provisioning.
Step 1 Configure the EVC with VLAN 100 on the uplink port, as shown in Figure 5-2.
Note For multicast services, a separate VLAN is required on the CSG and NID. The creation of the Multicast
VLAN on the NID is the same as for the Unicast service VLAN.
This is a one-time MPLS VPN configuration done on the MTGs. No modifications are made when
additional CSGs in any RAN access or other MTGs are added to the network.
VRF Definition
vrf LTE102
address-family ipv4 unicast
!***Common CSG RT imported by MTG***
!***MSE RT imported for reachability to other MPC areas***
import route-target
10:10
1001:1001
!
!***Export MSE RT.***
!***Imported by every CSG in entire network.***
export route-target
1001:1001
!
!
address-family ipv6 unicast
import route-target
10:10
1001:1001
!
export route-target
1001:1001
!
!
!
!
vrf LTE102
rd 1001:1001
address-family ipv4 unicast
redistribute connected
!
address-family ipv6 unicast
redistribute connected
!
!
Note Each MTG has a unique RD for the MPLS VPN VRF to properly enable BGP FRR Edge functionality.
This section describes the BGP control plane aspects for the VPNv4 and VPNv6 LTE backhaul service
deployed in an Inter-Autonomous System (AS) design. These configurations are designed to build upon
the transport layer BGP configurations described in the EPN 4.0 Transport Infrastructure Design and
Implementation Guide.
Figure 5-4 BGP Control Plane for MPLS VPN Service (Inter-AS Design)
(The figure shows mobile access networks running IS-IS L1 attached to aggregation networks running IS-IS L2
in AS-B and AS-C, with the core network, also IS-IS L2, in AS-A. CSG VRFs peer iBGP VPNv4/v6 with inline-RR
PANs; the vAGN-RRs peer multi-hop eBGP VPNv4/v6 with the vCN-RR; and the MTGs, with VRFs toward the MME and
SGW/PGW, peer iBGP VPNv4/v6 in the core.)
!
!***RT Constrained Route Distribution***
address-family rtfilter unicast
neighbor pan send-community extended
neighbor 100.111.14.1 activate
neighbor 100.111.14.2 activate
exit-address-family
!
Note Please refer to Route Scale Control for LTE L3 MPLS VPN Service Model, page 3-3 for a detailed
explanation of how RT-constrained route distribution is used in order to constrain VPNv4 routes from remote
RAN access regions.
!***vCN-RR***
neighbor 100.111.15.50 peer-group inter-as
!***PANs***
neighbor 100.111.14.1 peer-group intra-as
neighbor 100.111.14.2 peer-group intra-as
!
address-family vpnv4
bgp nexthop trigger delay 3
neighbor intra-as send-community both
neighbor intra-as route-reflector-client
neighbor inter-as send-community both
!***Next-Hop Unchanged towards CN-RR***
neighbor inter-as next-hop-unchanged
neighbor 100.111.15.50 activate
neighbor 100.111.14.1 activate
neighbor 100.111.14.2 activate
exit-address-family
!
address-family vpnv6
bgp nexthop trigger delay 3
neighbor intra-as send-community both
neighbor intra-as route-reflector-client
neighbor inter-as send-community both
!***Next-Hop Unchanged towards CN-RR***
neighbor inter-as next-hop-unchanged
neighbor 100.111.15.50 activate
neighbor 100.111.14.1 activate
neighbor 100.111.14.2 activate
exit-address-family
Note Please refer to the "BGP Transport Control Plane" sections in the EPN 4.0 Transport Infrastructure
Design and Implementation Guide for a detailed explanation of how egress filtering is done at the
vCN-RR in order to constrain VPN routes from remote RAN access regions.
!***CN-RR***
neighbor 100.111.15.50
use neighbor-group cn-rr
!
MPLS VPN Control Plane for Single-AS Design with MPLS Access
This section describes the BGP control plane aspects for the VPNv4 and VPNv6 LTE backhaul service
deployed in a single-AS design. These configurations are designed to build upon the transport layer BGP
configurations described in the EPN 4.0 Transport Infrastructure Design and Implementation Guide. See
Figure 5-5.
Figure 5-5 BGP Control Plane for MPLS VPN Service (Single-AS Design)
exit-address-family
!
exit-address-family
!
address-family vpnv4
bgp nexthop trigger delay 3
neighbor csg send-community extended
!***CSGs are RR clients***
neighbor csg route-reflector-client
!***CN-ABR is next level RR***
neighbor abr send-community both
neighbor 100.111.11.1 activate
neighbor 100.111.11.2 activate
neighbor 100.111.13.22 activate
neighbor 100.111.13.23 activate
neighbor 100.111.13.24 activate
exit-address-family
!
address-family vpnv6
bgp nexthop trigger delay 3
neighbor csg send-community extended
!***CSGs are RR clients***
neighbor csg route-reflector-client
!***CN-ABR is next level RR***
neighbor abr send-community both
neighbor 100.111.11.1 activate
neighbor 100.111.11.2 activate
neighbor 100.111.13.22 activate
neighbor 100.111.13.23 activate
neighbor 100.111.13.24 activate
exit-address-family
!
!***RT Constrained Route Distribution towards CSGs***
address-family rtfilter unicast
neighbor csg send-community extended
neighbor 100.111.13.22 activate
neighbor 100.111.13.23 activate
neighbor 100.111.13.24 activate
exit-address-family
CN-RR Configuration
router bgp 1000
bgp router-id 100.111.11.50
!
address-family vpnv4 unicast
nexthop trigger-delay critical 2000
!
address-family vpnv6 unicast
nexthop trigger-delay critical 2000
!
session-group intra-as
remote-as 1000
!
!***Neighbor Group for MTGs***
neighbor-group mtg
use session-group intra-as
!
!***MTGs are Route-Reflector Clients***
address-family vpnv4 unicast
route-reflector-client
!
address-family vpnv6 unicast
route-reflector-client
!
!
!***Neighbor Group for CN-ABR inline RR***
neighbor-group cn-abr
use session-group intra-as
!
address-family vpnv4 unicast
route-reflector-client
!***Egress filter to drop unwanted RAN loopbacks towards neighboring aggregation
regions***
route-policy BGP_Egress_RAN_Filter out
!
address-family vpnv6 unicast
route-reflector-client
!***Egress filter to drop unwanted RAN loopbacks towards neighboring aggregation
regions***
route-policy BGP_Egress_RAN_Filter out
!
!
!***CN-ABRs***
neighbor 100.111.2.1
use neighbor-group cn-abr
!
neighbor 100.111.4.1
use neighbor-group cn-abr
!
neighbor 100.111.10.1
use neighbor-group cn-abr
!
neighbor 100.111.10.2
use neighbor-group cn-abr
!
!***MTGs***
neighbor 100.111.15.1
use neighbor-group mtg
!
neighbor 100.111.15.2
use neighbor-group mtg
route-policy BGP_Egress_Transport_Filter
!***10:10 = RAN_Community for CSGs***
if community matches-any (10:10) then
drop
else
pass
endif
end-policy
!
address-family vpnv6 unicast
!
!
!***CN-RR***
neighbor 100.111.15.50
use neighbor-group cn-rr
This section describes the BGP control plane aspects for the VPNv4 and VPNv6 LTE backhaul service
deployed in a Small Network Design with an integrated core/aggregation domain. These configurations
are designed to build upon the transport layer BGP configurations described in the EPN 4.0 Transport
Infrastructure Design and Implementation Guide. See Figure 5-6.
Figure 5-6 BGP Control Plane for MPLS VPN Service (Small Network Design)
VRF
VRF CN MTG CN VRF
CSG CSG
iBGP iBGP
vCN-RR VPNv4/v6
VPNv4/v6
iBGP iBGP
VRF VPNv4/v6 RR VPNv4/v6 VRF
CSG iBGP CSG
VPNv4/v6 MTG
CN CN
VRF VRF VRF
CSG CSG
295950
SGW/PGW
Note Please refer to Route Scale Control for LTE L3 MPLS VPN Service Model, page 3-3 for a detailed
explanation of how RT-constrained route distribution is used in order to constrain VPNv4 routes from remote
RAN access regions.
CN-RR Configuration
!
router bgp 1000
nsr
bgp router-id 100.111.15.50
!***session group for iBGP clients (AGNs and MTGs)***
session-group intra-as
remote-as 1000
!
!***MTG neighbor group***
neighbor-group mtg
use session-group intra-as
!
!***MTGs are RR clients***
address-family vpnv4 unicast
route-reflector-client
!
address-family vpnv6 unicast
route-reflector-client
!
!
!***PAN neighbor group***
neighbor-group pan
use session-group intra-as
!
!***PANs are RR clients***
address-family vpnv4 unicast
route-reflector-client
!
address-family vpnv6 unicast
route-reflector-client
!
!
!***MTG-K1501***
neighbor 100.111.15.1
use neighbor-group mtg
!
!***MTG-K1502***
neighbor 100.111.15.2
use neighbor-group mtg
!
!***PANs***
neighbor 100.111.5.7
use neighbor-group pan
!
neighbor 100.111.5.8
use neighbor-group pan
!
neighbor 100.111.9.21
use neighbor-group pan
!
neighbor 100.111.9.22
use neighbor-group pan
!
neighbor 100.111.14.3
use neighbor-group pan
!
neighbor 100.111.14.4
use neighbor-group pan
!
!
!
session-group intra-as
!
neighbor-group cn-rr
use session-group intra-as
!
address-family vpnv4 unicast
!
address-family vpnv6 unicast
!
!
!***CN-RR***
neighbor 100.111.15.50
use neighbor-group cn-rr
As shown in Figure 5-7, the eMBMS service is implemented by attaching a multicast source to each of the
MTGs, MTG-1501 and MTG-1502 respectively. Multicast receivers are located in both access rings.
AGN K1101 and AGN K1102 mark the edge of the aggregation/core domain to which the mLDP domain extends.
Both PAN rings (ASR 9001 PAN and ASR 903 PAN) and the access rings (ME 3600 and ASR 901) run
native IP multicast; multicast forwarding in the access domain is based on PIM SSM v4/v6.
Native IPv4 multicast forwarding is implemented in both the ME 3600 and the ASR 901 access rings,
while native IPv6 multicast is implemented in the ASR 901 access ring only.
Within the LSM domain, which terminates at the SE nodes AGN-K1101 and AGN-K1102, MLDP relies
on RFC 3107-learnt IPv4/v6 multicast source addresses to build the multicast tree in the core. The AGNs
then redistribute the IPv4/v6 multicast source addresses learnt via RFC 3107 into the Level 2 ISISv4/6
process running in the native multicast domain.
At the PAN nodes, the multicast source prefixes now available in ISISv4/6 Level 2 are leaked into
ISISv4/6 Level 1 so that the nodes in the access ring are aware of the multicast source prefixes and
PIM can build end-to-end multicast distribution trees.
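On an IOS-XR based PAN, this Level 2 into Level 1 leaking can be sketched with a propagate statement; the IS-IS instance name, route-policy name, and prefix list below are hypothetical:

route-policy MCAST-SRC-LEAK
 !***Match only the multicast source prefixes***
 if destination in (200.15.1.0/24, 200.15.12.0/24) then
  pass
 else
  drop
 endif
end-policy
!
router isis agg
 address-family ipv4 unicast
  !***Leak multicast source prefixes from Level 2 into Level 1***
  propagate level 2 into level 1 route-policy MCAST-SRC-LEAK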
Note Please refer to the "Global Multicast" section in the EPN 4.0 Transport Infrastructure Design and
Implementation Guide for implementation of ISISv6 & PIMv6 in ASR 903 PAN ring and ASR 901
access ring.
MLDP should be enabled on all MPLS nodes participating in LSM. MLDP base configuration is covered
in the EPN 4.0 Transport Infrastructure Design and Implementation Guide chapters on global multicast
support implementation.
With MLDP-Global in-band signaling, PIM is required only at the edge. PIM-SSM is used in the Cisco
EPN System architecture. SSM is enabled by default on the multicast group range mentioned below.
By default, PIM-SSM operates in the 232.0.0.0/8 Multicast group range for IPv4 and ff3x::/32 (where x
is any valid scope) in IPv6. To configure these values, use the SSM range command. The default SSM
range was used in this implementation--thus, there's no need to explicitly configure SSM range.
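If a non-default group range were needed, it could be configured as follows; the ACL name and the 239.232.0.0/16 range are purely illustrative:

!***Override the default SSM range with an administratively scoped range***
ip access-list standard SSM-RANGE
 permit 239.232.0.0 0.0.255.255
!
ip pim ssm range SSM-RANGE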
In MLDP-Global in-band signaling, the root of the MLDP LSP is derived from the BGP next hop.
Hence, the source address of the (S,G) join must be reachable through a BGP route. Also, in
MLDP-Global in-band signaling, the multicast source for the eMBMS service must be reachable in the
global routing table.
Cisco EPN 4.0 only supports MLDP-Global in-band signaling for Intra-AS scenarios. Expanding
MLDP-Global to Inter-AS scenarios will be supported in future releases of the Cisco EPN System
architecture.
MTG-9006-K1501 (ROOT-node/Ingress-PE)
Advertise the network prefix where the Multicast source of the IPTV and eMBMS service is located.
router bgp 1000
bgp router-id 100.111.15.1
address-family ipv4 unicast
network 200.15.12.0/24 route-policy MSE_IGW_Community
network 200.15.1.0/24 route-policy MSE_IGW_Community
Note Please refer to the "Prefix Filtering" section in the EPN 4.0 Transport Infrastructure Design and
Implementation Guide for explanation of the need for using the route-policy MSE_IGW_Community.
As described in the design section, from the CSG to the eNodeB, two options are available depending on the
capabilities of the CSGs and the type of access network. In the case of the ASR 901 access ring, for IPv4
multicast, the single-VLAN option is implemented for both unicast and multicast traffic with the help of a
feature called VRF route leaking.
Static mroutes are added in the VRF for the multicast sources along with the keyword "fallback-lookup
global," thus enabling the RPF resolution to happen in the global routing table while the multicast source
prefixes remain available for the multicast traffic coming on the VRF enabled interface.
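For example, for the multicast source subnets advertised earlier in this chapter (200.15.12.0/24 and 200.15.1.0/24), the static mroutes on the CSG might look like the following; the /24 masks are an assumption based on those advertisements:

!***RPF lookup for these sources falls back to the global routing table***
ip mroute vrf LTE128 200.15.12.0 255.255.255.0 fallback-lookup global
ip mroute vrf LTE128 200.15.1.0 255.255.255.0 fallback-lookup global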
In the case of IPv6 multicast, the VRF route leaking feature is not supported in the current release.
Hence, the two-VLAN option is implemented, with a dedicated VLAN for unicast and a second VLAN
for multicast delivery.
As depicted in Figure 5-8, the CSG node is configured differently in the two cases. For IPv6, the physical
interface connected to the enhanced NodeB has two SVIs: one for Unicast in VRF for the L3VPN mobile
transport service (VLAN 111), and one for Multicast in Global for the eMBMS service (VLAN 222 in
this example). For IPv4, only one SVI (VLAN 111) for both unicast and multicast traffic exists.
Figure 5-8 Multicast Transport Implementation for eMBMS Services in ASR901 to ASR 903
Access Domain
The CSG node facing the eNodeB may use SSM mapping if the eNodeB/Multicast Receiver is not
SSM-aware and only supports IGMPv2. The CSG node is implemented with a Cisco ASR 901. The
eNodeBs are emulated using a traffic generator.
interface Vlan111
!***Unicast VRF***
vrf forwarding LTE128
ip address 111.13.22.1 255.255.255.0
ip pim sparse-mode
!***Enable IGMPv3 (default is IGMPv2)***
ip igmp version 3
Note IGMPv2 configuration requires static SSM mapping at the CSG access node.
Enable PIM-SSM and use the default SSM range 232.0.0.0/8 for IPv4. If you are using a non-232.0.0.0/8
Multicast group address, use the ssm range command.
ip pim ssm default
Enable SSM mapping and define a static SSM map to support IGMPv2 in the PIM-SSM network.
ip igmp ssm-map enable
no ip igmp ssm-map query dns
ip igmp ssm-map static SSM-map2 200.15.1.2
ip igmp ssm-map static SSM-map1 200.15.12.2
!
ip access-list standard SSM-map1
permit 232.200.13.0 0.0.0.255
ip access-list standard SSM-map2
permit 232.200.14.0 0.0.0.255
Enable PIM in the L3 interfaces connecting other nodes in the ring and the L3 interface facing the
simulated eNodeB.
!***Physical interface connected to other CSG node in access ring***
interface GigabitEthernet0/10
description To CSG-K1322 G0/10
no ip address
load-interval 30
negotiation auto
cdp enable
service-policy input PMAP-NNI-I
service-policy output PMAP-NNI-E
service instance 10 ethernet
encapsulation dot1q 10
rewrite ingress tag pop 1 symmetric
bridge-domain 10
!
end
interface Vlan10
ip address 10.13.23.0 255.255.255.254
ip pim sparse-mode
!
!***Physical interface connected to other CSG node in access ring***
interface GigabitEthernet0/11
description To CSG-K1324 g0/11
no ip address
load-interval 30
negotiation auto
cdp enable
service-policy input PMAP-NNI-I
service-policy output PMAP-NNI-E
service instance 10 ethernet
encapsulation dot1q 10
rewrite ingress tag pop 1 symmetric
bridge-domain 20
!
end
!***L3 interface connected to other CSG node in access ring***
interface Vlan20
ip address 10.13.22.3 255.255.255.254
ip pim sparse-mode
!
!***Physical interface connected to simulated eNodeB***
interface GigabitEthernet0/4
no ip address
negotiation auto
!***EFP for Unicast VRF for S1/X2/M3 transport***
service instance 111 ethernet
encapsulation dot1q 111
rewrite ingress tag pop 1 symmetric
bridge-domain 111
!
!***EFP for global Multicast transport for M1 transport***
service instance 222 ethernet
encapsulation dot1q 222
rewrite ingress tag pop 1 symmetric
bridge-domain 222
!
interface Vlan222
ip address 222.13.23.1 255.255.255.0
ip pim sparse-mode
interface Vlan111
!***Unicast VRF***
vrf forwarding LTE128
ip address 111.13.23.1 255.255.255.0
As depicted in Figure 5-9, the CSG physical interface connected to the eNodeB has two SVIs: one for
Unicast, in VRF, for L3VPN Mobile Transport service (VLAN 111) and one for Multicast, in Global, for
eMBMS service (VLAN 222 in this example).
Figure 5-9 Multicast Transport Implementation for eMBMS Services in ME 3600 to ASR 9000
Access Domain
(Figure legend: ASR 901 rings as CSGs, ASR 903 as PANs, ASR 9000 and CRS as AGN and core nodes.)
The CSG node facing the eNodeB may use SSM mapping if the eNodeB/Multicast Receiver is not
SSM-aware and only supports IGMPv2.
The CSG node is implemented with a Cisco ME 3600X Series switch. The eNodeB is emulated using an
IXIA traffic generator.
Enable PIM-SSM and use the default SSM range 232.0.0.0/8 for IPv4. If you are using a non-232.0.0.0/8
Multicast group address, use the ssm range command.
ip pim ssm default
Enable PIM in the L3 interfaces connecting other nodes in the ring and the L3 interface facing the
eNodeB.
interface TenGigabitEthernet0/1
no switchport
ip address 10.7.11.1 255.255.255.254
ip pim sparse-mode
!
interface TenGigabitEthernet0/2
no switchport
ip address 10.7.12.0 255.255.255.254
ip pim sparse-mode
!
!***Physical interface connected to simulated eNodeB***
interface GigabitEthernet0/3
switchport trunk allowed vlan none
switchport mode trunk
!***EFP for Unicast VRF for S1/X2/M3 transport***
service instance 111 ethernet
encapsulation dot1q 111
rewrite ingress tag pop 1 symmetric
bridge-domain 111
!
!***EFP for global Multicast transport for M1 transport***
service instance 222 ethernet
encapsulation dot1q 222
rewrite ingress tag pop 1 symmetric
bridge-domain 222
!
interface Vlan222
ip address 222.7.12.1 255.255.255.0
ip pim sparse-mode
!***Enable IGMPv3 (default is IGMPv2)***
ip igmp version 3
interface Vlan111
!***Unicast VRF***
vrf forwarding LTE126
ip address 111.7.12.1 255.255.255.0
Note IGMPv2 configuration requires static SSM mapping at the CSG access node.
Enable PIM-SSM and use the default SSM range 232.0.0.0/8 for IPv4. If you are using a non-232.0.0.0/8
Multicast group address, use the SSM range command.
ip pim ssm default
Enable SSM mapping and define a static SSM map. This supports IGMPv2 in the PIM-SSM network.
ip igmp ssm-map enable
no ip igmp ssm-map query dns
ip igmp ssm-map static SSM-map2 200.15.1.2
ip igmp ssm-map static SSM-map1 200.15.12.2
!
ip access-list standard SSM-map1
permit 232.200.13.0 0.0.0.255
ip access-list standard SSM-map2
permit 232.200.14.0 0.0.0.255
Enable PIM in the L3 interfaces connecting other nodes in the ring and the L3 interface facing the
simulated eNodeB.
interface TenGigabitEthernet0/1
no switchport
ip address 10.7.11.1 255.255.255.254
ip pim sparse-mode
!
interface TenGigabitEthernet0/2
no switchport
ip address 10.7.12.0 255.255.255.254
ip pim sparse-mode
!
!***Physical interface connected to simulated eNodeB***
interface GigabitEthernet0/3
switchport trunk allowed vlan none
switchport mode trunk
!***EFP for Unicast VRF for S1/X2/M3 transport***
service instance 111 ethernet
encapsulation dot1q 111
rewrite ingress tag pop 1 symmetric
bridge-domain 111
!
!***EFP for global Multicast transport for M1 transport***
service instance 222 ethernet
encapsulation dot1q 222
rewrite ingress tag pop 1 symmetric
bridge-domain 222
!
interface Vlan222
description To IXIA g2/14 (acting as eNodeB)
ip address 222.7.9.1 255.255.255.0
ip pim sparse-mode
interface Vlan111
!***Unicast VRF***
vrf forwarding LTE128
ip address 111.7.9.1 255.255.255.0
CESoPSN provides structured transport of TDM circuits down to the DS0 level across an MPLS-based
backhaul architecture. The configurations for the CSG and MTGs are outlined in this section, including
an illustration of basic backup pseudowire configuration on the CSG in order to enable transport to
redundant MTGs. Complete high availability configurations are available in High Availability, page 4-4.
(Figure: TDM circuits from the BTSs terminate on the CSG and are carried over a primary TDM pseudowire to
one MTG and a backup TDM pseudowire to the redundant MTG; MR-APS protects the connectivity toward the BSC.)
!
!
mpls ldp discovery targeted-hello accept
!*** ISIS and BGP related configuration needed to ensure MPLS LDP binding with remote
PE so as to establish AToM PW***
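The CESoPSN attachment circuit and redundant pseudowire on the CSG can be sketched as follows; the controller number, timeslot range, peer addresses, and VC IDs below are hypothetical, and the full redundancy configuration is covered in the High Availability section:

controller T1 0/1
 !***Structured CESoPSN: carry timeslots 1-12 of the T1***
 cem-group 0 timeslots 1-12
!
interface CEM0/1
 cem 0
  !***Primary pseudowire to MTG-1, backup to the redundant MTG-2***
  xconnect 100.111.15.1 9011501 encapsulation mpls
   backup peer 100.111.15.2 9011502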
SAToP provides unstructured transport of TDM circuits across an MPLS-based backhaul architecture.
The configurations for the CSG and MTGs are outlined in this section, including Figure 5-11, which
illustrates backup pseudowire configuration on the CSG to enable transport to redundant MTGs.
(Figure 5-11: TDM circuits from the BTSs terminate on the CSG and are carried over primary and backup TDM
pseudowires to redundant MTGs; MR-APS protects the connectivity toward the BSC.)
Regarding Figure 5-11:
Cisco ASR 901 Series motherboard with built-in 12GE, 1FE, 16T1E1 (A901-12C-FT-D) is used to
create the CEM interface for the TDM pseudowire.
Cisco ME 3600X 24CX Series utilizes on-board T1/E1 interfaces.
Both Cisco ASR 9000 Series MTGs utilize 1-port channelized OC3/STM-1 ATM and circuit
emulation SPA (SPA-1CHOC3-CE-ATM) in a SIP-700 card for the TDM interfaces.
SAToP encapsulates T1/E1 services, disregarding any structure that may be imposed on these
streams, in particular the structure imposed by the standard TDM framing. This mode is based on
IETF RFC 4553.
mpls ldp discovery targeted-hello accept is required because the targeted LDP session used for
the pseudowire runs between PEs that are not directly connected. Because a targeted-hello response is
not explicitly configured, both sessions will show as passive.
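To match the MTG-side IOS-XR configuration shown below (neighbor 100.111.9.13, pw-id 9131501), the CSG side of the SAToP pseudowire could be sketched as follows; the controller number is hypothetical, and the pseudowire class enables the control word to match the MTG:

pseudowire-class SAToP
 encapsulation mpls
 control-word
!
controller T1 0/5
 !***Unstructured SAToP: the entire T1 is emulated***
 cem-group 0 unframed
!
interface CEM0/5
 cem 0
  xconnect 100.111.15.1 9131501 encapsulation mpls pw-class SAToP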
ais-shut
report lais
report lrdi
sts 1
mode vt15-t1
delay trigger 250
!
!
controller T1 0/2/1/0/1/5/1
cem-group unframed
forward-alarm AIS
forward-alarm RAI
clock source internal
!
controller T1 0/2/1/0/1/3/3
cem-group unframed
forward-alarm AIS
forward-alarm RAI
clock source internal
!
interface CEM0/2/1/0/1/5/1
load-interval 30
l2transport
!
!
interface CEM0/2/1/0/1/3/3
load-interval 30
l2transport
!
!
!
interface Loopback0
description Global Loopback
ipv4 address 100.111.15.1 255.255.255.255
!
l2vpn
pw-class SAToP
encapsulation mpls
control-word
!
!
xconnect group TDM-K0913
p2p T1-SAToP-01
interface CEM0/2/1/0/1/5/1
neighbor ipv4 100.111.9.13 pw-id 9131501
pw-class SAToP
!
!
!
xconnect group TDM-K1326
p2p T1-SAToP-01
interface CEM0/2/1/0/1/3/3
neighbor ipv4 100.111.13.26 pw-id 1326150101
pw-class SAToP
!
!
!
router isis core
interface Loopback0
passive
point-to-point
address-family ipv4 unicast
!
!
!
router bgp 1000
bgp router-id 100.111.15.1
address-family ipv4 unicast
network 100.111.15.1/32 route-policy MTG_Community
!
mpls ldp
router-id 100.111.15.1
discovery targeted-hello accept
!
!*** ISIS and BGP related configuration needed to ensure MPLS LDP binding with remote
PE so as to establish AToM PW***
Note ASR903 RSP1 and RSP2 support Mobile Services with Non-MPLS access.
[Figure: Route-target policy for S1 traffic between CSG and MTG VRFs toward the SGW/MME: the CSG exports the common RAN RT (10:10) and a local RAN RT (10:203) and imports the MPC RT (1001:1001); the MTG exports the MPC RT (1001:1001) and imports the common RAN RT (10:10)]
[Figure: Hierarchical route reflection for VPNv4/v6: PAN (PE) VRFs peer via iBGP with a vAGN-RR in each domain, the vAGN-RRs peer via multi-hop eBGP with a central vCN-RR, and the MTGs (toward the SGW/PGW) are iBGP clients of the RR hierarchy]
VRF Definition
vrf LTE102
address-family ipv4 unicast
!***Common CSG RT imported by MTG***
import route-target
10:10
!
!***Export MSE RT.***
!***Imported by every CSG in entire network.***
export route-target
1001:1001
!
!
address-family ipv6 unicast
import route-target
10:10
!
export route-target
1001:1001
!
!
!
!
vrf LTE102
rd 1001:1002
address-family ipv4 unicast
redistribute connected
!
address-family ipv6 unicast
redistribute connected
!
!
Note Each MTG has a unique RD for the MPLS VPN VRF to properly enable BGP FRR Edge functionality.
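For illustration, the companion MTG could define the same VRF with a distinct RD; the value 1001:1003 below is an assumption, not taken from the validated configuration.
vrf LTE102
 rd 1001:1003
 address-family ipv4 unicast
  redistribute connected
 !
 address-family ipv6 unicast
  redistribute connected
 !
!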
!***AGN-RR***
neighbor 100.111.15.5 peer-group agn-rr
!
address-family vpnv4
bgp nexthop trigger delay 3
!***AGN-RR is next level RR***
neighbor agn-rr send-community both
neighbor 100.111.15.5 activate
exit-address-family
!
address-family vpnv6
bgp nexthop trigger delay 3
!***AGN-RR is next level RR***
neighbor agn-rr send-community both
neighbor 100.111.15.5 activate
exit-address-family
!
!***RT Constrained Route Distribution towards CSGs and AGN-RR***
address-family rtfilter unicast
neighbor agn-rr send-community extended
neighbor 100.111.15.5 activate
exit-address-family
!
remote-as 1000
!
session-group inter-as-rr
remote-as 101
!
!***Neighbor Group for MTGs***
neighbor-group mtg
use session-group intra-as
!
!***MTGs are Route-Reflector Clients***
address-family vpnv4 unicast
route-reflector-client
!
address-family vpnv6 unicast
route-reflector-client
!
!
!***Multihop Neighbor Group for AGN-RR***
neighbor-group inter-as-rr
use session-group inter-as-rr
!***eBGP Multihop***
ebgp-multihop 20
address-family vpnv4 unicast
route-policy pass-all in
!***Filters unwanted RAN prefixes towards remote AGN domains***
route-policy BGP_Egress_Transport_Filter out
next-hop-unchanged
!
address-family vpnv6 unicast
route-policy pass-all in
!***Filters unwanted RAN prefixes towards remote AGN domains***
route-policy BGP_Egress_Transport_Filter out
next-hop-unchanged
!
!
!***AGN-RR***
neighbor 100.111.15.5
use neighbor-group inter-as-rr
!
!***MTGs***
neighbor 100.111.15.1
use neighbor-group mtg
!
neighbor 100.111.15.2
use neighbor-group mtg
!
!***Drops common RAN RTs towards AGN-RR***
route-policy BGP_Egress_Transport_Filter
if community matches-any (10:10) then
drop
else
pass
endif
end-policy
The following section shows the configuration of PAN K1401, to which the eNodeB is directly connected.
VRF Definition
vrf definition LTE224
rd 10:104
!
address-family ipv4
export map ADDITIVE
route-target export 10:104
route-target import 10:104
route-target import 1001:1001
route-target import 236:236
route-target import 235:235
exit-address-family
!
address-family ipv6
export map ADDITIVE
route-target export 10:104
route-target import 10:104
route-target import 1001:1001
route-target import 235:235
exit-address-family
!
The Cisco EPN 4.0 System design validation includes cell site connectivity via GPON access. From the
CSG perspective, the connection to the ONU is configured just like any other Ethernet connection. The
transport over the PON access network between the ONU and OLT is configured as "native" (or
untagged) in order to transport the untagged traffic between the CSG and aggregation nodes. This section
gives an overview of the CSG and PON access configurations.
Step 1 Create BitStream service in the OLT, and then associate the service to the ONU.
Step 2 To configure services in the OLT, go to Configuration > Services.
Step 3 To associate the services to the ONU, go to Configuration > Remote Equipments > ONU, and then
select the correct PON port and ONU ID of the ONU to be configured.
Step 4 Click Apply, and then scroll down to the Services portion. Add the services and configure them with the
appropriate VLAN.
Step 5 For the BitStream service type used for mobile services, the ETH-VLAN and UNI-VLAN are kept the same.
Step 6 Under Services, choose the following options:
Type: Bitstream (this service type is used in Business Services, where all packets pass
transparently through the system and MAC learning is disabled).
Eth Vlan: Identifies the outer tag of the service used on the uplink port (should match the dot1q tag
configured in the PE).
Uni Vlan: Identifies the VLAN delivered to or received from the ONT (usually the same VLAN
tag as Eth Vlan).
CSG Configuration
CSG-K1317 Configuration
!***Ethernet interface connected to ONU***
interface GigabitEthernet0/1
description To MOB-ONT-OLT3 setup
load-interval 30
negotiation auto
service instance 900 ethernet
encapsulation untagged
bridge-domain 900
interface Vlan900
!***Link-local subinterface with PAN connected to far end of PON link***
ip address 10.5.4.14 255.255.255.254
ip router isis agg-acc
load-interval 30
mpls ip
mpls ldp igp sync delay 10
bfd interval 50 min_rx 50 multiplier 3
isis circuit-type level-1
isis network point-to-point
isis csnp-interval 10
end
!***VRF Configuration***
vrf definition LTE229
rd 10:209
!
address-family ipv4
export map ADDITIVE
route-target export 10:209
route-target import 1001:1001
route-target import 236:236
route-target import 235:235
exit-address-family
!***VRF added under BGP Configuration***
router bgp 1000
address-family ipv4 vrf LTE229
redistribute connected
exit-address-family
!
route-map ADDITIVE permit 10
set extcommunity rt 10:10 additive
Note VPN prefixes are exported with RT 10:209. All the CSGs are configured to import RT 10:209 for X2
communication and RT 1001:1001 for S1 communication. Route-map ADDITIVE is used for appending
RT 10:10 to 10:209.
MPLS VPN Transport for LTE S1 and X2 Interfaces G.8032 Ethernet Access- ASR 901
This section describes the L3VPN configuration aspects on the CSGs in the RAN access and the Mobile
Transport Gateways (MTGs) in the core network required for implementing the LTE backhaul service
for X2 and S1 interfaces. In this scenario, the access network is deployed as a G.8032-protected Ethernet
ring that is dual-homed to a pair of SE nodes providing the VRF services for the LTE backhaul. SE
node dual homing is achieved by a combination of VRRP, routed PW, and G.8032, which together
provide resiliency and load balancing in the access network.
In Figure 5-16, the SE Nodes, AGN-K0301 and AGN-K0302, implement the L3 MPLS VPN Service for
the transport of LTE traffic to the MPC. A routed BVI interface acts as the service endpoint. The LTE
S1 interface and X2 interfaces that span different access domains are carried over the L3VPN service.
An X2 interface between LTE eNodeBs in the same G.8032 access domain is bridged over the ring.
Figure 5-16 MPLS VPN Service Implementation for LTE Backhaul on Ethernet G8032 Ring Access
The Ethernet access network is implemented as a G.8032 access ring and carries a dedicated VLAN for
the L3 MPLS VPN-based service. A PW running between the SE nodes closes the service VLAN,
providing full redundancy on the ring.
VRRP is configured on the Routed BVI interface to ensure the eNodeBs have a common default gateway
regardless of the SE node forwarding the traffic. The LTE eNodeB is emulated by a Cisco ME3400 node,
K0901.
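The routed BVI and VRRP service endpoint described above might be sketched as follows on an SE node. The BVI number, IP addresses, VRRP group, and priority are assumptions; the VRF name LTE102 is reused from earlier examples.
!***Routed BVI as L3VPN service endpoint with VRRP gateway (sketch)***
interface BVI900
 vrf LTE102
 ipv4 address 114.9.1.2 255.255.255.0
!
router vrrp
 interface BVI900
  address-family ipv4
   vrrp 1
    address 114.9.1.1
    priority 110
   !
  !
 !
!
The second SE node would use a lower VRRP priority so that both nodes present the same default gateway address to the eNodeBs while one forwards at a time.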
CESoPSN provides structured transport of TDM circuits down to the DS0 level across an MPLS-based
backhaul architecture. The configurations for the CSG and MTGs are outlined in this section, including
an illustration of basic backup pseudowire configuration on the CSG in order to enable transport to
redundant MTGs. Complete high availability configurations are available in High Availability, page 4-4.
[Figure 5-17: TDM BTS traffic carried from the PAN over primary and backup TDM pseudowires to redundant MTGs running MR-APS (PGP) toward the BSC]
Regarding Figure 5-17:
The Cisco ASR 903 Series router utilizes a 16-port T1/E1 Interface Module (A900-IMA16D) for
TDM interfaces.
Both Cisco ASR 9000 Series MTGs utilize 1-port channelized OC3/STM-1 ATM and circuit
emulation SPA (SPA-1CHOC3-CE-ATM) in a SIP-700 card for the TDM interfaces.
CESoPSN encapsulates structured (channelized) T1/E1 services. Structured mode identifies the
framing and sends only the payload, which can be channelized T1s within a DS3 or DS0s within a
T1; multiple DS0s can be bundled into the same packet. This mode is based on IETF RFC 5086.
mpls ldp discovery targeted-hello accept is required because the pseudowire is signaled over a
targeted LDP session between PEs that are not directly connected. Because targeted-hello response
is not configured, both sessions will show as passive.
!
!
!*** ISIS and BGP related configuration needed to ensure MPLS LDP binding with remote
PE so as to establish AToM PW***
mpls ldp discovery targeted-hello accept
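A minimal CESoPSN circuit on the Cisco ASR 903, including the backup pseudowire toward the redundant MTG, might look like the following sketch. The controller number, timeslot range, and pw-ids are assumptions; the MTG loopback addresses are reused from this section.
!***CESoPSN circuit with backup pseudowire (sketch; numbering assumed)***
controller T1 0/0/4
 framing esf
 clock source internal
 linecode b8zs
 !***Bundle DS0 timeslots 1-24 into a single CEM circuit***
 cem-group 0 timeslots 1-24
!
interface CEM0/0/4
 no ip address
 cem 0
  xconnect 100.111.15.1 1401150117 encapsulation mpls
   backup peer 100.111.15.2 1401150217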
SAToP provides unstructured transport of TDM circuits across an MPLS-based backhaul architecture.
The configurations for the PAN and MTGs are outlined in this section, including an illustration of
backup pseudowire configuration on the PAN in order to enable transport to redundant MTGs.
[Figure 5-18: TDM BTS traffic carried from the PAN over primary and backup TDM pseudowires to redundant MTGs running MR-APS (PGP) toward the BSC]
Regarding Figure 5-18:
The Cisco ASR 903 Series router utilizes a 16-port T1/E1 Interface Module (A900-IMA16D) for
TDM interfaces.
Both Cisco ASR 9000 Series MTGs utilize 1-port channelized OC3/STM-1 ATM and circuit
emulation SPA (SPA-1CHOC3-CE-ATM) in a SIP-700 card for the TDM interfaces.
SAToP encapsulates T1/E1 services, disregarding any structure that may be imposed on these
streams, in particular the structure imposed by the standard TDM framing. This mode is based on
IETF RFC 4553.
mpls ldp discovery targeted-hello accept is required because the pseudowire is signaled over a
targeted LDP session between PEs that are not directly connected. Because targeted-hello response
is not configured, both sessions will show as passive.
interface Loopback0
passive
point-to-point
address-family ipv4 unicast
!
!
!
router bgp 1000
bgp router-id 100.111.15.1
address-family ipv4 unicast
network 100.111.15.1/32 route-policy MTG_Community
!
mpls ldp
router-id 100.111.15.1
discovery targeted-hello accept
AToM pseudowire circuits are utilized to provide ATM circuit transport across an MPLS-based backhaul
architecture. The ATM interface is configured for ATM Adaptation Layer 0 (AAL0) to allow for
transparent transport of the entire permanent virtual circuit (PVC) across the transport network. The
configurations for the PAN and MTGs are outlined in this section. QoS implementation for ATM circuits
is covered in Chapter 7, Quality of Service Implementation, and resiliency via pseudowire redundancy
and MR-APS are covered in High Availability, page 4-4.
[Figure 5-19: ATM NodeB traffic carried from the PAN over primary and backup ATM pseudowires to redundant MTGs running MR-APS toward the ATM RNC]
Regarding Figure 5-19:
The Cisco ASR 903 Series router utilizes a 16-port T1/E1 Interface Module (A900-IMA16D) for
ATM interfaces.
Both Cisco ASR 9000 Series MTGs utilize a 1-port OC3/STM-1 ATM SPA (SPA-1XOC3-ATM-V2)
in a SIP-700 card for the ATM interfaces.
ATM transport via pseudowire over an MPLS infrastructure is detailed in IETF RFC 4447.
The PE side of the ATM interface uses AAL0 encapsulation, and the CE side uses AAL5
Subnetwork Access Protocol (SNAP), or AAL5SNAP, encapsulation.
mpls ldp discovery targeted-hello accept is required because the pseudowire is signaled over a
targeted LDP session between PEs that are not directly connected. Because targeted-hello response
is not configured, both sessions will show as passive.
controller T1 0/5/2
framing esf
clock source internal
linecode b8zs
cablelength long 0db
atm
!
interface ATM0/5/2
no ip address
no atm enable-ilmi-trap
!
interface ATM0/5/2.100 point-to-point
no atm enable-ilmi-trap
pvc 100/4011 l2transport
encapsulation aal0
xconnect 100.111.15.1 1401150115 encapsulation mpls
!
!
interface Loopback0
ip address 100.111.14.1 255.255.255.255
!
!
router isis agg-acc
passive-interface Loopback0
!
!***ISIS and BGP related configuration needed to ensure MPLS LDP binding with remote
PE***
mpls ldp discovery targeted-hello accept
!
interface GigabitEthernet2/3/0
description Traffic Generator with IP 214.15.3.18
ip address 214.15.3.17 255.255.255.252
load-interval 30
speed 1000
no negotiation auto
cdp enable
!
ip route 214.14.6.16 255.255.255.252 ATM3/1/0.100
!
AToM PW circuits are utilized to provide ATM circuit transport across an MPLS-based backhaul
architecture. The ATM interface in this example is configured for IMA. The configurations for the PAN
and MTGs are outlined in this section. QoS implementation for ATM circuits is covered in Chapter 6,
Functional Components Implementation, and resiliency via pseudowire redundancy and MR-APS are
covered in High Availability, page 4-4.
[Figure 5-20: ATM NodeB traffic carried from the PAN over primary and backup ATM pseudowires to redundant MTGs running MR-APS toward the ATM RNC]
Regarding Figure 5-20:
The Cisco ASR 903 Series router utilizes a 16-port T1/E1 Interface Module (A900-IMA16D) for
ATM interfaces.
Both Cisco ASR 9000 Series MTGs utilize a 1-port OC3/STM-1 ATM SPA (SPA-1XOC3-ATM-V2)
in a SIP-700 card for the ATM interfaces.
ATM transport via pseudowire over an MPLS infrastructure is detailed in IETF RFC 4447.
The PE side of the ATM interface uses AAL0 encapsulation, and the CE side uses AAL5SNAP
encapsulation.
mpls ldp discovery targeted-hello accept is required because the pseudowire is signaled over a
targeted LDP session between PEs that are not directly connected. Because targeted-hello response
is not configured, both sessions will show as passive.
no atm ilmi-keepalive
no atm enable-ilmi-trap
pvc 200/4021
protocol ip 100.14.15.29 broadcast
encapsulation aal5snap
!
!
interface Vlan201
ip address 214.7.1.1 255.255.255.0
no ptp enable
!
interface GigabitEthernet0/3
switchport access vlan 201
switchport mode access
!
ip route 215.3.1.0 255.255.255.0 100.14.15.29
!
Cisco ASR 903 Series Pre-Aggregation Node Configuration
card type t1 0 5
license feature atm
controller T1 0/5/3
framing esf
clock source internal
linecode b8zs
cablelength short 110
ima-group 1
!
interface ATM0/5/ima0
no ip address
atm bandwidth dynamic
no atm enable-ilmi-trap
no atm ilmi-keepalive
pvc 200/4021 l2transport
encapsulation aal0
xconnect 100.111.15.1 1402150116 encapsulation mpls
backup peer 100.111.15.2 1402150216
!
interface Loopback0
ip address 100.111.14.1 255.255.255.255
!
!
router isis agg-acc
passive-interface Loopback0
!
!*** ISIS and BGP related configuration needed to ensure MPLS LDP binding with remote
PE so as to establish AToM PW***
mpls ldp discovery targeted-hello accept
services, Cisco's Prime Performance Manager (PPM) can now be used for collecting the performance
monitoring data, in this case, NetFlow statistics, which can be used for capacity monitoring for the
mobile L3 MPLS VPN services. See Figure 5-21.
This section describes the implementation of mobile transport capacity monitoring, which is done
leveraging the NetFlow functionality on MTG and the NetFlow collector functionality on Cisco PPM.
[Figure 5-21: NetFlow-based capacity monitoring: the MTG acts as NetFlow exporter toward PPM (172.18.133.92, reached via GigE 0/0/1/15) for collection and analysis of S1-U traffic between the eNodeB in the mobile access network and the SGW in the mobile packet core network]
Enabling the NetFlow recording and exporting capabilities on the MTGs involves configuring a sampler
map, an exporter map, and a monitor map. The monitor map is then applied on the Mobile Packet
Core-facing interface on MTG 1 and MTG 2. The connection between the MTGs and the PPM is
established through a regular line card on the MTG, not through the management interface. Once this
is done, the MTG is ready to export NetFlow statistics to PPM. These statistics are collected in PPM
in the form of reports, which can be accessed via the GUI to display the amount of traffic sent and
received by a given cell site.
NetFlow capabilities are enabled on MTG 1 and MTG 2, and the flow monitor map is applied on TenGigE
0/0/0/2.1100 on both nodes. This interface is connected to the MPC gateway.
Both MTGs are connected to PPM through the data center connectivity via the interface
GigabitEthernet0/0/1/15.
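The sampler map and monitor map referenced by the interface configuration in this section might look like the following sketch. The 1-out-of-1000 sampling rate, the cache timeout, and the exporter-map name PPM-EXP are assumptions; the transport, source, and destination values are the ones used in this section.
!***NetFlow sampler, exporter, and monitor maps (sketch)***
sampler-map fsm1
 random 1 out-of 1000
!
flow exporter-map PPM-EXP
 version v9
 !
 transport udp 9991
 source Loopback0
 destination 172.18.133.92
!
flow monitor-map IPv4-fmm
 record ipv4
 exporter PPM-EXP
 cache timeout active 60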
!*** This is the default UDP port used for NetFlow communication between the MTG
and PPM ***
transport udp 9991
source Loopback0
!*** Prime Performance Manager IP address ***
destination 172.18.133.92
!
!*** Monitor Map applied on the MPC gateway facing interface ***
interface TenGigE0/0/0/2.1100
description To A-RAN-4500-K1504 Ten1/29
vrf LTE102
ipv4 address 115.1.102.3 255.255.255.0
ipv6 nd dad attempts 0
ipv6 address 2001:115:1:102::3/64
load-interval 30
flow ipv4 monitor IPv4-fmm sampler fsm1 ingress
flow ipv4 monitor IPv4-fmm sampler fsm1 egress
encapsulation dot1q 1100
Note The NetFlow exporting device must already be present in Prime Network. If not, the device must first
be added in Prime Network and then synchronized to Prime Performance Manager.
Synchronizing the Device Database Between Prime Network and Prime Performance Manager
In order to synchronize the devices database from Prime Network to PPM, from the Administration tab
select Prime Network Integration.
Figure 5-23 Synchronizing Prime Network and PPM with Strict Sync
After the database is synchronized between Prime Network and PPM, from the Network tab, select
Devices to check that the device is present in the PPM database and that NetFlow is enabled for it.
After the device appears in the PPM database, go to Reports in the side panel. Select Report/Group
Settings and then enable the required time interval options for the report generation.
Next, select Report/Group Policies and enable all the required NetFlow reports for the report
generation.
Select Report/Group Status to check if all the necessary NetFlow report generations are enabled.
Select NetFlow under Reports in the side panel, and navigate to the desired reports. The following
figure shows the example for the NetFlow Conversations report under NetFlow Endpoints. The report
shows the conversations between individual CSG nodes (currently identified by the CSG node interface
IP address) and MTG interfaces. For example,
IP address 113.29.224.2 corresponds to CSG node K1329 (belongs to large network).
IP address 113.30.224.1 corresponds to CSG node K1330 (belongs to large network).
IP address 113.22.128.2 corresponds to CSG node K1322 (belongs to small network).
IP address 115.1.102.2 corresponds to MTG1.
The different data points in terms of volume (bytes), throughput (bytes/sec) and packets pertaining to
individual conversations are shown in the report.
Select NetFlow under Reports in the side panel, and navigate to the desired reports. Figure 5-31 shows
the example for NetFlow Applications-Conversations report under NetFlow
Applications-Conversations.
[Figure 6-1: SyncE and 1588v2 PTP hybrid BC distribution: a PRS reference feeds the ASR 9000 OC master; the AGN/ASBR nodes act as hybrid BCs, regenerating the clock toward PAN-K1401 and the ASR 901/903 CSG hybrid BCs with BMCA (such as CSG-K1329), each with a PTP slave port on Loopback1 and a master port on Loopback2; per-link SSM QL values (PRS, DUS) are shown]
Regarding Figure 6-1:
The Symmetricom TimeProvider 5000 (TP5000) has two Ethernet interfaces connected to
AGN-ASBR-K1001 and AGN-ASBR-K1002 in order to provide PTP PRC source redundancy. The
PTP PRC sources are marked green and blue (green with priority 100/105, blue with
priority 110/115). Both AGN-ASBR-K1001 and AGN-ASBR-K1002 have two ports configured as
BC slave ports: the local ports with higher local priority, and the inter-link ports between them as
backup slave ports.
The TP5000 has two E1 outputs connected to the AGN-ASBR-K1001 and AGN-ASBR-K1002 RSP
BITS front panel ports. For redundancy under RSP switchover or node failure, it is recommended to
use two IOC modules to provide a total of four E1 outputs connecting to both RSPs of the AGN-ASBR
nodes.
ASBR-AGG node with 1588v2 PTP hybrid BC derives frequency from SyncE and phase from 1588.
Green PTP source has higher priority, so both AGN-ASBR nodes synchronize with the green PRC
source. The recovered PTP clock is regenerated from AGN-ASBR WAN interface PTP master port
and provides 1588 clock to the PANs.
The PAN with the 1588v2 PTP hybrid BC derives frequency from SyncE and phase from 1588. Each
PAN, based on its metric, will choose the closest AGN-ASBR as its primary PRC source and the
other as its backup.
Access CSG nodes with 1588v2 PTP hybrid BC (Fiber Port) and with BMCA will pick up one of
the PANs as a PRC source and provide synchronization to the downstream eNodeB from the
recovered clock regenerated from the BC master port.
NID nodes take their frequency reference from the SyncE output signal of the CSG devices.
Note The ASBR-AGG in this configuration is a Cisco ASR 9000 Series router. The ASR 9000 Series router
currently implements a port-based 1588v2 PTP implementation, which needs PTP messages exchanged
from the same port for both slave and master. A future release will have loopback support for PTP on
the ASR 9000 Series router.
GPS Information
Port Enabled: yes
Clock Id: 00:B0:AE:FF:FE:01:90:86
Profile: unicast
Clock Class: locked to reference
Clock Accuracy: within 100ns
Timescale: PTP
Num clients: 5
Client load: 1%
Packet load: 1%
Port Enabled: yes
Clock Id: 00:B0:AE:FF:FE:01:90:87
Profile: unicast
Clock Class: locked to reference
Clock Accuracy: within 100ns
Timescale: PTP
Num clients: 2
Client load: 0%
Packet load: 0%
tp5000>
Index  VLAN-ID  Pri  State   Address     Netmask          Gateway
1      100      5    enable  20.10.1.9   255.255.255.254  20.10.1.8
2      300      5    enable  200.10.1.9  255.255.255.254  200.10.1.8
Index  VLAN-ID  Pri  State   Address     Netmask          Gateway
1      200      5    enable  20.10.2.9   255.255.255.254  20.10.2.8
2      400      5    enable  200.10.2.9  255.255.255.254  200.10.2.8
PTP Timescale: AUTO
PTP State: enabled
PTP Max Number Clients: 500
PTP Profile: unicast
PTP ClockId: 00:B0:AE:FF:FE:01:90:86
PTP Priority 1: 100
PTP Priority 2: 105
PTP Domain: 0
PTP DSCP: 46
PTP DSCP State: enabled
PTP Sync Limit: -7
PTP Announce Limit: -3
PTP Delay Limit: -7
PTP Unicast Negotiation: enabled
PTP Unicast Lease Duration: 300
PTP Dither: disabled
PTP Timescale: AUTO
PTP State: enabled
PTP Max Number Clients: 500
PTP Profile: unicast
PTP ClockId: 00:B0:AE:FF:FE:01:90:87
PTP Priority 1: 110
PTP Priority 2: 115
PTP Domain: 0
PTP DSCP: 46
PTP DSCP State: enabled
PTP Sync Limit: -7
PTP Announce Limit: -3
PTP Delay Limit: -7
PTP Unicast Negotiation: enabled
PTP Unicast Lease Duration: 300
PTP Dither: disabled
tp5000>
Aggregation Node Configurations for SyncE and 1588v2 PTP Hybrid BC with BMCA
This Cisco ASR 9000 Series router is the main SyncE source node for the network.
!*** Global Configuration ***
!
frequency synchronization
quality itu-t option 1
log selection changes
!
!*** Interface Configuration ***
!***BITS Clock-Interface for SyncE frequency source***
!***Repeat for other RSP***
clock-interface sync 0 location 0/RSP0/CPU0
port-parameters
bits-input e1 non-crc-4 hdb3
!
frequency synchronization
selection input
priority 1
wait-to-restore 0
quality receive exact itu-t option 1 PRC
!
!
interface TenGigE0/0/0/0
description To AGN-ASBR-K1002 T0/0/0/2
frequency synchronization
selection input
priority 1
!
!
interface TenGigE0/0/0/1
description To AGN-K0502::T0/0/0/1
frequency synchronization
selection input
priority 2
!
!
clock
domain 0
identity mac-address router
timescale PTP
!
profile AGN-ASBR-BC-Slave
dscp 46
transport ipv4
port state slave-only
!***Secondary Master from TP5000***
master ipv4 200.10.1.9
priority 125
!
!***Primary Master from TP5000***
master ipv4 200.10.2.9
priority 120
!
sync frequency 64
clock operation one-step
delay-request frequency 64
!
profile AGN-ASBR-BC-Master
dscp 46
transport ipv4
sync frequency 64
clock operation one-step
announce timeout 2
delay-request frequency 64
!
!
!*** PTP hybrid BC slave port ***
interface GigabitEthernet0/0/1/1.400
ptp
profile AGN-ASBR-BC-Slave
port state slave-only
sync frequency 64
delay-request frequency 64
!
ipv4 address 200.10.2.8 255.255.255.254
encapsulation dot1q 400
!
interface TenGigE0/0/0/0
description To AGN-ASBR-K1002 T0/0/0/2
ptp
profile AGN-ASBR-BC-Slave
port state slave-only
sync frequency 64
delay-request frequency 64
!
ipv4 address 10.10.1.0 255.255.255.254
carrier-delay up 2000 down 0
load-interval 30
frequency synchronization
selection input
priority 1
!
!
!*** PTP hybrid BC master port ***
interface TenGigE0/0/0/1
description To AGN-K0502::T0/0/0/1
ptp
profile AGN-ASBR-BC-Master
sync frequency 64
delay-request frequency 64
!
ipv4 address 10.5.2.1 255.255.255.254
carrier-delay up 2000 down 0
load-interval 30
frequency synchronization
selection input
priority 2
!
!
!*** IGP to reach PAN ***
router isis agg-acc
is-type level-2-only
net 49.0100.1001.1101.0001.00
address-family ipv4 unicast
!
interface TenGigE0/0/0/1
circuit-type level-2-only
bfd minimum-interval 15
bfd multiplier 3
bfd fast-detect ipv4
point-to-point
link-down fast-detect
address-family ipv4 unicast
mpls ldp sync
!
!
!
This Cisco ASR 9000 Series router is configured as the backup SyncE source with SSM QL-SSU-A
override enabled.
!*** Global Config ***
frequency synchronization
quality itu-t option 1
log selection changes
priority 2
!
ipv4 address 10.3.2.3 255.255.255.254
!
Cisco ASR 903 Series PAN Configuration for SyncE and 1588v2 PTP Hybrid BC with BMCA and
Asymmetry Correction
The Cisco ASR 903 Series interface default SyncE state is master. In order to pass SyncE SSM messages
from the aggregation ring into the access ring, the network-clock input-source selection from the Cisco
ASR 903 Series side must be disabled. If this is not done, interlink issues between the Cisco ASR 903
Series and ASR 901 Series devices will occur.
Note This limitation doesn't apply to links between the Cisco ASR 903 Series and ASR 9000 Series devices.
The asymmetry observed in the PTP session between master and slave is compensated by asymmetry
correction, which is configured with the "Asymmetry offset" CLI command under the global PTP
configuration.
!
router isis agg-acc
net 49.0100.1001.1101.4001.00
ispf level-1-2
passive-interface Loopback1
passive-interface Loopback2
!
interface TenGigabitEthernet0/1/0
description to PAN-903-K0917::Ten0/1
synchronous mode
interface GigabitEthernet0/3/0
description To K1331::Gi0/11
synchronous mode
Access Node Configuration for SyncE and 1588 Hybrid BC with Asymmetry Correction
This section provides the configuration required to activate hybrid boundary clock on the ASR 901 and
ASR 920 CSG devices. It also provides the configuration details to activate SyncE between the NID and
CSG devices.
The Cisco ASR 901 Series interface default SyncE state is master. Note that the A901-12C-FT-D
integrated chassis has four combination SFP/copper GigE ports. In some applications, such as
interfacing with Nokia Siemens Networks (NSN) microwave devices that require a copper RJ-45
Ethernet connection, SyncE is not supported with SFP-GE-T or GLC-T copper SFP modules. In order
to pass SyncE SSM messages from the aggregation ring into the access ring, Gigabit Ethernet ports 0 to
3 and 5 to 7 should be used instead. Hybrid BC is supported on the fiber ring.
Before a Cisco ASR 901 Series GigE port is added to the network-clock input-source selection
pool, synce state slave must be configured at the physical interface level on copper ports, but not on
fiber ports. The reason is that IEEE 802.3ab requires the GigE interface on one side of the link to
use its internal oscillator to drive synchronization on the link, while the other side uses the clock
recovered from the line to drive its transmit direction; this forms the proper master-slave
relationship on the point-to-point GigE link. The asymmetry observed in the PTP session between
master and slave is compensated by asymmetry correction, which is configured with the
"Asymmetry offset" CLI command under the global PTP configuration.
interface GigabitEthernet0/6
description To CSG-901-K1330::GigabitEthernet0/6(syncE support)
synchronous mode
synce state slave
!*** Synce slave configured, only applicable for RJ45***
interface GigabitEthernet0/6
description To CSG-901-K1331::GigabitEthernet0/6(syncE support)
synchronous mode
synce state slave
!*** Synce slave configured, only applicable for RJ45***
!*** PTP Configuration ***
!
interface Loopback1
description PTP BC Slave to PAN transport intf
ip address 100.100.13.30 255.255.255.255
!
interface Loopback2
description PTP BC master to eNB transport intf
ip address 100.101.13.30 255.255.255.255
!
ptp clock boundary domain 0 hybrid
1pps-out 250 4096 us
!*** Asymmetry offset cli set to 250ns ***
time-properties gps timeScale TRUE currentUtcOffsetValid TRUE leap59 FALSE leap61 FALSE 35
clock-port BC_Slave_K1330 slave
transport ipv4 unicast interface Lo1 negotiation
clock source 100.101.14.1
!***PAN-K1401***
clock source 100.101.14.2 1
!***PAN-K1402***
clock-port BC_Master_K1330 master
transport ipv4 unicast interface Lo2 negotiation
!
router isis agg-acc
advertise passive-only
passive-interface Loopback1
passive-interface Loopback2
!
!
interface Vlan400
description To K1407 1588 PTPv2 client
ip address 114.10.7.2 255.255.255.0
load-interval 30
!
interface GigabitEthernet0/6
description To K1406 1588 PTPv2 client
service instance 400 ethernet
encapsulation untagged
bridge-domain 400
!
interface TenGigabitEthernet0/0/2
description To CSG-920-K0207:: TenGigabitEthernet0/0/3
synchronous mode
service instance 62 ethernet
encapsulation untagged
bridge-domain 62
interface TenGigabitEthernet0/0/2
description To NID-ME1200-K0304::GigabitEthernet Gi1/2
synchronous mode
service instance 62 ethernet
encapsulation untagged
bridge-domain 62
This section provides the configuration required to enable SyncE on the NID device from the PAN nodes.
By default, SyncE is not enabled on a NID device. Configuration of SyncE on the NID starts with
provisioning the default clock parameters at the global and port levels. The global-level clock settings
configure automatic revertive mode with a wait-to-restore time of 5 seconds. The port-level clock
settings configure the port as SyncE master with SSM disabled. An ME 3600 is used as the NID controller.
Note The following NID-related configurations are entered from the ME3600 controller.
setSyncEclockDefaultConfig set_synce_clock_config_defaults_req
setSyncEclockDefaultConfig commit
Hybrid Model Configuration with a Cisco ASR 9000 Series Router as Grandmaster Clock Source
This section shows the end-to-end configurations to implement the hybrid clocking model in the Cisco
EPN System architecture by using a Symmetricom TimeSource 3600 (TS3600) to provide frequency,
phase and ToD inputs into the Cisco ASR 9000 Series router. The ASR 9000 Series router acts as the
grandmaster clock source for 1588v2 PTP.
In order for the Cisco ASR 9000 Series router to perform as a grandmaster clock, an external timing
source is required to provide a stable reference for frequency, phase, and ToD to the GPS 10 MHz, 1
PPS, and ToD interfaces on the RSP440. For Cisco EPN validation purposes, a Symmetricom TS3600
was used as the reference clock. The signal types from the external reference clock need to map to the
Cisco ASR 9000 Series GPS interface configuration accordingly:
SSM quality mode needs to be ITU-T option 2 generation 2
ToD format is required to be "cisco"
1PPS format needs to be "rs422"
The Cisco ASR 9000 Series router uses these three input signals as a reference source for generating
SyncE and 1588v2 PTP in order to distribute synchronization information to the rest of the Cisco EPN
network. SyncE is enabled across the aggregation, pre-aggregation, and access networks in order to
provide frequency to every hybrid BC node, which uses it to recover the clock on its slave port,
synchronize the internal 1588 servo, and regenerate the clock for downstream clock devices. 1588v2 PTP
is used only to synchronize phase and time-of-day information across the network.
Figure 6-2 Hybrid Model Implementation with the Cisco ASR 9000 Series Grandmaster
Note The ASBR-AGG in this configuration is a Cisco ASR 9000 Series router. The Cisco ASR 9000 Series
router currently implements port-based 1588v2 PTP, which requires PTP messages for both the slave and
master roles to be exchanged on the same port. A future release will add loopback support for PTP on
the Cisco ASR 9000 Series router.
Aggregation Node Configurations for SyncE and 1588v2 PTP Hybrid BC with BMCA
!
interface TenGigE0/0/0/1
description To AGN-K0502::T0/0/0/1
frequency synchronization
selection input
priority 2
!
!
selection input
priority 3
!
!
!*** IGP to reach PAN ***
router isis agg-acc
is-type level-2-only
net 49.0100.1001.1101.0001.00
address-family ipv4 unicast
!
interface TenGigE0/0/0/1
circuit-type level-2-only
bfd minimum-interval 15
bfd multiplier 3
bfd fast-detect ipv4
point-to-point
link-down fast-detect
address-family ipv4 unicast
mpls ldp sync
!
!
!
frequency synchronization
quality itu-t option 2 generation 2
log selection changes
!
!*** Interface Configuration ***
!***GPS Clock-Interface for SyncE frequency source***
!***Repeat for other RSP***
clock-interface sync 2 location 0/RSP0/CPU0
port-parameters
gps-input tod-format cisco pps-input rs422
!
frequency synchronization
selection input
priority 1
wait-to-restore 0
ssm disable
time-of-day-priority 1
quality receive exact itu-t option 2 generation 2 PRS
!
!
interface TenGigE0/0/0/1
description To AGN-K0302::T0/0/0/1
frequency synchronization
selection input
priority 2
!
ipv4 address 10.3.2.3 255.255.255.254
!
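With the configurations above, the router has two candidate frequency sources: the GPS clock interface (selection priority 1) and the SyncE line input on TenGigE0/0/0/1 (selection priority 2). Among valid sources of equal quality, the lower priority number wins. As an illustration outside the CLI examples in this guide, the following Python sketch (function and source names are hypothetical) shows the priority tie-break:

```python
# Minimal sketch of SyncE frequency-source selection, assuming all candidate
# sources are valid and advertise equal quality levels (the real selection
# algorithm also compares QL and signal validity; names are hypothetical).

def select_frequency_source(sources):
    """Return the name of the valid source with the lowest priority number."""
    valid = [s for s in sources if s["valid"]]
    return min(valid, key=lambda s: s["priority"])["name"]

sources = [
    {"name": "GPS sync 2 0/RSP0/CPU0", "priority": 1, "valid": True},
    {"name": "TenGigE0/0/0/1",         "priority": 2, "valid": True},
]
print(select_frequency_source(sources))  # GPS input (priority 1) is chosen
```

If the GPS input fails validation, selection falls back to the next-best source, here the SyncE input on TenGigE0/0/0/1.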
Cisco ASR 903 Series PAN and ASR 901 Series CSG Configuration
The configurations for all other nodes in the network are identical to the configurations shown in Hybrid
Model Configuration with a Third-Party Grandmaster Clock Source, page 6-1. Please refer to that
section for the configuration of these devices.
As described in Time Asymmetry Correction between Boundary Clocks, page 4-3, when a PTP session
is established between two remote PTP nodes, a time-error asymmetry may develop on the path between
them, varying with the type and number of intermediate nodes. The accumulated asymmetry value of the
segments on the PTP path is used for asymmetry correction. The correction is applied at the PTP slave
using the "1pps-out <offset>" command, where <offset> is the expected asymmetry value. If the PTP
slave has multiple paths to the master, the offset is calculated as the average of the expected
asymmetry values of the primary and secondary paths.
Table 6-3 provides the asymmetry value expected between any two nodes on a link and helps in
estimating the expected total asymmetry on a path between two PTP peers. The rows and columns
represent the left-most and right-most nodes of the segment for which asymmetry is being estimated.
Figure 6-3 shows the expected asymmetry values for the ASR 901 with fiber and microwave links (a
microwave segment contributes more asymmetry than a fiber segment).
The following example calculates the expected asymmetry correction value for node ASR901_1, which
has two redundant paths to the same TP5K PTP master clock.
The ASR901_1 node is four hops away from the PTP master on the primary path, and five hops away on
the secondary path. The paths are made up of different combinations of Cisco ASR 9000 and ASR 903
nodes. The time errors calculated for the primary and secondary paths are -1675 and -2475, respectively.
To address this asymmetry, the slave clock should apply a corrective offset calculated as the mean
of the time errors on the two paths.
Primary path: PTP master (BC), four intermediate boundary clocks, PTP slave (BC).
Secondary path: PTP master (BC), five intermediate boundary clocks, PTP slave (BC).
Total asymmetry expected on the secondary path = (-25) + (-800) + (-800) + (-350) + (-150) + (-350) = -2475
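The offset arithmetic above can be checked directly. The following Python sketch uses the two path time errors from this example (-1675 and -2475) and computes the mean value that the slave would configure with "1pps-out <offset>":

```python
# Worked example of the asymmetry correction offset for ASR901_1, using the
# per-path time errors stated in the example above (units as in the source).

primary_error   = -1675  # accumulated time error on the primary path
secondary_error = -2475  # accumulated time error on the secondary path

# The slave applies the mean of the two path errors as the corrective offset.
offset = (primary_error + secondary_error) // 2
print(offset)  # -2075
```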
Note The VRF configuration under the BGP process uses a unique route distinguisher (RD) per MTG. The
unique RD in each MTG, combined with the BGP and VRRP timer adjustments on MTG 1, enables the rest
of the transport infrastructure to optimize MPLS VPN protection via BGP FRR. The RD does not have to
match the route target defined for the MPLS VPN VRF. The need for a unique RD will be eliminated once
support for BGP additional-paths receive is implemented for the BGP VPNv4 address family in IOS,
allowing information for multiple MTGs to be propagated for the MPLS VPN.
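The reason a unique RD preserves both MTG paths is that BGP VPNv4 best-path selection runs per (RD, prefix) NLRI key, so a route reflector collapses same-RD advertisements to one path but forwards both when the RDs differ. A conceptual Python sketch (RD values, prefixes, and the metric tie-break are illustrative only, not taken from this design):

```python
# Sketch of route-reflector behavior for VPNv4 routes: one best path is kept
# per (RD, prefix) key. All RDs, prefixes, and metrics here are hypothetical.

def reflected_paths(advertisements):
    """Keep one best path per (rd, prefix) key, as a route reflector would."""
    best = {}
    for adv in advertisements:
        key = (adv["rd"], adv["prefix"])
        # Simplified tie-break: lowest metric wins (real BGP has more steps).
        if key not in best or adv["metric"] < best[key]["metric"]:
            best[key] = adv
    return list(best.values())

same_rd   = [{"rd": "100:1", "prefix": "115.1.102.0/24", "nh": "MTG1", "metric": 10},
             {"rd": "100:1", "prefix": "115.1.102.0/24", "nh": "MTG2", "metric": 20}]
unique_rd = [{"rd": "100:1", "prefix": "115.1.102.0/24", "nh": "MTG1", "metric": 10},
             {"rd": "100:2", "prefix": "115.1.102.0/24", "nh": "MTG2", "metric": 20}]

print(len(reflected_paths(same_rd)))    # 1 -- the backup MTG path is suppressed
print(len(reflected_paths(unique_rd)))  # 2 -- both MTG paths reach remote PEs
```

With both paths available, remote PEs can pre-install the backup next hop and fail over quickly via BGP FRR.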
VRF Configuration
vrf LTE102
address-family ipv4 unicast
import route-target
10:10
!
export route-target
1001:1001
!
!
address-family ipv6 unicast
import route-target
10:10
!
export route-target
1001:1001
!
!
!
interface TenGigE0/0/0/2.1100
vrf LTE102
ipv4 address 115.1.102.3 255.255.255.0
ipv6 address 2001:115:1:102::3/64
encapsulation dot1q 1100
!
VRRP Configuration
router vrrp
interface TenGigE0/0/0/2.1100
delay minimum 1 reload 240
address-family ipv4
vrrp 110
priority 254
timer msec 100 force
address 115.1.102.1
!
!
address-family ipv6
vrrp 110
priority 254
timer msec 100 force
address global 2001:115:1:102::1
address linklocal autoconfig
!
!
VRF Configuration
vrf LTE102
address-family ipv4 unicast
import route-target
10:10
!
export route-target
1001:1001
!
!
address-family ipv6 unicast
import route-target
10:10
!
export route-target
1001:1001
!
!
!
interface TenGigE0/0/0/2.1100
vrf LTE102
ipv4 address 115.1.102.4 255.255.255.0
ipv6 address 2001:115:1:102::4/64
encapsulation dot1q 1100
!
VRRP Configuration
router vrrp
interface TenGigE0/0/0/2.1100
delay minimum 1 reload 240
address-family ipv4
vrrp 110
priority 253
timer msec 100 force
address 115.1.102.1
!
!
address-family ipv6
vrrp 110
priority 253
timer msec 100 force
address global 2001:115:1:102::1
address linklocal autoconfig
!
!
!
!
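In the two VRRP configurations above, the first MTG is configured with priority 254 and the second with priority 253, so the first becomes master for the virtual address 115.1.102.1. The "timer msec 100 force" setting also bounds failover time: per the RFC 3768 formula, the backup declares the master down after three advertisement intervals plus a priority-derived skew time. A Python sketch of that calculation:

```python
# Sketch of the VRRP master-down interval per RFC 3768:
# Master_Down_Interval = 3 * Advertisement_Interval + Skew_Time,
# where Skew_Time = (256 - priority) / 256 seconds.

def master_down_interval(adv_interval_s, priority):
    skew = (256 - priority) / 256.0
    return 3 * adv_interval_s + skew

# With "timer msec 100 force" and the backup MTG priority of 253 above:
print(round(master_down_interval(0.1, 253), 4))  # 0.3117 seconds
```

So the backup MTG takes over roughly 312 ms after the last advertisement from the master, consistent with the fast-convergence intent of the 100 ms timer.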
AGN-9006-K0302
ethernet ring g8032 profile ring_profile
timer wtr 10
timer guard 100
timer hold-off 0
!
l2vpn
ethernet ring g8032 EPNRING
port0 interface TenGigE0/2/1/3
!
port1 none
open-ring
instance 1
profile ring_profile
inclusion-list vlan-ids 99,300-350,604
aps-channel
port0 interface TenGigE0/2/1/3.99
port1 none
!
!
instance 2
profile ring_profile
rpl port0 owner
inclusion-list vlan-ids 199,351-400,651
aps-channel
port0 interface TenGigE0/2/1/3.199
port1 none
!
CSG-K0305/CSG-K0306/CSG-K0307/CSG-K0308
ethernet ring g8032 profile ring_profile
timer wtr 10
timer guard 100
!
ethernet ring g8032 CERING
open-ring
port0 interface TenGigabitEthernet0/0
port1 interface TenGigabitEthernet0/1
instance 1
profile ring_profile
inclusion-list vlan-ids 64,99,300-350,604
aps-channel
port0 service instance 99
port1 service instance 99
!
!
instance 2
profile ring_profile
inclusion-list vlan-ids 199,351-400,651
aps-channel
port0 service instance 199
port1 service instance 199
!
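The two ring instances above protect different VLAN sets, declared with the "inclusion-list vlan-ids" syntax. The lists of the two instances must not overlap, since each VLAN can belong to only one protection instance. As an illustration outside the CLI examples in this guide, a small Python helper (hypothetical, not a Cisco tool) can expand the syntax and verify disjointness:

```python
# Helper sketch: expand the "inclusion-list vlan-ids" syntax used above and
# verify that the two G.8032 ring instances protect disjoint VLAN sets.

def expand_vlan_list(spec):
    """Expand e.g. '99,300-350,604' into a set of VLAN IDs."""
    vlans = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = map(int, part.split("-"))
            vlans.update(range(lo, hi + 1))
        else:
            vlans.add(int(part))
    return vlans

instance1 = expand_vlan_list("64,99,300-350,604")   # instance 1 inclusion list
instance2 = expand_vlan_list("199,351-400,651")     # instance 2 inclusion list

print(len(instance1), len(instance2))   # 54 52
print(instance1.isdisjoint(instance2))  # True -- no VLAN is protected twice
```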
VRRP
VRRP is used as the gateway redundancy protocol on the AGN nodes to handle node failure scenarios at
the AGN. A routed PW is implemented to carry VRRP messages between the BVI interfaces of the AGNs.
The VRRP implementation is shown below.
AGN-9006-K0302
!*** VRRP configuration****
router vrrp
interface BVI302
address-family ipv4
vrrp 2
priority 253
preempt delay 600
address 30.2.1.1
bfd fast-detect peer ipv4 30.2.1.3
!
!
address-family ipv6
vrrp 2
priority 253
preempt delay 600
bfd fast-detect peer ipv6 2001:13:2:102::3
address global 2001:13:2:102::1
address linklocal autoconfig
!
!
!
!***BVI Interface configuration***
interface BVI302
vrf LTE224
ipv4 address 30.2.1.2 255.255.255.0
ipv6 nd dad attempts 0
ipv6 address 2001:13:2:102::2/64
!
AGN-9006-K0301
!*** VRRP configuration****
router vrrp
interface BVI302
address-family ipv4
vrrp 2
priority 252
preempt delay 600
address 30.2.1.1
bfd fast-detect peer ipv4 30.2.1.3
!
!
address-family ipv6
vrrp 2
priority 252
preempt delay 600
bfd fast-detect peer ipv6 2001:13:2:102::3
address global 2001:13:2:102::1
address linklocal autoconfig
!
!
!
!***BVI Interface configuration***
interface BVI302
vrf LTE224
ipv4 address 30.2.1.1 255.255.255.0
ipv6 nd dad attempts 0
ipv6 address 2001:13:2:102::1/64
!
TDM Services
Figure 6-5 and the example that follows illustrate TDM services.
(Figure 6-5: a TDM BTS connects through the CSG and PAN; primary and backup TDM pseudowires
terminate on redundant MTGs running MR-APS toward the BSC.)
Note The only difference between CESoPSN and SAToP configuration is the lack of "control-word" in the
pseudowire-class for SAToP configs.
pseudowire-class CESoPSN
encapsulation mpls
control-word
!
!
interface CEM0/0
cem 0
xconnect 100.111.15.1 13261501 encapsulation mpls pw-class CESoPSN
backup peer 100.111.15.2 13261502 pw-class CESoPSN
!
interface Loopback0
ip address 100.111.13.26 255.255.255.255
!
mode vt15-t1
delay trigger 250
!
clock source line
!
controller T1 0/2/1/0/1/2/2
cem-group framed 0 timeslots 1-24
forward-alarm AIS
forward-alarm RAI
!
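The "cem-group framed 0 timeslots 1-24" statement above bundles all 24 DS0 timeslots of the framed T1 into one CEM circuit. Since each timeslot is a 64 kbps DS0, the CESoPSN payload rate follows directly; a quick Python check (function name is illustrative):

```python
# Sketch: payload rate of the CESoPSN circuit configured above
# ("cem-group framed 0 timeslots 1-24" on a T1 controller).

DS0_KBPS = 64  # each timeslot carries one 64 kbps DS0

def cesopsn_payload_kbps(timeslots):
    return len(timeslots) * DS0_KBPS

timeslots = range(1, 25)  # timeslots 1-24, i.e. a full framed T1
print(cesopsn_payload_kbps(timeslots))  # 1536 kbps of TDM payload
```

Note that the bandwidth consumed on the packet network is higher than the payload rate because of the MPLS, pseudowire control-word, and transport encapsulation overhead.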
interface Loopback0
description Global Loopback
ipv4 address 100.111.15.1 255.255.255.255
!
l2vpn
!
pw-class CESoPSN
encapsulation mpls
control-word
!
!
xconnect group TDM-K1326
p2p T1-CESoPSN-01
interface CEM0/2/1/0/1/2/2:0
neighbor ipv4 100.111.13.26 pw-id 13261501
pw-class CESoPSN
!
!
!
!
encapsulation mpls
control-word
!
!
xconnect group TDM-K1326
p2p T1-CESoPSN-01
interface CEM0/2/1/0/1/2/2:0
neighbor ipv4 100.111.13.26 pw-id 13261502
pw-class CESoPSN
!
!
!
!
ATM Services
Figure 6-6 and the example that follows illustrate ATM PW redundancy for a clear-channel
implementation. The same configurations would be used for an IMA implementation as well, just using
ATM IMA interfaces instead.
(Figure 6-6: an ATM NodeB connects through the PAN; primary and backup ATM pseudowires terminate
on redundant MTGs running MR-APS toward the ATM RNC.)
This chapter, which discusses the QoS implementation for mobile services, includes the following major
topic:
CSG QoS Configuration, page 7-2
The aggregate QoS policies implemented on the NNIs of the transport network are covered in the QoS
chapter of the EPN 4.0 Transport Infrastructure Design and Implementation Guide. This chapter covers
the QoS policies implemented on the UNIs of the CSG and MTG, covering the service-level QoS for
TDM, ATM, and L3VPN services.
Note IEEE 1588v2 PTP traffic is marked with a DSCP of 46 by the grandmaster and thus receives EF PHB
treatment across the transport network.
QoS policy enforcement is accomplished with H-QoS policies with parent shaping and child queuing on
the UNIs. The classification criteria used to implement the DiffServ PHBs is described in detail in the
EPN 4.0 Transport Infrastructure Design and Implementation Guide and summarized in Figure 7-1.
(Figure 7-1 summary: residential voice, business real-time, and network sync (1588 PTP) traffic use
EF (DSCP 46, EXP 5, mapped to ATM CBR); mobility and signaling and mobile conversational/streaming
traffic, residential TV and video distribution (AF, DSCP 32, EXP 4), and business telepresence (AF,
DSCP 24, EXP 3) use AF classes; business-critical traffic uses AF with in-contract DSCP 16/EXP 2 and
out-of-contract DSCP 8/EXP 1, mapped to ATM VBR-nrt; residential HSI, business best effort, and
mobile background traffic use BE (DSCP 0, EXP 0, mapped to ATM UBR).)
(Figure 7-2: QoS policy enforcement points along the path from the eNodeB through the CSG, the
fiber, microwave, and G.8032 access networks, pre-aggregation, aggregation, and core ASBRs, to the
MTG and SGW/MME.)
Figure 7-2 shows the following elements, which are covered in this section:
(a) H-QoS policy map on CSG UNIs
(b) H-QoS policy map on pre-aggregation NNI connecting the microwave access network
(c) Flat QoS policy map on ingress for ATM and TDM UNIs
(E) H-QoS policy map on CSG UNIs for the G8032 access network
(F) H-QoS policy map on pre-aggregation NNI connecting the G8032 access network
Note The policer and shaper rates in these examples are shown only to illustrate how policers and
shapers are configured. The values deployed in a production network should be modeled on the actual
traffic rates that will be encountered.
Class Maps
In MPLS Access, QoS classification at the UNI in the ingress direction for upstream traffic is based on
IP DSCP, and the marking is done by the connected eNodeB.
!***Network management traffic***
class-map match-any CMAP-NMgmt-DSCP
match dscp cs7
!
!***Voice/Real-Time traffic***
class-map match-all CMAP-RT-DSCP
match dscp ef
!
!***Broadcast Video traffic***
class-map match-any CMAP-Video-DSCP
match dscp cs4
In Non-MPLS Access, QoS classification at the UNI in the ingress direction for upstream traffic is based
on CoS, and the marking is done by the connected eNodeB.
!***Broadcast Video traffic***
class-map match-any CMAP-VIDEO-COS
match cos 4
!***Network management traffic***
QoS classification at the UNI in the egress direction for downstream traffic is based on QoS groups,
and the QoS group mapping is done at the ingress NNI.
QoS classification at the NNI in the egress direction is based on QoS groups, and:
QoS group mapping for upstream traffic is done at the ingress UNI.
QoS group mapping for traffic transiting the access ring is done at the ingress NNI.
!***Network management traffic***
class-map match-any CMAP-NMgmt-GRP
match qos-group 7
!
!***Network control traffic***
class-map match-any CMAP-CTRL-GRP
match qos-group 6
!
!***Voice/Real-Time traffic***
class-map match-all CMAP-RT-GRP
match qos-group 5
!
!***Broadcast Video traffic***
class-map match-any CMAP-Video-GRP
match qos-group 4
In MPLS Access
!***Interface connecting eNodeB.***
interface GigabitEthernet0/2
service-policy output PMAP-eNB-UNI-P-E
service-policy input PMAP-eNB-UNI-I
!
policy-map PMAP-eNB-UNI-I
class CMAP-RT-DSCP
police cir 20000000
set qos-group 5
class CMAP-NMgmt-DSCP
police 5000000
set qos-group 7
class CMAP-Video-DSCP
police 100000000
set qos-group 3
class class-default
police 200000000
!
policy-map PMAP-eNB-UNI-P-E
class class-default
In Non-MPLS Access
!***Interface connecting eNodeB.***
interface GigabitEthernet0/2
service-policy output PMAP-eNB-UNI-P-E
service-policy input PMAP-eNB-UNI-I
!
policy-map PMAP-eNB-UNI-I
!*** Ingress policy for UNI Port ***
class CMAP-RT-COS
police cir 20000000
set qos-group 5
class CMAP-NMgmt-COS
police 5000000
set qos-group 7
class CMAP-Video-COS
police 100000000
set qos-group 3
class class-default
police 200000000
!
policy-map PMAP-eNB-UNI-P-E
!*** Egress Policy for eNB UNI Port ***
class class-default
shape average 425000000
service-policy PMAP-eNB-UNI-C-E
!
policy-map PMAP-eNB-UNI-C-E
class CMAP-RT-GRP
priority percent 5
class CMAP-NMgmt-GRP
bandwidth percent 1
class CMAP-HVideo-GRP
bandwidth percent 25
class class-default
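In the H-QoS hierarchy above, the child-policy percentages are taken from the parent shaper rate rather than the physical port speed: the parent class-default shapes to 425 Mbps, and the child classes receive percentages of that rate. The implied absolute rates can be computed directly (class names are from the child policy above; the arithmetic is illustrative):

```python
# Sketch: absolute per-class rates implied by the H-QoS policy above, where
# the child percentages apply to the 425 Mbps parent shaper, not line rate.

PARENT_SHAPE_BPS = 425_000_000  # "shape average 425000000" in the parent

child_percent = {            # from PMAP-eNB-UNI-C-E
    "CMAP-RT-GRP":      5,   # priority (LLQ) class
    "CMAP-NMgmt-GRP":   1,
    "CMAP-HVideo-GRP": 25,
}

child_bps = {c: PARENT_SHAPE_BPS * p // 100 for c, p in child_percent.items()}
print(child_bps["CMAP-RT-GRP"])  # 21250000 -- 21.25 Mbps of priority traffic
```

Raising or lowering the parent shaper therefore rescales every child class proportionally, which is the main operational convenience of the two-level policy.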
Ingress and egress policies are configured on all NNI ports that form the G.8032 ring.
service instance 302 ethernet
encapsulation dot1q 302
!*** VLAN 302 used for eNodeB service ***
rewrite ingress tag pop 1 symmetric
bridge-domain 302
!
policy-map PMAP-NNI-E
!*** Egress policy on NNI PORT ***
class CMAP-RT-GRP
priority percent 20
class CMAP-BC-GRP
bandwidth percent 5
class CMAP-BC-Tele-GRP
bandwidth percent 10
class CMAP-NMgmt-GRP
bandwidth percent 5
class CMAP-CTRL-GRP
bandwidth percent 2
class CMAP-Video-GRP
bandwidth percent 20
class class-default
!
!
policy-map PMAP-NNI-I
!*** Ingress Policy on NNI Port ***
class CMAP-BC-COS
set qos-group 2
class CMAP-RT-COS
set qos-group 5
police rate 1000000
class CMAP-BC-Tele-COS
set qos-group 3
!
policy-map PMAP-eNB-UNI-P-E
class class-default
service-policy PMAP-eNB-UNI-C-E
shape average 100000000 bps
!
end-policy-map
interface CEM0/0
service-policy input PMAP-TDM-UNI-I
policy-map PMAP-UNI-E
class CMAP-RT-DSCP
priority level 1
police rate 10000 kbps
!
!
class CMAP-NMgmt-DSCP
bandwidth 50000 kbps
!
class CMAP-HVideo-DSCP
bandwidth 200000 kbps
!
class class-default
!
end-policy-map
!
policy-map PMAP-UNI-I
class CMAP-RT-DSCP
priority level 1
police rate 10000 kbps
!
set mpls experimental imposition 5
!
class CMAP-NMgmt-DSCP
bandwidth 50000 kbps
!
set mpls experimental imposition 7
!
class CMAP-HVideo-DSCP
bandwidth 200000 kbps
!
set mpls experimental imposition 3
!
class class-default
!
end-policy-map
!
The only ingress marking to match is the ATM CLP bit on ATM UNIs, which indicates a discard
preference for marked cells within a particular ATM CoS. This can be utilized to offer a bursting
capability in a particular ATM CoS.
Two ATM policy maps are shown. The first corresponds to an ATM VBR-rt service where cells are
marked with a CLP of 1 above a certain cell rate. The second corresponds to an ATM UBR service, again
where cells are marked with a CLP of 1 above a certain cell rate. The proper map is applied to an ATM
PVC which corresponds to the ATM CoS carried on that PVC.
policy-map PMAP-ATM-UNI-I
class CMAP-ATM-CLP0-UNI-I
set mpls experimental imposition 5
!
class CMAP-ATM-CLP1-UNI-I
set mpls experimental imposition 4
!
class class-default
!
end-policy-map
!
policy-map PMAP-ATM-UNI-DATA-I
class CMAP-ATM-CLP0-UNI-I
set mpls experimental imposition 4
!
class CMAP-ATM-CLP1-UNI-I
This chapter, which describes the operations, administration, and maintenance (OAM) and performance
management (PM) implementation for mobile RAN service transport in the Cisco EPN System, includes
the following major topics:
Service OAM Implementation for LTE and 3G IP UMTS RAN Transport with MPLS Access,
page 8-2
Service OAM Implementation for ATM and TDM Circuit Emulation Pseudowires for 2G and 3G
RAN Transport, page 8-2
Transport OAM, page 8-3
IP SLA Configuration, page 8-3
Figure 8-1 depicts OAM implementation for Mobile RAN service transport.
(Figure 8-1: IP SLA probes running between service VRFs provide service OAM and performance
management for LTE and 3G IP UMTS transport, while MPLS LSP OAM over the end-to-end unified MPLS LSP
provides transport OAM.)
Transport OAM
The MPLS transport for mobile backhaul is based on the unified MPLS approach, which allows
end-to-end LSPs to be built between the RAN access domain or PAN and the centralized MTG in the MPC.
The following OAM operations are available:
IP ping and traceroute operations for verifying the data plane against the control plane and for
isolating faults within the inter-domain LSP in the MPLS/IP network between the CSGs or PANs and the
MTG.
MPLS LSP ping and traceroute operations for verifying the data plane against the control plane and
for isolating faults within the intra-domain LSPs in the access, aggregation, and core domains.
MPLS LSP ping and traceroute operations for inter-domain LSPs will be supported in a future release.
IP SLA Configuration
Note The ToS values in IOS-XR are equal to four times the desired DSCP value. Please refer to Chapter 7,
Quality of Service Implementation, for LTE QCI to DSCP mapping.
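The factor of four in the note above comes from the ToS byte layout: the six DSCP bits occupy the high-order bits of the byte, so the ToS value is the DSCP value shifted left by two bits. A one-line Python check:

```python
# Sketch of the Note above: the ToS byte carries DSCP in its six high-order
# bits, so ToS = DSCP * 4 (equivalently, DSCP << 2).

def dscp_to_tos(dscp):
    return dscp << 2

print(dscp_to_tos(46))  # 184 -- the ToS value to configure for DSCP EF (46)
```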
Reaction Configuration
The reaction configuration defines the thresholds for the previously configured probes, and it defines the
corresponding actions to be taken when those thresholds are exceeded. The following configuration
shows a single example of a jitter probe reaction and an echo probe reaction.
ipsla
reaction operation 6
react connection-loss
action logging
action trigger
threshold type immediate
!
react jitter-average dest-to-source
action logging
action trigger
threshold type immediate
threshold lower-limit 10 upper-limit 15
!
react jitter-average source-to-dest
action logging
action trigger
threshold type immediate
threshold lower-limit 10 upper-limit 15
!
react packet-loss dest-to-source
action logging
action trigger
threshold type immediate
threshold lower-limit 3 upper-limit 5
!
!
!
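The reaction thresholds above work as a hysteresis band: with "threshold type immediate", the reaction triggers as soon as a sample exceeds the upper limit (15 ms for the jitter reacts), and the lower limit (10 ms) determines when the condition clears. A simplified conceptual sketch of that behavior (not the exact IOS-XR state machine; the function is illustrative):

```python
# Conceptual sketch of the jitter reaction configured above: "threshold type
# immediate" with lower-limit 10 and upper-limit 15. A sample above the upper
# limit triggers the reaction; a sample below the lower limit clears it.

def react(samples, lower=10, upper=15):
    """Return the triggered/cleared state after each jitter sample (ms)."""
    triggered, states = False, []
    for value in samples:
        if value > upper:
            triggered = True
        elif value < lower:
            triggered = False
        states.append(triggered)
    return states

print(react([8, 12, 16, 12, 9]))  # [False, False, True, True, False]
```

Note how the 12 ms sample after the 16 ms spike keeps the reaction raised: values inside the band preserve the previous state, which prevents flapping around a single threshold.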
The EPN 4.0 MEF Transport Services Design and Implementation Guide is part of a set of resources that
comprise the Cisco EPN System documentation suite. The resources include:
EPN 4.0 System Concept Guide: Provides general information about Cisco's EPN 4.0 System
architecture, its components, service models, and the functional considerations, with specific focus
on the benefits it provides to operators.
EPN 4.0 System Brochure: At-a-glance brochure of the Cisco Evolved Programmable Network
(EPN).
EPN 4.0 Transport Infrastructure Design and Implementation Guide: Design and Implementation
guide with configurations for the transport models and cross-service functional components
supported by the Cisco EPN System concept.
EPN 4.0 Residential Services Design and Implementation Guide: Design and Implementation guide
with configurations for deploying the consumer service models and the unified experience use cases
supported by the Cisco EPN System concept.
EPN 4.0 Enterprise Services Design and Implementation Guide: Design and Implementation guide
with configurations for deploying the enterprise L3VPN service models over any access and the
personalized use cases supported by the Cisco EPN System concept.
EPN 4.0 MEF Transport Services Design and Implementation Guide: Design and Implementation
guide with configurations for deploying the Metro Ethernet Forum service transport models and use
cases supported by the Cisco EPN System concept.
Note All of the documents listed above, with the exception of the System Concept Guide and System
Brochure, are considered Cisco Confidential documents. Copies of these documents may be obtained
under a current Non-Disclosure Agreement with Cisco. Please contact a Cisco Sales account team
representative for more information about acquiring copies of these documents.