Legal notice
Alcatel, Lucent, Alcatel-Lucent and the Alcatel-Lucent logo are trademarks of Alcatel-Lucent. All other
trademarks are the property of their respective owners.
The information presented is subject to change without notice. Alcatel-Lucent assumes no responsibility for
inaccuracies contained herein.
Contains proprietary/trade secret information which is the property of Alcatel-Lucent and must not be made
available to, or copied or used by anyone outside Alcatel-Lucent without its written authorization.
CONTENTS
1 INTRODUCTION ................................................................................................................................ 14
1.1 OBJECT ........................................................................................................................................ 14
1.2 SCOPE OF THIS DOCUMENT ........................................................................................................... 14
1.3 AUDIENCE FOR THIS DOCUMENT .................................................................................................... 15
1.4 NOMENCLATURE ........................................................................................................................... 15
1.5 REASONS FOR REISSUE ................................................................................................................ 15
1.6 RELATED DOCUMENTS .................................................................................................................. 17
1.6.1 Reference Documents........................................................................................................ 17
1.6.2 Customer Documents......................................................................................................... 17
1.6.3 Internal Reference Documents........................................................................................... 18
2 LTE OVERVIEW................................................................................................................................. 19
2.1 LTE NETWORK OVERVIEW ............................................................................................................ 19
2.1.1 Requirements and Targets for LTE .................................................................................... 19
2.1.1.1 Air Interface Evolution ........................................................................ 19
2.1.1.2 Core Network Architecture Evolution ...................................................... 21
2.1.2 LTE Network Architecture Overview .................................................................................. 23
2.1.3 LTE Functions & Interfaces - eUTRAN .............................................................................. 25
2.1.3.1 User Equipment (UE) .......................................................................... 25
2.1.3.2 LTE Radio Interface ........................................................................... 26
2.1.3.2.1 LTE FDD Radio Interface .................................................................. 26
2.1.3.2.1.1 FDD DL Physical Resource Block Structure ....................................... 26
2.1.3.2.1.2 FDD UL Physical Resource Block Structure ....................................... 27
2.1.3.2.2 Capacity vs. Coverage ..................................................................... 30
2.1.3.3 E-UTRAN Node B (eNodeB) ................................................................... 30
2.1.3.4 eUTRAN Interfaces ............................................................................ 32
2.1.4 LTE Functions & Interfaces - ePC ...................................................................................... 35
2.1.4.1 MME .............................................................................................. 35
2.1.4.2 HSS ............................................................................................... 38
2.1.4.3 AAA ............................................................................................... 38
2.1.4.4 S-GW ............................................................................................. 39
2.1.4.5 P-GW ............................................................................................. 40
2.1.4.6 PCRF ............................................................................................. 41
2.1.4.7 MBMS-GW ........................................................................................ 42
2.1.4.8 ePC Interfaces .................................................................................. 44
2.2 LTE NETWORK CAPACITY AND DIMENSIONING OVERVIEW............................................................... 47
2.2.1 Initial Engineering and Dimensioning ................................................................................. 47
2.2.1.1 LTE Call Model Elements ..................................................................... 48
2.2.1.2 Paging and TAU Load .......................................................................... 54
2.2.2 Capacity Monitoring and Troubleshooting.......................................................................... 58
4 LTE INTERFACE MONITORING: TROUBLESHOOTING & CAPACITY GROWTH ACTIONS .... 262
4.1 CAPACITY MONITORING & TROUBLESHOOTING PRINCIPLE ............................................................ 263
4.2 AIR INTERFACE CAPACITY MONITORING ....................................................................................... 265
4.2.1 Critical Triggers for Air Interface Capacity Evaluation ..................................................... 265
4.2.2 Air Interface Monitoring Method ....................................................................................... 277
4.2.2.1 PRB Consumption ............................................................................. 278
4.2.2.2 Average Throughput .......................................................................... 278
4.2.2.2.1 Average aggregate cell Throughput .................................................... 278
4.2.2.2.2 Average User Experienced Throughput ................................................ 279
4.2.2.3 Average Number of Subscribers ............................................................ 280
4.3 ENB CAPACITY MONITORING ....................................................................................................... 281
4.3.1 eNB Capacity Limits ......................................................................................................... 281
4.3.2 Critical Triggers for eNB Capacity Evaluation .................................................................. 281
4.3.2.1 Monitoring Indicators for User Plane Blocking Evaluation ............................. 281
4.3.2.2 Monitoring Indicators for User Plane Load Evaluation ................................. 288
4.3.3 eNB Monitoring Method .................................................................................................... 292
4.3.3.1 Number of Users per Cell/Modem Connections Monitoring & Troubleshooting ..... 292
4.3.3.2 Number of Bearers per Cell/Modem Monitoring & Troubleshooting ................. 295
4.3.4 Other eNB Capacity Monitoring Aspects .......................................................................... 298
4.3.4.1 eNB Control Plane overload duration ..................................................... 298
4.4 MME CAPACITY MONITORING ..................................................................................................... 306
4.4.1 Maximum Number of Simultaneous Attached Users ....................................................... 307
4.4.1.1 Counters and Indicators ..................................................................... 307
4.4.1.2 Monitoring Method & Critical Triggers .................................................... 307
4.4.2 Maximum Number of Registered Users ........................................................................... 310
4.4.2.1 Counters and Indicators ..................................................................... 310
4.4.2.2 Monitoring Method & Critical Triggers .................................................... 310
4.4.3 External Message Throughput ......................................................................................... 313
4.4.3.1 Counters and Indicators ..................................................................... 313
4.4.3.1.1 MAF Message Counters and Indicators ................................................. 313
4.4.3.1.2 CPU Usage Counters and Indicators .................................................... 313
4.4.3.1.3 Overload Counters and Indicators ...................................................... 314
4.4.3.1.4 Paging Counters and Indicators ......................................................... 318
4.4.3.2 Monitoring Method & Critical Triggers .................................................... 323
4.4.3.2.1 MAF Message Monitoring ................................................................. 324
4.4.3.2.2 CPU Usage Monitoring .................................................................... 326
4.4.3.2.3 Overload monitoring ...................................................................... 332
4.4.3.2.4 MME Paging Monitoring (Future) ........................................................ 332
4.5 EUTRAN INTERFACES CAPACITY MONITORING ............................................................................ 333
4.5.1 eUTRAN interfaces monitoring method ........................................................................... 333
4.5.2 eUTRAN interface critical trigger counters ....................................................................... 336
4.5.2.1 S1 Throughput Monitoring ................................................................... 337
4.5.2.2 X2 Throughput Monitoring ............................................................... 341
4.5.2.3 M1 Throughput Monitoring ............................................................... 345
4.5.2.4 OAM Throughput Monitoring ............................................................. 346
4.5.2.5 Link Utilization Monitoring .................................................................. 348
4.5.2.6 Vlan Throughput Monitoring ................................................................ 352
4.6 MOBILE GATEWAY (S-GW, P-GW) CAPACITY MONITORING .......................................................... 355
4.6.1 S-GW / P-GW critical triggers........................................................................................... 355
4.6.1.1 P-GW critical triggers ........................................................................ 356
4.6.1.2 S-GW critical trigger counters .............................................................. 366
4.6.2 S-GW / P-GW additional counters ................................................................................... 388
4.6.2.1 P-GW additional counters ................................................................... 388
4.6.2.2 S-GW additional counters ................................................................... 389
LIST OF FIGURES
LIST OF TABLES
Table 1-1: Alcatel Lucent LTE Releases Mapping ........................................................................................... 14
Table 2-1: 3GPP defined LTE UE categories ................................................................................................... 25
Table 2-2: LTE Bandwidth vs. Physical Resource Blocks (UL & DL) ................................................................ 29
Table 2-3: Call Model Parameters – Aggregate Values ................................................................................... 50
Table 2-4: General and User Plane Parameters by Call Type ........................................................................ 52
Table 2-5: Subscriber Loading and Data Usage Profile ................................................................................... 53
Table 2-6: Detailed Data Call Model for the Example Traffic Model ............................................................. 54
Table 2-7: Cumulative Page Success Rate ..................................................................................................... 56
Table 3-1: LTE Uplink Air Interface Capacities ............................................................................................... 64
Table 3-2: LTE Downlink Air Interface Capacities .......................................................................................... 65
Table 3-3: Capacity Sectorization Factor ....................................................................................................... 66
Table 3-4: Call Model to Air Interface Dimensioning Elements Conversion Table ......................................... 72
Table 3-5: Recommended Power Amplifier Sizing .......................................................................................... 74
Table 3-6: Number of Digital Boards per 9926 BBU ........................................................................................ 86
Table 3-7: LR14.1.L eNB Max Capacity Figures - eCCM .................................................................................. 92
Table 3-8: LR14.1.L eNB Max Capacity Figures – eCCM2................................................................. 93
Table 3-9: CA Bands and Bandwidths Supported in LR14.1.L ....................................................................... 101
Table 3-10: Call Model to eNB Dimensioning Elements Conversion Table ................................................... 104
Table 3-11: eNB Resources and Associated Token Units .............................................................................. 115
Table 3-12: example of VoLTE SVC Token Calculation ................................................................................. 117
Table 3-13: Total eNodeB OAM Operational Bandwidth .......................................................... 128
Table 3-14: Total MME OAM Operational Bandwidth .................................................................................... 128
Table 3-15: eNodeB Bandwidth for OAM Maintenance ................................................................................. 130
Table 3-16: OAM transport encapsulation..................................................................................................... 131
Table 3-17: IP Headers .................................................................................................................................. 132
Table 3-18: Transport + Application Headers ............................................................................................... 132
Table 3-19: PM Bandwidth Requirement / eNB ........................................................................................... 134
Table 3-20: PCMD bandwidth requirement over S1-MME.............................................................................. 135
Table 3-21: PCMD Bandwidth requirement over MME OAM Interface ........................................ 135
Table 3-22: Software size .............................................................................................................................. 135
Table 3-23: Software Application Upgrade bandwidth requirement / eNB .................................. 136
Table 3-24: Call Trace Bandwidth requirement / eNB ................................................................................. 136
Table 3-25: DDT Bandwidth requirement / eNB ........................................................................................... 137
Table 3-26: Board Restart Bandwidth Requirement / eNB ........................................................................... 138
Table 3-27: eNodeB Application Restart Context Bandwidth Requirement ................................................. 138
Table 3-28: L3 Snapshot Bandwidth Requirement / eNB ............................................................................. 138
Table 3-29: On-Demand Snapshot Bandwidth Requirement / eNB .............................................................. 138
Table 3-30: OAM SAM Bandwidth Dimensioning with ePC and Transport Network Elements ...................... 139
Table 3-31: OAM SAM Bandwidth Dimensioning Elements ............................................................................ 140
Table 3-32: MME WM8.0.0 Capacity Figures (for Maximum Configuration) ................................................. 144
Table 3-33: Call Model Parameters for MME Dimensioning .......................................................................... 147
Table 3-34: WMM WM8.0.0 Message Load Estimates .................................................................................... 148
Table 3-35: S1-U Transport Headers and Size details .................................................................................. 157
Table 3-36: Aggregate S1-U Transport Header Size details.......................................................................... 158
Table 3-37: Aggregate S1-MME Transport Headers and Size details ............................................................ 159
Table 3-38: Aggregate S1-MME Transport Header Size ................................................................................. 159
Table 3-39: Attach Procedure ....................................................................................................................... 166
Table 3-40: Detach Procedure ...................................................................................................................... 167
Table 3-41: Connection Request Procedure ................................................................................................. 167
Table 3-42: Inter-eNB (X2) Handover Procedure .......................................................................................... 167
Table 3-43: Inter-eNB (S1) Handover Procedure .......................................................................................... 167
Table 3-44: Inter-RAT Handover Procedure .................................................................................................. 168
Table 3-45: Paging eNB Procedure ................................................................................................................ 168
Table 3-46: UE Requested PDN Connectivity Procedure .............................................................................. 168
1 INTRODUCTION
1.1 OBJECT
The objective of the LTE Network Capacity Monitoring and Engineering (LNCME) document is to
provide an engineering view of Alcatel-Lucent LTE interface dimensioning and monitoring.
1.2 SCOPE OF THIS DOCUMENT
LR14.1.L is an LTE FDD-only deployment; TDD is not supported in LR14.1.L. This document covers
the LTE Network Elements and interfaces within the LTE wireless scope, which includes the UE, air
interface, eNodeB, MME, S-GW and P-GW.
The interfaces with EPC network elements like PCRF and HSS-AAA will be discussed briefly in this
document.
The interfaces with other external network elements (such as IMS, other Application) will not be
discussed in this document.
1.3 AUDIENCE FOR THIS DOCUMENT
The intended audience of this document is engineers who work with the Alcatel-Lucent LTE network
and need to know how to plan and expand an LTE network using network statistics.
1.4 NOMENCLATURE
Engineering rules (which are mandatory to follow) are presented as follows:
Rule:
Restriction:
Engineering Recommendation:
Differences between Release n and Release n-1 are presented as follows:
LAn-1-LAn Delta:
1.5 REASONS FOR REISSUE
Individual network element descriptions were updated to include the latest capacity
improvements available in the end-to-end LR14.1.L solution.
The example call model described in Chapter 2 and all associated illustrations throughout the
document were updated to correlate more closely with what is seen in the field.
The template used to define critical triggers in Chapter 4 was updated for usability and readability.
1.6 RELATED DOCUMENTS
1.6.1 Reference Documents
[R1] 3GPP TS 36.306 (840), "Evolved Universal Terrestrial Radio Access (E-UTRA); User Equipment
(UE) radio access capabilities"
1.6.2 Customer Documents
Unless otherwise noted, the following documents are available from the documentation area of the
Alcatel-Lucent Online Customer Support (OLCS) website:
[C5] LTE/DCL/APP/031092, 9471 Wireless Mobility Manager Model Offer Provisioning Guide
[C6] LTE/DCL/APP/031094, 9471 Wireless Mobility Manager LTE Parameters User Guide
[C7] LTE/DCL/APP/034072, LTE Transport Engineering Guide for LR13.3 and WMM7
[C8] Equivalent Capacity and its Application to Bandwidth Allocation in High-Speed Networks (R.
Guérin, H. Ahmadi, M. Naghshineh), IEEE login required
[C9] LTE Dimensioning Guidelines – Outdoor Link Budget (Alcatel-Lucent External documentation)
[C11] Alcatel-Lucent 7750 Service Router Mobile Gateway Integrated Services Module (MG-ISM) –
Data Sheet
[C12] Alcatel-Lucent 7750 Service Router – Mobile Gateway Release 6.0 – Data Sheet
[C13] Alcatel-Lucent 7750 Service Router Mobile Gateway Integrated Services Module – Data Sheet
[C14] 93-0525-01-01, 7750 SR OS MG Basic System Configuration Guide, 6.0
[C20] 9YZ-06174-0206-RKZZA , 9959 Network Performance Optimizer MME and MI Indicators ROI
Reference Guide, LR14.1.L
[C21] 9YZ-06174-0207-RKZZA , 9959 Network Performance Optimizer S-GW and P-GW Indicators
ROI Reference Guide, LR14.1.L
[C22] 9YZ-06174-0208-RKZZA , 9959 Network Performance Optimizer PCMD and NUART Counters
and Indicators ROI Reference Guide, LR14.1.L
[C23] 9YZ-06174-0211-RKZZA, 9959 Network Performance Optimizer S-GW, and P-GW Indicators
Reference Guide, LR14.1.L
[C24] 9YZ-06010-0006-RKZZA, 9471 WMM MME, SGSN and SRS Applications Observation Counters
Reference Guide, WM8.0.0
2 LTE OVERVIEW
This chapter provides a high-level overview of the LTE system architecture, functions and
interfaces.
The work towards 3GPP Long Term Evolution (LTE) started in 2004 with the definition of the LTE
targets. Following these targets, LTE was expected to deliver superior performance compared to
existing 3GPP networks based on High Speed Packet Access (HSPA) technology.
The LTE performance targets in 3GPP (defined relative to HSPA Release 6) are listed below:
- spectral efficiency two to four times that of HSPA Release 6;
- peak rates exceeding 100 Mbps in downlink and 50 Mbps in uplink;
- round trip time below 10 ms;
- simplified network architecture;
- high level of mobility and security;
- optimized terminal power efficiency;
- frequency bandwidth flexibility, from 1.4 MHz up to 20 MHz allocations.
In addition, one important goal was co-existence with legacy standards. LTE users should be
capable of starting a call or data transfer in an area using the LTE standard and, if LTE coverage
becomes unavailable, continuing the operation without any action on their part over GSM/GPRS,
W-CDMA-based UMTS, or 3GPP2 networks such as CDMA2000.
In order to reach the above-mentioned targets, several elements were modified or evolved
compared to the existing UMTS/HSPA standards.
The multiple access scheme in the LTE downlink is Orthogonal Frequency Division Multiple Access
(OFDMA), while the uplink uses Single Carrier Frequency Division Multiple Access (SC-FDMA). These
multiple access solutions provide orthogonality between users, reducing interference and
improving network capacity.
Resource allocation in the frequency domain takes place with a resolution of 180 kHz resource
blocks (12 subcarriers x 15 kHz) in both uplink and downlink. The frequency dimension in packet
scheduling is one reason for the high LTE capacity.
The uplink user specific allocation is continuous, enabling single carrier transmission while the
downlink can use resource blocks freely from different parts of the spectrum.
The uplink single carrier solution is also designed to allow efficient terminal power amplifier design,
which is relevant for the terminal battery life.
Figure 2.1-1: LTE multiple access in the frequency domain – SC-FDMA (uplink) and OFDMA (downlink)
The LTE solution enables spectrum flexibility where the transmission bandwidth can be selected
between 1.4 MHz and 20 MHz depending on the available spectrum.
The use of multiple antenna technology also allows the exploitation of the spatial domain as
another dimension. This becomes essential in the quest for higher spectral efficiencies.
For example, 20 MHz bandwidth can provide up to 150 Mbps downlink user data rate with 2 × 2
MIMO, and 300 Mbps with 4 × 4 MIMO.
LTE has been designed as a completely packet-oriented multi-service system, without the reliance
on circuit-switched connection-oriented protocols prevalent in its predecessors.
The route towards fast packet scheduling over the radio interface was already opened by HSPA,
which allowed the transmission of short packets having a duration of the same order of magnitude
as the coherence time of the fast fading channel, as shown in Figure 2.1-2.
In LTE, in order to improve system latency, the packet duration was further reduced from the 2 ms
used in HSDPA down to just 1 ms. This short transmission interval, together with the new
dimensions of frequency and space, has further extended the field of cross-layer techniques
between the MAC and physical layers.
High network capacity requires an efficient network architecture in addition to the advanced radio
features. The 3GPP target was to improve network scalability for traffic growth and to minimize
end-to-end latency by reducing the number of network elements.
In LTE, all radio protocols, part of the mobility management, header compression and all packet
retransmissions are located in the base station, called eNodeB (or eNB). The eNodeB includes all
the algorithms that are located in the Radio Network Controller (RNC) in the 3GPP UMTS
architecture.
The core network is also streamlined by separating the user and control planes. The Mobility
Management Entity (MME) handles only the control plane, while user plane traffic bypasses the
MME and goes directly to the System Architecture Evolution (SAE) Gateway (GW).
The LTE architecture evolution is illustrated in Figure 2.1-3. This LTE core network is also often
referred to as Evolved Packet Core (EPC) while for the whole system the term Evolved Packet
System (EPS) is used.
Figure 2.1-3: LTE architecture evolution – from UMTS (NodeB, RNC, GGSN) to LTE (eNodeB, EPC),
with separate control and user planes
A key feature of the EPS architecture is the clean separation in the core network of the control
plane (MME) and the user plane (EPS gateway), allowing independent scaling of these two planes.
Control plane functionality depends on the number of mobiles and their mobility patterns.
The following figure shows the division of the LTE architecture into three main high level domains:
User Equipment (UE), Evolved UTRAN (E-UTRAN) and Evolved Packet Core Network (EPC).
Note that network elements and interfaces presented in this part of the document are generic LTE
elements and may not all be supported in the LR14.1.L release. Additionally, this document does not
describe dimensioning and monitoring of the HSS itself, but does discuss the S6a interface between
the HSS and the MME.
(Figure: e-UTRAN (eNodeB, X2) and EPC elements (MME, S-GW, P-GW, MBMS-GW, BM-SC, 8650 SDM
(HSS), PCRF via Gx) with legacy GERAN/UTRAN and CS domain elements (BSC, SGSN, MSC/VLR, 1xCS
IWS, CBC, EIR) and the Packet Data Network, interconnected via S1-MME, S1-U, S11, S5/S8, SGi, S4,
S12, M1, M3, Sm, SGmb, SGi-mb and Gb. OAM and charging interfaces are not shown; the 1357 LIG
is not in the scope of LNCME.)
Figure 2.1-4: System Architecture for LTE E-UTRAN & EPC Network
The development in E-UTRAN is concentrated on one node, the evolved NodeB (eNodeB or eNB). All
radio functionality is collapsed there, i.e. the eNodeB is the termination point for all radio related
protocols. As a network, E-UTRAN is simply a mesh of eNodeBs connected to neighboring eNodeBs
with the X2 interface.
In the LTE core network (EPC), the UMTS SGSN equivalent has been split into the MME (Mobility
Management Entity), which handles control functions, and the Serving SAE GW (S-GW), which
handles user plane traffic. There is one anchor MME and S-GW per UE; the MME and S-GW may be
collocated. The UMTS GGSN equivalent is the PDN SAE GW (P-GW), which may or may not be
collocated with the S-GW. The P-GW is the inter-system anchor point for each UE and provides
access to PDNs.
For details on Alcatel-Lucent LTE implementation of eNodeB and MME products and LR14.1.L
solutions & configurations please refer to [C2] & [C4].
This section introduces the LTE network elements and interfaces that are part of the basic System
Architecture configuration, with particular focus on the elements involved in capacity and
dimensioning exercises.
Functionally, the UE is a platform for communication applications, which exchange signaling with
the network to set up, maintain and remove the communication links the end user needs. This
signaling includes mobility management functions such as handovers and reporting the terminal's
location; in these, the UE performs as instructed by the network.
The supported data rates range from 5 to 75 Mbps in the uplink and from 10 to 300 Mbps in the
downlink. All devices support the 20 MHz bandwidth for transmission and reception. Only a
category 5 device supports 64QAM in the uplink; the others use QPSK and 16QAM. Receiver
diversity and MIMO are implemented in all categories except category 1, which does not support
MIMO.
UE Category                            1    2    3    4    5    6    7    8
3GPP Release                           8    8    8    8    8    10   10   10
Maximum downlink data rate (Mbps)*     10   50   100  150  300  300  300  1200

Table 2-1: 3GPP defined LTE UE categories
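The category capabilities above can be captured as a small lookup table for dimensioning scripts. This is an illustrative sketch: the values are taken from the table above, while the structure and names are our own.

```python
# 3GPP LTE UE categories (values from Table 2-1 above).
# Mapping: category -> (3GPP release, max downlink rate in Mbps).
UE_CATEGORIES = {
    1: (8, 10),
    2: (8, 50),
    3: (8, 100),
    4: (8, 150),
    5: (8, 300),
    6: (10, 300),
    7: (10, 300),
    8: (10, 1200),
}

def max_dl_rate_mbps(category: int) -> int:
    """Return the 3GPP maximum downlink rate (Mbps) for a UE category."""
    _release, rate = UE_CATEGORIES[category]
    return rate

print(max_dl_rate_mbps(4))  # 150
```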
UE behavior (number of UEs handled by an NE, UE spatial distribution, mobility patterns) is an
important call model element considered in LTE dimensioning.
In previous releases, the eNB fully supports UE categories 2 and 3, and does not reject UE
categories 4 and 5 (but with no extra performance compared to UE category 3).
In LR14.1.L, the UL scheduler accepts UE category 4 with all its capabilities, since these are
identical to those of UE category 3.
Also, for coherent behaviour, it is preferable to handle UE category 5 as UE category 4 (instead of
as category 3, as previously), to avoid a category 5 UE achieving less throughput than a category 4
UE (since in 3GPP, category 5 has the highest performance).
This increases both the peak rate per cell and the aggregated peak rate per modem.
Category 4 UEs support a peak rate of 150.8 Mbps on the downlink, which is needed for 20 MHz
deployments (non-CA as well as CA); category 4 is also needed to achieve the peak rate of
110.1 Mbps in 15 MHz.
There is no change to the uplink peak rate, which remains the same as for category 3 UEs.
The LTE FDD radio interface relies on the OFDMA and SC-FDMA multiple access techniques on the
DL and UL, respectively. In an LTE system, many users of one LTE carrier can be active at the same
time, sharing the UL and DL air interface resources. These resources comprise a number of
Physical Resource Blocks (PRBs), each made up of twelve 15 kHz subcarriers and 14 modulation
symbols.
The Physical Resource Block (PRB) is the minimum unit of radio resource allocation in LTE.
The PRB structure is the same for UL and DL, but the resource element (RE) usage (and implicitly
the space available for user data) differs between the two directions.
Not all 168 Resource Elements (REs) available in a DL PRB (14 OFDM symbols x 12 subcarriers) are
used for user data traffic (PDSCH).
Up to 3 OFDM symbols (across all 12 subcarriers) are reserved for signaling channels (PCFICH,
PDCCH, PHICH).
In addition to the control symbols, some PRB space is used for the Reference Signal (RS). The
number of REs reserved for reference signals depends on the cell MIMO configuration (8 REs for
1-antenna transmission, 16 REs for 2-antenna transmission).
For details on LTE DL Physical Channels structure please refer to [R1].
Figure 2.1-6: DL Physical Resource Block structure – 14 OFDM symbols (1 ms sub-frame)
In Figure 2.1-6, we can see that the DL PRB space for user data is reduced from 168 to 120 REs. This
is a typical cell configuration case: 3 OFDM symbols used for signaling channels (3x12=36 REs) and
MIMO 2x2 cell configuration (16 REs for RS).
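The DL budget above can be checked with a short calculation. This is an illustrative sketch: the assumption that 4 of the 16 RS REs fall inside the 3-symbol control region (and so are not counted twice) follows the usual 3GPP RS pattern and reconciles the 168 → 120 figure quoted above.

```python
# DL PRB resource-element budget for the typical configuration described above:
# 3 control symbols and a 2x2 MIMO (2-antenna) cell configuration.
SUBCARRIERS = 12
OFDM_SYMBOLS = 14                       # one 1 ms sub-frame
TOTAL_RES = SUBCARRIERS * OFDM_SYMBOLS  # 168 REs per PRB

control_res = 3 * SUBCARRIERS           # PCFICH/PDCCH/PHICH region: 36 REs
rs_res = 16                             # RS REs for 2-antenna transmission
rs_in_control = 4                       # assumed overlap: RS REs already inside
                                        # the control symbols (not counted twice)

pdsch_res = TOTAL_RES - control_res - (rs_res - rs_in_control)
print(pdsch_res)  # 120 REs left for PDSCH user data, as in Figure 2.1-6
```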
As with the DL PRB, the UL PRB space for user data is also impacted by the REs reserved for
reference signals:
Figure 2.1-7: UL PRB structure — 14 SC-FDMA symbols (1 ms sub-frame), with DM-RS and SRS positions
The data demodulation reference signal (DM-RS) is sent with each packet transmission to allow data
demodulation. It occupies the center SC-FDMA symbol of each slot.
The sounding reference signal (SRS) is used to sound the uplink channel to support frequency-selective
scheduling.
For details on LTE UL Physical Channels structure, please refer to [R1].
Due to these reference signals, we can see in Figure 2.1-7 that the UL PRB space for user data is
reduced from 168 to 132 REs (24 REs for DM-RS and 12 REs for SRS).
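A corresponding sketch for the UL budget, using only the figures quoted above (two DM-RS symbols and one SRS symbol per 1 ms sub-frame):

```python
SUBCARRIERS = 12
SCFDMA_SYMBOLS = 14                       # one 1 ms sub-frame
total_res = SUBCARRIERS * SCFDMA_SYMBOLS  # 168 REs per PRB

dmrs_res = 2 * SUBCARRIERS  # one DM-RS symbol per slot, 2 slots -> 24 REs
srs_res = 1 * SUBCARRIERS   # one SRS symbol per sub-frame      -> 12 REs

pusch_res = total_res - dmrs_res - srs_res
print(pusch_res)  # 132 REs left for PUSCH user data, as in Figure 2.1-7
```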
The PRB space available for user data (the number of user-data REs) is one of the main elements
impacting the radio interface dimensioning. It is the main input for the DL/UL peak cell and user
throughput, which are the dimensioning elements of the radio interface.
The number of Physical Resource Blocks (PRBs) and their associated sub-carriers is dependent upon
the carrier bandwidth, which can be 1.4, 3, 5, 10, 15 or 20 MHz:

Bandwidth    NPRB
1.4 MHz      6
3 MHz        15
5 MHz        25
10 MHz       50
15 MHz       75
20 MHz       100

Table 2-2: LTE Bandwidth vs. Physical Resource Blocks (UL & DL)
Considering the above-mentioned dimensioning elements (the available PRB space for user data and
the number of PRBs for a specific bandwidth), we can compute the peak cell throughput for a given
cell (details in the next chapters).
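As a rough illustration of that computation, the sketch below combines Table 2-2 with the 120 usable DL REs per PRB derived earlier. The 6 bits per RE (64QAM) and the 2x2 MIMO factor are illustrative assumptions; real peak rates also account for coding rate and further overhead, which is why the result is only of the same order as the 150.8 Mbps Category 4 figure.

```python
# Number of PRBs per carrier bandwidth (Table 2-2).
NPRB = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}

def dl_peak_mbps(bandwidth_mhz: float,
                 data_res_per_prb: int = 120,  # usable DL REs per PRB
                 bits_per_re: int = 6,         # 64QAM, uncoded (assumption)
                 mimo_layers: int = 2) -> float:
    """Raw DL peak-rate estimate: each PRB spans one 1 ms sub-frame,
    so bits per sub-frame directly give kbit/s."""
    bits_per_ms = NPRB[bandwidth_mhz] * data_res_per_prb * bits_per_re * mimo_layers
    return bits_per_ms / 1000.0  # Mbps

print(dl_peak_mbps(20))  # 144.0 -> same order as the 150.8 Mbps Cat-4 peak
```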
For a given LTE scheduling instance, each LTE user has one or more PRBs dedicated for their sole
use. As each PRB within an LTE system is dedicated to a specific user at a given instant in time,
individual users do not interfere with other users simultaneously scheduled in nearby PRBs in the
same cell. As a consequence, the interference experienced by users can only emanate from other
LTE cells.
In terms of radio network dimensioning, we can say that LTE coverage and capacity are closely
related, in a similar way to that of CDMA and WCDMA systems: the capacity supported by a given
cell will impact the interference level generated in other cells, especially adjacent cells. A higher
PRB loading on one cell will result in a reduction in the coverage/capacity of adjacent cells.
However, a key difference between an LTE system and a CDMA/WCDMA system is that there is no
intra-cell interference (much like a GSM system). As user traffic increases, the effect of increasing
interference is experienced by the adjacent cells rather than by the loaded cell itself.
For WCDMA and CDMA systems it is best practice to consider a feedback loop linking coverage and
capacity analysis to match both constraints (as illustrated below):
Such feedback is not considered mandatory in the dimensioning process for an LTE system, given
that load and coverage are less strongly linked.
Another consideration for the LTE air interface (compared to WCDMA and CDMA) is the lack of soft
handoff functionality, which avoids the capacity losses caused by maintaining multiple radio links per UE.
The only node in the E-UTRAN is the Evolved NodeB (eNodeB). The eNodeB is a radio base station
that is in control of all radio related functions in the fixed part of the system.
Functionally, the eNodeB acts as a layer 2 bridge between the UE and the EPC, being the termination
point of all the radio protocols towards the UE, and relaying data between the radio connection and
the corresponding IP-based connectivity towards the EPC.
The eNodeB is responsible for the Radio Resource Management (RRM) which controls usage of the
radio interface. This includes, for example, allocating resources based on requests, prioritizing and
scheduling traffic according to required Quality of Service (QoS), and constant monitoring of the
resource usage situation.
In addition, the eNodeB has an important role in Mobility Management. The eNodeB controls and
analyzes radio signal level measurements carried out by the UE, makes similar measurements itself,
and based on those measurements, makes decisions to handover UEs between cells. This includes
exchanging handover signaling with other eNodeBs and the MME.
In the Alcatel-Lucent solution, the eNB is implemented as a distributed architecture composed of two
distinct parts: the Base Band unit (9926) and the RF Modules.
For details on eNB HW configurations and SW functions supported in LR14.1.L please refer to [C2].
The Modem board(s) and the Controller board (the two main boards of the Base Band unit) are the
eNB elements considered in the eNB dimensioning exercise. The main dimensioning criteria are:
- The number of active users handled by the board (Modem and/or Controller board)
- The board decoding throughput (for Modem boards) and the backhaul throughput (for the
Controller board)
The eUTRAN consists of a set of eNodeBs (eNBs) connected to the Evolved Packet Core (ePC)
through the S1 interface. eNBs can also be interconnected through the X2 interface (source eNB to
target eNB) to enable intra-eUTRAN handover. In addition, the eUTRAN supports the M1 and M3
interfaces, to enable the evolved Multimedia Broadcast/Multicast Service (eMBMS).
The eUTRAN overall architecture is depicted in Figure 2.1-10 below. Note that the figure also
provides an overview of the eMBMS architecture from an ePC perspective. Further information
regarding eMBMS can be found in section 2.1.4.7.
Note that a Multi-cell/multicast Coordination Entity (MCE) is defined in the eMBMS 3GPP standard
architecture. The MCE provides eMBMS Session Control Signalling capabilities from the MME through
the M3 interface. The MCE is either standalone or embedded in the eNB, and further controls eNBs
through the M2 interface. Thus the M2 interface can either be an eUTRAN-external interface
(standalone MCE configuration) or an eNB-internal interface (MCE embedded in the eNB). So far, the
Alcatel-Lucent value proposition supports only the MCE-embedded-in-the-eNB configuration. The M2
interface is therefore not shown in Figure 2.1-10 below and is not described further in this document.
S1 & X2 interfaces are split into a control plane related part (S1-MME & X2-C) and a user plane
related part (S1-U & X2-U).
S1
S1-MME is the reference point for the control plane protocol between eUTRAN and the ePC
Mobility Management Entity (MME). S1-MME uses S1 Application layer Protocol (S1AP) to provide
the signaling service between eUTRAN and the MME. S1AP services are divided into two groups:
- Non UE-associated services are related to the whole S1 interface instance between the eNB
and MME utilizing a non UE-associated signaling connection.
- UE-associated services are related to one UE. S1AP functions that provide these services are
associated with a UE-associated signaling connection that is maintained for this particular
UE.
S1-MME also supports the signaling between source and target eNBs for X2-based and/or S1-based
handovers. S1AP is carried over SCTP/IP, which provides reliable, in-sequence transport of
messages with congestion control over IP. The S1-MME protocol stack is depicted in section 3.5.4.1.
S1-U is the reference point between the eUTRAN and the ePC (i.e. the Serving GW) for the per-bearer
user plane tunneling and inter-eNB path switching during handover. S1-U is based on the GTP-U
protocol and is transported over UDP/IP as depicted in section 3.5.4.1.
X2
X2-C is the reference point for the control plane protocol between eNBs (source and target
eNBs) when inter-eNB handover is to be supported. X2-C uses the X2 Application layer Protocol
(X2AP) to provide the signaling service between the source and target eNBs involved in an
inter-eNB handover. X2AP services are divided into two groups:
- X2AP basic mobility procedures are used to handle UE mobility within the eUTRAN.
- X2AP global procedures are not related to a specific UE and are used to enable inter-eNB
communication within the eUTRAN.
X2AP is carried over SCTP/IP which provides reliable, in-sequence transport of messages with
congestion control over IP. X2AP protocol stack is depicted in section 3.5.5.1.
X2-U is the reference point between the source and target eNBs, for inter-eNB bearer path switching
during handover. X2-U enables data forwarding from the source eNB toward the target eNB during
handover, once the radio interface is already established in the target cell (target eNB). X2-U is
based on the GTP-U protocol and is transported over UDP/IP as depicted in section 3.5.5.1.
S1 & X2 dimensioning depends very much on the amount of traffic carried by the eNBs. This is
directly related to the traffic profile which is used as the input for the dimensioning exercise. The
S1 & X2 interfaces must be carefully engineered and monitored in order to avoid a capacity
shortage, which would affect the resulting end-user quality of service.
M1
The M1 interface is the reference point between the MBMS-GW and the eUTRAN for eMBMS data
delivery; it is a bearer interface on which IP multicast is used to forward data. The M1 interface
manages the IP multicast groups, which are maintained by the MBMS-GW. The MBMS-GW allocates
the IP multicast group address when the session arrives, and may release it at session stop. At the
eNB side, the eNB joins the IP multicast group to receive the eMBMS User Plane data at session
start, leaves the IP multicast group when the session stops, may leave the IP multicast group when
the session suspends, and may rejoin the IP multicast group when the session resumes.
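The join/leave behaviour described above can be sketched with standard IP multicast sockets. The group address and the use of the GTP-U port are illustrative assumptions; in the real system the group address is allocated by the MBMS-GW and signalled to the eNB via M3.

```python
import socket
import struct

GROUP = "239.1.2.3"  # hypothetical IP multicast group allocated by the MBMS-GW
PORT = 2152          # GTP-U port (M1 user plane is carried over GTPv1-U)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Session start: join the group to begin receiving eMBMS user plane data.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

# ... receive and forward eMBMS data while the session is active ...

# Session stop (or suspend): leave the group; rejoin on session resume.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_DROP_MEMBERSHIP, mreq)
sock.close()
```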
The packets delivered over the M1 interface are synchronously transmitted by the eNBs towards the
air interface. To support this inter-eNB content synchronization, the MBMS SYNC protocol layer is
defined between the BM-SC (Broadcast Multicast Service Center) and the eNBs. It carries, in
addition to the MBMS application data, information that enables the eNBs to identify the timing for
radio frame transmission and to detect packet loss. Every packet on M1 contains the MBMS SYNC
protocol information, which is encapsulated at the BM-SC. The MBMS SYNC protocol (TS 25.446) is
used over the M1 interface to keep the content synchronization for MBMS service data transmission.
The SYNC protocol is further encapsulated into GTPv1-U as depicted in section 3.5.6.1.
M2
As explained above, the MCE is embedded in the Alcatel-Lucent eNB; M2 is thus an eNB-internal
interface, which is not further described in this release of the document.
M3
The M3 interface is the reference point between the MME and the eNB-embedded Multi-cell/multicast
Coordination Entity (MCE), aimed at providing control plane capabilities between the MME and the
eNBs. M3 mainly carries MBMS session management signaling.
The M3 interface features an application part (M3-AP) that provides MBMS Session Management
(responsible for starting, stopping and updating eMBMS sessions), reset functionality (to ensure a
well-defined initialization of the M3 interface), error indication functionality (to allow proper error
reporting/handling in cases where no failure messages are defined), M3 setup functionality (for
initial M3 interface setup, providing configuration information) and MCE configuration update
functions (to update the application-level configuration data needed for the MCE and MME to
interoperate correctly on the M3 interface).
Figure 2.1-10: eUTRAN overall architecture and eMBMS overview — eNodeB, MME, S-GW/P-GW, MBMS-GW, BM-SC and Content Provider, interconnected through the S1-MME, S1-U, X2-C, X2-U, M1 (IP multicast), M3, Sm, S11, S5/S8, SGi, SGmb and SGi-mb interfaces.
2.1.4.1 MME
The Mobility Management Entity (MME) is the main control element in the Evolved Packet Core (ePC).
The MME handles the Control Plane only; it is not involved in the User Plane, except for bearer
management functions (bearer setup, release, etc.).
In addition to the interfaces that are shown in the 3GPP standards, please note that the MME
provides a direct control plane connection to the UE through S1-AP/NAS.
The MME manages the subscription profile and service connectivity. When the UE registers to the
network, the MME is responsible for retrieving the UE subscription profile from the home network.
This profile determines which Packet Data Network (PDN) connection(s) have to be allocated to the
UE at network attachment. The MME automatically sets up the default bearer, giving the UE basic IP
connectivity.
From a mobility management perspective, the MME keeps track of the location of all UEs in its
service area. The MME also manages the cooperation with other radio access technologies (2G/3G),
known as Inter Radio Access Technology (IRAT) functionality, as it enables handovers, reselections
and redirections in case 4G/LTE coverage is weak or missing. The MME also controls the Circuit
Switched FallBack (CSFB) functionality for voice and SMS services.
The MME functionalities are hosted by the Alcatel-Lucent 9471 WMM. Further details regarding the
9471 WMM product can be found in reference [C4]. The main MME functionalities, with regards to
the 3GPP base system architecture, are:
- Authenticates and authorizes a UE at the time of initial attachment, and at subsequent
attachments.
- Terminates Non-Access Stratum (NAS) signaling from the UE.
- Assigns temporary identities (GUTI) to UEs.
- Selects the S-GW at the time of initial attachment and during relocation.
- Selects the P-GW based on the subscriber profile or provisioned data.
- Performs idle-mode paging.
- Manages tracking-area lists.
- Performs bearer management functions such as activation and deactivation of bearers when
requested by the UE or the network.
- Provides lawful intercept support.
- Selects the S-GW for intra-LTE handover.
- Supports intra-LTE handover.
- Supports inter-Radio Access Technology (RAT) handovers.
- Supports 9471 WMM and S-GW pooling.
- Supports Location-based Services to assist subscribers who place emergency calls.
- Supports Warning Message Delivery to UEs in a particular area.
- Supports Multimedia Broadcast/Multicast Service, a broadcast service in which data is
transmitted from a single source entity to multiple recipients.
CS fallback for 1xRTT (CDMA radio access) in EPS enables the delivery of CS-domain services (e.g. CS
voice) by reuse of the 1xCS infrastructure when the UE is served by the eUTRAN. A CS-fallback-enabled
terminal, while connected to the eUTRAN, may register in the 1xRTT CS domain in order to be able to
use 1xRTT access to establish one or more CS services in the CS domain. The CS Fallback function is
only available where eUTRAN coverage overlaps with 1xRTT coverage.
The reference architecture in the scope of CS fallback to 1xRTT and SMS over S102 in EPS is
described in the figure below (extracted from 3GPP TS 23.272). The CS fallback to 1xRTT and SMS
over S102 functions are performed by means of the S102 reference point between the MME and
the 1xCS IWS. The S102 interface provides a tunneling capability between the MME and the 3GPP2
1xCS IWS to relay 3GPP2 1xCS signaling messages (1xCS signaling messages are those defined for
the A21 interface as described in 3GPP2 A.S0008-C and 3GPP2 A.S0009). S102-encapsulated 3GPP2
1xCS signaling messages are transported over UDP/IP. Note that the S102 interface is used to support
UEs that do not transmit and receive on both the LTE and 1x radio interfaces simultaneously.
Figure: CS fallback to 1xRTT reference architecture (from 3GPP TS 23.272) — 1xRTT CS Access, 1xRTT MSC, 1xCS IWS, 1xCS CSFB UE, MME, eUTRAN eNodeB and S-GW/P-GW, interconnected through the A1, S102, S1-MME and S11 interfaces.
The 3GPP2 1xCS IWS uses the S102 reference point to communicate with the MME and to transport
3GPP2 1xCS signaling messages to the UE. The role of the 3GPP2 1xCS IWS is:
- To be a signaling tunneling end point towards the MME for receiving/sending encapsulated
3GPP2 1xCS signaling messages from/to the UE.
- To emulate a 1xRTT BSS towards the 1xRTT MSC (reference point A1 as defined in
3GPP2 A.S0014 between the 1xBS and the MSC).
The MME enabled for CS fallback to 1xRTT supports the following additional functions:
- It serves as a signaling tunneling end point towards the 3GPP2 1xCS IWS via the S102 interface
for sending/receiving encapsulated 3GPP2 1xCS signaling messages to/from the UE, which are
encapsulated in S1-MME S1 Information Transfer messages, as defined in TR 36.938.
- 1xCS-IWS (terminating S102 reference point) selection for CSFB procedures.
- Handling of S102 tunnel redirection in case of MME relocation.
- Buffering of messages received via S102 for UEs in idle state.
From release LR13.1L onwards, the MME enables 3GPP Rel-8 CS Fallback to 1xRTT support and the
3GPP Rel-9 enhanced CSFB to 1xRTT mobility scenario, for single- and dual-receiver UEs.
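The MME's S102 relay role described above — forwarding opaque, encapsulated 1xCS (A21) messages over UDP/IP — can be sketched as follows. The IWS address and port are hypothetical, and the message content is treated as an opaque byte string.

```python
import socket

# Hypothetical 1xCS IWS endpoint; in a real deployment this results from the
# IWS selection function mentioned above.
DEFAULT_IWS = ("192.0.2.10", 23272)

def relay_1xcs_message(payload: bytes, iws_addr=DEFAULT_IWS) -> int:
    """Forward an encapsulated 3GPP2 1xCS (A21) signaling message over UDP/IP,
    as on the S102 reference point. Returns the number of bytes sent."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        return sock.sendto(payload, iws_addr)
```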
2.1.4.2 HSS
The LTE Home Subscriber Server (HSS) manages the eUTRAN Mobile Subscribers and all the
associated standardized subscription information which is needed to set up LTE (so-called 4G) calls.
In a 3GPP inter-working configuration, the HSS is associated with the GSM/UMTS Home Location
Register (HLR), in order to manage the GERAN/UTRAN Mobile Subscribers and all associated
standardized subscription information needed to set up GSM, SMS, UMTS HSPA+ or GPRS calls.
In the scope of ePC, the 8650 SDM supports the following interfaces:
2.1.4.3 AAA
The 3GPP-AAA component is introduced in the core LTE architecture to perform authentication of
non-3GPP users (from both trusted and un-trusted networks) accessing the LTE network services.
Non-3GPP networks can either be a 3GPP2 network such as CDMA, which operates in a licensed
radio spectrum (trusted non-3GPP networks), or a network such as WiFi, which operates in
unlicensed spectrum (un-trusted non-3GPP networks).
3GPP standards have defined procedures for interoperability of these networks with the LTE
network.
- Provides authentication services for trusted non-3GPP network interconnections, for example
with CDMA or EVDO.
- Provides authentication services for un-trusted non-3GPP network interconnections through the
ePDG, for example WiFi offload, WiMAX interconnection, and so on. The evolved Packet Data
Gateway (ePDG) provides ePC access to PS-based services for a WLAN UE; the ePDG
functionalities are described in 3GPP TS 23.402.
- Proxies the authentication requests to and from other 3GPP-AAA servers in roaming scenarios.
- Obtains authentication vectors and session encryption keys from the HSS.
- Indicates user session state changes to the HSS.
- Retrieves the user service profile data from the HSS and provides it to the P-GW / ePDG.
- Notifies the HSGW / P-GW / ePDG about an abort-user-session request initiated by the HSS.
- Notifies the HSGW, P-GW and ePDG of user profile changes initiated by the HSS (Push-Profile
request from the HSS). This notification causes a re-authorization (in a trusted access) or a full
re-authentication (in an un-trusted access) of the user session.
- Caches HSS authentication vectors and user profiles for fast re-authentication.
The Alcatel-Lucent non 3GPP-LTE AAA functionalities are hosted by the 8950 AAA. Further details
regarding the 8950 AAA product can be found in reference [C17].
In the scope of non-3GPP-LTE AAA, the 8950 AAA supports the following interfaces:
- S6b to the P-GW.
- STa to the non-3GPP IP Access.
- SWa to the un-trusted non-3GPP IP Access.
- SWd between the 3GPP AAA Server and the 3GPP AAA Proxy.
- SWm to the ePDG.
- SWx to the HSS.
2.1.4.4 S-GW
The Serving GW (S-GW) is the gateway which terminates the interface towards the eUTRAN. For each
UE associated with the EPS, at a given point in time, there is a single Serving GW.
The S-GW provides the local mobility anchor point for inter-eNB handover, as well as mobility
anchoring for inter-3GPP mobility, downlink packet buffering and initiation of the network-triggered
service request procedure: when a UE is in idle mode, only the eNB resources are released, while
resources are maintained at S/P-GW level. When the S-GW receives data packets from the P-GW for
that UE, it buffers the received packets and requests the MME to page the UE, so that the UE
re-connects; once the resources are re-established, the buffered packets are forwarded to the UE
through the eUTRAN. The S-GW also provides packet routing and forwarding along with
transport-level packet marking in the uplink and the downlink, e.g. setting the DiffServ Code Point
based on the QCI of the associated EPS bearer.
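The transport-level marking mentioned above can be sketched as a simple QCI-to-DSCP lookup. The mapping values below are a common illustrative example, not an Alcatel-Lucent or 3GPP-mandated table.

```python
# Example QCI -> DiffServ Code Point mapping (illustrative values only).
QCI_TO_DSCP = {
    1: 46,  # conversational voice       -> EF
    2: 36,  # conversational video       -> AF42
    5: 34,  # IMS signalling             -> AF41
    9: 0,   # default best-effort bearer -> BE
}

def dscp_for_bearer(qci: int) -> int:
    """DSCP value to set in the outer IP header for an EPS bearer's packets."""
    return QCI_TO_DSCP.get(qci, 0)  # unknown QCIs fall back to best effort

print(dscp_for_bearer(1))  # 46: Expedited Forwarding for voice
```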
The S-GW functionalities are hosted by the Alcatel-Lucent 7750 SR OS MG, which provides the 7750
Service Router Serving Gateway functionalities by use of dedicated Mobile Gateway - Integrated
Services Module (MG-ISM).
Further details regarding the 7750 SR OS MG product can be found in [C11] and [C12].
2.1.4.5 P-GW
The Packet Data Network Gateway (PDN-GW or P-GW) is the gateway which terminates the ePC
interface towards the PDN (SGi reference point). It is the highest-level mobility anchor in the
ePC, and it provides the UE with its IP point of attachment. If a UE is accessing multiple PDNs, there
may be more than one PDN GW for that UE.
The P-GW features UE IP address allocation, per-user-service uplink/downlink traffic gating,
per-user-service uplink/downlink rate enforcement, per-APN uplink/downlink rate enforcement,
per-user packet filtering (by means of deep packet inspection), uplink/downlink transport-level
packet marking (setting the DiffServ Code Point based on the QCI of the associated EPS bearer),
accounting for inter-operator charging, offline charging system interfacing, and DHCP functions.
The P-GW functionalities are hosted by the Alcatel-Lucent 7750 SR OS MG, which provides the 7750
Service Router PDN Gateway functionalities by use of the dedicated Mobile Gateway - Integrated
Services Module (MG-ISM).
Further details regarding the 7750 SR OS MG product can be found in [C11] and [C12].
In its P-GW role, the 7750 SR OS MG supports the following interfaces:
- SGi to the packet data network (PDN) and to the Broadcast Multicast Service Center (BM-SC).
- Gx to the PCRF.
- Ga to the Charging Gateway Function.
- Gz / Rf to the Offline Charging System (OFCS).
- Ro / Gy to the Online Charging System (OCS).
- Lawful intercept.
- Interface to the management system (please refer to section 3.3).
Please note that the 7750 SR OS MG can operate as an S-GW, a P-GW, or both. When it supports the
P-GW functional role, the 7750 SR OS MG can also feature GGSN-type functionalities. As the scope of
this document is LTE only, GGSN-type capabilities are not addressed in this document.
Please note in addition that the capacity and engineering limits described in section 3.6.1 apply to
the 7750 SR OS MG regardless of whether the product operates as an S-GW or a P-GW.
2.1.4.6 PCRF
The PCRF features policy control decision and flow based charging control functionalities. It
provides network control regarding the service data flow detection, gating, QoS and flow based
charging (except credit management) towards the Policy and Charging Enforcement Function
(PCEF); in the scope of LTE ePC architecture the PCEF is hosted by the P-GW.
The PCRF applies the security procedures, as required by the operator, before accepting service
information from the Application Function (AF); which can be IP Multimedia Subsystem (IMS) or any
other policy enabled applications.
The PCRF decides whether application traffic detection is applicable, as per operator policies,
based on user profile configuration, received within subscription information.
The PCRF decides how a certain service data flow / detected application traffic shall be treated in
the PCEF (traffic gating, traffic enforcement, filtering, QoS control, etc.), and ensures that the PCEF
user plane traffic mapping and treatment is in accordance with the user's subscription profile.
The PCRF operates throughout the duration of the user's interaction with the ePC, i.e. at User
Equipment (UE) attachment, upon resource requests to support application session establishment,
or upon session resource modifications resulting from subscriber profile updates in the database.
The PCRF is hosted by the Alcatel-Lucent 5780 DSC. Further details regarding the 5780 DSC product
can be found in reference [C18].
The 5780 DSC supports the following interfaces:
- Gx to the P-GW.
- S9 between a visited PLMN PCRF (V-PCRF) and a home PCRF (H-PCRF).
- Sp. So far this interface is not in the scope of this document.
- Rx to the IMS network. This interface is not in the scope of this document.
- Sy to the Online Charging System.
- Interface to the management system (please refer to section 3.3).
2.1.4.7 MBMS-GW
Broadcast Multicast service is known as Multimedia Broadcast Multicast Service (MBMS) for 2G and
3G radio access networks and as evolved MBMS (eMBMS) for 4G/LTE radio access networks.
MBMS is a point-to-multipoint service in which data is transmitted from a single source entity to
multiple recipients. Transmitting the same data to multiple recipients allows network resources to
be shared. The MBMS bearer service offers two modes of operation: Broadcast Mode, where any
subscriber located in an area served by the service can receive data, and Multicast Mode, where
only subscribers who have subscribed to the service and joined the multicast group associated with
this service can receive data. Both modes of operation provide unidirectional point-to-multipoint
transmission of multimedia data and are well suited to broadcasting text, audio, pictures and video
from the Broadcast Multicast Service Centre to any user located in the service area.
The MBMS service architecture and functional description are given in 3GPP TS 23.246. Please note
that the evolved packet system (EPS) currently only supports broadcast mode. The eMBMS reference
architecture is depicted in Figure 2.1-12 below.
Figure 2.1-12: eMBMS reference architecture — BM-SC, MBMS-GW, MME, eUTRAN (eNodeB), UTRAN (NodeB, SGSN) and S-GW/P-GW, with the Content Provider reached through the SGmb/SGi-mb interfaces, interconnected through the M1 (IP multicast), M3, Sm, Sn, Iu-PS, S1-U, S1-MME, S11, S5/S8 and SGi interfaces.
eMBMS demo is supported from LR13.1L. Use of Alcatel-Lucent MME for the support of eMBMS is
mandatory for LR13.1L. Use of non Alcatel-Lucent MME for the support of eMBMS is planned for
LR13.3L. Commercial eMBMS support is planned for LR14.1L.
In the Evolved Packet System, a functional entity, the MBMS Gateway (MBMS-GW), exists at the edge
between the Core Network and the Broadcast Multicast Service Centre (BM-SC) to enable eMBMS
services.
The MBMS-GW has the role of broadcasting the packets to all eNBs within an MBMS service area. It
additionally features MBMS session management (Session Start and Session Stop procedures). It is
also in charge of collecting charging information relative to the distributed MBMS traffic for each
terminal having an active MBMS session. In addition, the MBMS-GW features the following
functionalities:
- It provides an interface to the BM-SC through the SGi-mb user plane interface and the SGmb
control plane interface.
- IP multicast distribution of MBMS user plane data to the eNBs through the M1 interface.
- It allocates an IP Multicast address which the eNB should join to receive the MBMS data. This
IP Multicast address, together with the IP address of the multicast source (SSM) and a C-TEID,
is provided to the eNB via the MME (through the M3 interface).
The Broadcast Multicast service Centre (BM-SC) provides functions for MBMS user service
provisioning and delivery. It may serve as an entry point for content provider MBMS transmissions,
used to authorize and initiate MBMS Bearer Services within the PLMN and can be used to schedule
and deliver MBMS transmissions. The BM-SC is a functional entity, which must exist for each MBMS
User Service. The BM-SC consists of the following sub-functions:
- Membership function: provides authorization for UEs requesting to activate an MBMS service.
- Session and Transmission function: schedules MBMS session transmissions. The BM-SC Session
and Transmission Function is able to schedule MBMS session retransmissions, and labels each
MBMS session with an MBMS Session Identifier to allow the UE to distinguish the MBMS session
retransmissions. When IP multicast is used for distribution of payload from the MBMS-GW to the
eNB, the BM-SC Session and Transmission Function shall include synchronization information for
the MBMS payload (SYNC protocol, which is described in 3GPP TS 25.346 and TS 25.446).
- Proxy and Transport function: a Proxy Agent for signalling over the SGmb reference point
between MBMS-GWs and other BM-SC sub-functions (e.g. BM-SC Membership Function, BM-SC
Session and Transmission Function). The BM-SC Proxy and Transport Function is also able to
operate when BM-SC functions for different MBMS services are provided by multiple physical
network elements; routing of the different signalling interactions shall be transparent to the
MBMS-GW.
- Service Announcement function: provides service announcements for broadcast MBMS user
services. It provides the UE with media descriptions specifying the media to be delivered as
part of an MBMS user service (e.g. type of video and audio encodings, etc.), as well as with
MBMS session descriptions specifying the MBMS sessions to be delivered as part of an MBMS user
service (e.g. broadcast service identification, addressing, time of transmission, etc.).
- Security function: provides integrity and/or confidentiality protection of MBMS data. The MBMS
Security function is used for distributing MBMS keys (Key Distribution Function) to authorized
UEs.
- Content synchronization for MBMS in the eUTRAN for broadcast mode. Some MBMS user services
may broadcast different content in different areas of the network. In such a case the UE is not
aware of the relation between location and content, i.e. the UE just activates the reception of
the service and receives the content that is relevant for its location.
The MBMS-GW supports the following interfaces:
- SGmb to the BM-SC (control plane).
- SGi-mb to the BM-SC (user plane for MBMS data delivery).
- M1 to the eNB (user plane for MBMS data delivery; the Multi-cell/multicast Coordination Entity
(MCE) is embedded in the eNB).
- Sm to the MME.
- Interface to the management system.
The BM-SC supports the following interfaces:
- Interface to the content provider. This interface is not in the scope of this document.
- Interface to the management system. This interface is not in the scope of this document.
Alcatel-Lucent {MBMS-GW + BM-SC} relies upon the 9774 LTE Multimedia Gateway (9774 LMG). A
single network element hosts both the BM-SC and the MBMS-GW.
Ga: Interface between S-GW or P-GW and Offline-Charging System (OFCS). Ga relies on GTP’.
Charging interfaces will be described in a further release of the document.
Gn: Interface between MME and pre-release 8 SGSN (control plane) and interface between pre-
release 8 SGSN and PGW (user and control plane). Gn relies on GTP tunnels (GTP-U for user
plane and GTP-C for control plane).
Gp: interface between pre-release 8 SGSN and PGW (user and control plane), roaming
architecture. Gp relies on GTP tunnels (GTP-U for user plane and GTP-C for control plane).
Gx: Interface between P-GW and PCRF. Gx relies on Diameter protocol.
Gz/Rf: Interface between S-GW / P-GW and Offline-Charging System (OFCS). Gz/Rf relies on
Diameter protocol. Charging interfaces will be described in a further release of the document.
M1: Interface between the eNB and the Multimedia Broadcast/Multicast Service (MBMS) Gateway
(MBMS-GW). This interface between the eUTRAN and the ePC is described in section 2.1.3.4.
M3: Interface between the eNB-embedded Multi-cell/multicast Coordination Entity (MCE) and the
MME. This interface between the eUTRAN and the ePC is described in section 2.1.3.4.
Ro/Gy: Interface between the P-GW and the Online Charging System (OCS). Ro/Gy relies on the
Diameter protocol. Charging interfaces will be described in a further release of the document.
Rx: Interface between PCRF and IMS application function or any other policy enabled
applications. Rx relies on Diameter protocol. This interface is not in the scope of this document.
S1: Interface between the eNB and the ePC (S1-U is the user plane interface between the eNB
and the S-GW; S1-MME is the control plane interface between the eNB and the MME). This
interface between the eUTRAN and the ePC is described in section 2.1.3.4.
S2a: Interface between trusted non-3GPP IP access and the S-GW / P-GW. S2a relies upon PMIPv6 (it
is used for the user plane for 3GPP2 enhanced High Rate Packet Data (eHRPD) handover purposes).
So far this interface is not in the scope of this document.
S2b: Interface between PDN GW/S-GW and ePDG for untrusted non-3GPP access with GTP or
PMIPv6. So far this interface is not in the scope of this document.
S2c: Interface between P-GW and non 3GPP UE. So far this interface is not in the scope of this
document.
S3: Interface between MME and release-8 SGSN. It enables the MME to support handovers
between a 3GPP UMTS or GERAN network and an LTE network. S3 relies on GTP tunnels (GTPv2-
C). This interface is supported from an end-to-end Alcatel-Lucent solution perspective from
release LR13.3L (the Alcatel-Lucent MME did support the S3 interface prior to this release,
but the Alcatel-Lucent S-GW supports the S4 and S12 interfaces from release LR13.3L only).
S4: Interface between S-GW and release-8 SGSN. It supports handovers between a 3GPP UMTS or
GERAN network and an LTE network. S4 relies on GTP tunnels (GTP-U).
This interface is supported from release LR13.3L onwards.
S5: Interface between S-GW and P-GW within the same Public Land Mobile Network (PLMN). S5
relies on GTP tunnels (GTP-U for user plane and GTPv2-C for control plane).
S6a: Interface between MME and HSS (hosting subscriber profile database). S6a allows transfer
of subscription and authentication data. S6a relies on Diameter protocol.
S6b: Interface between P-GW and 3GPP AAA Server / Proxy. S6b relies on Diameter. This
interface is specified in TS 29.273. This interface will be described in a further release of the
document.
S8: Interface between S-GW and P-GW which is located in a different PLMN (in case of roaming).
S8 relies on GTP tunnels (GTP-U for user plane and GTPv2-C for control plane).
S9: Interface between PCRF in the HPLMN (H-PCRF) and PCRF in the VPLMN (V-PCRF) (in case of
roaming). S9 relies on Diameter protocol.
S10: Interface between MMEs in the same or different MME pools. S10 relies on GTP tunnels
(GTPv2-C).
S11: Interface between MME and S-GW to enable bearer session creation and deletion. S11 relies
on GTP tunnels (GTPv2-C).
S12: Interface between 3G UTRAN and S-GW. It enables user-plane direct tunnel capability. S12
relies on GTP tunnels (GTP-U). This interface is supported from release LR13.3L onwards.
S13: Interface between MME and EIR to enable transfer of Mobile Equipment Identity (MEI). S13
relies on Diameter protocol.
SBc: Interface between MME and Cell Broadcasting Center (CBC) (to enable delivery of warning
messages in the Commercial Mobile Alert System (CMAS) context). SBc relies on the
SBc Application Protocol (SBcAP) and is transported over SCTP.
SGi: Interface between P-GW and Packet Data Network (PDN). SGi relies on IP. As mentioned in
section 2.1.4.7 above, this interface also provides interconnection capability between the ePC
and the BM-SC for specific MBMS file repair needs.
SGi-mb: User plane interface between the BM-SC and the MBMS-GW. User plane data are
encapsulated over MBMS synchronization protocol (3GPP TS 25.446 and TS 29.061).
SGmb: Control plane interface between the BM-SC and the MBMS-GW. SGmb relies on Diameter
protocol (refer to 3GPP TS 29.061 for details) and is transported over TCP.
SGs: Interface between MME and MSC/VLR (to support UE location coordination in order to
enable CSFB). SGs relies on the SGs Application Protocol (SGsAP) and is transported over SCTP.
SLg: Interface between MME and Gateway Mobile Location Center (GMLC). SLg relies on
Diameter protocol. This interface will be described in a further release of the document.
SLs: Interface between MME and the Evolved Serving Mobile Location Center (E-SMLC). SLs relies
on the SLs Application Protocol (SLsAP) and is transported over SCTP. This interface will be
described in a further release of the document.
Sm: Interface between MME and Multicast Broadcast/Multicast Service Gateway (MBMS-GW). Sm
relies on GTPv2-C (refer to 3GPP TS 23.246).
Sp: Interface between the Subscription Profile Repository (SPR) and the PCRF. So far this
interface is not in the scope of this document.
STa: Interface between Trusted non-3GPP IP Access and 3GPP AAA Server/proxy. This interface
is specified in TS 29.273. This interface will be described in a further release of the document.
Sv: Interface between MME and MSC/VLR (to enable SRVCC support). Sv relies on GTPv2-C.
SWa: Interface between un-trusted non-3GPP IP Access and 3GPP AAA Server/proxy. This
interface is specified in TS 29.273. This interface will be described in a further release of the
document.
SWd: Interface between 3GPP AAA Server and 3GPP AAA proxy. This interface is specified in
TS 29.273. This interface will be described in a further release of the document.
SWm: Interface between ePDG and 3GPP AAA Server/proxy. This interface is specified in
TS 29.273. This interface will be described in a further release of the document.
SWx: Interface between HSS and 3GPP AAA Server. This interface is specified in TS 29.273. This
interface will be described in a further release of the document.
Sy: Interface between PCRF and Online Charging System (OCS). Sy relies on Diameter protocol.
This interface will be described in a further release of the document.
S102: Interface between 3GPP2 1xCS IWS and MME. This interface enables 3GPP Rel-8 CS
Fallback to 1xRTT and the 3GPP Rel-9 enhanced CSFB to 1xRTT mobility scenario, for single-
and dual-receiver UEs. The reference architecture for S102 interface support is described in
section 2.1.4.1 above.
A third activity may be required for the network capacity evolution. This activity is specific to
mature networks impacted by unexpected traffic growth (generated by commercial offers or
massive subscriber migration):
This activity is required for deployment preparation. The goal is to correctly configure
(from a HW and SW perspective) each network element and interface that will be deployed in a
specific area, avoiding over- or under-dimensioning.
Another goal of this activity is to correlate the different dimensioning constraints of different
network elements in order to obtain a coherent LTE Network design.
The main inputs and outputs of the initial engineering and dimensioning phase are presented in the
following figure:
Figure 2.2-1: Initial Engineering & Dimensioning: Generic Inputs & Outputs
One of the most important inputs for a dimensioning exercise is the Traffic Profile. The Traffic
Profile (or Traffic Model, or Call Model) represents a set of traffic parameters that
characterize UE activity from a User & Control Plane perspective:
Either provided by the operator, based on marketing inputs or monitored metrics/field data
Or, if the operator call model is not available, a generic Alcatel-Lucent call model can be
obtained through the CTA organization
An Example of a Traffic Model:
The following example traffic model assumes 85% smart phones and 15% data-only devices (PC
modem, tablet, MIFI). This traffic model is used as an example throughout the document and is
referred to as the “Example Traffic Model”. Individual applications are grouped into call types or
categories as follows:
VoLTE – this is voice traffic with QCI = 1. The call model assumes about 1 BHCA of voice traffic
for smart phones. Approximately 15% of all connected users will be voice users.
Best Effort (BE) Data – this includes all applications with QCI = 5 through 9 (SMS, chatty
applications, download, upload, email, web browsing) plus 90% of all streaming traffic. The call
model assumes that 90% of users of streaming applications will not subscribe to a premium
service and will accept BE for their streaming applications. Approximately 80% of all connected
users will be BE data users.
Guaranteed Bit Rate (GBR) Data – this includes gaming traffic and 10% of all streaming traffic.
The call model assumes that 100% of all gaming users and 10% of streaming users subscribe to a
premium service that provides GBR data. Approximately 5% of users will be premium data users.
Carrier Aggregation (CA) – The eNB configures a second carrier for higher downlink throughput.
The main impact to the traffic model is configuration messages (secondary cell setup and
release) at the eNB. Uplink CA is not supported. Calls are only eligible for aggregation in about
10% of sites. The UE must also support CA.
The main elements of the call model (with specific values for the Example Traffic Model) are
presented in the following tables. Note that the tables show an example and do not represent any
specific field configuration. The example call model is used as a basis to explain dimensioning and
monitoring in later chapters. For all customer-specific use cases, the example in this document will
require updates to reflect the true customer network. Please refer to Section 5. Note that services
teams may have access to preliminary information for call models in future releases and may also
have additional information regarding customizing traffic models for specific customers and
commercial offers. Please refer to Section 6 for an example call model with CSFB.
Call Model Parameter (Control Plane)    Example Traffic Model Value    Customer Value
Attach                                  0.9
Detach                                  0.9
S1 release                              27.6
Inter-RAT handover                      0
TA size (# eNBs)                        60
paging-related parameters:
- Terminating calls: This is used in MME and eNB paging rate calculations. It is equal to (BHCA
* %Terminating calls). The percentage of terminating calls may be different for each call
type category.
- Paging - eNB: This is the number of paging events per subscriber per hour at a fully loaded
eNB. Its value is higher than ‘Terminating calls’ because, depending on the paging strategy,
multiple eNBs may be paged for each terminating call at the MME. The estimated value in the
Example Traffic Model is based on the example MME paging implementation in LR14.1.L. Please
refer to section 2.2.1.2 for more details.
The dormancy period in the Example Traffic Model is 10 sec.
This dormancy period is controlled through the Traffic Based Context Release functionality
configured at eNB level. This functionality automatically disconnects dormant users after a
preconfigured UE inactivity timer (the timeToTrigger timer) expires, allowing optimal eNB
resource usage. For Traffic Based Context Release configuration parameter details, please see
Volume 5 in [C1].
In LR14.1.L, the Traffic Based Context Release functionality can be activated in the eNB
through the OAM parameter isTrafficBasedContextReleaseAllowed (in ActivationService
MO). In LTE networks handling commercial traffic it is strongly recommended to set this
parameter to “true“.
The OAM parameter that controls the dormancy period in the eNB is timeToTrigger (in the
TrafficBasedReleaseConf MO). In order to ensure a good balance between User plane and
Control plane load, it is strongly recommended not to set this parameter below 10000 ms
(equivalent to a dormancy period of 10 sec). For details of this parameter and
its LR14.1.L default value, please refer to [C1][R1].
The Control Plane parameters are counted in number of procedures (Attach, Detach, etc.)
per subscriber and per Busy Hour.
The following table provides a further breakdown, by call type, of the general and user plane
parameters from Table 2-3.
Alcatel-Lucent projects that user traffic in LR14.1.L deployments will be primarily Best Effort data.
However, the Alcatel-Lucent LR14.1.L end-to-end network solution supports VoLTE, GBR data, and
CA, and includes these call types in the traffic analysis.
Call Model Parameter    Example Traffic Model Value    Customer Value
Table 2-5 provides some additional dimensioning parameters for subscriber loading and data usage
per cell. Note that Table 2-5 shows examples of maximum numbers of subscribers for a few
frequencies. The table is not a complete list of all frequencies and does not include all methods for
counting subscribers (OOT, etc.).
Item    Example Traffic Model Value    Customer Value    Notes
Note (*): Number of subscribers (idle mode + active mode) per cell per Busy Hour (BH). These
numbers are calculated with the hypothesis developed in Section 3.1.3, where L2 of the air
interface is fully loaded (90% of the frame) by connected users in active mode during the peak
traffic of the Busy Hour, for a 10 MHz bandwidth.
Table 2-6 shows the detailed call model for traffic in the LR14.1 Example Traffic Model. Please note
that values in Table 2-3 are aggregated by call type (VoLTE, BE Data, GBR Data) while the values
below are aggregated by application type (Download, Upload, Web Browsing, etc.). The Voice
category below is split between VoLTE and BE Data in Table 2-3. Similarly, streaming applications are
split between BE Data and GBR Data in Table 2-3. (10% of all eligible streaming traffic is defined as
GBR in our example traffic model.)
Web Browsing      5.4    52    223    227    912
Small Messages    15     16    18     417    441
Table 2-6: Detailed Data Call Model for the Example Traffic Model
The paging strategy is a tradeoff between paging response time, paging effectiveness (the call
completion rate) and paging efficiency (use of paging channel and other system resources). Several
paging strategies are available to optimize performance and achieve customer objectives. For
time-critical services such as VoLTE, a more aggressive paging strategy may be used. Typically, customers
optimize performance without achieving 100% effectiveness. KPIs can be used to identify issues and
provide data for determining the best methods for balancing response time and effectiveness with
efficiency.
When a UE is in idle state, its location may not be exactly known. A Tracking Area (TA) represents an
area in which the UE was last registered, and it is necessary to page the UE in the TA to locate the
UE in a particular eNodeB. A TA update (TAU) is generated when the UE crosses the boundary from
one TA to another TA.
The LTE system has defined two TA operational schemes: the “multiple-TA registration” scheme and
the “overlapping TA” scheme.
With multiple-TA registration, the MME sends the UE a TA List containing the current TA and
one or several neighbor TAs. The UE requests a TAU only when it moves to a TA that is not in
the current list. This avoids ping-pong scenarios at TA border areas. The MME automatically
includes the previous (old) Last Seen TA as a neighbor in the TA List and is capable of
updating the TA List automatically in order to track cyclic movement between two or three TAs.
There is no need to provision/configure TA neighbor lists in the MME. On average (using the
example call model presented in Table 2-3), the estimated TA List size is ~1.2 TAs.
In overlapping scheme, the eNodeB is configured to broadcast to more than one TA.
The following paging strategy is used as an example throughout this document as a realistic
strategy for the example traffic model described in Section 2.2.1.1, and provides optimal
performance for that traffic model. The MME supports different paging methods for Voice
(VoLTE) and non-VoLTE calls.
Figure 2.2-2: MME Paging Strategy Example for non-VoLTE Traffic in LR14.1.L
The page response rate assumptions for each of the paging methods are as follows:
Page Response    Paging      Success rate after    Cumulative success rate    Cumulative success rate
Assumption       Strategy    1st page attempt      after 2nd page attempt     after 3rd page attempt
VoLTE            TA list     93%                   95%                        -
Paging rate at the MME is determined by the number of eNBs paged per MME terminating call. Based
on the above paging strategy, the paging response rates and the TA/TA List sizes, the eNBs paged
per MME terminating call can be calculated as follows:
eNBs paged / MME terminating VoLTE call = numberEnbsTAList (for 1st paging attempt)
+ numberEnbsTAList * ( 1 - SuccessRateAfter1stPage ) (for 2nd paging attempt)
Example with TA size = 60 eNBs, TA List size = 72 eNBs, and 100% eNB loading on average for all
subscribers in the entire network:
eNBs paged per MME terminating VoLTE call = 72 + 72 * 7% = ~77 eNBs.
For UEs in all non-VoLTE calls (BE data and GBR data, including SMS):
eNBs paged / MME terminating non-VoLTE call = 1 (for 1st paging attempt)
+ numberEnbsEnbList * ( 1 - SuccessRateAfter1stPage ) (for 2nd paging attempt)
+ numberEnbsTAList * ( 1 - SuccessRateAfter2ndPage ) (for 3rd paging attempt)
eNBs paged per MME terminating non-VoLTE call = 1 + 5 * 14% + 72 * 7% = ~6.75 eNBs.
Note that the above calculations assume a worst case (high load) of 100% loading for eNBs in the
TA/TAList. Measured field data typically shows average loading below 80%.
The traffic assumptions in the LR14.1 example call model used for paging on VoLTE calls are 0.32
BHCA with a termination percentage of 50%, which results in 0.16 terminations. For non-VoLTE
calls, 27 non-GBR BHCA and 0.045 GBR BHCA have termination percentages of 35.2% and 12.6%
respectively, which equals ~9.5 terminations. The total eNBs paged = 77 eNBs * 0.16 (VoLTE)
+ 6.75 eNBs * 9.5 (non-VoLTE) = ~76 eNBs.
To compute the weighted average per terminating BHCA: 76 eNBs / (0.16 + 9.5) = ~7.9 eNBs paged
per termination across all services. Note that 2 additional messages are required for handling
the UE response to the page request, so the average number of messages per page attempt is ~10.
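The paging arithmetic above can be sketched in a few lines of Python. This is an illustrative calculation only, using the Example Traffic Model figures quoted in this section; the variable names are ours and are not OAM parameters.

```python
# Sketch of the MME paging-load arithmetic for the Example Traffic Model.
# Worst case assumed: 100% eNB loading across the TA/TA List.

TA_LIST_SIZE = 72   # eNBs in the TA List
ENB_LIST_SIZE = 5   # eNBs paged on the 2nd non-VoLTE attempt

# Page-failure probabilities derived from the cumulative success rates
VOLTE_FAIL_1ST = 0.07       # 1 - 93%
NON_VOLTE_FAIL_1ST = 0.14
NON_VOLTE_FAIL_2ND = 0.07

# eNBs paged per MME terminating call
volte_enbs = TA_LIST_SIZE + TA_LIST_SIZE * VOLTE_FAIL_1ST          # ~77
non_volte_enbs = (1
                  + ENB_LIST_SIZE * NON_VOLTE_FAIL_1ST
                  + TA_LIST_SIZE * NON_VOLTE_FAIL_2ND)             # ~6.75

# Terminations per subscriber per Busy Hour
volte_term = 0.32 * 0.50                      # 0.16 terminating VoLTE calls
non_volte_term = 27 * 0.352 + 0.045 * 0.126   # ~9.5 terminating non-VoLTE calls

total_enbs_paged = volte_enbs * volte_term + non_volte_enbs * non_volte_term
weighted_avg = total_enbs_paged / (volte_term + non_volte_term)    # ~7.9

print(round(volte_enbs, 2), round(non_volte_enbs, 2), round(weighted_avg, 1))
```

Rounding intermediate figures as the document does (77 and 6.75 eNBs) reproduces the ~76 total and ~7.9 weighted average.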
This activity is generally launched once the network is deployed and handling commercial traffic.
The purpose is to evaluate the load, and the consequences of this load on Key Performance
Indicators (KPIs), on all critical network elements, and to rapidly detect/predict any resource
shortage issue (and take appropriate actions).
Status Yellow (resource load low): capacity analysis/tuning required.
Status Red (resource load high): capacity analysis/tuning and/or resource addition required.
The initial dimensioning evaluates the network element count as well as the associated capacity of
those elements and interfaces. Specific architecture design requirements, such as redundancy, and
MME and SGW pooling strategy are not presented in the LNCME. Using the estimated site
configurations for the area of interest and based on a certain traffic distribution, the initial
dimensioning allows us to calculate capacity and required configuration, and to estimate the
amount of hardware needed for each Network Element.
The mix of applications supported, traffic density, traffic growth estimates and QoS (Quality of
Service) requirements are important elements in the initial dimensioning exercise. Service Quality is
considered in terms of blocking probability and/or guaranteed throughput per subscriber.
Calculations are carried out for each service and the tightest requirement determines the hardware
needed for each NE.
For the EPC dimensioning, the guidelines are applicable only for “overlay networks”, i.e. when the
mobile operator deploys the network elements dedicated for LTE/EPC only.
Critical triggers should be monitored and engineering rules (for growth) should be followed once a
network is established.
Once the RF Network Planning exercise is done and the coverage area is known (number of sites,
site distribution pattern, cell size…), the site configurations in terms of LTE bandwidth, Antenna
system (MIMO scheme), DL Power, number of modem resources… have to be selected so that the
traffic density supported by the chosen configuration can fulfill the traffic requirements.
Note that the RF Network Planning is not in the scope of this document.
Traffic Distribution
Traffic distribution and service requirements form the basis for network planning and deployment.
These parameters are required for evaluating the resources required at each NE for a specific
planning/deployment scenario.
Bearer service and traffic distribution should also be taken into consideration. The more accurate
the traffic estimate, the more realistic the results achieved.
Scope of Network Elements and Network Interfaces
The LTE network elements covered in this document include (see chapter 1.2 for specific
restrictions):
eNodeB (eNB)
OAM (eUTRAN & ePC)
MME
S-GW
P-GW
HSS
AAA
PCRF
MBMS-GW
The LTE network interfaces covered in this document include:
Air interface
S1 & X2 (User & Control Plane), M1, and M3 interfaces
EPC Control plane interfaces: Gn, Gx, S3, S4, S5, S6a, S8, S9, S10, S11, S12, S13, SBc, SGi, SGi-
mb, SGmb, SGs, Sm, Sv, S102. Please refer to Section 2.1.4.8 and Section 3.7
OAM interfaces
3.1.1 OVERVIEW
Air interface capacity is not an element that is considered by default in the Call Admission Control
(CAC) algorithm. This means that the Air interface capacity will not directly limit the number of
users that may be accepted in a cell (this will be indirectly done by the number of users that can be
accepted per eNB modem & controller board – see Section 3.2 for details).
A CAC defense mechanism with CPU overload control and call pre-emption has been implemented since
LA5.0, in order to limit CPU utilization in critical situations. The CAC mechanisms that limit
the number of simultaneous users and bearers per cell, depending on real-time PRB resource usage,
still exist. The strategy used by CAC is to preserve stable calls and maintain the QoS for
existing services within the cell.
In LR13.1.L, controller CPU load improvements were introduced to reduce the load of the
processor hosting call processing functions, so as to manage as many RRC connected users as
possible for the LR13 reference traffic model while not exceeding a processor occupancy (PO)
of 75% on any core of the eCCM and eCCM2 general purpose processors.
In overload situations, some RRC Connection Request messages may be outdated by the time they are
treated by the Call Processing function. In order to avoid unnecessary processing and signalling,
these messages shall be discarded when the time spent in the eNB is greater than the contention
resolution timer.
Admission control has been enhanced to rely on real-time measurement reports from other
functions or layers, such as the number of users, real-time PRB consumption reports from the UL
and DL schedulers, the status of ongoing served QoS for existing calls, and UE radio conditions.
This mechanism is based on consumption rules for the different types of calls. These per-call-type
consumption values are fixed through OAM configuration parameters (not dynamically updated).
Air interface dimensioning first takes call blocking limitations into account. To do this, we
use the Erlang B law, which gives the number of simultaneously connected users for a given
blocking probability and traffic intensity hypothesis. This traffic intensity is derived from
the traffic model. We then use the maximum available cell throughput to determine the required
LTE bandwidth, considering the amount of throughput generated by these simultaneously connected
users for a given traffic model and according to the antenna system (MIMO scheme) and cell/sector
RF output power.
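As a minimal sketch of the Erlang B step (not a dimensioning tool), the standard iterative recursion can be used to find the number of simultaneously connected users needed to keep blocking below a target. The 20 Erl offered load and 2% blocking target below are illustrative assumptions, not figures from this document.

```python
# Erlang B blocking probability via the standard stable recursion:
# B(0) = 1;  B(n) = A*B(n-1) / (n + A*B(n-1))

def erlang_b(traffic_erl: float, servers: int) -> float:
    """Blocking probability for an offered load (Erlangs) over N servers."""
    b = 1.0
    for n in range(1, servers + 1):
        b = (traffic_erl * b) / (n + traffic_erl * b)
    return b

def servers_for_blocking(traffic_erl: float, target_blocking: float) -> int:
    """Smallest N (simultaneously connected users) keeping blocking <= target."""
    n = 1
    while erlang_b(traffic_erl, n) > target_blocking:
        n += 1
    return n

# Illustrative example: 20 Erl of offered traffic, 2% blocking target.
print(servers_for_blocking(20.0, 0.02))
```

The resulting user count, multiplied by the per-user throughput from the traffic model, is then compared against the cell throughput figures of Section 3.1.2 to select the LTE bandwidth.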
According to the traffic requirements in terms of number of subscribers and traffic per
subscriber during the busy hour, the computed traffic per sector shall be compared to the
capacity per sector of one LTE carrier to determine the number and type of resources that are
needed, considering GBR and non-GBR types distinctly. These two types of traffic are considered
regular (GBR) and bursty (non-GBR). For bursty traffic, a peak-to-average method is used.
We will not present the detailed CAC mechanism based on the measurement of PRB resources
consumed by the traffic generated by the currently connected users. This evaluation exercise
requires knowing the number of GBR and non-GBR bearers used to carry traffic and the average PRB
consumption per kbps. Basically, the CAC mechanism checks whether a new user can be accepted by
verifying, every second, the availability of PRBs to deliver a given amount of throughput.
The following figure illustrates the main inputs and outputs associated with an LTE air interface
dimensioning analysis.
Note that in this chapter only the Air interface capacity aspects will be addressed. Some
hypothesis/inputs used for Air interface capacity analysis are based on RF Network Planning outputs.
The RF Network Planning guidelines dealing with the Link Budget & Radio coverage aspects (cell
size, number of eNB sites & site positioning…) are provided in separate documents. See [C8].
Additional capacity figures from other sets of results (NGMN Case1 & Case 3) can be found in
3.1.2.1.1 & 3.1.2.1.2.
These figures depend on the radio assumptions, features and algorithms deployed (e.g. MIMO
scheme, interference coordination, scheduler algorithms), system bandwidth, RF conditions and
traffic pattern.
The capacity analysis consists in determining the required bandwidth (e.g. 3 MHz, 5 MHz, 10 MHz,
etc.) and the right features (e.g. is MIMO 2x2 required? Is a high power configuration required?)
to be deployed to support the traffic per sector. If some limitations are reached, the last
resort is to add sites to support the traffic demand. More details on this process are presented
in Section 3.1.3.
Note: Figures in the tables below are colour coded. Figures in light green have been propagated
from NGMN Case 3 and figures in darker green have been propagated from Multi-Techno Whitepaper.
Table 3-1 summarizes the uplink air interface capacities for 1 LTE carrier for 1 sector for air
interface bandwidths supported in LR14.1.L: 1.4MHz, 3MHz, 5MHz, 10MHz, 15MHz & 20MHz.
Capacities are provided for both 2-branch and 4-branch receive diversity.
It is important to note the following:
UL Capacity figures are for a mono-service traffic scenario, i.e. the VoIP capacities are
applicable assuming the carrier is supporting voice traffic only
The UL data capacities are average aggregate air interface throughputs
UL Capacity figures are for a regular deployment of 3 sector sites
These UL values are illustrative examples only, and may vary in field deployments
depending upon specific deployment conditions (e.g., variations in radio channel conditions,
overlay and antenna constraints, fixed wireless deployments, per-user performance
requirements, etc).
FDD Configuration     VoIP (C_VoIP_UL)    Data (C_Data_UL)
1.4 MHz - 2RxDiv      39.0 Erl            0.63 Mbps
3 MHz - 2RxDiv        100.0 Erl           1.64 Mbps
5 MHz - 2RxDiv        162.0 Erl           2.97 Mbps
10 MHz - 2RxDiv       324.0 Erl           6.02 Mbps
15 MHz - 2RxDiv       486 Erl             9.1 Mbps
20 MHz - 2RxDiv       648.0 Erl           12.3 Mbps
1.4 MHz - 4RxDiv      54 Erl              0.88 Mbps
3 MHz - 4RxDiv        140 Erl             2.3 Mbps
5 MHz - 4RxDiv        227 Erl             4.2 Mbps
10 MHz - 4RxDiv       454 Erl             8.4 Mbps
15 MHz - 4RxDiv       680 Erl             12.8 Mbps
20 MHz - 4RxDiv       906 Erl             17.2 Mbps
A 4RxDiv configuration can be supported over 10, 15 and 20 MHz with bCEM.
As more CPRI capacity is required for 15 and 20 MHz, eCCM2 usage is necessary.
For VoLTE, the grade of service on the air interface is not defined by blocking requirements as was
the case for GSM, WCDMA and CDMA systems. 3GPP evaluation and NGMN simulations used the
following requirement to derive the VoIP capacity:
A VoLTE user is in outage (not satisfied) if the 98th-percentile radio interface tail
latency of this user is greater than 50 ms. This assumes an end-to-end delay below 200 ms
for mobile-to-mobile communications.
The system capacity is defined as the number of users in the cell when more than 98% of the
users are satisfied.
Table 3-2 summarizes the downlink air interface capacities for 1 LTE carrier for 1 sector for air
interface bandwidths supported in LR14.1.L: 1.4MHz, 3MHz, 5MHz, 10MHz, 15 MHz and 20MHz.
Capacities are provided for SIMO-1x2, MIMO-2x2 and MIMO-4x2 configurations.
Configuration         VoIP (C_VoIP_DL)    Data (C_Data_DL)
1.4 MHz - SIMO 1x2    60.3 Erl            1.3 Mbps
3 MHz - SIMO 1x2      142 Erl             3.1 Mbps
5 MHz - SIMO 1x2      221 Erl             5.4 Mbps
10 MHz - SIMO 1x2     444 Erl             11.0 Mbps
15 MHz - SIMO 1x2     659 Erl             16.5 Mbps
20 MHz - SIMO 1x2     869 Erl             22.0 Mbps
MIMO 4x2 configurations are not supported in LR14.1.L (grey cells in Table 3-2).
DL Capacity figures are for a mono-service traffic scenario, i.e. the VoIP capacities are
applicable assuming the carrier is supporting voice traffic only
The DL data capacities are average aggregate air interface throughputs
DL Capacity figures are for a regular deployment of 3 sector sites
These DL values are illustrative examples only, and may vary in field deployments
depending upon specific deployment conditions (e.g., variations in radio channel conditions,
overlay and antenna constraints, per-user performance requirements, etc).
All the capacity figures quoted in 3.1.2.1.1 & 3.1.2.1.2 implicitly assume a uniform deployment
of 3-sector sites. In order to apply these figures to other site configurations, the capacities
must be scaled accordingly. The next table summarizes the sectorization factors (SF) to be
applied for omni, bi-sector, tri-sector and hexa-sector site configurations. These factors are
based on system level simulation results.
Number of Sectors    Sectorization Factor (SF)
1                    1.13
2                    1
3                    1
4                    0.95
6                    0.85
The capacity figures from Table 3-1 and Table 3-2 must be multiplied by the sectorization factor to
get the corresponding air interface capacity for a given sector configuration:
Capacity_NSect = Capacity_3Sect x SF_NSect
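The scaling rule above can be sketched as follows. The SF values are copied from the table; the 11.0 Mbps reference is the 10 MHz SIMO 1x2 downlink data figure from Table 3-2, and the function name is ours.

```python
# Sketch: scale 3-sector reference capacities to other site configurations
# using the sectorization factors from the table above.

SECTORIZATION_FACTOR = {1: 1.13, 2: 1.0, 3: 1.0, 4: 0.95, 6: 0.85}

def capacity_n_sect(capacity_3sect: float, n_sectors: int) -> float:
    """Capacity_NSect = Capacity_3Sect * SF_NSect (per-sector capacity)."""
    return capacity_3sect * SECTORIZATION_FACTOR[n_sectors]

# Example: 10 MHz SIMO 1x2 DL data capacity (11.0 Mbps per sector, 3-sector ref.)
omni_dl_mbps = capacity_n_sect(11.0, 1)   # 11.0 * 1.13 ≈ 12.43 Mbps
hexa_dl_mbps = capacity_n_sect(11.0, 6)   # 11.0 * 0.85 ≈ 9.35 Mbps
print(omni_dl_mbps, hexa_dl_mbps)
```

The same scaling applies to the VoIP Erlang figures of Table 3-1 and Table 3-2.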
The main inputs required for performing an interface capacity analysis include:
The number of attached subscribers may also be determined from the cell surface information
and the subscriber density per km² (input provided by the customer):
NSubs = Cell_Surface * Subs_Density
Traffic Profile per subscriber
- VoIP services
o VoIP Traffic Intensity, AVoIP(i), in Erl: Traffic intensity is a typical input in the
dimensioning procedure that is computed from Call model inputs – see Table 3-4.
o Nominal UL and DL VoIP services Rates, RVoIP(i)_Nom UL/DL. Associated blocking
probability BlVoIP(i), to get a connection at this rate, in %.
The nominal VoIP rates when actively transmitting during on-time, taking into
account the VAF (voice activity factor), in kbps; see Table 3-4 for the definition.
These services that guarantee the bit rate can be used for VoIP, as in the call
model example described in Table 2-4.
- GBR services
o Nominal UL and DL for GBR Data services Rates, RGBR(i)_Nom UL/DL. Associated blocking
probability BlGBR(i), to get a connection at this rate in %.
The nominal GBR rates when actively transmitting during on-time, in kbps; see
Table 3-4 for the definition.
These services that guarantee the bit rate can be used for VoIP services but also
for premium services like video streaming, as in the call model example described
in Table 2-4.
o GBR Traffic Intensity, AGBR(i), in Erl
Traffic intensity is a typical input in the dimensioning procedure that is
computed from Call model inputs – see Table 3-4.
- UL and DL VoIP and Data Air Interface Sector throughput, as defined in Section 3.1.2.
o C_VoIP_UL/DL capacity for VoIP in Erl, and the capacity for data C_Data_UL/DL in Mbps,
o These inputs are dependent upon:
Carrier Bandwidth
MIMO/RxDiv configuration
VoIP and GBR services are such that the application packets are sent at a fixed rate;
thus VoIP and GBR services exhibit non-bursty traffic characteristics. The VoIP / GBR
equivalent bit rate can be computed easily, assuming a volume of information is exchanged
while the user is active, i.e. during the call duration.
In contrast, non-GBR traffic characteristics are highly bursty, and it is not as easy to
assess the non-GBR services' equivalent bit rate.
o Definition: The Peak to Average ratio (PtA) is an overflow factor that is used to
estimate the equivalent bandwidth for a set of variable bit rate calls. For packet-based
calls with variable bit rate, an equivalent bandwidth K can be defined as a value
somewhere between the mean and the peak bandwidth requirement for that call. If a
transport interface has size C, then N calls of this type can be accepted on the
transport interface as long as N * K <= C.
o Equivalent bandwidth service (i) = PtA(i) * Mean bit rate service (i).
The graph below illustrates the peak-to-average traffic aggregation concept.
o Further details about the computation of PtA can be found in reference [C8]. Peak to
Average ratio (PtA) depends basically upon the following factors:
o Non-GBR service mean rate (this figure is retrieved from the traffic model; refer to
Table 2-4 of Section 2.2.1.1).
o Non-GBR service peak rate (this figure is retrieved from the traffic model; refer to
Table 2-4 of Section 2.2.1.1).
o Target packet loss for non-GBR service, assuming peak to average aggregation is in use.
Typical figure is in the range of 1%.
o Transport interface available bit rate (this is the capacity of the trunk used to
aggregate non-GBR services). Referring to the figure below, it is mostly retrieved
from the traffic model with the following formula: C_Non-GBR = C_Data –
C_VoIP – C_GBR, where the C_VoIP and C_GBR figures are retrieved from the traffic model
(Table 2-4 of Section 2.2.1.1). Indeed, C_VoIP and C_GBR are equal to the nominal rates
of the VoIP and GBR services (see Figure 3.1-4).
As described above, the peak to average ratio is mostly computed from figures
retrieved from the traffic model. However, in some specific use cases, a user-defined PtA
figure can be provided by the customer.
- PtA_R
PtA_R is the peak to average ratio to apply at air interface level.
For each air interface bandwidth a specific PtA_R has to be computed, as the air interface
equivalent C_Data available throughput is air interface bandwidth specific.
- PtA_B
PtA_B is the peak to average ratio to apply at the Backhaul interface level (S1, X2, M1, M2,
M3 and OAM interfaces). Similar to PtA_R, a specific PtA_B has to be computed for each air
interface bandwidth.
The PtA computation can be performed with different methods, detailed in
[C8] (these equivalent methods provide similar results). The PtA value may either be
provided as part of a call model, or a standard value can be used instead (a PtA value up to
1.4 is acceptable):
Engineering Recommendation: PtA Value used in Air Interface Dimensioning Exercises (PtA_R)
PtA that is used in Air Interface dimensioning exercises is aligned with the following formula:
As an example, in FDD, considering the available link rate provided by the Air interface (see
CData_UL/DL values provided in Section 3.1.2), the example traffic model description in Section
2.2.1.1, and all the parameters defined in Section 3.1.3.1 and tables in Section 3.1.2
calculated with 10 MHz cell capacity limit assuming 500 subscribers:
Table 3-4: Call Model to Air Interface Dimensioning Elements Conversion Table
As shown in Figure 3.1-2, the method used for Air interface dimensioning consists of comparing the
VoIP, GBR Data and non-GBR Data resources required by a specific number of subscribers with the
VoIP and data capacity offered by the LTE cell (tables provided in Section 3.1.2).
For VoIP and GBR data, the capacity required per cell is the number of
subscribers multiplied by the traffic load per subscriber (Erlang).
For non-GBR data services, the required capacity per cell is computed using a traffic aggregation model.
This model derives the capacity (in this case a throughput, in Mbps) required
for a given number of users with a given traffic mix. Different types of traffic aggregation models
can be applied:
Overbooking factor or Peak to Average factor applied to average Data traffic throughput (This is
the preferred model due to its simplicity.)
Other models (more precise, but also requiring more complicated computation) where some
grade of service (GoS) constraints are imposed. Such GoS requirements can consist of
dimensioning the system not to exceed specific blocking rate or transmission delay:
- Erlang C, if (delay, probability of delay) is defined as the GoS [C8].
- Multiservice Erlang (based on Kaufman-Roberts), if blocking requirements are imposed.
Note: these GoS-based models are good approaches when more accurate traffic modeling is
required (particularly when there is a single measure of air interface capacity). Using these
methods implies tedious computation (requiring specific dimensioning tools).
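As an illustration of the second family of models, the textbook Kaufman-Roberts
recursion can be sketched as follows (this is the generic algorithm, not ALU tool code;
capacity and per-service bandwidth are expressed in generic bandwidth units, and the
inputs are hypothetical):

```python
# Kaufman-Roberts recursion for multiservice blocking on a link of integer
# capacity C. Each service i offers a_i Erlangs and needs b_i capacity units.

def kaufman_roberts(capacity, services):
    """services: list of (offered_load_erl, bandwidth_units) pairs.
    Returns the per-service blocking probabilities."""
    q = [0.0] * (capacity + 1)
    q[0] = 1.0
    # Unnormalized occupancy distribution: q[j] = (1/j) * sum_i a_i*b_i*q[j-b_i]
    for j in range(1, capacity + 1):
        q[j] = sum(a * b * q[j - b] for a, b in services if j >= b) / j
    total = sum(q)
    q = [x / total for x in q]
    # Service i is blocked in any state with fewer than b_i free units.
    return [sum(q[capacity - b + 1:]) for _, b in services]

# With a single service of unit bandwidth, this reduces to Erlang B:
# 2 Erl on 2 channels gives 40% blocking.
blocking = kaufman_roberts(2, [(2.0, 1)])
```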
In this document only the first option (based on overbooking factor) will be presented.
Note: The Air Interface dimensioning method described below is well adapted to a mix of VoIP, GBR
and Best Effort data traffic. (See Section 2.2.1.1, LTE Call Model Elements.)
In a first step, the total number of Erlangs of VoIP services (AVoIP) and the total number of Erlangs
of GBR services (AGBR) are computed as follows:
In a second step, the equivalent guaranteed throughput (Kbps) for VoIP services
(REquiv_Voip_Air_UL/DL), the equivalent guaranteed throughput (Kbps) for GBR data services
(REquiv_GBR_Air_UL/DL), and the equivalent “peak” non-GBR data throughput (Kbps) for data services
(RNonGBR_Equiv_Peak_Air_UL/DL) are computed as follows:
REquiv_VoIP_Air_UL/DL = Inv_Erl(AVoIP @ BlVoIP) x RVoIP_UL/DL
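Reading Inv_Erl(A @ Bl) as the inverse Erlang-B function (the smallest number of
simultaneous channels that carries A Erlangs at blocking Bl or better), the formula can be
sketched as follows (a minimal illustration; the offered load and blocking figures used in
the example are placeholders, not call model values):

```python
def erlang_b(a: float, n: int) -> float:
    """Erlang-B blocking probability for offered load a (Erl) on n channels,
    using the numerically stable iterative recursion."""
    b = 1.0
    for k in range(1, n + 1):
        b = a * b / (k + a * b)
    return b

def inv_erl(a: float, target_blocking: float) -> int:
    """Inv_Erl(a @ target): smallest channel count n whose Erlang-B
    blocking is at or below the target."""
    n = 0
    while erlang_b(a, n) > target_blocking:
        n += 1
    return n

# Equivalent guaranteed VoIP throughput (kbps), per the formula above:
# R_Equiv_VoIP = Inv_Erl(A_VoIP @ Bl_VoIP) * R_VoIP
# Hypothetical example: 10 Erl of VoIP at 1% blocking, 12.65 kbps per channel.
r_equiv_voip = inv_erl(10.0, 0.01) * 12.65
```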
Based on the VoIP, GBR/non-GBR data traffic to support, and the multi-service VoIP, GBR/non-GBR
Data capacity figures for a given bandwidth and MIMO/RxDiv configuration, CVoIP_UL/DL, CData_UL/DL
(provided in tables in section 3.1.2), the next step is to determine whether the VoIP, GBR/non-GBR
Data capacity can be supported and which bandwidth and MIMO/RxDiv configuration is required.
A simple linear rule can be used to derive the system capacity in mixed VoIP, GBR /non-GBR Data
traffic conditions.
The air interface load is computed on the UL and DL independently for VoIP and Data traffic
(LVoIP_UL/DL and LData_UL/DL):
LVoIP_UL/DL (%) = AVoIP / CVoIP_UL/DL
LData_UL/DL (%) = (RnonGBR_Equiv_Peak_Air_UL/DL+ REquiv_GBR_Air_UL/DL)/ (1000 x CData_UL/DL )
For a given bandwidth and MIMO/RxDiv configuration we compute the total UL and DL air interface
loadings, LUL and LDL:
LUL/DL (%) = LVoIP_UL/DL + LData_UL/DL
The capacity check is performed to ensure the air interface capacity is not exceeded:
Max(LUL, LDL) ≤ 90%
Note: Based on the above algorithm, a recursive computation may be used in order to obtain
the maximum number of users supported on the Air interface. For a given LTE bandwidth
and a given traffic profile, we can increase the number of users per cell until the 90% load
limit is reached (Max(LUL, LDL) = 90%). We use 90% instead of 100% of the bandwidth in order
to leave resources for high priority calls such as emergency calls and handovers. The LTE
bandwidth is then no longer an unknown in the equation, but an input (the recursive
computation being run for a given LTE bandwidth).
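The recursive computation described in the note can be sketched as follows, per link
direction (the per-subscriber figures in the example are hypothetical placeholders, not
call model values from this document):

```python
# Sketch of the iterative user-count search: grow the subscriber count until
# the total air-interface load would exceed the 90% limit.

def air_load(n_subs, a_voip_per_sub, c_voip_erl,
             r_data_kbps_per_sub, c_data_mbps):
    """Total load L = L_VoIP + L_Data for one link direction,
    per the formulas above."""
    l_voip = n_subs * a_voip_per_sub / c_voip_erl
    l_data = n_subs * r_data_kbps_per_sub / (1000.0 * c_data_mbps)
    return l_voip + l_data

def max_users(a_voip_per_sub, c_voip_erl, r_data_kbps_per_sub,
              c_data_mbps, limit=0.90):
    """Largest subscriber count whose load stays at or below the limit.
    Run once for UL and once for DL, then take the minimum."""
    n = 0
    while air_load(n + 1, a_voip_per_sub, c_voip_erl,
                   r_data_kbps_per_sub, c_data_mbps) <= limit:
        n += 1
    return n

# Hypothetical example: 0.01 Erl and 50 kbps equivalent data per subscriber,
# on a cell offering 50 Erl of VoIP capacity and 30 Mbps of data capacity.
n_max = max_users(0.01, 50.0, 50.0, 30.0)
```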
Once the carrier bandwidth is found, it is important to size the downlink power amplifier to ensure
sufficient DL power resources to match the target imposed by the uplink coverage.
The required Power Amplifier (PA) sizing recommendation is based on a series of system simulation
studies. All scenarios considered 2x2 MIMO on the DL and 2RxDiv on the UL.
In principle, all the studies concluded that spectrum efficiency for “reasonable” cell sizes is
relatively invariant to the choice of PA size, and that edge rates become much more sensitive to the
choice of power at large cell radii. For details please refer to [C8].
Table 3-5 summarizes the recommended PA sizing based on the observations from the above
mentioned study. These PA sizing recommendations are independent of the frequency band.
3.1.3.4.1 eMBMS
eMBMS is the broadcast technology for LTE. It enables the use of a single set of radio resources to
deliver content to an unlimited number of connected and idle UEs. Because of the broadcast nature
of eMBMS, and the lack of unicast-broadcast interaction at the moment, broadcast services are
explicitly scheduled by operators, rather than enabled dynamically based on subscriber demand. Of
course subscriber interest based on market study is an important factor behind operators’
scheduling decisions, but this is part of broadcast service planning and not based on dynamic
subscriber consumption activity – the latter being a critical component behind unicast traffic
models.
Given this fundamental difference, there is no market where one can sample broadcast services to
obtain a meaningful average or characteristic broadcast loading. There is also little meaningful
relationship between broadcast traffic loading and the notion of busy hours – the latter is again
driven by subscribers’ average dynamic behavior, and is a critical concept for unicast traffic models.
A meaningful way to look at loading and impact due to broadcast traffic is to consider likely or
realistic use cases. As different operators may deploy eMBMS for different use cases, an average
broadcast model would tend to be more meaningful when applied to a specific operator’s network.
It is worth noting the three main aspects of loading associated with eMBMS. One is the loading due
to broadcast bearer data carrying actual broadcasted content – this is the bulk of the loading due to
eMBMS. Another is the loading due to signalling data such as to manage eMBMS sessions (starting,
stopping, updating session) and their resources within the AN. Then there is the loading due to
associated delivery procedures and security procedures. Associated delivery procedures refer to file
repair and file reception report procedures, where UEs request missing bytes from servers or report
number of files received successfully to servers – these interactions mostly occur over unicast
bearers in both uplink and downlink directions. Security procedures refer to user authentication,
service registration, authorization, and encryption key request/delivery procedures necessary to
secure (i.e. encrypt) the delivery of broadcasted data – these interactions only occur over unicast
bearers in both uplink and downlink directions.
It is also worth noting the concept of eMBMS capacity, which at a high level consists of three
dimensions – eMBMS spectral efficiency, percentage of carrier used for eMBMS, and total number of
cells participating in broadcast transmission.
eMBMS spectral efficiency is expressed in bps/Hz, and is largely the result of choosing the single MCS
that meets the coverage criterion of N% of receivers in a cell achieving an average target BLER of X% or
better (keep in mind that the MCS for a physical broadcast channel is fixed once chosen, and does
not adapt to individual UE conditions). The eMBMS spectral efficiency is determined by morphology,
ISD, etc., and differs from the unicast spectral efficiency of the same cell.
The percentage of the carrier used for eMBMS is determined by broadcast service requirements, eMBMS
spectral efficiency, and unicast demand. A lower eMBMS spectral efficiency requires a greater
percentage of subframes to be allocated for broadcast to meet the same capacity required per service
definition, taking away more resources from unicast traffic. 3GPP allows a maximum of 60% of a
carrier to be allocated to broadcast. Since eMBMS subframes cannot be used by any unicast
traffic, all unicast services (GBR services, BE services) are impacted.
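A rough sketch of how the subframe share follows from spectral efficiency and the 60%
cap (an illustrative model only; it ignores reference-signal and control overheads, and the
input figures are hypothetical):

```python
# Fraction of carrier subframes needed for broadcast, given the broadcast
# capacity required by the service definition and the eMBMS spectral
# efficiency. 3GPP caps the broadcast allocation at 60% of the carrier.

def embms_subframe_share(required_mbps, spectral_eff_bps_hz, bw_mhz):
    """Return the subframe fraction needed, or None if the demand cannot
    fit under the 60% cap (simplified model)."""
    carrier_mbps = spectral_eff_bps_hz * bw_mhz  # MHz x bps/Hz = Mbps
    share = required_mbps / carrier_mbps
    return share if share <= 0.60 else None

# Hypothetical example: 3 Mbps of broadcast content at 1 bps/Hz on 10 MHz
# needs 30% of the carrier; 7 Mbps would exceed the 60% cap.
share_ok = embms_subframe_share(3.0, 1.0, 10)
share_too_big = embms_subframe_share(7.0, 1.0, 10)
```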
For every cell intended to deliver broadcast content, there would typically be one or two tiers of
surrounding cells consuming the same amount of radio and backhaul resource for eMBMS (as the
cells directly intended to deliver broadcast traffic) in order to provide sufficient eMBMS capacity
through synchronized transmission, or through participating in MBSFN. The total number of cells
impacted by broadcast service is thus typically more than just the cells directly within the intended
geographical broadcast distribution area.
For the loading due to broadcast bearer data, three eMBMS use cases are considered: in-venue
broadcast video delivery, national broadcast video delivery, and national broadcast file delivery.
Higher capacity: eMBMS is supported with unicast user capacity aligned with the LR14.1 capacity
defined in the specific SW capacity features for the release.
Multi-carrier configuration: eMBMS support in any of the carriers supported by the eNB,
except B29 and unpaired spectrum.
In addition to supporting Voice over IP (VoIP) bearers from commercially deployed UEs using the
Dynamic Scheduler (DS), supported on both eCEM and bCEM boards, LR13.1.L also introduces on the
bCEM board the 3GPP Semi-Persistent Scheduling (SPS) technique in the eNodeB Medium Access
Control (MAC) layer.
Semi-persistent scheduling (SPS) is used for VoIP traffic to avoid reaching the grant capacity limit on
the PDCCH. With the Dynamic Scheduler (DS), grants are issued for each scheduled packet as well as
downlink (DL) HARQ retx. With SPS, grants are issued only for SPS activation/release and DL HARQ
retx.
SPS does not necessarily improve the voice traffic channel capacity compared with DS. SPS improves
the ultimate number of users (Data + Voice) that can be supported.
Use of SPS reduces the PDCCH consumption, which in turn allows an increase in the number of data
users accessing the cell and improves the cell throughput for non-VoIP applications.
The objective of this LR14.1.L feature is to double the capacity for VoIP Service as compared to
LR13.x release, while maintaining the same VoIP QoS.
In LR14.1.L, it will be possible to achieve a VoIP capacity of 200 VoIP bearers per cell on >= 10 MHz
and 100 VoIP bearers per cell on 5 MHz system BW (assuming no Carrier Aggregation (CA)).
The VoIP capacity target of the feature can be reached with the following assumptions:
• RoHC compression (>90% compression ratio), Voice activity detection based DTX and
efficient BW mode RTP supported in the UE as well as the network.
• VoIP packet inter-arrival time of 20msec and effective VAF is 50% which includes the
transmission of Silence indication descriptor (SID) and RoHC speech transition frames with
random talk spurt start between all users (unsync speech).
• Fixed AMR-WB 12.65k codec rate; rates above that will incur a capacity penalty, as would a
BCPS VoIP bearer carrying multiplexed packets from N users (VzW).
• No voice codec rate adaptation during speech burst, and UE VoIP client will not apply
autonomous frame bundling in RTP layer. The client will also only send RTCP traffic during
call hold time as per GSMA IR.92 specification.
• Typical inter-site distance of 1 km (or 500 m cell radius, so needing only RACH burst format
0), low speed such as 3 km/h, and pedestrian channel types.
• Features like OP-PUCCH (VzW), eMBMS and eICIC that can limit the air interface resources are
assumed to be disabled. However, MAC-DRX can be configured for achieving the capacity
target.
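Under the assumptions above, a back-of-the-envelope mean speech payload rate per VoIP
user can be sketched as follows (illustrative only; the SID and RoHC transition-frame
contributions are folded into the 50% effective VAF as the text describes, and L1/L2
overheads are ignored):

```python
# Rough per-user VoIP payload rate under the capacity-target assumptions:
# fixed AMR-WB 12.65k codec, 20 ms packet inter-arrival, 50% effective VAF.

CODEC_KBPS = 12.65       # AMR-WB 12.65k codec rate
EFFECTIVE_VAF = 0.50     # effective voice activity factor, incl. SID/RoHC frames

def mean_voip_rate_kbps(codec_kbps=CODEC_KBPS, vaf=EFFECTIVE_VAF):
    """Mean speech payload rate per user, averaging talk and silence."""
    return codec_kbps * vaf

# 200 bearers on a 10 MHz cell then carry roughly 1.27 Mbps of speech payload
# (air-interface resource cost is dominated by grants/PRBs, not this figure).
aggregate_kbps = 200 * mean_voip_rate_kbps()
```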
For VoIP capacity target evaluation, the following features are assumed to be present in the same
load and activated if an activation flag exists:
• RoHC v1 Supported for VoIP, LA6.0.1 – RoHC is required to reach the VoIP capacity target
and to improve cell edge UL performance
• Commercial VoLTE with SPS, LR13.1 – SPS is required to relieve grant capacity limit on
PDCCH for VoIP traffic to reach the VoIP capacity target
• PRACH Reservation Removal LR14.1 – This feature is required to lift the restriction on bCEM
to reserve RACH Msg1 PRB for an extra TTI for RACH burst formats 0 and 2
• eNB SW Capacity, LR14.1 – This feature is required to provide enough ACK/NACK PUCCH
resources for DL SPS, and to provide flexible TPC PUSCH resources and DCI3 grants required
for UL SPS power control
• MCS Override, LA6.0 – This feature is required to avoid segmentation for 12.65k codec
speech packets, to conserve grants in case of high IoT or large path loss. An MCS10, 2 PRB
grant size is to be used
• TTI bundling Phase 2, LR13.1 – This feature is required to improve performance in cell edge
when segmentation is disallowed. 1 PRB grant size instead of 2 for TTI bundling should be
used
For VoIP capacity target evaluation, the following features are assumed to be disabled:
• OP PUCCH Enhancement, LA3.0 – This feature has the potential to limit capacity due to PHR
constraints and PUSCH zone fragmentation.
• eMBMS with 3GPP Rel10/Rel11 Enhancements, LR14.1 – Due to the number of TTIs blocked by
MBSFN in the DL direction, it is estimated that the VoIP capacity target may not be reachable
with > 30% eMBMS allocation density
• CA Enhancements, LR14.1 – When CA is enabled, current testing results indicate the number
of SPS users that can be supported is greatly reduced, hence limiting the VoIP capacity
• eICIC Trial, LR14.1 – When eICIC is enabled, TTI flagged as ABS will not be used to send
grants for traffic. This will reduce the UL/DL air-interface capacity including VoIP capacity.
This feature is introduced in LR13.1.L; it enables the eNB to monitor and control the active
throughput performance of non-GBR radio bearers in the RF schedulers (DL and UL), so that the user
experience becomes more predictable. The active throughput is defined as the throughput achieved
while the radio bearer carries data.
In addition, the feature also allows eNB to differentiate resource reservation for different nonGBR
QCIs.
This feature enhances the standard non-GBR QoS, which has no minimum rate control, only relative
priority differentiation. Non-GBR QCIs can be used to carry content-rich services which are often
bursty and variable-rate in nature, but which have a certain minimum throughput tolerance level when
the traffic is active.
When the active throughput approaches or falls below the minimum throughput target, the
scheduler applies a dynamic weight on the bearer to help schedule the traffic faster. Once the
active throughput well exceeds the minimum throughput target, the additional weight is removed.
The feature only applies to bCEM based eNB. The feature is NOT offered to eCEM based eNB and the
feature activation flag is: isNonGBRMinRateEnable.
Prior to LR13.1, load balancing is triggered based on estimated PRB consumption across all types of
channels (SRBs, TRBs, control channels). For non-GBR bearers, the estimated PRB consumption is
calculated partly from a configuration parameter (minBitRateForBE). For GBR bearers, the
estimated PRB consumption is calculated using the actual GBR.
In LR13.1.L, this feature aims at providing a criterion for triggering load balancing upon radio
congestion, as well as introducing enhancements to the LA5.0 load balancing solution requested by
customers.
In addition to the existing criterion, this feature allows customers to trigger load balancing based on
real/actual PRB consumption and average QoS degradation. This provides customers with more
flexibility with regards to non-GBR bearers: it allows admitting a large number of non-GBR bearers in
a cell (e.g. by setting minBitRateForBE to 0kbps) whilst taking full advantage of its spectrum assets
through load balancing in periods of congestion when QoS degradation becomes visible. It also
allows triggering load balancing in periods of high activity, even if the calculated/semi-static PRB
consumption remains below the preventive offload threshold. In LR13.1, QoS degradation is
measured in terms of DL throughput for non-GBR bearer.
Early equalization of load between LTE carriers is allowed when the load delta between the target
and serving cell is above a configurable threshold (target cell load is lower than serving cell load).
This requires a new load threshold in advance of the preventive offload thresholds, so that load
equalization between LTE carriers can be triggered very early. In this case, the load is in terms of
semi-static PRB usage only.
nGBR QoS Based Load Balancing aims at providing a new criterion for triggering load balancing upon
radio congestion, as well as introducing enhancements to the LA5.0/LR13.1.L load balancing
solution requested by customers.
In addition to the existing criterion, this feature allows triggering of load balancing based on
real/actual PRB consumption and average QoS degradation. This provides customers an additional
flexibility with regards to non-GBR bearers, as it makes possible admitting a large number of non-
GBR bearers in a cell (for instance by setting ulMinBitRateForBE = dlMinBitRateForBE = 0kbit/s)
whilst taking full advantage of its spectrum assets through load balancing in periods of congestion
when QoS degradation becomes visible.
It also allows triggering load balancing in periods of high activity, even if the calculated/semi-static
PRB consumption remains below the preventive offload threshold. Starting from LR13.3.L, QoS
degradation is measured in terms of downlink throughput for a non-GBR bearer.
Looking strictly at real PRB consumption is not enough as a few UEs can take the whole band, so
there must be some level of QoS deficit to confirm the occurrence of congestion:
If the real PRB consumption is high but there is no QoS degradation = No congestion
If the real PRB consumption is high and there are signs of QoS degradation = Congestion
If the real PRB consumption is low = No congestion
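The three rules above reduce to a simple conjunction, which can be sketched as:

```python
def is_congested(real_prb_high: bool, qos_degraded: bool) -> bool:
    """Congestion is declared only when high real PRB consumption is
    confirmed by QoS degradation, per the three rules above."""
    return real_prb_high and qos_degraded

# High PRB but no QoS degradation  -> no congestion
# High PRB and QoS degradation     -> congestion
# Low PRB (either way)             -> no congestion
```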
This Feature mainly impacts the Callp eNB subsystem, with minor impact on the modem. It is
applicable to eCCM and eCCM2.
This feature is aimed at being used in a multi-carrier environment, i.e. where there is more than
one carrier deployed (LTE and possibly from other RATs). It applies to both inter-frequency and
inter-RAT load balancing, and may work with a multi-carrier or multi-band eNB, or operate in
environments where two separate eNBs are co-located or deployed in overlapping coverage.
Prior to this feature, all neighbour carriers (both LTE and WCDMA) use the same set of preventive
offload thresholds.
This feature enhances preventive load balancing by allowing prioritization of neighbour carriers for
preventive offload.
With this feature, the operator can define up to 4 different sets of preventive offload thresholds,
where each set of per carrier load thresholds can include one or more of the following:
This feature enables the operator to define separate pairs of UL/DL thresholds for 4 different ARP
priority classes, where a priority class is defined as a range of ARPs and the PRB threshold is the
maximum percentage of PRBs consumed. The thresholds per ARP priority class are used at new call
or bearer admission. The operator is also able to define a separate UL/DL threshold pair used for
call re-establishment and incoming mobility.
This feature enables the operator to improve VoIP user experience by selecting non-VoIP (that is,
non-QCI1) bearers/UEs before selecting QCI1 bearers/UEs for preventive offload and load
equalization.
Enhancements in the way incoming calls and bearers are rejected in case of admission control
failure. In this case it will be possible to trigger redirection (instead of RRC reject) or to use the new
3GPP Release 11 de-prioritization mechanism when an RRC connection request is rejected.
(Auto ACB) Automatic access class barring (SIB2) triggered by a RACH message storm (in addition to
eNB overload), in order to avoid the negative effects of RACH message overload on the system.
Improvements in UE selection algorithm for reactive & preventive offload; the goal is to allow
reactive offload to work even when pre-emption is deactivated, as well as introduce UE selection
based on UE position in the cell for offload.
Improvements to the PRB Call Admission Control logic for better operational control.
Improvement in UTRAN neighbor cell selection in case of reactive offload to avoid sending the UE to
overloaded UTRAN cells or carriers.
• Allow prioritized access to the LTE network in case of high RACH arrival rate, reserving
resources for higher priority subscribers.
• Take into account UE position for offload, by favoring UEs that are connected in cell center
or at cell edge during UE selection for offload.
• Better control over PRB-based Call Admission Control, providing more flexibility to the
customer in terms of how aggressively it wants to use CAC.
3.1.3.4.8 Remove extra PRACH reservation and align UL/DL frames at antenna connector
This feature is mainly part of the global plan to reach the 200 users of VoLTE for 10MHz.
This feature provides support for UL and DL frame alignment at the antenna (and no longer at the
modem) in the case of a Macro eNodeB with bCEM, and consequently allows the removal of the
reservation of 6 PRBs for RACH message 1 for an extra TTI, which helps avoid collisions with uplink
PUSCH signals from other UEs.
This will also allow sending TA commands for up to 100 km on air (“full” PRACH format 3 support).
Currently, for a macro eNodeB with bCEM, the modem calculates the TA (timing advance) such that the
UL frame is aligned to the DL frame at the modem.
The upper part of the figure above shows the bCEM view, where the frame indices of UL and DL
are aligned.
The lower part of the same figure shows the “view” at the antenna tip, where the UL is
advanced and the DL delayed due to HW delay.
This approach can lead to initial RACH messages being received much later with respect to the UL
sub-frame windows at the modem. As a result, an extra 6 PRBs needs to be reserved (associated with
the extra TTI for RACH message 1) to avoid collisions with uplink PUSCH signals from other UEs.
The reservation of 6 extra PRBs prevents the VoIP capacity targets from being achieved. If the DL and
UL frames are aligned at the antenna, no extra PRB reservation for RACH is required, thus
improving the VoIP capacity on bCEM.
On the other hand, with the UL & DL timings aligned at the modem, RACH format 3 can only support
up to 70 km instead of 100 km.
The solution was to move the time reference to the antenna; as a result, the frames at the
bCEM are no longer aligned.
To align UL frame boundary with DL frame boundary at the antenna, modem will need to delay its
reception window by MU UL_FRAME_OFFSET (uplink delay between cell antenna and modem). The
values of DL_FRAME_OFFSET and UL_FRAME_OFFSET are determined during cell setup.
As a result, a proper modem-to-UE Timing Advance (compensating the UE round-trip air travel time
based on the UE location, without L2 TA compensation of twoWayEnodeBHwDelay) allows saving
the 6 PRBs previously blocked to avoid collisions with uplink PUSCH signals from other UEs.
3.2.1 OVERVIEW
In LR14.1.L, the following Controller and Modem boards are supported within the 9926 eNB digital
module (also called Baseband Unit), in full compliance with 3GPP standards:
There is no support for the eCEM since LR13.3.L. All eCEM boards must be swapped out to
bCEM in LR13.1.L before upgrading to a higher release.
As can be seen in Figure 3.2-1, the baseband unit of the LTE eNB is equipped with:
1 Controller board providing the interface to the backhaul (through the GigE interface) and to the
radio modules (CPRI links). It also handles most of the Control Plane eNB activity
Up to 3 Modem boards ensuring the User Plane processing
3 cells of the eNB can be handled on a single Modem (1 modem is required for a 3-cell eNB)
Up to 3 bCEM per BBU are supported in LR14.1.L.
The dimensioning of the Alcatel-Lucent eNB is mainly based on the dimensioning of the 9926
digital module and the dimensioning of the eNB backhaul interfaces (S1 & X2). The interface
dimensioning is addressed in Chapter 3.5. This chapter focuses on the baseband unit
dimensioning.
The dimensioning of this digital module in terms of number of boards is driven by the number of
cells the eNB needs to handle (see Table 3-6). The number of LTE cells to be supported on a given
eNB is an input coming from Radio Planning/Design.
Number of cells   Controller board   Number of Modem boards
1 - 3             eCCM/eCCM2         1
4 - 6             eCCM/eCCM2         2
7 - 9             eCCM2              3
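The table reduces to a simple rule (one modem board per 3 cells, and only the eCCM2
controller above 6 cells), which can be sketched as follows (an illustrative helper, not an
ALU tool function):

```python
import math

def bbu_boards(n_cells: int):
    """Controller type and bCEM modem count for n_cells (1..9), per Table 3-6.
    One modem handles up to 3 cells; above 6 cells only the eCCM2 applies."""
    if not 1 <= n_cells <= 9:
        raise ValueError("1 to 9 cells supported in LR14.1.L")
    modems = math.ceil(n_cells / 3)
    controller = "eCCM2" if n_cells > 6 else "eCCM/eCCM2"
    return controller, modems

# Example: a 5-cell eNB needs 2 bCEM modems and either controller type.
ctrl, n_modems = bbu_boards(5)
```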
Once the eNB configuration in terms of number of digital boards is chosen, an additional capacity
analysis needs to be performed in order to check that the eNB digital boards are capable of handling
the target number of users and data bearers (for a given call model) while ensuring the required
performance/throughput targets.
These capacity limitations (Number of Users, Data Bearers and Throughput) are directly related to
the traffic that the Controller and Modem are supporting:
In this document we focus on the number of active users & connected users that the controller
and the modem can support (and the associated capacity & dimensioning rules). The Number of Bearers
limit will also be addressed, as a limit derived from the number of users.
The following figure illustrates the Alcatel-Lucent LTE call state definitions:
Note that the above figure presents all the possible states for an LTE user.
Connected User:
A Connected user is an LTE-attached UE that has established an RRC connection with the eUTRAN. A
Connected User may be in either the OOT or the Active User state.
The UE has established SRB1 with the eUTRAN. SRB1 is for RRC messages, as well as NAS
messages prior to establishment of SRB2.
Except for the transition during the RRC connection setup, the UE has activated AS security and
established SRB2 with eUTRAN.
The UE has established at least one default data radio bearer with the eNodeB. The UE may also
establish multiple data radio bearers with the eNodeB.
The UE maintains a NAS signaling connection with the MME, and an EPS bearer connection with the
serving gateway and PDN gateway. The UE may also establish multiple EPS bearers with the
gateways.
The eUTRAN tracks the UE within the serving eNodeB. Intra-eNodeB and Inter-eNodeB
procedures can be performed for mobility management (i.e. handoff).
Active User:
An active user is the user with PUCCH/SRS resources assigned by the eNodeB, and an OOT user is the
user without PUCCH/SRS resources assigned by the eNodeB.
Dormant User:
LR13.1.L introduced support for the Dormancy (MAC-DRX) function that 3GPP TS36.321 defines for
UEs in RRC CONNECTED state. It is also sometimes called connected-state DRX (C-DRX) to
differentiate it from idle mode DRX, which refers to the mechanism that allows an idle UE to sleep
between paging occasions and monitor the PDCCH for paging (P-RNTI) grants only at specific
sub-frames in specific radio frames.
eNB enables MAC-DRX operation by sending a RRC Reconfiguration message to configure the MAC-
DRX parameters. A UE that’s configured with MAC-DRX monitors the Physical Downlink Control
Channel (PDCCH) for DL assignment, UL grants, and power control. If there’s no traffic activity for a
configurable interval, the UE is allowed to stop monitoring the PDCCH and also suppresses
CQI/PMI/RI and SRS transmissions on UL. To maintain communication between UE and eNB, the UE
needs to monitor the PDCCH periodically to check for possible DL traffic arrival based on the DRX
configuration. Meanwhile, any UL data arrival at the UE triggers the UL data request/scheduling
process immediately, not subject to any restriction imposed by MAC-DRX operation.
On the eNB side, once a UE is configured with MAC-DRX, eNB needs to track the traffic activity of
the UE as well. When eNB determines that the UE has entered DRX dormant state, eNB must hold
any new data scheduling until the UE becomes active again. Certainly, since air interface channel
reception is not error-free, there is always a small likelihood of mismatch between the UE state and
eNB’s view of the UE state. The mismatch may impact slightly the air interface operational
efficiency, such as a data transmission to an actually-dormant user, or eNB holds back data while UE
is actually awake.
The major benefit of the feature is to reduce UE average power consumption during connected
state. Lower power consumption is important for hand-held devices to meet the user expectations
in terms of battery life of the terminals.
It should be noted that the DRX function impacts the downlink user plane latency when DL
data arrives for a user which is in the DRX dormant state. In that case the data cannot be sent to the
user before the next OnDuration, thus incurring extra latency. To reduce the latency impact, a short
DRX cycle (drx-ShortCycle) can be configured during the initial interval after the traffic stops in
anticipation of subsequent traffic, afterwards if there’s no traffic then the UE can switch to a long
cycle (drx-LongCycle) to save battery.
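The short-cycle/long-cycle behaviour can be sketched as a toy timer model (the timer
names mirror the 3GPP parameters, but the values are arbitrary examples, not recommended
settings):

```python
# Toy timeline of the MAC-DRX behaviour described above: after traffic stops,
# the UE first follows the short DRX cycle, then falls back to the long cycle.

def drx_cycle_ms(ms_since_last_traffic: int,
                 inactivity_timer=100,     # drx-InactivityTimer (example value)
                 short_cycle=40,           # drx-ShortCycle (example value)
                 short_cycle_timer=4,      # drxShortCycleTimer, in short cycles
                 long_cycle=320):          # drx-LongCycle (example value)
    """Return the DRX cycle length in effect, or 0 while the UE is active."""
    if ms_since_last_traffic < inactivity_timer:
        return 0  # UE continuously monitors the PDCCH
    if ms_since_last_traffic < inactivity_timer + short_cycle_timer * short_cycle:
        return short_cycle  # short-cycle phase, in anticipation of more traffic
    return long_cycle       # long-cycle phase, maximizing battery savings
```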
LR14.1.L release covers the following aspects of MAC DRX operations, which were not implemented
in LR13.1.L:
Support DRX operation for calls in Measurement Gap
Optimized DRX support for Voice over IP (VoIP), including several sub-items:
- Support of Semi-Persistent Scheduling (SPS) with DRX.
- Support of smaller configurations of OnDurationTimer and DRX inactivityTimer to allow
higher UE battery savings.
- Adaptation of OnDurationTimer with VoIP calls loading and the UE PUCCH configuration.
- Allow the configuration of the uplink (UL) maximum Hybrid Automatic Retransmission
Request (HARQ) based on the DRX profile, so UE battery consumption can be optimized
with different DRX configurations.
Note that the MAC DRX function has already been supported in previous releases for different
purposes. DRX is used for intra-LTE ANR (LA2.0 and LA4.0.1), CSFB (LA3.0) and UTRA ANR (LA4.0.1).
In these features, MAC DRX is explicitly forced by the eNB command to create idle periods for the
UE to carry out special measurements for CSFB or for ANR. After the duration of the measurement,
the DRX mode is removed.
The target of this feature is to create a new UE state, where the UE is RRC connected but cannot
transmit or receive. This state will save modem resources.
This feature introduces a scheme to control the usage of a UE’s PUCCH/SRS resource via OOT (out of
time alignment) management on the UE (from the eNodeB perspective), in order to efficiently use
the limited PUCCH/SRS resource of the air interface. By transmitting TA commands naturally based
on UE traffic activity, the UE goes into the OOT state due to traffic inactivity, and releases its
PUCCH/SRS resources to be used by other users.
The UE moves to this OOT state when the eNodeB stops updating the timing advance. A RACH and
RRC Reconfiguration procedure is then used to restart the data transfer.
It is recommended to have DRX enabled for all bearers, GBR and non-GBR (QCI 1 to QCI 9). In
addition to DRX, OOT should be enabled for all non-GBR bearers (QCI 5 to 9).
UE in OOT state with DRX activated is the ideal UE battery saving scenario.
Note: When isOOTManagementEnable=True, the OOT feature is active for all cells of the eNB. In
addition, a configuration per TrafficRadioBearer object (TrafficRadioBearerConf::isOOTAllowed)
selects whether the service is allowed or not for the specific QCI.
Traffic inactivity based MAC DRX operation is supported in LR13.1 aiming to provide the UE battery
power consumption saving for the dominant data traffic.
There are several aspects of MAC DRX operations that are not implemented in LR13.1 feature, which
are covered by this feature in LR14.1:
The below graph shows the impact of user data traffic on the transition between different user
states:
As mentioned in chapter 3.2.1 three main elements need to be analyzed and dimensioned for the
9926 eNB:
Nb of User Connections
Nb of Data bearers (VoIP + GBR + Non-GBR)
Peak L1 Throughput
The LR14.1.L maximum values of each of these three limits are presented in the following tables:
LR14.1.L Capacity Figures                            | 1.4 MHz | 3 MHz | 5 MHz | 10 MHz | 15 MHz | 20 MHz
Number of User Connections
  Per-cell with bCEM equipage:
    Total Connected Users                            |   100   |  240  |  625  |  800   |  800   |  800
    Total Active Users                               |    48   |  120  |  250  |  400   |  400   |  400
    Of Which VoIP Users                              |    20   |   40  |  100  |  200   |  200   |  200
  Per-eNB with eCCM equipage:
    Total Connected Users                            |   144   |  360  |  750  | 1200   | 1200   | 1200
    Total Active Users                               |   144   |  360  |  750  | 1200   | 1200   | 1200
    Of Which VoIP Users                              |    60   |  120  |  300  |  600   |  600   |  600
Number of Bearers
  Per-cell with bCEM equipage: Total Bearers         |   325   |  780  | 2032  | 2600   | 2600   | 2600
  Per-eNB with eCCM equipage: Total Bearers          |   468   | 1170  | 2438  | 3900   | 3900   | 3900
LR14.1.L Capacity Figures                            | 1.4 MHz | 3 MHz | 5 MHz | 10 MHz | 15 MHz | 20 MHz
Number of User Connections
  Per-cell with bCEM equipage:
    Total Connected Users                            |   100   |  240  |  625  |  800   |  800   |  800
    Total Active Users                               |    48   |  120  |  250  |  400   |  400   |  400
    Of Which VoIP Users                              |    20   |   40  |  100  |  200   |  200   |  200
  Per-eNB with eCCM2 equipage:
    Total Connected Users*                           |   300   |  720  | 1875  | 2400   | 2400   | 2400
    Total Active Users                               |   144   |  360  |  750  | 1200   | 1200   | 1200
    Of Which VoIP Users                              |    60   |  120  |  300  |  600   |  600   |  600
Number of Bearers
  Per-cell with bCEM equipage: Total Bearers         |   325   |  780  | 2032  | 2600   | 2600   | 2600
  Per-eNB with eCCM2 equipage: Total Bearers         |   975   | 2340  | 6096  | 7800   | 7800   | 7800
The objective is to support two carriers (up to 3 cells each) in a single eNB; each carrier is
supported on one bCEM modem; therefore two bCEM modems are required in the BBU for this
configuration.
The actual definition of a sector is: a geographical area plus a band class. The configuration will
be based on a standard three-sector deployment.
The sector will be served by one CPRI link, which means (seen by OAM) one RRH per sector; the
necessary CPRI link rate is 3.
For FDD configuration each carrier must use a dedicated RF Path pair. If one or both carriers are 4
branches receive, then all four Antenna Ports will be used on the received side by those carriers.
This feature provides support for two carriers (5 MHz + 5 MHz) on a single PCS band (1900 MHz).
Another configuration requirement: the two carriers’ center frequencies must be spaced by a
minimum of 30 MHz.
The PRB licensing being done by band, it applies globally to all six cells. As a single parameter is
provided to the eNB, it needs to be split between the two modems. The split to the two modems is
based on a weighted ratio, corresponding to the respective number of active-enabled cells
supported on each modem (in the case of three + three cells, this means half of the tokens are
allocated to each modem).
The number needs to be rounded-up to the next integer. There is no pooling of tokens across the
two modems. This allocation needs to change automatically based on changes to the number of
active - enabled cells on each modem (cell lock, disable...).
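The weighted-ratio split with per-modem round-up described above can be sketched as follows (a simplified illustration of the rule, not the actual eNB implementation; the function name is hypothetical):

```python
import math

def split_prb_tokens(total_tokens: int, cells_per_modem: list[int]) -> list[int]:
    """Split the per-band PRB license tokens between modems in proportion to the
    number of active-enabled cells on each modem, rounding up per modem.
    No pooling of tokens across modems is assumed."""
    total_cells = sum(cells_per_modem)
    return [math.ceil(total_tokens * n / total_cells) for n in cells_per_modem]

# Three + three cells: half of the tokens are allocated to each modem.
print(split_prb_tokens(10, [3, 3]))  # [5, 5]
# After a cell lock leaves 3 + 2 active-enabled cells, the split is recomputed.
print(split_prb_tokens(10, [3, 2]))  # [6, 4]
```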
A new licensing parameter, isDualCarrierEnabled, was introduced to allow the usage of this
feature. This parameter has to be set to TRUE for the feature to be allowed to run.
The two modems will support up to 6 cells (up to 3 cells per modem). All cells on the same modem
need to be on the same band, carrier and have the same bandwidth.
Dual-band configurations shall support the AntennaCrossConnect (ACC) feature, as long as the two
bands are cabled identically, i.e. have the same number of sectors and the same cross-connect
scheme. Dual-band with non-identical cabling schemes will be introduced by a separate feature in a
later release (e.g. one band cross-connected, the other without cross-connect).
The capacity at BBU level of a dual-band configuration is as follows: total number of users (active
and RRC) is the sum of the respective max number of users for each single-band configuration; Peak
throughput is the sum of the peaks in each band.
The PRB-based Capacity License and Pooling remains unchanged, because the PRBs are allocated
per carrier and each modem has its own carrier. It remains to verify that the PRB initial
allocation procedure takes into account two different values (one per carrier) and distributes
them accordingly.
A new licensing parameter will be introduced to allow the usage of this feature:
isMultipleCarriersAllowed. This parameter has to be set to TRUE for the feature to be allowed to run.
This feature introduces configurations based on three carriers inside the same BBU, with three
modems. The cells using the same carrier will also have the same bandwidth and will be mapped on
the same modem board. Every modem will manage up to 3 cells/sectors. These configurations are
not Carrier Aggregation compatible; the Carrier Aggregation feature maps cells with different
carriers and bandwidths on the same modem board.
This feature enables service providers to co-locate different carriers on different LTE bands within a
single eNodeB, when spectrum available in one band is not sufficient. Covering the same area with
more than one carrier will also offer redundancy and increased overall service reliability.
Using three modems, the cells using the same carrier will also have the same bandwidth and will be
mapped on the same modem board. Every modem will manage up to 3 cells/sectors. These
configurations are not Carrier Aggregation compatible; the Carrier Aggregation feature maps cells
with different carriers and bandwidths on the same modem board.
More complex band/carrier configurations are possible using the Carrier Aggregation feature.
This feature is not available with eCCM controller and/or eCEM modem boards.
According to an IEEE paper, “Performance Evaluation of a 6-Sector-Site Deployment for Downlink
UTRAN LTE”, a significant gain on the order of 88% in site capacity is achieved by changing from a
3-sector-site deployment to a 6-sector-site deployment. A mixed network topology with a
combination of 3- and 6-sector-site deployments has also been considered and its potential
benefits investigated.
The mixed 3- and 6-sector-site deployment can be a viable option to meet high traffic demands in
localized areas such as hot spots, yielding a capacity gain on the order of 95% to 110%.
This feature supports up to nine cells (9 sectors) on one carrier/one band in a single BBU (one
controller and up to three modems).
This feature specifically supports 10MHz for FDD.
This feature significantly reduces the cost to the service provider of hardware configurations.
The main benefit of this feature is that it generalizes the deployment process by extending its
application over any regular/not regular geographic areas and/or specific high traffic places such as
stadiums.
Another benefit of this feature is the possibility of concentrating onto a single eNB the activity that
would otherwise be performed by up to three actual eNBs.
Note: For LteCell, a new parameter “cellSiteNumber” is introduced to indicate the site number to
which the cell belongs; it must be defined. This is required by the bCEM Modem board
implementation, which will always see 3 sectors numbered from 1 to 3.
Carrier aggregation (CA) is a new technology introduced in LTE-A which helps the service provider
exploit several frequency resources in one LTE system.
With CA, the eNodeB can collect all discrete frequency bands together and manage them at the RB
level, to expand LTE system UL/DL throughput. CA can also work on two different RRHs with two
different frequency bands. CA includes intra-band CA and inter-band CA.
The UE performs the access on the carrier it camps on. This is referred to as the Primary Cell.
The eNodeB configures a second carrier for higher downlink throughput. This is referred to as the
Secondary Cell. The throughput from the Primary and Secondary Cells meets at the UE, and
together they provide higher throughput.
This feature allows increasing of the Downlink Peak Throughput in areas where the LTE spectrum is
fragmented. It also allows the service provider to take advantage of the best performances of
Category 4 UEs, and double the throughput previously achieved in a single 10MHz BW.
The target single-CA-user DL peak throughput is equal to the sum of the Rel-8 DL peak throughput
KPI on both carriers.
In LR14.1.L, Carrier Aggregation is compatible with 3 carriers feature, but in this case, usage of
eCCM2 is mandatory.
Carrier Aggregation is supported commercially in LR14.1.L on Macro eNodeB only, and only for FDD.
Carrier Aggregation is only supported on the bCEM; the eCEM does not support Carrier Aggregation in
any release.
Prior to CA feature introduction, the different carriers belonging to the same sector are mapped
over different bCEM boards.
In order to activate CA feature in LR14.1.L, the different carriers of the same sector which need to
be aggregated must be mapped over the same bCEM board (aka intra-bCEM CA).
Note: in further releases (LR14.3.L), the bCEM architecture will evolve in order to support inter-
bCEM Carrier Aggregation, but LR14.1.L working assumption is to only support eNodeB configuration
with intra-bCEM CA.
The list of Bands and BWs combinations available in LR14.1.L is shown in Table 3-9:
CA 10MHz+20MHz supports half of the cell capacity in terms of: max number of active and
connected users, max number of UEs scheduled per TTI in the SCell, and max number of VoIP
users.
The number of UEs that can be activated with SPS simultaneously when CA is enabled in the cell
shall be restricted to 40.
eNB is able to support Tri-Carriers configuration (F1, F2, F3) with aggregation of two carriers (one in
low band, one in high band) among three carriers, with supported band and bandwidth listed in
previous table.
(A+B)
(A+B; B+C)
Note: (A+B; B+C; A+C) is not possible (only CA [low band+high band] is valid).
The pair of cells that can be aggregated is known by the presence of the object
CarrierAggregationSecondaryConf (cardinality is extended to 2 in the RAN model).
The following eNodeB configurations are supported for Carrier Aggregation in LR14.1.L:
Note that n=1 does not apply to CA as we need at least 2 cells in the same board to allow CA.
This loss of service also occurs when the bCEM re-starts and the eNodeB re-configures the cells
in CA mode over the 3 bCEMs. For this latter case, a parameter is offered to the customer in order
to select either the automatic procedure (i.e. the eNodeB autonomously re-maps the cells in CA
mode after the bCEM is up) or the manual procedure (i.e. the customer must trigger the
re-mapping in CA mode).
No loss of service occurs in the case of inter-bCEM CA; each sector remains active with
degraded service (i.e. no CA is offered).
For customers with 3 bCEM boards, but not willing to activate CA Feature, the flexibility is offered
to keep the existing mapping of cells to bCEM boards (i.e. each carrier of a given sector in a
different bCEM board).
By default in LR14.1.L, this parameter is set to value “modeNonCA”, where cells of a given sector
are configured on different bCEM board, and shall not be changed by customers who are not
planning to activate CA Feature in LR14.1.L.
For customers with 3 bCEM boards who are willing to activate the CA feature in LR14.1.L, it is
mandatory to change the eNodeB parameter “cellMappingOverBoardMode” and set it to “modeCA”. The
entire eNodeB must then be reset to enable this mapping to be put in place.
In LR14.3.L, a new CA architecture will be available with inter-bCEM board communication for CA.
In LR14.1.L, LTE (FDD) can be supported inside a d4U rack, together with W-CDMA. This approach
allows the customer to add 4G LTE technology inside a W-CDMA 3G d4U shelf eNB, without adding an
additional sub-rack. This also will allow the reuse of an existing d4U shelf, providing a smooth
evolution path towards 4G LTE.
The following restrictions apply to the d4U shelf configuration:
The main inputs required for performing eNB capacity & dimension exercise include:
Number of Subscribers, Nsubs. After performing the Radio Design/Coverage of the network area,
the number of attached subscribers to be supported per cell/sector and eNB should be known.
Traffic Profile per Subscriber
- VoIP Traffic Intensity, AVoIP, in mErl. Traffic intensity is a typical input in the dimension
procedure that is computed from Call model inputs – see in Table 2-3 and Table 2-4
- Note that a data traffic intensity (AData(i)) can also be estimated for some specific resources
(resources not impacted by the data traffic burstiness) – see details in 3.2.3.2
- Voice activity factor, αVoIP: ratio, over one call duration, of "speaking" periods over silence
periods (60% is the typical value used for voice calls)
- UL and DL Data Traffic Volume, VData(i)_UL/DL, in kBytes
- Blocking Requirements, Bl(i) – depending on the operator's QoS policy. Note: in general, the Bl
considered for data services is different/higher than the Bl considered for VoIP.
- Nominal UL and DL Data Rates, RData(i)_Nom UL/DL, for the i data services described in the call
model. These are the nominal rates when actively transmitting, in Kbps
Above mentioned Traffic Profile elements can be recovered from the Call model presented in
Chapter 2.2.1.1
Note: Some of the above inputs are subsets of those used for the air interface dimensioning in
Section 3.1.
In the next sub-chapters only a part of these dimensioning inputs will be used (the rest being
presented for future use).
The eNB (Modem & Controller) dimensioning principle is summarized in the following figure:
The eNB dimensioning exercise purpose is just to verify that for the given number of users per
Modem & Controller, the number of required resources is not exceeding the Max number of available
resources (for the Max number of available resources please refer to section 3.2.2.2).
If the number of resources required exceeds the number of resources available, new modem or
controller boards need to be added, which automatically implies that a new eNB must be added
into the network.
Each of the three dimensioning elements of the eNB will be addressed in the following subchapters.
The Number of User Connections is a resource considered in the eNB Call Admission Control (CAC)
algorithm: no additional user connection can be accepted in the cell/eNB if the maximum number of
User Connections is reached (blocking resource). The usage of this resource is Call Model dependent.
Performing a dimensioning exercise on this resource means finding the number of user connections
required in order to accept a specific multi-service traffic (Aservice(i)) coming from a specific
number of subscribers (Nsubs) without exceeding a target blocking rate for each service (Bl(i)).
In order to address the dimensioning of a multi-service resource, the most suitable model is an
Erlang Multi-service law:
The computation of the number of required resources following this method implies a tedious
procedure based on the recursive algorithm defined by Kaufman-Roberts.
Fortunately, all the different types of services use the same resource unit (1 User Connection),
which allows simplifying the dimensioning computation by reducing it to a simple Erlang law formula
(as used in mono-service circuit-switched networks):
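The simple Erlang law referred to above can be sketched with the classic iterative Erlang-B computation; this is an illustrative implementation (function names are hypothetical), not the tool used for the official figures:

```python
def erlang_b(traffic_erl: float, n_resources: int) -> float:
    """Erlang-B blocking probability, computed with the standard recurrence
    B(0) = 1, B(k) = A*B(k-1) / (k + A*B(k-1))."""
    b = 1.0
    for k in range(1, n_resources + 1):
        b = traffic_erl * b / (k + traffic_erl * b)
    return b

def required_connections(traffic_erl: float, target_blocking: float) -> int:
    """Smallest number of user connections keeping blocking at or below target."""
    n = 0
    while erlang_b(traffic_erl, n) > target_blocking:
        n += 1
    return n

# e.g. dimensioning for 20 Erl of offered traffic at a 1% blocking target:
print(required_connections(20.0, 0.01))
```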
The traffic intensity for VoIP, GBR and nonGBR Data services (for a number of attached subscribers
per cell/Modem = Nsubs) can be computed based on the traffic model inputs:
Where,
Data traffic intensity per subscriber will be computed based on the entire connection duration
when the number of RRC_CU (RRC Connected Users) is being computed, and based on the active
connection duration in the case of SAU (Simultaneous Active Users).
Active connection duration is computed based on the traffic profile inputs and configuration
parameters settings.
Active Connection Duration per call per service can be computed as:
Active Connection Duration = Call Duration – Dormancy Timer – Think Time * (Nb. of sub-sessions
per call – 1) + (OOT Time * Nb. of OOT transitions per call)
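The formula above can be encoded literally, term by term as written. This is a sketch only: the grouping follows the expression as printed, and all names and values are hypothetical (units assumed to be seconds):

```python
# Literal term-by-term encoding of the Active Connection Duration formula above.
# All argument values are hypothetical examples; units are seconds.

def active_connection_duration(call_duration: float,
                               dormancy_timer: float,
                               think_time: float,
                               n_sub_sessions: int,
                               oot_time: float,
                               n_oot_transitions: int) -> float:
    return (call_duration
            - dormancy_timer
            - think_time * (n_sub_sessions - 1)
            + oot_time * n_oot_transitions)

# 120 s call, 10 s dormancy, 30 s think time, 3 sub-sessions, 2 OOT transitions of 5 s:
print(active_connection_duration(120, 10, 30, 3, 5, 2))  # 60
```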
With the above mentioned hypotheses, the number of User Connections resources required per
Modem board will be:
Where:
For the number of User Connections per Controller board (or eNB), the Modem result multiplied by
the number of Modems per eNB shall be less than or equal to the Max number of User Connections
per eNB/Controller board (as per tables in Section 3.2.2.2).
Note: this section refers only to User Plane bearers (Voice, GBR and/or non-GBR bearers, including
the default bearer); signaling bearers are out of the scope of User Plane dimensioning. Thus, from
now on this dimensioning element will simply be called Number of Bearers.
As for the number of Users connections, the number of bearers per modem or per controller is a
resource considered in the eNB Call Admission Control Algorithm (CAC): no additional bearer setup
request or call setup is accepted in the cell/eNB if the maximum number of Bearer is reached
(blocking resource).
The Number of Bearers dimensioning process is then similar to the Number of User Connections
(see 3.2.3.3), except that new call model elements have to be considered:
The average number of data traffic bearers per subscriber (NbBRSub). This NbBRSub includes
all types of bearer (default and dedicated) used by the subscriber.
The dedicated bearers fall into 2 categories, labeled GBR and non-GBR bearers. The call model
evaluates the average number of GBR and non-GBR bearers per subscriber. The average numbers
NbGbrBRSub and NbnonGbrBRSub depend on:
One Data Default bearer is assigned automatically to each active user during the attachment
process. This default bearer has the minimum QCI = 9, so it is a non-GBR bearer. Some additional
dedicated bearers can also be assigned during the attach process; they are called static
dedicated bearers and remain in use as long as the PDN session exists.
These Call Model elements (NbBRSub, NbGbrBRSub and NbnonGbrBRSub) need to be worked out and
agreed with the operator, as they depend on the PDN allocation strategy (and, more generally,
the data services usage strategy).
The number of Bearer resources required per Modem board will be computed in the same way as
the Number of User connections (in 3.2.3.3):
Nb_Total_Bearers_req_Modem = NbBRSub * Nb_User_Conn._Modem
Nb_GBR_Bearers_req_Modem = NbGBRSub * Nb_User_Conn._Modem
Where:
For the number of Bearers per Controller board (or eNB), the Modem result multiplied by the
number of Modems per eNB shall be less than or equal to the Max number of bearers per
eNB/Controller board (as per tables in Section 3.2.2.2):
# Modems * Nb_Bearers_req_Modem ≤ Max_Bearers_Controller (eNB)
# Modems * Nb_GBR_Bearers_req_Modem ≤ Max_GBR_Bearers_Controller (eNB)
Note that the controller dimensioning condition will always be fulfilled if the modem condition is
fulfilled (the Max Number of Bearers per controller is specified as being 3 * the Max Number of
Bearers per cell/modem)
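The per-modem bearer computation and the controller-level check above can be sketched as follows (the NbBRSub value is a hypothetical call-model input; 3900 is the 10 MHz per-eNB eCCM bearer limit from the tables in Section 3.2.2.2):

```python
def bearers_required_per_modem(nb_br_per_sub: float, user_conn_per_modem: int) -> float:
    """Nb_Total_Bearers_req_Modem = NbBRSub * Nb_User_Conn._Modem"""
    return nb_br_per_sub * user_conn_per_modem

def controller_fits(n_modems: int, bearers_req_modem: float,
                    max_bearers_controller: int) -> bool:
    """# Modems * Nb_Bearers_req_Modem <= Max_Bearers_Controller (eNB)"""
    return n_modems * bearers_req_modem <= max_bearers_controller

req = bearers_required_per_modem(1.5, 400)  # hypothetical NbBRSub = 1.5 bearers/sub
print(req)                                  # 600.0 bearers per modem
print(controller_fits(3, req, 3900))        # True: 1800 <= 3900 (10 MHz eCCM limit)
```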
The peak throughput supported by the modem and the controller boards in LR14.1.L is presented in
tables in Section 3.2.2.2.
Note that the Modem peak throughput represents the max decoding processing capacity of the
modem board, whereas the controller peak throughput represents also the max throughput
supported by the eNB toward the S1 & X2 interfaces and can be subject of additional dimensioning
exercise related to the transport interfaces dimensioning/architecture (see details in Chapter 3.5).
In this chapter only the eNB dimensioning considerations will be addressed.
The Modem & Controller peak throughput is not a blocking resource (not considered in the CAC)
which means that new calls will not be rejected in case the peak throughput limit is reached, but
service degradation for the existing users may be observed.
Performing a dimensioning exercise on this resource means computing the throughput required by
a specific number of attached subscribers (Nsubs) transferring a specific volume of data
(VData(i)_UL/DL) spread over several traffic services characterized by different traffic intensities
(AVoIP, AData(i)), and comparing it with the controller & modem LR14.1.L peak throughput figures
(given in Table 3-7).
The process for computing the UL and DL throughput required in order not to exceed the
Modem & Controller specific throughput constraints consists of:
Compute the traffic intensity (in Kbps) per subscriber for all services
Apply traffic model to compute the total Throughput required. Two approaches are possible:
Using an overbooking factor or Peak to Average ratio applied to Data
Using more complicated / precise methods if some grades of service (GoS) constraints are
imposed. Such GoS requirements can consist in dimensioning the system not to exceed
specific blocking rate or transmission delay:
Most of the LR14.1.L eNB capacity figures presented in tables in Section 3.2.2.2 are limited by the
SW (lower capacity figures than the Modem & Controller HW can support) and are mainly driven by
User Plane processing.
With the continuous increase in number of users per eNB, the eNB Control Plane (both Modem and
Controller boards) and PO can potentially be a limiting factor for the eNB capacity. PO overload
duration can be used as the critical trigger in the eNB capacity monitoring process. (See Section
4.3.)
Paging rate at eNB depends on both pages for terminating calls received at this eNB (for the Nsubs
under the eNB coverage), and pages to UEs camped on other eNB but whose TA includes this eNB.
The eNB Paging rate can then be calculated by the following formula:
eNB_Paging_Rate (paging events/sec) = Nsubs * Terminating_calls(/sub) *
eNB_Paging_Efficiency / 3600
Where:
The Terminating_calls(/sub) is an element that can be directly obtained from the traffic model
parameters:
Terminating_calls(/sub) = BHCA(/sub) * % Terminating calls
The eNB_Paging_Efficiency is the number of paging events at eNB per terminating call received
at eNB. It depends on the paging strategies, paging area sizes and page response rates. This can
be approximated as:
eNB_Paging_Efficiency = eNBs paged per MME termination
The maximum paging rate achievable by the eNB is 600 paging events / second per eNB.
Note: This is the theoretical limit.
Due to the random distribution of IMSIs in the network, and the fact that the eNB pages them on
32 paging occasions defined by 32 different IMSI ranges, the practical maximum paging rate
achievable is 70%-80% of the theoretical limit.
The recommended engineering limit is 450 paging events/second per eNB.
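The paging rate formula above can be sketched as follows (all traffic inputs in the example are hypothetical; the 450 events/second figure is the engineering limit stated above):

```python
def enb_paging_rate(nsubs: int, bhca_per_sub: float,
                    pct_terminating: float, enb_paging_efficiency: float) -> float:
    """eNB_Paging_Rate = Nsubs * Terminating_calls(/sub) * eNB_Paging_Efficiency / 3600,
    with Terminating_calls(/sub) = BHCA(/sub) * % terminating calls."""
    terminating_calls_per_sub = bhca_per_sub * pct_terminating
    return nsubs * terminating_calls_per_sub * enb_paging_efficiency / 3600

# Hypothetical inputs: 10000 subs, BHCA 1.2, 40% terminating calls, efficiency 3.
rate = enb_paging_rate(nsubs=10000, bhca_per_sub=1.2,
                       pct_terminating=0.4, enb_paging_efficiency=3.0)
print(rate)          # 4.0 paging events/second
print(rate <= 450)   # True: within the recommended engineering limit
```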
3.2.4.3 EMBMS
To support eMBMS, an eNB is upgraded to support SYNC protocol with BM-SC. The eNB joins IP
multicast, terminates the multicast control channel and indicates multicast session start/stop to
mobile devices (UE).
This feature consists in supporting the MCE in distributed mode, where each eNB will host the MCE
function. (3GPP allows both possible configurations: centralized or distributed MCE.)
The introduction of the MCE requires the support of the 3GPP-defined M3 interface, and also the
support of an internal “M2-like” interface, in the sense that its messages should be similar to
the M2 messages defined by 3GPP.
Capacity Licensing feature allows the operator to order some eNB HW components with a reduced
SW capacity and subsequently purchase licenses for additional capacity.
The principle consists of converting the required capacity for some specific eNB resources into
Licensing Tokens (also called RTUs – Right To Use) that can be associated with commercial ordering
codes, allowing control of the eNB capacity usage from a contractual perspective. The same Token
concept is also implemented in the eNB configuration, allowing the capacity constraints imposed by
the commercial side to be applied technically.
Based on the sum of all Licensing tokens required for all eNBs under a specific SAM machine, a
capacity license key (encrypted file) containing the tokens information is generated per SAM. Once
the license file is installed in the SAM (via the newly available SW tool called WLM – Wireless
Licensing Manager), the operator can distribute capacity tokens between all controlled eNB(s) via
specific Licensing parameters (licensing parameters that are also configured in Tokens).
The SAM permanently monitors the coherence between the number of Tokens provided by the
License file and the sum (per SAM) of the Licensing parameter values configured on each eNB. If
the sum of the Licensing parameters (for a given resource) reaches the limit given by the License
file, the SAM will block the configuration related to this resource (no additional resources can be
configured).
The following eNB capacity elements are managed in LR14.1.L via this feature:
For complete information on Capacity Licensing configuration (parameters settings and Engineering
recommendations) please see Volume 4 of LPUG document [R1].
As the main commercial deployments in LR14.1.L are using the entire eNB SW & HW capacity (not
limited by licensing parameters), the eNB dimensioning methods & engineering rules in this
document will not consider the licensing limitations.
Yet, Licensing Tokens computation rules will be provided for customers that want to estimate the
resources usage cost in terms of Licensing (see section 3.2.5.1).
Considering that the four main eNB dimensioning elements presented in previous chapters
(Allocated Bandwidth, Transmission Power and Nb. of User Connections, including VoLTE users) are
subject to Licensing in LR14.1.L, it is important to understand the conversion rules from the
elementary measurement unit of each resource (5/10/20 MHz, W/dBm, number of users) into
licensing tokens. In this way, once the dimensioning exercise is performed, the network planning
engineer may also compute the number of associated licensing tokens (allowing a shorter time for
Licensing Tokens ordering and License file generation).
The following table summarizes the different eNB capacity elements considered in licensing scheme
and the number of elementary units required for each Licensing token:
eNB capacity element          | Elementary unit                  | Token conversion
RRH/TRDU Transmission Power   | allocated power in W per PA (*)  | 10 W/PA (**) = 1 token
Number of connected users     | connected users per eNB          | 8 users = 1 token
(**) 20 W (2 tokens) are provided by default in LR14.1.L FDD (10 W per PA). They can be used as
2 x 10 W for a MIMO 2x2 configuration. 1 token (10 W) cannot be split between two PAs.
The transmission power is dimensioned in W, which makes the Token conversion straightforward:
As this resource is dimensioned/configured per cell, the total number of Power-Tokens required for
a given eNB is computed as the sum of tokens required for each cell:
Power_Token_eNB = ∑ Power_Token_Cell
Example: if 2 x 30 W are required for each cell of a 3 sector eNB, the Power_Token_eNB will be:
Power_Token_eNB = 2 x 3 + 2 x 3 + 2 x 3 = 18 Power_Token
Note: A particular attention has to be provided to the Transmission power configuration parameter
(cellDlTotalPower) that is configured in dBm, thus it needs an additional conversion to W in order
to correctly associate it to the Power_Token. For details on the Transmission Power Licensing
configuration please refer to Volume 4 of LPUG document [C1].
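The per-cell power-token conversion, the eNB-level summation, and the dBm-to-W conversion mentioned in the note above can be sketched as follows (function names are hypothetical; the round-up per PA reflects the rule that a 10 W token cannot be split between two PAs):

```python
import math

def power_tokens_cell(watts_per_pa: float, n_pa: int) -> int:
    """1 token = 10 W per PA; a token cannot be split between two PAs,
    so each PA's power is rounded up to a whole number of tokens."""
    return n_pa * math.ceil(watts_per_pa / 10)

def power_tokens_enb(cells: list[tuple[float, int]]) -> int:
    """Power_Token_eNB = sum of Power_Token_Cell over all cells."""
    return sum(power_tokens_cell(w, n) for w, n in cells)

def dbm_to_watts(dbm: float) -> float:
    """cellDlTotalPower is configured in dBm; convert to W before token mapping."""
    return 10 ** ((dbm - 30) / 10)

# 3-sector eNB with 2 x 30 W per cell -> 18 tokens, matching the example above.
print(power_tokens_enb([(30, 2)] * 3))  # 18
print(round(dbm_to_watts(44.77), 1))    # roughly 30 W
```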
The Number of Active Users (also called the Number of User Connections in section 3.2.3.3) is
dimensioned as an integer directly providing the number of simultaneous connections allowed in an
eNB. According to Table 3-11, the Token conversion rule for the Number of Active Users is:
8 Active Users = 1 Active_Users_Token
In other words, in order to compute the number of required Active_Users_Token for a given eNB,
the following formula has to be used:
Active_Users_Token eNB = Roundup [#Modems * Nb_User_Conn._req_Modem / 8]
Example: For an eNB requiring 48 simultaneous connections, the Active_Users_Token eNB will be:
Active_Users_Token eNB = 48 / 8 = 6 Active_Users_Token
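The Active_Users_Token formula above can be sketched as follows (a minimal illustration; the function name is hypothetical):

```python
import math

def active_users_tokens(n_modems: int, user_conn_req_per_modem: int) -> int:
    """Active_Users_Token_eNB = Roundup[#Modems * Nb_User_Conn._req_Modem / 8]"""
    return math.ceil(n_modems * user_conn_req_per_modem / 8)

# The example above: 48 simultaneous connections -> 6 tokens.
print(active_users_tokens(1, 48))  # 6
# Round-up case: 3 modems * 50 connections = 150 -> 19 tokens.
print(active_users_tokens(3, 50))  # 19
```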
VoLTE capacity follows a pay-as-you-grow scheme. One VoLTE token gives capacity for one
simultaneous voice call (SVC) per cell.
A specific number of VoLTE tokens are included on the Scheduler features (DS and SPS) depending on
the order code of the feature. For a higher VoLTE capacity than that included in the SW features,
specific additional VoLTE tokens are required.
The maximum number of VoLTE calls per cell is configured by the parameter maxNbrOfVoip and
controlled by the license LTEmaxNbrOfVoip.
Example: In a network, an initial license for “VoLTE with DS with initial Capacity of 18xSVC” is
deployed in an eNB with 3 sectors and 1 carrier (3 cells) and distributed as follows:
Cell 1 = 8 SVC
Cell 2 = 4 SVC
Cell 3 = 6 SVC
VoLTE traffic is expected to be higher in cells 1 and 2: they are expected to support 15 SVC in
cell 1 and 9 SVC in cell 2. In this case, 12 additional VoLTE tokens would be required:
VoLTE_Token_eNB = 30 – 18 = 12 VoLTE SVC Tokens
eNB Cells | Initial capacity of SW feature purchased | Final VoLTE capacity required | VoLTE tokens required
Cell 1    | 8 SVC                                    | 15 SVC                        | 7 SVC
Cell 2    | 4 SVC                                    | 9 SVC                         | 5 SVC
Cell 3    | 6 SVC                                    | 6 SVC                         | 0 SVC
Total     | 18 SVC                                   | 30 SVC                        | 12 SVC
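The VoLTE token computation in the example above can be sketched as follows (the function name is hypothetical):

```python
def additional_volte_tokens(required_svc_per_cell: list[int], initial_tokens: int) -> int:
    """One VoLTE token = one simultaneous voice call (SVC) per cell; additional
    tokens are needed only beyond the capacity included with the SW feature."""
    return max(0, sum(required_svc_per_cell) - initial_tokens)

# The example above: cells require 15 + 9 + 6 = 30 SVC, 18 SVC already licensed.
print(additional_volte_tokens([15, 9, 6], 18))  # 12
```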
The Bandwidth tokens are a little special compared to the other two types of tokens, because they
can take only one value for a given cell and a given LTE bandwidth value: 0 or 1 (false or
true).
The bandwidth tokens for a given LTE band will be then computed according to the following
algorithm:
Else
Bandwidth_Token_xxMHz_CellY = 0
End If
Where xx MHz can be one of the supported LTE bandwidths in LR14.1.L: 1.4MHz, 3MHz, 5MHz, 10
MHz, 15 MHz or 20MHz (if bandwidth supported by the specific technology and release).
Bandwidth_Token_xxMHz_eNB = ∑ Bandwidth_Token_xxMHz_Cell
Bandwidth_Token_10MHz_eNB = 3 * 1 = 3 Bandwidth_Token_10MHz
And:
Bandwidth_Token_5MHz_eNB = 3 * 0 = 0 Bandwidth_Token_5MHz
Bandwidth_Token_15MHz_eNB = 3 * 0 = 0 Bandwidth_Token_15MHz
Bandwidth_Token_20MHz_eNB = 3 * 0 = 0 Bandwidth_Token_20MHz
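The 0/1-per-cell rule and the per-eNB summation can be sketched as follows (a minimal illustration, assuming each cell is configured with exactly one of the supported bandwidths):

```python
SUPPORTED_BW_MHZ = (1.4, 3, 5, 10, 15, 20)

def bandwidth_tokens_enb(cell_bandwidths_mhz):
    # Bandwidth_Token_xxMHz_CellY is 1 if cell Y runs at xx MHz, else 0;
    # the eNB-level figure is the sum over all cells.
    tokens = {bw: 0 for bw in SUPPORTED_BW_MHZ}
    for bw in cell_bandwidths_mhz:
        tokens[bw] += 1
    return tokens

# 3 cells, all at 10 MHz -> 3 Bandwidth_Token_10MHz, 0 for the other bandwidths
print(bandwidth_tokens_enb([10, 10, 10]))
```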
3.3.1 OVERVIEW
The ALU OAM solution for the LTE network is designed for the E2E management of the eUTRAN and
the ePC (evolved Packet Core). This solution is based on the following components:
9959 NPO Release 6.1 (Network Performance Optimizer) allows service providers to monitor and
optimize the performance of the radio access part of their wireless networks.
9452 WPS Release 3.0 (Provisioning System) allows the preparation of the configuration of new
equipment to be deployed or reconfigured.
The 5620 SAM Element Management System provides fault, configuration, accounting, performance
and security management functions to tightly integrate equipment, routing, service and network
management and the provisioning system.
The 5620 SAM release integrates eUTRAN and Core Network Elements, including the following releases:
The higher level is composed of the GUI interface, the OSS system and third-party systems such as
other management systems used to access the SAM Main server functions. The SAM client is
installed in this part of the network and is accessible through the NBI (Northbound Interface).
The median level integrates the complete SAM solution into different deployment modes and
modules; the communication between modules uses the East-West interface and requires
dedicated bandwidth. Four different architectures are possible:
- Collocated server (SAM Main server and SAM Database server installed on the same machine).
This configuration uses one machine for the two functions.
- Collocated and redundant servers: this configuration is based on two machines, the primary
one and a secondary used for redundancy. Each of these two machines is installed in
collocated mode with the two modules, 5620 SAM server and SAM DB.
- Distributed servers: all SAM main elements are installed in standalone mode. This
configuration must include one SAM server and one SAM DB, and optionally a minimum of two
Auxiliary servers and two Delegate servers.
- Distributed and redundant servers: the two applications (SAM server & SAM DB) are
installed in standalone mode and in a redundant configuration. A minimum of two Auxiliary
servers are optionally deployed in the network for the collection of call traces,
performance and accounting files. Up to two Delegate servers can be deployed in the OA&M
solution, one connected to the 5620 SAM Primary server and the second one to the 5620 SAM
Secondary server.
The lower level consists of the Network Element management functions running on the EMS SAM
Server (Element Management System). This function communicates with the NE(s) through the
southbound interface.
Different Redundancy configurations are available for the three types of servers. For details please
refer to 5620 SAM Planning guide.
The following table summarizes the capacity figures for 5620 SAM platforms in Release 12.0.R1 with
RHEL or Solaris Operating system and for x86 machines.
3.3.1.3 OAM DATA FLOW DESCRIPTION
This chapter defines the specific data generated by eNodeB (PCMD, Call traces, Dynamic Debug
Traces, Snapshots, Post Mortem files).
The OAM traffic flow is split, from a telecom manager point of view, into two distinct traffic
categories: Operational and Maintenance.
MAINTENANCE OAM
- Software Management
- NE discovery, NE resynchro
- Dynamic Debug Trace (DDT)
- Call Trace (CT)
- Post Mortem Files (Board or Application Restarts)
- Snapshots (L3, On-Demand )
Note: Except for Call Trace and PCMD, OAM traffic is independent of call traffic.
The following figure summarizes the OAM Data flow between LTE Network Elements and SAM, NPO,
WTA, PC/OSS GUI.
The following figure summarizes the OAM Data flow between eNodeB and SAM, NPO.
For more detail on ports and protocols, refer to the OAM Product Engineering Guide
The specific eNodeB Data flows are described in the following sections.
3.3.1.3.2 PCMD
Per Call Measurement Data (PCMD) contains UE connection level measurement data (UE connection
or context parameters, bearer parameters (including SDU traffic volume), UE measurements,
context disposition). PCMD traffic is signalling traffic sent from the eNodeB on S1-MME in S1AP
Private Messages to the Alcatel-Lucent MME and then to the EMS (SAM system).
5620 SAM manages up to 100 CT sessions. Only one CT session is active per eNodeB. Each session
traces a list of cells (from 1 up to the maximum number of cells that the eNodeB can manage) on
the RRC, S1-MME and/or X2 interfaces.
Cell Traffic Trace. The eNodeB starts a Trace Recording Session whenever a call is started in the
monitored cell(s).
Event-Based Trace. The eNodeB starts a Trace Recording Session whenever any threshold
condition is met in the monitored cell(s). Once triggered, any UE call that is started on the
monitored cell(s) will be traced.
Signalling based Trace. The eNodeB starts tracing the monitored UE as soon as it receives any of
the following messages with a Trace Activation parameter:
- S1AP: TRACE START
- S1AP: INITIAL CONTEXT SETUP REQUEST
- S1AP: HANDOVER REQUEST
- X2: HANDOVER REQUEST
These 3 types of Call Trace generate the same Call Trace information. The Call Trace report
messages contain information on the RRC, X2, S1-MME signaling messages.
It is possible to configure the eNodeB to get either Call Trace with maximum depth (whole signaling
messages are reported in Call Trace messages) or minimal depth (only some parameters of the
signaling messages are reported in Call Trace messages) and this configuration applies for all the
cells of the eNodeB.
The activation of the Cell Traffic Trace or Event based Trace is done from the EMS. The activation of
signaling based Trace is done from the EMS of the EPC through the MME with an S1-MME message.
The Call Trace traffic is stream-like traffic. The Call Trace messages are sent from the eNodeB to
the EMS in UPOS messages in UDP datagrams. There can be several UPOS messages in a UDP
datagram. They are sent when one of the following conditions is met:
The Dynamic Debug Trace (DDT) messages contain information for debug and information on the
signalling messages received.
There are several profiles for Dynamic debug Traces depending on the layers that are monitored for
the dynamic debug traces (L1, L2, L3) and on the depth of the monitoring (light, medium, deep). It
is possible to configure which profile to use for the DDT. The activation of the DDT is done from the
EMS.
The DDT traffic is stream-like traffic. For L1 and L2 DDT messages, the Dynamic Debug Trace
messages are sent in UDP datagrams from the eNodeB to the EMS.
The maximum size for the payload of a datagram containing UPOS messages is 7500 bytes, so the
datagram is fragmented when transferred over Ethernet.
The volume of L1 DDT does not depend on the traffic model and does not depend on the number of
UEs.
The Dynamic Debug Trace is exclusive with the Call Trace at the eNodeB level.
Note: The maximum number of UEs that can be monitored at the same time is about 320 but some
information can only be given for 16 to 20 UEs.
The Post-Mortem traffic is file transfer traffic. After the restart of the board or the application, the
eNodeB notifies the EMS that a post-mortem file is available with SNMPv3.
The eNodeB transfers the Post-Mortem file to the EMS with SFTP.
3.3.1.3.6 L3 Snapshot
The eNodeB L3 Snapshot file is created in case of a telecom procedure failure. It contains context
dump data. The enabling of the L3 snapshot file transfer is done from the EMS.
The eNodeB notifies the EMS that an L3 Snapshot file is available with SNMPv3, then the eNodeB
transfers the L3 Snapshot file to the EMS with SFTP.
The On-Demand Snapshot file contains debug data from application or from platform software. The
On-Demand Snapshot traffic is file transfer traffic.
The generation of the On-Demand Snapshot file is done from the EMS on operator request.
The eNodeB notifies the EMS that an On-Demand Snapshot file is available with SNMPv3. Then the
eNodeB transfers the On-Demand Snapshot file to the EMS with SFTP.
The Software download traffic is file transfer traffic. The EMS transfers the software file to the
eNodeB with SFTP on operator request.
The minimum bandwidth for OAM traffic (DownLink and UpLink) takes into consideration dialogue
traffic and bulk transfer recurring traffic including PM files and PCMD files.
Most maintenance interventions are done during a “maintenance window”, out of busy hours. The
two types of maintenance, preventive and corrective, need additional bandwidth for:
- the traffic generated by equipment or the network after each failure (post-mortem,
snapshots). These received files help diagnosis; analysis could require launching a Call Trace
or a deeper debug.
- the traffic generated by Call Trace or by Dynamic Debug Trace.
The following table summarizes an example of the maximum required bandwidth in downlink and
uplink during the maintenance window, with the highest datagram overhead (IPv6, IPsec and 802.1q).
As these operations do not occur at the same time, in practice the maximum required bandwidth is
reserved for the most consuming traffic: DDT (L1+L2+L3 Dynamic Debug Trace) for uplink traffic
and software upgrade for downlink traffic.
Note: Post mortem and snapshot are peak traffic generated once with an average 300s transfer
delay.
Considering the calculation results given in the table, and assuming that not all operations can
be performed simultaneously on the same eNodeB, we deduce that the most consuming OAM traffic is
the DDT (Dynamic Debug Trace), with a maximum of 37 Mbps of throughput.
ALU does not recommend activating the DDT during busy hours.
ALU recommends activating DDT only during the maintenance window.
ALU recommends activating DDT for 45 minutes maximum.
When some OAM operations are critical and require additional bandwidth out of the maintenance
windows, we need to configure the OAM interface with the appropriate QoS to avoid any impact on
the telecom traffic. Further control can be performed to avoid any congestion on the network by
deploying the correct network design that should take into consideration the maximum bandwidth
resource. Additional mechanisms on the transport level can be performed for the traffic engineering
to permit the reservation of the required bandwidth or the shaping and policing of the maximum
throughput.
The less bandwidth-consuming OAM operations can be scheduled outside the maintenance window
and during busy hours if congestion is controlled on the network.
As an exception, for very critical maintenance operations that need more bandwidth, traffic
shaping and QoS parameters can be modified for the duration of the operation, but this requires
a reboot of the eNodeB.
IP version: V4 or V6
IPSEC presence
VLAN tagging 802.1q
The IP stack header lengths, with different options, are depicted below:
The Transport and Application stack header length are depicted below:
                 DL               UL
CM Throughput    1000 Kbps (1)    N/A
5620 SAM periodically collects (via SNMP), from each eNodeB, a PM file (in a compressed XML format).
Calculation hypothesis, considering the following values:
- 67283 instances of counters (in the .gz file)
- instance size = 13 bytes (13 characters for max value = 2^32 - 1)
- compression ratio = 80%
- XML header size : value size ratio = 1
PM file size = 67283 * 0.013 kBytes * 2 * 0.2
= 350 kBytes compressed (1749 kBytes uncompressed) per eNodeB
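The arithmetic above can be reproduced as follows (parameter names are illustrative; defaults follow the hypothesis list):

```python
def pm_file_size_kbytes(instances=67283, instance_kbytes=0.013,
                        compression_ratio=0.80, header_to_value_ratio=1.0):
    # XML doubles the payload when the header:value size ratio is 1;
    # .gz compression then keeps 20% of the uncompressed size.
    uncompressed = instances * instance_kbytes * (1 + header_to_value_ratio)
    compressed = uncompressed * (1 - compression_ratio)
    return compressed, uncompressed

compressed, uncompressed = pm_file_size_kbytes()
print(round(compressed), round(uncompressed))  # -> 350 1749
```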
Example for IPv6 packets, with IPsec at layer 3 and 802.1q at layer 2 (Ethernet 802.1Q + IPsec):
Application throughput (PM file size)    350 kBytes
Transport header                         208 bytes
Physical throughput                      10.9 Kbps
3.3.2.4.3 PCMD
Example for IPv6 packets, with IPsec at layer 3 and 802.1q at layer 2 (Ethernet 802.1Q + IPsec):
143 Kbps
Traffic type & protocol = file transfer traffic over SSH / SFTP
Traffic direction: uplink from MME to SAM.
Example for IPv6 packets, with IPsec at layer 3 and 802.1q at layer 2 (Ethernet 802.1Q + IPsec):
213 Kbps
Calculation hypothesis:
- download of the entire eNodeB software package (eCCM, eCEM, bCEM, RRH, ...)
- maximum size of the software package: the individual card software images (bCEM, eCCM,
eCCM2, eCEM, AR, MR, MT, RH, RM, SA, SR, TR) range from 0 to 120 Mbytes each, for a total
software package size of 337 Mbytes.
Table 3-22: Software size
Traffic type & protocol = file transfer traffic over SSH / SFTP
Transfer duration time = 300 s.
The following assumption takes into account a measurement validated on a practical example:
Application throughput    7.44 Mbps
Physical throughput       9.7 Mbps
Table 3-23: Software Upgrade bandwidth requirement / eNB
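As a cross-check, a naive size/duration calculation for the 337 Mbytes package over 300 s gives roughly 9 Mbps at application layer; the 7.44 Mbps figure in Table 3-23 is a measured practical value, so the sketch below only illustrates the size/duration relationship (the helper name is illustrative):

```python
def transfer_throughput_mbps(file_mbytes: float, duration_s: float) -> float:
    # Application-layer throughput needed to move a file in the given time.
    return file_mbytes * 8 / duration_s

# Entire 337 Mbytes software package within the 300 s transfer window
print(round(transfer_throughput_mbps(337, 300), 1))  # -> 9.0
```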
Calculation hypothesis:
File transfer throughput    1.6 Mbps
Physical throughput         2.1 Mbps
Table 3-27: eNodeB Application Restart Context Bandwidth Requirement
Calculation hypothesis:
Traffic type & protocol = file transfer traffic over SSH / SFTP
Example for IPv6 packets, with IPsec at layer 3 and 802.1q at layer 2 (Ethernet 802.1Q + IPsec),
L3 snapshot:
Application throughput    2.66 Kbps
Physical throughput       3.5 Kbps
The following table describes the bandwidth requirements for each network element.
Network Element      Number of MDA/CMA      Example Bandwidth requirement
Table 3-30: OAM SAM Bandwidth Dimensioning with ePC and Transport Network Elements
The following table summarizes the required bandwidth for the SAM interfaces:
SAM element                                                 Recommended Bandwidth between SAM servers
5620 SAM Server to a 5620 SAM Database (*)                  5 to 10 Mbps (3 Mbps Min)
5620 SAM Server to a 5620 SAM Client                        512 Kbps per client
5620 SAM Server to a 5620 SAM-O Client
(the bandwidth depends on the OSS application)              1 Mbps per client
The MME application is one of several applications supported by the 9471 Wireless Mobility Manager
(9471 WMM). The 9471 WMM WM8.0.0 configuration is a scalable configuration that includes one or
two ATCA chassis and from one to ten pair of Service Boards. The minimum configuration consists of
a single chassis with one active Service Board (one Service Board pair). The maximum configuration
consists of two chassis with ten Service Board pairs. The first ATCA chassis can hold up to four
Service Board pairs. The second ATCA chassis must be added/used when a fifth Service Board pair is
needed. The two Service Boards in a Service Board pair must be populated in adjoining odd/even
slots, but pairs do not have to be populated sequentially within the chassis.
Pooling/grouping is also supported in WM8.0.0. There can be up to eight MME pools in the network
with a maximum of 8 MMEs per pool when using manual provisioning and a maximum of 128 MMEs
per pool when using DNS.
1 – 2 ATCA chassis
1 – 2 pair: Shelf Management Controllers (ShMC’s), 1 pair per ATCA chassis
1 pair: OAM Server Boards, on shelf 0 (16G RAM, 300G HDD)
1 – 2 pair: Ethernet Hubs, 1 pair per ATCA chassis (w/ HSPP4 AMC hosting MPH service on shelf
0)
1 pair: Optical RTMs, on shelf 0 hubs
1 pair: Interface Boards w/ SS7 AMC & SS7 RTM
1 – 10 pair: Service Boards w/ HSPP4 AMC
Unused slots are filled by front and rear fillers
[Figure: ATCA chassis layouts (Shelf 0 and Shelf 1) showing ShMC pairs, OAM Server boards,
HUB/MPH boards, Interface Boards, Service Boards, RTMs and PDUs]
Figure 3.4-1: 9471 WMM WM8.0.0 Hardware Configurations for MME Application
For additional details regarding MME hardware and software architecture, please refer to: [C4] and
[C6].
For additional details regarding MME supported configurations, please refer to: [C5].
The 9471 WMM WM8.0.0 has been designed to work efficiently within the limits shown in Table 3-32
below. These limits are based on customer input and engineering judgment for WM8.0.0.
The table shows the limits for a fully configured MME. Code has been developed and tested to work
within these limits. In particular, the provisioning GUI and associated databases have been designed
using these capacities.
The GUI will not allow a user to exceed these capacities. Please note that BHCA is dependent upon
the call model. Therefore a strict maximum BHCA cannot be quoted per release.
Maximum external message throughput per MME (MPH): 305,000 msg/sec (150K in & out)
In WM8.0.0, additional Service Boards may be added to accommodate increased capacity. The two
Service Boards in a Service Board pair must be populated in adjoining odd/even slots, but pairs do
not have to be populated sequentially within the chassis.
Contact customer support for detailed procedures to perform an in-service growth of a Service
Board pair.
An individual Service Board pair can support up to 40K external message per second and up to 500K
registered users.
Capacity extension actions include adding an additional Service Board pair to the MME until the 10-
pair limit is reached.
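A sizing sketch based on these per-pair limits (the function and the sample load are illustrative; real growth is performed via the procedures referenced above):

```python
import math

def service_board_pairs(msgs_per_sec, registered_users,
                        pair_msg_limit=40_000, pair_user_limit=500_000,
                        max_pairs=10):
    # A pair supports up to 40K external messages/s and 500K registered
    # users (per the text); the dominant dimension drives the pair count.
    pairs = max(math.ceil(msgs_per_sec / pair_msg_limit),
                math.ceil(registered_users / pair_user_limit))
    if pairs > max_pairs:
        raise ValueError("load exceeds one MME; add an MME to the pool")
    return pairs

print(service_board_pairs(90_000, 1_200_000))  # -> 3
```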
The LTE network must be architected such that the above capacity limits (Table 3-32) are not
exceeded for an individual MME. The SAM Provisioning GUI will not allow provisioning beyond these
capacity limits.
The LTE network must be architected such that there are no more than 1024 TAs in the network. The
SAM Provisioning GUI will not allow provisioning beyond this capacity limit.
Large networks must be divided into sets of MME groups where one set does not know about the
other set(s).
The 9471 WMM WM8.0.0 supports the MME external interfaces as shown in the figure below. Note:
only interfaces for the MME application are shown.
[Figure: MME external interfaces, showing CBC, GLMC, MSC/VLR, E-SMLC, 1xCS IWS, EIR, SAM (5620)
(NETCONF, SNMPv3, SSH, HTTPS), inter-MME S10 and S3/Gn, eNodeB S1-MME and M3, S11 to the S-GW,
and X1_1/X2 to the 1357 LIG]
Figure 3.4-2: 9471 WMM Signaling Interfaces for WM8.0.0 MME Application
In LR14.1.L, user traffic is expected to be a mix of voice, best effort (BE) data (including SMS
and other chatty applications), Guaranteed Bit Rate (GBR) data, and carrier aggregation, as shown
in Table 3-33. The Alcatel-Lucent LR14.1.L end-to-end network solution also supports CSFB for
voice and data, but this is not included in the example model.
The following table shows a capacity & dimensioning calculation using the call model presented in
Chapter 2.2.1.1 as an example. The “MME Procedure” column represents the unique mix of
procedures that affects busy hour in the example. The “MME Message Count/ Event” column is
constant for the set of procedures shown. MME Message Count/ Event values could be updated if call
model analysis of a user call model showed a different mix of MME procedures affecting the busy
hour. The operator can also tailor this calculation to a different call model by substituting different
values in the “Events/ Subscriber/ Busy Hour” column of the table.
The call model yields a total of approximately 475 messages/ subscriber/ busy hour for the LR14.1
example call model. This figure is then used in the following equation to yield the maximum number
of subscribers supported by this model:
Please refer to Section 2.2.1.2 for calculations regarding the paging rate at the MME.
From a functional perspective, the eNB has two functional interfaces, namely Telecom and
Management interfaces.
Control traffic which is exchanged between the eNB and the MME (S1-MME) or between
neighbouring eNBs (X2-C).
Details about S1, X2, M1 and M3 interfaces are given in section 2.1.3.4.
Please note that though M1 is classified as Telecom traffic, a dedicated Vlan is required because
M1 traffic is IP multicast traffic.
Standalone PCMD traffic between the eNB and the standalone PCMD collection entity (PCMD
applies to Alcatel-Lucent MME only).
The dynamic debug trace (DDT) between the eNB and the DDT collection entity.
1588 traffic exchanged between the eNB and the Master Clock Server.
In the scope of this document, above mentioned OAM, call trace, PCMD, DDT traffic and IEEE 1588
Precision Time Protocol (PTP) dimensioning requirements are described in Section 3.3.
Depending upon the eUTRAN transport architecture, eNB functional traffic can be split over
different logical interfaces, in order to differentiate the eUTRAN traffic and direct it to the
different target network elements (neighbouring eNBs, MME, S-GW, EMS, etc.). The eNB functional
traffic split is performed at layer 2 by means of the virtual LAN feature (Vlan tagging is
optional when eMBMS is not supported; when eMBMS is supported, a dedicated Vlan has to be set up
for eMBMS M1 interface user plane traffic), and at layer 3 by setting up the IP addresses of the
network elements involved in eUTRAN transport. Note that the eUTRAN transport architecture is not
in the scope of this document; please refer to the transport engineering guide for details about
the eUTRAN transport architecture [C7].
Please note that since the MME and S-GW aggregate traffic coming from numerous eNBs, S1-MME
and S1-U bandwidth requirements at MME and S-GW level must take into account the number of
connected eNBs to assess the aggregate bandwidth at eUTRAN interface level. The same remark
applies to the X2 interface and to OAM traffic, as a single EMS will connect to numerous eNBs.
The figure below illustrates the eUTRAN traffic split towards the backhaul transport network.
[Figure: eUTRAN traffic split towards the backhaul transport network — local areas A, B and C
carrying S1, X2, M1, M3 and OAM traffic towards the MBMS GW, the MME (S1-MME, M3), the S-GW
(S1-U) and SAM (OAM)]
M2 interface is not shown in the figure above, as long as the MCE is embedded in the eNB (please
refer to section 2.1.3.4).
Select or customize the traffic model, i.e. the traffic parameters that characterize the UE
activity from a user and control plane perspective at the busy hour. The traffic model
provides the inputs that enable per-eNB interface bandwidth computation based on the various
traffic types which can be supported at UE / eNB level (VoLTE traffic, GBR traffic, non-GBR
traffic, control & signaling traffic). As the user distribution and concentration can differ
from one region to another, a per-region traffic model might need to be estimated in order to
accurately assess the eUTRAN interface bandwidth. The eUTRAN interface bandwidth required for
a specific eNB has to be computed based on the customer traffic model which applies to this
specific eNB, if applicable.
Apply the traffic model to the eNB eUTRAN interfaces (S1-U, S1-MME, X2-U, X2-C, M1 and M3)
and per traffic type (VoLTE traffic, GBR traffic, non-GBR traffic, control & signaling traffic,
Management) in order to assess eUTRAN respective functional interface required bandwidth.
With the following additional inputs:
eNB traffic can be carried over a non-trusted backhaul network. The LTE network operator will
then likely send eNB traffic over an authenticated and encrypted IPsec tunnel.
- Per eNB interface resulting bandwidth figures can be used to assess the eUTRAN backhaul
throughput figures towards / from neighbor eNB(s) and ePC network element (MME, S-GW,
MBMS-GW).
Please note that the following sections provide bandwidth assessment example based on the
reference traffic model which is described in Table 2-3, Table 2-4 and Table 2-5.
Please note that the interface bandwidth computations described in this section show an
uplink (UL) and a downlink (DL) part. The main reason is that uplink and downlink traffic
bandwidths are different for most services carried over the eUTRAN interfaces (user or control
plane), and thus an uplink / downlink traffic split is required to dimension the overall layer 2
Ethernet backhaul network, which can operate in full or half duplex.
Full-duplex configuration: the most significant bandwidth of the two directions needs to be
considered for each direction; for example, if the maximum bandwidth is in downlink, then the
same bandwidth will be reserved for uplink.
Half-duplex configuration: the maximum of the uplink and downlink bandwidth figures has to be
considered.
The eNB supports IPsec tunnel mode, i.e. the eUTRAN backhaul network supports the so-called
“host-to-network” IPsec security configuration: the eNBs are the hosts connected to the network
Security Gateway (SEG), as depicted in Figure 3.5-2 below. Indeed, the eNB generates the IPsec
traffic at the source and establishes the IPsec tunnel with the remote security gateway.
[Figure 3.5-2: eNodeBs establishing IPsec tunnels through the backhaul to a Security Gateway in
front of the operator intranet hosting the MME and S-GW]
IPsec tunnel mode is the only IPsec security mode supported at eNB level. This security mode
adds a complementary header to each packet that flows through the eUTRAN backhaul network. The
complementary header size depends upon the authentication and encryption methods: a maximum of
73 overhead bytes (IPv4) or 93 overhead bytes (IPv6) is added to the packet, assuming
AES-CBC-128 and HMAC-SHA1-96 in tunnel mode with ESP encapsulation.
Further details about IPsec implementation at the transport backhaul can be found in [C7].
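A small sketch of the worst-case per-packet growth quoted above (constants taken from the text; actual ESP overhead varies with padding and algorithm choices):

```python
# Worst-case IPsec tunnel-mode overhead per packet, in bytes, assuming
# AES-CBC-128 + HMAC-SHA1-96 with ESP encapsulation (figures from the text).
IPSEC_TUNNEL_OVERHEAD = {"ipv4": 73, "ipv6": 93}

def on_wire_size(payload_bytes: int, ip_version: str = "ipv4") -> int:
    """Packet size after adding the worst-case IPsec tunnel-mode overhead."""
    return payload_bytes + IPSEC_TUNNEL_OVERHEAD[ip_version]

# A 100-byte S1 packet grows to 173 (IPv4 outer) or 193 (IPv6 outer) bytes
print(on_wire_size(100, "ipv4"), on_wire_size(100, "ipv6"))
```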
3.5.4 S1 INTERFACE
This section provides details about S1 interface dimensioning and engineering rules.
As S1-U and S1-MME dimensioning methods are different, dedicated sections describe dimensioning
methods for each type of traffic. However, from a backhauling perspective, overall S1 bandwidth is
to be taken into consideration i.e. S1-U + S1-MME.
S1-U is the reference point between eUTRAN and the ePC Serving GW (S-GW) for the per bearer user
plane tunneling and inter eNB path switching during handover. S1-U interface protocol stack is
depicted in the figure below.
[Figure: S1-U protocol stack — the user application over TCP/UDP at the UE, carried in GTP-U
tunnels between the eNodeB and the S-GW]
S1-MME is the reference point for the control plane protocol between eUTRAN and the ePC Mobility
Management Entity (MME). S1-MME interface protocol stack is depicted in the figure below.
[Figure: S1-MME protocol stack between the UE, the eNodeB and the MME]
Detailed S1 bandwidth figures depend upon the IP transport stack and the resulting headers which
must be added to the application protocol, according to the backhaul network architecture.
Referring to the S1-U and S1-MME protocol stacks described above, the following parameters need
to be taken into consideration when assessing the physical bandwidth at the S1 interface:
S1-U / S1-MME protocol header size. Please see details in sections 3.5.4.2.1 and 3.5.4.2.2
below.
Transport layer architecture (plain IP or secured IP with encryption & authentication and
tunneling). Please see details in section 3.5.3.
Layer 2 technology.
- eNB, MME and S-GW transport layer 1/2 relies upon Ethernet / IEEE 802.3 with optional
IEEE 802.1q Vlan tagging capability.
The table below provides a summary of S1-U transport headers to be accounted for various IP
transport stack configurations.
Header Size (bytes)   Phy (1)   Vlan   802.3   IPsec ESP/IP (2)   IP (path)   UDP   GTP-U   VoLTE (3) RTP/UDP/IP
S1-U Voice IPv4       20        4      18      73                 20          8     8       12+8+20
S1-U Data IPv4        20        4      18      73                 20          8     8       NA
Note 3: S1-U bandwidth computation takes into account the types of services supported by the
traffic model. For each service, the bandwidth calculated at application layer needs to take
into consideration the overhead which results from the transport backhaul. The headers added to
the application packet depend upon the application itself. For VoLTE traffic, the application
packet is encapsulated inside an RTP/UDP/IP packet prior to being forwarded to the transport
layer (GTP-U). Thus an additional 12 + 8 + 20 = 40 bytes of RTP/UDP/IPv4 headers have to be
added to the initial application packet before adding the transport backhaul overhead.
This rule does not apply to GBR traffic: though GBR traffic might be transported over RTP/UDP/IP,
it is assumed this overhead is already accounted for in the traffic volume figures given in the
reference traffic model of Table 2-4. For non-GBR traffic, such as web browsing type
applications, the application packet is already an IP packet, which is then directly forwarded
to the transport layer, where the transport backhaul overhead is added.
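The header build-up for an S1-U VoLTE packet (IPv4, with IPsec and Vlan tagging) can be sketched from the table values; the 32-byte voice payload below is an assumed example, not a figure from the document:

```python
# S1-U Voice IPv4 header sizes from the table above, all in bytes.
HEADERS = {"rtp_udp_ip": 12 + 8 + 20, "gtp_u": 8, "udp": 8, "ip_path": 20,
           "ipsec_esp_ip": 73, "eth_802_3": 18, "vlan": 4, "phy": 20}

def volte_frame_size(voice_payload: int, ipsec=True, vlan=True) -> int:
    size = voice_payload + HEADERS["rtp_udp_ip"]                    # application packet
    size += HEADERS["gtp_u"] + HEADERS["udp"] + HEADERS["ip_path"]  # GTP-U tunnel
    if ipsec:
        size += HEADERS["ipsec_esp_ip"]                             # worst-case ESP tunnel
    size += HEADERS["eth_802_3"] + (HEADERS["vlan"] if vlan else 0) + HEADERS["phy"]
    return size

# e.g. an assumed 32-byte voice frame on the wire:
print(volte_frame_size(32))  # -> 223
```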
Several transport header combinations are thus possible depending upon the transport backhaul
architecture; this leads to different IP transport header sizes. Examples of possible
configurations are given below (non-exhaustive list):
The table below summarizes the different overhead header size according to the eUTRAN backhaul
transport architecture (S1-U).
eUTRAN backhaul architecture    802.3    802.3 w 802.1q    802.3 w IPsec    802.3 w 802.1q w IPsec
Note: The table above assumes that when IPsec is in use, inner IP version and outer IP version are
the same (IPv4 or IPv6).
The same rules apply to S1-MME. The tables below provide a summary of S1-MME transport headers
to be accounted for various IP transport stack configurations.
Header Size (bytes)   Phy (1)   Vlan   802.3   IPsec ESP/IP (2)   IP   SCTP
S1-MME IPv4           20        4      18      73                 20   28
S1-MME IPv6           20        4      18      93                 40   28
The table below summarizes the different overhead header size according to the eUTRAN backhaul
transport architecture.
The maximum S1-U throughput in downlink and uplink is computed by taking into account the user
plane traffic for the eNB maximum number of attached subscribers at the busy hour. S1-U
bandwidth computation process uses the input from the traffic model (either the Alcatel-Lucent
reference traffic model or a customer specific traffic model), taking into consideration the
backhauling network specific architecture.
Traffic model to be used is given in section 2.2.1.1. Please note that depending upon the traffic
model some service might not be taken into consideration for S1-U bandwidth computation. This is
the case for example when using a traffic model with Circuit Switch Fall Back (CSFB) capability,
where VoLTE service is not taken into consideration at S1-U interface level.
Transport backhaul configuration determines the S1-U transport header size. Various transport
backhaul configuration and corresponding header size are given in section 3.5.4.2 above.
eNB_S1U_Throughput_Phy_DL/UL =
eNB_S1U_Non_GBR_Throughput_Phy_DL/UL +
eNB_S1U_VoLTE_Throughput_Phy_DL/UL +
eNB_S1U_GBR_Throughput_Phy_DL/UL
Depending upon the traffic model, some parts of the above formula are not considered. This is the
case for CSFB traffic model with 100% circuit switch fall back, where voice service is not carried
over the LTE radio access network, but over the 2G / 3G radio access network instead.
At S1-U interface level, Voice (VoLTE) and GBR services require bandwidth reservation to ensure
quality of service objectives are met, while non-GBR service gets the remaining part of the
aggregate S1-U interface available bandwidth. The figure below depicts this bandwidth reservation
rule.
[Figure: transport interface available bandwidth split into reserved bandwidth for VoIP,
reserved bandwidth for GBR, and the remaining bandwidth for non-GBR]
PtA_B is the peak to average ratio to apply at the backhaul. PtA_B is slightly different from
PtA_Air (the value to be used at the air interface), since the backhaul link rate is higher than
the air interface “link rate”, PtA_B is thus less restrictive than PtA_Air.
The Peak to Average ratio (PtA) is an overflow factor that is used to estimate the equivalent
bandwidth for a set of variable bit rate calls, such as applications making use of non-GBR
service. Details about the Peak to Average traffic aggregation concept is given in section
3.1.3.1.
PtA_B value to be used depends upon the call model and the frequency band in use.
Typical PtA_B figure ranges from 1.09 to 1.2 depending upon the frequency band and the call
model.
N_Subs is the number of active subscribers per eNB at the busy hour (refer to Table 2-5 of
section 2.2.1.1)
Avg_BR_Phy_DL/UL is the average bit rate for non GBR service. Avg_BR_Phy is computed as
follows:
Avg_BR_Phy_DL/UL(per Sub) =
[Volume_at_BH_(DL/UL) x 8 / 3600] x [1 + %_Transport_Overhead]
Volume_at_BH_(DL/UL) is an input from the traffic model (refer to Table 2-4 of section
2.2.1.1); it represents the aggregated number of bytes which are exchanged for non-GBR
service at the busy hour. Note that uplink and downlink figures are different.
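As a sketch, the per-subscriber average bit rate formula above can be computed as follows; the busy-hour volume and transport overhead used in the example are hypothetical illustrations, not traffic-model figures.

```python
def avg_br_phy_per_sub(volume_at_bh_bytes, transport_overhead):
    """Per-subscriber non-GBR average physical bit rate (bit/s).

    volume_at_bh_bytes: non-GBR bytes exchanged per subscriber at the busy hour.
    transport_overhead: fractional transport overhead (e.g. 0.10 for 10%).
    """
    # x 8 converts bytes to bits; / 3600 spreads the busy-hour volume over one hour.
    return (volume_at_bh_bytes * 8 / 3600) * (1 + transport_overhead)

# Hypothetical example: 40 MB downlink per subscriber at the busy hour, 10% overhead.
print(avg_br_phy_per_sub(40e6, 0.10))  # ≈ 97778 bit/s
```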
Non_GBR_BHCA and Non_GBR_Volume are retrieved from the traffic model. Non_GBR_BHCA
represents the average number of non-GBR busy hour call attempts and Non_GBR_Volume
represents the volume of data exchanged at the busy hour.
The factor of 8 normalizes bytes to bits.
The resulting throughput at the S1-U interface S-GW side is computed as eNB-
S1U_Non_GBR_Throughput_Phy_DL/UL multiplied by the number of eNBs (N_eNBs) which are served
by this S-GW.
GBR service traffic characteristics are such that the application packets are sent at a fixed rate.
GBR service traffic characteristics are retrieved from the traffic model, which provides the amount
of bytes exchanged through the interface during the busy hour.
eNB_S1-U_GBR_Throughput_App_Phy_DL/UL =
[ Inv_ErlgB (S1_GBR-traffic-Intensity @ BIGBR) ] x
[ GBR_Nominal_BR_APP_DL/UL] x [1 + %_Transport_Overhead]
N_Subs is the number of active subscribers per eNB at the busy hour (refer to Table 2-5 of
section 2.2.1.1).
GBR_BHCA is retrieved from the traffic model. It represents the average GBR busy hour call
attempt. Typical applications which get/require GBR type of service are those with QCI 1 and 2.
This is basically video collaboration, video telephony and real time gaming. For the LR14.1L
release, one of the following assumptions is made to get the GBR BHCA figures in Table 2-4:
- 100% of gamers are premium users. Only a percentage of the applications with QoS Class
Identifier (QCI) equal to 1 or 2 are eligible for GBR servicing.
- 100% of gamers are premium users. Only a percentage of video collaboration and video
telephony users are premium users (typical range is 10%).
BIGBR is the blocking factor for GBR service, i.e. the probability of overflow at S1-U interface
level. A typical figure is 0.01.
Inv_ErlgB is the inverse Erlang B formula, which is used to compute the amount of required
resources at eNB S1-U interface level, with the requested probability of overflow BIGBR, to
ensure the active users can send traffic, assuming a given activity factor.
GBR_Nominal_BR_APP DL/UL is the nominal GBR service throughput at busy hour. It is computed
as follows:
GBR_Nominal_BR_App_UL/DL(per sub) =
8 x (KByte_GBR_call_UL/DL) / (Call_Duration)
The factor of 8 normalizes bytes to bits.
KByte_GBR_call_UL/DL is an input from the traffic model (refer to Table 2-4); it represents the
aggregated number of bytes which are exchanged for GBR service at the busy hour. Note that
uplink and downlink figures are different.
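The Inv_ErlgB term can be sketched with the standard Erlang B recursion, as below; the offered traffic, blocking target and per-call figures are hypothetical illustrations rather than traffic-model values.

```python
def erlang_b(servers, intensity):
    """Erlang B blocking probability for 'servers' resources and 'intensity' Erlang."""
    b = 1.0
    for n in range(1, servers + 1):
        b = (intensity * b) / (n + intensity * b)
    return b

def inv_erlang_b(intensity, blocking_target):
    """Inverse Erlang B: smallest resource count keeping blocking below the target."""
    n = 1
    while erlang_b(n, intensity) > blocking_target:
        n += 1
    return n

# Hypothetical example: 5 Erlang of offered GBR traffic, BI_GBR = 0.01.
resources = inv_erlang_b(5.0, 0.01)
print(resources)  # → 11

# GBR nominal bit rate per the formula above: hypothetical 50 KByte per call, 90 s duration.
gbr_nominal_br = 8 * 50_000 / 90                    # bit/s
gbr_throughput = resources * gbr_nominal_br * 1.10  # with 10% transport overhead
```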
The resulting throughput at the S1-U interface S-GW side is computed as eNB-
S1U_GBR_Throughput_Phy_DL/UL multiplied by the number of eNBs (N_eNBs) which are served by
this S-GW.
S-GW_S1-U_Throughput_GBR_Phy_DL/UL =
N_eNBs x eNB_S1-U_Throughput_GBR_Phy_DL/UL
VoLTE service traffic characteristics are such that the application packets are sent at a fixed rate.
VoLTE service traffic characteristics are retrieved from the traffic model, which provides the amount
of bytes exchanged through the interface during the busy hour.
eNB_S1-U_VoLTE_Throughput_App_Phy_DL/UL =
[ Inv_ErlgB (S1_VoLTE-traffic-Intensity @ BIVoLTE) ] x
[ VoLTE_Nominal_BR_APP_DL/UL] x [1 + %_Transport_Overhead]
N_Subs is the number of active subscribers per eNB at the busy hour (refer to Table 2-5 of
section 2.2.1.1).
VoLTE_BHCA is retrieved from the traffic model. It represents the average VoLTE busy hour call
attempt (refer to Table 2-4 of section 2.2.1.1).
VoLTE_Call_duration is retrieved from the traffic model (refer to Table 2-4 of section 2.2.1.1).
BIVoLTE is the blocking factor for VoLTE service, i.e. the probability of overflow at S1-U interface
level. A typical figure is 0.01.
Inv_ErlgB is the inverse Erlang B formula, which is used to compute the amount of required
resources at eNB S1-U interface level, with the requested probability of overflow BIVoLTE, to
ensure the active users can send traffic, assuming a given activity factor.
VoLTE_Nominal_BR_App_UL/DL(per sub) =
8 x (KByte_VoLTE_call_UL/DL) / Call_Duration
The factor of 8 normalizes bytes to bits.
KByte_VoLTE_call_UL/DL is an input from the traffic model (refer to Table 2-4); it represents
the aggregated number of bytes which are exchanged for VoLTE service at the busy hour.
The resulting throughput at the S1-U interface S-GW side is computed as eNB-
S1U_VoLTE_Throughput_Phy_DL/UL multiplied by the number of eNBs (N_eNBs) which are served by
this S-GW.
S-GW_S1-U_Throughput_VoLTE_Phy_DL/UL =
N_eNBs x eNB_S1-U_Throughput_VoLTE_Phy_DL/UL
Referring to the call model parameters, as they are given in Table 2-3, Table 2-4 and Table 2-5 the
following throughput is expected at S1-U interface level (on a per eNB basis), with the following
assumptions:
IPv4 with VLAN tagging without IPsec is assumed, 20 MHz on air, 8634 attached subscribers per eNB
(1439 attached subscribers per cell, 6 cells per eNB).
S1-MME interface throughput computation (downlink and uplink) is driven by the number and size of
signaling messages per busy hour, which are exchanged at the control plane level to enable access
stratum and non access stratum signaling support.
In order to keep the S1-MME bandwidth computation simple, only the main procedures exchanged
over the S1-MME interface will be considered further in the computation exercise, i.e. the
procedures which are the most bandwidth consuming. The main procedures to consider are listed
below. For each procedure, the corresponding call flow is described in 3GPP TS 23.401.
Attach procedure.
Detach procedure.
Connection request: the assumption is made that this comprises the Service Request procedure
and the S1 release procedure (as described in 3GPP TS 23.401).
Tracking area update: this can either be with MME and S-GW relocation, with MME relocation and
without S-GW relocation, without MME relocation and with S-GW relocation, or without MME and
S-GW relocation. Note that the TAU with SGSN relocation event contributes to the S1-MME
throughput when the I-RAT procedure is applicable (BHCA figure for this procedure in Table 2-3).
PDN connectivity: for this procedure, the assumption is made that it is the UE requested PDN
connectivity procedure (as described in 3GPP TS 23.401).
PCMD: the PCMD contribution to the S1-MME traffic is described in section 3.3 (LTE OAM capacity
elements).
For each of the above procedures, the number of procedures per subscriber at the busy hour that
must be considered for S1-MME throughput computation is retrieved from the traffic model, as
described in Table 2-3. For each procedure, the per-subscriber BHCA is used in relation with the
number of messages and the message sizes, to assess the traffic activity for that procedure at the
S1-MME interface.
S1-MME protocol stack is described in section 3.5.4.1 above. S1-MME IP transport headers are
described in section 3.5.4.2 above.
This section describes the main procedures (those which are the most bandwidth consuming) that
are exchanged over the S1-MME interface, to assess S1-MME throughput. These procedures are
described in 3GPP TS 23.401. For each procedure, the number of messages exchanged
downlink/uplink and the aggregate number of bytes is given. Since SCTP is in use, the number of
SCTP acknowledgement messages is given as well. Worst-case figures are given as much as possible.
Attach Procedure
Detach Procedure
Successful UE initiated detach request is considered. The characteristics of this procedure are given
in the table below:
Connection Request
This procedure comprises UE initiated service request and release (S1 release procedure). Details
about these procedures can be found in 3GPP TS 23.401. The characteristics of this procedure are
given in the table below.
This procedure describes the messages which are exchanged between target eNB and MME during
inter-eNB (X2) handover. The characteristics of this procedure are given in the table below:
Inter-RAT
This procedure describes the messages which are exchanged between eNB and MME during inter-RAT
handover. The characteristics of this procedure are given in the table below:
Paging eNB
This procedure describes the messages which are exchanged between MME and eNB when the MME
needs to signal to a UE in ECM-IDLE state that there is a network triggered service request.
The characteristics of this procedure are given in the table below (note that we only consider the
paging message, as the initial UE service request message is assumed to be accounted for in the UE
service request procedure described above):
This procedure describes the messages which are exchanged between MME and eNB when the UE
requests PDN connectivity. The procedure allows the UE to request connectivity to an additional
PDN over eUTRAN, including allocation of a default bearer, if the UE already has active PDN
connections over eUTRAN. The characteristics of this procedure are given in the table below:
Tracking area update trigger conditions are described in 3GPP TS 23.401 (an example is when the
UE detects it has entered a new TA that is not in the list of TAIs that the UE registered with the
network). The characteristics of this procedure are given in the table below (it includes all call
relocation scenarios):
With the introduction of LR14.1, the eNB shall support 2 new S1AP messages: UE Radio Capability
Match Request and UE Radio Capability Match Response.
If the MME requires more information on the UE radio capabilities to be able to set the IMS voice
over PS Session Supported Indication, the MME may send a UE Radio Capability Match Request
message to the eNB. The eNB shall request the radio capabilities from the UE by sending an RRC UE
Capability Enquiry message and forward them to the MME in a UE Capability Info Indication
message. The eNB shall then answer the MME with a UE Radio Capability Match Response message,
including the Voice Support Match Indicator IE, determined based on the UE capabilities and the
eNB configuration.
These messages are important as they enable the MME to have the appropriate information at hand
to appropriately set the IMS voice over PS Session Supported Indication, which prevents
unnecessary loss of voice calls, or degraded user experience in cases where IMS voice could have
been used instead of CS.
This procedure is typically used during the Initial Attach procedure, during Tracking Area Update
procedure for the "first TAU following GERAN/UTRAN Attach" or for "UE radio capability update" or
when MME has not received the Voice Support Match Indicator (as part of the MM Context).
The table corresponding to the Attach Procedure (Table 3-42) and/or the table corresponding to the
TAU (Table 3-47) will be updated by the size of the UE Radio Capability Match Request and UE Radio
Capability Match Response message once we receive a trace showing these two messages.
The required S1-MME bandwidth is computed according to the following formula (separate downlink
and uplink computations are made, as the number and size of messages differ from downlink to uplink):
eNB_S1-MME_Throughput_Phy_DL/UL =
PtA_B x N_Subs x Avg_S1-MME_Throughput_Phy_DL/UL_Subs
PtA_B is the peak to average ratio to apply at the backhaul. Please refer to section 3.5.4.3.1.
N_Subs is the number of active subscribers per eNB at the busy hour (refer to Table 2-5 of
section 2.2.1.1).
Avg_S1-MME_Throughput_Phy_DL/UL_Subs =
∑i Avg_S1-MME_Proc(i)_Throughput_Phy_DL/UL
Avg_S1-MME_Proc(i)_Throughput_Phy_DL/UL =
Nb_S1-MME_Proc(i) x [Volume_Size_Proc(i)_DL/UL + Nb_msg_Proc(i)_DL/UL x
Transport_Header] x (8 / 3600)
Nb_S1-MME_Proc(i) is the number of procedure_#i at busy hour per subscriber (i.e. the BHCA
figure for attach, detach, connection request, etc). For each procedure the figure is retrieved from
Table 2-3 and Table 2-4, except for the paging procedure, where the figure is computed in section
2.2.1.2.
Volume_Size_Proc(i)_DL/UL is the amount of bytes which are exchanged for procedure_#i (at
S1-AP level). For each procedure, the figure is retrieved from the corresponding table of section
3.5.4.5.1 above.
Nb_msg_Proc(i)_DL/UL is the number of messages exchanged between eNB and MME for
procedure_#i. This includes SCTP acknowledgement messages. For each procedure, the figure is
retrieved from the corresponding table of section 3.5.4.5.1 above.
Transport_Header is the correction factor to apply to the computed throughput to account for
transport overhead (SCTP/IP/Ethernet plus optional IPsec header). Refer to section 3.5.4.2
above.
The factor of 8 normalizes bytes to bits.
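The summation above can be sketched as follows. The procedure set, BHCA figures, S1-AP volumes, message counts and transport header size are hypothetical stand-ins for the Table 2-3 and section 3.5.4.5.1 values.

```python
TRANSPORT_HEADER = 78  # bytes per message; hypothetical SCTP/IP/Ethernet figure

# Per procedure: (BHCA per subscriber, S1-AP volume in bytes, number of messages).
# Downlink direction; all values are illustrative only.
procedures_dl = {
    "attach":             (0.6, 400, 5),
    "connection_request": (30.0, 250, 4),
    "paging":             (25.0, 60, 1),
}

def avg_s1_mme_throughput_per_sub(procedures, transport_header=TRANSPORT_HEADER):
    """Average per-subscriber S1-MME throughput (bit/s), summed over procedures."""
    total = 0.0
    for bhca, volume, n_msg in procedures.values():
        # Procedure bytes plus one transport header per message,
        # converted to bits and spread over the 3600 s busy hour.
        total += bhca * (volume + n_msg * transport_header) * 8 / 3600
    return total

def enb_s1_mme_throughput(pta_b, n_subs, procedures):
    return pta_b * n_subs * avg_s1_mme_throughput_per_sub(procedures)

print(enb_s1_mme_throughput(1.2, 8634, procedures_dl))  # bit/s
```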
At MME side, the above computed eNB throughput has to be multiplied by the number of eNBs which
are served by this MME to assess the S1-MME interface throughput at MME level.
MME_S1-MME_Total_Throughput_Phy(DL/UL) =
N_eNBs x eNB_S1-MME_Throughput_Phy(DL/UL)
Note: PCMD throughput is computed in section 3.3 (LTE OAM capacity elements).
Referring to the call model parameters, as they are given in Table 2-3, the following throughput is
expected at S1-MME interface level:
IPv4 with VLAN tagging without IPsec is assumed, 20 MHz on air, 8634 attached subscribers per eNB
(1439 attached subscribers per cell, 6 cells per eNB).
S1-MME throughput: Downlink: 496 Kbit/s, Uplink: 604 Kbit/s.
3.5.5 X2 INTERFACE
This section describes the X2 interface throughput computation method. Similarly to the S1
interface, X2 is split into X2-U, which carries user plane traffic, and X2-C, which carries control
plane traffic. Details about the X2 interface can be found in section 2.1.3.4.
X2-U and X2-C dimensioning methods are different. The sections below describe the dimensioning
method for each type of traffic. From a backhauling perspective, the overall X2 bandwidth equals
X2-U + X2-C.
The figure below, which is an extract from 3GPP TS 23.401 depicts the call flow for X2-based
handover. It shows how X2 interface enables handing over from source to target eNB, when active
traffic is exchanged between the UE and the ePC.
[Figure: X2-based handover call flow: handover preparation, handover execution, forwarding of data, release resource.]
X2-U is the reference point between source and target eNBs, for bearer inter eNB path switching
during handover. X2-U enables data forwarding from source eNB toward target eNB, while handing
over between eNBs, when radio interface is already established in the target cell. X2-U interface
protocol stack is depicted in the figure below.
[Figure: X2-U protocol stack: user application traffic (TCP, UDP, ...) carried over GTP-U between source and target eNodeB.]
X2-C is the reference point for the control plane protocol between eNBs (source and target eNBs)
when inter eNBs handover is to be supported. X2-C interface protocol stack is depicted in the figure
below.
[Figure: X2-C protocol stack: X2-AP between source and target eNodeB, RRC between UE and eNodeB.]
Detailed X2 bandwidth figures depend upon the IP transport stack and the resulting headers which
must be added to the application protocol, according to the backhaul network architecture. The
same rules as those given for the S1 interface apply for the X2 interface. Please refer to section
3.5.4.2 above.
Several transport header combinations are possible depending upon the transport backhaul
architecture. Figures are similar to those of S1 except for X2-U GTP-U header size which is 13 bytes
as per 3GPP TS 29.281 (it is indeed padded with 3 additional bytes to reach 16 bytes length).
X2-U interface enables data forwarding from source eNB to target eNB during the hand-over
procedure. The resulting X2-U throughput depends upon the amount of active users experiencing
handover. Indeed a fraction of the aggregated S1-U traffic is forwarded to the target eNB before the
hand-over is completed, as depicted in the figure below.
[Figure: during handover, part of the S1-U traffic arriving from the S-GW at the source eNodeB is forwarded over X2-U to the target eNodeB.]
X2-U interface throughput is computed by taking into account the maximum user plane traffic for
the maximum number of attached subscribers at the busy hour. X2-U bandwidth computation
process uses the input from the traffic model (either Alcatel-Lucent traffic model or customer
specific traffic model), taking into consideration the backhauling network specific architecture.
X2-U throughput depends upon UEs handover activity. Basically, specific hand-over parameters
related to the cell coverage, the user mobility pattern and velocity are the main points to be taken
into account. These hand-over parameters are used to assess the number of hand-over procedures
per subscriber at Busy Hour. Number of inter eNB (X2) events per subscriber at the busy hour is an
input from the traffic model which is described in Table 2-3.
The X2-U throughput has to be computed for both X2-U directions (In for the incoming X2 traffic and
Out for the outgoing X2 traffic). However, with the assumption that the per-eNB number of
subscribers is constant during the busy hour and that the percentage of users experiencing handover
is equally spread over eNBs, we assume that at each eNB, X2-U_Throughput_In = X2-U_Throughput_Out.
Indeed X2-U throughput computation follows the same principles as those which are applied for S1-
U. But in addition, the handover BHCA and handover duration have to be taken into consideration.
X2-U bandwidth is computed according to the following formula:
eNB_X2-U_Throughput_Phy_In =
eNB-X2-U_Non_GBR_Throughput_Phy_In +
eNB_X2-U_VoLTE_Throughput_Phy_In +
eNB_X2-U_GBR_Throughput_Phy_In
For non-GBR services, X2-U throughput is computed as follows (same formula as for S1-U non-GBR,
but with the handover ratio taken into consideration and a GTP-U header of 16 bytes instead of 8
bytes). As specified in 3GPP TS 29.281, section 5.2.2.2, the X2-U GTP-U header size equals 13 bytes:
the 8-byte minimum GTP-U header, plus 1 byte for the Next Extension Header Type, plus 4 bytes for
the PDCP PDU number field. Note that 3 padding bytes are added by most implementations in order
to reach a 16-byte GTP-U overhead, which is easier to handle from an implementation perspective.
eNB_X2-U_Throughput_Non_GBR_Phy_In =
PtA_B x N_Subs x Avg_BR_Phy_In (per Sub)
PtA_B is the peak to average ratio to apply at the backhaul. Please refer to section 3.5.4.3.1.
N_Subs is the number of active subscribers per eNB at the busy hour (refer to Table 2-5 of
section 2.2.1.1).
Volume_at_BH_(DL/UL) represents the aggregated number of bytes which are exchanged at X2-
U for non-GBR service at the busy hour, for a given handover duration period of time.
Volume_at_BH_(In) =
[(Inter_eNB_X2_HO_BHCA x HO_duration) / (Non_GBR_Call_Duration x Non_GBR_BHCA)] x
(Non_GBR_BHCA x Non_GBR_BHCA_Volume)
The first part of the above formula represents the handover ratio, which is a correction factor to
account for the fact that only a fraction of the aggregated non-GBR volume of traffic is exchanged
over X2-U (i.e. the ratio between (handover duration x handover BHCA) and (non-GBR call
duration x non-GBR BHCA)). The second part of the above formula represents the non-GBR
volume of data exchanged at the busy hour.
Inter-eNB_X2_HO_BHCA is retrieved from the traffic model. It represents the average inter eNB
(X2) handover at the busy hour.
Non_GBR_BHCA_Volume is retrieved from the traffic model. It represents the average volume of
bytes exchanged for non-GBR service at the busy hour.
The factor of 8 normalizes bytes to bits.
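The handover-ratio computation above can be sketched as follows; all input figures are hypothetical illustrations of traffic-model values.

```python
def x2u_non_gbr_volume_in(ho_bhca, ho_duration_s,
                          non_gbr_call_duration_s, non_gbr_bhca,
                          non_gbr_bhca_volume_bytes):
    """Per-subscriber non-GBR bytes forwarded over X2-U at the busy hour."""
    # Handover ratio: fraction of the non-GBR traffic that is in flight during
    # handovers and therefore forwarded over X2-U.
    ho_ratio = (ho_bhca * ho_duration_s) / (non_gbr_call_duration_s * non_gbr_bhca)
    # Second factor: total non-GBR volume exchanged at the busy hour.
    return ho_ratio * (non_gbr_bhca * non_gbr_bhca_volume_bytes)

# Hypothetical example: 2 X2 handovers per subscriber at busy hour, each forwarding
# for 0.1 s; 30 non-GBR sessions of 180 s each, 1.5 MB exchanged per session.
print(x2u_non_gbr_volume_in(2, 0.1, 180, 30, 1.5e6))  # bytes over X2-U
```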
eNB_X2-U_GBR_Throughput_App_Phy_In =
[ Inv_ErlgB (X2-U_GBR-traffic-Intensity @ BIGBR) ] x
[ GBR_Nominal_BR_APP_In] x [1 + %_Transport_Overhead]
Inv_ErlgB is the inverse Erlang B formula, which is used to compute the amount of required
resources at eNB X2-U interface level, with the requested probability of overflow, to ensure the
active users can send traffic, assuming a given activity factor.
BIGBR is the blocking factor for GBR service, i.e. the probability of overflow at X2-U interface
level. A typical figure is 0.01.
X2-U_GBR-traffic-intensity =
(N_Subs x GBR_BHCA x Inter-eNB_X2_HO_BHCA x HO_duration) / 3600
N_Subs is the number of active subscribers per eNB at the busy hour (refer to Table 2-5 of
section 2.2.1.1).
GBR_BHCA is retrieved from the traffic model. It represents the average GBR busy hour call
attempt.
Inter-eNB_X2_HO_BHCA is retrieved from the traffic model. It represents the average inter eNB
(X2) handover.
GBR_Nominal_BR_APP_In is the nominal GBR service throughput at busy hour. It is computed
as follows:
GBR_Nominal_BR_App_In(per sub) =
8 x (KByte_GBR_call_In) / (Call_Duration)
The factor of 8 normalizes bytes to bits.
KByte_GBR_call_In is an input from the traffic model (refer to Table 2-4); it represents the
aggregated number of bytes which are exchanged for GBR service at the busy hour. Note that
uplink and downlink figures are different.
Call_Duration represents the GBR service call duration at the busy hour (traffic model, refer to
Table 2-4).
eNB_X2-U_VoLTE_Throughput_App_Phy_In =
[ Inv_ErlgB (X2-U_VoLTE-traffic-Intensity @ BIVoLTE) ] x
[ VoLTE_Nominal_BR_APP_In] x [1 + %_Transport_Overhead]
Inv_ErlgB is the inverse Erlang B formula, which is used to compute the amount of required
resources at eNB X2-U interface level, with the requested probability of overflow BIVoLTE, to
ensure the active users can send traffic, assuming a given activity factor.
BIVoLTE is the blocking factor for VoLTE service, i.e. the probability of overflow at X2-U interface
level. A typical figure is 0.01.
X2-U_VoLTE-traffic-intensity =
(N_Subs x VoLTE_BHCA x Inter-eNB_X2_HO_BHCA x HO_duration) / 3600
N_Subs is the number of active subscribers per eNB at the busy hour (refer to Table 2-5 of
section 2.2.1.1).
VoLTE_BHCA is retrieved from the traffic model. It represents the average VoLTE busy hour call
attempt.
Inter-eNB_X2_HO_BHCA is retrieved from the traffic model. It represents the average inter eNB
(X2) handover.
VoLTE_Nominal_BR_APP_In is the nominal VoLTE service throughput at busy hour. It is computed
as follows:
VoLTE_Nominal_BR_App_In(per sub) =
8 x (KByte_VoLTE_call_In) / (Call_Duration)
The factor of 8 normalizes bytes to bits.
KByte_VoLTE_call_In is an input from the traffic model (refer to Table 2-4); it represents the
aggregated number of bytes which are exchanged for VoLTE service at the busy hour. Note that
uplink and downlink figures are different.
Call_Duration represents the VoLTE service call duration at the busy hour (traffic model, refer to
Table 2-4).
Referring to the call model parameters, as they are given in Table 2-3, the following throughput is
expected at X2-U interface level:
IPv4 with VLAN tagging without IPsec is assumed, 20 MHz on air, 8634 attached subscribers per eNB
(1439 attached subscribers per cell, 6 cells per eNB).
The assumptions for X2-C interface throughput computation are the same as for S1-MME interface
throughput computation, as described in section 3.5.4.4.
The main procedure to consider for X2-C bandwidth computation, is the inter_eNB_X2 handover
procedure. This procedure is described in 3GPP TS 23.401 and 3GPP TS 36.300. The traffic model
provides the BHCA characteristics for the inter_eNB_X2 handover procedure. The inter_eNB_X2
handover procedure characteristics are given in the table below.
Formulas to compute the resulting X2-C physical throughput are the same as those used to
compute the S1-MME throughput; please refer to section 3.5.4.4.
The total required throughput for X2-C must take into account both the incoming and the outgoing
handover procedures. Assuming that, for each subscriber leaving a source eNB coverage (outgoing
HO), there is another subscriber coming from a neighboring eNB (incoming HO) which generates an
equivalent X2-C traffic, the total required X2-C throughput is equal to the sum of the required
throughput for each of these two directions (In + Out).
Referring to the call model parameters, as they are given in Table 2-3, the following throughput is
expected at X2-C interface level:
IPv4 with VLAN tagging without IPsec is assumed, 20 MHz on air, 8634 attached subscribers per eNB
(1439 attached subscribers per cell, 6 cells per eNB).
3.5.6 M1 INTERFACE
M1 interface is the reference point between MBMS-GW and the eNB. This is the user plane for MBMS
data delivery (i.e M1 interface is a bearer interface). A brief MBMS architecture description is given
in section 2.1.4.7. M1 interface is described in section 2.1.3.4. M1 interface protocol stack is
depicted in the figure below.
Please note that M1 is a unidirectional transport interface, as IP multicast is used to deliver MBMS
application data.
[Figure: M1 protocol stack: MBMS application data over SYNC, GTP-U, UDP and IP (path), with optional IPsec/UDP/IP, between the MBMS-GW and the eNodeB.]
To support inter-eNB content synchronization, a SYNC protocol layer is defined between the BM-SC
(Broadcast Multicast Service Center) and the eNBs. The SYNC protocol carries additional information
that enables eNBs to identify the timing for radio frame transmission and to detect packet loss. Each
packet at M1 interface level contains the SYNC protocol information which is generated at the BM-SC.
SYNC protocol (TS 25.446) is used over the M1 interface to keep the content synchronization for
MBMS service data transmission. MBMS application data is encapsulated inside SYNC protocol.
M1 interface IP transport headers
M1 bandwidth figures depend upon the IP transport stack and the resulting headers which must
be added to the application protocol, according to the backhaul network architecture. Referring
to the M1 protocol stack described above, the following parameters need to be taken into
consideration when assessing the physical bandwidth at the M1 interface. Please refer to section
3.5.4.2 above for detailed M1 interface IP transport header size figures. Note that VLAN tagging
is mandatory for M1 interface transport over the backhaul network.
MBMS SYNC protocol headers
M1 bandwidth figure depends also upon the MBMS SYNC protocol headers which need to be
accounted for, when MBMS service is in use. Every packet at M1 interface contains the MBMS
SYNC protocol information.
The SYNC protocol for eUTRAN is used to convey user data associated with MBMS Radio Access
Bearers. As part of the SYNC protocol procedure, the BM-SC includes in the SYNC PDU packets
(i.e. the MBMS application data) a time stamp which tells the eNB the timing based on which it
sends the MBMS data over the air interface. This time stamp is based on a common time reference
available at the BM-SC and the eNBs, and is a relative value which refers to the start time of the
synchronization period (the synchronization period provides the time reference for the indication
of the start time of each synchronization sequence). MBMS user data is time-stamped based on
separable synchronization sequences: each SYNC PDU contains a time stamp which indicates the
start time of its synchronization sequence. Synchronization sequences are transmitted
continuously, even if there is no MBMS user data in a sequence. Each synchronization sequence
for each service is denoted by a single timestamp value; an increase of the timestamp value by
one synchronization sequence length is interpreted as an implicit start-of-a-new-synchronization-
sequence indicator, so that the eNB becomes aware that a new sequence is starting.
There are four types of frame format for the SYNC protocol:
- SYNC PDU Type 0, for transfer of synchronization information without MBMS data; it features
an 18-byte SYNC protocol header.
- SYNC PDU Type 1, for transfer of MBMS user data with uncompressed header; it features an
11-byte SYNC protocol header.
- SYNC PDU Type 2, for transfer of MBMS user data with compressed header; it is not applicable
to the M1 interface.
- SYNC PDU Type 3, for transfer of synchronization information with the length of packets; it
features a 19-byte SYNC protocol header.
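For illustration, the SYNC header sizes above can be captured in a small lookup to estimate per-packet M1 overhead; the 54-byte transport header in the example is a hypothetical figure, since the actual value depends on the section 3.5.4.2 stack options.

```python
# SYNC PDU header sizes per TS 25.446, as listed above; Type 2 is not applicable at M1.
SYNC_HEADER_BYTES = {
    0: 18,  # synchronization info without MBMS data
    1: 11,  # MBMS user data, uncompressed header
    3: 19,  # synchronization info with packet lengths
}

def m1_overhead(sync_pdu_type, transport_header_bytes=54):
    """Total per-packet overhead at M1: SYNC header plus IP transport headers."""
    return SYNC_HEADER_BYTES[sync_pdu_type] + transport_header_bytes

print(m1_overhead(1))  # → 65 bytes for a Type 1 user-data packet
```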
So far, the reference traffic model of Table 2-3 and Table 2-4 does not provide detailed figures for
eMBMS; consequently, it is not possible to provide guidance about how to compute the M1
bandwidth. This section will be completed once the reference traffic model includes eMBMS.
Please refer to section 3.5.6.2. This section will be completed in a further release of the document.
3.5.7 M3 INTERFACE
M3 interface is the reference point between the MME and the eNB embedded Multi-cell/multicast
Coordination Entity (MCE). This is the control plane for MBMS data delivery, between MME and eNB /
MCE, as the MME is instructed by the MBMS-GW over Sm interface. A brief MBMS architecture
description is given in section 2.1.4.7. M3 interface is described in section 2.1.3.4.
The M3 interface enables the MME and the eNB embedded MCE to exchange the following messages
(M3AP elementary procedures):
MBMS Session Start Request (please refer to 3GPP 23.246 for details regarding the MBMS session
start procedure).
MBMS Session Start Response (please refer to 3GPP 23.246 for details regarding the MBMS session
start procedure).
MBMS Session Update Request (please refer to 3GPP 23.246 for details regarding the MBMS
session update procedure).
MBMS Session Update Response (please refer to 3GPP 23.246 for details regarding the MBMS
session update procedure).
MBMS Session Stop Request (please refer to 3GPP 23.246 for details regarding the MBMS session
stop procedure).
MBMS Session Stop Response (please refer to 3GPP 23.246 for details regarding the MBMS session
stop procedure).
Reset, to initialize or re-initialize the eUTRAN, or part of the eUTRAN, M3AP MBMS-related
contexts (in the event of a failure in the ePC). This is a non-MBMS-service associated signalling
procedure.
M3 Setup Request, to enable the exchange of application level data needed for the MCE and MME
to correctly interoperate on the M3 interface. The procedure uses non-MBMS-service associated
signalling. This procedure erases any existing application level data in the MCE and the MME and
replaces it by the data received. It also re-initialises the eUTRAN M3AP service-related contexts
(if any) and erases all related MBMS-service-associated logical M3 connections in the two nodes,
as a Reset procedure would do.
MCE Configuration Update, to enable the update of application level configuration data needed
for the MCE and MME to interoperate correctly on the M3 interface. The procedure uses
non-MBMS-service associated signalling.
[Figure: M3 protocol stack: M3-AP over SCTP/IP, with optional IPsec/UDP/IP, between the eNodeB (embedded MCE) and the MME.]
Similarly to the S1-MME interface maximum throughput computation, the number and size of the
signaling messages per busy hour which are exchanged at the M3 interface must be taken into
consideration. To keep the M3 interface bandwidth computation simple, only the main procedures
which are exchanged over the interface will be considered further in the computation exercise,
i.e. the procedures which are the most bandwidth consuming.
The main procedures which are taken into consideration for M3 maximum throughput computation
are listed below (MBMS procedures are described in 3GPP TS 23.246, M3AP procedures are
described in 3GPP TS 36.444):
MBMS Session Start Procedure
The BM-SC initiates the MBMS Session Start procedure when it is ready to send data. This is a
request to activate all necessary bearer resources in the network for the transfer of MBMS data
and to notify interested UEs of the imminent start of the transmission.
Upon reception of the MBMS session start request from the MBMS-GW, the MME creates an MBMS
bearer context. The MME stores the session attributes and sends a Session Start Request
message including the session attributes (transport network IP Multicast Address, IP address of
the multicast source, C-TEID, ...) to the eUTRAN. When connected to multiple MCEs, the MME
filters the distribution of Session Control messages to the MCEs based on the MBMS service area.
MBMS Session Stop Procedure
The BM-SC Session and Transmission function initiates the MBMS Session Stop procedure when it
considers the MBMS session to be terminated. The session is typically terminated when there is
no more MBMS data expected to be transmitted for a sufficiently long period of time to justify a
release of bearer plane resources in the network.
Upon reception of an MBMS Session Stop Request from the MBMS-GW, the MME forwards the Session
Stop Request message to the eUTRAN nodes which previously received the Session Start Request
message and sets the state attribute of its MBMS bearer context to 'Standby'. Each eUTRAN node
responds with a Session Stop Response message to the MME. The MME releases the MBMS bearer
context in case of a broadcast MBMS bearer service.
MBMS Session Update Procedure
The MBMS Session Update procedure may be invoked by BM-SC. The BM-SC uses the procedure to
update the service area for an ongoing MBMS Broadcast service session (MBMS Service Area is the
area within which data of a specific MBMS session are sent). The BM-SC initiates the MBMS
Session Update procedure when the service attributes (e.g. Service Area) for an ongoing MBMS
Broadcast service session shall be modified, e.g. the Session Update procedure for EPS is
initiated by BM-SC to notify eNBs to join or leave the service area.
The MME receiving an MBMS Session Update Request message sends an MBMS Session Update
Request message including the session attributes (transport network IP Multicast Address, IP
address of the multicast source, C-TEID, etc.) to each eNB/MCE that is connected to the MME.
When connected to multiple MCEs, the MME should filter the distribution of MBMS Session
Update Request messages to the MCEs based on the MBMS service area.
So far the reference traffic model of Table 2-3 and Table 2-4 does not provide detailed figures for
eMBMS; consequently it is not possible to provide guidance on how to compute the M3 bandwidth.
This section will be completed once the reference traffic model includes eMBMS.
Please refer to section 3.5.7.2. This section will be completed in a further release of the document.
Maximum throughput computation over the backhauling interface depends upon the transport
backhaul configuration. Please refer to section 3.5.4.2 above for examples of possible
configurations.
eNB_BW_Total_UL/DL =
eNB_BW_Telecom_UL/DL + eNB_BW_OAM_UL/DL + eNB_BW_Sync_UL/DL
With
eNB_BW_Telecom_UL/DL: Telecom interface throughput (S1_UL/DL + X2_UL/DL + M1_DL +
M3_UL/DL)
OAM Operational traffic is a fixed throughput equal to 3.5 Mbit/s; Maintenance traffic is
additional traffic required only for maintenance purposes.
The eNB interface estimated load eNB_BW_Load can then be assessed with the following formula:
eNB_BW_Load_DL/UL = eNB_BW_Total_DL/UL / AV_eNB_BW_DL/UL x 100
With AV_eNB_BW_UL/DL being the available eNB bandwidth which is reserved by the operator for
the OAM traffic and Telecom traffic.
The eNB interface load has to be computed for both the downlink and uplink transmission paths.
The resulting maximum load_DL/UL, expressed in %, is then compared to the eNB target load (X%).
The load is acceptable as long as the following is true:
eNB_BW_Load_DL/UL ≤ X%
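The total-bandwidth and load formulas above can be sketched as follows. The backhaul capacity (200 Mbit/s) and the target load (80%) used in the example are illustrative assumptions, not Alcatel-Lucent reference values:

```python
# Minimal sketch of the eNB_BW_Total and eNB_BW_Load formulas above.
# Backhaul capacity and target load are illustrative assumptions.

def enb_bw_total(telecom_kbps: float, oam_kbps: float, sync_kbps: float) -> float:
    """eNB_BW_Total = eNB_BW_Telecom + eNB_BW_OAM + eNB_BW_Sync (Kbit/s)."""
    return telecom_kbps + oam_kbps + sync_kbps

def enb_bw_load_pct(total_kbps: float, available_kbps: float) -> float:
    """Estimated interface load in %, against the bandwidth reserved by the operator."""
    return 100.0 * total_kbps / available_kbps

# Downlink example using the reference figures quoted later in this section:
total_dl = enb_bw_total(telecom_kbps=129475, oam_kbps=600, sync_kbps=102.37)
load_dl = enb_bw_load_pct(total_dl, available_kbps=200_000)
assert load_dl <= 80.0  # load is acceptable while it stays below the target X%
```

The same computation is repeated for the uplink direction with the UL figures.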
The example below assumes there is no additional OAM maintenance traffic, i.e. no specific
maintenance activity. The table below summarizes the eNB interface throughput for the
Alcatel-Lucent reference traffic model. The aggregated Telecom throughput (S1 & X2 & M1 & M3)
is computed as follows:
S1-U and S1-MME throughput figures are taken from section 3.5.4.4 and section 3.5.4.6 above.
X2-U and X2-C throughput figure are taken from section 3.5.5.4 and section 3.5.5.6 above.
OAM Operational and maintenance throughput figures are given in section 3.3.
IEEE 1588 synchronization throughput can take two different values: peak throughput and
nominal throughput. Peak throughput is in the range of 102.37 Kbit/s for downlink and 49.44
Kbit/s for uplink; these are the figures during the synchronization setup phase. Nominal
throughput is in the range of 28.8 Kbit/s for downlink and 14.5 Kbit/s for uplink; these are the
figures once synchronization is established, i.e. after synchronization convergence.
Telecom throughput at eNB interface is given in the table below.
Table 3-49: eNB Throughput (IPv4, w/o OAM Maintenance Traffic, w/o IPsec)
The table below provides the eNB interface throughput assuming additional OAM maintenance
traffic. Six different OAM maintenance actions are considered. Please note that it is not
recommended to carry out the six OAM maintenance actions simultaneously, to prevent overloading
the eNB backhaul interface; it is recommended to execute a single OAM operation only. The
required bandwidth for each OAM operation is given as an example.
The operations considered are DDT, Call Trace, Software Upgrade, Board restart, App restart,
L3 snapshot Context and On-demand snapshot Context. The common throughput components
(identical for all operations, in Kbit/s) are:

S1 & X2 & M1 & M3 BW DL: 129475
S1 & X2 & M3 BW UL: 31646
Synchro DL: 102.37
Synchro UL: 49.44
OAM BW DL: 600 (plus 11300 for the Software Upgrade operation)

The resulting per-operation totals are:

OAM operation (maintenance traffic direction) | Max Total BW DL (Kbit/s) | Max Total BW UL (Kbit/s)
DDT (UL only)                                 | 130178                   | 106910
Call Trace (UL only)                          | 130178                   | 32491
Software Upgrade (DL only)                    | 141478                   | 31911
Board restart (UL only)                       | 130178                   | 34111
App restart (UL only)                         | 130178                   | 33911
L3 snapshot Context (UL only)                 | 130178                   | 31914
On-demand snapshot Context (UL only)          | 130178                   | 32711

Table 3-50: eNB Bandwidth Calculation (IPv4 w Additional OAM Maintenance Traffic)
Note: Only DDT, call trace operation and software upgrade (downlink without execution) can be
executed in parallel; other OAM operations cannot be executed simultaneously because they might
require the eNB to be reset.
For a maximum case including maintenance with DDT (20MHz on Air cells) we consider the network
under control if the maximum reserved end to end bandwidth is as follows:
The total maximum downlink bandwidth is in the range of 142 Mbit/s.
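The downlink totals of Table 3-50 can be cross-checked by summing the common components; the table values are rounded up to the next integer:

```python
import math

# Cross-check of the Table 3-50 downlink totals: every column's "Max Total BW DL"
# is the sum of the common Telecom, OAM and synchronization components (Kbit/s).
telecom_dl = 129475      # S1 & X2 & M1 & M3 BW DL
oam_dl = 600             # OAM operational traffic, DL
sync_dl = 102.37         # IEEE 1588 peak throughput, DL

base_total_dl = math.ceil(telecom_dl + oam_dl + sync_dl)             # common case
upgrade_total_dl = math.ceil(telecom_dl + oam_dl + 11300 + sync_dl)  # Software Upgrade column

assert base_total_dl == 130178     # matches Table 3-50
assert upgrade_total_dl == 141478  # matches the Software Upgrade column
```

The ~142 Mbit/s figure quoted above corresponds to the Software Upgrade column total.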
The 7750 SR Mobile Gateway is based on the 7750 SR hardware platform. A hardware module, named
Mobile-Gateway Integrated Services Module (MG-ISM), is used to support the S-GW and P-GW/GGSN
functions. 7750 SR OS MG 6.0 uses MG-ISMv2 (introduced in 5.0R5). MG-ISMv2 provides increased
capacity: 2 million bearer contexts compared to 1 million supported by MG-ISMv1.
The 7750 SR Mobile Gateway (implemented as either an S-GW or P-GW/GGSN) is a single chassis
system that consists of a set of IO Module cards (IOMs), one Switch Fabric/Control Plane Module
(SF/CPM-3) with an optional redundant SF/CPM-3, and a set of Mobile Gateway - Integrated Services
Modules (MG-ISMs). New for Release 6.0R1, SGW and PGW functions are supported on the same MG-
ISM group. Therefore the minimum chassis configuration for a combined SGW/PGW is one MG-ISM
card (no redundancy) or two MG-ISM cards (with 1+1 redundancy).
7750 SR Mobile Gateway can be based on two different chassis configurations: 7750 SR-12 and 7750
SR-7. One slot holds one Switch Fabric/Control Processor Module (SF/CPM) and only one SF/CPM is
required for operation. A second SF/CPM provides complete redundancy of the fabric and the
control processors. The SF/CPM-3 card provides node management, routing protocols, and load
balancing of sessions into the MG-ISMs.
The remaining 10 slots, in case of 7750 SR-12, are occupied by I/O Modules and MDAs or Mobile
Gateway – Integrated Services Module.
The Alcatel-Lucent Input/Output Module -XP (IOM3-XP) supported by the 7750 Mobile Gateway
delivers up to 50 Gb/s (full-duplex) per-slot performance, with highly scalable multiservice
capabilities. It provides outstanding IP/MPLS routing performance to enable the convergence of
voice, video and data services for a wide range of business, consumer and mobile network
applications.
The Alcatel-Lucent Service Router (SR) Ethernet Media Dependent Adapter -XP (MDA-XP) delivers
high-density, high performance Carrier Ethernet for all 7750 Service Router applications including
Mobile Gateways. They provide up to 25 Gb/s (full-duplex) of line-rate throughput per card and
support ITU-T Synchronous Ethernet, leading port densities, and a rich set of service delivery and
OAM features.
Technical details on MG-ISM, IOM3-XP and MDA-XP modules can be found in [C11] and [C12].
The following table gives a summary of the common 7750 SR Mobile gateway system configuration:
Power supply: AC Power (1 + 1) or DC Power (1 + 1), identical for both chassis configurations.
Pairs of MG-ISMv1 and MG-ISMv2 may be configured in the same chassis, but each member
within a redundant 1:1 pair must be the same version.
The following tables summarize the 7750 Mobile Gateway static capacity using MG-ISMv2:
DPI N/A
DPI Up to 10 Gbps
Note: The total 7750 Mobile gateway chassis capacity is simply the addition of individual MG-ISM
capacity:
Max number of supported PDP-context / EPS-bearer activations: 4000 connections per second
Max number of supported PDP-context / EPS-bearer deactivations: 4000 connections per second
Max number of supported PDP-context / EPS-bearer modifications: 4000 connections per second
Note: The total 7750 Mobile gateway chassis capacity is simply the addition of individual MG-ISM
capacity:
4 x active MG-ISM per chassis (for SR12)
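The chassis-capacity rule stated in the notes above (total capacity is the sum of the individual active MG-ISM capacities) can be sketched with the figures quoted in this section; treating the 4000/s bearer-activation rate as a per-MG-ISM figure is an assumption:

```python
# Sketch of the additive chassis-capacity rule: figures from this section
# (2M bearer contexts per MG-ISMv2, 4 active MG-ISMs per SR-12 chassis).
# Assumption: the 4000 activations/s rate is a per-MG-ISM figure.
bearers_per_mg_ismv2 = 2_000_000
active_mg_isms_sr12 = 4

chassis_bearer_contexts = bearers_per_mg_ismv2 * active_mg_isms_sr12  # 8 million
chassis_activations_per_s = 4000 * active_mg_isms_sr12                # 16000/s
```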
3.6.1.4 MOBILE GATEWAY BASE SYSTEM (SR DERIVED) FEATURES AND CAPACITY
The 7750 Mobile Gateway supports a number of services and features directly inherited from the
7750 Service Router OS. The most relevant capabilities for EPC deployment, supported as part of
SR OS MG 6.0 for both S-GW and P-GW, are the following:
Services
- IES
- VPRN for 3GPP interfaces
Note: for VPRN service, the maximum number of APN applies on P-GW
Ethernet
- 802.1Q VLAN based interfaces
- 802.3ad LAG interfaces
- 802.1ag & 802.3ah Ethernet OAM
Routing / forwarding
- Static routing
- OSPFv2
- OSPFv3
- BGP
- ISIS
QoS
Note: the capabilities below refer to the “transport” level, i.e., when applied to ingress/egress
IOM3. QoS features and capabilities in the context of PCC rules are addressed separately.
- QoS filters/policies on network and SAP ingress/egress
- Queues and schedulers with shaping
Filters and policies
- IPv4 and IPv6 filters
- Route policies
High availability
- BFD for static, OSPF and BGP
Unless explicitly stated otherwise, capacity and scalability limits for the above capabilities
can be derived directly from the 7750 Service Router scalability guidelines, on the basis of SR OS
6.0R1 running in chassis mode D (i.e., a system equipped with IOM3-XP modules only). Although the
7750 Mobile Gateway is based on the SF/CPM-3, the control plane performance figures listed against
the SF/CPM-2 apply.
However, it must be noted that the individual limits are theoretical limits applicable to a given
feature/capability in isolation, abstracting from interactions with other limits. In practice, the
actual limit is always a combination of several limits, with the most restrictive one determining
the achievable scale. The ability to reach the maximum scaling limits depends on, but is not
limited to, CPU load, network topology, etc. These scaling limits have been designed for specific
deployments of the 7750 Service Router that do not place high degrees of stress in many areas.
In the context of the Mobile Gateway application, high degrees of scalability/capacity are not
expected for the base system (SR derived) capabilities (routing protocols, QoS filters, etc.).
High scalability/capacity requirements for the Mobile Gateway application are addressed in the
previous sections.
For extreme scaling conditions thorough testing must be performed by the customer. It is therefore
strongly advised to contact an Alcatel-Lucent representative for additional guidance.
This section addresses the specific engineering capacity limits and guidelines for the 7750 Mobile
Gateway when configured as an S-GW.
3.6.2.1 S-GW INTRA-NODE RESILIENCY DIMENSIONING
The 7750 SR S-GW supports 1:1 MG-ISM stateful redundancy. With MG-ISM cards deployed in such
protection mode, the 7750 SR Mobile Gateway provides full internal application redundancy. Each
pair supports an active and a standby MG-ISM that synchronize all subscriber states between them. A
switchover from active to standby MG-ISM is invisible to the subscriber and therefore will not have
any impact on surrounding EPC nodes/systems: in other words it will not generate any additional
load to the MME or other network nodes and systems. As such the switchover from standby to active
does not require external protocol support.
The 7750 S-GW supports up to 8 x MG-ISMs (up to 4 pairs deployed in 1:1 hot stand-by mode). When
multiple active MG-ISM are provisioned in the S-GW, an internal load-balancing algorithm that
involves the Control Processor Module (CPM) allows the S-GW to spread the creation of
subscriber/bearer contexts across the active MG-ISMs. Subsequently the IOMs direct all the traffic
to the appropriate MG-ISM based on the individual subscriber context. Therefore, no specific
dimensioning rule needs to be applied to take advantage of the full capacity of the system in
presence of multiple MG-ISMs per S-GW.
3.6.2.2 S-GW GEO-RESILIENCY DIMENSIONING
7750 SR S-GW supports geo-resiliency through S-GW pooling (based on S1-Flex). There is no need to
create a specific configuration to activate S1-Flex interfaces in S-GW. Interconnection between
eNB/MME and S-GW is based on IP routing and S-GW will advertise its configured IP addresses for
each 3GPP interface (S1, S11) via IGP and/or BGP. Each interface can be associated to a different
VPRN instance.
The 9471 MME selects the SGW from a pool of SGWs based upon the Tracking Area Identifiers (TAI)
served by the SGW pool or based upon Domain Name System (DNS) query procedures. Service
providers may configure whether SGW discovery and selection use provisioned data or use the
Straightforward-Name Authority Pointer (S-NAPTR) procedure. The maximum number of S-GW Pools
is 16. The maximum number of S-GW elements per S-GW Pool is 16 (using manual provisioning) or 80
(using DNS).
S-GW pooling is recommended as Flex interfaces improve network reliability and simplify site
disaster recovery. Moreover, it enables network-level dimensioning with more efficient use of
node capacity and S-GW load sharing. It also facilitates S-GW maintenance with graceful S-GW
shutdown.
The supported interfaces on S-GW and related protocols are listed hereafter:
[Table: supported S-GW interfaces with peer nodes and the protocol(s) to be supported on each,
including S5 and S8 towards the P-GW.]
S-GW capability in handling user plane traffic and associated capacity figures were summarized in
sections 3.6.1.2 & 3.6.1.3.
GTP Traffic arriving at the S-GW (from eNB over S1-U or PGW/GGSN over S5/S8) may have been
fragmented (by the backhaul or core network) and therefore require reassembly at GTP endpoint
before the encapsulated packet can be forwarded (through another GTP tunnel).
Restriction:
IP reassembly will cause a performance impact on the basis of functions such as packet copying,
fragment validation, and fragment reordering. This performance impact will vary depending on the
number of concurrent IP datagrams being reassembled.
The performance benchmark considers two IP fragments for a single IP packet of 2229 bytes of
IP payload. The packet is fragmented due to the addition of the GTP + UDP + IP headers (48 bytes),
which results in two IP fragments on the outgoing link, whose MTU is set to 2229 bytes. With the
above configuration, the performance obtained through a single MS-ISA card is the following:
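The fragmentation arithmetic of the benchmark above can be checked with a short sketch. The helper is a simplification: real IPv4 fragmentation replicates the IP header on each fragment and aligns offsets to 8-byte units:

```python
import math

# Simplified fragment count after GTP encapsulation: a 2229-byte IP payload
# gains 48 bytes of GTP + UDP + IP headers, exceeds the 2229-byte MTU of the
# outgoing link, and is split into two fragments.
def gtp_fragment_count(ip_payload_bytes: int, encap_bytes: int, mtu_bytes: int) -> int:
    return math.ceil((ip_payload_bytes + encap_bytes) / mtu_bytes)

assert gtp_fragment_count(2229, 48, 2229) == 2  # the benchmark configuration
```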
This allows the incoming S1-u traffic to be spread across multiple MS-ISAs as different eNodeBs
would have different source IP addresses. Similarly, incoming S5-u traffic is spread across the MS-
ISAs on a per PGW basis (or more granular if the PGW is configured to use multiple source ports).
It is possible to assign multiple MS-ISAs to the logical group that is bound to the logical
interface for which IP reassembly is enabled (e.g., S1-U, S5, and S8).
If multiple MS-ISAs need to be used as part of a logical group, the following rules apply:
The MG-ISMs seated in the S-GW system must all have a second MS-ISA installed.
The MS-ISAs of the MG-ISM(s) must be configured to be part of the logical group before any other
MS-ISA installed on an external IOM3 of the 7750 S-GW can be used. For instance, if there are 2
MG-ISMs installed in the S-GW, it is only possible to create a logical group of 2 MS-ISAs using the
MS-ISAs of the two MG-ISMs. Only if a group of 3 needs to be created may a third MS-ISA installed
on an external IOM3 be used.
When the S-GW is configured with redundant MG-ISMs operating in 1:1 hot stand-by mode, it is
possible to use the MS-ISA installed in the stand-by MG-ISM for reassembly purposes.
Engineering Recommendation:
Network design should favor Layer 3 hashing mechanisms (instead of Layer 4 hashing mechanisms
for ECMP or LAG) in all the network elements, to avoid fragments of the same packet taking
different IP paths to reach the GTP endpoint.
The MS-ISA supports IP reassembly for at least 3 VRFs (S1-U, S5-U, S8) in an S-GW instance, i.e.,
IP reassembly for a given MS-ISA or group of MS-ISAs can be configured under 3 distinct
interfaces, each bound to a VRF. The IP reassembly context is managed per VRF.
Nonetheless, the S-GW can support IP reassembly for 4k eNB GTP endpoints in a single VRF for the
S1-U interface.
The S-GW supports a maximum of 64k concurrent IPv4 reassembly contexts from a given GTP
endpoint.
3.6.2.4.2 Handovers
Maximum handover event rates per MG-ISM are captured in the following table:
3.6.2.4.3 Paging
The S-GW will initiate Paging when downlink packets are received for a UE that is in idle mode
(i.e. no DL-TEID or eNB address for the bearer).
When DL traffic is received for an idle UE, the ingress forwarding complex of the MG-ISM on S-GW
has a null FIB entry. The DL traffic is then handled by the ISA-MG of the MG-ISM while a notification
is sent to the MME, and the following events take place:
This section addresses the specific engineering capacity limits and guidelines for the 7750 Mobile
gateway when configured as a P-GW/GGSN.
The 7750 P-GW supports 1:1 MG-ISM stateful redundancy. With MG-ISM cards deployed in such
protection mode, the 7750 SR Mobile Gateway provides full internal application redundancy. Each
pair supports an active and a standby MG-ISM that synchronize all subscriber states between them.
A switchover from active to standby MG-ISM is invisible to the subscriber and therefore will not have
any impact on surrounding EPC nodes/systems: in other words it will not generate any additional
load to other network nodes and systems. As such the switchover from standby to active does not
require external protocol support.
The 7750 P-GW supports up to 8 x MG-ISMs (up to 4 pairs deployed in 1:1 hot stand-by mode).
When multiple active MG-ISMs are provisioned in the P-GW, an internal load-balancing algorithm
that involves the Control Processor Module (CPM) allows the P-GW to spread the creation of
subscriber/bearer contexts across the active MG-ISMs.
Subsequently the IOMs direct all the traffic to the appropriate MG-ISM based on the individual
subscriber context. Therefore, no specific dimensioning rule needs to be applied to take advantage
of the full capacity of the system in the presence of multiple MG-ISMs per P-GW.
3.6.3.2 P-GW INTERFACES AND SUPPORTED PROTOCOLS
The supported interfaces on P-GW and related protocols are listed hereafter:
[Table: supported P-GW interfaces with peer nodes and the protocol(s) to be supported on each,
including S5 and S8 towards the S-GW, SGi towards the PDN, Gx towards the PCRF, and Ga towards
the OFCS.]
P-GW capability in handling user plane traffic and associated capacity figures were summarized in
sections 3.6.1.2 & 3.6.1.3.
GTP Traffic arriving at the P-GW/GGSN (from S-GW or SGSN over S5/S8 or Gn, or from the PDN
network over SGi/Gi) may have been fragmented (by the core or PDN network) and therefore
require reassembly before the encapsulated packet can be forwarded (as plain packet or through a
GTP tunnel).
IP reassembly is supported on P-GW/GGSN through the second MS-ISA of the MG-ISM.
Restriction:
IP reassembly will cause a performance impact on the basis of functions such as packet copying,
fragment validation, and fragment reordering. This performance impact will vary depending on the
number of concurrent IP datagrams being reassembled.
The performance benchmark considers two IP fragments for a single IP packet of 1500 bytes of
IP payload. The packet is fragmented due to the addition of the GTP + UDP + IP headers (48 bytes),
which results in two IP fragments on the outgoing link, whose MTU is set to 1500 bytes. With the
above configuration, the performance obtained through a single MS-ISA card is the following:
If multiple MS-ISAs need to be used as part of a logical group, the following rules apply:
The MG-ISMs seated in the P-GW/GGSN system must all have a second MS-ISA installed.
The MS-ISAs of the MG-ISM(s) must be configured to be part of the logical group before any other
MS-ISA installed on an external IOM3 of the 7750 P-GW can be used. For instance, if there are 2
MG-ISMs installed in the P-GW, it is only possible to create a logical group of 2 MS-ISAs using the
MS-ISAs of the two MG-ISMs. Only if a group of 3 needs to be created may a third MS-ISA installed
on an external IOM3 be used.
The MG-ISM for which an MS-ISA is used for IP reassembly must be active. In other words, it is
not possible to use the 2 MS-ISAs of 2 MG-ISMs operating in 1:1 hot stand-by mode.
Engineering Recommendation:
Network design should favor Layer 3 hashing mechanisms (instead of Layer 4 hashing mechanisms
for ECMP or LAG) in all the network elements, to avoid fragments of the same packet taking
different IP paths to reach the GTP endpoint.
The MS-ISA supports IP reassembly for at least 3 VRFs (SGi/Gi, S5, and Gn) in a P-GW/GGSN
instance, i.e., IP reassembly for a given MS-ISA or group of MS-ISAs can be configured under
3 distinct interfaces, each bound to a VRF. The IP reassembly context is managed per VRF.
The P-GW/GGSN supports a maximum of 64k concurrent IPv4 reassembly contexts from a given GTP
endpoint.
3.6.3.3.2 IP Addressing
The P-GW IP address allocation scheme provides simultaneous support for IPv4 and IPv6, including
multiple contexts for a single subscriber. The following IP allocation methods are supported:
Local IPv4 address and IPv6 prefix pools – directly assigned via APN
HSS-static addresses – provided via GTP create
IP Address Assignment via RADIUS
The Named Local IP pool supports the inclusion/exclusion of address ranges, as well as a
configurable hold timer (after the closing of a session, the hold timer needs to expire before an
address can be returned to the free address pool).
When local IP pools are configured (IPv4 or IPv6), UE IP addresses are allocated upon PDN
connection activation from the IP pool of prefixes associated with the corresponding APN for that
connection. The number of UEs that can receive an IP address from the pool depends on the subnet
size derived from the subnet mask length. For instance, up to 254 IP addresses can be allocated
from a prefix having a /24 subnet mask.
Although it is possible to configure subnet masks longer or shorter than /24 for IP pool prefixes,
special caution should be taken when multiple MG-ISMs are used in active mode in the same P-
GW/system. Such a configuration inherently enables load-sharing and therefore allows multiple
bearers or PDP context for a given APN to be spread across multiple MG-ISMs.
Address blocks from any given IP pool prefix are allocated to each MG-ISM on a /24 basis, i.e.
per block of 254 addresses. If the subnet mask length of the IP pool prefix is /24 or longer, all
addresses will reside on a single MG-ISM only. Furthermore, as the internal load balancing
algorithm at CPM level for incoming PDN connections tries to maintain an equal load across the
2 MG-ISMs (based on a per-PDN-connection round-robin), a second PDN connection may never be
established. By creating a /23 prefix for the IP local pool, a /24 block will be allocated to each
MG-ISM, allowing an equal spread of PDN connections across the two MG-ISMs for a given APN.
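The pool-sizing rules above can be sketched as follows, computing the usable addresses per pool prefix and the number of /24 allocation blocks (254 addresses each) the prefix yields for distribution across active MG-ISMs:

```python
# Sketch of the IP-pool sizing rules described above.
def usable_addresses(prefix_len: int) -> int:
    """Usable IPv4 addresses in a pool prefix (network and broadcast excluded)."""
    return 2 ** (32 - prefix_len) - 2

def slash24_blocks(prefix_len: int) -> int:
    """Number of /24 allocation blocks contained in a pool prefix."""
    return 2 ** (24 - prefix_len) if prefix_len <= 24 else 0

assert usable_addresses(24) == 254   # the /24 example from the text
assert slash24_blocks(24) == 1       # a /24 pool resides on a single MG-ISM
assert slash24_blocks(23) == 2       # a /23 gives each of two active MG-ISMs a /24
```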
Engineering Recommendation:
It is recommended whenever possible to use subnet lengths of /23 or shorter for IP pool prefixes.
It is a mandatory requirement when multiple MG-ISMs are active in the P-GW/GGSN to ensure
proper operation and load-sharing.
The number of dynamic PCC rules includes the rules that are statically defined on the P-GW and
activated by the PCRF through the Gx interface.
Maximum Charging Data Records (CDRs) rate: 2000/s at minimum, per MG-ISM.
3.7.1 GN INTERFACE
Gn interface is the reference point between MME and pre-release 8 SGSN. Gn enables interworking
between the EPS and 3GPP 2G and/or 3G SGSNs, which provide only Gn and Gp interfaces but no S3,
S4 or S5/S8 interfaces. The reference architecture for this interworking capability is described in
the figure below.
[Figure: interworking reference architecture. The Gn/Gp SGSN serves GERAN and UTRAN and connects
to the HSS (Gr), to the MME (Gn) and to the P-GW (Gn). Within the EPC, the eNodeB (e-UTRAN)
connects to the MME over S1-MME, the MME to the HSS over S6a and to the S-GW over S11, and the
P-GW to the PCRF over Gx, with Rx towards the PCRF.]
Only Gn Interface sitting between the MME and the pre-release 8 SGSN is described in this
document. Gn interface sitting between the P-GW and the pre-release 8 SGSN will be described in a
further release of the document.
Gn relies upon GTP tunnels (GTP-C for control plane). GTP tunnels are used between two nodes
communicating on a GTP-based interface to separate traffic into different communication flows.
General Packet Radio System (GPRS) Tunneling Protocol Control Plane (GTPv1-C) is defined in 3GPP
TS 29.060.
The Gn interface enables handling of mobility and inter Radio Access Technology (I-RAT) handover
from the LTE (4G) access network to a WCDMA (3G) or GSM (2G) access network. Unlike the SGSN,
which handles both user and control planes for the Gn interface, the MME only handles the control
plane part of the Gn interface (the MME does not support a user plane interface to the P-GW; the
SGSN directs the user plane traffic to the P-GW through the Gn user plane).
Through the Gn interface, the MME enables handover from LTE (4G) to WCDMA (3G) or GSM (2G) radio
access technologies; this includes the I-RAT mobility modes Reselection, Redirection and PS
Handover.
The figure below depicts Gn interface protocol stack for Gn interface sitting between the MME and
the Gn/Gp SGSN.
[Figure: Gn control plane protocol stack. GTPv1-C over UDP over IP on both the MME and SGSN
sides, optionally carried over IPsec/UDP/IP across the Gn interface.]
Only the configuration where the UE moves out of LTE coverage towards a 2G or 3G network is
considered. The maximum Gn interface throughput is computed assuming the PS handover mode, which
is the most bandwidth-consuming mode.
Similarly to the S1-MME interface maximum throughput computation, the number and size of
signaling messages per busy hour exchanged through the Gn interface must be taken into
consideration. To keep the Gn interface bandwidth computation simple, only the main procedures
exchanged over the interface, i.e. the most bandwidth-consuming procedures, will be considered
further in the computation exercise. The main procedures to consider are listed below (details
are given in 3GPP TS 23.401).
The Routing Area Update procedure takes place when a UE that is registered with an MME selects a
UTRAN or GERAN cell served by a Gn/Gp SGSN. In this case, the UE changes to a Routing Area that
it has not yet registered with the network. The Routing Area Update procedure consists in the
following message exchanges over the Gn interface: SGSN Context Request, SGSN Context Response,
SGSN Context Acknowledge.
The Tracking Area Update procedure takes place when there is a Gn/Gp SGSN relocation from 3G/2G
to 4G, which triggers an SGSN Context Request from the new MME to the old SGSN. The Tracking Area
Update procedure consists in the following message exchanges over the Gn interface: SGSN Context
Request, SGSN Context Response and SGSN Context Acknowledge.
Inter-RAT PS handover consists in the following messages exchanges over Gn interface during I-
RAT: Forward Relocation Request, Forward Relocation Response, Forward Relocation Complete
and Forward Relocation Complete Acknowledgement.
The characteristics of these procedures are described in the following table.
The required Gn interface bandwidth is computed according to the following formula (separate
downlink and uplink computation is made as the number and size of message differs from downlink
to uplink):
eNB_Gn_Throughput_Phy_DL/UL =
PtA_B x N_Subs x Avg_Gn_Throughput_Phy_DL/UL_Subs
PtA_B is the peak to average ratio to apply at the backhaul. PtA is described in section 3.5
above.
N_Subs is the number of active subscribers per eNB at the busy hour (refer to Table 2-5 of
section 2.2.1.1).
Avg_Gn_Throughput_Phy_DL/UL_Subs =
∑i Avg_Gn_Proc(i)_Throughput_Phy_DL/UL
Avg_Gn_Proc(i)_Throughput_Phy_DL/UL =
Nb_Gn_Proc(i) x [Volume_Size_Proc(i)_DL/UL +
Nb_msg_Proc(i)_DL/UL x Transport_Header] x (8 / 3600)
Nb_Gn_Proc(i) is the number of procedure_#i at busy hour per subscriber (i.e. the BHCA figure for
attach, TAU, RAU, etc.). For each procedure the figure is retrieved from Table 2-3: Call Model
Parameters – Aggregate Values.
Nb_msg_Proc(i)_DL/UL is the number of messages exchanged between the SGSN and the MME for
procedure_#i. For each procedure, the figure is retrieved from the above table.
Transport_Header is the correction factor to apply to the computed throughput to account for
transport overhead (GTP-C/UDP/IP/Ethernet plus optional IPsec header); its value depends on the
transport options in use. In the formula, the factor 8 normalizes bytes to bits and the factor
3600 converts per-busy-hour counts to per-second throughput.
At the MME side, the above computed eNB_Gn throughput has to be multiplied by the number of eNBs
served by this MME to assess the Gn interface throughput at MME level.
MME_Gn_Total_Throughput_Phy(DL/UL) =
N_eNBs x eNB_Gn_Throughput_Phy(DL/UL)
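The chain of formulas above can be sketched as follows. All the numeric figures (BHCA count, message volume and count, transport overhead, PtA_B, subscriber and eNB counts) are illustrative placeholders, not Table 2-3 traffic-model values:

```python
# Hedged sketch of the Gn bandwidth formulas; all figures are placeholders.

def avg_gn_proc_throughput_bps(nb_proc_bh: float, volume_bytes: float,
                               nb_msgs: int, transport_header_bytes: int) -> float:
    """Avg_Gn_Proc(i) = Nb_Gn_Proc(i) x [Volume + Nb_msg x Transport_Header] x 8/3600."""
    return nb_proc_bh * (volume_bytes + nb_msgs * transport_header_bytes) * 8 / 3600

def enb_gn_throughput_bps(pta_b: float, n_subs: int, per_proc_bps: list) -> float:
    """eNB_Gn_Throughput = PtA_B x N_Subs x sum of per-procedure averages."""
    return pta_b * n_subs * sum(per_proc_bps)

# One procedure (e.g. RAU): 0.1 occurrences/subscriber at busy hour, 300 bytes of
# message payload over 3 messages, 78 bytes of transport overhead per message.
rau = avg_gn_proc_throughput_bps(0.1, 300, 3, 78)
enb_gn = enb_gn_throughput_bps(pta_b=2.0, n_subs=1439, per_proc_bps=[rau])
mme_gn = 100 * enb_gn   # MME-level aggregate for 100 eNBs served by this MME
```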
Referring to the call model parameters, as they are given in Table 2-3, Table 2-4 and Table 2-5, the
following throughput is expected at Gn-C interface level, on a per eNB contribution basis, with the
following assumption:
IPv4 with VLAN tagging without IPsec is assumed, 20 MHz on air, 8634 attached subscribers per eNB
(1439 attached subscribers per cell, 6 cells per eNB).
Note that the Gn-C throughput equals “0” because the traffic model example of Table 2-3 and Table
2-4 shows:
TAU with SGSN relocation equal to “0”: no Gn/Gp SGSN relocation from 3G/2G to 4G (which would
trigger an SGSN Context Request from the new MME to the old SGSN).
RAU with MME relocation equal to “0”: no UE registered with an MME selects a UTRAN or GERAN cell
served by a Gn/Gp SGSN.
3.7.2 GX INTERFACE
Gx interface is the reference point between P-GW and PCRF. Gx provides transfer of (QoS) policy
and charging rules from PCRF to the Policy and Charging Enforcement entity which is embedded in
the P-GW (Policy control encompasses gating control, QoS control, QoS signalling, etc). Gx relies on
Diameter protocol. Please refer to section 2.1.4.6 for details about the PCRF supported features.
A Gx Diameter session is established for each IP Connectivity Access Network (IP-CAN) session.
For IP-CAN types that support multiple IP-CAN bearers (applicable to 4G/LTE), the Diameter
session is established when the first IP-CAN bearer, i.e. the default bearer for that IP-CAN
session, is set up.
[Figure: Gx protocol stack. Diameter over SCTP or TCP over IP on both the P-GW and PCRF sides,
optionally carried over IPsec/UDP/IP across the Gx interface.]
To support the Gx interface, Diameter can be transported either over TCP or over SCTP.
Alcatel-Lucent supports Diameter transport over both TCP and SCTP.
The Gx interface Diameter messages exchanged between the P-GW and the PCRF for P-GW or UE
initiated use cases are Credit Control Request (CCR) in uplink (P-GW to PCRF) and Credit Control
Answer (CCA) in downlink (PCRF to P-GW), for the following LTE procedures (details can be found
in 3GPP TS 23.203, TS 23.401 and TS 29.213):
IP-CAN session establishment (Attach).
IP-CAN Session modification with possible IP-CAN bearer establishment (PDN session
modification).
The Gx interface Diameter messages exchanged between the P-GW and the PCRF for PCRF or network
initiated use cases are Re-Auth-Request (RAR) in downlink (PCRF to P-GW) and Re-Auth-Answer (RAA)
in uplink (P-GW to PCRF), for the following LTE procedures:
IP-CAN session termination (initiated by the network).
These procedures are triggered by the Mobile Terminated Call event in the traffic model. Network
initiated procedures are not taken into consideration in the present release of the document.
Note: an IP-CAN session is the association between a UE and an IP network. The association is
identified by one IPv4 address and/or an IPv6 prefix, together with UE identity information, if
available, and a PDN represented by a PDN ID (e.g. an APN). An IP-CAN session incorporates one or
more IP-CAN bearers. Support for multiple IP-CAN bearers per IP-CAN session is IP-CAN specific. An
IP-CAN session exists as long as UE IP addresses/prefixes are established and announced to the IP
network.
Similarly to the S1-MME interface maximum throughput computation, the number and size of
signaling messages per busy hour exchanged at the Gx interface must be taken into consideration. To
keep the Gx interface bandwidth computation simple, only the main procedures exchanged over the
interface, i.e. the most bandwidth-consuming ones, will be considered further in the computation
exercise.
The main procedures which are taken into consideration for Gx maximum throughput computation
are:
IP-CAN session establishment (Attach with GUTI and authentication).
IP-CAN Session modification with possible IP-CAN bearer establishment (PDN session
modification).
Note that dedicated bearer activation procedure contribution will be provided in a further release
of the document (Network initiated procedures are not taken into consideration in the present
release of the document).
Gx interface bandwidth computation follows the same rules as those described for the Gn interface
above (please refer to section 3.7.1). The Gx interface required bandwidth computation formula is
thus similar, with the following specificities:
Transport_Header is the correction factor to apply to the computed throughput to account for
transport overhead (TCP/IP/Ethernet plus optional IPsec header). Transport_Header ranges from
(the TCP header accounts for 32 bytes):
At P-GW side, computed eNB contribution to Gx throughput (eNB_Gx), has to be multiplied by the
number of eNBs which are served by this P-GW to assess Gx interface throughput at P-GW level.
P-GW_Gx_Total_Throughput_Phy(DL/UL) =
N_eNBs_P-GW x eNB_Gx_Throughput_Phy(DL/UL)
At PCRF side, the computed eNB contribution to Gx throughput (eNB_Gx), has to be multiplied by
the number of eNBs which are served by this PCRF to assess Gx interface throughput at PCRF level.
PCRF_Gx_Total_Throughput_Phy(DL/UL) =
N_eNBs_PCRF x eNB_Gx_Throughput_Phy(DL/UL)
Referring to the call model parameters, as they are given in Table 2-3, Table 2-4 and Table 2-5, the
following throughput is expected at Gx interface level, on a per-eNB contribution basis, with the
following assumption:
IPv4 with VLAN tagging without IPsec is assumed, 20 MHz on air, 8634 attached subscribers per eNB
(1439 attached subscribers per cell, 6 cells per eNB).
Gx throughput figures: Downlink: 44 Kbit/s, Uplink: 43 Kbit/s
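The per-node aggregation rule above can be sketched in a few lines of Python. This is a hedged illustration only: the function and variable names are made up for the sketch, and the eNB count in the example is an arbitrary assumption, not a call-model value.

```python
# Sketch of the document's aggregation rule:
# Node_Total_Throughput_Phy(DL/UL) = N_eNBs x eNB_Throughput_Phy(DL/UL).
# Names are illustrative, not from any product API.

def node_throughput_kbps(n_enbs: int, enb_contribution_kbps: float) -> float:
    """Total interface throughput at a node serving n_enbs eNBs."""
    return n_enbs * enb_contribution_kbps

# Call-model figures from the text: per-eNB Gx contribution in Kbit/s,
# and 1439 attached subscribers per cell x 6 cells = 8634 per eNB.
ENB_GX_DL_KBPS, ENB_GX_UL_KBPS = 44, 43
SUBS_PER_ENB = 1439 * 6  # 8634

# Illustrative P-GW serving 500 eNBs (assumed count).
pgw_gx_dl = node_throughput_kbps(500, ENB_GX_DL_KBPS)  # 22000 Kbit/s
pgw_gx_ul = node_throughput_kbps(500, ENB_GX_UL_KBPS)  # 21500 Kbit/s
```

The same one-line multiplication applies at the PCRF side with N_eNBs_PCRF.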
3.7.3 S3 INTERFACE
The S3 interface is the reference point between the MME and the 3GPP release 8 SGSN (also called
S4-capable SGSN). The reference architecture for this interworking capability is described in the
figure below.
Together, the S3/S4/S12 interfaces enable handling of mobility and inter Radio Access Technology
(I-RAT) handover from the LTE (4G) access network to the WCDMA (3G) or GSM (2G) access network.
Unlike the 3GPP release 8 SGSN, which handles both user and control planes (S3 and S4), the MME
handles only the control plane interface S3 (the MME does not support a user plane interface to the
SGSN or P-GW); the user plane traffic is directed either from the 3GPP release 8 SGSN to the S-GW
through the S4 interface, or from the 3G UTRAN/RNC to the S-GW through the S12 interface (direct
tunnel, an optional feature that allows the SGSN to establish a direct user plane tunnel between the
3G UTRAN/RNC and the S-GW).
Through the S3 interface, the MME enables handover from LTE (4G) to WCDMA (3G) or GSM (2G)
radio access technologies, including the I-RAT mobility features Reselection, Redirection and PS
handover.
[Figure: EPC interworking architecture with a release 8 SGSN: GERAN (Gb) and UTRAN (IuPS) connect to the SGSN, which links to the MME over S3, to the S-GW over S4 and to the HSS over S6d/Gr; UTRAN also connects directly to the S-GW over S12; the eNodeB connects to the MME over S1-MME and to the S-GW over S1-U; the MME uses S6a to the HSS and S11 to the S-GW; S-GW to P-GW over S5/S8; PCRF attached over Gx/Rx; P-GW to Operator IP Services over SGi]
Figure 3.7-4: Architecture for ePC interoperation with 3GPP release 8 SGSNs
The S3 interface:
Handles EPS Bearer Contexts (to eliminate the need for the MME to perform the mapping between
the EPS Bearer Contexts and PDP Contexts).
Supports exchanges between the 3GPP release 8 SGSN and the MME for the following procedures:
Tracking area update procedure with or without S-GW change, when the UE moves from
UTRAN/GERAN to eUTRAN coverage.
Routing area update procedure with or without S-GW change, when the UE moves from eUTRAN
to UTRAN/GERAN coverage.
S3 interface relies on GTPv2-C as described in 3GPP 29.274. S3 interface protocol stack is depicted
in the figure below:
[Figure: S3 protocol stack: GTPv2-C over UDP over IP (optionally over IPsec/UDP/IP), between release 8 SGSN and MME]
Similarly to the S1-MME interface maximum throughput computation, the number and size of
signaling messages per busy hour exchanged at the S3 interface must be taken into consideration. To
keep the S3 interface bandwidth computation simple, only the main procedures exchanged over the
interface, i.e. the most bandwidth-consuming ones, will be considered further in the computation
exercise.
The main procedures to consider are listed below (details are given in 3GPP TS 23.401).
The Routing Area Update procedure takes place when a UE that is registered with an MME selects a
UTRAN or GERAN cell served by a 3GPP release 8 SGSN. In this case, the UE changes to a Routing
Area that it has not yet registered with the network. The Routing Area Update procedure consists
of the following message exchanges over the S3 interface: SGSN Context Request, SGSN Context
Response and SGSN Context Acknowledge.
The Tracking Area Update procedure takes place when there is a 3GPP release 8 SGSN relocation
from 3G/2G to 4G, which triggers an SGSN context request from the new MME to the old SGSN. The
Tracking Area Update procedure consists of the following message exchanges over the S3 interface:
SGSN Context Request, SGSN Context Response and SGSN Context Acknowledge.
Inter-RAT PS handover consists of the following message exchanges over the S3 interface during
I-RAT handover: Forward Relocation Request, Forward Relocation Response, Forward Relocation
Complete and Forward Relocation Complete Acknowledge.
The accurate characteristics of these procedures will be described in a further release of the
document. So far, the assumption is made that procedure characteristics at the S3 interface are
similar to those of the Gn interface, as described in section 3.7.1.2 above.
The required S3 interface bandwidth is computed according to the following formula (separate
downlink and uplink computations are made, as the number and size of messages differ from
downlink to uplink): the same bandwidth computation formulas as for the Gn interface are assumed;
please refer to section 3.7.1.2 above.
The accurate S3 interface throughput figures will be described in a further release of the document.
So far, the assumption is made that S3 throughput is the same as the Gn throughput computed in
section 3.7.1.3 above, when a 3GPP release 8 SGSN is in use instead of a 3GPP pre-release 8 SGSN
(Gn/Gp SGSN).
In other words:
For network configurations with a 3GPP pre-release 8 SGSN, the Gn throughput figures of section
3.7.1.3 apply, and S3 equals zero.
For network configurations with a 3GPP release 8 SGSN, the S3 throughput figures of section 3.7.1.3
apply, and Gn equals zero.
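The dimensioning rule above is a simple either/or split. A minimal sketch, assuming the Gn figure of section 3.7.1.3 is available as an input; the function name is illustrative:

```python
# Hedged sketch of the Gn/S3 split rule: with a pre-release 8 SGSN all
# inter-RAT signaling rides on Gn and S3 is zero; with a release 8 SGSN
# the same figure moves to S3 and Gn is zero.

def split_gn_s3(gn_figure_kbps: float, sgsn_is_release8: bool):
    """Return (gn_kbps, s3_kbps) for the given network configuration."""
    if sgsn_is_release8:
        return 0.0, gn_figure_kbps  # all inter-RAT signaling on S3
    return gn_figure_kbps, 0.0      # all inter-RAT signaling on Gn
```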
3.7.4 S4 INTERFACE
The S4 interface is the reference point between the S-GW and the 3GPP release 8 SGSN (also called
S4-capable SGSN). The reference architecture for this interworking capability is described in Figure
3.7-4 (Architecture for ePC interoperation with 3GPP release 8 SGSNs) in section 3.7.3.1 above.
Together, the S3/S4/S12 interfaces enable handling of mobility and inter Radio Access Technology
(I-RAT) handover from the LTE (4G) access network to the WCDMA (3G) or GSM (2G) access network.
Unlike the 3GPP release 8 SGSN, which handles both user and control planes (S3 and S4), the MME
handles only the control plane interface S3 (the MME does not support a user plane interface to the
SGSN or P-GW); the user plane traffic is directed either from the 3GPP release 8 SGSN to the S-GW
through the S4 interface, or from the 3G UTRAN/RNC to the S-GW through the S12 interface (direct
tunnel, an optional feature that allows the SGSN to establish a direct user plane tunnel between the
3G UTRAN/RNC and the S-GW).
The S4 interface provides related control and mobility support between the 3GPP release 8 SGSN
and the anchor function of the S-GW when handling mobility and inter Radio Access Technology
(I-RAT) handover from the LTE (4G) access network to the WCDMA (3G) or GSM (2G) access network.
In addition, if a direct tunnel over the S12 interface is not established, the S4 interface provides the
user plane tunneling capability as well. The S4 interface relies on GTP tunnels (GTP-U for the user
plane and GTPv2-C for the control plane). The S4 interface protocol stack is depicted in the figure
below:
[Figure: S4 protocol stack: GTP-U/UDP/IP (user plane) and GTPv2-C/UDP/IP (control plane), between release 8 SGSN and S-GW]
S4-U: Assumption is made that the throughput computation follows the same principle as described
in section 3.5 for S1-U.
S4-C: Similarly to the S1-MME interface maximum throughput computation, the number and size of
signaling messages per busy hour exchanged at the S4-C interface must be taken into consideration.
To keep the S4-C interface bandwidth computation simple, only the main procedures exchanged
over the interface, i.e. the most bandwidth-consuming ones, will be considered further in the
computation exercise.
The main procedures taken into consideration for the S4-C maximum throughput computation are
not yet characterized; this section will be completed in a further release of the document.
3.7.5 S5 INTERFACE
The S5 interface is the reference point between the S-GW and the P-GW within the same Public
Land Mobile Network (PLMN). S5 relies on GTP tunnels (GTP-U for the user plane and GTPv2-C for
the control plane). The S5-U interface is similar to the S1-U interface, assuming no packet
fragmentation applies at S5-U interface level. The S5-C interface relies upon GTPv2-C as described
in 3GPP 29.274. The S5 interface protocol stack is depicted in the figure below.
[Figure: S5 protocol stack: GTP-U/UDP/IP (user plane) and GTPv2-C/UDP/IP (control plane), between S-GW and P-GW]
Downlink Data Notification is used for the network-triggered service request scenario and is
associated with MME paging procedures.
S5-U: The assumption is made that the throughput computation follows the same principles as
those described in section 3.5 for S1-U, i.e. S5-U throughput equals S1-U throughput at S-GW level
(the resulting throughput figures are on a per-eNB contribution basis).
S5-C: Similarly to S1-MME interface maximum throughput computation, number and size of signaling
messages per busy hour, which are exchanged at S5-C interface, must be taken into consideration.
To keep S5-C interface bandwidth computation simple, only the main procedures which are
exchanged over the interface will be considered further in the computation exercise i.e the
procedures which are the most bandwidth consuming.
The main procedures which are taken into consideration for S5-C maximum throughput computation
are:
Attach procedure (Attach with GUTI and authentication).
Detach procedure.
Tracking area update (for all TAU variants, such as those with the location information to P-GW
option, as well as all voice CSFB use cases). Two different types of configuration must be taken
into account:
A Tracking Area update without MME/S-GW relocation leads to a Modify Bearer Request message
on S5-C.
A Tracking Area update with MME or S-GW relocation leads to a modify bearer message on S5
similar in size to an Inter-eNB X2 Handover.
Note that in case of voice CSFB service activation, while voice CSFB is in use, LTE resources are
suspended; switching back from 2G/3G to LTE generates a Tracking Area update.
S5-C interface bandwidth computation follows the same rules as those described for the Gn
interface above (refer to section 3.7.1). The S5-C interface required bandwidth computation
formula is thus similar, with the following specificities:
Nb_S5-C_Proc(i) equals Nb_S1-MME_Proc(i) except for service activation and service release, as S5
takes care only of non-CSFB services.
Transport_Header is the correction factor to apply to the computed throughput to account for
transport overhead (GTP-C/UDP/IP/Ethernet plus optional IPsec header). Transport_Header ranges
from (the GTPv2-C header accounts for 12 bytes):
At S-GW side, the computed eNB contribution to S5-C throughput (eNB_S5-C) has to be multiplied by
the number of eNBs which are served by this S-GW to assess S5-C
interface throughput at S-GW level.
S-GW_S5-C_Total_Throughput_Phy(DL/UL) =
N_eNBs_S-GW x eNB_S5-C_Throughput_Phy(DL/UL)
At P-GW side, the computed eNB contribution to S5-C throughput (eNB_S5-C) has to be multiplied by
the number of eNBs which are served by this P-GW to assess S5-C
interface throughput at P-GW level.
P-GW_S5-C_Total_Throughput_Phy(DL/UL) =
N_eNBs_P-GW x eNB_S5-C_Throughput_Phy(DL/UL)
Referring to the call model parameters, as they are given in Table 2-3, Table 2-4 and Table 2-5, the
following throughput is expected at S5-C interface level, on a per eNB contribution basis, with the
following assumptions:
IPv4 with VLAN tagging without IPsec is assumed, 20 MHz on air, 8634 attached subscribers per eNB
(1439 attached subscribers per cell, 6 cells per eNB).
S5-C throughput figures: Downlink: 57 Kbit/s, Uplink: 70 Kbit/s
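The generic per-interface formula that the document reuses for Gx, S5-C, S6a, S11 and S13 (busy-hour message volume times message size plus transport overhead, converted to Kbit/s) can be sketched as follows. The procedure rates and message sizes in the example are placeholders, not call-model values, and the 78-byte overhead is an assumed illustration:

```python
# Hedged sketch of the section 3.7.1 style bandwidth formula:
# throughput = sum over procedures of
#   Nb_Proc(i) x (Msg_Size(i) + Transport_Header) x 8 bits, per busy hour.

def interface_throughput_kbps(procedures, transport_header_bytes: int) -> float:
    """procedures: iterable of (messages_per_busy_hour, app_msg_size_bytes)."""
    bits_per_busy_hour = sum(
        n_msgs * (size + transport_header_bytes) * 8
        for n_msgs, size in procedures
    )
    return bits_per_busy_hour / 3600 / 1000  # busy hour -> seconds, bit -> Kbit

# Illustrative only: 100000 messages per busy hour of 200 bytes each, with an
# assumed 78-byte overhead (GTPv2-C 12 + UDP 8 + IPv4 20 + Ethernet 38, no VLAN).
example_kbps = interface_throughput_kbps([(100000, 200)], 78)
```

Downlink and uplink are computed separately with their own message counts and sizes, as the text notes.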
3.7.6 S6A INTERFACE
The S6a interface is the reference point between the MME and the Home Subscriber Server (HSS).
S6a allows the transfer of subscription and authentication data in order to authorize user access to
the evolved packet system network. S6a relies on the Diameter protocol. The S6a interface protocol
stack is described in the figure below:
[Figure: S6a protocol stack: Diameter over SCTP over IP, between MME and HSS]
Note that in a previous release, S6a made use of Diameter transport over TCP. This is no longer the
case: SCTP is now used (at least from end-to-end solution release LE5).
Similarly to S1-MME interface maximum throughput computation, number and size of signaling
messages per busy hour, which are exchanged at S6a interface must be taken into consideration. To
keep S6a interface bandwidth computation simple, only the main procedures which are exchanged
over the interface will be considered further in the computation exercise i.e the procedures which
are the most bandwidth consuming.
The main procedures which are taken into consideration for S6a maximum throughput computation
are:
Attach procedure
S6a interface bandwidth computation follows the same rules as those described for Gn interface
above, (refer to section 3.7.1). S6a interface required bandwidth computation formula is thus
similar with the following specificities:
Transport_Header is the correction factor to apply to the computed throughput to account for
transport overhead (SCTP/IP/Ethernet plus optional IPsec header). Transport_Header ranges from
(the SCTP header accounts for 28 bytes):
At MME side, the computed eNB contribution to S6a throughput (eNB_S6a) has to be multiplied by
the number of eNBs which are served by this MME to assess S6a interface throughput at MME level.
MME_S6a_Total_Throughput_Phy(DL/UL) =
N_eNBs_MME x eNB_S6a_Throughput_Phy(DL/UL)
At HSS side, the computed eNB contribution to S6a throughput (eNB_S6a) has to be multiplied by
the number of eNBs which are served by this HSS to assess S6a interface throughput at HSS level.
HSS_S6a_Total_Throughput_Phy(DL/UL) =
N_eNBs_HSS x eNB_S6a_Throughput_Phy(DL/UL)
Referring to the call model parameters, as they are given in Table 2-3, Table 2-4 and Table 2-5, the
following throughput is expected at S6a interface level, on a per eNB contribution basis, with the
following assumptions:
IPv4 with VLAN tagging without IPsec is assumed, 20 MHz on air, 8634 attached subscribers per eNB
(1439 attached subscribers per cell, 6 cells per eNB).
S6a throughput figures: Downlink: 50 Kbit/s, Uplink: 39 Kbit/s
3.7.7 S8 INTERFACE
The S8 interface is the reference point between an S-GW and a P-GW located in different PLMNs:
an inter-PLMN reference point providing user and control planes between the S-GW in the VPLMN
and the P-GW in the HPLMN. S8 is thus a variant of S5.
S8 interface protocol stack is the same as S5 interface which is depicted in Figure 3.7-7.
S8-U: The assumption is made that the throughput computation follows the same principles as
those described for the S5-U interface in section 3.7.5.
S8-C: The assumption is made that the throughput computation follows the same principles as
those described for the S5-C interface in section 3.7.5.
The resulting throughput figures, on a per eNB contribution basis, are given in section 3.7.5.
S8-U: Assumption is made that the throughput figure is similar to S5-U interface (section 3.7.5), but
corrected by a “roaming factor”.
S8-C: Assumption is made that the throughput figure is similar to S5-C interface (section 3.7.5), but
corrected by a “roaming factor”.
The resulting throughput figures, on a per eNB contribution basis, are given in section 3.7.5.
3.7.8 S9 INTERFACE
The S9 interface is the reference point between PCRFs located in different PLMNs: an inter-PLMN
reference point providing transfer of (QoS) policy and charging control information between the
Home PCRF and the Visited PCRF, in order to support the local breakout function in roaming
scenarios. The S9 interface is described in 3GPP TS 29.215.
S9 interface protocol stack is depicted in the figure below (it is similar to Gx interface protocol
stack).
[Figure: S9 protocol stack: Diameter over SCTP/TCP over IP (optionally over IPsec/UDP/IP), between HPCRF and VPCRF]
Similarly to the S1-MME interface maximum throughput computation, the number and size of
signaling messages per busy hour exchanged at the S9 interface must be taken into consideration.
To keep the S9 interface bandwidth computation simple, only the main procedures exchanged over
the interface, i.e. the most bandwidth-consuming ones, will be considered further in the
computation exercise. The main procedures to consider are listed below. S9 message exchanges
are described in 3GPP TS 29.215.
S10 interface is the reference point between MMEs in the same or different MME pools. S10 relies on
GTP tunnels (GTPv2-C). S10 enables MME relocation and MME to MME information transfer. S10
interface protocol stack is the same as S3 interface which is described in Figure 3.7-5.
Similarly to S1-MME interface maximum throughput computation, number and size of signaling
messages per busy hour, which are exchanged at S10 interface must be taken into consideration. To
keep S10 interface bandwidth computation simple, only the main procedures which are exchanged
over the interface will be considered further in the computation exercise i.e procedures which are
the most bandwidth consuming. The main procedures to consider are listed below.
S11 interface is the reference point between MME and S-GW. S11 interface relies on GTP tunnels
(GTPv2-C as described in 3GPP 29.274).
Indirect Data Forwarding Tunnel Creation and Deletion (a Create Indirect Data Forwarding Tunnel
Request message is sent on S11 if the S-GW selected by the MME for indirect data forwarding is
different from the S-GW used as anchor, in case of S1-based handover, I-RAT handover, etc.).
The S11 interface protocol stack is the same as the S3 interface protocol stack, which is described
in section 3.7.3.
Note that GTP Echo Request and GTP Echo Response messages, which are used to monitor the GTP
tunnels between MME and S-GW, are not taken into consideration for the S11 throughput
computation, since their contribution is assumed to be negligible.
Similarly to S1-MME interface maximum throughput computation, number and size of signaling
messages per busy hour, which are exchanged at S11 interface must be taken into consideration. To
keep S11 interface bandwidth computation simple, only the main procedures which are exchanged
over the interface will be considered further in the computation exercise i.e the procedures which
are the most bandwidth consuming.
The main procedures which are taken into consideration for S11 maximum throughput computation
are:
Attach procedure (attach with GUTI and authentication is assumed).
Detach procedure.
Paging procedure for the network-triggered service request (the service request procedure).
TAU with S-GW relocation and without MME relocation, as described in Table 2-3.
TAU without S-GW relocation and with MME relocation, as described in Table 2-3.
TAU with SGSN relocation is accounted for there as well. TAU with SGSN relocation is triggered
when the UE goes back from 3G/UTRAN to 4G/LTE coverage (likely in case of I-RAT mobility), as
described in Table 2-3.
Please note that when the traffic model includes Circuit Switched Fall Back (CSFB), there is an
additional TAU event to take into consideration, since a TAU request message is triggered when the
UE returns to eUTRAN (i.e. when the CS service is terminated, the UE moves back to eUTRAN, if the
EPS service was suspended during the CS service).
S11 interface bandwidth computation follows the same rules as those described for Gn interface
above, (refer to section 3.7.1). S11 interface required bandwidth computation formula is thus
similar with the following specificities:
Transport_Header is the correction factor to apply to the computed throughput to account for
transport overhead (GTP-C/UDP/IP/Ethernet plus optional IPsec header). Transport_Header ranges
from (the GTPv2-C header accounts for 12 bytes):
At MME side, the computed eNB contribution to S11 throughput (eNB_S11) has to be multiplied by
the number of eNBs which are served by this MME to assess S11
interface throughput at MME level.
MME_S11_Total_Throughput_Phy(DL/UL) =
N_eNBs_MME x eNB_S11_Throughput_Phy(DL/UL)
At S-GW side, the computed eNB contribution to S11 throughput (eNB_S11) has to be multiplied by
the number of eNBs which are served by this S-GW to assess S11
interface throughput at S-GW level.
S-GW_S11_Total_Throughput_Phy(DL/UL) =
N_eNBs_S-GW x eNB_S11_Throughput_Phy(DL/UL)
Referring to the call model parameters, as they are given in Table 2-3, Table 2-4 and Table 2-5, the
following throughput is expected at S11 interface level, on a per eNB contribution basis, with the
following assumptions:
IPv4 with VLAN tagging without IPsec is assumed, 20 MHz on air, 8634 attached subscribers per eNB
(1439 attached subscribers per cell, 6 cells per eNB).
S11 throughput figures: Downlink: 268 Kbit/s, Uplink: 268 Kbit/s.
The S12 interface is the reference point between the S-GW and the 3G UTRAN/RNC when a 3GPP
release 8 SGSN (also called S4-capable SGSN) is present between the WCDMA (3G) UTRAN and the
packet domain. The reference architecture for this interworking capability is described in Figure
3.7-4 of section 3.7.3.1 above.
Together, the S3/S4/S12 interfaces enable handling of mobility and inter Radio Access Technology
(I-RAT) handover from the LTE (4G) access network to the WCDMA (3G) or GSM (2G) access network.
Unlike the 3GPP release 8 SGSN, which handles both user and control planes (S3 and S4), the MME
handles only the control plane interface S3 (the MME does not support a user plane interface to the
SGSN or P-GW); the user plane traffic is directed either from the 3GPP release 8 SGSN to the S-GW
through the S4 interface, or from the 3G UTRAN/RNC to the S-GW through the S12 interface (direct
tunnel, an optional feature that allows the SGSN to establish a direct user plane tunnel between the
3G UTRAN/RNC and the S-GW).
The S12 interface provides direct user plane tunneling capability between the 3G UTRAN/RNC and
the anchor function of the S-GW when handling mobility and inter Radio Access Technology (I-RAT)
handover from the LTE (4G) access network to the WCDMA (3G) or GSM (2G) access network.
When a direct tunnel over the S12 interface is established, user plane traffic goes straight from the
3G UTRAN/RNC to the S-GW, instead of going from the 3G UTRAN/RNC to the 3GPP release 8 SGSN
over IuPS and being relayed towards the S-GW over the S4 interface. The S12 interface relies on
GTP-U tunnels. The 3GPP release 8 SGSN controls the user plane direct tunnel establishment. The
S12 interface protocol stack is depicted in the figure below:
[Figure: S12 user plane protocol stack: end-to-end application and IP path from the UE across Uu, Iu, S12, S5/S8 and SGi, with GTP-U/UDP/IP carrying the user plane between the 3G UTRAN/RNC and the S-GW over S12]
Assumption is made that the throughput computation follows the same principle as described in
section 3.5 for S1-U.
The S13 interface is the reference point between the MME and the Equipment Identity Register
(EIR). S13 enables the UE Identity Check Procedure between the MME and the EIR, to check that
the UE has not been stolen and is allowed to operate. S13 Mobile Equipment (ME) identity check
procedures are described in 3GPP TS 29.272. The S13 interface protocol stack is similar to the S6a
interface protocol stack, which is described in section 3.7.6.1. There are basically two messages
exchanged between MME and EIR over the S13 interface:
ME Identity Check Request
ME Identity Check Answer
If a combined EIR/HSS server is in use, the two above-mentioned messages are exchanged over the
S13 interface sitting between the MME and the combined EIR/HSS.
The main procedure to consider for S13 interface is the attach procedure (the corresponding call
flow is described in 3GPP TS 23.401). The characteristics of these procedures are described in the
table below.
S13 interface bandwidth computation follows the same rules as those described for Gn interface
above (refer to section 3.7.1). S13 interface required bandwidth computation formula is thus similar
with the following specificities:
Transport_Header is the correction factor to apply to the computed throughput to account for
transport overhead (SCTP/IP/Ethernet plus optional IPsec header). Transport_Header ranges from
(the SCTP header accounts for 28 bytes):
Transport_Header = 86 bytes (IPv4 w/o VLAN, w/o IPsec)
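The 86-byte value above is consistent with the stated 28-byte SCTP header plus a 20-byte IPv4 header and a 38-byte Ethernet layer. The Ethernet breakdown below (header, FCS, preamble, inter-frame gap) is an assumption that reproduces the document's total, not a figure stated in the text:

```python
# Sanity check of Transport_Header = 86 bytes (IPv4, no VLAN, no IPsec).
SCTP_HEADER = 28      # stated in the text for S6a/S13
IPV4_HEADER = 20      # standard IPv4 header without options
ETHERNET_LAYER = 38   # assumed: 14 header + 4 FCS + 8 preamble + 12 IFG

transport_header_bytes = SCTP_HEADER + IPV4_HEADER + ETHERNET_LAYER  # 86
```

With VLAN tagging, 4 more bytes would be added per frame.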
At MME side, the computed eNB contribution to S13 throughput (eNB_S13) has to be multiplied by
the number of eNBs which are served by this MME to assess S13 interface throughput at MME level.
MME_S13_Total_Throughput_Phy(DL/UL) =
N_eNBs_MME x eNB_S13_Throughput_Phy(DL/UL)
At EIR (or combo EIR/HSS) side, the computed eNB contribution to S13 throughput (eNB_S13) has to
be multiplied by the number of eNBs which are served by this EIR (or combo EIR/HSS) to assess S13
interface throughput at EIR (or combo EIR/HSS) level.
EIR_S13_Total_Throughput_Phy(DL/UL) =
N_eNBs_EIR x eNB_S13_Throughput_Phy(DL/UL)
N_eNBs_EIR is the number of eNBs served by the EIR (or combo EIR/HSS).
Referring to the call model parameters, as they are given in Table 2-3, Table 2-4 and Table 2-5, the
following throughput is expected at S13 interface level, on a per eNB contribution basis, with the
following assumptions:
IPv4 with VLAN tagging without IPsec is assumed, 20 MHz on air, 8634 attached subscribers per eNB
(1439 attached subscribers per cell, 6 cells per eNB).
S13 throughput figures: Downlink: 6 Kbit/s, Uplink: 8 Kbit/s.
The SBc interface is the reference point between the MME and the Cell Broadcast Center (CBC).
The SBc interface enables warning message delivery in the Commercial Mobile Alert System (CMAS)
context (CMAS is the main alert system that allows broadcasting short messages to UEs present in
specific cells of the network, for public warning service purposes).
SBc relies upon the SBc Application Protocol (SBc-AP), which is described in 3GPP 29.168. SBc-AP
supports the Warning Message Transmission function: this functionality provides the means to
start, overwrite and stop the broadcasting of warning messages in support of Public Warning
System (PWS) messages, which include Commercial Mobile Alert System (CMAS) and Earthquake
and Tsunami Warning System (ETWS) messages.
SBc-AP is carried over SCTP. SBc interface protocol stack is depicted in the figure below:
[Figure: SBc protocol stack: SBc-AP over SCTP over IP, between MME and CBC]
Note that in addition to SBc interface support, the MME is also required to support S1-AP
enhancements to enable CMAS warning notification transfer to the eNB. These enhancements are
out of the scope of this document.
Similarly to S1-MME interface maximum throughput computation, number and size of signaling
messages per busy hour, which are exchanged at SBc interface, must be taken into consideration. To
keep SBc interface bandwidth computation simple, only the main procedures which are exchanged
over the interface will be considered further in the computation exercise i.e the procedures which
are the most bandwidth consuming.
The main procedures which are taken into consideration for SBc maximum throughput computation
are:
The Write Replace Warning procedure. The purpose of this procedure is to start or overwrite the
broadcasting of a warning message. This procedure comprises the Write Replace Warning Request,
sent from the CBC to the MME, and the Write Replace Warning Response, sent from the MME to
the CBC.
The Stop Warning procedure. The purpose of this procedure is to stop the broadcasting of a
warning message. This procedure comprises the Stop Warning Request, sent from the CBC to the
MME, and the Stop Warning Response, sent from the MME to the CBC.
Error indication procedure is initiated by a node to report detected errors in one incoming
message, provided they cannot be reported by an appropriate failure message.
Write Replace Warning Indication procedure is optionally sent by the MME to report to the CBC
the Broadcast Scheduled Area List, containing the Broadcast Completed Area List the MME has
received from the eNB(s).
The SGi interface is the reference point between the P-GW and the packet data network. The
packet data network may be an operator-external public or private packet data network or an
intra-operator packet data network, e.g. for provision of IMS services. SGi relies on IP.
The SGi interface protocol stack is depicted in the figure below (the figure, extracted from 3GPP
TS 23.401, shows the UE to P-GW user plane with eUTRAN).
Note that SGi interface is also used in the scope of the eMBMS architecture, as depicted in section
2.1.4.7.
[Figure: UE to P-GW user plane with eUTRAN (from 3GPP TS 23.401): application and IP end-to-end, PDCP on the radio interface, GTP-U relayed across S1-U and S5/S8, IP exposed on SGi]
The MBMS-GW features the SGi-mb (user plane) reference point and the SGmb (control plane)
reference point for interworking with the BM-SC. The SGi-mb interface is the reference point
between the BM-SC and the MBMS-GW for MBMS data delivery. A BM-SC sends user plane data to
an MBMS-GW either by IP unicast (default) or by IP multicast (when multiple MBMS-GWs are
present in the network). Through the control plane communication interface (SGmb), the BM-SC
provides the MBMS-GW with the necessary data, so that the MBMS-GW can forward the user plane
data downlink either by multicast (default) or by unicast (one or more MBMS-GWs may be used in
a PLMN).
SGi-mb interface protocol stack is depicted in the figure below (MBMS SYNC protocol is described in
3GPP TS 25.446).
[Figure: SGi-mb protocol stack: MBMS application over SYNC over UDP over IP (path), between MBMS-GW and BM-SC]
SGmb relies upon Diameter, as described in 3GPP TS 29.061, where specific messages are defined
to support MBMS functionality. The SGmb interface enables the BM-SC and MBMS-GW to exchange
the following messages:
MBMS Session Start Request (please refer to 3GPP 23.246 for details regarding the MBMS session
start procedure).
MBMS Session Start Response (please refer to 3GPP 23.246 for details regarding the MBMS session
start procedure).
MBMS Session Update Request (please refer to 3GPP 23.246 for details regarding the MBMS
session update procedure).
MBMS Session Update Response (please refer to 3GPP 23.246 for details regarding the MBMS
session update procedure).
MBMS Session Stop Request (please refer to 3GPP 23.246 for details regarding the MBMS session
stop procedure).
MBMS Session Stop Response (please refer to 3GPP 23.246 for details regarding the MBMS session
stop procedure).
[Figure: SGmb protocol stack: Diameter over TCP/SCTP over IP, between MBMS-GW and BM-SC]
Similarly to the S1-MME interface maximum throughput computation, the number and size of
signaling messages per busy hour exchanged at the SGmb interface must be taken into
consideration. To keep the SGmb interface bandwidth computation simple, only the main
procedures exchanged over the interface, i.e. the most bandwidth-consuming ones, are
considered further in the computation exercise.
The main procedures which are taken into consideration for SGmb maximum throughput
computation are described in 3GPP TS 23.246:
MBMS Session Start Procedure
The BM-SC initiates the MBMS Session Start procedure when it is ready to send data. This is a
request to activate all necessary bearer resources in the network for the transfer of MBMS data
and to notify interested UEs of the imminent start of the transmission.
Note that the BM-SC's list of downstream nodes and the MBMS-GW's list of MBMS control plane
nodes (i.e. MMEs) are obtained as follows: the list of MBMS control plane nodes for the
MBMS-GW is sent from the BM-SC to the MBMS-GW in the Session Start Request.
MBMS Session Stop Procedure
The BM-SC Session and Transmission function initiates the MBMS Session Stop procedure when it
considers the MBMS session to be terminated. The session is typically terminated when there is
no more MBMS data expected to be transmitted for a sufficiently long period of time to justify a
release of bearer plane resources in the network.
MBMS Session Update Procedure
The MBMS Session Update procedure may be invoked by BM-SC. The BM-SC uses the procedure to
update the service area for an ongoing MBMS Broadcast service session (MBMS Service Area is the
area within which data of a specific MBMS session are sent). The BM-SC initiates the MBMS
Session Update procedure when the service attributes (e.g. Service Area) for an ongoing MBMS
Broadcast service session shall be modified, e.g. the Session Update procedure for EPS is
initiated by BM-SC to notify eNBs to join or leave the service area.
The characteristics of these procedures will be described in a further release of the document.
SGmb interface bandwidth computation follows the same rules as those described for the Gn
interface above (refer to section 3.7.1). The SGmb interface required bandwidth computation
formula is thus similar, with the following specificities:
The SGmb bandwidth figure is computed at BM-SC level (BM-SC_SGmb_Throughput_Phy(DL/UL)).
Transport_Header is the correction factor to apply to the computed throughput to account for
transport overhead (TCP/IP/Ethernet plus optional IPsec header). Transport_Header ranges
from (the TCP header accounts for 32 bytes):
SGs interface is the reference point between the MME and MSC/VLR to support UE location
coordination, enabling Circuit Switched Fallback (CSFB). The SGs interface relies on the SGs
Application Protocol (SGsAP), which is described in 3GPP 29.118.
SGs interface supports a set of procedures which enable MME and MSC/VLR to exchange information
in order to allow location management coordination and to relay certain messages related to circuit
switched services over the EPS system:
Coordinates location information of UEs that are IMSI attached to both EPS and non-EPS
services.
Relays certain messages related to circuit switched services over SGs to the EPS.
The SGs interface association is applicable to UEs whose CSFB capability is activated (for
either voice service or SMS service, i.e. SMS delivery via the circuit switched core network).
The SGs interface protocol stack is depicted in the figure below.
Figure: SGs protocol stack, showing SGsAP over SCTP over IP, between MME and VLR.
SCTP is used for the transport of SGsAP messages between MME and MSC/VLR, as stated in 3GPP
29.118.
Similarly to the S1-MME interface maximum throughput computation, the number and size of
signaling messages per busy hour exchanged at the SGs interface must be taken into
consideration. To keep the SGs interface bandwidth computation simple, only the main
procedures exchanged over the interface, i.e. the most bandwidth-consuming ones, are
considered further in the computation exercise. SGs interface throughput relies upon the
exchange of specific messages for particular CSFB use cases: Mobile Originating (MO) and
Mobile Terminating (MT) voice and SMS calls, and Inter-RAT events.
Mobile Originating CSFB voice and SMS.
For Mobile Originating voice and SMS use cases, the assumption is made that the UE is in idle
mode and camped on an eNB.
A voice call is initiated via a UE-triggered Service Request, namely an “Extended Service
Request with CS Fallback Indicator”. The Initial Context Setup Request from the MME to the eNB
has the “CS Fallback Indicator” set. The eNB then performs voice traffic redirection (to a 2G
or 3G radio access network), which results in immediate release of the S1 and RRC connections
while the voice call proceeds on the 2G/3G network. Upon completion of the voice call, the UE
initiates a Tracking Area Update (TAU) to return to LTE.
Mobile Terminating CSFB voice and SMS (network triggered service request)
For Mobile Terminating voice and SMS use cases, a CSFB voice or SMS call results in: Page, BHCA
(Service Request + S1 release) and TAU (for voice only).
Inter-RAT (Packet Service handover, MSC/VLR location update)
For Inter-RAT events upon implicit redirection, when the UE moves out of LTE coverage and has
registered in UTRAN/GERAN (RRC disconnection applies).
With the above mentioned assumptions, the following procedures (and message exchanges over
SGs) are the main procedures to take into consideration for SGs interface throughput
computation (SGs interface call flows are described in 3GPP TS 23.272):
Attach (through location update request and location update accept).
Service request (Extended service CSFB indicator through SGs service request).
CSFB SMS.
The characteristics of the main procedures are described in the tables below.
SGs interface bandwidth computation follows the same rules as those described for the Gn
interface above (refer to section 3.7.1). The SGs interface required bandwidth computation
formula is thus similar, with the following specificities:
Transport_Header is the correction factor to apply to the computed throughput to account for
transport overhead (SCTP/IP/Ethernet plus optional IPsec header). Transport_Header ranges
from (the SCTP header accounts for 28 bytes):
At MME side, the computed eNB contribution to SGs throughput (eNB_SGs) has to be multiplied by
the number of eNBs which are served by this MME to assess SGs interface throughput at MME level.
MME_SGs_Total_Throughput_Phy(DL/UL) =
N_eNBs x eNB_SGs_Throughput_Phy(DL/UL)
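As a sketch, the two-step computation (a per-eNB contribution derived Gn-style from busy-hour signaling volume as in section 3.7.1, then scaled to MME level) can be written as follows. All message counts, sizes and overhead factors below are illustrative placeholders, not values from this document.

```python
# Illustrative sketch of the SGs throughput computation: a per-eNB
# contribution is derived from busy-hour signaling volume, then scaled
# to MME level. All input values are hypothetical.

def enb_contribution_kbps(msgs_per_busy_hour: float, avg_msg_size_bytes: float,
                          transport_header: float) -> float:
    """Per-eNB throughput in kbit/s from busy-hour message volume.

    transport_header is the correction factor for SCTP/IP/Ethernet
    (plus optional IPsec) overhead, e.g. 1.3 for ~30% overhead.
    """
    bits_per_busy_hour = msgs_per_busy_hour * avg_msg_size_bytes * 8 * transport_header
    return bits_per_busy_hour / 3600 / 1000  # busy hour -> per second, bits -> kbit

def mme_sgs_total_kbps(n_enbs: int, enb_sgs_kbps: float) -> float:
    """MME_SGs_Total_Throughput_Phy = N_eNBs x eNB_SGs_Throughput_Phy."""
    return n_enbs * enb_sgs_kbps

# Hypothetical example: 1000 SGs messages of 100 bytes per busy hour per
# eNB with a 1.3 overhead factor, 500 eNBs served by the MME.
per_enb = enb_contribution_kbps(1000, 100, 1.3)
total = mme_sgs_total_kbps(500, per_enb)
```

The same scaling step applies unchanged to the Sm, Sv and S102 formulas later in this chapter, with the per-node contribution and the multiplying node count substituted accordingly.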
Referring to the call model parameters, as they are given in Table 2-3, Table 2-4 and Table
2-5, the following throughput is expected at SGs interface level, on a per eNB contribution
basis, with the following assumptions:
IPv4 with VLAN tagging without IPsec is assumed, 20 MHz on air, 8634 attached subscribers per
eNB (1439 attached subscribers per cell, 6 cells per eNB).
SGs throughput figures: Downlink: 0 Kbit/s, Uplink: 0 Kbit/s.
No CSFB user activity in the example traffic model shown in Table 2-3.
3.7.18 SM INTERFACE
Sm interface is the reference point between the MME and the Multimedia Broadcast/Multicast
Service Gateway (MBMS-GW). Further information, regarding eMBMS can be found in section 2.1.4.7.
Sm relies upon GTPv2-C as described in 3GPP TS 23.246 and TS 29.274, where specific messages
are defined to support MBMS functionality. The Sm interface enables the MBMS-GW and MME to
exchange the following messages:
MBMS Session Start Request (please refer to 3GPP 23.246 for details regarding the MBMS session
start procedure).
MBMS Session Start Response (please refer to 3GPP 23.246 for details regarding the MBMS session
start procedure).
MBMS Session Update Request (please refer to 3GPP 23.246 for details regarding the MBMS
session update procedure).
MBMS Session Update Response (please refer to 3GPP 23.246 for details regarding the MBMS
session update procedure).
MBMS Session Stop Request (please refer to 3GPP 23.246 for details regarding the MBMS session
stop procedure).
MBMS Session Stop Response (please refer to 3GPP 23.246 for details regarding the MBMS session
stop procedure).
Figure: Sm protocol stack, showing GTPv2-C over UDP over IP (optionally over IPsec/UDP/IP),
between MME and MBMS-GW.
Similarly to the S1-MME interface maximum throughput computation, the number and size of
signaling messages per busy hour exchanged at the Sm interface must be taken into
consideration. To keep the Sm interface bandwidth computation simple, only the main procedures
exchanged over the interface, i.e. the most bandwidth-consuming ones, are considered further
in the computation exercise.
The main procedures which are taken into consideration for Sm maximum throughput computation
are described in 3GPP TS 23.246:
MBMS Session Start Procedure
The BM-SC initiates the MBMS Session Start procedure when it is ready to send data. This is a
request to activate all necessary bearer resources in the network for the transfer of MBMS data
and to notify interested UEs of the imminent start of the transmission.
Note that the BM-SC's list of downstream nodes and the MBMS-GW's list of MBMS control plane
nodes (i.e. MMEs) are obtained as follows: the list of MBMS control plane nodes for the
MBMS-GW is sent from the BM-SC to the MBMS-GW in the Session Start Request. At Sm interface
level, the session start procedure consists of the following:
The MBMS-GW creates an MBMS bearer context. The MBMS-GW stores the session attributes and
the list of MBMS control plane nodes in the MBMS bearer context and allocates a transport
network IP multicast address and a C-TEID (Common Tunnel Endpoint Identifier) for this session.
The MBMS-GW sends a Session Start Request message including the session attributes to MMEs
listed in the "list of MBMS control plane nodes" parameter after filtering the list using the Access
indicator, thus ignoring entries not consistent with the Access indicator.
MBMS Session Stop Procedure
The BM-SC Session and Transmission function initiates the MBMS Session Stop procedure when it
considers the MBMS session to be terminated. The session is typically terminated when there is
no more MBMS data expected to be transmitted for a sufficiently long period of time to justify a
release of bearer plane resources in the network. At Sm interface level, the session stop
procedure consists of the following:
The MBMS-GW forwards the Session Stop Request message, received from the BM-SC, to the MMEs
which previously received the Session Start Request message, releases the corresponding bearer
plane resources towards the eUTRAN, and sets the state attribute of its MBMS Bearer Context to
'Standby'.
The MBMS-GW releases the MBMS Bearer Context in case of a broadcast MBMS bearer service.
MBMS Session Update Procedure
The MBMS Session Update procedure may be invoked by BM-SC. The BM-SC uses the procedure to
update the service area for an ongoing MBMS Broadcast service session (MBMS Service Area is the
area within which data of a specific MBMS session are sent). The BM-SC initiates the MBMS
Session Update procedure when the service attributes (e.g. Service Area) for an ongoing MBMS
Broadcast service session shall be modified, e.g. the Session Update procedure for EPS is
initiated by the BM-SC to notify eNBs to join or leave the service area. At Sm interface
level, the session update procedure consists of the following:
The MBMS-GW filters the new list of MBMS control plane nodes using the Access indicator to
ignore entries not consistent with the Access indicator, and compares remaining entries in the
new list of MBMS control plane nodes with the list of MBMS control plane nodes it has stored in
the MBMS Bearer Context. It sends an MBMS Session Start Request message to any added MME,
an MBMS Session Stop Request to any removed MME, and an MBMS Session Update Request to the
remaining MMEs in the new list.
The characteristics of these procedures will be described in a further release of the document.
Sm interface bandwidth computation follows the same rules as those described for the Gn
interface above (refer to section 3.7.1). The Sm interface required bandwidth computation
formula is thus similar, with the following specificities:
The Sm bandwidth figure is computed at the MBMS-GW from a per MME contribution perspective
(MME_Sm_Throughput_Phy(DL/UL)).
Transport_Header is the correction factor to apply to the computed throughput to account for
transport overhead (GTP-C/UDP/IP/Ethernet plus optional IPsec header). Transport_Header
ranges from (the GTPv2-C header accounts for 12 bytes):
At MBMS-GW side, the computed Sm throughput has to be multiplied by the number of MMEs which
are served by this MBMS-GW.
MBMS-GW_Sm_Total_Throughput_Phy(DL/UL) =
N_MMEs_MBMS-GW x MME_Sm_Throughput_Phy(DL/UL)
3.7.19 SV INTERFACE
Sv interface is the reference point between MME and MSC/VLR to enable Single Radio Voice Call
Continuity (SRVCC). This interface is used to support Inter-RAT handover from IMS based voice
service over EPS to CS domain over 3GPP UTRAN/GERAN access. Sv interface is used by MME to
trigger SRVCC procedure towards MSC/VLR. The core network must support SRVCC procedures to
handover from eUTRAN to 3GPP UTRAN/GERAN (as described in 3GPP TS 23.216 the UE detects that
the network supports SRVCC from MME reply to the UE Attach request message). The UE must
support the SRVCC procedures (a SRVCC capable terminal indicates support for SRVCC in the MS
network capability parameter which is set in the attach message and in the tracking area / routing
area update messages). Sv relies on GTPv2-C which is described in 3GPP 29.274. Sv interface
protocol stack is depicted in the figure below.
Figure: Sv protocol stack, showing GTPv2-C over UDP over IP (optionally over IPsec/UDP/IP),
between MME and MSC/VLR.
Note that MME supports SRVCC capability from 3GPP eUTRAN to 3GPP GERAN/UTRAN only. SRVCC
through S102 interface to eHRPD (non 3GPP) is not supported.
Similarly to the S1-MME interface maximum throughput computation, the number and size of
signaling messages per busy hour exchanged at the Sv interface must be taken into
consideration. To keep the Sv interface bandwidth computation simple, only the main procedures
exchanged over the interface, i.e. the most bandwidth-consuming ones, are considered further
in the computation exercise.
The main procedure which is taken into consideration for Sv maximum throughput computation is:
Inter-RAT SRVCC Ho (with SRVCC PS to CS Request, SRVCC PS to CS Response, SRVCC PS to CS
Complete Notification and SRVCC PS to CS Complete Acknowledge procedures)
Table: Inter-RAT SRVCC handover procedure, downlink and uplink message characteristics.
Sv interface bandwidth computation follows the same rules as those described for the Gn
interface above (refer to section 3.7.1). The Sv interface required bandwidth computation
formula is thus similar, with the following specificities:
Transport_Header is the correction factor to apply to the computed throughput to account for
transport overhead (GTP-C/UDP/IP/Ethernet plus optional IPsec header). Transport_Header
ranges from (the GTPv2-C header accounts for 12 bytes):
At MME side, the computed eNB contribution to Sv throughput (eNB_Sv) has to be multiplied by the
number of eNBs which are served by this MME to assess Sv interface throughput at MME level.
MME_Sv_Total_Throughput_Phy(DL/UL) =
N_eNBs_MME x eNB_Sv_Throughput_Phy(DL/UL)
At MSC/VLR side, the computed eNB contribution to Sv throughput (eNB_Sv) has to be multiplied by
the number of eNBs which are served by this MSC/VLR to assess Sv interface throughput at MSC/VLR
level.
MSC/VLR_Sv_Total_Throughput_Phy(DL/UL) =
N_eNBs_MSC/VLR x eNB_Sv_Throughput_Phy(DL/UL)
Referring to the call model parameters, as they are given in Table 2-3, Table 2-4 and Table
2-5, the following throughput is expected at Sv interface level, on a per eNB contribution
basis, with the following assumptions:
IPv4 with VLAN tagging without IPsec is assumed, 20 MHz on air, 8634 attached subscribers per
eNB (1439 attached subscribers per cell, 6 cells per eNB).
Sv throughput figures: Downlink: 0 Kbit/s, Uplink: 0 Kbit/s.
No Inter-RAT user activity in the example traffic model shown in Table 2-3.
3.7.20 S102 INTERFACE
S102 interface is the reference point between the MME and the 3GPP2 1xCS IWS. The S102 interface is
used to support UEs that do not transmit and receive on both the LTE and 1xRTT radio interfaces
simultaneously. The S102 interface allows CSFB to 1xRTT, in order to establish voice calls in the CS
domain by supporting registration over EPS procedures as specified in 3GPP TS 23.272. S102
interface enables 3GPP2 1xCS signaling messages tunneling between MME and 3GPP2 1xCS IWS, as
described in section 2.1.4.1. S102 interface protocol stack is described in the figure below.
Figure: S102 protocol stack, showing 1x CS signaling tunneled over UDP over IP, between MME
and 1xCS IWS.
Note that as per 3GPP TS 23.216, 1x CS signaling messages are those messages that are defined
for the A21 interface as described in 3GPP2 A.S0008-C. Consequently, S102 is intended to carry
A21 procedure messages between the MME and the 1x CS IWS.
Similarly to the S1-MME interface maximum throughput computation, the number and size of
signaling messages per busy hour exchanged at the S102 interface must be taken into
consideration. To keep the S102 interface bandwidth computation simple, only the main
procedures exchanged over the interface, i.e. the most bandwidth-consuming ones, are
considered further in the computation exercise.
S102 interface throughput relies upon the exchange of specific messages for CSFB to 1xRTT, as
described in 3GPP TS 23.272. Note that UEs with a single Rx configuration are not able to camp
on 1xRTT while they are active in eUTRAN. The network therefore provides mechanisms for the UE
to perform registration to 1xRTT, receive 1xRTT paging, SMS, etc., while the UE is in eUTRAN.
The following procedures (and message exchanges over S102) are the main procedures to take
into consideration for S102 interface throughput computation.
Note that S102 interface call flows are described in 3GPP TS 23.272; further details for
eUTRAN-cdma 1xCSFB and enhanced CSFB to 1xRTT (e1xCSFB) call flows are also given in annex F
of 3GPP2 A.S0008-D (though annex F is informative, its description of call flows is used as a
basis to evaluate the number of messages exchanged over S102 for the various 1xCSFB and
e1xCSFB procedures). 3GPP 29.277 also provides details regarding the S102 interface (messages
and message formats).
1xCSFB Registration
An MS/UE that is attached to the eUTRAN might decide to register with the 1x system CS domain
by means of this procedure.
Note that the same procedure is used for the UE-initiated detach procedure (the UE that is
registered to the 1xRTT CS system initiates the detach procedure in eUTRAN access due to
switch-off, and the UE is required to perform the “power-down” registration procedure with the
1xRTT CS system via the S102 tunnel).
For TAU, there are no messages generated over S102, unless the UE moves to new MME (in this
event, there is an MME relocation procedure where the new MME takes over the S102 session:
indeed S102 redirection procedure is used when the UE performs TAU with MME change while
the UE is registered with the 1xRTT CS domain).
Paging (for MT events i.e Mobile Terminating call or SMS)
Enhanced 1xCSFB (e1xCSFB) is an enhanced version of 1xCSFB, defined in Release 9, which
enables the network to provide the UE with radio resources that are allocated for the UE by
the target 1xRTT cell. e1xCSFB with a dual transceiver is capable of receiving/transmitting
over both LTE and 1xRTT radios simultaneously. In Release 10, dual-transceiver e1xCSFB is
introduced to utilize the dual transceiver capability whilst avoiding higher battery
consumption (the dual-transceiver e1xCSFB mechanism turns on the 1xRTT radio only when a CS
voice call or SMS is to be served, and turns it off otherwise).
Mobile Terminating Call with Concurrent optimized PS handover is not supported in release
LR13.1L.
Mobile terminating procedure.
In case of a 1xCSFB mobile terminating call, the 1xMSC sends a paging request to the 1xCS IWS,
which forwards the paging request to the MME, and then to the UE. From this point on, the
procedure is similar to the 1xCSFB mobile originating procedure, as the UE will hand off to
the 1xRTT radio system.
In case of an e1xCSFB mobile terminating call, the paging request and paging response are
exchanged over the S102 interface, as is the Universal Handoff Direction Message that
instructs the UE to hand off to the 1xRTT radio system.
The characteristics of the main procedures are described in the tables below.
Note that the e1xCSFB configuration is assumed for S102 interface dimensioning, as this is the
worst case configuration from an S102 interface dimensioning perspective.
S102 interface bandwidth computation follows the same rules as those described for the Gn
interface above (refer to section 3.7.1). The S102 interface required bandwidth computation
formula is thus similar, with the following specificities:
Transport_Header is the correction factor to apply to the computed throughput to account for
transport overhead (UDP/IP/Ethernet plus optional IPsec header). Transport_Header ranges
from:
At MME side, the computed eNB contribution to S102 throughput (eNB_S102) has to be multiplied by
the number of eNBs which are served by this MME to assess S102 interface throughput at MME level.
MME_S102_Total_Throughput_Phy(DL/UL) =
N_eNBs_MME x eNB_S102_Throughput_Phy(DL/UL)
Referring to the call model parameters, as they are given in Table 2-3, Table 2-4 and Table
2-5, the following throughput is expected at S102 interface level, on a per eNB contribution
basis, with the following assumptions:
IPv4 with VLAN tagging without IPsec is assumed, 20 MHz on air, 8634 attached subscribers per
eNB (1439 attached subscribers per cell, 6 cells per eNB).
S102 throughput figures: Downlink: 0 Kbit/s, Uplink: 0 Kbit/s.
No 1xCSFB or e1xCSFB user activity in the example traffic model shown in Table 2-3 and Table
2-4.
NPO INDICATORS: the NPO equivalent, if the trigger is a 3GPP counter, and additional NPO
indicators that are helpful for troubleshooting.
Available capacity monitoring counters as well as monitoring and troubleshooting methods will be
provided for the different LTE Network elements and interfaces presented in Chapter 3.
Call Admission blocking (due to lack of resources) – this is applicable only for resources that may
block the traffic admission (block new calls for instance) when the congestion limit is reached.
Call Admission Blocking (%) = 1 - # Call_Success / # Call_Request
Resource load which can be separately evaluated in order to estimate the current resource load
comparing to some critical threshold that may trigger a capacity/optimization action. This step
is particularly important for resources that are not part of call admission control mechanism
(non traffic blocking resources).
Resource Load (%) = Resources used / Total resources available
The decision of resource capacity extension should be based on a combination of the two above
indicators (as in the diagram below):
Figure: capacity monitoring decision flow. Depending on whether the resource type is traffic
blocking, blocking and load are checked against their thresholds; capacity optimization/tuning
and/or resource addition are required when they are exceeded.
The two main capacity monitoring elements in the above diagram can be described as follows:
For resources that may generate traffic blocking (resources considered in Call Admission Control
[CAC] algorithm):
- If no blocking is detected => No corrective action is required
- If blocking occurs, an additional level of investigation is needed:
o Load is Low (below a predefined threshold, also called Critical Threshold) => Capacity
analysis and tuning are required
o Load is High (above a predefined threshold) => Capacity analysis and tuning are required
and most probably (depending on tuning results), new resource needs to be added.
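The decision logic above can be sketched as follows; the critical threshold is resource-specific (derived from the Chapter 3 dimensioning rules), so the default value below is illustrative only.

```python
# Sketch of the capacity-monitoring decision logic described above.
# The critical threshold is resource-specific; the default below is
# illustrative only.

def call_admission_blocking(call_success: int, call_request: int) -> float:
    """Call Admission Blocking = 1 - #Call_Success / #Call_Request."""
    return 1.0 - call_success / call_request

def resource_load(used: float, total: float) -> float:
    """Resource Load = resources used / total resources available."""
    return used / total

def capacity_action(blocking: float, load: float, critical_threshold: float = 0.8) -> str:
    """Combine the two indicators for a traffic-blocking resource."""
    if blocking == 0.0:
        return "no corrective action required"
    if load < critical_threshold:
        return "capacity analysis and tuning required"
    return "capacity analysis and tuning required; resource addition likely needed"

# Example: 2% blocking combined with 90% load points to resource addition.
action = capacity_action(call_admission_blocking(980, 1000), resource_load(90, 100))
```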
Note: The Critical Threshold is computed for each resource and is generally based on the
engineering dimensioning rules presented in Chapter 3.
Note: The monitoring indicators presented in this section are either raw counters or monitoring
metrics (computed based on combinations of raw counters).
For the monitoring metrics the current NPO (Network Performance & Optimization tool)
implemented indicators will be presented, with the associated metric name and identifier. In
addition the formula used for the metric computation will be also presented.
This section provides a summary of the critical triggers used throughout the Air Interface monitoring
process, associated with a description of each trigger.
Note: Triggers not followed by a reference number are not NPO standard indicators. They are
either based on a combination of NPO standard indicators or on 3GPP counters.
Lack_of_PRB_CAC_Fails_UL
Number of CAC failures in UL due to the lack of PRB resources.
SYSTEM
eNB
TYPE
Traffic Resource
DESCRIPTION
This indicator provides the number of times a Call Admission Control procedure has failed.
It indicates the lack of UL resources for TRB or SRB admission.
RECOMMENDED THRESHOLD
Value should be = 0
MEASUREMENT
Lack_of_PRB_CAC_Fails_UL =
VS.CACFailure.LackOfULPRBResourcePerCellForTopPriorityTRBAdmission +
VS.CACFailure.LackOfULPRBResourcePerCellForHighPriorityTRBAdmission +
VS.CACFailure.LackOfULPRBLicensePerBandForHighPriorityTRBAdmission +
VS.CACFailure.LackOfULPRBResourcePerCellForMediumPriorityTRBAdmission +
VS.CACFailure.LackOfULPRBResourcePerCellForLowPriorityTRBAdmission +
VS.CACFailure.LackOfULPRBLicensePerBandForLowPriorityTRBAdmission +
VS.CACFailure.LackOfULPRBResourcePerCellForSRBAdmission +
VS.CACFailure.LackOfULPRBLicensePerBandForSRBAdmission
ACTION
When the counter value becomes > 0, this indicates that Call Admission Control has rejected a
new call or bearer admission on the cell.
The cell is suspected to be in critical capacity conditions, and there might be a need to add
resources.
COMMENTS
Not Available.
EXAMPLE
Not Available.
NPO INDICATORS
Not Available.
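As a sketch, the measurement above is a plain sum over the eight per-priority failure counters; the counter values in the example are hypothetical.

```python
# Sketch of the Lack_of_PRB_CAC_Fails_UL aggregation: the indicator is the
# sum of the eight UL PRB CAC failure counters listed above. The sample
# counter values are hypothetical.

UL_CAC_FAIL_COUNTERS = [
    "VS.CACFailure.LackOfULPRBResourcePerCellForTopPriorityTRBAdmission",
    "VS.CACFailure.LackOfULPRBResourcePerCellForHighPriorityTRBAdmission",
    "VS.CACFailure.LackOfULPRBLicensePerBandForHighPriorityTRBAdmission",
    "VS.CACFailure.LackOfULPRBResourcePerCellForMediumPriorityTRBAdmission",
    "VS.CACFailure.LackOfULPRBResourcePerCellForLowPriorityTRBAdmission",
    "VS.CACFailure.LackOfULPRBLicensePerBandForLowPriorityTRBAdmission",
    "VS.CACFailure.LackOfULPRBResourcePerCellForSRBAdmission",
    "VS.CACFailure.LackOfULPRBLicensePerBandForSRBAdmission",
]

def lack_of_prb_cac_fails_ul(counters: dict) -> int:
    """Sum the UL PRB CAC failure counters; missing counters count as 0."""
    return sum(counters.get(name, 0) for name in UL_CAC_FAIL_COUNTERS)

# Any value > 0 flags the cell for capacity analysis.
sample = {"VS.CACFailure.LackOfULPRBResourcePerCellForSRBAdmission": 3}
fails = lack_of_prb_cac_fails_ul(sample)
```

The DL indicator below has the same structure over the corresponding DL counters.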
Lack_of_PRB_CAC_Fails_DL
Number of CAC failures in DL due to the lack of PRB resources.
SYSTEM
eNB
TYPE
Traffic Resource
DESCRIPTION
This indicator provides the number of times a Call Admission Control procedure has failed.
It indicates the lack of DL resources for TRB or SRB admission.
RECOMMENDED THRESHOLD
Value should be = 0
MEASUREMENT
Lack_of_PRB_CAC_Fails_DL =
VS.CACFailure.LackOfDLPRBResourcePerCellForTopPriorityTRBAdmission +
VS.CACFailure.LackOfDLPRBResourcePerCellForHighPriorityTRBAdmission +
VS.CACFailure.LackOfDLPRBLicensePerBandForHighPriorityTRBAdmission +
VS.CACFailure.LackOfDLPRBResourcePerCellForMediumPriorityTRBAdmission +
VS.CACFailure.LackOfDLPRBResourcePerCellForLowPriorityTRBAdmission +
VS.CACFailure.LackOfDLPRBLicensePerBandForLowPriorityTRBAdmission +
VS.CACFailure.LackOfDLPRBResourcePerCellForSRBAdmission +
VS.CACFailure.LackOfDLPRBLicensePerBandForSRBAdmission
ACTION
When the counter value becomes > 0, this indicates that Call Admission Control has rejected a
new call or bearer admission on the cell.
The cell is suspected to be in critical capacity conditions, and there might be a need to add
resources.
COMMENTS
Not Available.
EXAMPLE
Not Available.
NPO INDICATORS
Not Available.
PRB_DL_Utilization
Load usage of Physical Radio Block (PRB) resources (Average DL PRB utilization).
SYSTEM
LTE Cell
TYPE
Traffic Resource
DESCRIPTION
Load usage of Physical Radio Block (PRB) resources (Average DL PRB utilization)
RECOMMENDED THRESHOLD
Value should be < 80%
MEASUREMENT
PRB_DL_Utilization = (VS.DLPRBUsed.Cum / VS.DLPRBUsed.NbEvt / 1000) / MaxPRB_perTTI
Where MaxPRB_perTTI depends on the channel bandwidth:
Channel bandwidth (MHz) 1.4 3 5 10 15 20
MaxPRB_perTTI 6 15 25 50 75 100
ACTION
When the indicator value exceeds 80%, the cell is suspected to be in critical capacity
conditions, and there might be a need to add resources.
COMMENTS
Not Available.
EXAMPLE
Not Available.
NPO INDICATORS
VS_PRB_used_NonGBR_DL_avg_PerCell (L12039_2_30_CI).
VS_PRB_used_NonGBR_DL_avg_PerCell_PerPLMN (L12039_2_50_CI).
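The measurement above can be sketched as follows; the mapping of MaxPRB_perTTI to channel bandwidth follows the standard E-UTRA transmission bandwidth configurations (3GPP TS 36.104), and the counter values in the example are hypothetical.

```python
# Sketch of the PRB_DL_Utilization computation. MaxPRB_perTTI follows the
# standard E-UTRA channel bandwidths (3GPP TS 36.104); the counter values
# in the example are hypothetical.

MAX_PRB_PER_TTI = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}  # MHz -> PRBs

def prb_dl_utilization(dl_prb_used_cum: float, dl_prb_used_nb_evt: float,
                       bandwidth_mhz: float) -> float:
    """(VS.DLPRBUsed.Cum / VS.DLPRBUsed.NbEvt / 1000) / MaxPRB_perTTI."""
    avg_prb_per_tti = dl_prb_used_cum / dl_prb_used_nb_evt / 1000
    return avg_prb_per_tti / MAX_PRB_PER_TTI[bandwidth_mhz]

# Hypothetical 20 MHz cell: an average of 40 PRBs used per TTI gives 40%
# utilization, below the recommended 80% threshold.
util = prb_dl_utilization(40_000_000, 1_000, 20)
```

PRB_UL_Utilization below is computed identically from the UL counters.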
PRB_UL_Utilization
Load usage of Physical Radio Block (PRB) resources (Average UL PRB utilization).
SYSTEM
LTE Cell
TYPE
Traffic Resource
DESCRIPTION
Load usage of Physical Radio Block (PRB) resources (Average UL PRB utilization)
RECOMMENDED THRESHOLD
Value should be < 80%
MEASUREMENT
PRB_UL_Utilization = (VS.ULPRBUsed.Cum / VS.ULPRBUsed.NbEvt / 1000) / MaxPRB_perTTI
Where MaxPRB_perTTI depends on the channel bandwidth:
Channel bandwidth (MHz) 1.4 3 5 10 15 20
MaxPRB_perTTI 6 15 25 50 75 100
ACTION
When the indicator value exceeds 80%, the cell is suspected to be in critical capacity
conditions, and there might be a need to add resources.
COMMENTS
Not Available.
EXAMPLE
Not Available.
NPO INDICATORS
VS_PRB_used_NonGBR_UL_avg_PerCell (L12040_2_30_CI).
VS_PRB_used_NonGBR_UL_avg_PerCell_PerPLMN (L12040_2_50_CI).
Average_DL_Cell_Load
Average Aggregate Cell Throughput in DL (Mbps)
SYSTEM
eNB
TYPE
Traffic Resource
DESCRIPTION
Average Aggregate Cell Throughput in DL (Mbps)
RECOMMENDED THRESHOLD
ALU's recommended engineering target is 70% of the cell capacity limit (Table 3-1, Table 3-2)
MEASUREMENT
Average_DL_Cell_Load =
VS_ERABs_all_RLC_PDU_UP_kbyte_DL x 8 / (BOP_DL_duration x 1000)
Where:
VS_ERABs_all_RLC_PDU_UP_kbyte_DL = (VS.DLRlcPduKbytes.NonGBR +
VS.DLRlcPduKbytes.VoIP + VS.DLRlcPduKbytes.OtherGBR +
VS.DLRlcPduKbytesForCarrierAggregation + VS.DLRlcPduKbytesResentForCarrierAggregation)
* 1024/1000
ACTION
When the indicator value exceeds the critical threshold value (Table 4-1) which is calculated
as 70% of the capacity limits (Table 3-1 & Table 3-2), cell shall be added to a cell suspect list
and the number of Active subscribers shall be monitored.
COMMENTS
Not Available.
EXAMPLE
Not Available.
NPO INDICATORS
VS_ERABs_NonGBR_RLC_PDU_UP_kbyte_DL (L12105_2_30_CI)
VS_ERAB_VoIP_RLC_PDU_UP_kbyte_DL (L12105_0_30_CI)
VS_ERABs_GBR_WithoutVoIP_RLC_PDU_UP_kbyte_DL (L12105_1_30_CI)
VS_ERABs_NonGBR_RLC_PDU_UP_KiByte_DL (L12105_2)
VS_ERAB_VoIP_RLC_PDU_UP_KiByte_DL (L12105_0)
VS_ERABs_GBR_WithoutVoIP_RLC_PDU_UP_KiByte_DL (L12105_1)
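The measurement above can be sketched as follows; all counter values and the BOP duration are hypothetical, and the x 1024/1000 factor converts the KiByte-based counters to kbytes as in the formula.

```python
# Sketch of the Average_DL_Cell_Load computation above. Counter inputs are
# in KiBytes (converted by 1024/1000 as in the formula) and BOP_DL_duration
# is in seconds; all example values are hypothetical.

def average_dl_cell_load_mbps(nongbr_kib: float, voip_kib: float,
                              other_gbr_kib: float, ca_kib: float,
                              ca_resent_kib: float,
                              bop_dl_duration_s: float) -> float:
    """VS_ERABs_all_RLC_PDU_UP_kbyte_DL x 8 / (BOP_DL_duration x 1000)."""
    kbytes = (nongbr_kib + voip_kib + other_gbr_kib
              + ca_kib + ca_resent_kib) * 1024 / 1000
    return kbytes * 8 / (bop_dl_duration_s * 1000)

# Hypothetical busiest hour: 9.6 million KiB of RLC PDUs in 3600 s,
# to be compared against 70% of the cell capacity limit.
load = average_dl_cell_load_mbps(9_000_000, 200_000, 100_000, 300_000, 0, 3600)
```

Average_UL_Cell_Load below follows the same formula over the UL counters, without the carrier aggregation terms.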
Average_UL_Cell_Load
Average Aggregate Cell Throughput in UL (Mbps)
SYSTEM
eNB
TYPE
Traffic Resource
DESCRIPTION
Average Aggregate Cell Throughput in UL (Mbps)
RECOMMENDED THRESHOLD
ALU's recommended engineering target is 70% of the cell capacity limit (Table 3-1, Table 3-2)
MEASUREMENT
Average_UL_Cell_Load =
VS_ERABs_all_RLC_PDU_UP_kbyte_UL x 8 / (BOP_UL_duration x 1000)
Where:
VS_ERABs_all_RLC_PDU_UP_kbyte_UL = (VS.ULRlcPduKbytes.NonGBR +
VS.ULRlcPduKbytes.VoIP + VS.ULRlcPduKbytes.OtherGBR) * 1024/1000
ACTION
When the indicator value exceeds the critical threshold value (Table 4-1) which is calculated
as 70% of the capacity limits (Table 3-1 & Table 3-2), cell shall be added to a cell suspect list
and the number of Active subscribers shall be monitored.
COMMENTS
Not Available.
EXAMPLE
Not Available.
NPO INDICATORS
VS_ERABs_NonGBR_RLC_PDU_UP_kbyte_UL (L12106_2_30_CI)
VS_ERAB_VoIP_RLC_PDU_UP_kbyte_UL (L12106_0_30_CI)
VS_ERABs_GBR_WithoutVoIP_RLC_PDU_UP_kbyte_UL (L12106_1_30_CI)
VS_ERABs_NonGBR_RLC_PDU_UP_KiByte_UL (L12106_2)
VS_ERAB_VoIP_RLC_PDU_UP_KiByte_UL (L12106_0)
VS_ERABs_GBR_WithoutVoIP_RLC_PDU_UP_KiByte_UL (L12106_1)
Average_User_Load
Average User Experienced Throughput (Mbps)
SYSTEM
eNB
TYPE
Traffic Resource
DESCRIPTION
This trigger measures RLC downlink throughput experienced by end users. It reflects queuing
delay and is negatively impacted by scheduler loading.
This trigger is FDD specific.
RECOMMENDED THRESHOLD
The limit value is to be established by the service provider.
MEASUREMENT
Average_User_Load = Avg_DL_User_RLC_Tput (Mbps) Aggregated over the BOP_DL
Avg_DL_User_RLC_Tput (Mbps) =
8 * (1024/1000) * max {
(VS.DLRlcBurstSize.OtherGBR + VS.DLRlcBurstSize.NonGBR +
VS.DLRlcBurstSize.NonGBRWithCA - VS.DLRlcPduSizeInLastTTI.OtherGBR -
VS.DLRlcPduSizeInLastTTI.NonGBR - VS.DLRlcPduSizeInLastTTI.NonGBRWithCA) /
(VS.DLRlcBurstTime.OtherGBR + VS.DLRlcBurstTime.NonGBR +
VS.DLRlcBurstTime.NonGBRWithCA - VS.DLRlcLastTTITime.OtherGBR -
VS.DLRlcLastTTITime.NonGBR - VS.DLRlcLastTTITime.NonGBRWithCA)
,
(VS.DLRlcBurstSize.OtherGBR + VS.DLRlcBurstSize.NonGBR +
VS.DLRlcBurstSize.NonGBRWithCA) / (VS.DLRlcBurstTime.OtherGBR +
VS.DLRlcBurstTime.NonGBR + VS.DLRlcBurstTime.NonGBRWithCA)
}
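A minimal sketch of this max-of-two-ratios computation, with the per-QCI counters (OtherGBR + NonGBR + NonGBRWithCA) already summed. All values are hypothetical, and the size/time counter units are assumed to be consistent with the 8 × 1024/1000 scaling above:

```python
def avg_dl_user_rlc_tput_mbps(burst_size, burst_time, last_tti_size, last_tti_time):
    """Avg_DL_User_RLC_Tput (Mbps): inputs are the aggregated burst size/time
    and last-TTI size/time counter sums across the three QCI classes."""
    # Ratio with the last-TTI contribution removed (avoids padding bias)...
    excluding_last_tti = (burst_size - last_tti_size) / (burst_time - last_tti_time)
    # ...and the plain burst ratio; the formula takes the larger of the two.
    including_last_tti = burst_size / burst_time
    return 8 * (1024 / 1000) * max(excluding_last_tti, including_last_tti)

tput = avg_dl_user_rlc_tput_mbps(1000, 200, 100, 50)
```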
ACTION
When the indicator value drops below the critical threshold value, the cell is marked with a
major capacity warning, and capacity analysis/tuning is required.
COMMENTS
Not Available.
EXAMPLE
Not Available.
Alcatel-Lucent - Proprietary - Use pursuant to applicable agreements
NPO INDICATORS
Not Available.
Average_RRC_Connected_Users_per_Cell
Average number of users in the RRC connected state in the cell.
SYSTEM
eNB
TYPE
Traffic Resource
DESCRIPTION
This indicator provides the average number of RRC Connected users in the cell.
RECOMMENDED THRESHOLD
ALU's recommended engineering target is 80% of the modem capacity limit per cell (Table
3-7 and Table 3-8).
MEASUREMENT
Average_RRC_Connected_Users_per_Cell = (RRC.Conn.cum / RRC.Conn.NbEvt) *
VS_nb_PLMNs_OnCell
ACTION
When the indicator value exceeds 80% of the modem capacity limit per cell (Table 3-7 and
Table 3-8), the cell is marked with a major capacity warning, and capacity analysis/tuning is
required.
COMMENTS
In case of one PLMN, VS_nb_PLMNs_OnCell = 1
This indicator is intended to be used when OOT is deactivated. It replaces the indicator
VS_UE_active_WithAssignedPUCCH_nb_OnCell_avg_PerCell (L13260_30_CI) described
below.
In that case, the Engineering Limit Value (80%) shall be applied to the number of Active
Subscribers in Table 3-7 and Table 3-8, since the average number of Connected Subscribers
over the Busiest Observation Period will be the same as the average number of Active
Subscribers.
EXAMPLE
Not Available.
NPO INDICATORS
VS_UE_RRC_connected_state_avg_PerCell (L13201_30_CI).
Average_Active_Users_per_cell
Average number of users in the Active state in the cell
SYSTEM
eNB
TYPE
Traffic Resource
DESCRIPTION
This indicator provides the average number of active users (RRC connected users with uplink
time alignment not expired and uplink control channel resource assignment) in the cell.
RECOMMENDED THRESHOLD
ALU's recommended engineering target is 80% of the modem capacity limit per cell (Table
3-7 and Table 3-8).
MEASUREMENT
Average_Active_Users_per_cell = VS.NbActiveUsersPerCell.Cum /
VS.NbActiveUsersPerCell.NbEvt
ACTION
When the indicator value exceeds 80% of the Total Active Users modem capacity limit per
cell (Table 3-7 and Table 3-8), the cell is marked with a major capacity warning, and
capacity analysis/tuning is required.
COMMENTS
This indicator is available when OOT is activated. If OOT is deactivated, the indicator
VS_UE_RRC_connected_state_avg_PerCell (L13201_30_CI) shall be used.
EXAMPLE
Not Available.
NPO INDICATORS
VS_UE_active_WithAssignedPUCCH_nb_OnCell_avg_PerCell (L13260_30_CI)
* When OOT is deactivated, the Average Number of Active Subscribers over the Busiest
Observation Period (BOP) can be determined as Average_RRC_Connected_Users_per_Cell
(VS_UE_RRC_connected_state_avg_PerCell) aggregated over the BOP_DL/UL (for more
information about BOP_DL/UL, refer to the next section).
One of the most important indicators of air interface resource consumption is PRB utilization, as
calculated in the previous chapter for DL and UL (PRB DL Utilization, PRB UL Utilization).
The effect of PRB overload can be seen when the value of the following indicator has a non-zero
value:
Lack of PRB CAC Fails DL/UL
Two criteria can be used to evaluate the “Average Throughput”, which represents the average
resource load experienced in the cell.
One criterion is to calculate the “Average Aggregate Cell Throughput” and compare it to a
specific Critical Threshold; the other is to calculate the “Average User Experienced Throughput”
and compare it to a Critical Threshold defined by the service provider.
The previously mentioned data volume indicators can be used to compute the average cell
throughput during the observation period. For that the total DL and UL volume will need to be
evaluated with the following indicators:
VS_ERABs_all_RLC_PDU_UP_kbyte_DL
VS_ERABs_all_RLC_PDU_UP_kbyte_UL
The two above-mentioned indicators will be used to determine the cell's Busiest Observation
Period during the day in terms of DL and UL data volume transferred (referred to as BOP_DL and
BOP_UL), and also to compute the average throughput over the BOP_DL/UL duration.
In order to obtain a significant value for the average cell throughput, BOP_DL_duration and
BOP_UL_duration need to be as small as possible. It is recommended to use the smallest counter
reporting period (15 min = 900 sec).
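The BOP selection described above can be sketched as follows: scan the day's 15-minute volume samples, pick the busiest period, and compute its average throughput (the sample values are hypothetical):

```python
def busiest_observation_period(volume_kbytes_per_period):
    """Return (index, volume) of the busiest 15-minute reporting period."""
    idx = max(range(len(volume_kbytes_per_period)),
              key=lambda i: volume_kbytes_per_period[i])
    return idx, volume_kbytes_per_period[idx]

# VS_ERABs_all_RLC_PDU_UP_kbyte_DL per 900 s reporting period (a full day
# would have 96 samples; only a few are shown here):
samples = [120_000, 340_000, 980_000, 750_000, 410_000]
bop_idx, bop_volume = busiest_observation_period(samples)
# Average aggregate throughput over BOP_DL: kbytes -> kbits, then Mbps.
bop_dl_tput_mbps = bop_volume * 8 / (900 * 1000)
```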
The DL/UL cell average aggregate throughput (in Mbps) will be computed using the triggers:
ALU's recommended engineering target is 70% of the capacity limit (Table 3-1 and Table 3-2). The
table below shows the engineering targets of average cell throughput for the supported LR14.1.L
bandwidths and radio configurations:
Note: Critical Threshold values in Table 4-1 correspond to 2 or 3 sector cells only. For other
configurations, Sectorization Gain (Table 3-3) should be taken into consideration.
The User Experienced Throughput over the Busiest Observation Period during the day can be
determined as:
Note: If the Average User Load is being monitored, the “Air Interface Capacity Monitoring
Process” is applicable only to the DL.
The Average Number of Active Subscribers over the Busiest Observation Period during the day can be
determined as:
Average_Active_Users_per_Cell (VS_UE_active_WithAssignedPUCCH_nb_OnCell_avg_PerCell)
aggregated over the BOP_DL/UL
Engineering Limit Value is determined as 70% of the modem capacity level per cell.
If OOT is not enabled, the Average Number of Connected Subscribers over the Busiest
Observation Period during the day will be the same as the Average Number of Active Subscribers,
and can be determined as:
VS_UE_RRC_connected_state_avg_PerCell aggregated over the BOP_DL/UL.
In that case, the Engineering Limit Value (80%) shall be applied to the number of Active Subscribers
in Table 3-7 and Table 3-8.
As mentioned in Chapter 3.2, the eNB may be limited by the following user plane resources:
Both resource blocking and resource load should be considered for the above indicators.
The following monitoring indicators can be used to detect call admission blocking problems due to
lack of Number of User resources:
Note: Triggers not followed by a reference number are not NPO standard indicators. They are
either based on a combination of NPO standard indicators or on 3GPP counters.
RRC_Connection_CAC_Fail
Number of RRC Connection Establishment CAC failures
SYSTEM
eNB
TYPE
Traffic Resource
DESCRIPTION
This indicator provides the number of RRC connection procedures that failed for certain failure
causes.
RECOMMENDED THRESHOLD
Not to exceed the limit value established by the service provider
MEASUREMENT
Based on the 3GPP Counter RRC.ConnEstabFail.CACFailure
ACTION
When the counter value becomes > 0, this indicates that Call Admission Control has been
performed on a new RRC connection admission on the cell.
Cell shall be added to a cell suspect list and the number of ERABs shall be monitored.
COMMENTS
Not Available.
EXAMPLE
Not Available.
NPO INDICATORS
VS_RRC_cnx_fail_CAC (L12304_0_CI)
RRC_Connection_CAC_Fail_Rate
Rate of RRC Connection Establishment CAC failures
SYSTEM
eNB
TYPE
Traffic Resource
DESCRIPTION
It indicates the percentage (%) of RRC Connection requests that were rejected, out of the total
number of RRC Connection requests, due to lack of Number of Users resources (computed per
cell over the observation period).
RECOMMENDED THRESHOLD
Not to exceed the limit value established by the service provider
MEASUREMENT
RRC_Connection_CAC_Fail_Rate = VS_RRC_cnx_fail_CAC / VS_RRC_cnx_req
Where:
VS_RRC_cnx_fail_CAC = RRC.ConnEstabFail.CACFailure
VS_RRC_cnx_req = RRC.ConnEstabAtt.EmergencyCallAttempts
+ RRC.ConnEstabAtt.HighPriorityAccessAttempts
+ RRC.ConnEstabAtt.PageResponsesReceived
+ RRC.ConnEstabAtt.MobileOriginatedSignalling
+ RRC.ConnEstabAtt.MobileOriginatedUserBearer
+ RRC.ConnEstabAtt.Other
- RRC.ConnEstabFail.InterventionOAM
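A sketch of the rate computation, with hypothetical counter values:

```python
def rrc_connection_cac_fail_rate(c):
    """VS_RRC_cnx_fail_CAC / VS_RRC_cnx_req from the 3GPP counters above."""
    attempts = (c["EmergencyCallAttempts"]
                + c["HighPriorityAccessAttempts"]
                + c["PageResponsesReceived"]
                + c["MobileOriginatedSignalling"]
                + c["MobileOriginatedUserBearer"]
                + c["Other"]
                - c["InterventionOAM"])  # OAM-intervention failures are excluded
    return c["CACFailure"] / attempts

counters = {"EmergencyCallAttempts": 10, "HighPriorityAccessAttempts": 5,
            "PageResponsesReceived": 100, "MobileOriginatedSignalling": 200,
            "MobileOriginatedUserBearer": 600, "Other": 85,
            "InterventionOAM": 0, "CACFailure": 25}
rate = rrc_connection_cac_fail_rate(counters)  # 25 / 1000 = 2.5%
```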
ACTION
For the cells in the “congestion suspect” cell list, check if the congestion condition is reached
(VS_RRC_cnx_fail_CAC_rate > target), where the target is a limit imposed by the operator
QoS policy. It may be 2%, as usually considered in 2G and 3G networks (where voice traffic is
present), or higher (5% to 20%, for instance) if the network carries only data traffic.
The congestion condition has to be checked at the CAC_BH which represents the hour of the
day when the VS_RRC_cnx_fail_CAC indicator had the maximum value.
COMMENTS
Not Available.
EXAMPLE
Not Available.
NPO INDICATORS
VS_RRC_cnx_fail_CAC_rate (L12304_21_CI)
eRAB_Procedural_Setup_CAC_Fail
Number of CAC failures for all E-RABs due to lack of resources
SYSTEM
eNB
TYPE
Traffic Resource
DESCRIPTION
This indicator provides the number of E-RAB Setup procedures that failed.
RECOMMENDED THRESHOLD
Value should be = 0
MEASUREMENT
Based on the 3GPP counter: VS.ERABSetupFailed.CACFailure
ACTION
When the counter value becomes > 0, this indicates that Call Admission Control has been
performed on a new bearer admission on the cell.
Cell shall be added to a cell suspect list and the number of ERABs shall be monitored.
COMMENTS
Not Available.
EXAMPLE
Not Available.
NPO INDICATORS
VS_ERABs_proc_setup_fail_CAC (L12603_0)
UE_Context_Setup_CAC_Fail
Number of CAC failures for all E-RABs due to lack of resources
SYSTEM
eNB
TYPE
Traffic Resource
DESCRIPTION
This indicator provides the number of Initial Context Setup procedures that failed for certain
failure causes. In case of multiple partial E-RAB setup failures, only the last cause is used to
peg the counter.
RECOMMENDED THRESHOLD
Value should be = 0
MEASUREMENT
Based on the 3GPP counter: VS.InitialContextSetupFailed.CACFailure
ACTION
When the counter value becomes > 0, this indicates that Call Admission Control has been
performed on a new bearer admission on the cell.
Cell shall be added to a cell suspect list and the number of ERABs shall be monitored.
COMMENTS
Not Available.
EXAMPLE
Not Available.
NPO INDICATORS
VS_UE_ctxt_setup_fail_CAC (L12503_0)
UE_Context_Setup_CAC_Fail_Rate
Rate of CAC failures for all E-RABs due to lack of resources
SYSTEM
eNB
TYPE
Traffic Resource
DESCRIPTION
This indicator provides the CAC failure rate of the Context Setup procedure.
RECOMMENDED THRESHOLD
Value shouldn’t exceed the limit value established by the service provider
MEASUREMENT
UE_Context_Setup_CAC_Fail_Rate = VS.InitialContextSetupFailed.CACFailure /
(VS.UEContextSetupRequest.AfterDLNASTransport +
VS.UEContextSetupRequest.WithoutPreviousDLNASTransport)
ACTION
For the cells in the “congestion suspect” cell list, check if the congestion condition is reached
(UE Context Setup CAC Fail Rate > target), where the target is a limit imposed by the
operator QoS policy. It may be 2%, as usually considered in 2G and 3G networks (where voice
traffic is present), or higher (5% to 20%, for instance) if the network carries only data traffic.
The congestion condition has to be checked at the CAC_BH which represents the hour of the
day when the UE Context Setup CAC Fail (VS_UE_ctxt_setup_fail_CAC) indicator had the
maximum value.
The capacity troubleshooting decision will be based on the blocking target check:
If the congestion rate is lower than the target, no action is to be taken (just continue
monitoring the cell on an hourly basis). If the congestion rate is greater than the target, either
parameter tuning (changing the cell capacity configuration) or adding new resources is
required.
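The blocking-target check in the ACTION step can be sketched as follows (the 2% default is the voice-like example target quoted above; the function name is illustrative):

```python
def capacity_action(congestion_rate, target=0.02):
    """Decide the troubleshooting action at the CAC_BH for a suspect cell.
    target: operator QoS policy limit (~2% for voice-like, 5-20% data-only)."""
    if congestion_rate <= target:
        return "continue hourly monitoring"
    return "tune parameters or add resources"
```

For example, `capacity_action(0.01)` keeps the cell under hourly monitoring, while `capacity_action(0.08)` flags it for parameter tuning or resource addition.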
COMMENTS
Not Available.
EXAMPLE
Not Available.
NPO INDICATORS
VS_UE_ctxt_setup_fail_CAC_rate (L12503_21_CI).
Considering that this resource is less critical than the Number of Users resource (it does not
necessarily generate call rejects or call drops), the proposed monitoring method is essentially
based on resource blocking detection and PRB pool occupancy overload detection.
Average_RRC_Connected_Users_per_Cell
Average number of UEs in RRC connected state in a cell
SYSTEM
eNB
TYPE
Traffic Resource
DESCRIPTION
It indicates the average number of UEs in RRC connected state in a given cell during the
observation period (basically it indicates the average load of the Number of Users resource).
RECOMMENDED THRESHOLD
Not to exceed 80% of the Critical Threshold (Refer to Table 4-2)
MEASUREMENT
Average_RRC_Connected_Users_per_Cell = (RRC.Conn.cum / RRC.Conn.NbEvt) *
VS_nb_PLMNs_OnCell
ACTION
If the congestion condition is reached for a specific cell, the resource load shall be verified
(VS_UE_RRC_connected_state_avg ≥ Critical threshold) for the same cell, at the same
CAC_BH. The Critical threshold for the User blocking can be computed with the ErlangB law
for different blocking targets and different cell capacity figures (Table 4-2).
COMMENTS
In case of one PLMN, VS_nb_PLMNs_OnCell = 1
EXAMPLE
Not Available.
NPO INDICATORS
VS_UE_RRC_connected_state_avg_PerCell (L13201_30_CI).
VS_UE_RRC_connected_max (L13201_Max)
VS_UE_RRC_connected_min (L13201_Min)
VS_UE_active_nb_OnENB_avg_PerENB (L13212_30_CI)
VS_UE_active_WithAssignedPUCCH_nb_OnENB_avg_PerCell (L13259_30_CI)
VS_UE_active_WithAssignedPUCCH_nb_OnCell_avg_PerCell (L13260_30_CI)
VS_UE_active_WithAssignedPUCCH_nb_OnCell_max (L13260_Max)
Average_eRABs_per_Cell
Average number of established eRABs in a cell
SYSTEM
eNB
TYPE
Traffic Resource
DESCRIPTION
It indicates the average number of established eRABs in a given cell during the observation
period.
RECOMMENDED THRESHOLD
Not to exceed 80% of the CEM Bearer Capacity (Refer to Table 3-7 and Table 3-8).
MEASUREMENT
Average_eRABs_per_Cell = VS.NbBearersPerCell.Cum/VS.NbBearersPerCell.NbEvt
ACTION
If the congestion condition is reached for a specific cell, the resource load shall be verified
(Average eRABs per Cell ≥ eRABs Limit) for the same cell, at the same CAC_BH.
The eRABS Limit = CEM Bearer Capacity
COMMENTS
Not Available.
EXAMPLE
Not Available.
NPO INDICATORS
VS_ERABs_all_nb_OnCell_avg_PerCell (L13207_30_CI)
VS_ERABs_all_nb_OnCell_max (L13207_Max)
VS_ERABs_all_nb_OnENB_max (L13211_Max)
VS_ERABs_GBR_WithoutVoIP_nb_OnCell_max (L13205_Max)
VS_ERABs_GBR_WithoutVoIP_nb_OnENB_max (L13209_Max)
VS_ERABs_NonGBR_nb_OnCell_max (L13206_Max)
VS_ERABs_NonGBR_nb_OnENB_max (L13210_Max)
VS_ERABs_VoIP_nb_OnCell_max (L13204_Max)
VS_ERAB_VoIP_nb_OnENB_max (L13208_Max)
VS_ERABs_all_nb_OnCell_min (L13207_Min)
VS_ERABs_all_nb_OnENB_min (L13211_Min)
VS_ERABs_GBR_WithoutVoIP_nb_OnCell_min (L13205_Min)
VS_ERABs_GBR_WithoutVoIP_nb_OnENB_min (L13209_Min)
VS_ERABs_NonGBR_nb_OnCell_min (L13206_Min)
VS_ERABs_NonGBR_nb_OnENB_min (L13210_Min)
VS_ERAB_VoIP_nb_OnCell_min (L13204_Min)
VS_ERAB_VoIP_nb_OnENB_min (L13208_Min)
The monitoring and troubleshooting method for the eNB user plane resources is based on the
above-mentioned monitoring indicators.
As explained in chapter 4.3.1, this resource is used by the CAC mechanism and therefore needs
to be monitored for two aspects: resource blocking and resource load.
Based on the Monitoring indicators described in section 4.3.2, the monitoring & troubleshooting
diagram for this resource is described in the following figure:
Figure 4.3-1: Cell/Modem Capacity Monitoring Decision Tree (Nb. of User connections)
For the cells in the “congestion suspect” cell list, check if the congestion condition is reached
(RRC_Connection_CAC_Fail_Rate (VS_RRC_cnx_fail_CAC_rate) > target), where the target is
a limit imposed by the operator QoS policy. It may be 2%, as usually considered in 2G and 3G
networks (where voice traffic is present), or higher (5% to 20%, for instance) if the network
carries only data traffic.
The congestion condition has to be checked at the CAC_BH, which represents the hour of the
day when the RRC_Connection_CAC_Fail (VS_RRC_cnx_fail_CAC) indicator had its maximum
value.
If the congestion condition is reached (previous step) for a specific cell, the resource load shall
be verified (Average_RRC_Connected_Users_per_Cell (VS_UE_RRC_connected_state_avg) ≥
Critical Threshold) for the same cell, at the same CAC_BH. The Critical Threshold for user
blocking can be computed with the ErlangB law for different blocking targets and different cell
capacity figures. See examples of possible Critical Threshold values in the table below:
Cell Blocking Limit               Cell Capacity (Max Nb.   Cell Capacity (Avg Nb.
(In terms of User Connections)    of User Connections)     of User Connections)

2%                                 48                        39
(Typical for GBR or VoIP)         100                        88
                                  120                       107
                                  250                       235
                                  400                       386
                                  625                       612
                                  800                       789

10%                                48                        47
(Typical for Best effort Data)    100                       100
                                  120                       120
                                  250                       250
                                  400                       400
                                  625                       625
                                  800                       800
Note: The above table values are calculated as the minimum between the CEM capacity (Table 3-7
and Table 3-8) and the ErlangB law value for the example voice and data blocking targets, using
the following formula: Cell Capacity (Avg Nb. of User Connections) = Min(CEM capacity,
ErlangB[Modem; Blocking]). For example: Min(200, ErlangB[Modem=200; Blocking=10%] = 214.3) = 200.
If OOT is not enabled, the Average Number of Connected Subscribers over the Busiest
Observation Period during the day will be the same as the Average Number of Active Subscribers.
In that case, the Critical Threshold value shall be calculated based on the number of Active
Subscribers in Table 3-7 and Table 3-8.
Note: As the number of user connections used in the above formula is an average, it is suggested
to use 80% of the computed Cell Capacity (Avg Nb. of User Connections) as the Critical Threshold
(Engineering Limit), to avoid a higher blocking probability in practical implementations. The
Critical Threshold is therefore computed as:
Critical Threshold = 80% x Cell Capacity (Avg Nb. of User Connections)
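As a sketch, the Table 4-2 style values can be reproduced with the ErlangB recursion and a bisection on the offered load (the function names are illustrative):

```python
def erlang_b(servers, offered_load):
    """ErlangB blocking probability via the standard stable recursion."""
    b = 1.0
    for n in range(1, servers + 1):
        b = offered_load * b / (n + offered_load * b)
    return b

def max_offered_load(servers, blocking_target):
    """Largest offered load (avg connections) keeping blocking <= target."""
    lo, hi = 0.0, 10.0 * servers
    for _ in range(100):              # bisection: erlang_b grows with load
        mid = (lo + hi) / 2
        if erlang_b(servers, mid) <= blocking_target:
            lo = mid
        else:
            hi = mid
    return lo

def critical_threshold(cem_capacity, blocking_target):
    """80% engineering derating of min(CEM capacity, ErlangB capacity)."""
    return 0.8 * min(cem_capacity, max_offered_load(cem_capacity, blocking_target))
```

For example, `max_offered_load(200, 0.10)` is about 214.3 Erlangs, so the capped capacity is Min(200, 214.3) = 200 and the Critical Threshold is 160, consistent with the note above.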
As with the Number of User connections, the Number of Bearers resource is also used by the
CAC mechanism.
Based on the Monitoring indicators described in section 4.3.2, the monitoring & troubleshooting
diagram for this resource is described in the following figure:
Where:
Average_eRABs_per_Cell = VS.NbBearersPerCell.Cum/VS.NbBearersPerCell.NbEvt
ERABS Limit = CEM Bearer Capacity (Table 3-7 and Table 3-8)
in the network. If it’s 0, then no blocking event is observed, so no action is required. If it is > 0,
then go to next step.
If the above mentioned indicator is greater than 0, bearer blocking events occur and the cell
becomes “suspect” of high load (in terms of number of bearers) and it shall be added in a
“congestion suspect” cell list, which a list of cells that are to be closely monitored (on hour
basis).
For the cells in the “congestion suspect” cell list, check if the congestion condition is reached
(VS_UE_ctxt_setup_fail_CAC > target), where the target is a limit imposed by the operator
QoS policy. It may be 2%, as usually considered in 2G and 3G networks (where voice traffic is
present), or higher (5% to 20%, for instance) if the network carries only data traffic.
The congestion condition has to be checked at the CAC_BH, which represents the hour of the
day when the VS_UE_ctxt_setup_fail_CAC indicator had its maximum value.
The capacity troubleshooting decision will be based on the blocking target check:
- If the congestion rate is lower than the target, no action is to be taken (just continue
monitoring the cell on an hourly basis).
- If the congestion rate is greater than the target, either parameter tuning (changing the cell
capacity configuration) or adding new resources is required – see the red rectangle in
Figure 4.3-2.
PO (Processor Occupancy) counters with a closer correlation to the overload algorithm are used.
These counters represent the duration (in seconds) of the processor being in gradual overload
states.
Overload_Duration_on_System_ENB
This indicator provides the duration of overload
SYSTEM
eNB
TYPE
Load Resource
DESCRIPTION
This indicator provides the duration of overload on eNB controller.
This trigger is FDD specific.
RECOMMENDED THRESHOLD
Value should be = 0.
MEASUREMENT
Overload_Duration_on_System_ENB =
VS.ENodeBControlDurationInGradualOverloadSituation.OverloadDurationMinor +
VS.ENodeBControlDurationInGradualOverloadSituation.OverloadDurationMajor +
VS.ENodeBControlDurationInGradualOverloadSituation.OverloadDurationCritical
Where:
VS.ENodeBControlDurationInGradualOverloadSituation: This counter provides the duration
of overload for each state. The counter is incremented by the time elapsed between the
trigger event and the previous one.
Subcounters:
OverloadDurationMinor: Duration of minor overload state
OverloadDurationMajor: Duration of major overload state
OverloadDurationCritical: Duration of critical overload state
ACTION
Due to the negative consequences of the controller being in an overload state (connection
rejects, handover rejects, paging message drops, etc.), it is desirable for the controller not to
experience any overload during normal business operation.
COMMENTS
Not Available.
EXAMPLE
Not Available.
NPO INDICATORS
VS_overload_OnSystemENB_duration_all (L13236_30_CI)
VS_overload_OnSystemENB_duration_critical (L13236_2)
VS_overload_OnSystemENB_duration_major (L13236_1)
VS_overload_OnSystemENB_duration_minor (L13236_0)
Overload_Duration_on_L1L2_Ctrl_Processor1
This indicator provides the duration of overload
SYSTEM
bCEM
TYPE
Load Resource
DESCRIPTION
This indicator provides the duration of overload on eNB modem.
This trigger is FDD specific.
RECOMMENDED THRESHOLD
Value should be = 0
MEASUREMENT
Overload_Duration_on_L1L2_Ctrl_Processor1 =
VS.L1L2ControlProcessor1DurationInGradualOverloadSituation.OverloadDurationMinor +
VS.L1L2ControlProcessor1DurationInGradualOverloadSituation.OverloadDurationMajor +
VS.L1L2ControlProcessor1DurationInGradualOverloadSituation.OverloadDurationCritical
Where:
VS.L1L2ControlProcessor1DurationInGradualOverloadSituation: This counter provides the
duration of overload for each state of the L1L2 control processor. The counter is incremented
by the time elapsed between the trigger event and the previous one.
Subcounters:
OverloadDurationMinor: Duration of minor overload state
OverloadDurationMajor: Duration of major overload state
OverloadDurationCritical: Duration of critical overload state
ACTION
Due to the negative consequences of the controller being in an overload state (connection
rejects, handover rejects, paging message drops, etc.), it is desirable for the controller not to
experience any overload during normal business operation.
COMMENTS
Not Available.
EXAMPLE
Not Available.
NPO INDICATORS
VS_overload_OnL1L2CtrlProcessor1_duration_all (L13242_30_CI)
VS_overload_OnL1L2CtrlProcessor1_duration_critical (L13242_2)
VS_overload_OnL1L2CtrlProcessor1_duration_major (L13242_1)
VS_overload_OnL1L2CtrlProcessor1_duration_minor (L13242_0)
Average_CPU_Usage_eNB_Core_Processor
This indicator provides the average PO at eNB processor Core level
SYSTEM
eNB
TYPE
Load Resource
DESCRIPTION
This indicator provides the "average" CPU utilization rate at processor/core level of the eNB.
For eCCM: Processor ID = 0, Core ID = 0
For eCCM2: Processor ID = 0, Core ID = 1 to 7
RECOMMENDED THRESHOLD
Average PO (highest of all cores) < 50
MEASUREMENT
Average_CPU_Usage_eNB_Core_Processor =
VS.ENodeBProcessorCoreCpuUtilization.Cum / VS.ENodeBProcessorCoreCpuUtilization.NbEvt
/ 100
ACTION
Due to the negative consequences of the controller being in an overload state, it is desirable
for the controller not to experience any overload during normal business operation.
COMMENTS
Not Available.
EXAMPLE
Not Available.
NPO INDICATORS
VS_CPU_usage_eNB_CoreProcessor_rate_avg (L13251_31_CI).
Maximum_CPU_Usage_on_eNB_Core_Processor
This indicator provides the maximum PO at eNB processor Core level
SYSTEM
eNB
TYPE
Load Resource
DESCRIPTION
This indicator provides the "maximum" CPU utilization rate at processor/core level of the
eNB.
For eCCM: Processor ID = 0, Core ID = 0
For eCCM2: Processor ID = 0, Core ID = 1 to 7
RECOMMENDED THRESHOLD
Maximum PO (highest of all cores) < 90
MEASUREMENT
Maximum_CPU_Usage_on_eNB_Core_Processor =
VS.ENodeBProcessorCoreCpuUtilization.Max / 100
ACTION
Due to the negative consequences of the controller being in an overload state, it is desirable
for the controller not to experience any overload during normal business operation.
COMMENTS
Not Available.
EXAMPLE
Not Available.
NPO INDICATORS
VS_CPU_usage_eNB_CoreProcessor_rate_MAX (L13251_Max_31_CI)
Capacities defined as provisioning limits (These capacities are not monitored and alarmed via
the traditional monitoring methods. Please refer to Section 3.4.1.)
Capacities due to the number of subscriber licenses
- Number of simultaneous attached users
Capacities due to memory resources on the Service Boards
- Number of registered users
Capacities due to CPU resources on the Service Boards
- External message throughput
This section captures information for the MME application on 9471 WMM WM8.0.0, including
software functionality tied to the MME application. The terms MIF (MME Interface Function) and MAF
(MME Application Function) are used to refer to MME-specific software functionality that resides on
the Interface Board and Service Boards, respectively.
Detailed definitions for all MME observation counters can be found in [C24]. Detailed definitions for
all NPO indicators can be found in [C23].
Customers must order subscriber license software for the expected number of users. The software is
purchased as Right to Use for 10K subscribers (UEs). Order code 3HZ12012AA, WMM SW RTU:
MME/SGSN Subscriber license - 10K pack.
4.4.1.1 COUNTERS AND INDICATORS
The MME supports several observation counters to track the average and maximum numbers of
registered UEs during a PM reporting interval. The counters include Average and MAX number of
registered UE, and Average and MAX number of registered UE per PLMN.
In WM8.0.0, MME subscriber licenses are purchased as Right to Use for 10K subscribers. A single MAF
pair can support up to 500,000 registered users and a fully configured MME can support up to
5,000,000 registered users per MME. Several 10K-pack licenses may be required to support the
expected number of users.
In order to avoid exceeding the maximum number of users supported by the available licenses, the
VS.MaxNbrOfRegisteredUE counter should be monitored and a capacity extension decision should be
made once the counter reaches 90% of the license limit.
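The 90% monitoring rule can be sketched as follows (names and values are illustrative):

```python
def needs_license_extension(max_registered_ue, licensed_ue):
    """True when VS.MaxNbrOfRegisteredUE reaches 90% of the licensed SAU
    count, i.e. when an additional 10K-pack software RTU should be ordered."""
    return max_registered_ue >= 0.9 * licensed_ue

# Hypothetical deployment: ten 10K packs = 100,000 licensed subscribers.
extend = needs_license_extension(91_000, 100_000)
```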
Maximum_Number_Of_Registered_UE
Measures the maximum number of registered UEs (SAU, Simultaneous Attached Users) using
the VS.MaxNbrOfRegisteredUE (20405_1) 3GPP counter.
SYSTEM
MME
TYPE
License Resource
DESCRIPTION
This is an MME 3GPP counter that counts the maximum number of UEs that are in the EMM-
Registered state in the MME during the current PM reporting interval. The number of
registered UEs is counted periodically during the PM reporting interval. The maximum count
in any sampling point becomes the current maximum value, so the maximum number of
registered UEs is determined across all the sampling intervals in the reporting interval.
RECOMMENDED THRESHOLD
There is no Critical Performance Indicator (CPI) associated with the
VS.MaxNbrOfRegisteredUE counter. The recommendation is to track the counter against the
known number of licenses and purchase an additional 10K-pack software RTU when the
counter reaches 90% of the number of licenses.
MEASUREMENT
The count is unique to an individual MAF.
ACTION
In WM8.0.0, a single MAF pair can support up to 500,000 registered users and a fully
configured MME can support up to 5,000,000 registered users per MME. Several 10K-pack
licenses may be required to support the expected number of users.
In order to avoid exceeding the maximum number of users supported by the available
licenses, the VS.MaxNbrOfRegisteredUE counter should be monitored. When the number of
SAU reaches 90% of the available licenses, an additional 10K-pack software RTU should be
purchased.
COMMENTS
If the number of SAU reaches the number of available licenses, all additional attaches would
be denied. Once a user detaches, a license would become available for the next attach
attempt.
EXAMPLE
Not Available.
NPO INDICATORS
The MME data structure allocates memory resources for every UE registered to the MAF. In WM8.0.0,
the MAF on each Service Board pair is able to accommodate 500,000 registered users. Once this
limit is reached, a new registration would cause the oldest registered user to be dropped.
The MIF distributes new UE Attaches among the MAFs, round robin at first but then based on overall
MAF load and number of registered users in each MAF’s VLR. When a given MAF reaches its limit of
500,000 registered users, it will delete the oldest one when a new attach is received.
The distribution function on the MIF balances out the load on each MAF. There is no attempt to
coordinate among the MAFs. So, if all MAFs were at 500,000, each new attach (which would be
distributed among the MAFs) would cause an old subscriber to be deleted and the new attach would
proceed.
4.4.2.1 COUNTERS AND INDICATORS
The MME supports several observation counters to track the average and maximum numbers of UEs
in different states during a PM reporting interval. The counters include Average and MAX number of
registered UE, Average and MAX number of idle UE, Average and MAX number of connected UE, and
percent of maximum registered UE capacity.
VS.UECapacityUsage (21301) is an MME 3GPP counter that counts the percent of the maximum
Registered UE capacity achieved on a MAF during the current PM reporting interval. The counter
is pegged whenever the maximum number of registered UEs on a MAF is determined during a
reporting interval. This is a per-MAF count. The NPO tool uses this 3GPP counter to derive the
associated NPO indicator: VS_UECapacityUsage (LC21301).
Additionally, NPO calculates 2 indicators related to UE capacity:
In WM8.0.0, the MME can contain up to 10 Service Board pairs, where the MAF software running on
each pair is limited to 500,000 registered users. A fully configured MME can accommodate up to
5,000,000 registered users per MME.
In order to avoid exceeding the maximum number of registered users, the VS_UECapacityUsage
counter should be monitored and a capacity extension decision should be made once the counter
reaches the Minor alarm threshold.
Capacity extension actions include adding Service Board pairs until the 10 pair limit is reached
and/or adding an additional MME to the network.
UE_Capacity_Usage
Measures the percent of maximum registered UE capacity using the VS.UECapacityUsage
(21301) 3GPP counter.
SYSTEM
MME
TYPE
Memory Resource
DESCRIPTION
This is an MME 3GPP counter that counts the percent of the maximum Registered UE
capacity achieved on a MAF during the current PM reporting interval. The counter is pegged
whenever the maximum number of registered UEs on a MAF is determined during a
reporting interval. This is a per-MAF count.
RECOMMENDED THRESHOLD
A Critical Performance Indicator (CPI) is associated with the VS.UECapacityUsage counter.
Thresholds are defined for Critical, Major and Minor alarms. Default values for these
indicators are:
99% capacity = Critical
95% capacity = Major
90% capacity = Minor
The valid range for these thresholds is 1% to 99%, in increments of 1%. These thresholds can
be updated using the SAM Provisioning GUI.
A trap is sent to the MI/EMS when any of the provisioned thresholds is crossed in either
direction during a reporting interval. That is, a trap is sent to indicate an alarm severity of
Minor, Major, Critical, or Cleared (Normal) to describe the threshold that was crossed during
the reporting interval.
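The threshold-to-severity mapping can be sketched with the default CPI values (the function name is illustrative; in practice the MME itself emits the traps on threshold crossings):

```python
def capacity_alarm(usage_pct, critical=99, major=95, minor=90):
    """Map a VS.UECapacityUsage percentage to the reported alarm severity,
    using the default Critical/Major/Minor thresholds described above."""
    if usage_pct >= critical:
        return "Critical"
    if usage_pct >= major:
        return "Major"
    if usage_pct >= minor:
        return "Minor"
    return "Cleared"
```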
MEASUREMENT
The count is unique to an individual MAF.
ACTION
In WM8.0.0, the MME can contain up to 10 Service Board pairs, where the MAF software
running on each pair is limited to 500,000 registered users. A fully configured MME can
accommodate up to 5,000,000 registered users per MME.
In order to avoid exceeding the maximum number of registered users, the
VS_UECapacityUsage counter should be monitored. When the capacity usage exceeds the
Minor alarm threshold, a Service Board pair should be added to the MME. If all Service Board
pairs are already populated, an additional MME should be added to the network.
COMMENTS
Alcatel-Lucent - Proprietary - Use pursuant to applicable agreements
The MME data structure allocates memory resources for every UE registered to the MAF. In
WM8.0.0, the MAF on each Service Board pair is able to accommodate 500,000 registered
users. Once this limit is reached, a new registration would cause the oldest registered user to
be dropped.
EXAMPLE
Not Available.
NPO INDICATORS
VS_UECapacityUsage (LC21301): the NPO equivalent of VS.UECapacityUsage (21301)
VS_UE_CapacityUsage_OnMaxAttachedUE_rate_max (L21301_CI): calculates the maximum of
the per-MAF maximum values over all MAFs, whatever the periodicity.
VS_UE_CapacityUsage_OnMaxAttachedUE_rate_avg (L21301_40_CI): calculates the
average value over all MAFs, whatever the periodicity.
The MAF CPU usage is the limiting factor in determining MME message processing throughput. In
WM8.0.0, the MAF on each Service Board pair is able to support at least 40K external
messages/second without pushing the CPU into overload. Once the CPU overload limit is reached,
all additional service requests will be ignored/rejected. The total message throughput for the MME
is limited to 305K msg/sec.
The MME supports counters and monitoring for CPU usage and for messaging between the MIF and
MAFs.
4.4.3.1 COUNTERS AND INDICATORS
The MME supports several observation counters related to MAF messaging in WM8.0.0. The MME
counters include the following:
VS.TotalMsgsSentToMAF (24105) indicates the total number of messages sent to the MAF from
the MIF. Each MAF reports a separate count. The NPO tool uses this 3GPP counter to derive the
associated NPO indicator: VS_MIF_MAF_total_msg_sent (L24105) is the equivalent of the 3GPP
counter.
VS.TotalMsgsRcvdFromMAF (24107) indicates the total number of messages received from the
MAF. Each MAF reports a separate count. The NPO tool uses this 3GPP counter to derive the
associated NPO indicator: VS_MIF_MAF_total_msg_rcvd (L24107) is the equivalent of the 3GPP
counter.
Each of the above counters is incremented for each Application message sent/received to/from the
MAF.
The MME supports several observation counters related to CPU usage in WM8.0.0. The MME counters
include the following:
VS.aveCpuUsage (40001_0) indicates the average CPU utilization in the current interval. The
NPO tool uses this 3GPP counter to derive the associated NPO indicator:
VS_CPU_usage_PerHost_avg_rate (L40001_0_31_CI). The NPO indicator is the equivalent of
the 3GPP counter expressed in percentage.
VS.avePerCoreCpuUsage (40002_0) indicates the average CPU utilization for a Core processor
on a board. The NPO tool uses this 3GPP counter to derive the associated NPO indicator:
VS_CPU_usage_PerCore_avg_rate (L40002_0_31_CI). The NPO indicator is the equivalent of
the 3GPP counter expressed in percentage.
VS.aveBaseCpuUsage (40003_0) indicates the average CPU utilization for a set of processes
that contribute to CPU Overload calculations. The NPO tool uses this 3GPP counter to derive the
associated NPO indicators:
- VS_CPU_usage_SetOfProcessOnCPUoverload_avg_MAF_rate (L40003_0_MAF_31_CI) uses
the 3GPP counter to calculate average CPU usage on a per-MAF basis, expressed in
percentage.
- VS_CPU_usage_SetOfProcessOnCPUoverload_avg_MIF_rate (L40003_0_MIF_31_CI) uses
the 3GPP counter to calculate average CPU usage on a per-MIF basis, expressed in
percentage.
VS.peakCpuUsage (40001_1) indicates the peak CPU utilization in the current interval. The NPO
tool uses this 3GPP counter to derive the associated NPO indicator:
VS_CPU_usage_PerHost_max_rate (L40001_1_31_CI). The NPO indicator is the equivalent of the 3GPP counter
expressed in percentage.
VS.peakPerCoreCpuUsage (40002_1) indicates the peak per-Core processing utilization in the
current interval. The NPO tool uses this 3GPP counter to derive the associated NPO indicator:
VS_CPU_usage_PerCore_max_rate_Savg_Tmax (L40002_1_41_CI). The NPO indicator uses
the 3GPP counter to calculate the peak per-core processing utilization in the current interval
averaged on all cores on all MAFs in the system, expressed in percentage.
VS.peakBaseCpuUsage (40003_1) indicates the peak CPU utilization for a set of processes that
contribute to CPU Overload calculations. The NPO tool uses this 3GPP counter to derive the
associated NPO indicators:
- VS_CPU_usage_SetOfProcessOnCPUoverload_max_MAF_rate (L40003_1_MAF_31_CI)
uses the 3GPP counter to calculate the peak CPU usage on a per-MAF basis, expressed in
percentage.
- VS_CPU_usage_SetOfProcessOnCPUoverload_max_MIF_rate (L40003_1_MIF_31_CI) uses
the 3GPP counter to calculate the peak CPU usage on a per-MIF basis, expressed in
percentage.
Each of the above counters is pegged every 10 seconds over the reporting interval.
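As a sketch of how the 10-second pegging described above rolls up into the average and peak counters (e.g. VS.aveCpuUsage and VS.peakCpuUsage), assuming the samples are already collected; the function name is illustrative, not product code:

```python
# Illustrative roll-up of per-10-second CPU utilisation samples into the
# average and peak counters described above. The sampling source is assumed;
# only the aggregation mirrors the documented behaviour.

def aggregate_cpu_samples(samples: list[float]) -> dict:
    """samples: 10-second CPU utilisation readings for one reporting interval."""
    return {
        "VS.aveCpuUsage": sum(samples) / len(samples),
        "VS.peakCpuUsage": max(samples),
    }
```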
The MME supports several counters associated with the number of messages that are rejected or
dropped when the MME attempts to mitigate an overload condition. The MME counters are pegged
for a Major or Critical overload condition on a MAF service member, and also for the number of messages
that are dropped per interface type when an MME attempts to mitigate a critical overload condition
on a MIF service member.
Equivalent NPO indicators are provided for each MME counter. These indicators can be used to
analyze message throughput and adjust MME configuration, if necessary. Additional information
regarding NPO indicators can be found in Customer Document [C19]:
VS.NbrDetachDropped (22001_0) indicates the number of Detach Request messages that are
dropped during MME overload. A Detach Request cannot be rejected. The counter is pegged
when the MME drops a Detach Request message (application message) in an attempt to mitigate
a Major or Critical overload condition. The NPO tool uses this 3GPP counter to derive the
associated NPO indicator: VS_UE_detach_InitiatedByUE_drop_MMEOverload (L22001_0).
VS.NbrS11msgsDropped (22001_4) indicates the number of messages that are dropped per S11
interface type during MME overload. The counter is pegged when the MME drops messages
(application messages) on the S11 interface in an attempt to mitigate a Major or Critical
overload condition or if the MIF exceeds its engineered message capacity limit. The NPO tool
uses this 3GPP counter to derive the associated NPO indicator:
VS_S11_GTPc_AppMsg_drop_MMEOverload (L22001_4).
VS.NbrS1msgsDropped (22001_6) indicates the number of messages that are dropped per S1-
MME interface type during MME overload. The counter is pegged when the MME drops messages
(application messages) on the S1-MME interface in an attempt to mitigate a Major or Critical
overload condition or if the MIF exceeds its engineered message capacity limit. The NPO tool
uses this 3GPP counter to derive the associated NPO indicator:
VS_S1MME_GTPc_AppMsg_drop_MMEOverload (L22001_6).
VS.NbrS6admsgsDropped (32001) indicates the number of messages that are dropped per S6a
and/or S6d interface during an MME and/or SGSN overload. The counter is pegged when the MME
and/or SGSN drops messages on the S6a/S6d interface in an attempt to mitigate a Critical
overload condition or if the MIF exceeds its engineered capacity limit. The NPO tool uses this
3GPP counter to derive the associated NPO indicator:
VS_HSS_S6aS6d_GTPc_AppMsg_drop_MMEOverload (L32001)
VS.NbrX1_1msgsDropped (22001_10) indicates the number of X1_1 CALEA messages that are
dropped during MME overload. The counter is pegged whenever an X1_1 (CALEA) application
message is dropped during MME overload. The NPO tool uses this 3GPP counter to derive the
associated NPO indicator: VS_CALEA_X1_1_msgs_drop_MMEOverload (L22001_10).
VS.NbrX2msgsDropped (22001_11) indicates the number of X2 CALEA messages that are
dropped during MME overload. This counter is pegged whenever an X2 (CALEA) application
message is dropped during MME overload. The NPO tool uses this 3GPP counter to derive the
associated NPO indicator: VS_CALEA_X2_msgs_drop_MMEOverload (L22001_11).
VS.NbrSmmsgsDropped (22001_22) indicates the number of messages that are dropped per Sm
interface type during MME overload. The counter is pegged when the MME drops messages on
the Sm interface in an attempt to mitigate a Major or Critical overload condition or if the MIF
exceeds its engineered message capacity limit. The NPO tool uses this 3GPP counter to derive
the associated NPO indicator: VS_Sm_GTPc_AppMsg_drop_MMEOverload (L22001_22).
VS.NbrM3msgsDropped (22001_23) indicates the number of messages that are dropped per M3
interface type during MME overload. The counter is pegged when the MME drops messages on
the M3 interface in an attempt to mitigate a Major or Critical overload condition or if the MIF
exceeds its engineered message capacity limit. The NPO tool uses this 3GPP counter to derive
the associated NPO indicator: VS_M3_GTPc_AppMsg_drop_MMEOverload (L22001_23).
VS.NbrS1ReleaseMsgsDropped (22002_16) indicates the number of times a UE Context Release
Request message is dropped at the MME because a MAF is in Overload. The counter is pegged
whenever the MME is in an Overload state and a UE Context Release Request message is dropped
in an attempt to mitigate the overload condition. The NPO tool uses this 3GPP counter to derive
the associated NPO indicator: VS_UE_ctxt_rel_drop_MMEOverload (L22002_16).
VS.NbrAttachMsgsRejected (22002_0) indicates the number of times an Attach Request
message is rejected at the MME because a MAF is in Overload or has exceeded its engineered
message capacity limit. The counter is pegged whenever the MAF rejects an Attach message
because it is in an Overload state or has exceeded its engineered message capacity limits. The
NPO tool uses this 3GPP counter to derive the associated NPO indicator:
VS_UE_attach_all_fail_ReqRejOnMMEOverload (L22002_0).
VS.NbrBearerResourceAllocMsgsRejected (22002_1) indicates the number of Bearer Resource
Allocation messages that are rejected during an MME attempt to mitigate a Major or Critical
overload condition. The counter is pegged whenever the MME is in overload and rejects a Bearer
Resource Allocation message. The NPO tool uses this 3GPP counter to derive the associated NPO
indicator: VS_EB_QCI_all_bearer_rsc_alloc_fail_ReqRejOnMMEOverload (L22002_1).
VS.NbrInterRatHOGnMsgsRejected (22002_10) indicates the number of inter-RAT handover,
inter-RAT RAU and inter-RAT TAU messages received via a Gn interface that are rejected during
MME overload. The counter is pegged whenever an interRAT HO message is received via a Gn
interface and is rejected because the MME is in overload. The NPO tool uses this 3GPP counter
to derive the associated NPO indicator:
VS_Mob_LTE_UtraGeran_Gn_msgs_fail_ReqRejOnMMEOverload (L22002_10).
VS.NbrPDNconnectMsgsRejected (22002_14) indicates the number of times a PDN Connectivity
Request message is rejected at the MME because of overload. The counter is pegged whenever a
PDN Connectivity Request message is rejected during MME overload. The NPO tool uses this
3GPP counter to derive the associated NPO indicator:
VS_PDN_connectivity_all_fail_ReqRejOnMMEOverload (L22002_14).
VS.NbrPDNDisconnectMsgsRejected (22002_15) indicates the number of times a PDN
Disconnect Request message is rejected at the MME because a MAF is in Overload. The counter
is pegged whenever the MME is in an Overload state, and a PDN Disconnect Request message is
rejected in an attempt to mitigate the overload condition. The NPO tool uses this 3GPP counter
to derive the associated NPO indicator: VS_PDN_disconnect_fail_ReqRejOnMMEOverload
(L22002_15).
VS.NbrServiceReqMsgsRejected (22002_17) indicates the number of times a NAS Service
Request message is rejected at the MME because a MAF is in Overload or has exceeded its
engineered message capacity limit. The counter is pegged whenever the MAF rejects a NAS
Service Request message because it is in an Overload state or has exceeded its engineered
message capacity limits. The NPO tool uses this 3GPP counter to derive the associated NPO
indicator: VS_UE_NAS_service_fail_ReqRejOnMMEOverload (L22002_17).
VS.NbrSGsPagingReqMsgsRejected (22002_18) indicates the number of SGs Paging Request
messages that are rejected during MME overload. The counter is pegged when the MME is in
overload, and an SGs Paging Request message is received and rejected. The NPO tool uses this
3GPP counter to derive the associated NPO indicator:
VS_SGsAP_paging_fail_RejOnMMEOverload (L22002_18).
VS.NbrTAUMsgsRejected (22002_19) indicates the number of times a TAU Request message is
rejected at the MME because a MAF is in Overload. The counter is pegged whenever the MME is
in an Overload state, and a TAU Request message is rejected in an attempt to mitigate the
overload condition. The NPO tool uses this 3GPP counter to derive the associated NPO indicator:
VS_UE_TAU_all_fail_ReqRejOnMMEOverload (L22002_19).
VS.NbrBearerResourceModifyMsgsRejected (22002_2) indicates the number of Bearer Resource
Modification messages that are rejected during MME overload. The counter is pegged whenever
a Bearer Resource Modification message is rejected during MME overload. The NPO tool uses this
3GPP counter to derive an associated NPO indicator.
The NPO tool also calculates several indicators that are useful during message throughput analysis.
The MME supports several observation counters related to paging in WM8.0.0. The MME MI counters
(3GPP counters) support equivalent NPO indicators as well as calculated NPO indicators associated
with paging. These counters and indicators can be used to analyze message throughput and adjust
MME configuration, if necessary. Additional information regarding NPO indicators can be found in
Customer Document [C19]:
VS.AttPaging (21101): This is an MME 3GPP counter that counts the total number of Paging
sequences started at the MME in the reporting interval. This counter is pegged when the
AttPaging_FirstAttempt counter is pegged. The NPO tool uses this 3GPP counter to derive the
associated NPO indicator: VS_AttPaging (LC21101).
VS.AttPaging_FirstAttempt (21101_0): This is an MME 3GPP counter that counts the number of
paging attempts using the algorithm provisioned for the first paging attempt. The counter is
pegged when a Downlink Data Notification arrives, and the UE is paged using the algorithm
provisioned for the first attempt. The NPO tool uses this 3GPP counter to derive the associated
NPO indicator: VS_paging_all_req_1stTry (L21101_0).
VS.AttPaging_SecondAttempt (21101_1): This is an MME 3GPP counter that counts the number
of paging attempts using the algorithm provisioned for the second paging attempt. The counter
is pegged when paging messages are sent to the eNB(s) using the second provisioned paging
algorithm. The NPO tool uses this 3GPP counter to derive the associated NPO indicator:
VS_paging_all_req_2ndTry (L21101_1).
VS.AttPaging_ThirdAttempt (21101_2): This is an MME 3GPP counter that counts the number of
paging attempts using the algorithm provisioned for the third paging attempt. The counter is
pegged when paging messages are sent to the eNB(s) using the third provisioned paging
algorithm. The NPO tool uses this 3GPP counter to derive the associated NPO indicator:
VS_paging_all_req_3rdTry (L21101_2).
VS.AttPaging_FourthAttempt (21101_3): This is an MME 3GPP counter that counts the number
of paging attempts using the algorithm provisioned for the fourth paging attempt. The counter
is pegged when paging messages are sent to the eNB(s) using the fourth provisioned paging
algorithm. The NPO tool uses this 3GPP counter to derive the associated NPO indicator:
VS_paging_all_req_4thTry (L21101_3).
VS.NbrPagingFailures_Timeout (21102_2): This is an MME 3GPP counter that counts the number
of times the Paging procedure is started for a UE and all provisioned attempts to locate the UE
have been tried but the paging procedure was unsuccessful in reaching the UE. This counter is
pegged when the Paging procedure is started for a UE and all provisioned attempts at paging the
UE have timed out without locating the UE. The NPO tool uses this 3GPP counter to derive the
associated NPO indicator: VS_NbrPagingFailures_Timeout (LC21102_2).
VS.AttSGsPageCSbySTMSI (20849_0): This is an MME 3GPP counter that counts the number of
SGs Paging Request messages received by the MME from the MSC, where the UE Context is
known at the MME and where the request indicates that the page is for a CS service. This
counter is pegged whenever the MME receives the SGsAP-Paging-Request message from the MSC,
where the UE Context is known at the MME and the message indicates a CS call service. The NPO
tool uses this 3GPP counter to derive the associated NPO indicator: VS_AttSGsPageCSbySTMSI
(LC20849_0).
VS.AttSGsPageIMSIandLAI (20849_1): This is an MME 3GPP counter that counts the number of
SGs Paging Request messages received by the MME from the MSC where the UE context is not
known at the MME and where the request indicates an LAI value for paging purposes. This
counter is pegged whenever the MME receives an SGsAP-Paging-Request from the MSC and the
UE Context is not known and the request contains an LAI value. The NPO tool uses this 3GPP
counter to derive the associated NPO indicator: VS_AttSGsPageIMSIandLAI (LC20849_1).
VS.AttSGsPageIMSIandVLR (20849_2): This is an MME 3GPP counter that counts the number of
SGs Paging Request messages received by the MME from the MSC where the UE Context is not
known at the MME and where the request does not indicate an LAI value. This counter is pegged
whenever the MME receives an SGsAP-Paging-Request from the MSC where the UE Context is not
known at the MME and where the message does not indicate an LAI value for Paging. (The MME
uses a provisioned set of TACs for the MSC/VLR that sends the paging message.) The NPO tool
uses this 3GPP counter to derive the associated NPO indicator: VS_AttSGsPageIMSIandVLR
(LC20849_2).
VS.AttSGsPagePSbySTMSI (20849_3): This is an MME 3GPP counter that counts the number of
SGs Paging Request messages received by the MME from the MSC where the UE Context is known
at the MME and where the SMS service indicator is set in the request message. This counter is
pegged whenever the MME receives an SGsAP-Paging-Request message from the MSC where the
UE Context is known at the MME and where the SMS service indicator is set in the message. The
NPO tool uses this 3GPP counter to derive the associated NPO indicator:
VS_AttSGsPagePSbySTMSI (LC20849_3).
VS.NbrSuccessSGsPageCSbySTMSI (20851_0): This is an MME 3GPP counter that counts the
number of successes that occur in the handling of the SGs Paging Request message received
from the MSC where the UE is IDLE and the UE Context is known at the MME and the CS service
indicator is set in the request message. This counter is pegged whenever the MME processing of
the SGsAP-Paging-Request results in a success for the case where the UE is IDLE and the UE
Context is known at the MME and where the request has the CS service indicator set. The NPO
tool uses this 3GPP counter to derive the associated NPO indicator:
VS_NbrSuccessSGsPageCSbySTMSI (LC20851_0).
VS.NbrSuccessSGsPageIMSIandLAI (20851_1): This is an MME 3GPP counter that counts the
number of successes that occur in the handling of the SGs Paging Request message received
from the MSC where the UE Context is not known at the MME and the LAI value is in the request
message. This counter is pegged whenever the MME processing of the SGsAP-Paging-Request
results in a success for the case where the UE Context is not known at the MME and where the
request contains an LAI value to guide the paging. The NPO tool uses this 3GPP counter to
derive the associated NPO indicator: VS_NbrSuccessSGsPageIMSIandLAI (LC20851_1).
VS.NbrSuccessSGsPageIMSIandVLR (20851_2): This is an MME 3GPP counter that counts the
number of successes that occur in the handling of the SGs Paging Request message received
from the MSC where the UE Context is not known at the MME and the LAI value is not contained
in the request message. This counter is pegged whenever the MME processing of the SGsAP-
Paging-Request results in a success for the case where the UE Context is not known at the MME
and where the request does not contain an LAI value to guide the paging. The NPO tool uses this
3GPP counter to derive the associated NPO indicator: VS_NbrSuccessSGsPageIMSIandVLR
(LC20851_2).
VS.NbrSuccessSGsPagePSbySTMSI (20851_3): This is an MME 3GPP counter that counts the
number of successes that occur in the handling of the SGs Paging Request message received
from the MSC where the UE is IDLE and the UE Context is known at the MME and the SMS service
is indicated in the request message. This counter is pegged whenever the MME processing of the
SGsAP-Paging-Request results in a success for the case where the UE is IDLE, and the UE Context
is known at the MME, and where the SMS service indicator is set in the request message. The
NPO tool uses this 3GPP counter to derive the associated NPO indicator:
VS_NbrSuccessSGsPagePSbySTMSI (LC20851_3).
VS.NbrFailedSGsPagePSbySTMSI_Other (20861_1): This is an MME 3GPP counter that counts the
number of failures that occur in the handling of the SGs Paging Request message received from
the MSC, where the UE is IDLE and the UE Context is known at the MME and the SMS service is
indicated in the request message. This counter is pegged whenever the MME processing of the
SGsAP-Paging-Request results in a failure for the case where the UE is IDLE and the UE Context
is known at the MME and where the SMS service indicator is set in the request message. The NPO
tool uses this 3GPP counter to derive the associated NPO indicator:
VS_NbrFailedSGsPagePSbySTMSI_Other (LC20861_1).
VS.NbrFailedSGsPageIMSIandVLR_Other (20861_2): This is an MME 3GPP counter that counts
the number of failures that occur in the handling of the SGs Paging Request message received
from the MSC where the UE Context is not known at the MME and the LAI value is not contained
in the request message. This counter is pegged whenever the MME processing of the SGsAP-
Paging-Request results in a failure for the case where the UE Context is not known at the MME
and where the request does not contain an LAI value to guide the paging. The NPO tool uses this
3GPP counter to derive the associated NPO indicator: VS_NbrFailedSGsPageIMSIandVLR_Other
(LC20861_2).
The NPO tool uses the above counters to calculate the following indicators:
Total number of eNBs paged for all page attempts: VS_paging_NbOfeNBsPagedOn_AllTries
(L20133_30_CI) = VS_paging_NbOfeNBsPagedOn_1stTry + VS_paging_NbOfeNBsPagedOn_2ndTry +
VS_paging_NbOfeNBsPagedOn_3rdTry + VS_paging_NbOfeNBsPagedOn_4thTry
Rate of paging procedures with UE responses on the first paging request sent:
VS_paging_first_rsp_rate (L20133_1_11_CI) = (VS_paging_all_req_1stTry -
VS_paging_all_fail_1stTryTimeout) / VS_paging_all_req_1stTry
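The two NPO calculations above can be transcribed directly; the input counter values in the sketch below are illustrative:

```python
# Direct transcription of the two NPO indicator formulas given above.
# Input values are illustrative counter readings, not real data.

def enbs_paged_all_tries(try1: int, try2: int, try3: int, try4: int) -> int:
    """VS_paging_NbOfeNBsPagedOn_AllTries (L20133_30_CI)."""
    return try1 + try2 + try3 + try4

def paging_first_rsp_rate(req_1st_try: int, fail_1st_try_timeout: int) -> float:
    """VS_paging_first_rsp_rate (L20133_1_11_CI)."""
    return (req_1st_try - fail_1st_try_timeout) / req_1st_try
```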
The LTE network must be architected such that the CPU capacity on the MAF is not overloaded. In
WM8.0.0, the MME can have up to 10 MAF pairs and external message throughput is expected to be
at least 40K messages/second/MAF, with an individual MME throughput of at least 305K
messages/second.
In order to avoid exceeding the external message throughput limit, the VS_MIF_MAF_total_msg_all
(L2410d_30_CI) indicator (See Section 4.4.3.1.1.) should be monitored to ensure it does not exceed
305,000. The LSS_cpuOverload alarm should be monitored and a capacity extension decision should
be made once any of the Average CPU counters (See below.) reaches the Minor alarm threshold. Peak
CPU counters (See Section 4.4.3.1.2.) should also be monitored, but do not require immediate
action.
Capacity extension actions include adding MAF pairs until the 10 pair limit is reached and/or adding
an additional MME to the network.
MAF_COMMUNICATION_FAILURE_RATE
Detects whether or not the MIF is getting call processing messages from the MAF by
measuring the MAF communication failure rate. The rate is measured on a per-MAF service
basis in the current 5 minute interval.
SYSTEM
MME
TYPE
Memory Resource
DESCRIPTION
This is a calculated alarm based on two MME 3GPP counters:
VS.TotalMsgsRcvdFromMAF (24107): the counter is pegged for each application message
received by MIF from MAF
VS.TotalMsgsSentToMAF (24105): the counter is pegged for each application message
sent by MIF to MAF
The formula for calculating the failure rate is:
100 - (100 * (VS.TotalMsgsRcvdFromMAF / VS.TotalMsgsSentToMAF))
The formula assumes that, for every message sent from the MIF to a MAF, one or more messages
should be returned to the MIF.
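The failure-rate formula above can be computed as follows; the zero-traffic guard is an assumption added here, as the document does not address a zero denominator:

```python
# Direct transcription of the MAF communication failure-rate formula above.
# The zero-denominator guard is an assumption (the document does not cover it).

def maf_comm_failure_rate(rcvd_from_maf: int, sent_to_maf: int) -> float:
    """100 - (100 * VS.TotalMsgsRcvdFromMAF / VS.TotalMsgsSentToMAF)."""
    if sent_to_maf == 0:
        return 0.0  # assumption: no traffic sent means no measurable failure
    return 100 - (100 * rcvd_from_maf / sent_to_maf)
```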
RECOMMENDED THRESHOLD
The LSS_cpiMAFCommunicationFailureRate alarm indicates that the MAF communication
failure rate on a per-MAF service basis has exceeded a threshold in the last 5 minute interval.
The three severity levels indicate the CPI value. Thresholds are defined for Critical, Major and
Minor alarms and are hard-coded into the platform software. The hard-coded values for
these indicators are:
95% capacity = Critical
90% capacity = Major
80% capacity = Minor
An alarm is raised only once for a component with a CPI at a specific severity. The alarm
clears if no severity threshold is met in one of the subsequent intervals. The alarm resource
indicates which MAF service in the MAF pool is associated with the alarm.
MEASUREMENT
This measurement is unique to an individual MAF. The alarm resource indicates which MAF
service in the MAF pool is associated with the alarm.
ACTION
COMMENTS
Ensure that other monitoring mechanisms have not identified and corrected the problem
before switching to the standby MAF.
EXAMPLE
Not Available.
NPO INDICATORS
VS_MIF_MAF_total_msg_rcvd (L24107): the NPO equivalent of
VS.TotalMsgsRcvdFromMAF (24107).
VS_MIF_MAF_total_msg_sent (L24105): the NPO equivalent of
VS.TotalMsgsSentToMAF (24105).
AVERAGE_CPU_USAGE
Measures the average CPU utilization in the current interval using the VS.aveCpuUsage
(40001_0) 3GPP counter.
SYSTEM
MME
TYPE
Memory Resource
DESCRIPTION
This is an MME 3GPP counter that counts the average CPU utilization per host during the
current PM reporting interval. The counter is pegged every 10 seconds over the reporting
interval. This is a per-host count.
RECOMMENDED THRESHOLD
The LSS_cpuOverload alarm is associated with the VS.aveCpuUsage counter. The alarm
indicates that the CPU utilization on a service has exceeded a threshold. The three severity
levels indicate the degree of CPU overload. Thresholds are defined for Critical, Major and
Minor alarms and are hard-coded into the platform software. The hard-coded values for
these indicators are:
95% capacity = Critical
93% capacity = Major
91% capacity = Minor
A trap is sent to the MI/EMS when any of the thresholds is crossed in either direction during
a reporting interval. That is, a trap is sent to indicate an alarm severity of Minor, Major,
Critical, or Cleared (Normal) to describe the threshold that was crossed during the reporting
interval.
MEASUREMENT
This count is reported at the host level.
ACTION
In WM8.0.0, the MME can contain up to 10 Service Board pairs. The external message
throughput of an individual MAF is expected to be at least 40K messages/second, with an
individual MME throughput of at least 305K messages/second.
In order to avoid exceeding the external message throughput limit, the LSS_cpuOverload
alarm should be monitored. When the capacity usage exceeds the Minor alarm threshold, a
Service Board pair should be added to the MME. If all Service Board pairs are already
populated, an additional MME should be added to the network.
COMMENTS
Ensure that the root cause of the alarm is truly due to heavy call traffic by verifying that no
debug or testing tools or other tasks/processes that use an excess of CPU resources are
running.
EXAMPLE
Not Available.
NPO INDICATORS
VS_CPU_usage_PerHost_avg_rate (L40001_0_31_CI): the NPO equivalent of
VS.aveCpuUsage (40001_0) expressed in percentage.
Average_Per-Core_CPU_Usage
Measures the average CPU utilization for a core processor on a board using the
VS.avePerCoreCpuUsage (40002_0) 3GPP counter.
SYSTEM
MME
TYPE
Memory Resource
DESCRIPTION
This is an MME 3GPP counter that counts the average CPU utilization per core processor on a
board during the current PM reporting interval. The counter is pegged every 10 seconds over
the reporting interval. This is a per-core count.
RECOMMENDED THRESHOLD
The LSS_cpuOverload alarm is associated with the VS.avePerCoreCpuUsage counter. The
alarm indicates that the CPU utilization on a service has exceeded a threshold. The three
severity levels indicate the degree of CPU overload. Thresholds are defined for Critical, Major
and Minor alarms and are hard-coded into the platform software. The hard-coded values for
these indicators are:
95% capacity = Critical
93% capacity = Major
91% capacity = Minor
A trap is sent to the MI/EMS when any of the thresholds is crossed in either direction during
a reporting interval. That is, a trap is sent to indicate an alarm severity of Minor, Major,
Critical, or Cleared (Normal) to describe the threshold that was crossed during the reporting
interval.
MEASUREMENT
This count is unique to an individual core processor on a board.
ACTION
In WM8.0.0, the MME can contain up to 10 Service Board pairs. The external message
throughput of an individual MAF is expected to be at least 40K messages/second, with an
individual MME throughput of at least 305K messages/second.
In order to avoid exceeding the external message throughput limit, the LSS_cpuOverload
alarm should be monitored. When the capacity usage exceeds the Minor alarm threshold, a
Service Board pair should be added to the MME. If all Service Board pairs are already
populated, an additional MME should be added to the network.
COMMENTS
Ensure that the root cause of the alarm is truly due to heavy call traffic by verifying that no
debug or testing tools or other tasks/processes that use an excess of CPU resources are
running.
EXAMPLE
Not Available.
NPO INDICATORS
VS_CPU_usage_PerCore_avg_rate (L40002_0_31_CI): the NPO equivalent of
VS.avePerCoreCpuUsage expressed in percentage.
Average_Base_CPU_Usage
Measures the average CPU utilization for a set of processes that contribute to CPU overload
calculations using the VS.aveBaseCpuUsage (40003_0) 3GPP counter.
SYSTEM
MME
TYPE
Memory Resource
DESCRIPTION
This is an MME 3GPP counter that counts the average CPU utilization for a set of processes
that contribute to CPU overload calculations during the current PM reporting interval. The
counter is pegged every 10 seconds over the reporting interval. This is a per-MAF/MIF count.
RECOMMENDED THRESHOLD
The LSS_cpuOverload alarm is associated with the VS.aveBaseCpuUsage counter. The alarm
indicates that the CPU utilization on a service has exceeded a threshold. The three severity
levels indicate the degree of CPU overload. Thresholds are defined for Critical, Major and
Minor alarms and are hard-coded into the platform software. The hard-coded values for
these indicators are:
95% capacity = Critical
93% capacity = Major
91% capacity = Minor
A trap is sent to the MI/EMS when any of the thresholds is crossed in either direction during
a reporting interval. That is, a trap is sent to indicate an alarm severity of Minor, Major,
Critical, or Cleared (Normal) to describe the threshold that was crossed during the reporting
interval.
MEASUREMENT
This count is unique to an individual MAF, MIF, MPH, or MME.
ACTION
In WM8.0.0, the MME can contain up to 10 Service Board pairs. The external message
throughput of an individual MAF is expected to be at least 40K messages/second, with an
individual MME throughput of at least 305K messages/second.
In order to avoid exceeding the external message throughput limit, the LSS_cpuOverload
alarm should be monitored. When the capacity usage exceeds the Minor alarm threshold, a
Service Board pair should be added to the MME. If all Service Board pairs are already
populated, an additional MME should be added to the network.
COMMENTS
Ensure that the root cause of the alarm is genuinely heavy call traffic by verifying that no
debug or test tools, or other tasks/processes that consume excessive CPU resources, are
running.
EXAMPLE
Not available.
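Although no example is given, the threshold logic above can be sketched in Python. This is an illustrative model only, not platform code: the function names and the trap representation are hypothetical, while the 95/93/91% values and the crossing-in-either-direction behaviour come from the text.

```python
# Illustrative model of the LSS_cpuOverload severity thresholds (hard-coded
# values from the text) and of trap emission when a threshold is crossed in
# either direction. Names are hypothetical, not actual platform APIs.
THRESHOLDS = [(95.0, "Critical"), (93.0, "Major"), (91.0, "Minor")]

def severity(cpu_usage_pct):
    """Map a VS.aveBaseCpuUsage sample (percent) to an alarm severity."""
    for limit, level in THRESHOLDS:
        if cpu_usage_pct >= limit:
            return level
    return "Cleared"

def traps_for_interval(samples):
    """Return one trap per severity change between successive 10-second
    samples, i.e. one trap per threshold crossed in either direction."""
    traps, previous = [], None
    for sample in samples:
        level = severity(sample)
        if previous is not None and level != previous:
            traps.append(level)
        previous = level
    return traps
```

For instance, the sample sequence 90%, 92%, 96%, 90% would raise Minor, then Critical, then Cleared.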
NPO INDICATORS
VS_CPU_usage_SetOfProcessOnCPUoverload_avg_MAF_rate (L40003_0_MAF_31_CI):
calculates the average CPU usage on a per-MAF basis, expressed in percentage.
VS_CPU_usage_SetOfProcessOnCPUoverload_avg_MIF_rate (L40003_0_MIF_31_CI):
calculates the average CPU usage on a per-MIF basis, expressed in percentage.
VS_CPU_usage_SetOfProcessOnCPUoverload_avg_HOST_rate (L40003_0_HOST_31_CI):
calculates the average CPU usage on a per-host basis, expressed in percentage.
There is no alarm associated with these counters. When troubleshooting requires knowledge of MAF
messaging, these counter values must be investigated manually using MI/EMS.
When the indicators detailed in Section 4.4.3.1.4 are supported by NPO, they can be
monitored.
These thresholds are for indicators only. They should not be alarmed as they can be affected by UE
behavior.
The observational period for minor and major thresholds should be 15 minutes. These values should
not be exceeded at any time during the week, with the exception of maintenance intervals.
The long-term average is the expected value when the failure rate is averaged over a 24-hour
period. It varies over the hours of the day depending on the mobility of the UEs.
The scope of this section is to detail the eUTRAN interface monitoring and measurement
capabilities available at eNB level, used to ensure that the eUTRAN interfaces and the eUTRAN
transport backhaul are properly dimensioned, i.e. not experiencing overload, so that end-to-end
quality of service objectives are met.
LTE eUTRAN interface monitoring relies on throughput measurements for the S1, X2, M1, M3 and
OAM interfaces. Throughput is measured at both the physical and logical levels to enable as
accurate an assessment of eUTRAN interface load as possible.
where:
The engineering margin is recommended to be 70%-80%
The available backhaul throughput is to be retrieved from the project or from the HLD
where:
The engineering margin is recommended to be 75%-86%
S1-U BW = approximately 93% of the backhaul BW, based on field network data
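As a hedged numeric sketch of the dimensioning rules above (the helper names and the example figures are illustrative; the engineering margins and the 93% S1-U share are the values quoted above):

```python
def max_usable_throughput(backhaul_bw_mbps, eng_margin=0.80):
    """Usable throughput = available backhaul bandwidth x engineering margin
    (the recommended margin above is 70%-80%)."""
    return backhaul_bw_mbps * eng_margin

def s1u_bandwidth(backhaul_bw_mbps, s1u_share=0.93):
    """S1-U bandwidth approximated as ~93% of the backhaul bandwidth, per
    the field-network figure above."""
    return backhaul_bw_mbps * s1u_share

# Example: for a 100 Mbit/s backhaul, an 80% engineering margin yields
# 80 Mbit/s of usable throughput; the S1-U share is about 93 Mbit/s.
```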
The eNB features a set of counters that enable monitoring of the eUTRAN physical and logical interfaces.
This section provides the list of available critical trigger counters at eNB interface level.
Interface load and capacity are assessed based on the following criteria:
Throughput
Link Utilization
Vlan Throughput
S1_THROUGHPUT_RECEIVED_PerENB
This trigger provides the average throughput received on the S1 interfaces of the eNodeB
equipment (including Ethernet headers).
SYSTEM
eNB
TYPE
Load
DESCRIPTION
This trigger provides the average throughput received on the S1 interfaces of the eNodeB
equipment (including Ethernet headers). Note that the Ethernet overheads include the MAC
header and the FCS trailer, but exclude the 20 bytes for the Preamble, Start Frame
Delimiter and Inter-Frame Gap (IFG). This has to be taken into account when assessing the
interface load.
RECOMMENDED THRESHOLD
Should not exceed the backhaul throughput; refer to the process above.
MEASUREMENT
S1_Throughput_received_perENB = VS.S1DLThroughput.Cum / VS.S1DLThroughput.NbEvt
ACTION
Please refer to the process above.
COMMENTS
Applicable for both FDD and TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_traf_eNB_S1_thpt_rcvd_avg_PerENB (L13109_30_CI): This indicator provides the
average throughput received on the S1 interfaces of the eNodeB equipment (including
Ethernet headers). NPO indicator equivalent to trigger S1_Throughput_received_perENB.
VS_traf_eNB_S1_thpt_rcvd_cum (L13109_Cum): This indicator provides the received
traffic volume in kilobits: it is the sum of each received-throughput sample multiplied
by the duration of the sampling period, on the S1 interfaces of the eNodeB equipment
(including Ethernet headers). It is used to compute the average indicator.
Based on 3GPP counter: VS.S1DLThroughput.Cum
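The MEASUREMENT formula above is a simple ratio of two counters, sketched below. The on-wire helper is an assumption for illustration only: it adds back the 20 bytes per frame that the counter excludes, using an average frame size the operator must supply.

```python
PREAMBLE_SFD_IFG_BYTES = 20  # excluded from the counter, per the DESCRIPTION

def average_throughput(cum, nb_evt):
    """S1_Throughput_received_perENB = VS.S1DLThroughput.Cum / VS.S1DLThroughput.NbEvt."""
    return cum / nb_evt if nb_evt else 0.0

def on_wire_estimate(measured_kbps, avg_frame_size_bytes):
    """Rough on-wire load: scale the measured throughput up by the 20 bytes
    per frame (Preamble, SFD, IFG) that the counter excludes.
    avg_frame_size_bytes is an assumed operator input, not a 3GPP counter."""
    return measured_kbps * (avg_frame_size_bytes + PREAMBLE_SFD_IFG_BYTES) / avg_frame_size_bytes
```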
S1_Throughput_Sent_PerENB
This trigger provides the average throughput sent on the S1 interfaces of the eNodeB equipment
(including Ethernet headers).
SYSTEM
eNB
TYPE
Load.
DESCRIPTION
This trigger provides the average throughput sent on the S1 interfaces of the eNB equipment
(including Ethernet headers). Note that the Ethernet overheads include the MAC header
and the FCS trailer, but exclude the 20 bytes for the Preamble, Start Frame Delimiter and
Inter-Frame Gap (IFG). This has to be taken into account when assessing the interface
load.
RECOMMENDED THRESHOLD
Should not exceed the backhaul throughput; refer to the process above.
MEASUREMENT
S1_Throughput_Sent_PerENB= VS.S1ULThroughput.Cum/ VS.S1ULThroughput.NbEvt
ACTION
Please refer to the process above.
COMMENTS
Applicable for both FDD and TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_traf_eNB_S1_thpt_sent_avg_PerENB (L13111_30_CI): This indicator provides the
average throughput sent on the S1 interfaces of the eNodeB equipment (including
Ethernet headers). NPO indicator equivalent to trigger S1_Throughput_Sent_PerENB.
VS_traf_eNB_S1_thpt_sent_max (L13111_Max): This indicator provides the maximum
throughput sent on the S1 interfaces of the eNodeB equipment (including Ethernet headers).
Based on 3GPP counter: VS.S1ULThroughput.Max
Remark: "maximum" applies when the spatial and temporal aggregations are set to MAX.
VS_traf_eNB_S1_thpt_sent_min (L13111_Min): This indicator provides the minimum
throughput sent on the S1 interfaces of the eNodeB equipment (including Ethernet
headers).
X2_THROUGHPUT_RECEIVED_PerENB
This trigger provides the average throughput received on the X2 interfaces of the eNodeB equipment
(including Ethernet headers).
SYSTEM
eNB
TYPE
Load.
DESCRIPTION
This trigger provides the average throughput received on the X2 interfaces of the eNodeB
equipment (including Ethernet headers). Note that the Ethernet overheads include the MAC
header and the FCS trailer, but exclude the 20 bytes for the Preamble, Start Frame
Delimiter and Inter-Frame Gap (IFG). This has to be taken into account when assessing the
interface load.
RECOMMENDED THRESHOLD
No specific limit value directly linked to this X2 received throughput.
MEASUREMENT
X2_Throughput_received_PerENB = VS.X2ReceivedThroughput.Cum / VS.X2ReceivedThroughput.NbEvt
ACTION
Please refer to the process.
COMMENTS
Applicable for both FDD and TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_traf_eNB_X2_thpt_rcvd_avg_PerENB (L12909_30_CI): This indicator provides the
average throughput received on the X2 interfaces of the eNodeB equipment (including
Ethernet headers). NPO indicator equivalent to trigger X2_Throughput_received_PerENB.
VS_traf_eNB_X2_thpt_rcvd_max (L12909_Max): This indicator provides the maximum
throughput received on the X2 interfaces of the eNB equipment (including Ethernet
headers). Based on 3GPP counter: VS.X2ReceivedThroughput.Max
X2_Throughput_Sent_PerENB
This trigger provides the average throughput sent on the X2 interfaces of the eNodeB equipment
(including Ethernet headers).
SYSTEM
eNB
TYPE
Load
DESCRIPTION
This trigger provides the average throughput sent on the X2 interfaces of the eNodeB
equipment (including Ethernet headers). Note that the Ethernet overheads include the MAC
header and the FCS trailer, but exclude the 20 bytes for the Preamble, Start Frame
Delimiter and Inter-Frame Gap (IFG). This has to be taken into account when assessing the
interface load.
RECOMMENDED THRESHOLD
No specific limit value directly linked to this X2 sent throughput.
MEASUREMENT
X2_Throughput_sent_PerENB = VS.X2SentThroughput.Cum / VS.X2SentThroughput.NbEvt
ACTION
Please refer to the process.
COMMENTS
Applicable for both FDD and TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_traf_eNB_X2_thpt_sent_avg_PerENB (L12911_30_CI): This indicator provides the
average throughput sent on the X2 interfaces of the eNodeB equipment (including
Ethernet headers). NPO indicator equivalent to the trigger X2_Throughput_sent_PerENB.
VS_traf_eNB_X2_thpt_sent_max (L12911_Max): This indicator provides the maximum
throughput sent on the X2 interfaces of the eNB equipment (including Ethernet headers).
Based on 3GPP counter: VS.X2SentThroughput.Max
VS_traf_eNB_X2_thpt_sent_min (L12911_Min): This indicator provides the minimum
throughput sent on the X2 interfaces of the eNodeB equipment (including Ethernet
headers). Based on 3GPP counter: VS.X2SentThroughput.Min
M1_THROUGHPUT_RECEIVED_PerENB
This trigger provides the average eMBMS service bit rate per eNodeB.
SYSTEM
eNB (FDD)
TYPE
Load, unit is Kbit/s
DESCRIPTION
This trigger provides the average eMBMS service bit rate per eNodeB.
RECOMMENDED THRESHOLD
No specific limit value directly linked to this indicator.
MEASUREMENT
M1_Throughput_received_PerENB =
(VS.M1GtpPayloadKbytesReceivedPerENodeB * 1024/1000 * 8) / (1000 * (PERIOD + (VS.IfInOctets * 0)))
ACTION
This trigger can be used to elaborate the received throughput at the M1 interface.
COMMENTS
Applicable for FDD only
EXAMPLE
Not available.
NPO INDICATORS
VS_M1_GTP_payload_thpt_rcvd_PerENB (L14309_50_CI): This indicator provides the
average eMBMS service bit rate per eNodeB. NPO indicator equivalent to trigger
M1_Throughput_received_PerENB.
VS_M1_GTP_payload_kbyte_rcvd_PerENB (L14309_CI): This indicator provides, for each
MBMS bearer service, the volume of M1 GTP payload received by the eNodeB, expressed
in kBytes (1000 Bytes). The counter does not include the GTP header. Based
on 3GPP counter: VS.M1GtpPayloadKbytesReceivedPerENodeB
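Transcribed literally into Python, the MEASUREMENT formula above looks as follows. Note that the VS.IfInOctets term is multiplied by zero, so it contributes nothing to the numeric result. The function name and arguments are illustrative.

```python
def m1_throughput_received_kbps(gtp_payload_kbytes, period_s, if_in_octets=0):
    """M1_Throughput_received_PerENB, per the formula in the text:
    (VS.M1GtpPayloadKbytesReceivedPerENodeB * 1024/1000 * 8)
    / (1000 * (PERIOD + VS.IfInOctets * 0))."""
    return (gtp_payload_kbytes * 1024 / 1000 * 8) / (1000 * (period_s + if_in_octets * 0))
```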
OAM_THROUGHPUT_RECEIVED_PerENB
This trigger provides the average throughput received on the OAM VLAN of the eNodeB
excluding the Ethernet header.
SYSTEM
eNB
TYPE
Load
DESCRIPTION
This trigger provides the average throughput received on the OAM VLAN of the eNodeB,
excluding the Ethernet header. The size of the Ethernet header with VLAN tag is 22 bytes
(VLAN tag = 4 bytes).
RECOMMENDED THRESHOLD
No specific limit value directly linked to this indicator.
MEASUREMENT
OAM_Throughput_received_PerENB = (VS.OAMInOctets * 8) / (PERIOD + (VS.S1SctpInOctets * 0))
ACTION
This trigger can be used to elaborate the OAM received throughput.
COMMENTS
Applicable for both FDD and TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_traf_eNB_OAM_thpt_rcvd_avg_PerENB (L13314_40_CI): This indicator provides the
average throughput received on the OAM VLAN of the eNodeB, excluding the Ethernet
header. The size of the Ethernet header with VLAN tag is 22 bytes (VLAN tag = 4 bytes).
NPO indicator equivalent to trigger OAM_Throughput_received_PerENB.
VS_traf_eNB_OAM_kbyte_rcvd (L13314): This indicator provides the total number of
kilobytes (1000 Bytes) received on the VLAN carrying the OAM traffic of the eNodeB
(the length of the Ethernet frame includes the Ethernet header).
Based on 3GPP counter: VS.OAMInOctets
OAM_Throughput_Sent_PerENB
This trigger provides the average throughput sent on the OAM VLAN of the eNodeB excluding
the Ethernet header.
SYSTEM
eNB
TYPE
Load
DESCRIPTION
This trigger provides the average throughput sent on the OAM VLAN of the eNodeB, excluding
the Ethernet header. The size of the Ethernet header with VLAN tag is 22 bytes (VLAN tag = 4 bytes).
RECOMMENDED THRESHOLD
No specific limit value directly linked to this indicator.
MEASUREMENT
OAM_Throughput_sent_PerENB = (VS.OAMOutOctets * 8) / (PERIOD + (VS.S1SctpInOctets * 0))
ACTION
This trigger can be used to elaborate the OAM sent throughput.
COMMENTS
Applicable for both FDD and TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_traf_eNB_OAM_thpt_sent_avg_PerENB (L13316_40_CI): This indicator provides the
average throughput sent on the OAM VLAN of the eNodeB, excluding the Ethernet
header. The size of the Ethernet header with VLAN tag is 22 bytes (VLAN tag = 4 bytes).
NPO indicator equivalent to trigger OAM_Throughput_sent_PerENB.
VS_traf_eNB_OAM_kbyte_sent (L13316): This indicator provides the total number of
kilobytes (1000 Bytes) sent on the VLAN carrying the OAM traffic of the eNodeB (the
length of the Ethernet frame includes the Ethernet header). Based on 3GPP counter:
VS.OAMOutOctets
ENB_AVERAGE_RECEIVED_LINK_USAGE
This trigger provides the average percentage value on GEthernet link for the incoming traffic.
SYSTEM
eNB
TYPE
Load
DESCRIPTION
This trigger provides the eNB incoming Gigabit Ethernet interface throughput load
(as a percentage of 1 Gigabit Ethernet). It is the ratio of the incoming throughput to
1 Gbit/s, expressed as a percentage of the maximum throughput.
RECOMMENDED THRESHOLD
Not to exceed the backhaul throughput
eNB interface maximum load: In a range of 70%- 80%
MEASUREMENT
eNB_Average_Received_link_usage
=(VS.IfInLinkUtilisation.Cum/VS.IfInLinkUtilisation.NbEvt)/100
ACTION
When the eNB interface load reaches the maximum load, major congestion is likely, as
the eNB is receiving more data than it can sustain while meeting quality of service targets.
When such an event happens, a critical alarm must be reported right away.
COMMENTS
Applicable for both FDD and TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_traf_eNB_if_link_usage_rcvd_rate_avg (L13312_41_CI): This indicator provides the
average load percentage on the Gigabit Ethernet link for the incoming traffic. NPO
indicator equivalent to trigger eNB_average_received_link_usage.
VS_traf_eNB_if_link_usage_rcvd_avg (L13312_40_CI): This indicator provides the
average load percentage value on GEthernet link for the incoming traffic.
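A minimal sketch of the check described above, assuming the 80% upper bound of the recommended maximum-load range (the function names are illustrative):

```python
MAX_LOAD = 0.80  # upper end of the 70%-80% recommended eNB interface maximum load

def received_link_usage(cum, nb_evt):
    """eNB_Average_Received_link_usage =
    (VS.IfInLinkUtilisation.Cum / VS.IfInLinkUtilisation.NbEvt) / 100."""
    return (cum / nb_evt) / 100

def needs_critical_alarm(cum, nb_evt, max_load=MAX_LOAD):
    """True when the average received link usage exceeds the maximum load,
    i.e. the congestion condition for which the text requires a critical
    alarm to be reported right away."""
    return received_link_usage(cum, nb_evt) > max_load
```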
eNB_Average_Sent_link_usage
This trigger provides the average percentage value on GEthernet link for the outgoing traffic.
SYSTEM
eNB
TYPE
Load
DESCRIPTION
This trigger provides the eNB outgoing Gigabit Ethernet interface throughput load
(as a percentage of 1 Gigabit Ethernet). It is the ratio of the outgoing throughput to
1 Gbit/s, expressed as a percentage of the maximum throughput.
RECOMMENDED THRESHOLD
Not to exceed the backhaul throughput
eNB interface maximum load: In a range of 70%- 80%
MEASUREMENT
eNB_Average_Sent_link_usage
=(VS.IfOutLinkUtilisation.Cum/VS.IfOutLinkUtilisation.NbEvt)/100
ACTION
When the eNB interface load reaches the maximum load, major congestion is likely; a critical
alarm must be reported right away, as the eNB outgoing Gigabit Ethernet interface load is not
supposed to cross this maximum value.
COMMENTS
Applicable for both FDD and TDD
EXAMPLE
Not available.
NPO INDICATORS
ENB_VLAN_AVERAGE_SENT_THROUGHPUT
This trigger provides the "average" uplink throughput on the VLAN interface (including
Ethernet header and CRC).
SYSTEM
eNB
TYPE
Load
DESCRIPTION
This trigger provides the uplink throughput on the VLAN interface (including Ethernet header
and CRC). Note that the Ethernet overheads include the MAC header and the FCS trailer,
but exclude the 20 bytes for the Preamble, Start Frame Delimiter and Inter-Frame Gap (IFG).
This has to be taken into account when assessing the interface load.
In the LR14.1L release, the eNodeB can support up to 4 VLANs. A VS.VlanULThroughput
counter is assigned per VLAN. A "no VLAN" configuration is achieved by setting VlanId=4096;
in that case the throughput indicator reports the aggregated throughput of the eNB interface.
RECOMMENDED THRESHOLD
No specific limit value directly linked to Vlan sent throughput.
MEASUREMENT
eNB_VLAN_Average_SENT_Throughput
=VS.VlanULThroughput.Cum/VS.VlanULThroughput.NbEvt
ACTION
Not available.
COMMENTS
Applicable for both FDD and TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_traf_eNB_VLAN_thpt_sent_avg_PerENB (L13332_30_CI): This indicator provides the
"average" uplink throughput on the VLAN interface (including Ethernet header and CRC).
Based on 3GPP Counter: VS.VlanULThroughput.Cum. NPO indicator equivalent to trigger
ENB_VLAN_AVERAGE_SENT_THROUGHPUT.
VS_traf_eNB_VLAN_thpt_sent_max (L13332_Max): This indicator provides the uplink
throughput on the VLAN interface (including Ethernet header and CRC).
eNB_VLAN_Average_Received_Throughput
This trigger provides the "average" downlink throughput on the VLAN interface (including
Ethernet headers and CRC).
SYSTEM
eNB
TYPE
Load
DESCRIPTION
This trigger provides the downlink throughput on the VLAN interface (including Ethernet
header and CRC). Note that the Ethernet overheads include the MAC header and the FCS
trailer, but exclude the 20 bytes for the Preamble, Start Frame Delimiter and Inter-Frame Gap
(IFG). This has to be taken into account when assessing the interface load.
In the LR14.1L release, the eNodeB can support up to 4 VLANs. A VS.VlanDLThroughput
counter is assigned per VLAN. A "no VLAN" configuration is achieved by setting VlanId=4096;
in that case the throughput indicator reports the aggregated throughput of the eNB interface.
RECOMMENDED THRESHOLD
No specific limit value directly linked to the VLAN received throughput.
MEASUREMENT
eNB_VLAN_Average_Received_Throughput
=VS.VlanDLThroughput.Cum/VS.VlanDLThroughput.NbEvt
ACTION
Not available.
COMMENTS
Applicable for both FDD and TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_traf_eNB_VLAN_thpt_rcvd_avg_PerENB (L13333_30_CI): This indicator provides the
"average" downlink throughput on the VLAN interface (including Ethernet headers and
CRC). Based on 3GPP Counter: VS.VlanDLThroughput.Cum. NPO indicator equivalent to
trigger eNB_VLAN_Average_Received_Throughput.
VS_traf_eNB_VLAN_thpt_rcvd_max (L13333_Max): This indicator provides the downlink
throughput on the VLAN interface (including Ethernet header and CRC).
When a single VLAN is used, the VLAN throughput should equal the sum of the throughputs of
all the interfaces configured in that VLAN.
For example, if only one VLAN is configured, the VLAN throughput should equal the sum of the
eUTRAN interface, OAM and PTP throughputs.
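The consistency remark above can be turned into a simple sanity check; the 5% tolerance is an assumption to absorb sampling differences, not a value from the text:

```python
def vlan_throughput_consistent(vlan_thpt, interface_thpts, tolerance=0.05):
    """With a single VLAN configured, the VLAN throughput should equal the
    sum of the throughputs of the interfaces carried in it (e.g. eUTRAN
    interfaces, OAM and PTP). Returns True when they match within the
    (assumed) relative tolerance."""
    expected = sum(interface_thpts)
    if expected == 0:
        return vlan_thpt == 0
    return abs(vlan_thpt - expected) / expected <= tolerance
```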
This section lists the available critical triggers at S-GW and P-GW level.
Average_CPU_Utilization_CPISA_PGW
Measures the average CPU utilization in the current interval using the VS.avgCpuUtilization
(62010_CPISA) 3GPP counter.
SYSTEM
P-GW (per CP-ISA)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the Average CPU utilization per CP-ISA card per polling interval. CP-
ISA is the Control Plane Integrated Service Adapter of the MG-ISM card.
RECOMMENDED THRESHOLD
Maximum limit value is in the range of 85-90%. Reaching 85-90% average CPU utilization is
an indication that resources are approaching their limit. This is an alarm condition, as the
MG-ISM CP-ISA CPU is running above its safe engineering limit.
MEASUREMENT
This counter represents the Average CPU utilization per CP-ISA card per polling interval.
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
VS_avgCpuUtilization_CPISA_PGW (LC62010_CPISA): the NPO equivalent of
VS.avgCpuUtilization
Average_CPU_Utilization_CPM_PGW
Measures the average CPU utilization per CPM card in the current interval using the
VS.avgCpuUtilization (62010_CPM) 3GPP counter.
SYSTEM
P-GW (per CPM)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the Average CPU utilization per CPM card per polling interval.
RECOMMENDED THRESHOLD
Maximum limit value is in the range of 85-90%. Reaching 85-90% average CPU utilization is
an indication that resources are approaching their limit. This is an alarm condition, as the
CPM CPU is running above its safe engineering limit.
MEASUREMENT
This counter represents the Average CPU utilization per CPM card per polling interval.
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
VS_avgCpuUtilization_CPM_PGW (LC62010_CPM): the NPO equivalent of
VS.avgCpuUtilization
VS_CPU_usage_OnPGW_CPM_avg_rate (L62010_CPM_CI): This indicator provides the
average CPU utilization rate per CPM card per polling interval, expressed in %.
Maximum_CPU_Utilization_CPISA_PGW
Measures the maximum CPU utilization per CP-ISA card in the current interval using the
VS.maxCpuUtilization (62011_CPISA) 3GPP counter.
SYSTEM
P-GW (per CP-ISA)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the maximum CPU utilization per CP-ISA card per polling interval.
CP-ISA is the Control Plane Integrated Service Adapter of the MG-ISM card.
RECOMMENDED THRESHOLD
Maximum limit value is in the range of 85-90%. No specific engineering rule is linked to this
counter, but crossing the 85-90% threshold is an indication to activate specific monitoring
for this resource, as the average CPU utilization may be rising toward the specified
engineering limit.
MEASUREMENT
This counter represents the Maximum CPU utilization per CP-ISA card per polling interval.
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
VS_maxCpuUtilization_CPISA_PGW (LC62011_CPISA): The NPO equivalent of
VS.maxCpuUtilization.
VS_CPU_usage_OnPGW_CPISA_max_rate (L62011_CPISA_CI): This indicator provides the
maximum CPU utilization rate per CP-ISA card per polling interval, expressed in %.
Maximum_CPU_Utilization_CPM_PGW
Measures the maximum CPU utilization per CPM card in the current interval using the
VS.maxCpuUtilization (L62011_CPM) 3GPP counter.
SYSTEM
P-GW (per CPM)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the Maximum CPU utilization per CPM card per polling interval.
RECOMMENDED THRESHOLD
Maximum limit value is in the range of 85-90%. No specific engineering rule is linked to this
counter, but crossing the 85-90% threshold is an indication to activate specific monitoring
for this resource, as the average CPU utilization may be rising toward the specified
engineering limit.
MEASUREMENT
This counter represents the Maximum CPU utilization per CPM card per polling interval.
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
VS_maxCpuUtilization_CPM_PGW (LC62011_CPM): the NPO equivalent of
VS.maxCpuUtilization
VS_CPU_usage_OnPGW_CPM_max_rate (L62011_CPM_CI): This indicator provides the
maximum CPU utilization rate per CPM card per polling interval, expressed in %.
Average_Memory_Utilization_CPISA_PGW
Measures the average memory utilization in the current interval using the
VS.avgMemoryUtilization (62012_CPISA) counter.
SYSTEM
P-GW (per CP-ISA)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the Average Memory utilization per CP-ISA card per polling interval.
CP-ISA is the Control Plane Integrated Service Adapter of the MG-ISM card.
RECOMMENDED THRESHOLD
Maximum limit value is in the range of 85-90%. Reaching 85-90% average memory utilization
is an indication that memory usage is approaching its limit. This is an alarm condition, as
the MG-ISM CP-ISA memory usage is above its safe engineering limit.
MEASUREMENT
This counter represents the Average Memory utilization per CP-ISA card per polling interval.
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
VS_avgMemoryUtilization_CPISA_PGW (LC62012_CPISA): the NPO equivalent of
VS.avgMemoryUtilization
VS_memory_usage_OnPGW_CPISA_avg_rate (L62012_CPISA_CI): This indicator provides
the average memory utilization rate per CP-ISA card per polling interval, expressed in %.
Average_Memory_Utilization_CPM_PGW
Measures the average memory utilization in the current interval using the
VS.avgMemoryUtilization (62012_CPM) 3GPP counter.
SYSTEM
P-GW (per CPM)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the Average Memory utilization per CPM card per polling interval.
RECOMMENDED THRESHOLD
Maximum limit value is in the range of 85-90%. Reaching 85-90% average memory utilization
is an indication that memory usage is approaching its limit. This is an alarm condition, as
the CPM memory usage is above its safe engineering limit.
MEASUREMENT
This counter represents the Average Memory utilization per CPM card per polling interval.
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
VS_avgMemoryUtilization_CPM_PGW (LC62012_CPM): the NPO equivalent of
VS.avgMemoryUtilization
VS_memory_usage_OnPGW_CPM_avg_rate (L62012_CPM_CI): This indicator provides
the average memory utilization rate per CPM card per polling interval, expressed in %.
Maximum_Memory_Utilization_CPISA_PGW
Measures maximum memory utilization in the current interval using the
VS.maxMemoryUtilization (62013_CPISA) 3GPP counter.
SYSTEM
P-GW (per CP-ISA)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the Maximum Memory utilization per CP-ISA card per polling
interval. CP-ISA is the Control Plane Integrated Service Adapter of the MG-ISM card.
RECOMMENDED THRESHOLD
Maximum limit value is in the range of 85-90%. No specific engineering rule is linked to this
counter, but crossing the 85-90% threshold is an indication to set up specific monitoring, as
the average memory utilization may be rising toward the specified limit.
MEASUREMENT
This counter represents the Maximum Memory utilization per CP-ISA card per polling
interval.
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
VS_maxMemoryUtilization_CPISA_PGW (LC62013_CPISA): the NPO equivalent of
VS.maxMemoryUtilization.
VS_memory_usage_OnPGW_CPISA_max_rate (L62013_CPISA_CI): This indicator provides
the maximum memory utilization rate per CP-ISA card per polling interval, expressed in
%.
Maximum_Memory_Utilization_CPM_PGW
Measures the maximum memory utilization in the current interval using the
VS.maxMemoryUtilization (62013_CPM) 3GPP counter.
SYSTEM
P-GW (per CPM)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the Maximum Memory utilization per CPM card per polling interval.
RECOMMENDED THRESHOLD
Maximum limit value is in the range of 85-90%. No specific engineering rule is linked to this
counter, but crossing the 85-90% threshold is an indication to set up specific monitoring, as
the average memory utilization may be rising toward the specified limit.
MEASUREMENT
This counter represents the Maximum Memory utilization per CPM card per polling interval.
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
VS_maxMemoryUtilization_CPM_PGW (LC62013_CPM): the NPO equivalent of
VS.maxMemoryUtilization.
VS_memory_usage_OnPGW_CPM_max_rate (L62013_CPM_CI): This indicator provides
the maximum memory utilization rate per CPM card per polling interval, expressed in %.
Average_Number_Of_Bearers_PGW
Measures the average number of bearers served by CP-ISA card between two sample
collections using the VS.avgNumberOfBearers (61110_Avg) 3GPP counter.
SYSTEM
P-GW (per CP-ISA)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the Average number of bearers served by CP-ISA card between two
sample collections.
RECOMMENDED THRESHOLD
Check the maximum number of P-GW bearers per MG-ISM card (the maximum number is given
in section 3.6.1.2). The measured average number of bearers per MG-ISM card must always
remain below that maximum.
No specific engineering rule is linked to this counter, but when the average number of bearers
per P-GW MG-ISM card reaches the limit value mentioned above, a warning message must be
generated, as the MG-ISM bearer resource is approaching its maximum limit. The same counter
can be used to verify that the maximum bearer activations per second remain below the limit
given in section 3.6.1.3.
MEASUREMENT
This counter represents the Average number of bearers served by CP-ISA card between two
sample collections.
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
VS_avgNumberOfBearers_PGW (LC61110_Avg): the NPO equivalent of
VS.avgNumberOfBearers.
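A sketch of the warning condition described above. The per-card maximum comes from section 3.6.1.2 and is not reproduced here, so it is a parameter; the 90% warning margin is an assumption, not a value from the text:

```python
def bearer_warning(avg_bearers, max_bearers_per_card, warn_fraction=0.90):
    """True when the average number of bearers served by a CP-ISA card
    (VS.avgNumberOfBearers) is approaching the per-card limit.
    warn_fraction is an assumed margin; max_bearers_per_card must be taken
    from section 3.6.1.2."""
    return avg_bearers >= warn_fraction * max_bearers_per_card
```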
Maximum_Number_Of_Bearers_PGW
Measures the maximum number of bearers served by CP-ISA card between two sample
collections using the VS.maxNumberOfBearers (61110_Max).
SYSTEM
P-GW (per CP-ISA)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the Maximum number of bearers served by CP-ISA card between two
sample collections.
RECOMMENDED THRESHOLD
Check the maximum number of P-GW bearers per MG-ISM card (the maximum number is given
in section 3.6.1.2). The measured average number of bearers per MG-ISM card must always
remain below that maximum.
No specific engineering rule is linked to this counter, but when the maximum number of
bearers per P-GW MG-ISM card reaches the limit value mentioned above, monitoring of
bearers per MG-ISM CP-ISA card is recommended.
MEASUREMENT
This counter represents the Maximum number of bearers served by CP-ISA card between two
sample collections.
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
VS_maxNumberOfBearers_PGW (LC61110_Max): the NPO equivalent of
VS.maxNumberOfBearers.
Average_CPU_Utilization_CPISA_SGW
Measures the average CPU utilization in the current interval using the VS.avgCpuUtilization
(52010_CPISA) 3GPP counter.
SYSTEM
S-GW (per CP-ISA)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the Average CPU utilization per CP-ISA card per polling interval. CP-
ISA is the Control Plane Integrated Service Adapter of the MG-ISM card.
RECOMMENDED THRESHOLD
Maximum limit value is in the range of 85-90%. Reaching 85-90% average CPU utilization is
an indication that resources are approaching their limit. This is an alarm condition, as the
MG-ISM CP-ISA CPU is running above its safe engineering limit.
MEASUREMENT
This counter represents the Average CPU utilization per CP-ISA card per polling interval.
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
VS_avgCpuUtilization_CPISA_SGW (LC52010_CPISA): the NPO equivalent of
VS.avgCpuUtilization.
VS_CPU_usage_OnSGW_CPISA_avg_rate (L52010_CPISA_CI): This indicator provides the
average CPU utilization rate per CP-ISA card per polling interval, expressed in %.
Average_CPU_Utilization_CPM_SGW
Measures the average CPU utilization per CPM card per polling interval using the
VS.avgCpuUtilization (52010_CPM) 3GPP counter.
SYSTEM
S-GW (per CPM)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the Average CPU utilization per CPM card per polling interval.
RECOMMENDED THRESHOLD
Maximum limit value is in the range of 85-90%. Reaching 85-90% average CPU utilization is
an indication that resources are approaching their limit. This is an alarm condition, as the
CPM CPU is running above its safe engineering limit.
MEASUREMENT
This counter represents the Average CPU utilization per CPM card per polling interval.
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
VS_avgCpuUtilization_CPM_SGW (LC52010_CPM): the NPO equivalent of
VS.avgCpuUtilization.
VS_CPU_usage_OnSGW_CPM_avg_rate (L52010_CPM_CI): This indicator provides the
average CPU utilization rate per CPM card per polling interval, expressed in %.
Maximum_CPU_Utilization_CPISA_SGW
Measures the maximum CPU utilization in the current interval using the
VS.maxCpuUtilization (52011_CPISA) 3GPP counter.
SYSTEM
S-GW (per CP-ISA)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the Maximum CPU utilization per CP-ISA card per polling interval.
CP-ISA is the Control Plane Integrated Service Adapter of the MG-ISM card.
RECOMMENDED THRESHOLD
Maximum limit value is in the range of 85-90%. No specific engineering rule is linked to this
counter, but crossing the 85-90% threshold is an indication to activate specific monitoring
for this resource, as the average CPU utilization may be rising toward the specified
engineering limit.
MEASUREMENT
This counter represents the Maximum CPU utilization per CP-ISA card per polling interval.
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
VS_maxCpuUtilization_CPISA_SGW (LC52011_CPISA): the NPO equivalent of
VS.maxCpuUtilization.
VS_CPU_usage_OnSGW_CPISA_max_rate (L52011_CPISA_CI): This indicator provides the
maximum CPU utilization rate per CP-ISA card per polling interval, expressed in %.
Maximum_CPU_Utilization_CPM_SGW
Measures the maximum CPU utilization in the current interval using the
VS.maxCpuUtilization (52011_CPM) 3GPP counter.
SYSTEM
S-GW (per CPM)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the Maximum CPU utilization per CPM card per polling interval.
RECOMMENDED THRESHOLD
Maximum limit value is in the range of 85-90%. No specific engineering rule is linked to this
counter, but crossing the 85-90% threshold is an indication to activate specific monitoring
for this resource, as the average CPU utilization might be rising toward the specified
engineering limit.
MEASUREMENT
This counter represents the Maximum CPU utilization per CPM card per polling interval.
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
VS_maxCpuUtilization_CPM_SGW (LC52011_CPM): the NPO equivalent of
VS.maxCpuUtilization
VS_CPU_usage_OnSGW_CPM_max_rate (L52011_CPM_CI): This indicator provides the
maximum CPU utilization rate per CPM card per polling interval, expressed in %.
Average_Memory_Utilization_CPISA_SGW
Measures the average memory utilization per CP-ISA card per polling interval using the
VS.avgMemoryUtilization (52012_CPISA) 3GPP counter.
SYSTEM
S-GW (per CP-ISA)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the Average Memory utilization per CP-ISA card per polling interval.
CP-ISA is the Control Plane Integrated Service Adapter of the MG-ISM card.
RECOMMENDED THRESHOLD
Maximum limit value is in the range of 85-90%. Reaching 85-90% average memory utilization
is an indication that memory usage is reaching its limit. This is an alarm condition, as the
MG-ISM CP-ISA memory usage is running above the safe engineering limit.
MEASUREMENT
This counter represents the Average Memory utilization per CP-ISA card per polling interval.
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
VS_avgMemoryUtilization_CPISA_SGW (LC52012_CPISA): the NPO equivalent of
VS.avgMemoryUtilization.
VS_memory_usage_OnSGW_CPISA_avg_rate (L52012_CPISA_CI): This indicator provides
the average memory utilization rate per CP-ISA card per polling interval, expressed in %.
Average_Memory_Utilization_CPM_SGW
Measures the average memory utilization in the current interval using the
VS.avgMemoryUtilization (52012_CPM) 3GPP counter.
SYSTEM
S-GW (per CPM)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the Average Memory utilization per CPM card per polling interval.
RECOMMENDED THRESHOLD
Maximum limit value is in the range of 85-90%. Reaching 85-90% average memory utilization
is an indication that memory usage is reaching its limit. This is an alarm condition, as the
CPM memory usage is running above the safe engineering limit.
MEASUREMENT
This counter represents the Average Memory utilization per CPM card per polling interval.
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
VS_avgMemoryUtilization_CPM_SGW (LC52012_CPM): the NPO equivalent of
VS.avgMemoryUtilization.
VS_memory_usage_OnSGW_CPM_avg_rate (L52012_CPM_CI): This indicator provides
the average memory utilization rate per CPM card per polling interval, expressed in %.
Maximum_Memory_Utilization_CPISA_SGW
Measures the maximum memory utilization in the current interval using the
VS.maxMemoryUtilization (52013_CPISA) 3GPP counter.
SYSTEM
S-GW (per CP-ISA)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the Maximum Memory utilization per CP-ISA card per polling
interval. CP-ISA is the Control Plane Integrated Service Adapter of the MG-ISM card.
RECOMMENDED THRESHOLD
Maximum limit value is in the range of 85-90%. No specific engineering rule is linked to this
counter, but crossing the 85-90% threshold is an indication to set up specific monitoring, as
the average memory utilization might be rising toward the specified limit.
MEASUREMENT
This counter represents the Maximum Memory utilization per CP-ISA card per polling
interval.
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
VS_maxMemoryUtilization_CPISA_SGW (LC52013_CPISA): the NPO equivalent of
VS.maxMemoryUtilization.
VS_memory_usage_OnSGW_CPISA_max_rate (L52013_CPISA_CI): This indicator provides
the maximum memory utilization rate per CP-ISA card per polling interval, expressed in
%.
Maximum_Memory_Utilization_CPM_SGW
Measures the maximum memory utilization in the current interval using the
VS.maxMemoryUtilization (52013_CPM) 3GPP counter.
SYSTEM
S-GW (per CPM)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the Maximum Memory utilization per CPM card per polling interval.
RECOMMENDED THRESHOLD
Maximum limit value is in the range of 85-90%. No specific engineering rule is linked to this
counter, but crossing the 85-90% threshold is an indication to set up specific monitoring, as
the average memory utilization might be rising toward the specified limit.
MEASUREMENT
This counter represents the Maximum Memory utilization per CPM card per polling interval.
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
VS_maxMemoryUtilization_CPM_SGW (LC52013_CPM): the NPO equivalent of
VS.maxMemoryUtilization.
VS_memory_usage_OnSGW_CPM_max_rate (L52013_CPM_CI): This indicator provides
the maximum memory utilization rate per CPM card per polling interval, expressed in %.
Average_Number_Of_Bearers_SGW
Measures the average number of bearers served by CP-ISA card between two sample
collections using the VS.avgNumberOfBearers (51110_Avg) 3GPP counter.
KCI=BearerManagement, GroupName=CP-ISA, group=1, slot/mda
SYSTEM
S-GW (per CP-ISA)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the Average number of bearers served by CP-ISA card between two
sample collections.
RECOMMENDED THRESHOLD
Check the maximum number of S-GW bearers per MG-ISM card (the maximum number is given
in section 3.6.1.2). The measured average number of bearers per MG-ISM card must always
remain below that maximum.
No specific engineering rule is linked to this counter, but when the average number of
bearers per S-GW MG-ISM card approaches the limit value mentioned above, a warning
message must be generated, as the MG-ISM bearer resource is approaching its maximum
limit.
The same counter can be used to verify that the maximum number of bearer activations per
second remains below the limit given in section 3.6.1.3.
MEASUREMENT
This counter represents the Average number of bearers served by CP-ISA card between two
sample collections.
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
VS_avgNumberOfBearers_SGW (LC51110_Avg): the NPO equivalent of
VS.avgNumberOfBearers.
Maximum_Number_Of_Bearers_SGW
Measures the maximum number of bearers served by CP-ISA card between two sample
collections using the VS.maxNumberOfBearers (51110_Max) 3GPP counter.
KCI=BearerManagement, GroupName=CP-ISA, group=1, slot/mda
SYSTEM
S-GW (per CP-ISA)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the Maximum number of bearers served by CP-ISA card between two
sample collections.
RECOMMENDED THRESHOLD
Check the maximum number of S-GW bearers per MG-ISM card (the maximum number is given
in section 3.6.1.2). The measured number of bearers per MG-ISM card must always remain
below that maximum.
No specific engineering rule is linked to this counter, but when the maximum number of
bearers per S-GW MG-ISM card approaches the limit value mentioned above, per MG-ISM CP-
ISA card bearer monitoring is recommended.
MEASUREMENT
This counter represents the Maximum number of bearers served by CP-ISA card between two
sample collections.
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
VS_maxNumberOfBearers_SGW (LC51110_Max): the NPO equivalent of
VS.maxNumberOfBearers
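The bearer-count check described in the two entries above can be sketched as follows. This is an illustrative helper only: the function name and the 90% warning watermark are assumptions, and the per-card limit must be taken from section 3.6.1.2 (its value is not reproduced here):

```python
def bearer_load_check(measured_bearers, max_bearers_per_card, warn_fraction=0.9):
    """Return (load_fraction, warning) for one MG-ISM / CP-ISA card.

    measured_bearers     -- VS.avgNumberOfBearers or VS.maxNumberOfBearers sample
    max_bearers_per_card -- per-card limit from section 3.6.1.2 (not reproduced here)
    warn_fraction        -- assumed watermark at which a warning is raised
    """
    load = measured_bearers / max_bearers_per_card
    return load, load >= warn_fraction
```

When the warning flag is set, the text above recommends generating a warning message (average counter) or activating per-card bearer monitoring (maximum counter).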
IntraSGW_Relocation_S1X2Based_Handover_Successful_SGW
Measures the number of incoming intra S-GW X2-based and S1-based handovers with and
without indirect tunnels served successfully by this CP-ISA card (MG-ISM) using the
VS.intraSgwRelocationS1X2BasedHandoverSuccessful (52144_1) 3GPP counter.
SYSTEM
S-GW (per CP-ISA)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the Number of incoming intra S-GW X2-based and S1-based
handovers with and without indirect tunnels (the sum of intra S-GW X2 handovers and intra
S-GW S1 handovers) served successfully by this CP-ISA card (MG-ISM).
RECOMMENDED THRESHOLD
Check the maximum handover event rates per MG-ISM given in section 3.6.2.4.2.
This counter can be used to elaborate S1/X2 intra S-GW handover rate per MG-ISM card,
which must always be below the maximum number mentioned previously. Please note that
there is an additional counter (VS_intraSgwRelocationS1X2BasedHandoverFailures_SGW) to
be monitored in relation with VS_intraSgwRelocationS1X2BasedHandoverSuccessful_SGW.
VS_intraSgwRelocationS1X2BasedHandoverFailures_SGW counts the number of incoming
intra Serving Gateway (SGW) X2-based and S1-based handover failures with and without
indirect tunnels served by CP-ISA card (MG-ISM).
MEASUREMENT
This counter represents the Number of incoming intra S-GW X2-based and S1-based
handovers with and without indirect tunnels (the sum of intra S-GW X2 handovers and intra
S-GW S1 handovers) served successfully by this CP-ISA card (MG-ISM).
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
IntraSGW_Relocation_S1Based_With_Indirect_Tunnel_Handover_Successful_SGW
Measures the number of incoming intra S-GW S1-based handovers with indirect tunnels
served successfully by this CP-ISA card (MG-ISM) using the
VS.intraSgwRelocationS1BasedWithIndirectTunnelHandoverSuccessful (52144_2) 3GPP
counter.
SYSTEM
S-GW (per CP-ISA)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the number of incoming intra S-GW S1-based handovers with
indirect tunnels (intra S-GW S1 handovers) served successfully by this CP-ISA card (MG-ISM).
RECOMMENDED THRESHOLD
At S-GW level, there is no specific limit on the number of S1 intra S-GW handovers per second
that can be supported per MG-ISM card, as the limit is global (the maximum handover rate
per MG-ISM card is given per intra S-GW S1/X2 HO rate and per inter S-GW HO rate). Check
the maximum handover event rates per MG-ISM given in section 3.6.2.4.2.
This counter can be used to elaborate the S1 intra S-GW handover rate per MG-ISM card.
Please note that there is an additional counter
(VS_intraSgwRelocationS1BasedWithIndirectTunnelHandoverFailures_SGW) to be monitored
in relation with VS_intraSgwRelocationS1BasedWithIndirectTunnelHandoverSuccessful_SGW.
MEASUREMENT
This counter represents the number of incoming intra S-GW S1-based handovers with
indirect tunnels (intra S-GW S1 handovers) served successfully by this CP-ISA card (MG-ISM).
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
VS_intraSgwRelocationS1BasedWithIndirectTunnelHandoverSuccessful_SGW
(LC52144_2): the NPO equivalent of
VS.intraSgwRelocationS1BasedWithIndirectTunnelHandoverSuccessful.
VS_HO_IrC_S1_IaSGW_WithIndirectDataFwd_req (L52144_2_00_CI): This indicator
provides the number of incoming intra Serving Gateway (SGW) S1-based handovers with
indirect tunnels served by CP-ISA card (Control Plane Integrated Service Adapter, also
called MSCP - Mobile Service Control Plane).
VS_HO_IrC_S1_IaSGW_WithIndirectDataFwd_succ_rate (L52144_2_11_CI): This indicator
provides the success rate of incoming intra Serving Gateway (SGW) S1-based handovers
with indirect tunnels served by CP-ISA card (Control Plane Integrated Service Adapter,
also called MSCP - Mobile Service Control Plane).
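Each handover entry in this section pairs a ...Successful counter with a ...Failures counter to be monitored together. As an illustrative sketch (the function name is an assumption), the corresponding success rate can be derived from any such pair:

```python
def ho_success_rate_pct(successful, failures):
    """Handover success rate (%) from a paired success/failure counter set,
    e.g. VS.intraSgwRelocationS1BasedWithIndirectTunnelHandoverSuccessful
    and the corresponding ...HandoverFailures counter."""
    attempts = successful + failures
    # Guard against an empty polling interval with no handover attempts.
    return 100.0 * successful / attempts if attempts else 0.0
```

The same computation applies to the inter S-GW X2-based and S1-based handover counter pairs described below.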
InterSGW_X2_Relocation_Handover_Successful_SGW
Measures the number of incoming inter S-GW X2-based handovers served successfully by CP-
ISA card (MG-ISM) using the VS.interSgwX2RelocationHandoverSuccessful (52141_1) 3GPP
counter.
SYSTEM
S-GW (per CP-ISA)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the number of incoming inter S-GW X2-based handovers served
successfully by CP-ISA card (MG-ISM).
RECOMMENDED THRESHOLD
At S-GW level, there is no specific limit on the number of X2 inter S-GW handovers per second
that can be supported per MG-ISM card, as the limit is global (the maximum handover rate
per MG-ISM card is given per intra S-GW S1/X2 HO rate and per inter S-GW HO rate). Check
the maximum handover event rates per MG-ISM given in section 3.6.2.4.2.
This counter can be used to elaborate inter S-GW X2-handover rate per MG-ISM card. Please
note that there is an additional counter (VS_interSgwX2RelocationHandoverFailures_SGW) to
be monitored in relation with VS_interSgwX2RelocationHandoverSuccessful_SGW.
MEASUREMENT
This counter represents the number of incoming inter S-GW X2-based handovers served
successfully by CP-ISA card (MG-ISM).
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
VS_interSgwX2RelocationHandoverSuccessful_SGW (LC52141_1): the NPO equivalent of
VS.interSgwX2RelocationHandoverSuccessful.
VS_HO_IrC_X2_IrSGW_req_tgt (L52141_1_00_CI): This indicator provides the number of
incoming inter S-GW X2-based handovers served by CP-ISA card (Control Plane Integrated
Service Adapter, also called MSCP - Mobile Service Control Plane).
VS_HO_IrC_X2_IrSGW_succ_tgt_rate (L52141_1_11_CI): This indicator provides the
success rate of incoming inter S-GW X2-based handovers served by CP-ISA card (Control
Plane Integrated Service Adapter, also called MSCP - Mobile Service Control Plane).
InterSGW_S1Based_Relocation_With_Indirect_Tunnel_Handover_Successful_SGW
Measures the number of incoming inter S-GW S1-based handovers with indirect tunnels
served successfully by CP-ISA card (MG-ISM) using the
VS.interSgwS1BasedRelocationWithIndirectTunnelHandoverSuccessful (52141_2) 3GPP
counter.
SYSTEM
S-GW (per CP-ISA)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the number of incoming inter S-GW S1-based handovers with
indirect tunnels served successfully by CP-ISA card (MG-ISM).
3GPP 23.401 considers two different use cases for S1-based handover: the first is S1-based
handover with an indirect tunnel forwarding path between the source eNB and the target
eNB (i.e. no direct connectivity between the source eNB and the target eNB); the second is
S1-based handover with a direct tunnel forwarding path (i.e. direct connectivity between
the source eNB and the target eNB is available).
RECOMMENDED THRESHOLD
At S-GW level, there is no specific limit on the number of inter S-GW S1-based handovers
with indirect tunnels per second that can be supported per MG-ISM card, as the limit is
global (the maximum handover rate per MG-ISM card is given per intra S-GW S1/X2 HO rate
and per inter S-GW HO rate). Check the maximum handover event rates per MG-ISM given in
section 3.6.2.4.2.
This counter can be used to elaborate the rate of inter S-GW S1-based handovers with
indirect tunnels per MG-ISM card. Please note that there is an additional counter
(VS_interSgwS1BasedRelocationWithIndirectTunnelHandoverFailures_SGW) to be monitored
in relation with VS_interSgwS1BasedRelocationWithIndirectTunnelHandoverSuccessful_SGW.
This counter can also be used in conjunction with the two other inter S-GW HO counters to
elaborate the aggregate inter S-GW HO rate.
MEASUREMENT
This counter represents the number of incoming inter S-GW S1-based handovers with
indirect tunnels served successfully by CP-ISA card (MG-ISM).
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
VS_interSgwS1BasedRelocationWithIndirectTunnelHandoverSuccessful_SGW
(LC52141_2): the NPO equivalent of
VS.interSgwS1BasedRelocationWithIndirectTunnelHandoverSuccessful.
VS_HO_IrC_S1_IrSGW_WithIndirectDataFwd_req_tgt (L52141_2_00_CI): This indicator
provides the number of incoming inter S-GW S1-based handovers with indirect tunnels
served by CP-ISA card (Control Plane Integrated Service Adapter, also called MSCP -
Mobile Service Control Plane).
VS_HO_IrC_S1_IrSGW_WithIndirectDataFwd_succ_tgt_rate (L52141_2_11_CI): This
indicator provides the success rate of incoming inter S-GW S1-based handovers with
indirect tunnels served by CP-ISA card (Control Plane Integrated Service Adapter, also
called MSCP - Mobile Service Control Plane).
InterSGW_S1Based_Relocation_Without_Indirect_Tunnel_Handover_Successful_SGW
Measures the number of incoming inter S-GW S1-based handovers without indirect tunnels
served successfully by CP-ISA card (MG-ISM) using the
VS.interSgwS1BasedRelocationWithoutIndirectTunnelHandoverSuccessful (52141_3) 3GPP
counter.
SYSTEM
S-GW (per CP-ISA)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the number of incoming inter S-GW S1-based handovers without
indirect tunnels served successfully by CP-ISA card (MG-ISM).
3GPP 23.401 considers two different use cases for S1-based handover: the first is S1-based
handover with an indirect tunnel forwarding path between the source eNB and the target
eNB (i.e. no direct connectivity between the source eNB and the target eNB); the second is
S1-based handover with a direct tunnel forwarding path (i.e. direct connectivity between
the source eNB and the target eNB is available).
RECOMMENDED THRESHOLD
At S-GW level, there is no specific limit on the number of inter S-GW S1-based handovers
without indirect tunnels per second that can be supported per MG-ISM card, as the limit is
global (the maximum handover rate per MG-ISM card is given per intra S-GW S1/X2 HO rate
and per inter S-GW HO rate). Check the maximum handover event rates per MG-ISM given in
section 3.6.2.4.2.
This counter can be used to elaborate the rate of inter S-GW S1-based handovers without
indirect tunnels per MG-ISM card. Please note that there is an additional counter
(VS_interSgwS1BasedRelocationWithoutIndirectTunnelHandoverFailures_SGW) to be
monitored in relation with
VS_interSgwS1BasedRelocationWithoutIndirectTunnelHandoverSuccessful_SGW.
This counter can also be used in conjunction with the two other inter S-GW HO counters to
elaborate the aggregate inter S-GW HO rate.
MEASUREMENT
This counter represents the number of incoming inter S-GW S1-based handovers without
indirect tunnels served successfully by CP-ISA card (MG-ISM).
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
VS_interSgwS1BasedRelocationWithoutIndirectTunnelHandoverSuccessful_SGW
(LC52141_3): the NPO equivalent of
VS.interSgwS1BasedRelocationWithoutIndirectTunnelHandoverSuccessful.
VS_HO_IrC_S1_IrSGW_WithoutIndirectDataFwd_req_tgt (L52141_3_00_CI): This
indicator provides the number of incoming inter S-GW S1-based handovers without
indirect tunnels served by CP-ISA card (Control Plane Integrated Service Adapter, also
called MSCP - Mobile Service Control Plane).
VS_HO_IrC_S1_IrSGW_WithoutIndirectDataFwd_succ_tgt_rate (L52141_3_11_CI): This
indicator provides the success rate of incoming inter S-GW S1-based handovers without
indirect tunnels served by CP-ISA card (Control Plane Integrated Service Adapter, also
called MSCP - Mobile Service Control Plane).
Paging_Attempts_SGW
Measures the number of paging attempts served successfully by CP-ISA card (MG-ISM) using the
VS.pagingAttempts (52531) 3GPP counter.
SYSTEM
S-GW (per CP-ISA)
TYPE
Cumulative load.
DESCRIPTION
This counter represents the number of paging attempts served successfully by CP-ISA card
(MG-ISM).
RECOMMENDED THRESHOLD
The limit of paging rate per MG-ISM card is given in section 3.6.2.4.3.
This counter can be used to elaborate maximum paging rate per MG-ISM card. Please note
that there is an additional counter (VS_pagingFailures_SGW) to be monitored in relation
with VS_pagingAttempts_SGW.
VS_pagingFailures_SGW counts the number of paging failures served by CP-ISA card (MG-
ISM).
Maximum number of paging attempt per second must be below the limit mentioned above.
MEASUREMENT
This counter represents the number of paging attempts served successfully by CP-ISA card
(MG-ISM).
ACTION
Please refer to RECOMMENDED THRESHOLD above.
COMMENTS
Please refer to RECOMMENDED THRESHOLD above.
EXAMPLE
Not available.
NPO INDICATORS
VS_pagingAttempts_SGW (LC52531): the NPO equivalent of VS.pagingAttempts.
VS_paging_OnSGW_succ (L52531_10_CI): This indicator provides the number of paging
successes served by CP-ISA card (Control Plane Integrated Service Adapter, also called
MSCP - Mobile Service Control Plane).
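The paging-rate check above compares a per-second rate against the per-MG-ISM limit from section 3.6.2.4.3. As an illustrative sketch (function names are assumptions, and the limit value is not reproduced here), the rate can be derived from two successive samples of the cumulative counter:

```python
def rate_per_second(prev_sample, curr_sample, interval_seconds):
    """Convert two successive cumulative counter samples (e.g. of
    VS.pagingAttempts) into a per-second event rate."""
    return (curr_sample - prev_sample) / interval_seconds

def paging_overload(prev_sample, curr_sample, interval_seconds, limit_per_sec):
    """True when the paging attempt rate exceeds the per-MG-ISM limit
    from section 3.6.2.4.3 (limit value not reproduced here)."""
    return rate_per_second(prev_sample, curr_sample, interval_seconds) > limit_per_sec
```

The same delta-over-interval conversion applies to the handover counters above when elaborating per-MG-ISM handover event rates.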
Additional P-GW counters are provided in “7750 SR OS MG KPI KCI Counters”. This section will be
completed in a further release of the document.
4.6.2.2 S-GW ADDITIONAL COUNTERS
VS.numberOfBearers (51110): Counts the number of bearers being served by CP-ISA card. The
NPO tool uses this 3GPP counter to derive the associated NPO indicator:
VS_numberOfBearers_SGW (LC51110).
VS.numberOfDedicatedBearers (51112): Counts the number of dedicated bearers being served
by CP-ISA card. The NPO tool uses this 3GPP counter to derive the associated NPO indicator:
VS_numberOfDedicatedBearers_SGW (LC51112).
VS.numberOfActiveBearers (51111_1): Counts the number of active bearers being served by
CP-ISA card. The NPO tool uses this 3GPP counter to derive the associated NPO indicator:
VS_numberOfActiveBearers_SGW (LC51111_1).
VS.numberOfPdnSessions (51113): Counts the number of PDN sessions being served by CP-ISA
card. The NPO tool uses this 3GPP counter to derive the associated NPO indicator:
VS_numberOfPdnSessions_SGW (LC51113).
VS.s1uGtpPathMgmtFailures (52314): Counts the number of S1-U Peer Unreachable events detected by
loss of GTP keep-alive from the remote peer. The NPO tool uses this 3GPP counter to derive the
associated NPO indicator: VS_s1uGtpPathMgmtFailures_SGW (LC52314).
VS.s1uGtpPeerRestarts (52315): Counts the number of S1U Peer Restart detected by Peer
Restart counter value. The NPO tool uses this 3GPP counter to derive the associated NPO
indicator: VS_s1uGtpPeerRestarts_SGW (LC52315).
VS.PeerRestarts (52351_S1U): Counts the number of times S1-U peer restarted. The NPO tool
uses this 3GPP counter to derive the associated NPO indicator: VS_PeerRestarts_S1U_SGW
(LC52351_S1U).
VS.s4GtpPathMgmtFailures (52316): Counts the number of S4 Peer Unreachable events detected by loss
of GTP keep-alive from the remote peer. The NPO tool uses this 3GPP counter to derive the
associated NPO indicator: VS_s4GtpPathMgmtFailures_SGW (LC52316).
VS.s4GtpPeerRestarts (52317): Counts the number of S4 Peer Restart detected by Peer Restart
counter value. The NPO tool uses this 3GPP counter to derive the associated NPO indicator:
VS_s4GtpPeerRestarts_SGW (LC52317).
VS.s5GtpPathMgmtFailures (52310): Counts the number of S5 Peer Unreachable detected by
loss of GTP keep-alive from the remote peer. The NPO tool uses this 3GPP counter to derive the
associated NPO indicator: VS_s5GtpPathMgmtFailures_SGW (LC52310).
VS.s5GtpPeerRestarts (52311): Counts the number of S5 Peer Restart detected by Peer Restart
counter value. The NPO tool uses this 3GPP counter to derive the associated NPO indicator:
VS_s5GtpPeerRestarts_SGW (LC52311).
VS.PeerRestarts_S5 (52351_S5): Counts the number of times S5 peer restarted after
registering with the system. The NPO tool uses this 3GPP counter to derive the associated NPO
indicator: VS_PeerRestarts_S5_SGW (LC52351_S5).
VS.s11GtpPathMgmtFailures (52312): Counts the number of S11 Peer Unreachable detected by
loss of GTP keep-alive from the remote peer. The NPO tool uses this 3GPP counter to derive the
associated NPO indicator: VS_s11GtpPathMgmtFailures_SGW (LC52312).
VS.s11GtpPeerRestarts (52313): Counts the number of S11 Peer Restart detected by Peer
Restart counter value. The NPO tool uses this 3GPP counter to derive the associated NPO
indicator: VS_s11GtpPeerRestarts_SGW (LC52313).
VS.PeerRestarts (52351_S11): Counts the number of times S11 peer restarted. The NPO tool
uses this 3GPP counter to derive the associated NPO indicator: VS_PeerRestarts_S11_SGW
(LC52351_S11).
VS.s12GtpPathMgmtFailures (52318): Counts the number of S12 Peer Unreachable events detected by
loss of GTP keep-alive from the remote peer. The NPO tool uses this 3GPP counter to derive the
associated NPO indicator: VS_s12GtpPathMgmtFailures_SGW (LC52318).
VS.s12GtpPeerRestarts (52319): Counts the number of S12 Peer Restart detected by Peer
Restart counter value. The NPO tool uses this 3GPP counter to derive the associated NPO
indicator: VS_s12GtpPeerRestarts_SGW (LC52319).
VS.mgwRestartCounterValue (52323): Reports the S-GW restart counter value. The
NPO tool uses this 3GPP counter to derive the associated NPO indicator:
VS_mgwRestartCounterValue_SGW (LC52323).
Additional S-GW counters are provided in “7750 SR OS MG KPI KCI Counters”. This section will be
completed in a further release of the document.
The scope of this section is to provide details regarding the ePC interface monitoring and
measurement capabilities available at the various ePC network elements, to ensure that the ePC
interfaces and the transport network are properly dimensioned (i.e. not experiencing overload),
so that end-to-end quality of service objectives are met.
This section aims to provide a list, as exhaustive as possible, of the available ePC interface
counters that can be used for ePC interface monitoring.
PGW_UPLINK_DROP_PACKETS
This trigger counts the number of uplink packets dropped among the traffic processed on the
MG-ISM card.
SYSTEM
P-GW (per MG-ISM card)
TYPE
Load
DESCRIPTION
This trigger provides the total number of uplink packets received on P-GW GTP-U logical
interfaces (S5, Gn, S2b) that were dropped per MG-ISM card.
RECOMMENDED THRESHOLD
Engineering rule regarding this critical trigger will be defined in a further release of the
document.
MEASUREMENT
PGW_Uplink_Drop_Packets= VS.ulDropPackets
ACTION
This trigger can be used to detect per MG-ISM uplink bearer traffic congestion condition.
COMMENTS
Applicable to both FDD&TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_traf_processed_OnPGW_pkt_dropped_UL (L62227): This indicator provides the
number of uplink packets dropped among the traffic processed on the MG-ISM card.
Based on 3GPP counter VS.ulDropPackets. NPO indicator equivalent to trigger
PGW_Uplink_Drop_Packets.
VS_traf_processed_OnPGW_pkt_all_UL_ratio (L62227_31_CI): This indicator provides
the ratio of UL packet dropped over UL packets for traffic bytes and packets processed
on MG-ISM card.
VS_traf_processed_OnPGW_pkt_all_UL (L62225): This indicator provides the number of
packets of uplink traffic processed on MG-ISM card. Based on 3GPP counter:
VS.ulPackets
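The dropped-over-all ratio exposed by L62227_31_CI can be derived from the dropped and total packet counters. As an illustrative sketch (the function name is an assumption):

```python
def drop_ratio_pct(dropped_pkts, total_pkts):
    """Packet drop ratio (%) from a dropped/total counter pair, e.g.
    VS.ulDropPackets over VS.ulPackets (the ratio reported by
    VS_traf_processed_OnPGW_pkt_all_UL_ratio)."""
    # Guard against an interval with no processed packets.
    return 100.0 * dropped_pkts / total_pkts if total_pkts else 0.0
```

The same computation applies to the downlink trigger below and to the S-GW GTP drop counters.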
PGW_Downlink_Drop_Packets
This trigger counts the number of downlink packets dropped among the traffic processed on
the MG-ISM card.
SYSTEM
P-GW (per MG-ISM card)
TYPE
Load
DESCRIPTION
This trigger provides the total number of downlink packets received on P-GW GTP-U logical
interfaces (SGi) that were dropped per MG-ISM card.
RECOMMENDED THRESHOLD
Engineering rule regarding this critical trigger will be defined in a further release of the
document.
MEASUREMENT
PGW_Downlink_Drop_Packets= VS.dlDropPackets
ACTION
This trigger can be used to detect per MG-ISM downlink bearer traffic congestion condition.
COMMENTS
Applicable to both FDD&TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_traf_processed_OnPGW_pkt_dropped_DL (L62230): This indicator provides the
number of downlink packets dropped among the traffic processed on the MG-ISM card.
Based on 3GPP counter VS.dlDropPackets. NPO indicator equivalent to trigger
PGW_Downlink_Drop_Packets.
VS_traf_processed_OnPGW_pkt_all_DL_ratio (L62230_31_CI): This indicator provides
the ratio of DL packet dropped over DL packets for traffic bytes and packets processed on
MG-ISM card.
VS_traf_processed_OnPGW_pkt_all_DL (L62228): This indicator provides the number of
packets of downlink traffic processed on MG-ISM card. Based on 3GPP counter:
VS.dlPackets
SGW_GTP_UPLINK_DROP_PACKETS
This trigger counts the number of uplink packets dropped among the traffic processed on the
MG-ISM card.
SYSTEM
S-GW (per MG-ISM card)
TYPE
Load
DESCRIPTION
This trigger provides the total number of uplink packets received on S-GW GTP-U logical
interfaces (S1, S4, S12) that were dropped per MG-ISM card.
RECOMMENDED THRESHOLD
Engineering rule regarding this critical trigger will be defined in a further release of the
document.
MEASUREMENT
SGW_GTP_Uplink_Drop_Packets=VS.sgwGtpUlDropPackets
ACTION
This trigger can be used to detect per MG-ISM uplink bearer traffic congestion condition.
COMMENTS
Applicable to both FDD&TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_traf_processed_OnSGW_pkt_drop_UL (L52215): This indicator provides the number
of uplink packets dropped among the traffic processed on the MG-ISM card. Based on
3GPP counter VS.sgwGtpUlDropPackets. NPO indicator equivalent to trigger
SGW_GTP_Uplink_Drop_Packets.
VS_traf_processed_OnSGW_pkt_UL (L52214): This indicator provides the number of
uplink packets processed on the MG-ISM card. Based on 3GPP counter
VS.sgwGtpUlPackets.
SGW_GTP_Downlink_Drop_Packets
This trigger counts the number of downlink packets dropped among the traffic processed on
the MG-ISM card.
SYSTEM
S-GW (per MG-ISM card)
TYPE
Load
DESCRIPTION
This trigger provides the total number of downlink packets received on S-GW GTP-U logical
interfaces (S5, S8) that were dropped per MG-ISM card.
RECOMMENDED THRESHOLD
Engineering rule regarding this critical trigger will be defined in a further release of the
document.
MEASUREMENT
SGW_GTP_Downlink_Drop_Packets= VS.sgwGtpDLDropPackets
ACTION
This trigger can be used to detect per MG-ISM downlink bearer traffic congestion condition.
COMMENTS
Applicable to both FDD&TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_traf_processed_OnSGW_pkt_drop_DL (L52212): This indicator provides the number
of downlink packets dropped among the traffic processed on the MG-ISM card. Based on
3GPP counter VS.sgwGtpDLDropPackets. NPO indicator equivalent to trigger
SGW_GTP_Downlink_Drop_Packets.
VS_traf_processed_OnSGW_pkt_DL (L52211): This indicator provides the number of
packets of downlink traffic processed on MG-ISM card. Based on 3GPP counter:
VS.sgwGtpDlPackets
In addition to the counters defined in section 4.5.2 above, the ePC network elements feature
additional counters that enable more accurate ePC interface monitoring and potential
troubleshooting.
This section lists the relevant ePC interface counters that are available at the P-GW level and
that can be used to provide additional monitoring capabilities.
Please note that additional counters are available at the S5C, S8C, and Gx interface level, but
since these counters only provide the number of received or sent messages, they can hardly be
used in the scope of LTE Network Capacity Monitoring and Engineering. These counters are not
listed in this document; please refer to “7750 SR OS MG KPI KCI Counters” for additional
information about these counters.
PGW_Uplink_Bytes
This trigger counts the number of bytes of uplink traffic processed on MG-ISM card.
SYSTEM
P-GW (per MG-ISM card)
TYPE
Load
DESCRIPTION
This trigger provides the total number of bytes of uplink egress traffic (SGi interface)
processed on MG-ISM card (GTP-U payload bytes).
RECOMMENDED THRESHOLD
Engineering rule regarding this trigger will be defined in a further release of the document.
MEASUREMENT
PGW_Uplink_Bytes= VS.ulBytes
ACTION
This trigger can be used to elaborate sent throughput at the SGi interface.
COMMENTS
Applicable to both FDD&TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_traf_processed_OnPGW_byte_all_UL(L62226): This indicator provides the number of
bytes of uplink traffic processed on MG-ISM card. Based on 3GPP counter: VS.ulBytes.
NPO indicator equivalent to trigger PGW_Uplink_Bytes
VS_traf_processed_OnPGW_kbyte_all_UL (L62226_CI): This indicator provides the
number of kilobytes (1000 bytes) of uplink egress traffic processed on MG-ISM card.
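As the ACTION above notes, byte counters such as VS.ulBytes are typically turned into a throughput figure. A minimal sketch, assuming the input is the counter delta over one measurement granularity period (the 15-minute period in the example is an illustrative assumption):

```python
def throughput_mbps(byte_delta: int, period_seconds: float) -> float:
    """Convert a byte-counter delta (e.g. the delta of VS.ulBytes over
    one measurement period) into an average throughput in Mbit/s."""
    if period_seconds <= 0:
        raise ValueError("period must be positive")
    return byte_delta * 8 / (period_seconds * 1_000_000)

# Illustrative: 900 MB of uplink payload in a 15-minute (900 s) period
print(f"{throughput_mbps(900_000_000, 900):.1f} Mbit/s")  # 8.0 Mbit/s
```

The same conversion applies to the downlink counter (VS.dlBytes) and to the S-GW byte counters later in this section.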
PGW_Downlink_Bytes
This trigger counts the number of bytes of downlink traffic processed on MG-ISM card
SYSTEM
P-GW (per MG-ISM card)
TYPE
Load
DESCRIPTION
This trigger provides the total number of bytes of downlink egress traffic (S5, Gn, S2b
interfaces) processed on MG-ISM card (GTP-U payload bytes).
RECOMMENDED THRESHOLD
Engineering rule regarding this trigger will be defined in a further release of the document.
MEASUREMENT
PGW_Downlink_Bytes= VS.dlBytes
ACTION
This trigger can be used to elaborate aggregate sent throughput at the P-GW GTP-U
interface.
COMMENTS
Applicable to both FDD&TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_traf_processed_OnPGW_byte_all_DL (L62229): This indicator provides the number
of bytes of downlink traffic processed on MG-ISM card. Based on 3GPP counter:
VS.dlBytes
NPO indicator equivalent to trigger PGW_Downlink_Bytes
VS_traf_processed_OnPGW_kbyte_all_DL (L62229_CI): This indicator provides the
number of kilobytes (1000 bytes) of downlink egress traffic processed on MG-ISM card.
PGW_Uplink_Packets
This trigger counts the number of packets of uplink traffic processed on MG-ISM card.
SYSTEM
P-GW (per MG-ISM card)
TYPE
Load
DESCRIPTION
This trigger provides the total number of packets of uplink traffic (SGi interface) processed
on MG-ISM card.
RECOMMENDED THRESHOLD
No specific engineering rule related to this trigger.
MEASUREMENT
PGW_Uplink_Packets= VS.ulPackets
ACTION
Not available.
COMMENTS
Applicable to both FDD&TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_traf_processed_OnPGW_pkt_all_UL (L62225): This indicator provides the number of
packets of uplink traffic processed on MG-ISM card. Based on 3GPP counter:
VS.ulPackets
NPO indicator equivalent to trigger PGW_Uplink_Packets
PGW_Downlink_Packets
This trigger counts the number of packets of downlink traffic processed on MG-ISM card.
SYSTEM
P-GW (per MG-ISM card)
TYPE
Load
DESCRIPTION
This trigger provides the total number of packets of downlink traffic (GTP-U S5, Gn, S2b
interfaces) processed on MG-ISM card.
RECOMMENDED THRESHOLD
No specific engineering rule related to this trigger.
MEASUREMENT
PGW_Downlink_Packets= VS.dlPackets
ACTION
Not available.
COMMENTS
Applicable to both FDD&TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_traf_processed_OnPGW_pkt_all_DL (L62228): This indicator provides the number of
packets of downlink traffic processed on MG-ISM card. Based on 3GPP counter:
VS.dlPackets
NPO indicator equivalent to trigger PGW_Downlink_Packets
PGW_GTP_Uplink_Egress_Octets
This trigger counts the number of octets of uplink egress traffic processed on PGW per APN.
SYSTEM
P-GW (per APN on MG-ISM card)
TYPE
Load
DESCRIPTION
This trigger provides the total number of octets of uplink egress traffic (SGi interface)
processed per APN on MG-ISM card (GTP-U payload bytes).
RECOMMENDED VALUE
Engineering rule regarding this trigger will be defined in a further release of the document.
MEASUREMENT
PGW_GTP_Uplink_Egress_Octets= VS.PgwGtpUlEgressOctets
ACTION
This trigger can be used to elaborate per APN sent throughput at the P-GW uplink egress
interface (SGi interface).
COMMENTS
Applicable to both FDD&TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_PGW_traf_egress_byte_GTPu_UL_PerAPN (L62747_APN): This indicator provides per
APN the number of octets of uplink egress traffic processed on PGW. Based on
3GPP counter: VS.PgwGtpUlEgressOctets
NPO indicator equivalent to trigger PGW_GTP_Uplink_Egress_Octets
VS_PGW_traf_egress_kbyte_GTPu_UL_PerAPN (L62747_APN_CI): This indicator
provides per APN the number of kilobytes (1000 bytes) of uplink egress traffic processed
per MG-ISM on PGW.
PGW_GTP_Downlink_Egress_Octets
This trigger counts the number of octets of downlink egress traffic processed on PGW per APN.
SYSTEM
P-GW (per APN on MG-ISM card)
TYPE
Load
DESCRIPTION
This trigger provides the total number of octets of downlink egress traffic (GTP-U interface)
processed per APN on MG-ISM card (GTP-U payload bytes).
RECOMMENDED THRESHOLD
Engineering rule regarding this trigger will be defined in a further release of the document.
MEASUREMENT
PGW_GTP_Downlink_Egress_Octets= VS.PgwGtpDlEgressOctets
ACTION
This trigger can be used to elaborate per APN aggregate sent throughput at the P-GW
downlink egress GTP-U interface.
COMMENTS
Applicable to both FDD&TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_PGW_traf_egress_byte_GTPu_DL_PerAPN (L62751_APN): This indicator provides per
APN the number of octets of downlink egress traffic processed on PGW. Based on
3GPP counter: VS.PgwGtpDlEgressOctets
NPO indicator equivalent to trigger PGW_GTP_Downlink_Egress_Octets
VS_PGW_traf_egress_kbyte_GTPu_DL_PerAPN (L62751_APN_CI): This indicator
provides per APN the number of kilobytes (1000 bytes) of downlink egress traffic processed
per MG-ISM on PGW.
PGW_GTP_Uplink_Max_Egress_Traffic_Rate
This trigger provides the maximum Mbps (megabits per second) rate of uplink egress traffic
processed on PGW per APN.
SYSTEM
P-GW (per APN on MG-ISM card)
TYPE
Load, unit is Mbit/s.
DESCRIPTION
This trigger provides the maximum throughput of uplink egress traffic (SGi interface)
processed per APN on MG-ISM card (GTP-U payload bytes).
RECOMMENDED THRESHOLD
Engineering rule regarding this trigger will be defined in a further release of the document.
MEASUREMENT
PGW_GTP_Uplink_Max_Egress_Traffic_Rate= VS.PgwGtpUlMaxMbpsEgressTrafficRate
ACTION
Not available, will be updated in a further release of the document.
COMMENTS
Applicable to both FDD&TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_PGW_traf_egress_thpt_Mbps_GTPu_UL_max_PerAPN(L62750_APN): This indicator
provides per APN the maximum Mbps (1 000 000 bits per second) rate of uplink egress
traffic processed on PGW per APN. Based on 3GPP counter:
VS.PgwGtpUlMaxMbpsEgressTrafficRate.
NPO indicator equivalent to trigger PGW_GTP_Uplink_Max_Egress_Traffic_Rate
VS_PGW_traf_egress_thpt_Mbps_GTPu_UL_avg_PerAPN(L62750_APN_30_CI):This
indicator provides per APN the average Mbps rate of uplink egress traffic processed per
MG-ISM on PGW.
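The maximum and average rate indicators above can be combined into a simple per-APN burstiness check. A minimal sketch, under the assumption that both values refer to the same APN and measurement period:

```python
def peak_to_mean(max_mbps: float, avg_mbps: float) -> float:
    """Peak-to-mean ratio of egress throughput for one APN.

    max_mbps -- VS.PgwGtpUlMaxMbpsEgressTrafficRate for the period
    avg_mbps -- the average Mbps rate over the same period

    A high ratio indicates bursty traffic that average-based
    dimensioning would underestimate.
    """
    if avg_mbps <= 0:
        raise ValueError("average rate must be positive")
    return max_mbps / avg_mbps

# Illustrative: 120 Mbit/s peak against a 40 Mbit/s average
print(peak_to_mean(120.0, 40.0))  # 3.0
```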
PGW_GTP_Downlink_Max_Egress_Traffic_Rate
This trigger provides the maximum Mbps (megabits per second) rate of downlink egress
traffic processed on PGW per APN.
SYSTEM
P-GW (per APN on MG-ISM card)
TYPE
Load, unit is Mbit/s.
DESCRIPTION
This trigger provides the maximum throughput of downlink egress traffic (GTP-U interface)
processed per APN on MG-ISM card (GTP-U payload bytes).
RECOMMENDED THRESHOLD
Engineering rule regarding this trigger will be defined in a further release of the document.
MEASUREMENT
PGW_GTP_Downlink_Max_Egress_Traffic_Rate= VS.PgwGtpDlMaxMbpsEgressTrafficRate.
ACTION
Not available, will be updated in a further release of the document.
COMMENTS
Applicable to both FDD&TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_PGW_traf_egress_thpt_Mbps_GTPu_DL_max_PerAPN(L62754_APN): This indicator
provides per APN the maximum Mbps (1 000 000 bits per second) rate of downlink
egress traffic processed on PGW per APN.
Based on 3GPP counter: VS.PgwGtpDlMaxMbpsEgressTrafficRate.
NPO indicator equivalent to trigger PGW_GTP_Downlink_Max_Egress_Traffic_Rate
PGW_Tx_Application_Messages
This trigger provides the count of messages transmitted to a peer.
SYSTEM
P-GW
TYPE
Cumulative.
DESCRIPTION
This trigger provides the count of messages transmitted to a peer (Gx interface).
RECOMMENDED THRESHOLD
No specific engineering rule related to this trigger.
MEASUREMENT
PGW_Tx_Application_Messages= VS.TxAppMsgs
ACTION
Not available, will be updated in a further release of the document.
COMMENTS
Applicable to both FDD&TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_Gx_AppMsg_sent (L62370_GX): This indicator provides the number of messages
transmitted to a peer. Based on 3GPP counter: VS.TxAppMsgs
NPO indicator equivalent to trigger PGW_Tx_Application_Messages
PGW_Rx_Application_Messages
This trigger counts the number of messages received from a peer.
SYSTEM
P-GW
TYPE
Cumulative.
DESCRIPTION
This trigger provides the count of messages received from a peer (Gx interface).
RECOMMENDED THRESHOLD
No specific engineering rule related to this trigger.
MEASUREMENT
PGW_Rx_Application_Messages= VS.RxAppMsgs
ACTION
Not available, will be updated in a further release of the document.
COMMENTS
Applicable to both FDD&TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_Gx_AppMsg_rcvd(L62371_GX): This indicator provides the number of messages
received from a peer. Based on 3GPP counter: VS.RxAppMsgs
NPO indicator equivalent to trigger PGW_Rx_Application_Messages
In a further release of the document, an inventory of P-GW / S-GW GTP-U echo counters may be
added, as this information can be used to indicate potential problems at the GTP-U interface level.
This section lists the relevant ePC interface counters which are available at the S-GW level.
Please note that additional counters are available at the S4C, S5C, S8C and S11 interface level,
but since these counters only provide the number of received or sent messages, they can hardly
be used in the scope of the LTE Network Capacity Monitoring and Engineering. These counters are
not listed in this document; please refer to “7750 SR OS MG KPI KCI Counters” for additional
information about these counters.
SGW_GTP_Uplink_Bytes
This trigger counts the number of bytes of uplink traffic processed on MG-ISM card.
SYSTEM
S-GW (per MG-ISM card)
TYPE
Cumulative, unit is bytes.
DESCRIPTION
This trigger provides the total number of bytes of uplink traffic processed on MG-ISM card
(GTP-U payload bytes), uplink egress on the S5, S8 interfaces.
RECOMMENDED THRESHOLD
Engineering rule regarding this trigger will be defined in a further release of the document.
MEASUREMENT
SGW_GTP_Uplink_Bytes= VS.sgwGtpUlBytes
ACTION
This trigger can be used to elaborate aggregated sent throughput at the S-GW uplink egress
interface (GTP-U S5, S8 interfaces).
COMMENTS
Applicable to both FDD&TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_traf_processed_OnSGW_byte_UL(L52213): This indicator provides the number of
bytes of uplink traffic processed on MG-ISM card. Based on 3GPP counter:
VS.sgwGtpUlBytes.
NPO indicator equivalent to trigger SGW_GTP_Uplink_Bytes
VS_traf_processed_OnSGW_kbyte_UL (L52213_CI): This indicator provides the number
of kilobytes (1000 bytes) of uplink traffic (processed on MG-ISM card) received from the
peer.
SGW_GTP_Downlink_Bytes
This trigger counts the number of bytes of downlink traffic processed on MG-ISM card.
SYSTEM
S-GW (per MG-ISM card)
TYPE
Cumulative, unit is bytes.
DESCRIPTION
This trigger provides the total number of bytes of downlink traffic processed on MG-ISM card
(GTP-U payload bytes), downlink egress on the S1, S4, S12 interfaces.
RECOMMENDED THRESHOLD
Engineering rule regarding this trigger will be defined in a further release of the document.
MEASUREMENT
SGW_GTP_Downlink_Bytes = VS.sgwGtpDlBytes
ACTION
This trigger can be used to elaborate aggregated sent throughput at the S-GW downlink
egress interface (GTP-U S1, S4, S12 interfaces).
COMMENTS
Applicable to both FDD&TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_traf_processed_OnSGW_byte_DL(L52210): This indicator provides the number of
bytes of downlink traffic processed on MG-ISM card. Based on 3GPP counter:
VS.sgwGtpDlBytes
NPO indicator equivalent to trigger SGW_GTP_Downlink_Bytes
VS_traf_processed_OnSGW_kbyte_DL (L52210_CI): This indicator provides the number
of kilobytes (1000 bytes) of downlink traffic (processed on MG-ISM card) sent to a peer.
SGW_GTP_Uplink_Packets
This trigger counts the number of packets of uplink traffic processed on MG-ISM card.
SYSTEM
S-GW (per MG-ISM card)
TYPE
Cumulative.
DESCRIPTION
This trigger provides the total number of GTP-U packets of uplink egress traffic processed
on MG-ISM card (S5, S8 interfaces).
RECOMMENDED THRESHOLD
No specific engineering rule related to this trigger.
MEASUREMENT
SGW_GTP_Uplink_Packets= VS.sgwGtpUlPackets
ACTION
Not available, will be updated in a further release of the document.
COMMENTS
Applicable to both FDD&TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_traf_processed_OnSGW_pkt_UL (L52214): This indicator provides the number of
packets of uplink traffic processed on MG-ISM card.
Based on 3GPP counter: VS.sgwGtpUlPackets
NPO indicator equivalent to trigger SGW_GTP_Uplink_Packets
SGW_GTP_Downlink_Packets
This trigger counts the number of packets of downlink traffic processed on MG-ISM card.
SYSTEM
S-GW (per MG-ISM card)
TYPE
Cumulative.
DESCRIPTION
This trigger provides the total number of GTP-U packets of downlink egress traffic processed
on MG-ISM card (S1, S4, S12 interfaces).
RECOMMENDED THRESHOLD
No specific engineering rule related to this trigger.
MEASUREMENT
SGW_GTP_Downlink_Packets = VS.sgwGtpDlPackets
ACTION
Not available, will be updated in a further release of the document.
COMMENTS
Applicable to both FDD&TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_traf_processed_OnSGW_pkt_DL (L52211): This indicator provides the number of
packets of downlink traffic processed on MG-ISM card. Based on 3GPP counter:
VS.sgwGtpDlPackets
NPO indicator equivalent to trigger SGW_GTP_Downlink_Packets
This section lists the relevant ePC interface counters which are available at the MME level
(S1-MME interface).
Please note that additional counters are available at the S3, S4, S10, S11, S102, SGs, Sm and Sv
interface level, but since these counters only provide the number of received or sent messages,
they can hardly be used in the scope of the LTE Network Capacity Monitoring and Engineering.
These counters are not listed in this document; please refer to “9471 WMM MME, SGSN and SRS
Applications Observation Counters Reference Guide” for additional information about these
counters.
MME_Total_Bytes_Received_on_S1_Local_IP_x
This trigger counts the total number of kbytes (1000 bytes) received on a provisioned S1-
MME Local IP address.
SYSTEM
MME (MIF)
TYPE
Cumulative (1000 bytes)
DESCRIPTION
This trigger provides the number of kilobytes of application messages received on the S1-MME
local IP address number x, i.e. up to the S1AP layer (SCTP/IP headers are not included). x
ranges from 1 to 9. The 9471 WMM (MME) supports up to 9 S1-MME local IP addresses to
enable the eNB clustering capability (eNBs attached to the MME do not necessarily share the
same local MME IP address).
RECOMMENDED THRESHOLD
Engineering rule regarding this trigger will be defined in a further release of the document.
MEASUREMENT
MME_Total_Bytes_Received_on_S1_Local_IP_x= VS.TotalBytesRcvdS1LocalIP_x
This trigger provides per eNB cluster cumulative number of Kbytes received at S1-MME
interface. An eNB cluster is assigned to a local IP address (x = 1 through 9).
ACTION
This trigger can be used to elaborate received throughput at the MME S1-MME interface.
COMMENTS
Applicable to both FDD&TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_S1MME_S1ProvisionedLocalIPnb1_kbyte_rcvd (L30350_1): This indicator provides
the total number of kbytes (1000 bytes) received on a provisioned S1-MME Local IP
address, local interface number 1. Based on 3GPP counter: VS.TotalBytesRcvdS1LocalIP_1
VS_S1MME_S1ProvisionedLocalIPnb2_kbyte_rcvd (L30350_2): This indicator provides
the total number of kbytes (1000 bytes) received on a provisioned S1-MME Local IP
address, local interface number 2. Based on 3GPP counter: VS.TotalBytesRcvdS1LocalIP_2
MME_Total_Bytes_Sent_on_S1_Local_IP_x
This trigger counts the total number of kbytes (1000 bytes) sent on a provisioned S1-MME
Local IP address.
SYSTEM
MME (MIF)
TYPE
Cumulative (1000 bytes)
DESCRIPTION
This trigger provides the number of kilobytes of application messages sent on the S1-MME
local IP address number x, i.e. up to the S1AP layer (SCTP/IP headers are not included). x
ranges from 1 to 9. The 9471 WMM (MME) supports up to 9 S1-MME local IP addresses to
enable the eNB clustering capability (eNBs attached to the MME do not necessarily share the
same local MME IP address).
RECOMMENDED THRESHOLD
Engineering rule regarding this trigger will be defined in a further release of the document.
MEASUREMENT
MME_Total_Bytes_Sent_on_S1_Local_IP_x= VS.TotalBytesSentS1LocalIP_x
This trigger provides per eNB cluster cumulative number of Kbytes sent at S1-MME interface.
An eNB cluster is assigned to a local IP address (x = 1 through 9).
ACTION
This trigger can be used to elaborate sent throughput at the MME S1-MME interface.
COMMENTS
Applicable to both FDD&TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_S1MME_S1ProvisionedLocalIPnb1_kbyte_sent(L30351_1): This indicator provides
the total number of kbytes (1000 bytes) sent on a provisioned S1-MME Local IP address.
Total number of kbytes sent on a provisioned S1-MME Local IP address local interface
number 1. Based on 3GPP counter: VS.TotalBytesSentS1LocalIP_1
VS_S1MME_S1ProvisionedLocalIPnb2_kbyte_sent(L30351_2): This indicator provides
the total number of kbytes (1000 bytes) sent on a provisioned S1-MME Local IP address.
Total number of kbytes sent on a provisioned S1-MME Local IP address local interface
number 2. Based on 3GPP counter: VS.TotalBytesSentS1LocalIP_2
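The per-local-IP kbyte counters above support a simple aggregate S1-MME throughput estimate across eNB clusters. A minimal sketch, assuming the inputs are kbyte (1000-byte) counter deltas over one period and a 15-minute (900 s) granularity (the figures are illustrative):

```python
def s1_throughput_mbps(kbyte_deltas, period_seconds=900):
    """Aggregate S1-MME throughput in Mbit/s from per-local-IP
    kbyte (1000-byte) counter deltas, e.g. the deltas of
    VS.TotalBytesSentS1LocalIP_1 .. _9 over one period."""
    total_bytes = sum(kbyte_deltas) * 1000
    return total_bytes * 8 / (period_seconds * 1_000_000)

# Illustrative: three eNB clusters sending 450 000 kB in 15 minutes
print(f"{s1_throughput_mbps([200_000, 150_000, 100_000]):.1f} Mbit/s")  # 4.0
```

Keeping the per-local-IP deltas separate instead of summing them also reveals imbalance between eNB clusters.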
MME_Total_Messages_Received_on_S1_Local_IP_x
This trigger counts the total number of messages received on a provisioned S1-MME Local IP
address.
SYSTEM
MME (MIF)
TYPE
Cumulative
DESCRIPTION
This trigger provides the number of application messages received on the S1-MME local IP
address number x. This trigger counts the number of S1AP messages. x ranges from 1 to 9.
The 9471 WMM (MME) supports up to 9 S1-MME local IP addresses to enable the eNB
clustering capability (eNBs attached to the MME do not necessarily share the same local MME
IP address).
RECOMMENDED THRESHOLD
No specific engineering rule related to this trigger.
MEASUREMENT
MME_Total_Messages_Received_on_S1_Local_IP_x= VS.TotalMsgsRcvdS1LocalIP_x
This trigger provides per eNB cluster cumulative number of received S1-MME/S1AP
messages. An eNB cluster is assigned to a local IP address (x = 1 through 9).
ACTION
Not available.
COMMENTS
Applicable to both FDD&TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_S1MME_S1ProvisionedLocalIPnb1_msg_rcvd (L30352_1): This indicator provides the
total number of messages received on a provisioned S1-MME Local IP address.
Total number of messages received on a provisioned S1-MME Local IP address local
interface number 1. Based on 3GPP counter: VS.TotalMsgsRcvdS1LocalIP_1
VS_S1MME_S1ProvisionedLocalIPnb2_msg_rcvd (L30352_2): This indicator provides the
total number of messages received on a provisioned S1-MME Local IP address.
Total number of messages received on a provisioned S1-MME Local IP address local
interface number 2. Based on 3GPP counter: VS.TotalMsgsRcvdS1LocalIP_2
MME_Total_Messages_Sent_on_S1_Local_IP_x
This trigger counts the total number of messages sent on a provisioned S1-MME Local IP
address.
SYSTEM
MME (MIF)
TYPE
Cumulative
DESCRIPTION
This trigger provides the number of application messages sent on the S1-MME local IP
address number x. This trigger counts the number of S1AP messages. x ranges from 1 to 9.
The 9471 WMM (MME) supports up to 9 S1-MME local IP addresses to enable the eNB
clustering capability (eNBs attached to the MME do not necessarily share the same local MME
IP address).
RECOMMENDED THRESHOLD
No specific engineering rule related to this trigger.
MEASUREMENT
MME_Total_Messages_Sent_on_S1_Local_IP_x= VS.TotalMsgsSentS1LocalIP_x
This trigger provides per eNB cluster cumulative number of sent S1-MME/S1AP messages. An
eNB cluster is assigned to a local IP address (x = 1 through 9).
ACTION
Not available.
COMMENTS
Applicable to both FDD&TDD
EXAMPLE
Not available.
NPO INDICATORS
VS_S1MME_S1ProvisionedLocalIPnb1_msg_sent(L30353_1): This indicator provides the
total number of messages sent on a provisioned S1-MME Local IP address.
Total number of messages sent on a provisioned S1-MME Local IP address local interface
number 1. Based on 3GPP counter: VS.TotalMsgsSentS1LocalIP_1
VS_S1MME_S1ProvisionedLocalIPnb2_msg_sent (L30353_2): This indicator provides the
total number of messages sent on a provisioned S1-MME Local IP address.
Total number of messages sent on a provisioned S1-MME Local IP address local interface
number 2. Based on 3GPP counter: VS.TotalMsgsSentS1LocalIP_2
This section lists the relevant ePC interface counters which are available at the PCRF level.
Traffic model assumptions may change due to new features and services, or due to changes in the
market (traffic growth, commercial promotions for existing services, etc.). By changing the traffic
model assumptions, a new trend of the load evolution can be established and a new estimated time
for reaching the load limits can be defined (#3 in the figure below).
[Figure: resource usage (load) trend versus the engineering limit (≤ target QoS)]
Capacity Planning is based on Capacity Monitoring results and commercial inputs, where the
capacity monitoring results are the measured load of a current, real call model and the commercial
inputs are a forecast of future, projected load. Commercial inputs may include but are not limited
to the following:
- medium and long term traffic growth,
- new services/features that will be activated,
- commercial promotions for the existing services.
The main capacity planning difficulty is to convert the commercial inputs into an estimate of traffic
growth and implicit load increase on eUTRAN elements. This load estimation/projection will be
compared with the engineering limit (QoS targets) in order to decide the future action to take
(parameter tuning, new resources, etc.).
The LTE Network Capacity Monitoring & Engineering document does not provide methodology and
processes for evaluating traffic load, forecasting resource usage, and anticipating operational
actions needed to avoid exceeding engineering limits. It is recommended to contact an ALU services
team for this purpose.
Services are available to assist customers in collecting and analyzing network data. The services
teams also make recommendations to enhance the network. Services teams are available for both
the RAN and ePC.
Note that services teams may have access to preliminary information for call models in future
releases and may also have additional information regarding customizing traffic models for specific
customers and commercial offers.
6 APPENDIX A
6.1 AN EXAMPLE OF A TRAFFIC MODEL WITH CSFB
The following example traffic model is an LA6 model that was used in previous editions of this
document. It assumes 50% data cards and 50% smart phones with Circuit Switch Fall Back (CSFB) for
SMS and voice. This traffic model is sometimes referred to as the “CSFB model” and can be used as
a base for developing a model for customers using CSFB. LA6.0 was used as an example whenever a
specific value was needed for a calculation and/or recommendation. Note that the example does
not represent any specific field configuration. For all customer-specific use cases, the example will
require updates to reflect the true traffic metrics of the customer network.
CSFB Voice
From a signaling perspective, a CSFB voice call includes Page (for MT calls), BHCA (Service Request
+ S1 release), and TAU. From a traffic perspective, a CSFB call includes only the period a voice
call stays on the LTE network; its call duration is very short (<1 second). The short voice call
duration has an impact on the aggregate metrics.
CSFB SMS
CSFB SMS messages are delivered similarly to a regular LTE data call. The main difference is that
CSFB SMS are delivered via NAS messaging to/from the MME (which relays the SMS to/from 3G NEs).
In ‘pure LTE’, SMS are delivered via a dedicated bearer for IMS signaling.
The main elements of the call model (with specific values for the example traffic model) are
presented in the following tables. Note that the tables show an example and do not represent any
specific field configuration. The example call model can be used as a basis to build dimensioning
and monitoring strategies for a customer configuration using CSFB. For all customer-specific use
cases, the example in this document will require updates to reflect the true customer network.
TA size (# eNBs): 50
paging-related parameters:
- Terminating calls: This is used in MME and eNB paging rate calculations. It is equal to (BHCA
* %Terminating calls). For CSFB, terminating calls include SMS and voice calls. The
percentage of terminating calls may be different for SMS and voice calls.
- Paging - eNB: This is the number of paging events per subscriber per hour at a fully loaded
eNB. Its value is higher than ‘Terminating calls’ because the MME pages multiple eNBs for
each terminating call at the MME. The Alcatel-Lucent estimated value in this example is based
on the MME paging implementation in LE6.0.
The data call duration includes one or more “On” periods (transactions) where UE
sends/receives data, and a final period of dormancy just before the connection is released. In
the Alcatel-Lucent Call model, this dormancy period is considered to be 10 sec.
This dormancy period is controlled through the Traffic Based Context Release functionality
configured at eNB level. This functionality automatically disconnects dormant users after a
preconfigured UE inactivity timer (timeToTrigger timer), allowing optimal eNB resource
usage. For Traffic Based Context Release configuration parameter details, please see Volume 5
in [C1].
The Traffic Based Context Release functionality can be activated in the eNB through the OAM
parameter isTrafficBasedContextReleaseAllowed (in ActivationService MO). In LTE
networks handling commercial traffic it is strongly recommended to set this parameter to
“true”.
The OAM parameter that controls the dormancy period in the eNB is timeToTrigger (in
TrafficBasedReleaseConf MO). In order to ensure a good balance between User plane and
Control plane load, it is strongly recommended not to go below a value of 10000 ms for this
parameter (equivalent to a dormancy period of 10 s). For details of this parameter and its
LE6.0 default value, please refer to [C1] and [R1].
The Duty Cycle parameter is defined as the ratio of on-time to call duration, where on-time is
the period during which the UE is in a transaction having some data to be sent or received.
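The Duty Cycle definition above can be sketched directly; the sample values are illustrative, using the 10 s dormancy period assumed in the Alcatel-Lucent call model:

```python
def duty_cycle(on_time_s: float, call_duration_s: float) -> float:
    """Duty Cycle = on-time / call duration, where on-time is the
    period during which the UE has data to send or receive."""
    if not 0 < on_time_s <= call_duration_s:
        raise ValueError("on-time must be positive and within the call duration")
    return on_time_s / call_duration_s

# Illustrative: 20 s of activity followed by the 10 s dormancy period
# (30 s total call duration)
print(f"{duty_cycle(20, 30):.2f}")  # 0.67
```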
The Control Plane parameters are counted in number of procedures (Attach, Detach,…) per
subscriber and per Busy Hour.
The following table provides a further breakdown, by call type, of the general and user plane
parameters from Table 6-1.
Alcatel-Lucent projects that user traffic in LE6.0 deployments will be primarily Best Effort data.
However, the Alcatel-Lucent LE6.0 End-to-end network solution supports the other call types listed
(VoIP, SMS, GBR data), in addition to Best Effort data.
Table 6-3 provides some additional dimensioning parameters for subscriber loading and data
usage per cell.
Table 6-4 shows the detailed call model for data traffic in the example traffic model. In this model
the CSFB SMS are not included in the Small message LTE category.
Application Category:             Download | Upload | Web Browsing | Email | Voice non-GBR | GBR Streaming1 | Online Gaming2 | Small message without CSFB SMS
Data Volume per call DL (kBytes):     1219 |   2.23 |          356 |  70.5 |            96 |          14506 |           5526 | 0.28
Data Volume per call UL (kBytes):     0.74 |    766 |          2.7 |  50.2 |            96 |           1668 |           2974 | 0.27
Packet Size DL (Bytes):                608 |    351 |         1307 |  1266 |            35 |           1099 |            236 | 155
Packet Size UL (Bytes):                 74 |    350 |          117 |   496 |            35 |            857 |            127 | 150
Table 6-4: Detailed Data Call Model for the Example Traffic Model
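The per-call data volume and average packet size in Table 6-4 together imply a packets-per-call figure, which is what the packet-based counters in earlier sections actually observe. A minimal sketch (the Download DL example uses the table's figures; note 1 kByte = 1000 bytes in this call model):

```python
def packets_per_call(volume_kbytes: float, packet_size_bytes: float) -> float:
    """Average number of packets per call implied by the call model:
    per-call data volume (kBytes, 1 kByte = 1000 bytes) divided by
    the average packet size (bytes)."""
    if packet_size_bytes <= 0:
        raise ValueError("packet size must be positive")
    return volume_kbytes * 1000 / packet_size_bytes

# Download application, DL direction (Table 6-4): 1219 kB per call,
# 608-byte average packets
print(round(packets_per_call(1219, 608)))  # 2005
```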
As the Paging and TAU procedures are handled by several LTE Network elements (eNB & MME), the
capacity impact and induced load will be analyzed separately in different sections of the document
(eNB part and MME part).
When a UE is in idle state, its location may not be exactly known. A Tracking Area (TA) represents an
area in which the UE was last registered, and it is necessary to page the UE in the TA to locate the
UE in a particular eNodeB. A TA update (TAU) is generated when the UE crosses the boundary from
one TA to another TA.
The LTE system has defined two TA operational schemes for 3GPP CSFB: the “multiple-TA
registration” scheme and the “overlapping TA” scheme. (This does not apply to 1x CSFB or eCSFB.)
With multiple-TA registration, the MME sends to the UE a TA List containing the current TA and one
or several neighbor TAs. The UE will request a TAU only when it moves to a TA that is not in the
current list. This avoids ping-pong scenarios at the TA border areas. The MME will automatically
include the previous (old) Last Seen TA as a neighbor in the TA list and is capable of updating the
TA List automatically in order to track cyclic movement between 2 and 3 TAs. There is no need to
provision/configure TA neighbor lists in the MME. On average, it is estimated that the TA list size
is ~1.2 TAs.
In the overlapping scheme, the eNodeB is configured to broadcast more than one TA.
The following paging strategy can be used as an example of a realistic strategy for the example
traffic model described in Section 6.1. MME supports different paging methods for CSFB Voice, CSFB
SMS and (LTE) data calls.
For CSFB Voice calls: 2 pages, where both will be TA list due to the delay-sensitive nature of
voice calls.
For CSFB SMS calls: 2 pages, where the first is Last Seen eNB and the second is TA list.
For non-CSFB data calls: 3 pages, where the first is Last Seen eNB, the second is Last Seen TA,
and the third is TA list.
Note that this strategy is similar to what was seen in the field in LE6.0 and is not the recommended
paging strategy. Please see recommendation below.
For Best Effort data applications, the recommended paging strategy is:
For CSFB voice calls (applicable to 3GPP only for paging), the recommended paging strategy
is:
1. Last Seen eNB List (List Size = 5 eNBs)
For CSFB SMS calls (applicable to 3GPP only for paging), the recommended paging strategy
is:
Figure 6.1-1: MME Paging Strategy Recommended for non-CSFB (Data Traffic) Mobile in LE6.0
The page response rate assumptions for each of the paging methods are as follows:
Page Response Assumption | Paging Strategy | Success rate after 1st page attempt | Cumulative success rate after 2nd page attempt | Cumulative success rate after 3rd page attempt
CSFB Voice | TA list, TA list | 93% | 95% | -
Based on the above paging strategy, the paging response rates and the TA/TA list sizes (see Table
6-1), the eNBs paged per MME terminating call can be calculated as follows:
eNBs paged / MME terminating Data call = 1 (for 1st paging attempt)
+ numberEnbsTA * ( 1 - SuccessRateAfter1stPage ) (for 2nd paging attempt)
+ numberEnbsTAList * ( 1 - SuccessRateAfter2ndPage ) (for 3rd paging attempt)
Example with TA size = 50 eNBs and TA list size = 60 eNBs on average for all subscribers in the
entire network:
- For all data calls:
eNBs paged / MME term. Data call = 1 + 50 * 15% + 60 * 8% ≈ 13.3 eNBs.
eNBs paged / MME terminating CSFB SMS call = 1 (for 1st paging attempt)
+ numberEnbsTAList * ( 1 - SuccessRateAfter1stPage ) (for 2nd paging attempt)
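The arithmetic above can be sketched as a short calculation (an illustrative sketch only; the 15% and 8% miss rates and the 50/60 eNB list sizes are the example values from the text, not normative figures):

```python
# Expected number of eNBs paged per MME-terminating call, per the
# dimensioning formulas above. Miss rates are (1 - cumulative
# success rate) after the preceding paging attempt.

def enbs_paged_data_call(ta_size, ta_list_size, miss_after_1st, miss_after_2nd):
    """3-page data strategy: last seen eNB, then last seen TA, then TA list."""
    return 1 + ta_size * miss_after_1st + ta_list_size * miss_after_2nd

def enbs_paged_csfb_sms(ta_list_size, miss_after_1st):
    """2-page CSFB SMS strategy: last seen eNB, then TA list."""
    return 1 + ta_list_size * miss_after_1st

# Example values from the text: TA = 50 eNBs, TA list = 60 eNBs
print(round(enbs_paged_data_call(50, 60, 0.15, 0.08), 1))  # -> 13.3
```

This reproduces the ~13.3 eNBs per terminating data call worked out above: 1 eNB on the first attempt, plus 50 × 15% on the second, plus 60 × 8% on the third.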
7 APPENDIX B
7.1 ABBREVIATIONS
All terms, definitions and abbreviations used in the present document that are common across 3GPP
TSs are defined in the 3GPP Vocabulary. For the purposes of the present document, the
abbreviations given in the following apply:
ACK Acknowledgement
ACLR Adjacent Channel Leakage Ratio
AM Acknowledged Mode
AMBR Aggregate Maximum Bit Rate
ANR Automatic Neighbor Relation
ARQ Automatic Repeat Request
AS Access Stratum
BCCH Broadcast Control Channel
BCH Broadcast Channel
BSR Buffer Status Report
C/I Carrier-to-Interference Power Ratio
CAZAC Constant Amplitude Zero Auto-Correlation
CMC Connection Mobility Control
CM Configuration Management
CP Cyclic Prefix
C-plane Control Plane
C-RNTI Cell RNTI
CQI Channel Quality Indicator
CRC Cyclic Redundancy Check
CSG Closed Subscriber Group
DCCH Dedicated Control Channel
DDT Dynamic Debug Trace
DL Downlink
DFTS DFT Spread OFDM
DPI Deep Packet Inspection
DTCH Dedicated Traffic Channel
DRB Data Radio Bearer
DRX Discontinuous Reception
DTX Discontinuous Transmission
DwPTS Downlink Pilot Time Slot
ECGI E-UTRAN Cell Global Identifier
ECM EPS Connection Management
eHRPD Enhanced High Rate Packet Data
EMM EPS Mobility Management
eNB E-UTRAN NodeB
EPC Evolved Packet Core
ePDG Evolved Packet Data Gateway
EPS Evolved Packet System
E-RAB E-UTRAN Radio Access Bearer
ETWS Earthquake and Tsunami Warning System
E-UTRA Evolved UTRA
E-UTRAN Evolved UTRAN
FDD Frequency Division Duplex
FDM Frequency Division Multiplexing
FM Fault Management
GERAN GSM EDGE Radio Access Network
GNSS Global Navigation Satellite System
GSM Global System for Mobile communication (Groupe Spécial Mobile)
GBR Guaranteed Bit Rate
GP Guard Period
GUTI Globally Unique Temporary Identifier
HARQ Hybrid ARQ
HO Handover
HRPD High Rate Packet Data
HSDPA High Speed Downlink Packet Access
ICIC Inter-Cell Interference Coordination
IP Internet Protocol
LB Load Balancing
LCR Low Chip Rate
LTE Long Term Evolution
MAC Medium Access Control
MBMS Multimedia Broadcast Multicast Service
MBR Maximum Bit Rate
MBSFN Multimedia Broadcast multicast service Single Frequency Network
MCCH Multicast Control Channel
MCE Multi-cell/multicast Coordination Entity
MCH Multicast Channel
MCS Modulation and Coding Scheme
MIB Master Information Block
MIMO Multiple Input Multiple Output
MME Mobility Management Entity
MTCH MBMS Traffic Channel
MSAP MCH Subframe Allocation Pattern
N.A. Not Applicable
NACK Negative Acknowledgement
NAS Non-Access Stratum
NCC Next Hop Chaining Counter
NGMN Next Generation Mobile Networks
NH Next Hop key
NR Neighbor cell Relation
NRT Neighbor Relation Table
N.S Not Significant
OLC One Logical Cell
OFDM Orthogonal Frequency Division Multiplexing
OFDMA Orthogonal Frequency Division Multiple Access
OMC Operations and Maintenance Center
OOT Out Of Time Alignment
P-GW PDN Gateway
P-RNTI Paging RNTI
PA Power Amplifier
PAPR Peak-to-Average Power Ratio
PBCH Physical Broadcast CHannel
PBR Prioritized Bit Rate
PCCH Paging Control Channel
PCFICH Physical Control Format Indicator CHannel
PCH Paging Channel
PCI Physical Cell Identifier
PDCCH Physical Downlink Control CHannel
PDSCH Physical Downlink Shared CHannel
SU Scheduling Unit
TA Tracking Area
TB Transport Block
TCP Transmission Control Protocol
TDD Time Division Duplex
TFT Traffic Flow Template
TM Transparent Mode
TNL Transport Network Layer
TTI Transmission Time Interval
UE User Equipment
UL Uplink
UM Unacknowledged Mode
UMTS Universal Mobile Telecommunication System
U-plane User plane
UTRA Universal Terrestrial Radio Access
UTRAN Universal Terrestrial Radio Access Network
UpPTS Uplink Pilot Time Slot
UPOS Unified Procedure Oriented Selective (Trace Service)
VRB Virtual Resource Block
X2-C X2-Control plane
X2-U X2-User plane
END OF DOCUMENT