BSS TRAFFIC MODEL AND CAPACITY: PS PART
RELEASE B10

System: Alcatel-Lucent GSM/GPRS/EDGE BSS
Sub-system: SYS-TLA
Document Category: Product Definition
Originators: F. Mercier
Site: Mobile Access Division, VELIZY
Document reference: 3BK 11203 0131 DSZZA, Ed. 01 Released

All rights reserved. Passing on and copying of this document, use and communication of its contents not permitted without written authorization from Alcatel-Lucent.

ABSTRACT
This document presents a simple traffic model for PS services. This model is then used to estimate the capacity of the Alcatel BSS with GPRS and EDGE, in the scope of the B10 software release.

This document is Alcatel internal and shall not be shown to customers.

Approvals
S. BARRE, SYT Team Manager
E. ZORN, PL MFS
L. KEUER, DPM
M. TITIN-SCHNAIDER, B10 DPM
SONG J., BSC DPM
J.P. GRUAU, PM

REVIEW
Ed. 01 Proposal 01 2007/11/16 Review report: GSM-Wimax/R&D/SYT/FME/207271
Ed. 01 Proposal 02 2008/01/14 Review report: MOAD/GSM/R&D/SYT/FME/208062
HISTORY

Ed. 01 Proposal 01 2007/03/5 Based on B9 version of the document (3BK 11203 0115 DSZZA Ed.
02).
It takes into account the following B10 feature:
WRR (Weighted Round Robin).
Only Best-Effort traffic is considered (DTM, GBR and EDA are not
considered in this proposal).
Ed. 01 Proposal 02 2007/11/16 Compared to Edition 1 Proposal 1, the following impacts are
taken into account:
Impact of the feature Gb over Ip at DSP and PPC level.
Impact of the feature DTM on the GSL link. The impact of
this feature on PPC is not studied.
Ed. 01 Released 2008/01/14 Compared to Edition 1 Proposal 2, the following impacts are
taken into account:
Comparison of MR2 performance with Gb over FR and Gb
over IP (iso software), for legacy only.
Addition of a RA Update profile, which takes into account
GPRS attached MS, without on-going PS data traffic.
For the GP case, reduction of the number of GCHs per E1 link.

TABLE OF CONTENTS

1 SCOPE AND DEFINITIONS ............................................................................................ 13


1.1 Scope of the document........................................................................................ 13
1.2 Definitions ....................................................................................................... 13
1.2.1 Load model ................................................................................................ 13
1.2.2 Traffic model .............................................................................................. 13

2 BSS CONFIGURATION ................................................................................................. 14


2.1 MFS configuration description ............................................................................... 14
2.1.1 MFS inside the PLMN ...................................................................................... 14
2.1.2 GPU Board Architecture ................................................................................. 15
2.1.3 GP Board Architecture ................................................................................... 17
2.2 BSC Configuration description ............................................................................... 18
2.2.1 G2 BSC configurations.................................................................................... 18
2.2.2 BSC Evolution configurations............................................................................ 19

3 GENERAL OVERVIEW OF B10 IMPACTS ON THE BSS ............................................................ 21


3.1 Weighted Round Robin (WRR)................................................................................ 21
3.2 Dual Transfer Mode (DTM) .................................................................................... 21
3.3 Extended Dynamic Allocation (EDA) ........................................................................ 21
3.4 Gb over IP (GboIP) .............................................................................................. 22

4 GPU / GP PROCESSING CAPACITY .................................................................................. 23


4.1 GPU / GP resources............................................................................................. 23
4.2 DSP capacity ..................................................................................................... 24
4.2.1 DSP memory capacity .................................................................................... 24
4.2.2 DSP processing capacity ................................................................................. 27
4.3 PPC Capacity..................................................................................................... 31
4.3.1 PPC memory ............................................................................................... 31
4.3.2 PPC processing capacity ................................................................................. 31
4.3.3 How to define and measure PPC capacity ............................................................ 35
4.3.4 Maximum PPC Ping rate.................................................................................. 36
4.3.5 Maximum PPC DL LLC rate .............................................................................. 37
4.3.6 Maximum PPC UL LLC rate .............................................................................. 38
4.4 GPU / GP Throughput objectives ............................................................................ 38
4.4.1 General ..................................................................................................... 38
4.4.2 Maximum GPU Throughput (B10 legacy)................................................ 39
4.4.3 Maximum GP Throughput (B10 Mx) ...................................................... 43
4.4.4 Conclusion and recommendation....................................................................... 46
4.5 Main figures for the GPU / GP ................................................................................ 47

5 TRAFFIC MODEL ....................................................................................................... 51


5.1 Theoretical model for PS services .......................................................................... 51

5.2 Simplification and specific needs to predict the GPU / GP capacity ................................. 51

5.2.1 Session description ....................................................................................... 51



5.2.2 Transaction description.................................................................................. 52


5.2.3 EDGE and GPRS users..................................................................................... 52
5.2.4 User behavior at Busy Hour ............................................................................. 52
5.3 Profile definitions and mix of profiles ..................................................................... 53
5.3.1 WEB browsing.............................................................................................. 53
5.3.2 WAP ......................................................................................................... 54
5.3.3 Multi Media Messaging Services (MMS)................................................................. 55
5.3.4 FTP transfer over TCP/IP ................................................................................ 56
5.3.5 File Downlink transfer over UDP........................................................................ 56
5.3.6 Signalling ................................................................................................... 56
5.4 Mix of profiles ................................................................................................... 58
5.5 User behavior at the busy hour.............................................................................. 58
5.6 Parameter values for all profiles and mix ................................................................. 58
5.6.1 Computations.............................................................................................. 58
5.6.2 Session description for all profiles ..................................................................... 58
5.6.3 Transaction description for all profiles ............................................................... 58
5.6.4 User behaviors at busy hour............................................................................. 60
5.6.5 Mix of profiles: definition of an average user ........................................................ 60

6 APPLICATION OF PS TRAFFIC MODEL TO THE BSS.............................................................. 62


6.1 GPU / GP and BSC role ......................................................................................... 62
6.2 GPU / GP dimensioning data.................................................................................. 62
6.2.1 DSP role .................................................................................................... 62
6.2.2 PPC role .................................................................................................... 63
6.2.3 Relevant outputs.......................................................................................... 63
6.3 PPC capacity with all profiles and traffic mix for B10 legacy ......................................... 64
6.3.1 PPC performance for GPU2.............................................................................. 64
6.3.2 PPC performance for GPU3.............................................................................. 72
6.4 PPC capacity with all profiles and traffic mix for B10 Mx.............................................. 78
6.4.1 GPRS case .................................................................................................. 78
6.4.2 EDGE case .................................................................................................. 79
6.4.3 Maximum throughput..................................................................................... 80
6.4.4 Maximum number of established MS................................................................... 80
6.4.5 GSL Load ................................................................................................... 80
6.4.6 TBF establishment, LLC PDU transmission per hour and PPC Throughput ....................... 81

7 GSL CAPACITY ......................................................................................................... 85


7.1 GSL Configuration .............................................................................................. 85
7.2 GSL load model for one GPU / GP ........................................................................... 85
7.3 GSL load estimation (worst case)............................................................................ 86
7.4 GSL load model for up to 6 GPU / GP ....................................................................... 89

8 GLOSSARY .............................................................................................................. 92

9 ANNEX................................................................................................................... 95
9.1 Message routes through BSS .................................................................................. 95
9.2 Protocol Layers and message sizes ......................................................................... 96
9.3 Traffic model formula ......................................................................................... 97
9.4 CPU load formula .............................................................................................. 100
9.5 BSC processing capacity for PS services .................................................................. 101
9.5.1 BSC board presentation ................................................................................ 101
9.5.2 Main assumptions ....................................................................................... 101
9.5.3 DTC boards............................................................................................... 102
9.5.4 TCU Board................................................................................................ 106
9.5.5 Main figures for the BSC ............................................................................... 109

TABLE OF FIGURES

Figure 1: MFS position inside the PLMN.................................................................................14


Figure 2: GPU boards inside the MFS....................................................................................15
Figure 3: DSP and PPC inside the GPU board ..........................................................................15
Figure 4: DSP and PPC connectivity to the internal switch, GPU...................................................16
Figure 5: General Mx-MFS architecture .................................................................................17
Figure 6: General architecture of the GP board ......................................................................18
Figure 7: Mapping of BSC G2 Control Element on BSC Evolution Boards ..........................................19
Figure 8: BSS GPRS functional description .............................................................................23
Figure 9: Typical characteristic of a packet service session ........................................................51
Figure 10: Number of TBF established in DL and UL per hour for GPU2 (GPRS case) ...........................69
Figure 11: Number of TBF established in DL and UL per hour for GPU2 (EGPRS case)..........................69
Figure 12: Number of LLC PDU transmitted in DL and UL per hour for GPU2 (GPRS case) .....................70
Figure 13: Number of LLC PDU transmitted in DL and UL per hour for GPU2 (EGPRS case)....................70
Figure 14: DL PPC Throughput in kbit/s for GPU2 (GPRS case) .....................................................71
Figure 15: DL PPC Throughput in kbit/s for GPU2 (EGPRS case)....................................................71
Figure 16: Number of TBF established in DL and UL per hour for GPU3 (GPRS case) ...........................75
Figure 17: Number of TBF established in DL and UL per hour for GPU3 (EGPRS case)..........................76
Figure 18: Number of LLC PDU transmitted in DL and UL per hour for GPU3 (GPRS case) .....................76
Figure 19: Number of LLC PDU transmitted in DL and UL per hour for GPU3 (EGPRS case)....................77
Figure 20: DL PPC Throughput in kbit/s for GPU3 (GPRS case) .....................................................77
Figure 21: DL PPC Throughput in kbit/s for GPU3 (EGPRS case)....................................................78
Figure 22: Number of TBF established in DL and UL per hour for GP (GPRS case) ..............................81
Figure 23: Number of TBF established in DL and UL per hour for GP (EGPRS case) .............................82
Figure 24: Number of LLC PDU transmitted in DL and UL per hour for GP (GPRS case) ........................82
Figure 25: Number of LLC PDU transmitted in DL and UL per hour for GP (EGPRS case).......................83
Figure 26: DL PPC Throughput in kbit/s for GP (GPRS case) ........................................................83
Figure 27: DL PPC Throughput in kbit/s for GP (EGPRS case).......................................................84
Figure 28: Message routes: PDU Uplink / Downlink...................................................................95
Figure 29: Message routes: Paging.......................................................................................95
Figure 30: Message routes: Signalling ...................................................................................95
Figure 31: Message routes: Resource management...................................................................96

TABLE OF TABLES

Table 1: G2 BSC configurations ..........................................................................................19


Table 2: BSC Evolution configurations ..................................................................................20
Table 3: Dimensioning parameters for the DSP .......................................................................24
Table 4: Dimensioning parameters for DSP memory..................................................................25
Table 5: External memory usage in DSP for GPU2 ....................................................................25
Table 6: External memory usage in DSP for Mx .......................................................................26
Table 7: DSP internal memory mapping for GPU2 ....................................................................26
Table 8: Dimensioning parameters for the DSP .......................................................................28
Table 9: RLC data size and needed GCH for each coding scheme (GPRS and EGPRS)...........................29
Table 10: Maximum number of PDCHs per DSP and per GPU versus coding (legacy)............................29
Table 11: Maximum number of PDCHs per DSP and per GP versus (M)CS for GboFR ............................30
Table 12: Maximum number of PDCHs per DSP and per GP versus (M)CS for GboIP .............................30
Table 13: PPC memory requirements for data and variables for GPU2 ...........................................31
Table 14: CPU margin to be added in PPC for GPU2 and GPU3 .....................................................33
Table 15: CPU margin to be added in PPC for GP (MR1 and MR2) ..................................................33
Table 16: CPU cost model for PPC for GPU2, GPU3 and GP for B10 MR1 (with GboFR).........................34
Table 17: CPU cost model for PPC for GPU2 and GPU3 for B10MR2 (with GboFR)...............................34
Table 18: CPU cost model for PPC for GPU2, GPU3 and GP for B10 MR2 (with GboIP) .........................34
Table 19: Maximum PPC Ping rate for GPU2/GPU3/GP (MR1 and MR2) ...........................................37
Table 20: Maximum PPC DL LLC rate for GPU2/GPU3/GP (MR1 and MR2) ........................................37
Table 21: Maximum PPC UL LLC rate for GPU2/GPU3/GP (MR1 and MR2) ........................................38
Table 22: Maximum throughput per PDCH (kbit/s) ...................................................................39
Table 23: Maximum GPU Throughput (GPRS and EGPRS, perfect propagation) ..................................39
Table 24: Maximum GPU Throughput (GPRS and EGPRS, with radio propagation) ..............................40
Table 25: Maximum DL PPC Throughput depending on LLC PDU length (GPU2/GPU3) .........................40
Table 26: Expected average LLC PDU length ..........................................................................41
Table 27: GPU Throughput at PPC nominal load with LLC length of 460 bytes ..................................42
Table 28: GPU Throughput at PPC nominal load with LLC length of 250 bytes ..................................43
Table 29: Maximum GP Throughput (GPRS and EGPRS, perfect propagation)....................................44
Table 30: Maximum GP Throughput (GPRS and EGPRS, with radio propagation) ................................44
Table 31: Maximum DL PPC throughput depending on LLC PDU length (GP) .....................................45
Table 32: GP throughput at PPC nominal load with LLC length of 460 bytes.....................................45
Table 33: GP throughput at PPC nominal load with LLC length of 250 bytes.....................................45

Table 34: (M)CS Distribution..............................................................................................47

Table 35: Maximum number of PDCHs versus EGPRS penetration rate ............................................47
Table 36: DL LLC average data length for WEB if N201 = 576 bytes ...............................................54
Table 37: DL LLC average data length for WEB if N201 = 800 bytes ...............................................54
Table 38: DL LLC average data length for WEB if N201 = 1500 bytes..............................................54
Table 39: Examples of MMS information type .........................................................................55
Table 40: Signalling messages at start of session .....................................................................57
Table 41: Session description for all data profiles....................................................................58
Table 42: Transaction description at GPU / GP level with GPRS mobiles (MS always attached) ..............59
Table 43: Modified transaction parameters with EDGE mobiles ....................................................59
Table 44: User behavior at busy hour per profile (GPRS case)......................................................60
Table 45: Modified parameters for user behavior for EDGE mobiles...............................................60
Table 46: Alcatel mix of profile for BSS ................................................................................61
Table 47: Average user behavior at Busy Hour ........................................................................61
Table 48: PPC capacity at 75% CPU load with GPU2 for GPRS (B10 MR1 GboFR)...............................65
Table 49: PPC capacity at 75% CPU load with GPU2 for GPRS (B10 MR2 GboFR)...............................65
Table 50: PPC capacity at 75% CPU load with GPU2 for GPRS (B10 MR2 GboIP) ...............................66
Table 51: PPC capacity at 75% CPU load with GPU2 for EGPRS (B10 MR1 GboFR) .............................66
Table 52: PPC capacity at 75% CPU load with GPU2 for EGPRS (B10 MR2 GboFR) .............................67
Table 53: PPC capacity at 75% CPU load with GPU2 for EGPRS (B10 MR2 GboIP) ..............................67
Table 54: PPC capacity at 75% CPU load with GPU3 for GPRS (B10 MR1 GboFR)...............................72
Table 55: PPC capacity at 75% CPU load with GPU3 for GPRS (B10 MR2 GboFR)...............................73
Table 56: PPC capacity at 75% CPU load with GPU3 for GPRS (B10 MR2 GboIP) ...............................73
Table 57: PPC capacity at 75% CPU load with GPU3 for EGPRS (B10 MR1 GboFR) .............................74
Table 58: PPC capacity at 75% CPU load with GPU3 for EGPRS (B10 MR2 GboFR) .............................74
Table 59: PPC capacity at 75% CPU load with GPU3 for EGPRS (B10 MR2 GboIP) ..............................74
Table 60: PPC capacity at 75% CPU load with GP for GPRS (B10 MR1 GboFR) ..................................79
Table 61: PPC capacity at 75% CPU load with GP for GPRS (B10 MR2 GboIP)...................................79
Table 62: PPC capacity at 75% CPU load with GP for EGPRS (B10 MR1 GboFR) ................................80
Table 63: PPC capacity at 75% CPU load with GP for EGPRS (B10 MR2 GboIP) .................................80
Table 64: BSCGP hypothesis and LAPD overhead......................................................................87
Table 65: Size of messages between MFS and BSC....................................................................87
Table 66: Number of BSCGP messages between MFS and BSC ......................................................88
Table 67: Assumptions for BSC G2 and BSC Evolution ................................................................88
Table 68: Occurrences per second for the available BSC / MFS hardware........................................88

Table 69: GSL data load for one GPU / GP (without location services) ...........................................89

Table 70: Number of messages per second exchanged on GSL link for GPU / GP ...............................89
Table 71: Data load on the GSL for up to 6 GPU (BSC G2 and MFS GPU2) ........................................90
Table 72: Data load on the GSL for up to 6 GPU (BSC G2 and MFS GPU3) ........................................90
Table 73: Data load on the GSL for up to 6 GPU (BSC G2 and MFS GP)............................................90
Table 74: Data load on the GSL for up to 6 GPU (BSC Evolution and MFS GPU2) ................................90
Table 75: Data load on the GSL for up to 6 GPU (BSC Evolution and MFS GPU3) ................................91
Table 76: Data load on the GSL for up to 6 GPU (BSC Evolution and MFS GP)....................................91
Table 77: CPU load of GSL DTC (1 GPU) .............................................................................. 102
Table 78: Data load on the GSL for up to 6 GPU (without Location services) .................................. 103
Table 79: GPRSAP load on BSSAP-DTC ................................................................................ 103
Table 80: Added load on BSSAP-DTC for up to 6 GPU on BSC type 6............................................. 104
Table 81: Added load on TCH-RM DTC for one GPU............................................................... 104
Table 82: Added load on TCH-RM DTC for one GPU depending on BSC type.................................. 104
Table 83: TCH-RM DTC load, multi-GPU case (type 6 BSC) ...................................................... 105
Table 84: PS TRKDH CPU cost for one GPU (EGPRS penetration rate of 30%) .................................. 105
Table 85: Added load on DTC for TRKDH for one GPU (EGPRS penetration rate of 30%) ..................... 106
Table 86: DTC load on a type 6 BSC with 3 GPU and EGPRS penetration rate of 30%......................... 106
Table 87: DTC load on a type 6 BSC with 6 GPU and EGPRS penetration rate of 30%......................... 106
Table 88: Cost processing model in TCU ............................................................................. 107
Table 89: Traffic hypotheses applied to the cell ................................................................... 107
Table 90: PS TCU load for CCCH and SDCCH ......................................................................... 108
Table 91: PS TCU load for TCHs........................................................................................ 109

REFERENCED DOCUMENTS
Alcatel references
[1] 3BK 11203 0443 DSZZA FBS GPRS Radio Interface RRM Sub-layer (PRH)
[2] 3BK 11202 0441 DSZZA FBS GPRS Radio Interface RRM Sub-layer (PCC)
[3] 3BK 11202 0458 DSZZA FBS GPRS Gb interface BSSGP Layer
[4] 3BK 11202 0460 DSZZA FBS GPRS MFS-BSC interface BSCGP Layer
[5] 3BK 11202 0450 DSZZA FBS GPRS Radio Interface RLC Layer
[6] 3BK 11202 0462 DSZZA FBS GPRS Radio Interface MAC Layer
[7] 3BK 11202 0467 DSZZA FBS GPRS MFS/BTS Interface M-EGCH stack
[8] 3BK 11203 0134 DSZZA BSS telecom parameters
[9] 3BK 11202 0445 DSZZA Resource Allocation and Management
[10] 3BK 11203 0132 DSZZA LCS and (E)GPRS telecom presentation
[11] 3BK 11202 0428 DSZZA GPU overload control and CPU power budget
[12] 3BK 11203 0115 DSZZA BSS Traffic Model and Capacity: PS part B9

[13] 3BK 11203 0142 DSZZA BSS telecom performance objectives, traffic model and
capacity: CS part
[14] 3BK 11203 0124 DSZZA Alcatel Radio System Architecture
[15] 3BK 11203 0128 DSZZA Transmission Functional Specification

3GPP references
[16] 3GPP TS 23.060 GPRS Service description Stage 2
[17] 3GPP TS 03.64 GPRS Overall description of the GPRS radio interface Stage 2
[18] 3GPP TS 04.60 Radio Link Control / Medium Access Control (RLC/MAC) protocol
[19] 3GPP TS 04.64 LLC Layer Specification
[20] 3GPP TS 08.14 BSS SGSN interface; Gb interface Layer 1

[21] 3GPP TS 24.008 Mobile radio interface layer 3 specification, Core Network
Protocols Stage 3
[22] 3GPP TS 08.18 BSS SGSN interface; BSS GPRS Protocol (BSSGP)
[23] 3GPP TS 08.16 BSS SGSN Network Service

RELATED DOCUMENTS
[24] IEEE P802.20 TM/PD Traffic Model for IEEE 802.20 MBWA System Simulations
[25] http://www.comnets.rwth-aachen.de/~pst A WAP Traffic Model and its appliance for the Performance analysis of WAP over GPRS. RWTH Aachen University of Technology, Germany

[26] Report No. 261, June 2000 Source Traffic Modeling of Wireless Applications. D. Staehle, K. Leibnitz, and P. Tran-Gia. Lehrstuhl für Informatik III, Universität Würzburg
[27] Proceedings of the 16th ITC, Edinburgh, June 1999 A page oriented WWW traffic model for wireless system simulations. A. Reyes-Lecuona, E. Gonzalez-Parada, E. Casilari, J.C. Casasola

[28] UTR/C/02/0027 Evaluation of Extended Uplink TBF release performances. Laurent Demerville (Alcatel MND)
[29] 3BL 23888 UDZZA ALCATEL Reference Traffic Model GPRS NSS Release 2.2
[30] TD/SYT/AFR/202.255 ed2 LCS Traffic Model and BSS impacts for B8
[31] TD/SYT/AFR/0169.2002 B8: HSDS impacts on GPU dimensioning
[32] UTR/01/C/0005 R&I study final report for EDGE Phase 1; Performance of Acknowledged Mode in EGPRS ARQ Mechanism Version 1 (without LA)

PREFACE

This document is targeted at readers who already have a good understanding of PS services in previous releases. General information on this subject can be found in document [10].
This document is Alcatel internal and shall not be shown to customers.

OPEN POINTS / RESTRICTIONS


1 SCOPE AND DEFINITIONS

1.1 Scope of the document


The words in italics are defined in the next section.
The aim of this specification is to provide:
An estimated load model for PS functions in the Alcatel BSS for release B10.
An estimation of the PS traffic capacity of the Alcatel BSS with GPRS and EDGE mobiles for the software release B10. This estimation concerns mainly the GPU / GP and is based on a given GPRS/EGPRS traffic model.
The word GPU is used for the board supporting the PCU in the MFS for the G2 platform, and the word GP is used for the board supporting the PCU in the MFS for the Mx platform.
The considered network elements are the following ones:
MFS: the document focuses on the GPU / GP boards. The processing load model makes it possible to determine a maximum number of PDCHs and GCHs. The expected PPC processing capacity is also provided. For PS services, we try to define the GPU / GP capacity independently of a traffic model, because GPRS history is too short to provide a reliable traffic model. However, a GPRS traffic model is presented in Section 5, and it will be modified when knowledge of PS traffic improves. In Section 6, this traffic model is used to assess the PPC capacity.
BSC: the document also presents the GPRS impacts on the processing capacity of the DTC and TCU boards of the BSC. For CS services, we take the Alcatel CS traffic model (stable for several releases) for granted to compute the GPRS impact on the BSC load, and then make some minimum hypotheses for the GPU / GP compatible with the nominal load.

Note In the following, the term legacy refers to an MFS using the G2 platform.

1.2 Definitions

1.2.1 Load model


A load model is always defined for one processor (processing load) or one signalling link (data load on the link). A load model makes it possible to predict the processing load or data load, as a function of the rate of basic procedures triggered on the external interfaces or of internally triggered events, if any.
It makes it possible to predict a traffic capacity quantified by message rates, throughput and so on, but not to predict a given number of users. When a traffic model is not available, the system capacity is defined by the processing capacity, described in terms of basic procedures (e.g. TBF establishment, throughput, and so on).

1.2.2 Traffic model


The traffic model is a quantified description of user behavior, based on knowledge of the applications and user habits. The traffic model must make it possible to compute, for a given user, a rate of basic procedures.
The combination of the traffic model and the processing cost model makes it possible to predict the system capacity in number of users, as illustrated by the sketch below.
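As an illustration only (a minimal sketch with purely hypothetical costs and rates, not values from this document): the cost model gives a CPU cost per basic procedure, the traffic model gives a per-user rate of those procedures, and dividing the CPU budget by the per-user cost yields a capacity in users.

cpu_cost_ms = {                      # processing cost model: CPU milliseconds per basic procedure (hypothetical)
    "tbf_establishment": 2.0,
    "dl_llc_pdu": 0.5,
    "ul_llc_pdu": 0.6,
}
per_user_rate = {                    # traffic model: procedures per second per user at busy hour (hypothetical)
    "tbf_establishment": 0.02,
    "dl_llc_pdu": 0.20,
    "ul_llc_pdu": 0.05,
}
cpu_budget_ms_per_s = 0.75 * 1000    # nominal load target: 75% of one processor second

cost_per_user = sum(cpu_cost_ms[p] * per_user_rate[p] for p in cpu_cost_ms)
print(round(cpu_budget_ms_per_s / cost_per_user), "users per processor at nominal load")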


2 BSS CONFIGURATION

2.1 MFS configuration description


The figures in this section give a global description of the MFS in the GSM/GPRS network. They go from a macro view, i.e. from MS to SGSN, down to a micro view detailing the different components of the GPU and GP boards.
For more details on MFS and GPU / GP architecture, see document [15].

2.1.1 MFS inside the PLMN

[Figure: position of the MFS inside the PLMN. BTSs are connected to the BSC over Abis; the BSC is connected to the MFS over Ater / Ater-mux; the MFS reaches the SGSN over Gb, either directly, through the TRCU and MSC, or through a Frame Relay data network. Normal GSM traffic/signalling is carried transparently through the MFS. The numbers of links are noted X, Y, Z and Z' (see the note below).]

Figure 1: MFS position inside the PLMN

Note on this figure:

X, Y, Z and Z' represent the following interfaces (the word link should be understood here as a physical 2 Mbit/s PCM link):
X: number of Ater-Mux and Ater links between BSC and MFS,
Y: number of Ater-Mux links + Gb links between MFS and SGSN through TRCU and MSC,
Z: number of direct Gb links between MFS and SGSN,
Z': number of Gb links between MFS and SGSN through MSC.

2.1.2 GPU Board Architecture

2.1.2.1 GPU boards inside the MFS

[Figure: GPU boards inside the MFS. Each GPU board terminates 2048 kbit/s PCM interfaces and is connected, through an internal 100 Mbit/s Ethernet hub, to the O&M servers and to the external Ethernet interface.]

Figure 2: GPU boards inside the MFS

Note on this figure:


There is a maximum of 30 active + 2 protection GPU boards per MFS.

2.1.2.2 DSP and PPC inside the GPU board

[Figure: components of the GPU board: sixteen 2 Mbit/s PCM30 G.703/G.704 interfaces, four DSPs with 8 Mbit/s links, an HDLC controller, a clock module, 100 Mbit/s Ethernet interfaces, an RS-232 interface, ICL and ISL links, and the OBC (PPC), interconnected through the PCI bus.]

Figure 3: DSP and PPC inside the GPU board

Note on this figure:

The numbers of links between the DSPs and the internal switch, and between the HDLC controller and the internal switch, are given in 2 Mbit/s PCM equivalents. They do not represent the actual physical links, but give the maximum physical connectivity:
There are 4 x 2 Mbit/s PCM links between one DSP and the internal switch,
There are 16 x 2 Mbit/s PCM links between the HDLC controller and the internal switch.

2.1.2.3 GPU board internal connections

[Figure: DSP and PPC connectivity inside the GPU. The four DSPs and the HDLC controller are connected to the internal 64 kbit/s switch through 16 x 8 Mbit/s ports; 16 x 2 Mbit/s ports connect the switch to the external PCM links; the PPC is attached through the PCI bus.]

Figure 4: DSP and PPC connectivity to the internal switch, GPU

Note on this figure:


This figure describes the physical connections between the main GPU components:
The GPU switch operates at 64 kbit/s granularity and offers 16 x 8 Mbit/s ports.
There is a maximum of 16 external ports on the GPU board. Each port can be connected to a 2 Mbit/s PCM link.
There is one HDLC controller connected to four 8 Mbit/s links, but it shall only handle the equivalent of eight 2 Mbit/s links. Furthermore, only 124 n x 64 kbit/s HDLC channels are allowed, up to the maximum capacity of 16 Mbit/s (the other 4 HDLC channels are reserved for the GSL).
Referring to Figure 2 and Figure 3, on each GPU board, the total number of PCM links follows X + Y + Z ≤ 16.

Note This physical connection limit is further lowered by the processing power limits on the DSP and PPC sides (see Sections 4.2 and 4.4 for the limitations due to software).
On each PCM, the timeslots to be processed can be selected using the switch and in addition, any timeslot
on one external port can be transparently connected to any timeslot on its own or any other external
port.

2.1.3 GP Board Architecture

2.1.3.1 GP boards inside the Mx-MFS

[Figure: general Mx-MFS architecture. The LIU shelf terminates the external E1 links (Gb, Ater-mux interfaces) and is connected through the MUX W/P boards to the ATCA shelf, which holds the GP boards (GP 1..N plus one protection GP), the OMCP W/P boards and the SSW W/P switches, interconnected by a 1 Gigabit Ethernet ATCA base interface carrying NE1oE.]

Figure 5: General Mx-MFS architecture

Note about this figure:

The OMCP boards (O&M Control Processing) are the Control Stations of the Mx MFS.
The GP boards are the boards of the Mx MFS in charge of packet traffic management.
The SSWs are 1 Gigabit Ethernet switches (ATCA base interface).
The LIU termination shelf, located outside the ATCA rack, concentrates up to 256 E1 links onto Ethernet frames. The GP boards send/receive their E1 links over Ethernet to/from this LIU shelf through the SSW.
The MUX function provides the de/concentration of n x E1 frames into the Ethernet payload.
NE1oE: method consisting in transporting n x E1 frames in an Ethernet payload, assigned to a dedicated MAC address.
The board protection is 1+1 for OMCP, MUX and SSW, and n+1 for the GP boards (with one GP protection board even when an extension Mx MFS shelf is used, contrary to the non-Mx MFS case where there is one protection board per MFS shelf).

2.1.3.2 GP board
As shown in the figure below, the GP board re-uses the GPU core, complemented with the nE1oE function and a local Gigabit Ethernet switch.

[Figure: general architecture of the GP board. The GPU core (control processor, HDLC controller, four DSPs, TDM cross-connect (n x 64 kbit/s), 16 x E1 framer) is complemented by the NE1oE function and a local 4-port Gigabit Ethernet switch towards the Gb Ethernet interface.]

Figure 6: General architecture of the GP board

Note about this figure:


As the GP is a mechanical adaptation of the current GPU (GPU2_AC) to the ATCA format, the following improvements are made:
The former PowerPC is replaced by a PUMA-AGX PMC containing an IBM 750 GX processor running at 1.0 GHz and a new chipset,
The former C6203 DSPs are replaced by four TI C6415T DSPs (720 MHz CPU, 1 Mbyte internal RAM, 64 Mbytes external RAM, processing 128 TDM channels at 64 kbit/s),
A new FPGA module is added in order to manage the NE1oE function.
Each working GP is able to synchronize and switch 16 E1 links, but only 12 E1 data links are connected through nE1oE, and no E1 link is connected to the protection GP. So, with the full equipment of 21 + 1 GPs, only 252 E1 links of the LIU shelf can be used. Each E1 link of any LIU board can be used for any NE interface (Abis, Atermux, Gb).

2.2 BSC Configuration description

2.2.1 G2 BSC configurations


The G2 BSC architecture is defined in document [14]. The BSC boards which handle the main Telecom functions in the BSC are the TCU and the DTC. The DTC and TCU communicate through the BSC switch for most messages (1). The TCU board is connected to the Abis interface, and handles the signalling towards the BTS.
The DTC board is connected to the Ater interface. Depending on the software activated on the DTC board,
the DTC can be dedicated to several functions:
MTP SS7: SS7 signalling (CS functions only)

(1) Most signalling messages are transferred through the switch. A path needs to be established for each message sent. Communication is also possible through the broadcast bus (used for A-interface paging broadcast to the TCUs).

GSL: handles the message reception and forwarding on the GSL interface.
BSSAP and GPRSAP: handles Layer 3 GSM and GPRS signalling (BSCGP)

TCHRM: handles radio resource management for CS and PS services.

Additionally, the trunk handling function (switching between Abis and Ater channels) is active on all the
DTCs. The table below gives the number of boards of each type, and the number of Ater circuits, for each BSC type.
Configurations (BSC type) 1 2 3 4 5 6
TCUs 8 32 48 72 88 112
DTCs 16 24 40 48 64 72
MTP SS7 4 6 10 12 16 16(1)
BSSAP/GPRSAP DTCs or GSL DTC 8 14 22 28 36 44(2)
TCHRM DTC pairs 2 2 4 4 6 6
TRXs FR connectable 32 128 192 288 352 448
Number of Ater-Mux (Ater sub multiplexing 4:1) 4 6 10 12 16 18
Total number of Ater circuits 470 710 1188 1428 1906 2146

Table 1: G2 BSC configurations


(1)
Note Maximum 16 SS7 DTC (even with 18 Ater-Mux), because there are maximum 16 SS7 links per BSC
allowed in the GSM standard.
(2)
Note With BSC Type 6, the maximum of 44 BSSAP/GPRSAP DTC is possible only without GPRS. With
GPRS, the number of GSL and BSSAP boards of each type is deduced from the following rules.
There are four DTC boards per Atermux. For each Ater mux (so for each group of 4 DTC):
A GSL link can be configured if at least a proportion of the Atermux is used for PS (min 1/8 of
Atermux). In this case one DTC is dedicated to GSL.
A SS7 link can be configured if at least a proportion of the Atermux is used for CS. In this case a
DTC is dedicated to SS7. If a SS7 link is not configured, then the corresponding DTC could
be activated as BSSAP at BSC level.
The remaining DTC are configured either as TCH-RM (fixed number according to BSC type) and
BSSAP.
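The following is a minimal, illustrative sketch of the rules above for one Atermux group of 4 DTCs; the function name and its inputs are hypothetical and are not part of the product software.

def allocate_dtc_roles(ps_fraction, cs_fraction):
    """Hypothetical illustration: roles of the 4 DTCs associated with one Atermux.

    ps_fraction / cs_fraction: share of the Atermux used for PS / CS traffic."""
    roles = []
    if ps_fraction >= 1 / 8:      # at least 1/8 of the Atermux used for PS: one DTC dedicated to GSL
        roles.append("GSL")
    if cs_fraction > 0:           # part of the Atermux used for CS: one DTC dedicated to SS7
        roles.append("SS7")
    else:                         # no SS7 link configured: that DTC can be activated as BSSAP
        roles.append("BSSAP")
    while len(roles) < 4:         # remaining DTCs: TCH-RM (fixed per BSC type) or BSSAP
        roles.append("TCH-RM or BSSAP")
    return roles

print(allocate_dtc_roles(ps_fraction=0.25, cs_fraction=0.75))
# ['GSL', 'SS7', 'TCH-RM or BSSAP', 'TCH-RM or BSSAP']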

2.2.2 BSC Evolution configurations


The BSC Evolution architecture is defined in document [14]. The main telecom functions are now handled
within the CCP (Call Control Processing) and the OMCP boards (O&M Control Processing). This is presented
in Figure 7.

[Figure: mapping of the BSC G2 control elements onto the BSC Evolution boards. The S-CPR, O-CPR and TCH-RM DTC elements are mapped on the OMCP board; the DTC, TSC and TCU elements are mapped on the CCP boards.]

Figure 7: Mapping of BSC G2 Control Element on BSC Evolution Boards


The TCU and the DTC elements are mapped on the CCP boards and the DTC sub control element TCH-RM
(Traffic Channel Resource Manager) is mapped on the OMCP board.

In the B10 Mx system, five configuration types are defined. The following table gives the configuration data of each BSC configuration type.
Configuration Type 1 2 3 4 5
Capacity Nb TRX 200 400 600 800 1000
Nb Cell 200 400 500 500 500
Nb BTS 150 255 255 255 255
Nb of E1 Abis 96 96 176 176 176
Ater CS 10 20 30 40 48
Ater PS 6 12 18 24 28
Nb of VCE on CCP Nb TCU 50 100 150 200 250
Nb DTC CS 40 80 120 160 192
Nb DTC PS 24 48 72 96 112
Nb of VCE on OMCP Nb TCH-RM pairs 1 1 1 1 1
Nb SCPR pairs 1 1 1 1 1
Nb OCPR pairs 1 1 1 1 1
Nb TSC pairs 8 8 8 8 8
Nb boards ATCA Nb CCP 1 2 3 4 5
Nb spare CCP 1 1 1 1 1
Nb OMCP 2 2 2 2 2
Nb SSW 2 2 2 2 2
Nb TP GSM 2 2 2 2 2
Nb boards LIU Nb MUX 2 2 2 2 2
Nb LIU 8 8 14 15 16

Table 2: BSC Evolution configurations


3 GENERAL OVERVIEW OF B10 IMPACTS ON THE BSS


The following B10 features (PS domain) have a potential impact on the BSS load. Here we briefly explain the potential impact in a qualitative way. References to other sections of the document are provided for quantified impacts, where applicable. For a technical description of the features, see document [10].

3.1 Weighted Round Robin (WRR)


This feature introduces a smooth scheduling differentiation between NRT (non-real time) users, by giving a different weight to each TBF. The weight represents the number of radio blocks that can be scheduled between two resets of the NRT weights (a simplified scheduling sketch is given after the impact list below). Note that a credit-based mechanism was already used in B9 for RT TBFs. This feature has been introduced in MR1.
The feature impacts:
GPU / GP load:
DSP capacity: impact on CPU load as a new algorithm of scheduling is introduced.
PPC capacity: no impact.
BSC load: no impact.
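A minimal, illustrative sketch of the weighted round robin principle (not the actual DSP scheduler; the TBF names and weights below are hypothetical):

def wrr_schedule(weights, n_blocks):
    """Order n_blocks radio block allocations over TBFs according to their weights."""
    credits = dict(weights)                 # remaining radio blocks per TBF in the current cycle
    order = []
    while len(order) < n_blocks:
        progressed = False
        for tbf in weights:                 # visit the TBFs in round robin order
            if len(order) == n_blocks:
                break
            if credits[tbf] > 0:
                order.append(tbf)
                credits[tbf] -= 1
                progressed = True
        if not progressed:                  # all credits exhausted: reset the NRT weights
            credits = dict(weights)
    return order

# A TBF weighted 2 gets twice as many radio blocks as a TBF weighted 1.
print(wrr_schedule({"tbf_a": 2, "tbf_b": 1}, n_blocks=9))
# ['tbf_a', 'tbf_b', 'tbf_a', 'tbf_a', 'tbf_b', 'tbf_a', 'tbf_a', 'tbf_b', 'tbf_a']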

3.2 Dual Transfer Mode (DTM)


This feature allows a dual transfer mode capable MS to use a radio resource for CS traffic and
simultaneously one or several radio resources for PS traffic. This feature has been introduced in MR1.
The feature impacts:
GPU / GP load:
DSP capacity: no impact,
PPC capacity: impact on CPU cost, as DTM leads to more complex radio resource allocation
algorithm,
GSL link: introduction of new messages between BSC and MFS to handle this feature.
BSC load: impact on load, as new mechanisms are needed to handle DTM calls (new messages, intra-
handover).

Note The impact of this feature is not studied at PPC level, but its impact on the GSL link is studied in
this proposal.

3.3 Extended Dynamic Allocation (EDA)


This feature is an extension of the basic Dynamic Allocation (DA) mode to allow higher uplink throughput for MSs supporting this feature, through the support of more than two radio timeslots. This feature has been introduced in MR2.
The feature impacts:
GPU / GP load:
DSP capacity: impact on CPU load, as a modification of scheduling is needed to handle more
than 2 timeslots in UL.
PPC capacity: impact on CPU cost, as EDA introduces new constraints on radio resource
allocation algorithm.

BSC load: no impact.

Note The impact of this feature is not studied in this proposal.

3.4 Gb over IP (GboIP)


This feature offers the possibility to use IP sub-networks on the Gb interface, replacing the Frame Relay protocol currently implemented. This feature has been introduced in MR2.
The feature impacts:
GPU / GP load:
DSP capacity: impact on the maximum number of PDCHs as more Ater resources are available
(Mx MFS case only),
PPC capacity: impact on CPU cost, especially on the CPU cost of PDU transfer in downlink and in
uplink.
GSL link: no impact.
BSC load: no impact.


4 GPU / GP PROCESSING CAPACITY

4.1 GPU / GP resources


The Elementary Processing Unit of the MFS is the GPU / GP board. Thus, all dimensioning figures given in this paragraph are related to the components present on GPU / GP boards. This includes especially the PPC and the DSP processors (one GPU / GP corresponds to 4 DSPs and 1 PPC) (2).

[Figure: BSS GPRS functional description. Protocol stacks across MS, BTS, BSC, MFS and SGSN over the Um, Abis, Ater and Gb interfaces: the DSPs of the GPU / GP handle the RLC/MAC layers and the L1-GCH / M-EGCH layers; the PPC handles the RRM layer, the BSCGP / GSL layers and the BSSGP / Network Service / L1-Gb layers; LLC is relayed between the MS and the SGSN.]

Figure 8: BSS GPRS functional description


The dimensioning of the GPU / GP board is directly related to the memory and processing capacity of the
DSPs and the PPC. As shown on Figure 8, the GPRS data and signalling processing is split between the PPC
and the DSPs.
The DSP processors handle:
RLC and MAC layers,
L1-GCH and M-EGCH layers.
The PPC processor handles:
RRM layer,
BSSGP to L1-Gb (i.e. Gb interface),
BSCGP and Lx-GSL.
On each processor, for the maximum traffic capacity, the following points shall be checked:
Sufficient memory capacity for the message waiting queues and retransmission windows (acknowledged mode) of each layer handled by the processor,
Sufficient memory capacity for all cell and MS contexts (with or without Master-PDCH, access message queues) handled by the processor,
Sufficient CPU processing capacity to handle the GPRS access and traffic, for a standard traffic profile (transmission duration, mean LLC frame length, throughput, etc.).

(2) This ratio would probably need to be changed according to current analysis, but the object of this document is not to propose changes in the hardware design.

4.2 DSP capacity

4.2.1 DSP memory capacity

4.2.1.1 DSP dimensioning requirements


The main dimensioning factors for the memory are the size of the code, the number of PDCHs, the average number of MSs multiplexed per PDCH (from which the number of TBF contexts is deduced) and the number of GCHs.
The variables and contexts are allocated statically in the DSP, according to the maximum number of TBFs, PDCHs, etc. The RLC block containers are allocated dynamically within a pool of resources. Hence, the dimensioning of this pool is based on statistical considerations.
It should be noted that in case of RLC block container shortage, a flow control mechanism between the PMU and the PTU prevents the DSP from resetting. A minimum of 3000 containers is reserved for signalling purposes. The detailed flow control mechanism is described in the RLC specification (see document [5]).

The variables indicating the status of each RLC block for the ARQ mechanism are reserved statically, and must be dimensioned according to the maximum window size and the maximum number of TBFs (including those in delayed mode). Only one type of window variable is defined, based on the maximum WS for EGPRS, for all TBF types (EGPRS and GPRS). This is constraining in terms of memory needs but improves the real-time performance of the DSP. The following table gives the dimensioning parameters for the DSP for B10 legacy and for B10 Mx.
Parameters Value for legacy Value for Mx
Maximum number of TBFs (1) 210 / 240 960
Maximum number of PDCHs 120 480
Maximum number of TRX 60 240
Maximum number of GCHs 120 480
Proportion of EGPRS TBF 30% 30%
GPRS TBF window size value 64 64
Maximum EGPRS TBF window size value (for 4 TS capable MS) 512 512
Average WS filling for GPRS 50% 50%
Average WS filling for EGPRS 20% 20%
Proportion of active DL TBF 75% 75%

Table 3: Dimensioning parameters for the DSP


(1) For the maximum number of TBFs, 210 and 240 are respectively defined for GPU_AB (GPU2) and for GPU_AC (GPU3).

4.2.1.2 DSP memory requirements


From the above hypotheses (see Table 3), the memory requirements for the DSP are deduced, for variables in the internal and external memory of the DSP.
From the maximum number of TBFs that the DSP can handle, the total number of needed data resources can be computed with the following formulas:
For EGPRS:
Data resources = Active TBFs × EGPRS_ratio × EGPRS WS size × EGPRS WS filling
For GPRS:
Data resources = Active TBFs × (1 - EGPRS_ratio) × GPRS WS size × GPRS WS filling

Using the information provided in Table 3 and the above formulas, we can evaluate the resources needed for data and signalling. This is presented in Table 4.

Parameters Value for legacy Value for Mx
Number of data resources for GPRS 4 032 16 128
Number of data resources for EGPRS 5 530 22 120
Number of containers for RLC signalling 3 000 12 000
Total data and signalling resources (RLC containers) 12 562 50 248

Table 4: Dimensioning parameters for DSP memory


For B10 legacy, a total of 13000 RLC containers are reserved and for B10 Mx, a total of 52000 RLC
containers are reserved for data and signalling. It fulfils the needs for 30% EDGE penetration, with other
hypothesis unchanged.
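The Table 4 figures can be reproduced from the Table 3 parameters with the minimal sketch below. It assumes (consistently with the tabulated values) that the number of active TBFs is the maximum number of TBFs multiplied by the proportion of active DL TBFs (75%).

def rlc_resources(max_tbf, signalling, egprs_ratio=0.30, active_ratio=0.75,
                  gprs_ws=64, gprs_fill=0.50, egprs_ws=512, egprs_fill=0.20):
    """Data and signalling RLC container needs from the Table 3 parameters."""
    active = max_tbf * active_ratio
    gprs = active * (1 - egprs_ratio) * gprs_ws * gprs_fill
    egprs = active * egprs_ratio * egprs_ws * egprs_fill
    return round(gprs), round(egprs), round(gprs + egprs + signalling)

print(rlc_resources(240, signalling=3000))    # legacy GPU_AC: (4032, 5530, 12562), as in Table 4
print(rlc_resources(960, signalling=12000))   # Mx: (16128, 22118, 50246); Table 4 rounds these to 22 120 and 50 248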

4.2.1.3 DSP External memory mapping

4.2.1.3.1 B10 Legacy case


The DSP external memory is about 4 Mbytes for GPU2. The goal of this section is to show that the requirements given in Table 3 are compatible with the currently available memory.
The DSP external memory usage is obtained by multiplying each context size by the maximum number of contexts needed (as defined in Table 3), then adding the total number of signalling and data resources multiplied by the RLC container size (a single size fits all), and adding the code size. Subtotals per type of context and totals are highlighted with shaded cells.

The values presented in Table 5 are related to B10-MR1 legacy, with information provided by the PTU team.

Memory usage Number of contexts Context size (byte) Total size (byte)
B10 code 791 986
Code hole 0
Code increase 323 854
Total code 1 115 840
Timers 60 616
Common variables 234 054
Trace & debug 240 000
TBF variables 210 716 150 360
ARQ window variables 107 520 4 430 080
TRX variables 60 2 611 187 992
MPDCH variables 1 3 080 3 080
PDCH variables 120 2 596 311 520
GCH variables 120 309 37 080
Miscellaneous 64 634
Total variables 1 719 416
RLC containers (data and signalling) 13 000 92 1 196 000
Total used (sum of shaded cells) 4 031 256
Total available 4 194 304
Unused memory 163 048

Table 5: External memory usage in DSP for GPU2

Note From Table 5, we can see that the remaining external memory for GPU2 at DSP level is very low.

Note For GPU3, the DSP external memory is about 8 Mbytes. This means that the unused memory is
around 4 Mbytes.
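The totals of Table 5 can be checked by summing the shaded subtotals; a small sketch using the Table 5 values as inputs is given below.

# Sketch: check the DSP external memory budget of Table 5 (B10 legacy, GPU2).
total_code = 1_115_840         # B10 code + code hole + code increase
total_variables = 1_719_416    # sum of all variable contexts (timers, TBF, ARQ, TRX, PDCH, GCH, ...)
rlc_containers = 13_000 * 92   # reserved RLC containers x container size = 1 196 000 bytes

total_used = total_code + total_variables + rlc_containers
available = 4 * 1024 * 1024    # about 4 Mbytes of external memory on GPU2
print(total_used, available - total_used)   # 4 031 256 bytes used, 163 048 bytes unused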

4.2.1.3.2 B10 Mx case

The DSP external memory is about 16 Mbytes for Mx. The goal of this section is to show that the
requirements given in Table 3 are compatible with the currently available memory.

The DSP external memory usage is obtained by multiplying each context size by the maximum number of
contexts needed (as defined in Table 3), then adding the total number of signalling and data resources
multiplied by the RLC container size (one size fits all), and finally adding the code size. Subtotals per type of
context and totals are highlighted with shaded cells. The values presented in Table 6 are related to B10-MR1 Mx,
with information provided by the PTU team.
Memory usage Number of contexts Context size (byte) Total size (byte)
B10 code 791 986
Code hole 0
Code increase 108 110
Total code 900 096
Timers 181 008
Common variables 941 736
Trace & debug 244 156
TBF variables 960 720 691 200
ARQ window variables 491 520 4 1 966 080
TRX variables 240 2 450 588 000
MPDCH variables 1 3 080 3 080
PDCH variables 480 2 596 1 246 080
GCH variables 480 348 167 040
Miscellaneous 292 002
Total variables 6 320 382
RLC containers (data and signalling) 52 000 92 4 784 000
Total used (sum of shaded cells) 12 004 478
Total available 16 777 216
Unused memory 4 772 738

Table 6: External memory usage in DSP for Mx

4.2.1.4 DSP Internal memory

DSP internal memory is much smaller than external memory (about 64 kbytes for GPU2) and is reserved
for functions which require real-time optimization. DSP internal memory usage is provided hereafter for
information (see Table 7).
DSP internal memory Bytes Comments
HPI 10 152 HPI descriptor and HPI circular buffer for DMA
GCH Common Buffer 3 912 CRC Z-sequence look-up table, McBSP DMA buffer
TRX 4 052 MAC SDL instance table and MAC temp variables
TBF (max 210 TBF) 6 720 RLC SDL instance table and TBF variables at SDL level
GCH optimization 0 Static look-up table and some temporary buffers to improve GCH data operation
L2GCH Data Buffer 21 408 UL buffer moved to external memory; this buffer is for the MEGCH context and some single GCH contexts
Stack 2 048 DSP stack, increased in B10 for the MAC process and DL re-ordering
Common Internal: SDL, DSP System 8 568 SDT cmicro kernel and TI DSP kernel
RLC optimization 0
Total internal Memory used 56 860
Unused memory 8 676

Table 7: DSP internal memory mapping for GPU2

Note For GPU3, the DSP internal memory is about 512 kbytes. This means that the unused internal
memory for GPU3 is around 460 kbytes.

Note For Mx, the DSP internal memory is about 1 Mbyte. This means that the unused internal memory
for Mx is around 950 kbytes.

4.2.1.5 Conclusion for DSP memory


There is no external or internal memory shortage with B10 (legacy and Mx) for the DSP.

4.2.2 DSP processing capacity

4.2.2.1 Real time constraint


The DSP capacity model was redesigned for the B9 release because of the introduction of the statistical
multiplexing feature, and also to allow easier updates of the execution times used in the model with
measurement results.
For each active PDCH, the DSP must be able to send one radio block (3) every 20 ms, upon request of the M-
EGCH layer. Otherwise, if RLC-MAC cannot answer the request from the M-EGCH layer in time, this will
create holes in the block flow on the radio interface. The number of PDCHs supported by the DSP must be
such that overload does not occur in the DSP.
With statistical multiplexing, it is no longer possible to evaluate the actual DSP load at RRM-PRH level as
was done in B8 to prevent DSP congestion. Therefore, a new mechanism has been introduced between
RRM-PRH and the DSP to limit the DSP load dynamically.
For a given DSP, its current CPU load is transmitted to RRM through DSP-Load-Indication messages on the PMU-
PTU interface. This message provides RRM with information on the current CPU load (after filtering) of a
given DSP. Two thresholds (DSP_Load_Thr_1 and DSP_Load_Thr_2, O&M parameters) have also been
introduced to define the DSP CPU states.
If the current CPU load is higher than DSP_Load_Thr_1 but lower than DSP_Load_Thr_2, the DSP is
considered as loaded by RRM-PRH. In this case, the usage of new PDCHs and the establishment of new
GCHs are forbidden, except when serving the first One-UL-Block or the first best-effort TBF allocation
request in a cell.
As the DSP capacity is expressed as a number of PDCHs that the DSP can handle, the DSP load must be
limited to DSP_Load_Thr_1 to find this number of PDCHs, since above this limit no new PDCHs are
established.
Another consequence of the statistical multiplexing is that the CPU cost is evaluated at TRX level, and the
total number of TRXs supported (N_TRX) must be such that:
CPU cost per 20 ms for one TRX × N_TRX < DSP_Load_Thr_1 × 20 ms

Then, for a given number of active PDCHs per TRX, it is easy to obtain the number of PDCHs that the DSP
can handle at the targeted DSP load.
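As an illustration of this dimensioning rule, the number of TRXs and PDCHs supported at the targeted load can be derived as follows; the per-TRX cost used here is a purely hypothetical value, not a measurement.

# Sketch: number of TRXs/PDCHs a DSP can handle, given the CPU cost per TRX per 20 ms.
dsp_load_thr_1 = 0.85      # targeted DSP load
period_ms = 20.0           # one radio block period
cost_per_trx_ms = 1.7      # hypothetical CPU cost per TRX per 20 ms (real values come from PTU measurements)
pdch_per_trx = 6           # legacy assumption (see Section 4.2.2.3.1)

budget_ms = dsp_load_thr_1 * period_ms    # 17 ms of CPU available per 20 ms period
n_trx = budget_ms / cost_per_trx_ms       # about 10 TRXs
n_pdch = n_trx * pdch_per_trx             # about 60 PDCHs, the order of magnitude of Table 10
print(round(n_trx), round(n_pdch))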

4.2.2.2 CPU cost per radio block


To evaluate the number of TRXs (and therefore the number of PDCHs) that a DSP can handle per radio
block, it is necessary to estimate the CPU cost per radio block. The CPU cost is based on the analysis of
two scenarios, one corresponding to a DL transfer and one corresponding to an UL transfer.
For each transfer, we consider:
The data transfer (in DL and UL),

(3) One radio block contains one RLC block (GPRS and EGPRS up to MCS-6) or 2 RLC blocks (MCS-7, MCS-8 and MCS-9).

The polling or acknowledgement mechanism (using PDAN and PUAN messages).

For each scenario, a certain number of signals or messages are exchanged between the different layers
(RLC, MAC, M-EGCH and L1GCH) and also with the PPC (HPI interface). For each signal, its cost in terms of
CPU is measured. This cost depends on a number of parameters, which can be the following:
The number of PDCHs,
The number of TRXs,
The number of GCHs,
The PDU rate in DL and in UL,
The RLC load per PDCH in DL and in UL.
The PDU rate corresponds to the peak PDU rate per 20 ms. It depends on the LLC-PDU size and the RLC
block size (i.e. the coding scheme used, (M)CS).
The CPU cost is also related to the polling rate (in DL) and the acknowledgement rate (in UL). These two
mechanisms are different for EGPRS and GPRS. In both cases, the sending of a PDAN or of a PUAN is
triggered by a number of RLC blocks received or transmitted but the number of RLC blocks taken into
account is different in GPRS and EGPRS:
For EGPRS, the polling rate and the acknowledgement rate are a function of the window size and of an
O&M parameter (EGPRS_DL_ACK_FACTOR for DL and EGPRS_UL_ACK_FACTOR for UL):
For DL, an EGPRS PDAN message is received every EGPRS_DL_ACK_PERIOD =
EGPRS_DL_ACK_FACTOR * WINDOW_SIZE transmitted RLC data blocks.
For UL, an EGPRS PUAN message is sent every EGPRS_UL_ACK_PERIOD = min (32,
EGPRS_UL_Ack_Factor * window_size) received radio blocks.
For GPRS, the polling rate and the acknowledgement rate are O&M parameters (GPRS_DL_ACK_PERIOD
and GPRS_UL_ACK_PERIOD respectively for DL and UL) and directly represent the number of radio
blocks.

The parameters used for the polling and acknowledgement mechanism in the DSP capacity model are
presented in Table 8.
Parameters Value
GPRS_DL_ACK_PERIOD 12
GPRS_UL_ACK_PERIOD 16
EGPRS Window size (if one TS allocated to TBF) 192
EGPRS_DL_ACK_FACTOR 0.25
EGPRS_UL_ACK_FACTOR 0.25
Number of transmitted radio blocks necessary to request a PDAN (EGPRS) 48
Number of received radio blocks necessary to send a PUAN (EGPRS) 48

Table 8: Polling and acknowledgement parameters used in the DSP capacity model


Measurements of CPU costs for the signals used for DL and UL scenarios have been provided with the
following assumptions:
1 TBF per PDCH,
1 PDCH per TRX,
For each CS and MCS in UL and DL,
For three different LLC-PDU sizes.
A number of input parameters are also defined for the DSP capacity model:

Number of PDCH per TRX (N_PDCH_per_TRX),
Choice of coding scheme (CS or MCS): it is possible to choose one coding scheme or a distribution of
CS and / or MCS. For this latter case, it is possible to choose a percentage of EGPRS traffic,
Choice of a distribution of LLC-PDU size,
RLC load per PDCH in DL and in UL,
O&M parameter DSP_Load_Thr1.
From the measurements provided by PTU, we have the CPU cost per signal for one PDCH and one TBF per
PDCH during 20 ms. It is then easy to evaluate the sum of CPU costs for all these signals during 20 ms as a
function of N_TRX, N_GCH and N_PDCH, and to find the number of TRXs (N_TRX) which allows reaching
DSP_Load_Thr1 of DSP capacity during 20 ms, knowing that:
N_PDCH = N_TRX × N_PDCH_per_TRX,
N_GCH = GCH_Needed(Coding scheme) × N_PDCH_per_TRX × N_TRX.

The parameter GCH_Needed (Coding scheme) is given by Table 9 as a function of the coding scheme for
GPRS and EGPRS.
Coding scheme CS1 CS2 CS3 CS4
RLC data size (bytes) 20 30 36 50
GCH needed per PDCH 0.73 1 1.25 1.64
Coding scheme MCS1 MCS2 MCS3 MCS4 MCS5 MCS6 MCS7 MCS8 MCS9
RLC data size (bytes) 22 28 37 44 56 74 112 136 148
GCH needed per PDCH 0.89 1 1.33 1.5 1.86 2.36 3.49 4.14 4.49

Table 9: RLC data size and needed GCH for each coding scheme (GPRS and EGPRS)
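A small sketch of the resource bookkeeping behind these formulas is given below (Table 9 look-up, legacy limit of 120 GCHs per DSP from Table 3); it also shows why the highest MCS values hit the GCH limit before the CPU limit.

# Sketch: PDCH/GCH bookkeeping per DSP, using a subset of the Table 9 look-up table.
GCH_NEEDED = {"CS2": 1.0, "MCS5": 1.86, "MCS9": 4.49}

def resources(n_trx, pdch_per_trx, coding_scheme, max_gch=120):
    n_pdch = n_trx * pdch_per_trx
    n_gch = round(GCH_NEEDED[coding_scheme] * n_pdch, 1)
    return n_pdch, n_gch, n_gch <= max_gch

print(resources(10, 6, "CS2"))    # (60, 60.0, True)   - CPU limited
print(resources(5, 6, "MCS9"))    # (30, 134.7, False) - already exceeds the 120 GCHs of a legacy DSP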

4.2.2.3 DSP performance objectives


Using the DSP model described in Section 4.2.2.2, it is possible to obtain the maximum number of PDCHs
for one DSP and for one GPU / GP, for different input parameters. This number of PDCH can be obtained
for one given coding scheme or for a distribution of coding schemes (CS for GPRS and MCS for EGPRS). For
this latter case, the proportion of EDGE traffic is also provided.

4.2.2.3.1 DSP performance for B10 legacy


In Table 10, the maximum number of PDCHs per DSP and per GPU is presented for each GPRS and EGPRS
coding scheme, with the following assumptions:
DSP_Load_Thr_1 set to 85%,
6 PDCHs per TRX,
Average size of LLC-PDU of 500 bytes in DL and 200 bytes in UL,
RLC load of 45% in DL and 30% in UL.

Coding scheme CS1 CS2 CS3 CS4


Maximum PDCHs per DSP 61 60 55 49
Maximum PDCHs per GPU 244 240 220 196
Coding scheme MCS1 MCS2 MCS3 MCS4 MCS5 MCS6 MCS7 MCS8 MCS9
Maximum PDCHs per DSP 56 53 52 49 45 41 33 28 26
Maximum PDCHs per GPU 224 212 208 196 180 164 132 112 104

Table 10: Maximum number of PDCHs per DSP and per GPU versus coding (legacy)

Note For MCS8 and MCS9, the limitation of the number of PDCHs per DSP is due to the fact that the
maximum number of GCHs is reached (120 GCHs per DSP), and not to the targeted CPU load being
reached.

4.2.2.3.2 DSP performance for B10 Mx

For Mx, two cases are presented, one for Gb over Frame Relay (GboFR) and one for Gb over IP (GboIP),
assuming a configuration of 16 E1 links. In both cases (Table 11 and Table 12), the maximum number of
PDCHs per DSP and per GP is presented for each GPRS and EGPRS coding scheme, with the following
assumptions:
DSP_Load_Thr_1 set to 85%,
3 PDCHs per TRX,
Average size of LLC-PDU of 500 bytes in DL and 100 bytes in UL,
RLC load of 100% both in DL and in UL.

Number of GCHs usable for PS traffic per E1:


One E1 is composed of 32 × 4 timeslots of 16 kbit/s, i.e. 128 timeslots. But not all these timeslots are usable for
PS traffic. Some timeslots are reserved to transport alarms, signalling or O&M information. This means
that on average, 112 timeslots per E1 are usable to carry PS traffic.

Case of Gb over Frame Relay:

With GboFR, the E1 links are shared between Ater and Gb. For a configuration with 16 E1 links, we assume
that 13 E1 links are reserved for Ater and 3 are reserved for Gb. This leads to a maximum number of GCHs
available per DSP of 364 GCHs, assuming 112 GCHs per E1 (and not 480 GCHs as expected with a factor 4).
The results are presented in Table 11.
Coding scheme CS1 CS2 CS3 CS4
Maximum PDCHs per DSP 252 240 223 202
Maximum PDCHs per GP 1008 960 892 808
Coding scheme MCS1 MCS2 MCS3 MCS4 MCS5 MCS6 MCS7 MCS8 MCS9
Maximum PDCHs per DSP 228 220 210 199 183 154 104 87 81
Maximum PDCHs per GP 912 880 840 796 732 616 416 348 324

Table 11: Maximum number of PDCHs per DSP and per GP versus (M)CS for GboFR
Comparing the DSP performance in terms of number of PDCHs between B10 legacy and Mx, we note that
the objective of factor 4 is reached in most cases for the 13 + 3 E1 configuration. The factor of 4 is not
reached for high coding scheme values for EGPRS (for MCS7, MCS8 and MCS9). This is due to the limitation
in terms of GCHs and not to the limitation in terms of CPU load.

Case of Gb over IP:

With GboIP, all the E1 links can be used for Ater. This leads to a maximum number of GCHs available per
DSP of 448 GCHs, assuming 112 GCHs per E1. The results are presented in Table 12.
Coding scheme CS1 CS2 CS3 CS4
Maximum PDCHs per DSP 252 240 223 202
Maximum PDCHs per GP 1008 960 892 808
Coding scheme MCS1 MCS2 MCS3 MCS4 MCS5 MCS6 MCS7 MCS8 MCS9
Maximum PDCHs per DSP 228 220 210 199 183 168 128 108 99
Maximum PDCHs per GP 912 880 840 796 732 672 512 432 396

Table 12: Maximum number of PDCHs per DSP and per GP versus (M)CS for GboIP
In this case, we note that compared to legacy, the factor of 4 is reached for nearly every coding scheme
(except the high EGPRS coding schemes). Compared to Mx with GboFR, we note an improvement in terms
of the number of PDCHs that can be reached for the high EGPRS coding schemes, due to a higher number of GCHs
available.

4.3 PPC Capacity

4.3.1 PPC memory


4.3.1.1 Memory for B10 legacy

The PPC memory capacity is 128 Mbytes on the GPU2 and 256 Mbytes on the GPU3. The memory usage and
management are the same for both GPUs.
It has been split into:
27 Mbytes reserved for the code, about 25 Mbytes is currently used (20.5 for PMU and 2.5 for LCU),
64 Mbytes of dynamic memory allocation,
37 Mbytes reserved for the dynamic buffers allocation (e.g. temporary storage of LLC PDUs).

The following table shows that enough memory is available for the GPU2.
Memory usage Size for each (kbytes) Number per GPU Total size (in kbytes)
TRX 25.1 448 11 265
PDCH (1) 2.3 3 584 8 071
TBF 3.4 1 440 4 945
MS 6.3 1 000 6 292
Cell 61.8 240 14 823
Routing Area 16.0 20 320
Miscellaneous 13 413
Total size variables 62 159

Table 13: PPC memory requirements for data and variables for GPU2
(1) The maximum number of PDCH corresponds to the maximum number of TRX (240) multiplied by the number
of PDCH per TRX (8).

4.3.1.2 Memory for B10 Mx


The PPC memory capacity is around 1 Gbyte for Mx. Compared to the Number per GPU defined in Table
13, the following modifications are introduced:
The number of TRX is set to 960 and therefore the number of PDCHs is set to 7680.
The maximum number of TBFs per GP is multiplied by a factor of 4 compared to the number per GPU.
The maximum number of TBFs considered (UL and DL) is then equal to 5760.
The number of MS context is multiplied by a factor of 4.
The number of cells is equal to 500.

From these assumptions, we can deduce easily that enough memory is available for PPC with GP.

4.3.2 PPC processing capacity


4.3.2.1 B10 impacts on the PPC CPU load
The following B10 features are expected to have an impact on the PPC CPU cost model. Depending on the
feature, this impact is quantified either as a global CPU overhead or as an increased CPU cost for some
basic procedures (TBF establishment, and so on):
Dual transfer mode (DTM): This feature impacts the PPC capacity as it leads to a slight increase of
complexity for TBF establishment procedures. However, the impact on performance at PPC level should
be low.

Extended Dynamic Allocation (EDA): This feature impacts the PPC capacity as it leads to an increase
of complexity for TBF establishment procedures. Moreover, an UL TBF whose MS supports EDA is not
allocated directly in EDA mode. In a first step, the UL TBF is allocated in DA mode, and if it is proved
after a certain time that the TBF has an UL bias, it is reallocated in EDA mode. This means that one
allocation (in DA mode) and (at least) one reallocation (from DA mode to EDA mode) will be needed
for an UL TBF supporting EDA.
Gb over IP (GboIP): This feature impacts the PPC capacity, as the transfer of LLC PDU on the Gb
interface is modified, with the introduction of new stacks. It has therefore an impact on the CPU
load for LLC PDU transfer.

Note The impact of the two features DTM and EDA at PPC level is not studied in this proposal of the
document. Only Best-effort traffic and DA mode allocation are considered in this document. The
measurements available at PPC level are the following ones:
For MFS legacy: MR1 with Gb over Frame Relay, and MR2 for Gb over Frame Relay and Gb
over IP.
For MFS Mx: MR1 with Gb over Frame Relay, and MR2 with Gb over IP (measurements for
MR2 and Gb over Frame Relay are not available).

4.3.2.2 Definition of PPC nominal load


The nominal load limit is meant to guarantee a reasonable response time for the end users. The nominal
load is an average load, the GP(U) being able to handle sporadic load peaks due to variations of the
traffic.

4.3.2.3 PPC CPU overhead

Note For PPC capacity, we assume that a MFS legacy (GPU2 or GPU3) is associated with a BSC-G2 and
that a Mx MFS (GP) is associated with a BSC Evolution. The other possibilities are not considered.
We consider that 75% of the processing power is available at nominal load. Within this 75% load, we will
reserve some overhead for the following procedures, with the following hypotheses:
The cost introduced by PDCH allocation is now independent of the traffic model with the RAE-4 feature
and therefore it is added in the CPU margin. The cost values associated with this feature are only
assumptions, as up to now no measurements are available for B10. Moreover, the worst case is assumed
(one message per cell). Considering 264 cells per GPU2 / GPU3 (legacy case) and 500 cells per GP (Mx
case), and a periodicity of 10 seconds for the RAE-4 messages between BSC and MFS (RR Allocation
messages transmitted every TCH_INFO_PERIOD × RR_ALLOC_PERIOD, with TCH_INFO_PERIOD and
RR_ALLOC_PERIOD equal to 5 seconds and 2 respectively), the maximum rate is equal to 26.4
messages per second for legacy and 50 messages per second for Mx.
The cost of Suspend/Resume depends on the number of pagings and on the penetration rate of GPRS
MS. The paging rate is equal to:
70 pagings per second for BSC G2. The same value applies for MR1 and MR2,
108 pagings per second for BSC Evolution with B10 MR1,
121 pagings per second for BSC Evolution with B10 MR2.
For the Gb flow control function, the maximum rate is limited to 10 per second due to the limitation
introduced by the BSC-GP flow control.
For LCS (localization) hypothesis, see document [30].
The number of CS pagings depends on whether the Gs interface is present. Here, it is assumed
that there is no Gs interface, which means that CS paging has no impact on the overhead.
The overhead for Full-intra-RA re-routing corresponds to TCP-based applications (the most common
case).

The values related to CPU margin are presented in Table 14 below for GPU2 and GPU3 (same value for
MR1 and for MR2), and in Table 15 for GP (different values for MR1 and MR2 due to a different paging
value between MR1 and MR2 for BSC Evolution).


Functions Cost in ms (GPU2 / GPU3) Maximum rate (GPU2 and GPU3) CPU load (GPU2 / GPU3)
PDCH (de)allocation 0.036 0.025 26.4 0.10% 0.07%
Suspend / resume 2.21 1.55 11.4 2.52% 1.76%
Gb flow control 0.65 0.46 10.0 0.65% 0.46%
Cost of localization (A-GPS) 5.0 3.5 3.8 1.90% 1.33%
NC2 overhead for 100 MS 0.2 0.14 104.17 2.08% 1.46%
Cost of CS paging on CCCH 1.2 0.84 0 0% 0%
Overhead for LLC PDU rerouting 2.00% 1.40%
Total CPU margin 9.25% 6.47%

Table 14: CPU margin to be added in PPC for GPU2 and GPU3
Functions Cost in ms Maximum rate (MR1 / MR2) CPU load (MR1 / MR2)
PDCH (de)allocation 0.009 50 50 0.05% 0.05%
Suspend / resume 0.553 24 27 1.33% 1.49%
Gb flow control 0.163 10 10 0.16% 0.16%
Cost of localization (A-GPS) 1.25 8 9 1.00% 1.13%
NC2 overhead for 100 MS 0.05 104.17 104.17 0.52% 0.52%
Cost of CS paging on CCCH 0.03 0 0 0% 0%
Overhead for LLC PDU rerouting 0.5% 0.5%
Total CPU margin 3.55% 3.85%

Table 15: CPU margin to be added in PPC for GP (MR1 and MR2)
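The CPU-load figures in these tables are simply cost x rate, expressed as a percentage of one second (1000 ms); a minimal sketch for the GPU2 column of Table 14 is given below.

# Sketch: recompute the GPU2 CPU-margin column of Table 14 (costs in ms, rates per second).
margin_items_gpu2 = [
    ("PDCH (de)allocation",          0.036, 26.4),
    ("Suspend / resume",             2.21,  11.4),
    ("Gb flow control",              0.65,  10.0),
    ("Cost of localization (A-GPS)", 5.0,    3.8),
    ("NC2 overhead for 100 MS",      0.2,  104.17),
    ("Cost of CS paging on CCCH",    1.2,    0.0),
]
total_pct = 2.00   # overhead for LLC PDU rerouting, given directly in Table 14
for name, cost_ms, rate in margin_items_gpu2:
    pct = cost_ms * rate / 10.0        # (cost_ms x rate) ms per second -> percent of 1000 ms
    total_pct += pct
    print(f"{name}: {pct:.2f}%")
print(f"Total CPU margin: {total_pct:.2f}%")   # about 9.25%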
Therefore, we will reserve:
For GPU2, 9.25% CPU Load for PPC margin and the remaining 65.75% load is available for other
procedures at PPC nominal load.
For GPU3, 6.47% CPU Load for PPC margin and the remaining 68.53% load is available for other
procedures at PPC nominal load.
For GP-MR1, 3.55% CPU Load for PPC margin and the remaining 71.45% load is available for other
procedures at PPC nominal load.
For GP-MR2, 3.85% CPU Load for PPC margin and the remaining 71.15% load is available for other
procedures at PPC nominal load.

Note The values for B10 GPU2 given in Table 14 correspond to B8 measurements with L2-cache
activated, as no measurements will be provided for B10 concerning this part. The GPU3 and GP
values for CPU load are deduced from GPU2 values by respectively applying a factor of 0.7 and of
0.25.

4.3.2.4 PPC CPU cost model for other procedures

The following CPU costs for each procedure are considered to estimate the PPC capacity. The measurements
presented in the tables below correspond to PPC measurements for GPU2/GPU3 and for GP. Table 16 is
for B10 MR1 (GboFR), Table 17 is for B10 MR2 (GboFR) and Table 18 is for B10 MR2 (GboIP).
Procedures Cost in ms (GPU2 / GPU3 / GP)
DL TBF establishment on CCCH 8.013 4.745 1.834
DL TBF establishment on PACCH 8.013 4.745 1.834
UL TBF establishment on CCCH 8.013 4.745 1.834

UL TBF establishment on PACCH 8.013 4.745 1.834
DL LLC transfer processing cost 0.568 0.401 0.141
UL LLC transfer processing cost 0.257 0.176 0.061
CPU for one PS paging on CCCH 1.2 0.84 0.3
Radio resource reallocation 8.013 4.745 1.834
GCH allocation and connection 1.11 0.777 0.278

Table 16: CPU cost model for PPC for GPU2, GPU3 and GP for B10 MR1 (with GboFR)
Procedures Cost in ms (GPU2 / GPU3)
DL TBF establishment on CCCH 9.463 5.318
DL TBF establishment on PACCH 9.463 5.318
UL TBF establishment on CCCH 9.463 5.318
UL TBF establishment on PACCH 9.463 5.318
DL LLC transfer processing cost 0.602 0.426
UL LLC transfer processing cost 0.29 0.199
CPU for one PS paging on CCCH 1.2 0.84
Radio resource reallocation 9.463 5.318
GCH allocation and connection 1.11 0.777

Table 17: CPU cost model for PPC for GPU2 and GPU3 for B10 MR2 (with GboFR)
Procedures Cost in ms (GPU2 / GPU3 / GP)
DL TBF establishment on CCCH 9.626 6.036 1.808
DL TBF establishment on PACCH 9.626 6.036 1.808
UL TBF establishment on CCCH 9.626 6.036 1.808
UL TBF establishment on PACCH 9.626 6.036 1.808
DL LLC transfer processing cost 0.673 0.540 0.175
UL LLC transfer processing cost 0.297 0.227 0.069
CPU for one PS paging on CCCH 1.2 0.84 0.300
Radio resource reallocation 9.626 6.036 1.808
GCH allocation and connection 1.11 0.777 0.278

Table 18: CPU cost model for PPC for GPU2, GPU3 and GP for B10 MR2 (with GboIP)
We define the following notations:
TBF_dc: number of DL TBF/s established on CCCH.
TBF_uc: number of UL TBF/s established on CCCH.
TBF_da: number of DL TBF/s established on PACCH.
TBF_ua: number of UL TBF/s established on PACCH.
LLC_u: number of UL LLC frames transferred per s.
LLC_d: number of DL LLC frames transferred per s.
Paging_P: number of PS paging/s to be sent on CCCH.
RRR: number of Radio resource reallocation per s.

Note The values presented in Table 16, in Table 17 and in Table 18 have been derived from Load and
Stress tests. This explains why the cost of a TBF establishment is the same whatever the cases
(UL/DL or CCCH/PACCH).

4.3.2.4.1 PPC load criterion for B10 GPU2


In nominal load, the above variables must verify the following relations (one for MR1-GboFR, one for
MR2-GboFR and one for MR2-GboIP), which translate the fact that there is 65.75% (= 75% - 9.25% CPU margin)
CPU load available at nominal load
for the procedures listed in Table 16 (MR1-GboFR), in Table 17 (MR2-GboFR) and in Table 18 (MR2-GboIP).
This relation is called PPC load criterion for GPU2:



8.01 × TBF_dc + 8.01 × (TBF_da + RRR) + 8.01 × TBF_uc + 8.01 × TBF_ua + 0.57 × LLC_d + 0.26 × LLC_u + 1.20 × Paging_P < 1000 × 65.75% (MR1 GboFR)
9.46 × TBF_dc + 9.46 × (TBF_da + RRR) + 9.46 × TBF_uc + 9.46 × TBF_ua + 0.60 × LLC_d + 0.29 × LLC_u + 1.20 × Paging_P < 1000 × 65.75% (MR2 GboFR)
9.63 × TBF_dc + 9.63 × (TBF_da + RRR) + 9.63 × TBF_uc + 9.63 × TBF_ua + 0.67 × LLC_d + 0.30 × LLC_u + 1.20 × Paging_P < 1000 × 65.75% (MR2 GboIP)

Equation 1: PPC load criterion definition for B10 GPU2

4.3.2.4.2 PPC load criterion for B10 GPU3


In nominal load, the above variables must verify the following relations (one for MR1-GboFR, one for
MR2-GboFR and one for MR2-GboIP), which translate the fact that there is 68.53% (= 75% - 6.47% CPU margin)
CPU load available at nominal load for the procedures listed in Table 16 (MR1-GboFR), in Table 17 (MR2-GboFR) and in Table 18 (MR2-GboIP).
This relation is called PPC load criterion for GPU3:
4.74 × TBF_dc + 4.74 × (TBF_da + RRR) + 4.74 × TBF_uc + 4.74 × TBF_ua + 0.40 × LLC_d + 0.18 × LLC_u + 0.84 × Paging_P < 1000 × 68.53% (MR1 GboFR)
5.32 × TBF_dc + 5.32 × (TBF_da + RRR) + 5.32 × TBF_uc + 5.32 × TBF_ua + 0.43 × LLC_d + 0.20 × LLC_u + 0.84 × Paging_P < 1000 × 68.53% (MR2 GboFR)
6.04 × TBF_dc + 6.04 × (TBF_da + RRR) + 6.04 × TBF_uc + 6.04 × TBF_ua + 0.54 × LLC_d + 0.23 × LLC_u + 0.84 × Paging_P < 1000 × 68.53% (MR2 GboIP)

Equation 2: PPC load criterion definition for B10 GPU3

4.3.2.4.3 PPC load criterion for B10 GP


In nominal load, the above variables must verify the following relations (one for MR1-GboFR and one for
MR2-GboIP), which translate the fact that there is 71.45% (= 75% - 3.55% CPU margin) CPU load available at
nominal load for the procedures listed in Table 16 (MR1-GboFR), and 71.15% (= 75% - 3.85% CPU margin) CPU
load available at nominal load for the procedures listed in Table 18 (MR2-GboIP). These relations are called the
PPC load criterion for GP:
1.83 × TBF_dc + 1.83 × (TBF_da + RRR) + 1.83 × TBF_uc + 1.83 × TBF_ua + 0.14 × LLC_d + 0.06 × LLC_u + 0.30 × Paging_P < 1000 × 71.45% (MR1 GboFR)
1.81 × TBF_dc + 1.81 × (TBF_da + RRR) + 1.81 × TBF_uc + 1.81 × TBF_ua + 0.17 × LLC_d + 0.07 × LLC_u + 0.30 × Paging_P < 1000 × 71.15% (MR2 GboIP)

Equation 3: PPC load criterion definition for B10 GP
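These criteria can be evaluated directly from a set of per-second rates; a minimal sketch for the GP MR1 (GboFR) case is shown below, with arbitrary example rates used purely for illustration.

# Sketch: evaluate the GP (MR1, GboFR) PPC load criterion of Equation 3.
COSTS_GP_MR1 = {"tbf": 1.83, "llc_d": 0.14, "llc_u": 0.06, "paging_p": 0.30}   # ms per event
BUDGET_MS = 1000 * 0.7145    # 71.45% of one second available at nominal load

def ppc_load_ms(tbf_dc, tbf_da, tbf_uc, tbf_ua, rrr, llc_d, llc_u, paging_p, c=COSTS_GP_MR1):
    return (c["tbf"] * (tbf_dc + tbf_da + rrr + tbf_uc + tbf_ua)
            + c["llc_d"] * llc_d + c["llc_u"] * llc_u + c["paging_p"] * paging_p)

# Arbitrary example rates (per second), for illustration only:
load = ppc_load_ms(tbf_dc=20, tbf_da=40, tbf_uc=60, tbf_ua=10, rrr=5,
                   llc_d=1500, llc_u=800, paging_p=50)
print(load, load < BUDGET_MS)   # about 520 ms of CPU per second, below the 714.5 ms budget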

4.3.3 How to define and measure PPC capacity


The PPC load criterion validity for B10 legacy and Mx can be checked during load tests thanks to GPU /
GP internal counters. But this criterion is not very meaningful on its own.
The real PPC capacity in terms of number of PS users will be highly dependent on a given PS traffic model,
which is today very difficult to determine. With such a model, the parameters used in the PPC load
criterion must be deduced for an average user from the traffic model. The PPC load criterion then makes it
possible to determine the PPC capacity in number of PS users at nominal load. Section 5 provides a PS
traffic model for the BSS. The PPC capacity with some of the proposed profiles or the average user can be
determined (see Section 6).
To remain independent of a traffic model, we can give an estimation of the PPC maximum capacity by
some simple figures, which will be easy to measure, easy to understand for our customers, and which can
be compared from release to release. We propose to use the following measurements:
Maximum Ping rate: this rate will quantify the PPC capacity in term of signalling load. A ping is a
simple scenario, which corresponds to most signalling exchanges between SGSN and MS.

Maximum DL LLC rate: this will quantify the PPC data load capacity in downlink.
Maximum UL LLC rate: this will quantify the PPC data load capacity in uplink.

Expectations for the three above data are given in the following sections.

Note As the ping rate, DL LLC rate and UL LLC rate figures given below are based on
measurements from L&T tests, they can no longer be used as objectives for these L&T tests.

4.3.4 Maximum PPC Ping rate


We define a Ping as an IP Echo request sent from the MS to a server behind the SGSN and the response
(Echo response) from the server to the MS. When a single ping per MS is sent, the following simple
scenario is then triggered in the GPU / GP:
UL TBF establishment on CCCH,
One UL LLC (IP content of fixed length),
DL TBF establishment on PACCH,
One DL LLC (same IP content as UL LLC),
Release of the TBFs (after DL TBF timer expiry for the DL).
Depending on whether there is already some traffic in the cell and on the multiplexing parameters, the
ping may or may not trigger transmission resource allocation. In the following we will consider that the
allocation requests triggered are within the limits defined for the GPU / GP in Section 7.3. With the above
scenario repeated N_ping times per second transmitted by the PPC, we have:
TBF_dc = 0,
TBF_uc = N_ping,
TBF_da = N_ping,
TBF_ua = 0,
Paging_C = 0,
Paging_P = 0,
RRR = 0,
LLC_d = N_ping,
LLC_u = N_ping.

Replacing the variables by N_Ping in the different PPC load criteria (see Equation 1, Equation 2 and
Equation 3) yields the following eight formulae, three valid for MR1-GboFR, two valid for MR2-GboFR
and three valid for MR2-GboIP (one for each available hardware):
(8.01 + 8.01 + 0.57 + 0.26) × N_Ping < 1000 × 65.75% .................... [GPU2 MR1 + GboFR]
(4.74 + 4.74 + 0.40 + 0.18) × N_Ping < 1000 × 68.53% .................... [GPU3 MR1 + GboFR]
(1.83 + 1.83 + 0.14 + 0.06) × N_Ping < 1000 × 71.45% ...................... [GP MR1 + GboFR]
(9.46 + 9.46 + 0.60 + 0.29) × N_Ping < 1000 × 65.75% .................... [GPU2 MR2 + GboFR]
(5.32 + 5.32 + 0.43 + 0.20) × N_Ping < 1000 × 68.53% .................... [GPU3 MR2 + GboFR]
(9.63 + 9.63 + 0.67 + 0.30) × N_Ping < 1000 × 65.75% .................... [GPU2 MR2 + GboIP]
(6.04 + 6.04 + 0.54 + 0.23) × N_Ping < 1000 × 68.53% .................... [GPU3 MR2 + GboIP]
(1.81 + 1.81 + 0.17 + 0.07) × N_Ping < 1000 × 71.15% ...................... [GP MR2 + GboIP]

From these equations, we can define the expected ping rate at nominal load (75%) with the CPU margin
and without the CPU margin (in this latter case, NC2 disabled and no CS service interaction). The obtained
values for GPU2, GPU3 and GP (B10 MR1 and MR2) are presented in Table 19.
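A minimal sketch reproducing the MR1 and GP figures of Table 19 from the per-procedure costs is given below (Table 19 rounds the results to integers).

# Sketch: maximum ping rate = available CPU budget / CPU cost of one ping scenario.
# Per-ping cost = one UL TBF est. + one DL TBF est. + one DL LLC + one UL LLC (costs in ms).
cases = {
    "GPU2 MR1 GboFR": (8.01 + 8.01 + 0.57 + 0.26, 657.5),
    "GPU3 MR1 GboFR": (4.74 + 4.74 + 0.40 + 0.18, 685.3),
    "GP MR1 GboFR":   (1.83 + 1.83 + 0.14 + 0.06, 714.5),
    "GP MR2 GboIP":   (1.81 + 1.81 + 0.17 + 0.07, 711.5),
}
for name, (cost_per_ping_ms, budget_with_margin_ms) in cases.items():
    with_margin = budget_with_margin_ms / cost_per_ping_ms   # nominal load, CPU margin reserved
    without_margin = 750.0 / cost_per_ping_ms                # nominal load, no CPU margin
    print(f"{name}: {with_margin:.0f} / {without_margin:.0f} ping/s")
# GPU2 MR1: 39 / 45, GPU3 MR1: 68 / 75, GP MR1: 185 / 194, GP MR2: 184 / 194 (cf. Table 19)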

Maximum PPC Ping rate (Ping per second)
Assumptions MR1 GboFR (GPU2 / GPU3 / GP) MR2 GboFR (GPU2 / GPU3) MR2 GboIP (GPU2 / GPU3 / GP)
Nominal Load (75%) with CPU margin 39 68 185 33 61 33 53 184
Nominal Load (75%) without CPU margin 45 75 194 38 67 37 58 194

Table 19: Maximum PPC Ping rate for GPU2/GPU3/GP (MR1 and MR2)
From the results presented in Table 19 and based on L&T measurements, we can deduce that:
For GPU2, a degradation of around 15% is observed between MR1 and MR2 due to MR2 features (in
both cases with GboFR). No degradation is observed for MR2 between GboFR and GboIP.
For GPU3, a degradation of around 10% is observed between MR1 and MR2 due to MR2 features (in
both cases with GboFR), and an additional degradation of around 13% is also observed with the
introduction of GboIP in MR2.
For GP, no degradation is observed between MR1 and MR2. GboIP and MR2 features have no impact
on the performances of Max Ping in this case.

4.3.5 Maximum PPC DL LLC rate


The maximum DL LLC rate is obtained when no TBF establishments are considered and no UL traffic is
on-going. This is a theoretical value since no application will behave this way, but it is useful to relate it to
the available throughput at DSP level (this is done in Section 4.4). The rate is computed with
0% margin (theoretical maximum corresponding to load test conditions with no other procedures on-going),
and with all variables in the PPC load criterion equal to 0 except LLC_d.
This yields the following eight formulae, three valid for MR1-GboFR, two valid for MR2-GboFR and three
valid for MR2-GboIP (one for each available hardware):
0.57 × LLC_d < 1000 × (PPC load) ................................................ [GPU2 MR1 + GboFR]
0.40 × LLC_d < 1000 × (PPC load) ................................................ [GPU3 MR1 + GboFR]
0.14 × LLC_d < 1000 × (PPC load) .................................................. [GP MR1 + GboFR]
0.60 × LLC_d < 1000 × (PPC load) ................................................ [GPU2 MR2 + GboFR]
0.43 × LLC_d < 1000 × (PPC load) ................................................ [GPU3 MR2 + GboFR]
0.67 × LLC_d < 1000 × (PPC load) ................................................ [GPU2 MR2 + GboIP]
0.54 × LLC_d < 1000 × (PPC load) ................................................ [GPU3 MR2 + GboIP]
0.17 × LLC_d < 1000 × (PPC load) .................................................. [GP MR2 + GboIP]

From these equations, we can define the expected DL LLC rate at nominal load (75%) and at high load
(85%). The obtained values for GPU2, GPU3 and GP (B10 MR1 and MR2) are presented in Table 20.
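The values in Table 20 (and in Table 21 for the uplink) are simply the available CPU budget divided by the cost of one LLC transfer; a sketch is given below, with small deviations from the table coming from rounding of the per-LLC costs.

# Sketch: maximum DL LLC rate = CPU budget (ms per second) / cost of one DL LLC (ms).
dl_llc_cost_ms = {"GPU2 MR1": 0.568, "GPU3 MR1": 0.401, "GP MR1": 0.141}
for board, cost in dl_llc_cost_ms.items():
    nominal = 1000 * 0.75 / cost   # nominal load (75%)
    high = 1000 * 0.85 / cost      # high load (85%)
    print(f"{board}: {nominal:.0f} / {high:.0f} DL LLC per second")
# GPU2 MR1: ~1320 / ~1496, GPU3 MR1: ~1870 / ~2120, GP MR1: ~5319 / ~6028
# (cf. Table 20; the table uses unrounded costs, hence slightly different GP values)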
Maximum PPC DL LLC Rate (DL LLC per second)
Assumptions MR1 GboFR (GPU2 / GPU3 / GP) MR2 GboFR (GPU2 / GPU3) MR2 GboIP (GPU2 / GPU3 / GP)
Nominal Load (75%) 1 320 1 870 5 335 1 245 1 759 1 114 1 388 4 291
High Load (85%) 1 495 2 119 6 046 1 411 1 993 1 262 1 573 4 863

Table 20: Maximum PPC DL LLC rate for GPU2/GPU3/GP (MR1 and MR2)
From the results presented in Table 20 and based on L&T measurements, we can deduce that:
For GPU2, a degradation of around 6% is observed between MR1 and MR2 due to MR2 features (in
both cases with GboFR), and an additional degradation of around 11% is also observed with the
introduction of GboIP in MR2.

For GPU3, a degradation of around 6% is observed between MR1 and MR2 due to MR2 features (in
both cases with GboFR), and an additional degradation of around 20% is also observed with the
introduction of GboIP in MR2.
For GP, a degradation of around 20% is observed between MR1 with GboFR and MR2 with GboIP.

4.3.6 Maximum PPC UL LLC rate


The maximum UL LLC rate is obtained when no TBF establishments are considered and no DL traffic is
on-going. This is a theoretical value since no application will behave this way, but it is useful to relate it to
the available throughput at DSP level (this is done in Section 4.4). The rate is computed with
0% margin (in this case, PDCHs and GCHs are previously established), and with all variables in the PPC load
criterion equal to 0 except LLC_u.

This yields the following eight formulae, three valid for MR1-GboFR, two valid for MR2-GboFR and three
valid for MR2-GboIP (one for each available hardware):
0.26 × LLC_u < 1000 × (PPC load) ................................................ [GPU2 MR1 + GboFR]
0.18 × LLC_u < 1000 × (PPC load) ................................................ [GPU3 MR1 + GboFR]
0.06 × LLC_u < 1000 × (PPC load) .................................................. [GP MR1 + GboFR]
0.29 × LLC_u < 1000 × (PPC load) ................................................ [GPU2 MR2 + GboFR]
0.20 × LLC_u < 1000 × (PPC load) ................................................ [GPU3 MR2 + GboFR]
0.30 × LLC_u < 1000 × (PPC load) ................................................ [GPU2 MR2 + GboIP]
0.23 × LLC_u < 1000 × (PPC load) ................................................ [GPU3 MR2 + GboIP]
0.07 × LLC_u < 1000 × (PPC load) .................................................. [GP MR2 + GboIP]

From these equations, we can define the expected UL LLC rate at nominal load (75%) and at high load
(85%). The obtained values for GPU2, GPU3 and GP (B10 MR1 and MR2) are presented in Table 21.
Maximum PPC UL LLC Rate (UL LLC per second)
Assumptions MR1 GboFR (GPU2 / GPU3 / GP) MR2 GboFR (GPU2 / GPU3) MR2 GboIP (GPU2 / GPU3 / GP)
Nominal Load (75%) 2 921 4 250 12 278 2 582 3 767 2 526 3 308 10 948
High Load (85%) 3 310 4 817 13 915 2 927 4 270 2 862 3 749 12 407

Table 21: Maximum PPC UL LLC rate for GPU2/GPU3/GP (MR1 and MR2)
From the results presented in Table 21 and based on L&T measurements, we can deduce that:
For GPU2, a degradation of around 12% is observed between MR1 and MR2 due to MR2 features (in
both cases with GboFR), and an additional degradation of around 3% is also observed with the
introduction of GboIP in MR2.
For GPU3, a degradation of around 12% is observed between MR1 and MR2 due to MR2 features (in
both cases with GboFR), and an additional degradation of around 12% is also observed with the
introduction of GboIP in MR2.
For GP, a degradation of around 11% is observed between MR1 with GboFR and MR2 with GboIP.

4.4 GPU / GP Throughput objectives

4.4.1 General
It is important for the operator that the GPU / GP makes it possible to reach the throughput which has been
planned at radio level. The GPU / GP throughput depends both on the PPC capacity and on the DSP capacity. We
have predicted the DSP capacity, as a maximum number of PDCHs, for a targeted capacity (given by the O&M

parameter DSP_Load_Thr1). The operator will plan its network based on the GPU / GP capacity as seen
from the DSP, because this is the only dimensioning figure that we communicate externally.

In the PPC, the limitation is expressed as a maximum number of transmitted PDUs, which in reality will be
lower, depending on the processing power used by TBF establishment. The throughput transferred through
the PPC in each direction is equal to the number of LLC PDUs transmitted, multiplied by the average data
load of the PDUs. It is obvious that small PDUs will tend to decrease the maximum PPC throughput, and
TBFs containing a small number of PDUs will worsen the situation. Hence the PPC throughput will be highly
dependent on the traffic model.
In the following sections we will compute the maximum GPU / GP throughput considering DSP limitation
and PPC limitation for B10 legacy (see Section 4.4.2) and for B10 Mx (see Section 4.4.3). It must be noted
that this is not the throughput as seen from the end user, where more protocol overhead must be taken
into account.

4.4.2 Maximum GPU Throughput (B10 legacy)

4.4.2.1 Maximum GPU Throughput depending on DSP constraints


The GPU throughput above RLC level can be computed assuming some (M)CS distribution or maximum
(M)CS for each PDCH.

First we compute the expected throughput per PDCH, depending on the coding scheme (CS for GPRS or
MCS for EGPRS), and then we multiply by the maximum number of PDCH that the DSP can handle to obtain
the maximum GPU throughput. The maximum possible throughputs per PDCH, for EGPRS and GPRS,
depending on the coding scheme, are given in the table below.
Coding scheme CS1 CS2 CS3 CS4
Maximum throughput 8 12 14.4 20
Coding scheme MCS1 MCS2 MCS3 MCS4 MCS5 MCS6 MCS7 MCS8 MCS9
Maximum throughput 8.8 11.2 14.8 17.6 22.4 29.6 44.8 54.4 59.2

Table 22: Maximum throughput per PDCH (kbit/s)


We can then compute the maximum reachable throughput for GPRS and EGPRS, with perfect radio
conditions, based only on the DSP processing capacity for each coding scheme (CS and MCS) for B10 legacy.
The number of PDCHs per DSP given in Table 23 (perfect propagation conditions) and in Table 24 (radio
propagation conditions) has been evaluated assuming the following:
DSP load set to 85%,
6 PDCHs per TRX,
Average RLC load per PDCH in DL and UL set respectively to 45% and 30%,
LLC PDU length in DL and UL set respectively to 500 bytes and 200 bytes.
Coding scheme CS1 CS2 CS3 CS4
Maximum PDCHs per DSP 61 60 55 49
Maximum DSP throughput (kbit/s) 488 720 792 980
Maximum GPU throughput (kbit/s) 1 952 2 880 3 168 3 920
Coding scheme MCS1 MCS2 MCS3 MCS4 MCS5 MCS6 MCS7 MCS8 MCS9
Maximum PDCHs per DSP 56 53 52 49 45 41 33 28 26
Maximum DSP throughput (kbit/s) 493 594 770 862 1 008 1 214 1 478 1 523 1 539
Maximum GPU throughput (kbit/s) 1 971 2 374 3 078 3 450 4 032 4 854 5 914 6 093 6 157

Table 23: Maximum GPU Throughput (GPRS and EGPRS, perfect propagation)
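A minimal sketch of this computation (per-PDCH throughput from Table 22 multiplied by the maximum number of PDCHs per DSP, the GPU figure being four times the DSP figure as in Table 10 and Table 23) is shown below.

# Sketch: GPU throughput above RLC = throughput per PDCH x PDCHs per DSP, then x4 for the GPU.
per_pdch_kbps = {"CS1": 8.0, "CS4": 20.0, "MCS5": 22.4, "MCS9": 59.2}   # Table 22
max_pdch_per_dsp = {"CS1": 61, "CS4": 49, "MCS5": 45, "MCS9": 26}       # Table 10 / Table 23
for cs, kbps in per_pdch_kbps.items():
    dsp_kbps = kbps * max_pdch_per_dsp[cs]
    gpu_kbps = 4 * dsp_kbps
    print(f"{cs}: DSP {dsp_kbps:.0f} kbit/s, GPU {gpu_kbps:.0f} kbit/s")
# CS1: 488 / 1952, CS4: 980 / 3920, MCS5: 1008 / 4032, MCS9: 1539 / 6157 (cf. Table 23)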

The above maximum GPU throughput does not correspond to realistic radio propagation conditions. In
practice, the goal for GPU throughput can be limited to a lower average throughput as shown in the
table below.
Coding scheme CS1 CS2 CS3 CS4
Average throughput per PDCH (kbit/s) 8 11.9 14.1 15.6
Maximum PDCHs per DSP 61 60 55 49
Maximum GPU throughput (kbit/s) 1 952 2 866 3 109 3 050
Coding scheme MCS1 MCS2 MCS3 MCS4 MCS5 MCS6 MCS7 MCS8 MCS9
Average throughput per PDCH (kbit/s) 8.8 11.2 14.7 16.7 22.4 29.3 40.5 41.4 38.9
Maximum PDCHs per DSP 56 53 52 49 45 41 33 28 26
Maximum GPU throughput (kbit/s) 1 971 2 374 3 058 3 273 4 032 4 805 5 346 4 637 4 046

Table 24: Maximum GPU Throughput (GPRS and EGPRS, with radio propagation)
Considering that the Maximum Throughput per PDCH cannot be reached, we can set a reasonable target
for the maximum GPU Throughput above RLC to about:
EDGE: 4.0 Mbit/s,
GPRS: 3.1 Mbit/s.
However to conclude at GPU level we need to look at PPC (next sections).

4.4.2.2 Maximum GPU Throughput depending on PPC constraints


In the following we first compute the maximum PPC throughput when TBF are already established, and
then with some TBF establishment and varying hypothesis in term of DL PDUs per transaction.
In Section 4.3.5, we have shown that when the PPC is loaded at 85% by DL-LLC (neither TBF establishment
considered nor any other procedures), then the maximum DL-LLC rate is equal to:
For GPU2:
1495 LLC per second for B10 MR1 GboFR,
1411 LLC per second for B10 MR2 GboFR,
1262 LLC per second for B10 MR2 GboIP.
For GPU3:
2119 LLC per second for B10 MR1 GboFR,
1993 LLC per second for B10 MR2 GboFR,
1573 LLC per second for B10 MR2 GboIP.
We can then compute the maximum throughput that the PPC is able to transmit with the more than
optimistic hypothesis of no TBF establishment, depending on the LLC average size.
LLC average data length (bytes) 100 200 400 600 800 1000 1500
Throughput above RLC (kbit/s):
MR1 GboFR GPU2 1 196 2 393 4 785 7 178 9 571 11 964 17 945
MR1 GboFR GPU3 1 696 3 391 6 782 10 173 13 564 16 955 25 433
MR2 GboFR GPU2 1 129 2 258 4 515 6 773 9 030 11 288 16 932
MR2 GboFR GPU3 1 595 3 189 6 378 9 567 12 756 15 945 23 918
MR2 GboIP GPU2 1 010 2 020 4 039 6 059 8 078 10 098 15 147
MR2 GboIP GPU3 1 259 2 517 5 034 7 552 10 069 12 586 18 879

Table 25: Maximum DL PPC Throughput depending on LLC PDU length (GPU2/GPU3)
The maximum UL LLC rate for the PPC is four times greater than in the downlink hence the maximum UL
throughput with the same average LLC PDU length is four times greater than in the downlink direction.
The LLC PDU average length will be highly dependent on the application. However, we already have the
following information:

The maximum data length allowed of 1520 bytes for LLC PDU (N201 parameter) is used only in
memory cards for laptop PCs or incorporated in PDAs, which will represent a small proportion of the
PS mobile users, estimated at a maximum of 10% (business profile).
The maximum data length implemented in most new (and upcoming) commercial handsets is 500 or 576
bytes. Due to segmentation of application data to fit into LLC PDUs, the average length for data may
be even smaller (e.g. a 1500-byte IP packet with N201 = 576 => 3 LLC PDUs of 576 + 576 + 348 bytes, which implies an
average LLC PDU of about 500 bytes).
The PDUs corresponding to GMM signalling as well as TCP control messages are around 50 bytes.

Hence if we consider the following LLC PDU length distribution we can deduce an average LLC PDU length.
The proportion of small PDUs is based on statistics found for the Internet.
LLC PDU length (bytes) 50 575 1520
Proportion 40% 50% 10%
Average length (bytes) 460

Table 26: Expected average LLC PDU length
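The 460-byte figure is simply the weighted average of the Table 26 distribution; combined with the DL LLC rates of Section 4.3.5, it gives the throughput figures quoted below (a sketch, GPU2 MR1 GboFR case).

# Sketch: average LLC PDU length from Table 26, and the resulting maximum DL PPC throughput.
distribution = {50: 0.40, 575: 0.50, 1520: 0.10}    # LLC PDU length (bytes) -> proportion
avg_len = sum(length * share for length, share in distribution.items())   # about 460 bytes

max_dl_llc_rate = 1495                                   # GPU2 MR1 GboFR at 85% load (Section 4.3.5)
throughput_kbps = max_dl_llc_rate * avg_len * 8 / 1000   # about 5 500 kbit/s
print(avg_len, round(throughput_kbps))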


With an average PDU length around 460 bytes, the PPC maximum throughput in DL is equal to:
MR1 GboFR: 5 503 kbit/s (GPU2), 7 799 kbit/s (GPU3)
MR2 GboFR: 5 192 kbit/s (GPU2), 7 335 kbit/s (GPU3)
MR2 GboIP: 4 645 kbit/s (GPU2), 5 790 kbit/s (GPU3)

If the average drops to around 200 bytes, the PPC maximum throughput is below 3.4 Mbit/s, as shown in
Table 25 (200-byte column).

4.4.2.3 DL GPU Throughput at PPC nominal load


To estimate the PPC nominal throughput, we now have to consider the PPC at nominal load with the
reserved CPU margin, and consider some TBF establishments. We can consider a simple transaction
model made of one UL TBF establishment on CCCH, one DL TBF establishment on PACCH (most probable
case) and N DL LLC-PDUs transferred. Using the PPC load criterion for GPU2 and for GPU3 (see
Equation 1 and Equation 2 respectively) we can compute the maximum DL LLC rate at PPC nominal load.
Rather than considering the number of PDUs (N) as an input, we will take the data load per transaction,
which is easier to map onto an operator traffic model.
Applying the above transaction into the PPC load criterion for GPU2/GPU3 and for B10 MR1 (GboFR) and
for B10 MR2 (GboFR and GboIP), leads to the following formulae:
Transaction_rate × (8.01 + 8.01 + 0.57 × N_DL_LLC_per_trans) = 657.5 ..... [GPU2 MR1 + GboFR]
Transaction_rate × (4.74 + 4.74 + 0.40 × N_DL_LLC_per_trans) = 685.3 ..... [GPU3 MR1 + GboFR]
Transaction_rate × (9.46 + 9.46 + 0.60 × N_DL_LLC_per_trans) = 657.5 ..... [GPU2 MR2 + GboFR]
Transaction_rate × (5.32 + 5.32 + 0.43 × N_DL_LLC_per_trans) = 685.3 ..... [GPU3 MR2 + GboFR]
Transaction_rate × (9.63 + 9.63 + 0.67 × N_DL_LLC_per_trans) = 657.5 ..... [GPU2 MR2 + GboIP]
Transaction_rate × (6.04 + 6.04 + 0.54 × N_DL_LLC_per_trans) = 685.3 ..... [GPU3 MR2 + GboIP]

With:
N_DL_LLC_per_trans = Data load per transaction / Average_DL_LLC_PDU_length

From the above equations we can deduce the transaction rate at nominal load, and the DL PPC throughput
at nominal load, which is equal to:
Transaction_rate × N_DL_LLC_per_trans × Average_DL_LLC_PDU_length
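A minimal sketch of this transaction model is given below; it reproduces, for example, the GPU2 MR1 GboFR rows of Table 27 for a 460-byte average PDU length.

# Sketch: DL throughput at PPC nominal load for the simple transaction model
# (one UL TBF establishment on CCCH + one DL TBF establishment on PACCH + N DL LLC PDUs).
def dl_throughput_kbps(data_load_bytes, avg_llc_len=460.0,
                       tbf_cost_ms=8.01, llc_cost_ms=0.57, budget_ms=657.5):
    n_llc = data_load_bytes / avg_llc_len                               # DL LLC PDUs per transaction
    trans_rate = budget_ms / (2 * tbf_cost_ms + llc_cost_ms * n_llc)    # transactions per second
    llc_rate = trans_rate * n_llc                                       # DL LLC PDUs per second
    return trans_rate, llc_rate, llc_rate * avg_llc_len * 8 / 1000      # throughput in kbit/s

print(dl_throughput_kbps(2_000))     # ~ (35.5, 155, 569)   - cf. Table 27, GPU2, 2 kbytes per transaction
print(dl_throughput_kbps(100_000))   # ~ (4.7, 1022, 3760)  - close to the 100 kbytes column of Table 27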

In Table 27, the PPC throughput is computed with an average PDU length of 460 bytes (from Table 26). In
Table 28, the PPC throughput is computed with an average length of 250 bytes (4). It must be noted
(although not reflected in our simple model) that the average PDU length will tend to decrease when the
data load per transaction decreases. This is because the control messages of the application, which are
short, will be more numerous compared to the user data than for a long file transfer.
(4) The average DL PDU length will be lowered in case of low traffic per user; the higher proportion of GMM
signalling compared to the data load will then affect the average PDU length.
Average data load per transaction (kbytes) 2 10 30 50 100
Number of DL LLC per transaction 4 22 65 109 217
MR1 GboFR
GPU2
Transaction rate at PPC nominal load (per s) 35.5 23.2 12.4 8.5 4.7
DL LLC rate at PPC nominal load (per s) 155 504 808 919 1 024
PPC throughput (kbit/s) 569 1 853 2 972 3 380 3 768
GPU3
Transaction rate at PPC nominal load (per s) 61.0 37.6 19.2 12.9 7.1
DL LLC rate at PPC nominal load (per s) 265 818 1 254 1 403 1 541
PPC throughput (kbit/s) 976 3 011 4 614 5 164 5 671
MR2 GboFR
GPU2
Transaction rate at PPC nominal load (per s) 30.5 20.5 11.3 7.8 4.4
DL LLC rate at PPC nominal load (per s) 133 446 737 847 954
PPC throughput (kbit/s) 488 1 643 2 711 3 116 3 509
GPU3
Transaction rate at PPC nominal load (per s) 54.9 34.4 17.8 12.0 6.6
DL LLC rate at PPC nominal load (per s) 239 748 1 162 1 307 1 441
PPC throughput (kbit/s) 878 2 754 4 278 4 810 5 305
MR2 GboIP
GPU2
Transaction rate at PPC nominal load (per s) 29.6 19.4 10.4 7.1 4.0
DL LLC rate at PPC nominal load (per s) 129 422 679 773 863
PPC throughput (kbit/s) 474 1 552 2 498 2 845 3 176
GPU3
Transaction rate at PPC nominal load (per s) 47.5 28.8 14.5 9.7 5.3
DL LLC rate at PPC nominal load (per s) 207 625 945 1 052 1 150
PPC throughput (kbit/s) 760 2 302 3 476 3 872 4 232

Table 27: GPU Throughput at PPC nominal load with LLC length of 460 bytes
Average data load per transaction (kbytes) 2 10 30 50 100
Number of DL LLC per transaction 8 40 120 200 400
MR1 GboFR
GPU2
Transaction rate at PPC nominal load (per s) 32.0 17.0 7.8 5.1 2.7
DL LLC rate at PPC nominal load (per s) 256 679 937 1 014 1 081
PPC throughput (kbit/s) 511 1 357 1 873 2 028 2 161
GPU3
Transaction rate at PPC nominal load (per s) 54.0 26.8 11.9 7.6 4.0
DL LLC rate at PPC nominal load (per s) 432 1 074 1 427 1 528 1 613
PPC throughput (kbit/s) 863 2 147 2 854 3 056 3 226
MR2 GboFR
GPU2

Transaction rate at PPC nominal load (per s) 27.7 15.3 7.2 4.7 2.5
DL LLC rate at PPC nominal load (per s) 222 611 865 943 1 012
PPC throughput (kbit/s) 443 1 223 1 730 1 887 2 024
GPU3
Transaction rate at PPC nominal load (per s) 48.8 24.7 11.1 7.1 3.8
DL LLC rate at PPC nominal load (per s) 390 990 1 330 1 429 1 513
PPC throughput (kbit/s) 781 1 980 2 661 2 857 3 025
MR2 GboIP
GPU2
Transaction rate at PPC nominal load (per s) 26.7 14.2 6.6 4.3 2.3
DL LLC rate at PPC nominal load (per s) 213 569 789 854 911
PPC throughput (kbit/s) 427 1 139 1 577 1 709 1 823
GPU3
Transaction rate at PPC nominal load (per s) 41.8 20.3 8.9 5.7 3.0
DL LLC rate at PPC nominal load (per s) 334 814 1 069 1 141 1 201
PPC throughput (kbit/s) 669 1 628 2 138 2 282 2 402

Table 28: GPU Throughput at PPC nominal load with LLC length of 250 bytes
We see that in the best case (with a high data load per transaction), we can expect a PPC DL throughput at
nominal load of 3.8 Mbit/s (for GPU2) and of 5.7 Mbit/s (for GPU3). In the worst case (5), the PPC DL
throughput will be around 0.5 Mbit/s.
In the Uplink direction, the maximum PPC LLC rate is four times higher than in the downlink. The
maximum UL GPU throughput based on DSP constraints is limited to about 5.5 Mbit/s (EGPRS, with MCS7
and considering radio propagation). This throughput can be transmitted by the PPC (at 85% load) with an
average PDU length equal to 146 bytes (146 × 8 × 4696 ≈ 5.5 Mbit/s). The expected uplink GPU
performances are therefore satisfactory. In fact an uplink rate of 1712 LLC/s is enough to reach an uplink
throughput of more than 2 Mbit/s (with the same PDU length), which is satisfactory for the uplink
direction.
(5) 2 kbytes per transaction corresponds to the maximum WAP page.

4.4.3 Maximum GP Throughput (B10 Mx)

4.4.3.1 Maximum GP Throughput depending on DSP constraints


The method to evaluate the GP throughput is equivalent to the one used to evaluate the GPU throughput
(see Section 4.4.2.1), but here we have to study two cases, the first one corresponds to Gb over Frame
Relay (GboFR) and the second one corresponds to Gb over IP (GboIP). Actually, the only difference
between these two configurations is the number of available GCH resources. This means that the impact
will only be visible for EGPRS and high coding scheme (from MCS6). For GPRS, the performance will be the
same for both cases.
The number of PDCH per DSP given in Table 29 and in Table 30, has been evaluated assuming the
following:
DSP load set to 85%,
3 PDCHs per TRX,
Average RLC load per PDCH in DL and UL set to 100%,
LLC PDU Length in DL and in UL set respectively to 500 bytes and 100 bytes.

Coding scheme CS1 CS2 CS3 CS4


Maximum PDCHs per DSP 252 240 223 202
Maximum DSP throughput (kbit/s) 2 016 2 880 3 211 4 040

Maximum GP throughput (kbit/s) 8 064 11 520 12 845 16 160
Coding scheme MCS1 MCS2 MCS3 MCS4 MCS5 MCS6 MCS7 MCS8 MCS9
MR1 GboFR
Maximum PDCHs per DSP 228 220 210 199 183 154 104 87 81
Maximum DSP throughput (kbit/s) 2 006 2 464 3 108 3 502 4 099 4 558 4 659 4 733 4 795
Maximum GP throughput (kbit/s) 8 026 9 856 12 432 14 010 16 397 18 234 18 637 18 931 19 181
MR2 GboIP
Maximum PDCHs per DSP 228 220 210 199 183 168 128 108 99
Maximum DSP throughput (kbit/s) 2 006 2 464 3 108 3 502 4 099 4 973 5 734 5 875 5 861
Maximum GP throughput (kbit/s) 8 026 9 856 12 432 14 010 16 397 19 891 22 938 23 501 23 443

Table 29: Maximum GP Throughput (GPRS and EGPRS, perfect propagation)

The above maximum GP throughput does not correspond to realistic radio propagation conditions. In
practice, the value for GP throughput can be limited to a lower average throughput as shown in the
table below.
Coding scheme CS1 CS2 CS3 CS4
Average throughput per PDCH (kbit/s) 8 11.9 14.1 15.6
Maximum PDCHs per DSP 252 240 223 202
Maximum GP throughput (kbit/s) 8 064 11 462 12 604 12 572
Coding scheme MCS1 MCS2 MCS3 MCS4 MCS5 MCS6 MCS7 MCS8 MCS9
Average throughput per PDCH (kbit/s) 8.8 11.2 14.7 16.7 22.4 29.3 40.5 41.4 38.9
MR1 GboFR
Maximum PDCHs per DSP 228 220 210 199 183 154 104 87 81
Maximum GP throughput (kbit/s) 8 026 9 856 12 348 13 293 16 397 18 049 16 848 14 407 12 604
MR2 GboIP
Maximum PDCHs per DSP 229 220 210 199 183 168 128 108 99
Maximum GP throughput (kbit/s) 8 061 9 856 12 348 13 293 16 397 19 690 20 736 17 885 15 404

Table 30: Maximum GP Throughput (GPRS and EGPRS, with radio propagation)
Considering that the maximum throughput per PDCH cannot be reached, we can set a reasonable target
for the maximum GP throughput above RLC at about:
EDGE:
14.3 Mbit/s for GboFR,
15.8 Mbit/s for GboIP,
GPRS: 12.2 Mbit/s.
However, to conclude at GP level we need to look at the PPC (see next sections).

4.4.3.2 Maximum GP Throughput depending on PPC constraints


In the following we first compute the maximum PPC throughput when TBFs are already established, and
then with some TBF establishments and varying hypotheses in terms of DL PDUs per transaction.
In Section 4.3.5, we have shown that when the PPC is loaded at 85% by DL-LLC (neither TBF establishment
considered nor any other procedures), the maximum DL-LLC rate is equal to 6046 LLC per second for
B10 MR1 (GboFR) and 4863 LLC per second for B10 MR2 (GboIP).

We can then compute the maximum throughput that the PPC is able to transmit with the (more than
optimistic) hypothesis of no TBF establishment, depending on the average LLC size.
LLC average data length (bytes)        100    200    400     600     800     1000    1500
Throughput above RLC (kbit/s), MR1     4 837  9 674  19 348  29 022  38 696  48 370  72 554
Throughput above RLC (kbit/s), MR2     3 890  7 781  15 562  23 343  31 123  38 904  58 356

Table 31: Maximum DL PPC throughput depending on LLC PDU length (GP)

The maximum UL LLC rate for the PPC is nearly three times greater than in the downlink; hence the
maximum UL throughput with the same average LLC PDU length is around three times greater than in the
downlink direction.
With an average PDU length around 460 bytes (see Section 4.4.2.2 for an explanation of this value), the
PPC maximum throughput in DL is around 22.2 Mbit/s for MR1 and 17.9 Mbit/s for MR2. If the average
drops to around 200 bytes, the PPC maximum throughput is only about 9.7 Mbit/s for MR1 and 7.8 Mbit/s for
MR2, as shown in Table 31 (200-byte column).

4.4.3.3 DL GP Throughput at PPC nominal load

To estimate the PPC nominal throughput, the method is the same as the one described in Section 4.4.2.3,
using the PPC load criterion for Mx (defined in Equation 3). This leads to:
Transaction_rate × (1.83 + 1.83 + 0.14 × N_DL_LLC_per_trans) = 714.5 ....... [GP MR1 + GboFR]
Transaction_rate × (1.81 + 1.81 + 0.17 × N_DL_LLC_per_trans) = 711.5 ....... [GP MR2 + GboIP]

From the above equations we can deduce the transaction rate at nominal load, and the DL PPC throughput
at nominal load.
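
A minimal sketch of that deduction (illustrative Python): it solves the GP load criterion above for the transaction rate and then derives the DL LLC rate and throughput. Because the published coefficients are rounded, the results differ slightly from Table 32.

```python
def gp_dl_capacity(avg_trans_kbytes: float, mean_llc_bytes: float,
                   c_tbf: float, c_llc: float, budget: float):
    """Solve Transaction_rate * (2 * c_tbf + c_llc * N_DL_LLC_per_trans) = budget."""
    n_dl_llc = avg_trans_kbytes * 1000 / mean_llc_bytes       # DL LLC PDUs per transaction
    trans_rate = budget / (2 * c_tbf + c_llc * n_dl_llc)      # transactions per second at nominal load
    dl_llc_rate = trans_rate * n_dl_llc                       # DL LLC PDUs per second
    throughput = dl_llc_rate * mean_llc_bytes * 8 / 1000.0    # kbit/s above RLC
    return trans_rate, dl_llc_rate, throughput

# GP MR1 + GboFR, 10 kbytes per transaction, 460-byte LLC PDUs (compare with Table 32)
print(gp_dl_capacity(10, 460, 1.83, 0.14, 714.5))  # ~(107 trans/s, ~2 300 LLC/s, ~8 500 kbit/s)
```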

In Table 32, the PPC throughput is computed with an average PDU length of 460 bytes (from Table 26). In
Table 33, the PPC throughput is computed with an average length of 250 bytes4. It must be noted
(although not reflected in our simple model) that the average PDU length will tend to decrease when the
data load per transaction decreases. This is because the control messages of the application, which are
short, become more numerous relative to the user data than for a long file transfer.
Average data load per transaction (kbytes) 2 10 30 50 100
Number of DL LLC per transaction 4 22 65 109 217
MR1 GboFR
Transaction rate at PPC nominal load (per s) 167.0 106.3 55.7 37.7 20.9
DL LLC rate at PPC nominal load (per s) 726 2 310 3 630 4 098 4 538
PPC throughput (kbit/s) 2 672 8 501 13 359 15 082 16 698
MR2 GboIP
Transaction rate at PPC nominal load (per s) 162.6 96.0 47.4 31.5 17.1
DL LLC rate at PPC nominal load (per s) 707 2 086 3 091 3 420 3 717
PPC throughput (kbit/s) 2 602 7 676 11 373 12 586 13 679

Table 32: GP throughput at PPC nominal load with LLC length of 460 bytes
Average data load per transaction (kbytes) 2 10 30 50 100
Number of DL LLC per transaction 8 40 120 200 400
MR1 GboFR
Transaction rate at PPC nominal load (per s) 149.1 76.9 34.8 22.5 11.9
DL LLC rate at PPC nominal load (per s) 1 193 3 076 4 175 4 496 4 771
PPC throughput (kbit/s) 2 386 6 152 8 349 8 991 9 542
MR2 GboIP
Transaction rate at PPC nominal load (per s) 141.9 67.1 28.9 18.4 9.7
DL LLC rate at PPC nominal load (per s) 1 135 2 683 3 472 3 689 3 871
PPC throughput (kbit/s) 2 271 5 366 6 945 7 379 7 741

Table 33: GP throughput at PPC nominal load with LLC length of 250 bytes
We see that in the best case (with a high data load per transaction), we can expect a PPC DL throughput at
nominal load of 16.7 Mbit/s (for MR1). In the worst case (2 kbytes per transaction, which corresponds to the
maximum WAP page), the PPC DL throughput will be around 2.2 Mbit/s.

In the Uplink direction, the maximum PPC LLC rate is three times higher than in the downlink. The
maximum UL GP throughput based on DSP constraints is limited to about 19.3 Mbit/s (EGPRS, with MCS6
and considering radio propagation). This throughput can be transmitted by the PPC (at 85% load) with an
average PDU length equal to 240 bytes (240 bytes × 8 × 10055 LLC/s ≈ 19.3 Mbit/s). The expected uplink GP
performance is satisfactory.

4.4.4 Conclusion and recommendation


When comparing downlink throughput estimations for DSP and PPC, we see that the PPC is still the bottleneck,
mainly due to the expected low value of the N201 parameter in most MS handsets (see Table 26).
The comparison is done between the downlink throughput at PPC nominal load, with an average of 10
kbytes per transaction and an average LLC length equal to 460 bytes (this average transaction is assumed
to take GMM signalling into account), see Section 4.4.2.3 for GPU2/GPU3 and Section 4.4.3.3 for GP, and the
GP(U) throughput at DSP level (see Section 4.4.2.1 for legacy and 4.4.3.1 for Mx). The comparison is
presented below for B10 MR1 (GboFR) and for B10 MR2 (GboFR and GboIP).

B10 MR1 (GboFR):


For GPU2, the throughput at PPC level is equal to 1.8 Mbit/s and:
This is 40% below the GPU target of 3 Mbit/s with GPRS,
This is 55% below the GPU target of 4 Mbit/s with EGPRS.
For GPU3, the throughput at PPC level is equal to 3 Mbit/s and:
This is practically equal to the GPU target of 3.1 Mbit/s with GPRS,
This is 25% below the GPU target of 4 Mbit/s with EGPRS.
For GP, the throughput at PPC level is equal to 8.5 Mbit/s and:
This is 30% below the GP target of 12.2 Mbit/s with GPRS,
This is 40% below the GP target of 14.3 Mbit/s with EGPRS.

B10 MR2 (GboFR):


For GPU2, the throughput at PPC level is equal to 1.6 Mbit/s and:
This is 48% below the GPU target of 3.1 Mbit/s with GPRS,
This is 60% below the GPU target of 4 Mbit/s with EGPRS.
For GPU3, the throughput at PPC level is equal to 2.7 Mbit/s and:
This is 11% below the GPU target of 3 Mbit/s with GPRS,
This is 31% below the GPU target of 4 Mbit/s with EGPRS.

B10 MR2 (GboIP):


For GPU2, the throughput at PPC level is equal to 1.5 Mbit/s and:
This is 50% below the GPU target of 3.1 Mbit/s with GPRS,
This is 61% below the GPU target of 4 Mbit/s with EGPRS.
For GPU3, the throughput at PPC level is equal to 2.3 Mbit/s and:
This is 25% below the GPU target of 3.1 Mbit/s with GPRS,
This is 43% below the GPU target of 4 Mbit/s with EGPRS.
For B10 GP, the throughput at PPC level is equal to 7.7 Mbit/s and:


This is 38% below the GP target of 12.2 Mbit/s with GPRS,
This is 52% below the GP target of 15.8 Mbit/s with EGPRS.


Conclusion:
The above figures are absolute maxima. The detection of a PPC bottleneck problem by the operator
requires a significant amount of traffic, which is not expected at the beginning of B10. If the GPU / GP
throughput is limited by the PPC, some GPU / GP counters will make a diagnosis possible.
Concerning DSP capacity, a value of 85% is recommended for the O&M parameter DSP_Load_Thr1. This
value allows reaching the GPU / GP target capacity (no regression of B10 compared to B9) under most
traffic conditions. On the contrary, a value of 60% for this parameter is too stringent to reach this GPU /
GP target capacity under almost any traffic conditions.

4.5 Main figures for the GPU / GP


Radio interface:
The DSP capacity is expressed as a maximum number of simultaneously activated PDCHs. This number is
presented here for different values of the EGPRS penetration rate, and for a given (M)CS distribution (see
Table 34). This distribution is the same for DL and for UL.
Coding scheme CS1 CS2 CS3 CS4
CS Distribution [%] 10% 1% 1% 88%
Coding scheme MCS1 MCS2 MCS3 MCS4 MCS5 MCS6 MCS7 MCS8 MCS9
MCS Distribution [%] 1% 1% 0% 0% 1% 2% 0% 12% 83%

Table 34: (M)CS Distribution

The other assumptions are the same as the ones presented in Section 4.2.2.3.1 for B10 legacy and in
Section 4.2.2.3.2 for B10 Mx.
EGPRS penetration rate 0% 5% 30% 50% 80%
B10 GPU2 / GPU3
Maximum PDCHs per DSP 50 48 41 37 31
Maximum PDCHs per GPU 200 192 164 148 124
B10 GP with GboFR
Maximum PDCHs per DSP 206 200 153 124 96
Maximum PDCHs per GP 824 800 612 496 384
B10 GP with GboIP
Maximum PDCHs per DSP 206 200 171 152 119
Maximum PDCHs per GP 824 800 684 608 476

Table 35: Maximum number of PDCHs versus EGPRS penetration rate

Ater capacity:
Maximum number of GCH connected due to processing capacity:
480 GCH per GPU for B10 legacy,
1456 GCH per GP for B10 Mx with Gb over Frame Relay, assuming 112 GCH per E1. In this case, 3 E1
are reserved for Gb, and 13 E1 for PS traffic.
1792 GCH per GP for B10 Mx with Gb over IP, assuming 112 GCH per E1.
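
The GCH totals above follow directly from the number of E1 links left for PS traffic (assuming the same 16 E1 per GP in both cases); a one-line check, for illustration:

```python
GCH_PER_E1 = 112
print(13 * GCH_PER_E1)  # 1 456 GCH per GP with Gb over Frame Relay (3 of the E1 reserved for Gb)
print(16 * GCH_PER_E1)  # 1 792 GCH per GP with Gb over IP (all E1 available for PS traffic)
```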

PPC processing capacity (for B10 MR1 GboFR):

Maximum ping rate:


At GPU2 / GPU3 / GP level: a theoretical maximum of
45 pings per second for GPU2,
75 pings per second for GPU3,
and 194 pings per second for GP,
at PPC nominal load when no CPU margin is expected. These theoretical maxima apply only to the
GPU2 / GPU3 / GP to quantify the TBF establishment performance; they are meaningless at system
level, for which other procedures must be considered.
At BSS level: maximum rates of 22.5 pings per second per GPU2, 37.5 pings per second per GPU3
and 97 pings per second per GP are expected (so half the theoretical maximum for the GPU2 /
GPU3 / GP), because of BSCGP load constraints and because higher rates do not correspond to a realistic
traffic model.

Max DL LLC rate:


At nominal load (75%):
1 320 DL LLC per second for B10 GPU2,
1 870 DL LLC per second for B10 GPU3,
5 335 DL LLC per second for B10 GP.
At high load (85%):
1 495 DL LLC per second for B10 GPU2,
2 119 DL LLC per second for B10 GPU3,
6 046 DL LLC per second for B10 GP.

Max UL LLC rate:


At nominal load (75%):
2 921 UL LLC per second for B10 GPU2,
4 250 UL LLC per second for B10 GPU3,
12 278 UL LLC per second for B10 GP,
At high load (85%):
3 310 UL LLC per second for B10 GPU2,
4 817 UL LLC per second for B10 GPU3,
13 915 UL LLC per second for B10 GP.
The full PPC load criterion is provided in Equation 1 for B10 GPU2, in Equation 2 for B10 GPU3 and in
Equation 3 for B10 GP.

PPC processing capacity (for B10 MR2 GboFR):

Maximum ping rate:


At GPU2 / GPU3 level: a theoretical maximum of
38 pings per second for GPU2,
and 67 pings per second for GPU3,
at PPC nominal load when no CPU margin is expected. These theoretical maxima apply only to the
GPU2 / GPU3 to quantify the TBF establishment performance; they are meaningless at system level,
for which other procedures must be considered.

At BSS level: maximum rates of 19 pings per second per GPU2 and 33.5 pings per second per
GPU3 are expected (so half the theoretical maximum for the GPU2 / GPU3), because of BSCGP load
constraints and because higher rates do not correspond to a realistic traffic model.

Max DL LLC rate:


At nominal load (75%):
1 245 DL LLC per second for B10 GPU2,
1 759 DL LLC per second for B10 GPU3,
At high load (85%):
1 411 DL LLC per second for B10 GPU2,
1 993 DL LLC per second for B10 GPU3,

Max UL LLC rate:


At nominal load (75%):
2 582 UL LLC per second for B10 GPU2,
3 767 UL LLC per second for B10 GPU3,
At high load (85%):
2 927 UL LLC per second for B10 GPU2,
4 270 UL LLC per second for B10 GPU3,
The full PPC load criterion is provided in Equation 1 for B10 GPU2 and in Equation 2 for B10 GPU3.

PPC processing capacity (for B10 MR2 GboIP):

Maximum ping rate:


At GPU2 / GPU3 / GP level: a theoretical maximum of
37 pings per second for GPU2,
58 pings per second for GPU3,
and 194 pings per second for GP,
at PPC nominal load when no CPU margin is expected. These theoretical maxima apply only to the
GPU2 / GPU3 / GP to quantify the TBF establishment performance; they are meaningless at system
level, for which other procedures must be considered.
At BSS level: maximum rates of 18.5 pings per second per GPU2, 29 pings per second per GPU3
and 97 pings per second per GP are expected (so half the theoretical maximum for the GPU2 /
GPU3 / GP), because of BSCGP load constraints and because higher rates do not correspond to a realistic
traffic model.

Max DL LLC rate:


At nominal load (75%):
1 114 DL LLC per second for B10 GPU2,
1 388 DL LLC per second for B10 GPU3,
4 291 DL LLC per second for B10 GP.
At high load (85%):
1 262 DL LLC per second for B10 GPU2,
1 573 DL LLC per second for B10 GPU3,
4 863 DL LLC per second for B10 GP.

Max UL LLC rate:
At nominal load (75%):



2 526 UL LLC per second for B10 GPU2,


3 308 UL LLC per second for B10 GPU3,
10 948 UL LLC per second for B10 GP,
At high load (85%):
2 862 UL LLC per second for B10 GPU2,
3 749 UL LLC per second for B10 GPU3,
12 407 UL LLC per second for B10 GP.
The full PPC load criterion is provided in Equation 1 for B10 GPU2, in Equation 2 for B10 GPU3 and in
Equation 3 for B10 GP.

Maximum number of simultaneously established TBF:


At GPU level (total for both directions): 960, and at GP level: 3840.
Per PDCH for UL: 6,
Per PDCH for DL: 10.

Maximum GPU / GP Throughput:


With GPRS per direction:
3.1 Mbit/s per direction for B10 legacy,
12.2 Mbit/s per direction for B10 Mx.
With EGPRS per direction:
4 Mbit/s per direction for B10 legacy,
14.3 Mbit/s per direction for B10 Mx with Gb over Frame Relay,
15.8 Mbit/s per direction for B10 Mx with Gb over IP.

For B10 GPU2, with the currently estimated PPC performance, these maxima can be reached only with a
sufficiently large LLC PDU average size of 500 bytes (see Section 4.4 for details). A throughput higher than 4
Mbit/s may be reachable depending on the MCS distribution and PDU size, but the target of 4 Mbit/s is already
considered a very ambitious target once the limitation due to radio propagation is introduced.
With B10 MR1 for Mx, the GP throughput is multiplied by a factor of 3.6 to reach nearly 14.5 Mbit/s. With
B10 MR2 for Mx, the GP throughput is multiplied by a factor of 4 to reach nearly 16 Mbit/s.


5 TRAFFIC MODEL

5.1 Theoretical model for PS services


A GPRS traffic session is defined as a sequence of packet calls (or transactions). The user initiates a
packet call when requesting information. During a packet call several packets may be generated, which
means the packet call constitutes a burst of packets.
[Figure: a packet service session is shown as a sequence of transactions (packet calls); each transaction is a burst of packet arrivals at the network equipment, from the first packet arrival to the last packet arrival.]

Figure 9: Typical characteristic of a packet service session


For example, a WEB session consists of the period during which a user is actively doing WEB browsing.
During this session, a transaction corresponds to the download of a WEB page. There are periods during
which the user is looking at the screen; these idle periods between transactions correspond to reading
times.
The full description of a PS session requires the use of a statistical model, which quantifies the arrival law
of sessions, the arrival law of transactions within a session, and the distributions of the packet length and of
the data load in a session (page size). Examples of such models can be found in documents [24], [25] and
[26]. These models are too complex to allow analytical computations, but are used to run simulations with
random traffic generators, as for example in the study described in document [28]. Such models are quite
useful to dimension network resources for a required quality of service.

5.2 Simplification and specific needs to predict the GPU / GP capacity


In this document we are only interested in the GPU / GP average load for a given traffic intensity, which
determines the GPU / GP capacity (a full statistical model would make it possible to determine the capacity
for a given grade of service, i.e. maximum queuing time). For this reason, we will only describe the traffic
by average values, and we will compute global averages over the busy hour period. This however does not
provide any information on statistical variations of the load, and no guarantee in terms of QoS.

5.2.1 Session description


The session is described by:
The signalling phase description,
The number of transactions per session,


The data load per session (page size),
The expected average packet size (IP-packet).



PDP context activation occurs at the beginning of each session. PDP context deactivation occurs at the
end of each session.
The attachment procedure can be triggered:
Either at the beginning of each session,
Or the MS is attached when switched on.
Additionally, mobility management (Routing Area update and cell update) must be taken into account.

5.2.2 Transaction description


A transaction can be seen as the period during which resources will be allocated to serve a burst of
packets. With the delayed downlink TBF release feature, a transaction can be identified with the time
during which the downlink TBF is established (it is assumed that the DL TBF is maintained as long as an
uplink TBF is established, taking into account the addition of T_network_response_time, even if no data
are sent on the downlink).
The wording "packet" usually refers to an IP-packet. For the BSS GPRS traffic model, we are interested in
the LLC-PDUs, which are the packets received on the Gb interface. To avoid confusion, we will reserve the
word "packet" for Internet packets and "LLC-PDU" for the data units received on the Gb interface.
A transaction is described by:
A number of TBF establishments and releases. We distinguish between the uplink TBF and the downlink
TBF, and the channels on which the establishment takes place,
A number of DL LLC PDU,
A number of UL LLC PDU,
An average UL LLC PDU size,
An average DL LLC PDU size.

5.2.3 EDGE and GPRS users


In the following we assume that EDGE and GPRS users have the same behavior, because we have no data
to assess any difference. In the long term it is likely that users with a higher-speed terminal (EDGE) will
have a higher activity in terms of both data load and frequency. This is neglected for the time being.
The only difference between EDGE and GPRS users in the traffic model presented hereafter lies in the LLC
PDU maximum size (N201 parameter). This difference (longer PDUs for EDGE than for GPRS), based on
information available at the time this document was drafted, will have to be reassessed when more data
are available.

5.2.4 User behavior at Busy Hour


From the session description and the transaction description, we characterize the user behavior at busy
hour by a number of parameters, which are relevant inputs for the GPU / GP load model, as follows:
Number of transactions per user per busy hour,
Number of PS paging per user per busy hour,
Number of DL LLC-PDU per busy hour,
Number of UL LLC-PDU per busy hour,


Number of DL TBF on CCCH per busy hour,
Number of UL TBF on CCCH per busy hour,



Number of DL TBF on PACCH per busy hour,


Number of UL TBF on PACCH per busy hour,
Average bytes transferred in DL per busy hour,
Average bytes transferred in UL per busy hour,
Number of Radio Resource Reallocation.
The formulas to compute the user behavior from the session description and transaction descriptions are
provided in Section 9.3. Some of the formulas take protocol behavior into account, as detailed below
(mainly TCP/IP).
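
As a simplified illustration of those formulas (not the complete Section 9.3 model), the busy-hour LLC counts of a profile follow from the per-transaction counts, the number of transactions per session, and one GMM signalling transaction per session; RA updates are accounted for separately when the MS is always attached.

```python
def dl_llc_per_bh(sessions_per_bh: float, trans_per_session: int,
                  dl_llc_per_trans: int, gmm_dl_llc: int = 2) -> float:
    """DL LLC PDUs per user per busy hour (simplified: one GMM signalling transaction per session)."""
    return sessions_per_bh * (trans_per_session * dl_llc_per_trans + gmm_dl_llc)

# WEB profile with GPRS mobiles, using the per-transaction values given in Section 5.6:
print(dl_llc_per_bh(1.0, 5, 109))  # 547 DL LLC PDUs at BH, as in Table 44
```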

5.3 Profile definitions and mix of profiles


In the following we will consider several application profiles. Each profile is supposed to correspond to
one type of service (application above GPRS). For each profile, a set of values is proposed for the session
and transaction parameters.
The profiles below correspond to what we estimate to be the most popular applications. Of course, some
other applications may exist, but they are considered as negligible9 or can be considered as matching
another service profile. In the B8 release the MMS profile was introduced; it replaces the previous SMS
profile.
A relevant component of Internet traffic is FTP, although its importance is decreasing because it is more
and more common to use HTTP to download files instead of FTP. Traffic originated by FTP control is
neglected for simplicity. In fact the MMS model is very close to FTP and can be adapted to estimate
capacity with FTP (UL and DL).
We first explain the characteristics of each profile. Tables with numeric values for each parameter are
provided in Section 5.6.

5.3.1 WEB browsing


A Web browsing session consists of the download of a number of WEB pages. The underlying protocol is
http over TCP/IP. The parameter values presented below are based on document [27]:
The average size of a Web page is estimated to be 50 kbytes,
The average number of pages per session is 5 pages10,
The average size of downlink packet is around 1000 bytes11,
The average size of an uplink packet is around 40 bytes (mainly TCP-acknowledgements).
Each page download consists of a Mobile Originated transaction, with one downlink TBF transfer. The
uplink transfer corresponds to TCP Acknowledgement. With the introduction of the extended UL mode
feature, and assuming that the time between two TCP acknowledgements is lower than
T_Max_Extended_UL (timer corresponding to the maximum duration of the extended uplink TBF phase),
we can assume that only one UL TBF is generated and this TBF remains established as long as the DL TBF
remains established. This assumption is only valid if the mobile also supports the extended UL mode.
Otherwise, an UL TBF will be established on PACCH for every three TCP acknowledgements.

9 For the time being we do not consider telemetry users, fleet management, and so on.
10 The value from document [27] is much higher, but the data have been gathered in an environment with fixed telephony and very
active users (corporate and educational). So we prefer to take the same value as in document reference [28], which seems more
adapted to mobile PS services.
11 WEB-packet.

Note In the following, we will assume that 50% of the MS support the feature extended UL mode.
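
One plausible reading of how this 50% assumption feeds the UL TBF counts used later (the exact formulas are those of Section 9.3; this sketch is illustrative only): MS supporting extended UL mode keep one UL TBF per DL TBF, while the others open one UL TBF on PACCH for every three TCP acknowledgements.

```python
import math

def ul_tbf_on_pacch(n_tcp_acks: int, n_dl_tbf: int = 1, ext_ul_share: float = 0.5) -> float:
    """Expected UL TBF establishments on PACCH per transaction under the 50% extended-UL assumption."""
    with_ext = n_dl_tbf                        # extended UL mode: one UL TBF kept per DL TBF
    without_ext = math.ceil(n_tcp_acks / 3)    # otherwise: one UL TBF per three TCP acknowledgements
    return ext_ul_share * with_ext + (1 - ext_ul_share) * without_ext

# WEB page download with ~50 TCP acknowledgements: 0.5 * 1 + 0.5 * 17 = 9 (cf. Table 42)
print(ul_tbf_on_pacch(50))
```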

We consider a distribution for DL WEB packets derived from document [27], but as this document was
published in 1999, we have assumed that the 552- and 576-byte IP-packet sizes corresponding to older
versions of TCP/IP are now replaced by 1500. So the frequency of the size 1500 is increased accordingly. The
downlink LLC-PDU average data length is not equal to the average IP-packet length, because of the
limiting effect of the GPRS N201 parameter. In the future, laptop computers with an incorporated
GPRS/EDGE device may have an N201 parameter equal to the maximum IP-packet size (1500 bytes), but the
EDGE-capable mobiles currently planned for testing have an N201 parameter around 800 bytes (Philips:
N201 = 766, Nokia: N201 = 806). With a given IP-packet distribution, we can deduce the downlink LLC-PDU
data length distribution and the corresponding average downlink LLC-PDU data length. Also, many non-EDGE
mobiles have an N201 parameter equal to 500 or 576 bytes. We study the cases of N201 set respectively to
576, 800 and 1500 bytes below.
IP-packet sizes   IP-packet distribution   LLC PDU per TCP packet   LLC-PDU length   LLC-PDU for 100 TCP segments   LLC PDU length distribution
1000 (average)    20%                      2                        500              40                             17%
1500              55%                      3                        500              165                            72%
40                25%                      1                        40               25                             11%
Average: 1035     100%                                              Average: 450
Table 36: DL LLC average data length for WEB if N201 = 576 bytes
IP-packet sizes   IP-packet distribution   LLC PDU per TCP packet   LLC-PDU length   LLC-PDU for 100 TCP segments   LLC PDU length distribution
1000 (average)    20%                      2                        500              40                             23%
1500              55%                      2                        750              110                            63%
40                25%                      1                        40               25                             14%
Average: 1035     100%                                              Average: 591
Table 37: DL LLC average data length for WEB if N201 = 800 bytes
IP-packet sizes   IP-packet distribution   LLC PDU per TCP packet   LLC-PDU length   LLC-PDU for 100 TCP segments   LLC PDU length distribution
1000 (average)    20%                      1                        1000             20                             20%
1500              55%                      1                        1500             55                             55%
40                25%                      1                        40               25                             25%
Average: 1035     100%                                              Average: 1035
Table 38: DL LLC average data length for WEB if N201 = 1500 bytes
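
The averages of Tables 36 to 38 can be reproduced with the small sketch below (illustrative Python): each IP packet is split into ceil(size / N201) LLC PDUs of equal payload, and the average PDU length is weighted by the number of PDUs.

```python
import math

# IP-packet size (bytes) and share of packets, as in Tables 36 to 38
IP_MIX = [(1000, 0.20), (1500, 0.55), (40, 0.25)]

def avg_llc_payload(n201: int, ip_mix=IP_MIX) -> float:
    """Average DL LLC PDU payload when each IP packet is split evenly into ceil(size/N201) PDUs."""
    total_pdus = sum(share * math.ceil(size / n201) for size, share in ip_mix)
    total_bytes = sum(share * size for size, share in ip_mix)
    return total_bytes / total_pdus

print(avg_llc_payload(576))   # ~450 bytes (Table 36)
print(avg_llc_payload(800))   # ~591 bytes (Table 37)
print(avg_llc_payload(1500))  # 1035 bytes (Table 38)
```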

5.3.2 WAP
The WAP service is accessible via every GPRS mobile equipment and allows using value-added services
such as:
Local information (for example, finding the nearest restaurant or movie theatre),
News or weather,


Mobile Commerce,
Mobile Banking,

Messaging.
The WAP service is provided via a WAP gateway generally belonging to the PLMN operator. The average
size of a WAP page is currently estimated at 1 kbyte (see document [28]). An increase could be expected in
the coming years, corresponding to more attractive content as well as to larger screens. Pages larger than 2
kbytes will be possible after migration to WAP 2 (see the MMS profile for information on WAP 2).
We exclude MMS services from WAP services, although MMS uses the WAP protocols. This is to make a
distinction between low data load services (initially WAP was designed for low data load) and higher load
services (MMS).

5.3.3 Multi Media Messaging Services (MMS)


MMS is the evolution of SMS. MMS can transmit messages containing text, graphics, photographic images,
audio and even video clips between mobile devices using WAP as bearer technology. MMS-enabled phones
will feature a built-in media editor that allows users to easily create and edit the content of their
multimedia messages. By using a built-in or attached camera, for example, users can produce digital
postcards and send them to friends or associates.
MMS will be used both downlink and uplink. Contradictory information has been found about the relative
number of MMS for the downlink and the uplink:
MMS will at first mainly be photos sent from mobiles with a built-in camera towards an e-mail address.
This favors the hypothesis of a higher uplink number of MMS sent.
MMS will be sent down to mobile phone users by means of a mailing list (one MMS towards n GPRS
users). This favors the hypothesis of a higher number of DL MMS sent.
As we have no means to quantify each hypothesis, we will consider an equal number of MMS transactions
in the downlink and uplink directions.
While first MMS implementations will be based on enhanced WAP protocols (WSP/WDP MMS
encapsulation), later MMS versions will support WAP 2 protocols for communication between terminal and
MMS relay, which uses HTTP over TCP/IP. In this document, we will consider MMS based on WAP 2
protocols13. This implies that we have to consider TCP acknowledgment in the same way as for WEB
browsing (see Section 5.3.1 for details).

The data exchanges for MMS will depend on the kind of data exchanged. The following table gives some
examples.
Kind of information                        Mean size [kbyte] according to [29]
Text                                       1 (not considered, see footnote 14)
Contact photo, 80 x 100 pixels             2
Compressed photo, 120 x 160 pixels         10
High-resolution photo, 640 x 480 pixels    30
Speech memo, 60 seconds                    60
Video clip, 20 seconds                     100
Table 39: Examples of MMS information type

13 Alcatel MMS server is planned to be deployed with WAP 2 from mid-2004 onwards.
14 Up to now, the SMS service is still using CS-SMS over SDCCH because most MS suppliers have not implemented the "revert to
CS-mode" for SMS when SMS is not available over GPRS. This is kept as an assumption for B9, but it remains to be checked in the
long term.

On average, a mean MMS size of 50 kbytes is assumed15 for the downlink direction. For the uplink, the
average size corresponding to high-resolution photos (30 kbytes) is assumed.



An MMS download will be preceded by a notification sent to the MS by the WAP server. The notification is
sent using WAP/GPRS, which requires the sending of a PS Paging message. The MS can respond to this
notification later. So we consider two distinct DL TBF establishments for the MMS-DL.

5.3.4 FTP transfer over TCP/IP


We do not have a specific profile for FTP transfer over TCP/IP. However, since we have assumed that the
MMS is based on WAP2, which uses TCP/IP, the MMS profile can be used to estimate the capacity for
FTP/TCP. Some parameters can be adapted if needed (paging removed for the DL case, file size to be
adapted to application need)16.

5.3.5 File Downlink transfer over UDP


Large file transfers for music and video clips are likely to use UDP instead of TCP. The size of the file
depends on the duration of the clip (a few minutes) and on the compression technique.
This type of transfer is characterized by IP packets sent in unacknowledged mode, and a few control
messages in the reverse direction (one for every 5 seconds of transfer). The IP packets are assumed to
reach the maximum size of 1500 bytes.
A file size of around 300 kbytes is assumed on average, which corresponds to the download of a 1-minute
video clip with a throughput of 40 kbit/s or a 2-minute music clip with a throughput of 20 kbit/s.

Note As the periodicity of the control messages on the uplink is equal to 5 seconds and therefore higher
than T_max_extended_UL timer, the extended UL mode has no influence on this type of traffic.
This means that an UL TBF will be associated to one control message.

5.3.6 Signalling
This profile does not characterize a GPRS application, but summarizes the GMM activity for all other
profiles. It describes all the signalling activity that occurs during a session. We assume that this profile
does not depend on the application (all sessions start with one PDP context activation). This may also
need to be refined in the future, since some applications may require the use of more than one
simultaneous PDP context. GMM transfers are characterized by TBFs reduced to one small LLC frame. We
assume an average LLC PDU length of 50 bytes for all signalling messages17.
If the MS requires GPRS attachment for each GPRS session, all the signalling messages shown in Table 40
have to be taken into account for one session.
If the MS is always GPRS attached (class B mobiles), then only PDP context activation and deactivation are
needed in most cases. In this case the signalling flow generated by attachment can be considered
negligible18 in a first approach.

15 The MMS size is expected to be limited to 50 kbytes by operators in a first stage.
16 This means that the associated Excel tool can be used to check the capacity with FTP download/upload by changing a few input
parameter values.
17 This approximation is acceptable because we are more interested in the number of PDUs generated and TBF establishments than
in the corresponding data load, which will be small compared to application data.
18 The MS attaches at switch-on. Since battery performance is now very good, this event becomes very rare compared to other
signalling.

The identity request is not taken into account, because it is needed only in a very small number of cases
(2% according to document [29]).



Message types                              Direction   Downlink count   Uplink count
Attach Request                             Uplink                       1
Identity Request                           Downlink    1
Identity Response                          Uplink                       1
Authentication and Ciphering Request       Downlink    1
Authentication and Ciphering Response      Uplink                       1
Attach Accept                              Downlink    1
Attach Complete                            Uplink                       1
Activation PDP context Request             Uplink                       1
Activation PDP context Accept              Downlink    1
(GPRS data session)
Deactivate PDP context Request             Uplink                       1
Deactivation PDP context Accept            Downlink    1
Detach Request                             Uplink                       1
Detach Accept                              Downlink    1
Total messages (case of attachment at each session)     6               7
Total messages (case of MS always attached)             2               2

Table 40: Signalling messages at start of session


The following assumptions will be taken to establish the signalling model:
Attachment: In the following we consider that the MS is always GPRS attached, as soon as it is switched
on19. To cover the worst case as far as signalling load is concerned, a flag in the traffic model allows
showing the overhead if attachment is triggered for each session.
PDP context: For some services, several PDP contexts are needed per session. Hence the PDP
context procedure may be activated several times. Here we assume only one PDP context per
session. The usage of several PDP contexts per session is for further study.
Mobility (Routing Area updates, cell updates):
Only periodic Routing Area updates are taken into account at this stage. The number of routing
area updates depends on the number of GPRS users in the area covered by the GPU / GP and
on the periodic RA update timer. The default value is used (T3312 = 54 min). The following message
sequence is exchanged:

Message types Direction


Routing Area Update request Uplink
Routing Area Update accept Downlink
Routing Area Update complete Uplink

This requires 1 UL TBF/(P)CCCH + 1 DL TBF/PACCH + 1 UL TBF/PACCH.

Uplink TBFs due to cell updates in packet transfer mode or GMM ready state should be much
fewer than the uplink TBFs due to TCP-based applications, and are neglected at this stage.
PS paging occurs only with Mobile Terminated transactions. Only the case of notification for DL MMS
induces some paging.
CS paging in the GPU / GP should be considered in the case where the Gs interface is present (network
operation mode I). In a first stage we consider the GPU / GP capacity without CS paging.

19 Case of class-B mobiles.

Note A pure RA Update profile is added to the available profiles described in the next section (see
5.4). It corresponds to MS that are GPRS attached but carry no PS data traffic.

5.4 Mix of profiles


A GPRS user will be using one or several of the services defined above.
A sophisticated model of subscribers could be defined, where each subscriber type (mass market, business,
etc.) uses a mix of services. This approach is not considered in the scope of the Alcatel BSS for B10, as we
lack the input data to do it. Such an approach would in any case result in a final mix of applications, so it
is only an intermediate step to estimate this mix. In B10, we will only use a mix of profiles, where each
profile corresponds to an application.

5.5 User behavior at the busy hour


As we have not yet gathered any data concerning customer activity, we will consider that we have
one session per user per busy hour. This is likely to over-estimate the traffic intensity per user, and
consequently under-estimate the number of attached GPRS users per GPU / GP. However the number of
users per GPU / GP is not the most significant figure. We are much more interested in the throughput that
the GPU / GP is able to transmit with a given traffic mix.

5.6 Parameter values for all profiles and mix

5.6.1 Computations
In the next sections, many values are computed from a reduced set of input parameters. The
computations are derived from the knowledge of the profiles presented in Section 5.3. All computations
are detailed in Section 9.3.

5.6.2 Session description for all profiles


Session description (application level) WEB WAP MMS-D MMS-U UDP-D
PS paging 0 0 1 0 0
Number of transactions per session 5 7 1 1 1
Average transaction size (kbytes) 50 1 50 30 300
Average packet size for user data (bytes) 1000 1000 1500 1500 1500
Average packet size for control message (bytes) 40 40 40 40 40
User data load / session DL (kbytes) 250 7 50 0 300
User data load / session UL (kbytes) 0 0 0 30 0

Table 41: Session description for all data profiles

Note In the above table the data load per session includes the protocol overhead due to Internet
protocols (TCP/IP, UDP/IP) and application protocols. The way such overhead is taken or not taken
into account has to be checked, if comparisons are needed with other traffic models provided by
customers.

5.6.3 Transaction description for all profiles


From the information gathered in Section 5.3, we can propose some values for the parameters, which
describe the transaction at GPU / GP level. The GMM signalling is described as one transaction although it
is split between the beginning and the end of the session. Routing area update is incorporated with the
user behavior at the busy hour (see Section 5.2.4).

We have considered that the downlink TBF is maintained during the whole duration of the transaction,
thanks to the delayed downlink TBF release feature, and neglecting TBF establishments caused by cell
reselection.
All transactions are mobile originated. Hence the first TBF is always an uplink TBF establishment on
CCCH, and then there is a downlink TBF establishment on PACCH. Moreover, we assume that 50% of MS
support the extended UL mode feature, which allows the reduction of UL TBF establishments on PACCH
for acknowledgement purposes.
The number of LLC frames per transaction for a given data load depends on the value of the N201
parameter in the mobile. We will consider two main cases:
Most traffic is due to GPRS mobiles: in this case the most frequently encountered value is N201 = 576.
Most traffic is due to EDGE mobiles: in this case the most frequently encountered value is N201 = 80020. In
some cases we will also consider N201 = 1500 (new generation of laptops with an incorporated radio
transmission device).

Transaction description (GPU / GP level) WEB WAP MMS-D MMS-U UDP-D RA Upt GMM
N201 parameter 576
Number of PDU per packet 2 2 3 3 3
Mean DL LLC PDU size (bytes) 460 510 510 50 510 50 50
Mean UL LLC PDU size (bytes) 50 50 50 510 50 50 50
Number of DL LLC PDU 109 2 98 20 588 1 2
Number of UL LLC PDU 50 2 33 59 11 2 2
Number of DL TBF on CCCH 0 0 0 0 0 0 0
Number of UL TBF on CCCH 1 1 2 1 1 1 0
Number of DL TBF on PACCH 1 1 2 1 1 1 0
Number of UL TBF on PACCH 9 1 7 1 11 1 2
DL kbytes per transaction 50.0 1.0 50.0 1.0 300.0 0.05 0.1
UL kbytes per transaction 2.5 0.1 1.7 30.0 0.5 0.1 0.1

Table 42: Transaction description at GPU / GP level with GPRS mobiles (MS always attached)
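
The DL LLC PDU counts of Table 42 are consistent with dividing the data load per transaction by the mean DL LLC PDU size (LLC header included); a quick check (illustrative):

```python
def n_dl_llc(data_kbytes: float, mean_pdu_bytes: float) -> int:
    """Number of DL LLC PDUs per transaction: data load divided by mean PDU size (header included)."""
    return round(data_kbytes * 1000 / mean_pdu_bytes)

print(n_dl_llc(50, 460))   # WEB:   109
print(n_dl_llc(50, 510))   # MMS-D:  98
print(n_dl_llc(300, 510))  # UDP-D: 588
```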
Transaction description (GPU / GP level) WEB WAP MMS-D MMS-U UDP-D
N201 parameter 800
Number of PDU per packet 2 2 2 2 2
Mean DL LLC PDU size (bytes) 601 510 760 50 760
Mean UL LLC PDU size (bytes) 50 50 50 760 50
Number of DL LLC PDU 83 2 66 20 395
Number of UL LLC PDU 50 2 33 39 5
Number of UL TBF on PACCH 9 1 7 1 5

Table 43: Modified transaction parameters with EDGE mobiles

Note The LLC header overhead (10 bytes) is added to each LLC PDU and taken into account in the UL
and DL kbytes per transaction.

Note For UDP-D, the number of UL TBF on PACCH depends on the TBF duration. To estimate the
duration of a TBF, we make the following assumptions:
3 TS allocated in DL,
2.3 kbyte/s per PDCH for GPRS,
7.1 kbyte/s per PDCH for EGPRS.

20 Corresponds to the size encountered up to now for test mobiles used by Alcatel (Philips: N201 = 760, Nokia: N201 = 806).

Note For the GMM profile, downlink TBF establishments (on CCCH and on PACCH) are not counted here
because they are counted with the transaction, which starts immediately after the signalling
phase.

5.6.4 User behaviors at busy hour


Assuming one session per user at the busy hour, we can estimate the user behavior at the busy hour for
the parameter values needed at GPU / GP level. The GMM signalling profile has been incorporated,
counting one GMM signalling transaction per session. When the MS is always attached, the periodic routing
area update is also added (1.11 per user per BH, i.e. one RAU every 54 minutes).

The number of LLC PDUs for the main data flow depends on the value of the N201 parameter, which is
assumed to be different for EDGE and GPRS (as shown in Section 5.6.3). Modified values for EDGE are
provided in a separate table.
User behavior at BH per profile WEB WAP MMS-D MMS-U UDP-D RA Upt
Number of sessions per user at BH 1.0 1.0 1.0 1.0 1.0 1.11
Number of transactions per user at BH 5 7 1 1 1 1
PS paging per user at BH 0 0 1 0 0 0
Number of DL LLC PDU at BH 547 17 101 23 591 1.11
Number of UL LLC PDU at BH 254 18 38 63 15 2.22
Number of DL TBF on CCCH at BH 0 0 0 0 0 0
Number of UL TBF on CCCH at BH 6 8 3 2 2 1.11
Number of DL TBF on PACCH at BH 6 8 3 2 2 1.11
Number of UL TBF on PACCH 50 10 10 4 14 1.11
Number of Radio Resource Reallocation (see note) 1 1 1 1 1 0
Average DL kbytes at BH 250.2 7.2 50.2 1.2 300.2 0.06
Average UL kbytes at BH 12.7 0.9 1.9 30.2 0.7 0.11
Average DL PDU size (bytes) 457.7 425.0 495.8 50.0 507.6 50.0
Average UL PDU size (bytes) 50.0 50.0 50.0 479.2 50.0 50.0

Table 44: User behavior at busy hour per profile (GPRS case)
User behavior at BH per profile WEB WAP MMS-D MMS-U UDP-D
Number of DL LLC PDU at BH 419 17 69 23 398
Number of UL LLC PDU at BH 254 18 38 44 9
Number of DL TBF on PACCH at BH 6 8 3 2 2
Number of UL TBF on PACCH at BH 50 10 10 4 8
Average DL PDU size (bytes) 597.3 425.0 727.9 50.0 754.4
Average UL PDU size (bytes) 50.0 50.0 50.0 691.4 50.0

Table 45: Modified parameters for user behavior for EDGE mobiles

Note (About Radio Resource Reallocation for each profile) The minimum number of resource re-
allocations is set to one per session. Depending on the congestion level in the cells and on the
penetration of EDGE MS, more re-allocations can occur (respectively T3 and T4). However this is
difficult to quantify and is not taken into account at this stage.

5.6.5 Mix of profiles: definition of an average user


The profile mix definition for B10 is the same as the one of the previous releases and is based on:
A still high level of WAP users with low data load per transaction,
An important rise of MMS,


A still low usage of WEB browsing because it requires expensive equipment (laptop with GPRS
device),
A still low usage of streaming-type services (UDP-DL).


For the moment, the pure RA Update profile is not considered in the mix.

Profile WEB WAP MMS-D MMS-U UDP-D RA Upt


Percentage of subscribers for each profile 5% 40% 25% 25% 5% 0%

Table 46: Alcatel mix of profile for BSS

Note This mix will be updated if more information is found.


With the distribution of profiles, we can define an average PS user, corresponding to the average of the
profiles from Table 44 (GPRS mobiles) and Table 45 (EGPRS mobiles), weighted by the distribution
coefficients from Table 46. The resulting average user, with GMM signalling and RA updates incorporated,
is indicated in the following table.
User behavior at BH for a mix of profile Average user for GPRS Modified values for EDGE
Number of transactions per user at BH 4
PS paging per user at BH 0.25
Number of DL LLC PDU at BH 95 71
Number of UL LLC PDU at BH 46 41
Number of DL TBF on (P)CCCH at BH 0
Number of UL TBF on (P)CCCH at BH 5
Number of DL TBF on PACCH at BH 5
Number of UL TBF on PACCH 11 10
Number of Radio Resource Reallocation 1
Average DL kbytes at BH 43.2
Average UL kbytes at BH 9.1
Average DL PDU size (bytes) 456.3 612.2
Average UL PDU size (bytes) 197.4 221.9

Table 47: Average user behavior at Busy Hour
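
Each figure of Table 47 is the average of the corresponding per-profile figures (Table 44, or Table 45 for the EDGE columns), weighted by the subscriber shares of Table 46. A minimal sketch of the weighting, for one row:

```python
# Subscriber shares from Table 46 and "Number of DL LLC PDU at BH" per profile from Table 44
SHARES = {"WEB": 0.05, "WAP": 0.40, "MMS-D": 0.25, "MMS-U": 0.25, "UDP-D": 0.05}
DL_LLC = {"WEB": 547,  "WAP": 17,   "MMS-D": 101,  "MMS-U": 23,   "UDP-D": 591}

avg_dl_llc = sum(SHARES[p] * DL_LLC[p] for p in SHARES)
print(round(avg_dl_llc))  # ~95 DL LLC PDUs per average user at BH, as in Table 47
```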


6 APPLICATION OF PS TRAFFIC MODEL TO THE BSS

6.1 GPU / GP and BSC role


With the Alcatel architecture, the operator can add up to 6 GPU / GP per BSS to handle GPRS traffic.
For the BSC, the capacity for GPRS depends only on the number of PDCHs and corresponding Ater channels,
which are used for PS services. This type of information can be found in document [13]. The signalling
load induced by PS traffic on the BSC is lower than the signalling load for the CS service, which is removed
when a radio time slot is used for PS traffic (PDCH) instead of CS traffic (TCH). So there is no limitation
for PS traffic due to signalling load in the BSC.
The main issue here is therefore to know the capacity of the GPU / GP, which is the object of this section.

6.2 GPU / GP dimensioning data


The GPU / GP capacity for a given traffic profile will depend on its ability to:
Handle the Radio Resource management signalling load (mainly TBF establishment) and Gb protocol
for LLC data, which are handled by the PPC processor.
Transmit and receive a given throughput by RLC/MAC layers, which are handled by the DSP
processors.
Information on the GPU / GP architecture can be found in Section 2.

6.2.1 DSP role

6.2.1.1 B10 Legacy

GPU2 case:
In the following, we will consider the following figures for the maximum GPU throughput per service (see
Section 4.4.2.1):
GPRS: 3.1 Mbit/s for both directions,
EGPRS: 4 Mbit/s for both directions.

GPU3 case:
The same number of PDCHs is declared for GPU3 as for GPU2, even if the DSP could handle more PDCHs
(gain of 40%-50%). This allows the same radio resource management for both GPUs and lowers the capacity
unbalance between DSP and PPC for GPU3, as shown further. Hence the maximum throughput determined at
DSP level is the same for GPU3 as for GPU2.

6.2.1.2 B10 Mx
Compared to B10 legacy, the number of PDCH that can be handled is multiplied by a factor of 4. In the
following, we will consider the following figures for the maximum GP throughput per service (see Section
4.4.3.1):
GPRS: 12.2 Mbit/s for both directions,


EGPRS:
14.3 Mbit/s for both directions for Gb over Frame Relay (B10 MR1),

15.8 Mbits/s for both directions for Gb over IP (B10 MR2).

6.2.2 PPC role

6.2.2.1 B10 Legacy (GPU2 and GPU3)


The GPU capacity can be limited either by the maximum GPU throughput defined in Section 6.2.1.1, or by
the CPU load consumed by radio resource management and the Gb protocol in the PPC. The objective of the
following sections is to determine the PPC capacity for each traffic profile and for the Alcatel-Lucent
mix of profiles.
Knowing the cost in ms of CPU for each basic procedure (TBF establishment and release, DL and UL data
transfer), and the occurrence of each procedure per user, we can estimate the cost in ms of CPU per user
and for N users at the busy hour. The average CPU load for N users at the busy hour is expressed in ms/s
and is converted into a percentage (10 ms/s of CPU = 1% CPU load).
For a targeted average CPU load, we can then determine the average number of GPU users (using the Excel
goal-seek built-in tool). The CPU cost for each procedure, measured on a GPU (case L2 cache activated), is
presented in Table 16 for B10 MR1 (GboFR), Table 17 for B10 MR2 (GboFR) and Table 18 for B10 MR2 (GboIP)
(see Section 4.3.2.4). In the following, the CPU margin defined in Table 14 (see Section 4.3.2.3) is added to
the CPU used by the other procedures, whose rates depend on the traffic model considered.
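
A minimal sketch of this method (Python, with purely illustrative per-procedure costs and occurrence counts, not the measured values of Tables 16 to 18, and ignoring the CPU margin): the per-user CPU cost at the busy hour is converted into ms/s and then into a load percentage, and the relation is inverted to obtain the number of users at a target load.

```python
# Illustrative figures only -- the real per-procedure CPU costs are those of Tables 16 to 18.
COST_MS = {"tbf_establishment": 1.8, "dl_llc": 0.15, "ul_llc": 0.05}   # CPU cost per procedure (ms)
PER_USER_BH = {"tbf_establishment": 16, "dl_llc": 95, "ul_llc": 46}    # occurrences per user at BH

cost_per_user_ms = sum(COST_MS[p] * PER_USER_BH[p] for p in COST_MS)   # CPU ms per user per busy hour

def cpu_load_percent(n_users: int) -> float:
    """Average PPC load: CPU ms per second divided by 10 (10 ms/s = 1% CPU load)."""
    return n_users * cost_per_user_ms / 3600.0 / 10.0

def users_at_load(target_percent: float) -> float:
    """Number of users bringing the PPC to the target average load (inverse of cpu_load_percent)."""
    return target_percent * 10.0 * 3600.0 / cost_per_user_ms

print(users_at_load(75.0))  # users at 75% nominal load, with these illustrative costs only
```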

6.2.2.2 B10 Mx
As for the GPU in B10 legacy, the GP capacity can be limited either by the maximum GP throughput defined in
Section 6.2.1.2, or by the CPU load consumed by radio resource management and the Gb protocol in the PPC.
The objective of the following sections is to determine the PPC capacity for each traffic profile and for
the Alcatel mix.
The method is described in Section 6.2.2.1 and uses the cost of CPU for each basic procedure as defined
in Table 15 (CPU margin), Table 16 for B10 MR1 (GboFR) and Table 18 for B10 MR2 (GboIP) for the GP case.

6.2.3 Relevant outputs


We have seen that with a given traffic model, we can determine an average number of users per GPU / GP
at the busy hour. However this number of users is not very significant because:
It corresponds to users who are not active all the time, so we cannot check whether we are in line with
the prediction on a test platform,
The real activity of a user at the busy hour will vary a lot from one operator to another, so we
cannot give this number to our customers.
So we define some other figures, which are easier to check and more relevant for network dimensioning,
as follows:
Transactions per second at target load: this corresponds to the number of UL TBF establishment on
CCCH (one per transaction),
TBF establishment and release per second at target load (for all channel types),
Total LLC-PDU per second (uplink and downlink) at target load,
DL data load: this corresponds to the downlink data load transmitted through the GPU / GP including
protocol headers for Gb and higher layers (TCP/IP, application, etc.). It is also compared with the


maximum GPU / GP throughput depending on DSP capacity, to see where the bottleneck of the GPU
/ GP is.

UL Data load: the same is provided for uplink data load as for downlink data.
It is also interesting to know the number of mobiles simultaneously connected per GPU / GP. This
figure however depends highly on the throughput allocated to each MS and, to a lesser extent, on
the value of T_network_response_time (for delayed downlink TBF release), and should not be
considered as a load test passing criterion. We are interested in two extreme cases:
Maximum number of MS in the worst case (congestion, long delayed TBF release timer), to see if
it is compatible with the maximum number of TBF contexts per GPU / GP.
Effect of high-speed data service introduction, so with maximum throughput per MS.
GSL signalling load: the GSL signalling load can be computed for all profiles, with the GSL link load
model defined in Section 7. We take the following hypothesis:
All TBF establishments on CCCH and radio resource re-allocation trigger a transmission
allocation procedure,
Half of the TBF establishments request the allocation of GCHs (Abis allocation and GCH
connection),
Suspend-resume procedures are taken into account, but LCS data load is not taken into account,
There is no Gs interface, i.e. CS paging is not taken into account.
Moreover, it is always interesting to be able to compare our traffic model (average user) and its
theoretical performance with what can be observed in the field or on a test platform. Therefore, the
evolution of three parameters as a function of the PPC CPU load is provided. These three parameters are
the following:
Number of TBF establishments per hour in DL and UL,
Number of LLC PDUs transmitted per hour in DL and UL,
DL PPC throughput expressed in kbit/s.

6.3 PPC capacity with all profiles and traffic mix for B10 legacy
Note In the different tables below, the number of subscribers per GPU / GP is determined for the
following cases. The user corresponds either to a homogeneous profile (WEB or WAP or MMS or
UDP) or to an average user. The average user corresponds to the mix of profiles defined in
Section 5.6.5. The numbers of users in the different columns must not be added; each is the
maximum number corresponding to its hypothesis.

6.3.1 PPC performance for GPU2


The PPC capacity is indicated at 75% CPU load (nominal load), and with a CPU margin of 9.25%, as
explained in Section 6.2.2.1.
The way the PPC load is computed is detailed in Section 9.4. The number of GPU users is adjusted to
reach 75% CPU Load.
We distinguish the case of 100% GPRS MS and 100% EDGE MS. The capacity with a proportion of both can
be obtained as a weighted average of both tables below.
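
A sketch of that interpolation (illustrative): for an EDGE penetration p, a capacity figure is read as (1 - p) times the value of the 100% GPRS table plus p times the value of the 100% EDGE table.

```python
def mixed_capacity(gprs_value: float, edge_value: float, edge_penetration: float) -> float:
    """Linear mix of the 100% GPRS and 100% EDGE capacity figures."""
    return (1 - edge_penetration) * gprs_value + edge_penetration * edge_value

# e.g. subscribers per GPU for the average user (B10 MR1 GboFR) with 30% EDGE MS, from Tables 48 and 51
print(mixed_capacity(9268, 8837, 0.30))  # ~9 139 subscribers
```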

Note Compared to B9, B10 performances have decreased because we now assume that the feature
Extended UL TBF mode is not supported by all MS. The B9 assumption of considering all MS as
supporting this feature was far too optimistic. The new assumption for B10 (50% of MS supporting
this feature) is more realistic, even if still optimistic.

6.3.1.1 GPRS case

For most GPRS-only MS, the N201 parameter value is limited to 576 or 500 bytes. We assume N201 set to
576 bytes in this section.
Table 48, Table 49 and Table 50 present the results for B10 MR1 (GboFR) and B10 MR2 (GboFR
and GboIP) respectively.
Capacity                             WEB     WAP     MMS DL and UL   UDP-DL   RA Update   Average user
Number of subscribers per GPU 2 629 9 155 14 441 4 734 81 215 9 268
TBF establishments / s at target load 45.3 67.0 48.6 23.5 75.2 53.0
Total LLC / s (UL and DL) at target load 585 89 451 797 75 362
Downlink data load
DL LLC / s 399.1 42.8 249.2 777.6 25.1 243.8
Average DL LLC PDU size (bytes) 457.7 425.0 412.9 507.6 50.0 456.3
DL throughput (kbit/s) 1 461.2 145.6 823.3 3 157.5 10.0 889.8
Percentage of max DL GPU throughput 47.1% 4.7% 26.6% 101.9% 0.3% 28.7%
Uplink data load
UL LLC / s 185.6 46.3 201.8 19.5 50.1 118.1
Average UL LLC PDU size (bytes) 50.0 50.0 319.0 50.0 50.0 197.4
UL throughput (kbit/s) 74.2 18.5 514.9 7.8 20.1 186.6
Percentage of max UL GPU throughput 2.4% 0.6% 16.6% 0.3% 0.6% 6.0%
Memory needed
Number of simultaneously connected MS 50 116 75 68 0 141
GSL link for one GSL at 64 kbit/s
MFS -> BSC 59.0% 88.5% 71.2% 56.0% 96.6% 74.6%
BSC -> MFS 50.4% 73.3% 59.1% 48.0% 79.6% 62.2%

Table 48: PPC capacity at 75% CPU load with GPU2 for GPRS (B10 MR1 GboFR)
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 2 323 7 907 12 677 4 319 69 433 8 121
TBF establishments / s at target load 40.0 57.8 42.6 21.5 64.3 46.4
Total LLC / s (UL and DL) at target load 517 77 396 727 64 317
Downlink data load
DL LLC / s 352.7 37.0 218.8 709.4 21.4 213.6
Average DL LLC PDU size (bytes) 457.7 425.0 412.9 507.6 50.0 456.3
DL throughput (kbit/s) 1 291.3 125.7 722.7 2 880.8 8.6 779.7
Percentage of max DL GPU throughput 41.7% 4.1% 23.3% 92.9% 0.3% 25.2%
Uplink data load
UL LLC / s 164.0 40.0 177.1 17.7 42.9 103.5
Average UL LLC PDU size (bytes) 50.0 50.0 319.0 50.0 50.0 197.4
UL throughput (kbit/s) 65.6 16.0 452.0 7.1 17.1 163.5
Percentage of max UL GPU throughput 2.1% 0.5% 14.6% 0.2% 0.6% 5.3%
Memory needed
Number of simultaneously connected MS 69 153 111 78 0 196
GSL link for one GSL at 64 kbit/s
MFS -> BSC 58.1% 83.4% 68.7% 55.5% 90.0% 71.6%
BSC -> MFS 49.7% 69.3% 57.3% 47.7% 74.5% 60.0%

Table 49: PPC capacity at 75% CPU load with GPU2 for GPRS (B10 MR2 GboFR)

Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 2 213 7 757 12 226 3 990 68 160 7 838
TBF establishments / s at target load 38.1 56.7 41.1 19.8 63.1 44.8
Total LLC / s (UL and DL) at target load 492 76 382 672 63 306
Downlink data load
DL LLC / s 336.0 36.3 211.0 655.4 21.0 206.2
Average DL LLC PDU size (bytes) 457.7 425.0 412.9 507.6 50.0 456.3
DL throughput (kbit/s) 1 230.1 123.3 697.0 2 661.4 8.4 752.5
Percentage of max DL GPU throughput 39.7% 4.0% 22.5% 85.9% 0.3% 24.3%
Uplink data load
UL LLC / s 156.3 39.3 170.8 16.4 42.1 99.9
Average UL LLC PDU size (bytes) 50.0 50.0 319.0 50.0 50.0 197.4
UL throughput (kbit/s) 62.5 15.7 435.9 6.6 16.8 157.8
Percentage of max UL GPU throughput 2.0% 0.5% 14.1% 0.2% 0.5% 5.1%
Memory needed
Number of simultaneously connected MS 65 151 107 72 0 189
GSL link for one GSL at 64 kbit/s
MFS -> BSC 57.7% 82.8% 68.1% 55.2% 89.3% 70.9%
BSC -> MFS 49.4% 68.9% 56.8% 47.4% 73.9% 59.4%

Table 50: PPC capacity at 75% CPU load with GPU2 for GPRS (B10 MR2 GboIP)

6.3.1.2 EDGE case


Up to now, the N201 parameter value observed in EDGE-capable MS is around 800 bytes. This value is used
to estimate the PPC capacity with 100% EDGE MS.

Table 51, Table 52 and Table 53 present the results for B10 MR1 (GboFR), B10 MR2 (GboFR) and B10 MR2
(GboIP), respectively.
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 2 742 7 785 13 900 6 611 76 368 8 837
TBF establishments / s at target load 47.2 56.9 46.8 22.3 70.7 49.8
Total LLC / s (UL and DL) at target load 513 76 334 747 71 273
Downlink data load
DL LLC / s 318.9 36.4 177.6 730.6 23.6 173.2
Average DL LLC PDU size (bytes) 597.3 425.0 557.7 754.4 50.0 612.2
DL throughput (kbit/s) 1 524.1 123.8 792.4 4 409.6 9.4 848.5
Percentage of max DL GPU throughput 38.1% 3.1% 19.8% 110.2% 0.2% 21.2%
Uplink data load
UL LLC / s 193.6 39.4 156.9 16.6 47.1 100.1
Average UL LLC PDU size (bytes) 50.0 50.0 394.9 50.0 50.0 221.9
UL throughput (kbit/s) 77.4 15.8 495.6 6.6 18.9 177.6
Percentage of max UL GPU throughput 1.9% 0.4% 12.4% 0.2% 0.5% 4.4%
Memory needed
Number of simultaneously connected MS 52 149 74 55 0 139
GSL link for one GSL at 64 kbit/s
MFS -> BSC 61.7% 91.8% 75.5% 59.9% 105.8% 79.6%
BSC -> MFS 53.5% 79.8% 64.8% 52.0% 92.1% 68.9%

Table 51: PPC capacity at 75% CPU load with GPU2 for EGPRS (B10 MR1 GboFR)
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 2 421 6 863 12 313 6 054 65 859 7 825

TBF establishments / s at target load 41.7 50.2 41.4 20.4 61.0 44.1
Total LLC / s (UL and DL) at target load 453 67 296 684 61 242
Downlink data load


DL LLC / s 281.7 32.1 157.4 669.1 20.3 153.4
Average DL LLC PDU size (bytes) 597.3 425.0 557.7 754.4 50.0 612.2
DL throughput (kbit/s) 1 346.1 109.1 702.0 4 038.4 8.1 751.3
Percentage of max DL GPU throughput 33.7% 2.7% 17.5% 101.0% 0.2% 18.8%
Uplink data load
UL LLC / s 171.0 34.7 139.0 15.2 40.7 88.6
Average UL LLC PDU size (bytes) 50.0 50.0 394.9 50.0 50.0 221.9
UL throughput (kbit/s) 68.4 13.9 439.0 6.1 16.3 157.3
Percentage of max UL GPU throughput 1.7% 0.3% 11.0% 0.2% 0.4% 3.9%
Memory needed
Number of simultaneously connected MS 46 131 66 51 0 123
GSL link for one GSL at 64 kbit/s
MFS -> BSC 60.5% 86.9% 72.7% 59.2% 98.3% 76.3%
BSC -> MFS 52.4% 75.6% 62.4% 51.3% 85.5% 66.1%

Table 52: PPC capacity at 75% CPU load with GPU2 for EGPRS (B10 MR2 GboFR)
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 2 322 6 750 11 960 5 617 64 713 7 605
TBF establishments / s at target load 40.0 49.4 40.2 19.0 59.9 42.9
Total LLC / s (UL and DL) at target load 434 66 288 635 60 235
Downlink data load
DL LLC / s 270.2 31.6 152.8 620.7 20.0 149.1
Average DL LLC PDU size (bytes) 597.3 425.0 557.7 754.4 50.0 612.2
DL throughput (kbit/s) 1 291.1 107.3 681.9 3 746.6 8.0 730.2
Percentage of max DL GPU throughput 32.3% 2.7% 17.0% 93.7% 0.2% 18.3%
Uplink data load
UL LLC / s 164.0 34.2 135.0 14.1 39.9 86.1
Average UL LLC PDU size (bytes) 50.0 50.0 394.9 50.0 50.0 221.9
UL throughput (kbit/s) 65.6 13.7 426.4 5.6 16.0 152.9
Percentage of max UL GPU throughput 1.6% 0.3% 10.7% 0.1% 0.4% 3.8%
Memory needed
Number of simultaneously connected MS 44 129 64 47 0 120
GSL link for one GSL at 64 kbit/s
MFS -> BSC 60.1% 86.3% 72.1% 58.6% 97.4% 75.6%
BSC -> MFS 52.1% 75.0% 61.9% 50.8% 84.8% 65.5%

Table 53: PPC capacity at 75% CPU load with GPU2 for EGPRS (B10 MR2 GboIP)

6.3.1.3 Maximum throughput


The high cost of TBF establishment in the PPC and the low values of the N201 parameter in most MS (B8
assumptions) are two important factors limiting the GPU2 throughput. With nearly all profiles, the GPU
throughput is limited by the PPC bottleneck to a value lower than the maximum reachable at DSP level.
The PPC remains the bottleneck for the majority of traffic types.
If we consider a higher N201 value (1500 bytes instead of 800 in the EGPRS case), the maximum throughput
is also improved: the PPC throughput at 75% CPU load for WEB traffic reaches 1.7 Mbit/s in this case,
i.e. 45% of the maximum GPU DL throughput with EGPRS (instead of 38%). The improvement will be
significant for WEB traffic when the number of MS supporting extended UL TBF mode increases, as the
CPU processing cost of UL TBF establishment will then be less and less limiting.

6.3.1.4 Maximum number of established MS
In the different tables above, the maximum number of simultaneously connected mobiles is obtained
with:
3 PDCH allocated per MS in DL,
An average throughput per PDCH of 2.3 kbytes/s for GPRS (equivalent to 18.4 kbit/s) and 7.1
kbytes/s for EDGE (equivalent to 56.8 kbit/s),
T_network_response_time = 2 s (for the delayed DL TBF release feature).
The number of MS established at the same time stays well below the maximum value allowed by the
PMU memory (1000 MS contexts).

Note The number of MS contexts is no longer an issue, as MS context pre-emption (inter-cell and
intra-cell) has been introduced in B9. This means that idle MS contexts can be reused by all the
cells handled by one GPU, if needed.

6.3.1.5 GSL Load


Due to the increased expected capacity at PPC level, a configuration of 2 GSLs per GPU is recommended,
even though the results show that 1 GSL link is sufficient.

6.3.1.6 TBF establishment, LLC PDU transmission and PPC Throughput


In this section, the number of TBFs established in DL and UL at busy hour, the number of DL and UL LLC
PDUs transferred at busy hour, and the DL PPC throughput at busy hour are shown as a function of the CPU
load for the Average User traffic model. Two cases are presented, one for GPRS traffic and one for
EGPRS traffic. For each case, there is a comparison between B10 MR1 (GboFR), B10 MR2 (GboFR) and B10
MR2 (GboIP).
The results are presented in Figure 10 and Figure 11 for TBF establishments, in Figure 12 and Figure 13 for
LLC PDUs transmitted and in Figure 14 and Figure 15 for DL PPC throughput (in kbit/s).
The impact of the different features (EDA, Gb over IP) introduced in B10 MR2 on the PPC CPU load
corresponds for the Average User traffic model to an increase of:
Around 12% between MR1 and MR2 with GboFR (due to all MR2 features except GboIP),
Around 4% in MR2 with the use of GboIP instead of GboFR.

Note The overhead of 9.25% is taken into account in the evaluation of the different parameters. This
explains why, at 10% CPU load, the computed values are close to zero.

[Chart: Number of TBF Requests per hour versus PPC CPU Load (GPRS only); curves MR1, MR2-FR, MR2-IP; CPU load 10% to 90%; y-axis 0 to 250 000 TBF requests]
Figure 10: Number of TBF established in DL and UL per hour for GPU2 (GPRS case)

[Chart: Number of TBF Requests per hour versus PPC CPU Load (EGPRS only); curves MR1, MR2-FR, MR2-IP; CPU load 10% to 90%; y-axis 0 to 250 000 TBF requests]

Figure 11: Number of TBF established in DL and UL per hour for GPU2 (EGPRS case)

[Chart: Number of LLC PDUs per hour versus PPC CPU Load (GPRS only); curves MR1, MR2-FR, MR2-IP; CPU load 10% to 90%; y-axis 0 to 1 800 000 LLC PDUs transferred]

Figure 12: Number of LLC PDU transmitted in DL and UL per hour for GPU2 (GPRS case)

[Chart: Number of LLC PDUs per hour versus PPC CPU Load (EGPRS only); curves MR1, MR2-FR, MR2-IP; CPU load 10% to 90%; y-axis 0 to 1 400 000 LLC PDUs transferred]

Figure 13: Number of LLC PDU transmitted in DL and UL per hour for GPU2 (EGPRS case)

[Chart: DL PPC throughput in kbit/s versus PPC CPU Load (GPRS only); curves MR1, MR2-FR, MR2-IP; CPU load 10% to 90%; y-axis 0 to 1200 kbit/s]

Figure 14: DL PPC Throughput in kbit/s for GPU2 (GPRS case)

[Chart: DL PPC throughput in kbit/s versus PPC CPU Load (EGPRS only); curves MR1, MR2-FR, MR2-IP; CPU load 10% to 90%; y-axis 0 to 1200 kbit/s]

Figure 15: DL PPC Throughput in kbit/s for GPU2 (EGPRS case)

Note There are fewer TBFs established and fewer LLC PDUs transferred in the EGPRS case than in the
GPRS case. This is due to the larger PDU size used for EGPRS.

6.3.2 PPC performance for GPU3
The PPC capacity is indicated at 75% CPU load (nominal load), and with a CPU margin of 6.47%, as
explained in Section 6.2.2.1.
The way the PPC load is computed is detailed in Section 9.4. The number of GPU users is adjusted to
reach 75% CPU Load.
We distinguish the cases of 100% GPRS MS and 100% EDGE MS. The capacity with a proportion of both can
be obtained as a weighted average of the two tables below.

Note Compared to B9, B10 performance has decreased because we now assume that the feature
Extended UL TBF mode is not supported by all MS. The B9 assumption that all MS support this
feature was far too optimistic. The new B10 assumption (50% of MS supporting this feature)
is more realistic, even if still optimistic.

6.3.2.1 GPRS case


For most GPRS-only MS, the N201 parameter value is limited to 576 or 500 bytes. We assume N201 set to
576 bytes in this section.

Table 54, Table 55 and Table 56 present the results for B10 MR1 (GboFR), B10 MR2 (GboFR) and B10 MR2
(GboIP), respectively.
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 4 277 15 678 23 850 7 353 140 836 15 392
TBF establishments / s at target load 73.7 114.7 80.2 36.6 130.4 88.0
Total LLC / s (UL and DL) at target load 951 153 745 1 238 130 601
Downlink data load
DL LLC / s 649.4 73.3 411.6 1 207.8 43.5 404.9
Average DL LLC PDU size (bytes) 457.7 425.0 412.9 507.6 50.0 456.3
DL throughput (kbit/s) 2 377.6 249.3 1 359.7 4 904.3 17.4 1 477.8
Percentage of max DL GPU throughput 76.7% 8.0% 43.9% 158.2% 0.6% 47.7%
Uplink data load
UL LLC / s 302.0 79.4 333.2 30.2 86.9 196.2
Average UL LLC PDU size (bytes) 50.0 50.0 319.0 50.0 50.0 197.4
UL throughput (kbit/s) 120.8 31.7 850.4 12.1 34.8 309.9
Percentage of max UL GPU throughput 3.9% 1.0% 27.4% 0.4% 1.1% 10.0%
Memory needed
Number of simultaneously connected MS 126 304 209 132 0 371
GSL link for one GSL at 64 kbit/s
MFS -> BSC 64.1% 115.3% 84.4% 58.8% 130.2% 90.2%
BSC -> MFS 54.4% 94.2% 68.9% 50.2% 105.7% 74.3%

Table 54: PPC capacity at 75% CPU load with GPU3 for GPRS (B10 MR1 GboFR)
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 3 900 14 189 21 703 6 820 126 474 14 010
TBF establishments / s at target load 67.2 103.8 73.0 33.9 117.1 80.1
Total LLC / s (UL and DL) at target load 868 138 678 1 148 117 547
Downlink data load
DL LLC / s 592.1 66.4 374.6 1 120.3 39.0 368.5
Average DL LLC PDU size (bytes) 457.7 425.0 412.9 507.6 50.0 456.3
DL throughput (kbit/s) 2 168.0 225.6 1 237.4 4 549.1 15.6 1 345.1
Percentage of max DL GPU throughput 69.9% 7.3% 39.9% 146.7% 0.5% 43.4%

Uplink data load
UL LLC / s 275.4 71.8 303.2 28.0 78.1 178.6


Average UL LLC PDU size (bytes) 50.0 50.0 319.0 50.0 50.0 197.4
UL throughput (kbit/s) 110.2 28.7 773.8 11.2 31.2 282.0
Percentage of max UL GPU throughput 3.6% 0.9% 25.0% 0.4% 1.0% 9.1%
Memory needed
Number of simultaneously connected MS 115 275 190 123 0 338
GSL link for one GSL at 64 kbit/s
MFS -> BSC 63.0% 109.2% 81.4% 58.2% 122.1% 86.7%
BSC -> MFS 53.5% 89.4% 66.7% 49.7% 99.4% 71.5%

Table 55: PPC capacity at 75% CPU load with GPU3 for GPRS (B10 MR2 GboFR)
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 3 302 12 592 18 754 5 569 111 691 12 114
TBF establishments / s at target load 56.9 92.1 63.1 27.7 103.4 69.3
Total LLC / s (UL and DL) at target load 735 123 586 938 103 473
Downlink data load
DL LLC / s 501.4 58.9 323.7 914.7 34.5 318.7
Average DL LLC PDU size (bytes) 457.7 425.0 412.9 507.6 50.0 456.3
DL throughput (kbit/s) 1 835.8 200.2 1 069.2 3 714.3 13.8 1 163.1
Percentage of max DL GPU throughput 59.2% 6.5% 34.5% 119.8% 0.4% 37.5%
Uplink data load
UL LLC / s 233.2 63.7 262.0 22.9 68.9 154.4
Average UL LLC PDU size (bytes) 50.0 50.0 319.0 50.0 50.0 197.4
UL throughput (kbit/s) 93.3 25.5 668.7 9.2 27.6 243.9
Percentage of max UL GPU throughput 3.0% 0.8% 21.6% 0.3% 0.9% 7.9%
Memory needed
Number of simultaneously connected MS 98 244 165 100 0 292
GSL link for one GSL at 64 kbit/s
MFS -> BSC 61.1% 102.6% 77.2% 56.8% 113.8% 81.8%
BSC -> MFS 52.0% 84.3% 63.6% 48.7% 93.0% 67.8%

Table 56: PPC capacity at 75% CPU load with GPU3 for GPRS (B10 MR2 GboIP)

6.3.2.2 EDGE case


Up to now, the N201 parameter value observed in EDGE-capable MS is around 800 bytes. This value is used
to estimate the PPC capacity with 100% EDGE MS.

Table 57, Table 58 and Table 59 present the results for B10 MR1 (GboFR), B10 MR2 (GboFR) and B10 MR2
(GboIP), respectively.
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 4 482 13 038 22 866 10 258 131 141 14 582
TBF establishments / s at target load 77.2 95.4 76.9 34.6 121.4 82.2
Total LLC / s (UL and DL) at target load 838 127 550 1 159 121 451
Downlink data load
DL LLC / s 521.4 61.0 292.2 1 133.6 40.5 285.9
Average DL LLC PDU size (bytes) 597.3 425.0 557.7 754.4 50.0 612.2
DL throughput (kbit/s) 2 491.6 207.3 1 303.6 6 842.0 16.2 1 400.1
Percentage of max DL GPU throughput 62.3% 5.2% 32.6% 171.1% 0.4% 35.0%
Uplink data load
UL LLC / s 316.5 66.0 258.0 25.8 81.0 165.1

Average UL LLC PDU size (bytes) 50.0 50.0 394.9 50.0 50.0 221.9
UL throughput (kbit/s) 126.6 26.4 815.3 10.3 32.4 293.1


Percentage of max UL GPU throughput 3.17% 0.66% 20.38% 0.26% 0.81% 7.33%
Memory needed
Number of simultaneously connected MS 86 249 122 86 0 230
GSL link for one GSL at 64 kbit/s
MFS -> BSC 68.6% 119.3% 91.4% 64.9% 145.2% 98.3%
BSC -> MFS 59.6% 103.9% 78.2% 56.3% 126.5% 85.1%

Table 57: PPC capacity at 75% CPU load with GPU3 for EGPRS (B10 MR1 GboFR)
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 4 092 11 992 20 997 9 549 118 600 13 400
TBF establishments / s at target load 70.5 87.7 70.6 32.2 109.8 75.6
Total LLC / s (UL and DL) at target load 765 117 505 1 079 110 414
Downlink data load
DL LLC / s 476.0 56.1 268.3 1 055.3 36.6 262.7
Average DL LLC PDU size (bytes) 597.3 425.0 557.7 754.4 50.0 612.2
DL throughput (kbit/s) 2 274.6 190.7 1 197.1 6 369.2 14.6 1 286.6
Percentage of max DL GPU throughput 56.9% 4.8% 29.9% 159.2% 0.4% 32.2%
Uplink data load
UL LLC / s 289.0 60.7 237.0 24.0 73.2 151.7
Average UL LLC PDU size (bytes) 50.0 50.0 394.9 50.0 50.0 221.9
UL throughput (kbit/s) 115.6 24.3 748.6 9.6 29.3 269.3
Percentage of max UL GPU throughput 2.89% 0.61% 18.72% 0.24% 0.73% 6.73%
Memory needed
Number of simultaneously connected MS 78 229 112 80 0 211
GSL link for one GSL at 64 kbit/s
MFS -> BSC 67.1% 113.8% 88.1% 63.9% 136.2% 94.5%
BSC -> MFS 58.2% 99.1% 75.4% 55.5% 118.6% 81.8%

Table 58: PPC capacity at 75% CPU load with GPU3 for EGPRS (B10 MR2 GboFR)
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 3 510 10 831 18 512 7 873 105 505 11 829
TBF establishments / s at target load 60.4 79.2 62.3 26.6 97.7 66.7
Total LLC / s (UL and DL) at target load 656 105 445 890 98 366
Downlink data load
DL LLC / s 408.3 50.7 236.6 870.1 32.6 231.9
Average DL LLC PDU size (bytes) 597.3 425.0 557.7 754.4 50.0 612.2
DL throughput (kbit/s) 1 951.2 172.2 1 055.4 5 251.6 13.0 1 135.7
Percentage of max DL GPU throughput 48.8% 4.3% 26.4% 131.3% 0.3% 28.4%
Uplink data load
UL LLC / s 247.9 54.8 208.9 19.8 65.1 133.9
Average UL LLC PDU size (bytes) 50.0 50.0 394.9 50.0 50.0 221.9
UL throughput (kbit/s) 99.1 21.9 660.0 7.9 26.1 237.8
Percentage of max UL GPU throughput 2.48% 0.55% 16.50% 0.20% 0.65% 5.94%
Memory needed
Number of simultaneously connected MS 67 207 99 66 0 186
GSL link for one GSL at 64 kbit/s
MFS -> BSC 64.8% 107.8% 83.7% 61.7% 126.8% 89.4%
BSC -> MFS 56.2% 93.8% 71.7% 53.5% 110.4% 77.4%

Table 59: PPC capacity at 75% CPU load with GPU3 for EGPRS (B10 MR2 GboIP)

6.3.2.3 Maximum throughput
With GPU3, the PPC bottleneck effect is reduced. This is particularly visible for UDP-DL traffic, and to a
lesser extent for WEB traffic. With WAP traffic, the PPC bottleneck effect remains very strong.

6.3.2.4 GSL Load


The GSL link load increases in the same proportion as the PPC capacity. Hence one GSL link per GPU will
not be sufficient for a GPU3 at PPC nominal load; two GSL links per GPU will be needed for GPU3.

6.3.2.5 TBF establishment, LLC PDU transmission and PPC Throughput


In this section, the number of TBFs established in DL and UL at busy hour, the number of DL and UL LLC
PDUs transferred at busy hour, and the DL PPC throughput at busy hour are shown as a function of the CPU
load for the Average User traffic model. Two cases are presented, one for GPRS traffic and one for
EGPRS traffic. For each case, there is a comparison between B10 MR1 (GboFR), B10 MR2 (GboFR) and B10
MR2 (GboIP).
The results are presented in Figure 16 and Figure 17 for TBF establishments, in Figure 18 and Figure 19 for
LLC PDUs transmitted and in Figure 20 and Figure 21 for DL PPC throughput (in kbit/s).
The impact of the different features (EDA, Gb over IP) introduced in B10 MR2 on the PPC CPU load
corresponds for the Average User traffic model to an increase of:
Around 9% between MR1 and MR2 with GboFR (due to all MR2 features except GboIP),
Around 12% in MR2 with the use of GboIP instead of GboFR.

Note The overhead of 6.47% is taken into account in the evaluation of the different parameters.

[Chart: Number of TBF Requests per hour versus PPC CPU Load (GPRS only); curves MR1, MR2-FR, MR2-IP; CPU load 10% to 90%; y-axis 0 to 450 000 TBF requests]

Figure 16: Number of TBF established in DL and UL per hour for GPU3 (GPRS case)

[Chart: Number of TBF Requests per hour versus PPC CPU Load (EGPRS only); curves MR1, MR2-FR, MR2-IP; CPU load 10% to 90%; y-axis 0 to 400 000 TBF requests]

Figure 17: Number of TBF established in DL and UL per hour for GPU3 (EGPRS case)

[Chart: Number of LLC PDUs per hour versus PPC CPU Load (GPRS only); curves MR1, MR2-FR, MR2-IP; CPU load 10% to 90%; y-axis 0 to 3 000 000 LLC PDUs transferred]

Figure 18: Number of LLC PDU transmitted in DL and UL per hour for GPU3 (GPRS case)

[Chart: Number of LLC PDUs per hour versus PPC CPU Load (EGPRS only); curves MR1, MR2-FR, MR2-IP; CPU load 10% to 90%; y-axis 0 to 2 500 000 LLC PDUs transferred]

Figure 19: Number of LLC PDU transmitted in DL and UL per hour for GPU3 (EGPRS case)

[Chart: DL PPC throughput in kbit/s versus PPC CPU Load (GPRS only); curves MR1, MR2-FR, MR2-IP; CPU load 10% to 90%; y-axis 0 to 2000 kbit/s]

Figure 20: DL PPC Throughput in kbit/s for GPU3 (GPRS case)

[Chart: DL PPC throughput in kbit/s versus PPC CPU Load (EGPRS only); curves MR1, MR2-FR, MR2-IP; CPU load 10% to 90%; y-axis 0 to 1800 kbit/s]

Figure 21: DL PPC Throughput in kbit/s for GPU3 (EGPRS case)

6.4 PPC capacity with all profiles and traffic mix for B10 Mx
The PPC capacity is indicated at 75% CPU load (nominal load), and with a CPU margin of 3.55% for B10
MR1 (GboFR) and of 3.85% for B10 MR2 (GboIP). The way the PPC load is computed is detailed in Section
9.4. The number of GP users is adjusted to reach 75% CPU load. We distinguish the cases of 100% GPRS MS
and 100% EDGE MS. The capacity with a proportion of both can be obtained as a weighted average of the
two tables below.

Note Here also, we assume that only half of the MS population supports the extended UL TBF mode
feature. This explains the important decrease in performance between B9 and B10, since in B9
every MS was assumed to support the feature.

6.4.1 GPRS case


For most GPRS-only MS, the N201 parameter value is limited to 576 or 500 bytes. We assume N201 set to
576 bytes in this section.
Table 60 and Table 61 present the results for B10 MR1 (GboFR) and B10 MR2 (GboIP), respectively.
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 12 085 42 925 66 801 21 281 383 180 42 936
TBF establishments / s at target load 208.1 314.0 224.7 105.8 354.8 245.5
Total LLC / s (UL and DL) at target load 2 688 418 2 086 3 583 355 1 677
Downlink data load
DL LLC / s 1 834.9 200.8 1 152.9 3 495.6 118.3 1 129.4
Average DL LLC PDU size (bytes) 457.7 425.0 412.9 507.6 50.0 456.3
DL throughput (kbit/s) 6 718.3 682.6 3 808.5 14 194.5 47.3 4 122.4

Percentage of max DL GPU throughput 54.6% 5.5% 31.0% 115.4% 0.4% 33.5%
Uplink data load


UL LLC / s 853.4 217.3 933.4 87.5 236.5 547.3


Average UL LLC PDU size (bytes) 50.0 50.0 319.0 50.0 50.0 197.4
UL throughput (kbit/s) 341.4 86.9 2 381.7 35.0 94.6 864.4
Percentage of max UL GPU throughput 2.8% 0.7% 19.4% 0.3% 0.8% 7.0%
Memory needed
Number of simultaneously connected MS 229 544 345 304 0 651
GSL link for one GSL at 64 kbit/s
MFS -> BSC 137.2% 276.1% 193.6% 122.5% 315.4% 209.4%
BSC -> MFS 118.6% 226.7% 159.1% 107.2% 257.3% 173.7%

Table 60: PPC capacity at 75% CPU load with GP for GPRS (B10 MR1 GboFR)
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 11 051 42 744 62 993 18 204 383 404 40 706
TBF establishments / s at target load 190.3 312.7 211.9 90.5 355.0 232.8
Total LLC / s (UL and DL) at target load 2 458 416 1 967 3 065 355 1 590
Downlink data load
DL LLC / s 1 678.0 199.9 1 087.2 2 990.3 118.3 1 070.8
Average DL LLC PDU size (bytes) 457.7 425.0 412.9 507.6 50.0 456.3
DL throughput (kbit/s) 6 143.5 679.7 3 591.4 12 142.5 47.3 3 908.3
Percentage of max DL GPU throughput 49.9% 5.5% 29.2% 98.7% 0.4% 31.8%
Uplink data load
UL LLC / s 780.4 216.4 880.2 74.8 236.7 518.9
Average UL LLC PDU size (bytes) 50.0 50.0 319.0 50.0 50.0 197.4
UL throughput (kbit/s) 312.2 86.5 2 246.0 29.9 94.7 819.5
Percentage of max UL GPU throughput 2.5% 0.7% 18.3% 0.2% 0.8% 6.7%
Memory needed
Number of simultaneously connected MS 327 829 553 328 0 981
GSL link for one GSL at 64 kbit/s
MFS -> BSC 138.2% 279.6% 192.4% 123.4% 319.8% 207.9%
BSC -> MFS 123.6% 233.6% 162.7% 112.1% 264.9% 176.9%

Table 61: PPC capacity at 75% CPU load with GP for GPRS (B10 MR2 GboIP)

6.4.2 EDGE case


Up to now, the N201 parameter value observed in EDGE-capable MS is around 800 bytes. This value is used
to estimate the PPC capacity with 100% EDGE MS.
Table 62 and Table 63 present the results for B10 MR1 (GboFR) and B10 MR2 (GboIP), respectively.
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 12 624 36 075 64 053 29 683 358 479 40 751
TBF establishments / s at target load 217.4 263.9 215.5 100.2 331.9 229.8
Total LLC / s (UL and DL) at target load 2 360 351 1 541 3 355 332 1 260
Downlink data load
DL LLC / s 1 468.6 168.7 818.6 3 280.4 110.6 798.8
Average DL LLC PDU size (bytes) 597.3 425.0 557.7 754.4 50.0 612.2
DL throughput (kbit/s) 7 017.9 573.6 3 651.8 19 798.8 44.3 3 912.6
Percentage of max DL GPU throughput 49.1% 4.0% 25.5% 138.5% 0.3% 27.4%
Uplink data load
UL LLC / s 891.5 182.6 722.8 74.6 221.3 461.5

Average UL LLC PDU size (bytes) 50.0 50.0 394.9 50.0 50.0 221.9
UL throughput (kbit/s) 356.6 73.0 2 283.8 29.8 88.5 819.1


Percentage of max UL GPU throughput 2.5% 0.5% 16.0% 0.2% 0.6% 5.7%
Memory needed
Number of simultaneously connected MS 241 689 343 249 0 642
GSL link for one GSL at 64 kbit/s
MFS -> BSC 149.7% 289.1% 213.2% 140.3% 357.5% 232.3%
BSC -> MFS 133.1% 255.1% 185.0% 124.9% 314.9% 204.2%

Table 62: PPC capacity at 75% CPU load with GP for EGPRS (B10 MR1 GboFR)
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 11 733 35 924 61 439 25 619 358 581 39 237
TBF establishments / s at target load 202.1 262.8 206.7 86.5 332.0 221.2
Total LLC / s (UL and DL) at target load 2 193 350 1 478 2 896 332 1 213
Downlink data load
DL LLC / s 1 364.9 168.0 785.2 2 831.2 110.7 769.2
Average DL LLC PDU size (bytes) 597.3 425.0 557.7 754.4 50.0 612.2
DL throughput (kbit/s) 6 522.2 571.2 3 502.8 17 088.1 44.3 3 767.3
Percentage of max DL GPU throughput 41.3% 3.6% 22.2% 108.2% 0.3% 23.8%
Uplink data load
UL LLC / s 828.5 181.8 693.3 64.4 221.3 444.3
Average UL LLC PDU size (bytes) 50.0 50.0 394.9 50.0 50.0 221.9
UL throughput (kbit/s) 331.4 72.7 2 190.6 25.7 88.5 788.7
Percentage of max UL GPU throughput 2.1% 0.5% 13.9% 0.2% 0.6% 5.0%
Memory needed
Number of simultaneously connected MS 224 686 329 215 0 618
GSL link for one GSL at 64 kbit/s
MFS -> BSC 150.3% 292.5% 212.8% 138.9% 361.8% 231.5%
BSC -> MFS 137.6% 261.9% 188.6% 127.6% 322.5% 207.4%

Table 63: PPC capacity at 75% CPU load with GP for EGPRS (B10 MR2 GboIP)

6.4.3 Maximum throughput


Compared to B10 legacy, the CPU cost of TBF establishment is reduced with Mx. This means that the GP
throughput is increased, especially for traffic associated with large data transfers such as WEB or UDP.
However, the PPC is still the bottleneck in most cases. This is all the more true as the PS traffic observed
in the field is mostly WAP traffic with small data transfers.

6.4.4 Maximum number of established MS


The maximum number of TBF contexts in PMU memory is multiplied by a factor of 4 with Mx (3840 instead
of 960). This means that even in the worst case, the number of established MS remains lower than this
threshold.

6.4.5 GSL Load


Due to the increased expected capacity at PPC level, a configuration of 3 GSLs per GP is recommended,
especially if the main data traffic is composed of WAP traffic, or if there is an important proportion of
signalling such as RA Update.

6.4.6 TBF establishment, LLC PDU transmission per hour and PPC Throughput
In this section, the number of TBFs established in DL and UL at busy hour, the number of DL and UL LLC
PDUs transferred at busy hour, and the DL PPC throughput at busy hour are shown as a function of the CPU
load for the Average User traffic model. Two cases are presented, one for GPRS traffic and one for
EGPRS traffic. For each case, there is a comparison between B10 MR1 (GboFR) and B10 MR2 (GboIP).
The results are presented in Figure 22 and Figure 23 for TBF establishments, in Figure 24 and Figure 25 for
LLC PDUs transmitted and in Figure 26 and Figure 27 for DL PPC throughput (in kbit/s).
The impact of the different features (EDA, Gb over IP) introduced in B10 MR2 on the PPC CPU load
corresponds to an increase of around 5% for the Average User traffic model.

Note The overheads of 3.55% for B10 MR1 (GboFR) and of 3.85% for B10 MR2 (GboIP) are taken into
account in the evaluation of the different parameters.

[Chart: Number of TBF Requests per hour versus PPC CPU Load (GPRS only); curves MR1, MR2-IP; CPU load 10% to 90%; y-axis 0 to 1 200 000 TBF requests]

Figure 22: Number of TBF established in DL and UL per hour for GP (GPRS case)

[Chart: Number of TBF Requests per hour versus PPC CPU Load (EGPRS only); curves MR1, MR2-IP; CPU load 10% to 90%; y-axis 0 to 1 200 000 TBF requests]

Figure 23: Number of TBF established in DL and UL per hour for GP (EGPRS case)

[Chart: Number of LLC PDUs per hour versus PPC CPU Load (GPRS only); curves MR1, MR2-IP; CPU load 10% to 90%; y-axis 0 to 8 000 000 LLC PDUs transferred]

Figure 24: Number of LLC PDU transmitted in DL and UL per hour for GP (GPRS case)

[Chart: Number of LLC PDUs per hour versus PPC CPU Load (EGPRS only); curves MR1, MR2-IP; CPU load 10% to 90%; y-axis 0 to 6 000 000 LLC PDUs transferred]

Figure 25: Number of LLC PDU transmitted in DL and UL per hour for GP (EGPRS case)

[Chart: DL PPC throughput in kbit/s versus PPC CPU Load (GPRS only); curves MR1, MR2-IP; CPU load 10% to 90%; y-axis 0 to 6000 kbit/s]

Figure 26: DL PPC Throughput in kbit/s for GP (GPRS case)

[Chart: DL PPC throughput in kbit/s versus PPC CPU Load (EGPRS only); curves MR1, MR2-IP; CPU load 10% to 90%; y-axis 0 to 5000 kbit/s]

Figure 27: DL PPC Throughput in kbit/s for GP (EGPRS case)

7 GSL CAPACITY

7.1 GSL Configuration


In the MFS, the BSCGP layer is instantiated on several GPUs / GPs. The BSCGP entity of one GPU / GP
board maintains up to 4 GSLs (LAPD links) with the BSC and performs load sharing between them. These
redundant GSLs make up a so-called GSL set. The GSL is carried by a 64 kbit/s Ater channel. The BSCGP
entity in the BSC, when initiating a request, performs load sharing between all the GSLs of the BSS. One
DTC (BSC G2) / CCP (BSC Mx) board is dedicated per GSL inside the BSC.
It is recommended to configure at least 2 GSLs per GPU / GP for redundancy. It should be noted that there
is a maximum of one GSL per Atermux, but the sharing of one Atermux between CS and PS services makes
it possible to configure several GSLs per GPU / GP while allocating only part of the two Atermux to PS
services.
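As a purely illustrative sketch of this load sharing (the actual BSCGP distribution algorithm is not detailed here; a simple round-robin over the GSL set is assumed only for the example):

# Hedged sketch: round-robin load sharing of BSCGP messages over the GSLs of
# one GSL set (up to 4 links per GPU / GP). The real BSC / MFS algorithm is
# not specified in this document; round-robin is only an assumption.
from itertools import cycle

class GslSet:
    def __init__(self, nb_gsl):
        assert 1 <= nb_gsl <= 4, "a GSL set holds at most 4 GSLs"
        self._next_gsl = cycle(range(nb_gsl))

    def send(self, message):
        gsl_index = next(self._next_gsl)
        # In the real system, the LAPD frame would be queued on this GSL.
        return gsl_index

gsl_set = GslSet(nb_gsl=2)
print([gsl_set.send("msg%d" % i) for i in range(5)])   # -> [0, 1, 0, 1, 0]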

7.2 GSL load model for one GPU / GP


In the following we will consider the signalling load generated on one GSL link set, corresponding to
one GPU / GP.
To estimate the average signalling load on the GSL, we consider the signalling generated by Radio
Resource management, Transmission Resource Management and Radio Signalling procedures.
Other procedures related to cell management state handling and interface control are not considered in
the average load, because they are assumed to be rare events.
The average GSL PS load, expressed in kbit/s, is estimated as:
( Bytes_TBF_c × Rate_TBF_c
+ Bytes_paging_CS × Rate_paging_CS
+ Bytes_paging_PS × Rate_paging_PS
+ Bytes_DTM_TBF_c × Rate_DTM_TBF_c
+ Bytes_BSC_DTM_Shared_Infos × Rate_bsc_shared_infos
+ Bytes_suspend_resume × Rate_suspend_resume
+ Bytes_trans_mgt × Rate_gch_mgt
+ Bytes_rr_allocation × Rate_rr_alloc
+ Bytes_rr_usage × Rate_rr_usage ) × 8 / 1000

Bytes_xxx corresponds to the number of bytes transferred for the xxx procedure. It takes into account the
message flow for requests and responses.
Rate_xxx corresponds to the expected number of requests per second for a GPU / GP at nominal load. To
make this rate as independent as possible from a traffic model we take a worst-case hypothesis:
Rate_TBF_c is the maximum number of TBF establishment per second on the CCCH.
Rate_suspend_resume corresponds to 30% of the maximum number of Call Attempts per second
(CAPS) on the largest BSC. The value of 30% corresponds to the GPRS penetration rate.
Rate_paging_PS is expected negligible (most PS traffic is currently expected to be Mobile Originated)
and is not considered here,
Rate_paging_CS is the CS paging received on Gb (from Gs) and transmitted to the cells on the CCCH.
It is equal to 30% of the maximum number of CS paging received on a BSC on A interface. This case is
possible with Network Operating Mode I (with Gs interface).

Rate_rr_allocation and Rate_rr_usage correspond to the transmissions between the BSC and the MFS
of RAE-4 messages (RR Allocation Indication from BSC to MFS and RR Usage Indication from MFS to
BSC).
For the transmission resource management, we assume that half of the established TBFs (Rate_TBF_c)
request some transmission resources.
Rate_DTM_TBF_c is the number of CS calls that go from dedicated mode to DTM mode, and
then return to dedicated mode.
Rate_bsc_shared_infos is the number of BSC Shared DTM Indication Infos messages that are
transmitted between the BSC and the MFS. This number depends on the number of DTM-capable MS
that are in dedicated mode, and on a timer that defines the periodicity of the transmission
(T_SHARED_DTM_INFO). A minimal sketch of the resulting load computation is given below.
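The sketch below illustrates this load model (illustrative only: the byte counts and rates are placeholders to be taken from Table 65 and Table 68; only a subset of procedures is shown, with both directions summed):

# Hedged sketch of the average GSL PS load model defined above. The byte
# counts and rates are placeholders; in practice they come from Table 65
# (message sizes, both directions summed here) and Table 68 (occurrences
# per second). Only a subset of procedures is shown.
def gsl_load_kbps(bytes_per_procedure, rate_per_second):
    """Average GSL PS load in kbit/s for one GPU / GP."""
    total_bytes_per_s = sum(bytes_per_procedure[p] * rate_per_second[p]
                            for p in bytes_per_procedure)
    return total_bytes_per_s * 8.0 / 1000.0

# Example values loosely based on Table 65 / Table 68 (BSC G2 + MFS GPU2, MR1):
example_bytes = {"TBF_c": 106, "suspend_resume": 187, "rr_usage": 47}
example_rates = {"TBF_c": 22.3, "suspend_resume": 11.4, "rr_usage": 52.8}
print("GSL PS load: %.1f kbit/s" % gsl_load_kbps(example_bytes, example_rates))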

7.3 GSL load estimation (worst case)


The GSL load is estimated below for the worst-case hypothesis in terms of GSL signalling load and in
terms of messages exchanged on the link. With a given traffic model or with a specific operator
configuration (for example, no Gs interface), this load may be lower.
The following inputs are used:
The average number of TBF establishments on CCCH corresponds to half the GPU / GP maximum
signalling capacity (see footnote 24 and the maximum PPC ping rate in Section 4.3.4) at PPC nominal load (75%).
The number of RR_Allocation_Indication transmitted from the BSC to the MFS, is evaluated assuming
a periodicity of 10 seconds. This parameter depends also on the number of cells (see Table 67).
The number of RR_Usage_Indication transmitted from the MFS to the BSC, is evaluated assuming a
periodicity of 5 seconds. This parameter depends also on the number of cells (see Table 67).
The maximum CAPS value depends on the maximum Erlang capacity of the BSC. The maximum
capacity is equal to 1900 Erlang, 4000 Erlang and 4500 Erlang for BSC G2, BSC Evolution in B10
MR1 and BSC Evolution in B10 MR2, respectively. The number of suspend / resume requests is then
equal to CAPS × Perc_gprs, where Perc_gprs corresponds to the GPRS penetration rate. The number
of BSC Shared DTM Indication Infos messages transmitted between the BSC and the MFS is also
deduced from the CAPS, and is equal to CAPS × Perc_DTM_MS / T_SHARED_DTM_INFO, where
Perc_DTM_MS is the percentage of DTM-capable MS.
Not all established TBFs have a need in terms of transmission resources. In some cases, there are
already enough allocated GCHs to fulfil the need of the TBF. Therefore, we assume that only half of
the established TBFs ask for new Abis resources and new GCH connections.
The number of bytes for each message on BSCGP is provided in Table 65, with hypothesis listed in
Table 64. For each message a LAPD header length is counted in the direction of the message and a RR
frame length in the opposite direction (layer 2 acknowledgement). The total number of bytes per
procedure and per direction is added, and then multiplied by the corresponding rate, to obtain the
total data load for one GPU / GP.
Number of PS capable TRX per cell 4
Number of DL PDCH allocated per TBF 3
Penetration rate for GPRS MS 30%
Penetration rate for EGPRS MS (among GPRS MS) 50%
Max_GPRS_CS CS4
Max_EGPRS_MCS MCS9
Average number of GCHs per PDCH 3.0

Footnote 24: Because we are not considering a traffic model as input, the GSL load is based on a worst case derived from the GPU / GP capacity.

LAPD header length 8
RR frame length (Layer 2 acknowledgement) 8


TCH_INFO_PERIOD 5s
RR_ALLOC_PERIOD 2
T_SHARED_DTM_INFO 8s
Percentage of MS DTM capable 10%
Percentage of CS call in dedicated mode going to DTM mode 2%

Table 64: BSCGP hypothesis and LAPD overhead

Table 65 provides the size of the different messages exchanged on the BSCGP interface between the MFS
and the BSC.
B10 messages | MFS -> BSC (bytes) | BSC -> MFS (bytes)
Abis Nibbles Allocation 35 8
Transmission Allocation Request 75 8
Transmission Allocation Confirm 8 98
Abis Nibbles Allocation 35 8
Transmission Deallocation Command 43 8
Transmission Deallocation Complete 8 66
1 cycle transmission allocation / de-allocation 204 196
CS paging 50 8
Channel request 8 34
Channel Assignment 56 8
Total TBF establishment 64 42
DTM Assignment Command 81 8
MFS Shared DTM Info Indication 31 8
MFS Shared DTM Info Indication Ack 8 34
1 cycle of DTM establishment / release 120 50
BSC Shared DTM Info Indication 8 52
MS Resume 8 38
MS Resume Ack 38 8
MS Suspend 8 38
MS Suspend Ack 41 8
1 cycle of suspend / resume 95 92
RR Allocation Indication 8 31
RR Usage Indication 39 8

Table 65: Size of messages between MFS and BSC

Table 66 provides the number of BSCGP messages exchanged on the BSCGP interface between the MFS and
the BSC for each service (transmission, paging and so on). The messages used for acknowledgement are not
taken into account.
Service | MFS -> BSC (messages) | BSC -> MFS (messages)
1 cycle transmission allocation / de-allocation 4 2
CS paging 1 0
Total TBF establishment 1 1
1 cycle of DTM establishment / release 2 1
BSC Shared DTM Info Indication 0 1
1 cycle of suspend / resume 2 2
RR Allocation Indication 0 1
RR Usage Indication 1 0

Table 66: Number of BSCGP messages between MFS and BSC
With the introduction of Mx, there are several possible configurations with BSC and MFS, which are the
following ones:
BSC G2 and MFS GPU2,
BSC G2 and MFS GPU3
BSC G2 and MFS GP,
BSC Evolution and MFS GPU2,
BSC Evolution and MFS GPU3,
BSC Evolution and MFS GP.

The number of messages per second on the BSCGP depends on the type of BSC (G2 or Evolution) and on the
type of MFS board (GPU2, GPU3 or GP). Table 67 presents the assumptions for BSC G2 and BSC Evolution.
Parameter | BSC G2 | BSC Evolution MR1 | BSC Evolution MR2
Maximum Erlang capacity 1900 4000 4500
Number of cells 264 500 500
Maximum paging per second 70 108 121
CAPS 38 80 90

Table 67: Assumptions for BSC G2 and BSC Evolution

Table 68 provides the occurrences per second associated with each service (transmission, paging, TBF
establishment, and so on) for B10 MR1 and B10 MR2.
Service | G2 / GPU2 | G2 / GPU3 | G2 / GP | Ev / GPU2 | Ev / GPU3 | Ev / GP
B10 MR1 (GboFR)
1 cycle transmission (de)allocation 11.1 18.6 48.5 11.1 18.6 48.5
CS paging 21.0 21.0 21.0 32.4 32.4 32.4
Total TBF establishment 22.3 37.3 96.9 22.3 37.3 96.9
1 cycle of DTM (de) establishment 0.4 0.7 1.9 0.4 0.7 1.9
BSC Shared DTM Info Indication 23.8 23.8 23.8 50.0 50.0 50.0
1 cycle of suspend / resume 11.4 11.4 11.4 24.0 24.0 24.0
RR Allocation Indication 26.4 26.4 26.4 50.0 50.0 50.0
RR Usage Indication 52.8 52.8 52.8 100.0 100.0 100.0
B10 MR2 (GboIP)
1 cycle transmission (de)allocation 9.3 14.6 48.6 9.3 14.6 48.6
CS paging 21.0 21.0 21.0 36.3 36.3 36.3
Total TBF establishment 18.5 29.2 97.2 18.5 29.2 97.2
1 cycle of DTM (de) establishment 0.4 0.6 1.9 0.4 0.6 1.9
BSC Shared DTM Info Indication 23.8 23.8 23.8 56.3 56.3 56.3
1 cycle of suspend / resume 11.4 11.4 11.4 27.0 27.0 27.0
RR Allocation Indication 26.4 26.4 26.4 50.0 50.0 50.0
RR Usage Indication 52.8 52.8 52.8 100.0 100.0 100.0

Table 68: Occurrences per second for the available BSC / MFS hardware
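As a cross-check of Table 68, a few of these rates can be recomputed directly from the assumptions of Table 64 and Table 67. The sketch below (an illustrative reconstruction, not the original dimensioning tool) does this for the BSC G2 / B10 MR1 column:

# Hedged sketch: recompute some Table 68 occurrence rates for BSC G2, B10 MR1,
# from the assumptions of Table 64 and Table 67.
max_cs_paging_per_s = 70   # Table 67, BSC G2: maximum paging per second
caps = 38                  # Table 67, BSC G2: call attempts per second
nb_cells = 264             # Table 67, BSC G2
perc_gprs = 0.30           # GPRS penetration rate (Table 64)

cs_paging_rate = max_cs_paging_per_s * perc_gprs   # CS paging forwarded on Gb: 21.0 /s
suspend_resume_rate = caps * perc_gprs             # suspend / resume cycles: 11.4 /s
rr_allocation_rate = nb_cells / 10.0               # RR Allocation Indication, 10 s period: 26.4 /s
rr_usage_rate = nb_cells / 5.0                     # RR Usage Indication, 5 s period: 52.8 /s

print(cs_paging_rate, suspend_resume_rate, rr_allocation_rate, rr_usage_rate)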

From Table 65 and Table 68, we can evaluate the data load of one GP / GPU for the six possible mixed
configurations, with B10 MR1 and B10 MR2 (see Table 69).
GSL data load (kbit/s) | MFS -> BSC | BSC -> MFS
B10 MR1 (GboFR)
BSC G2 + MFS GPU2 71.0 54.6

BSC G2 + MFS GPU3 91.2 71.6
BSC G2 + MFS GP 171.6 138.9


BSC Evolution + MFS GPU2 106.8 84.4


BSC Evolution + MFS GPU3 127.0 101.4
BSC Evolution + MFS GP 207.4 168.7
B10 MR2 (GboIP)
BSC G2 + MFS GPU2 66.0 50.5
BSC G2 + MFS GPU3 80.3 62.5
BSC G2 + MFS GP 171.9 139.2
BSC Evolution + MFS GPU2 106.0 85.3
BSC Evolution + MFS GPU3 120.4 97.3
BSC Evolution + MFS GP 212.0 174.0

Table 69: GSL data load for one GPU / GP (without location services)
Between the MFS and the BSC, the data load for one GPU / GP ranges from 71.0 kbit/s (B10 MR1 GboFR) /
66 kbit/s (B10 MR2 GboIP) for a legacy configuration (BSC G2 and MFS GPU2) to 207.4 kbit/s (B10 MR1
GboFR) / 212 kbit/s (B10 MR2 GboIP) for an Mx configuration (BSC Evolution and MFS GP). The GSL load is
therefore roughly multiplied by three with the introduction of Mx in the worst case.
From the results presented in Table 69, and since one GSL carries 64 kbit/s, we notice that for the cases
with an Mx MFS, 3 GSL links will be needed with a BSC G2 and 4 GSL links will be needed with a BSC
Evolution.
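The number of 64 kbit/s GSL links needed per GPU / GP can be derived from the per-direction load, as in the minimal sketch below (the two example loads are the MFS -> BSC values of Table 69 for B10 MR2 GboIP):

# Hedged sketch: number of 64 kbit/s GSL links needed per GPU / GP to carry a
# given signalling load (MFS -> BSC loads taken from Table 69, B10 MR2 GboIP).
import math

GSL_RATE_KBPS = 64.0

def gsl_links_needed(load_kbps):
    return math.ceil(load_kbps / GSL_RATE_KBPS)

print(gsl_links_needed(171.9))  # BSC G2 + MFS GP        -> 3 links
print(gsl_links_needed(212.0))  # BSC Evolution + MFS GP -> 4 links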

From Table 66 and Table 68, we can evaluate the total number of messages exchanged between the MFS
and the BSC (and vice-versa) per second (see Table 70).
GSL load (messages per second) | MFS -> BSC | BSC -> MFS
B10 MR1 (GboFR)
BSC G2 + MFS GPU2 164 118
BSC G2 + MFS GPU3 210 148
BSC G2 + MFS GP 391 269
BSC Evolution + MFS GPU2 248 193
BSC Evolution + MFS GPU3 294 223
BSC Evolution + MFS GP 475 344
B10 MR2 (GboIP)
BSC G2 + MFS GPU2 153 110
BSC G2 + MFS GPU3 185 132
BSC G2 + MFS GP 392 269
BSC Evolution + MFS GPU2 247 198
BSC Evolution + MFS GPU3 279 219
BSC Evolution + MFS GP 486 357

Table 70: Number of messages per second exchanged on GSL link for GPU / GP
For a terrestrial link, assuming K_GSL set to 7 and a round-trip delay of 25 ms, one GSL link can handle
7 / 0.025 = 280 messages per second. In the worst case (BSC Evolution and MFS Mx), two GSL links would be
needed from this point of view. This means that, for a terrestrial GSL link, the available bandwidth is more
restricting than the window size.

7.4 GSL load model for up to 6 GPU / GP


The maximum penetration rate for PS services in the scope of B10 (30%) has already been applied with
one GPU2 / GPU3 / GP per BSS for the GSL data load model. Consequently, we consider that the 6 GPU /
GP case corresponds to an increase of traffic per PS user, but that the number of PS users is not
increased. Hence, when more GPUs / GPs are considered, the part of the PS load which depends on the
total number of PS users in the BSS is distributed over more GPUs / GPs. These functions are MS suspend /
resume and CS paging. The load of messages due to RAE-4 is also distributed over the GPUs / GPs, as these
messages are independent of the PS traffic.
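A minimal sketch of this distribution (the 30.0 / 41.0 kbit/s split between fixed and relative load is taken from the BSC G2 + MFS GPU2, B10 MR1 GboFR row of Table 71):

# Hedged sketch: GSL data load per GPU / GP as a function of the number of
# boards. The fixed part (TBF and transmission resource handling) applies to
# each board, while the relative part (paging, suspend/resume, RAE-4) is
# shared between the boards. Values from Table 71 (BSC G2 + MFS GPU2, MR1).
fixed_load_kbps = 30.0     # per-board load, independent of the number of boards
relative_load_kbps = 41.0  # BSS-wide load, distributed over the boards

for nb_boards in range(1, 7):
    load = fixed_load_kbps + relative_load_kbps / nb_boards
    print("%d board(s): %.1f kbit/s per GPU" % (nb_boards, load))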

The tables below give the data load expected for 1 to 6 GPU2 / GPU3 / GP boards, for each possible
configuration defined in Section 7.3.
Number of GPUs 1 2 3 4 5 6
B10 MR1 (GboFR)
Fixed load (TBFs allocation and de-allocation) 30.0 30.0 30.0 30.0 30.0 30.0
Relative load (paging, suspend/resume, RAE-4) 41.0 20.5 13.7 10.2 8.2 6.8
Total load per GPU (kbit/s) 71.0 50.5 43.6 40.2 38.2 36.8
B10 MR2 (GboIP)
Fixed load (TBFs allocation and de-allocation) 25.0 25.0 25.0 25.0 25.0 25.0
Relative load (paging, suspend/resume, RAE-4) 41.0 20.5 13.7 10.2 8.2 6.8
Total load per GPU (kbit/s) 66.0 45.5 38.6 35.2 33.2 31.8

Table 71: Data load on the GSL for up to 6 GPU (BSC G2 and MFS GPU2)
Number of GPUs 1 2 3 4 5 6
B10 MR1 (GboFR)
Fixed load (TBFs allocation and de-allocation) 50.2 50.2 50.2 50.2 50.2 50.2
Relative load (paging, suspend/resume, RAE-4) 41.0 20.5 13.7 10.2 8.2 6.8
Total load per GP (kbit/s) 91.2 70.7 63.8 60.4 58.4 57.0
B10 MR2 (GboIP)
Fixed load (TBFs allocation and de-allocation) 39.3 39.3 39.3 39.3 39.3 39.3
Relative load (paging, suspend/resume, RAE-4) 41.0 20.5 13.7 10.2 8.2 6.8
Total load per GPU (kbit/s) 80.3 59.8 53.0 49.6 47.5 46.2

Table 72: Data load on the GSL for up to 6 GPU (BSC G2 and MFS GPU3)
Number of GPs 1 2 3 4 5 6
B10 MR1 (GboFR)
Fixed load (TBFs allocation and de-allocation) 130.6 130.6 130.6 130.6 130.6 130.6
Relative load (paging, suspend/resume, RAE-4) 41.0 20.5 13.7 10.2 8.2 6.8
Total load per GPU (kbit/s) 171.6 151.1 144.2 140.8 138.8 137.4
B10 MR2 (GboIP)
Fixed load (TBFs allocation and de-allocation) 130.9 130.9 130.9 130.9 130.9 130.9
Relative load (paging, suspend/resume, RAE-4) 41.0 20.5 13.7 10.2 8.2 6.8
Total load per GPU (kbit/s) 171.9 151.4 144.6 141.2 139.1 137.7

Table 73: Data load on the GSL for up to 6 GPU (BSC G2 and MFS GP)
Number of GPUs 1 2 3 4 5 6
B10 MR1 (GboFR)
Fixed load (TBFs allocation and de-allocation) 30.0 30.0 30.0 30.0 30.0 30.0
Relative load (paging, suspend/resume, RAE-4) 76.8 38.4 25.6 19.2 15.4 12.8
Total load per GP (kbit/s) 106.8 68.4 55.6 49.2 45.3 42.8
B10 MR2 (GboIP)
Fixed load (TBFs allocation and de-allocation) 25.0 25.0 25.0 25.0 25.0 25.0
Relative load (paging, suspend/resume, RAE-4) 81.0 40.5 27.0 20.3 16.2 13.5
Total load per GPU (kbit/s) 106.0 65.5 52.0 45.2 41.2 38.5

Table 74: Data load on the GSL for up to 6 GPU (BSC Evolution and MFS GPU2)
Number of GPUs 1 2 3 4 5 6
B10 MR1 (GboFR)
Fixed load (TBFs allocation and de-allocation) 50.2 50.2 50.2 50.2 50.2 50.2

Relative load (paging, suspend/resume, RAE-4) 76.8 38.4 25.6 19.2 15.4 12.8
Total load per GP (kbit/s) 127.0 88.6 75.8 69.4 65.5 63.0
B10 MR2 (GboIP)


Fixed load (TBFs allocation and de-allocation) 39.3 39.3 39.3 39.3 39.3 39.3
Relative load (paging, suspend/resume, RAE-4) 81.0 40.5 27.0 20.3 16.2 13.5
Total load per GPU (kbit/s) 120.4 79.9 66.4 59.6 55.6 52.9

Table 75: Data load on the GSL for up to 6 GPU (BSC Evolution and MFS GPU3)
Number of GPs 1 2 3 4 5 6
B10 MR1 (GboFR)
Fixed load (TBFs allocation and de-allocation) 130.6 130.6 130.6 130.6 130.6 130.6
Relative load (paging, suspend/resume, RAE-4) 76.8 38.4 25.6 19.2 15.4 12.8
Total load per GP (kbit/s) 207.4 169.0 156.2 149.8 145.9 143.4
B10 MR2 (GboIP)
Fixed load (TBFs allocation and de-allocation) 130.9 130.9 130.9 130.9 130.9 130.9
Relative load (paging, suspend/resume, RAE-4) 81.0 40.5 27.0 20.3 16.2 13.5
Total load per GPU (kbit/s) 212.0 171.4 157.9 151.2 147.1 144.4

Table 76: Data load on the GSL for up to 6 GPU (BSC Evolution and MFS GP)
When there are several GPU / GP per BSS, the number of GSLs can be adjusted depending on the needs. A
maximum of 3 GSL per GPU / GP is needed for the case BSC Evolution and MFS Mx.

8 GLOSSARY

Air / Um Radio Interface


Abis Interface between BSC and BTS
Ater Interface between BSC and transcoder / MFS
BH Busy Hours
BLER Block Error Rate
BSC Base Station Controller
BSCGP Base Station Controller GPRS Protocol
BSS Base Station System
BSSAP BSS Application (GSM layer 3 protocol in BSC)
BSSGP Base Station System GPRS Protocol
BTS Base Transceiver Station
CAPS Call Attempts per Second (in BSS or cell)
CS Circuit Switched
CS-n Coding Scheme 1 to 4
DL DownLink
DRX Discontinuous reception
DSP Digital Signal Processor
DTC Digital Trunk Controller
EDGE Enhanced Data Rates for GSM evolution
FR Frame Relay
Gb Interface between BSS and SGSN
GCH GPRS Channel
GGSN Gateway GPRS Support Node
GMM GPRS Mobility Management
GPRS General Packet Radio Service
GPRSAP GPRS Application Part (GSM layer 3 protocol relative to GPRS in the BSC)
GPU GPRS Processing Unit
GSL GPRS Signalling Link
GSM Global System For Mobile communication
HDLC High-level Data Link Control procedures
HTTP Hyper Text Transfer Protocol
IE Information Element
IP Internet Protocol
L2-GCH Layer 2 GCH

L1-GCH Layer 1 GCH
LAPD Link Access Procedure on the D Channel


LLC Logical Link Control layer
MAC Medium Access Control layer
M-EGCH Multiplexed EGCH channel
MFS Multi BSS Fast packet Server
MIPS Million Instructions Per Second
MMS Multi Media Messaging Services
MPDCH Master PDCH: PDCH carrying P[B/C]CCH channels
MS Mobile Station
MSC Mobile services Switching Center
MSG Message
NSS Network Switching System
N201 Maximum LLC-PDU data field length (limited by MS capability).
O&M Operation and Maintenance
PACCH Packet Associated Control Channel
PAGCH Packet Access Grant Channel
PBCCH Packet Broadcast Control Channel
PCCCH Packet Control Common Channel
PCM Pulse Code Modulation link
PDAN Packet Downlink Ack/Nack message
PDAS Packet Downlink Assignment message
PDCH Packet Data Channel
PDP Packet Data Protocol (such as IP or X.25)
PDTCH Packet Data Traffic Channel
PDU Protocol Data Unit
PLMN Public Land Mobile Network
PPC Power PC processor
PPCH Packet Paging Channel
PRACH Packet Radio Access Channel
PS Packet Switched
PUAN Packet Uplink Ack/Nack message
PUAS Packet Uplink Assignment message
QoS Quality of Service
RA Routing Area
RLC Radio Link Control layer
RRM Radio Resource Management Layer

SGSN Serving GPRS Support Node
TBF Temporary Block Flow


TCH-RM Traffic Channel Resource Manager (function responsible for radio channel
allocation in the BSC)
TCP Transmission Control Protocol
TCU Terminal Control Unit
TRCU Transcoder Unit
TRK-DH Trunk device Handler (function which controls the Abis-Ater switch inside the BSC)
TRX Transceiver
TS Technical Specification / Time Slot
TSC Transcoder Sub-multiplexer Controller
UL UpLink
WAP Wireless Application Protocol
WEB Abbreviation for WEB-browsing with http protocol


9 ANNEX

9.1 Message routes through BSS

[Figure: protocol stacks along the path MS - Um - BTS - Abis - BSC - Ater - MFS - Gb - SGSN; LLC PDUs are relayed by the BTS and the MFS and routed transparently through the BSC.]

Figure 28: Message routes: PDU Uplink / Downlink

[Figure: same protocol stacks as Figure 28, here for the route of paging messages.]

Figure 29: Message routes: Paging

[Figure: same protocol stacks as Figure 28, here for the route of signalling messages.]

Figure 30: Message routes: Signalling

[Figure: same protocol stacks as Figure 28, here for the route of resource management messages.]

Figure 31: Message routes: Resource management

9.2 Protocol Layers and message sizes


Radio-side layering (user plane):
RRM layer: LLC frame = LLC header + LLC data + FCS; LLC frame size: 5 to 1560 bytes.
RLC/MAC layer: RLC/MAC block = MAC + RLC header + RLC data + BCS; header: 3 to n bytes; RLC data: 22 bytes with CS1, 32 bytes with CS2.
L2-GCH layer: L2-GCH block = Sync. + L2 header + RLC SDU + Tail; frame size: 40 bytes.
Radio layer: radio bursts; 4 radio bursts per RLC SDU; 1 block = 4 radio bursts = 20 ms.
Sync. = 0000 0000 001Z
FCS = Frame Check Sequence
MAC = Block Header
BCS = Block Check Sequence (when SDCCH coding is used, BCS corresponds to the Fire code)

Gb-side layering (user plane):
RRM layer: LLC frame = LLC header + LLC data + FCS; LLC frame size: 5 to 1560 bytes.
BSSGP layer: BSSGP block = BSSGP header (14 to 36 bytes) + LLC SDU; 1 LLC frame per BSSGP block.
NS sub-layer: NSC block = NSC header (4 bytes) + BSSGP SDU; 1 BSSGP block per NSC block.
FR sub-layer: FR frame = Sync. flag (1 byte) + FR header (2 bytes) + NSC SDU + FCS (2 bytes) + Sync. flag (1 byte); 1 NSC block per FR frame.
FCS = Frame Check Sequence
Sync. flag = 01111110

Signalling messages (GSL):
RRM layer: Paging, Access Request, Access Grant messages.
BSCGP layer: BSCGP PDU = BSCGP header (4 to n bytes) + RRM SDU; SDU size: see message type size.
LapD layer: LapD frame = Sync. flag (1 byte) + LAPD header (4 bytes) + BSCGP SDU + FCS (2 bytes) + Sync. flag (1 byte); 1 BSCGP block per LapD frame.
FCS = Frame Check Sequence
Sync. flag = 01111110

9.3 Traffic model formula


The model whose corresponding formulas are presented below exists as an Excel tool.
Global parameters (all users) Value
RA_update_timer (mn) 54 mn
GPRS attachment at session start Input value (0 = no attachment at session start (MS attached at switch-
(start_with_attach) on), 1 = attachment at session start)

RA update description per BH


Routing area update per user at BH = 60 / Ra_update_timer
DL TBF / CCCH per user at BH =0
(rau_dl_tbf_pc_bh)
UL TBF / CCCH per user at BH = 1 * RA update at BH
(rau_ul_tbf_pc_bh)
DL TBF / PACCH per user at BH = 1 * RA update at BH
(rau_dl_tbf_pa_bh)
UL TBF / PACCH per user at BH = 1 * RA update at BH
(rau_ul_tbf_pa_bh)

Number of DL LLC PDU per user at BH = 1 * RA update at BH
(nb_rau_dl_pdu_bh)
Number of UL LLC PDU per user at BH = 2 * RA update at BH
(nb_rau_ul_pdu_bh)
PDU size = 50
DL kbytes per user at BH = Number of DL LLC PDU for RA update * PDU size / 1000
(rau_dl_kbytes_bh)
UL kbytes per user at BH = Number of UL LLC PDU for RA update * PDU size / 1000
(rau_ul_kbytes_bh)
RA update occurs for all MS only if permanently GPRS attached
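As an illustration, a minimal Python sketch of the RA-update contribution per user at busy hour, using the values above (54 mn timer, 50-byte signalling PDUs):

```python
# RA update profile per user at busy hour (BH), following the formulas above.
RA_UPDATE_TIMER_MIN = 54       # RA_update_timer (mn)
SIG_PDU_SIZE_BYTES = 50        # PDU size used for RA update signalling

rau_per_bh = 60 / RA_UPDATE_TIMER_MIN          # routing area updates per user at BH
rau_ul_tbf_pc_bh = 1 * rau_per_bh              # UL TBF on (P)CCCH
rau_dl_tbf_pa_bh = 1 * rau_per_bh              # DL TBF on PACCH
rau_ul_tbf_pa_bh = 1 * rau_per_bh              # UL TBF on PACCH
nb_rau_dl_pdu_bh = 1 * rau_per_bh              # DL LLC PDUs
nb_rau_ul_pdu_bh = 2 * rau_per_bh              # UL LLC PDUs
rau_dl_kbytes_bh = nb_rau_dl_pdu_bh * SIG_PDU_SIZE_BYTES / 1000
rau_ul_kbytes_bh = nb_rau_ul_pdu_bh * SIG_PDU_SIZE_BYTES / 1000

print(f"{rau_per_bh:.2f} RA updates per user at BH, "
      f"{rau_dl_kbytes_bh:.3f} / {rau_ul_kbytes_bh:.3f} DL/UL kbytes")
```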

Session description (application level)      UL TBF / PACCH   DL LLC   UL LLC   Bytes / DL LLC   Bytes / UL LLC
Attach (if start_with_attach = 0 / 1)             0/4           0/3      0/4          50               50
Detach (if start_with_attach = 0 / 1)             0/1           0/1      0/1          50               50
PDP context activate                               1             1        1           50               50
PDP context deactivate                             1             1        1           50               50
SUM                                               2/7           2/6      2/7       100 / 300        100 / 350
Average PDU size for signalling                                                        50               50
Value1 / value 2 depends on the value of the flag start_with_attach (0 or 1)

Session description Application only


Main direction of the transfer DL except for MMS-UL (UL)
Initial PAGING of MS (0 / 1) Input value (0 for all except for MMS-DL)
Number of transactions per session Input value (1 per session except for WEB and WAP)
(nb_trans_session)
Average transaction size (kbytes) Input value
Average packet size for user data (bytes) Input value (this is the IP packet size)
Average packet size for control message (bytes) Input value (TCP and application control messages altogether)
User data load per session (kbytes) = Average transaction size * number of transactions per session
Allocated throughput (kbytes/s) = Depends on percentage of EGPRS MS and on (M)CS distribution

Transaction description (GPU level) Application only


N201 parameter Input value (GPRS and EGPRS cases are distinguished)
Number of PDU per packet = Roundup of division between average packet size and N201
Mean DL LLC PDU size (bytes) For WEB: = specific estimate (see Section 5.3.1) + LLC_header,
Other DL applications: = average packet size for user data / number of
PDU per packet + LLC_header,
UL applications: = average packet size for control message +
LLC_header.
Mean UL LLC PDU size (bytes) DL applications: = average packet size for control message +
LLC_header,
UL applications: = average packet size for user data divided by number
of PDU per packet + LLC_header.
Number of DL LLC PDU DL applications: = average transaction size * 1000 / mean DL LLC PDU
(nb_dl_pdu_trans) size,
UL applications: = average transaction size * 1000 / average packet
size for user data (1 TCP-Ack per TCP segment).
Number of UL LLC PDU DL TCP based applications (WEB, MMS-DL): = average transaction size *
(nb_ul_pdu_trans) 1000 / average packet size for user data (1 TCP-Ack per TCP
segment),
DL UDP based applications: = 2 + average transaction size / (5 * allocated throughput) (1 control
message every 5 seconds of download),
WAP: = 1 + average transaction size * 1000 / average packet size for user data,
UL applications: = average transaction size * 1000 / mean UL LLC PDU size.
Number of DL TBF on (P)CCCH Input value (0, all transactions are supposed to be MS initiated, even
(nb_dl_tbf_pc_trans) for MMS-DL, for which the MS is paged and then responds to the paging)
Number of UL TBF on (P)CCCH Input value (1 except for MMS-DL (2): one for notification and one for
(nb_ul_tbf_pc_trans) MMS download)
Number of DL TBF on PACCH Input value (1 thanks to DL TBF release, except for MMS-DL (2): one
(nb_dl_tbf_pa_trans) for notification and one for MMS download)
Number of UL TBF on PACCH DL TCP based applications (WEB, MMS-DL): = 1 (extended UL mode
(nb_ul_tbf_pa_trans) taken into account and 1 UL TBF for initial request to server),
DL UDP based applications: = number of UL LLC PDU,
Other applications: = input value.
DL kbytes per transaction = Mean DL LLC PDU size * number of DL LLC PDU / 1000
(dl_kbytes_trans)
UL kbytes per transaction = Mean UL LLC PDU size * number of UL LLC PDU / 1000
(ul_kbytes_trans)
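To illustrate the per-transaction formulas, a minimal Python sketch for a generic downlink TCP-based application (the numeric inputs and the LLC header size are hypothetical example values, and the specific WEB PDU-size estimate of Section 5.3.1 is not used here):

```python
import math

# Hypothetical example inputs (not the profile values of the model).
N201 = 1500                  # max LLC-PDU data field length (bytes)
LLC_HEADER = 6               # assumed LLC header size (bytes), illustrative only
PACKET_SIZE_DATA = 1400      # average IP packet size for user data (bytes)
PACKET_SIZE_CTRL = 40        # average packet size for control messages (bytes)
TRANSACTION_KBYTES = 100     # average transaction size (kbytes)

# Number of LLC PDUs needed to carry one user-data packet.
nb_pdu_per_packet = math.ceil(PACKET_SIZE_DATA / N201)

# Mean LLC PDU sizes: DL carries user data, UL carries TCP acks / control messages.
mean_dl_pdu = PACKET_SIZE_DATA / nb_pdu_per_packet + LLC_HEADER
mean_ul_pdu = PACKET_SIZE_CTRL + LLC_HEADER

# LLC PDUs per transaction (one TCP ack per TCP segment in the UL direction).
nb_dl_pdu_trans = TRANSACTION_KBYTES * 1000 / mean_dl_pdu
nb_ul_pdu_trans = TRANSACTION_KBYTES * 1000 / PACKET_SIZE_DATA

# Data volume per transaction (kbytes).
dl_kbytes_trans = mean_dl_pdu * nb_dl_pdu_trans / 1000
ul_kbytes_trans = mean_ul_pdu * nb_ul_pdu_trans / 1000

print(nb_dl_pdu_trans, nb_ul_pdu_trans, dl_kbytes_trans, ul_kbytes_trans)
```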

User behaviour at BH per profile (Signalling profile incorporated)


Percentage of subscribers from each profile Input value (used for traffic mix only)
Number of sessions per user at BH Input value
(nb_session_user_BH)
Number of transactions per user at BH = nb_session_user_BH * nb_trans_session
(nb_trans_user_BH)
Number of PS paging per user at BH = nb_session_user_BH * paging per session
(PS_paging_user_BH)
Number of DL LLC PDU per user at BH = nb_session_user_BH * (nb_dl_pdu_trans * nb_trans_session +
(DL_PDU_user_BH) nb_sig_dl_pdu_session) + nb_rau_dl_pdu_bh
Number of UL LLC PDU per user at BH = nb_session_user_BH * (nb_ul_pdu_trans * nb_trans_session +
(UL_PDU_user_BH) nb_sig_ul_pdu_session) + nb_rau_ul_pdu_bh
Number of DL TBF / (P)CCCH per user at BH = nb_session_user_BH * (nb_dl_tbf_pc_trans * nb_trans_session +
(DL_TBF_PC_user_BH) sig_dl_pc_session) + rau_dl_tbf_pc_bh
Number of UL TBF / (P)CCCH per user at BH = nb_session_user_BH * (nb_ul_tbf_pc_trans * nb_trans_session +
(UL_TBF_PC_user_BH) sig_ul_pc_session) + rau_ul_tbf_pc_bh
Number of DL TBF / PACCH per user at BH = nb_session_user_BH * (nb_dl_tbf_pa_trans * nb_trans_session +
(DL_TBF_PA_user_BH) sig_dl_pa_session) + rau_dl_tbf_pa_bh
Number of UL TBF / PACCH per user at BH = nb_session_user_BH * (nb_ul_tbf_pa_trans * nb_trans_session +
(UL_TBF_PA_user_BH) sig_ul_pa_session) + rau_ul_tbf_pa_bh
Average DL kbytes per user at BH = nb_session_user_BH * (dl_kbytes_trans * nb_trans_session +
(DL_kbytes_user_BH) sig_dl_kbytes_session) + rau_dl_kbytes_bh
Average UL kbytes per user at BH = nb_session_user_BH * (ul_kbytes_trans * nb_trans_session +
(UL_kbytes_user_BH) sig_ul_kbytes_session) + rau_ul_kbytes_bh
Number of radio resource reallocation per user = nb_session_user_BH * 1 (for T2 or T3 reallocation after the
at BH signalling phase)
(nb_rrr_user_BH)
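A minimal sketch of the per-user busy-hour aggregation, shown for the DL LLC PDU and DL kbytes counters only (the other counters follow exactly the same pattern):

```python
def dl_counters_per_user_bh(nb_session_user_bh, nb_trans_session,
                            nb_dl_pdu_trans, dl_kbytes_trans,
                            nb_sig_dl_pdu_session, sig_dl_kbytes_session,
                            nb_rau_dl_pdu_bh, rau_dl_kbytes_bh):
    """Aggregate the DL counters per user at busy hour, as in the formulas above."""
    dl_pdu_user_bh = (nb_session_user_bh
                      * (nb_dl_pdu_trans * nb_trans_session + nb_sig_dl_pdu_session)
                      + nb_rau_dl_pdu_bh)
    dl_kbytes_user_bh = (nb_session_user_bh
                         * (dl_kbytes_trans * nb_trans_session + sig_dl_kbytes_session)
                         + rau_dl_kbytes_bh)
    return dl_pdu_user_bh, dl_kbytes_user_bh
```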

Estimation of transaction duration All profiles (except signalling)


Number of DL PDCHs allocated to a TBF Input value
(nb_dl_pdch_tbf)
Number of GCHs per PDCHs = value depends on type of traffic (GPRS or EGPRS) and of
(nb_gch_pdch) Max_(M)CS_(E)GPRS

Allocated throughput = throughput per PDCH * nb_dl_pdch_tbf
Data download / upload duration (s) = Average transaction size / allocated throughput
Request to server duration = First ping duration


Signalling phase duration = start_with_attach * 3 * ping duration + first ping duration + 2 * ping
duration
Average transaction duration = transfer duration + request duration + signalling duration /
(mean_trans_duration) nb_trans_session + Delayed DL TBF timer

Note Each signalling request-response exchange counts as one ping duration. The following assumptions are used
for the ping duration:
First ping duration: 1 s,
Other pings duration: 0.8 s.
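A minimal sketch of the transaction-duration estimate, using the ping assumptions above; the throughput and transaction-size arguments in the example call are hypothetical, and "ping duration" is read here as the 0.8 s value:

```python
FIRST_PING_S = 1.0     # first ping duration
OTHER_PING_S = 0.8     # other pings duration

def mean_trans_duration(transaction_kbytes, allocated_throughput_kbytes_s,
                        nb_trans_session, start_with_attach,
                        delayed_dl_tbf_timer_s=0.0):
    """Average transaction duration, following the formulas above."""
    transfer_s = transaction_kbytes / allocated_throughput_kbytes_s
    request_s = FIRST_PING_S
    signalling_s = (start_with_attach * 3 * OTHER_PING_S
                    + FIRST_PING_S + 2 * OTHER_PING_S)
    return (transfer_s + request_s
            + signalling_s / nb_trans_session
            + delayed_dl_tbf_timer_s)

# Hypothetical example: 100 kbytes at 12 kbytes/s, 2 transactions per session, attach at start.
print(f"{mean_trans_duration(100, 12, 2, start_with_attach=1):.1f} s")
```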

9.4 CPU load formula


Inputs for GPU load model
Number of subscribers per GPU = N_GPU_user (to be adjusted to reach desired CPU load)
Choose user profile (from a list) Single profile or mix can be chosen
PS paging per user at BH PS_paging_user_BH
Number of DL LLC-PDU at BH DL_PDU_user_BH
Number of UL LLC-PDU at BH UL_PDU_user_BH
Number of downlink TBF on (P)CCCH at BH DL_TBF_PC_user_BH
Number of uplink TBF on (P)CCCH at BH UL_TBF_PC_user_BH
Number of downlink TBF on PACCH DL_TBF_PA_user_BH
Number of uplink TBF on PACCH UL_TBF_PA_user_BH
Average DL kbytes per user at BH DL_kbytes_user_BH
Average UL kbytes per user at BH UL_kbytes_user_BH
Number of transactions at BH nb_trans_user_BH
Average transaction duration mean_trans_duration
Average number of resource reallocation at BH nb_rrr_user_BH
The mix is obtained as a weighted average of all profiles by a given mix distribution.

Rate per second for N_GPU_user CPU cost in ms per second


= DL_TBF_PC_user_BH * N_GPU_user / 3600 = rate_DL_TBF_PC * exec_DL_TBF_PC + MIN(rate_DL_TBF_PC, 10) *
RoundUp(nb_dl_tbf_pdch * nb_gch_pdch) * exec_GCH_alloc
= UL_TBF_PC_user_BH * N_GPU_user / 3600 = rate_UL_TBF_PC * exec_UL_TBF_PC + MIN(rate_UL_TBF_PC, 10) *
RoundUp(nb_dl_tbf_pdch * nb_gch_pdch) * exec_GCH_alloc
= DL_TBF_PA_user_BH * N_GPU_user / 3600 = rate_DL_TBF_PA * exec_DL_TBF_PA
= UL_TBF_PA_user_BH * N_GPU_user / 3600 = rate_UL_TBF_PA * exec_UL_TBF_PA
= DL_PDU_user_BH * N_GPU_user / 3600 = rate_DL_PDU * exec_LLC_DL
= UL_PDU_user_BH * N_GPU_user / 3600 = rate_UL_PDU * exec_LLC_UL
= PS_paging_user_BH * N_GPU_user / 3600 = rate_PS_paging * (exec_paging_P * MPDCH_ratio + exec_paging_C *
(1 - MPDCH_ratio))
= Gb_CS_paging = rate_CS_paging * (exec_paging_P * MPDCH_ratio + exec_paging_C *
(1 - MPDCH_ratio))
= nb_rrr_user_BH * N_GPU_user / 3600 = rate_rrr * exec_rrr + MIN(rate_rrr, 10) * RoundUp(nb_dl_tbf_pdch *
nb_gch_pdch) * exec_GCH_alloc
CPU overhead for B10 = 9.25%
Total PPC CPU load in % = Sum of all the above / 10 (10 ms/s = 1%)
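A minimal sketch of the PPC CPU load computation (only the simple rate-times-cost contributors are shown, the MIN/RoundUp GCH-allocation terms are omitted, the exec_* unit costs are placeholders rather than the measured values, and the 9.25% B10 overhead is applied multiplicatively, which is one possible reading of the table):

```python
B10_OVERHEAD = 0.0925   # CPU overhead for B10

def ppc_cpu_load_percent(n_gpu_user, per_user_bh, exec_cost_ms):
    """
    Total PPC CPU load in %: each busy-hour counter is turned into a rate per
    second, multiplied by its execution cost in ms, summed, divided by 10
    (10 ms/s = 1%), then increased by the B10 overhead.
    """
    total_ms_per_s = 0.0
    for proc, count_at_bh in per_user_bh.items():
        rate_per_s = count_at_bh * n_gpu_user / 3600
        total_ms_per_s += rate_per_s * exec_cost_ms[proc]
    return (total_ms_per_s / 10) * (1 + B10_OVERHEAD)

# Placeholder per-user BH counters and unit costs (ms), for illustration only.
load = ppc_cpu_load_percent(
    n_gpu_user=20000,
    per_user_bh={"dl_tbf_pc": 0.5, "ul_tbf_pc": 3.0, "dl_pdu": 80, "ul_pdu": 60},
    exec_cost_ms={"dl_tbf_pc": 2.0, "ul_tbf_pc": 2.0, "dl_pdu": 0.5, "ul_pdu": 0.5},
)
print(f"PPC CPU load: {load:.1f} %")
```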

PPC capacity
Number of subscribers per GPU = N_GPU_user

TBF establishment at target load = Sum of all TBF rates
Total of LLC-PDU / s (DL and UL) at target load = Sum of the DL and UL PDU rates
DL LLC throughput (kbit/s) = DL_kbytes_user_BH * N_GPU_user * 8 / 3600
Percentage of max GPU throughput = DL LLC throughput / (3100 * (1 - EGPRS_ratio) + 4100 * EGPRS_ratio)
UL LLC throughput (kbit/s) = UL_kbytes_user_BH * N_GPU_user * 8 / 3600
Percentage of max GPU throughput = UL LLC throughput / (3100 * (1 - EGPRS_ratio) + 4100 * EGPRS_ratio)

Max simultaneously connected MS


Number of transactions per second = N_GPU_user * nb_trans_user_BH / 3600
Number of simultaneous established = Number of transactions per second * mean_trans_duration
transactions

9.5 BSC processing capacity for PS services


The aim of this section is to verify that the capacity of the BSC is sufficient to handle the introduction of
GPRS traffic. It is foreseen that there is no capacity problem with the Mx BSC. Therefore this part has not
been updated to take into account Mx BSC measurements.

9.5.1 BSC board presentation


The BSC configuration and board usage are briefly explained in Section 2.2. It is important to note that the
load corresponding to the trunk device handler function, although computed separately, is distributed on
all the DTCs and has to be added to each DTC type. The objective for ensuring BSC performance with PS
services is that the PS functionality should not increase the BSC processor load by more than 5%25. When a
processor is used for both CS and PS services, the maximum CS load is reduced when GPRS traffic is
introduced. In the following, we consider that the CS traffic reduction corresponds to at least one Ater
mux per GPU (about 100 Erlang). The CS load reduction is proportional to the number of GPUs connected
to the BSC. The CPU cost for each procedure in the BSC is based on measurements from B8.

9.5.2 Main assumptions


The BSC processor load with GPRS is based on a number of assumptions, which are summarized hereafter
(they are recalled further when needed):
With one GPU per BSC, no reduction of CS load is considered. With more than one GPU, a
reduction of CS load proportional to the number of added GPUs is considered.
All computations related to the CS load reduction due to PS traffic are based on the Alcatel CS traffic
model defined in document [13], and on an average of 100 Erlang per Ater-Mux.
No PS traffic model is assumed, but the signalling load generated by one GPU is based on the
hypotheses described in Section 7.3, mainly:
The TBF establishment rate corresponds to half the maximum PPC ping rate value (see Section
4.3.4) at PPC nominal load (75%). This may be much higher than what is generated by real
traffic.
Thanks to the tuning on waiting queues in the MFS, the flow of transmission resource allocation
and de-allocation requests can be controlled on the BSCGP interface. The maximum rate, as
defined in Section 7.3, is assumed.

25
Objective carried out from B7.2.

9.5.3 DTC boards

The DTCs are used to route the traffic information and to terminate and process the signalling
information exchanged with the MSC (SS7) and the MFS (GSL). For CS and (E)GPRS traffic, 16 kbit/s
channels from 4 DTCs are sub-multiplexed to form an Ater mux interface.

9.5.3.1 GSL-DTC load


One DTC corresponds to one single GSL link. Hence the load on the DTC will depend on the GSL link load.
Using the same hypotheses as in Section 7.3, we consider the worst case in terms of signalling
load on the GSL for one GPU with 2 GSLs configured.

9.5.3.1.1 Processing cost model for one GPU


The following table indicates the processing cost of the GPRS elementary functions processed by the DTC-
GSL26, and the total processing cost at DTC level when the same hypotheses as for the GSL load estimation are
applied (see Section 7.3). This corresponds to a worst case for the GSL load.

For the GSL, the number of requested resources in transmission allocation and de-allocation requests has
a negligible influence.
Procedure                                Processing cost (ms)    Occurrences per second per GPU    Total cost (ms)
Paging 3.33 21 69.9
UL TBF establishment on CCCH 6.5 21 136.7
DL TBF establishment on CCCH 3.21 0 0
Radio allocation / de-allocation 11.0 10 110
Transmission allocation / de-allocation 38.6 10 386
Suspend and resume 12.2 11.4 139.1
Total DTC cost (ms) 841.7
DTC load if 2 GSL per GPU 42.1%

Table 77: CPU load of GSL DTC (1 GPU)

9.5.3.1.2 PS induced load for 1 to 6 GPU per BSS


As explained in Section 7.4, when more GPUs are considered, the part of the PS load which depends on the
total number of PS users in the BSS is distributed on more GPUs27. The corresponding processing load is
decreased for one GPU, and consequently for the DTC.
The CPU load on the GSL-DTC for up to 6 GPUs is estimated below.
Total CPU load per DTC =
CPU load (1 GPU case) for TBF establishment
+ CPU load (1 GPU case) for GCH allocation/de-allocation
+ (CPU load (1 GPU case) for paging, suspend/resume,
PDCH allocation/de-allocation) / Number of GPUs in BSS
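A minimal Python sketch of this decomposition, using the Table 77 per-GPU costs (fixed part: UL TBF establishment and transmission allocation/de-allocation; relative part: paging, radio allocation/de-allocation and suspend/resume); it reproduces the figures of Table 78 below:

```python
# Per-GPU GSL-DTC costs in ms/s, taken from Table 77.
FIXED_MS = 136.7 + 386.0            # TBF establishment + transmission alloc/de-alloc
RELATIVE_MS = 69.9 + 110.0 + 139.1  # paging + radio alloc/de-alloc + suspend/resume

for nb_gpu in range(1, 7):
    total_ms = FIXED_MS + RELATIVE_MS / nb_gpu   # load handled by the GSL-DTCs of one GPU
    dtc_load_pct = total_ms / 2 / 10             # shared by 2 GSL-DTCs; 10 ms/s = 1%
    print(f"{nb_gpu} GPU: {total_ms:.1f} ms, {dtc_load_pct:.1f} % per DTC with 2 GSL")
```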

Number of GPUs 1 2 3 4 5 6
Fixed load (TBFs allocation and de-allocation) 522.7 522.7 522.7 522.7 522.7 522.7
Relative load (paging, suspend/resume, RAE-4) 319.0 159.5 106.3 79.8 63.8 53.2
Total CPU load (ms) 841.7 682.2 629.1 602.5 586.5 575.9
CPU load if 2 GSL per DTC (%) 42.1% 34.1% 31.5% 30.1% 29.3% 28.8%

26
Values measured on the BSC.
27
Note that this is an artefact of our model, which considers that the number of GPUs in the BSS does not depend on the
proportion of PS users in a network. The CPU load for the one-GPU case tends to be over-estimated. The model could be refined for
future releases.

Table 78: CPU load on the GSL-DTC for up to 6 GPU (without Location services)

We see that the GSL-DTC load decreases with more GPU per BSC. But in this case the number of BSSAP-
DTC will be reduced in the BSC, and the load on the remaining BSSAP-DTC has to be checked. This is
described in Section 9.5.3.2.
Due to load sharing between the GSL it is possible to minimize the number of GSL-DTC in the BSC provided
that the DTC load does not exceed 60%.
There is no load concern for GSL DTC, as long as two GSLs per GPU are configured, but the impact on
BSSAP load has to be checked.

9.5.3.2 BSSAP-DTC load

9.5.3.2.1 Processing cost model for one GPU


The BSSAP-DTC handles simultaneously BSSAP (CS service), BSSLP (location services) and GPRSAP (PS
services). In this section we estimate the added load due to PS. The average load on BSSAP with CS
services only on a type 6 BSC, with Alcatel traffic model at maximum capacity, estimated with the BSC
load model is currently 34%. The following table indicates the processing cost of the GPRS elementary
functions (i.e. GPRSAP) processed by all the BSSAP-DTC28, for one GPU.

The worst case (as far as BSC and GSL load are concerned) defined in Section 7.3 is used. This cost will be
distributed over all available BSSAP-DTC.
Procedure                                Processing cost (ms)    Occurrences per second    Total cost (ms)
Paging 5.9 21 123.9
UL TBF establishment on CCCH 5.1 21 107.3
DL TBF establishment on CCCH 2.31 0 0
Radio allocation / de-allocation 13.2 10 132.0
Transmission allocation / de-allocation 37.8 10 378.0
Suspend and resume 13.1 11.4 149.3
Total cost (ms) 890.5

Table 79: GPRSAP load on BSSAP-DTC


The number of BSSAP-DTC depends on the BSC type. The following section gives the added load on BSSAP
for 1 to 6 GPU per BSC.

9.5.3.2.2 PS induced load for 1 to 6 GPU per BSS


In the case of several GPUs, the load induced by GPRS on the BSSAP DTC is increased. However, in this
case the maximum CS load in the BSC is reduced because of the radio and A-ter resources used by GPRS.
In the following, we assume that at least one Atermux is used per added GPU for GPRS. We consider the
following cases:
1) One Atermux and one GSL per GPU,
2) One Atermux and two GSL per GPU29,
3) Two Atermux and two GSL per GPU.
With a Type 6 BSC, we may have up to 6 GPU connected, with a maximum 44 BSSAP/GPRSAP DTC.
However the number of BSSAP/GPRSAP DTC depends on the number of configured GSL (see Section 2.2.1

28
Value measured on the BSC.
29
This case corresponds to a configuration, which is not allowed, but it provides an upper bound with a simplified model for the case
of a small proportion of Ater mux configured for PS so as to have a redundant GSL (for example, 1 Amux +1/12 Amux and 2 GSL
configured).

for configuration rules). In the table below we compute the added load on the BSSAP-DTC when the CS
load is reduced.
Each Ater-mux used for GPRS corresponds to about 100 Erlang lost for CS service. The BSSAP load
corresponding to one Atermux used for CS services for 100 Erlangs with Alcatel traffic model defined in
document [13] is around 717ms30.
Number of GPUs 3 4 5 6
Maximum number of BSSAP-DTC 44 44 44 44
Remaining BSSAP-DTC if 1 GSL per GPU 41 40 39 38
Remaining BSSAP-DTC if 2 GSL per GPU 38 36 34 32
Total GPRSAP DTC load (ms) 2 672 3 562 4 453 5 343
Removed CS load for 1 Amux per GPU 2 152 2 869 3 586 4 303
1) Added BSSAP-DTC load 1 Amux / 1 GSL per GPU 1.3% 1.7% 2.2% 2.7%
2) Added BSSAP-DTC load 1 Amux / 2 GSL per GPU 1.4% 1.9% 2.5% 3.2%
3) Added BSSAP-DTC load 2 Amux / 2 GSL per GPU -4.3% -6.0% -8.0% -10.2%

Table 80: Added load on BSSAP-DTC for up to 6 GPU on BSC type 6
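A minimal sketch of how the added BSSAP-DTC load of Table 80 is derived from the per-GPU GPRSAP cost of Table 79 (890.5 ms) and the CS load removed per Ater-mux (about 717 ms for 100 Erlang); the number of remaining BSSAP-DTCs is the one listed in Table 80 above:

```python
GPRSAP_MS_PER_GPU = 890.5    # worst-case GPRSAP cost per GPU (Table 79)
CS_MS_PER_ATERMUX = 717.0    # BSSAP cost of 100 Erlang of CS traffic on one Ater-mux

def added_bssap_load_pct(nb_gpu, nb_amux_per_gpu, remaining_bssap_dtc):
    """Added BSSAP-DTC load in %, spread over the remaining BSSAP-DTCs."""
    gprsap_ms = GPRSAP_MS_PER_GPU * nb_gpu
    removed_cs_ms = CS_MS_PER_ATERMUX * nb_amux_per_gpu * nb_gpu
    return (gprsap_ms - removed_cs_ms) / (remaining_bssap_dtc * 1000) * 100

# Case 2 of Table 80: 1 Ater-mux and 2 GSL per GPU, 3 GPU, 38 remaining BSSAP-DTCs.
print(f"{added_bssap_load_pct(3, 1, 38):.1f} %")   # about 1.4 %
```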


With one Atermux per GPU, the added load on BSSAP DTC does not exceed the targeted 5%, even if the
number of GPUs per BSC is equal to 6. With 2 Atermux or more per GPU, we expect the BSSAP load to be
reduced. Consequently there is no concern for CPU load on BSSAP DTC.

9.5.3.3 TCH-RM DTC load


The GPRS load on the TCH-RM processor is only due to radio resource allocation and de-allocation. TCH-
RM boards operate in redundant pairs. Hence the number of pairs is meaningful to compute the load.

9.5.3.3.1 Processing cost model for one GPU

The cost for one GPU is provided hereafter. With the introduction of the RAE-4 feature, every
RR_ALLOC_PERIOD * TCH_INFO_PERIOD, messages concerning radio allocation / de-allocation for PS traffic
are handled in the TCH-RM. Considering 100 cells and RR_ALLOC_PERIOD set to 2, this gives 10 messages per
second.
Procedure                                Processing cost (ms)    Occurrences per second    Total cost (ms)
Radio allocation / de-allocation 10.4 10 104
Total cost (ms) 104

Table 81: Added load on TCH-RM DTC for one GPU

A BSC may have from 2 to 6 TCH-RM pairs depending on the BSC type. The load due to PS services on
TCH-RM for BSC type 3 to 6 is provided in the table below.
BSC type 3 4 5 6
Number of TCH-RM pairs 4 4 6 6
TCH-RM DTC load 2.6% 2.6% 1.7% 1.7%

Table 82: Added load on TCH-RM DTC for one GPU depending on BSC type

9.5.3.3.2 PS induced load for 1 to 6 GPU per BSS

In this case, the load related to the diminution of the CS capacity must be subtracted. As in Section
9.5.3.2, we assume that at least one Atermux per GPU is dedicated for GPRS, and that the CS load to be
subtracted corresponds to at least 100 Erlang. For TCHRM, the CPU cost for 100 Erlang with Alcatel CS

30
Computed with BSC load model, not presented in this document.

traffic model (see document [13]) is about 104 ms. In the table below, we see that the load in TCH-RM is
reduced with the current BSCGP load hypothesis for a BSC type 6 (6 TCH-RM pairs).

Number of GPU in BSC 3 4 5 6


Removed CS load for 1 Amux per GPU 5.2% 6.9% 8.7% 10.4%
TCH-RM DTC load 5.2% 6.9% 8.7% 10.4%
Added TCH-RM load (1 Amux per GPU) 0.0% 0.0% 0.0% 0.0%
Added TCH-RM load (2 Amux per GPU) -5.2% -6.9% -8.7% -10.4%

Table 83: TCH-RM DTC load, multi-GPU case (type 6 BSC)


As far as TCHRM is concerned, the CPU load is not increased by PS services.

9.5.3.4 DTC: Trunk device handler load


The trunk device handler function (TRKDH) is active on all the DTC of the BSC. The TRKDH manages the
switching of CICs and GICs between Abis and Ater in the BSC. For PS services, only transmission allocation
and de-allocation requests are triggering the trunk device handler function. The resulting load will be
added to all DTC processors in the BSC (BSSAP, GSL, SS7). As a simplification, we consider that one Ater
link, connected to one DTC, will be totally dedicated either to PS service or CS service. Hence we
compute the TRKDH load for a DTC, which is either managing GIC or CIC. This enables us to see the worst
case, especially with EDGE.
The cost of one transmission allocation and de-allocation request for the trunk device handler strongly
depends on the number of requested Ater-nibbles. The cost of one allocation and one de-allocation for
N_GIC Ater nibbles is expressed by the following formula31 (in ms):
4.96 + N_GIC * 5.26

The average number of GICs per request will depend on the percentage of EGPRS MS.
To compute the cost we estimate an average number of GICs per transmission allocation requests
corresponding to:
4 PDCH requested per allocation,
30% of penetration rate for EGPRS MS,
Max_GPRS_CS and Max_EGPRS_MCS are set to respectively CS4 and MCS9.

This yields an average of 10 GICs per request. The corresponding load for the trunk device handler for
one GPU is computed below, with the hypothesis of a maximum of 10 transmission allocation and 10
transmission de-allocation requests per second per GPU (see Section 7.3 for justification).
Procedure                                Processing cost (ms)    Occurrences per second    Total cost (ms)
Transmission allocation / de-allocation 57.5 10 575
Total cost (ms) 575

Table 84: PS TRKDH CPU cost for one GPU (EGPRS penetration rate of 30%)
With our model, the load on one GPU is independent of the number of AterMux connected to this GPU. We
consider that the maximum rate for transmission allocation request on BSCGP interface can be reached
with only one Ater-mux per GPU. So connecting more AterMux per GPU implies distributing the
corresponding TRKDH load on more DTCs (4 DTCs per AterMux). This is quantified in the table below, where
the total load per GPU is divided by the number of DTCs (4 per AterMux).

The corresponding TRKDH load for CS services for one Atermux (100 Erlang) has to be deducted
(2.2% CPU load per DTC).
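A minimal sketch combining the per-request cost formula above with the hypothesis of 10 allocation and 10 de-allocation requests per second, distributed over the 4 DTCs of each Ater-mux, and with the 2.2% CS TRKDH load per DTC subtracted; it reproduces Table 85 below:

```python
REQUEST_PAIRS_PER_S = 10       # transmission allocation + de-allocation requests per second per GPU
AVG_GIC_PER_REQUEST = 10       # average GICs per request (4 PDCH, 30% EGPRS, CS4 / MCS9)
CS_TRKDH_LOAD_PCT = 2.2        # TRKDH load removed per DTC for 100 Erlang of CS traffic

cost_per_request_ms = 4.96 + AVG_GIC_PER_REQUEST * 5.26        # one allocation + one de-allocation
total_ms_per_gpu = cost_per_request_ms * REQUEST_PAIRS_PER_S   # about 575 ms/s (Table 84)

for nb_amux in range(1, 5):
    nb_dtc = 4 * nb_amux                           # 4 DTCs per Ater-mux
    ps_load_pct = total_ms_per_gpu / nb_dtc / 10   # 10 ms/s = 1%
    print(f"{nb_amux} Ater-mux: PS {ps_load_pct:.1f} %, "
          f"added {ps_load_pct - CS_TRKDH_LOAD_PCT:.1f} %")
```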

31
Measured on B7.2 BSC.

Number of AterMux per GPU 1 2 3 4
PS TRKDH load on one DTC (%) 14.4% 7.2% 4.8% 3.6%
CS TRKDH load to be removed per DTC 2.2% 2.2% 2.2% 2.2%
Added load for TRKDH on DTCs connected to GIC 12.2% 5.0% 2.6% 1.4%

Table 85: Added load on DTC for TRKDH for one GPU (EGPRS penetration rate of 30%)
The load added to all DTCs by the TRKDH function is relatively high, especially if we assume that the
maximum rate for transmission allocation requests can be reached with only one Ater-mux per GPU. This
may cause problems on already loaded DTCs.

9.5.3.5 SS7-DTC: impact of PS services


There is no direct impact of PS services on SS7 load. The load on each terminating SS7-DTC depends on
the load on the link. The removal of one SS7 link is needed only if a full Atermux is configured for PS.
Since there is one SS7 link per Atermux in the BSC (except for Type 6 where there are two additional
Atermux), there is no risk of overload for the remaining SS7 Links and processors.

9.5.3.6 Conclusion for DTC


The following tables summarize the load situation for the DTC on a type 6 BSC with 2 cases:
3 GPU per BSC,
6 GPU per BSC.

The additional trunk handler load with EDGE is added on all DTC. When EGPRS is not activated or has a
lower penetration, the added DTC load will be lower than estimated below. Also the more AterMux per
GPU are configured, the more the CPU load due to CS traffic will be reduced.
BSSAP TCH-RM SS7 GSL
Load with CS at max capacity (w/o TRKHDL) 35.3% 54.1% 43.2% 2.4%
PS TRKDH load (1 Amux per GPU, 30% EGPRS) 14.4% 14.4% 14.4% 14.4%
PS TRKDH load (2 Amux per GPU, 30% EGPRS) 7.2% 7.2% 7.2% 7.2%
Added PS load other functions (1 Amux per GPU) 2.6% 0.0% 0.0% 42.6%
Added PS load other functions (2 Amux per GPU) -2.9% -5.2% 0.0% 42.6%
Resulting load (1 Amux per GPU) 52.2% 68.5% 57.6% 59.4%
Resulting load (2 Amux per GPU) 39.6% 56.1% 50.4% 52.2%

Table 86: DTC load on a type 6 BSC with 3 GPU and EGPRS penetration rate of 30%
BSSAP TCH-RM SS7 GSL
Load with CS at max capacity (w/o TRKHDL) 35.3% 54.1% 43.2% 2.4%
PS TRKDH load (1 Amux per GPU, 30% EGPRS) 14.4% 14.4% 14.4% 14.4%
PS TRKDH load (2 Amux per GPU, 30% EGPRS) 7.2% 7.2% 7.2% 7.2%
Added PS load other functions (1 Amux per GPU) 5.5% 0.0% 0.0% 40.0%
Added PS load other functions (2 Amux per GPU) -6.9% -10.4% 0.0% 40.0%
Resulting load (1 Amux per GPU) 55.2% 68.5% 57.6% 56.8%
Resulting load (2 Amux per GPU) 35.6% 50.9% 50.4% 49.6%

Table 87: DTC load on a type 6 BSC with 6 GPU and EGPRS penetration rate of 30%
The DTC load is mainly increased by the trunk device handler function, which is distributed on all the
DTCs. However the DTC load is expected to remain below the targeted 60%, even with a high trunk device
handler load at higher EGPRS MS penetration rates.

9.5.4 TCU Board


The TCUs are used to route the traffic information and to terminate and process the signalling information
exchanged with the BTS.

9.5.4.1 CPU cost model for TCU

A TCU controls a maximum of 4 TRX. These TRX may belong to the same cell or several distinct cells. The
added load due to GPRS in the TCU is of different kinds:
Load on the CCCH and SDCCH: the induced load varies with the cell size. The CCCH load is due to
Paging and TBF establishment on the CCCH, when the MPDCH is not configured in the cell.
Paging: The PS paging load is very small compared to CS Paging. Hence, we will consider it as
negligible. The number of CS paging is not increased by Gs interface. So there is no added
Paging load for GPRS in the TCU compared to CS traffic.
TBF establishment on CCCH: it depends on the number of GPRS users in the cell, hence on the
cell size.
Suspend/resume: This load depends on the number of GPRS users in the cell. At TCU level, it
depends on the number of SDCCH mapped on the TCU (maximum 32 SDCCH). As a simplification,
we will consider the global load in one cell. This simplification tends to slightly overestimate
the TCU load for large cells, where the computed load may in fact be distributed on several
TCUs.
Transmission allocation/de-allocation: In this case we have to compare the load generated by
transmission allocation and de-allocation for one TRX with the load if this TRX is used for CS
services.

The cost of the above procedures in the TCU is given in the table below, with N_GIC = number of GIC per
allocation or de-allocation request.
Procedure CPU cost at TCU level in ms
Suspend / resume cost 3.27
UL TBF establishment cost 8.62
DL TBF establishment cost 2.68
Transmission allocation and de-allocation cost = 4.37 + N_GIC * 7.23

Table 88: Processing cost model in the TCU

9.5.4.2 TCU load (PS service) for CCCH and SDCCH channels
The TCU load for PS services on CCCH and SDCCH channels corresponds to TBF establishment and
Suspend/resume procedures. In this case the TCU load depends on the cell size and comes in addition to
the CS load. The number of GPRS users in the cell depends on:
The total number of CS users, which is obtained from the following relation:
Total CS users = Cell capacity in Erlang / mErlang per user
The GPRS penetration ratio.
The number of suspend and resume is one for each CS call corresponding to a GPRS-attached user, so it is
proportional to the number of CS call attempts per second (CAPS) in the cell. CAPS is determined by the
relation:
CAPS = Cell traffic in Erlang / call TCH holding time

The basic traffic hypotheses presented in Table 89 are applied to the cell.
Transaction per PS user at BH 1
Number of UL access on CCCH per transaction 5
Number of suspend/resume per CS call attempts for GPRS users 1
GPRS penetration rate among CS subscribers 30%
Average TCH holding time for CS calls (s) 50
CS traffic per subscriber (mErl) 15

Table 89: Traffic hypotheses applied to the cell

Note The number of transactions per PS user and the number of UL accesses on the CCCH per transaction
correspond to a pessimistic traffic hypothesis in terms of signalling load and are voluntarily independent of
the Alcatel GPRS traffic model, which is at this stage not validated by feedback from the operational
field.

From Table 89, we compute in Table 90 the PS cell load and TCU load depending on the cell size,
and on whether one or two CCCHs are mapped on the TCU.
Cell size 2 TRX 4 TRX 8 TRX 12 TRX 16 TRX
Erlangs per cell 9 20 46 75 100
CS users in cell 600 1333 3067 5000 6667
GPRS users in cell 180 400 920 1500 2000
CS call attempts per second (CAPS) 0.18 0.4 0.92 1.5 2
Suspend / resume per second 0.054 0.12 0.28 0.45 0.60
UL TBF/s per cell on CCCH 0.25 0.56 1.28 2.08 2.78
DL TBF/s per cell on CCCH 0.00 0.00 0.00 0.00 0.00
Suspend / resume cost in ms 0.18 0.39 0.90 1.47 1.96
UL TBF establishment cost in ms 2.16 4.79 11.01 17.96 23.94
DL TBF establishment cost in ms 0.00 0.00 0.00 0.00 0.00
Induced TCU load for 1 CCCH in ms 2.33 5.18 11.92 19.43 25.91
Added TCU load if 1 CCCH mapped [%] 0.2% 0.5% 1.2% 1.9% 2.6%
Induced TCU load for 2 CCCH in ms 4.49 9.97 22.93 37.39 49.85
Added TCU load if 2 CCCH mapped [%] 0.4% 1.0% 2.3% 3.7% 5.0%

Table 90: PS TCU load for CCCH and SDCCH
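As a cross-check, a minimal sketch reproducing the 4 TRX column of Table 90 from the hypotheses of Table 89 and the unit costs of Table 88 (the 2-CCCH figures of Table 90 suggest that only the CCCH TBF establishment cost is doubled when two CCCHs are mapped, which is the assumption made here):

```python
# Hypotheses from Table 89 and unit costs from Table 88.
CS_TRAFFIC_PER_USER_ERL = 15 / 1000
GPRS_PENETRATION = 0.30
TCH_HOLDING_TIME_S = 50
UL_ACCESS_PER_TRANSACTION = 5
TRANSACTIONS_PER_PS_USER_BH = 1
COST_SUSPEND_RESUME_MS = 3.27
COST_UL_TBF_MS = 8.62

erlang = 20                                                  # 4 TRX cell
cs_users = erlang / CS_TRAFFIC_PER_USER_ERL                  # 1333
gprs_users = cs_users * GPRS_PENETRATION                     # 400
caps = erlang / TCH_HOLDING_TIME_S                           # 0.4 CS call attempts per second
susp_res_per_s = caps * GPRS_PENETRATION                     # 0.12
ul_tbf_per_s = gprs_users * TRANSACTIONS_PER_PS_USER_BH * UL_ACCESS_PER_TRANSACTION / 3600

load_1ccch_ms = susp_res_per_s * COST_SUSPEND_RESUME_MS + ul_tbf_per_s * COST_UL_TBF_MS
load_2ccch_ms = susp_res_per_s * COST_SUSPEND_RESUME_MS + 2 * ul_tbf_per_s * COST_UL_TBF_MS
print(f"1 CCCH: {load_1ccch_ms:.2f} ms ({load_1ccch_ms / 10:.1f} %)")   # ~5.2 ms, 0.5 %
print(f"2 CCCH: {load_2ccch_ms:.2f} ms ({load_2ccch_ms / 10:.1f} %)")   # ~10.0 ms, 1.0 %
```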


The added load for CCCH and SDCCH on the TCU is below targeted 5%. 5% is reached only for very large
cells (16 TRX) having 2 CCCHs mapped.

9.5.4.3 TCU load for TCH


The TCU load for PS corresponds to transmission allocation and de-allocation function. In this case the PS
load comes instead of the CS load.
The CPU cost is evaluated for one TRX. We compute the relative load for PS services by adding the PS load
for one TRX and subtracting the CS load for one TRX.
We consider the worst case where each PS access on the CCCH will trigger one transmission allocation and
de-allocation request. The number of PS access on the CCCH (UL) for one TRX is taken from Table 90, for
a 4 TRX cell (2nd column), which corresponds to an average case. This yields:
Number of transmission allocation and de-allocation requests/s for one TRX: 0.56,
The cost for one FR-TRX used for CS, loaded at 5 Erlang per TRX (current average in a network), is
estimated with the BSC load model and the Alcatel CS traffic model as 51 ms.
Finally we take the following assumptions to calculate the average number of GCHs needed at TBF
establishment:
Average number of PDCH per transmission allocation and de-allocation request is 4,
Max_GPRS_CS and Max_EGPRS_MCS set to respectively CS4 and MCS9,
Different values for EGPRS penetration rate (no EGPRS, 30% of EGPRS MS, 50% of EGPRS MS, only
EGPRS MS).

The results are presented in the following table.


EGPRS penetration rate 0% 30% 50% 100%
Transmission allocation/de-allocation cost in ms 55.0 76.7 98.4 134.5
CPU load for n request / s per TRX in ms 30.5 42.6 54.6 74.7

Load removed for CS at 5 Erlang per TRX in ms 51.2 51.2 51.2 51.2
Resulting added load to TCU [%] -2.1% -0.9% 0.3% 2.4%

Table 91: PS TCU load for TCHs
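A minimal sketch of the two derived rows of Table 91, taking the per-request cost row as input (about 0.56 transmission allocation/de-allocation requests per second per TRX, 51.2 ms of CS load removed per TRX at 5 Erlang):

```python
REQUESTS_PER_S_PER_TRX = 400 * 5 / 3600   # UL accesses on CCCH for one TRX (Table 90, 4 TRX cell), ~0.56
CS_LOAD_REMOVED_MS = 51.2                 # cost of one FR-TRX used for CS at 5 Erlang

# Transmission allocation/de-allocation cost per request (ms) per EGPRS penetration rate (Table 91).
cost_per_request_ms = {"0%": 55.0, "30%": 76.7, "50%": 98.4, "100%": 134.5}

for egprs_rate, cost_ms in cost_per_request_ms.items():
    cpu_ms = REQUESTS_PER_S_PER_TRX * cost_ms            # CPU load for n requests/s per TRX
    added_pct = (cpu_ms - CS_LOAD_REMOVED_MS) / 10       # 10 ms/s = 1%
    print(f"EGPRS {egprs_rate}: {cpu_ms:.1f} ms, added TCU load {added_pct:+.1f} %")
```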


The added load for TCH on the TCU increases with the EGPRS penetration rate but it remains below
targeted 5%.

9.5.4.4 Conclusion for the TCU


The added TCU load with GPRS remains below the targeted 5% for SDCCH and CCCH signalling. The PS load
for the TCH on the TCU is reduced by the introduction of PS services, and even more with EDGE, even with
pessimistic hypotheses in terms of PS traffic. It should however be noted that for a large cell, the added
load for signalling and the reduction of load for the TCH may correspond to distinct TCU boards, so in a
few cases the TCU load may not decrease although it decreases on average.

9.5.5 Main figures for the BSC


The BSC can support up to 6 GPUs, with 2 GSL links per GPU. The maximum CS capacity of the BSC for a
given configuration is reduced when transmission resources on Ater and Abis are dedicated to PS. The
combination of diminished CS traffic and added PS signalling results in a small increase of the signalling
load on the BSC, mainly due to the trunk device handler function when the penetration rate of EGPRS MS
increases. The resulting average load on the BSC processors remains below the nominal load target.

END OF DOCUMENT
