VELIZY
BSS TRAFFIC MODEL AND CAPACITY: PS PART
RELEASE B10
Originators: F. Mercier
ABSTRACT
This document presents a simple traffic model for PS services. This model is then used to estimate the
capacity of the Alcatel BSS with GPRS and EDGE, in the scope of the B10 software release.
Approvals

REVIEW
Ed. 01 Proposal 01 2007/11/16 Review report: GSM-Wimax/R&D/SYT/FME/207271
Ed. 01 Proposal 02 2008/01/14 Review report: MOAD/GSM/R&D/SYT/FME/208062

HISTORY
Ed. 01 Proposal 01 2007/03/05
Based on the B9 version of the document (3BK 11203 0115 DSZZA Ed. 02).
It takes into account the following B10 feature: WRR (Weighted Round Robin).
Only Best-Effort traffic is considered (DTM, GBR and EDA are not considered in this proposal).
Ed. 01 Proposal 02 2007/11/16
Compared to Edition 1 Proposal 1, the following impacts are taken into account:
Impact of the feature Gb over IP at DSP and PPC level.
Impact of the feature DTM on the GSL link. The impact of this feature on the PPC is not studied.
Ed. 01 Released 2008/01/14
Compared to Edition 1 Proposal 2, the following impacts are taken into account:
Comparison of MR2 performance with Gb over FR and Gb over IP (iso software), for legacy only.
Addition of a RA Update profile, which takes into account GPRS-attached MSs without on-going PS data traffic.
For the GP case, reduction of the number of GCHs per E1 link.
9 ANNEX................................................................................................................... 95
9.1 Message routes through BSS .................................................................................. 95
9.2 Protocol Layers and message sizes ......................................................................... 96
9.3 Traffic model formula ......................................................................................... 97
9.4 CPU load formula .............................................................................................. 100
9.5 BSC processing capacity for PS services .................................................................. 101
9.5.1 BSC board presentation ................................................................................ 101
9.5.2 Main assumptions ....................................................................................... 101
9.5.3 DTC boards............................................................................................... 102
9.5.4 TCU Board................................................................................................ 106
9.5.5 Main figures for the BSC ............................................................................... 109
Table 35: Maximum number of PDCHs versus EGPRS penetration rate ............................................47
Table 36: DL LLC average data length for WEB if N201 = 576 bytes ...............................................54
Table 37: DL LLC average data length for WEB if N201 = 800 bytes ...............................................54
Table 38: DL LLC average data length for WEB if N201 = 1500 bytes..............................................54
Table 39: Examples of MMS information type .........................................................................55
Table 40: Signalling messages at start of session .....................................................................57
Table 41: Session description for all data profiles....................................................................58
Table 42: Transaction description at GPU / GP level with GPRS mobiles (MS always attached) ..............59
Table 43: Modified transaction parameters with EDGE mobiles ....................................................59
Table 44: User behavior at busy hour per profile (GPRS case)......................................................60
Table 45: Modified parameters for user behavior for EDGE mobiles...............................................60
Table 46: Alcatel mix of profile for BSS ................................................................................61
Table 47: Average user behavior at Busy Hour ........................................................................61
Table 48: PPC capacity at 75% CPU load with GPU2 for GPRS (B10 MR1 GboFR)...............................65
Table 49: PPC capacity at 75% CPU load with GPU2 for GPRS (B10 MR2 GboFR)...............................65
Table 50: PPC capacity at 75% CPU load with GPU2 for GPRS (B10 MR2 GboIP) ...............................66
Table 51: PPC capacity at 75% CPU load with GPU2 for EGPRS (B10 MR1 GboFR) .............................66
Table 52: PPC capacity at 75% CPU load with GPU2 for EGPRS (B10 MR2 GboFR) .............................67
Table 53: PPC capacity at 75% CPU load with GPU2 for EGPRS (B10 MR2 GboIP) ..............................67
Table 54: PPC capacity at 75% CPU load with GPU3 for GPRS (B10 MR1 GboFR)...............................72
Table 55: PPC capacity at 75% CPU load with GPU3 for GPRS (B10 MR2 GboFR)...............................73
Table 56: PPC capacity at 75% CPU load with GPU3 for GPRS (B10 MR2 GboIP) ...............................73
Table 57: PPC capacity at 75% CPU load with GPU3 for EGPRS (B10 MR1 GboFR) .............................74
Table 58: PPC capacity at 75% CPU load with GPU3 for EGPRS (B10 MR2 GboFR) .............................74
Table 59: PPC capacity at 75% CPU load with GPU3 for EGPRS (B10 MR2 GboIP) ..............................74
Table 60: PPC capacity at 75% CPU load with GP for GPRS (B10 MR1 GboFR) ..................................79
Table 61: PPC capacity at 75% CPU load with GP for GPRS (B10 MR2 GboIP)...................................79
Table 62: PPC capacity at 75% CPU load with GP for EGPRS (B10 MR1 GboFR) ................................80
Table 63: PPC capacity at 75% CPU load with GP for EGPRS (B10 MR2 GboIP) .................................80
Table 64: BSCGP hypothesis and LAPD overhead......................................................................87
Table 65: Size of messages between MFS and BSC....................................................................87
Table 66: Number of BSCGP messages between MFS and BSC ......................................................88
Table 67: Assumptions for BSC G2 and BSC Evolution ................................................................88
Table 68: Occurrences per second for the available BSC / MFS hardware........................................88
Table 70: Number of messages per second exchanged on GSL link for GPU / GP ...............................89
Table 71: Data load on the GSL for up to 6 GPU (BSC G2 and MFS GPU2) ........................................90
Table 72: Data load on the GSL for up to 6 GPU (BSC G2 and MFS GPU3) ........................................90
Table 73: Data load on the GSL for up to 6 GPU (BSC G2 and MFS GP)............................................90
Table 74: Data load on the GSL for up to 6 GPU (BSC Evolution and MFS GPU2) ................................90
Table 75: Data load on the GSL for up to 6 GPU (BSC Evolution and MFS GPU3) ................................91
Table 76: Data load on the GSL for up to 6 GPU (BSC Evolution and MFS GP)....................................91
Table 77: CPU load of GSL DTC (1 GPU) .............................................................................. 102
Table 78: Data load on the GSL for up to 6 GPU (without Location services) .................................. 103
Table 79: GPRSAP load on BSSAP-DTC ................................................................................ 103
Table 80: Added load on BSSAP-DTC for up to 6 GPU on BSC type 6............................................. 104
Table 81: Added load on TCH-RM DTC for one GPU............................................................... 104
Table 82: Added load on TCH-RM DTC for one GPU depending on BSC type.................................. 104
Table 83: TCH-RM DTC load, multi-GPU case (type 6 BSC) ...................................................... 105
Table 84: PS TRKDH CPU cost for one GPU (EGPRS penetration rate of 30%) .................................. 105
Table 85: Added load on DTC for TRKDH for one GPU (EGPRS penetration rate of 30%) ..................... 106
Table 86: DTC load on a type 6 BSC with 3 GPU and EGPRS penetration rate of 30%......................... 106
Table 87: DTC load on a type 6 BSC with 6 GPU and EGPRS penetration rate of 30%......................... 106
Table 88: Cost processing model in TCU ............................................................................. 107
Table 89: Traffic hypotheses applied to the cell ................................................................... 107
Table 90: PS TCU load for CCCH and SDCCH ......................................................................... 108
Table 91: PS TCU load for TCHs........................................................................................ 109
REFERENCED DOCUMENTS
Alcatel references
[1] 3BK 11203 0443 DSZZA FBS GPRS Radio Interface RRM Sub-layer (PRH)
[2] 3BK 11202 0441 DSZZA FBS GPRS Radio Interface RRM Sub-layer (PCC)
[3] 3BK 11202 0458 DSZZA FBS GPRS Gb interface BSSGP Layer
[4] 3BK 11202 0460 DSZZA FBS GPRS MFS-BSC interface BSCGP Layer
[5] 3BK 11202 0450 DSZZA FBS GPRS Radio Interface RLC Layer
[6] 3BK 11202 0462 DSZZA FBS GPRS Radio Interface MAC Layer
[7] 3BK 11202 0467 DSZZA FBS GPRS MFS/BTS Interface M-EGCH stack
[8] 3BK 11203 0134 DSZZA BSS telecom parameters
[9] 3BK 11202 0445 DSZZA Resource Allocation and Management
[10] 3BK 11203 0132 DSZZA LCS and (E)GPRS telecom presentation
[11] 3BK 11202 0428 DSZZA GPU overload control and CPU power budget
[12] 3BK 11203 0115 DSZZA BSS Traffic Model and Capacity: PS part B9
[13] 3BK 11203 0142 DSZZA BSS telecom performance objectives, traffic model and
capacity: CS part
[14] 3BK 11203 0124 DSZZA Alcatel Radio System Architecture
[15] 3BK 11203 0128 DSZZA Transmission Functional Specification
3GPP references
[16] 3GPP TS 23.060 GPRS Service description Stage 2
[17] 3GPP TS 03.64 GPRS Overall description of the GPRS radio interface Stage 2
[18] 3GPP TS 04.60 Radio Link Control / Medium Access Control (RLC/MAC) protocol
[19] 3GPP TS 04.64 LLC Layer Specification
[20] 3GPP TS 08.14 BSS SGSN interface; Gb interface Layer 1
[21] 3GPP TS 24.008 Mobile radio interface layer 3 specification, Core Network
Protocols Stage 3
[22] 3GPP TS 08.18 BSS SGSN interface; BSS GPRS Protocol (BSSGP)
[23] 3GPP TS 08.16 BSS SGSN Network Service
RELATED DOCUMENTS
[24] IEEE P802.20 TM/PD: Traffic Model for IEEE 802.20 MBWA System Simulations
[25] http://comnets.rwth-aachen.de/~pst: A WAP Traffic Model and its appliance for the Performance analysis of WAP over GPRS. RWTH Aachen University of Technology, Germany
[26] Report No. 261, June 2000: Source Traffic Modeling of Wireless Applications. D. Stahele, K., Universität Würzburg
[27] Proceedings of 16th ITC, Edinburgh, June 1999: A page-oriented WWW traffic model for wireless system simulations. A. Reyes-Lecuona, E. Gonzalez-Parada, E. Casilari, J.C. Casal
This document is intended for readers who already have a good understanding of PS services
in previous releases. General information on this subject can be found in document [10].
This document is Alcatel internal and shall not be shown to customers.
Note In the following, the term legacy refers to the MFS based on the G2 platform.
1.2 Definitions
2 BSS CONFIGURATION
[Figure: BSS configuration. BTSs connect via the MFS to the SGSN (Gb interface) and to the PLMN / data network. Legend: interfaces carrying normal GSM and/or GPRS MFS-BTS traffic/signalling; interfaces carrying normal GSM and/or Gb traffic/signalling; normal GSM traffic/signalling is carried transparently through the MFS.]
[Figure: MFS architecture. GPU boards terminate the 2048 kbit/s PCM interfaces and are interconnected by an internal 100 Mbit/s Ethernet hub; the O&M servers are reached through an external Ethernet interface.]
[Figure: GPU board. Sixteen DSPs (#1 to #16) and an HDLC controller terminate the 2 Mbit/s PCM30 (G.703/G.704) interfaces; the OBC (PPC) sits on the PCI bus together with the clock, the RS-232 interface and the 100 Mbit/s Ethernet interfaces (ICL, ISL).]
The numbers of links between the DSPs and the internal switch, and between the HDLC controller and the
internal switch, are given in 2 Mbit/s PCM equivalents. They do not represent the actual
physical links, but they give the maximum physical connectivity:
There are 4 x 2 Mbit/s PCM links between one DSP and the internal switch,
There are 16 x 2 Mbit/s PCM links between the HDLC controller and the internal switch.
[Figure: GPU internal switch connectivity between the DSPs (16 x 8 Mbit/s ports), the HDLC controller (16 x 2 Mbit/s ports), the PPC and the PCI bus (see note below).]
Note This physical connection limit is lowered taking into account the processing power limits at DSP
and PPC sides (see Sections 4.2 and 4.4 for limitation due to software).
On each PCM, the timeslots to be processed can be selected using the switch and in addition, any timeslot
on one external port can be transparently connected to any timeslot on its own or any other external
port.
[Figure: Mx MFS architecture. External E1 links terminate on the GP boards (GP 1 to GP N, nE1oE); the shelf also contains protecting and working OMCP boards and the SSW switch.]
2.1.3.2 GP board
As shown in the figure below, the GP board re-uses the GPU core, complemented by the nE1oE function and a
local Gigabit Ethernet switch.
[Figure: GP board. The GPU core (control processor, HDLC controller and 4 x DSP) is connected through a local GbE switch to the nE1oE framer, which provides a TDM cross-connect (N x 64 kbit/s) towards 16 E1 links and the Gb Ethernet interface.]
1
Most signalling messages are transferred through the switch. A path needs to be established for each message sent.
Communication is also possible through the broadcast bus (used for A interface paging broadcast to the TCUs).
BSSAP and GPRSAP: handle Layer 3 GSM and GPRS signalling (BSCGP)
Additionally, the trunk handling function (switching between Abis and Ater channels) is active on all the
DTCs. The table below gives the number of boards of each type, and the number of Ater circuits, for each
BSC type.
Configurations (BSC type) 1 2 3 4 5 6
TCUs 8 32 48 72 88 112
DTCs 16 24 40 48 64 72
MTP SS7 4 6 10 12 16 16(1)
BSSAP/GPRSAP DTCs or GSL DTC 8 14 22 28 36 44(2)
TCHRM DTC pairs 2 2 4 4 6 6
TRXs FR connectable 32 128 192 288 352 448
Number of Ater-Mux (Ater sub multiplexing 4:1) 4 6 10 12 16 18
Total number of Ater circuits 470 710 1188 1428 1906 2146
Configuration Type 1 2 3 4 5
Capacity Nb TRX 200 400 600 800 1000
Nb Cell 200 400 500 500 500
Nb BTS 150 255 255 255 255
Nb of E1 Abis 96 96 176 176 176
Ater CS 10 20 30 40 48
Ater PS 6 12 18 24 28
Nb of VCE on CCP Nb TCU 50 100 150 200 250
Nb DTC CS 40 80 120 160 192
Nb DTC PS 24 48 72 96 112
Nb of VCE on OMCP Nb TCH-RM pairs 1 1 1 1 1
Nb SCPR pairs 1 1 1 1 1
Nb OCPR pairs 1 1 1 1 1
Nb TSC pairs 8 8 8 8 8
Nb boards ATCA Nb CCP 1 2 3 4 5
Nb spare CCP 1 1 1 1 1
Nb OMCP 2 2 2 2 2
Nb SSW 2 2 2 2 2
Nb TP GSM 2 2 2 2 2
Nb boards LIU Nb MUX 2 2 2 2 2
Nb LIU 8 8 14 15 16
Note The impact of this feature is not studied at PPC level, but its impact on the GSL link is studied in
this proposal.
2
This ratio would probably need to be revised in light of current analysis, but the object of this document is not to propose changes
to the hardware design.
The variables indicating each RLC block status for the ARQ mechanism are reserved statically, and must be
dimensioned according to the maximum window size and the maximum number of TBFs (including those in
delayed mode). Only one type of window variable is defined, based on the maximum WS for EGPRS, for all TBF
types (EGPRS and GPRS). This is constraining in terms of memory needs but improves the real-time
performance of the DSP. The following table gives the dimensioning parameters for the DSP for B10 legacy
and for B10 Mx.
Parameters Value for legacy Value for Mx
(1)
Maximum number of TBFs 210 / 240 960
Maximum number of PDCHs 120 480
Maximum number of TRX 60 240
Maximum number of GCHs 120 480
Proportion of EGPRS TBF 30% 30%
GPRS TBF window size value 64 64
Maximum EGPRS TBF window size value (for 4 TS capable MS) 512 512
Average WS filling for GPRS 50% 50%
Average WS filling for EGPRS 20% 20%
Proportion of active DL TBF 75% 75%
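As an illustration of the static ARQ dimensioning described above, the short Python sketch below (names hypothetical, values taken from the table) recomputes the number of ARQ window variables as the maximum number of TBFs multiplied by the maximum EGPRS window size; the results match the "ARQ window variables" counts in Table 5 and Table 6.

# Hypothetical sketch of the static ARQ window dimensioning rule: one
# status variable per window position, reserved for every TBF, using
# the maximum EGPRS window size for all TBF types (EGPRS and GPRS).

MAX_EGPRS_WS = 512  # maximum EGPRS TBF window size (4 TS capable MS)

def arq_window_variables(max_tbf):
    """Number of statically reserved ARQ status variables."""
    return max_tbf * MAX_EGPRS_WS

print(arq_window_variables(210))  # legacy: 107520 (cf. Table 5)
print(arq_window_variables(960))  # Mx: 491520 (cf. Table 6)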
Using the information provided by Table 3 and the above formula, we can evaluate the resources needed for
data and signalling. This is presented in Table 4.
The values presented in Table 5 are related to B10-MR1 legacy, with information provided by the PTU team.
Memory usage | Number of contexts | Context size (bytes) | Total size (bytes)
B10 code 791 986
Code hole 0
Code increase 323 854
Total code 1 115 840
Timers 60 616
Common variables 234 054
Trace & debug 240 000
TBF variables 210 716 150 360
ARQ window variables 107 520 4 430 080
TRX variables 60 2 611 187 992
MPDCH variables 1 3 080 3 080
PDCH variables 120 2 596 311 520
GCH variables 120 309 37 080
Miscellaneous 64 634
Total variables 1 719 416
RLC containers (data and signalling) 13 000 92 1 196 000
Total used (sum of shaded cells) 4 031 256
Total available 4 194 304
Unused memory 163 048
Note From Table 5, we can see that the remaining external memory for GPU2 at DSP level is very low.
Note For GPU3, the DSP external memory is about 8 Mbytes. This means that the unused memory is
around 4 Mbytes.
The DSP external memory is about 16 Mbytes for Mx. The goal of this section is to show that the
requirements given in Table 3 are compatible with the currently available memory.
The DSP external memory usage is obtained by multiplying each context size by the maximum number of
contexts needed (as defined in Table 3), then adding the total number of signalling and data resources
multiplied by the RLC container size (one size fits all), and adding the code size. Subtotals per type of context and
totals are highlighted with shaded cells. The values presented in Table 6 are related to B10-MR1 Mx, with
information provided by the PTU team.
Memory usage | Number of contexts | Context size (bytes) | Total size (bytes)
B10 code 791 986
Code hole 0
Code increase 108 110
Total code 900 096
Timers 181 008
Common variables 941 736
Trace & debug 244 156
TBF variables 960 720 691 200
ARQ window variables 491 520 4 1 966 080
TRX variables 240 2 450 588 000
MPDCH variables 1 3 080 3 080
PDCH variables 480 2 596 1 246 080
GCH variables 480 348 167 040
Miscellaneous 292 002
Total variables 6 320 382
RLC containers (data and signalling) 52 000 92 4 784 000
Total used (sum of shaded cells) 12 004 478
Total available 16 777 216
Unused memory 4 772 738
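As a cross-check of the budgeting rule just described, the following Python sketch (values copied from Table 6, structure assumed) rebuilds the B10-MR1 Mx totals: code, plus the sum of context counts multiplied by context sizes, plus the RLC containers.

# Reproduction of the Table 6 budget: total = code
# + sum(context count x context size) + containers x container size.

contexts = {            # (number of contexts, context size in bytes)
    "TBF":   (960, 720),
    "ARQ":   (491520, 4),
    "TRX":   (240, 2450),
    "MPDCH": (1, 3080),
    "PDCH":  (480, 2596),
    "GCH":   (480, 348),
}
fixed = 181008 + 941736 + 244156 + 292002   # timers, common, trace, misc
code = 900096
containers = 52000 * 92                     # RLC containers, 92 bytes each

variables = fixed + sum(n * size for n, size in contexts.values())
total = code + variables + containers
print(variables)                  # 6320382  (Total variables)
print(total)                      # 12004478 (Total used)
print(16 * 1024 * 1024 - total)   # 4772738  (Unused memory)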
DSP internal memory is much smaller than external memory (about 64 kbytes for GPU2) and is reserved
for functions that require real-time optimization. DSP internal memory usage is provided hereafter for
information (see Table 7).
DSP internal memory | Bytes | Comments
HPI | 10 152 | HPI descriptor and HPI circular buffer for DMA
GCH Common Buffer | 3 912 | CRC Z-sequence look-up table, McBSP DMA buffer
TRX | 4 052 | MAC SDL instance table and MAC temp variables
TBF (max 210 TBF) | 6 720 | RLC SDL instance table and TBF variables at SDL level
GCH optimization | 0 | Static look-up table and some temp buffers to improve GCH data operation
L2GCH Data Buffer | 21 408 | UL buffer moved to external memory; this buffer is for M-EGCH context and some single GCH contexts
Stack | 2 048 | DSP stack, increased in B10 for MAC process and DL re-ordering
Common Internal: SDL, DSP System | 8 568 | SDT cmicro kernel and TI DSP kernel
RLC optimization | 0 |
Total internal memory used | 56 860
Unused memory | 8 676
Note For Mx, the DSP internal memory is about 1 Mbyte. This means that the unused internal memory
for Mx is around 950 kbytes.
Then, for a given number of active PDCHs per TRX, it is easy to obtain the number of PDCHs that the DSP
can handle at the targeted DSP load.
3
One radio block contains one RLC block (GPRS and EGPRS up to MCS-6) or 2 RLC blocks (MCS-7, MCS-8 and MCS-9).
For each scenario, a certain number of signals or messages are exchanged between the different layers
(RLC, MAC, M-EGCH and L1GCH) and also with the PPC (HPI interface). For each signal, its cost in terms of
CPU is measured. This cost depends on a number of parameters, which can be the following
ones:
The number of PDCHs,
The number of TRXs,
The number of GCHs,
The PDU rate in DL and in UL,
The RLC load per PDCH in DL and in UL.
The PDU rate corresponds to the peak PDU rate per 20ms. It depends on the LLC-PDU size and the RLC
block size (i.e. the used coding scheme, (M)CS).
The CPU cost is also related to the polling rate (in DL) and the acknowledgement rate (in UL). These two
mechanisms are different for EGPRS and GPRS. In both cases, the sending of a PDAN or of a PUAN is
triggered by a number of RLC blocks received or transmitted but the number of RLC blocks taken into
account is different in GPRS and EGPRS:
For EGPRS, the polling rate and the acknowledgement rate are functions of the window size and of an
O&M parameter (EGPRS_DL_ACK_FACTOR for DL and EGPRS_UL_ACK_FACTOR for UL):
For DL, an EGPRS PDAN message is received every EGPRS_DL_ACK_PERIOD =
EGPRS_DL_ACK_FACTOR × WINDOW_SIZE transmitted RLC data blocks.
For UL, an EGPRS PUAN message is sent every EGPRS_UL_ACK_PERIOD = min(32,
EGPRS_UL_ACK_FACTOR × WINDOW_SIZE) received radio blocks.
For GPRS, the polling rate and the acknowledgement are O&M parameters (GPRS_DL_ACK_PERIOD
and GPRS_UL_ACK_PERIOD respectively for DL and UL) and represent directly the number of radio
blocks.
The parameters used for the polling and acknowledgement mechanism in the DSP capacity model are
presented in Table 8.
Parameters Value
GPRS_DL_ACK_PERIOD 12
GPRS_UL_ACK_PERIOD 16
EGPRS Window size (if one TS allocated to TBF) 192
EGPRS_DL_ACK_FACTOR 0.25
EGPRS_UL_ACK_FACTOR 0.25
Number of transmitted radio blocks necessary to request a PDAN (EGPRS) 48
Number of received radio blocks necessary to send a PUAN (EGPRS) 48
Choice of coding scheme (CS or MCS): it is possible to choose one coding scheme or a distribution of
CSs and/or MCSs. In the latter case, it is possible to choose a percentage of EGPRS traffic,
Choice of a distribution of LLC-PDU sizes,
RLC load per PDCH in DL and in UL,
O&M parameter DSP_Load_Thr1.
From the measurements provided by PTU, we have the CPU cost per signal for one PDCH and one TBF per
PDCH during 20 ms. It is then easy to evaluate the sum of CPU costs for all these signals during 20 ms as a
function of N_TRX, N_GCH and N_PDCH, and to find the number of TRXs (N_TRX) that allows reaching
DSP_Load_Thr1 of DSP capacity during 20 ms, knowing that:
N_PDCH = N_TRX × N_PDCH_per_TRX,
N_GCH = GCH_Needed(coding scheme) × N_PDCH_per_TRX × N_TRX.
The parameter GCH_Needed (Coding scheme) is given by Table 9 as a function of the coding scheme for
GPRS and EGPRS.
Coding scheme CS1 CS2 CS3 CS4
RLC data size (bytes) 20 30 36 50
GCH needed per PDCH 0.73 1 1.25 1.64
Coding scheme MCS1 MCS2 MCS3 MCS4 MCS5 MCS6 MCS7 MCS8 MCS9
RLC data size (bytes) 22 28 37 44 56 74 112 136 148
GCH needed per PDCH 0.89 1 1.33 1.5 1.86 2.36 3.49 4.14 4.49
Table 9: RLC data size and needed GCH for each coding scheme (GPRS and EGPRS)
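These two formulas can be transcribed directly; the Python sketch below (function name hypothetical) looks GCH_Needed up in Table 9 for the EGPRS coding schemes.

# N_PDCH = N_TRX x N_PDCH_per_TRX and
# N_GCH = GCH_Needed(coding scheme) x N_PDCH_per_TRX x N_TRX.

GCH_NEEDED = {"MCS1": 0.89, "MCS2": 1.0, "MCS3": 1.33, "MCS4": 1.5,
              "MCS5": 1.86, "MCS6": 2.36, "MCS7": 3.49, "MCS8": 4.14,
              "MCS9": 4.49}   # GCH needed per PDCH (Table 9, EGPRS)

def resources(n_trx, n_pdch_per_trx, mcs):
    n_pdch = n_trx * n_pdch_per_trx
    n_gch = GCH_NEEDED[mcs] * n_pdch
    return n_pdch, n_gch

print(resources(10, 3, "MCS6"))   # 10 TRXs, 3 PDCH/TRX -> (30, 70.8)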
Table 10: Maximum number of PDCHs per DSP and per GPU versus coding (legacy)
Note For MCS8 and MCS9, the limitation of the number of PDCHs per DSP is due to the fact that the
maximum number of GCHs is reached (120 GCHs per DSP), and not to the targeted CPU load being
reached.
For Mx, two cases are presented, one for Gb over Frame Relay (GboFR) and one for Gb over IP (GboIP),
assuming a configuration of 16 E1 links. In both cases (Table 11 and Table 12), the maximum number of
PDCHs per DSP and per GP is presented for each GPRS and EGPRS coding scheme, with the following
assumptions:
DSP_Load_Thr1 set to 85%,
3 PDCHs per TRX,
Average size of LLC-PDU of 500 bytes in DL and 100 bytes in UL,
RLC load of 100% both in DL and in UL.
With GboFR, the E1 links are shared between Ater and Gb. For a configuration with 16 E1s, we assume
that 13 E1 links are reserved for Ater and 3 are reserved for Gb. This leads to a maximum of 364 GCHs
available per DSP, assuming 112 GCHs per E1 (and not 480 GCHs, as expected with a factor of 4).
The results are presented in Table 11.
Coding scheme CS1 CS2 CS3 CS4
Maximum PDCHs per DSP 252 240 223 202
Maximum PDCHs per GP 1008 960 892 808
Coding scheme MCS1 MCS2 MCS3 MCS4 MCS5 MCS6 MCS7 MCS8 MCS9
Maximum PDCHs per DSP 228 220 210 199 183 154 104 87 81
Maximum PDCHs per GP 912 880 840 796 732 616 416 348 324
Table 11: Maximum number of PDCHs per DSP and per GP versus (M)CS for GboFR
Comparing the DSP performance in terms of number of PDCHs between B10 legacy and Mx, we note that
the objective of a factor of 4 is reached in most cases for the 13 + 3 E1 configuration. The factor of 4 is not
reached for the high EGPRS coding schemes (MCS7, MCS8 and MCS9). This is due to the limitation
in terms of GCHs and not to the limitation in terms of CPU load.
With GboIP, all the E1 links can be used for Ater. This leads to a maximum of 448 GCHs available per
DSP, assuming 112 GCHs per E1.
Coding scheme CS1 CS2 CS3 CS4
Maximum PDCHs per DSP 252 240 223 202
Maximum PDCHs per GP 1008 960 892 808
Coding scheme MCS1 MCS2 MCS3 MCS4 MCS5 MCS6 MCS7 MCS8 MCS9
Maximum PDCHs per DSP 228 220 210 199 183 168 128 108 99
Maximum PDCHs per GP 912 880 840 796 732 672 512 432 396
Table 12: Maximum number of PDCHs per DSP and per GP versus (M)CS for GboIP
In this case, we note that compared to legacy, the factor of 4 is respected for nearly every coding scheme
(except for the high EGPRS coding schemes). Compared to Mx with GboFR, we note an improvement in
the number of PDCHs that can be reached for the high EGPRS coding schemes, due to the higher number of GCHs
available.
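A minimal sketch of this GCH limitation, assuming the figures stated above (112 GCHs per E1, 4 DSPs per GP): for the high coding schemes, the PDCH count is capped by the available GCHs divided by the GCHs needed per PDCH, which reproduces the GCH-limited entries of Table 11 and Table 12.

# GboFR: 13 E1 for Ater -> 13 x 112 / 4 = 364 GCHs per DSP;
# GboIP: 16 E1 for Ater -> 16 x 112 / 4 = 448 GCHs per DSP.

GCH_PER_E1, N_DSP = 112, 4

def gch_limited_pdch(n_e1_ater, gch_per_pdch):
    gch_per_dsp = n_e1_ater * GCH_PER_E1 // N_DSP
    return int(gch_per_dsp / gch_per_pdch)

print(gch_limited_pdch(13, 4.49))  # MCS9, GboFR: 81  (Table 11)
print(gch_limited_pdch(16, 4.49))  # MCS9, GboIP: 99  (Table 12)
print(gch_limited_pdch(16, 3.49))  # MCS7, GboIP: 128 (Table 12)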
The PPC memory capacity is 128 Mbytes on the GPU2 and 256 Mbytes on the GPU3. The memory usage and
management are the same for both GPUs.
It is split into:
27 Mbytes reserved for the code, of which about 25 Mbytes are currently used (20.5 for PMU and 2.5 for LCU),
64 Mbytes of dynamic memory allocation,
37 Mbytes reserved for dynamic buffer allocation (e.g. temporary storage of LLC PDUs).
The following table shows that enough memory is available for the GPU2.
Memory usage | Size each (kbytes) | Number per GPU | Total size (kbytes)
TRX 25.1 448 11 265
PDCH (1) 2.3 3 584 8 071
TBF 3.4 1 440 4 945
MS 6.3 1 000 6 292
Cell 61.8 240 14 823
Routing Area 16.0 20 320
Miscellaneous 13 413
Total size variables 62 159
Table 13: PPC memory requirements for data and variables for GPU2
(1) The maximum number of PDCHs corresponds to the maximum number of TRXs (240) multiplied by the number
of PDCHs per TRX (8).
From these assumptions, we can deduce easily that enough memory is available for PPC with GP.
of complexity for TBF establishment procedures. Moreover, an UL TBF whose MS supports EDA is not
allocated directly in EDA mode. In a first step, the UL TBF is allocated in DA mode, and if it is proved
after a certain time that the TBF has an UL bias, it is reallocated in EDA mode. This means that one
allocation (in DA mode) and (at least) one reallocation (from DA mode to EDA mode) will be needed
for an UL TBF supporting EDA.
Gb over IP (GboIP): this feature impacts the PPC capacity, as the transfer of LLC PDUs on the Gb
interface is modified, with the introduction of new stacks. It therefore has an impact on the CPU
load for LLC PDU transfer.
Note The impact of the two features DTM and EDA at PPC level is not studied in this proposal of the
document. Only Best-effort traffic and DA mode allocation are considered in this document. The
measurements available at PPC level are the following ones:
For MFS legacy: MR1 with Gb over Frame Relay, and MR2 for Gb over Frame Relay and Gb
over IP.
For MFS Mx: MR1 with Gb over Frame Relay, and MR2 with Gb over IP (measurements for
MR2 and Gb over Frame Relay are not available).
Note For PPC capacity, we assume that a legacy MFS (GPU2 or GPU3) is associated with a BSC G2 and
that a Mx MFS (GP) is associated with a BSC Evolution. The other possibilities are not considered.
We consider that 75% of the processing power is available at nominal load. In this 75% load, we will
reserve some overhead for the following procedures, with the following hypothesis:
The cost introduced by PDCH allocation is now independent of the traffic model with the RAE-4 feature
and is therefore added to the CPU margin. The cost values associated with this feature are only
assumptions since, up to now, no measurements are available for B10. Moreover, the worst case is assumed
(one message per cell). Considering 264 cells per GPU2 / GPU3 (legacy case) and 500 cells per GP (Mx
case), and a periodicity of 10 seconds for the RAE-4 messages between BSC and MFS (RR Allocation
messages transmitted every TCH_INFO_PERIOD × RR_ALLOC_PERIOD, with TCH_INFO_PERIOD and
RR_ALLOC_PERIOD equal to 5 seconds and 2 respectively), the maximum rate is equal to 26.4
messages per second for legacy and 50 messages per second for Mx.
The cost of Suspend/Resume depends on the number of pagings and on the penetration rate of GPRS
MSs. The paging rate is equal to:
70 pagings per second for BSC G2 (the same value applies for MR1 and MR2),
108 pagings per second for BSC Evolution with B10 MR1,
121 pagings per second for BSC Evolution with B10 MR2.
For the Gb overflow control function, the maximum rate per second is limited to 10 due to the limitation
introduced by BSCGP flow control.
For the LCS (localization) hypothesis, see document [30].
The number of CS pagings depends on the presence or not of the Gs interface. Here, it is assumed
that there is no Gs interface, which means that CS paging has no impact on the overhead.
The overhead for full-intra-RA re-routing corresponds to TCP-based applications (the most common
cases).
The resulting CPU margins are given in Table 14 for GPU2 and GPU3 (same values for
MR1 and for MR2), and in Table 15 for GP (different values for MR1 and MR2 due to a different value of
the paging rate).
Table 14: CPU margin to be added in PPC for GPU2 and GPU3
Functions | Cost in ms | Maximum rate (MR1 / MR2) | CPU load (MR1 / MR2)
PDCH (de)allocation | 0.009 | 50 / 50 | 0.05% / 0.05%
Suspend / resume | 0.553 | 24 / 27 | 1.33% / 1.49%
Gb flow control | 0.163 | 10 / 10 | 0.16% / 0.16%
Cost of localization (A-GPS) | 1.25 | 8 / 9 | 1.00% / 0.68%
NC2 overhead for 100 MS | 0.05 | 104.17 / 104.17 | 0.52% / 0.52%
Cost of CS paging on CCCH | 0.03 | 0 / 0 | 0% / 0%
Overhead for LLC PDU rerouting | | | 0.5% / 0.5%
Total CPU margin | | | 3.55% / 3.85%
Table 15: CPU margin to be added in PPC for GP (MR1 and MR2)
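The margin computation behind Table 15 can be sketched as follows (Python; costs and maximum rates copied from the MR1 column, each function contributing cost × rate / 1000 of one CPU as described in the text).

# CPU margin (%) = sum over functions of cost_ms x rate_per_s / 10,
# plus the fixed 0.5% overhead for LLC PDU rerouting.

functions_mr1 = {               # (cost in ms, maximum rate per second)
    "PDCH (de)allocation":   (0.009, 50),
    "Suspend / resume":      (0.553, 24),
    "Gb flow control":       (0.163, 10),
    "Localization (A-GPS)":  (1.25, 8),
    "NC2 overhead (100 MS)": (0.05, 104.17),
    "CS paging on CCCH":     (0.03, 0),
}
margin = sum(cost * rate / 10 for cost, rate in functions_mr1.values())
margin += 0.5                   # overhead for LLC PDU rerouting
print(round(margin, 2))         # 3.56, vs 3.55% in Table 15 (rounding)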
Therefore, we will reserve:
For GPU2, 9.25% CPU load for the PPC margin; the remaining 65.75% is available for other
procedures at PPC nominal load.
For GPU3, 6.47% CPU load for the PPC margin; the remaining 68.53% is available for other
procedures at PPC nominal load.
For GP-MR1, 3.55% CPU load for the PPC margin; the remaining 71.45% is available for other
procedures at PPC nominal load.
For GP-MR2, 3.85% CPU load for the PPC margin; the remaining 71.15% is available for other
procedures at PPC nominal load.
Note The values for B10 GPU2 given in Table 14 correspond to B8 measurements with L2-cache
activated, as no measurements will be provided for B10 concerning this part. The GPU3 and GP
values for CPU load are deduced from GPU2 values by respectively applying a factor of 0.7 and of
0.25.
The following CPU cost for each procedure is considered to estimate the PPC capacity. The measurements
presented in the tables below correspond to PPC measurements for GPU2/GPU3 and for GP. Table 16 is
for B10 MR1 (GboFR), Table 17 for B10 MR2 (GboFR) and Table 18 for B10 MR2 (GboIP).
Procedures | Cost in ms (GPU2 / GPU3 / GP)
DL TBF establishment on CCCH 8.013 4.745 1.834
DL TBF establishment on PACCH 8.013 4.745 1.834
UL TBF establishment on CCCH 8.013 4.745 1.834
Table 16: CPU cost model for PPC for GPU2, GPU3 and GP for B10 MR1 (with GboFR)
Procedures | Cost in ms (GPU2 / GPU3)
DL TBF establishment on CCCH 9.463 5.318
DL TBF establishment on PACCH 9.463 5.318
UL TBF establishment on CCCH 9.463 5.318
UL TBF establishment on PACCH 9.463 5.318
DL LLC transfer processing cost 0.602 0.426
UL LLC transfer processing cost 0.29 0.199
CPU for one PS paging on CCCH 1.2 0.84
Radio resource reallocation 9.463 5.318
GCH allocation and connection 1.11 0.777
Table 17: CPU cost model for PPC for GPU2 and GPU3 for B10 MR2 (with GboFR)
Procedures | Cost in ms (GPU2 / GPU3 / GP)
DL TBF establishment on CCCH 9.626 6.036 1.808
DL TBF establishment on PACCH 9.626 6.036 1.808
UL TBF establishment on CCCH 9.626 6.036 1.808
UL TBF establishment on PACCH 9.626 6.036 1.808
DL LLC transfer processing cost 0.673 0.540 0.175
UL LLC transfer processing cost 0.297 0.227 0.069
CPU for one PS paging on CCCH 1.2 0.84 0.300
Radio resource reallocation 9.626 6.036 1.808
GCH allocation and connection 1.11 0.777 0.278
Table 18: CPU cost model for PPC for GPU2, GPU3 and GP for B10 MR2 (with GboIP)
We define the following notations:
TBF_dc: number of DL TBF/s established on CCCH.
TBF_uc: number of UL TBF/s established on CCCH.
TBF_da: number of DL TBF/s established on PACCH.
TBF_ua: number of UL TBF/s established on PACCH.
LLC_u: number of UL LLC frames transferred per s.
LLC_d: number of DL LLC frames transferred per s.
Paging_P: number of PS paging/s to be sent on CCCH.
RRR: number of Radio resource reallocation per s.
Note The values presented in Table 16, in Table 17 and in Table 18 have been derived from Load and
Stress tests. This explains why the cost of a TBF establishment is the same whatever the cases
(UL/DL or CCCH/PACCH).
8.01 × TBF_dc + 8.01 × (TBF_da + RRR) + 8.01 × TBF_uc + 8.01 × TBF_ua + 0.57 × LLC_d + 0.26 × LLC_u + 1.20 × Paging_P < 1000 × 65.75% (MR1 GboFR)
9.46 × TBF_dc + 9.46 × (TBF_da + RRR) + 9.46 × TBF_uc + 9.46 × TBF_ua + 0.60 × LLC_d + 0.29 × LLC_u + 1.20 × Paging_P < 1000 × 65.75% (MR2 GboFR)
9.63 × TBF_dc + 9.63 × (TBF_da + RRR) + 9.63 × TBF_uc + 9.63 × TBF_ua + 0.67 × LLC_d + 0.30 × LLC_u + 1.20 × Paging_P < 1000 × 65.75% (MR2 GboIP)
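For illustration, the first criterion can be evaluated as in the Python sketch below (GPU2 MR1 GboFR costs from the formula above; the traffic rates are purely illustrative).

# Left-hand side in ms of CPU per second; budget = 1000 x 65.75%.

COSTS = {"tbf": 8.01, "llc_d": 0.57, "llc_u": 0.26, "paging": 1.20}

def ppc_load_ms(tbf_dc, tbf_uc, tbf_da, tbf_ua, rrr,
                llc_d, llc_u, paging_p):
    return (COSTS["tbf"] * (tbf_dc + tbf_uc + tbf_da + tbf_ua + rrr)
            + COSTS["llc_d"] * llc_d + COSTS["llc_u"] * llc_u
            + COSTS["paging"] * paging_p)

load = ppc_load_ms(10, 10, 5, 5, 2, 500, 200, 20)  # illustrative rates
print(round(load, 2), load < 1000 * 0.6575)        # 617.32 True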
Maximum UL LLC rate: this will quantify the PPC data load capacity in uplink.
Estimates for the three figures above are given in the following sections.
Note As the ping rate, DL LLC rate and UL LLC rate computed below are based on
measurements from L&T tests, they can no longer be used as objectives for these L&T tests.
Replacing the variables by N_Ping in the different PPC load criteria (see Equation 1, Equation 2 and
Equation 3) yields the following eight formulae, three valid for MR1-GboFR, two valid for MR2-GboFR
and three valid for MR2-GboIP (one per available hardware):
(8.01 + 8.01 + 0.57 + 0.26) × N_Ping < 1000 × 65.75% .................... [GPU2 MR1 + GboFR]
(4.74 + 4.74 + 0.40 + 0.18) × N_Ping < 1000 × 68.53% .................... [GPU3 MR1 + GboFR]
(1.83 + 1.83 + 0.14 + 0.06) × N_Ping < 1000 × 71.45% ...................... [GP MR1 + GboFR]
(9.46 + 9.46 + 0.60 + 0.29) × N_Ping < 1000 × 65.75% .................... [GPU2 MR2 + GboFR]
(5.32 + 5.32 + 0.43 + 0.20) × N_Ping < 1000 × 68.53% .................... [GPU3 MR2 + GboFR]
(9.63 + 9.63 + 0.67 + 0.30) × N_Ping < 1000 × 65.75% .................... [GPU2 MR2 + GboIP]
(6.04 + 6.04 + 0.54 + 0.23) × N_Ping < 1000 × 68.53% .................... [GPU3 MR2 + GboIP]
(1.81 + 1.81 + 0.17 + 0.07) × N_Ping < 1000 × 71.15% ...................... [GP MR2 + GboIP]
From these equations, we can define the expected ping rate at nominal load (75%) with the CPU margin
and without the CPU margin (in this latter case, NC2 disabled and no CS service interaction). The obtained
values for GPU2, GPU3 and GP (B10 MR1 and MR2) are presented in Table 19.
Table 19: Maximum PPC Ping rate for GPU2/GPU3/GP (MR1 and MR2)
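Solving these formulae for N_Ping is immediate: N_Ping = 1000 × available load / sum of the per-ping costs. The Python sketch below does it for the three MR1 GboFR cases; Table 19 holds the reference values.

cases = {   # per-ping costs in ms, available load at nominal 75%
    "GPU2 MR1 GboFR": ([8.01, 8.01, 0.57, 0.26], 0.6575),
    "GPU3 MR1 GboFR": ([4.74, 4.74, 0.40, 0.18], 0.6853),
    "GP MR1 GboFR":   ([1.83, 1.83, 0.14, 0.06], 0.7145),
}
for name, (costs, load) in cases.items():
    print(name, round(1000 * load / sum(costs), 1))
# GPU2: 39.0, GPU3: 68.1, GP: 185.1 pings per second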
From the results presented in Table 19 and based on L&T measurements, we can deduce that:
For GPU2, a degradation of around 15% is observed between MR1 and MR2 due to MR2 features (in
both cases with GboFR). No degradation is observed for MR2 between GboFR and GboIP.
For GPU3, a degradation of around 10% is observed between MR1 and MR2 due to MR2 features (in
both cases with GboFR), and an additional degradation of around 13% is also observed with the
introduction of GboIP in MR2.
For GP, no degradation is observed between MR1 and MR2. GboIP and the MR2 features have no impact
on the Max Ping performance in this case.
From these equations, we can define the expected DL LLC rate at nominal load (75%) and at high load
(85%). The obtained values for GPU2, GPU3 and GP (B10 MR1 and MR2) are presented in Table 20.
Maximum PPC DL LLC Rate (DL LLC per second)
Assumptions MR1 GboFR MR2 GboFR MR2 GboIP
GPU2 GPU3 GP GPU2 GPU3 GPU2 GPU3 GP
Nominal Load (75%) 1 320 1 870 5 335 1 245 1 759 1 114 1 388 4 291
High Load (85%) 1 495 2 119 6 046 1 411 1 993 1 262 1 573 4 863
Table 20: Maximum PPC DL LLC rate for GPU2/GPU3/GP (MR1 and MR2)
From the results presented in Table 20 and based on L&T measurements, we can deduce that:
For GPU2, a degradation of around 6% is observed between MR1 and MR2 due to MR2 features (in
both cases with GboFR), and an additional degradation of around 11% is also observed with the
introduction of GboIP in MR2.
For GPU3, a degradation of around 6% is observed between MR1 and MR2 due to MR2 features (in
both cases with GboFR), and an additional degradation of around 20% is also observed with the
introduction of GboIP in MR2.
This yields the following eight formulae, three valid for MR1-GboFR, two valid for MR2-GboFR and three
valid for MR2-GboIP (one per available hardware):
0.26 × LLC_u < 1000 × (PPC load) ................................................ [GPU2 MR1 + GboFR]
0.18 × LLC_u < 1000 × (PPC load) ................................................ [GPU3 MR1 + GboFR]
0.06 × LLC_u < 1000 × (PPC load) .................................................. [GP MR1 + GboFR]
0.29 × LLC_u < 1000 × (PPC load) ................................................ [GPU2 MR2 + GboFR]
0.20 × LLC_u < 1000 × (PPC load) ................................................ [GPU3 MR2 + GboFR]
0.30 × LLC_u < 1000 × (PPC load) ................................................ [GPU2 MR2 + GboIP]
0.23 × LLC_u < 1000 × (PPC load) ................................................ [GPU3 MR2 + GboIP]
0.07 × LLC_u < 1000 × (PPC load) .................................................. [GP MR2 + GboIP]
From these equations, we can define the expected UL LLC rate at nominal load (75%) and at high load
(85%). The obtained values for GPU2, GPU3 and GP (B10 MR1 and MR2) are presented in Table 21.
Maximum PPC UL LLC Rate (UL LLC per second)
Assumptions MR1 GboFR MR2 GboFR MR2 GboIP
GPU2 GPU3 GP GPU2 GPU3 GPU2 GPU3 GP
Nominal Load (75%) 2 921 4 250 12 278 2 582 3 767 2 526 3 308 10 948
High Load (85%) 3 310 4 817 13 915 2 927 4 270 2 862 3 749 12 407
Table 21: Maximum PPC UL LLC rate for GPU2/GPU3/GP (MR1 and MR2)
From the results presented in Table 21 and based on L&T measurements, we can deduce that:
For GPU2, a degradation of around 12% is observed between MR1 and MR2 due to MR2 features (in
both cases with GboFR), and an additional degradation of around 3% is also observed with the
introduction of GboIP in MR2.
For GPU3, a degradation of around 12% is observed between MR1 and MR2 due to MR2 features (in
both cases with GboFR), and an additional degradation of around 12% is also observed with the
introduction of GboIP in MR2.
For GP, a degradation of around 11% is observed between MR1 with GboFR and MR2 with GboIP.
4.4.1 General
It is important for the operator that the GPU / GP can reach the throughput planned at radio level.
The GPU / GP throughput depends both on the PPC capacity and on the DSP capacity. We
have predicted the DSP capacity, as a maximum number of PDCHs, for a targeted load (given by the O&M
parameter DSP_Load_Thr1) from the DSP, because this is the only dimensioning figure that we communicate externally.
In the PPC, the limitation is expressed as a maximum number of transmitted PDUs, which in reality will be
lower, depending on the processing power used by TBF establishment. The throughput transferred through
the PPC in each direction is equal to the number of LLC PDUs transmitted, multiplied by the average data
load of the PDUs. Obviously, small PDUs will tend to decrease the maximum PPC throughput, and
TBFs containing a small number of PDUs will worsen the situation. Hence the PPC throughput is highly
dependent on the traffic model.
In the following sections we will compute the maximum GPU / GP throughput considering DSP limitation
and PPC limitation for B10 legacy (see Section 4.4.2) and for B10 Mx (see Section 4.4.3). It must be noted
that this is not the throughput as seen from the end user, where more protocol overhead must be taken
into account.
First we compute the expected throughput per PDCH, depending on the coding scheme (CS for GPRS or
MCS for EGPRS), and then we multiply by the maximum number of PDCH that the DSP can handle to obtain
the maximum GPU throughput. The maximum possible throughputs per PDCH, for EGPRS and GPRS,
depending on the coding scheme, are given in the table below.
Coding scheme CS1 CS2 CS3 CS4
Maximum throughput per PDCH (kbit/s) 8 12 14.4 20
Coding scheme MCS1 MCS2 MCS3 MCS4 MCS5 MCS6 MCS7 MCS8 MCS9
Maximum throughput per PDCH (kbit/s) 8.8 11.2 14.8 17.6 22.4 29.6 44.8 54.2 59.2
Table 23: Maximum GPU Throughput (GPRS and EGPRS, perfect propagation)
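These per-PDCH maxima follow from one RLC data block every 20 ms radio block, i.e. throughput (kbit/s) = RLC data size (bytes) × 8 / 20; a quick Python check with the Table 9 sizes:

RLC_SIZE = {"CS1": 20, "CS2": 30, "CS3": 36, "CS4": 50,
            "MCS5": 56, "MCS7": 112, "MCS9": 148}   # bytes, from Table 9

for cs, size in RLC_SIZE.items():
    print(cs, size * 8 / 20)   # e.g. CS1: 8.0, MCS9: 59.2 kbit/s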
The above maximum throughput does not correspond to realistic radio propagation conditions. In
practice, the achievable GPU throughput is limited by a lower average throughput per PDCH, as shown in the
table below.
Coding scheme CS1 CS2 CS3 CS4
Average throughput per PDCH (kbit/s) 8 11.9 14.1 15.6
Maximum PDCHs per DSP 61 60 55 49
Maximum GPU throughput (kbit/s) 1 952 2 866 3 109 3 050
Coding scheme MCS1 MCS2 MCS3 MCS4 MCS5 MCS6 MCS7 MCS8 MCS9
Average throughput per PDCH (kbit/s) 8.8 11.2 14.7 16.7 22.4 29.3 40.5 41.4 38.9
Maximum PDCHs per DSP 56 53 52 49 45 41 33 28 26
Maximum GPU throughput (kbit/s) 1 971 2 374 3 058 3 273 4 032 4 805 5 346 4 637 4 046
Table 24: Maximum GPU Throughput (GPRS and EGPRS, with radio propagation)
Considering that the Maximum Throughput per PDCH cannot be reached, we can set a reasonable target
for the maximum GPU Throughput above RLC to about:
EDGE: 4.0 Mbit/s,
GPRS: 3.1 Mbit/s.
However to conclude at GPU level we need to look at PPC (next sections).
Table 25: Maximum DL PPC Throughput depending on LLC PDU length (GPU2/GPU3)
The maximum UL LLC rate for the PPC is four times greater than in the downlink hence the maximum UL
throughput with the same average LLC PDU length is four times greater than in the downlink direction.
The LLC PDU average length will be highly dependent on the application. However, we already have the
following information: large N201 values are expected mainly for GPRS/EDGE devices such as
memory cards for laptop PCs or devices incorporated in PDAs, which will represent a small proportion of the
mobiles.
Hence, if we consider the following LLC PDU length distribution, we can deduce an average LLC PDU length.
The proportion of small PDUs is based on statistics found for the Internet.
LLC PDU length (bytes) 50 575 1520
Proportion 40% 50% 10%
Average length (bytes) 460
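The 460-byte figure is simply the weighted mean of this distribution, e.g.:

dist = {50: 0.40, 575: 0.50, 1520: 0.10}   # LLC PDU length -> proportion
print(sum(size * p for size, p in dist.items()))   # 459.5, rounded to 460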
If the average drops to around 200 bytes, the PPC maximum throughput is below 3.4 Mbit/s, as shown in
Table 25 (green column).
With:
N_DL_LLC_per_trans = Data load per transaction / Average_DL_LLC_PDU_length
From the above equations we can deduce the transaction rate at nominal load, and the DL PPC throughput
at nominal load, which is equal to:
Transaction_rate × N_DL_LLC_per_trans × Average_DL_LLC_PDU_length
In Table 27, the PPC throughput is computed with an average PDU length of 460 bytes (from Table 26). In
Table 28, the PPC throughput is computed with an average length of 250 bytes (see footnote 4). It must be noted
(although not reflected in our simple model) that the average PDU length will tend to decrease when the
data load per transaction decreases. This is because the control messages of the application, which are
short, will be more numerous relative to the user data than for a long file transfer.
Average data load per transaction (kbytes) 2 10 30 50 100
Number of DL LLC per transaction 4 22 65 109 217
MR1 GboFR
GPU2
Transaction rate at PPC nominal load (per s) 35.5 23.2 12.4 8.5 4.7
DL LLC rate at PPC nominal load (per s) 155 504 808 919 1 024
PPC throughput (kbit/s) 569 1 853 2 972 3 380 3 768
GPU3
Transaction rate at PPC nominal load (per s) 61.0 37.6 19.2 12.9 7.1
DL LLC rate at PPC nominal load (per s) 265 818 1 254 1 403 1 541
PPC throughput (kbit/s) 976 3 011 4 614 5 164 5 671
MR2 GboFR
GPU2
Transaction rate at PPC nominal load (per s) 30.5 20.5 11.3 7.8 4.4
DL LLC rate at PPC nominal load (per s) 133 446 737 847 954
PPC throughput (kbit/s) 488 1 643 2 711 3 116 3 509
GPU3
Transaction rate at PPC nominal load (per s) 54.9 34.4 17.8 12.0 6.6
DL LLC rate at PPC nominal load (per s) 239 748 1 162 1 307 1 441
PPC throughput (kbit/s) 878 2 754 4 278 4 810 5 305
MR2 GboIP
GPU2
Transaction rate at PPC nominal load (per s) 29.6 19.4 10.4 7.1 4.0
DL LLC rate at PPC nominal load (per s) 129 422 679 773 863
PPC throughput (kbit/s) 474 1 552 2 498 2 845 3 176
GPU3
Transaction rate at PPC nominal load (per s) 47.5 28.8 14.5 9.7 5.3
DL LLC rate at PPC nominal load (per s) 207 625 945 1 052 1 150
PPC throughput (kbit/s) 760 2 302 3 476 3 872 4 232
Table 27: GPU Throughput at PPC nominal load with LLC length of 460 bytes
Average data load per transaction (kbytes) 2 10 30 50 100
Number of DL LLC per transaction 8 40 120 200 400
MR1 GboFR
GPU2
Transaction rate at PPC nominal load (per s) 32.0 17.0 7.8 5.1 2.7
DL LLC rate at PPC nominal load (per s) 256 679 937 1 014 1 081
PPC throughput (kbit/s) 511 1 357 1 873 2 028 2 161
GPU3
Transaction rate at PPC nominal load (per s) 54.0 26.8 11.9 7.6 4.0
DL LLC rate at PPC nominal load (per s) 432 1 074 1 427 1 528 1 613
PPC throughput (kbit/s) 863 2 147 2 854 3 056 3 226
MR2 GboFR
GPU2
DL LLC rate at PPC nominal load (per s) 222 611 865 943 1 012
Table 28: GPU Throughput at PPC nominal load with LLC length of 250 bytes
4
The average DL PDU length will be lowered in case of low traffic per user; the higher proportion of GMM signalling
compared to the data load will then affect the average PDU length.
We see that in the best case (with a high data load per transaction), we can expect a PPC DL throughput at
nominal load of 3.8 Mbit/s (for GPU2) and of 5.7 Mbit/s (for GPU3). In the worst case (see footnote 5), the PPC DL
throughput will be around 0.5 Mbit/s.
In the uplink direction, the maximum PPC LLC rate is four times higher than in the downlink. The
maximum UL GPU throughput based on DSP constraints is limited to about 5.5 Mbit/s (EGPRS, with MCS7
and considering radio propagation). This throughput can be transmitted by the PPC (at 85% load) with an
average PDU length equal to 146 bytes (146 × 8 × 4 696 ≈ 5.5 Mbit/s). The expected uplink GPU
performance is satisfactory. In fact, an uplink rate of 1 712 LLC/s is enough to reach an uplink
throughput of more than 2 Mbit/s (with the same PDU length), which is satisfactory for the uplink
direction.
5
2 kbytes per transaction corresponds to the maximum WAP page.
Coding scheme MCS1 MCS2 MCS3 MCS4 MCS5 MCS6 MCS7 MCS8 MCS9
MR1 GboFR
Maximum PDCHs per DSP 228 220 210 199 183 154 104 87 81
Maximum DSP throughput (kbit/s) 2 006 2 464 3 108 3 502 4 099 4 558 4 659 4 733 4 795
Maximum GP throughput (kbit/s) 8 026 9 856 12 432 14 010 16 397 18 234 18 637 18 931 19 181
MR2 GboIP
Maximum PDCHs per DSP 228 220 210 199 183 168 128 108 99
Maximum DSP throughput (kbit/s) 2 006 2 464 3 108 3 502 4 099 4 973 5 734 5 875 5 861
Maximum GP throughput (kbit/s) 8 026 9 856 12 432 14 010 16 397 19 891 22 938 23 501 23 443
The above maximum GP throughput does not correspond to realistic radio propagation conditions. In
practice, the value for GP throughput can be limited with a lower average throughput as shown in the
table below.
Coding scheme CS1 CS2 CS3 CS4
Average throughput per PDCH (kbit/s) 8 11.9 14.1 15.6
Maximum PDCHs per DSP 252 240 223 202
Maximum GP throughput (kbit/s) 8 064 11 462 12 604 12 572
Coding scheme MCS1 MCS2 MCS3 MCS4 MCS5 MCS6 MCS7 MCS8 MCS9
Average throughput per PDCH (kbit/s) 8.8 11.2 14.7 16.7 22.4 29.3 40.5 41.4 38.9
MR1 GboFR
Maximum PDCHs per DSP 228 220 210 199 183 154 104 87 81
Maximum GP throughput (kbit/s) 8 026 9 856 12 348 13 293 16 397 18 049 16 848 14 407 12 604
MR2 GboIP
Maximum PDCHs per DSP 229 220 210 199 183 168 128 108 99
Maximum GP throughput (kbit/s) 8 061 9 856 12 348 13 293 16 397 19 690 20 736 17 885 15 404
Table 30: Maximum GP Throughput (GPRS and EGPRS, with radio propagation)
Considering that the maximum throughput per PDCH cannot be reached, we can set a reasonable target
for the maximum GP throughput above RLC of about:
EDGE:
14.3 Mbit/s for GboFR,
15.8 Mbit/s for GboIP,
GPRS: 12.2 Mbit/s.
However to conclude at GP level we need to look at PPC (see next sections).
We can then compute the maximum throughput that the PPC is able to transmit, with the (more than)
optimistic hypothesis of no TBF establishment, depending on the average LLC size.
LLC average data length (bytes) 100 200 400 600 800 1000 1500
Throughput above RLC (kbit/s), MR1 4 837 9 674 19 348 29 022 38 696 48 370 72 554
Throughput above RLC (kbit/s), MR2 3 890 7 781 15 562 23 343 31 123 38 904 58 356
Table 31: Maximum PPC throughput depending on LLC PDU length (GP)
The maximum UL LLC rate for the PPC is nearly three times greater than in the downlink hence the
maximum UL throughput with the same average LLC PDU length is around three times greater than in the
downlink direction.
With an average PDU length around 460 bytes (see Section 4.4.2.2 for an explanation of this value), the
PPC maximum throughput in DL is around 22.2 Mbit/s for MR1 and 17.9 Mbit/s for MR2. If the average
drops around 200 bytes, the PPC maximum throughput is only about 9.7 Mbit/s for MR1 and 7.8 Mbit/s for
MR2, as shown in Table 31 (green column).
To estimate the PPC nominal throughput, the method is the same as the one described in Section 4.4.2.3,
using the PPC load criterion for Mx (defined in Equation 3). This leads to:
Transaction_rate × (1.83 + 1.83 + 0.14 × N_DL_LLC_per_trans) = 714.5 ....... [GP MR1 + GboFR]
Transaction_rate × (1.81 + 1.81 + 0.17 × N_DL_LLC_per_trans) = 711.5 ....... [GP MR2 + GboIP]
From the above equations we can deduce the transaction rate at nominal load, and the DL PPC throughput
at nominal load.
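Solving these equations for the transaction rate, and deriving the DL throughput as rate × N_DL_LLC_per_trans × PDU length × 8, can be sketched as below (Python; the small differences from Table 32 come from the rounded coefficients).

def gp_nominal(n_llc, budget, tbf_cost, llc_cost, pdu_bytes):
    """Transaction rate (per s) and DL throughput (kbit/s) at nominal load."""
    rate = budget / (2 * tbf_cost + llc_cost * n_llc)
    return rate, rate * n_llc * pdu_bytes * 8 / 1000

# GP MR1 GboFR, 10 kbytes per transaction (22 LLC PDUs of 460 bytes):
rate, thr = gp_nominal(22, 714.5, 1.83, 0.14, 460)
print(round(rate, 1), round(thr))   # 106.0 trans/s, 8582 kbit/s
                                    # (Table 32: 106.3 and 8 501)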
In Table 32, the PPC throughput is computed with an average PDU length of 460 bytes (from Table 26). In
Table 33, the PPC throughput is computed with an average length of 250 bytes (see footnote 4). It must be noted
(although not reflected in our simple model) that the average PDU length will tend to decrease when the
data load per transaction decreases. This is because the control messages of the application, which are
short, will be more numerous relative to the user data than for a long file transfer.
Average data load per transaction (kbytes) 2 10 30 50 100
Number of DL LLC per transaction 4 22 65 109 217
MR1 GboFR
Transaction rate at PPC nominal load (per s) 167.0 106.3 55.7 37.7 20.9
DL LLC rate at PPC nominal load (per s) 726 2 310 3 630 4 098 4 538
PPC throughput (kbit/s) 2 672 8 501 13 359 15 082 16 698
MR2 GboIP
Transaction rate at PPC nominal load (per s) 162.6 96.0 47.4 31.5 17.1
DL LLC rate at PPC nominal load (per s) 707 2 086 3 091 3 420 3 717
PPC throughput (kbit/s) 2 602 7 676 11 373 12 586 13 679
Table 32: GP throughput at PPC nominal load with LLC length of 460 bytes
Average data load per transaction (kbytes) 2 10 30 50 100
Number of DL LLC per transaction 8 40 120 200 400
MR1 GboFR
Transaction rate at PPC nominal load (per s) 149.1 76.9 34.8 22.5 11.9
DL LLC rate at PPC nominal load (per s) 1 193 3 076 4 175 4 496 4 771
PPC throughput (kbit/s) 2 386 6 152 8 349 8 991 9 542
MR2 GboIP
Transaction rate at PPC nominal load (per s) 141.9 67.1 28.9 18.4 9.7
DL LLC rate at PPC nominal load (per s) 1 135 2 683 3 472 3 689 3 871
PPC throughput (kbit/s) 2 271 5 366 6 945 7 379 7 741
Table 33: GP throughput at PPC nominal load with LLC length of 250 bytes
We see that in the best case (with a high data load per transaction), we can expect a PPC DL throughput at
nominal load of 16.7 Mbit/s (for MR1). In the worst case (see footnote 5), the PPC DL throughput will be around 2.2
Mbit/s.
In the uplink direction, the maximum UL GP throughput based on DSP constraints is limited to about 19.3 Mbit/s (EGPRS, with MCS6
and considering radio propagation). This throughput can be transmitted by the PPC (at 85% load) with an
average PDU length equal to 240 bytes (240 × 8 × 10 055 ≈ 19.3 Mbit/s). The expected uplink GP
performance is satisfactory.
6
This average transaction is assumed to take into account GMM signalling.
Conclusion:
The above figures are absolute maxima. The detection of a PPC bottleneck by the operator
requires a significant amount of traffic, which is not expected at the beginning of B10. If the GPU / GP
throughput is limited by the PPC, some GPU / GP counters will enable a diagnosis.
Concerning DSP capacity, a value of 85% is recommended for the O&M parameter DSP_Load_Thr1. This
value allows the GPU / GP target capacity to be reached (no regression of B10 compared to B9) under most
traffic conditions. On the contrary, a value of 60% for this parameter is too stringent to reach this GPU /
GP target capacity under almost any traffic conditions.
The other assumptions are the same as the ones presented in Section 4.2.2.3.1 for B10 legacy and in
Section 4.2.2.3.2 for B10 Mx.
EGPRS penetration rate 0% 5% 30% 50% 80%
B10 GPU2 / GPU3
Maximum PDCHs per DSP 50 48 41 37 31
Maximum PDCHs per GPU 200 192 164 148 124
B10 GP with GboFR
Maximum PDCHs per DSP 206 200 153 124 96
Maximum PDCHs per GP 824 800 612 496 384
B10 GP with GboIP
Maximum PDCHs per DSP 206 200 171 152 119
Maximum PDCHs per GP 824 800 684 608 476
Ater capacity:
Maximum number of GCH connected due to processing capacity:
480 GCH per GPU for B10 legacy,
1456 GCH per GP for B10 Mx with Gb over Frame Relay, assuming 112 GCH per E1. In this case, 3 E1
are reserved for Gb, and 13 E1 for PS traffic.
1792 GCH per GP for B10 Mx with Gb over IP, assuming 112 GCH per E1.
GPU3 (so half the theoretical maximum for the GPU2 / GPU3) because of BSCGP load constraints, and
For B10 GPU2, with the currently estimated PPC performance, these maxima can be reached only with a
sufficiently large LLC PDU average size, around 500 bytes (see Section 4.4 for details). A throughput higher than 4
Mbit/s may be reachable depending on the MCS distribution and PDU size, but the target of 4 Mbit/s is already
considered very ambitious once the limitation due to radio propagation is introduced.
With B10 MR1 for Mx, the GP throughput is multiplied by a factor of 3.6 to reach nearly 14.5 Mbit/s. With
B10 MR2 for Mx, the GP throughput is multiplied by a factor of 4 to reach nearly 16 Mbit/s.
5 TRAFFIC MODEL
A transaction
7
A full statistical model would make it possible to determine the capacity for a given grade of service (maximum queuing time).
PDP context activation occurs at the beginning of each session. PDP context deactivation occurs at the
end of each session.
The attachment procedure can be triggered:
Either at the beginning of each session,
Or the MS is attached when switched on.
Additionally, mobility management (Routing Area update and cell update) must be taken into account.
8
It is assumed that the DL TBF is maintained as long as an uplink TBF is established taking into account the addition of
T_network_response_time, even if no data are sent on the downlink.
9
For the time being, we do not consider telemetry users, fleet management, and so on.
10
The value from document [27] is much higher, but the data were gathered in an environment with fixed telephony and very
active users (corporate and educational). So we prefer to take the same value as in document [28], which seems better
adapted to mobile PS services.
11
WEB-packet.
Otherwise, an UL TBF will be established on PACCH for every three TCP acknowledgements.
Note In the following, we will assume that 50% of the MSs support the extended UL mode feature.
We consider a distribution for DL WEB packets derived from document [27]; as this document was
published in 1999, we have assumed that the 552- and 576-byte IP-packet sizes corresponding to older versions
of TCP/IP are now replaced by 1500, so the frequency of the size 1500 is increased accordingly. The
downlink LLC-PDU average data length is not equal to the average IP-packet length, because of the
limiting effect of the GPRS N201 parameter. In the future, laptop computers with an incorporated
GPRS/EDGE device may have an N201 parameter equal to the maximum IP-packet size (1500 bytes), but the
EDGE-capable mobiles currently planned for testing have an N201 parameter around 800 bytes (see footnote 12). With a given
IP-packet distribution, we can deduce the downlink LLC-PDU data length distribution and the
corresponding average downlink LLC-PDU data length. Also, many non-EDGE mobiles have an N201
parameter equal to 500 or 576 bytes. We study the cases of N201 set to 576, 800 and 1500
bytes below.
IP-packet sizes | IP-packet distribution | LLC PDU per TCP packet | LLC-PDU length | LLC-PDU for 100 TCP segments | LLC PDU length distribution
1000 (average) | 20% | 2 | 500 | 40 | 17%
1500 | 55% | 3 | 500 | 165 | 72%
40 | 25% | 1 | 40 | 25 | 11%
Average: 1035 | 100% | | Average: 450 | |
Table 36: DL LLC average data length for WEB if N201 = 576 bytes
IP-packet sizes | IP-packet distribution | LLC PDU per TCP packet | LLC-PDU length | LLC-PDU for 100 TCP segments | LLC PDU length distribution
1000 (average) | 20% | 2 | 500 | 40 | 23%
1500 | 55% | 2 | 750 | 110 | 63%
40 | 25% | 1 | 40 | 25 | 14%
Average: 1035 | 100% | | Average: 591 | |
Table 37: DL LLC average data length for WEB if N201 = 800 bytes
IP-packet sizes | IP-packet distribution | LLC PDU per TCP packet | LLC-PDU length | LLC-PDU for 100 TCP segments | LLC PDU length distribution
1000 (average) | 20% | 1 | 1000 | 20 | 20%
1500 | 55% | 1 | 1500 | 55 | 55%
40 | 25% | 1 | 40 | 25 | 25%
Average: 1035 | 100% | | Average: 1035 | |
Table 38: DL LLC average data length for WEB if N201 = 1500 bytes
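The computation behind Tables 36 to 38 can be reproduced with a few lines of script. The sketch below is ours (it is not part of the associated Excel tool): under the stated assumptions, each IP packet is split into ceil(size / N201) LLC PDUs of equal data length, and the counts are accumulated for 100 IP packets.

import math

# IP-packet size distribution assumed for DL WEB traffic: (size in bytes, probability)
IP_PACKETS = [(1000, 0.20), (1500, 0.55), (40, 0.25)]

def llc_pdu_stats(n201, ip_packets=IP_PACKETS, n_packets=100):
    # Split each IP packet into ceil(size / N201) LLC PDUs and accumulate
    # (PDU data length, PDU count) pairs for n_packets IP packets.
    pdus = []
    for size, share in ip_packets:
        n_pdu = math.ceil(size / n201)        # LLC PDUs needed for one IP packet
        pdus.append((size / n_pdu, share * n_packets * n_pdu))
    total = sum(count for _, count in pdus)
    avg_len = sum(length * count for length, count in pdus) / total
    return total, avg_len

for n201 in (576, 800, 1500):
    total, avg_len = llc_pdu_stats(n201)
    print(f"N201={n201}: {total:.0f} LLC PDUs per 100 IP packets, average {avg_len:.0f} bytes")
# N201=576 -> 230 PDUs of 450 bytes on average; N201=800 -> 175 PDUs, 591 bytes (Tables 36 and 37)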
5.3.2 WAP
The WAP service is accessible via every GPRS mobile equipment and gives access to value-added services
such as:
Local information (for example, finding the nearest restaurant or movie theatre),
News or weather,
Mobile banking,
Messaging.
12 Philips: N201=766; Nokia: N201=806.
The WAP service is provided via a WAP gateway generally belonging to the PLMN operator. The average
size of a WAP page is currently estimated at 1 kbyte (see document [28]). An increase can be expected
in the coming years, corresponding to more attractive content as well as larger screens. Pages larger than 2
kbytes will be possible after migration to WAP 2 (see the MMS profile for information on WAP 2).
We exclude MMS services from WAP services, although MMS uses the WAP protocols. This is to make a
distinction between low data load services (initially WAP was designed for low data load) and higher load
services (MMS).
The data exchanges for MMS will depend on the kind of data exchanged. The following table gives some
examples.
Kind of information | Mean size [kbyte] according to [29]
Text [14] | 1 (not considered)
Contact photo, 80 x 100 pixels | 2
Compressed photo, 120 x 160 pixels | 10
High-resolution photo, 640 x 480 pixels | 30
Speech-memo, 60 seconds | 60
Video-clip, 20 seconds | 100
13 The Alcatel MMS server is planned to be deployed with WAP 2 from mid-2004 onwards.
14 Up to now, the SMS service still uses CS-SMS over SDCCH because most MS suppliers have not implemented the "revert to CS mode" for SMS when SMS is not available over GPRS. This is kept as an assumption for B9, but it remains to be checked in the long term.
A MMS download will be preceded by a notification sent to the MS by the WAP server. The notification is
sent using WAP/GPRS, which requires the sending of a PS Paging message. The MS can respond to this
notification later. So we consider two distinct DL TBF establishments for the MMS-DL.
Note As the periodicity of the control messages on the uplink is equal to 5 seconds, and therefore
greater than the T_max_extended_UL timer, the extended UL mode has no influence on this type of
traffic. This means that one UL TBF will be associated with each control message.
5.3.6 Signalling
This profile does not characterize a GPRS application, but summarizes the GMM activity for all other
profiles. It describes all the signalling activity that occurs during a session. We assume that this profile
does not depend on the application (all sessions start with one PDP context activation). This may also
need to be refined in the future, since some applications may require more than one simultaneous PDP
context. GMM transfers are characterized by TBFs reduced to one small LLC frame. We assume an
average LLC PDU length of 50 bytes for all signalling messages [17].
If the MS requires GPRS attachment for each GPRS session, all the signalling messages shown in Table 40
have to be taken into account for one session.
If the MS is always GPRS attached (class-B mobiles), then only PDP context activation and deactivation
are needed in most cases. In this case, the signalling flow generated by attachment can be considered
negligible [18] in a first approach.
15 The MMS size is expected to be limited to 50 kbytes by operators in a first stage.
16 This means that the associated Excel tool can be used to check the capacity with FTP download/upload by changing a few input parameter values.
17 This approximation is acceptable because we are more interested in the number of PDUs generated and TBF establishments than in the corresponding data load, which will be small compared to application data.
18 The MS attaches at switch-on. Since battery performance is now very good, this event becomes very rare compared to other signalling.
19 Case of class-B mobiles.
5.6.1 Computations
In the next sections, many values are computed from a reduced set of input parameters. The
computations are derived from the knowledge of the profiles presented in Section 5.3. All computations
are detailed in Section 9.3.
Note In the above table, the data load per session includes the protocol overhead due to Internet
protocols (TCP/IP, UDP/IP) and application protocols. Whether or not such overhead is taken into
account has to be checked if comparisons are made with other traffic models provided by
customers.
thanks to the delayed downlink TBF release feature, and neglecting TBF establishment caused by cell reselection.
All transactions are mobile originated. Hence the first TBF is always an uplink TBF establishment on
CCCH, and then there is a downlink TBF establishment on PACCH. Moreover, we assume that 50% of MS
support the extended UL mode feature, which allows the reduction of UL TBF establishments on PACCH
for acknowledgement purposes.
The number of LLC frames per transaction for a given data load depends on the value of the N201
parameter in the mobile. We will consider two main cases:
Most traffic is due to GPRS mobiles: in this case the most encountered value is N201 = 576.
Most traffic is due to EDGE mobiles: in this case the most encountered value is N201 = 800 [20]. In some
cases we will also consider N201 = 1500 (new generation of laptops with an incorporated radio
transmission device).
Transaction description (GPU / GP level) WEB WAP MMS-D MMS-U UDP-D RA Upt GMM
N201 parameter 576
Number of PDU per packet 2 2 3 3 3
Mean DL LLC PDU size (bytes) 460 510 510 50 510 50 50
Mean UL LLC PDU size (bytes) 50 50 50 510 50 50 50
Number of DL LLC PDU 109 2 98 20 588 1 2
Number of UL LLC PDU 50 2 33 59 11 2 2
Number of DL TBF on CCCH 0 0 0 0 0 0 0
Number of UL TBF on CCCH 1 1 2 1 1 1 0
Number of DL TBF on PACCH 1 1 2 1 1 1 0
Number of UL TBF on PACCH 9 1 7 1 11 1 2
DL kbytes per transaction 50.0 1.0 50.0 1.0 300.0 0.05 0.1
UL kbytes per transaction 2.5 0.1 1.7 30.0 0.5 0.1 0.1
Table 42: Transaction description at GPU / GP level with GPRS mobiles (MS always attached)
Transaction description (GPU / GP level) WEB WAP MMS-D MMS-U UDP-D
N201 parameter 800
Number of PDU per packet 2 2 2 2 2
Mean DL LLC PDU size (bytes) 601 510 760 50 760
Mean UL LLC PDU size (bytes) 50 50 50 760 50
Number of DL LLC PDU 83 2 66 20 395
Number of UL LLC PDU 50 2 33 39 5
Number of UL TBF on PACCH 9 1 7 1 5
Note The LLC header overhead (10 bytes) is added to each LLC PDU and taken into account in the UL
and DL kbytes per transaction.
Note For UDP-D, the number of UL TBF on PACCH depends on the TBF duration. To estimate the
duration of a TBF, we make the following assumptions:
3 TS allocated in DL,
2.3 kbyte/s per PDCH for GPRS,
7.1 kbyte/s per PDCH for EGPRS.
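As a small illustration of these assumptions, the sketch below (our own helper, not part of the traffic model tool) estimates the DL transfer duration for the 300-kbyte UDP-D transaction; the number of UL TBFs then follows from the 5-second periodicity of the uplink control messages.

def transfer_duration_s(kbytes, n_ts=3, kbytes_per_s_per_pdch=2.3):
    # Duration = transaction size / allocated throughput (3 TS allocated in DL)
    return kbytes / (n_ts * kbytes_per_s_per_pdch)

print(round(transfer_duration_s(300), 1))                             # GPRS: ~43.5 s
print(round(transfer_duration_s(300, kbytes_per_s_per_pdch=7.1), 1))  # EGPRS: ~14.1 s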
20 Corresponds to the size encountered up to now for test mobiles used by Alcatel (Philips: N201=760; Nokia: N201=806).
because they are counted with the transaction, which starts immediately after the signalling phase.
The number of LLC PDU for the main data flow depends on the value of N201 parameter, which is assumed
to be different for EDGE and GPRS (as shown in Section 5.6.3). Modified values for EDGE are provided in a
separate table.
User behavior at BH per profile WEB WAP MMS-D MMS-U UDP-D RA Upt
Number of sessions per user at BH 1.0 1.0 1.0 1.0 1.0 1.11
Number of transactions per user at BH 5 7 1 1 1 1
PS paging per user at BH 0 0 1 0 0 0
Number of DL LLC PDU at BH 547 17 101 23 591 1.11
Number of UL LLC PDU at BH 254 18 38 63 15 2.22
Number of DL TBF on CCCH at BH 0 0 0 0 0 0
Number of UL TBF on CCCH at BH 6 8 3 2 2 1.11
Number of DL TBF on PACCH at BH 6 8 3 2 2 1.11
Number of UL TBF on PACCH at BH 50 10 10 4 14 1.11
Number of Radio Resource Reallocation (see note) 1 1 1 1 1 0
Average DL kbytes at BH 250.2 7.2 50.2 1.2 300.2 0.06
Average UL kbytes at BH 12.7 0.9 1.9 30.2 0.7 0.11
Average DL PDU size (bytes) 457.7 425.0 495.8 50.0 507.6 50.0
Average UL PDU size (bytes) 50.0 50.0 50.0 479.2 50.0 50.0
Table 44: User behavior at busy hour per profile (GPRS case)
User behavior at BH per profile WEB WAP MMS-D MMS-U UDP-D
Number of DL LLC PDU at BH 419 17 69 23 398
Number of UL LLC PDU at BH 254 18 38 44 9
Number of DL TBF on PACCH at BH 6 8 3 2 2
Number of UL TBF on PACCH at BH 50 10 10 4 8
Average DL PDU size (bytes) 597.3 425.0 727.9 50.0 754.4
Average UL PDU size (bytes) 50.0 50.0 50.0 691.4 50.0
Table 45: Modified parameters for user behavior for EDGE mobiles
Note (About Radio Resource Reallocation for each profile) The minimum number of resource re-
allocations is set to one per session. Depending on the congestion level in the cells and on the
penetration of EDGE MS, more re-allocations can occur (T3 and T4 respectively). However, this is
difficult to quantify and is not taken into account at this stage.
21 One RAU per 54 minutes = 1.11 RAU at busy hour.
device),
GPU2 case:
In the following, we will consider the following figures for the maximum GPU throughput per service (see
Section 4.4.2.1):
GPRS: 3.1 Mbit/s for both directions,
EGPRS: 4 Mbit/s for both directions.
GPU3 case:
The same number of PDCHs is declared for GPU3 as for GPU2, even if the DSP could handle more PDCHs
(gain of 40%-50%) [22]. Hence the maximum throughput determined at DSP level is the same for GPU3 as
for GPU2.
6.2.1.2 B10 Mx
Compared to B10 legacy, the number of PDCH that can be handled is multiplied by a factor of 4. In the
following, we will consider the following figures for the maximum GP throughput per service (see Section
4.4.3.1):
GPRS: 12.2 Mbit/s for both directions,
14.3 Mbit/s for both directions for Gb over Frame Relay (B10 MR1),
22 This allows the same radio resource management for both GPUs, and lowers the capacity imbalance between DSP and PPC for GPU3, as shown further on.
6.2.2.2 B10 Mx
As for GPU in B10 legacy, the GP capacity can be limited either by the maximum GP throughput defined in
Section 6.2.1.2, or by the CPU load consumed by radio resource management and Gb protocol in the PPC.
The objective of the following sections is to determine the PPC capacity for each traffic profile and for
the Alcatel mix.
The method is described in Section 6.2.2.1 and uses the CPU cost of each basic procedure as defined
in Table 15 (CPU margin), in Table 16 for B10 MR1 (GboFR) and in Table 18 for B10 MR2 (GboIP) for the GP case.
23 Using the Excel goal-seek built-in tool.
/ GP is.
UL data load: the same is provided for the uplink data load as for the downlink data.
It is also interesting to know the number of mobiles simultaneously connected per GPU / GP. This
figure, however, depends highly on the throughput allocated to each MS and, to a lesser extent, on
the value of T_network_response_time (for delayed downlink TBF release), and should not be
considered as a load test passing criterion. We are interested in two extreme cases:
Maximum number of MS in the worst case (congestion, long delayed TBF release timer), to see if
it is compatible with the maximum number of TBF contexts per GPU / GP.
Effect of high-speed data service introduction, i.e. with maximum throughput per MS.
GSL signalling load: the GSL signalling load can be computed for all profiles, with the GSL link load
model defined in Section 7. We take the following hypotheses:
All TBF establishments on CCCH and radio resource re-allocations trigger a transmission
allocation procedure,
Half of the TBF establishments request the allocation of GCHs (Abis allocation and GCH
connection),
Suspend-resume procedures are taken into account, but LCS data load is not,
There is no Gs interface, i.e. CS paging is not taken into account.
Moreover, it is always interesting to be able to compare our traffic model (average user) and its
theoretical performance with what can be observed in the field or on a test platform. Therefore, the
evolution of three parameters as a function of the PPC CPU load is provided:
Number of TBF establishments per hour in DL and UL,
Number of LLC PDUs transmitted per hour in DL and UL,
DL PPC throughput expressed in kbit/s.
6.3 PPC capacity with all profiles and traffic mix for B10 legacy
Note In the different tables below, the number of subscribers per GPU / GP is determined for the
following cases. The user corresponds either to a homogeneous profile (WEB, WAP, MMS or
UDP) or to an average user. The average user corresponds to the mix of profiles defined in
Section 5.6.5. The numbers of users in the different columns must not be added: each is the
maximum number corresponding to its own hypothesis.
Note Compared to B9, B10 performance has decreased because we now assume that the feature
Extended UL TBF mode is not supported by all MS. The B9 assumption that all MS support this
feature was far too optimistic. The new assumption for B10 (50% of MS supporting this feature)
is more realistic, even if still optimistic.
For most GPRS only MS, the N201 parameter value is limited to 576 or 500 bytes. We assume N201 set to
576 bytes in this section.
Table 48, Table 49 and Table 50 present the results for B10 MR1 (GboFR) and B10 MR2 (GboFR and
GboIP) respectively.
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 2 629 9 155 14 441 4 734 81 215 9 268
TBF establishments / s at target load 45.3 67.0 48.6 23.5 75.2 53.0
Total LLC / s (UL and DL) at target load 585 89 451 797 75 362
Downlink data load
DL LLC / s 399.1 42.8 249.2 777.6 25.1 243.8
Average DL LLC PDU size (bytes) 457.7 425.0 412.9 507.6 50.0 456.3
DL throughput (kbit/s) 1 461.2 145.6 823.3 3 157.5 10.0 889.8
Percentage of max DL GPU throughput 47.1% 4.7% 26.6% 101.9% 0.3% 28.7%
Uplink data load
UL LLC / s 185.6 46.3 201.8 19.5 50.1 118.1
Average UL LLC PDU size (bytes) 50.0 50.0 319.0 50.0 50.0 197.4
UL throughput (kbit/s) 74.2 18.5 514.9 7.8 20.1 186.6
Percentage of max UL GPU throughput 2.4% 0.6% 16.6% 0.3% 0.6% 6.0%
Memory needed
Number of simultaneously connected MS 50 116 75 68 0 141
GSL link for one GSL at 64 kbit/s
MFS -> BSC 59.0% 88.5% 71.2% 56.0% 96.6% 74.6%
BSC -> MFS 50.4% 73.3% 59.1% 48.0% 79.6% 62.2%
Table 48: PPC capacity at 75% CPU load with GPU2 for GPRS (B10 MR1 GboFR)
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 2 323 7 907 12 677 4 319 69 433 8 121
TBF establishments / s at target load 40.0 57.8 42.6 21.5 64.3 46.4
Total LLC / s (UL and DL) at target load 517 77 396 727 64 317
Downlink data load
DL LLC / s 352.7 37.0 218.8 709.4 21.4 213.6
Average DL LLC PDU size (bytes) 457.7 425.0 412.9 507.6 50.0 456.3
DL throughput (kbit/s) 1 291.3 125.7 722.7 2 880.8 8.6 779.7
Percentage of max DL GPU throughput 41.7% 4.1% 23.3% 92.9% 0.3% 25.2%
Uplink data load
UL LLC / s 164.0 40.0 177.1 17.7 42.9 103.5
Average UL LLC PDU size (bytes) 50.0 50.0 319.0 50.0 50.0 197.4
UL throughput (kbit/s) 65.6 16.0 452.0 7.1 17.1 163.5
Percentage of max UL GPU throughput 2.1% 0.5% 14.6% 0.2% 0.6% 5.3%
Memory needed
Number of simultaneously connected MS 69 153 111 78 0 196
GSL link for one GSL at 64 kbit/s
MFS -> BSC 58.1% 83.4% 68.7% 55.5% 90.0% 71.6%
BSC -> MFS 49.7% 69.3% 57.3% 47.7% 74.5% 60.0%
Table 49: PPC capacity at 75% CPU load with GPU2 for GPRS (B10 MR2 GboFR)
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 2 213 7 757 12 226 3 990 68 160 7 838
TBF establishments / s at target load 38.1 56.7 41.1 19.8 63.1 44.8
Total LLC / s (UL and DL) at target load 492 76 382 672 63 306
Downlink data load
DL LLC / s 336.0 36.3 211.0 655.4 21.0 206.2
Average DL LLC PDU size (bytes) 457.7 425.0 412.9 507.6 50.0 456.3
DL throughput (kbit/s) 1 230.1 123.3 697.0 2 661.4 8.4 752.5
Percentage of max DL GPU throughput 39.7% 4.0% 22.5% 85.9% 0.3% 24.3%
Uplink data load
UL LLC / s 156.3 39.3 170.8 16.4 42.1 99.9
Average UL LLC PDU size (bytes) 50.0 50.0 319.0 50.0 50.0 197.4
UL throughput (kbit/s) 62.5 15.7 435.9 6.6 16.8 157.8
Percentage of max UL GPU throughput 2.0% 0.5% 14.1% 0.2% 0.5% 5.1%
Memory needed
Number of simultaneously connected MS 65 151 107 72 0 189
GSL link for one GSL at 64 kbit/s
MFS -> BSC 57.7% 82.8% 68.1% 55.2% 89.3% 70.9%
BSC -> MFS 49.4% 68.9% 56.8% 47.4% 73.9% 59.4%
Table 50: PPC capacity at 75% CPU load with GPU2 for GPRS (B10 MR2 GboIP)
Table 51, Table 52 and Table 53 present the results for B10 MR1 (GboFR) and B10 MR2 (GboFR and
GboIP) respectively.
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 2 742 7 785 13 900 6 611 76 368 8 837
TBF establishments / s at target load 47.2 56.9 46.8 22.3 70.7 49.8
Total LLC / s (UL and DL) at target load 513 76 334 747 71 273
Downlink data load
DL LLC / s 318.9 36.4 177.6 730.6 23.6 173.2
Average DL LLC PDU size (bytes) 597.3 425.0 557.7 754.4 50.0 612.2
DL throughput (kbit/s) 1 524.1 123.8 792.4 4 409.6 9.4 848.5
Percentage of max DL GPU throughput 38.1% 3.1% 19.8% 110.2% 0.2% 21.2%
Uplink data load
UL LLC / s 193.6 39.4 156.9 16.6 47.1 100.1
Average UL LLC PDU size (bytes) 50.0 50.0 394.9 50.0 50.0 221.9
UL throughput (kbit/s) 77.4 15.8 495.6 6.6 18.9 177.6
Percentage of max UL GPU throughput 1.9% 0.4% 12.4% 0.2% 0.5% 4.4%
Memory needed
Number of simultaneously connected MS 52 149 74 55 0 139
GSL link for one GSL at 64 kbit/s
MFS -> BSC 61.7% 91.8% 75.5% 59.9% 105.8% 79.6%
BSC -> MFS 53.5% 79.8% 64.8% 52.0% 92.1% 68.9%
Table 51: PPC capacity at 75% CPU load with GPU2 for EGPRS (B10 MR1 GboFR)
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 2 421 6 863 12 313 6 054 65 859 7 825
Total LLC / s (UL and DL) at target load 453 67 296 684 61 242
Table 52: PPC capacity at 75% CPU load with GPU2 for EGPRS (B10 MR2 GboFR)
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 2 322 6 750 11 960 5 617 64 713 7 605
TBF establishments / s at target load 40.0 49.4 40.2 19.0 59.9 42.9
Total LLC / s (UL and DL) at target load 434 66 288 635 60 235
Downlink data load
DL LLC / s 270.2 31.6 152.8 620.7 20.0 149.1
Average DL LLC PDU size (bytes) 597.3 425.0 557.7 754.4 50.0 612.2
DL throughput (kbit/s) 1 291.1 107.3 681.9 3 746.6 8.0 730.2
Percentage of max DL GPU throughput 32.3% 2.7% 17.0% 93.7% 0.2% 18.3%
Uplink data load
UL LLC / s 164.0 34.2 135.0 14.1 39.9 86.1
Average UL LLC PDU size (bytes) 50.0 50.0 394.9 50.0 50.0 221.9
UL throughput (kbit/s) 65.6 13.7 426.4 5.6 16.0 152.9
Percentage of max UL GPU throughput 1.6% 0.3% 10.7% 0.1% 0.4% 3.8%
Memory needed
Number of simultaneously connected MS 44 129 64 47 0 120
GSL link for one GSL at 64 kbit/s
MFS -> BSC 60.1% 86.3% 72.1% 58.6% 97.4% 75.6%
BSC -> MFS 52.1% 75.0% 61.9% 50.8% 84.8% 65.5%
Table 53: PPC capacity at 75% CPU load with GPU2 for EGPRS (B10 MR2 GboIP)
In the different tables above, the maximum number of simultaneously connected mobiles is obtained
with:
3 PDCHs allocated per MS in DL,
An average throughput per PDCH of 2.3 kbytes/s for GPRS (equivalent to 18.4 kbit/s) and 7.1
kbytes/s for EDGE (equivalent to 56.8 kbit/s),
T_network_response_time = 2 s (for the delayed DL TBF release feature).
The number of MS established at the same time always remains well below the maximum value allowed
by the PMU memory (1000 MS contexts).
Note The number of MS contexts is no longer an issue, as preemption of MS contexts (inter- and
intra-cell preemptions) has been introduced in B9. This means that idle MS contexts can be reused
by all the cells handled by one GPU, if needed.
Note The overhead of 9.25% is taken into account in the evaluation of the different parameters. This
explains why for 10% of CPU load, the processed values seem close to zero.
[Chart: Number of TBF Requests per hour versus PPC CPU Load (GPRS only); series MR1, MR2-FR, MR2-IP.]
Figure 10: Number of TBF established in DL and UL per hour for GPU2 (GPRS case)
[Chart: Number of TBF Requests per hour versus PPC CPU Load (EGPRS only); series MR1, MR2-FR, MR2-IP.]
Figure 11: Number of TBF established in DL and UL per hour for GPU2 (EGPRS case)
[Chart: Number of LLC PDUs transferred per hour versus PPC CPU Load (GPRS only).]
Figure 12: Number of LLC PDU transmitted in DL and UL per hour for GPU2 (GPRS case)
[Chart: Number of LLC PDUs transferred per hour versus PPC CPU Load (EGPRS only).]
Figure 13: Number of LLC PDU transmitted in DL and UL per hour for GPU2 (EGPRS case)
[Chart: DL PPC Throughput (kbit/s) versus PPC CPU Load for GPU2 (GPRS case); series MR1, MR2-FR, MR2-IP.]
[Chart: DL PPC Throughput (kbit/s) versus PPC CPU Load for GPU2 (EGPRS case); series MR1, MR2-FR, MR2-IP.]
Note Fewer TBFs are established and fewer LLC PDUs are transferred in the EGPRS case than in the
GPRS case. This is due to the higher PDU size for EGPRS.
The PPC capacity is indicated at 75% CPU load (nominal load), and with a CPU margin of 6.47%, as
explained in Section 6.2.2.1.
The way the PPC load is computed is detailed in Section 9.4. The number of GPU users is adjusted to
reach 75% CPU Load.
We distinguish the case of 100% GPRS MS and 100% EDGE MS. The capacity with a proportion of both can
be obtained as a weighted average of both tables below.
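The adjustment itself is done with the Excel goal-seek tool (see footnote 23). A minimal Python equivalent is sketched below; the linear load model used here is a placeholder of ours, calibrated on the average-user column of Table 48, whereas the real load model is detailed in Section 9.4.

def users_at_target_load(cpu_load, target=0.75, lo=0, hi=1_000_000):
    # Bisection: largest number of users whose modelled CPU load stays at or below the target
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if cpu_load(mid) <= target:
            lo = mid
        else:
            hi = mid
    return lo

margin = 0.0647                      # CPU margin for GPU2 (Section 6.2.2.1)
per_user = (0.75 - margin) / 9268    # placeholder calibration on Table 48 (average user)
print(users_at_target_load(lambda n: margin + per_user * n))   # ~9268 users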
Note Compared to B9, B10 performance has decreased because we now assume that the feature
Extended UL TBF mode is not supported by all MS. The B9 assumption that all MS support this
feature was far too optimistic. The new assumption for B10 (50% of MS supporting this feature)
is more realistic, even if still optimistic.
Table 54, Table 55 and Table 56 present the results for B10 MR1 (GboFR) and B10 MR2 (GboFR and
GboIP) respectively.
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 4 277 15 678 23 850 7 353 140 836 15 392
TBF establishments / s at target load 73.7 114.7 80.2 36.6 130.4 88.0
Total LLC / s (UL and DL) at target load 951 153 745 1 238 130 601
Downlink data load
DL LLC / s 649.4 73.3 411.6 1 207.8 43.5 404.9
Average DL LLC PDU size (bytes) 457.7 425.0 412.9 507.6 50.0 456.3
DL throughput (kbit/s) 2 377.6 249.3 1 359.7 4 904.3 17.4 1 477.8
Percentage of max DL GPU throughput 76.7% 8.0% 43.9% 158.2% 0.6% 47.7%
Uplink data load
UL LLC / s 302.0 79.4 333.2 30.2 86.9 196.2
Average UL LLC PDU size (bytes) 50.0 50.0 319.0 50.0 50.0 197.4
UL throughput (kbit/s) 120.8 31.7 850.4 12.1 34.8 309.9
Percentage of max UL GPU throughput 3.9% 1.0% 27.4% 0.4% 1.1% 10.0%
Memory needed
Number of simultaneously connected MS 126 304 209 132 0 371
GSL link for one GSL at 64 kbit/s
MFS -> BSC 64.1% 115.3% 84.4% 58.8% 130.2% 90.2%
BSC -> MFS 54.4% 94.2% 68.9% 50.2% 105.7% 74.3%
Table 54: PPC capacity at 75% CPU load with GPU3 for GPRS (B10 MR1 GboFR)
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 3 900 14 189 21 703 6 820 126 474 14 010
TBF establishments / s at target load 67.2 103.8 73.0 33.9 117.1 80.1
Total LLC / s (UL and DL) at target load 868 138 678 1 148 117 547
Downlink data load
DL LLC / s 592.1 66.4 374.6 1 120.3 39.0 368.5
Average DL LLC PDU size (bytes) 457.7 425.0 412.9 507.6 50.0 456.3
DL throughput (kbit/s) 2 168.0 225.6 1 237.4 4 549.1 15.6 1 345.1
Percentage of max DL GPU throughput 69.9% 7.3% 39.9% 146.7% 0.5% 43.4%
Average UL LLC PDU size (bytes) 50.0 50.0 319.0 50.0 50.0 197.4
UL throughput (kbit/s) 110.2 28.7 773.8 11.2 31.2 282.0
Percentage of max UL GPU throughput 3.6% 0.9% 25.0% 0.4% 1.0% 9.1%
Memory needed
Number of simultaneously connected MS 115 275 190 123 0 338
GSL link for one GSL at 64 kbit/s
MFS -> BSC 63.0% 109.2% 81.4% 58.2% 122.1% 86.7%
BSC -> MFS 53.5% 89.4% 66.7% 49.7% 99.4% 71.5%
Table 55: PPC capacity at 75% CPU load with GPU3 for GPRS (B10 MR2 GboFR)
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 3 302 12 592 18 754 5 569 111 691 12 114
TBF establishments / s at target load 56.9 92.1 63.1 27.7 103.4 69.3
Total LLC / s (UL and DL) at target load 735 123 586 938 103 473
Downlink data load
DL LLC / s 501.4 58.9 323.7 914.7 34.5 318.7
Average DL LLC PDU size (bytes) 457.7 425.0 412.9 507.6 50.0 456.3
DL throughput (kbit/s) 1 835.8 200.2 1 069.2 3 714.3 13.8 1 163.1
Percentage of max DL GPU throughput 59.2% 6.5% 34.5% 119.8% 0.4% 37.5%
Uplink data load
UL LLC / s 233.2 63.7 262.0 22.9 68.9 154.4
Average UL LLC PDU size (bytes) 50.0 50.0 319.0 50.0 50.0 197.4
UL throughput (kbit/s) 93.3 25.5 668.7 9.2 27.6 243.9
Percentage of max UL GPU throughput 3.0% 0.8% 21.6% 0.3% 0.9% 7.9%
Memory needed
Number of simultaneously connected MS 98 244 165 100 0 292
GSL link for one GSL at 64 kbit/s
MFS -> BSC 61.1% 102.6% 77.2% 56.8% 113.8% 81.8%
BSC -> MFS 52.0% 84.3% 63.6% 48.7% 93.0% 67.8%
Table 56: PPC capacity at 75% CPU load with GPU3 for GPRS (B10 MR2 GboIP)
Table 57, Table 58 and Table 59 present the results for B10 MR1 (GboFR) and B10 MR2 (GboFR and
GboIP) respectively.
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 4 482 13 038 22 866 10 258 131 141 14 582
TBF establishments / s at target load 77.2 95.4 76.9 34.6 121.4 82.2
Total LLC / s (UL and DL) at target load 838 127 550 1 159 121 451
Downlink data load
DL LLC / s 521.4 61.0 292.2 1 133.6 40.5 285.9
Average DL LLC PDU size (bytes) 597.3 425.0 557.7 754.4 50.0 612.2
DL throughput (kbit/s) 2 491.6 207.3 1 303.6 6 842.0 16.2 1 400.1
Percentage of max DL GPU throughput 62.3% 5.2% 32.6% 171.1% 0.4% 35.0%
Uplink data load
UL LLC / s 316.5 66.0 258.0 25.8 81.0 165.1
Percentage of max UL GPU throughput 3.17% 0.66% 20.38% 0.26% 0.81% 7.33%
Memory needed
Number of simultaneously connected MS 86 249 122 86 0 230
GSL link for one GSL at 64 kbit/s
MFS -> BSC 68.6% 119.3% 91.4% 64.9% 145.2% 98.3%
BSC -> MFS 59.6% 103.9% 78.2% 56.3% 126.5% 85.1%
Table 57: PPC capacity at 75% CPU load with GPU3 for EGPRS (B10 MR1 GboFR)
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 4 092 11 992 20 997 9 549 118 600 13 400
TBF establishments / s at target load 70.5 87.7 70.6 32.2 109.8 75.6
Total LLC / s (UL and DL) at target load 765 117 505 1 079 110 414
Downlink data load
DL LLC / s 476.0 56.1 268.3 1 055.3 36.6 262.7
Average DL LLC PDU size (bytes) 597.3 425.0 557.7 754.4 50.0 612.2
DL throughput (kbit/s) 2 274.6 190.7 1 197.1 6 369.2 14.6 1 286.6
Percentage of max DL GPU throughput 56.9% 4.8% 29.9% 159.2% 0.4% 32.2%
Uplink data load
UL LLC / s 289.0 60.7 237.0 24.0 73.2 151.7
Average UL LLC PDU size (bytes) 50.0 50.0 394.9 50.0 50.0 221.9
UL throughput (kbit/s) 115.6 24.3 748.6 9.6 29.3 269.3
Percentage of max UL GPU throughput 2.89% 0.61% 18.72% 0.24% 0.73% 6.73%
Memory needed
Number of simultaneously connected MS 78 229 112 80 0 211
GSL link for one GSL at 64 kbit/s
MFS -> BSC 67.1% 113.8% 88.1% 63.9% 136.2% 94.5%
BSC -> MFS 58.2% 99.1% 75.4% 55.5% 118.6% 81.8%
Table 58: PPC capacity at 75% CPU load with GPU3 for EGPRS (B10 MR2 GboFR)
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 3 510 10 831 18 512 7 873 105 505 11 829
TBF establishments / s at target load 60.4 79.2 62.3 26.6 97.7 66.7
Total LLC / s (UL and DL) at target load 656 105 445 890 98 366
Downlink data load
DL LLC / s 408.3 50.7 236.6 870.1 32.6 231.9
Average DL LLC PDU size (bytes) 597.3 425.0 557.7 754.4 50.0 612.2
DL throughput (kbit/s) 1 951.2 172.2 1 055.4 5 251.6 13.0 1 135.7
Percentage of max DL GPU throughput 48.8% 4.3% 26.4% 131.3% 0.3% 28.4%
Uplink data load
UL LLC / s 247.9 54.8 208.9 19.8 65.1 133.9
Average UL LLC PDU size (bytes) 50.0 50.0 394.9 50.0 50.0 221.9
UL throughput (kbit/s) 99.1 21.9 660.0 7.9 26.1 237.8
Percentage of max UL GPU throughput 2.48% 0.55% 16.50% 0.20% 0.65% 5.94%
Memory needed
Number of simultaneously connected MS 67 207 99 66 0 186
GSL link for one GSL at 64 kbit/s
MFS -> BSC 64.8% 107.8% 83.7% 61.7% 126.8% 89.4%
BSC -> MFS 56.2% 93.8% 71.7% 53.5% 110.4% 77.4%
Table 59: PPC capacity at 75% CPU load with GPU3 for EGPRS (B10 MR2 GboIP)
With GPU3, the PPC bottleneck effect is diminished. This is particularly visible for UDP-DL traffic, and
slightly for WEB traffic. For WAP traffic, the PPC bottleneck effect is still very strong.
Note The overhead of 6.47% is taken into account in the evaluation of the different parameters.
[Chart: Number of TBF Requests per hour versus PPC CPU Load (GPRS only).]
Figure 16: Number of TBF established in DL and UL per hour for GPU3 (GPRS case)
[Chart: Number of TBF Requests per hour versus PPC CPU Load (EGPRS only); series MR1, MR2-FR, MR2-IP.]
Figure 17: Number of TBF established in DL and UL per hour for GPU3 (EGPRS case)
[Chart: Number of LLC PDUs transferred per hour versus PPC CPU Load (GPRS only); series MR1, MR2-FR, MR2-IP.]
Figure 18: Number of LLC PDU transmitted in DL and UL per hour for GPU3 (GPRS case)
[Chart: Number of LLC PDUs transferred per hour versus PPC CPU Load (EGPRS only); series MR1, MR2-FR, MR2-IP.]
Figure 19: Number of LLC PDU transmitted in DL and UL per hour for GPU3 (EGPRS case)
[Chart: DL PPC Throughput (kbit/s) versus PPC CPU Load for GPU3 (GPRS case); series MR1, MR2-FR, MR2-IP.]
[Chart: DL PPC Throughput (kbit/s) versus PPC CPU Load for GPU3 (EGPRS case); series MR1, MR2-FR, MR2-IP.]
6.4 PPC capacity with all profiles and traffic mix for B10 Mx
The PPC capacity is indicated at 75% CPU load (nominal load), and with a CPU margin of 3.55% for B10
MR1 (GboFR) and of 3.85% for B10 MR2 (GboIP). The way the PPC load is computed is detailed in Section
9.4. The number of GP users is adjusted to reach 75% CPU Load. We distinguish the case of 100% GPRS MS
and 100% EDGE MS. The capacity with a proportion of both can be obtained as a weighted average of both
tables below.
Note Here also, we assume that only half of the MS population supports the extended UL TBF mode
feature. This explains the significant decrease in performance between B9, where every MS was
assumed to support the feature, and B10.
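As stated above, the capacity for a mixed GPRS / EGPRS population is a weighted average of the two tables. A one-line helper (ours), applied to the average-user columns of Table 61 and Table 63:

def mixed_capacity(gprs, egprs, egprs_share):
    # Linear interpolation between the 100% GPRS and 100% EGPRS capacities
    return (1 - egprs_share) * gprs + egprs_share * egprs

# 30% EGPRS MS with GP, B10 MR2 GboIP (Tables 61 and 63):
print(mixed_capacity(40706, 39237, 0.30))   # ~40265 subscribers per GP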
Table 60: PPC capacity at 75% CPU load with GP for GPRS (B10 MR1 GboFR)
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 11 051 42 744 62 993 18 204 383 404 40 706
TBF establishments / s at target load 190.3 312.7 211.9 90.5 355.0 232.8
Total LLC / s (UL and DL) at target load 2 458 416 1 967 3 065 355 1 590
Downlink data load
DL LLC / s 1 678.0 199.9 1 087.2 2 990.3 118.3 1 070.8
Average DL LLC PDU size (bytes) 457.7 425.0 412.9 507.6 50.0 456.3
DL throughput (kbit/s) 6 143.5 679.7 3 591.4 12 142.5 47.3 3 908.3
Percentage of max DL GPU throughput 49.9% 5.5% 29.2% 98.7% 0.4% 31.8%
Uplink data load
UL LLC / s 780.4 216.4 880.2 74.8 236.7 518.9
Average UL LLC PDU size (bytes) 50.0 50.0 319.0 50.0 50.0 197.4
UL throughput (kbit/s) 312.2 86.5 2 246.0 29.9 94.7 819.5
Percentage of max UL GPU throughput 2.5% 0.7% 18.3% 0.2% 0.8% 6.7%
Memory needed
Number of simultaneously connected MS 327 829 553 328 0 981
GSL link for one GSL at 64 kbit/s
MFS -> BSC 138.2% 279.6% 192.4% 123.4% 319.8% 207.9%
BSC -> MFS 123.6% 233.6% 162.7% 112.1% 264.9% 176.9%
Table 61: PPC capacity at 75% CPU load with GP for GPRS (B10 MR2 GboIP)
Percentage of max UL GPU throughput 2.5% 0.5% 16.0% 0.2% 0.6% 5.7%
Memory needed
Number of simultaneously connected MS 241 689 343 249 0 642
GSL link for one GSL at 64 kbit/s
MFS -> BSC 149.7% 289.1% 213.2% 140.3% 357.5% 232.3%
BSC -> MFS 133.1% 255.1% 185.0% 124.9% 314.9% 204.2%
Table 62: PPC capacity at 75% CPU load with GP for EGPRS (B10 MR1 GboFR)
Capacity | WEB | WAP | MMS DL and UL | UDP-DL | RA Update | Average user
Number of subscribers per GPU 11 733 35 924 61 439 25 619 358 581 39 237
TBF establishments / s at target load 202.1 262.8 206.7 86.5 332.0 221.2
Total LLC / s (UL and DL) at target load 2 193 350 1 478 2 896 332 1 213
Downlink data load
DL LLC / s 1 364.9 168.0 785.2 2 831.2 110.7 769.2
Average DL LLC PDU size (bytes) 597.3 425.0 557.7 754.4 50.0 612.2
DL throughput (kbit/s) 6 522.2 571.2 3 502.8 17 088.1 44.3 3 767.3
Percentage of max DL GPU throughput 41.3% 3.6% 22.2% 108.2% 0.3% 23.8%
Uplink data load
UL LLC / s 828.5 181.8 693.3 64.4 221.3 444.3
Average UL LLC PDU size (bytes) 50.0 50.0 394.9 50.0 50.0 221.9
UL throughput (kbit/s) 331.4 72.7 2 190.6 25.7 88.5 788.7
Percentage of max UL GPU throughput 2.1% 0.5% 13.9% 0.2% 0.6% 5.0%
Memory needed
Number of simultaneously connected MS 224 686 329 215 0 618
GSL link for one GSL at 64 kbit/s
MFS -> BSC 150.3% 292.5% 212.8% 138.9% 361.8% 231.5%
BSC -> MFS 137.6% 261.9% 188.6% 127.6% 322.5% 207.4%
Table 63: PPC capacity at 75% CPU load with GP for EGPRS (B10 MR2 GboIP)
In this section, the number of TBF established in DL and UL at busy hour, the number of DL and UL LLC
PDU transferred at busy hour, and the DL PPC throughput at busy hour are shown as a function of the CPU
load for the Average User traffic model. Two cases are presented, one for GPRS traffic and one for
EGPRS traffic. For each case, there is a comparison between B10 MR1 (GboFR) and B10 MR2 (GboIP).
The results are presented in Figure 22 and Figure 23 for TBF establishment, in Figure 24 and Figure 25 for
LLC PDU transmitted and in Figure 26 and Figure 27 for DL PPC throughput (in kbit/s).
The impact of the different features (EDA, Gb over IP) introduced in B10 MR2 on the PPC CPU load
corresponds to an increase of around 5% for the Average User traffic model.
Note The overheads of 3.55% for B10 MR1 (GboFR) and 3.85% for B10 MR2 (GboIP) are taken into
account in the evaluation of the different parameters.
[Chart: Number of TBF Requests per hour versus PPC CPU Load (GPRS only); series MR1, MR2-IP.]
Figure 22: Number of TBF established in DL and UL per hour for GP (GPRS case)
[Chart: Number of TBF Requests per hour versus PPC CPU Load (EGPRS only); series MR1, MR2-IP.]
Figure 23: Number of TBF established in DL and UL per hour for GP (EGPRS case)
[Chart: Number of LLC PDUs transferred per hour versus PPC CPU Load (GPRS only); series MR1, MR2-IP.]
Figure 24: Number of LLC PDU transmitted in DL and UL per hour for GP (GPRS case)
[Chart: Number of LLC PDUs transferred per hour versus PPC CPU Load (EGPRS only); series MR1, MR2-IP.]
Figure 25: Number of LLC PDU transmitted in DL and UL per hour for GP (EGPRS case)
[Chart: DL PPC Throughput (kbit/s) versus PPC CPU Load for GP (GPRS case); series MR1, MR2-IP.]
[Chart: DL PPC Throughput (kbit/s) versus PPC CPU Load for GP (EGPRS case); series MR1, MR2-IP.]
7 GSL CAPACITY
Bytes_xxx corresponds to the number of bytes transferred for the xxx procedure. It takes into account the
message flow for requests and responses.
Rate_xxx corresponds to the expected number of requests per second for a GPU / GP at nominal load. To
make this rate as independent as possible from a traffic model we take a worst-case hypothesis:
Rate_TBF_c is the maximum number of TBF establishment per second on the CCCH.
Rate_suspend_resume corresponds to 30% of the maximum number of Call Attempts per second
(CAPS) on the largest BSC. The value of 30% corresponds to the GPRS penetration rate.
Rate_paging_PS is expected to be negligible (most PS traffic is currently expected to be mobile
originated) and is not considered here,
Rate_paging_CS is the CS paging received on Gb (from Gs) and transmitted to the cells on the CCCH.
It is equal to 30% of the maximum number of CS paging received on a BSC on A interface. This case is
possible with Network Operating Mode I (with Gs interface).
of RAE-4 messages (RR Allocation Indication from BSC to MFS and RR Usage Indication from MFS to BSC).
For the transmission resource management, we assume that half of the established TBFs (Rate_TBF_c)
request some transmission resources.
Rate_DTM_TBF_c is the number of CS calls that go from the dedicated mode to the DTM mode, and
then return to the dedicated mode.
Rate_bsc_shared_infos is the number of BSC Shared DTM Info Indication messages that are
transmitted between the BSC and the MFS. This number depends on the number of DTM-capable MS
that are in dedicated mode, and on a timer that defines the periodicity of the transmission
(T_SHARED_DTM_INFO).
24 Because we are not considering a traffic model as input, the GSL load is based on a worst case derived from GPU / GP capacity.
TCH_INFO_PERIOD 5s
RR_ALLOC_PERIOD 2
T_SHARED_DTM_INFO 8s
Percentage of MS DTM capable 10%
Percentage of CS call in dedicated mode going to DTM mode 2%
Table 65 provides the size of the different messages exchanged on the BSCGP interface between the MFS
and the BSC.
B10 messages | MFS -> BSC (bytes) | BSC -> MFS (bytes)
Abis Nibbles Allocation 35 8
Transmission Allocation Request 75 8
Transmission Allocation Confirm 8 98
Abis Nibbles Allocation 35 8
Transmission Deallocation Command 43 8
Transmission Deallocation Complete 8 66
1 cycle transmission allocation / de-allocation 204 196
CS paging 50 8
Channel request 8 34
Channel Assignment 56 8
Total TBF establishment 64 42
DTM Assignment Command 81 8
MFS Shared DTM Info Indication 31 8
MFS Shared DTM Info Indication Ack 8 34
1 cycle of DTM establishment / release 120 50
BSC Shared DTM Info Indication 8 52
MS Resume 8 38
MS Resume Ack 38 8
MS Suspend 8 38
MS Suspend Ack 41 8
1 cycle of suspend / resume 95 92
RR Allocation Indication 8 31
RR Usage Indication 39 8
Table 66 provides the number of BSCGP messages exchanged on the BSCGP interface between the MFS
and the BSC for each service (transmission, paging and so on). The messages used for acknowledgement
are not taken into account.
Messages | MFS -> BSC | BSC -> MFS
1 cycle transmission allocation / de-allocation 4 2
CS paging 1 0
Total TBF establishment 1 1
1 cycle of DTM establishment / release 2 1
BSC Shared DTM Info Indication 0 1
1 cycle of suspend / resume 2 2
RR Allocation Indication 0 1
RR Usage Indication 1 0
With the introduction of Mx, several BSC / MFS configurations are possible:
BSC G2 and MFS GPU2,
BSC G2 and MFS GPU3,
BSC G2 and MFS GP,
BSC Evolution and MFS GPU2,
BSC Evolution and MFS GPU3,
BSC Evolution and MFS GP.
The number of messages per second on the BSCGP interface depends on the type of BSC (G2 or
Evolution) and on the type of MFS (GPU2, GPU3 or GP). Table 67 presents the assumptions for BSC G2 and Mx.
BSC G2 BSC Evolution MR1 BSC Evolution MR2
Maximum Erlang capacity 1900 4000 4500
Number of cells 264 500 500
Maximum paging per second 70 108 121
CAPS 38 80 90
Table 68 provides the occurrences per second associated with each service (transmission, paging, TBF
establishment, and so on) for B10 MR1 and B10 MR2.
G2 / GPU2 G2 / GPU3 G2 / GP Ev / GPU2 Ev / GPU3 Ev / GP
B10 MR1 (GboFR)
1 cycle transmission (de)allocation 11.1 18.6 48.5 11.1 18.6 48.5
CS paging 21.0 21.0 21.0 32.4 32.4 32.4
Total TBF establishment 22.3 37.3 96.9 22.3 37.3 96.9
1 cycle of DTM (de) establishment 0.4 0.7 1.9 0.4 0.7 1.9
BSC Shared DTM Info Indication 23.8 23.8 23.8 50.0 50.0 50.0
1 cycle of suspend / resume 11.4 11.4 11.4 24.0 24.0 24.0
RR Allocation Indication 26.4 26.4 26.4 50.0 50.0 50.0
RR Usage Indication 52.8 52.8 52.8 100.0 100.0 100.0
B10 MR2 (GboIP)
1 cycle transmission (de)allocation 9.3 14.6 48.6 9.3 14.6 48.6
CS paging 21.0 21.0 21.0 36.3 36.3 36.3
Total TBF establishment 18.5 29.2 97.2 18.5 29.2 97.2
1 cycle of DTM (de) establishment 0.4 0.6 1.9 0.4 0.6 1.9
BSC Shared DTM Info Indication 23.8 23.8 23.8 56.3 56.3 56.3
1 cycle of suspend / resume 11.4 11.4 11.4 27.0 27.0 27.0
RR Allocation Indication 26.4 26.4 26.4 50.0 50.0 50.0
RR Usage Indication 52.8 52.8 52.8 100.0 100.0 100.0
Table 68: Occurrences per second for the available BSC / MFS hardware
From Table 65 and Table 68, we can evaluate the data load of one GP / GPU for the six possible mixed
configurations, with B10 MR1 and B10 MR2 (see Table 69).
GSL data load (kbit/s) | MFS -> BSC | BSC -> MFS
B10 MR1 (GboFR)
BSC G2 + MFS GPU2 71.0 54.6
Table 69: GSL data load for one GPU / GP (without location services)
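The construction of Table 69 can be approximated by weighting the Table 65 message sizes with the Table 68 occurrence rates. The sketch below (names are ours) does this for the MFS -> BSC direction of the legacy configuration; it lands slightly below the 71.0 kbit/s of Table 69, presumably because per-message framing overhead is not modelled here.

# Message sizes in bytes (Table 65) and occurrences/s (Table 68, BSC G2 + MFS GPU2, B10 MR1)
SIZE_MFS_TO_BSC = {"transmission cycle": 204, "CS paging": 50, "TBF establishment": 64,
                   "DTM cycle": 120, "BSC Shared DTM Info Indication": 8,
                   "suspend/resume cycle": 95, "RR Allocation Indication": 8,
                   "RR Usage Indication": 39}
RATE_PER_S = {"transmission cycle": 11.1, "CS paging": 21.0, "TBF establishment": 22.3,
              "DTM cycle": 0.4, "BSC Shared DTM Info Indication": 23.8,
              "suspend/resume cycle": 11.4, "RR Allocation Indication": 26.4,
              "RR Usage Indication": 52.8}

load_kbit_s = sum(SIZE_MFS_TO_BSC[k] * RATE_PER_S[k] for k in RATE_PER_S) * 8 / 1000
print(f"MFS -> BSC: {load_kbit_s:.1f} kbit/s")   # ~66.7 kbit/s (Table 69: 71.0 kbit/s)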
Between the MFS and the BSC, the data load for one GPU / GP goes from 71.0 kbit/s (B10 MR1 GboFR) /
66 kbit/s (B10 MR2 GboIP) for a legacy configuration (BSC G2 and MFS GPU2) up to 207.4 kbit/s (B10 MR1
GboFR) / 212 kbit/s (B10 MR2 GboIP) for an Mx configuration (BSC Evolution and MFS GP). The GSL load
is therefore multiplied by a factor of three with the introduction of Mx in the worst case.
From the results presented in Table 69, we notice that for the cases with an Mx MFS, 3 GSL links will be
needed with a BSC G2, and 4 GSL links will be needed with a BSC Evolution.
From Table 66 and Table 68, we can evaluate the total number of messages exchanged between the MFS
and the BSC (and vice-versa) per second (see Table 70).
GSL data load (messages per second) | MFS -> BSC | BSC -> MFS
B10 MR1 (GboFR)
BSC G2 + MFS GPU2 164 118
BSC G2 + MFS GPU3 210 148
BSC G2 + MFS GP 391 269
BSC Evolution + MFS GPU2 248 193
BSC Evolution + MFS GPU3 294 223
BSC Evolution + MFS GP 475 344
B10 MR2 (GboIP)
BSC G2 + MFS GPU2 153 110
BSC G2 + MFS GPU3 185 132
BSC G2 + MFS GP 392 269
BSC Evolution + MFS GPU2 247 198
BSC Evolution + MFS GPU3 279 219
BSC Evolution + MFS GP 486 357
Table 70: Number of messages per second exchanged on GSL link for GPU / GP
For a terrestrial link, assuming K_GSL set to 7 and a round-trip delay of 25 ms, 280 messages can be
handled per second on one GSL link. In the worst case (BSC Evolution and MFS Mx), two GSL links will be
needed. This means that, for a terrestrial GSL link, the available bandwidth is more restrictive than the
window size.
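This window limit is simple arithmetic: at most K_GSL messages can be outstanding per round trip. A short check (helper names are ours), using the worst-case message rate of Table 70:

import math

def gsl_msgs_per_s(k_gsl=7, round_trip_s=0.025):
    # At most K_GSL unacknowledged messages per round trip
    return k_gsl / round_trip_s

def gsl_links_needed(msgs_per_s):
    return math.ceil(msgs_per_s / gsl_msgs_per_s())

print(gsl_msgs_per_s())        # 280.0 messages/s per GSL link
print(gsl_links_needed(486))   # BSC Evolution + MFS GP, B10 MR2: 2 links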
total number of PS users in the BSS, is distributed over more GPUs / GPs. These functions are MS suspend /
resume and CS paging. The load of messages due to RAE-4 is also distributed over the GPUs / GPs, as
these messages are independent of the PS traffic.
The tables below give the data load expected for 1 to 6 GPU2 / GPU3 / GP for each possible
configuration defined in Section 7.3.
Number of GPUs 1 2 3 4 5 6
B10 MR1 (GboFR)
Fixed load (TBFs allocation and de-allocation) 30.0 30.0 30.0 30.0 30.0 30.0
Relative load (paging, suspend/resume, RAE-4) 41.0 20.5 13.7 10.2 8.2 6.8
Total load per GPU (kbit/s) 71.0 50.5 43.6 40.2 38.2 36.8
B10 MR2 (GboIP)
Fixed load (TBFs allocation and de-allocation) 25.0 25.0 25.0 25.0 25.0 25.0
Relative load (paging, suspend/resume, RAE-4) 41.0 20.5 13.7 10.2 8.2 6.8
Total load per GPU (kbit/s) 66.0 45.5 38.6 35.2 33.2 31.8
Table 71: Data load on the GSL for up to 6 GPU (BSC G2 and MFS GPU2)
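The scaling rule used in Tables 71 to 76 (a fixed load per GPU / GP plus a shared relative load) can be written as a one-line helper (ours):

def gsl_load_per_unit(fixed_kbit_s, relative_kbit_s, n_units):
    # Fixed load stays per GPU / GP; paging, suspend/resume and RAE-4 load is shared
    return fixed_kbit_s + relative_kbit_s / n_units

print([round(gsl_load_per_unit(30.0, 41.0, n), 1) for n in range(1, 7)])
# [71.0, 50.5, 43.7, 40.2, 38.2, 36.8] -- Table 71, B10 MR1, up to rounding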
Number of GPUs 1 2 3 4 5 6
B10 MR1 (GboFR)
Fixed load (TBFs allocation and de-allocation) 50.2 50.2 50.2 50.2 50.2 50.2
Relative load (paging, suspend/resume, RAE-4) 41.0 20.5 13.7 10.2 8.2 6.8
Total load per GPU (kbit/s) 91.2 70.7 63.8 60.4 58.4 57.0
B10 MR2 (GboIP)
Fixed load (TBFs allocation and de-allocation) 39.3 39.3 39.3 39.3 39.3 39.3
Relative load (paging, suspend/resume, RAE-4) 41.0 20.5 13.7 10.2 8.2 6.8
Total load per GPU (kbit/s) 80.3 59.8 53.0 49.6 47.5 46.2
Table 72: Data load on the GSL for up to 6 GPU (BSC G2 and MFS GPU3)
Number of GPs 1 2 3 4 5 6
B10 MR1 (GboFR)
Fixed load (TBFs allocation and de-allocation) 130.6 130.6 130.6 130.6 130.6 130.6
Relative load (paging, suspend/resume, RAE-4) 41.0 20.5 13.7 10.2 8.2 6.8
Total load per GP (kbit/s) 171.6 151.1 144.2 140.8 138.8 137.4
B10 MR2 (GboIP)
Fixed load (TBFs allocation and de-allocation) 130.9 130.9 130.9 130.9 130.9 130.9
Relative load (paging, suspend/resume, RAE-4) 41.0 20.5 13.7 10.2 8.2 6.8
Total load per GP (kbit/s) 171.9 151.4 144.6 141.2 139.1 137.7
Table 73: Data load on the GSL for up to 6 GP (BSC G2 and MFS GP)
Number of GPUs 1 2 3 4 5 6
B10 MR1 (GboFR)
Fixed load (TBFs allocation and de-allocation) 30.0 30.0 30.0 30.0 30.0 30.0
Relative load (paging, suspend/resume, RAE-4) 76.8 38.4 25.6 19.2 15.4 12.8
Total load per GPU (kbit/s) 106.8 68.4 55.6 49.2 45.3 42.8
B10 MR2 (GboIP)
Fixed load (TBFs allocation and de-allocation) 25.0 25.0 25.0 25.0 25.0 25.0
Relative load (paging, suspend/resume, RAE-4) 81.0 40.5 27.0 20.3 16.2 13.5
Total load per GPU (kbit/s) 106.0 65.5 52.0 45.2 41.2 38.5
Table 74: Data load on the GSL for up to 6 GPU (BSC Evolution and MFS GPU2)
Number of GPUs 1 2 3 4 5 6
B10 MR1 (GboFR)
Fixed load (TBFs allocation and de-allocation) 50.2 50.2 50.2 50.2 50.2 50.2
Relative load (paging, suspend/resume, RAE-4) 76.8 38.4 25.6 19.2 15.4 12.8
Total load per GPU (kbit/s) 127.0 88.6 75.8 69.4 65.5 63.0
Table 75: Data load on the GSL for up to 6 GPU (BSC Evolution and MFS GPU3)
Number of GPs 1 2 3 4 5 6
B10 MR1 (GboFR)
Fixed load (TBFs allocation and de-allocation) 130.6 130.6 130.6 130.6 130.6 130.6
Relative load (paging, suspend/resume, RAE-4) 76.8 38.4 25.6 19.2 15.4 12.8
Total load per GP (kbit/s) 207.4 169.0 156.2 149.8 145.9 143.4
B10 MR2 (GboIP)
Fixed load (TBFs allocation and de-allocation) 130.9 130.9 130.9 130.9 130.9 130.9
Relative load (paging, suspend/resume, RAE-4) 81.0 40.5 27.0 20.3 16.2 13.5
Total load per GP (kbit/s) 212.0 171.4 157.9 151.2 147.1 144.4
Table 76: Data load on the GSL for up to 6 GP (BSC Evolution and MFS GP)
When there are several GPU / GP per BSS, the number of GSLs can be adjusted depending on the needs. A
maximum of 3 GSL per GPU / GP is needed for the case BSC Evolution and MFS Mx.
8 GLOSSARY
Layer 1 GCH
9 ANNEX
[Diagram: message routes through the BSS, from the MS to the SGSN. LLC is routed transparently through the BSC; the stacks show RRM, RR, BSCGP, BSSGP, RLC, MAC, Relay and Network Service over the L2-RSL, L2-GSL, M-EGCH, GSM RF, L1-GCH, L1-RSL, L1-GSL and L1-Gb layers.]
[Diagram: mapping of LLC frames (LLC header, LLC data, FCS) onto RLC/MAC blocks, L2-GCH blocks and radio bursts. RLC/MAC header: 3 to n bytes; RLC data: 22 bytes with CS1, 32 bytes with CS2. L2-GCH frame size: 40 bytes (Sync., L2 header, RLC SDU, Tail). 4 radio bursts per RLC SDU; 1 block = 4 radio bursts = 20 ms. Sync. = 0000 0000 001Z; FCS = Frame Check Sequence; MAC = Block Header; BCS = Block Check Sequence (when SDCCH coding is used, BCS corresponds to the Fire code).]
[Diagram: Gb and GSL encapsulation. One LLC frame (LLC header, LLC data, FCS) per BSSGP block, with a BSSGP header of 14 to 36 bytes; one BSSGP block per NSC block (NSC header: 4 bytes, NS sub-layer); one NSC block per FR frame (sync. flag 1 byte, FR header 2 bytes, FCS 2 bytes, sync. flag 1 byte). Signalling messages (Paging, Access Request, Access Grant) are carried on LapD: one BSCGP block per LapD frame (sync. flag 1 byte, LAPD header 4 bytes, FCS 2 bytes, sync. flag 1 byte).]
(nb_rau_dl_pdu_bh)
user data,
UL applications: = average transaction size * 1000 / mean UL LLC PDU size.
Number of DL TBF on (P)CCCH (nb_dl_tbf_pc_trans): input value (0: all transactions are supposed to be MS initiated, even for MMS-DL, for which the MS is paged and then responds to the paging).
Number of UL TBF on (P)CCCH (nb_ul_tbf_pc_trans): input value (1, except for MMS-DL (2): one for the notification and one for the MMS download).
Number of DL TBF on PACCH (nb_dl_tbf_pa_trans): input value (1 thanks to delayed DL TBF release, except for MMS-DL (2): one for the notification and one for the MMS download).
Number of UL TBF on PACCH (nb_ul_tbf_pa_trans): DL TCP-based applications (WEB, MMS-DL): = 1 (extended UL mode taken into account and 1 UL TBF for the initial request to the server); DL UDP-based applications: = number of UL LLC PDU; other applications: = input value.
DL kbytes per transaction (dl_kbytes_trans) = mean DL LLC PDU size * number of DL LLC PDU / 1000
UL kbytes per transaction (ul_kbytes_trans) = mean UL LLC PDU size * number of UL LLC PDU / 1000
Data download / upload duration (s) = Average transaction size / allocated throughput
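A direct transcription of the two kbytes-per-transaction formulas (function name is ours), checked against the WEB column of Table 42:

def kbytes_per_transaction(mean_pdu_size_bytes, n_pdu):
    # dl_kbytes_trans / ul_kbytes_trans = mean LLC PDU size * number of LLC PDU / 1000
    return mean_pdu_size_bytes * n_pdu / 1000

print(kbytes_per_transaction(460, 109))   # DL: ~50.1 kbytes (460-byte PDUs x 109)
print(kbytes_per_transaction(50, 50))     # UL: 2.5 kbytes (50-byte PDUs x 50)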
Note Each signalling request-response counts as one ping duration. The following assumptions are used
for the ping duration:
First ping duration: 1 s,
Other pings duration: 0.8 s.
PPC capacity:
Number of subscribers per GPU = N_GPU_user
Total LLC-PDU / s (DL and UL) at target load = sum of the DL and UL PDU rates
25 Objective carried over from B7.2.
The DTCs are used to route the traffic information and to terminate and process the signalling
information exchanged with the MSC (SS7) and the MFS (GSL). For CS and (E)GPRS traffic, 16 kbit/s
channels from 4 DTCs are sub-multiplexed to form an Ater-mux interface.
For the GSL, the number of requested resources in transmission allocation and de-allocation requests has
a negligible influence.
Procedure | Processing cost (ms) | Occurrences per second per GPU | Total cost (ms)
Paging 3.33 21 69.9
UL TBF establishment on CCCH 6.5 21 136.7
DL TBF establishment on CCCH 3.21 0 0
Radio allocation / de-allocation 11.0 10 110
Transmission allocation / de-allocation 38.6 10 386
Suspend and resume 12.2 11.4 139.1
Total DTC cost (ms) 841.7
DTC load if 2 GSL per GPU 42.1%
Number of GPUs 1 2 3 4 5 6
Fixed load (TBFs allocation and de-allocation) 522.7 522.7 522.7 522.7 522.7 522.7
Relative load (paging, suspend/resume, RAE-4) 319.0 159.5 106.3 79.8 63.8 53.2
Total CPU load (ms) 841.7 682.2 629.1 602.5 586.5 575.9
CPU load if 2 GSL per DTC (%) 42.1% 34.1% 31.5% 30.1% 29.3% 28.8%
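The figures above follow from a simple sum of cost times rate, divided by the CPU time available on the DTCs carrying the GSLs (1000 ms of CPU per second each). A minimal check (ours):

# Per-procedure processing cost (ms) and occurrences/s for the GSL-DTC of one GPU
PROCEDURES = [("Paging", 3.33, 21), ("UL TBF establishment on CCCH", 6.5, 21),
              ("DL TBF establishment on CCCH", 3.21, 0),
              ("Radio allocation / de-allocation", 11.0, 10),
              ("Transmission allocation / de-allocation", 38.6, 10),
              ("Suspend and resume", 12.2, 11.4)]

total_ms_per_s = sum(cost * occ for _, cost, occ in PROCEDURES)  # CPU ms consumed per second
n_gsl = 2                                                        # 2 GSLs (2 DTCs) per GPU
print(f"{total_ms_per_s:.1f} ms/s -> {total_ms_per_s / (n_gsl * 1000):.1%} load per DTC")
# 841.5 ms/s -> 42.1% (matches the table up to rounding)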
26 Values measured on the BSC.
27 Note that this is an artefact of our model, which considers that the number of GPUs in the BSS does not depend on the proportion of PS users in a network. The CPU load for the one-GPU case tends to be over-estimated. The model could be refined for future releases.
We see that the GSL-DTC load decreases with more GPUs per BSC. But in this case the number of BSSAP-
DTCs in the BSC will be reduced, and the load on the remaining BSSAP-DTCs has to be checked. This is
described in Section 9.5.3.2.
Due to load sharing between the GSLs, it is possible to minimize the number of GSL-DTCs in the BSC,
provided that the DTC load does not exceed 60%.
There is no load concern for the GSL DTCs as long as two GSLs per GPU are configured, but the impact on
BSSAP load has to be checked.
The worst case (as far as BSC and GSL load are concerned) defined in Section 7.3 is used. This cost will be
distributed over all available BSSAP-DTC.
Procedure | Processing cost (ms) | Occurrences per second | Total cost (ms)
Paging 5.9 21 123.9
UL TBF establishment on CCCH 5.1 21 107.3
DL TBF establishment on CCCH 2.31 0 0
Radio allocation / de-allocation 13.2 10 132.0
Transmission allocation / de-allocation 37.8 10 378.0
Suspend and resume 13.1 11.4 149.3
Total cost (ms) 890.5
28 Value measured on the BSC.
29 This case corresponds to a configuration which is not allowed, but it provides an upper bound with a simplified model for the case of a small proportion of Ater mux configured for PS so as to have a redundant GSL (for example, 1 Amux +1/12 Amux and 2 GSL configured).
load is reduced.
Each Ater-mux used for GPRS corresponds to about 100 Erlang lost for the CS service. The BSSAP load
corresponding to one Ater-mux used for CS services (100 Erlang), with the Alcatel traffic model defined in
document [13], is around 717 ms [30].
Number of GPUs 3 4 5 6
Maximum number of BSSAP-DTC 44 44 44 44
Remaining BSSAP-DTC if 1 GSL per GPU 41 40 39 38
Remaining BSSAP-DTC if 2 GSL per GPU 38 36 34 32
Total GPRSAP DTC load (ms) 2 672 3 562 4 453 5 343
Removed CS load for 1 Amux per GPU 2 152 2 869 3 586 4 303
1) Added BSSAP-DTC load 1 Amux / 1 GSL per GPU 1.3% 1.7% 2.2% 2.7%
2) Added BSSAP-DTC load 1 Amux / 2 GSL per GPU 1.4% 1.9% 2.5% 3.2%
3) Added BSSAP-DTC load 2 Amux / 2 GSL per GPU -4.3% -6.0% -8.0% -10.2%
The cost for one GPU is provided hereafter. With the introduction of the RAE-4 feature, messages
concerning radio allocation / de-allocation for PS traffic are handled in TCH-RM every
RR_ALLOC_PERIOD * TCH_INFO_PERIOD. Considering 100 cells and RR_ALLOC_PERIOD set to 2, this gives
10 messages per second.
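The 10 messages per second can be checked directly (variable names are ours):

n_cells = 100
tch_info_period_s = 5                            # TCH_INFO_PERIOD
rr_alloc_period = 2                              # RR_ALLOC_PERIOD
period_s = rr_alloc_period * tch_info_period_s   # one RAE-4 message per cell every 10 s
print(n_cells / period_s)                        # 10.0 messages per second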
Procedure | Processing cost (ms) | Occurrences per second | Total cost (ms)
Radio allocation / de-allocation 10.4 10 104
Total cost (ms) 104
A BSC may have from 2 to 6 TCH-RM pairs depending on the BSC type. The load due to PS services on
TCH-RM for BSC type 3 to 6 is provided in the table below.
BSC type 3 4 5 6
Number of TCH-RM pairs 4 4 6 6
TCH-RM DTC load 2.6% 2.6% 1.7% 1.7%
Table 82: Added load on TCH-RM DTC for one GPU depending on BSC type
In this case, the load related to the diminution of the CS capacity must be subtracted. As in Section
9.5.3.2, we assume that at least one Ater-mux per GPU is dedicated to GPRS, and that the CS load to be
subtracted corresponds to at least 100 Erlang. For TCH-RM, the CPU cost for 100 Erlang with the Alcatel CS
reduced with the current BSCGP load hypothesis for a BSC type 6 (6 TCH-RM pairs).
30 Computed with the BSC load model, not presented in this document.
The average number of GICs per request will depend on the percentage of EGPRS MS.
To compute the cost, we estimate an average number of GICs per transmission allocation request
corresponding to:
4 PDCHs requested per allocation,
30% penetration rate for EGPRS MS,
Max_GPRS_CS and Max_EGPRS_MCS set to CS4 and MCS9 respectively.
This yields an average of 10 GICs per request. The corresponding load for the trunk device handler for
one GPU is computed below, with the hypothesis of a maximum of 10 transmission allocation and 10
transmission de-allocation requests per second per GPU (see Section 7.3 for justification).
Procedure | Processing cost (ms) | Occurrences per second | Total cost (ms)
Transmission allocation / de-allocation 57.5 10 575
Total cost (ms) 575
Table 84: PS TRKDH CPU cost for one GPU (EGPRS penetration rate of 30%)
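The 10 GICs per request figure is consistent with weighting the per-PDCH Ater resource need by the coding scheme mix. A sketch of that computation follows; the per-PDCH GIC counts (2 for a CS4 PDCH, 4 for an MCS9 PDCH, i.e. 16 kbit/s sub-channels) are assumptions of this sketch, not values stated in this section:

```python
# Hypothetical per-PDCH GIC need; these two constants are ASSUMPTIONS,
# not values given in this document.
GIC_PER_PDCH_CS4 = 2    # assumed: CS4 (~20 kbit/s) -> 2 x 16 kbit/s GICs
GIC_PER_PDCH_MCS9 = 4   # assumed: MCS9 (~59.2 kbit/s) -> 4 x 16 kbit/s GICs

pdch_per_alloc = 4      # PDCHs requested per allocation (this section)
egprs_rate = 0.30       # EGPRS MS penetration rate (this section)

avg_gic = pdch_per_alloc * ((1 - egprs_rate) * GIC_PER_PDCH_CS4
                            + egprs_rate * GIC_PER_PDCH_MCS9)
print(avg_gic)          # 10.4 -> roughly the 10 GICs per request used above
```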
With our model, the load on one GPU is independent of the number of Ater-mux connected to this GPU: we consider that the maximum rate of transmission allocation requests on the BSCGP interface can be reached with only one Ater-mux per GPU. Connecting more Ater-mux per GPU therefore means distributing the corresponding TRKDH load over more DTCs (4 DTCs per Ater-mux). This is quantified in the table below, where the total load per GPU is divided by the number of DTCs (4 per Ater-mux).
The corresponding TRKDH load for CS services for one Ater-mux (100 Erlang) has to be deducted (2.2% CPU load per DTC, measured on a B7.2 BSC).
Table 85: Added load on DTC for TRKDH for one GPU (EGPRS penetration rate of 30%)
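The body of Table 85 did not survive in this copy of the document. Its per-DTC figures can nevertheless be recomputed from the 575 ms per-GPU cost of Table 84, as reused in Tables 86 and 87; a minimal sketch:

```python
# Per-DTC TRKDH load: the 575 ms per-GPU cost (Table 84) is spread over
# 4 DTCs per Ater-mux; expressed as % of one DTC CPU second.
TRKDH_MS_PER_GPU = 575.0
DTC_PER_AMUX = 4

for amux in (1, 2):
    load = TRKDH_MS_PER_GPU / (DTC_PER_AMUX * amux) / 1000 * 100
    print(amux, f"{load:.1f}%")   # -> 14.4% (1 Amux), 7.2% (2 Amux)
```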
The load added to all DTCs by the TRKDH function is relatively high, especially under the assumption that the maximum rate of transmission allocation requests can be reached with only one Ater-mux per GPU. This may cause problems on already loaded DTCs.
The additional trunk handler load with EDGE is added on all DTCs. When EGPRS is not activated or has a lower penetration rate, the added DTC load will be lower than estimated below. Also, the more Ater-mux per GPU are configured, the more the CPU load due to CS traffic is reduced.
Function                                          BSSAP    TCH-RM   SS7      GSL
Load with CS at max capacity (w/o TRKDH)          35.3%    54.1%    43.2%    2.4%
PS TRKDH load (1 Amux per GPU, 30% EGPRS)         14.4%    14.4%    14.4%    14.4%
PS TRKDH load (2 Amux per GPU, 30% EGPRS)         7.2%     7.2%     7.2%     7.2%
Added PS load, other functions (1 Amux per GPU)   2.6%     0.0%     0.0%     42.6%
Added PS load, other functions (2 Amux per GPU)   -2.9%    -5.2%    0.0%     42.6%
Resulting load (1 Amux per GPU)                   52.2%    68.5%    57.6%    59.4%
Resulting load (2 Amux per GPU)                   39.6%    56.1%    50.4%    52.2%
Table 86: DTC load on a type 6 BSC with 3 GPUs and an EGPRS penetration rate of 30%
Function                                          BSSAP    TCH-RM   SS7      GSL
Load with CS at max capacity (w/o TRKDH)          35.3%    54.1%    43.2%    2.4%
PS TRKDH load (1 Amux per GPU, 30% EGPRS)         14.4%    14.4%    14.4%    14.4%
PS TRKDH load (2 Amux per GPU, 30% EGPRS)         7.2%     7.2%     7.2%     7.2%
Added PS load, other functions (1 Amux per GPU)   5.5%     0.0%     0.0%     40.0%
Added PS load, other functions (2 Amux per GPU)   -6.9%    -10.4%   0.0%     40.0%
Resulting load (1 Amux per GPU)                   55.2%    68.5%    57.6%    56.8%
Resulting load (2 Amux per GPU)                   35.6%    50.9%    50.4%    49.6%
Table 87: DTC load on a type 6 BSC with 6 GPUs and an EGPRS penetration rate of 30%
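Each "Resulting load" row in Tables 86 and 87 is the CS baseline plus the PS TRKDH contribution plus the "other functions" delta; a quick check against the 1 Amux row of Table 87:

```python
# Resulting DTC load = CS baseline + PS TRKDH + other PS functions
# (Table 87, 1 Amux per GPU case).
baseline = {"BSSAP": 35.3, "TCH-RM": 54.1, "SS7": 43.2, "GSL": 2.4}
trkdh_1amux = 14.4
other_1amux = {"BSSAP": 5.5, "TCH-RM": 0.0, "SS7": 0.0, "GSL": 40.0}

for fn, base in baseline.items():
    print(fn, f"{base + trkdh_1amux + other_1amux[fn]:.1f}%")
# -> BSSAP 55.2%, TCH-RM 68.5%, SS7 57.6%, GSL 56.8%
```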
The DTC load is mainly increased by the trunk device handler function, which is distributed over all the DTCs. However, the DTC load is expected to remain below the 60% target, even when the trunk device handler load is high owing to a higher EGPRS MS penetration rate.
A TCU controls a maximum of 4 TRXs. These TRXs may belong to the same cell or to several distinct cells. The load added by GPRS in the TCU is of several kinds:
Load on the CCCH and SDCCH: the induced load varies with the cell size. The CCCH load is due to paging and TBF establishment on the CCCH, when no MPDCH is configured in the cell.
Paging: the PS paging load is very small compared to the CS paging load, so we consider it negligible. The number of CS pagings is not increased by the Gs interface, so there is no added paging load for GPRS in the TCU compared to CS traffic.
TBF establishment on CCCH: this depends on the number of GPRS users in the cell, hence on the cell size.
Suspend/resume: this load depends on the number of GPRS users in the cell. At TCU level, it depends on the number of SDCCHs mapped on the TCU (maximum 32 SDCCHs). As a simplification, we consider the global load in one cell; this tends to slightly overestimate the TCU load for large cells, where the computed load may in fact be distributed over several TCUs.
Transmission allocation/de-allocation: here we compare the load generated by transmission allocation and de-allocation for one TRX with the load generated if this TRX were used for CS services.
The cost of the above procedures in the TCU is given in the table below, with N_GIC = number of GICs per allocation or de-allocation request.
Procedure                                      CPU cost at TCU level (ms)
Suspend / resume                               3.27
UL TBF establishment                           8.62
DL TBF establishment                           2.68
Transmission allocation and de-allocation      4.37 + N_GIC * 7.23
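As an illustration, with the average of 10 GICs per request estimated earlier, the per-request transmission allocation cost at TCU level evaluates as follows:

```python
# TCU cost of one transmission allocation/de-allocation request
# (formula from the table above).
def tcu_alloc_cost_ms(n_gic):
    return 4.37 + n_gic * 7.23

print(tcu_alloc_cost_ms(10))   # 76.67 ms for the average 10-GIC request
```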
9.5.4.2 TCU load (PS service) for CCCH and SDCCH channels
The TCU load for PS services on the CCCH and SDCCH channels corresponds to the TBF establishment and suspend/resume procedures. In this case the TCU load depends on the cell size and adds to the CS load. The number of GPRS users in the cell depends on:
The total number of CS users, obtained from the following relation:
Total CS users = Cell capacity in Erlang / CS traffic per user in Erlang
The GPRS penetration ratio.
One suspend/resume occurs for each CS call made by a GPRS-attached user, so the suspend/resume rate is proportional to the number of CS call attempts per second (CAPS) in the cell. CAPS is given by the relation:
CAPS = Cell traffic in Erlang / average TCH holding time in seconds
The basic traffic hypotheses presented in Table 89 are applied to the cell. They are taken from the Alcatel GPRS traffic model, which is at this stage not validated by feedback from the operational field.
Parameter                                                     Value
Transactions per PS user at BH                                1
Number of UL accesses on CCCH per transaction                 5
Number of suspend/resume per CS call attempt for GPRS users   1
GPRS penetration rate among CS subscribers                    30%
Average TCH holding time for CS calls (s)                     50
CS traffic per subscriber (mErl)                              15
Table 89: Basic traffic hypotheses
From Table 89, we compute in Table 90 the PS cell load and the induced TCU load, depending on the cell size and on whether one or two CCCHs are mapped on the TCU.
Cell size                               2 TRX   4 TRX   8 TRX   12 TRX   16 TRX
Erlangs per cell                        9       20      46      75       100
CS users in cell                        600     1333    3067    5000     6667
GPRS users in cell                      180     400     920     1500     2000
CS call attempts per second (CAPS)      0.18    0.4     0.92    1.5      2
Suspend / resume per second             0.054   0.12    0.28    0.45     0.60
UL TBF/s per cell on CCCH               0.25    0.56    1.28    2.08     2.78
DL TBF/s per cell on CCCH               0.00    0.00    0.00    0.00     0.00
Suspend / resume cost (ms)              0.18    0.39    0.90    1.47     1.96
UL TBF establishment cost (ms)          2.16    4.79    11.01   17.96    23.94
DL TBF establishment cost (ms)          0.00    0.00    0.00    0.00     0.00
Induced TCU load for 1 CCCH (ms)        2.33    5.18    11.92   19.43    25.91
Added TCU load if 1 CCCH mapped [%]     0.2%    0.5%    1.2%    1.9%     2.6%
Induced TCU load for 2 CCCH (ms)        4.49    9.97    22.93   37.39    49.85
Added TCU load if 2 CCCH mapped [%]     0.4%    1.0%    2.3%    3.7%     5.0%
Table 90: PS cell load and induced TCU load depending on the cell size
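As a cross-check, the 8 TRX column of Table 90 can be reproduced from the Table 89 hypotheses and the per-procedure costs above; a minimal sketch:

```python
# Reproduce the 8 TRX column of Table 90 from the Table 89 hypotheses.
erlang = 46.0            # cell traffic (8 TRX column)
merl_per_user = 15.0     # CS traffic per subscriber (mErl)
gprs_rate = 0.30         # GPRS penetration among CS subscribers
tch_hold_s = 50.0        # average TCH holding time (s)
ul_access_per_hour = 5   # UL accesses on CCCH per PS transaction, 1 transaction/BH

cs_users = erlang / (merl_per_user / 1000)              # -> 3067
gprs_users = gprs_rate * cs_users                       # -> 920
caps = erlang / tch_hold_s                              # -> 0.92 call attempts/s
susp_per_s = gprs_rate * caps                           # -> 0.28 suspend/resume per s
ul_tbf_per_s = gprs_users * ul_access_per_hour / 3600   # -> 1.28 UL TBF/s

tcu_ms = susp_per_s * 3.27 + ul_tbf_per_s * 8.62        # per-procedure TCU costs
print(f"{tcu_ms:.2f} ms -> {tcu_ms / 10:.1f}% TCU load")  # ~11.92 ms -> 1.2%
```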
END OF DOCUMENT