DANIELE GIRELLA
Downlink TCP Proxy Solutions over HSDPA with Multiple Data Flow
February 2007
Abstract
In recent years, several proxy solutions have been proposed to improve the performance of TCP over wireless links. The wide popularity of this protocol has pushed for its adoption also in communication contexts, such as wireless systems, where the protocol was not intended to be applied. This is the case of High Speed Downlink Packet Access (HSDPA), an enhancement of third generation wireless systems that provides control mechanisms to increase system performance. Despite shorter end-to-end delays and more reliable packet transmission, solutions to improve TCP over HSDPA are still necessary. The goal of this Master thesis project is to explore the possibility of designing TCP proxy solutions to enhance users' data rates over HSDPA. As a relevant part of our activity, we have implemented a TCP proxy solution over HSDPA in the ns2 simulation environment, extending the EURANE simulator. EURANE has been developed within the SEACORN European project, and it introduces three additional nodes to the existing UMTS modules for ns2: the Radio Network Controller, the Base Station and the User Equipment. The functionality of these additional nodes allows for the support of the new features introduced by HSDPA. Our extension of the EURANE simulator includes all of these new HSDPA features, a proxy solution, as well as some TCP enhancing protocols (such as Eifel). The simulator allows for performance comparison of existing TCP solutions over wireless and the proxy we have studied in this thesis. An analysis of the effects of multi-user data flows on TCP performance has also been carried out.
Introduction
HSDPA (High Speed Downlink Packet Access) represents a new high-speed data transfer feature whose aim is to empower UMTS downlink data rates. The need for increasing downlink data rates is due to the spread of new 3G mobile services - such as web browsing, streaming live video, network gaming - that require high downlink resources and short latency. The impressive increase in data rate is achieved by implementing a fast and complex channel control mechanism based upon short physical layer frames, Adaptive Modulation and Coding (AMC), fast Hybrid-Automatic Repeat reQuest (H-ARQ) and fast scheduling. The HSDPA functionality defines three new channel types: High-Speed Downlink Shared Channel (HS-DSCH), High-Speed Shared Control Channel (HS-SCCH) and High-Speed Dedicated Physical Control Channel (HS-DPCCH). HS-DSCH is multiplexed both in time and in code. In HSDPA each TTI lasts 2 ms, compared to 10 ms (or more) in UMTS. This reduction of TTI size permits a shorter round trip delay between the User Equipment and the Node B, and improves the link adaptation rate and the efficiency of the AMC. The distinctive characteristic of 3rd Generation wireless networks is packet data services. The information provided by these services is, in the majority of cases, accessible on the Internet, which almost entirely works with TCP traffic. Thus, there is a wide interest in extending TCP applications to mobile and wireless networks. The main problem of extending TCP over wireless networks is that it has been designed for wired networks, where packet losses are almost negligible and where delays are mainly caused by congestion. Instead, in wireless networks the main source of packet losses is link level errors of the radio channel, which may seriously degrade the achievable throughput.
It is well known that the main problem with TCP over networks having both wired and wireless links is that packet losses are mistaken by the TCP sender as being due to network congestion. The consequences are that TCP drops its transmission window and often experiences timeouts, resulting in degraded throughput. The proposals to optimize TCP for wireless links can be divided into three categories: link layer, end-to-end and split connections. Link layer solutions (such as the Snoop Protocol) try to reduce the error rate of the link through some kind of retransmission mechanism. As the data rate of the wireless link increases, there will be more time for multiple link level retransmissions before a timeout occurs at the TCP layer, making link layer solutions more viable. End-to-end solutions (such as the Eifel Protocol) try to modify the TCP implementation at the sender and/or receiver and/or intermediate routers, or to optimize the parameters used by the TCP connection to achieve good performance. Split connection solutions (such as proxy solutions) separate the TCP connection used on the wireless link from the one used on the wired link. The optimization procedure can then be carried out separately on the wired and wireless parts. Chapter 1 introduces the High-Speed Downlink Packet Access concept and its main new features, such as the new channel types, Adaptive Modulation and Coding, Hybrid Automatic Repeat reQuest and fast scheduling. The last section of the chapter introduces the proposed evolution of HSDPA. Chapter 2 gives a TCP overview covering the architecture of this protocol, its problems over 3G networks and a short description of some TCP versions. Chapter 3 introduces some TCP enhancing solutions, such as the Eifel and Snoop protocols and proxy and flow aggregation solutions. In Chapter 4, using the network simulator ns-2 and an HSDPA implementation called EURANE, a comparative study of all the above solutions in an HSDPA scenario is provided.
Acknowledgements
A thesis is the result of several years of study and hard work. Each of these years is marked by bad and good days, and each of these days is marked by bad and good moments. Along the way, one meets a lot of people that, in one way or another, influence our life at that time. Many people have been a part of my graduate education, as teachers, friends, and workmates. To all of them I want to say thank you. First of all, I want to express my gratitude to my supervisor, Carlo Fischione, for the guidance, the support, and the many enlightening meetings he has provided me during this work. Thanks also for proposing this master thesis project to me. I am very grateful to my Swedish examiner, Karl H. Johansson, and to my Italian examiner, Fortunato Santucci, for putting their faith in me and for giving me the opportunity of doing my thesis in a world-class research group such as the Automatic Control Group at KTH. Thanks to Pablo Soldati for his willingness and for giving me countless and priceless pieces of advice during all my stay in Sweden. Thanks also to Alberto Speranzon for helping me when I was in trouble with some control systems and to Niels Moller for helping me with ns2. Now is the moment to thank all those that have left a mark on my life during my five years of studying at the University of L'Aquila. My first thought goes to Marco Fiorenzi, the best workmate I could have wished for. He has always spurred me on to do my best, and to do it in that moment. Marco has been a perfect workmate, a fantastic fellow traveller during the months we stayed in Sweden but, first of all, he has been a real friend. It is thanks to him that I am here now and that I have already finished my studies. Thanks to Gianluca Colantoni for his priceless friendship, for all the amusing and unique moments we have spent together and for the large heart he has always demonstrated to possess. A special thought goes to Maria Ranieri, whose role in my life during all these years is hard to explain in words. The simplest thing I can say is that she was there, always, and she has always given me much more than I deserved. Thanks also to Davide and Matteo Pacifico for their support, their willingness and their unique capacity to solve every kind of problem I had. I would also like to thank Massimo Paglia. Massimo has been a competent workmate, a wise interlocutor and an excellent companion for enjoying life. Finally, a special thanks to those closest to me. Arianna, who shared my happiness, and made me happy. Thanks for the love, patience, understanding, and for putting your unreserved confidence in me. Thanks for being such a special person. My last (but not least!) thought goes to my family. I want to thank my father Gabriele, my mother Uliana, and my sister Silvia for their understanding, endless patience and encouragement when it was most required. It is only thanks to them that I have achieved this goal, and only thanks to them that I am what I am today.
Contents

1 HSDPA Concept
   Channel Structure
   New Features
      Adaptive Modulation and Coding
      Fast Hybrid Automatic Repeat reQuest
      Fast Scheduling
   HSUPA
2 TCP Overview
3 TCP Enhancing Solutions
   Proxy Solution
   Flow Aggregation
   Eifel Protocol
   Snoop Protocol
   Further Enhancing Protocols
4 Simulation
Conclusions
References
List of Figures

1.1  HS-PDSCH channel time and code multiplexing
1.2  HS-SCCH frame structure
1.3  HS-DPCCH frame structure [1]
1.4  HSDPA channel functionality
1.5  HSDPA physical layer
1.6  HSDPA UE categories
1.7  IR and CC state diagrams
1.8  An example of Chase Combining retransmission
1.9  An example of Incremental Redundancy retransmission
1.10 HSUPA peak throughput rates
2.1  TCP slow start and congestion avoidance phase
2.2  TCP fast retransmit and fast recovery phase
2.3  Mean value Ns as a function of BLER [26]
2.4  Variance as a function of BLER [26]
3.1  Proxy solution architecture
3.2  RNF signalling
3.3  TCP flow aggregation scheme [30]
3.4  Sample logical aggregate for a given Mobile Host [30]
3.5  Eifel procedure [35]
3.6  Snoop procedure [39]
4.1  UE side MAC architecture [50]
4.2  UTRAN side overall MAC architecture [50]
4.3  Main characteristics of EURANE's schedulers [52]
4.4  Overview of physical layer model used in EURANE [52]
4.5  Simulation scenario
4.6  Available link bandwidth
4.7  Network architecture
4.8  UEs' throughput in the simple scenario
4.9  Servers' congestion window in simple and RNFProxy scenarios
4.10 Trends obtained in simple scenario setting servers' cwnd to 19
4.11 Throughput improvements by adding Eifel and Snoop protocols
4.12 UEs' throughput in RNFProxy scenario
4.13 Throughput improvements by adding Eifel and Snoop protocols to RNFProxy scenario
4.14 UEs' throughput in RNFProxy scenario with both Eifel and Snoop protocols
4.15 Comparison between throughput trends in simple scenario and in RNFProxy scenario
4.16 Comparison between throughput trends in RNFProxy scenario (with and without enhancing protocols)
4.17 Comparison between the throughput experienced interposing a RNFProxy and that experienced with a SimpleProxy
List of Tables

1.1 2G to 3G throughput comparison
1.2 Comparison between DSCH and HS-DSCH basic properties
2.1 TCP versions comparison
4.1 Scenarios characteristics
4.2 Simulation parameters
4.3 Summary of simulation results
List of Abbreviations

3G        Third Generation
3GPP      3rd Generation Partnership Project
ACB       Aggregate Control Block
ACK       Acknowledgment
AMC       Adaptive Modulation and Coding
ARQ       Automatic Repeat Request
ATCP      Aggregate TCP
BCH       Broadcast Channel
BDP       Bandwidth Delay Product
BER       Bit Error Rate
BLER      Block Error Rate
C/I       Carrier to Interference Ratio
CC        Chase Combining
CDMA      Code Division Multiple Access
CQI       Channel Quality Indicator
CRC       Cyclic Redundancy Check
CWND      Congestion Window
DCH       Dedicated Channel
DPCH      Dedicated Physical Channel
DSCH      Downlink Shared Channel
DUPACK    Duplicate Acknowledgment
E-DCH     Enhanced Dedicated Channel
EDGE      Enhanced Data rates for Global Evolution
EURANE    Enhanced UMTS Radio Access Network Extension
FACH      Forward Access Channel
FACK      Forward Acknowledgment
FDD       Frequency Division Duplex
FH        Fixed Host
GGSN      Gateway GPRS Support Node
GPRS      General Packet Radio Service
GSM       Global System for Mobile communication
H-ARQ     Hybrid Automatic Repeat Request
HSDPA     High-Speed Downlink Packet Access
HS-DPCCH  High-Speed Dedicated Physical Control Channel
HS-DSCH   High-Speed Downlink Shared Channel
HS-PDSCH  High-Speed Physical Downlink Shared Channel
HS-SCCH   High-Speed Shared Control Channel
HSPA      High-Speed Packet Access
HSUPA     High Speed Uplink Packet Access
IR        Incremental Redundancy
LTE       Long Term Evolution
MAC       Medium Access Control
MAC-b     Medium Access Control for BCH
MAC-c     Medium Access Control for PCH
MAC-d     Medium Access Control for DCH
MAC-hs    Medium Access Control high-speed
MAC-sh    Medium Access Control for DSCH
MCS       Modulation and Coding Scheme
MH        Mobile Host
MIMO      Multiple Input Multiple Output
MSR       Mobile Support Router
MSS       Maximum Segment Size
OFDM      Orthogonal Frequency Division Multiplexing
OFDMA     Orthogonal Frequency Division Multiplexing Access
PCH       Paging Channel
PF        Proportional Fair
QAM       Quadrature Amplitude Modulation
QPSK      Quadrature Phase Shift Keying
RACH      Random Access Channel
RLC       Radio Link Control
RNC       Radio Network Controller
RNF       Radio Network Feedback
RR        Round Robin
RTO       Retransmission Timeout
RTT       Round Trip Time
RWND      Receiver Window
SACK      Selective Acknowledgment
SAW       Stop And Wait
SDMA      Space Division Multiple Access
SF        Spreading Factor
SGSN      Serving GPRS Support Node
SH        Supervisory Host
SIR       Signal to Interference Ratio
SSTHRESH  Slow Start Threshold
SYN       Synchronize
TCP       Transmission Control Protocol
TDD       Time Division Duplex
TLE       Transmission Layer Efficiency
TTI       Transmission Time Interval
UDP       User Datagram Protocol
UE        User Equipment
UMTS      Universal Mobile Telecommunication System
UTRAN     UMTS Terrestrial Radio Access Network
VOIP      Voice over IP
WCDMA     Wideband Code Division Multiple Access
WLAN      Wireless Local Area Network
Chapter 1
HSDPA Concept
HSDPA (High Speed Downlink Packet Access) represents a new high-speed data transfer feature released by the 3rd Generation Partnership Project (3GPP) with the aim of empowering UMTS downlink data rates. The need for increasing downlink data rates is due to the spread of new 3G mobile services - such as web browsing, streaming live video, network gaming - which require high downlink resources, whereas the uplink is used only for control signalling. HSDPA offers a way to increase downlink capacity within the existing spectrum by a factor of 2 to 3 compared to 3G Release 99. In Table 1.1 a comparison among 2G (the basic GSM), 2.5G (GPRS and EDGE) and 3G (UMTS Rel. 99 and HSDPA Rel. 5) downlink data rates is shown. Another important enhancement introduced by HSDPA is a three- to five-fold sector throughput increase, which means more data users on a single frequency (or carrier). The impressive increase in data rate is achieved by implementing a fast and complex channel control mechanism based upon short physical layer frames (cf. sec. 1.2), Adaptive Modulation and Coding (AMC) (cf. sec. 1.3.1), fast Hybrid-Automatic Repeat reQuest (H-ARQ) (cf. sec. 1.3.2) and fast scheduling (cf. sec. 1.3.3).
[Table 1.1: 2G to 3G throughput comparison - typical maximum and theoretical peak downlink data rates for GSM, GPRS, EDGE, UMTS Rel. 99 and HSDPA Rel. 5.]
It is important to note that HSDPA is a pure access evolution without any core network impact, except for minor changes due to the higher bandwidth access. For instance, in 3GPP Rel. 5 the maximum throughput set in the signalling protocol has been increased from 2 Mbps to 16 Mbps in order to support the theoretical maximum HSDPA data rate (14.4 Mbps). It follows that the deployment of HSDPA is very cost effective, since the incremental cost is mainly due to Node B and Radio Network Controller (RNC) hardware/software upgrades, while the operator cost to provide data services is significantly reduced. In a typical dense urban environment, the operator cost to deliver a megabyte of data traffic is about three cents with HSDPA, while it increases to about seven cents with UMTS. This is due to the high improvements in spectral efficiency introduced by HSDPA.
1.2 Channel Structure
The HSDPA functionality defines three new channel types (see Fig. 1.4):
- High-Speed Downlink Shared Channel (HS-DSCH)
- High-Speed Shared Control Channel (HS-SCCH)
- High-Speed Dedicated Physical Control Channel (HS-DPCCH)

HS-DSCH is very similar to the DSCH transport channel defined in Rel. 99. HS-DSCH has been introduced in Rel. 5 as the primary radio bearer and its resources can be shared among all active HSDPA users in the cell. To obtain higher data rates and greater spectral efficiency, the fast power control and variable spreading factor of the DSCH are replaced in Rel. 5 by a short packet size, multicode operation, and techniques such as AMC and HARQ on the HS-DSCH. Another difference from the DSCH is that the scheduling for the HS-DSCH is done at the Node B rather than at the RNC. The HS-DSCH is mapped onto a pool of physical channels (i.e. channelization codes) denominated HS-PDSCHs (High Speed Physical Downlink Shared Channels), to be shared among all the HSDPA users in a time-multiplexed manner. HS-PDSCHs are multiplexed both in time and in code. In Rel. 5, timeslots have the same length as in Rel. 99 (0.67 ms) but, differently from the latter, where each Transmission Time Interval (TTI) consists of 15 slots (i.e. each TTI lasts 10 ms), in HSDPA each TTI consists of three slots (i.e. 2 ms). This reduction of TTI size permits a shorter round trip delay between the User Equipment (UE) and the Node B, and improves the link adaptation rate and the efficiency of the AMC. Within each 2 ms TTI, a constant Spreading Factor (SF) of 16 is used, with a maximum of 15 parallel HS-PDSCH channels. These channels may all be assigned to one user during the TTI, or may be split among several users (see Figure 1.1).
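These timing figures can be checked with simple arithmetic. The sketch below computes the symbol budget of a single HS-PDSCH code per TTI; the 3.84 Mcps chip rate is the standard WCDMA value, assumed here since the text does not state it explicitly.

```python
# Symbol budget of one HS-PDSCH code. The 3.84 Mcps chip rate is the
# standard WCDMA value (an assumption, not stated in the text above);
# the spreading factor and TTI length come from the text.
CHIP_RATE = 3_840_000            # chips per second (WCDMA)
SF = 16                          # fixed HSDPA spreading factor
TTI_MS = 2                       # 2 ms transmission time interval (3 slots)

symbol_rate = CHIP_RATE // SF                # symbols/s on one code
chips_per_tti = CHIP_RATE * TTI_MS // 1000   # chips sent in one TTI
symbols_per_tti = chips_per_tti // SF        # symbols per code per TTI
```

Each of the (up to 15) parallel codes therefore carries 480 symbols per 2 ms TTI; the modulation order and code rate then determine how many information bits those symbols convey.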
[Figure 1.1: HS-PDSCH time and code multiplexing - spreading codes shared among several users over consecutive TTIs.]
In order to support HS-DSCH operation, an HSDPA UE needs new control channels: the HS-SCCH in the downlink direction and the HS-DPCCH in the uplink direction. The HS-SCCH is a fixed rate (60 Kbps, SF=128) channel used for carrying downlink signalling between the Node B and the UE before the beginning of each scheduled TTI. This channel indicates to the UE when there is data on the HS-DSCH addressed to that specific UE, and gives the UE the fast changing parameters that are needed for HS-DSCH reception. This includes HARQ-related information and the parameters of the HS-DSCH transport format selected by the link adaptation mechanism (see Figure 1.2).
[Figure 1.2: HS-SCCH frame structure, with Tslot = 0.67 ms.]
The HS-DPCCH (SF=256) is a low bandwidth uplink channel used to carry both ACK/NACK signalling, indicating whether the corresponding downlink transmission was successfully decoded, and the Channel Quality Indicator (CQI) used to achieve link adaptation. To aid the power control operation of the HS-DPCCH, an associated Dedicated Physical Channel (DPCH) is run for every user (see Figure 1.3). Figure 1.5 describes the downlink and uplink channel structure of HSDPA.
[Figure 1.5: HSDPA channel structure - CQI reports and ACK/NACKs on the uplink HS-DPCCH, downlink transfer information on the HS-SCCH, and data transfer on the HS-DSCH.]
1.3 New Features
As mentioned in the previous section, HSDPA introduces three new features:
- Adaptive Modulation and Coding (AMC)
- Hybrid Automatic Repeat reQuest (HARQ)
- Fast Scheduling
1.3.1 Adaptive Modulation and Coding
Adaptive Modulation and Coding (AMC) represents a fundamental feature of HSDPA. It consists of continuously optimizing the modulation scheme, the code rate, the number of codes employed and the transmit power per code. This optimization is based on various sources [2]:

- Channel Quality Indicator (CQI): the UE sends in the uplink a report denominated CQI that provides implicit information about the instantaneous signal quality received by the user. The CQI specifies the modulation, the number of codes and the transport block size the UE can support with a detection error no higher than 10% [3]. This error refers to the first transmission and to a reference HS-PDSCH power. The RNC commands the UE to report the CQI every 2, 4, 8, 10, 20, 40, 80 or 160 ms [4], or to disable the report. In [3], the complete set of reference CQI reports is defined.

- Power Measurements on the Associated DPCH: every user to be mapped onto the HS-PDSCH runs a parallel DPCH for signalling purposes, whose transmission power can be used to gain knowledge about the instantaneous status of the user's channel quality. This information may be employed for link adaptation [5] as well as for packet scheduling. The advantages of using this information are that no additional signalling is required, and that it is available on a slot basis. However, it is limited to the case where the HS-DSCH and the DPCH apply the same type of detector (e.g. a conventional Rake), and cannot be used when the associated DPCH enters soft handover.

- Hybrid ARQ Acknowledgements: the acknowledgements of the HARQ protocol may provide an estimation of the user's channel quality too, although this information is expected to be less frequent than the previous ones because it is only received when the user is served. Hence, it does not provide instantaneous channel quality information. Note that it also lacks the channel quality resolution provided by the two previous metrics, since a single information bit is reported.

- Buffer Size: the amount of data in the Medium Access Control (MAC) buffer could also be applied, in combination with the previous information, to select the transmission parameters.

HSDPA uses higher order modulation schemes such as 16-quadrature amplitude modulation (16-QAM) besides the existing QPSK used for Rel. 99 channels. The modulation to be used is adapted according to the radio channel conditions. The HS-DSCH encoding scheme is based on the Rel. 99 rate 1/3 turbo encoder, but adds rate matching with puncturing and repetition to improve the granularity of the effective code rate (1/4, 1/2, 5/8, 3/4). Different combinations of modulation and channel coding rate can be used to provide different peak data rates. In HSDPA, users close to the Node-B are generally assigned a higher modulation with higher code rates (e.g. 16-QAM and 3/4 code rate), and both decrease as the distance between the UE and the Node-B increases. An HSDPA-capable UE can support the use of 5, 10 or 15 multi-codes. When a UE receives 15 multi-codes with a 16-QAM modulation scheme and no coding (effective code rate of one), the maximum peak data rate it can experience is 14.4 Mbps. Rel. 5 defines twelve new categories of HSDPA UEs according to the following parameters (see Figure 1.6):

- Maximum number of HS-DSCH multi-codes that the UE can simultaneously receive (5, 10 or 15).
- Minimum inter-TTI time, which defines the minimum time between the beginning of two consecutive transmissions to that UE. An inter-TTI of one means that the UE can receive HS-DSCH packets during consecutive TTIs (i.e. every 2 ms); an inter-TTI of two means that the scheduler would need to skip one TTI between consecutive transmissions to that UE.
- Maximum number of HS-DSCH transport block bits received within an HS-DSCH TTI. The combination of this parameter and the inter-TTI interval determines the UE peak data rate.
- Maximum number of soft channel bits over all the HARQ processes. A UE with a low number of soft channel bits will not be able to support Incremental Redundancy (cf. sec. 1.3.2) for the highest peak data rates, and its performance will thus be slightly lower than for a UE supporting a larger number of soft channel bits.
- Supported modulations (QPSK only, or both QPSK and 16-QAM).

AMC provides a link adaptation functionality: the Node B is in charge of adapting the modulation, the coding format, and the number of multi-codes to the instantaneous radio conditions.
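As a hedged sanity check on the category peak rates, the peak data rate can be sketched as codes x per-code symbol rate x bits per symbol x code rate. The 3.84 Mcps chip rate is the standard WCDMA value (assumed), and the 3/4 code rate used for category 12 below is an illustrative assumption, not taken from the text.

```python
# Sketch: peak data rate of an HSDPA UE from its AMC parameters.
# Assumptions: standard WCDMA chip rate (3.84 Mcps) and, for the
# category-12 example, an illustrative 3/4 effective code rate.
def peak_rate_bps(codes, bits_per_symbol, code_rate,
                  sf=16, chip_rate=3_840_000):
    """codes x (chip_rate/sf) symbols/s x modulation bits x code rate."""
    return codes * (chip_rate // sf) * bits_per_symbol * code_rate

# Category 10: 15 codes, 16-QAM (4 bits/symbol), effective code rate 1.
cat10 = peak_rate_bps(15, 4, 1.0)
# Category 12: 5 codes, QPSK (2 bits/symbol), assumed 3/4 code rate.
cat12 = peak_rate_bps(5, 2, 0.75)
```

With these parameters the sketch reproduces the 14.4 Mbps peak quoted above for category 10 and the 1.8 Mbps listed for category 12.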
Figure 1.6: HSDPA UE categories.

UE category | Modulation      | Max. peak data rate
1           | QPSK & 16-QAM   | 1.2 Mbps
2           | QPSK & 16-QAM   | 1.2 Mbps
3           | QPSK & 16-QAM   | 1.8 Mbps
4           | QPSK & 16-QAM   | 1.8 Mbps
5           | QPSK & 16-QAM   | 3.6 Mbps
6           | QPSK & 16-QAM   | 3.6 Mbps
7           | QPSK & 16-QAM   | 7.2 Mbps
8           | QPSK & 16-QAM   | 7.2 Mbps
9           | QPSK & 16-QAM   | 10.2 Mbps
10          | QPSK & 16-QAM   | 14.4 Mbps
11          | QPSK only       | 0.9 Mbps
12          | QPSK only       | 1.8 Mbps
1.3.2 Fast Hybrid Automatic Repeat reQuest
HSDPA uses the HARQ (Hybrid Automatic Repeat reQuest) retransmission mechanism with a Stop and Wait (SAW) protocol. The HARQ mechanism allows the UE to rapidly request retransmission of erroneous transport blocks until they are successfully received. HARQ functionality is implemented at the MAC-hs (Medium Access Control - high speed) layer, which is a new sub-layer for HSDPA. MAC-hs is terminated at the Node B, unlike RLC (Radio Link Control), which is terminated at the RNC (Radio Network Controller). This yields a shorter retransmission delay (< 10 ms) for HSDPA than for Rel. 99 (up to 100 ms). In order to make better use of the waiting time between acknowledgements, multiple processes can run for the same UE using separate TTIs. This is referred to as N-channel SAW (N = up to six for an advanced Node B implementation). In this way, while a channel is waiting for an acknowledgement, the remaining N - 1 channels continue to transmit. HSDPA supports both Chase Combining (CC) [6] and Incremental Redundancy (IR). CC consists in the retransmission from the Node B of the same set of coded symbols as the original packet. The decoder at the receiver combines these multiple copies of the transmitted packet, weighted by the received SNR, prior to decoding (see Figure 1.8). This type of combining provides time diversity and soft combining gain at a low complexity cost, and imposes the least demanding UE memory requirements of all Hybrid ARQ strategies. The combination process incurs a minor combining loss, found to be around 0.2-0.3 dB per retransmission [7]. The state diagram of Figure 1.7(a) summarizes how the Chase Combining algorithm works.
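The benefit of N-channel SAW can be illustrated with a back-of-the-envelope utilization model: each process sends for one TTI and then idles while waiting for its ACK/NACK. The 5-TTI acknowledgement turnaround used below is an illustrative assumption, not a 3GPP figure.

```python
# Sketch: fraction of TTIs carrying data toward one UE under
# N-channel Stop-and-Wait HARQ. Each process transmits in one TTI,
# then waits ack_delay_ttis TTIs for its ACK/NACK, so n processes
# can fill at most n out of every (1 + ack_delay) TTIs.
def saw_utilization(n_processes, ack_delay_ttis):
    cycle = 1 + ack_delay_ttis
    return min(1.0, n_processes / cycle)

single = saw_utilization(1, 5)  # plain SAW: only 1 TTI in 6 carries data
six    = saw_utilization(6, 5)  # 6-channel SAW keeps the channel full
```

This is why N up to six suffices: with six parallel processes, the N - 1 idle processes exactly cover the acknowledgement wait of the active one.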
[Figure 1.7: CC and IR HARQ state diagrams. Figure 1.8: an example of Chase Combining retransmission - the original data block and each retransmitted copy pass through error detection, with retransmission repeated while the block is in error.]
IR, on the other hand, sends different redundancy information during the retransmissions (see Figure 1.9). This leads to an incremental increase of the coding gain that can result in fewer retransmissions than for CC. IR is thus particularly useful when the initial transmission uses high coding rates (e.g. 3/4), but it implies higher memory requirements for the mobile receivers and a larger amount of control signalling compared to Chase Combining. Incremental Redundancy can be further classified into Partial IR and Full IR. Partial IR includes the systematic bits in every coded word, which implies that every retransmission is self-decodable, whereas Full IR only includes parity bits, and therefore its retransmissions are not self-decodable. According to [7], Full IR only provides a significant coding gain for effective coding rates higher than 0.4-0.5, because for lower coding rates the additional coding gain is negligible, since the coding scheme is based on a 1/3 coding structure. On the other hand, for higher effective coding rates the coding gain can be significant; for example, a coding rate of 0.8 provides around 2 dB gain in a Vehicular A (3 km/h) scenario with QPSK modulation. The state diagram of Figure 1.7(b) summarizes how the Incremental Redundancy algorithm works. For a performance comparison of HARQ with Chase Combining and Incremental Redundancy for HSDPA systems, see [8].
[Figure 1.9: an example of Incremental Redundancy retransmission - effective code rate after soft combining at the decoder stage: R = 1 for the original transmission, R = 1/2 after the 1st retransmission, R = 1/3 after the 2nd retransmission.]
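The effective code rates shown in Figure 1.9 (R = 1, 1/2, 1/3) follow from a simple idealized model in which every (re)transmission carries one block of coded bits for the same k information bits; a minimal sketch:

```python
# Idealized IR model: the original transmission is uncoded (R = 1),
# and each retransmission adds another k coded bits for the same k
# information bits, so the effective rate after soft combining is
# k / (transmissions * k) = initial_rate / transmissions.
def effective_code_rate(transmissions, initial_rate=1.0):
    return initial_rate / transmissions

rates = [effective_code_rate(t) for t in (1, 2, 3)]  # 1, 1/2, 1/3
```

Real IR uses rate matching over a 1/3 mother code, so the rate bottoms out at 1/3 rather than decreasing indefinitely; the sketch only covers the three steps of the figure.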
1.3.3 Fast Scheduling
The scheduler is a fundamental element of HSDPA, affecting both its behavior and its performance. At each TTI, the scheduler determines toward which terminal (or terminals) the HS-DSCH should transmit and, together with AMC, at which data rate. The HSDPA scheduler is located at the Node B. The algorithms used for scheduling are Round Robin (RR), Maximum Carrier to Interference (Max C/I) and Proportional Fair (PF). RR schedules users with a first-in first-out approach. This approach provides high fairness among all users, but at the same time it produces a reduction of the overall system throughput, since users can be served even when they are experiencing a weak signal.
Maximum C/I schedules only the users that are experiencing the maximum C/I during that TTI. This scheme provides the maximum throughput for the system, but it produces unfairness of treatment among users, penalizing those located at the cell edge. PF offers a good trade-off between RR (high fairness and low throughput) and Maximum C/I (low fairness and high throughput). PF schedules users according to the ratio between their instantaneous achievable data rate and their average served data rate.
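The PF rule just described can be sketched in a few lines: serve the user maximizing the ratio of instantaneous achievable rate to average served rate, then update the averages with an exponential moving average. The smoothing factor alpha below is an illustrative choice, not a value from the text.

```python
# Sketch of one Proportional Fair scheduling decision per TTI.
def pf_schedule(inst_rates, avg_rates, alpha=0.01):
    """inst_rates: instantaneous achievable rate per user (from CQI).
    avg_rates:  exponentially averaged served rate per user.
    Returns the scheduled user index and the updated averages."""
    # PF metric: instantaneous rate over average served rate.
    scores = [r / max(a, 1e-9) for r, a in zip(inst_rates, avg_rates)]
    chosen = scores.index(max(scores))
    # EMA update: only the scheduled user is actually served this TTI.
    new_avgs = [(1 - alpha) * a + alpha * (r if i == chosen else 0.0)
                for i, (r, a) in enumerate(zip(inst_rates, avg_rates))]
    return chosen, new_avgs

# A cell-edge user with a good channel relative to its own average
# beats a cell-center user with a better absolute channel.
user, avgs = pf_schedule([2.0, 1.0], [1.0, 0.2])
```

This is how PF realizes the trade-off above: users are served on the peaks of their own fading process, preserving multi-user diversity gain without starving cell-edge terminals.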
1.4
We have described how HSDPA Rel. 5 represents an evolution of WCDMA Rel. 99 consisting of the introduction of a high speed transport channel (HS-DSCH) and three new features: fast scheduling, fast link adaptation and fast Hybrid ARQ. The aim of these three new tools is to provide rapid adaptation to changing radio conditions. To achieve this aim, their functionalities are placed at the Node B instead of the RNC, as in WCDMA. As depicted in Table 1.2 [9], some CDMA features have been changed in HSDPA. In particular, Table 1.2 shows how the CDMA fast power control has been replaced by fast Adaptive Modulation and Coding (AMC), giving HSDPA a power efficiency gain due to the elimination of the power control overhead. In addition, AMC provides fast link adaptation, which follows the policy that the better the link conditions experienced by the terminal, the higher the data rate with which it is served. Another change concerns the Spreading Factor (SF): in CDMA it varies between 4 and 256, while in HSDPA it assumes a fixed value of 16. To support different data rates, HSDPA supports a wide combination of channel coding rates and modulation formats, while WCDMA implements only the combination TC=1/3 and QPSK. In order to increase the AMC efficiency and the link adaptation rate, the packet duration has been reduced from 10 or 20 ms (Rel. 99) to a fixed value of 2 ms (Rel. 5). To decrease the round trip time (RTT), i.e. the round trip delay, the MAC functionality of the HS-DSCH has been placed at the Node-B instead of at the RNC.
Another difference concerns the retransmission functionality. WCDMA Rel. 99 implements a simple ARQ scheme (the retransmitted packets are identical to those of the first transmission), while HSDPA Rel. 5 implements an HARQ which supports both Chase Combining (CC) [6] and Incremental Redundancy (IR). The last difference concerns the CRC policy: while in WCDMA the CRC is computed for each transport block, in HSDPA it is computed for each TTI (i.e. a single CRC covers all transport blocks in the TTI), with a consequent decrease of the overhead.
Table 1.2: Comparison between Rel. 99 DSCH and Rel. 5 HS-DSCH [9]

Feature                    | Rel. 99 DSCH         | Rel. 5 HS-DSCH
Variable spreading factor  | Yes (4 - 256)        | No (16)
Fast power control         | Yes (1500 Hz)        | No
Fast rate control          | No (QPSK, TC=1/3)    | Yes (AMC, 500 Hz)
Fast L1 HARQ               | No (~100 ms)         | Yes (~10 ms)
HARQ with soft combining   | No                   | CC or IR
TTI                        | 10 or 20 ms          | 2 ms
Location of MAC            | RNC                  | Node-B
CRC attachment             | per transport block  | per TTI
Peak data rate             | 2 Mbps               | 10 Mbps
HSUPA
Whereas HSDPA optimizes downlink performance, High Speed Uplink Packet Access (HSUPA), which uses the Enhanced Dedicated Channel (E-DCH), constitutes a set of improvements that optimize uplink performance. These improvements include higher throughputs, reduced latency, and increased spectral efficiency. HSUPA is standardized in Release 6. HSUPA will result in an approximately 85 percent increase in overall cell throughput on the uplink and an approximately 50 percent gain in user throughput. HSUPA also reduces packet delays. Such an improved uplink will benefit users in a number of ways. For instance, some user applications transmit large amounts of data from the mobile station, such as sending video clips or large presentation files. For future applications such as VoIP, the improvements will balance the capacity of the uplink with the capacity of the downlink. HSUPA achieves its performance gains through the following approaches: - An enhanced dedicated physical channel. - A short TTI, as low as 2 ms, which allows faster responses to changing radio and error conditions. - Fast Node-B-based scheduling, which allows the base station to efficiently allocate radio resources. - Fast Hybrid ARQ, which improves the efficiency of error processing. The combination of the short TTI, fast scheduling, and fast Hybrid ARQ also serves to reduce latency, which can benefit many applications as much as improved throughput. HSUPA can operate with or without HSDPA in the downlink, though it is likely that most networks will use the two approaches together. The improved uplink mechanisms also translate to better coverage and, for rural deployments, larger cell sizes. Apart from improving uplink performance, E-UL improves HSDPA performance by making more room for acknowledgment traffic and by reducing overall latency.
HSUPA can achieve different throughput rates based on various parameters, including the number of codes used, the spreading factor of the codes, the TTI value, and the transport block size in bytes, as illustrated in Figure 1.10.
Spreading Factor | Codes | TTI (ms) | Data rate
4                | 1     | 10       | 0.73 Mbps
4                | 2     | 10       | 1.46 Mbps
4                | 2     | 2        | 1.46 Mbps
4                | 2     | 10       | 1.46 Mbps
2                | 2     | 10       | 2.00 Mbps
2                | 2     | 2        | 2.90 Mbps
2                | 2     | 10       | 2.00 Mbps
2 + 4            | 2 + 2 | 10       | 2.00 Mbps
2 + 4            | 2 + 2 | 2        | 5.76 Mbps
The combination of HSDPA and HSUPA is called High-Speed Packet Access (HSPA).
receive diversity. Alternative advanced receiver approaches include interference cancellation and generalized rake receivers (G-Rake). Different vendors are emphasizing different approaches. However, the performance requirements for advanced receiver architectures are specified in 3GPP Release 6. The combination of mobile receive diversity and channel equalization (Type 3) is especially attractive, as it results in a large gain independently of the radio channel. What makes such enhancements attractive is that no changes are required to the networks, except increased capacity within the infrastructure to support the higher bandwidth. Moreover, the network can support a combination of devices, including both earlier devices that do not include these enhancements and those that do. Device vendors can selectively apply these enhancements to their higher performing devices. Another capability being standardized is Multiple Input Multiple Output (MIMO). MIMO refers to a technique that employs multiple transmit antennas and multiple receive antennas, often in combination with multiple radios and multiple parallel data streams. The most common use of the term MIMO applies to spatial multiplexing: the transmitter sends different data streams over each antenna. Whereas multipath is an impediment for other radio systems, MIMO actually exploits multipath, relying on signals travelling across different communication paths. This results in multiple data paths effectively operating somewhat in parallel and, through appropriate decoding, in a multiplicative gain in throughput. Tests of MIMO have proven very promising in WLANs operating in relative isolation, where interference is not a dominant factor. Spatial multiplexing MIMO should also benefit HSPA hotspots serving local areas such as airports, campuses, and malls, where the technology will increase capacity and peak data rates.
However, in a fully loaded network with interference from adjacent cells, overall capacity gains will be more modest, in the range of 20 to 33 percent over mobile receive diversity. Although MIMO can significantly improve peak rates, other techniques such as Space Division Multiple Access (SDMA) (also a form of MIMO) may be even more effective than MIMO for improving capacity in high spectral efficiency systems using a reuse factor of 1. 3GPP has enhanced the system to support SDMA operation as part of Rel. 6. In Rel. 7, Continuous Packet Connectivity enhancements reduce the
uplink interference created by the dedicated physical control channels of packet data users when they have no user data to transmit. This helps increase the limit on the number of HSUPA users that can stay connected at the same time. 3GPP currently has a study item referred to as HSPA Evolution or HSPA+ that is not yet in a formal specification development stage. The intent is to create a highly optimized version of HSPA that employs both Rel. 7 features and other incremental features such as interference cancellation and optimizations to reduce latency. The goals of HSPA+ are to:
- Exploit the full potential of a CDMA approach before moving to an OFDM platform in 3GPP LTE.
- Achieve performance comparable to Long Term Evolution (LTE) in 5 MHz of spectrum.
- Provide smooth interworking between HSPA+ and LTE that facilitates operation of both technologies. As such, operators may choose to leverage the SAE planned for LTE.
- Allow operation in a packet-only mode for both voice and data.
- Be backward compatible with previous systems while incurring no performance degradation with either earlier or newer devices.
- Facilitate migration from current HSPA infrastructure to HSPA+ infrastructure.
technology is about as efficient as OFDM for delivering peak data rates of about 10 Mbps in 5 MHz of bandwidth. However, achieving peak rates in the 100 Mbps range with wider radio channels would result in highly complex terminals and is not practical with current technology. It is here that OFDM provides a practical implementation advantage. Scheduling approaches in the frequency domain can also minimize interference, and hence boost spectral efficiency. On the uplink, however, a pure OFDMA approach results in a high Peak to Average Ratio (PAR) of the signal, which compromises power efficiency and ultimately battery life. Hence, LTE uses an approach called SC-FDMA, which has some similarities with OFDMA but will have a 2 to 6 dB PAR advantage over the OFDMA method used by other technologies such as IEEE 802.16e. LTE goals include:
- Downlink peak data rates up to 100 Mbps with 20 MHz bandwidth.
- Uplink peak data rates up to 50 Mbps with 20 MHz bandwidth.
- Operation in both TDD and FDD modes.
- Scalable bandwidth up to 20 MHz, covering 1.25 MHz, 2.5 MHz, 5 MHz, 10 MHz, 15 MHz, and 20 MHz in the study phase. 1.6 MHz wide channels are under consideration for the unpaired frequency band, where a TDD approach will be used.
- An increase in spectral efficiency over Rel. 6 HSPA by a factor of two to four.
- Reduced latency: a 10 ms round-trip time between user equipment and the base station, and less than 100 ms transition time from inactive to active.
The overall intent is to provide an extremely high-performance radio-access technology that offers full vehicular speed mobility and that can readily coexist with HSPA and earlier networks. Because of the scalable bandwidth, operators will be able to easily migrate their networks and users from HSPA to LTE over time. The impressive improvements in the achievable peak data rates due to LTE will lead, in the coming years, to the spread of rich multimedia services and
applications over wireless networks. Since these services require the use of TCP (Transmission Control Protocol), TCP issues, performance, and enhancing solutions over HSDPA networks will be extensively discussed in Chapter 2 and Chapter 3.
Chapter 2
TCP Overview
2.1
TCP Architecture
The distinctive characteristic of 3rd Generation wireless networks is packet data services. The information provided by these services is, in the majority of cases, accessible on the Internet. Since Internet communications consist almost entirely of TCP traffic, the research community is showing wide interest in extending TCP applications to mobile and wireless networks. TCP is a connection-oriented transport protocol which provides a reliable byte stream to the application layer [11]. Reliability is achieved using an ARQ mechanism based on positive acknowledgments. TCP provides transparent segmentation and reassembly of user data and handles flow and congestion control. TCP packets are cumulatively acknowledged when they arrive in sequence; out-of-sequence packets cause the generation of duplicate acknowledgments. TCP manages a retransmission timer which is started when a segment is transmitted. Retransmission timers are continuously updated based on a weighted average of previous round trip time (RTT) measurements, i.e. the time it takes from the transmission of a segment until its acknowledgment is received. The TCP sender detects a loss either when multiple duplicate acknowledgments (the default value is 3) arrive, implying that the next packet was lost, or when a retransmission timeout (RTO) expires. The RTO value is calculated dynamically
based on RTT measurements. This explains why accuracy in RTT measurements is critical: delayed timeouts slow down recovery, while early ones may lead to redundant retransmissions. A prime concern for TCP is congestion. Today all TCP implementations are required to use algorithms for congestion control, namely slow start, congestion avoidance, fast retransmit and fast recovery [12].
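The dynamic RTO computation is typically done with the Jacobson/Karels estimator standardized in RFC 6298; a minimal sketch, using the RFC's smoothing gains:

```python
ALPHA, BETA = 1 / 8, 1 / 4  # smoothing gains from RFC 6298

def update_rto(srtt, rttvar, rtt_sample):
    """One update of the smoothed RTT (srtt), its variation (rttvar)
    and the resulting retransmission timeout, per RFC 6298."""
    rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - rtt_sample)
    srtt = (1 - ALPHA) * srtt + ALPHA * rtt_sample
    rto = srtt + 4 * rttvar  # timeout covers srtt plus four deviations
    return srtt, rttvar, rto

# With srtt = 100 ms, rttvar = 10 ms and a perfectly on-time sample of
# 100 ms, the variation decays and the RTO shrinks toward srtt:
srtt, rttvar, rto = update_rto(100.0, 10.0, 100.0)
```

The 4x multiplier on the variation is what makes delayed ACKs survivable, and it is also why the delay spikes discussed later in this chapter can still defeat the estimator: a spike larger than four deviations triggers a spurious timeout.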
cwnd = cwnd + 1    (2.1)
In the slow start phase, the congestion window is increased by one segment for each acknowledgment received (equation (2.1)). This phase is used both when new connections are established and after retransmissions due to timeouts. The slow start phase causes an exponential increase of the congestion window, and it lasts until a timeout occurs or a threshold value (ssthresh) is reached. When the cwnd reaches the ssthresh value, the slow start phase ends and the congestion avoidance phase starts. While the slow start algorithm opens the congestion window quickly to reach the limit capacity of the link as rapidly as possible, the congestion avoidance algorithm is conceived to transmit at a safe operating point and to increase the congestion window slowly, probing the network for more bandwidth becoming available. In the congestion avoidance phase, the congestion window is increased by one packet per round trip time, which gives a linear increase of the window. More precisely, for each non-duplicate ACK received the cwnd is increased according to the following equation:
cwnd = cwnd + 1/cwnd    (2.2)
Equation (2.2) provides an acceptable approximation to the underlying principle of increasing cwnd by 1 full-sized segment per RTT [12].
When a timeout occurs, the ssthresh is reduced to one-half the current window size (equation (2.3)), the congestion window is reduced to one MSS (Maximum Segment Size), and the slow start phase is entered again.
ssthresh = cwnd/2    (2.3)
Figure 2.1 shows an example of how the congestion window changes during the slow start and the congestion avoidance phases. In this example the initial ssthresh is set to 16 and a timeout occurs after 8 round trip times. At that time, the cwnd has a value of 20, hence the new threshold after the timeout (new ssthresh) is set to 10.
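This example can be reproduced with a short simulation (window sizes in segments; the helper name is ours):

```python
def simulate_cwnd(num_rtts=8, ssthresh=16):
    """Congestion window evolution: exponential slow start up to
    ssthresh, then linear congestion avoidance, with a timeout
    after num_rtts round trip times."""
    cwnd, trace = 1, []
    for _ in range(num_rtts):
        if cwnd < ssthresh:
            cwnd = min(cwnd * 2, ssthresh)  # slow start: doubling per RTT
        else:
            cwnd += 1                       # congestion avoidance: +1 per RTT
        trace.append(cwnd)
    # Timeout: halve the window into ssthresh (eq. 2.3), restart slow start.
    ssthresh = cwnd // 2
    cwnd = 1
    return trace, cwnd, ssthresh

trace, cwnd, ssthresh = simulate_cwnd()
# trace grows 2, 4, 8, 16, then 17, 18, 19, 20; the timeout at cwnd = 20
# yields new ssthresh = 10 and cwnd back to one segment.
```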
Figure 2.1: Evolution of the congestion window during slow start and congestion avoidance (initial ssthresh = 16; after the timeout at the 8th RTT, new ssthresh = 10).
The congestion window is set to three segments more than ssthresh. These additional three segments account for the number of segments (three) that have left the network and which the receiver has buffered. For each additional duplicate acknowledgment received, the cwnd is incremented by one (cwnd = cwnd+1), as in the slow start phase, since each dupack indicates that one segment has left the network. The fast recovery phase ends when a non-duplicate acknowledgment arrives. The congestion window is then set to the same value as ssthresh, and it is incremented by one segment per RTT, as in the congestion avoidance phase (equation (2.2)). With fast retransmit and fast recovery, TCP is able to avoid unnecessary slow starts due to minor congestion incidents (dupacks are indicators of some kind of network congestion, but not as strict an indicator as a timeout).
Figure 2.2: Evolution of the congestion window with fast retransmit and fast recovery (after recovery the window restarts from the new ssthresh).
2.2
TCP has been designed for wired networks, where packet losses are almost negligible and where losses and delays are mainly caused by congestion. Instead, in wireless networks the main source of packet losses is the link level errors of the radio channel, which may seriously degrade the achievable throughput of the TCP protocol. Thus, TCP performance over wireless networks can differ from TCP performance over wired networks. The main problem with TCP performance in networks that have both wired and wireless links is that packet losses that occur because of bad channel conditions are mistaken by the TCP sender as being due to network congestion, causing it to drop its transmission window and resulting in degraded throughput. From a wireless performance point of view, flow control represents one of the most important aspects of TCP. Flow control is in charge of determining the load offered by the sender so as to achieve maximum connection throughput while preventing network congestion or receiver buffer overflow. The main characteristics of wireless networks that can affect TCP's performance are the following:

Block Error Rate. As mentioned above, in wired networks losses are mainly due to congestion caused by buffer overflows. Wireless networks are instead characterized by a high bit error rate (BER). If these errors are not corrected, they lead to a high block error rate (BLER). Since TCP flow and congestion control mechanisms assume that losses are only due to congestion, when packet losses due to corruption on the wireless link occur, the TCP congestion control mechanism reacts by reducing the cwnd and resetting the retransmission timer. This erroneous interpretation of errors leads to poor performance due to under-utilization of the bandwidth and to very high delay jitter.

Latency. Latency in 3G wireless networks is mainly due to transmission delays in the radio access network and to the extensive processing required at the physical layer.
Larger latency can be mistaken for congestion.
A delay spike is a sudden increase in the latency of the link [14]. The main causes of delay spikes are:
- Link layer recovery from an outage due to a temporary loss of radio coverage (e.g. driving into a tunnel).
- Inter-frequency handovers or inter-system handovers. Inter-frequency handovers occur when the UE is handed over to another operator's Node B that uses a different frequency; inter-system handovers occur when passing from one technology to another (e.g. from 2G to 3G).
- High priority traffic (e.g. voice) can block low priority applications (e.g. data connections) when terminals cannot handle both voice and data connections at the same time. In this case, low priority applications can be suspended so that high priority ones can be completed.
Delay spikes can cause spurious TCP timeouts (cf. sec. 3.3), unnecessary retransmissions and a multiplicative decrease in the cwnd size.

Serial Timeouts. When the connection is paused for a certain time (for example, due to hard handover), several retransmissions of the same segment can be lost during this pause. Since TCP uses an exponential backoff mechanism, when a timeout occurs TCP increases the retransmission timeout by some factor (usually, a doubling) before retransmitting the unacknowledged data. This increase lasts until the RTO reaches a limit value (usually, about a minute). This means that when the mobile resumes its connection, there is the possibility that no data will be transmitted for up to a minute, degrading performance drastically.

Data Rates. Data rates in wireless networks are very dynamic due to mobility, varying channel conditions, effects from other users, and even varying demands from the connection. Moreover, when the user moves into another cell he can experience a sudden change in the available data rate. An increase in the available bandwidth can lead to under-utilization of it due
to the TCP slow start phase. On the other hand, when the data rate decreases, the TCP congestion control mechanism takes care of it, but a sudden RTT increase can cause a spurious TCP timeout [14].
2.3
TCP Versions
In this section some different congestion control and avoidance mechanisms that have been proposed for TCP/IP protocols will be studied, namely: Tahoe, Reno, NewReno, Westwood, Vegas, SACK and FACK. Each of the above implementations suggests a different mechanism to determine when a segment should be retransmitted and how the sender should behave when it encounters congestion. In addition, they suggest what transmission pattern to follow to avoid congestion.
TCP Tahoe
TCP Tahoe refers to the TCP congestion control algorithm proposed in [15]. This implementation adds new algorithms and refinements to earlier implementations. The new algorithms include slow start, congestion avoidance and fast retransmit (cf. sec. 2.1). The refinements include a modification to the round trip time estimator used to set retransmission timeout values. The problem with Tahoe is that it takes a complete timeout interval to detect a packet loss. In addition, it performs slow start when a packet loss is detected, even if some packets can still flow through the network. This leads to an abrupt reduction of the flow.
TCP Reno
TCP Reno retains the enhancements incorporated into Tahoe, adding the fast recovery algorithm to the fast retransmit phase [16]. TCP Reno provides an important enhancement compared to TCP Tahoe, preventing the communication path (usually called the pipe) from going empty after fast retransmit, thereby avoiding the need for a slow start to re-fill it after a single packet loss.
Reno's fast recovery mechanism is optimized for the case when a single packet is dropped from a window of data, but it can suffer from performance problems when multiple packets are dropped from a window of data. In the case of multiple dropped packets, Reno's performance is almost the same as Tahoe's. This is due to the fact that the fast recovery mechanism implemented by TCP Reno can lead to a stall. Indeed, TCP Reno exits fast recovery when it receives a new partial ACK (i.e. a new ACK which does not acknowledge all outstanding data). That means that if many segments from the same window are lost, TCP Reno is pulled out of fast recovery too soon, and it may stall since no new packets can be sent.
TCP NewReno
NewReno [17] represents a slight modification of TCP Reno. It is able to detect multiple packet losses and thus is much more efficient than TCP Reno when they occur. NewReno, like Reno, enters fast retransmit when it receives multiple duplicate acknowledgments, but unlike Reno it does not exit the fast recovery phase until all data outstanding at the time it entered fast recovery has been acknowledged. This means that in NewReno partial ACKs do not take TCP out of fast recovery; instead they are treated as an indicator that the packet immediately following the acknowledged packet in the sequence space has been lost and should be retransmitted. Thus, when multiple packets are lost from a single window of data, NewReno can recover without a retransmission timeout, retransmitting one lost packet per round trip time until all of the lost packets from that window have been retransmitted. NewReno's main issue is that it takes one round trip time to detect each packet loss.
TCP Westwood
TCP Westwood represents a modified version of TCP Reno, in that it enhances the window control and backoff process [18]. The Westwood sender monitors the acknowledgment stream it receives and from it estimates the data rate currently achieved by the connection. Whenever the sender perceives a packet loss (i.e. a timeout occurs or 3 DUPACKs are received), it uses the bandwidth estimate to properly set the congestion window and the slow start threshold. By backing off to cwnd and ssthresh values that are based on the estimated available bandwidth (rather than simply halving the current values as Reno does), TCP Westwood avoids reductions of cwnd and ssthresh that would be excessive or insufficient. In this way TCP Westwood ensures both faster recovery and more effective congestion avoidance. Experimental studies reveal the benefits of the intelligent backoff strategy in TCP Westwood: better throughput, goodput and delay performance.
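Westwood's backoff can be sketched as follows: the new ssthresh is the estimated bandwidth-delay product expressed in segments. The function and parameter names are illustrative, not those of [18]:

```python
def westwood_backoff(bwe, rtt_min, mss, cwnd, timeout):
    """After a loss, set ssthresh from the estimated bandwidth-delay
    product instead of halving cwnd (TCP Westwood policy).

    bwe     -- estimated bandwidth, bytes/s (from the ACK stream)
    rtt_min -- smallest RTT observed, seconds
    mss     -- segment size, bytes
    """
    ssthresh = max(2, int(bwe * rtt_min / mss))  # pipe size in segments
    if timeout:
        cwnd = 1                    # timeout: restart from slow start
    else:
        cwnd = min(cwnd, ssthresh)  # 3 dupacks: continue at ssthresh
    return cwnd, ssthresh

# 1.2 MB/s estimated over a 100 ms minimum RTT with 1500-byte segments
# gives an 80-segment pipe; a dupack-triggered backoff from cwnd = 100
# therefore lands on 80 rather than Reno's blind 50.
cwnd, ssthresh = westwood_backoff(1_200_000, 0.1, 1500, 100, timeout=False)
```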
TCP SACK
TCP with Selective Acknowledgment represents an extension of TCP Reno and NewReno. It provides a solution both to the problem of detecting multiple lost packets and to that of retransmitting more than one lost packet per round trip time. TCP SACK requires that segments be acknowledged selectively rather than cumulatively. It uses the option field in the TCP header to store a set of properly received sequence numbers [19]. During fast recovery, SACK maintains a variable called pipe, which represents the estimated number of packets outstanding on the link. The sender only sends new or retransmitted data when the value of pipe is less than the cwnd. The pipe variable is incremented each time the sender sends a packet, and is decremented when the sender receives a duplicate ACK with a SACK option reporting that new data has been correctly received. When the sender is allowed to send a packet, it sends the next packet known to be missing at the receiver if such a packet exists; otherwise it sends a new packet. When a retransmitted packet is lost, SACK detects it through a classic RTO and then goes into slow
start. The sender only goes out of fast recovery when an ACK is received acknowledging all data that was outstanding when fast recovery was entered. Because of this, SACK appears closer to NewReno than to Reno, since partial ACKs do not pull the sender out of fast recovery.
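The pipe accounting during fast recovery can be sketched as follows (the class name is ours; real implementations also handle retransmission timeouts and partial ACKs):

```python
class SackPipe:
    """Track the estimated number of packets in flight during SACK
    fast recovery; the sender transmits only while pipe < cwnd."""

    def __init__(self, cwnd):
        self.cwnd = cwnd
        self.pipe = 0  # estimated packets outstanding on the link

    def can_send(self):
        return self.pipe < self.cwnd

    def on_send(self):
        """A new or retransmitted packet enters the network."""
        self.pipe += 1

    def on_dupack_with_sack(self):
        """A duplicate ACK whose SACK blocks report newly received
        data: one packet has left the network."""
        self.pipe -= 1

# With cwnd = 3 the sender fills the pipe with three packets, stalls,
# and may send again as soon as one SACK-bearing dupack arrives.
p = SackPipe(cwnd=3)
while p.can_send():
    p.on_send()
p.on_dupack_with_sack()
```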
TCP FACK
TCP with Forward Acknowledgment is an extension of TCP SACK. It has the same functionalities as TCP SACK but introduces some improvements over it:
- A more precise estimation of outstanding data. It uses the SACK option to better estimate the amount of data in transit [20].
- Data smoothing. It introduces a better way to halve the window when congestion is detected. When the cwnd is halved immediately, the sender stops transmitting for a while and then resumes when enough data has left the network. This unequal distribution of segments over one RTT can be avoided when the window is decreased gradually [20].
- A new slow start and congestion control. When congestion occurs, the window should be halved according to the multiplicative decrease of the correct cwnd. Since the sender identifies congestion at least one RTT after it happened, if during that RTT it was in slow start mode, then the current cwnd will be almost double the cwnd at the moment congestion occurred. Therefore, in this case, the cwnd is first halved to estimate the correct cwnd, which should then be further decreased.
TCP Vegas
In contrast to the TCP Reno algorithm, which induces congestion to learn the available network capacity, the Vegas algorithm anticipates the onset of congestion by monitoring the difference between the rate it is expecting to see and the rate it is actually realizing [21]. Vegas' strategy is to adjust the source's sending rate (i.e. the cwnd) in an attempt to keep a small number of packets buffered in the routers along the transmission path. The TCP Vegas sender stores the current
value of the system clock for each segment it sends. By doing so, it is able to know the exact RTT for each sent packet.
The main innovations introduced by TCP Vegas are the following:
- New retransmission mechanism. When a duplicate acknowledgment is received, the sender checks whether (current time - segment transmission time) > RTT. If so, the sender retransmits without waiting for the classic retransmission timeout or for three duplicate ACKs. To catch any other segments that may have been lost prior to the retransmission, when a non-duplicate acknowledgment is received, if it is the first or second one after a fresh acknowledgment, the sender again checks the timeout values and, if the segment time exceeds the timeout value, retransmits the segment without waiting for a duplicate ACK. In this way Vegas can detect multiple packet losses. Moreover, it only reduces its window if the retransmitted segment was sent after the last decrease. Thus it also overcomes Reno's shortcoming of reducing the congestion window multiple times when multiple packets are lost.
- New congestion control mechanism. TCP Vegas does not use segment losses to signal congestion. It determines congestion by calculating the difference between the measured throughput and the value it would achieve if the network were not congested. If that difference is smaller than a boundary, the window is increased linearly to make use of the available bandwidth; otherwise it is decreased linearly to prevent over-saturating the bandwidth. The throughput of an uncongested network is defined as the window size in bytes divided by the BaseRTT, which is the value of the RTT in an uncongested network.
- New slow start mechanism. The cwnd is doubled only every other RTT instead of every RTT. The reason for this modification is that when a connection starts for the first time the sender has no idea of the available bandwidth, so during the exponential increase it may overshoot the available bandwidth by a large amount, inducing congestion.
The slow start phase is terminated when a boundary value is reached in the difference between the current RTT and the last RTT. This represents a modification compared to other TCP versions, where the boundary is set on the cwnd size.
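Vegas' congestion-avoidance decision can be sketched as follows, with the usual alpha/beta thresholds of one and three segments (an assumed but common parameterization; [21] discusses the choice):

```python
def vegas_adjust(cwnd, base_rtt, rtt, alpha=1, beta=3):
    """One Vegas congestion-avoidance step: compare expected and
    actual throughput and nudge cwnd by at most one segment."""
    expected = cwnd / base_rtt            # throughput if uncongested
    actual = cwnd / rtt                   # actually measured throughput
    diff = (expected - actual) * base_rtt # extra segments queued in the network
    if diff < alpha:
        return cwnd + 1  # spare capacity: grow linearly
    if diff > beta:
        return cwnd - 1  # queues building up: back off linearly
    return cwnd          # between alpha and beta: hold steady

# With RTT equal to BaseRTT (no queueing) the window grows; when the
# measured RTT doubles, the estimated backlog exceeds beta and the
# window shrinks, all without waiting for a loss.
grown = vegas_adjust(10, base_rtt=0.1, rtt=0.1)
shrunk = vegas_adjust(10, base_rtt=0.1, rtt=0.2)
```

Note the contrast with Reno: the control signal here is the estimated number of packets sitting in router buffers, not a packet loss.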
[Table: feature-by-feature comparison of TCP Tahoe, Reno, NewReno, Westwood, SACK, FACK and Vegas (support for slow start, congestion avoidance, fast retransmit, fast recovery, and whether the retransmission and congestion control mechanisms are the normal ones, enhanced versions, or new mechanisms).]
2.4 Round Trip Time and mean number of retransmissions for TCP over 3G
A correct estimate of the round trip time is fundamental. The round trip time represents a figure of merit of any connection, since it gives an indication of how fast the transmitter can react to any event that occurs in the connection. It can be defined as the period elapsed from when the transmitter sends a packet until it receives the corresponding acknowledgement. With the purpose of accelerating the transmitter response time, the round trip time should be minimized as much as possible.
In HSDPA, the size of a TCP segment is 1500 bytes and each TTI lasts 2 ms. According to the modulation and coding scheme used on the radio interface, transmitting a TCP segment requires from 12 up to 60 TTIs. As is well known, the wireless channel presents variable characteristics both from the point of view of link conditions (expressed in terms of block error rate (BLER)) and from that of transmission time delay. Let [22] N_TTI(i) be the number of transmissions of TTI i due to HARQ, T_j the transmission time of a segment on the radio interface (it depends on the bit rate chosen by the scheduler), RTT_wired the average RTT of the wired part of the network, and n_s the number of TTIs needed to transmit a TCP segment when no errors occur on the radio interface. Then the round trip time (RTT) of the whole link (wired part plus wireless part) is given by:
RTT = [ (1/n_s) · Σ_{i=1..n_s} N_TTI(i) ] · T_j + RTT_wired    (2.4)
The term:

N_i = (1/n_s) · Σ_{i=1..n_s} N_TTI(i)    (2.5)
represents the number of transmissions of a TCP segment (N_i). Since errors on each TTI are independent and identically distributed (i.i.d.) [23], N_i can be modelled by a Gaussian variable. Then, also the RTT expressed by equation (2.4) can be modelled by a Gaussian variable. It is now possible to define the mean N_s [23] [22] [24] [25] and the variance σ² [23] [24] of N_i:
N_s = (1 + P_e − P_e·P_s) / (1 − P_e·P_s)    (2.6)

σ² = P_e·(1 − P_e + P_e·P_s) / (1 − P_e·P_s)²    (2.7)
where P_s is the probability of error after soft combining two successive transmissions of the same information block, and P_e is the probability of error after
decoding the information block, i.e. it represents the BLER. In this way, we have defined N_i ~ N(N_s, σ²). From Figure 2.3 and Figure 2.4 we can extract the N_s and σ² values corresponding to different values of BLER.
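Equations (2.6) and (2.7) are straightforward to evaluate numerically. The sketch below assumes N_s = (1 + Pe − Pe·Ps)/(1 − Pe·Ps) and σ² = Pe·(1 − Pe + Pe·Ps)/(1 − Pe·Ps)², which behave correctly in the limiting cases Pe = 0 (always exactly one transmission, zero variance) and Pe = 1, Ps = 0 (always exactly two transmissions, zero variance):

```python
def harq_stats(pe, ps):
    """Mean and variance of the number of transmissions of a TCP
    segment under HARQ with soft combining.

    pe -- probability of error after decoding (the BLER)
    ps -- probability of error after soft combining two
          successive transmissions of the same block
    """
    denom = 1 - pe * ps
    ns = (1 + pe - pe * ps) / denom          # mean, eq. (2.6)
    var = pe * (1 - pe + pe * ps) / denom**2 # variance, eq. (2.7)
    return ns, var

# Error-free channel: exactly one transmission, no spread.
# First transmission always fails, combining always succeeds:
# exactly two transmissions, again no spread.
assert harq_stats(0.0, 0.5) == (1.0, 0.0)
assert harq_stats(1.0, 0.0) == (2.0, 0.0)
```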
Chapter 3
3.1
Proxy Solution
Proxy solutions consist in splitting the connection between the sender (i.e. the server) and the terminal (i.e. the UE) by means of an interposed proxy. This
solution permits splitting the server-terminal connection into one connection between the server and the proxy, and another between the proxy and the terminal (see Figure 3.1). In this way, the server will continue to see an ordinary wired network, while changes to the system will be made only to the proxy and possibly to the terminal. This solution was introduced by [27] and it is also known as split TCP.
Figure 3.1: (a) Standard end-to-end TCP connection between the terminal and the server through the Node B and the RNC; (b) split TCP: one connection between the terminal and the proxy, and another between the proxy and the server.
An accurate study of proxy solutions over WCDMA networks is reported in [28], where it is shown how local knowledge (in the proxy) about the state of a TCP connection can be used to enhance performance by shortcutting ACK transmission or packet retransmission. Moreover, it demonstrates that the split TCP solution is particularly useful for radio links with high data rates, since they are characterized by a large bandwidth-delay product. The proxy solution used in this thesis is the one proposed by [29], which improves both the user experience of the wireless internet and the utilization of the existing infrastructure. The proxy-based scheme introduced in [29] uses a new custom protocol between the RNC and the proxy. This protocol provides information from the data-link layer within the RNC to the transport layer within the proxy. This
communication is called Radio Network Feedback (RNF) and it is sent via UDP (User Datagram Protocol). The RNF message is sent from the RNC to the proxy every time the available link bandwidth over the wireless channel is computed. The link bandwidth represents the instantaneous channel capacity of the wireless link, computed with a given frequency. When the proxy receives the RNF message, it takes appropriate action by adjusting the TCP window size. The computation of the cwnd in the proxy also takes into consideration the queue in the RNC. It is important to note that bandwidth variations act as a disturbance that can be measured but not affected, while the queue length is a parameter that can be affected. This is the reason why the part of the RNF message concerning the available bandwidth is a feed-forward signal, while the part concerning the queue length is a feedback signal. Figure 3.2 shows how the RNF signalling works.
[Figure 3.2: RNF signalling. The RNC, which queues data towards the Node B and UE over the variable-bandwidth wireless link, sends RNF messages to the RNFProxy, which recomputes and updates the cwnd towards the Server.]
3.2
Flow Aggregation
In conventional TCP implementations every connection is independent, and separate state information (such as cwnd, ssthresh and so on) is kept for each. However, since all TCP connections to a mobile host share the same wireless link, they are statistically dependent; thus flows to the same mobile host might share certain TCP state information. The solution proposed in [30] treats all the flows to the same mobile host as a single aggregate. The scheme is depicted in Figure 3.3. Treating all TCP flows to a particular mobile host as an
aggregate, it is possible to perform better scheduling and flow control in order to maximise link utilization, reduce latency, and improve fairness between flows. The introduced proxy can share state, including a single congestion window and RTT estimates, across all TCP connections within the aggregate. Sharing state information enables all the connections in an aggregate to have better, more reliable, and more recent knowledge of the wireless link. All the state information is then grouped together into one structure called the Aggregate Control Block (ACB). Details of this structure are given in Figure 3.4.
[Figure 3.3: Flow aggregation. Fixed hosts (S11, S12, ...) reach the mobile hosts (MH1 ... MHn) through the aggregation proxy and the base station; flow control is performed per aggregate (e.g. Aggregate_1).]
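Illustratively, the shared state grouped into the Aggregate Control Block can be sketched as follows (the field and method names are assumptions, not those of [30]): one congestion window and one RTT estimate serve every flow towards the same mobile host.

```python
# Illustrative sketch of an Aggregate Control Block: shared congestion
# window and RTT estimate for all flows to one mobile host.
from dataclasses import dataclass, field

@dataclass
class AggregateControlBlock:
    cwnd: int = 1          # single shared congestion window (segments)
    srtt_s: float = 0.0    # shared smoothed RTT estimate (seconds)
    flows: list = field(default_factory=list)  # per-connection identifiers

    def register(self, conn_id):
        self.flows.append(conn_id)

acb = AggregateControlBlock()
acb.register("web-1")
acb.register("ftp-1")
# both connections now consult the same cwnd/srtt_s instead of keeping
# private per-connection copies
```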
The wired proxy interface is called the AggregateTCP (ATCP) client, while the wireless one is called the ATCP sender. Packets are received by the ATCP client into small per-connection queues. The ATCP sender feeds these packets into a scheduler operating on behalf of the whole aggregate. A single congestion window is maintained for the whole aggregate. Every time the level of unacknowledged data on the wireless link drops one MSS below the current congestion window, the scheduler selects a connection with queued data from which
Figure 3.4: Sample logical aggregate for a given Mobile Host [30]
a further segment will be sent. During this selection, the scheduler must respect the mobile host's receive window for each of the individual flows. After transmission, packets are kept in a queue of unacknowledged data until they are acknowledged by the mobile host. In this way, in case of losses signalled by the mobile host or deduced from the expiry of the aggregate's retransmission timer, the ATCP sender can retransmit lost packets by retrieving them from this queue. Another characteristic of this solution is the early ACKing employed by the ATCP sender. The ATCP sender acknowledges packets received from hosts as soon as they arrive, before they are received by the destination end system. However, early acknowledgments are never used for FINs (i.e. for packets used to terminate the connection), and this mitigates the effect of this policy on TCP's end-to-end semantics. The connection scheduling strategies employed by this proxy can differ, depending on the nature of the incoming traffic. In this solution, the connection to transmit from is selected using a combination of priority-based and ticket-based stride scheduling [31]. Stride scheduling is a deterministic allocation mechanism for time-shared resources. Resources are allocated in discrete time slices. Resource rights are represented by tickets: abstract, first-class objects that can be issued in different amounts and passed between clients. Throughput rates for active clients are directly proportional to their ticket allocations, and client response times are inversely proportional to their ticket allocations. Three state variables are associated with each client: tickets, stride and pass. The tickets field specifies the client's resource allocation, relative to other clients. The stride field is inversely proportional to
tickets, and represents the interval between selections, measured in passes. The pass field represents the virtual time index for the client's next selection. Performing a resource allocation is very simple: the client with the minimum pass is selected, and its pass is advanced by its stride. If more than one client has the same minimum pass value, then any of them may be selected. These scheduling strategies permit giving strict priority to interactive flows (like telnet) while sharing out the remaining bandwidth between other applications (such as WWW, FTP and so on). To optimize link performance, the proxy uses the following three key mechanisms.

ATCP sender congestion window strategy. The poor performance of TCP over wireless networks is mainly due to under-utilization of the available bandwidth during the first few seconds of a connection, caused by the pessimistic nature of the slow start algorithm. The ATCP sender uses a fixed congestion window shared across all connections in the aggregate. The size of the window is fixed at a relatively static estimate of the link BDP (Bandwidth-Delay Product). The congestion window cannot grow beyond this value (this is called TCP cwnd clamping) and slow start is eliminated. Furthermore, a fair sharing of bandwidth among users is ensured by the underlying network. Once the mobile proxy has successfully sent an amount Cclamp of data, it enters a self-clocking state. In this state, it sends one segment (from whatever connection the scheduler has selected) each time it receives an ACK for an equivalent amount of data from the receiver. With Cclamp fixed at an ideal value, if there is data to transmit the link will never be under-utilised and the queuing at the CGSN gateway will be minimal. The value of Cclamp is usually about 30% higher than the calculated BDP. This excess is required due to link jitter, the use of delayed ACKs by the TCP receiver in the mobile host, and ACK compression occurring at the link layer.
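The stride scheduling mechanism described above can be sketched in a few lines (a minimal sketch; the priority layer the proxy combines it with is omitted, and the constant `STRIDE1` is an implementation convention):

```python
# Minimal stride scheduler: stride is inversely proportional to
# tickets, the client with the minimum pass is served, and its pass
# then advances by its stride.
STRIDE1 = 1 << 20   # large constant keeping strides integral

class Client:
    def __init__(self, name, tickets):
        self.name = name
        self.stride = STRIDE1 // tickets
        self.passv = self.stride    # virtual time of the next selection

def schedule(clients, slices):
    order = []
    for _ in range(slices):
        c = min(clients, key=lambda cl: cl.passv)
        order.append(c.name)
        c.passv += c.stride
    return order

# a client with three times the tickets is served three times as often
result = schedule([Client("telnet", 3), Client("ftp", 1)], 8)
print(result)
```

Over any window of time slices, each client's share of selections converges to its ticket share, which is exactly the proportional-throughput property stated above.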
ATCP client flow control scheme. As introduced in previous sections, when the proxy early-ACKs packets, it stores them in a buffer until they are successfully delivered to the mobile host. Since it commits buffer space in this way, the proxy must pay attention to how much data it accepts on each connection, to avoid being swamped. The amount of accepted data can be controlled through
the receive window it advertises to hosts. The proxy must then ensure that sufficient data from connections is buffered so that the link does not go idle unnecessarily (e.g. it may need to buffer more data from senders with long RTTs relative to other mobile hosts), and must limit the total amount of buffer space committed to each mobile host.
ATCP sender error detection and recovery. Over a wireless link, a packet loss can be due both to bursty radio losses and to cell reselection during the cell update procedure. TCP detects these losses through duplicate ACKs or timeouts. Since TCP does not know the nature of the losses, it reacts by invoking congestion control measures such as fast retransmit or slow start. Since link conditions return to a good state after the loss, the invocation of backoff often leads to under-utilization of the available bandwidth. The aim of this solution is to perform an aggressive recovery from transient losses, which keeps the link fully utilized. To achieve this objective TCP uses SACK signals, which allow the receiver to inform the sender of packets it has received out of order. In this way the sender can retransmit missing packets selectively.
3.3
Eifel Protocol
An important aim in wireless communications is the sender's ability to correctly estimate the round trip time and the retransmission timeout. This estimation can be inaccurate due to wireless link delay spikes, which can lead TCP to misestimate the RTT and consequently the RTO. Delay spikes (cf. sec. 2.2) are defined as situations where the round trip time suddenly increases for a short duration of time, and then drops back to the previous value. This can lead to two undesired events: spurious timeouts and spurious fast retransmits. As explained in sec. 2.1, the TCP sender uses two different error recovery strategies: timeout-based retransmission and dupack-based retransmission. The problem of spurious timeouts affects the first strategy, while spurious fast retransmits affect the latter. In dupack-based retransmission, a retransmission (known as fast retransmission) is triggered when three (this is the default threshold, but it can
be changed) successive dupacks for the same sequence number have been received. According to [32], we can define spurious timeouts as timeouts that would not have occurred had the sender waited longer. Since the TCP receiver generates a duplicate ACK for each segment that arrives out of order, a spurious fast retransmit can result if three or more data segments arrive out of order at the TCP receiver, and at least three of the resulting duplicate ACKs arrive at the TCP sender. Spurious timeouts and spurious retransmissions cause the so-called retransmission ambiguity.
The Eifel algorithm was designed with the specific aim of improving TCP performance in the presence of delay spikes [33]. This algorithm uses extra information in the ACKs to eliminate the retransmission ambiguity. In particular, it assigns to each segment and to the corresponding ACK a timestamp that allows the sender to distinguish the ACK for the original transmission from the ACK for its retransmission. The timestamp is a 12-byte field added to the header of the segment. When the sender detects a timeout or a triple dupack for a packet, it reacts by retransmitting the packet in question. During this operation, the sender stores both the timestamp of the first retransmission (irrespective of whether the retransmission is triggered by a timeout or a fast retransmission) and the size of the congestion window at that moment. Before performing the retransmission, the sender sets the congestion window to one segment. When the ACK for the retransmitted segment comes back, the sender compares the ACK's timestamp with the one it had stored previously. If the timestamp is smaller than the one stored, the sender concludes that the retransmission was spurious and therefore unnecessary. In this case, the sender restores the congestion window to the pre-retransmission value. Otherwise, if it detects that the retransmission was not spurious, it sets the congestion window to half the pre-retransmission value; in the case of two retransmissions, it is also halved. In the case of three or more retransmissions, the congestion window is set to one segment [34]. Figure 3.5 shows how the Eifel protocol works when timeouts occur (Figure 3.5(a)) and when ACKs arrive (Figure 3.5(b)).
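A hedged sketch of Eifel's decision on the ACK of a retransmitted segment (function and parameter names are hypothetical): an echoed timestamp older than the one saved at the first retransmission means the ACK belongs to the original transmission, so the retransmission was spurious and the saved congestion window can be restored.

```python
# Eifel decision rule sketch: compare the ACK's echoed timestamp with
# the timestamp saved at the first retransmission.
def eifel_on_ack(echoed_ts, first_retx_ts, saved_cwnd, retx_count):
    """Return the congestion window (in segments) to use after the ACK."""
    if echoed_ts < first_retx_ts:
        return saved_cwnd               # spurious retransmission: restore
    if retx_count <= 2:
        return max(1, saved_cwnd // 2)  # genuine loss: halve the window
    return 1                            # three or more retransmissions

print(eifel_on_ack(100, 200, 16, 1))   # spurious: window restored
print(eifel_on_ack(300, 200, 16, 1))   # genuine loss: window halved
```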
[Figure 3.5: The Eifel algorithm. (a) On a retransmission, the timestamp of the first retransmission is saved in tmst_first_retx together with cwnd and ssthresh, and cwnd is set to one segment; after three or more retransmissions the sender proceeds as normal TCP. (b) On an ACK, the echoed timestamp decides between a spurious retransmission (restore cwnd) and a genuine one (halve the cwnd).]
Although the Eifel algorithm represents a powerful solution against retransmission ambiguity, it presents two drawbacks. The first is the header overhead incurred by the additional 12 bytes required for the TCP timestamp option in the TCP header. This overhead reduces the transport layer efficiency (TLE), defined as the ratio of the bandwidth used by the transport layer segment payload to the total size of a segment:
TLE = (transport layer payload size) / (total segment size)    (3.1)
As shown in (3.1), the smaller the total size of a segment, the smaller its TLE [36]. The second drawback is that the Eifel algorithm improves TCP transmission performance only when the network presents delay spikes without packet losses. Conversely, in the case of delay spikes with packet losses, Eifel suffers from long transmission stalls, worsening transmission performance compared with other solutions (e.g. TCP Reno). A transmission performance improvement in the presence of delay spikes and several packet losses is achievable by combining Eifel with TCP NewReno [37].
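A quick numeric illustration of (3.1), assuming a 20-byte TCP header and a 1460-byte payload (a typical Ethernet-derived MSS); the 12-byte figure is the timestamp option overhead mentioned above.

```python
# Transport layer efficiency per (3.1): payload over total segment size.
def tle(payload, header=20, options=0):
    return payload / (payload + header + options)

print(round(tle(1460), 4))              # plain segment
print(round(tle(1460, options=12), 4))  # with the 12-byte timestamp option
```

The efficiency loss is under one percent for full-size segments, but grows for small segments, which is why the text notes that smaller segments have a smaller TLE.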
3.4
Snoop Protocol
The Snoop protocol is a TCP-aware link layer protocol designed to improve TCP performance over networks made up of wired and single-hop wireless links [38] [39]. The Snoop protocol works by deploying a Snoop agent at the base station and performing retransmissions of lost segments based on duplicate acknowledgments (which are a strong indicator of lost packets) and locally estimated last-hop round trip times. The agent also suppresses duplicate acknowledgments corresponding to wireless losses from the TCP sender, thereby preventing unnecessary congestion control invocations at the sender. A retransmission over the wireless link is triggered at the base station (which acts as a local sender towards the mobile host) when a duplicate acknowledgment arrives from the mobile station or after a link layer timeout period. Figures 3.6(a) and 3.6(b) show how the Snoop protocol works when ACKs or packets arrive, respectively.
[Figure 3.6: The Snoop agent. (a) ACK arrival: in the common case buffers are freed, the RTT estimate is updated and the ACK is propagated to the sender; spurious ACKs are discarded; the first duplicate ACK for a packet presumed lost triggers a high-priority local retransmission, and further duplicates are discarded. (b) Packet arrival: in-sequence packets are forwarded and the local retransmission counter is reset; sender retransmissions are handled locally.]
This combination of local retransmissions and suppression of duplicate acknowledgments is the reason for classifying Snoop as a transport-aware reliable link protocol. The state maintained at the base station is soft, which does not complicate handoffs or overly increase their latency. Simulation studies [38] have shown that for a BER greater than 5x10^-7 the
throughput of the connection is improved by up to 2000%. This protocol does not work if packets are encrypted (the agent cannot see through them) and it does not perform well if the wireless RTT is very high, as this leads to redundant retransmissions.
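A simplified sketch of the Snoop agent's ACK-side behaviour described above (class and method names are assumptions; the real agent also estimates the last-hop RTT and runs a local retransmission timer): new ACKs free cached segments and are propagated, the first duplicate ACK triggers a local retransmission, and further duplicates are suppressed so the fixed sender never sees them.

```python
# Sketch of the Snoop agent's duplicate-ACK handling at the base station.
class SnoopAgent:
    def __init__(self):
        self.cache = {}       # seq -> segment buffered at the base station
        self.last_ack = -1
        self.dup_count = 0

    def on_ack(self, ack):
        if ack > self.last_ack:                        # common case: new ACK
            for seq in [s for s in self.cache if s <= ack]:
                del self.cache[seq]                    # free buffers
            self.last_ack, self.dup_count = ack, 0
            return "propagate"
        self.dup_count += 1
        if self.dup_count == 1 and ack + 1 in self.cache:
            return "local_retransmit"                  # retransmit lost packet
        return "suppress"                              # shield the TCP sender

agent = SnoopAgent()
agent.cache = {1: "seg1", 2: "seg2", 3: "seg3"}
print(agent.on_ack(1), agent.on_ack(1), agent.on_ack(1))
```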
3.5
Further Enhancing Protocols
Further enhancing protocols have been defined to improve TCP performance. Some of these are listed in this section.
Split connections
Indirect TCP (I-TCP) I-TCP [40] is a transport layer protocol based on the Indirect Protocol model for mobile hosts, which suggests that any interaction between a mobile host (MH or UE) and a machine on the fixed network (FH) should be split into two separate interactions: one between the MH and its Mobile Support Router (MSR) over the wireless medium, and another between the MSR and the FH over the fixed network. Data sent to the wireless host is first received by the MSR. The MSR sends acknowledgements to the FH on behalf of the MH, and forwards the data to the MH on a separate connection. The MSR and the MH need not use TCP for communication: they can use a variation of TCP that is tuned for wireless links and is also aware of mobility. The FH only sees an image of its peer MH that in fact resides on the MSR. If the MH switches cells during the lifetime of an I-TCP connection, the state of the connection is passed on to the new MSR. The main drawback of I-TCP is that the end-to-end semantics of TCP acknowledgements is violated, since acknowledgements can reach the FH even before the packets reach the MH. Another inconvenience of I-TCP is that every packet incurs the overhead of going through TCP protocol processing twice at the base station, as compared to just once in a non-split-connection approach.
MTCP The MTCP protocol [41] is almost the same as I-TCP. It is also based on the Indirect Protocol model for mobile hosts, which suggests that any interaction between a mobile host and a machine on the fixed network should be split into two separate interactions. No change is required in the TCP software on the FH, while a session layer protocol is introduced on top of the transport protocol in the MH and the MSR. The session layer protocol is designed to exploit available knowledge of the wireless link characteristics and host migration, and to compensate for the highly unpredictable and unreliable link between the MSR and the MH. The MTCP protocol suffers from the same disadvantages as I-TCP.

M-TCP The M-TCP protocol [42] is very similar to I-TCP, but tries to overcome its disadvantages. Like I-TCP, M-TCP splits a TCP connection into two: one from the MH to an intermediate intelligent station (called the Supervisory Host, SH) and another between this intermediate station and the FH. The TCP sender on the fixed network uses unmodified TCP to send data to the SH, while the SH uses M-TCP for delivering data to the MH. When the host sends segments, the SH receives them and passes them to the MH. Unlike I-TCP or MTCP, ACKs are not sent to the sender until the corresponding segments have been received by the UE.

Mobile End Transport Protocol (METP) The Mobile End Transport Protocol [43] replaces TCP/IP over the wireless link with a simpler protocol that uses smaller headers. Thus, the functionalities needed for communication with an Internet host are shifted from the wireless host to the base station. The protocol tries to take advantage of base station support, since the functions of mobile devices are limited. Shifting the majority of the network protocols to the base station has the advantage of delegating some work of the wireless host to the more powerful base station, and of hiding the communication between the base station and the mobile host from the external network.
The protocol also exploits link-layer acknowledgements and retransmissions for quick recovery from losses over the wireless link. This distinguishes METP from other split-connection approaches like I-TCP and Mobile-TCP.
By eliminating the TCP and IP layers from the mobile hosts, METP also eliminates TCP/IP headers from the packets transmitted over the wireless link. In this way the mobile host only needs simple multiplexing and demultiplexing mechanisms. METP does away with the IP layer on the wireless link by taking advantage of the fact that the hop between the base station and the mobile host is the first or last in the connection. Thus, the METP at the base station accepts an IP packet destined for the mobile host as if it were meant for itself, strips its IP header, and delivers it to the higher layer. The METP at the base station handles any TCP connection involving the mobile host. The base station temporarily stores packets sent by the mobile host in a buffer before they are forwarded by METP to the wired network. Similarly, data packets meant for the mobile host are received at the base station, stored in the receive buffer, and then forwarded to the mobile host.

Multiple Acknowledgments The Multiple Acknowledgments method [44] distinguishes losses due to congestion or other errors on the wired link from those on the wireless link. It is similar to the Snoop protocol, described above. Instead of splitting the connection into two parts as in I-TCP, the Multiple Acknowledgments method generates two types of ACKs:
- ACKp: this partial acknowledgment with sequence number Na informs the sender that the packet(s) with sequence numbers up to Na - 1 have been received by the base station.
- ACKc: this complete acknowledgment has the same semantics as the normal TCP acknowledgment, i.e. it indicates that the MH has received the packet.
ACKp, in particular, indicates that the base station is having problems sending data over the wireless link. Following these two ACK definitions, two RTT and RTO values are also defined, one end-to-end and one from the base station to the MH. These RTT and RTO values are estimated accordingly when an ACKp or ACKc is
received.
End-to-end Solutions
Internet Control Message Protocol (ICMP) The ICMP-based protocol [45] tries to avoid spurious timeouts by using an explicit feedback mechanism. Instead of trying to hide problems due to the wireless link, this solution proposes to transmit an explicit notification to the TCP sender. In this way the sender can distinguish whether losses are due to congestion or to wireless errors, cutting down its congestion window only when necessary. This protocol uses a message, called ICMP-DEFER, for explicit notification. Another message, called ICMP-RETRANSMIT, is generated by the base station when all the local retransmission attempts have been exhausted. When data is lost over the wireless link, the base station generates an ICMP-DEFER message and sends it to the TCP sender. This policy ensures that within one round trip time TCP will receive either an acknowledgment or an ICMP message; a lack of both is evidence of a congestion loss. Moreover, it ensures that end-to-end retransmissions do not start while link layer retransmissions may still be going on. TCP can then distinguish between the two kinds of losses. When the TCP sender receives an ICMP-DEFER message, it resets its retransmission timer without changing its cwnd and ssthresh. By postponing the timer by one RTO, the base station has sufficient time to exhaust the local retransmission attempts for the lost packet. In case of successive failures of packet transmission, an ICMP-RETRANSMIT message is sent. When the TCP sender receives the ICMP-RETRANSMIT message, it reacts by retransmitting the indicated segment. As soon as the destination receives subsequent packets, it generates duplicate ACKs. When the source TCP receives the first of such duplicate ACKs, it switches to the fast recovery algorithm. When it finally receives a new ACK it comes out of fast recovery and resets cwnd to the value it had prior to entering the fast recovery phase.
Fast Retransmit This Fast-Retransmit approach [46] neither splits the TCP connection nor requires the TCP at the FH to be modified; however, a modified version of TCP is used at the MH. This approach addresses the behaviour of TCP when communication resumes after a handoff. The unmodified TCP at the sender assumes the delay caused by a handoff process to be due to congestion (since TCP assumes that all delays are caused by congestion), and when a timeout occurs it reduces its window size and retransmits the packets. Often, handoffs complete relatively quickly, and long waits are required by the mobile before timeouts occur at the sender and packets start getting retransmitted. This is because of the coarse timeout granularities in most TCP implementations. The fast retransmit approach alleviates this problem by having the mobile host send a certain threshold number of duplicate acknowledgments to the sender, a step that causes TCP at the sender to immediately reduce its window size and retransmit packets starting from the first missing one (for which the duplicate acknowledgment was sent). This method is shown to reduce the maximum possible latency due to serial timeouts from one minute to about 50 ms [46]. Another way to enhance the fast retransmit phase when multiple packets are lost within a window is described in [17].

Selective Acknowledgment (SACK) With Selective Acknowledgment [19], the data receiver can inform the sender about all segments that have arrived successfully. Thus, the sender needs to retransmit only the segments that have actually been lost. This changes TCP from a Go-back-N protocol to a selective repeat protocol. SACK uses a TCP option comprised of a set of ordered pairs of left and right edge sequence numbers that specify the blocks of data that were properly received (cf. sec. 2.3).
Chapter 4
Simulation
4.1
ns-2 Simulator
The network simulator 2 (ns-2) is an object-oriented simulator developed as part of the VINT project at the University of California, Berkeley [48]. It is an open-source simulator widely used in the academic community. The simulator is event-driven and runs in a non-realtime fashion. ns is based on two languages: C++ for the object-oriented simulator core, and an OTcl interpreter to execute user command scripts. The OTcl language is an object-oriented extension of Tcl. It allows users to define arbitrary network topologies composed of nodes, routers, links and shared media. It also lets users define which protocols they want to use and attach them to nodes, usually as agents (agents are the objects that actually produce and consume packets). In addition, it allows users to define the form of the output they want to obtain from the simulator. The simulator suite also includes a graphical visualizer called the network animator (nam) to help users gain more insight into their simulations by visualizing packet trace data. A nam animation shows the network topology, packet flows, and queued and dropped packets at buffers. ns is a discrete event simulator, where the advance of time depends on the timing of events, which are maintained by a scheduler. An event is an object in
the C++ hierarchy with a unique ID, a scheduled time and a pointer to an object that handles the event.
EURANE
The Enhanced UMTS Radio Access Network (EURANE) [49] is an ns-2 extension developed within the SEACORN project for Ericsson Telecommunicatie B.V. EURANE introduces three additional nodes to the existing UMTS modules for ns-2:
- Radio Network Controller (RNC)
- Base Station (BS)
- User Equipment (UE)
whose functionality allows for the support of the following transport channels:
- Forward Access Channel (FACH)
- Random Access Channel (RACH)
- Dedicated Channel (DCH)
- High-Speed Downlink Shared Channel (HS-DSCH)
The common channels (FACH and RACH) and the dedicated channel (DCH) use a standard error model provided by ns-2, while the high speed channel (HS-DSCH) uses pre-computed input files (usually generated with Matlab) as its error model and BLER curve. The main functionality additions to ns-2 come in the form of the RLC Acknowledged Mode (AM) and Unacknowledged Mode (UM), MAC-d/-c/sh support for RACH/FACH and DCH, and MAC-hs support for HS-DSCH, i.e. HSDPA.
Figures 4.1 and 4.2 show the overall MAC architecture supporting HSDPA at the UE side and at the UTRAN side (Node B and RNC), respectively.
[Figures 4.1 and 4.2: MAC architecture with HSDPA support at the UE side and at the UTRAN side, showing the MAC-d, MAC-c/sh and MAC-hs entities, the MAC Control interfaces, and the DTCH, CTCH, SHCCH (TDD only), PCH and DCH channels.]
In the Unacknowledged Mode, no retransmission protocol is in use and data
delivery is not guaranteed. Received erroneous data is either marked erroneous or discarded, depending on the configuration. A Radio Link Control (RLC) entity in unacknowledged mode is defined as unidirectional because no association between the uplink and downlink is needed. The unacknowledged mode is used, for example, by Voice-over-IP (VoIP) applications, in which RLC level retransmissions are not required. In the Acknowledged Mode, an ARQ mechanism is used for error correction. Segmentation, concatenation, padding and duplicate detection are provided by means of header fields added to the data. The AM entity is bi-directional and capable of piggybacking an indication of the status of the link in the opposite direction into user data. The AM is the normal RLC mode for packet-type services, such as web browsing and file downloading. The transmission of MAC-hs protocol data units to their respective UEs is achieved through the use of parallel Stop-and-Wait HARQ processes. The HARQ scheme implemented in EURANE uses Chase Combining, which utilises retransmissions to obtain a higher likelihood of packet acknowledgment.
EURANE implements three scheduling methods:
- Round Robin (RR)
- Fair Channel-Dependent Scheduling (FCDS)
- Max C/I
As introduced in section 1.3.3, the RR method is based on a fair-share principle, while the Max C/I method is based on the current channel conditions. The FCDS method [51] represents a trade-off between RR and Max C/I: it achieves higher fairness than Max C/I and more efficient power use than RR. The decrease in fairness, due to more efficient power use, still satisfies a constraint on the number of packets that may at most be delayed (or lost, depending on the type of application). The latter is given in statistical terms as the probability that the time each particular UE waits for the next packet does not exceed a critical number of milliseconds. In practice,
the signal fluctuates around a mean value that exhibits slow trends as well. This underlying slow fluctuation accounts for the distance from the base station. The time scale of the so-called fading variations in the signal itself, due to multi-path reception and/or shadow fading, is much smaller than that of the variations of this so-called local mean. The scheduling is done based on the relative power, i.e. the instantaneous power relative to its own recent history. The transmission level of all mobile terminals is first translated with respect to their local means, and subsequently normalised by their local standard deviations. A transmission is scheduled to the UE that has the lowest value for this so-called relative power. In EURANE, when the FCDS method is selected, a parameter called alpha must be set. This parameter defines the amount of weighting used in the algorithm: a value of 0.0 equates to the Round Robin case, while a value of 1.0 equates to the Max C/I case.
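The relative-power computation can be sketched as follows (an assumed form for illustration, not EURANE's implementation): each UE's instantaneous power is translated by its local mean and normalised by its local standard deviation; following the text, the UE with the lowest relative power is then selected.

```python
# Sketch of FCDS relative-power normalisation and selection.
import statistics

def relative_power(history, current):
    """Current power translated by the local mean and normalised by
    the local standard deviation of the UE's own recent history."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return (current - mu) / sigma

def fcds_pick(ues):
    """ues maps a UE name to (recent power history, current power)."""
    return min(ues, key=lambda u: relative_power(*ues[u]))
```

Because each UE is normalised against its own history, a distant UE with a weak absolute signal can still be scheduled, which is where the fairness gain over Max C/I comes from.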
The Channel Quality Indicator (CQI) is a 5-bit feedback from the receiving UE to the transmitting Node B. Each CQI value represents a specific combination of number of codes, modulation and code rate, resulting in a specific transport block size. The period over which the CQI is determined is three TTIs long and ends one TTI before the current block.
In HSDPA the interference is the sum of intra-cell and inter-cell interference. Both have a noise-like character. EURANE considers both inter-cell and intra-cell interference constant. The intra-cell interference is added at the input of the channel model, the inter-cell interference at the input of the receiver (see Figure 4.4). This simplification does not impose severe limitations on the accuracy of the overall model, as the variance of the interference power is mainly due to the number of sources transmitting, which is nearly constant during the holding time of a connection [52].
The channel model consists of three parts: distance loss (A), shadowing (S) and multi-path fading (R). Each part is considered independent of the others. The attenuation is then defined as follows [52]:
L = A + S + R    (4.1)
The attenuation term (A) is described by the Okumura-Hata propagation reference model for suburban areas:

A = A0 + 10 γ log10(x)    (4.2)

where γ = 3.52 represents the path loss exponent, x is the distance from Node B to UE (expressed in kilometers), and A0 is the constant offset term of the model. The slow fading (S) is caused by obstacles in the propagation path between the UE and the Node B. The common assumption that shadowing is independent from one location to another is not valid in a dynamic model with mobile users, where location-dependent correlation must be accounted for in order to provide continuity. The correlated slow fading contribution to the total loss is constructed from the following algorithm:
S(x + Δx) = a S(x) + b N    (4.3)
where Δx is the distance between two subsequent time samples and N is a random variable following the standard normal distribution. The parameter b is usually taken such that the standard deviation of the vector containing all realisations equals σ; this prescribes that b² = 1 − a². The remaining parameter, a, is determined by the following demand concerning the autocorrelation function of S:

E[S(x) S(x + Δx)] = E[a S(x)² + b N S(x)] = a σ²    (4.4)
This expression should equal exp(−Δx/D) σ², which results in the demand that a = exp(−Δx/D), with D the correlation distance. In our simulations, D is taken equal to 40 m. In a Pedestrian A scenario (users move at 3 km/h), a correlation distance of 40 m corresponds to a correlation time of about fifty seconds. A typical value for the standard deviation in suburban areas is σ = 8 dB [53]. The typical length scale in shadow fading is related only to the size of the objects that block or absorb the propagated signal. For the last term, the fast fading contribution (R), a Rician distribution is assumed. Fast fading is caused by multipath propagation. For Rician fading, the distance between the fading dips is determined by the carrier frequency and the speed of light. As a result, the fades for HSDPA are shorter (in distance and time) compared to the GSM situation.
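The recursion (4.3) is easy to check numerically. The sketch below assumes N standard normal, so the noise gain is written as bσ with b = sqrt(1 − a²), which keeps the sample standard deviation of S at σ; the 1 m sampling step Δx is an arbitrary choice for this check.

```python
# Generate correlated shadow fading samples per (4.3) with
# a = exp(-dx/D), and verify the stationary standard deviation.
import math
import random

def shadowing(n, dx=1.0, D=40.0, sigma=8.0, seed=1):
    rng = random.Random(seed)
    a = math.exp(-dx / D)
    b = math.sqrt(1.0 - a * a)
    s = rng.gauss(0.0, sigma)       # start in the stationary distribution
    out = [s]
    for _ in range(n - 1):
        s = a * s + b * sigma * rng.gauss(0.0, 1.0)
        out.append(s)
    return out

samples = shadowing(200_000)
# the sample standard deviation should stay close to sigma = 8 dB
```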
4.2 Simulation Scenario

[Figure: simulation scenario, with the RNC connected to the Core Network and serving the UEs through the Node B (BS)]
The scenario consists of 10 UEs, one of which is the UE of reference while the remaining nine are considered competitors. The UE of reference is located at a distance of 450 meters from the Node B. Competing UEs are located at distances varying from 50 up to 750 meters from the Node B. All the UEs are considered pedestrian and move at a speed of 3 km/h. Unless otherwise specified, the term UE refers to the UE of reference. Table 4.1 shows the scenario's characteristics. Figure 4.7 shows the network architecture and Table 4.2 shows its parameters. Under these conditions, the available link bandwidth for the UE (i.e. the wireless channel capacity experienced by the UE) has the trend depicted in Figure 4.6, with an average value of 1480 Kbits/s. The proxy solution used in this chapter is based on [29] (cf. sec. 3.1). The sampling time of the link bandwidth is set to 70 ms, since this is the time it takes for the bandwidth to be computed by the UE (about 40 ms, [54]) and then to reach the RNFProxy within the RNF message. The RNC and Node B buffer sizes are set to the EURANE default values (500 and 250, respectively). Simulation results can vary significantly if these values are modified; however, it is not the purpose of this
Table 4.1: Scenario characteristics

Parameter                                Value
Number of active UEs                     10
Number of competing UEs                  9
Distance of reference UE from Node B     450 m
Distance of competing UEs from Node B    50-750 m
Speed of UEs                             3 km/h
Path loss exponent                       3.52
Correlation distance in shadow fading    40 m
Standard deviation in shadow fading      8 dB
[Figure 4.6: Available link bandwidth for the UE of reference (average: 1480 Kbits/s)]
[Figure 4.7: Network architecture: UE, Node B, RNC, Proxy, Server]
Table 4.2: Simulation parameters

Parameter                     Value
UE - Node B distance          450 m
Node B - RNC link delay       15 ms
Node B - RNC link capacity    622 Mbit/s
RNC - Proxy link delay        0.1 ms
RNC - Proxy link capacity     622 Mbit/s
Proxy - Server link delay     60 ms
Proxy - Server link capacity  10 Mbit/s
RNC buffer size               500
Node B buffer size            250
Scheduling scheme             FCDS (alpha = 0.5)
UE elaboration delay          40 ms
Requested file size           4 MByte
Sampling time                 70 ms
Total simulation time         15 s

thesis to investigate these variations. The TCP version used in our simulations is TCP Reno. Though some other versions of TCP (such as SACK) could lead to better performance [55], TCP Reno will keep a significant role in future mobile applications since it is widely deployed in the Internet. The communication between UE and server is started by the UE sending a download request (SYN message) to the server. When the server receives the SYN, it acknowledges it with a SYN-ACK message. Once the UE receives the SYN-ACK, it responds by sending an ACK to the server. The connection between UE and server is then open and the server can start sending the requested file.
4.3 Simulation Results
The first simulation carried out concerns the effective transmission rate of the UE in the simple scenario (no proxy between server and RNC and no enhancing protocols on Node B).
[Figure: link bandwidth and UE throughput in the simple scenario]
In this scenario, the average value of the UE's transmission rate is 624 Kbits/s. The server's congestion window is shown in Figure 4.9(a). The initial value of the ssthresh was set to 62. This value was chosen because it ensures that the TCP sender does not enter the congestion avoidance phase prematurely, allowing better performance. Otherwise, with a smaller initial ssthresh, such as the one used in the RNFProxy scenario (ssthresh = 19), the server soon enters the congestion avoidance phase and the transmission rate experienced by the UE is lower (about 574 Kbits/s, see Figure 4.10(a)). Figure 4.10(b) shows the server's congestion window when, in the simple scenario, the ssthresh is set to 19.
From Figure 4.9(a) it is possible to gather some important details about the simulation. At t = 1.7 s, the server's cwnd reaches the ssthresh value, the slow start ends and the congestion avoidance phase starts. At t = 2.1 s, the server receives
three duplicate acknowledgments. The ssthresh is then set to 31 and the cwnd is reduced according to the fast recovery algorithm (cf. sec. 2.1). Since the server does not receive an ACK acknowledging new data before the RTO expires, at t = 2.7 s the ssthresh is halved again and the cwnd is set to one. The slow start phase then starts again.
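The window dynamics just described (exponential slow start, linear congestion avoidance, halving on three dupacks, collapse to one segment on RTO) can be sketched as a toy state machine. This is an illustrative Python model, not the ns-2 TCP Reno implementation; treating one doubling per "ack" step and omitting window inflation during fast recovery are simplifications.

```python
def reno_step(cwnd, ssthresh, event):
    """One step of a toy TCP Reno congestion-window model (units: segments).
    event is 'ack' (new data acknowledged), '3dupack' or 'rto'."""
    if event == "ack":
        if cwnd < ssthresh:
            cwnd *= 2                  # slow start: exponential growth
        else:
            cwnd += 1                  # congestion avoidance: linear growth
    elif event == "3dupack":
        ssthresh = max(cwnd // 2, 2)   # fast retransmit / fast recovery
        cwnd = ssthresh                # (window inflation omitted)
    elif event == "rto":
        ssthresh = max(cwnd // 2, 2)   # timeout: halve ssthresh
        cwnd = 1                       # restart in slow start
    return cwnd, ssthresh
```

Starting from cwnd = 1 and ssthresh = 62, repeated "ack" steps reproduce the slow-start ramp of Figure 4.9(a); a triple dupack at cwnd = 62 sets ssthresh to 31, and a subsequent RTO collapses cwnd to one, matching the events observed at t = 2.1 s and t = 2.7 s.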
[Figure: link bandwidth and TCP Reno throughput]
[Figure: link bandwidth and UE throughput]
Figure 4.11 shows the UE's data rate when the Snoop (Figure 4.11(b)) and Eifel (Figure 4.11(a)) protocols are implemented on Node B. Adding the Eifel protocol, the average throughput is 629 Kbits/s; adding Snoop, it is 666 Kbits/s. These values show that the improvement achieved by adding Snoop is higher than that achieved with Eifel. This is due to the fact that in this scenario the number of dupacks is much higher than the number of spurious timeouts. This leads to a substantial performance improvement when the Snoop protocol is implemented, since it hides a great number of dupacks from the server, saving it from reducing its congestion window. Eifel's benefits are instead less evident, since the number of spurious timeouts during a 15 s simulation in a not-so-critical scenario is very low. Figure 4.12 shows how the UE's throughput rises when the RNFProxy is introduced between RNC and server. In this case, the server's initial ssthresh is set to a smaller value (ssthresh = 19) than in the simple scenario (ssthresh = 62). The trend of the server's congestion window when the RNFProxy is introduced is depicted in Figure 4.9(b). Figure 4.13 shows the throughput trend in the RNFProxy scenario when adding the Eifel protocol (Fig. 4.13(a)) and the Snoop protocol (Fig. 4.13(b)). In the RNFProxy scenario the average throughput is 1110 Kbits/s, in the RNFProxy plus Eifel scenario it is 1111 Kbits/s and in the RNFProxy plus Snoop scenario it is 1130 Kbits/s. When the RNFProxy is added, the enhancements introduced by Snoop and Eifel are less evident than in the simple scenario. This is due to the fact that the RNFProxy by itself reduces the number of dupacks and spurious timeouts reaching the server. Thus, its introduction decreases the work left for the Snoop and Eifel protocols, making their benefits less evident. Figure 4.16 shows a comparison of RNFProxy behavior with and without enhancing protocols on Node B. Figure 4.14 shows the trend when adding both the Eifel and Snoop protocols to the RNFProxy scenario.
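The Eifel mechanism referred to above can be summarized in a short sketch. Assuming TCP timestamps are in use, the first ACK arriving after a timeout echoes the timestamp of the segment that triggered it; if that timestamp predates the first retransmission, the ACK was generated by the original (merely delayed) segment, so the timeout was spurious and the congestion state saved before the timeout can be restored. The state layout and names below are illustrative assumptions, not the exact interfaces of [33, 34].

```python
def eifel_response(state, ts_echo, ts_first_retx):
    """Eifel-style reaction to the first ACK after a timeout (sketch).

    state is a dict holding the current 'cwnd'/'ssthresh' plus the values
    saved just before the timeout expired."""
    if ts_echo < ts_first_retx:
        # ACK echoes a timestamp older than the retransmission: the original
        # segment got through, so the timeout was spurious. Undo the damage.
        state["cwnd"] = state["saved_cwnd"]
        state["ssthresh"] = state["saved_ssthresh"]
        return True    # spurious: also suppress the go-back-N retransmissions
    return False       # genuine loss: keep the normal RTO reaction
```

This also makes clear why Eifel helps little in the simulated scenario: the test only fires on spurious timeouts, which are rare here, while Snoop acts on the far more frequent dupacks.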
Despite the lower initial value of the server's ssthresh, the improvements achieved by adding the RNFProxy are evident and can be seen in Figure 4.15. Thanks to the RNFProxy, there is a significant improvement in startup performance. This
is because in the simple scenario the server has no information about the available bandwidth of the wireless channel, and therefore has to begin the transmission at the lowest possible rate. On the contrary, since the RNFProxy knows the available link bandwidth, it can set its congestion window to fully utilize it. Knowledge of the available link bandwidth also leads to enhanced performance of the RNFProxy compared to that of the SimpleProxy. This is because in the SimpleProxy scenario the proxy acts only as a splitter, that is, it splits the connection between server and UE in two parts: one between server and proxy, and the other between proxy and UE. As introduced in sec. 3.1, by splitting the connection between server and UE, the transmission rate experienced by the UE is larger. This is due to the fact that the proxy shortens the paths for ACK reception and packet retransmission. Figure 4.17 shows a comparison between the throughput experienced by the UE using a SimpleProxy and a RNFProxy. With the SimpleProxy the average value of the UE's data rate is 975 Kbits/s, with the RNFProxy it is 1110 Kbits/s.
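The benefit of knowing the link bandwidth can be made concrete with a bandwidth-delay-product calculation: a sender that knows the bottleneck rate and the round-trip time of the wireless leg can size its window directly instead of probing for it through slow start. The following sketch is illustrative, and the RTT value in the usage note is an assumption, not a figure measured in the simulations.

```python
def bdp_window(link_bw_bits_s, rtt_s, mss_bytes=1500):
    """Window (in MSS-sized segments) that just fills the bandwidth-delay
    product of the path; at least one segment."""
    bdp_bytes = link_bw_bits_s * rtt_s / 8.0   # bits/s * s -> bytes in flight
    return max(1, int(bdp_bytes // mss_bytes))
```

With the 1480 Kbits/s average bandwidth of Figure 4.6 and an assumed wireless-leg RTT of 100 ms, the window would be about 12 segments, which a bandwidth-aware proxy can apply immediately rather than discover over several round trips.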
Figure 4.13: Throughput improvements achieved by adding the Eifel and Snoop protocols to the RNFProxy scenario
Figure 4.14: UE's throughput in the RNFProxy scenario with both the Eifel and Snoop protocols
Figure 4.15: Comparison of throughput trends in the simple scenario and in the RNFProxy scenario
Figure 4.16: Comparison of throughput trends in the RNFProxy scenario, with and without enhancing protocols
Figure 4.17: Comparison between the throughput experienced with a RNFProxy and that experienced with a SimpleProxy
Implemented solution                 Average data rate
Simple scenario (no proxy)           624 Kbits/s
Simple scenario + Eifel              629 Kbits/s
Simple scenario + Snoop              666 Kbits/s
SimpleProxy scenario                 975 Kbits/s
RNFProxy scenario                    1110 Kbits/s
RNFProxy scenario + Eifel            1111 Kbits/s
RNFProxy scenario + Snoop            1130 Kbits/s
RNFProxy scenario + Eifel + Snoop    1131 Kbits/s
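As a quick arithmetic check, the relative gains quoted in the conclusions follow directly from these average data rates (values copied from the table above):

```python
# Average data rates (Kbits/s) from the results table
rates = {
    "simple": 624,
    "rnf_proxy": 1110,
    "rnf_proxy_eifel_snoop": 1131,
}
# Gain of the RNFProxy over the simple scenario, and of the
# enhancing protocols over the plain RNFProxy, in percent
gain_proxy = 100.0 * (rates["rnf_proxy"] - rates["simple"]) / rates["simple"]
gain_protocols = (100.0 * (rates["rnf_proxy_eifel_snoop"] - rates["rnf_proxy"])
                  / rates["rnf_proxy"])
print(round(gain_proxy, 1), round(gain_protocols, 1))  # prints: 77.9 1.9
```

These are the "about 80%" and "further 2%" improvements reported in Chapter 5.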
Chapter 5
Conclusions
In this thesis, a proxy solution to improve users' data rates over HSDPA networks has been investigated. The studied solution is based on the signalling scheme introduced in [29], which uses a new custom protocol between RNC and proxy. This protocol provides information from the data-link layer within the RNC to the transport layer within the proxy, including the instantaneous available link bandwidth on the wireless channel and the queue length in the RNC. This communication is called Radio Network Feedback (RNF). Furthermore, the impact of the Eifel and Snoop protocols on users' transmission performance has been investigated. Simulation results show that the RNFProxy solution significantly enhances transmission startup performance. In the considered scenario, we have obtained an increase of the average data rate of about 80% by introducing the RNFProxy. This is because in the simple scenario, where no proxy is implemented, the server has no information about the available bandwidth of the wireless channel, and therefore has to begin the transmission at the lowest possible rate. On the contrary, since the RNFProxy knows the available link bandwidth, it can set its congestion window to fully utilize it. Moreover, simulations show that by implementing the Snoop and Eifel protocols on Node B, the data rate experienced by the UE improves further. By adding the Eifel and Snoop protocols to the RNFProxy solution, the average data rate has been increased by a further 2%. This increase is almost entirely due
to the Snoop protocol. This is because the Eifel protocol acts on spurious timeouts and spurious fast retransmits, events that, in a 15-second simulation and in a not-so-critical simulation scenario such as the one used in this thesis (there are only 10 competing users and the reference UE is located at just 450 meters from Node B), are not very frequent. Eifel's and Snoop's benefits may be more evident in a more critical scenario and if investigated in long-lived simulations. The achieved performance improvements can be measured in terms of end-user experience, as well as from the mobile operator's point of view. The spread of interactive and real-time services over the new high-speed mobile networks increases more and more the interest in reducing end-to-end delay and delay variation. Furthermore, operators are interested in fully utilizing the scarce and expensive radio resources, as well as in supporting the maximum number of users per cell at the maximum allowable data rate. Finally, since mobile operators are interested in making their systems scalable, a proxy solution allows them to make network adaptations without changing either the remote servers or the mobile terminals.
References
[1] 3GPP, "TS 25.211: Technical Specification Group Radio Access Network; Physical channels and mapping of transport channels onto physical channels (FDD) (Release 7)," March 2006, v7.0.0.

[2] P. J. A. Gutiérrez, "Packet Scheduling and Quality of Service in HSDPA," Ph.D. dissertation, Department of Communication Technology, Institute of Electronic Systems, Aalborg University, 2003.

[3] 3GPP, "TS 25.214: Physical Layer Procedures (FDD)," v5.11.0.

[4] 3GPP, "TS 25.331: Radio Resource Control (RRC)," v5.5.0.

[5] K. Miyoshi, T. Uehara, and M. Kasapidis, "Link Adaptation Method for High Speed Downlink Packet Access for W-CDMA," Wireless Personal Multimedia Communications (WPMC) Proceedings, vol. 2, pp. 455-460, September 2001.

[6] D. Chase, "Code Combining - A maximum likelihood decoding approach for combining an arbitrary number of noisy packets," IEEE Transactions on Communications, vol. COM-33, no. 5, pp. 385-393, May 1985.

[7] F. Frederiksen and T. E. Kolding, "Performance and Modeling of WCDMA/HSDPA Transmission/H-ARQ Schemes," Proceedings of the IEEE 56th Vehicular Technology Conference (VTC), vol. 1, pp. 472-476, 2002.

[8] P. Frenger, S. Parkvall, and E. Dahlman, "Performance Comparison of HARQ with Chase Combining and Incremental Redundancy for HSDPA," IEEE, pp. 1829-1833, 2001.
[9] C.-S. Chiu and C.-C. Lin, "Comparative Downlink Shared Channel Performance Evaluation of WCDMA Release 99 and HSDPA," in Proceedings of the 2004 IEEE International Conference on Networking, Sensing & Control, Taipei, Taiwan, March 21-23, 2004, pp. 1165-1170.

[10] Global mobile Suppliers Association (GSA), http://www.gsacom.com.

[11] J. Postel, "RFC 793: Transmission Control Protocol," IETF, Tech. Rep., September 1981.

[12] M. Allman, V. Paxson, and W. R. Stevens, "RFC 2581: TCP Congestion Control," IETF, Tech. Rep., April 1999.

[13] M. Allman, S. Floyd, and C. Partridge, "RFC 3390: Increasing TCP's Initial Window," IETF, Tech. Rep., October 2002.

[14] H. Inamura, R. Ludwig, A. Gurtov, and F. Khafizov, "RFC 3481: TCP over Second (2.5G) and Third (3G) Generation Wireless Networks," IETF, Tech. Rep., February 2003.

[15] V. Jacobson, "Congestion Avoidance and Control," SIGCOMM Symposium on Communications Architectures and Protocols, 1988, pp. 314-329.

[16] V. Jacobson, "Modified TCP Congestion Avoidance Algorithm," end2end-interest mailing list, April 1990.

[17] S. Floyd, T. Henderson, and A. Gurtov, "RFC 3782: The NewReno Modification to TCP's Fast Recovery Algorithm," IETF, Tech. Rep., April 2004.

[18] C. Casetti, M. Gerla, S. Mascolo, M. Sanadidi, and R. Wang, "TCP Westwood: End-to-End Congestion Control for Wired/Wireless Networks," Wireless Networks, vol. 8, pp. 467-479, 2002.

[19] M. Mathis, J. Mahdavi, S. Floyd, and A. Romanow, "RFC 2018: TCP Selective Acknowledgment Options," IETF, Tech. Rep., October 1996.

[20] M. Mathis and J. Mahdavi, "Forward Acknowledgment: Refining TCP Congestion Control," ACM SIGCOMM, 1996.

[21] L. Brakmo, S. O'Malley, and L. Peterson, "TCP Vegas: New Techniques for Congestion Detection and Avoidance," ACM SIGCOMM, 1994.
[22] M. Assaad and D. Zeghlache, "On the capacity of HSDPA," IEEE GLOBECOM, 2003, pp. 60-64.

[23] M. Assaad, B. Jouaber, and D. Zeghlache, "Effect of TCP on UMTS/HSDPA system performance and capacity," IEEE GLOBECOM, 2004.

[24] M. Assaad and D. Zeghlache, "Scheduling study in HSDPA system," in IEEE 16th International Symposium on Personal, Indoor and Mobile Radio Communications, 2005, pp. 1890-1894.

[25] M. Assaad and D. Zeghlache, "Cross-layer design in HSDPA system to reduce the TCP effect," IEEE Journal on Selected Areas in Communications, vol. 24, no. 3, March 2006.

[26] M. Assaad, B. Jouaber, and D. Zeghlache, "TCP Performance over UMTS/HSDPA System," Telecommunication Systems, vol. 27, no. 2-4, October 2004.

[27] J. Border, M. Kojo, J. Griner, G. Montenegro, and Z. Shelby, "RFC 3135: Performance Enhancing Proxies Intended to Mitigate Link-Related Degradations," IETF, Tech. Rep., June 2001.

[28] M. Holze, M. Meyer, and J. Sachs, "Performance Evaluation of a TCP Proxy in WCDMA Networks," IEEE Wireless Communications, October 2003.

[29] N. Möller, I. C. Molero, K. H. Johansson, J. Petersson, R. Skog, and Å. Arvidsson, "Using Radio Network Feedback to Improve TCP Performance over Cellular Networks," Proc. of the 44th IEEE Conference on Decision and Control, December 2005.

[30] R. Chakravorty, S. Katti, J. Crowcroft, and I. Pratt, "Flow Aggregation for Enhanced TCP over Wide-Area Wireless," Proceedings of IEEE INFOCOM, 2003.

[31] C. A. Waldspurger and W. E. Weihl, "Stride Scheduling: Deterministic Proportional-Share Resource Management," MIT Laboratory for Computer Science, Cambridge, Tech. Rep. TM-528, 1995.

[32] A. Gurtov and R. Ludwig, "Responding to spurious timeouts in TCP," Proceedings of IEEE INFOCOM, 2003.
[33] R. Ludwig and R. H. Katz, "The Eifel algorithm: making TCP robust against spurious retransmissions," ACM Computer Communication Review, vol. 30, no. 1, pp. 30-36, January 2000.

[34] R. Ludwig and A. Gurtov, "The Eifel Response Algorithm for TCP," IETF, Tech. Rep. RFC 4015, February 2005.

[35] O. Teyeb and J. Wigard, "Deliverable 2.11: Emulation of TCP Performance Over WCDMA," FACE: Future Adaptive Communication Environment, June 2003.

[36] S. Fu and M. Atiquzzaman, "DualRTT: detecting spurious timeouts in wireless mobile environments," 24th IEEE International Performance, Computing, and Communications Conference, pp. 129-133, April 2005.

[37] Y. Guan, B. Van den Broeck, J. Potemans, J. Theunis, D. Li, E. Van Lil, and A. Van de Capelle, "Simulation study of TCP Eifel algorithms," OPNETWORK, 2005.

[38] H. Balakrishnan, S. Seshan, E. Amir, and R. H. Katz, "Improving TCP/IP Performance over Wireless Networks," ACM Wireless Networks, November 1995.

[39] H. Balakrishnan, S. Seshan, and R. H. Katz, "Improving Reliable Transport and Handoff Performance in Cellular Wireless Networks," ACM Wireless Networks, vol. 1, no. 4, 1995.

[40] A. V. Bakre and B. R. Badrinath, "Implementation and performance evaluation of Indirect TCP," IEEE Transactions on Computers, vol. 46, no. 3, pp. 260-278, 1997.

[41] R. Yavatkar and N. Bhagawat, "Improving End-to-End Performance of TCP over Mobile Internetworks," IEEE Workshop on Mobile Computing Systems and Applications, 1994.

[42] K. Brown and S. Singh, "M-TCP: TCP for Mobile Cellular Networks," ACM SIGCOMM Computer Communication Review, vol. 27, no. 5, pp. 19-42, October 1997.
[43] K.-Y. Wang and S. K. Tripathi, "Mobile-End Transport Protocol: An Alternative to TCP/IP over Wireless Links," Proceedings of IEEE INFOCOM, 1998.

[44] S. Biaz, M. Mehta, S. West, and N. H. Vaidya, "TCP over Wireless Networks Using Multiple Acknowledgements," Texas A&M University, Tech. Rep. 97-001, 1997.

[45] S. Goel and D. Sanghi, "Improving TCP Performance Over Wireless Links," Proceedings of IEEE TENCON, 1998.

[46] R. Cáceres and L. Iftode, "Improving the Performance of Reliable Transport Protocols in Mobile Computing Environments," IEEE Journal on Selected Areas in Communications, vol. 13, no. 5, June 1995.

[47] N. Vaidya, M. Mehta, C. Perkins, and G. Montenegro, "Delayed Duplicate-Acknowledgements: A Proposal to Improve Performance of TCP on Wireless Links," Texas A&M University, Tech. Rep. 99-003, 1999.

[48] The network simulator ns-2, http://www.isi.edu/nsnam/ns/.

[49] EURANE, http://www.ti-wmc.nl/eurane/.

[50] 3GPP, "TS 25.308: Technical Specification Group Radio Access Network; High Speed Downlink Packet Access (HSDPA); Overall description," December 2004, v5.7.

[51] I. de Bruin, G. Heijenk, M. El Zarki, and J. L. Zan, "Fair channel-dependent scheduling in CDMA systems," Proceedings of the IST Mobile & Wireless Communications Summit, pp. 737-741, June 2003.

[52] N. Whillans, "SEACORN: End-to-end network model for Enhanced UMTS," October 2003, http://www.ti-wmc.nl/eurane/D32v2Seacorn.pdf.gz.

[53] W. C. Jakes, Microwave Mobile Communications. Wiley, 1974.

[54] Y.-S. Kim, "VoIP Service on HSDPA in Mixed Traffic Scenario," Proceedings of the Sixth IEEE International Conference on Computer and Information Technology, 2006.
[55] F. Xin and A. Jamalipour, "TCP throughput and fairness performance in presence of delay spikes in wireless networks," International Journal of Communication Systems, March 2005.