
Full Paper ACEEE Int. J. on Information Technology, Vol. 3, No. 1, March 2013

An Accurate, Agile and Stable Traffic Rate Estimation Technique for TCP Traffic
Mary Looney and Oliver Gough
Department of Electronic Engineering, Cork Institute of Technology, Cork, Ireland. Email: {mary.looney, oliver.gough}@cit.ie

Abstract: Traffic rate estimation is an integral part of many high speed network services and components. Algorithms such as traffic conditioning, scheduling and admission control depend on accurate rate estimation. Several rate estimation techniques have been proposed; however, the inherently bursty nature of Internet traffic, especially TCP traffic, does not allow for easy rate estimation. Short term changes may obscure output results, or a change in traffic rate may not always be detected. Thus estimators may not always possess the ideal characteristics of agility, stability and accuracy. As agility and stability are inter-dependent, a single rate estimator cannot always be configured to be both agile and stable. In this paper a rate estimation technique is proposed that combines two estimators: an agile one that measures actual changes in traffic in a timely manner, and a stable one that ignores short term variations in traffic. The performance of the proposed algorithm is analysed against that of an existing flip-flop filter using simulation analysis. Improved performance of the proposed estimator over the flip-flop filter is demonstrated using quantitative measures of agility, stability and accuracy. Existing TSW and EWMA algorithms are also investigated.

Index Terms: Traffic Rate Estimation, Metering, Flip-Flop Filter

I. INTRODUCTION

Traffic rate estimation is an integral part of many high speed network services and components [1]. Algorithms such as traffic management and control, scheduling, monitoring and admission control depend on accurate rate estimation. The inherently bursty nature of Internet traffic, especially TCP traffic, does not allow for easy rate estimation. Short term changes may obscure output results, or a change in traffic rate may not always be detected easily. Hence, a rate estimator is required to be accurate, agile, stable and cost effective [1]. An accurate rate estimator should provide an estimated rate close to the actual rate. An agile rate estimator should track changes in the actual data rate of the traffic in a timely and accurate manner: it should quickly discover an increase or decrease in the actual rate of traffic. A stable rate estimator should ignore short term changes in traffic behaviour that are natural to the traffic; this is in conflict with agility. A cost effective estimator should be fast and simple, and should not require a lot of computational power to process samples of data or large amounts of memory to store data. However, not all existing rate estimation techniques are capable of satisfying all of these characteristics [1].

Various rate estimation techniques are in existence, and the TSW and EWMA filters are the most widely known recursive rate estimation techniques. The TSW rate estimator was proposed in [2] to act as the profile meter of a traffic conditioning technique and was subsequently specified in RFC 2859 [3] as a rate estimation technique. The EWMA is traditionally a weight based estimator but may be used for rate estimation if its configurable weight translates into a time window [3]. Although these algorithms have been widely used as rate estimation techniques, they suffer from one main limitation: they cannot be configured to be agile and stable simultaneously. Hence their performance depends on a configurable parameter which dictates the agility or stability of the filter [1, 4]. To overcome this drawback, the configurable parameter could be adaptively adjusted; however, this would only be applicable to traffic that changes quite smoothly. Hence, a rate estimation technique known as the flip-flop filter was proposed in [4], which uses two rate estimation techniques (i.e. TSW filters), one configured to be agile and the other configured to be stable. A controller determines which output is chosen as the estimated rate. Although this approach has been shown to provide both agile and stable estimated rates, quantitative measures of agility and stability have not been discussed. Additionally, the flip-flop uses a controller based on the 3-sigma rule, which assumes that the sample population it works with is normally distributed. Hence, this technique may not be applicable to more realistic heavy tailed distributions.

The purpose of this paper is therefore to quantitatively analyse the flip-flop filter in terms of accuracy, agility and stability. Additionally, it proposes a rate estimation technique known as SARE (Stable Agile Rate Estimator) that is similar to the flip-flop filter but differs in its use of filters and controller to produce more stable and agile results. As both techniques are composed of TSW or EWMA filters, we also investigate the ability of these single estimators to produce agile, stable and accurate results. The paper is organised as follows: Section II investigates existing rate estimation techniques. Section III introduces the proposed rate estimation technique, SARE. Individual components of the proposed SARE algorithm are investigated in Section IV. Section V uses simulation analysis to compare the proposed approach against the flip-flop filter. The paper concludes in Section VI.

II. EXISTING RATE ESTIMATION ALGORITHMS

Recursive rate estimators are the most commonly used

rate estimation techniques [4]. As the TSW and EWMA are the two most commonly known rate estimation techniques, these approaches are now investigated, along with proposed variations, in terms of agility, stability, accuracy and cost when dealing with TCP traffic.

A. TSW Algorithm

The TSW is a time-based estimator that employs a rectangular data weighting function based on a fixed window length of time. It was designed to maintain a time based history that discounts the estimated rate by a factor of e according to a window length of time (win_length), rather than decaying the estimated rate according to packet arrivals. The configuration of win_length is crucial to the performance of the TSW, and non-optimal values can produce over- or under-estimation of the rate. Win_length is usually established a priori, based on the assumed or specified characteristics of the traffic flow. A small win_length applies less weight to previous estimations, allowing new changes in the traffic rate to be depicted quickly and producing an agile estimator. Conversely, a large win_length gives more weight to past estimations, allowing short transient spikes to be ignored and producing a stable estimator. Hence, the TSW cannot be configured to be both agile and stable, as these two factors are in conflict and work against each other [4].

The TSW was analysed in [1] in terms of accuracy, agility, stability and cost for constant traffic as well as synthesised bursty traffic with empirical packet sizes. Performance results demonstrate its ability to produce accurate estimations and also show that, with a correctly configured win_length, the TSW can produce good stability results. However, this is at a trade-off to agility. When configured as an agile estimator it also produces good performance results, but this time at a trade-off to stability.
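The TSW update described above can be sketched as follows. This is a minimal illustration of the per-packet rule from [2] and RFC 2859, with hypothetical variable names; the initial rate and units (bytes per second) are assumptions for the example.

```python
class TSW:
    """Time Sliding Window rate estimator sketch (after [2], RFC 2859).

    win_length trades agility for stability: a small value tracks rate
    changes quickly, a large value smooths short transient spikes.
    """

    def __init__(self, win_length, initial_rate=0.0):
        self.win_length = win_length   # seconds of history to retain
        self.avg_rate = initial_rate   # estimated rate, bytes per second
        self.t_front = 0.0             # arrival time of the previous packet

    def update(self, pk_size, now):
        # History is decayed according to win_length, not per packet:
        # bytes notionally held in the window, plus the new packet,
        # averaged over the elapsed time extended by the window.
        bytes_in_win = self.avg_rate * self.win_length
        new_bytes = bytes_in_win + pk_size
        self.avg_rate = new_bytes / (now - self.t_front + self.win_length)
        self.t_front = now
        return self.avg_rate
```

Feeding a constant stream of 1000-byte packets every 10 ms, for example, converges the estimate towards 100000 bytes/s, with the convergence speed governed by win_length.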
This highlights the inter-dependency of agility and stability and the limitation of the TSW in being both an agile and stable estimator. In terms of computational complexity, the design of the TSW is extremely simple, but it can be computationally expensive due to its packet by packet estimations. As the traffic load increases, the computational cost increases linearly, and with the advent of high speed networks this cost may prove substantial. Whether or not this is a worthwhile trade-off against accuracy, agility or stability needs to be considered.

B. EWMA Algorithm

The EWMA is an ideal maximum likelihood estimator that employs exponential data weighting [5]. It is widely used in many different areas and applications, including the calculation of the RTT in TCP's congestion control. The EWMA algorithm uses a weighting factor, alpha, that dictates the working of the algorithm and assumes a constant value between 0 and 1. When alpha is closer to 1, new observations are given more weight and the EWMA acts as an agile filter, detecting current performance changes in the traffic. In contrast, when alpha is closer to 0, old estimates are given more weight and the EWMA acts as a stable estimator, resisting noise in individual observations. Thus, as with the TSW, the EWMA can only be configured to be either agile or stable, and this is dictated by the weighting factor alpha. In contrast to the TSW, the configuration of the weighting factor is crucial to the workings of the algorithm. Nevertheless, it is the TSW that exhibits greater performance than the EWMA in terms of accuracy, agility and stability, as demonstrated in [1]. The EWMA exhibits poor accuracy, agility and stability results under the same circumstances as the analysis carried out on the TSW. It produces inaccurate estimations and does not track changes in traffic in an accurate manner. This is attributed to the use of the constant weighting factor.
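A minimal sketch of the static EWMA just described, under the assumption (not stated in [5]) that the per-packet observation blended into the average is the instantaneous rate pk_size/interpk_time:

```python
class EWMAStatic:
    """Static EWMA rate estimator sketch.

    On each arrival the instantaneous rate is blended with the running
    estimate using a constant weight alpha in (0, 1): alpha near 1 gives
    an agile filter, alpha near 0 a stable one.
    """

    def __init__(self, alpha, initial_rate=0.0):
        self.alpha = alpha
        self.avg_rate = initial_rate
        self.last_arrival = None

    def update(self, pk_size, now):
        if self.last_arrival is None:
            # First packet: no inter-arrival time yet, keep the estimate.
            self.last_arrival = now
            return self.avg_rate
        inst_rate = pk_size / (now - self.last_arrival)
        self.last_arrival = now
        self.avg_rate = self.alpha * inst_rate + (1 - self.alpha) * self.avg_rate
        return self.avg_rate
```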
In terms of computational cost, the two algorithms are comparable: both are packet by packet estimators, so the computational load increases linearly with traffic. An alternate approach used in [6-8] makes use of an exponential weighting factor that allows the estimated rate to asymptotically converge to the real rate, thereby acting similarly to the TSW. In this approach the weighting factor is a function of the inter-arrival time of packets (interpk_time), as shown in (1), where T is a constant:

alpha = e^(-interpk_time / T)    (1)

This dynamic weighting factor eliminates the sensitivity to packet length distribution, as it more closely reflects a fluid averaging process that is independent of the packetizing structure. A small value of T increases the system's responsiveness to rapid rate fluctuations, whereas larger values of T better filter the noise and avoid potential system instability.

C. Dynamic Weight EWMA

Another approach proposed to overcome the shortcomings of the static EWMA is introduced in [9]. It uses a dynamic weight that allows short transient peaks to be ignored while recognising a persistent change in traffic. This dynamic weight differs from that introduced in [6]: it is determined from a gradient measure which captures the change of the averaged rate over time. This allows temporary and brief traffic changes to be treated with lower weight, but persistent and more permanent changes to be treated with higher weight. It differs from the TSW and EWMA algorithms in that the rate estimations are performed periodically, as opposed to on a packet by packet basis, thereby reducing computational cost.
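The EWMA with the dynamic weighting factor of (1) can be sketched as below. The weight alpha = e^(-interpk_time/T) applied to the old estimate is our reading of the form used in [6-8]; the class and parameter names are illustrative assumptions.

```python
import math


class EWMADynamic:
    """EWMA with exponential (time-based) weighting factor, after [6-8].

    The weight alpha = exp(-interpk_time / T) decays the old estimate by
    elapsed time rather than by packet count, approximating a fluid average
    that is independent of packet sizes. T plays the role of the TSW
    window: small T is agile, large T is stable.
    """

    def __init__(self, T, initial_rate=0.0):
        self.T = T
        self.avg_rate = initial_rate
        self.last_arrival = None

    def update(self, pk_size, now):
        if self.last_arrival is None:
            self.last_arrival = now
            return self.avg_rate
        interpk_time = now - self.last_arrival
        self.last_arrival = now
        alpha = math.exp(-interpk_time / self.T)      # Eqn. (1)
        inst_rate = pk_size / interpk_time
        self.avg_rate = (1.0 - alpha) * inst_rate + alpha * self.avg_rate
        return self.avg_rate
```

Note how a small T makes alpha small for a given inter-arrival time, weighting new observations heavily, which matches the responsiveness behaviour described above.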
Nevertheless, it does require more computational complexity in terms of calculations, although the overall cost in CPU cycles is expected to be less than that of the TSW and EWMA, on the condition that the selected time window is greater than the average inter-arrival time of incoming packets [1]. Thus, on average it consumes far fewer CPU cycles. In comparison to the TSW and EWMA algorithms in [1], performance results indicate that the Dynamic Weight EWMA outperforms the EWMA algorithm in terms of accuracy, agility and stability, but only outperforms the TSW in terms of stability. It is comparable to the latter in terms of accuracy, however, thereby proving to be both an accurate

and stable estimator. This is accredited to the dynamic weighting factor. In terms of agility, however, the Dynamic Weight EWMA reacts slowly to changes in traffic, with the TSW exhibiting much better performance. This slow reaction may be a result of the jumping window employed in the algorithm being too long, preventing the algorithm from detecting changes in traffic quickly.

D. Time Window based EWMA

Another approach proposed to overcome limitations of the static EWMA is known as the Time Window based EWMA [1]. This algorithm is only a minor modification of the static EWMA. It uses a time-window approach to estimate the rate, instead of the inter-arrival time of packets as used by the static EWMA. It sums the number of arriving packets over a fixed time window; the ratio of this sum to time then provides the estimated rate. As with the TSW and EWMA algorithms, the size of the time window over which the rate is estimated affects the quality of the estimator in terms of stability and agility. A larger time window produces a more stable estimator but introduces delays in tracking changes in traffic rate, resulting in less agility, as shown in the simulation analysis of [1], where the algorithm reacts slowly to permanent traffic changes. Compared to the TSW, EWMA and Dynamic Weight EWMA algorithms, performance results indicate similar operation to the TSW and Dynamic Weight EWMA in terms of accuracy. Once optimal algorithm parameters are configured it also demonstrates good stability results, but at the cost of agility. The improved accuracy and stability results must be attributed to the jumping window, as this is the only change from the original EWMA algorithm. A smaller window length would likely produce more agile results, but again at the expense of stability.
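A sketch of the Time Window based EWMA as we understand it from [1]: traffic is accumulated over a fixed jumping window, the window rate is taken as the sum divided by elapsed time, and that window rate is then smoothed with a static weight. The accumulation in bytes and the parameter names are assumptions for illustration.

```python
class TimeWindowEWMA:
    """Time Window based EWMA sketch (after [1]).

    Arriving traffic is summed over a fixed jumping window; the ratio of
    the sum to elapsed time gives the window rate, which is blended into
    the running estimate. The window length governs the stability/agility
    trade-off: larger windows are more stable but react more slowly.
    """

    def __init__(self, window, alpha=0.5):
        self.window = window      # seconds per jumping window
        self.alpha = alpha        # weight given to the newest window rate
        self.avg_rate = 0.0
        self.acc_bytes = 0
        self.win_start = 0.0

    def update(self, pk_size, now):
        self.acc_bytes += pk_size
        if now - self.win_start >= self.window:
            # Window boundary reached: compute the window rate and jump.
            win_rate = self.acc_bytes / (now - self.win_start)
            self.avg_rate = (self.alpha * win_rate
                             + (1 - self.alpha) * self.avg_rate)
            self.acc_bytes = 0
            self.win_start = now
        return self.avg_rate
```

Because the estimate only changes at window boundaries, no per-packet division is needed, which is the source of the reduced computational cost noted above.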
Another improvement over the EWMA algorithm is that the Time Window based EWMA does not require estimations at each packet arrival and hence is less computationally expensive. It is also less complex than the Dynamic Weight EWMA, as it requires fewer and simpler computational steps.

E. Flip-Flop Filter

An approach introduced in [4] uses a system of two TSW algorithms, where one is configured to be agile with a small window length and the other is configured to be stable with a large window length. A controller is used to select the agile or stable estimate as the output estimated rate. This approach is known as the flip-flop filter, and its idea is derived from a similar method proposed for RTT estimation in [10]. The underlying principle of the estimator is to employ the stable estimated rate when possible, but to switch to the agile estimated rate on detection of a change in traffic rate. The change in traffic is determined by a controller based on the 3-sigma rule that is frequently used in Statistical Process Control. In a process control context, sigma is the standard deviation, and the process is considered out of control if the output varies from its nominal value by more than three times this standard deviation, since in a normal distribution 99.73% of values lie within 3 standard deviations of the mean. Because the standard deviation is not easily derived, the flip-flop filter uses an estimated value. If the difference between the stable and agile estimated rates exceeds 3 times this estimated value over a consecutive count, then the agile estimated rate is used as the output estimated rate, and the stable estimated rate is set to this value. The flip-flop filter is certainly a viable approach to producing an accurate, agile and stable rate estimation technique.
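The flip-flop mechanism can be sketched as follows. The window lengths, the consecutive-count threshold, and in particular the EWMA used to estimate sigma (a Jacobson-style mean-deviation estimate) are our assumptions; [4] does not prescribe these exact values, only the 3-sigma switching rule.

```python
class _TSW:
    # Minimal Time Sliding Window estimator (see Section II.A).
    def __init__(self, win_length):
        self.win = win_length
        self.rate = 0.0
        self.t = 0.0

    def update(self, pk_size, now):
        self.rate = (self.rate * self.win + pk_size) / (now - self.t + self.win)
        self.t = now
        return self.rate


class FlipFlop:
    """Flip-flop filter sketch (after [4]).

    The stable TSW output is used until the agile estimate departs from it
    by more than 3 * sigma_est for `consec` consecutive packets; the stable
    estimate is then snapped to the agile value. sigma_est here is a
    hypothetical EWMA of the absolute deviation, standing in for the
    paper's estimated sigma.
    """

    def __init__(self, agile_win=0.1, stable_win=2.0, consec=3):
        self.agile = _TSW(agile_win)
        self.stable = _TSW(stable_win)
        self.sigma_est = 0.0
        self.count = 0
        self.consec = consec

    def update(self, pk_size, now):
        a = self.agile.update(pk_size, now)
        s = self.stable.update(pk_size, now)
        dev = abs(a - s)
        # Jacobson-style smoothed deviation as the sigma estimate.
        self.sigma_est = 0.875 * self.sigma_est + 0.125 * dev
        if self.sigma_est > 0 and dev > 3.0 * self.sigma_est:
            self.count += 1
        else:
            self.count = 0
        if self.count >= self.consec:
            # Out of control: switch to the agile estimate and reset.
            self.stable.rate = a
            self.count = 0
            return a
        return s
```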
This is shown by the performance results presented, which demonstrate that the filter does detect changes in traffic rate when necessary and produces a stable estimate otherwise. However, no quantitative measures are provided for how stable the results are or how quickly a true change in traffic is detected, although a change in traffic is shown to be detected. The results presented do show occasional under-estimation of the rate, but this is typically due to the configuration of the TSW window length and not a direct effect of the flip-flop filter. The 3-sigma rule controller employed by the algorithm may impose restrictions on its use. The 3-sigma rule assumes that the vast majority of data samples lie within 3 sigma of the mean, and thus may not be appropriate for use with other distributions. The estimation of accurate standard deviation measurements is also required for correct use of the flip-flop filter. In terms of cost effectiveness, the flip-flop filter is more costly than the algorithms already discussed. We have already seen that both the TSW and EWMA algorithms, although simple, may be costly due to their packet by packet estimations. Hence, employing two TSWs or two EWMAs as well as a controller will lead to much more computational cost than a single estimator, even though the controller does not require complex computational steps. The increased computational complexity of the flip-flop filter is the trade-off for being both a stable and agile estimator.

III. A STABLE AGILE RATE ESTIMATOR

The main objective of this section is to propose a rate estimation technique that is accurate, agile and stable. Although the flip-flop filter has been shown to provide stable and agile results, it requires further investigation, as quantitative analysis of the system does not exist to the authors' knowledge. Further, the functioning of the flip-flop filter is based on the use of the 3-sigma rule as a controller.
Although this approach is a viable one, it requires the accurate estimation of a standard deviation value, which is difficult to achieve, especially with an unknown traffic distribution. Additionally, it assumes that the vast majority of data samples lie within 3 sigma of the mean, and with the current trend of Internet traffic tending towards heavy tailed distributions this approach may not always be suitable. Thus, the purpose of this section is to present a rate estimation technique that is agile and stable simultaneously, that can be applied to many traffic distributions, and that does not require

accurate estimations of statistical values. The proposed rate estimation technique is known as SARE, a Stable Agile Rate Estimator. Its main contribution is in producing an accurate estimate of the traffic rate whilst maintaining characteristics of stability and agility simultaneously. The proposed algorithm is presented in Fig. 1, where rate estimation is performed on a per aggregate basis. avg_rate_agile denotes the estimated agile traffic rate; avg_rate_stable denotes the estimated stable traffic rate; avg_rate is the estimated rate used as the output of the entire algorithm; win_length is the window length of time of the TSW; T is the period of time over which the EWMA decays its estimated rate; interpk_time is the inter-arrival time of packets; pk_size is the packet size; t-test represents a one sample t-test statistical controller; alpha is the alpha level used for the statistical t-test; and p is the value used to determine if there is statistical significance in the output of the controller.

Two rate estimation techniques are used in SARE: a TSW to determine an agile estimated rate and an EWMA to determine a stable estimated rate. The TSW is configured with a small win_length for agility. The EWMA uses the dynamic weighting factor of Eqn. (1), a function of interpk_time and T as in [6-8], with a large T value for stability. Optimal win_length and T values are derived by trial and error. On the arrival of a packet at the transmission controller, both agile and stable traffic rates are updated. These estimates are input into a controller with degrees of freedom that allow transient spikes or short bursts of traffic to be ignored. The controller used is a basic (one sample) statistical t-test [11].
The purpose of the t-test controller is to compare the agile estimated rates against the mean of the stable rates to determine if a statistically significant difference exists between the two measures. Statistical significance would indicate a significant difference between the means of the two data sets, and thus that a change in traffic has been detected rather than just a transient spike or short burst. The significance level of the t-test controller is set to a value of 0.0027, i.e. 0.27%. Although this value covers a wide distribution (when compared to typical values used in statistical t-tests, such as 0.05), it reflects the characteristics of the 3-sigma rule as used by the flip-flop filter. This allows for good agility and stability and eliminates the need for estimations of standard deviation. Hence, if the p-value returned is less than 0.0027, it indicates a persistent change in traffic, denoting a statistically significant difference between the means of the agile and stable estimated rates. If statistical significance is indicated, then the output avg_rate used is that of the agile estimated rate. In all other cases the stable estimated rate is used as avg_rate.

IV. ANALYSIS AND DISCUSSION OF THE COMPONENTS OF SARE

As a functional unit, the proposed SARE algorithm comprises three components: a TSW as the agile estimator, an EWMA with exponential weighting function as the stable estimator, and the t-test statistical controller to allow for stability and agility simultaneously over many traffic distributions. Before the SARE algorithm as a complete system can be analysed quantitatively and compared against existing algorithms, its individual components need to be investigated. The TSW must be validated as the most agile estimator to use in SARE when compared against the EWMA algorithms. Likewise, the EWMA with exponential weighting factor must be validated for stability.
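The SARE structure described above can be sketched as below. This is not the authors' reference implementation: the sample-window size and the critical t value are illustrative assumptions (t_crit around 4.1 approximates a two-tailed alpha of 0.0027 at 9 degrees of freedom), chosen so the block needs no statistics library for the p-value.

```python
import math
import statistics
from collections import deque


class SARE:
    """Sketch of the proposed SARE estimator: a TSW configured for
    agility, an EWMA with the dynamic weight of Eqn. (1) configured for
    stability, and a one sample t-test controller that switches to the
    agile estimate on a persistent change in traffic."""

    def __init__(self, win_length=0.1, T=2.0, n_samples=10, t_crit=4.1):
        self.win_length = win_length
        self.T = T
        self.agile_rate = 0.0
        self.stable_rate = 0.0
        self.t_front = 0.0
        self.last_arrival = None
        self.samples = deque(maxlen=n_samples)  # recent agile estimates
        self.t_crit = t_crit                    # assumed critical t value

    def update(self, pk_size, now):
        # Agile estimate: TSW with a small window.
        self.agile_rate = ((self.agile_rate * self.win_length + pk_size)
                           / (now - self.t_front + self.win_length))
        self.t_front = now
        # Stable estimate: EWMA with dynamic weight exp(-interpk_time/T).
        if self.last_arrival is not None:
            dt = now - self.last_arrival
            w = math.exp(-dt / self.T)
            self.stable_rate = (1 - w) * (pk_size / dt) + w * self.stable_rate
        else:
            self.stable_rate = self.agile_rate
        self.last_arrival = now
        # One sample t-test: do recent agile samples differ significantly
        # from the stable rate?
        self.samples.append(self.agile_rate)
        if len(self.samples) == self.samples.maxlen:
            mean = statistics.fmean(self.samples)
            sd = statistics.stdev(self.samples)
            if sd > 0:
                t_stat = (mean - self.stable_rate) / (sd / math.sqrt(len(self.samples)))
                if abs(t_stat) > self.t_crit:
                    # Persistent change detected: output the agile rate
                    # and reset the stable estimate to it.
                    self.stable_rate = self.agile_rate
                    return self.agile_rate
        return self.stable_rate
```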
The suitability and applicability of the t-test statistical controller also needs to be explored. Hence, we now explore each of these components individually; subsequently, the performance of the proposed SARE algorithm as an overall rate estimation algorithm is analysed. (Henceforth, the EWMA with exponential weighting factor will be referred to as EWMA(dynamic), and the EWMA with constant weighting factor as EWMA(static).) The algorithms investigated in this section are the TSW, EWMA(static) and EWMA(dynamic).

A. Simulation Setup

The algorithms are implemented using the network simulator OPNET in a DOCSIS network, as shown in Fig. 2 [12]. A cable modem termination system (CMTS) acts as an FTP/TCP server, and cable modems (CMs) act as the clients with a downstream data rate of 10 Mbps. This data rate does not influence the workings of the algorithms; it is set to allow the generated traffic to be transmitted without congestion in the network. All analysis is validated using the MINITAB software package [13]. Traffic sources of CBR, Poisson and Pareto traffic are used in three different traffic scenarios to allow analysis of the algorithms across a range of traffic distributions. The actual traffic generated is TCP/FTP based with a Maximum Segment Size (MSS) of 1500 bytes. CBR and

Fig. 1. Proposed SARE Algorithm


Poisson traffic precisely quantify the behaviour of the estimator. Pareto traffic is used as it corresponds to more commonly used statistically based models that are frequently encountered in network traffic simulation studies, and it has self-similar behaviour often considered to be a realistic model of network traffic [14-16]. For all three traffic sources a traffic load of approximately 6.6 Mbps is generated. CBR and Poisson traffic use file sizes of 25700 bytes (with constant packet lengths) and inter-request times of 0.25 s. Pareto traffic uses a shape parameter of 1.9 and a location parameter of 0.25. It should be noted that there was no specific reasoning for the choice of traffic load, file sizes and inter-request times other than to generate CBR, Poisson and Pareto traffic. The simulation scenarios are replicated five times with different random seeds and run for a duration of 10 minutes. After 3 minutes all of the traffic rates are reduced by 70% and not increased again.

Fig. 2. Network Topology for Simulation Implementation

Table I provides a summary of the optimal configuration parameters of the algorithms for the agile and stable estimators for the three traffic scenarios of CBR, Poisson and Pareto traffic. These were derived by trial and error.

TABLE I. SUMMARY OF CONFIGURED OPTIMAL PARAMETER VALUES

B. TSW as the Agile Estimator of SARE

The TSW, EWMA(static) and EWMA(dynamic) algorithms are now investigated to determine the one that exhibits the most favourable performance in terms of agility. The performance metric used is settle time, as it measures the elapsed time it takes an estimator to produce an estimate within 10% of its nominal value. The settle times are measured when the traffic rate is decreased by 70%, in order to accurately record this metric. The mean of the settle times over five simulation runs is presented along with a 95% confidence interval; lower settle times are better. Results are presented in Table II. As can be seen, it is the TSW that has the lowest settle time for all three traffic scenarios, thereby proving to be the most agile estimator. EWMA(static) exhibits the least favourable performance. Although EWMA(static) and EWMA(dynamic) produce good results, it is the TSW that demonstrates the most encouraging results, with settle times at least half of those experienced by the other algorithms.

TABLE II. SUMMARY OF AGILITY RESULTS

C. EWMA as the Stable Estimator of SARE

The algorithms are now investigated to determine the most suitable one for use as the stable estimator of SARE. The same simulation setup as described in Section A is used, without the 70% decrease in traffic. Stable rate estimators place more emphasis on past observations whilst resisting noise in individual observations. Their purpose is to ignore short term changes that are natural to the traffic's behaviour. Thus, two quantitative measures of stability are used to evaluate stability.

The first metric used is the Coefficient of Variation (CV), the ratio of standard deviation to mean, which measures the degree to which measurement noise affects a filter's estimates. The second metric used is the Mean Squared Error (MSE), which measures resistance to transients. MSE penalises filters that are disturbed by large transients for a short duration more than filters that are disturbed by small transients for a longer duration; a large MSE is more likely to cause an adaptive system to make a poor decision. Lower CV and MSE values indicate better stability. Although these metrics are appropriate for stability analysis, the accuracy of the results also needs to be analysed. Weighting factors that are too large result in under-estimation of the rate; hence, even though the stability metrics of CV and MSE may produce low values, the actual estimated rate may be inaccurate. Therefore the accuracy of the results is also considered, using a relative estimation error (REE): the relative error between the estimated rate and the average reference rate, as shown in (2):

REE = |avg_rate - reference_rate| / reference_rate    (2)

Results for CV, MSE and REE for CBR, Poisson and Pareto traffic, with 95% confidence intervals, are presented in Tables III-V.
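The metrics above, together with the settle time of Section IV.B, can be computed over a trace of estimates as sketched below. The function names and the absolute-value form of REE in (2) are our assumptions for illustration.

```python
import statistics


def stability_accuracy_metrics(estimates, reference):
    """CV (stability), MSE (resistance to transients) and REE (accuracy,
    per Eqn. (2)) for a trace of estimated rates against a known
    reference rate."""
    mean = statistics.fmean(estimates)
    cv = statistics.pstdev(estimates) / mean
    mse = statistics.fmean((e - reference) ** 2 for e in estimates)
    ree = abs(mean - reference) / reference
    return cv, mse, ree


def settle_time(times, estimates, nominal, tol=0.10):
    """Elapsed time until the estimate enters and stays within tol (10%
    by default) of the nominal rate, i.e. the agility metric. Returns
    None if the trace never settles."""
    for i, (t, _) in enumerate(zip(times, estimates)):
        if all(abs(x - nominal) <= tol * nominal for x in estimates[i:]):
            return t - times[0]
    return None
```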

For CBR traffic all three algorithms show comparable performance in terms of MSE and CV values; however, EWMA(static) exhibits the poorest accuracy results. For Poisson traffic, EWMA(static) exhibits greater performance in terms of MSE and CV, but it again has greater inaccuracy when compared against the TSW and EWMA(dynamic). For that reason it would not be appropriate for use as the stable estimator in SARE. Its inaccuracy is also evident with Pareto traffic. In the Pareto scenario the TSW exhibits the poorest performance results, illustrating its sensitivity to bursty traffic in terms of stability, perhaps as a result of the non-Normal behaviour of the Pareto distribution. It is EWMA(dynamic) that exhibits the most favourable performance in this scenario. Consequently, the choice of EWMA(dynamic) as the stable estimator of SARE is validated, as it is the most consistent stable estimator across all three traffic scenarios.
TABLE III. CV RESULTS FOR STABILITY ANALYSIS

achieve.Using a t-test controller can eliminate the need for accurate estimations of sigma. The t-test statistic determines if there is a significant difference between the means of two data sets i.e. it determines if the two data sets come from the same or different population sets. In this case the data sets are those of the agile and stable estimated rates. The null hypothesis indicates that both samples come from the same population. The alternate hypothesis indicates that the two sample means are from different populations. The t-test decides which of these hypotheses to accept. The Null hypothesis is assumed true until the t-test produces a value that is less than that of a defined significance level. The significance level, referred to in our context as alpha, indicates whether there is a statistical significance in the difference in the sample means. It is alpha that dictates the levels by which the means are examined. To reflect the characteristics of the 3-Sigma rule that in a normal distribution 99.73% of values lie within 3 standard deviations of the mean, alpha or the statistical significance can be expressed in units of standard deviation of the normal distribution. This is achieved via an error function as shown in (3): (3) where n is the number of standard deviations. Therefore for 3 standard deviations an alpha value of 0.0027 is appropriate. Thus, if the means of the agile and stable estimated rates conform to an alpha value of 0.27%, then the system is deemed to be in control. If it is not in control then the stable estimated rate is set to the agile rate value and this value as chosen as the output estimated rate. We believe that the use of the ttest statistical controller will allow the application of the SARE algorithm to heavy tailed distributions with greater performance exhibited than that of the flip-flop filter. V. 
EVALUATION AND ANALYSIS OF THE PROPOSED SARE ALGORITHM The purpose of this section is to analyse the proposed SARE algorithm in terms of accuracy, agility, stability and cost and to compare its performance against that of the flipflop filter. Thus as before, REE is used as the measure of accuracy, settle times are used for investigation of agility and MSE and CV metrics are used for stability analysis. Up to now the accuracy, agility and stability of TSW and EWMA algorithms have been considered, however the cost, in terms of the computational complexity of the proposed SARE algorithm, has not yet been discussed. This is also investigated in this section.The same simulation setup and traffic scenarios as discussed in Section IV are used. The accuracy, agility and stability of the algorithms is analysed after a decrease in traffic has been introduced to the system. This will indicate how well the controllers perform in detecting a change in traffic and how well the individual rate estimators perform once this change has been detected. A. Accuracy The mean REE results with 95% confidence levels are 20

TABLE IV. MSE RESULTS FOR STABILITY ANALYSIS

TABLE V. REE R ESULTS FOR STABILITY ANALYSIS

D. T-test Statistical Controller

Having validated the choice of the agile and stable estimators of SARE, the use of the one-sample t-test controller is now discussed. Although the SARE and flip-flop filters differ in their choice of stable rate estimator, their main difference is that they employ different controllers. The 3-Sigma Rule used by the flip-flop filter is typically used in a Statistical Process Control context to detect the occurrence of a shift in a process mean, typically in a production environment. It is based on the use of the 3-Sigma Shewhart control chart [10]. Although this approach has proved effective in detecting shifts in the mean, it depends on an accurate estimate of the standard deviation, i.e. sigma, which is a difficult task to

presented in Table VI for CBR, Poisson and Pareto traffic. Only a small difference exists between the results for CBR and Poisson traffic, indicating somewhat comparable performance. A notable difference is shown for the Pareto results, with SARE vastly outperforming the flip-flop filter, which demonstrates extreme inaccuracy for bursty traffic. This is attributed to a combination of the 3-Sigma Rule controller and/or the TSW of the flip-flop filter working with a heavy-tailed distribution instead of a normal distribution. Thus, the controller may inaccurately detect a change in traffic, or may not detect a change in traffic in a timely manner, and the TSW subsequently estimates the traffic rate inaccurately. Alternatively, it may simply be due to the TSW's inaccuracy with heavy-tailed distributions. With Pareto traffic modelling the current Internet traffic trend more closely than Poisson, we highlight the significance of using the t-test statistical controller as well as EWMA(dynamic) as the stable estimator. Overall, SARE produces comparable results to the flip-flop filter for CBR and Poisson traffic but greatly outperforms it for the Pareto scenario.

B. Agility

The main purpose of the agility measure here is to determine how quickly a change in traffic is detected when the traffic rate is reduced by 70%. Consequently, the settle time measures the elapsed time it takes the algorithm to produce an estimate within 10% of its nominal value. Results are presented in Table VII.
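As a concrete illustration of the settle-time metric, the sketch below scans a series of timestamped rate estimates recorded after the traffic reduction and reports the elapsed time until the estimate first falls within 10% of the nominal (post-change) rate. The data layout and function name are assumptions for illustration, not the paper's measurement harness.

```python
def settle_time(samples, nominal, tolerance=0.10):
    """Elapsed time until the estimated rate first comes within
    `tolerance` of the nominal rate. `samples` is a list of
    (timestamp, estimated_rate) pairs taken after the traffic
    change; returns None if the estimate never settles."""
    t0 = samples[0][0]
    for t, rate in samples:
        if abs(rate - nominal) <= tolerance * nominal:
            return t - t0
    return None

# Example: nominal rate drops to 3 units; the estimator tracks it down
# and first lands within 10% of nominal (i.e. within [2.7, 3.3]) at t=3.
trace = [(0.0, 10.0), (1.0, 6.0), (2.0, 4.0), (3.0, 3.2), (4.0, 3.05)]
print(settle_time(trace, nominal=3.0))  # 3.0
```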
TABLE VI. REE RESULTS FOR ACCURACY ANALYSIS

exhibits poorer performance than the proposed SARE algorithm for all three traffic scenarios. This is attributed to a combination of the EWMA(dynamic) algorithm of SARE and the t-test statistical controller. We have already seen that EWMA(dynamic) outperforms the TSW in terms of stability; moreover, with the t-test statistical controller detecting a change in traffic more quickly than the 3-sigma rule controller, the estimated rate is allowed to stabilise faster. Although the differences in CV values are only approximately 0.8% and 0.6% for CBR and Poisson traffic respectively, a significant difference of approximately 25% is exhibited with Pareto traffic. The same is true for the MSE values, and so we verify that the SARE algorithm is more stable than the flip-flop filter.

D. Cost

Algorithms that are fast and simple, that do not require a lot of computational power to process samples of data, and that do not impose large memory requirements for storing data are deemed to be cost-effective algorithms; in other words, they are low in computational complexity. As already discussed, the TSW and EWMA algorithms are not as cost effective as other algorithms because they are based on packet-by-packet estimation. As the traffic load increases the computational cost increases linearly, and with the advent of high-speed networks this computational cost may prove to be substantial.
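To make the packet-by-packet cost concrete, the sketch below follows the TSW rate-estimation pseudocode of RFC 2859 [3]: every packet arrival triggers a fixed handful of arithmetic operations on O(1) state, so the total work grows linearly with the packet rate. Class and parameter names are illustrative.

```python
class TSW:
    """Time Sliding Window rate estimator, after the RFC 2859
    pseudocode. State is three scalars and each packet costs a
    constant number of operations, so cost scales linearly with load."""
    def __init__(self, win_length, initial_rate=0.0):
        self.win = win_length      # averaging interval (seconds)
        self.rate = initial_rate   # current estimate (bytes/s)
        self.t_front = 0.0         # time of the last update
    def update(self, now, pkt_bytes):
        # O(1) work per packet: one multiply, two adds, one divide.
        bytes_in_win = self.rate * self.win + pkt_bytes
        self.rate = bytes_in_win / (now - self.t_front + self.win)
        self.t_front = now
        return self.rate

# 100-byte packets every 10 ms: the estimate converges towards
# 10,000 bytes/s, with the same constant work per arrival.
est = TSW(win_length=1.0)
for i in range(1, 1001):
    est.update(i * 0.01, 100)
```

At twice the packet rate the loop body runs twice as often, which is the linear cost growth noted above.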
TABLE VIII. CV RESULTS FOR STABILITY ANALYSIS


TABLE VII. SETTLE TIME RESULTS FOR AGILITY ANALYSIS

TABLE IX. MSE RESULTS FOR STABILITY ANALYSIS

The lowest settle times are experienced by the proposed SARE algorithm in all three scenarios, with an approximate decrease of at least 1.5 seconds exhibited. The SARE algorithm therefore produces more agile results than the flip-flop filter. However, the results experienced by the flip-flop filter are not a reflection of its inaccurate REE results, in that an extremely poor settle time result was not experienced. This demonstrates that the flip-flop filter did detect a change in traffic and settled at its inaccurate estimated rate after approximately 4 seconds.

C. Stability

Stability analysis is performed on the output estimated rates of the algorithms after a change in traffic has been detected. Results are presented in Tables VIII and IX for the CV and MSE measurements respectively. The flip-flop filter again

We also noted that the flip-flop filter is more complex than the TSW and EWMA filters; however, this is a trade-off for better performance, as the flip-flop can provide agile and stable estimates simultaneously, a feature of which the TSW and EWMA algorithms are incapable. As the proposed SARE algorithm is composed of the TSW and EWMA(dynamic) algorithms, and also employs the t-test statistical controller, it is far more computationally expensive than these algorithms as single entities. However, it needs to be evaluated in terms of how it compares against the flip-flop algorithm. Both SARE and the flip-flop have two estimators, one agile and one stable, and are therefore comparable in this respect. It is the controller of each algorithm that needs to be considered, as this is the main difference between the algorithms. The 3-sigma rule controller of the flip-flop filter only requires one degree of freedom, whereas the t-test statistic controller of the SARE algorithm uses 20

or 30 degrees of freedom. This increases the computational complexity of SARE when compared against that of the flip-flop filter. Eliminating the generation of the p-value in the t-test statistical controller and replacing it with a lookup table of t-statistic critical values would reduce this complexity somewhat by eliminating a number of computational steps. However, although the proposed SARE algorithm is more computationally expensive than the TSW and EWMA filters, this is a trade-off for being agile and stable simultaneously.

CONCLUSION

In this paper a traffic rate estimation technique known as SARE was proposed to produce accurate, agile and stable results. It is composed of two filters, one (a TSW) acting as an agile filter and the other (EWMA(dynamic)) acting as a stable filter. It uses a statistical t-test as a controller to detect changes in the input traffic and to output the most applicable estimated rate, i.e. either agile or stable. Before SARE was analysed in terms of its accuracy, agility and stability, its individual components were analysed and discussed. Analysis and investigation found that (with the optimal configuration parameters) the TSW is the most agile estimator when compared against EWMA(static) and EWMA(dynamic) under various traffic scenarios, whereas EWMA(dynamic) is more stable than EWMA(static) and the TSW. This validated the choice of these estimators in the proposed SARE approach. The use of a t-test statistical controller was also discussed. On comparison of SARE to a similar approach, i.e. the flip-flop filter, SARE produced more favourable performance results in terms of accuracy, agility and stability. Although SARE is more computationally expensive, this is a trade-off for agility and stability.

REFERENCES
[1] Salah, K. and Haidari, F., Performance Evaluation and Comparison of Four Network Packet Rate Estimators. International Journal of Electronic Communications, 2010, 64(11): pp. 1015-1023.
[2] Clark, D. and Fang, W., Explicit Allocation of Best-Effort Packet Delivery Service. IEEE/ACM Transactions on Networking, 1998, 6(4): pp. 362-373.
[3] Fang, W., Seddigh, N. and Nandy, B., A Time Sliding Window Three Colour Marker. RFC 2859, 2000.
[4] Agharebparast, F. and Leung, V.C.M., A New Traffic Rate Estimation and Monitoring Algorithm for the QoS-Enabled Internet, in Proc. of Globecom, 2003.
[5] Young, P., Recursive Estimation and Time-Series Analysis. Springer-Verlag, 1984.
[6] Stoica, I., Shenker, S. and Zhang, H., Core-Stateless Fair Queuing: Achieving Approximately Fair Bandwidth Allocations in High Speed Networks, in Proc. of ACM SIGCOMM, 1998.
[7] Kusmierek, E. and Koodli, R., Random Packet Marking for Differentiated Services. Report, 2000.
[8] Kusmierek, E. and Koodli, R., Flow Fairness using Aggregate Packet Marking for Wireless Mesh Networks, in Proc. of COMSWARE, 2008.
[9] Burgstahler, I. and Neubauer, M., New Modifications of the Exponential Moving Average Algorithm for Bandwidth Estimation, in Proc. of 15th ITC Specialist Seminar, 2002.
[10] Kim, M. and Noble, B., Mobile Network Estimation, in Proc. of ACM MobiCom, 2001.
[11] Montgomery, D.C. and Runger, G.C., Applied Statistics and Probability for Engineers. Wiley, 2006.
[12] OPNET. Available from: www.opnet.com. Last accessed: 20/07/2012.
[13] Minitab. Available from: www.minitab.com. Last accessed: 20/07/2012.
[14] Xie, M., Performance of a Queuing Model with Pareto Input Traffic for Wireless Network Nodes, in Proc. of IEEE Wireless Communications, Networking and Mobile Computing, 2005.
[15] Lackovi, M., Mikac, B. and Sinkovi, V., Network Performance Evaluation by means of Self Similar Traffic Model, in Proc. of Mipro, 2003.
[16] Erramilli, A. et al., Self-Similar Traffic and Network Dynamics, in Proc. of the IEEE, 2002.


