
Traffic Models - II

9/20/2012

Simulation

Agenda: Self-Similar Traffic Models

- Motivation
- Nature of self-similar traffic
- Statistical properties of self-similar processes
- Self-similarity in data traffic
- Self-similarity in simulation; performance issues


In the Past (to ~1990)

Only one kind of traffic was of any significance: voice (telephony). Traffic modeling in the world of telephony was the basis for the initial network models:

- Assumed a Poisson call-arrival process
- Assumed exponentially distributed call durations

A well-established queuing literature was built on these assumptions, which enabled very successful engineering of telephone networks.

The Story Begins with Measurement

In 1989, Leland and Wilson began taking high-resolution traffic traces at Bellcore: Ethernet traffic from a large research lab, mostly IP traffic (with a little NFS), in four data sets over a three-year period.

Measurement methodology: they collected lengthy traces of Ethernet LAN traffic at Bellcore with high-resolution time stamps, then analyzed the statistical properties of the resulting time series. Each observation represents the number of packets (or bytes) observed per time-interval bin: 10 ms, 100 ms, 1 s, 10 s, etc.


Self-Similarity in Traffic Measurement: Network Traffic


[Figure: the same traffic trace viewed at time scales from 100 sec down to 10 msec, with the time scale shrinking by a factor of 10 at each step. Panel labels: "Variability scales like 10^(-0.5) at each step" for the Poisson-like trace, and "Variability scales like 10^(0.8) at each step" for the measured traffic.]

Agenda: Modern Traffic Models

- Nature of self-similar traffic
- Statistical properties of self-similar processes
- Self-similarity in data traffic
- Self-similarity in simulation; performance issues


Self-Similarity

The idea is that something looks the same when viewed at different degrees of magnification, or at different scales along a dimension such as time. It is a unifying concept underlying fractals, chaos, and power laws, and a common attribute of many laws of nature and of phenomena in the world around us. Implications:

- Burstiness exists across many time scales
- There is no natural length of a burst
- Traffic does not necessarily get smoother when you aggregate it (unlike Poisson traffic)


Problems with Poisson Models

A Poisson process:
- When observed on a fine time scale (small bin size), it will appear bursty
- As bin sizes increase, Poisson traffic smooths out, eventually reaching a flat line at the distribution mean

A self-similar process:
- Maintains its bursty characteristic when aggregated over a wide range of time scales


Self-Similarity and Fractals

Fractals are objects of infinite detail.

- Self-similar: a part is identical to the whole
- Scale-free: statistical properties are independent of the scale of observation
- Infinite detail: the closer I look, the more I see


Example of a Fractal Line: Koch's Snowflake

Start with a line segment 1 unit long. Replace each line segment with the following generator. Repeat forever:


Example of a Fractal Line: Koch's Snowflake

In the limit we obtain the Koch curve:
- Infinite length
- Self-similarity: "zooming in" on a part of the Koch curve, we see copies of the whole


Self-similarity in Koch's curve


Koch's Snowflake

Koch's snowflake repeatedly adds a triangle to the middle of each side of the previous figure, at 1/3 the previous size. Note: the boundary length approaches infinity, yet the enclosed area is less than that of a circle around the original triangle.

Examples of fractals in nature: trees, lungs, rain clouds, electrical discharges, shorelines.


Fractals: river drainage

More fractals

Fractal Life

Power Law (Pareto) Distribution

A scale-free process is best described with a power law:

    P[X > x] = (k/x)^a,  x >= k

- a > 0: shape parameter (slope)
- k > 0: location parameter
- PDF: f(x) = a k^a x^(-a-1) for x >= k
- Infinite mean for a <= 1
- Infinite variance for a <= 2
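As an illustration (a minimal sketch; the function name and parameters are my own), Pareto variates with P[X > x] = (k/x)^a can be drawn by inverse-transform sampling:

```python
import random

def pareto_sample(alpha, k, n, seed=42):
    """Inverse-transform sampling from P[X > x] = (k/x)**alpha:
    if U is uniform on (0, 1], then k / U**(1/alpha) is Pareto."""
    rng = random.Random(seed)
    return [k / (1.0 - rng.random()) ** (1.0 / alpha) for _ in range(n)]

# alpha = 1.2 < 2: finite mean but infinite variance, so the sample
# is dominated by a few very large values.
xs = pareto_sample(alpha=1.2, k=1.0, n=100_000)
print(min(xs) >= 1.0)   # all samples lie at or above the location k
print(max(xs) > 100)    # heavy tail: a few samples are enormous
```

Note how the sample maximum dwarfs the typical value; with a light-tailed distribution this essentially never happens.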

Observed Phenomena

- Few multi-billionaires, but many people with modest income [Pareto, 1896]
- Few frequent words, but many infrequent words [Zipf, 1932]
- Few mega-cities, but many small towns [Zipf, 1949]
- Few web pages with high degree, but many with low degree [Kumar et al., 1999] [Barabási & Albert, 1999]

All of the above obey power laws.

Distributional Tails

A particularly important part of a distribution is its (upper) tail, P[X > x]:
- Large values dominate statistics and performance
- The shape of the tail is critically important

Light tails vs. heavy tails:
- Light: exponential or faster decline, e.g. f1(x) = 2 exp(-2(x-1))
- Heavy: slower than any exponential, e.g. f2(x) = x^(-2)

A Fundamental Shift in Viewpoint

Traditional modeling methods have focused on distributions with light tails:
- Tails that decline exponentially fast (or faster)
- Arbitrarily large observations are vanishingly rare

Heavy-tailed models behave quite differently:
- Arbitrarily large observations have non-negligible probability
- Large observations, although rare, can dominate a system's performance characteristics

Heavy Tails in Computer Networks

Sizes of data objects in computer systems:
- Files stored on Web servers
- Data objects / flow lengths traveling through the Internet
- Files stored in general-purpose Unix file systems
- I/O traces of filesystem, disk, and tape activity

Process/job lifetimes

Node degree in certain graphs:
- Inter-domain and router structure of the Internet
- Connectivity of WWW pages

Examining Tails

Examining tails is best done using log-log complementary CDFs: plot log(1 - F(x)) vs. log(x). [Figure: the heavy tail 1 - F2(x) appears as a straight line on log-log axes, while the light tail 1 - F1(x) falls off sharply.]
Aggregated Traffic: Superposition of High-Variability ON-OFF Sources

An extension of traditional ON-OFF models: allow the ON and OFF periods to have infinite variance (high variability, the "Noah effect"). [Figure: individual ON/OFF sources X1(t), X2(t), X3(t) and their superposition S3(t) over time.]

Aggregated Traffic

What can be said about the aggregate?
- In terms of the assumed type of randomness for durations and components
- In terms of the implied type of burstiness

Explanation of Self-Similarity

Consider a set of processes which are either ON or OFF:

- The distribution of ON times is heavy-tailed: the sizes of files on a server are heavy-tailed, and the transfer times have the same type of characteristics.
- The distribution of OFF times is heavy-tailed: source behavior triggered by humans (e.g. HTTP sessions) can have extremely long periods of latency.

Mandelbrot's Types of Randomness

Distribution functions / random variables:
- Mild: finite variance (Gaussian)
- Wild: infinite variance (heavy tails)

Correlation function of the stochastic process:
- Mild: short-range dependence (SRD, Markovian)
- Wild: long-range dependence (LRD)

By Walter Willinger, AT&T Labs-Research

Mandelbrot's Types of Burstiness

A 2x2 classification by distribution function (mild/wild tails) and correlation structure (SRD/LRD):

                    SRD        LRD
    Mild tails      smooth     bursty
    Wild tails      Bursty     BURSTY

- Tail-driven burstiness: the Noah effect
- Dependence-driven burstiness: the Joseph effect

By Walter Willinger, AT&T Labs-Research

Type of Burstiness: smooth

[Figure: white noise trace; CDF tail 1-F(x) on a log scale vs. x on a linear scale (log-linear: light, exponential tail); correlation function r(n) on a log scale vs. lag n on a linear scale.]
By Walter Willinger, AT&T Labs-Research

Type of Burstiness: bursty

[Figure: colored noise trace; CCDF 1-F(x) on a log scale vs. x on a linear scale (log-linear: light tail); correlation function r(n) on a log scale vs. lag n on a log scale (log-log: long-range dependence).]
By Walter Willinger, AT&T Labs-Research

Type of Burstiness: Bursty

[Figure: stable noise trace; CCDF 1-F(x) on a log scale vs. x on a log scale (log-log: heavy tail); correlation function r(n) on a log scale vs. lag n on a linear scale (log-linear: short-range dependence).]
By Walter Willinger, AT&T Labs-Research

Type of Burstiness: BURSTY

[Figure: colored stable noise trace; CCDF 1-F(x) on a log scale vs. x on a log scale (log-log: heavy tail); correlation function r(n) on a log scale vs. lag n on a log scale (log-log: long-range dependence).]
By Walter Willinger, AT&T Labs-Research

Noah Effect

"And the flood was forty days upon the earth; and the waters increased, and bore up the ark, and it was lifted up above the earth. And the waters prevailed, and increased greatly upon the earth; and the ark went upon the face of the waters." (Genesis 7:17-18)

The Noah effect is the idea that persistent bursts may abruptly arise in a time series. It describes discontinuity.

Joseph Effect

"Behold, there came seven years of great plenty throughout the land of Egypt, and there shall arise after them seven years of famine." (Genesis 41:29-30)

The Joseph effect is the idea that changes in a time series tend to be part of larger trends and cycles more often than they are completely random. It describes persistence.

Impact on Network Performance

The Noah and Joseph effects push in different directions, but they add up to this: trends in traffic are real, but they can vanish as quickly as they come. Thus we can expect what has been happening to continue to happen, but we should also expect the unexpected.

Impact on Network Performance

- Self-similar burstiness can lead to the amplification of packet loss
- The burstiness cannot be smoothed away
- Buffering has limited effectiveness: the queue-length distribution decays more slowly than exponentially, vs. the exponential decay associated with Markovian input


Self-Similarity as a Statistical Property

Self-similarity manifests itself in several equivalent fashions:
- The Hurst effect
- Slowly decaying variance
- Long-range dependence
- Non-degenerate autocorrelations

Self-Similarity: Definition

Let X = (X_1, X_2, ...) be a stationary process representing the amount of data transmitted in consecutive short time periods. The m-aggregated process X^(m) = (X_1^(m), X_2^(m), ...), for m >= 1, is defined by

    X_t^(m) = (1/m) (X_{tm-m+1} + ... + X_{tm}),   m, t in N.

X is self-similar with Hurst parameter H if X and m^(1-H) X^(m) have the same variance and autocorrelation.

Aggregation

Aggregating the time series X(n) means smoothing it by averaging the observations over non-overlapping blocks of size m, to get a new series X^(m)(i).

Aggregation: An Example

Suppose the original time series X(t) contains the following (made-up) values:

    2 7 4 12 5 0 8 2 8 4 6 9 11 3 3 5 7 2 9 1 ...

Then the aggregated series for m = 2 is:

    4.5 8.0 2.5 5.0 6.0 7.5 7.0 4.0 4.5 5.0 ...

The aggregated series for m = 5 is:

    6.0 4.4 6.4 4.8 ...

The aggregated series for m = 10 is:

    5.2 5.6 ...
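The aggregation step is a one-liner; this sketch (function name is my own) reproduces the example values:

```python
def aggregate(series, m):
    """Average non-overlapping blocks of size m (a trailing partial
    block, if any, is dropped)."""
    return [sum(series[i:i + m]) / m
            for i in range(0, len(series) - m + 1, m)]

x = [2, 7, 4, 12, 5, 0, 8, 2, 8, 4, 6, 9, 11, 3, 3, 5, 7, 2, 9, 1]
print(aggregate(x, 2))   # [4.5, 8.0, 2.5, 5.0, 6.0, 7.5, 7.0, 4.0, 4.5, 5.0]
print(aggregate(x, 5))   # [6.0, 4.4, 6.4, 4.8]
print(aggregate(x, 10))  # [5.2, 5.6]
```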

The Hurst Effect

For almost all naturally occurring time series, the rescaled adjusted range statistic (also called the R/S statistic) for sample size n obeys

    E[R(n)/S(n)] ~ c n^H   as n -> infinity,

for a positive constant c. Here S(n) is the sample standard deviation, and R(n) is the range of the mean-adjusted cumulative sums W_k = (X_1 + X_2 + ... + X_k) - k * mean(X), k = 1, 2, ..., n:

    R(n) = max(0, W_1, ..., W_n) - min(0, W_1, ..., W_n).

The Hurst Effect (continued)

- For models with only short-range dependence, H is almost always 0.5
- For self-similar processes, 0.5 < H < 1.0
- This discrepancy is called the Hurst effect, and H is called the Hurst parameter
- H is a single parameter that characterizes self-similar processes

R/S Statistic: An Example

Suppose the original time series X(t) contains the following (made-up) values: 2 7 4 12 5 0 8 2 8 4 6 9 11 3 3 5 7 2 9 1. There are 20 data points in this example.

For R/S analysis with n = 1, you get 20 samples, each of size 1; R(n) and S(n) are both 0, so the statistic is degenerate:

Block 1: mean = 2; W_1 = 0; R(n) = 0; S(n) = 0
Block 2: mean = 7; W_1 = 0; R(n) = 0; S(n) = 0

For R/S analysis with n = 2, you get 10 samples, each of size 2:

Block 1: mean = 4.5; W_1 = -2.5; W_2 = 0; R(n) = 2.5; S(n) = 2.5; R(n)/S(n) = 1
Block 2: mean = 8; W_1 = -4; W_2 = 0; R(n) = 4; S(n) = 4; R(n)/S(n) = 1

... and so on. For n = 10 you get 2 samples, each of size 10; for n = 20, 1 sample of size 20.
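The hand computations above can be sketched in code. This minimal implementation (function name is my own) uses the population standard deviation and reproduces R/S = 1 for the size-2 blocks:

```python
import math

def rs_statistic(block):
    """R/S statistic of one block: range of the mean-adjusted cumulative
    sums W_k (together with 0), divided by the block's standard deviation."""
    n = len(block)
    mean = sum(block) / n
    w, cum = [], 0.0
    for x in block:
        cum += x - mean
        w.append(cum)
    r = max(w + [0.0]) - min(w + [0.0])
    s = math.sqrt(sum((x - mean) ** 2 for x in block) / n)
    return r / s if s > 0 else 0.0

print(rs_statistic([2, 7]))    # 1.0 (block 1 for n = 2)
print(rs_statistic([4, 12]))   # 1.0 (block 2 for n = 2)
```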

R/S Diagram

[Figure: log-log plot of the R/S statistic vs. block size n, with reference lines of slope 1.0 and slope 0.5; the self-similar process falls on a line between the two.]

Heavy-Tailed Distribution

A distribution of a random variable X is said to be heavy-tailed if

    P[X > x] ~ x^(-a)   as x -> infinity,   0 < a < 2.

If a <= 1, the distribution has an infinite mean. If a <= 2, the distribution has an infinite variance.

Slowly Decaying Variance

For most processes, the variance of a sample diminishes quite rapidly as the sample size is increased, and soon stabilizes. For self-similar processes, the variance decreases very slowly, even when the sample size grows quite large.

Variance-Time Plot

The variance-time plot is one means of testing for the slowly decaying variance property. It plots the variance of the aggregated sample versus the block size, on a log-log plot. For most processes, the result is a straight line with slope -1; for self-similar processes, the line is much flatter.
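As a sanity check on the slope -1 claim for short-range dependent traffic, this sketch (function names are my own) aggregates an i.i.d. series and estimates the variance-time slope:

```python
import math
import random

def aggregate(series, m):
    """Average non-overlapping blocks of size m."""
    return [sum(series[i:i + m]) / m
            for i in range(0, len(series) - m + 1, m)]

def variance(xs):
    mu = sum(xs) / len(xs)
    return sum((x - mu) ** 2 for x in xs) / len(xs)

# For an i.i.d. (short-range dependent) series, Var(X^(m)) falls like
# m**-1, i.e. slope -1 on the log-log variance-time plot.
rng = random.Random(0)
x = [rng.random() for _ in range(100_000)]
v1 = variance(x)
v100 = variance(aggregate(x, 100))
slope = math.log10(v100 / v1) / math.log10(100)
print(-1.1 < slope < -0.9)   # close to -1; self-similar traffic would be flatter
```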

Variance-Time Plot

[Figure build-up: variance of the sample on a logarithmic scale (from about 100.0 down to 0.0001) vs. block size m on a logarithmic scale (from 10 up to about 10^7). Most processes follow a straight line with slope -1; a self-similar process follows a line with slope flatter than -1.]

Long-Range Dependence (LRD)

The autocorrelation function follows a power law rather than decaying exponentially:

    r(k) ~ k^(-b)   as k -> infinity,   0 < b < 1.

The Hurst parameter is H = 1 - b/2, so a self-similar process shows long-range dependence when 0.5 < H < 1.

Long-Range Dependence (LRD)

LRD captures the memory of the behavior. Recall: autocorrelation is a statistical measure of the relationship, if any, between a random variable and itself at different time lags:
- Positive correlation: a big observation is usually followed by another big one, or a small by a small
- Negative correlation: a big observation is usually followed by a small one, or a small by a big
- No correlation: observations are unrelated
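Autocorrelation at lag k can be estimated directly from a series; a minimal sketch (function name is my own):

```python
def autocorr(xs, k):
    """Lag-k autocorrelation coefficient of a series."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    cov = sum((xs[i] - mu) * (xs[i + k] - mu) for i in range(n - k)) / n
    return cov / var

x = [1, -1] * 50          # perfectly alternating series
print(autocorr(x, 1))     # -0.99: big followed by small (negative correlation)
print(autocorr(x, 2))     # 0.98: positive correlation at even lags
```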

Autocorrelation Function

Shows the value of the autocorrelation coefficient for different time lags k. [Figure: autocorrelation coefficient (from -1 to +1) vs. lag k from 0 to 100, with significant positive correlation at short lags and no statistically significant correlation beyond some lag.]

Long-Range Dependence

For most processes (e.g., Poisson, or compound Poisson), the autocorrelation function drops to zero very quickly (usually immediately, or exponentially fast). For self-similar processes, the autocorrelation function drops very slowly (i.e., hyperbolically) toward zero, and may never reach zero.

Autocorrelation Function

[Figures: autocorrelation coefficient vs. lag k (0 to 100). A typical short-range dependent process drops to zero almost immediately; a typical long-range dependent process decays slowly and remains well above zero over all lags shown.]

Non-Degenerate Autocorrelations

For self-similar processes, the autocorrelation function of the aggregated process is indistinguishable from that of the original process:
- If the autocorrelation coefficients match for all lags k, the process is called exactly self-similar
- If the autocorrelation coefficients match only for large lags k, the process is called asymptotically self-similar

[Figure: autocorrelation coefficient vs. lag k for the original and the aggregated self-similar process; the two curves nearly coincide.]


Why Is Traffic Self-Similar?

- Nature works non-uniformly
- Applications/users are bursty
- File sizes and requests are skewed [Crovella et al.]
- Effect of topology and TCP [Feldmann et al.]
- Not all flows are equal [Sarvotham, Riedi et al.]: a few flows dominate a link ("alpha flows")

M. and C. Faloutsos, Advanced Networks

File Sizes in the WWW


Web Traffic and Distributions

[Figures: distribution of transmission times, and distribution of file requests by size, from real Web traces; both distributions are skewed.] [Crovella et al.]

M. and C. Faloutsos, Advanced Networks

Link Traffic and Dominant Flows

[Figure: overall traffic decomposed into the single strongest connection and the residual traffic.]

- The dominant flows are responsible for the bursts
- The other flows exhibit long-range dependence

Riedi, Baraniuk et al., INCITE project, Rice U.
M. and C. Faloutsos, Advanced Networks

Statistical Behavior of Link Traffic

Averaging Traffic

Nature of Self-Similar Traffic

Burstiness: small variations over small time periods, big variations over big time periods (see the figure two slides ago). As a result, if the traffic averaged over longer periods is plotted, one sees the same percentage of variation as when it is averaged over short time periods (figure on slide 87, left and right columns). Note: in the case of Poisson traffic, the percentage of variation decreases as the period over which the traffic values are averaged increases (middle column on the previous slide).

Application to Data Traffic

Data traffic is often well modeled as a self-similar process in many practical networking situations:
- Ethernet traffic
- WWW traffic
- SS7 traffic
- TCP traffic
- FTP and Telnet traffic
- VBR video

Self-Similarity of Ethernet Traffic (1/3)

Pictorial proof. Source: W. E. Leland et al., "On the Self-Similar Nature of Ethernet Traffic," ACM SIGCOMM '93.

Self-Similarity of Ethernet Traffic (2/3)

Pictorial proof. Source: W. E. Leland et al., "On the Self-Similar Nature of Ethernet Traffic," ACM SIGCOMM '93.

Self-Similarity of Ethernet Traffic (3/3)

- The authors estimated a Hurst parameter of H = 0.9; the higher the load, the higher H
- Aggregating several streams does not remove self-similarity
- We know that Poisson is not a good model in this case. But what is? A superposition of many Pareto-like ON/OFF sources.

World Wide Web Traffic

Crovella and Bestavros studied over 0.5 million requests for web documents at Boston University:
- Traffic generated by browsers was self-similar
- A good fit was obtained by modeling each browser as an ON/OFF source with a Pareto distribution
- The size distribution of files available over the web is heavy-tailed

Source: Crovella and Bestavros, "Self-Similarity in World Wide Web Traffic: Evidence and Possible Causes," ACM SIGMETRICS, May 1996.

TCP, FTP, and Telnet

- Poisson models underestimate the burstiness of TCP traffic over a wide range of time scales
- For Telnet, connection arrivals are well modeled as Poisson, but the Poisson assumption for packet arrivals is not warranted
- FTP session arrivals are well modeled as Poisson, but packet arrivals are much more bursty
- For FTP, the distribution of the number of bytes in each burst has a heavy upper tail

Source: Paxson and Floyd, "Wide Area Traffic: The Failure of Poisson Modeling," IEEE/ACM Transactions on Networking, June 1995.

Variable Bit Rate Video

JPEG video has variable-length frames due to the compression and encoding algorithm. Experiments with Star Wars and other content showed:
- Video transmission exhibits self-similarity
- Frame length conforms to a Pareto distribution (at least in the tail of the distribution)

Influence of TCP

The TCP protocol, currently used by most applications over the Internet, introduces self-similarity. The speed of data transmission under TCP is influenced by congestion control when a packet gets lost. Through this mechanism, the many independent TCP connections that run over the Internet become dependent on one another. The interaction is quite complex and involves the retransmission process after a time-out.

Voice and video streaming do not use TCP. As these types of applications become more important over the Internet, it can be expected that traffic will become less self-similar. Arrival patterns of new sessions (e.g. Telnet sessions or Web server sessions) have been observed to follow a Poisson distribution.


Self-Similar Traffic in Simulation

A superposition of many Pareto-distributed ON-OFF sources can be used to generate self-similar traffic. The Pareto distribution is heavy-tailed: its tail decays much more slowly than that of the exponential distribution. A typical sample includes many small values and a few very large values (bursty).
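A sketch of such a generator (all names and parameters are my own; period lengths are capped at the trace length for efficiency):

```python
import random

def onoff_trace(alpha, n_slots, rng):
    """0/1 activity trace of one ON/OFF source whose ON and OFF period
    lengths are Pareto(alpha)-distributed (heavy-tailed for alpha < 2)."""
    trace, on = [], rng.random() < 0.5
    while len(trace) < n_slots:
        u = 1.0 - rng.random()                 # uniform on (0, 1]
        period = int(1.0 / u ** (1.0 / alpha)) # Pareto-distributed length >= 1
        period = min(period, n_slots - len(trace))
        trace.extend([1 if on else 0] * period)
        on = not on
    return trace

def self_similar_traffic(n_sources=50, alpha=1.4, n_slots=10_000, seed=7):
    """Packets per slot from a superposition of Pareto ON/OFF sources."""
    rng = random.Random(seed)
    traces = [onoff_trace(alpha, n_slots, rng) for _ in range(n_sources)]
    return [sum(t[i] for t in traces) for i in range(n_slots)]

traffic = self_similar_traffic()
print(len(traffic))                              # 10000 slots
print(0 <= min(traffic) <= max(traffic) <= 50)   # at most 1 packet per source per slot
```

Estimating the Hurst parameter of the resulting series (e.g. via an R/S or variance-time analysis) should yield H > 0.5, unlike a Poisson stream.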

How Inaccurate Are Older Models?


Self-Similarity: Performance Implications

- If actual traffic is more bursty than originally modeled, the original models underestimate average delay and blocking
- Self-similarity leads to higher delays and higher blocking probabilities
- Conclusion: self-similar traffic fits poorly with traditional queuing-theory results

Traffic Models (1/2)

Traffic Models (2/2)
