ABSTRACT
In this paper, we provide a complete end-to-end delay analysis, including the relay nodes, for instant messages. The Message Session Relay Protocol (MSRP) is used to provide congestion control for large messages in the Instant Messaging (IM) service. Large messages are broken into several chunks. According to the IETF specification of the MSRP relay extensions, these chunks may traverse at most two relay nodes before reaching the destination. We discuss the current solutions for sending large instant messages and introduce a proposal to reduce message flows in the IM service. The analysis presented in this paper is divided into two parts. In the first part, we consider a virtual traffic parameter, i.e., the relay nodes are stateless and non-blocking for scalability purposes. This type of relay node is also assumed to have an input at a constant bit rate. The second part of the analysis considers the relay nodes to be blocking and the input parameter to be exponential. The performance analysis with the models introduced in this paper is simple and straightforward, and leads to reduced message flows in the IM service. Moreover, a delay-based optimization problem can easily be deduced from our model analysis.
Since we consider that the relay node need not keep transaction states of the SEND flows, chunks in the buffer are served in the order of their delivery time stamp tags, not their arrival times. There is also no distinct relation between the delivery time stamp of a chunk and its arrival time; thus, a chunk with an earlier delivery time stamp than another chunk, though it arrives later, may be served first. This may happen due to the well-known traffic distortion phenomenon.

Here \(\alpha_i\), \(L_i\) and \(A_{ik}\) are the input rate, chunk size and the arrival time of chunk \(k\) of flow \(i\), respectively. The delivery order time stamp of chunk \(k\) of flow \(i\) is updated at the next relay node with an increment of \(\frac{L_{\max}}{c} + \frac{L_i}{\alpha_i}\), and chunks are served in increasing order of their previous node's delivery order time tag, where \(L_{\max}\) is the maximum chunk size over all flows. Under these conditions, it is easy to perceive that the worst case delay of a flow \(i\) at a relay node is bounded: since the burst of each flow is bounded and the capacity of any link is no less than the average rate of the flows traversing the link, there exists a worst case delay bound in the network, i.e., the worst case delay of a flow to traverse any pair of relay nodes is bounded.

\[
\left[\beta_1 + \alpha_1\left(R_{i_n} - \max\{R_{i_1-1},\, \min\{A_{i_1}, A_{i_2}, \ldots, A_{i_n}\}\}\right)\right]
+ \left[\beta_2 + \alpha_2\left(R_{j_p} - \max\{R_{j_1-1},\, \min\{A_{j_1}, A_{j_2}, \ldots, A_{j_p}\}\}\right)\right]
\ge \sum_{s=m}^{k} L_s
\quad (4)
\]

where \(A_i\) is the arrival time of chunk \(P_i\), \(i = 1, 2, \ldots\); we refer to \(F(t_1, t_2) = \beta + \alpha(t_2 - t_1)\) in the time interval \((t_1, t_2]\) as the traffic function of this flow with the traffic parameter \((\beta, \alpha)\). The \((\sigma, \rho)\) traffic model [12] has an additive property, which we apply below.
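To make the traffic function concrete, the following Python sketch (with hypothetical chunk sizes and arrival times; the helper `conforms` is ours, not part of the paper) checks whether a trace of chunk arrivals obeys \(F(t_1, t_2) = \beta + \alpha(t_2 - t_1)\): the bytes arriving in any interval may not exceed the burst allowance \(\beta\) plus the rate allowance \(\alpha\) times the interval length.

```python
def conforms(arrivals, beta, alpha):
    """Check a chunk trace against the traffic function
    F(t1, t2) = beta + alpha * (t2 - t1): the total size of the
    chunks arriving in any interval may not exceed F over it.

    arrivals: list of (arrival_time, chunk_size) pairs.
    """
    times = sorted({t for t, _ in arrivals})
    # The bound is tightest on intervals that start and end at
    # arrival instants, so it is enough to probe those.
    for t1 in times:
        for t2 in times:
            if t2 < t1:
                continue
            total = sum(size for t, size in arrivals if t1 <= t <= t2)
            if total > beta + alpha * (t2 - t1):
                return False
    return True

# Hypothetical flow: a two-chunk burst at t=0, then one chunk per second.
flow = [(0.0, 500), (0.0, 500), (1.0, 500), (2.0, 500)]
print(conforms(flow, beta=1000, alpha=500))  # True: burst fits in beta
print(conforms(flow, beta=100, alpha=500))   # False: burst exceeds beta
```

Shrinking \(\beta\) below the initial burst size breaks the bound at the burst instant, which is exactly the role the burst parameter plays in the propositions that follow.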
Proposition 1: Given two flows with traffic parameters \((\beta_1, \alpha_1)\) and \((\beta_2, \alpha_2)\), the traffic parameter of the aggregated traffic of the two flows is \((\beta_1 + \beta_2, \alpha_1 + \alpha_2)\).

Proof: Assume that chunks are ordered by their delivery order. Given any two chunks \(P_k\) and \(P_m\) \((k \ge m)\) of the aggregated flow, assume that the chunks \(P_{i_1}, P_{i_2}, \ldots, P_{i_n}\), with \(i_1 < i_2 < \cdots < i_n\) and \(n \le (k - m + 1)\), belong to the first flow.

Application of Theorem 1: If the traffic functions of all flows are known, the virtual aggregated traffic function can be derived by Theorem 1. However, the chunk pattern may be distorted at a relay node. In such a case, we can provide the following relation for a flow in terms of the worst case delay of the outgoing traffic.

Proposition 2: Assume that the traffic parameter of the input traffic of a SEND chunk flow at a relay node is \((\beta, \alpha)\) and that the worst case delay to traverse the relay node is \(D\) (let the mean service time of a chunk at this node be \(d\)). Then the output traffic of this flow can be characterized as \((\beta', \alpha)\), where the buffer requirement is
\[
\beta' = \max\{0,\, \alpha(D - d) + L_{\max}\} + \beta.
\]
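A minimal Python sketch of how Propositions 1 and 2 work together (the numbers are hypothetical and the function names are ours): aggregate the per-flow parameters, then push the aggregate through a relay node to obtain the burst parameter of the outgoing traffic.

```python
def aggregate(params):
    """Proposition 1: traffic parameters add under aggregation.
    params: list of (beta_i, alpha_i) pairs."""
    return sum(b for b, _ in params), sum(a for _, a in params)

def output_burst(beta, alpha, D, d, L_max):
    """Proposition 2: burst parameter beta' of the outgoing traffic of
    a flow crossing a relay node with worst-case delay D and mean
    service time d; the rate alpha is unchanged."""
    return max(0.0, alpha * (D - d) + L_max) + beta

# Two hypothetical flows aggregated, then passed through one relay node.
beta, alpha = aggregate([(1000, 500), (400, 250)])
print(beta, alpha)  # 1400 750
print(round(output_burst(beta, alpha, D=0.1, d=0.02, L_max=500), 6))  # 1960.0
```

Note that only the burst term grows across a hop; the rate \(\alpha\) is preserved, which is what lets the propositions be chained over the two relay nodes.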
To see this, consider chunks \(P_m\) and \(P_k\) \((k \ge m)\) of the flow at the current relay node. We have
\[
\beta' + \alpha\left[R_k + d - \max\{R_{m-1} + d,\; R_m + D\}\right]
\ge \min\left\{\beta + \alpha(R_k - R_{m-1}),\; \beta + L_{\max} + \alpha(R_k - R_m)\right\}
\quad (6)
\]
and
\[
\beta + \alpha\left[R_k - \max\{R_{m-1},\, \min\{A_m, A_{m+1}, \ldots, A_k\}\}\right] \ge \sum_{i=m}^{k} L_i,
\qquad \text{i.e.,} \qquad
\alpha(R_k - R_{m-1}) \ge \sum_{i=m}^{k} L_i
\quad (7)
\]
Now let the previous node's delivery order time tag of a chunk \(P_i\), \(i = 1, 2, \ldots\), at the outgoing link of the relay node be \(R_i' = R_i + D + \frac{L_{\max}}{\alpha}\). Thus, from Eq. (6) and (7) we have
\[
\beta + \alpha\left[R_k' - \max\{R_{m-1}',\, \min\{T_m, T_{m+1}, \ldots, T_k\}\}\right]
\ge \min\left\{\beta + \alpha(R_k - R_{m-1}),\; \beta + L_{\max} + \alpha(R_k - R_m)\right\}
\]
and \(T_k > T_m\), where \(R_i\) and \(T_i\) are the previous node's delivery time tag and the delivery time of \(P_i\) at the current node. Thus
\[
R_m > R_k \ge R_i \quad \text{for all } m < i < k \quad (10)
\]
and
\[
T_k > T_i \ge T_m \quad \text{for all } m < i < k \quad (11)
\]
In other words, \(P_m\) is transmitted before the chunks \(P_{m+1}, \ldots, P_k\); however, its previous node's delivery time tag is greater than that of the chunks \(P_{m+1}, \ldots, P_k\). Thus
\[
\min\{A_{m+1}, \ldots, A_k\} > T_m - \frac{L_m}{c} \quad (12)
\]
Since \(P_{m+1}, \ldots, P_k\) arrive after \(T_m - \frac{L_m}{c}\) and depart before \(P_k\) at the current relay node, the node serves continuously in \((T_m, T_k]\), so
\[
T_k = T_m + \frac{\sum_{i=m+1}^{k} L_i}{c} \quad (13)
\]
and, bounding the arrivals in this interval by the traffic functions of the \(v\) flows,
\[
\sum_{i=m+1}^{k} L_i \le \sum_{i=1}^{v} (\beta_i - \alpha_i \theta_i) + \left(\sum_{i=1}^{v} \alpha_i\right)\left[R_k - \left(T_m - \frac{L_m}{c}\right)\right] \quad (15)
\]
From Eq. (13) and Eq. (15) we have
\[
T_k = T_m + \frac{\sum_{i=m+1}^{k} L_i}{c}
\le T_m + \frac{\left(\sum_{i=1}^{v} \alpha_i\right)\left[R_k - \left(T_m - \frac{L_m}{c}\right)\right] + \sum_{i=1}^{v} (\beta_i - \alpha_i \theta_i)}{c}
\le R_k + \frac{L_{\max}}{c} + \frac{\sum_{i=1}^{v} (\beta_i - \alpha_i \theta_i)}{c}
\quad (16)
\]
If there does not exist such an \(m\), then \(P_1, \ldots, P_{k-1}\) all leave the node before \(P_k\), and thus
\[
T_k = \frac{\sum_{i=1}^{k} L_i}{c}
\le \frac{\left(\sum_{i=1}^{v} \alpha_i\right) R_k + \sum_{i=1}^{v} (\beta_i - \alpha_i \theta_i)}{c},
\qquad \text{i.e.,} \qquad
T_k - R_k \le \frac{\sum_{i=1}^{v} (\beta_i - \alpha_i \theta_i)}{c} + \frac{L_{\max}}{c},
\]
and proposition 3 is proved.

The delay bound of proposition 3 can be tightened further. For instance, if \(\left(\sum_{i=1}^{v} \alpha_i\right)/c \to 0\), then the worst case delay bound would be \(\left(\sum_{i=1}^{v} \beta_i + L_{\max}\right)/c\). On the other hand, if \(\theta = \min_i \{\theta_i\}\) and the delivery time tags at the previous node of all chunks are decreased by \(\theta\), then the traffic functions of all flows remain the same, and the actual worst case delay bound from proposition 3 is \(\left(\sum_{i=1}^{v} \beta_i + L_{\max}\right)/c - \theta\). Therefore, it is possible to tighten the worst case delay in this instance as well. If all chunks' delivery time stamps at the previous node are increased or decreased by a constant at the entrance to a relay node, their delivery time remains unchanged. If all chunks' previous node's delivery time tags are decreased by \(\theta\), then, applying proposition 3, for any chunk \(P_k\) we have the following:
\[
T_k - (R_k - \theta) \le \frac{\sum_{i=1}^{v} \left[\beta_i - \alpha_i (\theta_i - \theta)\right] + L_{\max}}{c},
\]
i.e., the worst case delay is bounded by
\[
T_k - R_k \le \frac{\sum_{i=1}^{v} \left[\beta_i - \alpha_i (\theta_i - \theta)\right] + L_{\max}}{c} - \theta
\quad (18)
\]

Application of proposition 2 and proposition 3: The proposed propositions are straightforward to use for performance analysis. From the above relation, we can also characterize the outgoing traffic parameter of a relay node for a given propagation delay \(\delta\). Let \(\delta\) be the propagation delay of a chunk of a flow to the next node; then the input traffic parameter for the next relay node is \((\beta', \alpha)\), where the buffer requirement is
\[
\beta' = \max\{0,\, \alpha(D + \delta - d) + L_{\max}\} + \beta.
\]
Now if we take the propagation delay into account, the increment for flow \(n\), \(1 \le n \le v\), should be \(\frac{L_{\max}}{c} + \frac{L_n}{\alpha_n} + \delta\).
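As a quick numerical illustration of the proposition 3 delay bound and its \(\theta\)-tightened variant of Eq. (18), the following Python sketch evaluates both for hypothetical flow parameters (the function names are ours):

```python
def delay_bound(params, thetas, L_max, c):
    """Worst-case delay bound of proposition 3:
    (sum_i (beta_i - alpha_i * theta_i) + L_max) / c."""
    s = sum(b - a * th for (b, a), th in zip(params, thetas))
    return (s + L_max) / c

def tightened_bound(params, thetas, L_max, c):
    """Bound after decreasing all previous-node delivery time tags
    by theta = min_i theta_i, as in Eq. (18)."""
    theta = min(thetas)
    s = sum(b - a * (th - theta) for (b, a), th in zip(params, thetas))
    return (s + L_max) / c - theta

flows = [(1000, 500), (400, 250)]  # hypothetical (beta_i, alpha_i) pairs
thetas = [0.05, 0.1]
print(round(delay_bound(flows, thetas, L_max=500, c=10000), 5))      # 0.185
print(round(tightened_bound(flows, thetas, L_max=500, c=10000), 5))  # 0.13875
```

Decreasing the tags by \(\theta\) shaves \(\theta\left(1 - \sum_i \alpha_i / c\right)\) off the bound, which is where the improvement comes from whenever the aggregate rate is below the link capacity.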
\[
\lambda' = \ldots \quad (27)
\]
When the arrival buffer at a node is full, an arriving chunk is rejected. Let \(Y_n\) be the number of chunks left in the queue when the last chunk in the \(n\)th batch departs, and let \(G_X(z)\) be the generating function of the probability mass function of a discrete random variable \(X\). Let \(U_n\) denote the number of chunks that arrive during the service of (all the chunks in) the \(n\)th batch. Let \(\pi_n(k) = P(Y_n = k)\) for \(n \ge 1\), \(k \ge 0\), so that \(G_{Y_n}(z) = \sum_{k=0}^{\infty} \pi_n(k) z^k\). At equilibrium, assuming this exists, let \(\pi_n(\cdot) \to \pi(\cdot)\) and \(G_{Y_n}(\cdot) \to G_Y(\cdot)\) as \(n \to \infty\). The random variable \(B\) denotes a generic batch size random variable \(B_n\), and we use \(V\) similarly to denote a generic instance of \(V_n\).

Let the sojourn time, or waiting time, in the queue of the last chunk in a batch – i.e., the sum of the time it spends waiting to start service and its service time – be \(W\). The Laplace-Stieltjes transform of the response time distribution in such an M/GI/1/∞ queue with batch arrivals can then be shown to be given by
\[
\left[1 - G_B(H)\right] W^*(\theta) = (1 - \rho_1')\left[G_B(S^*(\theta)) - G_B(H)\right] \quad (29)
\]
where
\[
z = G_B^{-1}\!\left(1 - \frac{\theta}{\lambda'}\right) \quad (30)
\]

With buffer size \(A_n\), service rate \(\mu_n\) and load \(\rho_n\) at relay node \(n\), the expected response times at the two relay nodes are
\[
W_1 = \frac{1 - (A_1 + 1)\rho_1^{A_1} + A_1 \rho_1^{A_1 + 1}}{\mu_1 (1 - \rho_1)\left(1 - \rho_1^{A_1 + 1}\right)},
\qquad
W_2 = \frac{1 - (A_2 + 1)\rho_2^{A_2} + A_2 \rho_2^{A_2 + 1}}{\mu_2 (1 - \rho_2)\left(1 - \rho_2^{A_2 + 1}\right)}
\quad (35)
\]
The total expected response time \(W\), i.e., the combined time spent in the two relay nodes on a successful transmission attempt, is the sum of the expected response times at each node, i.e.,
\[
W = W_1 + W_2 \quad (36)
\]
Thus, the mean transmission time (MTT) for a chunk that is successful on its first attempt is
\[
MTT = W + T \quad (37)
\]
Eq. (37) achieves the goal of our model. Chunks that fail due to a full buffer retry a number of times given by the retransmission probability \(p_r\). Because each retry is made independently of previous attempts, this number of attempts is a geometric random variable with parameter \(p_r\). The overhead incurred by a failed transmission, i.e., the elapsed time between the start of an attempt that subsequently fails and the start of the next attempt, consists of the time-out delay of \(k \cdot MTT\) for chunks lost due to a full buffer (\(k\) mean successful transmission times). We express this overhead, \(L\), as follows:
\[
L = k \cdot MTT \cdot p_f \quad (38)
\]
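The response-time arithmetic above can be sketched numerically in Python. The finite-buffer response-time function below follows the shape of Eq. (35); the arrival and service rates, buffer sizes, per-hop transmission time \(T\), time-out factor \(k\) and loss probability \(p_f\) are all hypothetical.

```python
def finite_buffer_response_time(lam, mu, A):
    """Expected response time in the form of Eq. (35) for a relay
    node with service rate mu, offered load rho = lam/mu (rho != 1)
    and room for A chunks."""
    rho = lam / mu
    num = 1 - (A + 1) * rho**A + A * rho**(A + 1)
    den = mu * (1 - rho) * (1 - rho**(A + 1))
    return num / den

def retry_overhead(k, mtt, p_f):
    """Eq. (38): expected overhead of a failed attempt, a time-out of
    k mean transmission times weighted by the loss probability."""
    return k * mtt * p_f

# Hypothetical parameters for the two relay nodes.
W1 = finite_buffer_response_time(lam=80.0, mu=100.0, A=10)
W2 = finite_buffer_response_time(lam=60.0, mu=100.0, A=10)
W = W1 + W2              # Eq. (36)
mtt = W + 0.05           # Eq. (37), with T = 0.05 s
print(round(mtt, 4))     # total per-chunk time on first success
print(round(retry_overhead(k=3, mtt=mtt, p_f=0.01), 4))
```

Because the buffers are finite, each \(W_n\) stays slightly below the corresponding infinite-buffer value \(1/(\mu_n - \lambda_n)\), at the cost of the loss-and-retry overhead captured by Eq. (38).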