Bell Labs, Alcatel-Lucent; Orange Labs, France Telecom; University of Lugano; Tsinghua University.
first.last@{alcatel-lucent.com, orange.com, usi.ch}, wangsen@netarch.tsinghua.edu.cn
I. INTRODUCTION

II. PROBLEM STATEMENT

III. PROBLEM FORMALIZATION
pulled down over link (j, i) according to the ICN symmetric routing principle. Links have finite capacities C_ij > 0, ∀(i, j) ∈ A.
A content retrieval (also denoted as flow) n ∈ N is univocally associated to a user node in U ⊆ V and to one or multiple repositories. The variable x^n_ij (x̄^n_ji) denotes the Data (Interest) rate of flow n over link (i, j) ((j, i)), in packet/s. Given the ICN link flow balance between Interest packets and Data packets on the reverse link, x^n_ij and x̄^n_ji coincide in the fluid representation of flow rates, which can be interpreted as a long-term average of the instantaneous rates. Thus, we omit the indication of the Interest rate in the formulation of the optimization problem.
For each node i ∈ V, we define Γ^+(i) and Γ^-(i) as the sets of egress and ingress nodes, respectively. Each network node i ∈ V has a finite-size cache to store in-transit Data. In the current deterministic fluid representation of the system dynamics, h^n(i) ∈ {0, 1} is the binary function equal to 1 when node i may serve Data of flow n and 0 otherwise. As a result, we can denote by S(n) = {i : h^n(i) = 1} the set of available sources for flow n. Notation is summarized in Tab. I.
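As an illustration, the sets and parameters above map naturally onto simple data structures. The sketch below is illustrative Python (names such as ICNModel are invented, not from the paper) that builds the capacities C_ij, the neighbor sets Γ^+(i) and Γ^-(i), and the source sets S(n):

```python
# Illustrative sketch of the model notation: a directed graph with link
# capacities C_ij, a binary cache indicator h^n(i), and the derived sets
# Γ+(i), Γ-(i) and S(n). Names are hypothetical.
from collections import defaultdict

class ICNModel:
    def __init__(self):
        self.capacity = {}                 # (i, j) -> C_ij > 0
        self.out_nbrs = defaultdict(set)   # i -> Γ+(i), egress nodes
        self.in_nbrs = defaultdict(set)    # i -> Γ-(i), ingress nodes
        self.h = defaultdict(set)          # flow n -> {i : h^n(i) = 1}

    def add_link(self, i, j, c_ij):
        assert c_ij > 0
        self.capacity[(i, j)] = c_ij
        self.out_nbrs[i].add(j)
        self.in_nbrs[j].add(i)

    def sources(self, n):
        """S(n) = {i : h^n(i) = 1}, the available sources for flow n."""
        return set(self.h[n])

m = ICNModel()
m.add_link(0, 1, 10.0)
m.add_link(1, 2, 5.0)
m.h["n1"] = {2}   # repository for flow n1 at node 2
```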
Observation 3.1: Remark that, from a macroscopic viewpoint, the system evolves driven by a stochastic user demand affecting link bandwidth and node storage (as analyzed in [7]), while here we focus on a microscopic flow-level description of network resource sharing, more suitable for protocol design and optimization. From the macroscopic point of view, the h^n(i) become the cache hit probabilities.
2) Optimization problem. The global objective of the optimization problem is a joint user-performance maximization and network-cost minimization. We consider a concave user utility function U^n and a convex link cost function C_ij, and we compute the global objective as the difference between the total user utility and the total network cost (1). The problem is formulated as a multi-commodity flow problem with linear constraints and concave objective:
max_y   Σ_{n∈N} U^n(y^n) − Σ_{(i,j)∈A} C_ij(λ_ij)                    (1)

s.t.    Σ_{n∈N} x^n_ij = λ_ij,                      ∀(i, j) ∈ A      (2)

        Σ_{ℓ∈Γ^-(i)} x^n_ℓi = y^n_i,                ∀i, n            (3)

        Σ_{j∈Γ^+(i)} x^n_ij = (1 − h^n(i)) y^n_i,   ∀i ∈ V \ U, ∀n   (4)

        λ_ij ≤ C_ij,                                ∀(i, j) ∈ A      (5)

        x^n_ij ≥ 0,                                 ∀i, j, n         (6)
With λ_ij we denote the link load, i.e. the total flow rate traversing link (i, j) (2), while y^n_i, denoting flow n's rate arriving at node i, is defined as the sum of the ingress rates x^n_ℓi, ℓ ∈ Γ^-(i) (3). y^n_i is also equal to the sum of the egress rates x^n_ij, j ∈ Γ^+(i), unless h^n(i) = 1 (in which case Interests are locally satisfied and not forwarded upstream). This holds for all nodes i except the user node of flow n, for which x^n_ij = 0, ∀j. Data rates are subject to link capacity constraints (5) and must be non-negative (6).
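To make constraints (2), (5) and (6) concrete, the following sketch (illustrative names, not the paper's code) computes the link loads λ_ij from per-flow rates and checks capacity feasibility on a toy assignment:

```python
# Illustrative feasibility check for constraints (2), (5), (6): link loads
# λ_ij are summed from per-flow rates x^n_ij and compared to capacities C_ij.
def link_loads(x):
    """λ_ij = Σ_n x^n_ij, from x: {(n, i, j): rate}  -- constraint (2)."""
    lam = {}
    for (n, i, j), rate in x.items():
        lam[(i, j)] = lam.get((i, j), 0.0) + rate
    return lam

def feasible(x, capacity):
    """Capacity constraint (5) and non-negativity (6) on an assignment."""
    if any(rate < 0 for rate in x.values()):
        return False
    lam = link_loads(x)
    return all(lam[e] <= capacity.get(e, 0.0) for e in lam)

cap = {(0, 1): 10.0, (1, 2): 5.0}
x = {("n1", 0, 1): 4.0, ("n2", 0, 1): 5.0, ("n1", 1, 2): 4.0}
print(feasible(x, cap))   # both links stay within capacity
```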
3) Problem decomposition. The problem can be decomposed according to the two objectives: utility maximization of end users' throughput (at the receiver) and network cost minimization (at in-path nodes). Indeed, a primal decomposition of the problem yields the utility maximization subproblem:
max   Σ_{n∈N} U^n(y^n)                                (7)

s.t.  Σ_{n:(i,j)∈L(n)} y^n ≤ C_ij,   ∀(i, j) ∈ A      (8)

      y^n ≥ 0,                       ∀n ∈ N           (9)
where we denote by L(n) = {(i, j) ∈ A : x^n_ij > 0} the set of all links used by flow n. The utility maximization in (7) is Kelly's formulation of distributed congestion control ([15], [20]). We will show in Sec. IV how to derive an optimal congestion controller for ICN, based on Interest transmission control at the receiver.
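As a toy illustration of the dynamics behind (7)-(9), the sketch below simulates Kelly-style primal rate control for two log-utility flows sharing one link: each rate follows dy/dt = k(U'(y) − price), with a penalty price that grows when the link is overloaded. The penalty form and all constants are illustrative assumptions, not taken from the paper.

```python
# Kelly-style primal congestion control sketch (illustrative parameters):
# two flows with U(y) = log(y) share one link of capacity C.
def kelly_primal(C=10.0, k=0.5, beta=1.0, dt=0.01, steps=20000):
    y = [1.0, 1.0]                           # initial rates of the two flows
    for _ in range(steps):
        price = beta * max(0.0, sum(y) - C)  # congestion penalty on the link
        # dy/dt = k * (U'(y) - price), with U'(y) = 1/y for log utility
        y = [max(1e-6, r + dt * k * (1.0 / r - price)) for r in y]
    return y

y1, y2 = kelly_primal()   # converges near a fair share of the capacity
```

The two flows end up with equal rates whose sum is close to C, the penalty-relaxed analogue of the capacity constraint (8).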
The master problem is the overall network cost minimization:

min   Σ_{(i,j)∈A} C_ij( Σ_{n∈N} x^n_ij )                      (10)

s.t.  Σ_{ℓ∈Γ^-(i)} x^n_ℓi = y^n_i,                ∀i, n       (11)

      Σ_{j∈Γ^+(i)} x^n_ij = (1 − h^n(i)) y^n_i,   ∀i ∉ U, ∀n  (12)

      x^n_ij ≥ 0,                    ∀(i, j) ∈ A, ∀n ∈ N      (13)
Network cost minimization is a multi-commodity flow problem whose flows may have multiple sources (caches or Data repositories) and one unique receiver i ∈ U.
Parameter                        | Definition
V, A, U                          | Vertices, arcs, users sets
N                                | Flows (content retrievals) set
L(n) = {(i, j) ∈ A : x^n_ij > 0} | Set of links used by flow n
C_ij                             | Capacity of link (i, j) ∈ A
h^n(i) ∈ {0, 1}                  | Binary cache function (flow n, node i)
S(n) = {i : h^n(i) = 1}          | Set of sources for flow n
x^n_ij (x̄^n_ji)                  | Link Data (Interest) rate
y^n (ȳ^n), y^n_i                 | Total Data (Interest) rate
λ_ij                             | Link load over (i, j) ∈ A
Γ^+(i) = {j ∈ V : (i, j) ∈ A}    | Set of egress nodes for i ∈ V
Γ^-(i) = {ℓ ∈ V : (ℓ, i) ∈ A}    | Set of ingress nodes for i ∈ V
TABLE I.
4) Solution. To solve both problems via distributed algorithms: (i) we compute the respective Lagrangians L_U and L_C; (ii) we decompose L_U and L_C with respect to users and in-path nodes, respectively; (iii) we identify the local Lagrangians to be maximized at the users and minimized at the in-path nodes. Let us consider L_U and L_C separately.
L_U(y, λ) = Σ_n U^n(y^n) − Σ_{(i,j)} λ_ij ( Σ_{n:(i,j)∈L(n)} y^n − C_ij )
          = Σ_n U^n(y^n) − Σ_n Σ_{(i,j)∈L(n)} λ_ij y^n + Σ_{(i,j)} λ_ij C_ij
          = Σ_n ( U^n(y^n) − λ^n y^n ) + Σ_{(i,j)} λ_ij C_ij

where λ^n ≜ Σ_{(i,j)∈L(n)} λ_ij. Hence, every user maximizes the local Lagrangian L^n_U:

L^n_U = U^n(y^n) − λ^n y^n                             (14)
(14)
which requires to compute the Lagrange multipliers ij associated to the capacity constraints, whereas multipliers associated
to constraints (9) are always null as we assume fixed positive
y n > 0 n. The utility maximization is responsible for
adjusting the rates yin , which are, instead, assumed to be
constant in the network cost problem minimization. Let us then
compute LC and decompose it as a function of in-path nodes
which are coupled by the flow balance constraint (4). Hence,
L_C(x, μ) = Σ_{(i,j)} C_ij( Σ_{n∈N} x^n_ij ) − Σ_n Σ_i μ^n_i ( Σ_{ℓ∈Γ^-(i)} x^n_ℓi − Σ_{j∈Γ^+(i)} x^n_ij )
          = Σ_{(i,j)} C_ij(λ_ij) − Σ_n Σ_i Σ_{ℓ∈Γ^-(i)} μ^n_i x^n_ℓi + Σ_n Σ_i Σ_{j∈Γ^+(i)} μ^n_i x^n_ij
          = Σ_i Σ_{ℓ∈Γ^-(i)} [ C_ℓi(λ_ℓi) − Σ_n (μ^n_i − μ^n_ℓ) x^n_ℓi ]
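As a quick check of the user-side subproblem (14): with logarithmic utility (an assumption chosen here for illustration), U'(y) = 1/y and the maximizer of (14) is y = 1/λ^n in closed form:

```python
# Closed-form maximizer of the local Lagrangian (14), L^n_U = U(y) - λ^n y,
# under the illustrative assumption U(y) = log(y).
import math

def optimal_rate(path_price):
    """argmax_y log(y) - λ^n y  =  1 / λ^n  (set U'(y) = λ^n)."""
    return 1.0 / path_price

def local_lagrangian(y, path_price):
    return math.log(y) - path_price * y

lam_n = 0.25                    # λ^n = Σ_{(i,j) in L(n)} λ_ij, illustrative
y_star = optimal_rate(lam_n)    # 4.0
```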
l (i)
a priori knowledge of the sources and to the coupling with innetwork caching makes it inefficient to control Interest rate by
simply monitoring the total flow delay n (t) (also noticed in
[8]).
Therefore, we decompose τ^n(t) into the sum of route delays τ^n(r, t), where a route r ∈ R_n identifies a unique sequence of nodes from any source s ∈ S(n) to the receiver node of flow n: τ^n(t) = Σ_{r∈R_n} τ^n(r, t) φ(r, t), where φ(r, t) is the fraction of flow routed over route r at time t (as decided in a distributed fashion by the request forwarding algorithm). Also,

τ^n(r, t) = Σ_{(i,j)∈r} τ_ij(t)

d/dt y^n(t) = k^n(t) ( U'^n(y^n(t)) − τ^n(t) )              (17)
where τ_ij(t) is a function of link (i, j). Let us take for instance γ_ij(t) = 1/C_ij; then τ_ij(t) can be interpreted as (i, j)'s link delay, as its evolution (16) is determined by a fluid queue evolution equation with input rate Σ_{n:(i,j)∈L(n)} y^n(t) and service rate C_ij. Thus, τ^n(t) accounts for the total network delay experienced by flow n.
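A minimal numerical sketch of these fluid dynamics, under the stated choice γ_ij = 1/C_ij and a log utility (all constants here are illustrative): the link delay evolves as a fluid queue and the receiver rate follows (17); the expected equilibrium is y = C and τ = U'(C) = 1/C.

```python
# Fluid-dynamics sketch (illustrative constants): single flow on one link.
# Queue delay: dτ/dt = (y - C)/C, clamped at τ >= 0.
# Receiver control (17): dy/dt = k (U'(y) - τ), with U'(y) = 1/y.
def simulate(C=2.0, k=5.0, dt=0.001, steps=80000):
    y, tau = 0.5, 0.0
    for _ in range(steps):
        tau = max(0.0, tau + dt * (y - C) / C)   # fluid queue, never negative
        y = max(1e-6, y + dt * k * (1.0 / y - tau))
    return y, tau

y, tau = simulate()   # damped oscillation toward y = 2, tau = 0.5
```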
Observation 4.1: Link, flow and route delays. The request/reply nature of the ICN communication model suggests a natural way to measure τ^n(t) at the receiver via the Interest/Data response time, while the τ_ij(t) are not known and hard to measure at the receiver.
However, as mentioned in Sec. II-B, the accrued delay variability of ICN multipath communication, due to the lack of
d/dt W(t) = W(t)/R(t) − Σ_{r∈R} p_r(t) W_r(t)/R_r(t)           (21)
V.
(22)

for all ingress interfaces j ∈ Γ^-(i) such that x^n_ji > 0, and

∂C_ji(λ_ji)/∂x^n_ji ≤ ∂C_li(λ_li)/∂x^n_li,   ∀l ∈ Γ^-(i)       (23)
Eq. (23) states that the interfaces selected for Interest forwarding at node i are those minimizing the cost derivatives. This implies that there can be one or multiple interfaces with equal and minimum cost derivative. For every such interface, let us iterate the reasoning and apply (22) at all subsequent hops j, j+1, j+2, ..., r, up to one of the sources s ∈ S(n). At the last hop, minimization of the cost derivative corresponds directly to the minimization of μ^n_r, while at the previous hops the variable to be minimized is the difference μ^n_k − μ^n_{k+1} for k = j+1, ..., r, and μ^n_i − μ^n_j at the first hop. Clearly, starting from the source, once μ^n_r is minimized and its value denoted by μ^{n,*}_r, the minimization of (μ^n_{r−1} − μ^{n,*}_r) is equivalent to the minimization of μ^n_{r−1}, and so on until μ^n_i.
Thus, we can conclude that a family of optimal distributed algorithms for request forwarding can be derived by simply minimizing μ^n_i at each node i, ∀n ∈ N.
To compute μ^n_i, we apply a gradient algorithm on the Lagrangian L_C(x, μ) and obtain (see [5]):

dμ^n_i/dt = ε^n_i ( Σ_{j∈Γ^+(i)} x^n_ij − Σ_{l∈Γ^-(i)} x^n_li )       (24)
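The update (24) is straightforward to state in code; the step size and the rate values below are illustrative assumptions:

```python
# Gradient update (24) on the dual variable μ^n_i: the node integrates the
# mismatch between egress and ingress rates of flow n. Step size eps and
# the rates are illustrative.
def update_mu(mu_i, egress_rates, ingress_rates, eps=0.1, dt=0.01):
    """dμ^n_i/dt = ε^n_i (Σ_{j∈Γ+(i)} x^n_ij - Σ_{l∈Γ-(i)} x^n_li)."""
    return mu_i + dt * eps * (sum(egress_rates) - sum(ingress_rates))

mu = 1.0
mu = update_mu(mu, egress_rates=[3.0, 1.0], ingress_rates=[4.0])
# balanced flow (egress == ingress) leaves μ unchanged
```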
response time at output interface j. Hence, at each output interface, this requires solving the following problem: min Σ_j x_j VRTT_j(x_j), subject to Σ_j x_j = y and x_j ≥ 0, where j ranges over the output interfaces of node i. Assuming that: 1) VRTT_j(x_j) is monotonically increasing with x_j, 2) VRTT_j(0) = VRTT_min, and 3) VRTT_j(x_j) is differentiable, the optimal allocation can easily be proved by imposing the KKT conditions, which give: x_j = φ'^{-1}_j(δ), with δ the unique solution of the equation Σ_j φ'^{-1}_j(δ) = y.
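Under the stated assumptions, the KKT allocation can be computed by a one-dimensional search for the multiplier δ. The sketch below assumes an illustrative linear model VRTT_j(x) = VRTT_min + a_j x (not from the paper), for which φ_j(x) = x VRTT_j(x) gives φ'_j(x) = VRTT_min + 2 a_j x, which is easy to invert:

```python
# KKT allocation sketch under an illustrative linear VRTT model:
# x_j = φ'^{-1}_j(δ) = max(0, (δ - VRTT_min) / (2 a_j)), with δ found by
# bisection so that Σ_j x_j = y.
def allocate(y, a, vrtt_min=0.1):
    def x_of(delta):
        return [max(0.0, (delta - vrtt_min) / (2.0 * aj)) for aj in a]
    lo, hi = vrtt_min, vrtt_min + 2.0 * max(a) * y   # bracket the multiplier δ
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if sum(x_of(mid)) < y:
            lo = mid
        else:
            hi = mid
    return x_of((lo + hi) / 2.0)

x = allocate(y=3.0, a=[0.1, 0.2])   # faster interface (smaller a_j) gets more
```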
Observation 5.2: Observe that this algorithm has, in practice, two major drawbacks: (i) it requires explicit knowledge of the VRTT_j; (ii) it exploits only a subset of the paths, depending on the value of y. The latter results from the fact that φ'_j(x_j) = VRTT_min + x_j VRTT'_j(x_j), as VRTT_j(0) = VRTT_min. Thus, ∃ δ̄ > 0 : φ'^{-1}_j(δ) = 0, ∀δ ≤ δ̄. This means that slower interfaces could never be probed for some y.
As we observed in Sec.II-B, two important challenges ICN
needs to cope with are the unknown sources and the accrued
delay variability due to in-network caching.
To guarantee optimal interface selection over time through continuous monitoring of the interfaces, one needs a slightly different objective at each output interface of node i, which minimizes the number of pending Interests associated to the most loaded output interface, i.e. min max_j λ_j(x_j), subject to Σ_j x_j = y, x_j ≥ 0. The objective can be attained very easily, by observing that the optimal allocations are such that λ_j = λ̄ for all interfaces j ∈ Γ^+(i). Consider now an interval of time T, such that every interface j has been probed and selected, i.e. x_j(t) = y(t), for a fraction of time T_j over T = Σ_j T_j, and zero elsewhere. Thus,
λ̄ = lim_{T→∞} (1/T) ∫_0^T λ_j(t) dt = lim_{T→∞} ( ∫_0^T x_j(t) VRTT_j(x_j(t)) dt ) / Σ_l T_l = ( T_j / Σ_l T_l ) y VRTT_j(y) = ω̄_j      (25)
where ω̄_j is the average measured number of pending Interests at interface j, p_j = η ω̄_j represents the probability to select interface j, and 1/η = Σ_j ω̄_j is a normalization constant. This algorithm does not require explicit knowledge of the VRTT_j and probes slow interfaces as well, with rate x_j = ω̄_j / VRTT_j. The algorithm only requires locally measuring the average number of pending Interests per used interface, which is information available in CCN by design.
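The resulting measurement-driven selection rule can be sketched as follows (interface names are hypothetical): each interface is sampled with probability p_j = ω̄_j / Σ_l ω̄_l, so every interface, including slow ones, keeps being probed.

```python
# Sketch of pending-Interest-driven interface selection: sample an output
# interface with probability proportional to its measured average number of
# pending Interests ω̄_j. Interface names are hypothetical.
import random

def select_interface(pending_avg):
    """pending_avg: {interface: ω̄_j}. Samples interface j with probability
    p_j = ω̄_j / Σ_l ω̄_l."""
    total = sum(pending_avg.values())
    r = random.uniform(0.0, total)
    acc = 0.0
    for face, w in pending_avg.items():
        acc += w
        if r <= acc:
            return face
    return face   # numerical edge case: fall back to the last interface

face = select_interface({"eth0": 6.0, "eth1": 2.0})
```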
A. Request forwarding algorithm
Request forwarding decisions are based on the selection of longest-prefix-matching interfaces in the FIB. Indeed, FIB entries specify name prefixes rather than full object names, and it clearly appears unfeasible to maintain per-object-name information. We apply the optimal interface selection algorithm per output interface and per prefix rather than per object. Notice that applying the same transmission strategy to different objects with the same name prefix preserves reasonably limited FIB
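The per-prefix lookup can be sketched as follows (FIB contents and the plain '/'-separated names are illustrative):

```python
# Longest-prefix-match sketch over a FIB mapping name prefixes to candidate
# interfaces; interface-selection state is kept per prefix, not per object.
def longest_prefix_match(fib, name):
    """fib: {prefix: interfaces}; name components are '/'-separated."""
    parts = name.strip("/").split("/")
    for k in range(len(parts), 0, -1):        # try the longest prefix first
        prefix = "/" + "/".join(parts[:k])
        if prefix in fib:
            return prefix, fib[prefix]
    return None, []

fib = {"/warner.com": ["eth0"], "/warner.com/videos": ["eth0", "eth1"]}
prefix, faces = longest_prefix_match(fib, "/warner.com/videos/movie1/seg3")
# → matches "/warner.com/videos", so both faces are candidates
```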
VI. EXPERIMENTAL EVALUATION
Link  | Case I x_ij [Mbps] Experim / Optim | Case II x_ij [Mbps] Experim / Optim
(0,4) | 4.74 / 5    | 1.79 / 2
(1,4) | 9.18 / 10   | 18.35 / 20
(2,5) | 2.44 / 2.5  | 2.53 / 2.5
(3,5) | 2.40 / 2.5  | 2.46 / 2.5
(4,6) | 13.91 / 15  | 20.12 / 22
(5,6) | 4.82 / 5    | 4.85 / 5

Fig. 1. / Fig. 2. [figures: experimental vs. optimal rates at nodes 4, 5, 6 over time, 0-30 s]
Link   | Optimal (0 cache) | Experimental (0 cache) | Experimental (6.4GB) | Experimental (14.4GB)
(4,2)  | 33.33 | 36.13 | 37.76 | 38.11
(5,2)  | 66.66 | 63.21 | 59.58 | 57.65
(6,2)  | 50    | 49.25 | 49.63 | 49.53
(7,4)  | 50    | 51.93 | 55.25 | 56.99
(8,4)  | 50    | 47.61 | 47.79 | 47.74
(9,5)  | 50    | 50.51 | 56.30 | 57.56
(10,5) | 66.6  | 65.61 | 71.32 | 73.87

Fig. 4. Scenario 2 topology.
Fig. 5. / Fig. 6. [figures: results vs. total cache size, 0-16 GB]
16
300
250
ccnx:/amazon.com
ccnx:/google.com
ccnx:/warner.com
30
Singlepath
Multipath
350
200
150
100
25
20
15
10
50
11
13
14
15
17
18
20
21
Object-name
Prefix
0
Node ID
Fig. 7.
Scenario 3 topology.
Fig. 10. Scenario 4: Topology.
Fig. 11. [figure: Interest rates from node 16 to nodes 12, 15 and 17 over time, 0-1600 s]
Fig. 12. [figure: average content delivery time vs. object rank, before/during/after failure]
to compensate for the link failure over (16, 12). Once (16, 12) becomes available again, the request forwarding protocol sends most of the Interests across that link to probe the response time, until the number of pending Interests on (16, 12) reaches the same level attained by the other two faces and the split ratios stabilize to the optimal values.
In Fig. 12, we report the average content delivery time estimated at the receivers at the different access nodes in the three test phases (requests for different nodes are plotted one on top of the other). User performance is determined by the maximum throughput of the topology (an aggregate of 150 Mbps at the repositories), which is fairly shared. However, users at node #16 are forced to use longer paths during the failure and experience more network congestion.
VII. RELATED WORK