Cache Networks: An Information-Theoretic View

Mohammad Ali Maddah-Ali and Urs Niesen

Abstract—Caching is a popular technique that duplicates content in memories distributed across the network in order to enhance throughput and latency in a variety of applications. Cache systems were the subject of extensive study, mostly in the computer science community, in the 80s and 90s. However, the fundamental results derived during that period were mainly developed for systems with just a single cache and only heuristically extended to networks of caches. In this newsletter article, we argue that information theory can play a major role in establishing a fundamental understanding of such cache networks. In particular, we show that cache networks can be cast in an information-theoretic framework. Using this framework, we demonstrate that the aforementioned heuristics, which utilize caches to deliver part of the content locally, can be highly suboptimal when applied to cache networks. Instead, we identify cache memories as limited spaces in which to place side information, chosen from among a fixed set of pre-recorded content (e.g., movies), to facilitate future communication. This new understanding of the role of caching creates various coding and signaling opportunities and can offer gains that scale with the size of the network.

I. INTRODUCTION

Caching is an essential technique to improve throughput and latency in a vast variety of applications such as virtual memory hierarchies in CPU design, web caching for content delivery networks (CDNs), and query caching in domain name systems. The core idea of caching is to use memories distributed across the network to duplicate data. This stored data can then be used to facilitate the delivery of future requests, thereby reducing network congestion and delivery delay. Companies like Akamai, Facebook, Netflix, Google, etc. are heavily investing in their cache networks to increase the performance of their systems.
There is a rich and beautiful theory, developed mostly in the computer science community during the 80s and 90s, for systems with a single cache. However, when it comes to networks of caches, the existing theory falls short, and engineers instead rely on heuristics and the intuition gained from the analysis of single-cache systems. Quoting Van Jacobson, one of the key contributors to TCP/IP and expert on content distribution:

    ISPs are busily setting up caches and CDNs to scalably distribute video and audio. Caching is a necessary part of the solution, but there is no part of today's networking, from Information, Queuing, or Traffic Theory down to the Internet protocol specs, that tells us how to engineer and deploy it. [1, p. 302]
We argue that information theory can in fact provide the theoretical underpinnings for the deployment and operation of cache networks. Indeed, we show that the caching problem can be formulated as a network information theory problem. Applying information-theoretic tools for the analysis of cache networks reveals that the conventional way to operate these networks can be substantially suboptimal.

M. A. Maddah-Ali is with Bell Labs, Alcatel-Lucent. U. Niesen is with Qualcomm's New Jersey Research Center. Emails: mohammadali.maddahali@alcatel-lucent.com, urs.niesen@ieee.org.
Cache networks have two distinctive features that differentiate them from other problems in multi-user information theory:
• Budget for side information: In information theory, we often consider networks with side information, with the objective to characterize system performance in the presence of given side information. In contrast, in cache networks, the side information itself is subject to design and optimization. Each cache has a fixed memory limit, and the system designer is allowed to choose the side information in the cache subject to the memory constraint.
• Pre-recorded content: In network information theory, we usually assume that each source locally generates a message (e.g., voice) at transmission time for a particular user/destination. However, over the last decade or so, the bulk of traffic has shifted to content (e.g., movies), which is typically recorded centrally well ahead of transmission time, and which is not generated for a particular user/destination. It is this generation of messages ahead of transmission time that allows their duplication across the network.
In the remainder of this newsletter article, we discuss
various opportunities and challenges in the area of cache
networks with emphasis on the role of information theory
in offering a fundamental view on this problem. We start in
Section II with a canonical cache network, which provides
an information-theoretic framework for the analysis of such
systems. We then review an approximately optimal solution
for this problem and compare it to conventional approaches.
We proceed with recent results on cache networks in a variety
of scenarios, comparing offline versus online caching, delay-tolerant versus delay-sensitive content (both in Section III), single-layer versus hierarchical caching, server-oriented versus
device-to-device settings (both in Section IV), among others.
Throughout, we point out open problems motivated by real-life
applications of caching.
II. CANONICAL CACHE NETWORK

We consider the following canonical cache network introduced in [2]. A server is connected through a shared bottleneck link to K users as shown in Fig. 1. The server has a database of N files W_1, ..., W_N, each of size F bits. Each user k has an isolated private cache memory of size MF bits for some real number M ∈ [0, N]. In this article, we assume N ≥ K to simplify the exposition.

Fig. 1. Canonical cache network from [2]: A server containing N files of size F bits each is connected through a shared link to K users each with an isolated cache of size MF bits. The goal is to design the placement phase and the delivery phase such that the peak rate of the shared bottleneck link is minimized. In the figure, N = K = 3 and M = 1.

This setting can model a wireless network with an access point and several users, all sharing the common wireless
channel. It can also model a wireline network with several caches connected to a common server; here the shared link models a bottleneck along the path between the server and the users.

The system operates in two phases: a content placement phase and a content delivery phase. The placement phase occurs during a time of low network traffic, say in the early morning, so that network resources are abundant and cheap. The main constraint during this phase is the limited cache memory. We model this placement phase by giving each user access to the entire database W_1, ..., W_N of files. Each user is thus able to fill its own cache subject only to the memory constraint of MF bits. Critically, in the placement phase, the system is not aware of users' future requests, so that the content cached by the users cannot depend on them.

The delivery phase occurs after the placement phase during a time of high network traffic, say in the evening. Network resources are now scarce and expensive and become the main constraint. We model this delivery phase as follows. Each user k requests one of the files W_{d_k} in the database. The server is informed of these requests and responds by sending a signal of size RF bits over the shared link for some fixed real number R called the rate. This signal sent from the server has to be constructed such that each user can recover its requested file from the signal received over the shared link and the contents of its own cache.

We need to design both the content placed in the users' caches during the placement phase and the signal sent by the server during the delivery phase. The objective is to minimize the rate R subject to the constraint that every possible set of user demands can be satisfied. We again emphasize that, while the signal sent over the shared link during the delivery phase is a function of the users' requests, the cache content designed during the earlier placement phase cannot depend on those requests (since they are unknown at the time). In addition, since R is determined with respect to the worst possible user requests, the cache content cannot be tuned for a specific set of requests.

Example 1 (Uncoded Caching). As a baseline, let us review a conventional uncoded solution, where in the placement phase each user caches the same M/N fraction of each file. The motivation for this approach is that the system should be ready for any possible demand, therefore each user should give the same fraction of its memory to each file. Moreover, since there is no statistical difference in the user demands known during the placement phase, the content of the caches for different users should be the same.

In the delivery phase, the server simply transmits the remaining 1 − M/N fraction of any requested file over the shared link, and thus each user can recover its requested file. Since there are K requests to be delivered, the worst-case delivery rate is

    R_U(M) ≜ K (1 − M/N).    (1)

The function R_U(M) describes the trade-off between rate and memory for the baseline uncoded caching scheme. The factor K in (1) is the rate that we would achieve without access to any caches. The factor 1 − M/N arises because an M/N fraction of each file is locally cached at each user. We call this second factor in (1) the local caching gain.

We refer to this caching strategy as uncoded caching, since both the content placement and delivery are uncoded. From the above discussion, we see that the role of caching in this
uncoded scheme is to deliver part of the requested content locally.

The uncoded caching scheme in Example 1 is just one among a long list of conventional uncoded approaches, developed for different applications, scenarios, and objectives. This includes popular schemes such as least-recently used (LRU) and least-frequently used (LFU) (see, e.g., [3]). All these conventional approaches share three main principles:
• The role of caching is to deliver part of the content locally.
• Users with statistically identical demands have the same cache contents.
• For isolated private caches, each user can only derive a caching gain from its own cache.

As we will see next, these three main principles, which are sensible for single-cache systems, do not carry over to networks of caches. Indeed, we argue that the role of caching goes well beyond local delivery and that local delivery only achieves a small fraction of the gain that cache networks can offer. We explain the main idea with two toy examples from [2].

Example 2 (Coded Caching K = N = 3, M = 1). Consider a system with K = 3 users, each with a cache large enough to store one file, i.e., M = 1. Assume that the server has N = 3 files, A, B, and C. We split each file into three subfiles of equal size, i.e., A = (A_1, A_2, A_3), B = (B_1, B_2, B_3), and C = (C_1, C_2, C_3). In the placement phase, instead of placing the same content in all caches, we place different content pieces at the users' caches as shown in Fig. 2. Formally, the cache of user k is populated with (A_k, B_k, C_k). Since each subfile has 1/3 of the size of a whole file, the size of (A_k, B_k, C_k) is equal to one file, satisfying the memory constraint of M = 1.
Fig. 2. Coded caching strategy for K = 3 users, N = 3 files, and cache size M = 1. Each file is split into three subfiles of size 1/3, e.g., A = (A_1, A_2, A_3). The content placement is not a function of the demands. The delivery phase uses coding to satisfy two user demands with a single transmission.
For the delivery phase, let us consider a generic case in which user one requests file A, user two requests file B, and user three requests file C. Then the missing subfiles are A_2 and A_3 for user one, B_1 and B_3 for user two, and C_1 and C_2 for user three. In other words, similar to the uncoded approach, 1/3 of a user's requested file is available in that user's private cache and can therefore be delivered locally. The server could now transmit the remaining 6 subfiles, each of size 1/3, for a total rate of 2. This would be the same rate as for the uncoded scheme in Example 1. However, as we will see next, making use of the particular pattern of the content placement helps us achieve a better rate.

Note that user two has access to A_2, which user one needs, and user one has access to B_1, which user two needs. These two users would like to exchange this side information but cannot since their caches are isolated. Instead the server can exploit this situation by transmitting A_2 ⊕ B_1 over the shared link, where ⊕ denotes bitwise XOR. Since user one already has B_1 from its local cache, it can recover A_2 from A_2 ⊕ B_1. Similarly, since user two already has access to A_2, it can recover B_1 from A_2 ⊕ B_1. Thus, the signal A_2 ⊕ B_1 received over the shared link helps both users to effectively exchange the missing subfiles available in the cache of the other user. Similarly, the server transmits A_3 ⊕ C_1 over the shared link to deliver A_3 to user one and C_1 to user three. Finally, the server transmits B_3 ⊕ C_2 to deliver B_3 to user two and C_2 to user three as shown in Fig. 2. Since each server transmission is simultaneously useful for two users, the load of the shared link is reduced by a factor 2 compared to the uncoded approach. The resulting delivery rate is equal to 1.

Here we have focused on the demand tuple (A, B, C). It is straightforward to verify that the same rate is also achievable for all other 26 possible demand tuples.
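The decoding steps above are easy to verify mechanically. The following short Python sketch (our own illustration; the variable and function names are not from [2]) carries out the placement of Fig. 2 and the three XOR transmissions for the demand tuple (A, B, C), and checks that every user recovers its requested file:

import os

K = 3  # users
SUBFILE = 100  # bytes per subfile (each subfile is 1/3 of a file)
# Three files A, B, C, each split into three subfiles indexed 0, 1, 2.
files = {name: [os.urandom(SUBFILE) for _ in range(3)] for name in "ABC"}

def xor(x, y):
    return bytes(a ^ b for a, b in zip(x, y))

# Placement (Fig. 2): user k caches the k-th subfile of every file.
cache = {k: {name: files[name][k] for name in "ABC"} for k in range(K)}

# Delivery for the demand tuple (A, B, C).
demand = {0: "A", 1: "B", 2: "C"}
# Each transmission XORs the subfile user i misses (held by user j)
# with the subfile user j misses (held by user i).
transmissions = {
    (0, 1): xor(files["A"][1], files["B"][0]),  # A_2 xor B_1
    (0, 2): xor(files["A"][2], files["C"][0]),  # A_3 xor C_1
    (1, 2): xor(files["B"][2], files["C"][1]),  # B_3 xor C_2
}

for k in range(K):
    want = demand[k]
    recovered = {k: files[want][k]}  # 1/3 delivered locally from the cache
    for (i, j), signal in transmissions.items():
        if k == i:    # cancel the cached subfile of user j's request
            recovered[j] = xor(signal, cache[k][demand[j]])
        elif k == j:  # cancel the cached subfile of user i's request
            recovered[i] = xor(signal, cache[k][demand[i]])
    assert [recovered[idx] for idx in range(3)] == files[want]
print("all three users recovered their files; shared-link load = 1 file")

The check also confirms the rate: the three transmitted signals have total size one file, compared to two files for the uncoded scheme.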

The above example highlights that, in addition to local delivery of content, caching offers another benefit. The content placed into the caches creates multicasting opportunities through coding that can further reduce the rate over the shared link compared to the uncoded scheme of Example 1. Both schemes enjoy the gain of local delivery as 1/3 of the content is delivered locally, but the coded scheme enjoys an additional gain of a factor 2 due to coded multicasting.

How do these two caching gains, i.e., the gain of local delivery and the gain of coded multicasting, scale with the parameters of the problem? To get some insight, we increase

the size of the cache from M = 1 to M = 2 and see how these two gains change.
Fig. 3. Coded caching strategy for K = 3 users, N = 3 files, and cache size M = 2. Each file is split into three subfiles of size 1/3, e.g., A = (A_12, A_13, A_23). Here, the delivery phase uses coding to satisfy three user demands with a single transmission.
Example 3 (Coded Caching K = N = 3, M = 2). In the placement phase, we again split each file into three subfiles of equal size. However, it will be convenient to label these subfiles differently, namely A = (A_12, A_13, A_23), B = (B_12, B_13, B_23), and C = (C_12, C_13, C_23). User k caches those content pieces that have k in the index set as shown in Fig. 3.

For the delivery phase, let us again assume as an example that user one requests file A, user two requests file B, and user three requests file C (see again Fig. 3). In this case, each user can fetch 2/3 of its requested file from the local cache and misses the remaining 1/3 of the file. In particular, user one misses subfile A_23, which is available at both users two and three. User two misses subfile B_13, which is available at both users one and three. And user three misses subfile C_12, which is available at both users one and two. In other words, the three users would like to exchange the subfiles A_23, B_13, C_12, but are unable to do so because their caches are isolated. The server can remedy this situation by transmitting the signal A_23 ⊕ B_13 ⊕ C_12 over the shared link. Given its cache content, each user can then recover the missing subfile. Since the coded transmission is simultaneously useful for all three users, the coded caching approach reduces the load of the shared link by a factor of 3 compared to the uncoded scheme of Example 1, resulting in a rate of 1/3. All other 26 possible requests can be satisfied in a similar manner.

From the last example we see that, as we increase the size of the cache, both the local gain and the coded multicasting gain improve. For the general case, it is shown in [2] that for an arbitrary number N of files and K ≤ N users each with a cache of size M ∈ {0, N/K, 2N/K, ..., N}, coded caching achieves a rate of

    R_C(M) ≜ K (1 − M/N) · 1/(1 + KM/N).    (2)

For general 0 ≤ M ≤ N, the lower convex envelope of these points is achievable. The case K > N can be handled similarly, but the resulting expression is a bit more complicated (see [2]). The function R_C(M) describes the trade-off between rate and memory for the coded caching scheme.

We compare the three terms in the rate expression R_C(M) in (2) achieved by coded caching with the two terms in the rate expression R_U(M) in (1) achieved by uncoded caching.
• The first term K, representing the rate without caching, is the same in both rate expressions.
• The second term 1 − M/N, representing the local caching gain, is also the same in both rate expressions. Thus, both the coded and uncoded schemes enjoy the gain from having a fraction M/N of each file being locally available.
• On top of this, the coded scheme alone enjoys a second gain that is absent in the uncoded scheme. This gain is quantified by the extra factor 1/(1 + KM/N), which captures the gain resulting from creating and exploiting coded multicasting opportunities. Perhaps surprisingly, we see that this gain is a function of the cumulative memory size, i.e., KM, even though the caches are isolated. We refer to this gain as the global caching gain. To attain this gain, we follow a particular pattern of content placement. In the delivery phase, this pattern allows the creation of coded packets each useful for 1 + KM/N users. This coded multicasting opportunity is available simultaneously for every one of the N^K possible sets of user demands, i.e., it provides a simultaneous coded multicasting opportunity.

We next compare the two caching gains in more detail.
• The local caching gain 1 − M/N is significant if the local cache size M is comparable to the size of the entire content N.
• The global caching gain 1/(1 + KM/N) is significant if the cumulative cache size KM is comparable to the size of the entire content N. As a result, the global caching gain can reduce the load of the shared link in the order of the number of caches K in the system.

Thus we see that, for networks of caches, the global gain can be much more important than the local gain.

The order difference between the local and global gains is illustrated in Fig. 4 for a system with K = 30 users. For example, if each user has space to cache half of the content, then uncoded caching reduces the load of the shared link from 30 files down to the equivalent of 15 files. On the other hand, coded caching reduces the load of the shared link to less than the equivalent of just a single file.

Fig. 4. Rate R required in the delivery phase as a function of normalized memory size M/N for K = 30 users from [2]. The figure compares the performance of the proposed coded caching scheme with that of conventional uncoded caching.

It can be shown that the rate R_C(M) of the coded caching scheme is within a constant factor of the information-theoretic optimum for all values of the problem parameters [2]. This
implies that the local and global gains identified above are fundamental, i.e., there are no other gains that scale with the system parameters.

Open Problem 1: Sharpening the approximation of the rate-memory trade-off is of both theoretical and practical interest. It is known that both the achievable scheme and the converse can be improved [2]. For achievability, the first question is if linear codes are sufficient for optimality or if nonlinear codes are needed. The second question is if, within the class of linear codes, larger field sizes can improve the performance. Finally, the content placement presented so far is uncoded and only the delivery is coded. It is known that coded content placement can improve system performance for small cache sizes [2]. Whether coded content placement can increase performance for larger cache sizes as well is unknown. There have also been some recent efforts to improve the converse part [4], [5].

Open Problem 2: The rate-memory trade-off is known exactly for a system with K = 2 users and N = 2 files [2]. Finding the exact trade-off for K = 3 and N = 3, the next bigger case, is of interest. There has been some recent progress in this direction, and for some values of cache size M the optimal trade-off is known [6]. However, for general M, the K = 3 and N = 3 case is still open.
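To close this section, the following short script (our own illustration, not code from [2]) evaluates the two trade-offs R_U(M) from (1) and R_C(M) from (2) and reproduces the K = 30 comparison shown in Fig. 4:

def rate_uncoded(K, N, M):
    # R_U(M) = K (1 - M/N), eq. (1): local caching gain only.
    return K * (1 - M / N)

def rate_coded(K, N, M):
    # R_C(M) = K (1 - M/N) / (1 + K M / N), eq. (2), valid at
    # M in {0, N/K, 2N/K, ..., N}; intermediate points are reached
    # by the lower convex envelope (memory sharing).
    return K * (1 - M / N) / (1 + K * M / N)

K = N = 30                # 30 users; N = 30 chosen so that N >= K
for M in (0, 3, 15, 30):  # multiples of N/K = 1, where (2) applies
    print(f"M/N = {M/N:.1f}: uncoded {rate_uncoded(K, N, M):6.2f}, "
          f"coded {rate_coded(K, N, M):6.2f}")

At M/N = 0.5 the script prints an uncoded rate of 15.00 and a coded rate of 0.94: the global gain cuts the shared-link load by roughly the factor 1 + KM/N = 16, matching the Fig. 4 numbers quoted above.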
III. OTHER SERVICE REQUIREMENTS

Practical applications and constraints may necessitate different service requirements than the ones in the canonical model. We next discuss several of those requirements.

A. Decentralized Caching

In the canonical cache network both the number and the identity of the users in the delivery phase are already known in the prior placement phase. This is clearly not a realistic
assumption, because we would likely be unaware during the placement phase, say in the early morning, which users will
be active in the following evening. In addition, users may join
or leave the network asynchronously, so that the number of
users in the delivery phase may also be time varying.
To deal with these issues, [7] develops a decentralized
caching scheme, in which the placement phase is independent
of the number and the identity of the users. In this scheme,
each cache stores a randomly selected subset of the bits. The rate
of this decentralized scheme is shown to be within a constant
factor of optimal universally for any number of users K.
This universality property makes it possible to address the problem of
asynchronous user requests. In addition, this decentralized
caching scheme is a key ingredient to handle online and
nonuniform demands discussed below.
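A minimal sketch of this placement rule (our own rendering under simplified assumptions, not the implementation of [7]): every user independently stores a random M/N fraction of the bits of each file, so the placement depends on neither the number nor the identity of the users.

import random

def decentralized_placement(num_files, file_bits, M, N, seed):
    # Keep a uniformly random M/N fraction of each file's bit positions,
    # drawn independently for every user (here: via the seed).
    rng = random.Random(seed)
    kept = int(file_bits * M / N)
    return [set(rng.sample(range(file_bits), kept)) for _ in range(num_files)]

# Users may join or leave at will; each one runs the same rule independently.
FILE_BITS, M, N = 1200, 1, 3
caches = [decentralized_placement(3, FILE_BITS, M, N, seed=u) for u in range(4)]

# Any pair of caches overlaps in roughly an (M/N)^2 fraction of each file,
# which is what creates coded multicasting opportunities during delivery.
overlap = len(caches[0][0] & caches[1][0]) / FILE_BITS
print(f"pairwise overlap on file 0: {overlap:.3f} (about {(M/N)**2:.3f} expected)")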
Open Problem 3: It is shown analytically in [7] that the
rate of the decentralized caching scheme is within a constant
factor of the centralized scheme. Numerically, this factor can
be evaluated to be 1.6. This shows that there is at most a small
price to be paid for a placement phase that is universal with
respect to the number of users K. It is of interest to know if
there is, in fact, a cost for this universality at all.
B. Nonuniform Demands

The canonical cache network focuses on the peak rate over the shared link, i.e., the rate for the worst user demands. In practice, the content files have different popularities, modeled as the probabilities of being requested by the users (see Fig. 5). Consequently, in some settings a more natural performance criterion is the expected rate over the shared link.

If the file popularity is uniform, the coded caching scheme from Section II also approximately minimizes the expected rate (as opposed to peak rate) [8]. For nonuniform popularity distributions, a different approach is needed. For such nonuniform distributions, [8] suggests splitting the content files into several groups and dedicating a fraction of the cache memory at each user to each group. The placement phase and delivery phase of the decentralized coded caching scheme are then applied within each group of files. Since the number and identity of users requesting files from each group is only known during the delivery phase but not during the placement phase, the universality of the decentralized caching scheme is critical for this file-grouping approach to work.

Subsequently, [9] proposed to use only two such file groups with all memory dedicated to the first group. [9] also showed that this approach is asymptotically within a constant factor from optimal for the important special case of Zipf popularity distributions in the limit as K, N → ∞. Finally, [10] showed that this approach with only two groups is in fact optimal to within a constant multiplicative-plus-additive gap for all popularity distributions and all finite values of K and N (assuming M ≥ 2). These two results thus show that, surprisingly, two groups are sufficient to adapt to the nonuniform nature of the popularity distribution.

This conclusion changes when, instead of a single user per cache, many users are attached to each cache. In this scenario, the grouping strategy with many groups is approximately optimal [11].
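The two-group strategy is simple to express in code. In the sketch below, the cutoff rule is a hypothetical placeholder of our own; [9] and [10] specify how the split should actually be chosen:

def two_group_split(popularities, num_cached):
    # Group 1: the most popular files, served with (decentralized) coded
    # caching using all of the cache memory; group 2: the tail, served
    # uncoded over the shared link. The size of group 1 used here is a
    # placeholder, not the near-optimal choice derived in [9], [10].
    order = sorted(range(len(popularities)),
                   key=lambda n: popularities[n], reverse=True)
    return order[:num_cached], order[num_cached:]

# Zipf-like popularities p_n proportional to 1/n (cf. Fig. 5), N = 1000 files.
N = 1000
weights = [1 / (n + 1) for n in range(N)]
total = sum(weights)
p = [w / total for w in weights]

K, M = 30, 20
group1, group2 = two_group_split(p, num_cached=K * M)  # hypothetical cutoff
served_uncoded = sum(p[n] for n in group2)
print(f"{len(group1)} files cached in coded form; "
      f"tail carries {served_uncoded:.1%} of requests uncoded")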
C. Online Caching
The canonical caching problem in Section II has two distinct
phases: placement and delivery. The cache is updated only
during the placement phase, but not during the delivery phase.
In other words, caching is performed offline, meaning ahead
of delivery.
Fig. 5. File popularities p_n for the Netflix movie catalog from [8].

Fig. 6. Number of ratings in the Netflix database for two movies (Troy and National Treasure) as a function of week in 2005 from [12]. Each movie was very popular upon release and then gradually reduced its popularity thereafter.


However, in many practical systems, the set of popular files is constantly changing. Some new popular files can be added to the content database, and some old files can become unpopular or be removed from the content database (see Fig. 6). In order to adapt to this dynamic content popularity, caching schemes that update their cache content online, i.e., during the delivery phase, are needed.
One popular cache update rule is least-recently used (better known by its abbreviation LRU), in which the least-recently requested file is evicted from the cache to make space for a newly requested file. While LRU is provably efficient for single-cache systems [13], it is shown in [12] that for cache networks it can be significantly suboptimal. Instead, [12] proposes a coded version of LRU, in which the caches are updated during the delivery phase so as to preserve the coding gain. For a probabilistic model of request dynamics, this update rule is shown to be approximately optimal in [12].
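For contrast with this coded update rule, here is a minimal sketch of the classical uncoded LRU rule analyzed in [13]. The coded LRU of [12] instead refreshes coded cache content during delivery so as to preserve the coding gain, which this single-cache baseline does not capture.

    from collections import OrderedDict

    class LRUCache:
        # Classical single-cache LRU eviction: on a miss, fetch the file and,
        # if the cache is full, evict the least-recently requested file.
        def __init__(self, capacity):
            self.capacity = capacity
            self.files = OrderedDict()  # file id -> content, oldest first

        def request(self, file_id, fetch):
            if file_id in self.files:            # hit: mark as most recent
                self.files.move_to_end(file_id)
                return self.files[file_id]
            content = fetch(file_id)             # miss: fetch over the link
            self.files[file_id] = content
            if len(self.files) > self.capacity:  # evict least-recently used
                self.files.popitem(last=False)
            return content

    cache = LRUCache(capacity=2)
    cache.request("A", fetch=str.encode)
    cache.request("B", fetch=str.encode)
    cache.request("A", fetch=str.encode)  # hit: A becomes most recent
    cache.request("C", fetch=str.encode)  # evicts B, the least-recently used
    assert list(cache.files) == ["A", "C"]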

Open Problem 4: The approximate optimality result in [12] holds only under a probabilistic model of request dynamics. An open question is to develop schemes that have stronger competitive optimality guarantees, valid for any individual sequence of user requests, as shown in [13] for the single-cache setting.
D. Delay-Limited Content


Fig. 7. Screenshot of video-streaming demo from [14]. The lower left window
shows the server process. The upper left window shows the decoding process
at a local cache. The three windows on the right show the reconstructed videos
being played in real time.

Video streaming is a popular application for caching. In this setting, each user sequentially requests small chunks of content. Each such chunk has to be delivered within a limited delay in order to enable continuous playback at the user. Thus, the server can only exploit coding opportunities among the requested chunks within a given time window. In such scenarios, the ultimate gain of coded caching, as seen in the analysis of the canonical caching problem, is achievable only if the tolerable delay is very large. [14] investigates the trade-off between the performance of coded caching and delay tolerance, and proposes a computationally efficient coded caching scheme that respects the delay constraint. This approach was demonstrated in a practical setting with a video-streaming prototype (see Fig. 7). The same approach also works for settings with small files.
Open Problem 5: Approximately characterizing the fundamental trade-off between rate and cache size under a delay constraint is of great interest.
Open Problem 6: The demo in [14] works for a small number of caches and users. Scaling the system up to, say, 100 caches with 100 users per cache is of interest. This will require addressing a significant number of systems issues, such as how to maintain state and how to handle disk reads at both the server and the caches.
IV. OTHER NETWORK AND CHANNEL MODELS
The canonical cache network has a noiseless broadcast
channel topology. Here, we discuss other network and channel
models.
A. Other Network Topologies
In practice, many caching systems consist of not only one, but multiple layers of caches connected to each other to form a tree. The objective is to minimize the transmission rates in the various layers. [15] models this scenario as the network shown in Fig. 8 and approximately characterizes the rate-memory trade-off.

Fig. 8. System setup for the hierarchical caching problem from [15]: a server with N files is connected over a link of rate R1 to K1 mirrors of size M1; the mirrors are in turn connected over links of rate R2 to K1 K2 caches of size M2, serving K1 K2 users in total.
A different generalization of the broadcast topology is to
allow each user to connect to several close-by caches. This
scenario, particularly relevant for mobile users with caches
located at femtocells, is analyzed in [16].
Scenarios with multiple servers have been considered
in [17]. Coded caching for the device-to-device communication setting, where users help each other to deliver content,
has been analyzed in [18].
Another topology arising in the context of distributed computation has been analyzed in [19]. This network topology
models a data center with multiple servers, each performing
part of a larger MapReduce job. Here the repetition in map
assignments is used to create coding opportunities and to
reduce the communication load of the shuffling phase.
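As a toy instance of this mechanism (our own minimal example in the spirit of [19], with illustrative names), consider three nodes that each map two of three input files. Every intermediate value a node is missing is then known at the other two nodes, so XOR multicasts can each serve two nodes at once:

    def xor(x, y):
        return bytes(a ^ b for a, b in zip(x, y))

    # m[k] is the intermediate value node k is missing; with the 2-out-of-3
    # map assignment, node 1 knows m[2] and m[3], node 2 knows m[3] and m[1],
    # and node 3 knows m[1] and m[2].
    m = {1: b"\x11\x22", 2: b"\x33\x44", 3: b"\x55\x66"}
    h = {k: (v[:1], v[1:]) for k, v in m.items()}  # split each value in half

    # Coded shuffle: each node multicasts one XOR of halves it knows.
    tx1 = xor(h[2][0], h[3][0])  # sent by node 1, serves nodes 2 and 3
    tx2 = xor(h[3][1], h[1][0])  # sent by node 2, serves nodes 3 and 1
    tx3 = xor(h[1][1], h[2][1])  # sent by node 3, serves nodes 1 and 2

    # Node 1 recovers its missing value m[1] using what it already knows.
    m1 = xor(tx2, h[3][1]) + xor(tx3, h[2][1])
    assert m1 == m[1]
    # Nodes 2 and 3 decode symmetrically. Total traffic: three half-values
    # instead of three full values uncoded, a 2x reduction matching the
    # repetition factor r = 2.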
Open Problem 7: An interesting open problem is to characterize the rate-memory trade-off for hierarchical cache networks with multiple levels within a constant factor independent of the number of levels.
Open Problem 8: Devising easily implementable and efficient algorithms for hierarchical cache networks with nonuniform file popularities and online cache updating is of practical
interest.
Open Problem 9: Developing caching strategies with some
optimality guarantee for general network topologies is a likely
difficult but interesting open problem.
B. Noisy Channels
The noisy version of the noiseless broadcast channel in the
canonical cache network is considered in [20]. Here, the noise
is modeled as an erasure broadcast channel. The setting is
particularly interesting for asymmetric erasure probabilities, where unequal cache sizes can be used to improve system performance. A similar setting, but with feedback, was analyzed in [21].
Fig. 9. K transmitters, each connected to a local cache, communicating to K receivers over a Gaussian interference channel, from [22].

To reduce the load and delay of the backhaul, cache-aided cellular base stations have received considerable attention [23]–[25]. This raises the question of whether caches at the base stations can also improve the communication rate over the wireless links. This question is investigated in [22] using the interference channel model depicted in Fig. 9. It is shown that there are three distinct gains from caching at the transmitters of an interference channel: a load-balancing gain, an interference-cancellation gain, and an interference-alignment gain. The load-balancing gain is achieved through specific file placement, creating a particular pattern of content overlap in the caches. This overlap also enables interference cancellation through transmitter cooperation. Finally, the cooperation among transmitters creates many virtual transmitters, which in turn increases the interference-alignment possibilities.
Open Problem 10: The rate-memory trade-off for cache-aided interference channels is still unknown. Even characterizing the degrees-of-freedom version of this trade-off is open.
Open Problem 11: Many multi-user channels could have a cache-aided version, where caches can be on the transmitter side, on the receiver side, or on both. Cataloguing what types of gains (similar to the coded multicasting, load balancing, interference cancellation, and alignment gains seen so far) caching can provide in these settings will be useful to guide the design and operation of noisy cache networks.
V. CONNECTION WITH NETWORK AND INDEX CODING
Having surveyed the coded caching problem for various
network topologies and service requirements, we now return to
the basic canonical cache network and explore its connection
to network and index coding.
The canonical caching problem is related to the network coding problem [26]. Indeed, the canonical cache network with K users and N files can be expressed as a single-source multiple-multicast problem with KN^K sinks and N multicast groups (see Fig. 10). Unlike the single-source single-multicast problem, the single-source multiple-multicast problem is a hard problem in general [27]. It is the special structure of this network coding problem, induced by the caching setting, that allows for the constant-factor approximation in [2].

Fig. 10. The K = 2-user, N = 2-file canonical cache network expressed as a single-source multiple-multicast network coding problem.
The canonical caching problem is also related to the index coding problem [28], [29]. Consider again the canonical cache network with K users and N files. Then, for fixed and uncoded cache content chosen in the placement phase and for fixed user demands, the delivery phase of the caching problem is exactly a K-user index coding problem. Since there are N^K possible user demands, the complete delivery phase consists of N^K parallel such index coding problems. Unfortunately, the general index coding problem is hard to solve even approximately [30]. The main difference in the canonical caching problem is that there we are additionally tasked with designing the side information (which need not be uncoded) subject to a memory constraint. In other words, instead of the fixed side information of index coding, we have a budget for side information. Moreover, we have to be able to handle all possible user demands. Interestingly, it is exactly this additional freedom to design the side information that renders the canonical caching problem more tractable.
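For concreteness, here is a self-contained sketch of the K = 2-user, N = 2-file network of Fig. 10 with cache size M = 1, following the example of [2]: with suitably designed uncoded side information, the delivery phase for one demand pair reduces to a single coded transmission.

    def xor(x, y):
        return bytes(a ^ b for a, b in zip(x, y))

    # Placement: split each file into halves; user 1 caches (A1, B1) and
    # user 2 caches (A2, B2), i.e., half of every file.
    A, B = b"AAAAAAAA", b"BBBBBBBB"
    A1, A2, B1, B2 = A[:4], A[4:], B[:4], B[4:]

    # Delivery for demands (user 1 wants A, user 2 wants B): the server
    # broadcasts the single message A2 xor B1, half a file instead of one
    # (or two) full files.
    msg = xor(A2, B1)

    # Each user decodes its missing half from the broadcast and its cache.
    assert A1 + xor(msg, B1) == A  # user 1 recovers A2, hence A
    assert xor(msg, A2) + B2 == B  # user 2 recovers B1, hence B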
VI. C ONCLUDING R EMARKS
In this newsletter article, we have argued that information
theory can play an important role in providing a fundamental
understanding of how to design and operate cache networks.
Many open questions remain to complete this understanding,
and we have pointed out a number of them.
REFERENCES
[1] J. F. Kurose and K. W. Ross, Computer Networking: A Top-Down
Approach. Pearson, sixth ed., 2012.
[2] M. A. Maddah-Ali and U. Niesen, Fundamental limits of caching, IEEE Trans. Inf. Theory, vol. 60, pp. 2856–2867, May 2014.
[3] A. Silberschatz, P. B. Galvin, and G. Gagne, Operating System Concepts.
Wiley, eighth ed., 2008.
[4] A. Sengupta and R. Tandon, Improved approximation of storage-rate
tradeoff for caching via new outer bounds, in Proc. IEEE ISIT, June
2015.
[5] H. Ghasemi and A. Ramamoorthy, Improved lower bounds for coded
caching, in Proc. IEEE ISIT, June 2015.
[6] C. Tian, A note on the fundamental limits of coded caching,
arXiv:1503.00010 [cs.IT], Feb. 2015.
[7] M. A. Maddah-Ali and U. Niesen, Decentralized coded caching attains order-optimal memory-rate tradeoff, IEEE/ACM Trans. Netw., vol. 23, pp. 1029–1040, Aug. 2015.

[8] U. Niesen and M. A. Maddah-Ali, Coded caching with nonuniform demands, arXiv:1308.0178 [cs.IT], Aug. 2013.
[9] M. Ji, A. M. Tulino, J. Llorca, and G. Caire, Order-optimal rate of caching and coded multicasting with random demands, arXiv:1502.03124 [cs.IT], Feb. 2015.
[10] J. Zhang, X. Lin, and X. Wang, Coded caching under arbitrary popularity distributions, in Proc. ITA, Feb. 2015.
[11] J. Hachem, N. Karamchandani, and S. Diggavi, Effect of number of users in multi-level coded caching, in Proc. IEEE ISIT, June 2015.
[12] R. Pedarsani, M. A. Maddah-Ali, and U. Niesen, Online coded caching, arXiv:1311.3646 [cs.IT], Nov. 2013. To appear in IEEE/ACM Trans. Netw.
[13] D. D. Sleator and R. E. Tarjan, Amortized efficiency of list update and paging rules, Commun. ACM, vol. 28, pp. 202–208, Feb. 1985.
[14] U. Niesen and M. A. Maddah-Ali, Coded caching for delay-sensitive content, in Proc. IEEE ICC, June 2015.
[15] N. Karamchandani, U. Niesen, M. A. Maddah-Ali, and S. Diggavi, Hierarchical coded caching, arXiv:1403.7007 [cs.IT], Mar. 2014.
[16] J. Hachem, N. Karamchandani, and S. Diggavi, Content caching and delivery over heterogeneous wireless networks, arXiv:1404.6560 [cs.IT], Apr. 2014.
[17] S. Shariatpanahi, A. S. Motahari, and B. H. Khalaj, Multi-server coded caching, arXiv:1503.00265 [cs.IT], Mar. 2015.
[18] M. Ji, G. Caire, and A. F. Molisch, Fundamental limits of caching in wireless D2D networks, arXiv:1405.5336 [cs.IT], May 2014.
[19] S. Li, M. A. Maddah-Ali, and S. Avestimehr, Coded MapReduce, in Proc. Allerton Conf., Sept. 2015.
[20] R. Timo and M. Wigger, Joint cache-channel coding over erasure broadcast channels, arXiv:1505.01016 [cs.IT], May 2015.
[21] A. Ghorbel, M. Kobayashi, and S. Yang, Cache-enabled broadcast packet erasure channels with state feedback, arXiv:1509.02074 [cs.IT], Sept. 2015.
[22] M. A. Maddah-Ali and U. Niesen, Cache-aided interference channels, in Proc. IEEE ISIT, June 2015.
[23] N. Golrezaei, K. Shanmugam, A. G. Dimakis, A. F. Molisch, and G. Caire, Femtocaching: Wireless video content delivery through distributed caching helpers, in Proc. IEEE INFOCOM, pp. 1107–1115, Mar. 2012.
[24] A. Liu and V. K. N. Lau, Exploiting base station caching in MIMO cellular networks: Opportunistic cooperation for video streaming, IEEE Trans. Signal Process., vol. 63, pp. 57–69, Jan. 2015.
[25] K. Poularakis, G. Iosifidis, and L. Tassiulas, Approximation algorithms for mobile data caching in small cell networks, IEEE Trans. Commun., vol. 62, pp. 3665–3677, Oct. 2014.
[26] R. Ahlswede, N. Cai, S.-Y. R. Li, and R. W. Yeung, Network information flow, IEEE Trans. Inf. Theory, vol. 46, pp. 1204–1216, Apr. 2000.
[27] A. R. Lehman and E. Lehman, Complexity classification of network information flow problems, in Proc. ACM-SIAM SODA, pp. 142–150, Jan. 2004.
[28] Y. Birk and T. Kol, Coding on demand by an informed source (ISCOD) for efficient broadcast of different supplemental data to caching clients, IEEE Trans. Inf. Theory, vol. 52, pp. 2825–2830, June 2006.
[29] Z. Bar-Yossef, Y. Birk, T. S. Jayram, and T. Kol, Index coding with side information, in Proc. IEEE FOCS, pp. 197–206, Oct. 2006.
[30] M. Langberg and A. Sprintson, On the hardness of approximating the network coding capacity, IEEE Trans. Inf. Theory, vol. 57, pp. 1008–1014, Feb. 2011.
