
Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoff
Mohammad Ali Maddah-Ali (Bell Labs, Alcatel-Lucent)
Joint work with Urs Niesen
Allerton, October 2013
Video on Demand
[Figure: normalized demand (0–100) versus time (0–22), showing strong temporal variability in video-on-demand traffic]
• High temporal traffic variability
• Caching (prefetching) can help to smooth traffic
Caching (Prefetching)
• Placement phase: populate the caches
  • Demands are not known yet
• Delivery phase: requests are revealed, content is delivered
Problem Setting
[Figure: a server storing N files, connected over a shared link to K users, each with a cache of size M]
Placement:
- Cache an arbitrary function of the files (linear, nonlinear, …)
Delivery:
- Requests are revealed to the server
- Server sends a function of the files
Question: What is the smallest worst-case rate R(M) needed in the delivery phase? How to choose (1) the caching functions and (2) the delivery functions?
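Stated compactly, in notation paraphrased from this slide (the demand symbols d_1, …, d_K are shorthand introduced here, not the slide's own):

```latex
% R^*(M): smallest worst-case delivery rate achievable with caches of
% size M, optimized jointly over all caching and delivery functions.
R^{*}(M) \;=\; \min_{\substack{\text{caching fns} \\ \text{delivery fns}}}
  \;\; \max_{\text{demands}\,(d_1,\ldots,d_K)} R(d_1,\ldots,d_K)
```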
Coded Caching
N Files, K Users, Cache Size M
• Uncoded Caching
  • Caches used to deliver content locally
  • Local cache size matters
• Coded Caching [Maddah-Ali, Niesen 2012]
  • The main gain in caching is global
  • Global cache size matters (even though caches are isolated)
Centralized Coded Caching
N=3 Files, K=3 Users, Cache Size M=2
Maddah-Ali, Niesen, 2012
(Approximately optimal.)
[Figure: each file is split into three subfiles indexed by pairs of users, A = (A12, A13, A23), B = (B12, B13, B23), C = (C12, C13, C23). User k caches every subfile whose index contains k, which exactly fills a cache of size M = 2. For demands (A, B, C), the server sends the single coded message A23 ⊕ B13 ⊕ C12, a rate of 1/3.]
Multicasting opportunity among three users with different demands (a code sketch follows below)
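A minimal runnable sketch of this example, assuming toy integer subfile contents so that the XOR is concrete; the names A, B, C, the pair indexing, and the demands follow the slide, while everything else is illustrative:

```python
from itertools import combinations

USERS = (1, 2, 3)
FILE_NAMES = ("A", "B", "C")

# Toy subfile "contents": one small integer per subfile so XOR is defined.
# Subfile X_S of file X is the third of X cached by exactly the pair S.
subfiles = {
    (name, pair): content
    for content, (name, pair) in enumerate(
        (n, p) for n in FILE_NAMES for p in combinations(USERS, 2)
    )
}

# Placement: user k caches every subfile whose index pair contains k.
# 3 files x 2 subfiles of size 1/3 each = a full cache of size M = 2.
cache = {k: {key for key in subfiles if k in key[1]} for k in USERS}

# Demands from the slide: user 1 wants A, user 2 wants B, user 3 wants C.
demand = {1: "A", 2: "B", 3: "C"}

def missing_pair(k):
    # The one subfile user k lacks is indexed by the other two users.
    return tuple(u for u in USERS if u != k)

# Delivery: one multicast message A23 xor B13 xor C12, i.e. rate 1/3.
message = 0
for k in USERS:
    message ^= subfiles[(demand[k], missing_pair(k))]

# Decoding: user k XORs out the two terms it holds in its own cache.
for k in USERS:
    recovered = message
    for other in USERS:
        if other != k:
            term = (demand[other], missing_pair(other))
            assert term in cache[k]   # the cancelled terms are cached
            recovered ^= subfiles[term]
    assert recovered == subfiles[(demand[k], missing_pair(k))]

print("one coded transmission served three different demands")
```

The assertions make the multicasting point explicit: every user cancels the two terms already in its cache and is left with exactly its missing subfile.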
Centralized Coded Caching
N=3 Files, K=3 Users, Cache Size M=2
• Centralized caching needs the number and identity of the users in advance
• In practice, this is not the case:
  • Users may turn off
  • Users may be asynchronous
  • Topology may be time-varying (wireless)
[Figure: the same subfile placement as on the previous slide]
Question: Can we achieve similar gain without such knowledge?
Decentralized Proposed Scheme
N=3 Files, K=3 Users, Cache Size M=2
Prefetching: each user caches 2/3 of the bits of each file
- randomly,
- uniformly,
- independently.
Delivery: greedy linear encoding
[Figure: the bits of each file are grouped by the subset of users that cached them (0, 1, 2, 3, 12, 13, 23, 123). The server sends one XOR per subset of users: 23 ⊕ 13 ⊕ 12 for the triple, then 1 ⊕ 2, 2 ⊕ 3, and 1 ⊕ 3 for the pairs, and the uncached groups 0 uncoded to each user.]
(A code sketch of this scheme follows below.)
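A small simulation of this scheme under stated assumptions: toy parameters (N = K = 3, M = 2, 3000 bits per file), distinct demands, and the rate counted as the sum, over user subsets, of the longest piece in each XOR (shorter pieces zero-padded). This is a sketch of the scheme's structure, not the papers' implementation:

```python
import random
from itertools import combinations

random.seed(0)

K, N, F = 3, 3, 3000   # users, files, bits per file
M = 2                  # cache size (in files); q = M/N of each file cached
q = M / N
USERS = range(K)
demand = [0, 1, 2]     # user k requests file k (all demands distinct)

# Decentralized placement: each user caches each bit of each file
# independently with probability q, with no coordination whatsoever.
cached = [[{b for b in range(F) if random.random() < q} for f in range(N)]
          for k in USERS]

def holders(f, b):
    # The exact subset of users that cached bit b of file f.
    return frozenset(k for k in USERS if b in cached[k][f])

# Greedy linear delivery: for every subset S of users, XOR together, over
# k in S, the bits of user k's file that are held by exactly S \ {k}.
# Each user in S can cancel every other term from its cache, so one XOR
# serves all of S at the cost of the longest piece.
sent_bits = 0
for size in range(K, 0, -1):
    for S in combinations(USERS, size):
        piece_lengths = [
            sum(1 for b in range(F)
                if holders(demand[k], b) == frozenset(S) - {k})
            for k in S
        ]
        sent_bits += max(piece_lengths)

print(f"simulated coded rate : {sent_bits / F:.3f} files")
print(f"uncoded delivery     : {K * (1 - q):.3f} files")
print(f"theoretical R_D(M)   : {(N / M) * (1 - q) * (1 - (1 - q) ** K):.3f}")
```

By concentration, the simulated rate should land near the theoretical 0.48 files, versus 1 file for uncoded delivery of the three distinct demands.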
Decentralized Caching
Centralized Prefetching:
[Figure: each file is deterministically split into the pieces 12, 13, 23]
Decentralized Prefetching:
[Figure: each file is randomly partitioned into the pieces 0, 1, 2, 3, 12, 13, 23, 123, one per subset of users]
Comparison
N Files, K Users, Cache Size M
• Uncoded: Local Cache Gain
  • Proportional to local cache size
  • Offers minor gain
• Coded (Centralized) [Maddah-Ali, Niesen, 2012] and Coded (Decentralized): Global Cache Gain
  • Proportional to global cache size
  • Offers a gain on the order of the number of users
(The rate expressions behind these gains are sketched below.)
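For reference, the delivery rates behind this comparison, as given in the two cited Maddah-Ali–Niesen papers (restated here in the talk's notation, as a summary rather than a quotation):

```latex
% Delivery rates for N files, K users, cache size M (units: files).
% Uncoded caching: only the local cache gain (1 - M/N).
R_U(M) = K \left(1 - \frac{M}{N}\right)

% Centralized coded caching [Maddah-Ali, Niesen 2012],
% for M in {0, N/K, 2N/K, ..., N}: global gain 1/(1 + KM/N).
R_C(M) = K \left(1 - \frac{M}{N}\right) \cdot \frac{1}{1 + KM/N}

% Decentralized coded caching: the same global gain up to a constant.
R_D(M) = K \left(1 - \frac{M}{N}\right)
         \cdot \frac{N}{KM}\left(1 - \left(1 - \frac{M}{N}\right)^{K}\right)
```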
Can We Do Better?
Theorem:
The proposed scheme is optimal to within a constant factor in rate.
• Information-theoretic lower bound (sketched below)
• The constant gap is uniform in the problem parameters
• No significant gains exist besides the local and global ones
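The information-theoretic bound in the papers is a cut-set argument; it takes roughly the following form (restated from the published results, so treat the exact expression as a sketch to be checked against the originals):

```latex
% Cut-set lower bound on the optimal memory-rate tradeoff R*(M):
R^{*}(M) \;\ge\; \max_{s \,\in\, \{1,\ldots,\min(N,K)\}}
  \left( s \;-\; \frac{s}{\lfloor N/s \rfloor}\, M \right)
```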
Asynchronous Delivery
[Figure: three users, each progressing through Segment 1, Segment 2, and Segment 3 of the content at staggered times]
Conclusion
• We can achieve within a constant factor of the optimal caching performance through
  • Decentralized and uncoded prefetching
  • Greedy and linearly coded delivery
• Significant improvement over uncoded caching schemes
  • Reduction in rate up to the order of the number of users
• Papers available on arXiv:
  • Maddah-Ali and Niesen, "Fundamental Limits of Caching" (Sept. 2012)
  • Maddah-Ali and Niesen, "Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoff" (Jan. 2013)
  • Niesen and Maddah-Ali, "Coded Caching with Nonuniform Demands" (Aug. 2013)