Optimal Multicast Algorithms


Transcript: Optimal Multicast Algorithms

Hongyi Yao (Tsinghua University)
Sidharth Jaggi (Chinese University of Hong Kong)
Theodoros Dikaliotis (California Institute of Technology)
Tracey Ho (California Institute of Technology)
MANIACs: Multiple Access Network Information-flow And Correction codes
Multi-source Multicast

[Figure: two sources inject messages M1 and M2, at rates R1 and R2, into a network; the receivers A1 and A2 must decode both. C1 and C2 denote the min-cut capacities from each individual source to the receivers, and C the min-cut from both sources jointly. The plotted rate region is bounded by R1 ≤ C1, R2 ≤ C2, and R1 + R2 ≤ C.]
Single-source Multicast with errors

[Figure: a single source with message M1 sends at rate R1 through the network to the receivers while an adversary A1 injects errors on up to z links; the source transmits a coded version f(M1), spending 2z units of capacity on redundancy. The plotted achievable rate satisfies R1 ≤ C − 2z, out of the min-cut C.]
Single source versus Multiple sources

[Figure: a virtual "super source" holding both M1 (rate R1) and M2 (rate R2) could jointly encode them as f(M1, M2) and g(M1, M2), adding 2z1 and 2z2 units of redundancy against the adversaries A1 and A2.]
Single source versus Multiple sources

[Figure: rate regions in the (R1, R2) plane for the example C1 = 6, C2 = 6, C = 9, z = 1. Time sharing achieves the segment between (4, 0) and (0, 4); a naïve implementation adds the points (4, 1) and (1, 4); the rate region achieved in this work extends to (4, 3) and (3, 4).]
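As a quick sanity check of the example numbers, the following sketch tests the plotted points against the region R1 ≤ C1 − 2z, R2 ≤ C2 − 2z, R1 + R2 ≤ C − 2z. That inequality set is my reading of the plot, not something stated verbatim on the slide.

# Check the plotted rate pairs for C1=6, C2=6, C=9, z=1 against the
# assumed region R1 <= C1-2z, R2 <= C2-2z, R1+R2 <= C-2z.
C1, C2, C, z = 6, 6, 9, 1

def in_region(R1, R2):
    return R1 <= C1 - 2 * z and R2 <= C2 - 2 * z and R1 + R2 <= C - 2 * z

plotted_points = [(4, 0), (0, 4), (4, 1), (1, 4), (4, 3), (3, 4)]
for p in plotted_points:
    print(p, in_region(*p))   # every plotted point satisfies the constraints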
Reed-Solomon codes:
• Operate over vectors
• Hamming distance
• Efficient encoding and decoding

Gabidulin codes:
• Operate over matrices
• Rank-metric distance (both metrics are defined below)
• Efficient encoding and decoding
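For reference, the two distance measures contrasted above, in their standard definitions (not specific to this talk): the Hamming distance counts the differing coordinates of two vectors, while the rank distance is the rank of the difference of two matrices.

% Standard definitions of the two metrics.
d_H(x, y) \;=\; \bigl|\{\, i : x_i \neq y_i \,\}\bigr|,
\qquad
d_R(A, B) \;=\; \operatorname{rank}(A - B).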
Field Extensions

Each element of F4 can be represented as a length-2 vector over F2, e.g.

2 ∈ F4 ↔ [1 0] ∈ F2^{1×2}
3 ∈ F4 ↔ [1 1] ∈ F2^{1×2}

and, applying this entry by entry, a matrix over F4 expands into a matrix over F2 (a small sketch of this expansion follows below). The fields form an extension tower

F_{2^1} ⊂ F_{2^2} ⊂ F_{2^4} ⊂ F_{2^8} ⊂ …, i.e., F2 ⊂ F4 ⊂ F16 ⊂ F256 ⊂ …
Gabidulin encoding

The message is X ∈ F_{q^n}^{R×1}: a column of R symbols from F_{q^n}, equivalently an R × n matrix over F_q. Multiplying by a Gabidulin generator matrix G gives the codeword GX ∈ F_{q^n}^{(R+t)×1}. The channel adds an error E, so the receiver observes GX + E ∈ F_{q^n}^{(R+t)×1}.
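For reference, one standard way to realize such a generator matrix G is as a Moore matrix built from evaluation points g_1, …, g_{R+t} ∈ F_{q^n} that are linearly independent over F_q (so n ≥ R + t). This is the textbook Gabidulin construction; the talk's G may differ in the choice of evaluation points.

% A standard Gabidulin generator (Moore-matrix form); entries are q-power
% evaluations of the F_q-linearly independent points g_j.
G \;=\;
\begin{bmatrix}
g_1 & g_1^{q} & \cdots & g_1^{q^{R-1}} \\
g_2 & g_2^{q} & \cdots & g_2^{q^{R-1}} \\
\vdots & \vdots & & \vdots \\
g_{R+t} & g_{R+t}^{q} & \cdots & g_{R+t}^{q^{R-1}}
\end{bmatrix}
\in \mathbb{F}_{q^n}^{(R+t)\times R},
\qquad
GX \in \mathbb{F}_{q^n}^{(R+t)\times 1}.

With t = 2z, this code has minimum rank distance t + 1 and therefore corrects errors of rank up to z.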
Field Extensions

F_q ⊂ F_{q^C} ⊂ F_{q^{C^2}}

Encoding

• G1 can correct rank-z errors over F_q: X1 = G1 M1, where M1 has R1 rows and X1 has R1 + 2z rows, each row holding n symbols.
• G2 can correct rank-z errors over F_{q^C}: X2 = G2 M2, where M2 has R2 rows and X2 has R2 + 2z rows, each row holding n symbols.
Network transmission

The receiver observes

Y = T1 G1 M1 + T2 G2 M2 + E,

which can be rewritten as

Y = [T1, T2] [G1 M1; G2 M2] + E    (1)
Y = [T1 G1, T2] [M1; G2 M2] + E    (2)

The matrix D = [T1 G1, T2] is a C × C matrix over F_{q^C} and is invertible with high probability. Premultiplying by D^{-1} gives

D^{-1} Y = [M1; G2 M2] + D^{-1} E    (3)

Keeping only the lower R2 + 2z rows of both sides, denoted (D^{-1} Y)' and (D^{-1} E)':

(D^{-1} Y)' = G2 M2 + (D^{-1} E)'    (4)

Example: consider the relation (D^{-1} Y) = G2 M2 + (D^{-1} E) for a specific invertible D ∈ F4^{3×3} (given numerically on the slide) and an error E ∈ F2^{3×2} of rank 1 over F2. For these values,

D^{-1} E = (1, 2, 3)^T ∈ F4^{3×1},

which is a matrix of rank 1 over F4; but written as a 3 × 2 matrix over F2, it has rank 2 over F2.
Thus M2 can be decoded. Then, since Y = T1 G1 M1 + T2 G2 M2 + E,

Y − T2 G2 M2 = T1 G1 M1 + E,

and M1 can be decoded as well.
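To make the rank observation in the example above concrete, here is a small self-contained Python sketch (my own illustration, not code from the talk). It uses the F4-to-F2 mapping 0 ↔ [0, 0], 1 ↔ [0, 1], 2 ↔ [1, 0], 3 ↔ [1, 1] and checks that the column (1, 2, 3)^T, which trivially has rank 1 over F4, expands to a 3 × 2 binary matrix of rank 2 over F2.

# Rank of D^{-1}E over F4 versus over F2 (illustration only).
import numpy as np

# Each F4 symbol (0..3, with 2 = alpha, 3 = alpha + 1) as a length-2 bit vector.
F4_TO_F2 = {0: [0, 0], 1: [0, 1], 2: [1, 0], 3: [1, 1]}

def expand_to_f2(column):
    """Turn a column vector over F4 into a 0/1 matrix with one row per symbol."""
    return np.array([F4_TO_F2[sym] for sym in column], dtype=int)

def gf2_rank(matrix):
    """Rank of a 0/1 matrix over F2, by Gaussian elimination mod 2."""
    m = matrix.copy() % 2
    rank = 0
    for col in range(m.shape[1]):
        pivot = next((r for r in range(rank, m.shape[0]) if m[r, col]), None)
        if pivot is None:
            continue
        m[[rank, pivot]] = m[[pivot, rank]]      # bring the pivot row up
        for r in range(m.shape[0]):
            if r != rank and m[r, col]:
                m[r] = (m[r] + m[rank]) % 2      # clear the column elsewhere
        rank += 1
    return rank

d_inv_e = [1, 2, 3]                  # D^{-1}E as one column over F4: rank 1 over F4
expanded = expand_to_f2(d_inv_e)     # [[0 1], [1 0], [1 1]] over F2
print(gf2_rank(expanded))            # 2: the rank exceeds z = 1 over F2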
Questions?
Network transmission

• T1 and T2 are the transfer matrices from sources 1 and 2, respectively, to the receiver; Ti has dimensions C × (Ri + 2z) for i ∈ {1, 2}.
• The error matrix E has rank no more than z over F.
• The receiver gets

Y = T1 X1 + T2 X2 + E = [T1, T2] [X1; X2] + E    (1)

and, since X1 = G1 M1 and X2 = G2 M2,

Y = [T1, T2] [G1 M1; G2 M2] + E    (2)
Y = [T1 G1, T2] [M1; G2 M2] + E    (3)
Decoding of M2

• The matrix D = [T1 G1, T2] has dimensions C × C over F1 and is invertible with high probability (a dimension check follows after this list). Here F = F_q, F1 = F_{q^C}, and F2 = F_{q^{C^2}} are the fields of the extension tower used in the encoding.
• Premultiplying by D^{-1}, the receiver gets

D^{-1} Y = [M1; G2 M2] + D^{-1} E.

• The matrix D^{-1} E has rank no more than z over F1, but it might have rank more than z over F.
• G2 is the Gabidulin matrix over F2 that corrects rank-z errors even over F1, so M2 can be decoded from the lower R2 + 2z rows of D^{-1} Y.
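A quick dimension check of why D can be square. The relation R1 + R2 + 2z = C is my inference from the slides' claim that D is C × C; it is not stated explicitly.

% Dimension bookkeeping for D = [T1 G1, T2].
T_1 \in \mathbb{F}^{\,C\times(R_1+2z)},\quad
G_1 \in \mathbb{F}_1^{\,(R_1+2z)\times R_1},\quad
T_2 \in \mathbb{F}^{\,C\times(R_2+2z)}
\;\;\Longrightarrow\;\;
D = [\,T_1G_1\;\;T_2\,] \in \mathbb{F}_1^{\,C\times(R_1+R_2+2z)},

which is C × C exactly when R1 + R2 + 2z = C.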
Non-Coherent Decoding

• The previous construction is a coherent code: the receiver is assumed to know T1 and T2.
• A non-coherent code can be achieved by adding headers to each packet:

X1 = [ I_{R1+2z}   0   G1 M1 ]
X2 = [ 0   I_{R2+2z}   G2 M2 ]

where the identity and zero blocks have widths R1 + 2z and R2 + 2z, the payload blocks G1 M1 and G2 M2 have width nC², and both matrices are over F (a small shape sketch follows below).
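As an illustration of the header layout above, here is a minimal numpy sketch that assembles X1 and X2 as block matrices. It only mimics the shapes; real entries are finite-field symbols, and the rates, z, and payload_cols (standing in for nC²) are made-up example values.

# Minimal sketch of the non-coherent header layout (shapes only).
import numpy as np

R1, R2, z = 3, 2, 1          # example rates and error bound (made-up values)
payload_cols = 8             # stands in for the payload width n*C^2

rows1, rows2 = R1 + 2 * z, R2 + 2 * z

# Placeholder payloads G1*M1 and G2*M2 (random 0/1 entries, for shape only).
G1M1 = np.random.randint(0, 2, size=(rows1, payload_cols))
G2M2 = np.random.randint(0, 2, size=(rows2, payload_cols))

# X1 = [ I_{R1+2z} | 0 | G1M1 ],  X2 = [ 0 | I_{R2+2z} | G2M2 ]
X1 = np.hstack([np.eye(rows1, dtype=int), np.zeros((rows1, rows2), dtype=int), G1M1])
X2 = np.hstack([np.zeros((rows2, rows1), dtype=int), np.eye(rows2, dtype=int), G2M2])

print(X1.shape)  # (5, 5 + 4 + 8) = (5, 17)
print(X2.shape)  # (4, 17)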
Non-Coherent Decoding

• The receiver gets Y = [Y1, Y2, Y3] = [T1, T2, T1 G1 M1 + T2 G2 M2] + E.
• The receiver computes (over the field F1)

Ya = Y · blockdiag(G1, I_{R2+2z}, I_{nC²}) = [Y1 G1, Y2, Y3] = [D, DX] + E',

where D = [T1 G1, T2], X = [M1; G2 M2], and E' is a matrix of rank no more than z over F1.
Non-Coherent Decoding

• Using invertible row operations on Ya (over F1), Bob computes the row-reduced echelon form of Ya, from which he extracts:
  L: error locations, of dimensions C × a;
  V: error values, of dimensions b × nC²;
  r: codeword matrix, of dimensions C × nC².
• Let the rank distance between r and X be d.
• Result [1]: 2d − a − b < 2z + 1, and r = X + E_L E_V, where E_L has dimensions C × d with its first a columns equal to L, and E_V has dimensions d × nC² with its last b rows equal to V.

[1] D. Silva, F. Kschischang, and R. Koetter, "A rank-metric approach to error control in random network coding," IEEE Trans. on Inform. Theory, 2008.
Non-Coherent Decoding

• Consider the last R2 + 2z rows of r and L, denoted r' and L', respectively.
• The row-space distance D_row(r', G2 M2) is also no more than d, and r' = G2 M2 + E'_L E_V, where E'_L has dimensions (R2 + 2z) × d with a columns equal to L', and E_V has dimensions d × nC² with b rows equal to V.
• All of the above matrices are over the extension field F1.
• Because multiple field extensions are used, G2 is over F2 and can decode M2 from r', L', and V, which are all over F1.
• In the end, by similarly subtracting G2 M2 from Y, Bob is able to decode M1.
Comments

• The scheme has polynomial complexity in the size of the network, but exponential complexity in the number of sources (see the sketch after this list).
  - For s sources S1, S2, …, Ss, the Gabidulin generator matrix of Si must be over the finite field Fi in order to correct rank-z errors over the finite field Fi-1.
  - Thus, Fi must be a field extension of Fi-1.
  - In the end, Ss must use a Gabidulin generator matrix over a field obtained by s extensions of the base field F.
• A code construction with complexity polynomial in the number of sources remains open.
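To make the exponential growth concrete, here is a tiny sketch of how the field size needed by source Si grows under the tower F = F_q, F1 = F_{q^C}, F2 = F_{q^{C^2}}, … suggested by the earlier slides, i.e., |Fi| = q^(C^i). The values of q and C are made-up examples.

# Field sizes along the extension tower F ⊂ F1 ⊂ F2 ⊂ ... used by the scheme.
# q and C are made-up example values (base field size and min-cut).
q, C = 2, 4

for i in range(5):  # source S_i works over a field of size q^(C^i)
    print(f"F{i}: size q^(C^{i}) = {q ** (C ** i)}")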
Contribution of this Work

An efficient multi-source multicast error-correcting code that:
• achieves the complete rate region;
• lets each source code independently;
• keeps all internal nodes oblivious to the code: they simply apply random linear network coding;
• needs no information about the network topology (non-coherent coding).