RAPTOR CODES

AMIN SHOKROLLAHI
DF2003-06-001
Digital Fountain Technical Report
Outline






Introduction
LT codes
Raptor codes
Pre codes
Systematic Codes
Conclusion
Introduction - Digital Fountain codes

What is a digital fountain code?

A code with robust recovery ability:
Data is broken up into many components
Redundant duplicate information is added
Decoding succeeds once enough components are received
No retransmission is needed, unlike TCP

Issues:
High speed (almost linear time)
Low error rate (on the order of 1/k^c)
Introduction - Digital Fountain codes

Principle of digital fountain codes:

Linearly independent equations
Redundant equations
Examples: Reed-Solomon codes, Tornado codes, LT codes, Raptor codes
Reference: Digital Fountain with Tornado Codes and LT Codes (kcyang)
Introduction - Transmission Error

Binary Symmetric Channel (BSC): a transmitted bit arrives intact with probability 1-p and is flipped (0 <-> 1) with probability p.

Binary Erasure Channel (BEC): a transmitted bit arrives intact with probability 1-p and is erased (received as "e") with probability p.
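The two channels can be simulated in a few lines of Python (a toy sketch; the function names are illustrative):

```python
import random

def bsc(bits, p, rng=random):
    """Binary Symmetric Channel: each bit is flipped with probability p."""
    return [b ^ 1 if rng.random() < p else b for b in bits]

def bec(bits, p, rng=random):
    """Binary Erasure Channel: each bit is erased (replaced by 'e')
    with probability p, and delivered intact otherwise."""
    return ['e' if rng.random() < p else b for b in bits]
```

With p = 0 both channels deliver the input unchanged; with p = 1 the BSC inverts every bit and the BEC erases everything.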
Introduction - Distribution




F_2^k: the space of linear forms in k variables with coefficients in F_2; it is isomorphic to the space of vectors (a_1, a_2, ..., a_k).
Weight of v: the number of 1's in the vector v.
Distribution: let Ω_1, ..., Ω_k be a distribution on {1, ..., k}, so that Ω_i denotes the probability that the value i is chosen. This distribution is often denoted by Ω(x) = Σ_i Ω_i x^i.
Distribution over F_2^k:
For any vector v in F_2^k, the probability of v is Ω_w / C(k,w), where w is the weight of v.
A simple algorithm for sampling from this distribution: first sample a weight w from the distribution Ω(x), then sample a vector of weight w uniformly at random.
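The two-step sampling algorithm above can be sketched as follows (Ω is passed as a dict mapping weight i to Ω_i; the toy probabilities in the usage example are arbitrary):

```python
import random

def sample_weight(omega, rng=random):
    """Sample a weight w from Omega(x) = sum_i Omega_i * x^i."""
    u, acc = rng.random(), 0.0
    for i, p in sorted(omega.items()):
        acc += p
        if u < acc:
            return i
    return max(omega)          # guard against floating-point round-off

def sample_vector(k, omega, rng=random):
    """Sample v in F_2^k: first draw a weight w from Omega, then choose
    a weight-w vector uniformly at random (w of the k positions)."""
    w = sample_weight(omega, rng)
    ones = set(rng.sample(range(k), w))
    return [1 if i in ones else 0 for i in range(k)]
```

For example, `sample_vector(10, {1: 0.1, 2: 0.5, 3: 0.4})` returns a length-10 binary vector whose weight is 1, 2, or 3.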
Outline






Introduction
LT codes
Raptor codes
Pre codes
Systematic Codes
Conclusion
LT Codes - Encoding



A transformation from F_2^k to F_2^N.
Parameters (k, D), where k is the input size and D is a distribution over F_2^k.
Encoder algorithm (repeat N times):
select a degree d from D
select a vector v uniformly at random from F_2^k with weight d
the value of the output symbol y_l is Σ_i v_i AND x_i (addition is XOR)
update the encoding graph and l
F_2 can be generalized to any field F.
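The encoder loop above can be sketched as follows (Ω is again a dict of weight probabilities; all names are illustrative):

```python
import random

def sample_weight(omega, rng=random):
    """Draw a degree d from the distribution Omega(x) = sum_i Omega_i * x^i."""
    u, acc = rng.random(), 0.0
    for i, p in sorted(omega.items()):
        acc += p
        if u < acc:
            return i
    return max(omega)

def lt_encode(x, omega, N, rng=random):
    """Produce N LT output symbols from the input bits x.
    Each symbol is returned as (support, value): the positions selected
    by the random vector v, and the XOR of the inputs at those positions."""
    k, out = len(x), []
    for _ in range(N):
        d = sample_weight(omega, rng)              # degree d from D
        support = sorted(rng.sample(range(k), d))  # weight-d vector v
        y = 0
        for i in support:                          # y_l = XOR of selected x_i
            y ^= x[i]
        out.append((support, y))
    return out
```

Keeping the support of each v alongside its value is what builds the encoding graph the decoder needs.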
LT Codes - Encoding
Example (k = 6, N = 10): the sampled degrees d are 2, 2, 2, 1, 1, 2, 1, 1, 3, 1, with the corresponding vectors v
(101000) (110000) (000011) (001000) (000100) (000101) (010000) (000010) (100101) (001000)
LT Codes - Decoding




A transformation from F_2^N to F_2^k.
Parameter: the encoder graph G.
ML decoding and BP decoding.
Decoder algorithm (loop):
find the subgraph G' determined by the received data
select an output node y_l of degree 1, connected to some x_i
recover x_i from y_l, and update the values of the outputs connected to x_i
remove x_i
if all x are decoded, break the loop
LT Codes - Decoding
Failure example: the received vectors are
(101000) (110000) (001000) (000101) (000010) (100101)
After peeling the degree-1 symbols, the two remaining symbols both have support {x4, x6}, so no degree-1 node is left and decoding is stuck.
LT Codes - Decoding
Success example: with the additional received vector (000100), decoding completes.
(101000) (110000) (001000) (000100) (000101) (000010) (100101)
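The peeling decoder and the two examples above can be checked with a short sketch (supports are written 0-based, so (101000) becomes {0, 2}; the input vector x_true is an arbitrary sample):

```python
def bp_decode(k, received):
    """Peeling (BP) decoder over F_2. `received` is a list of
    (support, value) pairs: value is the XOR of the input bits at the
    positions in support. Unrecovered inputs are returned as None."""
    syms = [[set(s), v] for s, v in received]
    x = [None] * k
    progress = True
    while progress:
        progress = False
        for s, v in syms:
            if len(s) == 1:                    # degree-1 output symbol
                i = next(iter(s))
                x[i] = v
                for t in syms:                 # substitute x_i everywhere
                    if i in t[0]:
                        t[0].remove(i)
                        t[1] ^= v
                progress = True
                break
    return x

def xor_values(x_true, supports):
    """Attach to each support the XOR of the chosen true input bits."""
    out = []
    for s in supports:
        y = 0
        for i in s:
            y ^= x_true[i]
        out.append((s, y))
    return out

x_true = [1, 0, 1, 1, 0, 1]
fault = [{0, 2}, {0, 1}, {2}, {3, 5}, {4}, {0, 3, 5}]
ok = fault + [{3}]                             # the extra vector (000100)
```

Running the decoder on `fault` leaves two inputs unrecovered, while `ok` decodes completely.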
LT Codes – Property

If an LT code with k input symbols possesses a reliable decoding algorithm, then there is a constant c such that the graph associated with the decoder has at least c*k*log(k) edges.

A random LT code with k input symbols has encoding cost k/2, and ML decoding is a reliable decoding algorithm for this code of overhead 1 + O(log(k)/k).

Luby's paper describes LT codes with a distribution Ω(x) satisfying Ω'(1) = O(log(k)), for which the BP decoder is a reliable decoder requiring k(1 + O(√(log(k)/k))) output symbols.
Outline






Introduction
LT codes
Raptor codes
Pre codes
Systematic Codes
Conclusion
Raptor codes



The graph of an LT code needs on the order of k*log(k) edges to make sure that all input nodes are covered with high probability.
Reasons:
the information-theoretic lower bound may not be matched by an algorithm
we need to recover all of the input symbols
Solution:
first encode the input with a traditional erasure-correcting code (a pre-code)
then design the pre-code and the LT code appropriately
Raptor codes

Two extreme examples:

LT codes: no pre-code; decoding time is large.
PCO (pre-code-only) codes: a pre-code combined with Ω(x) = x; space overhead is large.

Raptor codes with good asymptotic performance: modify the LT distribution.
Raptor codes

With parameters (k, C, Ω_D).
With a suitable pre-code:
the rate R of C_n is (1 + ε/2)/(1 + ε),
the BP decoder can decode C_n on a BEC with erasure probability δ = (ε/4)/(1 + ε) = (1 - R)/2
with O(n*log(1/ε)) arithmetic operations.
What kind of pre-code is suitable?
LDPC codes are adopted.
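A quick sanity check that the two expressions for δ above agree (ε = 0.2 is an arbitrary sample overhead):

```python
eps = 0.2                                  # sample overhead (arbitrary)
R = (1 + eps / 2) / (1 + eps)              # rate of the pre-code C_n
delta = (eps / 4) / (1 + eps)              # tolerated BEC erasure probability
# algebraically, (1 - R)/2 = ((eps/2)/(1 + eps))/2 = (eps/4)/(1 + eps)
assert abs(delta - (1 - R) / 2) < 1e-12
```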
Outline






Introduction
LT codes
Raptor codes
Pre codes
Systematic Codes
Conclusion
Pre Codes

LDPC: low-density parity-check codes
Error-correcting codes
Low density gives low complexity
N inputs produce r check nodes
Ex: N = 10, r = 5
Null space
Pre Codes

Null space
Finding the null space:
compute the reduced row echelon form, rref(), via Gaussian elimination
http://www.stat.nctu.edu.tw/MISG/SUmmer_Course/C_language/Ch06/GaussElimination.htm
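The rref-based null-space computation over F_2 can be sketched as follows (a toy implementation; matrices are lists of 0/1 rows):

```python
def gf2_null_space(M):
    """Return a basis of the null space of binary matrix M over GF(2),
    found by Gaussian elimination (reduced row echelon form)."""
    rows = [row[:] for row in M]
    n_rows, n_cols = len(rows), len(rows[0])
    pivots, r = [], 0
    for c in range(n_cols):
        # find a pivot row with a 1 in column c
        pivot = next((i for i in range(r, n_rows) if rows[i][c]), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(n_rows):               # eliminate column c elsewhere
            if i != r and rows[i][c]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[r])]
        pivots.append(c)
        r += 1
    free = [c for c in range(n_cols) if c not in pivots]
    basis = []
    for f in free:                            # one basis vector per free column
        v = [0] * n_cols
        v[f] = 1
        for i, p in enumerate(pivots):        # back-substitute pivot values
            v[p] = rows[i][f]
        basis.append(v)
    return basis
```

For example, the matrix [[1,0,1],[0,1,1]] has the one-dimensional null space spanned by (1,1,1) over F_2.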
Pre Codes

Encoding (repeat r times):
set up the binary matrix M whose column C_i is the input X_i
find the null space N(M)
pick a vector v at random from N(M)
calculate the check symbol R_l = M*v (the element operation is XOR)
update the encoding graph G and l

Decoding:
set all check nodes equal to 0
for each received input X_i, set each check node R connected to X_i to X_i XOR R, and remove X_i together with all edges emanating from it from the graph
if there is a check node c of degree one, substitute its value into the value of its unique neighbor among the inputs, add that value into the values of all adjacent check nodes, and remove that input and all edges emanating from it from the graph
Pre Codes
Encode / Decode
[Worked example omitted: a diagram showing check nodes computed by XORing inputs, and erased inputs (marked "?") recovered from degree-one check nodes.]
Pre Codes – Repeat Accumulate codes


What if a check node itself is lost?
RA codes:
more robust error-correcting codes
add redundancy so that the check nodes themselves can be recovered
but without adding too much overhead
Ex: RA (repeat-accumulate) codes
Pre Codes - More
Outline






Introduction
LT codes
Raptor codes
Pre codes
Systematic Codes
Conclusion
Systematic Codes




One of the disadvantages of Raptor codes is that they aren't systematic.
Non-systematic means that the input symbols are not necessarily reproduced by the encoder.
Systematic codes offer better performance.
Systematic Raptor codes:
find the systematic positions i_1, i_2, ..., i_k
idea: a pre-decoding-like process is applied at the systematic positions
Systematic Codes - Encoding
[Diagram: the k(1+ε) x n LT-code matrix A times the n x k pre-code generator G gives a k(1+ε) x k matrix; the k rows v_i1, v_i2, ..., v_ik at the systematic positions form an invertible k x k matrix R.]

For the systematic positions i_1, i_2, ..., i_k, the input is first "decoded" (multiplied by R^-1) and then encoded (pre-code G followed by the LT code A), so that the output symbols x_i1, x_i2, ..., x_ik equal the inputs z_i1, z_i2, ..., z_ik.
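The construction can be checked on a tiny hand-picked instance (G, A, and the systematic positions below are illustrative toys, not values from the paper):

```python
def gf2_matmul(A, B):
    """Multiply binary matrices over F_2."""
    return [[sum(a & b for a, b in zip(row, col)) % 2
             for col in zip(*B)] for row in A]

def gf2_inverse(R):
    """Invert a k x k binary matrix over F_2 by Gauss-Jordan elimination."""
    k = len(R)
    aug = [row[:] + [1 if i == j else 0 for j in range(k)]
           for i, row in enumerate(R)]
    for c in range(k):
        pivot = next(i for i in range(c, k) if aug[i][c])
        aug[c], aug[pivot] = aug[pivot], aug[c]
        for i in range(k):
            if i != c and aug[i][c]:
                aug[i] = [a ^ b for a, b in zip(aug[i], aug[c])]
    return [row[k:] for row in aug]

# Toy instance: pre-code generator G (n x k = 3 x 2), LT matrix A (3 x 3).
G = [[1, 0], [0, 1], [1, 1]]
A = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
M = gf2_matmul(A, G)                  # combined k(1+eps) x k matrix
positions = [0, 2]                    # systematic positions: rows of M forming R
R = [M[i] for i in positions]
R_inv = gf2_inverse(R)

z = [1, 1]                            # source symbols
u = gf2_matmul(R_inv, [[b] for b in z])   # "pre-decode": u = R^-1 * z
y = gf2_matmul(M, u)                  # then pre-code + LT-encode
```

The output symbols at the systematic positions reproduce the source symbols z exactly, which is the defining property of the construction.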
Systematic Codes - Decoding
Systematic Codes – Considerations



In practice, it is good to permute the vectors v_1, ..., v_k(1+ε) so that the systematic positions become the first k positions.
The error probability of encoding can be reduced by generating many more initial vectors than k(1+ε) in Alg. 7.
To improve decoding running time:
it is not necessary to entirely re-encode the vector y in Step 2 of Alg. 14, because the decoding process in Step 1 will already have recovered a large fraction of the coordinate positions of the vector obtained by applying the pre-code to y; those coordinate positions do not need to be recalculated.
in practice, the cost of multiplying with R in Algorithm 11 is much smaller than O(k^2), because the matrix R can be "almost" upper-triangularized.
Outline






Introduction
LT codes
Raptor codes
Pre codes
Systematic Codes
Conclusion
Conclusion


The first class of universal fountain codes was LT codes, invented by Luby. They can be decoded with an error probability that is at most inversely polynomial in the number of input symbols, and the average weight of the n output symbols is Ω(log(k)) when n ≈ k; it is possible to find weight distributions that match this lower bound via a fast decoder, and one was exhibited by Luby.

The basic idea of Raptor codes is an additional pre-coding applied before an appropriate LT code. In the asymptotic setting there is a class of universal Raptor codes with linear encoding/decoding time for which the failure probability converges to 0 polynomially fast in the input size.
Conclusion


In practice it is important to bound the failure probability. Finite-length Raptor codes can exhibit low decoding failure probabilities, obtained by designing a specific Raptor code with guaranteed bounds on its error performance.

One disadvantage of plain LT/Raptor codes is that they are not systematic: the input symbols are not necessarily reproduced among the output symbols. Efficient systematic versions of Raptor codes are designed to get better performance.