Problems in LDPC Coding with Applications


ON THE ANALYSIS AND APPLICATION OF
LDPC CODES
OLGICA MILENKOVIC
UNIVERSITY OF COLORADO, BOULDER
A joint work with:
VIDYA KUMAR (Ph.D)
STEFAN LAENDNER (Ph.D)
DAVID LEYBA (Ph.D)
VIJAY NAGARAJAN (MS)
KIRAN PRAKASH (MS)
OUTLINE
• A brief introduction to codes on graphs
• An overview of known results: random-like codes for standard channel models
• Code structures amenable to practical implementation: structured LDPC codes
• Codes amenable to implementation with good error-floor properties
• Code design for non-standard channels
  – Channels with memory
  – Asymmetric channels
• Applying the turbo-decoding principle to classical algebraic codes: Reed-Solomon, Reed-Muller, BCH…
• Applying the turbo-decoding principle to systems with unequal error-protection requirements
THE ERROR-CONTROL PARADIGM
Noisy channels give rise to data errors in transmission and storage systems
Powerful error-control coding (ECC) schemes are needed: linear or non-linear
Linear EC codes: generated through a simple generator or parity-check matrix
Binary linear codes:

  G = | 1 0 0 0 1 0 1 |        H = | 1 1 1 0 1 0 0 |
      | 0 1 0 0 1 1 0 |            | 0 1 1 1 0 1 0 |
      | 0 0 1 0 1 1 1 |            | 1 0 1 1 0 0 1 |
      | 0 0 0 1 0 1 1 |
Binary information vector u = (u1, u2, u3, u4) (length k)
Code vector (word): x = uG (length n), satisfying HxT = 0
Key property: the "minimum distance" of the code, dmin, is the smallest Hamming distance between two distinct codewords
Rate of the code R= k/n
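As a quick illustration of these definitions, here is a minimal Python sketch; the H is the one above, and the systematic generator G shown is one valid choice for this H (reconstructed here, not taken from the talk):

```python
# Systematic generator and parity-check matrices of a [7,4] Hamming code
G = [
    [1, 0, 0, 0, 1, 0, 1],
    [0, 1, 0, 0, 1, 1, 0],
    [0, 0, 1, 0, 1, 1, 1],
    [0, 0, 0, 1, 0, 1, 1],
]
H = [
    [1, 1, 1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0, 1, 0],
    [1, 0, 1, 1, 0, 0, 1],
]

def encode(u):
    """x = uG over GF(2)."""
    return [sum(u[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

def syndrome(x):
    """H x^T over GF(2); all-zero iff x is a codeword."""
    return [sum(H[r][j] * x[j] for j in range(7)) % 2 for r in range(3)]

u = [1, 0, 1, 1]
x = encode(u)
assert syndrome(x) == [0, 0, 0]   # every encoded vector satisfies HxT = 0
```

Here k = 4, n = 7, so the rate is R = k/n = 4/7.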
LDPC CODES
More than 40 years of research (1948-1994) centered around dmin
Weights of errors that a code is guaranteed to correct: t <= (dmin - 1)/2
"Bounded-distance decoding" cannot achieve the Shannon limit
Trade-off minimum distance for efficient decoding
Low-Density Parity-Check (LDPC) Codes
Gallager 1963, Tanner 1984, MacKay 1996
1. Linear block codes with sparse (small fraction of ones) parity-check matrix
2. Have natural representation in terms of bipartite graphs
3. Simple and efficient iterative decoding in the form of belief propagation
(Pearl, 1980-1990)
THE CODE GRAPH AND ITERATIVE DECODING
Most important consequence of graphical description:
efficient iterative decoding
Message passing:
• Variable nodes communicate to the check nodes their reliabilities (log-likelihoods)
• Check nodes decide which variables are not reliable and "suppress" their inputs

[Figure: bipartite Tanner graph — variable nodes on one side, check nodes on the other (irregular degrees/codes) — for

  H = | 1 1 1 0 1 0 0 |
      | 0 1 1 1 0 1 0 |
      | 1 0 1 1 0 0 1 | ]

• A small number of edges in the graph = low complexity
• Nodes on the left/right with constant degree: regular code; otherwise, the code is termed irregular
• The "degree distribution" of variables/checks can be adjusted
Best performance over standard channels: long, irregular, random-like LDPC codes
These have dmin proportional to the code length, but correct many more errors than (dmin - 1)/2
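A hard-decision relative of this message-passing idea, Gallager-style bit flipping, can be sketched in a few lines. The redundant circulant H below (all seven cyclic shifts of one weight-4 dual codeword of the cyclic [7,4] Hamming code) is an illustrative choice of mine that lets this toy decoder correct any single error; it is not a code from the talk:

```python
def cyclic_shift(row, i):
    """Shift a list i positions to the right, cyclically."""
    i %= len(row)
    return row[-i:] + row[:-i]

# Redundant circulant parity-check matrix of the cyclic [7,4] Hamming code
# (generator g(x) = 1 + x + x^3): all 7 shifts of one dual codeword.
h = [1, 0, 1, 1, 1, 0, 0]
H = [cyclic_shift(h, i) for i in range(7)]

def bit_flip_decode(y, max_iters=10):
    """Gallager-style hard-decision bit flipping: repeatedly flip the
    single bit involved in the largest number of unsatisfied checks."""
    y = list(y)
    m, n = len(H), len(y)
    for _ in range(max_iters):
        unsat = [r for r in range(m)
                 if sum(H[r][j] * y[j] for j in range(n)) % 2]
        if not unsat:
            return y                      # all parity checks satisfied
        cnt = [sum(H[r][j] for r in unsat) for j in range(n)]
        y[max(range(n), key=lambda j: cnt[j])] ^= 1
    return y

x = [1, 1, 0, 1, 0, 0, 0]                 # a codeword: Hx^T = 0
y = list(x); y[5] ^= 1                    # single bit error
assert bit_flip_decode(y) == x
```

With this redundant H every bit sits in 4 checks while any two bits share only 2, so a single erroneous bit is always the unique maximizer and gets flipped first.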
DECODING OF LDPC CODES
Iterative decoding optimal only if the code graph has no cycles
Vardy et al. 1997: all "good codes" must have cycles
• What are desirable code properties?
• Large girth (smallest cycle length): for a sharp transition to the waterfall region; large minimum distance
• As small a number of short cycles as possible
• Very low error floors (girth-optimized graphs are not a very good choice, Richardson 2003)
• Capacity-approaching performance: irregular degree distributions (channel-dependent; "safe" gap to capacity about 1 dB) (Density Evolution, Richardson and Urbanke, 2001)
• Practical applications: mathematical formula for positions of ones in H
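Girth, the first property on this list, is easy to compute for small codes by breadth-first search from every node of the Tanner graph; a minimal sketch (function name mine):

```python
from collections import deque

def tanner_girth(H):
    """Girth of the Tanner graph of H (variable nodes 0..n-1,
    check nodes n..n+m-1), via BFS from every node."""
    m, n = len(H), len(H[0])
    adj = {v: [] for v in range(n + m)}
    for r in range(m):
        for j in range(n):
            if H[r][j]:
                adj[j].append(n + r)
                adj[n + r].append(j)
    best = float("inf")
    for root in range(n + m):
        dist, parent = {root: 0}, {root: None}
        q = deque([root])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v], parent[v] = dist[u] + 1, u
                    q.append(v)
                elif parent[u] != v:
                    # a non-tree edge closes a cycle through the root
                    best = min(best, dist[u] + dist[v] + 1)
    return best

H = [
    [1, 1, 1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0, 1, 0],
    [1, 0, 1, 1, 0, 0, 1],
]
assert tanner_girth(H) == 4   # any two rows share two columns: 4-cycles
```

Tanner graphs are bipartite, so the girth is always even; here it is 4 because pairs of rows of H overlap in two positions.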
HOW CAN ONE SATISFY THESE
CONSTRAINTS?
For “standard channels” (symmetric, binary-input) the questions are
mostly well-understood
BEC: considered to be a "closed case" (capacity-achieving degree distributions, decoding complexity, stopping sets, simple EXIT-chart analysis…)
The error floor is the greatest remaining unsolved problem!
For "non-standard" channels, there is still a lot left to work on
CODE CONSTRUCTION AND
IMPLEMENTATION FOR
STANDARD CHANNELS
STRUCTURED LDPC CODES WITH GOOD
PROPERTIES


• Structured LDPC codes: amenable to practical VLSI implementations
• Construction issues: minimum distance, girth/small number of short cycles, error-floor properties
  – Cycle-related constraints
  – Trapping-set-related constraints
THE WATERFALL AND ERROR-FLOOR REGION
[Figure: FER curves illustrating the waterfall and error-floor regions; data from the Encyclopedia of Sparse Graph Codes by D. MacKay, http://www.inference.phy.cam.ac.uk/mackay/codes/data.html]

• Waterfall: optimization of minimum distance and cycle-length distribution (regular/irregular codes)
• Waterfall: optimization of degree distribution and code construction
• Error floor: optimization of trapping sets or … maybe a different decoding algorithm?
Laendner/Leyba/Milenkovic/Prakash: Allerton 2003, 2004; ICC 2004; IT 2004; IT 2005;
Tanner 2000; Vasic and Milenkovic 2001; Kim, Prepelitsa, Pless 2002; Fossorier 2004
General structured form: H is an array of permutation blocks,

  H = | P_{1,1}   P_{1,2}   ...  P_{1,n}   |
      | P_{2,1}   P_{2,2}   ...  P_{2,n}   |
      | ...       ...       ...  ...       |
      | P_{n-k,1} P_{n-k,2} ...  P_{n-k,n} |

where P is the cyclic permutation (shift) matrix

  P = | 0 1 0 ... 0 |
      | 0 0 1 ... 0 |
      | ...         |
      | 0 0 0 ... 1 |
      | 1 0 0 ... 0 |

Array codes: P_{i,j} = P^{(i-1)(j-1)}, giving

  H = | I  I       I          ...  I                |
      | I  P       P^2        ...  P^{q-1}          |
      | ...                                         |
      | I  P^{r-1} P^{2(r-1)} ...  P^{(q-1)(r-1)}   |
Masking…
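The array-code construction P_{i,j} = P^{(i-1)(j-1)} is easy to generate programmatically; a small sketch (function names mine):

```python
def perm_power(q, e):
    """q x q circulant permutation matrix P^e (P = single cyclic shift)."""
    return [[1 if (j - i) % q == e % q else 0 for j in range(q)]
            for i in range(q)]

def array_code_H(q, r):
    """Block matrix with (i, j) block P^{(i-1)(j-1)}, i = 1..r, j = 1..q."""
    rows = []
    for i in range(r):
        blocks = [perm_power(q, i * j) for j in range(q)]
        for br in range(q):
            rows.append([b for blk in blocks for b in blk[br]])
    return rows

H = array_code_H(5, 3)
assert len(H) == 15 and len(H[0]) == 25
assert all(sum(row) == 5 for row in H)   # one 1 per block in each row
```

Masking then replaces selected blocks by all-zero blocks, which lowers selected row/column degrees without disturbing the quasi-cyclic structure.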
[Examples of masked block parity-check matrices built from the blocks P^{i_1}, ..., P^{i_9}, I, P^2, and 0: different masking patterns produce irregular codes and codes with large girth.]
MATHEMATICAL CONSTRUCTIONS OF EXPONENTS
LOW- TO MODERATE-RATE CODES:

[Figure: grid of permutation blocks P with row labels r1, ..., r4 and column labels c_i, c_j, c_k, c_l]

Fan, 2000: cycles exist if the sum of exponents along a closed path is zero modulo size(P) = q:

  (r1 - r2)c_{i1} + (r2 - r3)c_{i2} + ... + (rl - r1)c_{il} = 0 mod q

Proper Array Codes (PAC): the row labels {ri} form an arithmetic progression with common difference r_{i+1} - r_i = a
Improper Array Codes (IAC): the row labels {ri} do not form an arithmetic progression

Cycle conditions on the column labels:
• i + j - 2k = 0 mod q (CW3)
• i + j - 2k = 0 and i + 2j - 3k = 0 mod q (CW4)
• 2i + j - k - 2l = 0
• i + j - k - l = 0
• 2k + i - 3j = 0
• 2j - i - k = 0
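Conditions like CW3 can be checked mechanically for a candidate set of column labels; a small sketch (function name and both label sets below are illustrative choices of mine):

```python
def has_3term_ap(labels, q):
    """True if some triple (i, j, k) of distinct labels satisfies
    i + j - 2k = 0 mod q (the CW3 condition that creates a cycle)."""
    s = set(labels)
    return any(i != j and i != k and j != k and (i + j - 2 * k) % q == 0
               for i in s for j in s for k in s)

q = 13
assert has_3term_ap([0, 1, 2], q)          # 0 + 2 - 2*1 = 0
assert not has_3term_ap([0, 1, 3, 9], q)   # no 3-term AP mod 13
```

A brute-force check like this is cubic in the number of labels, which is fine for the moderate label-set sizes these constructions use.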
ANALYTICAL RESULTS ON ACHIEVABLE
RATES

• Number-theoretic bounds for the constructions:
  – For PACs with girth g = 8: c <= C q log log q / log q
  – For IACs with girth g = 10: c <= 2 q^{1/3}
• Results based on work of Bosznay, Erdos, Turan, Szemeredi, Bourgain

Achievable-size expressions:
  q * prod_{1 <= i <= 2(q-1)} (1 - 1/(2i+1)),   or   ((d-1)k^2 + 1)/k,
  ceil(3q^2 / (2(q-1))),   k >= ceil(log q / log(dD - 1))

Example exponent sequences:
s=1: 0,1,3,4,9,10,12,13,27,28,30,38,...
s=2: 0,2,3,5,9,11,12,14,27,29,30,39,...
s=3: 0,3,4,7,9,12,13,16,27,30,35,36,...

s=1: 0,1,5,14,25,57,88,122,198,257,280,...
s=2: 0,2,7,18,37,65,99,151,220,233,545,...
s=3: 0,3,7,18,31,50,105,145,186,230,289,...
Mathematical constructions: joint work with N. Kashyap, Queen's University; AMS Meeting, Chicago 2004
CYCLE-INVARIANT DIFFERENCE SETS









• Definition: V an additive Abelian group of order v. A (v,c,1) difference set Ω in V is a c-subset of V such that for each nonzero g in V there is exactly one ordered pair (x,y) in Ω with x - y = g.
• Examples: {1,2,4} mod 7, {0,1,3,9} mod 13.
• "Incomplete" difference sets: not every element covered
• CIDS definition:
  – Arrange the elements of Ω in ascending order
  – C^i operator: cyclically shifts the sequence i positions to the right
  – For i = 1, …, m, Ω_i = C^i(Ω) - Ω mod v (component-wise) are ordered difference sets as well
  – Ω is then an (m+1)-fold cycle-invariant difference set over V
• Example: V = Z_7 and Ω = {1,2,4}, m = 2:
  Ω_1 = {4,1,2} - {1,2,4} = {3,6,5} = 3·{1,2,4} mod 7
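The example can be verified mechanically; a minimal sketch (helper name mine):

```python
def cshift(seq, i):
    """Cyclically shift a sequence i positions to the right."""
    i %= len(seq)
    return seq[-i:] + seq[:-i]

V = 7
omega = [1, 2, 4]                 # (7,3,1) difference set in Z_7
# component-wise difference of the shifted set and the original set
omega1 = [(a - b) % V for a, b in zip(cshift(omega, 1), omega)]
assert omega1 == [3, 6, 5]
assert omega1 == [(3 * a) % V for a in omega]   # equals 3·{1,2,4} mod 7
```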
CIDS – GENERALIZATION OF BOSE METHOD
S = i1 , i2 ,..., is  constructed as {a : 0  i  q 2  1,  a    GF(q)}
where  is the primitive element of GF(q2) , q odd prime
 P i1
 is
P
H 
 ...
 i s m 2
 P
P i2
P i3
P i1
P i2
...
...
P i s m 3
P i s m 4
P is 

... P i s 1 
...
... 
i s  m 1 
... P

...
GF (qm ) : {a : 0  i  qm  1,a    GF (q)}
Excellent performance for very low rate codes
HIGH-RATE CODES: LATIN SQUARE
CONSTRUCTION
[High-rate example: H is an array of blocks I, P^2, and 0, with the block positions arranged according to a Latin square.]
ERROR FLOOR ANALYSIS



• Near codewords (Neal/MacKay, 2003); trapping sets (Richardson, 2003)
• Considered to be the most outstanding problem in the area!
• An inherent property of both the decoding algorithm and the code structure
• APPROACH: try to predict the error floor (Richardson, 2003, 2004), or try to change the decoding algorithm
Observations leading to a new decoding algorithm:
1) Different computer codes give very different solutions; quantization has a very large influence on the error floor
2) Certain variables in the code graph show extremely large and uncontrolled jumps in likelihood values
3) Strange things happen when message values are very close to +1/-1
OBJECT OF INVESTIGATION: MARGULIS CODE


• Elementary trapping set (ETS): a small subset of variable nodes for which the induced subgraph has very few checks of odd degree
• INTUITION: a channel-noise configuration can make all variables in the ETS incorrect after a certain number of iterations; the small number of odd-degree checks indicates that the errors will not be easily corrected
• Margulis code: based on Cayley graphs (algebraic construction), has good girth
• The frame-error-rate curve has a floor at 10^-7 (SNR = 2.4 dB, AWGN)

Number of erroneous bits per iteration:
  Iteration:   1    2    3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19 …
  Errors:    182  142  108  73  66  55  38  29  20  16  15  14  14  14  14  14  14  14  14 …

At the "freezing" point, all variable-message values are +1/-1 and significant oscillations +1 → -1 occur.
The frozen activity lasts for 13 iterations; then the "bad messages" propagate through the whole graph, with jumps possible from -0.33 to 49.67.
ALGORITHM FOR ELIMINATING THE ERROR FLOOR




• The "oscillations" are reminiscent of problems encountered in the theory of divergent series (G.H. Hardy, Divergent Series)
• Trick: use a low-complexity "extra step" in the sum-product algorithm, based on a result by Hardy
• Tunable parameters: when to "start" using the extra step, and which "numerical biases" to use
• Density-evolution analysis can be done for optimization purposes; no loss is expected in the waterfall region (nor is one observed)
(Paper with S. Laendner, in preparation)
• 1000 good frames: standard and modified message passing both take 7.4 iterations on average for correct decoding
• Modified message passing never takes more than 16 iterations to converge
• 20 bad frames (in MATLAB): a 2/3 fraction of the errors corrected after 35, 41, 26, 119, 38, 98, … iterations (the standard decoder failed even after 10000 iterations)
• A slightly more complicated result of Hardy's can reduce the number of iterations to 20-30
VARIABLE AND CHECK NODE ARCHITECTURE
(JOINT WORK WITH N. JAYAKUMAR, S. KHATRI)
Globecom 2004, TCAS 2005

[Figure: check-node and variable-node architectures. Each node is built from PLAs (implementing the log(tanh) and arctanh nonlinearities) and adder arrays; messages from the channel and from the checks are combined in log(dv+1)+1 stages of two-input Manchester adders (Rabaey 2000).]
LDPC ENCODING: SHIFT AND ADDITION UNITS FOR QUASI-CYCLIC CODES
ADVANTAGES OF NETWORK OF PLA
ARCHITECTURE








• Logic function in 2-level form: good delay characteristics as long as the size of each PLA is bounded (Khatri, Brayton, Sangiovanni-Vincentelli, 2000)
• Standard-cell implementation: considerable delay in traversing the different levels (i.e. gates)
• Local wiring collapsed into a compact 2-level core: crosstalk immunity, reduced local wiring delays
• Devices in the PLA core are minimum-sized: compact layouts
• NMOS devices used exclusively: devices placed extremely close together
• In a standard-cell layout, both PMOS and NMOS devices are present in each cell: the PMOS-to-NMOS diffusion-spacing requirement results in a loss of layout density
• Dynamic PLAs: faster than a static standard-cell implementation
• Can easily accommodate re-configurability requirements
[Figure: chip floorplan — a ring of check/variable (C/V) node clusters with register banks S1-S4 and clocking/logic control.]
HILBERT SPACE FILLING CURVES
[Example irregular quasi-cyclic parity-check matrix with blocks P^{i_1}, ..., P^{i_6} and 0; figure: mapping of the banks S1, S2, S3, S4 onto a Hilbert space-filling curve.]
Group variable/check nodes based on their distance in the code graph and place them on adjacent fields of the Hilbert curve: proximity preserving.
CHIP PARAMETERS: 0.1 MICRON PROCESS
Estimates for n = 28800:

  Mode                                   | Throughput (Gbps) | Side of chip (mm) | Power (W)
  Flat-out (max. duty cycle of 77.14%)   | 1479.4            | 11.0923           | 104.5185
  50% duty cycle                         | 958.9158          | 11.0923           | 83.6372
  Lower clock for practical applications | 2                 | 11.0923           | 13.3214

Estimates for n = 7200:

  Mode                                   | Throughput (Gbps) | Side of chip (mm) | Power (W)
  Flat-out (max. duty cycle of 77.14%)   | 369.8537          | 5.5461            | 30.4711
  50% duty cycle (max. performance)      | 239.7289          | 5.5461            | 20.9093
  Lower clock for practical applications | 2                 | 5.5461            | 3.4406
LDPC CODES FOR
NON-STANDARD CHANNELS
PRIOR WORK


• Partial analysis of the Gilbert-Elliott (GE) channel (Eckford, Kschischang, Pasupathy, 2003-2004)
• Polya urn (PC) channel model (Alajaji and Fuja, 1994): simpler description, more accurate in terms of the autocorrelation function for many fading channels

[Figure: two-state GE channel with good (G) and bad (B) states; state-transition probabilities p(B,G) and p(G,B); within each state the channel is a binary symmetric channel with crossover probability p(G) or p(B).]
FINITE MEMORY POLYA URN CHANNEL




• Binary additive channel with memory
• The occurrence of an error increases the probability of future errors
• The output sample at time i (Y_i) is the modulo-two sum of the input X_i and the noise Z_i, where {X_i} and {Z_i} are independent sequences
• The noise process is formed by consecutive draws from an urn: Z_i is one if the i-th ball drawn is red and zero if it is black
• Stationary, ergodic channel with memory M
• Three parameters: (σ, δ, M), where ρ = R/T < 1/2, σ = 1 - ρ = S/T, and δ = Δ/T
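A simulation sketch of such a finite-memory urn process, under my reading of the parameters (R red and S black initial balls, delta balls added per draw and retired M draws later; the function name and defaults are mine):

```python
import random

def polya_noise(num, R, S, delta, M, seed=0):
    """Finite-memory Polya urn noise: draw from an urn with R red and
    S black balls; after each draw, add `delta` balls of the drawn
    colour and retire them again M draws later (memory M)."""
    rng = random.Random(seed)
    z, recent = [], []                    # recent: colours of last M draws
    for _ in range(num):
        red = R + delta * sum(recent)     # extra red balls in the window
        black = S + delta * (len(recent) - sum(recent))
        zi = 1 if rng.random() < red / (red + black) else 0
        z.append(zi)
        recent.append(zi)
        if len(recent) > M:
            recent.pop(0)                 # ball additions expire
    return z

z = polya_noise(10000, R=1, S=9, delta=2, M=1, seed=42)
p = sum(z) / len(z)
assert 0.0 < p < 0.5    # bursty, but still biased toward error-free samples
```

An error (red draw) temporarily raises the red fraction in the urn, so errors cluster in bursts, which is the memory effect the model is meant to capture.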
M=1 PC CHANNEL
(DEMONSTRATIVE EXAMPLE)




• Reduced 2-parameter space: a = (σ + δ)/(1 + δ) and b = (1 - σ + δ)/(1 + δ)
• A two-state Markov chain with states S0 and S1 and transition probabilities 1-a and 1-b
• Channel transition matrix:

  | a      1 - a |
  | 1 - b  b     |

• State S0 corresponds to a noise sample of value zero, and state S1 to a value of one
• Observation: this type of channel is not included in the GE analysis by Eckford et al.
Nagarajan and Milenkovic, IEEE CCEC, 2004
JOINT DECODING/CHANNEL STATE
ESTIMATION

• Use extrinsic information from the decoder to improve the quality of the channel-state estimate, and vice versa
BCJR EQUATIONS FOR PC CHANNEL
ESTIMATION

• Forward message vector α = (α(1), α(2)):
  α(1) = P{S(i) = S0; Y_1^i} = [α(1)·a + α(2)·(1 - b)]·q(·)
  α(2) = P{S(i) = S1; Y_1^i} = [α(1)·(1 - a) + α(2)·b]·(1 - q(·))
• Backward message vector β = (β(1), β(2)):
  β(1) = P{Y_i^n | S(i) = S0} = β(1)·a·q(·) + β(2)·(1 - a)·(1 - q(·))
  β(2) = P{Y_i^n | S(i) = S1} = β(1)·(1 - b)·q(·) + β(2)·b·(1 - q(·))
• Channel information passed to the decoder:
  LLR(Y_i) = log{ β(1)·[α(1)·a + α(2)·(1 - b)] / (β(2)·[α(1)·(1 - a) + α(2)·b]) }

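These are the standard two-state forward-backward (BCJR) recursions; a generic, normalized sketch (the emission likelihoods q0, q1, all parameter values, and the interface are illustrative choices of mine, not the talk's):

```python
def forward_backward(likelihoods, a, b):
    """Forward/backward recursions for the two-state (M = 1) chain with
    transition matrix [[a, 1-a], [1-b, b]] (S0 = noise 0, S1 = noise 1).
    likelihoods[i] = (q0, q1) = P(Y_i | Z_i = 0), P(Y_i | Z_i = 1).
    Returns the per-sample posteriors P(Z_i = 0 | Y)."""
    T = [[a, 1 - a], [1 - b, b]]
    n = len(likelihoods)
    # forward pass (normalized to avoid underflow)
    alpha = [[0.5, 0.5]]
    for q0, q1 in likelihoods:
        prev = alpha[-1]
        a0 = (prev[0] * T[0][0] + prev[1] * T[1][0]) * q0
        a1 = (prev[0] * T[0][1] + prev[1] * T[1][1]) * q1
        s = a0 + a1
        alpha.append([a0 / s, a1 / s])
    # backward pass
    beta = [[1.0, 1.0]]
    for q0, q1 in reversed(likelihoods):
        nxt = beta[0]
        b0 = T[0][0] * q0 * nxt[0] + T[0][1] * q1 * nxt[1]
        b1 = T[1][0] * q0 * nxt[0] + T[1][1] * q1 * nxt[1]
        s = b0 + b1
        beta.insert(0, [b0 / s, b1 / s])
    post = []
    for i in range(n):
        p0 = alpha[i + 1][0] * beta[i + 1][0]
        p1 = alpha[i + 1][1] * beta[i + 1][1]
        post.append(p0 / (p0 + p1))
    return post

# persistent noise state: clean neighbours pull an ambiguous middle sample
obs = [(0.9, 0.1), (0.5, 0.5), (0.9, 0.1)]
post = forward_backward(obs, a=0.95, b=0.95)
assert post[1] > 0.5   # middle sample inferred noise-free from context
```

The posteriors (or the LLR above) would then be handed to the LDPC decoder, whose extrinsic output refines the likelihoods on the next round.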
CODE DESIGN CONSTRAINTS: ESTIMATOR
AND DECODER

• Avoid short cycles introduced by the channel-state estimator
• Avoid short cycles in the LDPC graph
• Optimize the degree distribution
SIMULATIONS


• Performance of a CIDS-based structured code on the PC channel with channel estimation, compared to random codes
• δ = 0.2 and σ = 0.95-1 used to generate the pairs (a, b)
ASYMMETRIC CHANNELS: DISTRIBUTION SHAPING
• Gallager 1968: linear codes can achieve capacity on any Discrete Memoryless Channel (DMC) when combined with "distribution shapers"
• The mapping is not 1-1: rate loss?
• McEliece, 2001: attempts to use this scheme with Repeat-Accumulate or LDPC codes failed
• Biggest obstacle: cannot do "soft" de-mapping
• Solution: use soft Huffman encoding (an extension of an idea by Hagenauer et al. for joint source-channel coding)
(In preparation: Leyba, Milenkovic 2005)
ASYMMETRIC CHANNELS: DISTRIBUTION SHAPING
[Figure: trellis for soft Huffman de-mapping — "interior bits", completion of the codeword, and the BCJR equations for the trellis (bit time vs. symbol time); plot of the largest achievable probability bias.]
ITERATIVE DECODING OF
CLASSICAL ALGEBRAIC CODES
APPLICATIONS FOR UNEQUAL
ERROR PROTECTION
GRAPHICAL REPRESENTATIONS OF
ALGEBRAIC CODES

• Consider a [7,4,3] Hamming code with

  H = | 0 0 0 1 1 1 1 |
      | 0 1 1 0 0 1 1 |
      | 1 0 1 0 1 0 1 |

• Idea put forward by Yedidia, Fossorier
• Support sets: S1 = {4,5,6,7}; S2 = {2,3,6,7}; intersection set Δ = {6,7}
• Set the entries corresponding to Δ in these two rows to zero; insert a new column with non-zero entries in these two rows; call this an auxiliary variable V_B
• Insert a new row with non-zero entries at the positions indexed by Δ ∪ {V_B}
0
0
H1  
1

0
0
1
0
0
0
1
1
0
1
0
0
0
1
0
1
0
0
0
0
1
0
0
1
1
1
1
0

1
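The row-splitting step itself is mechanical; a sketch that reproduces the H → H1 transformation on the [7,4,3] Hamming example (function name mine; only the basic two-row splitting step is sketched):

```python
def split_rows(H, r1, r2):
    """Eliminate the four-cycles between rows r1 and r2 of H: zero out
    their common support Delta, append an auxiliary-variable column set
    in both rows, and append a new row covering Delta and that column."""
    H = [row[:] for row in H]
    n = len(H[0])
    delta = [j for j in range(n) if H[r1][j] and H[r2][j]]
    for j in delta:
        H[r1][j] = H[r2][j] = 0
    for i, row in enumerate(H):
        row.append(1 if i in (r1, r2) else 0)   # auxiliary variable V_B
    H.append([1 if j in delta else 0 for j in range(n)] + [1])
    return H

H = [
    [0, 0, 0, 1, 1, 1, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [1, 0, 1, 0, 1, 0, 1],
]
H1 = split_rows(H, 0, 1)
# rows 0 and 1 now overlap only in the auxiliary column: no four-cycle
overlap = sum(a * b for a, b in zip(H1[0], H1[1]))
assert overlap == 1
```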
MAXIMUM CYCLE NUMBER (MC)
ALGORITHM

• Approach by Sundararajan and Vasic, 2004 (SVS): use

    HH^T = | 4 2 2 |
           | 2 4 2 |
           | 2 2 4 |

  to identify all cycles; no attempt to minimize the number of auxiliary variables
• Key idea of MCS: always select the two rows i, j involved in the maximum number of cycles

Flowchart:
1. w = 1, H_w = H, n_w = n, m_w = m
2. Determine the coordinates i, j of the largest non-diagonal element of H_w H_w^T
3. If (H_w H_w^T)_{ij} <= 1, output H_w
4. Otherwise, determine Δ; for x in Δ set (H_w)_{ix} = 0, (H_w)_{jx} = 0
5. Set w = w + 1, m_w = m_{w-1} + 1, n_w = n_{w-1} + 1, and
   (H_w)_{i,n_w} = 1, (H_w)_{j,n_w} = 1, (H_w)_{m_w,x} = 1 for x in Δ, (H_w)_{m_w,n_w} = 1
6. Determine the rows l with Δ ⊆ S_l; set (H_w)_{lx} = 0, (H_w)_{l,n_w} = 1; return to step 2

Kumar, Milenkovic: CISS'2005, IEEE CTL 2005
CYCLE OVERLAP STRATEGY (COS)



• COS is a variant of the MCS algorithm
• Consider two rows i, j with intersection set Δ of their support sets
• Let s be the number of rows with a support set S_l such that Δ ⊆ S_l
• Key idea: choose the indices that maximize C((HH^T)_{ij}, 2) + C(s, 2)

Flowchart:
1. i_0 = 1, j_0 = 2, D_0 = 0
2. Determine B; construct the submatrix H_B of H_w with columns indexed by B
3. Compute s, the cardinality of the set {(l, m) | l != m and (H_B H_B^T)_{lm} = (H_w H_w^T)_{ij}}
4. Compute D = C((HH^T)_{ij}, 2) + C(s, 2)
5. If D >= D_0, then D_0 = D, i_0 = i, j_0 = j
6. Update the values of i_0, j_0 sequentially
Dimensions of four-cycle-free parity-check representations:

  Code     | Original | SVS       | MCS       | COS
  Hamming  | (3,7)    | (4,8)     | (4,8)     | (4,8)
  Hamming  | (5,31)   | (25,31)   | (10,36)   | (10,36)
  Golay    | (11,23)  | (29,41)   | (23,35)   | (22,34)
  BCH      | (8,15)   | (22,29)   | (16,23)   | (16,23)
  BCH      | (10,31)  | (57,78)   | (29,50)   | (29,50)
  BCH      | (33,63)  | (300,327) | (166,193) | (157,184)
  RS_bin   | (36,60)  | (158,182) | (110,134) | (102,126)
EXTENSION: CODES OVER NON-BINARY
FIELDS

• Motivation: applications to Reed-Solomon (RS) codes
• Two approaches:
1. Binary image mapping (Kumar, Milenkovic, Vasic CTW 2004; Milenkovic, Kumar ISIT 2004): the field element α is mapped to an m x m binary matrix

     M = | 0   1   0   ...  0       |
         | 0   0   1   ...  0       |
         | ...                      |
         | δ_0 δ_1 δ_2 ...  δ_{m-1} |

   The choice of the matrix field representation and of the primitive polynomial, and the initial choice of the parity-check matrix, are followed by cycle elimination for the binary parity-check matrix
2. Direct cycle elimination followed by vector message passing
DIRECT CYCLE ELIMINATION


Define two types of cycles over GF(q):

• Type 1 cycle (proportional rows):

    R = | x    y   |        RCF = | x  0  h |
        | x·h  y·h |              | 0  y  1 |

  If {v1, v2, v3} ⊂ GF(q) are the values taken by the three variable nodes in RCF, then
    v3 = λ_x·v1 + λ_y·v2

• Type 2 cycle:

    D = | x  y |        DCF = | x  y  0 |
        | z  w |              | z  0  1 |
                              | 0  w  1 |

  If {v1, v2, v3} ⊂ GF(q) are the values taken by the three variable nodes in DCF, then
    v3 = v2

• Type 2 cycle elimination introduces a six-cycle while eliminating the four-cycle
• Number of Type-1 cycles: (q - 1)^3
• Number of Type-2 cycles: (q - 2)(q - 1)^3
ALGEBRAIC CODES AND UNEQUAL
ERROR PROTECTION BASED ON
ITERATIVE DECODING
• Codes which provide UEP have practical applications for:
  – Non-uniform channels
  – Joint coding for parallel channels
  – Image transmission over wireless channels
  – Page-oriented optical memories
• Fekri 2002: LDPC codes for UEP based on optimizing degree distributions; no guarantee/tuning possibility for the degree of protection
• Dumer 2000: decoding algorithm for RM codes based on their Plotkin-type structure
  – Decodes different parts of the codeword separately
  – Inherently achieves UEP for different portions of the codeword
PLOTKIN CONSTRUCTION


• Parity-check matrix of a depth-1 Plotkin-type code C = {|u|u+v| : u in C1, v in C2}:

    H = | H1  0  |
        | H2  H2 |

• A global iterative decoder is not a good option: placing H2 side by side introduces at least sum_i C(c_i, 2) four-cycles
• The UEP obtained for Plotkin-type codes is an inherent characteristic of the proposed decoding algorithm
• As in other UEP schemes involving LDPC codes, one can loosely associate the UEP obtained here with the different degree distributions
• Encoding: individual for all the different component codes
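The |u|u+v| mapping and the four-cycle count can be illustrated directly (the small H2 below is a toy example of mine):

```python
from math import comb

def plotkin_encode(u, v):
    """|u|u+v| construction: u from C1, v from C2 (equal lengths here)."""
    return u + [(a + b) % 2 for a, b in zip(u, v)]

assert plotkin_encode([1, 0, 1, 1], [0, 1, 1, 0]) == [1, 0, 1, 1, 1, 1, 0, 1]

# Lower bound on four-cycles introduced by [H2 H2]: each column of
# weight c contributes C(c, 2) row pairs that now share two columns
# (the column and its duplicate).
H2 = [[1, 1, 0],
      [1, 0, 1],
      [0, 1, 1]]
col_weights = [sum(row[j] for row in H2) for j in range(3)]
assert sum(comb(c, 2) for c in col_weights) == 3
```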
INTUITION



• |u|u+v|: there are "two copies" of u, but only one copy of v, which is implicit
ADVANTAGES OF THE SCHEME:
• Reduced-complexity encoding
• Low-complexity soft-output decoders
• Flexibility of the UEP code construction and of the levels of error protection; adaptability to channel conditions
• Excellent performance for the best-protected components
MS = Multi-Stage; TMS = Threshold Multi-Stage; MR-MS = Multi-Round Multi-Stage
Kumar, Milenkovic
Globecom 2004, TCOMM 2004
FLOWCHART OF MS, MS-MR DECODER
[Flowchart of the MS and MR-MS decoders. Both start from the channel information L_{y'} and L_{y''}:
1. Form the LLRs L_{v_i} for v, using equivalent noise variances σ²_{comp,1} = σ²_{ch}/j for a1 = 0 and σ²_{comp,1} = σ²_{ch}/(j+1) for a1 = 1
2. Decode v to v̂
3. Form the LLRs for u: L_{u_i} = L_{y'_i} + v̂·L_{y''_i}
4. Decode u to û
5. Output |û|û+v̂|
In the multi-round version these stages are repeated j times.]
THANK YOU!