Chapter 4
Channel Coding
Copyright © 2003, Dr. Dharma P. Agrawal and Dr. Qing-An Zeng. All rights reserved.
Outline

Introduction
Block Codes
Cyclic Codes
CRC (Cyclic Redundancy Check)
Convolutional Codes
Interleaving
Information Capacity Theorem
Turbo Codes
ARQ (Automatic Repeat Request)
    Stop-and-wait ARQ
    Go-back-N ARQ
    Selective-repeat ARQ
Introduction
[Block diagram] Transmitter: information to be transmitted → source coding → channel coding → modulation → channel.
Receiver: channel → demodulation → channel decoding → source decoding → information received.
Forward Error Correction (FEC)

The key idea of FEC is to transmit enough redundant data that the receiver can recover from errors entirely on its own; no retransmission by the sender is required.
The major categories of FEC codes are:
    Block codes,
    Cyclic codes,
    Reed-Solomon codes (not covered here),
    Convolutional codes, and
    Turbo codes.
Linear Block Codes

Information is divided into blocks of length k.
r parity bits (check bits) are added to each block (total length n = k + r).
Code rate R = k/n.
The decoder looks for the codeword closest to the received vector (code vector + error vector).
Tradeoffs between:
    Efficiency
    Reliability
    Encoding/decoding complexity
Linear Block Codes
Let the uncoded k data bits be represented by the vector
    m = (m_1, m_2, ..., m_k)
and the corresponding n-bit codeword by the vector
    c = (c_1, c_2, ..., c_k, c_(k+1), ..., c_(n-1), c_n).
Each parity bit is a weighted modulo-2 sum of the data bits (⊕ denotes modulo-2 addition):
    c_1 = m_1
    c_2 = m_2
    ...
    c_k = m_k
    c_(k+1) = m_1 p_1(k+1) ⊕ m_2 p_2(k+1) ⊕ ... ⊕ m_k p_k(k+1)
    ...
    c_n = m_1 p_1n ⊕ m_2 p_2n ⊕ ... ⊕ m_k p_kn
Block Codes: Linear Block Codes
Linear Block Code
A codeword c of the linear block code is obtained from the information block m as
    c = m G,
where m is the k-bit information (message) vector and G is the k × n generator matrix
    G = [I_k | P],
with p_i = remainder of [x^(n-k+i-1) / g(x)] for i = 1, 2, ..., k, and I_k the identity matrix.
The parity check matrix is
    H = [P^T | I_(n-k)],
where P^T is the transpose of the matrix P.
Block Codes: Example
Example: Find the linear block code encoder G if the code generator polynomial is g(x) = 1 + x + x^3 for a (7, 4) code.
We have n = total number of bits = 7, k = number of information bits = 4,
r = number of parity bits = n - k = 3.

                  | 1 0 ... 0  p_1 |
    G = [I | P] = | 0 1 ... 0  p_2 |
                  |      ...       |
                  | 0 0 ... 1  p_k |

where
    p_i = remainder of [ x^(n-k+i-1) / g(x) ],   i = 1, 2, ..., k
Block Codes: Example (Continued)
    p_1 = Rem[ x^3 / (1 + x + x^3) ] = 1 + x        → 1 1 0
    p_2 = Rem[ x^4 / (1 + x + x^3) ] = x + x^2      → 0 1 1
    p_3 = Rem[ x^5 / (1 + x + x^3) ] = 1 + x + x^2  → 1 1 1
    p_4 = Rem[ x^6 / (1 + x + x^3) ] = 1 + x^2      → 1 0 1

        | 1 0 0 0  1 1 0 |
    G = | 0 1 0 0  0 1 1 |
        | 0 0 1 0  1 1 1 |
        | 0 0 0 1  1 0 1 |
Block Codes: Linear Block Codes
Operation of the generator matrix and the parity check matrix:

    message vector m  --[generator matrix G]-->  code vector c
    code vector c  --[parity check matrix H^T]-->  null vector 0

The parity check matrix H is used to detect errors in the received word, using the fact that c H^T = 0 (the null vector).
Let x = c ⊕ e be the received word, where c is the correct codeword and e is the error vector.
Compute the syndrome S = x H^T = (c ⊕ e) H^T = c H^T ⊕ e H^T = e H^T.
If S is 0, the message is accepted as correct; otherwise it contains errors, and the correct message can be decoded by matching S against the commonly known error patterns (syndromes).
Linear Block Codes
Consider a (7,4) linear block code, given by G as

        | 1 0 0 0  1 1 1 |
    G = | 0 1 0 0  1 1 0 |
        | 0 0 1 0  1 0 1 |
        | 0 0 0 1  0 1 1 |

Then,

        | 1 1 1 0  1 0 0 |
    H = | 1 1 0 1  0 1 0 |
        | 1 0 1 1  0 0 1 |

For m = [1 0 1 1], c = mG = [1 0 1 1 0 0 1].
If there is no error, the received vector x = c, and s = c H^T = [0 0 0].
Linear Block Codes
Let c suffer an error such that the received vector
    x = c ⊕ e
      = [1 0 1 1 0 0 1] ⊕ [0 0 1 0 0 0 0]
      = [1 0 0 1 0 0 1].
Then,
                                | 1 1 1 |
                                | 1 1 0 |
                                | 1 0 1 |
    s = x H^T = [1 0 0 1 0 0 1] | 0 1 1 | = [1 0 1] = e H^T
                                | 1 0 0 |
                                | 0 1 0 |
                                | 0 0 1 |
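The same example can be reproduced with a few lines of NumPy (an illustrative sketch, not from the slides): build H from G, encode m, inject the single-bit error, and compute the syndrome.

```python
import numpy as np

# (7,4) code from the example above
G = np.array([[1, 0, 0, 0, 1, 1, 1],
              [0, 1, 0, 0, 1, 1, 0],
              [0, 0, 1, 0, 1, 0, 1],
              [0, 0, 0, 1, 0, 1, 1]])
P = G[:, 4:]
H = np.hstack([P.T, np.eye(3, dtype=int)])   # H = [P^T | I_(n-k)]

m = np.array([1, 0, 1, 1])
c = m @ G % 2                                # codeword [1 0 1 1 0 0 1]
e = np.array([0, 0, 1, 0, 0, 0, 0])          # single error in the third position
x = (c + e) % 2                              # received vector [1 0 0 1 0 0 1]

s = x @ H.T % 2                              # syndrome
print(s, np.array_equal(s, e @ H.T % 2))     # [1 0 1] True
```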
Cyclic Codes
A cyclic code is a block code that uses a shift register to perform encoding and decoding.
The codeword with n bits is expressed as
    c(x) = c_1 x^(n-1) + c_2 x^(n-2) + ... + c_n
where each c_i is either 1 or 0.
    c(x) = m(x) x^(n-k) + c_p(x),
where c_p(x) is the remainder from dividing m(x) x^(n-k) by the generator g(x).
The received signal is c(x) + e(x), where e(x) is the error.
To check whether the received signal is error free, the remainder (the syndrome) from dividing c(x) + e(x) by g(x) is obtained. If it is 0, the received signal is considered error free; otherwise the error pattern is identified from the known error syndromes.
Cyclic Code: Example
Example: Find the codeword c(x) if m(x) = 1 + x + x^2 and g(x) = 1 + x + x^3 for a (7,4) cyclic code.
We have n = total number of bits = 7, k = number of information bits = 4,
r = number of parity bits = n - k = 3.

Then,
    c_p(x) = rem[ m(x) x^(n-k) / g(x) ]
           = rem[ (x^5 + x^4 + x^3) / (x^3 + x + 1) ]
           = x
    c(x) = m(x) x^(n-k) + c_p(x) = x + x^3 + x^4 + x^5
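For completeness, a hedged Python sketch of this systematic cyclic encoding (the gf2_remainder helper is an illustrative name, and bits are listed lowest power first, so m(x) = 1 + x + x^2 is [1, 1, 1, 0]):

```python
def gf2_remainder(poly, divisor):
    # Remainder of poly divided by divisor over GF(2); bit lists, lowest power first.
    rem = poly[:]
    for i in range(len(rem) - 1, len(divisor) - 2, -1):
        if rem[i]:
            shift = i - (len(divisor) - 1)
            for j, d in enumerate(divisor):
                rem[shift + j] ^= d
    return rem[:len(divisor) - 1]

n, k = 7, 4
m = [1, 1, 1, 0]                      # m(x) = 1 + x + x^2
g = [1, 1, 0, 1]                      # g(x) = 1 + x + x^3

shifted = [0] * (n - k) + m           # m(x) * x^(n-k)
cp = gf2_remainder(shifted, g)        # parity polynomial c_p(x)
c = cp + m                            # codeword coefficients, lowest power first
print(cp, c)                          # [0, 1, 0] -> c_p(x) = x;  c(x) = x + x^3 + x^4 + x^5
```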
Cyclic Redundancy Check (CRC)

Cyclic Redundancy Check (CRC) is an error-detecting code.
The transmitter appends an extra n-bit sequence, called the Frame Check Sequence (FCS), to every frame. The FCS holds redundant information about the frame that helps the receiver detect errors in the frame.
CRC is based on polynomial manipulation using modulo-2 arithmetic. A block of input bits, treated as the coefficient set of a polynomial, is called the message polynomial. The polynomial with fixed coefficients is called the generator polynomial.
Cyclic Redundancy Check (CRC)

The message polynomial is divided by the generator polynomial, giving a quotient and a remainder; the coefficients of the remainder form the bits of the final CRC.
Define:
    M - the original frame (k bits) to be transmitted, before adding the Frame Check Sequence (FCS).
    F - the resulting FCS of n bits to be added to M (usually n = 8, 16, or 32).
    P - the predefined CRC generator polynomial, with a pattern of n+1 bits.
    T - the concatenation of M and F.
The main idea of the CRC algorithm is that the FCS is generated so that the remainder of T/P is zero.
Cyclic Redundancy Check (CRC)

The CRC creation process is defined as follows:
    Get the block of raw message.
    Left-shift the raw message by n bits and then divide it by P.
    Take the remainder R as the FCS.
    Append R to the raw message. The result is the frame to be transmitted.

The CRC is checked using the following process (a short sketch of both steps follows):
    Receive the frame.
    Divide it by P.
    Check the remainder. If the remainder is not zero, there is an error in the frame.
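A minimal bitwise sketch of this create/check cycle (illustrative only: the example message value is arbitrary, the helpers are hypothetical names, and the generator is the CRC-16 polynomial from the following table, handled as a Python integer):

```python
def gf2_mod(value, poly, deg):
    # Remainder of value divided by poly over GF(2) (bitwise long division).
    for i in range(value.bit_length() - 1, deg - 1, -1):
        if value >> i & 1:
            value ^= poly << (i - deg)
    return value

# CRC-16 generator from the table below: g(x) = 1 + x^2 + x^15 + x^16
POLY, DEG = 0x18005, 16

def crc_create(message):
    shifted = message << DEG                  # left-shift the raw message by n bits
    fcs = gf2_mod(shifted, POLY, DEG)         # remainder R becomes the FCS
    return shifted | fcs                      # frame T = M followed by F

def crc_check(frame):
    return gf2_mod(frame, POLY, DEG) == 0     # zero remainder: no error detected

frame = crc_create(0b1101011011)              # arbitrary example message
print(crc_check(frame))                       # True
print(crc_check(frame ^ (1 << 5)))            # False: a flipped bit is detected
```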
Common CRC Codes
Code          Generator polynomial g(x)                Parity check bits
CRC-12        1 + x + x^2 + x^3 + x^11 + x^12          12
CRC-16        1 + x^2 + x^15 + x^16                    16
CRC-CCITT     1 + x^5 + x^12 + x^16                    16
Convolutional Codes

Encoding of an information stream rather than information blocks.
The value of a given information symbol also affects the encoding of the next M information symbols, i.e., the code has memory M.
Easy implementation using a shift register
    Assuming k inputs and n outputs
Decoding is mostly performed by the Viterbi algorithm (not covered here).
Convolutional Codes: (n=2, k=1, M=2)
[Encoder diagram] The input x feeds a two-stage shift register (D1, D2; Di -- register); modulo-2 adders form the two outputs y1 and y2, which are multiplexed into the coded output c.

    Input:   1   1   1   0   0   0   ...
    Output:  11  01  10  01  11  00  ...

    Input:   1   0   1   0   0   0   ...
    Output:  11  10  00  10  11  00  ...
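The tap connections are not legible in the extracted figure, but the input/output pairs above are consistent with y1 = x ⊕ D1 ⊕ D2 and y2 = x ⊕ D2 (generators 111 and 101 in binary). A Python sketch under that assumption reproduces both example sequences:

```python
def conv_encode(bits):
    # Rate-1/2, memory-2 convolutional encoder.
    # Assumed taps (consistent with the example sequences above):
    #   y1 = x ^ D1 ^ D2   (generator 111)
    #   y2 = x ^ D2        (generator 101)
    d1 = d2 = 0
    out = []
    for x in bits:
        out.append((x ^ d1 ^ d2, x ^ d2))
        d1, d2 = x, d1                    # shift the register
    return out

print(conv_encode([1, 1, 1, 0, 0, 0]))    # [(1,1), (0,1), (1,0), (0,1), (1,1), (0,0)]
print(conv_encode([1, 0, 1, 0, 0, 0]))    # [(1,1), (1,0), (0,0), (1,0), (1,1), (0,0)]
```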
State Diagram
[State diagram] The four states 00, 01, 10, 11 are the register contents (D1 D2); each transition is labeled with its output pair / input bit (the labels in the figure are 00/0, 11/1, 11/0, 00/1, 10/0, 01/1, 10/1, 01/0).
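Under the same tap assumption as in the previous sketch, the full state-transition table behind this diagram can be enumerated in a few lines (illustrative only):

```python
# Enumerate the state transitions, assuming y1 = x ^ D1 ^ D2 and y2 = x ^ D2;
# the register shifts as D1 <- x, D2 <- D1.
for d1 in (0, 1):
    for d2 in (0, 1):
        for x in (0, 1):
            y1, y2 = x ^ d1 ^ d2, x ^ d2
            print(f"state {d1}{d2} --input {x} / output {y1}{y2}--> state {x}{d1}")
```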
Tree Diagram
[Tree diagram] Code tree of the encoder, with the output pair written on each branch; the annotated path corresponds to the example input ...11001 and output ...10 11 11 01 11 (the figure marks the first input and the first output).
Trellis
[Trellis diagram] Trellis of the encoder: at each stage the four register states are drawn as nodes, and every branch is labeled with its output pair; the path for the example input ...11001 is traced through the trellis.
Interleaving
Input data:        a1, a2, a3, a4, a5, a6, a7, a8, a9, ...

Interleaving: the data is written into a buffer row by row and read out column by column:

    Write →    a1   a2   a3   a4
               a5   a6   a7   a8
               a9   a10  a11  a12
               a13  a14  a15  a16
    Read: down each column

Transmitted data:  a1, a5, a9, a13, a2, a6, a10, a14, a3, ...

De-interleaving: the received data is written column by column into the same buffer and read out row by row, restoring the original order.

Output data:       a1, a2, a3, a4, a5, a6, a7, a8, a9, ...
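A small sketch of this block interleaver/de-interleaver pair (the 4 × 4 buffer size is taken from the figure; the function names are illustrative):

```python
def interleave(block, rows=4, cols=4):
    # Write the block row by row, read it out column by column.
    assert len(block) == rows * cols
    return [block[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(block, rows=4, cols=4):
    # Inverse: write column by column, read row by row.
    assert len(block) == rows * cols
    out = [None] * (rows * cols)
    idx = 0
    for c in range(cols):
        for r in range(rows):
            out[r * cols + c] = block[idx]
            idx += 1
    return out

data = [f"a{i}" for i in range(1, 17)]
tx = interleave(data)                  # ['a1', 'a5', 'a9', 'a13', 'a2', 'a6', ...]
print(deinterleave(tx) == data)        # True: the original order is restored
```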
Interleaving (Example)
Transmitted data (hit by a burst error, the run of four 1s):
    0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, ...

De-interleaving: the received data is written column by column

    0  1  0  0
    0  1  0  0
    0  1  0  0
    1  0  0  0

and read out row by row, giving the output data
    0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, ...
The burst has been broken up into isolated (discrete) errors.
Information Capacity Theorem
(Shannon Limit)
The information capacity (or channel capacity) C of a continuous channel of bandwidth B hertz, perturbed by additive white Gaussian noise of power spectral density N0/2 and limited to bandwidth B, is

    C = B log2( 1 + P / (N0 B) )   bits/second

where P is the average transmitted power, P = Eb Rb (for an ideal system, Rb = C),
Eb is the transmitted energy per bit, and
Rb is the transmission rate.
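A quick numerical sketch of the capacity formula (the bandwidth, power, and noise values below are illustrative assumptions, not from the slides):

```python
import math

def capacity(bandwidth_hz, power_w, n0):
    # Shannon capacity C = B * log2(1 + P / (N0 * B)), in bits per second.
    return bandwidth_hz * math.log2(1 + power_w / (n0 * bandwidth_hz))

B = 1e6        # 1 MHz bandwidth (illustrative)
N0 = 1e-9      # noise power spectral density in W/Hz (illustrative)
P = 1e-3       # 1 mW average transmitted power (illustrative)

C = capacity(B, P, N0)
print(f"C = {C / 1e6:.2f} Mbit/s")

# Operating at capacity (Rb = C) with P = Eb * Rb gives Eb/N0 = (2^(C/B) - 1) / (C/B);
# as C/B -> 0 this tends to ln(2), i.e. the Shannon limit of about -1.6 dB.
eta = C / B
ebn0_db = 10 * math.log10((2 ** eta - 1) / eta)
print(f"Eb/N0 at capacity = {ebn0_db:.2f} dB")
```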
Shannon Limit
[Plot] Normalized rate Rb/B (log scale, 0.1 to 20) versus Eb/N0 in dB (-1.6 to 30). The capacity boundary Rb = C separates the region for which Rb > C (above the curve) from the region for which Rb < C (below it); the boundary approaches the Shannon limit of Eb/N0 = -1.6 dB.
Turbo Codes
A brief history of turbo codes:
The turbo code concept was first introduced by C. Berrou in 1993. Today, turbo codes are considered among the most efficient coding schemes for FEC.
The scheme is built from known components (simple convolutional or block codes, an interleaver, a soft-decision decoder, etc.).
Performance is close to the Shannon limit (Eb/N0 = -1.6 dB as Rb/B → 0) at modest complexity!
Turbo codes have been proposed for low-power applications such as deep-space and satellite communications, as well as for interference-limited applications such as third-generation cellular, personal communication services, ad hoc, and sensor networks.
Turbo Codes: Encoder
[Encoder diagram] The data source output X is transmitted directly and also feeds Convolutional Encoder 1, which produces Y1; an interleaved copy of X feeds Convolutional Encoder 2, which produces Y2. The redundancy Y = (Y1, Y2) accompanies X.
X: information
Yi: redundancy information
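A structural sketch of this parallel concatenation (illustrative only: it reuses a memory-2 feedforward parity for each constituent encoder and a random interleaver, whereas practical turbo codes use recursive systematic convolutional encoders):

```python
import random

def parity_stream(bits):
    # One parity bit per input from a memory-2 feedforward encoder (y = x ^ D1 ^ D2);
    # practical turbo codes use recursive systematic encoders instead.
    d1 = d2 = 0
    out = []
    for x in bits:
        out.append(x ^ d1 ^ d2)
        d1, d2 = x, d1
    return out

def turbo_encode(x_bits, permutation):
    # Parallel concatenation: systematic bits X plus two parity streams Y1, Y2.
    y1 = parity_stream(x_bits)
    y2 = parity_stream([x_bits[i] for i in permutation])   # interleaved copy of X
    return x_bits, y1, y2                                   # rate-1/3 output (X, Y1, Y2)

x = [random.randint(0, 1) for _ in range(16)]
perm = random.sample(range(16), 16)                         # a random interleaver
X, Y1, Y2 = turbo_encode(x, perm)
print(len(X), len(Y1), len(Y2))                             # 16 16 16
```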
Turbo Codes: Decoder
[Decoder diagram] Two convolutional decoders work in tandem: Decoder 1 processes X and Y1, Decoder 2 processes the interleaved output of Decoder 1 together with Y2, and de-interleaving stages between them restore the original bit order; the final output is the decoded information X'.
X': decoded information
Automatic Repeat Request (ARQ)
[System diagram] Source → transmitter (controller, encoder, modulation) → channel → receiver (demodulation, decoder, controller) → destination; the receiver's controller returns acknowledgements to the transmitter's controller over a feedback path.
Stop-And-Wait ARQ (SAW ARQ)
[Timing diagram] The transmitter sends one frame at a time and waits for an acknowledgement; a frame received in error is negatively acknowledged and retransmitted before the next frame is sent, so the output data is delivered in order: 1, 2, 3, ...
ACK: Acknowledge
NAK: Negative ACK
Stop-And-Wait ARQ (SAW ARQ)
Throughput:
    S = (1/T) * (k/n) = [ (1 - Pb)^n / (1 + D * Rb/n) ] * (k/n)

where T is the average transmission time in units of a block duration:

    T = (1 + D * Rb/n) * P_ACK + 2 * (1 + D * Rb/n) * P_ACK * (1 - P_ACK)
        + 3 * (1 + D * Rb/n) * P_ACK * (1 - P_ACK)^2 + ...

      = (1 + D * Rb/n) * P_ACK * sum_{i=1..∞} i * (1 - P_ACK)^(i-1)

      = (1 + D * Rb/n) * P_ACK / [1 - (1 - P_ACK)]^2

      = (1 + D * Rb/n) / P_ACK

and n = number of bits in a block, k = number of information bits in a block,
D = round-trip delay, Rb = bit rate, Pb = BER of the channel, and P_ACK = (1 - Pb)^n.
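A short numeric sketch of this throughput expression (the block size, delay, bit rate, and BER below are illustrative assumptions, not values from the slides):

```python
def saw_throughput(n, k, D, Rb, Pb):
    # Stop-and-wait ARQ throughput S = [(1 - Pb)^n / (1 + D*Rb/n)] * (k/n).
    p_ack = (1 - Pb) ** n                 # probability a block arrives error-free
    T = (1 + D * Rb / n) / p_ack          # average time per delivered block (block durations)
    return (1 / T) * (k / n)

# Illustrative parameters: 1000-bit blocks with 900 information bits,
# 10 ms round-trip delay, 1 Mbit/s bit rate, bit error rate 1e-5.
print(saw_throughput(n=1000, k=900, D=10e-3, Rb=1e6, Pb=1e-5))
```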
Go-Back-N ARQ (GBN ARQ)
[Timing diagram] The transmitter sends frames continuously: 1 2 3 4 5 3 4 5 6 7 5 ... When frame 3 is received in error, the transmitter goes back to frame 3 and retransmits it together with the frames already sent after it (go-back 3); a later error in frame 5 triggers another go-back (go-back 5). The output data is still delivered in order: 1 2 3 4 5 ...
Go-Back-N ARQ (GBN ARQ)
Throughput:
    S = (1/T) * (k/n)
      = [ (1 - Pb)^n / ( (1 - Pb)^n + N * (1 - (1 - Pb)^n) ) ] * (k/n)

where

    T = 1 * P_ACK + (N + 1) * P_ACK * (1 - P_ACK) + (2N + 1) * P_ACK * (1 - P_ACK)^2 + ...

      = P_ACK + P_ACK * [ (1 - P_ACK) + (1 - P_ACK)^2 + (1 - P_ACK)^3 + ... ]
        + P_ACK * [ N * (1 - P_ACK) + 2N * (1 - P_ACK)^2 + 3N * (1 - P_ACK)^3 + ... ]

      = P_ACK + P_ACK * (1 - P_ACK)/P_ACK + P_ACK * N * (1 - P_ACK)/P_ACK^2

      = 1 + N * [1 - (1 - Pb)^n] / (1 - Pb)^n
Selective-Repeat ARQ (SR ARQ)
[Timing diagram] The transmitter sends frames continuously: 1 2 3 4 5 3 6 7 8 9 7 ... Only the frames received in error (3, then 7) are retransmitted. The receiver accepts the frames out of order (1 2 4 5 3 6 8 9 7 ...), holds them in a buffer, and reorders them so that the output data is 1 2 3 4 5 6 7 8 9 ...
Selective-Repeat ARQ (SR ARQ)
Throughput:
    S = (1/T) * (k/n)
      = (1 - Pb)^n * (k/n)

where

    T = 1 * P_ACK + 2 * P_ACK * (1 - P_ACK) + 3 * P_ACK * (1 - P_ACK)^2 + ...

      = P_ACK * sum_{i=1..∞} i * (1 - P_ACK)^(i-1)

      = P_ACK / [1 - (1 - P_ACK)]^2

      = 1 / (1 - Pb)^n
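The three ARQ throughput formulas are easy to compare numerically; a sketch with illustrative parameters (block size, BER, round-trip delay, and go-back window N are assumptions, not values from the slides):

```python
def p_block(n, Pb):
    return (1 - Pb) ** n                           # block received error-free

def saw(n, k, D, Rb, Pb):                          # stop-and-wait
    return p_block(n, Pb) / (1 + D * Rb / n) * (k / n)

def gbn(n, k, N, Pb):                              # go-back-N
    p = p_block(n, Pb)
    return p / (p + N * (1 - p)) * (k / n)

def sr(n, k, Pb):                                  # selective-repeat
    return p_block(n, Pb) * (k / n)

n, k, Pb = 1000, 900, 1e-5                         # illustrative block and channel
D, Rb, N = 10e-3, 1e6, 11                          # 10 ms round trip, 1 Mbit/s, window 11
print(saw(n, k, D, Rb, Pb), gbn(n, k, N, Pb), sr(n, k, Pb))
# For the same channel, S_SAW <= S_GBN <= S_SR.
```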