Modulation, Demodulation and Coding Course


Digital Communications I:
Modulation and Coding Course
Spring - 2013
Jeffrey N. Denenberg
Lecture 6: Linear Block Codes
Last time we talked about:

Evaluating the average probability of symbol error for different bandpass modulation schemes.

Comparing different modulation schemes based on their error performance.
Today, we are going to talk about:

Channel coding

Linear block codes
  The error detection and correction capability
  Encoding and decoding
  Hamming codes
  Cyclic codes
Block diagram of a DCS
[Figure: DCS block diagram. Transmit side: Format → Source encode → Channel encode → Pulse modulate → Bandpass modulate (digital modulation) → Channel. Receive side: Demodulate → Sample → Detect (digital demodulation) → Channel decode → Source decode → Format.]
What is channel coding?

Channel coding: transforming signals to improve communications performance by increasing the robustness against channel impairments (noise, interference, fading, ...).

Waveform coding: transforming waveforms to better waveforms.

Structured sequences: transforming data sequences into better sequences, having structured redundancy.

"Better" in the sense of making the decision process less subject to errors.
Error control techniques

Automatic Repeat reQuest (ARQ):
  Full-duplex connection, error detection codes.
  The receiver sends feedback to the transmitter indicating whether an error is detected in the received packet (Not-Acknowledgement, NACK) or not (Acknowledgement, ACK).
  The transmitter retransmits the previously sent packet if it receives a NACK.

Forward Error Correction (FEC):
  Simplex connection, error correction codes.
  The receiver tries to correct some errors.

Hybrid ARQ (ARQ+FEC):
  Full-duplex connection, error detection and correction codes.
Why use error correction coding?




Error performance vs. bandwidth
Power vs. bandwidth
Data rate vs. bandwidth
Capacity vs. bandwidth

[Figure: $P_B$ versus $E_b/N_0$ (dB) for coded and uncoded systems, with operating points A-F illustrating these trade-offs.]

Coding gain:
For a given bit-error probability, the coding gain is the reduction in $E_b/N_0$ that can be realized through the use of the code:
$G\,[\mathrm{dB}] = \left(\frac{E_b}{N_0}\right)_{u} [\mathrm{dB}] - \left(\frac{E_b}{N_0}\right)_{c} [\mathrm{dB}]$
Channel models

Discrete memoryless channels: discrete input, discrete output.
  Binary symmetric channel (BSC): binary input, binary output.
Gaussian channel: discrete input, continuous output.
Linear block codes

Let us first review some basic definitions that are useful in understanding linear block codes.
Some definitions

Binary field:
The set {0,1}, under modulo-2 addition and multiplication, forms a field.

Addition:        Multiplication:
0 + 0 = 0        0 · 0 = 0
0 + 1 = 1        0 · 1 = 0
1 + 0 = 1        1 · 0 = 0
1 + 1 = 0        1 · 1 = 1

The binary field is also called the Galois field, GF(2).
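A minimal Python sketch of these tables, using the fact that modulo-2 addition is XOR and modulo-2 multiplication is AND:

```python
# GF(2) arithmetic: addition is XOR, multiplication is AND.
def gf2_add(a: int, b: int) -> int:
    return (a + b) % 2   # equivalently a ^ b

def gf2_mul(a: int, b: int) -> int:
    return (a * b) % 2   # equivalently a & b

# Reproduce the addition and multiplication tables above.
for x in (0, 1):
    for y in (0, 1):
        print(f"{x} + {y} = {gf2_add(x, y)}    {x} * {y} = {gf2_mul(x, y)}")
```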
Some definitions…

Fields:
Let F be a set of objects on which two operations '+' and '·' are defined. F is said to be a field if and only if:
1. F forms a commutative group under the '+' operation. The additive identity element is labeled "0".
   $\forall a, b \in F: \quad a + b = b + a \in F$
2. F − {0} forms a commutative group under the '·' operation. The multiplicative identity element is labeled "1".
   $\forall a, b \in F: \quad a \cdot b = b \cdot a \in F$
3. The operations '+' and '·' are distributive:
   $a \cdot (b + c) = (a \cdot b) + (a \cdot c)$
Some definitions…

Vector space:
Let V be a set of vectors and F a field of elements called scalars. V forms a vector space over F if:
1. Commutative: $\forall \mathbf{u}, \mathbf{v} \in V: \quad \mathbf{u} + \mathbf{v} = \mathbf{v} + \mathbf{u} \in V$
2. Closure under scalar multiplication: $\forall a \in F, \forall \mathbf{v} \in V: \quad a \cdot \mathbf{v} = \mathbf{u} \in V$
3. Distributive: $(a + b) \cdot \mathbf{v} = a \cdot \mathbf{v} + b \cdot \mathbf{v}$ and $a \cdot (\mathbf{u} + \mathbf{v}) = a \cdot \mathbf{u} + a \cdot \mathbf{v}, \quad \forall a, b \in F,\ \forall \mathbf{u}, \mathbf{v} \in V$
4. Associative: $\forall a, b \in F, \forall \mathbf{v} \in V: \quad (a \cdot b) \cdot \mathbf{v} = a \cdot (b \cdot \mathbf{v})$
5. $\forall \mathbf{v} \in V: \quad 1 \cdot \mathbf{v} = \mathbf{v}$
Some definitions…

Examples of vector spaces:
The set of binary n-tuples, denoted by $V_n$. For example,
$V_4 = \{(0000), (0001), (0010), (0011), (0100), (0101), (0110), (0111), (1000), (1001), (1010), (1011), (1100), (1101), (1110), (1111)\}$

Vector subspace:
A subset S of the vector space $V_n$ is called a subspace if:
  The all-zero vector is in S.
  The sum of any two vectors in S is also in S.
Example: $\{(0000), (0101), (1010), (1111)\}$ is a subspace of $V_4$.
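A minimal Python sketch (variable names are illustrative) that verifies the two subspace conditions for this example set:

```python
from itertools import product

S = {"0000", "0101", "1010", "1111"}   # candidate subspace of V_4

def add(u: str, v: str) -> str:
    """Component-wise modulo-2 addition of two binary n-tuples."""
    return "".join(str(int(a) ^ int(b)) for a, b in zip(u, v))

# Subspace test: contains the all-zero vector and is closed under addition.
is_subspace = "0000" in S and all(add(u, v) in S for u, v in product(S, repeat=2))
print(is_subspace)   # -> True
```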
Some definitions…

Spanning set:
A collection of vectors $G = \{\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_n\}$ is said to be a spanning set for V, or to span V, if linear combinations of the vectors in G include all vectors in the vector space V.
Example: $\{(1000), (0110), (1100), (0011), (1001)\}$ spans $V_4$.

Bases:
A spanning set of V that has minimal cardinality is called a basis for V. (The cardinality of a set is the number of objects in the set.)
Example: $\{(1000), (0100), (0010), (0001)\}$ is a basis for $V_4$.
Linear block codes

Linear block code (n,k):
A set $C \subset V_n$ with cardinality $2^k$ is called a linear block code if, and only if, it is a subspace of the vector space $V_n$:
$V_k \;\longrightarrow\; C \subset V_n$

Members of C are called codewords.
The all-zero codeword is a codeword.
Any linear combination of codewords is a codeword.
Linear block codes – cont’d
[Figure: the encoder maps the message space $V_k$ onto the code C, a subspace of $V_n$ spanned by the bases of C.]
Linear block codes – cont’d




The information bit stream is chopped into blocks of k bits.
Each block is encoded to a larger block of n bits.
The coded bits are modulated and sent over the channel.
The reverse procedure is done at the receiver.

[Figure: a data block of k bits enters the channel encoder and leaves as an n-bit codeword containing n − k redundant bits.]

Code rate: $R_c = \dfrac{k}{n}$
Linear block codes – cont’d



The Hamming weight of the vector U, denoted by w(U), is the number of non-zero elements in U.
The Hamming distance between two vectors U and V, denoted d(U, V), is the number of elements in which they differ:
$d(\mathbf{U}, \mathbf{V}) = w(\mathbf{U} \oplus \mathbf{V})$
The minimum distance of a block code is
$d_{\min} = \min_{i \neq j} d(\mathbf{U}_i, \mathbf{U}_j) = \min_{i} w(\mathbf{U}_i)$
where the last minimum is taken over the non-zero codewords.
Linear block codes – cont’d

The error-detection capability is given by
$e = d_{\min} - 1$
The error-correcting capability t of a code is defined as the maximum number of guaranteed correctable errors per codeword, that is
$t = \left\lfloor \frac{d_{\min} - 1}{2} \right\rfloor$
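A minimal Python sketch that computes w(U), d(U,V), d_min, e and t for the (6,3) code used in the examples that follow (function names are illustrative):

```python
from itertools import combinations

# Codewords of the (6,3) example code used later in these notes.
codewords = ["000000", "110100", "011010", "101110",
             "101001", "011101", "110011", "000111"]

def weight(u: str) -> int:
    """Hamming weight: number of non-zero elements of u."""
    return u.count("1")

def distance(u: str, v: str) -> int:
    """Hamming distance: number of positions in which u and v differ."""
    return sum(a != b for a, b in zip(u, v))

# For a linear code, d_min equals the minimum weight over the non-zero codewords.
d_min = min(weight(c) for c in codewords if weight(c) > 0)
assert d_min == min(distance(u, v) for u, v in combinations(codewords, 2))

e = d_min - 1          # guaranteed error-detection capability
t = (d_min - 1) // 2   # guaranteed error-correction capability
print(d_min, e, t)     # -> 3 2 1
```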
Linear block codes – cont’d

For memoryless channels, the probability that the decoder commits an erroneous decoding is upper-bounded by
$P_M \leq \sum_{j=t+1}^{n} \binom{n}{j} p^{j} (1-p)^{n-j}$
where p is the transition probability, or bit-error probability, over the channel.

The decoded bit-error probability is
$P_B \approx \frac{1}{n} \sum_{j=t+1}^{n} j \binom{n}{j} p^{j} (1-p)^{n-j}$
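A minimal Python sketch evaluating both expressions for given n, t and channel transition probability p (the values in the example call are illustrative):

```python
from math import comb

def block_error_bound(n: int, t: int, p: float) -> float:
    """Upper bound on the block (message) error probability P_M."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(t + 1, n + 1))

def bit_error_approx(n: int, t: int, p: float) -> float:
    """Approximate decoded bit-error probability P_B."""
    return sum(j * comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(t + 1, n + 1)) / n

# Example: the (6,3) code (t = 1) over a channel with p = 0.01.
print(block_error_bound(6, 1, 0.01))
print(bit_error_approx(6, 1, 0.01))
```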
Linear block codes – cont’d

Discrete, memoryless, symmetric channel model:

[Figure: binary symmetric channel. A transmitted bit is received correctly with probability 1 − p and flipped with probability p.]

Note that for coded systems, the coded bits are modulated and transmitted over the channel. For example, for M-PSK modulation on AWGN channels (M > 2):
$p \approx \frac{2}{\log_2 M}\, Q\!\left( \sqrt{\frac{2 (\log_2 M)\, E_c}{N_0}} \,\sin\frac{\pi}{M} \right) = \frac{2}{\log_2 M}\, Q\!\left( \sqrt{\frac{2 (\log_2 M)\, R_c E_b}{N_0}} \,\sin\frac{\pi}{M} \right)$
where $E_c$ is the energy per coded bit, given by $E_c = R_c E_b$.
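A minimal Python sketch of this approximation, writing the Q-function in terms of erfc (the M, Eb/N0 and code-rate values in the example are illustrative):

```python
from math import sqrt, sin, pi, log2, erfc

def qfunc(x: float) -> float:
    """Gaussian Q-function: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * erfc(x / sqrt(2))

def mpsk_coded_bit_error(M: int, EbN0_dB: float, Rc: float) -> float:
    """Approximate channel bit-error probability p for M-PSK (M > 2) on AWGN,
    with energy per coded bit Ec = Rc * Eb."""
    EcN0 = Rc * 10 ** (EbN0_dB / 10)      # Ec / N0 as a linear ratio
    k = log2(M)
    return (2 / k) * qfunc(sqrt(2 * k * EcN0) * sin(pi / M))

# Example: 8-PSK, Eb/N0 = 8 dB, code rate Rc = 1/2.
print(mpsk_coded_bit_error(8, 8.0, 0.5))
```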
Linear block codes – cont'd

[Figure: the encoder maps the message space $V_k$ onto the code C, a subspace of $V_n$ spanned by the bases of C.]

A matrix G is constructed by taking as its rows the vectors of the basis, $\{\mathbf{V}_1, \mathbf{V}_2, \ldots, \mathbf{V}_k\}$:
$\mathbf{G} = \begin{bmatrix} \mathbf{V}_1 \\ \mathbf{V}_2 \\ \vdots \\ \mathbf{V}_k \end{bmatrix} = \begin{bmatrix} v_{11} & v_{12} & \cdots & v_{1n} \\ v_{21} & v_{22} & \cdots & v_{2n} \\ \vdots & \vdots & & \vdots \\ v_{k1} & v_{k2} & \cdots & v_{kn} \end{bmatrix}$
Linear block codes – cont’d

Encoding in an (n,k) block code:
$\mathbf{U} = \mathbf{m}\,\mathbf{G}$
$(u_1, u_2, \ldots, u_n) = (m_1, m_2, \ldots, m_k) \begin{bmatrix} \mathbf{V}_1 \\ \mathbf{V}_2 \\ \vdots \\ \mathbf{V}_k \end{bmatrix}$
$(u_1, u_2, \ldots, u_n) = m_1 \mathbf{V}_1 + m_2 \mathbf{V}_2 + \cdots + m_k \mathbf{V}_k$

The rows of G are linearly independent.
Linear block codes – cont’d

Example: Block code (6,3)
$\mathbf{G} = \begin{bmatrix} \mathbf{V}_1 \\ \mathbf{V}_2 \\ \mathbf{V}_3 \end{bmatrix} = \begin{bmatrix} 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 0 & 1 \end{bmatrix}$

Message vector   Codeword
000              000000
100              110100
010              011010
110              101110
001              101001
101              011101
011              110011
111              000111
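A minimal Python sketch (assuming NumPy is available) that reproduces this table by computing U = mG over GF(2):

```python
import numpy as np

# Generator matrix of the (6,3) example code.
G = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])

def encode(m):
    """Encode a k-bit message vector: U = mG over GF(2)."""
    return np.mod(np.array(m) @ G, 2)

# Reproduce the message -> codeword table above.
for i in range(8):
    m = [(i >> 2) & 1, (i >> 1) & 1, i & 1]
    print("".join(map(str, m)), "".join(map(str, encode(m))))
```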
Linear block codes – cont’d

Systematic block code (n,k):
For a systematic code, the first (or last) k elements in the codeword are information bits.
$\mathbf{G} = [\mathbf{P} \;\vert\; \mathbf{I}_k]$
where $\mathbf{I}_k$ is the $k \times k$ identity matrix and $\mathbf{P}$ is the $k \times (n-k)$ parity matrix.
$\mathbf{U} = (u_1, u_2, \ldots, u_n) = (\underbrace{p_1, p_2, \ldots, p_{n-k}}_{\text{parity bits}},\; \underbrace{m_1, m_2, \ldots, m_k}_{\text{message bits}})$
Linear block codes – cont’d

For any linear code we can find an $(n-k) \times n$ matrix $\mathbf{H}$ such that its rows are orthogonal to the rows of $\mathbf{G}$:
$\mathbf{G}\,\mathbf{H}^T = \mathbf{0}$
H is called the parity-check matrix, and its rows are linearly independent.
For systematic linear block codes:
$\mathbf{H} = [\mathbf{I}_{n-k} \;\vert\; \mathbf{P}^T]$
Linear block codes – cont’d
[Figure: data source → Format → Channel encoding (codeword U) → Modulation → channel → Demodulation/Detection (received vector r) → Channel decoding (estimate m̂) → Format → data sink.]

$\mathbf{r} = \mathbf{U} + \mathbf{e}$
$\mathbf{r} = (r_1, r_2, \ldots, r_n)$: received codeword (vector)
$\mathbf{e} = (e_1, e_2, \ldots, e_n)$: error pattern (vector)

Syndrome testing:
S is the syndrome of r, corresponding to the error pattern e:
$\mathbf{S} = \mathbf{r}\,\mathbf{H}^T = \mathbf{e}\,\mathbf{H}^T$
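A minimal Python sketch of syndrome testing, using the parity-check matrix H = [I_3 | P^T] implied by the systematic (6,3) generator above (it matches the syndrome table on the later slide):

```python
import numpy as np

# Parity-check matrix of the systematic (6,3) example code: H = [I_3 | P^T].
H = np.array([[1, 0, 0, 1, 0, 1],
              [0, 1, 0, 1, 1, 0],
              [0, 0, 1, 0, 1, 1]])

def syndrome(r):
    """S = r H^T over GF(2); S = 0 whenever r is a valid codeword."""
    return np.mod(np.array(r) @ H.T, 2)

print(syndrome([1, 0, 1, 1, 1, 0]))  # codeword 101110      -> [0 0 0]
print(syndrome([0, 0, 1, 1, 1, 0]))  # 101110 with one flip -> [1 0 0]
```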
Linear block codes – cont’d

Standard array:
The first row consists of all the codewords, beginning with the all-zero codeword.
For row $i = 2, 3, \ldots, 2^{n-k}$, find a vector in $V_n$ of minimum weight that is not already listed in the array. Call this pattern $\mathbf{e}_i$ and form the i-th row as the corresponding coset:
$\begin{array}{cccc}
\mathbf{U}_1 = \mathbf{0} & \mathbf{U}_2 & \cdots & \mathbf{U}_{2^k} \\
\mathbf{e}_2 & \mathbf{e}_2 + \mathbf{U}_2 & \cdots & \mathbf{e}_2 + \mathbf{U}_{2^k} \\
\vdots & \vdots & & \vdots \\
\mathbf{e}_{2^{n-k}} & \mathbf{e}_{2^{n-k}} + \mathbf{U}_2 & \cdots & \mathbf{e}_{2^{n-k}} + \mathbf{U}_{2^k}
\end{array}$
The entries of the first column are the coset leaders; each row is a coset.
Linear block codes – cont’d

Standard array and syndrome table decoding:
1. Calculate $\mathbf{S} = \mathbf{r}\,\mathbf{H}^T$.
2. Find the coset leader $\hat{\mathbf{e}} = \mathbf{e}_i$ corresponding to $\mathbf{S}$.
3. Calculate $\hat{\mathbf{U}} = \mathbf{r} + \hat{\mathbf{e}}$ and the corresponding $\hat{\mathbf{m}}$.

Note that
$\hat{\mathbf{U}} = \mathbf{r} + \hat{\mathbf{e}} = (\mathbf{U} + \mathbf{e}) + \hat{\mathbf{e}} = \mathbf{U} + (\mathbf{e} + \hat{\mathbf{e}})$
If $\hat{\mathbf{e}} = \mathbf{e}$, the error is corrected.
If $\hat{\mathbf{e}} \neq \mathbf{e}$, an undetectable decoding error occurs.
Linear block codes – cont’d

Example: Standard array for the (6,3) code

Codewords (first row); coset leaders in the first column:

000000  110100  011010  101110  101001  011101  110011  000111
000001  110101  011011  101111  101000  011100  110010  000110
000010  110110  011000  101100  101011  011111  110001  000101
000100  110000  011110  101010  101101  011001  110111  000011
001000  111100  ...
010000  100100  ...
100000  010100  ...
010001  100101  ...  010110
Linear block codes – cont’d
Error pattern   Syndrome
000000          000
000001          101
000010          011
000100          110
001000          001
010000          010
100000          100
010001          111

Suppose U = (101110) is transmitted and r = (001110) is received.
The syndrome of r is computed:
$\mathbf{S} = \mathbf{r}\,\mathbf{H}^T = (001110)\,\mathbf{H}^T = (100)$
The error pattern corresponding to this syndrome is
$\hat{\mathbf{e}} = (100000)$
The corrected vector is estimated as
$\hat{\mathbf{U}} = \mathbf{r} + \hat{\mathbf{e}} = (001110) + (100000) = (101110)$
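A minimal Python sketch of syndrome-table decoding that reproduces this example, using the same H as in the earlier syndrome sketch and the coset leaders from the standard array:

```python
import numpy as np

# Parity-check matrix of the systematic (6,3) code, H = [I_3 | P^T].
H = np.array([[1, 0, 0, 1, 0, 1],
              [0, 1, 0, 1, 1, 0],
              [0, 0, 1, 0, 1, 1]])

coset_leaders = ["000000", "000001", "000010", "000100",
                 "001000", "010000", "100000", "010001"]

def bits(s: str) -> np.ndarray:
    return np.array([int(b) for b in s])

def syndrome(v: np.ndarray) -> tuple:
    return tuple(np.mod(v @ H.T, 2))

# Syndrome table: syndrome -> estimated error pattern (coset leader).
table = {syndrome(bits(e)): bits(e) for e in coset_leaders}

def decode(r: np.ndarray) -> np.ndarray:
    """Syndrome-table decoding: U_hat = r + e_hat."""
    e_hat = table[syndrome(r)]
    return np.mod(r + e_hat, 2)

# r = 001110 is received; decoding recovers U = 101110.
print(decode(bits("001110")))   # -> [1 0 1 1 1 0]
```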
Hamming codes

Hamming codes are a subclass of linear block codes and belong to the category of perfect codes.
Hamming codes are expressed as a function of a single integer $m \geq 2$:
  Code length: $n = 2^m - 1$
  Number of information bits: $k = 2^m - m - 1$
  Number of parity bits: $n - k = m$
  Error-correction capability: $t = 1$
The columns of the parity-check matrix, H, consist of all non-zero binary m-tuples.
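A minimal Python sketch that builds a Hamming parity-check matrix from all non-zero binary m-tuples and checks the code parameters (the column ordering here is arbitrary, unlike the systematic arrangement in the next example):

```python
import numpy as np
from itertools import product

def hamming_parity_check(m: int) -> np.ndarray:
    """Parity-check matrix of the (2^m - 1, 2^m - m - 1) Hamming code:
    its columns are all non-zero binary m-tuples."""
    cols = [c for c in product([0, 1], repeat=m) if any(c)]
    return np.array(cols).T            # shape: m x (2^m - 1)

H = hamming_parity_check(3)
n = H.shape[1]                         # code length 2^m - 1
k = n - H.shape[0]                     # information bits 2^m - m - 1
print(H.shape, n, k)                   # -> (3, 7) 7 4
```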
Hamming codes

Example: Systematic Hamming code (7,4)
$\mathbf{H} = \begin{bmatrix} 1 & 0 & 0 & 0 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 & 0 & 1 & 1 \\ 0 & 0 & 1 & 1 & 1 & 0 & 1 \end{bmatrix} = [\mathbf{I}_{3 \times 3} \;\vert\; \mathbf{P}^T]$
$\mathbf{G} = \begin{bmatrix} 0 & 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 0 & 1 & 0 & 1 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 1 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 & 1 \end{bmatrix} = [\mathbf{P} \;\vert\; \mathbf{I}_{4 \times 4}]$
Cyclic block codes
Cyclic codes are a subclass of linear block codes.
Encoding and syndrome calculation are easily performed using feedback shift registers.
  Hence, relatively long block codes can be implemented with reasonable complexity.
BCH and Reed-Solomon codes are cyclic codes.
Cyclic block codes

A linear (n,k) code is called a cyclic code if all cyclic shifts of a codeword are also codewords.
$\mathbf{U} = (u_0, u_1, u_2, \ldots, u_{n-1})$
After i cyclic shifts:
$\mathbf{U}^{(i)} = (u_{n-i}, u_{n-i+1}, \ldots, u_{n-1}, u_0, u_1, u_2, \ldots, u_{n-i-1})$

Example:
$\mathbf{U} = (1101)$
$\mathbf{U}^{(1)} = (1110) \quad \mathbf{U}^{(2)} = (0111) \quad \mathbf{U}^{(3)} = (1011) \quad \mathbf{U}^{(4)} = (1101) = \mathbf{U}$
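A minimal Python sketch of the cyclic shift, reproducing the example above:

```python
def cyclic_shift(u: str, i: int) -> str:
    """Shift (u0, ..., u_{n-1}) cyclically by i positions:
    the result is (u_{n-i}, ..., u_{n-1}, u0, ..., u_{n-i-1})."""
    i %= len(u)
    return u[-i:] + u[:-i] if i else u

U = "1101"
print([cyclic_shift(U, i) for i in range(1, 5)])
# -> ['1110', '0111', '1011', '1101']
```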
Cyclic block codes

The algebraic structure of cyclic codes suggests expressing codewords in polynomial form:
$U(X) = u_0 + u_1 X + u_2 X^2 + \cdots + u_{n-1} X^{n-1}, \qquad \deg U(X) \leq n-1$

Relationship between a codeword and its cyclic shifts:
$X\,U(X) = u_0 X + u_1 X^2 + \cdots + u_{n-2} X^{n-1} + u_{n-1} X^n$
$\phantom{X\,U(X)} = u_{n-1}(X^n + 1) + \underbrace{u_{n-1} + u_0 X + u_1 X^2 + \cdots + u_{n-2} X^{n-1}}_{U^{(1)}(X)}$
Hence:
$U^{(1)}(X) = X\,U(X) \bmod (X^n + 1)$
and, by extension,
$U^{(i)}(X) = X^i\,U(X) \bmod (X^n + 1)$
Cyclic block codes

Basic properties of cyclic codes:
Let C be a binary (n,k) linear cyclic code.
1. Within the set of code polynomials in C, there is a unique monic polynomial g(X) with minimal degree r < n; g(X) is called the generator polynomial.
   $g(X) = g_0 + g_1 X + \cdots + g_r X^r$
2. Every code polynomial U(X) in C can be expressed uniquely as $U(X) = m(X)\,g(X)$.
3. The generator polynomial g(X) is a factor of $X^n + 1$.
Cyclic block codes
The orthogonality of G and H in polynomial form is expressed as $g(X)\,h(X) = X^n + 1$. This means h(X) is also a factor of $X^n + 1$.

4. Row $i$, $i = 1, \ldots, k$, of the generator matrix is formed by the coefficients of the $(i-1)$-th cyclic shift of the generator polynomial:
$\mathbf{G} = \begin{bmatrix} g(X) \\ X\,g(X) \\ \vdots \\ X^{k-1} g(X) \end{bmatrix} = \begin{bmatrix} g_0 & g_1 & \cdots & g_r & & & \\ & g_0 & g_1 & \cdots & g_r & & \\ & & \ddots & & \ddots & & \\ & & & g_0 & g_1 & \cdots & g_r \end{bmatrix}$
Cyclic block codes

Systematic encoding algorithm for an (n,k) cyclic code:
1. Multiply the message polynomial $m(X)$ by $X^{n-k}$.
2. Divide the result of step 1 by the generator polynomial $g(X)$. Let $p(X)$ be the remainder.
3. Add $p(X)$ to $X^{n-k} m(X)$ to form the codeword $U(X)$.
Cyclic block codes

Example:
For the systematic (7,4) cyclic code with generator polynomial $g(X) = 1 + X + X^3$, find the codeword for the message $\mathbf{m} = (1011)$.

$n = 7, \quad k = 4, \quad n - k = 3$
$\mathbf{m} = (1011) \;\Rightarrow\; m(X) = 1 + X^2 + X^3$
$X^{n-k} m(X) = X^3 m(X) = X^3 (1 + X^2 + X^3) = X^3 + X^5 + X^6$
Divide $X^{n-k} m(X)$ by $g(X)$:
$X^3 + X^5 + X^6 = \underbrace{(1 + X + X^2 + X^3)}_{\text{quotient } q(X)} \underbrace{(1 + X + X^3)}_{\text{generator } g(X)} + \underbrace{1}_{\text{remainder } p(X)}$
Form the codeword polynomial:
$U(X) = p(X) + X^3 m(X) = 1 + X^3 + X^5 + X^6$
$\mathbf{U} = (\underbrace{1\,0\,0}_{\text{parity bits}}\;\underbrace{1\,0\,1\,1}_{\text{message bits}})$
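A minimal Python sketch of this encoding procedure, with polynomials stored as coefficient lists, lowest degree first (function names are illustrative):

```python
def gf2_poly_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division (coefficients, lowest degree first)."""
    rem = list(dividend)
    for i in range(len(rem) - 1, len(divisor) - 2, -1):
        if rem[i]:
            shift = i - (len(divisor) - 1)   # align divisor's leading term with rem[i]
            for j, coeff in enumerate(divisor):
                rem[shift + j] ^= coeff
    return rem[:len(divisor) - 1]

def cyclic_encode(m, g, n):
    """Systematic cyclic encoding: U(X) = p(X) + X^(n-k) m(X)."""
    k = len(m)
    shifted = [0] * (n - k) + list(m)        # X^(n-k) m(X)
    p = gf2_poly_mod(shifted, g)             # parity polynomial p(X)
    return p + list(m)                       # parity bits followed by message bits

# (7,4) cyclic code with g(X) = 1 + X + X^3 and message m = (1011).
print(cyclic_encode([1, 0, 1, 1], [1, 1, 0, 1], 7))   # -> [1, 0, 0, 1, 0, 1, 1]
```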
Cyclic block codes

Find the generator and parity-check matrices, G and H, respectively.
$g(X) = 1 + 1 \cdot X + 0 \cdot X^2 + 1 \cdot X^3 \;\Rightarrow\; (g_0, g_1, g_2, g_3) = (1101)$
$\mathbf{G} = \begin{bmatrix} 1 & 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 1 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 & 1 & 0 & 1 \end{bmatrix}$
This is not in systematic form. We do the following row operations:
row(1) + row(3) → row(3)
row(1) + row(2) + row(4) → row(4)
$\mathbf{G} = \begin{bmatrix} 1 & 1 & 0 & 1 & 0 & 0 & 0 \\ 0 & 1 & 1 & 0 & 1 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 0 & 0 & 1 \end{bmatrix} = [\mathbf{P} \;\vert\; \mathbf{I}_{4 \times 4}]$
$\mathbf{H} = \begin{bmatrix} 1 & 0 & 0 & 1 & 0 & 1 & 1 \\ 0 & 1 & 0 & 1 & 1 & 1 & 0 \\ 0 & 0 & 1 & 0 & 1 & 1 & 1 \end{bmatrix} = [\mathbf{I}_{3 \times 3} \;\vert\; \mathbf{P}^T]$
Cyclic block codes

Syndrome decoding for cyclic codes:
The received codeword in polynomial form is given by
$r(X) = U(X) + e(X)$
where e(X) is the error pattern.
The syndrome is the remainder obtained by dividing the received polynomial by the generator polynomial:
$r(X) = q(X)\,g(X) + S(X)$
With the syndrome and the standard array, the error is estimated.
In cyclic codes, the size of the standard array is considerably reduced.
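A minimal Python sketch of the syndrome computation as a polynomial remainder, using the same GF(2) division routine as in the encoding sketch:

```python
def gf2_poly_mod(dividend, divisor):
    """Remainder of GF(2) polynomial division (coefficients, lowest degree first)."""
    rem = list(dividend)
    for i in range(len(rem) - 1, len(divisor) - 2, -1):
        if rem[i]:
            shift = i - (len(divisor) - 1)
            for j, coeff in enumerate(divisor):
                rem[shift + j] ^= coeff
    return rem[:len(divisor) - 1]

g = [1, 1, 0, 1]                 # g(X) = 1 + X + X^3
U = [1, 0, 0, 1, 0, 1, 1]        # codeword from the encoding example above
e = [0, 0, 0, 0, 1, 0, 0]        # error pattern e(X) = X^4
r = [u ^ err for u, err in zip(U, e)]

print(gf2_poly_mod(U, g))        # valid codeword -> syndrome [0, 0, 0]
print(gf2_poly_mod(r, g))        # corrupted word -> non-zero syndrome [0, 1, 1]
```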
Example of the block codes
[Figure: bit-error probability $P_B$ versus $E_b/N_0$ (dB) for 8-PSK and QPSK.]