Modulation, Demodulation and Coding Course


EC 723
Satellite Communication Systems
Mohamed Khedr
http://webmail.aast.edu/~khedr
Syllabus (tentative)

Week 1: Overview
Week 2: Orbits and constellations: GEO, MEO and LEO
Week 3: Satellite space segment, propagation and satellite links, channel modelling
Week 4: Satellite communications techniques
Week 5: Satellite communications techniques II
Week 6: Satellite communications techniques III
Week 7: Satellite error correction techniques
Week 8: Satellite error correction techniques II, multiple access
Week 9: Satellites in networks I, INTELSAT systems, VSAT networks, GPS
Week 10: GEO, MEO and LEO mobile communications: INMARSAT systems, Iridium, Globalstar, Odyssey
Weeks 11-15: Presentations
Block diagram of a DCS

[Figure: transmit chain - Format -> Source encoder -> Channel encoder -> Pulse modulate -> Bandpass modulate -> Channel; receive chain - Demodulate & sample -> Detect -> Channel decoder -> Source decoder -> Format. The pulse/bandpass modulator and the demodulator/detector form the digital modulation and digital demodulation blocks.]
What is channel coding?

  - Channel coding: transforming signals to improve communications performance by increasing the robustness against channel impairments (noise, interference, fading, ...).
      - Waveform coding: transforming waveforms into better waveforms.
      - Structured sequences: transforming data sequences into better sequences that carry structured redundancy.
  - "Better" in the sense of making the decision process less subject to errors.
What is channel coding? (cont'd)

  - Coding is the mapping of (usually binary) source output sequences of length k into binary channel input sequences of length n (n > k).
  - A block code is denoted by (n,k).
  - Binary coding produces 2^k codewords of length n. The extra bits in the codewords are used for error detection/correction.
  - In this course we concentrate on two coding types realized by binary numbers: (1) block codes and (2) convolutional codes.
      - Block codes: the mapping of information source blocks into channel inputs is done independently; the encoder output depends only on the current block of the input sequence.
      - Convolutional codes: each source bit influences n(L+1) channel input bits, where n(L+1) is the constraint length and L is the memory depth. These codes are denoted by (n,k,L).

    k bits -> (n,k) block coder -> n bits
Error control techniques

  - Automatic Repeat reQuest (ARQ)
      - Full-duplex connection, error detection codes.
      - The receiver sends feedback to the transmitter indicating whether an error was detected in the received packet (Not-Acknowledgement, NACK) or not (Acknowledgement, ACK).
      - The transmitter retransmits the previously sent packet if it receives a NACK.
  - Forward Error Correction (FEC)
      - Simplex connection, error correction codes.
      - The receiver tries to correct some errors.
  - Hybrid ARQ (ARQ+FEC)
      - Full-duplex, error detection and correction codes.
Why use error correction coding?

  - Error performance vs. bandwidth
  - Power vs. bandwidth
  - Data rate vs. bandwidth
  - Capacity vs. bandwidth

    [Figure: bit-error probability P_B versus E_b/N_0 for a coded and an uncoded system; operating points A-F illustrate the trade-offs above.]

  - Coding gain: for a given bit-error probability, the reduction in the E_b/N_0 that can be realized through the use of the code:

    G [dB] = (E_b/N_0)_u [dB] - (E_b/N_0)_c [dB]
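As a quick illustration of the coding-gain definition, here is a minimal Python sketch; the two E_b/N_0 values are made-up placeholders, not values taken from these slides.

```python
# Coding gain for a fixed bit-error probability: the reduction in required Eb/N0.
# The two Eb/N0 values below are illustrative placeholders, not measured data.
ebno_uncoded_db = 9.6   # hypothetical requirement without coding
ebno_coded_db = 7.1     # hypothetical requirement with coding

coding_gain_db = ebno_uncoded_db - ebno_coded_db
print(f"G = {coding_gain_db:.1f} dB")
```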
Channel models

  - Discrete memoryless channels
      - Discrete input, discrete output
  - Binary symmetric channels
      - Binary input, binary output
  - Gaussian channels
      - Discrete input, continuous output
Linear block codes

  - Let us first review some basic definitions that are useful for understanding linear block codes.
Some definitions

  - Binary field:
      - The set {0,1}, under modulo-2 addition and multiplication, forms a field.

        Addition            Multiplication
        0 + 0 = 0           0 . 0 = 0
        0 + 1 = 1           0 . 1 = 0
        1 + 0 = 1           1 . 0 = 0
        1 + 1 = 0           1 . 1 = 1

      - The binary field is also called the Galois field, GF(2).
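A minimal Python sketch of GF(2) arithmetic as used throughout these slides: modulo-2 addition is XOR and modulo-2 multiplication is AND.

```python
# GF(2) arithmetic: addition is XOR, multiplication is AND.
def gf2_add(a: int, b: int) -> int:
    return a ^ b          # modulo-2 addition

def gf2_mul(a: int, b: int) -> int:
    return a & b          # modulo-2 multiplication

# Reproduce the addition and multiplication tables from the slide.
for a in (0, 1):
    for b in (0, 1):
        print(f"{a} + {b} = {gf2_add(a, b)}   {a} . {b} = {gf2_mul(a, b)}")
```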
Some definitions ...

  - Examples of vector spaces
      - The set of binary n-tuples, denoted by V_n:
        V_4 = {(0000), (0001), (0010), (0011), (0100), (0101), (0110), (0111),
               (1000), (1001), (1010), (1011), (1100), (1101), (1110), (1111)}
  - Vector subspace:
      - A subset S of the vector space V_n is called a subspace if:
          - The all-zero vector is in S.
          - The sum of any two vectors in S is also in S.
      - Example:
        {(0000), (0101), (1010), (1111)} is a subspace of V_4.
Some definitions ...

  - Spanning set:
      - A collection of vectors G = {v_1, v_2, ..., v_n}, the linear combinations of which include all vectors in a vector space V, is said to be a spanning set for V, or to span V.
      - Example:
        {(1000), (0110), (1100), (0011), (1001)} spans V_4.
  - Bases:
      - A spanning set for V that has minimal cardinality is called a basis for V.
      - Cardinality of a set is the number of objects in the set.
      - Example:
        {(1000), (0100), (0010), (0001)} is a basis for V_4.
Linear block codes

  - Linear block code (n,k)
      - A set C in V_n with cardinality 2^k is called a linear block code if, and only if, it is a subspace of the vector space V_n:
        V_k -> C, with C a subspace of V_n
      - Members of C are called codewords.
      - The all-zero codeword is a codeword.
      - Any linear combination of codewords is a codeword.
Linear block codes - cont'd

    [Figure: the encoder maps the message space V_k onto the codeword set C, a k-dimensional subspace of V_n; the bases of C define the mapping.]
    [Figure: decision spheres of radius t around codewords c_i and c_j. (a) Hamming distance d(c_i, c_j) >= 2t + 1. (b) Hamming distance d(c_i, c_j) < 2t. The received vector is denoted by r.]
# of bits for FEC

  - Want to correct t errors in an (n,k) code.
  - Data word d = [d_1, d_2, ..., d_k]  =>  2^k data words.
  - Code word c = [c_1, c_2, ..., c_n]  =>  2^n code words.

    [Figure: the 2^k data words (square with vertices 00, 01, 10, 11) are mapped to a subset of the 2^n code words (cube with vertices 000 ... 111); d_j is a data word, c_j a valid codeword and c_j' a non-codeword vertex.]
Representing codes by vectors

  - Code strength is measured by the Hamming distance, which tells how different code words are:
      - Codes are more powerful when their minimum Hamming distance d_min (over all codeword pairs in the code) is large.
  - Hamming distance d(X,Y) is the number of bits that differ between code words X and Y.
  - (n,k) codes can be mapped onto an n-dimensional grid.

    [Figure: 3-bit repetition code and 3-bit parity code plotted on the unit cube; filled vertices mark the valid code words.]
Error Detection

  - If a code can detect a t-bit error, then c_j' must lie within a Hamming sphere of radius t around the code word c_j.
  - For example, if c_j = 101 and t = 1, then '100', '111' and '001' lie in the Hamming sphere.

    [Figure: 3-bit cube with code word c_j = 101 and its radius-1 Hamming sphere containing c_j'.]
Error Correction

  - To correct an error, the Hamming spheres around the code words must be non-overlapping:
    d_min = 2t + 1

    [Figure: 3-bit cube with non-overlapping radius-t Hamming spheres around code words c_j and c_j'.]
6-D Code Space

    [Figure: the 6-dimensional code space of 2^6 words.]
Block Code Error Detection and Correction

  - (6,3) code: 2^3 messages mapped into 2^6 possible words.
  - d_min = 3, so it can detect 2-bit errors and correct 1-bit errors.
  - Error example: 110100 sent, 110101 received; the codeword with the smallest Hamming distance (1) is 110100.
  - Erasure example: codeword 110011 sent but two digits erased (xx0011); the correct codeword has the smallest Hamming distance over the unerased positions.

    Message   Codeword   d(codeword, 110101)   d over unerased bits of xx0011
    000       000000     4                     2
    100       110100     1                     3
    010       011010     5                     2
    110       101110     4                     3
    001       101001     3                     2
    101       011101     3                     3
    011       110011     3                     0
    111       000111     4                     1
Geometric View

  - Want code efficiency, so the space should be packed with as many code words as possible.
  - Code words should be as far apart as possible to minimize errors.

    [Figure: the 2^n n-tuples of V_n, with the 2^k codewords forming a subspace.]
Linear block codes - cont'd

  - The information bit stream is chopped into blocks of k bits.
  - Each block is encoded into a larger block of n bits.
  - The coded bits are modulated and sent over the channel.
  - The reverse procedure is done at the receiver.

    Data block (k bits) -> Channel encoder -> Codeword (n bits)

  - n - k redundant bits are added to each block.
  - Code rate: R_c = k/n
Linear block codes - cont'd

  - The Hamming weight of a vector U, denoted by w(U), is the number of non-zero elements in U.
  - The Hamming distance between two vectors U and V is the number of elements in which they differ:
    d(U, V) = w(U + V)    (addition modulo 2)
  - The minimum distance of a block code is
    d_min = min_{i != j} d(U_i, U_j) = min_i w(U_i)
Linear block codes - cont'd

  - Error-detection capability is given by
    e = d_min - 1
  - Error-correction capability t of a code, defined as the maximum number of guaranteed correctable errors per codeword, is
    t = floor( (d_min - 1) / 2 )
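The following Python sketch ties the last few definitions together for the (6,3) code used later in these slides: it computes Hamming weight, Hamming distance, d_min, and the resulting detection/correction capabilities.

```python
from itertools import combinations

def hamming_weight(u):
    """Number of non-zero elements in the vector u."""
    return sum(u)

def hamming_distance(u, v):
    """Number of positions in which u and v differ (= w(u XOR v))."""
    return sum(a ^ b for a, b in zip(u, v))

def minimum_distance(code):
    """d_min over all distinct codeword pairs."""
    return min(hamming_distance(u, v) for u, v in combinations(code, 2))

# The (6,3) code used later in these slides
code = [(0,0,0,0,0,0), (1,1,0,1,0,0), (0,1,1,0,1,0), (1,0,1,1,1,0),
        (1,0,1,0,0,1), (0,1,1,1,0,1), (1,1,0,0,1,1), (0,0,0,1,1,1)]

d_min = minimum_distance(code)
# Linear-code property from the slide: d_min equals the minimum non-zero weight.
assert d_min == min(hamming_weight(c) for c in code if any(c))

e = d_min - 1          # guaranteed error-detection capability
t = (d_min - 1) // 2   # guaranteed error-correction capability
print(d_min, e, t)     # -> 3 2 1
```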
Linear block codes - cont'd

  - For memoryless channels, the probability that the decoder commits an erroneous decoding is bounded by
    P_M <= \sum_{j=t+1}^{n} \binom{n}{j} p^j (1-p)^{n-j}
    where p is the transition probability, or bit error probability, over the channel.
  - The decoded bit error probability is approximately
    P_B ~= (1/n) \sum_{j=t+1}^{n} j \binom{n}{j} p^j (1-p)^{n-j}
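A short Python sketch of the two expressions above; the code parameters (n = 6, t = 1) and the channel bit-error probability p = 0.01 are assumed example values.

```python
from math import comb

def block_error_prob(n, t, p):
    """Upper bound on the probability of an erroneous decoding (memoryless channel)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(t + 1, n + 1))

def decoded_bit_error_prob(n, t, p):
    """Approximate decoded bit-error probability."""
    return sum(j * comb(n, j) * p**j * (1 - p)**(n - j) for j in range(t + 1, n + 1)) / n

# Assumed example: a (6,3) code with t = 1 over a channel with p = 0.01
print(block_error_prob(6, 1, 0.01))
print(decoded_bit_error_prob(6, 1, 0.01))
```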
Linear block codes - cont'd

  - Discrete, memoryless, symmetric channel model:

    [Figure: binary symmetric channel; a transmitted bit is received correctly with probability 1-p and flipped with probability p.]

  - Note that for coded systems the coded bits are modulated and transmitted over the channel. For example, for M-PSK modulation on AWGN channels (M > 2):

    P_B ~= (2 / log2 M) Q( sqrt(2 Ec log2 M / N0) sin(pi/M) )
        =  (2 / log2 M) Q( sqrt(2 Eb Rc log2 M / N0) sin(pi/M) )

    where Ec is the energy per coded bit, given by Ec = Rc Eb.
Linear block codes - cont'd

    [Figure: the encoder maps V_k onto the codeword set C, a subspace of V_n, via the bases of C.]

  - A matrix G is constructed by taking as its rows the basis vectors {V_1, V_2, ..., V_k}:

        [ V_1 ]   [ v_11  v_12  ...  v_1n ]
    G = [ V_2 ] = [ v_21  v_22  ...  v_2n ]
        [ ... ]   [ ...   ...   ...  ...  ]
        [ V_k ]   [ v_k1  v_k2  ...  v_kn ]
Linear block codes - cont'd

  - Encoding in an (n,k) block code:
    U = m G

    (u_1, u_2, ..., u_n) = (m_1, m_2, ..., m_k) [ V_1; V_2; ...; V_k ]
    (u_1, u_2, ..., u_n) = m_1 V_1 + m_2 V_2 + ... + m_k V_k

  - The rows of G are linearly independent.
Linear block codes - cont'd

  - Example: block code (6,3)

        [ V_1 ]   [ 1 1 0 1 0 0 ]
    G = [ V_2 ] = [ 0 1 1 0 1 0 ]
        [ V_3 ]   [ 1 0 1 0 0 1 ]

    Message vector   Codeword
    000              000000
    100              110100
    010              011010
    110              101110
    001              101001
    101              011101
    011              110011
    111              000111
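A minimal Python sketch of the encoding rule U = mG for this (6,3) example; it reproduces the message/codeword table above.

```python
from itertools import product

# Generator matrix of the (6,3) example code
G = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]

def encode(m, G):
    """U = mG over GF(2): XOR together the rows of G selected by the 1s in m."""
    u = [0] * len(G[0])
    for bit, row in zip(m, G):
        if bit:
            u = [a ^ b for a, b in zip(u, row)]
    return u

for m in product((0, 1), repeat=3):
    print(m, encode(m, G))   # reproduces the message/codeword table above
```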
Linear block codes - cont'd

  - Systematic block code (n,k):
      - For a systematic code, the first (or last) k elements in the codeword are the information bits.

    G = [P | I_k]
    I_k = k x k identity matrix
    P   = k x (n-k) matrix

    U = (u_1, u_2, ..., u_n) = (p_1, p_2, ..., p_{n-k}, m_1, m_2, ..., m_k)
                                \---- parity bits ----/ \--- message bits ---/
Linear block codes - cont'd

  - For any linear code we can find an (n-k) x n matrix H whose rows are orthogonal to the rows of G:
    G H^T = 0
  - H is called the parity check matrix, and its rows are linearly independent.
  - For systematic linear block codes:
    H = [I_{n-k} | P^T]
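A short Python sketch that builds H = [I_{n-k} | P^T] from the systematic generator matrix of the (6,3) example and checks G H^T = 0 over GF(2).

```python
# Build the parity-check matrix for a systematic G = [P | I_k] and check G H^T = 0.
G = [[1, 1, 0, 1, 0, 0],
     [0, 1, 1, 0, 1, 0],
     [1, 0, 1, 0, 0, 1]]
k = len(G)
n = len(G[0])

P = [row[:n - k] for row in G]                              # k x (n-k) parity part
I = [[1 if i == j else 0 for j in range(n - k)] for i in range(n - k)]
PT = [[P[i][j] for i in range(k)] for j in range(n - k)]    # transpose of P
H = [I[j] + PT[j] for j in range(n - k)]                    # H = [I_(n-k) | P^T]

# G H^T should be the all-zero matrix over GF(2)
for g in G:
    print([sum(gi & hi for gi, hi in zip(g, h)) % 2 for h in H])   # -> [0, 0, 0]
```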
Linear block codes - cont'd

    Data source -> Format -> Channel encoding (U) -> Modulation -> Channel
    Data sink  <- Format <- Channel decoding (m_hat) <- Demodulation / Detection (r)

    r = U + e
    r = (r_1, r_2, ..., r_n)   received codeword or vector
    e = (e_1, e_2, ..., e_n)   error pattern or vector

  - Syndrome testing:
      - S is the syndrome of r, corresponding to the error pattern e:
        S = r H^T = e H^T
Linear block codes - cont'd

  - Standard array
      1. Write the 2^k codewords in the first row, starting with the all-zero codeword.
      2. For row i = 2, 3, ..., 2^{n-k}, find a vector in V_n of minimum weight that is not already listed in the array. Call this pattern e_i and form the i-th row as the corresponding coset e_i + C.

        U_1 (zero codeword)   U_2                 ...   U_{2^k}
        e_2                   e_2 + U_2           ...   e_2 + U_{2^k}
        ...                   ...                 ...   ...
        e_{2^{n-k}}           e_{2^{n-k}} + U_2   ...   e_{2^{n-k}} + U_{2^k}

      The first column contains the coset leaders; each row is a coset.
Linear block codes - cont'd

  - Standard array and syndrome table decoding
      1. Calculate S = r H^T.
      2. Find the coset leader e_hat = e_i corresponding to S.
      3. Calculate U_hat = r + e_hat and the corresponding m_hat.

  - Note that U_hat = r + e_hat = (U + e) + e_hat = U + (e + e_hat).
      - If e_hat = e, the error is corrected.
      - If e_hat != e, an undetectable decoding error occurs.
Linear block codes - cont'd

  - Example: standard array for the (6,3) code

    codewords:        000000 110100 011010 101110 101001 011101 110011 000111
    coset of 000001:  000001 110101 011011 101111 101000 011100 110010 000110
    coset of 000010:  000010 110110 011000 101100 101011 011111 110001 000101
    coset of 000100:  000100 110000 011110 101010 101101 011001 110111 000011
    coset of 001000:  001000 111100 ...
    coset of 010000:  010000 100100 ...
    coset of 100000:  100000 010100 ...
    coset of 010001:  010001 100101 ...    010110 ...

    The first column contains the coset leaders.
Linear block codes - cont'd

    Error pattern   Syndrome
    000000          000
    000001          101
    000010          011
    000100          110
    001000          001
    010000          010
    100000          100
    010001          111

  - Example: U = (101110) is transmitted and r = (001110) is received.
    The syndrome of r is computed: S = r H^T = (001110) H^T = (100).
    The error pattern corresponding to this syndrome is e_hat = (100000).
    The corrected vector is estimated as U_hat = r + e_hat = (001110) + (100000) = (101110).
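A minimal Python sketch of syndrome-table decoding for the (6,3) example; the parity-check matrix and coset leaders follow the tables above, and the received vector is the one from the example.

```python
# Syndrome-table decoding for the (6,3) example: S = r H^T selects a coset leader.
H = [[1, 0, 0, 1, 0, 1],
     [0, 1, 0, 1, 1, 0],
     [0, 0, 1, 0, 1, 1]]

def syndrome(r, H):
    return tuple(sum(ri & hi for ri, hi in zip(r, h)) % 2 for h in H)

# Coset leaders keyed by syndrome: the all-zero pattern, all single-error patterns,
# and the double-error pattern 010001 from the syndrome table above.
error_patterns = [(0,0,0,0,0,0), (1,0,0,0,0,0), (0,1,0,0,0,0), (0,0,1,0,0,0),
                  (0,0,0,1,0,0), (0,0,0,0,1,0), (0,0,0,0,0,1), (0,1,0,0,0,1)]
table = {syndrome(e, H): e for e in error_patterns}

r = (0, 0, 1, 1, 1, 0)                 # received vector from the slide
e_hat = table[syndrome(r, H)]          # estimated error pattern
u_hat = tuple(a ^ b for a, b in zip(r, e_hat))
print(e_hat, u_hat)                    # corrects back to (1, 0, 1, 1, 1, 0)
```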
Hamming codes

  - Hamming codes are a subclass of linear block codes and belong to the category of perfect codes.
  - Hamming codes are expressed as a function of a single integer m >= 2:

    Code length:                  n = 2^m - 1
    Number of information bits:   k = 2^m - m - 1
    Number of parity bits:        n - k = m
    Error correction capability:  t = 1

  - The columns of the parity-check matrix H consist of all non-zero binary m-tuples.
Hamming codes

  - Example: systematic Hamming code (7,4)

        [ 1 0 0 0 1 1 1 ]
    H = [ 0 1 0 1 0 1 1 ] = [I_3x3 | P^T]
        [ 0 0 1 1 1 0 1 ]

        [ 0 1 1 1 0 0 0 ]
    G = [ 1 0 1 0 1 0 0 ] = [P | I_4x4]
        [ 1 1 0 0 0 1 0 ]
        [ 1 1 1 0 0 0 1 ]
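A small Python sketch that builds a (7,4) Hamming parity-check matrix by listing all non-zero 3-tuples as columns; note that the column ordering produced here is one arbitrary choice and differs from the systematic arrangement shown above.

```python
from itertools import product

# Parity-check matrix of a (7,4) Hamming code: columns are all non-zero 3-tuples.
m = 3
columns = [c for c in product((0, 1), repeat=m) if any(c)]   # 2^m - 1 = 7 columns
H = [[col[i] for col in columns] for i in range(m)]          # 3 x 7 matrix

n, k = 2**m - 1, 2**m - m - 1
print(n, k)          # -> 7 4
for row in H:
    print(row)
```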
Example of the block codes

    [Figure: decoded bit-error probability P_B versus E_b/N_0 for QPSK and 8-PSK.]
Convolutional codes

  - Convolutional codes offer an approach to error control coding substantially different from that of block codes.
  - A convolutional encoder:
      - encodes the entire data stream into a single codeword.
      - does not need to segment the data stream into blocks of fixed size (convolutional codes are often forced into a block structure by periodic truncation).
      - is a machine with memory.
  - This fundamental difference in approach imparts a different nature to the design and evaluation of the code:
      - Block codes are based on algebraic/combinatorial techniques.
      - Convolutional codes are based on construction techniques.
Convolutional codes - cont'd

  - A convolutional code is specified by three parameters (n, k, K) or (k/n, K), where:
      - R_c = k/n is the coding rate, determining the number of data bits per coded bit.
          - In practice, usually k = 1 is chosen, and we assume that from now on.
      - K is the constraint length of the encoder, where the encoder has K-1 memory elements.
          - There are different definitions of constraint length in the literature.
Block diagram of the DCS

    Information source -> Rate 1/n conv. encoder -> Modulator -> Channel
    Information sink  <- Rate 1/n conv. decoder <- Demodulator <- Channel

    Input sequence:     m = (m_1, m_2, ..., m_i, ...)
    Codeword sequence:  U = G(m) = (U_1, U_2, U_3, ..., U_i, ...)
                        where U_i = (u_1i, ..., u_ji, ..., u_ni) is the i-th branch word (n coded bits)
    Received sequence:  Z = (Z_1, Z_2, Z_3, ..., Z_i, ...)
                        where Z_i = (z_1i, ..., z_ji, ..., z_ni) are the n demodulator outputs for branch word i
    Decoder output:     m_hat = (m_hat_1, m_hat_2, ..., m_hat_i, ...)
A Rate 1/2 Convolutional encoder

  - Convolutional encoder (rate 1/2, K = 3):
      - 3 shift-register stages, where the first one takes the incoming data bit and the rest form the memory of the encoder.

    [Figure: the input data bit m feeds a 3-stage shift register; the upper modulo-2 adder produces the first coded bit u1 and the lower adder produces the second coded bit u2 of each branch word (u1, u2).]
A Rate 1/2 Convolutional encoder

  - Message sequence: m = (101)

    Time   Register contents   Output (branch word u1 u2)
    t1     1 0 0               1 1
    t2     0 1 0               1 0
    t3     1 0 1               0 0
    t4     0 1 0               1 0
A Rate 1/2 Convolutional encoder

    Time   Register contents   Output (branch word u1 u2)
    t5     0 0 1               1 1
    t6     0 0 0               0 0

  - Encoder output for m = (101):  U = (11 10 00 10 11)
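A minimal Python sketch of this rate-1/2, K = 3 encoder, using the connections shown in the figure (g1 = 111, g2 = 101, as defined on the following slides); for m = (101) it reproduces U = (11 10 00 10 11), including the two branch words generated by the tail zeros.

```python
# Rate-1/2, K=3 convolutional encoder with generators g1 = (1,1,1), g2 = (1,0,1).
def conv_encode(bits, g1=(1, 1, 1), g2=(1, 0, 1)):
    state = [0, 0]                      # the two memory elements (initially zero)
    out = []
    for b in list(bits) + [0, 0]:       # append K-1 = 2 tail zeros to flush the memory
        reg = [b] + state               # current shift-register contents
        u1 = sum(r & g for r, g in zip(reg, g1)) % 2
        u2 = sum(r & g for r, g in zip(reg, g2)) % 2
        out.append((u1, u2))
        state = reg[:-1]                # shift the register
    return out

print(conv_encode([1, 0, 1]))   # -> [(1,1), (1,0), (0,0), (1,0), (1,1)]
```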
Effective code rate

  - Initialize the memory before encoding the first bit (all-zero).
  - Clear out the memory after encoding the last bit (all-zero).
      - Hence, a tail of zero bits is appended to the data bits:
        data + tail -> Encoder -> codeword
  - Effective code rate (L is the number of data bits, and k = 1 is assumed):
    R_eff = L / ( n (L + K - 1) ) < R_c
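A tiny Python sketch of the effective-rate formula above; it shows how R_eff approaches R_c = 1/2 as the data block length L grows.

```python
# Effective code rate when K-1 tail zeros are appended (k = 1 assumed).
def effective_rate(L, n=2, K=3):
    return L / (n * (L + K - 1))

print(effective_rate(3))          # 3 data bits -> 10 coded bits: R_eff = 0.3 < Rc = 0.5
print(effective_rate(1000))       # approaches Rc = 1/2 for long blocks
```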
Encoder representation

  - Vector representation:
      - We define n binary vectors with K elements each (one vector for each modulo-2 adder). The i-th element of a vector is "1" if the i-th stage of the shift register is connected to the corresponding modulo-2 adder, and "0" otherwise.
      - Example:
        g1 = (111)
        g2 = (101)

    [Figure: the rate-1/2, K=3 encoder with these connections producing (u1, u2) from input m.]
Encoder representation - cont'd

  - Impulse response representation:
      - The response of the encoder to a single "one" bit that goes through it.
      - Example:

        Register contents   Branch word (u1 u2)
        100                 1 1
        010                 1 0
        001                 1 1

        Input sequence:   1 0 0
        Output sequence:  11 10 11

        Input m   Output
        1         11 10 11
        0            00 00 00
        1               11 10 11
        Modulo-2 sum: 11 10 00 10 11
Encoder representation - cont'd

  - Polynomial representation:
      - We define n generator polynomials, one for each modulo-2 adder. Each polynomial is of degree K-1 or less and describes the connections of the shift register to the corresponding modulo-2 adder.
      - Example:
        g1(X) = g0^(1) + g1^(1) X + g2^(1) X^2 = 1 + X + X^2
        g2(X) = g0^(2) + g1^(2) X + g2^(2) X^2 = 1 + X^2
      - The output sequence is found as follows:
        U(X) = m(X) g1(X) interlaced with m(X) g2(X)
Encoder representation - cont'd

  - In more detail:
    m(X) g1(X) = (1 + X^2)(1 + X + X^2) = 1 + X + X^3 + X^4
    m(X) g2(X) = (1 + X^2)(1 + X^2)     = 1 + X^4

    m(X) g1(X) = 1 + X   + 0.X^2 + X^3   + X^4
    m(X) g2(X) = 1 + 0.X + 0.X^2 + 0.X^3 + X^4

    U(X) = (1,1) + (1,0) X + (0,0) X^2 + (1,0) X^3 + (1,1) X^4
    U    = 11 10 00 10 11
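The same computation as a Python sketch: multiply m(X) by each generator polynomial over GF(2) and interlace the two coefficient sequences.

```python
# Generator-polynomial view: multiply m(X) by each g(X) over GF(2), then interlace.
def gf2_poly_mul(a, b):
    """Multiply polynomials given as coefficient lists (lowest degree first), modulo 2."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] ^= ai & bj
    return out

m  = [1, 0, 1]        # m(X)  = 1 + X^2
g1 = [1, 1, 1]        # g1(X) = 1 + X + X^2
g2 = [1, 0, 1]        # g2(X) = 1 + X^2

c1 = gf2_poly_mul(m, g1)   # [1, 1, 0, 1, 1]
c2 = gf2_poly_mul(m, g2)   # [1, 0, 0, 0, 1]
U = [bit for pair in zip(c1, c2) for bit in pair]
print(U)    # 1 1  1 0  0 0  1 0  1 1 -> same branch words as the shift-register encoder
```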
State diagram

  - A finite-state machine only encounters a finite number of states.
  - State of a machine: the smallest amount of information that, together with a current input to the machine, can predict the output of the machine.
  - In a convolutional encoder, the state is represented by the content of the memory.
  - Hence, there are 2^(K-1) states.
State diagram - cont'd

  - A state diagram is a way to represent the encoder.
  - A state diagram contains all the states and all possible transitions between them.
  - Only two transitions initiate from each state.
  - Only two transitions end up in each state.
State diagram - cont'd

    [Figure: state diagram of the rate-1/2, K=3 encoder with states S0=00, S1=01, S2=10, S3=11; each transition is labeled input/output branch word.]

    Current state   Input   Next state   Output
    S0 = 00         0       S0           00
    S0 = 00         1       S2           11
    S1 = 01         0       S0           11
    S1 = 01         1       S2           00
    S2 = 10         0       S1           10
    S2 = 10         1       S3           01
    S3 = 11         0       S1           01
    S3 = 11         1       S3           10
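A short Python sketch that enumerates the same state-transition table directly from the encoder connections; states are the memory contents (S0 = 00, S1 = 01, S2 = 10, S3 = 11).

```python
# Enumerate the state-transition table of the rate-1/2, K=3 encoder (states = memory contents).
def step(state, b, g1=(1, 1, 1), g2=(1, 0, 1)):
    reg = (b,) + state                 # shift register = input bit followed by the memory
    u1 = sum(r & g for r, g in zip(reg, g1)) % 2
    u2 = sum(r & g for r, g in zip(reg, g2)) % 2
    return reg[:-1], (u1, u2)          # (next state, branch word)

for state in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    for b in (0, 1):
        nxt, out = step(state, b)
        print(f"state {state}, input {b} -> next {nxt}, output {out}")
```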
Trellis - cont'd

  - The trellis diagram is an extension of the state diagram that shows the passage of time.
      - Example of a section of the trellis for the rate-1/2 code:

    [Figure: one trellis section between times t_i and t_(i+1) with states S0=00, S2=10, S1=01, S3=11 and branches labeled input/output: 0/00, 1/11, 0/11, 1/00, 0/10, 1/01, 0/01, 1/10.]
Trellis - cont'd

  - A trellis diagram for the example code (input bits 1 0 1 0 0, the last two being tail bits; output bits 11 10 00 10 11):

    [Figure: five-section trellis (t1 to t6) with every branch labeled input/output; in the later sections the branches that cannot belong to a valid terminated path are pruned, and the path corresponding to the input above is highlighted.]
Soft and hard decision decoding

  - In hard decision:
      - The demodulator makes a firm (hard) decision on whether a one or a zero was transmitted and provides no other information to the decoder, such as how reliable the decision is.
  - In soft decision:
      - The demodulator provides the decoder with some side information together with the decision. The side information provides the decoder with a measure of confidence in the decision.
Soft and hard decoding

  - Regardless of whether the channel outputs hard or soft decisions, the decoding rule remains the same: maximize the probability
    ln p(y, x^(m)) = sum_j ln p(y_j | x_j^(m))
  - However, in soft decoding the decision-region energies must be accounted for, and hence the Euclidean metric d_E, rather than the Hamming metric, is used.

    [Figure: quantized demodulator output regions; the transition for Pr[3|0] is indicated by the arrow.]
Decision regions

  - Coding can be realized by the soft-decoding or the hard-decoding principle.
  - For soft decoding, the reliability (measured by bit energy) of each decision region must be known.
  - Example: decoding a BPSK signal. The matched filter output is a continuous number; in AWGN the matched filter output is Gaussian.

    [Figure: for soft decoding, several decision-region partitions of the Gaussian matched-filter output are used; the transition probability Pr[3|0] is the probability that a transmitted '0' falls into region 3.]
Soft and hard decision decoding ...

  - ML soft-decision decoding rule:
      - Choose the path in the trellis with the minimum Euclidean distance from the received sequence.
  - ML hard-decision decoding rule:
      - Choose the path in the trellis with the minimum Hamming distance from the received sequence.
The Viterbi algorithm

  - The Viterbi algorithm performs maximum likelihood decoding.
  - It finds a path through the trellis with the largest metric (maximum correlation or minimum distance).
  - At each step in the trellis, it compares the partial metric of all paths entering each state, and keeps only the path with the largest metric, called the survivor, together with its metric.
Example of hard-decision Viterbi decoding

    m = (101)               U = (11 10 00 10 11)
    Z = (11 10 11 10 01)    (received sequence, with channel errors)
    m_hat = (100)           U_hat = (11 10 11 00 00)

    [Figure: trellis with branch metrics (Hamming distances between each branch word and the corresponding received pair) and the partial path metrics Gamma(S(t_i), t_i) accumulated at each state; the surviving path gives m_hat.]
Example of soft-decision Viterbi decoding

    m = (101)        U = (11 10 00 10 11)
    Z = (1, 2/3, 2/3, ..., -1, ..., 1)    (soft demodulator outputs)
    m_hat = (101)    U_hat = (11 10 00 10 11)

    [Figure: trellis with soft branch metrics (multiples of 1/3) and partial path metrics Gamma(S(t_i), t_i); the surviving path recovers the transmitted sequence.]
Free distance of Convolutional codes

  - Distance properties:
      - Since a convolutional encoder generates codewords of various lengths (as opposed to block codes), the following approach is used to find the minimum distance between all pairs of codewords:
          - Since the code is linear, the minimum distance of the code is the minimum distance between each of the codewords and the all-zero codeword.
          - This is the minimum distance over the set of all arbitrarily long paths along the trellis that diverge from and remerge with the all-zero path.
          - It is called the minimum free distance, or the free distance of the code, denoted by d_free or d_f.
Free distance ...

    [Figure: trellis with each branch labeled by its Hamming weight; the all-zero path has weight 0 on every branch. The path that diverges from and remerges with the all-zero path with minimum total weight accumulates weights 2, 1, 2, giving d_f = 5.]
Interleaving

  - Convolutional codes are suitable for memoryless channels with random error events.
  - Some errors have a bursty nature:
      - Statistical dependence among successive error events (time correlation) due to the channel memory.
      - Examples: errors in multipath fading channels in wireless communications, errors due to switching noise, ...
  - "Interleaving" makes the channel look like a memoryless channel at the decoder.
Interleaving ...

  - Interleaving is done by spreading the coded symbols in time (interleaving) before transmission.
  - The reverse is done at the receiver by deinterleaving the received sequence.
  - "Interleaving" makes bursty errors look random. Hence, convolutional codes can be used.
  - Types of interleaving:
      - Block interleaving
      - Convolutional or cross interleaving
Interleaving ...

  - Consider a code with t = 1 and 3 coded bits per codeword. A burst error of length 3 cannot be corrected:

    A1 A2 A3 B1 B2 B3 C1 C2 C3    (a 3-symbol burst puts 2 errors in one codeword)

  - Let us use a 3x3 block interleaver:

    Encoder output:       A1 A2 A3 B1 B2 B3 C1 C2 C3
    Interleaver output:   A1 B1 C1 A2 B2 C2 A3 B3 C3
    Deinterleaver input:  A1 B1 C1 A2 B2 C2 A3 B3 C3   (a 3-symbol burst hits here)
    Deinterleaver output: A1 A2 A3 B1 B2 B3 C1 C2 C3   (at most 1 error per codeword, so all are correctable)
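A minimal Python sketch of the 3x3 block interleaver used in this example (write row by row, read column by column).

```python
# 3x3 block interleaver: write the coded symbols row by row, read them out column by column.
def interleave(symbols, rows=3, cols=3):
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def deinterleave(symbols, rows=3, cols=3):
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]

coded = ['A1', 'A2', 'A3', 'B1', 'B2', 'B3', 'C1', 'C2', 'C3']
tx = interleave(coded)
print(tx)                     # A1 B1 C1 A2 B2 C2 A3 B3 C3
print(deinterleave(tx))       # back to the original order; a 3-symbol burst on tx
                              # lands as one error per codeword after deinterleaving
```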
Concatenated codes

  - A concatenated code uses two levels of coding, an inner code and an outer code (higher rate).
      - Popular concatenated codes: convolutional codes with Viterbi decoding as the inner code and Reed-Solomon codes as the outer code.
      - The purpose is to reduce the overall complexity, yet achieve the required error performance.

    Input data  -> Outer encoder -> Interleaver   -> Inner encoder -> Modulate   -> Channel
    Output data <- Outer decoder <- Deinterleaver <- Inner decoder <- Demodulate <- Channel
Optimum decoding

  - If the input message sequences are equally likely, the optimum decoder, which minimizes the probability of error, is the maximum likelihood decoder.
  - The ML decoder selects the codeword, among all possible codewords, that maximizes the likelihood function p(Z | U^(m)), where Z is the received sequence and U^(m) is one of the possible codewords. There are 2^L codewords to search!

    ML decoding rule:
    Choose U^(m') if p(Z | U^(m')) = max over all U^(m) of p(Z | U^(m))
ML decoding for memoryless channels

  - Due to the independent channel statistics for memoryless channels, the likelihood function becomes
    p(Z | U^(m)) = prod_i p(Z_i | U_i^(m)) = prod_i prod_{j=1}^{n} p(z_ji | u_ji^(m))
    and, equivalently, the log-likelihood function becomes
    gamma_U^(m) = log p(Z | U^(m)) = sum_i log p(Z_i | U_i^(m)) = sum_i sum_{j=1}^{n} log p(z_ji | u_ji^(m))
    (path metric = sum of branch metrics = sum of bit metrics)
  - The path metric up to time index i is called the partial path metric.

    ML decoding rule:
    Choose the path with the maximum metric among all the paths in the trellis.
    This path is the "closest" path to the transmitted sequence.
Binary symmetric channels (BSC)

    [Figure: BSC between the modulator input and demodulator output, with p = p(1|0) = p(0|1) and 1-p = p(1|1) = p(0|0).]

  - If d_m = d(Z, U^(m)) is the Hamming distance between Z and U^(m), and Ln is the size of the coded sequence, then
    p(Z | U^(m)) = p^(d_m) (1-p)^(Ln - d_m)
    gamma_U^(m) = -d_m log( (1-p)/p ) + Ln log(1-p)

    ML decoding rule:
    Choose the path with the minimum Hamming distance from the received sequence.
AWGN channels

  - For BPSK modulation, the transmitted sequence corresponding to the codeword U^(m) is denoted by S^(m) = (S_1^(m), S_2^(m), ..., S_i^(m), ...), where S_i^(m) = (s_1i^(m), ..., s_ji^(m), ..., s_ni^(m)) and s_ji = +sqrt(Ec) or -sqrt(Ec).
  - The log-likelihood function becomes
    gamma_U^(m) = sum_i sum_{j=1}^{n} z_ji s_ji^(m) = <Z, S^(m)>    (inner product, or correlation, between Z and S^(m))
  - Maximizing the correlation is equivalent to minimizing the Euclidean distance.

    ML decoding rule:
    Choose the path with the minimum Euclidean distance to the received sequence.
Soft and hard decisions

  - In hard decision:
      - The demodulator makes a firm (hard) decision on whether a one or a zero was transmitted and provides no other information to the decoder, such as how reliable the decision is.
      - Hence, its output is only zero or one (the output is quantized to only two levels); these are called "hard bits".
      - Decoding based on hard bits is called "hard-decision decoding".
Soft and hard decisions - cont'd

  - In soft decision:
      - The demodulator provides the decoder with some side information together with the decision.
      - The side information provides the decoder with a measure of confidence in the decision.
      - The demodulator outputs, which are called soft bits, are quantized to more than two levels.
      - Decoding based on soft bits is called "soft-decision decoding".
      - On AWGN channels about 2 dB, and on fading channels about 6 dB, of gain is obtained by using soft decoding instead of hard decoding.
The Viterbi algorithm

  - The Viterbi algorithm performs maximum likelihood decoding.
  - It finds a path through the trellis with the largest metric (maximum correlation or minimum distance).
      - It processes the demodulator outputs in an iterative manner.
      - At each step in the trellis, it compares the metric of all paths entering each state, and keeps only the path with the largest metric, called the survivor, together with its metric.
      - It proceeds in the trellis by eliminating the least likely paths.
  - It reduces the decoding complexity to L 2^(K-1)!
The Viterbi algorithm - cont'd

  - Viterbi algorithm:
    A. Do the following set-up:
        - For a data block of L bits, form the trellis. The trellis has L+K-1 sections or levels; it starts at time t_1 and ends at time t_(L+K).
        - Label all the branches in the trellis with their corresponding branch metrics.
        - For each state in the trellis at time t_i, denoted by S(t_i) in {0, 1, ..., 2^(K-1) - 1}, define a parameter Gamma(S(t_i), t_i).
    B. Then, do the following:
The Viterbi algorithm - cont’d
1. Set (0, t1 )  0 and i  2.
2. At time ti , compute the partial path metrics for
all the paths entering each state.
3. Set S (ti ),ti  equal to the best partial path metric
entering each state at time t i .
Keep the survivor path and delete the dead paths
from the trellis.
4. If i  L  K , increase i by 1 and return to step 2.
C. Start at state zero at time t L K . Follow the
surviving branches backwards through the
trellis. The path thus defined is unique and
correspond to the ML codeword.
2006-02-16
Lecture 9
80
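A compact Python sketch of hard-decision Viterbi decoding for the rate-1/2, K = 3 example code, following steps A-C above with the Hamming-distance branch metric; it decodes the received sequence Z = (11 10 11 10 01) used on the following slides back to m_hat = (100).

```python
# Hard-decision Viterbi decoder for the rate-1/2, K=3 code (g1 = 111, g2 = 101).
def step(state, b, g1=(1, 1, 1), g2=(1, 0, 1)):
    reg = (b,) + state
    out = tuple(sum(r & g for r, g in zip(reg, gen)) % 2 for gen in (g1, g2))
    return reg[:-1], out                             # (next state, branch word)

def viterbi_hard(branch_words):
    # path metric = accumulated Hamming distance; survivor = input sequence so far
    metrics = {(0, 0): (0, [])}                      # start in the all-zero state
    for z in branch_words:
        new = {}
        for state, (metric, path) in metrics.items():
            for b in (0, 1):
                nxt, out = step(state, b)
                m = metric + sum(o ^ zi for o, zi in zip(out, z))
                if nxt not in new or m < new[nxt][0]:
                    new[nxt] = (m, path + [b])       # keep only the survivor per state
        metrics = new
    metric, path = metrics[(0, 0)]                   # terminate in the all-zero state
    return path[:-2], metric                         # drop the K-1 tail bits

Z = [(1, 1), (1, 0), (1, 1), (1, 0), (0, 1)]         # received sequence from the slides
print(viterbi_hard(Z))                               # -> ([1, 0, 0], 2)
```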
Example of Hard decision Viterbi decoding

    m = (101)    U = (11 10 00 10 11)    Z = (11 10 11 10 01)

    [Figure: trellis for the five received branch words, with every branch labeled input/output.]
Example of Hard decision Viterbi decoding - cont'd

  - Label all the branches with the branch metric (Hamming distance between the branch word and the corresponding received pair).

    [Figure: trellis with branch metrics; the partial path metrics Gamma(S(t_i), t_i) are accumulated on top of these.]
Example of Hard decision Viterbi decoding - cont'd

  - i = 2, 3, 4, 5, 6: at each time t_i, compute the partial path metrics of the paths entering each state, keep only the survivor with the best metric, and delete the dead paths.

    [Figure: five snapshots of the trellis, one per step, showing the accumulated partial path metrics at every state.]
Example of Hard decision Viterbi decoding - cont'd

  - Trace back and then:
    m_hat = (100)
    U_hat = (11 10 11 00 00)

    [Figure: the surviving path traced back from the all-zero state at t_6.]
Example of soft-decision Viterbi decoding

    m = (101)        U = (11 10 00 10 11)
    Z = (1, 2/3, 2/3, ..., -1, ..., 1)    (soft demodulator outputs)
    m_hat = (101)    U_hat = (11 10 00 10 11)

    [Figure: trellis with soft branch metrics (multiples of 1/3) and partial path metrics Gamma(S(t_i), t_i); the surviving path recovers the transmitted sequence.]
Trellis diagram for a K = 2, k = 2, n = 3 convolutional code

    [Figure: trellis diagram.]
State diagram for a K = 2, k = 2, n = 3 convolutional code

    [Figure: state diagram.]