S.72-227 Digital Communication Systems
Course Overview, Basic Characteristics of Block Codes
Lectures: Prof. Timo O. Korhonen, tel. 09 451 2351,
Research Scientist Michael Hall, tel. 09 451 2343
Course assistants: Research Scientist Naser Tarhuni ([email protected]), tel. 09 451 2255,
Research Scientist Yangpo Gao ([email protected]), tel. 09 451 5671
Contents: Block codes, convolutional codes, bandpass
digital transmission, multipath channel, digital transmission
in fading channels, diversity techniques, selected topics in
multiuser detection, intensity modulated fiber optic links.
Requirements: Examination, Lecture Diary / Special Assignment, Tutorials
Practicalities
References: A. B. Carlson: Communication Systems, J. G.
Proakis, Digital Communications, L. Ahlin, J. Zander:
Principles of Wireless Communications, Sergio Verdu:
Multiuser Detection.
Prerequisites: S-72.244 (Modulation and Coding Methods);
recommended: S-72.420 (Design Methodology of Transmission Systems)
Homepage: http://www.comlab.hut.fi/opetus/227/
Timetables
– Lectures: Tuesdays, 10-12, hall S5
– Tutorials: Wednesdays, 12-14, hall I 346, starts 29.1.2003
S.72-227 Digital Communication Systems: Course Overview
Overview of course contents, block codes TK
Convolutional coding TK
Bandpass digital transmission I: Modulated spectra,
Optimum coherent detection TK
Bandpass digital transmission II: Coherent and noncoherent
modulation error rates, comparison of digital modulation
systems TK
Overview of fading multipath radio channels MH
Bandpass digital transmission in multipath channels MH
DFE, ML, linear equalization MH
Overview, cont.
Diversity techniques MH
Spread spectrum systems I: DS- & FH Systems TK
Spread spectrum systems II: WCDMA System TK
Multiuser reception
Fiber optic links
Overview of course contents
Examination: 15.5.2003, 9-12 in hall S1
Topics today
Block codes
– repetition codes
– parity codes
– Hamming codes
– cyclic codes
Forward error correction (FEC) system error rate in AWGN
Encoding and decoding
Code characterization
– code rate
– Hamming distance
– error detection ability
– error correction ability
A code taxonomy
[Figure: taxonomy of error-control codes]
Error-control coding: basics of Forward Error Correction (FEC) channel coding
Coding is used for error detection and/or error correction
Coding is a compromise between reliability, efficiency, and equipment complexity
In coding, extra bits are added to the data to protect it against transmission errors
Coding can be realized by two approaches:
– ARQ (automatic repeat request)
  stop-and-wait
  go-back-N
  selective repeat
– FEC (forward error correction)
  block coding (topic today)
  convolutional coding
ARQ can also include FEC
Implementations, hardware structures
What is channel coding?
Coding is a mapping of (usually binary) source output sequences of length k into binary channel input sequences of length n (n > k)
A block code is denoted by (n,k)
Binary coding produces $2^k$ code words of length n. The extra bits in the code words are used for error detection/correction
In this course we concentrate on two coding types realized by binary numbers: (1) block codes and (2) convolutional codes
– Block codes: the mapping of the information source into channel inputs is done independently: the encoder output depends only on the current block of the input sequence
– Convolutional codes: each source bit influences n(L+1) channel input bits, where n(L+1) is the constraint length and L is the memory depth. These codes are denoted by (n,k,L).
[Diagram: k-bit message block → (n,k) block coder → n-bit code word]
Representing codes by vectors
Code strength is measured by the Hamming distance, which tells how different the code words are:
– Codes are more powerful when their minimum Hamming distance $d_{\min}$ (taken over all pairs of code words in the code family) is large
The Hamming distance d(X,Y) is the number of bit positions in which the code words X and Y differ
(n,k) codes can be mapped onto an n-dimensional grid:
[Figure: the 3-bit parity code and the 3-bit repetition code mapped onto the vertices of a cube; valid code words marked]
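Hamming distances are easy to compute programmatically. The sketch below is my illustration, not part of the original slides (Python and the function names are my own choices); it checks the two 3-bit codes of the figure above:

```python
# Minimal sketch: Hamming distance between words and the minimum
# distance of a code family (taken over all pairs of code words).
from itertools import combinations

def hamming_distance(x, y):
    """Number of bit positions in which words x and y differ."""
    return sum(a != b for a, b in zip(x, y))

def minimum_distance(code):
    """d_min over all pairs of distinct code words."""
    return min(hamming_distance(c1, c2) for c1, c2 in combinations(code, 2))

repetition = ["000", "111"]                # 3-bit repetition code
parity = ["000", "011", "101", "110"]      # 3-bit even-parity code
print(minimum_distance(repetition))        # 3 -> can correct one error
print(minimum_distance(parity))            # 2 -> can only detect one error
```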
Hamming distance: The decision sphere interpretation
Consider two block code (n,k) words $\mathbf{c}_1$ and $\mathbf{c}_2$ at the Hamming distance
$$d_{\min} = \min_{i \neq j} d(\mathbf{c}_i, \mathbf{c}_j)$$
in the n-dimensional code space:
[Figure: code words $c_1$ and $c_2$ separated by $d_{\min}$, each surrounded by a decoding sphere of radius $d_{\min}/2$]
It can be seen that we can detect $l = d_{\min} - 1$ errors in the code words. This is because the only way not to detect an error is that the errors transform one code word into another code word, which requires changing $d_{\min}$ code bits.
Also, we can see that we can correct $t = \lfloor (d_{\min} - 1)/2 \rfloor$ errors. If more errors occur, the received word may fall into the decoding sphere of another code word.
Example: repetition coding
In repetition coding bits are repeated several times
Can be used for error correction or detection
For (n,k) block codes $d_{\min} \le n - k + 1$, a bound achieved by repetition codes. The code rate is, however, very small
Consider for instance the (3,1) repetition code, yielding the code rate $R_C = k/n = 1/3$
Assume a binomial error distribution, where $\alpha$ is the channel (transmission) bit error probability:
$$P(i,n) = \binom{n}{i} \alpha^i (1-\alpha)^{n-i}, \quad \alpha \ll 1$$
The encoded word is formed by the simple coding rule
$$1 \to 111, \qquad 0 \to 000$$
The code is decoded by majority voting, for instance
$$001 \to 0, \qquad 101 \to 1$$
An error in decoding is introduced if all three bits or two bits are inverted (by noise or interference), i.e. the majority of the bits is in error:
$$P_{we} = P(2,3) + P(3,3) = 3\alpha^2 - 2\alpha^3$$
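As a minimal sketch (my illustration, not from the slides; the value of alpha is an illustrative assumption), the (3,1) repetition code and its majority-vote decoder can be written as:

```python
# (3,1) repetition code: encoding, majority-vote decoding, and a check
# of the word error probability P_we = 3a^2 - 2a^3 from the slide.
def encode(bit):
    return [bit] * 3                      # 1 -> 111, 0 -> 000

def decode(word):
    return int(sum(word) >= 2)            # majority voting

assert decode([0, 0, 1]) == 0 and decode([1, 0, 1]) == 1

alpha = 0.01                              # assumed channel bit error probability
P23 = 3 * alpha**2 * (1 - alpha)          # exactly two bits in error
P33 = alpha**3                            # all three bits in error
print(P23 + P33, 3 * alpha**2 - 2 * alpha**3)   # identical by algebra
```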
Repetition coding, cont.
In a three-bit code word
– one error can always be corrected, because majority voting can always detect and correct a single code-word bit error
– two errors can always be detected, because all valid code words are all zeros or all ones (but now the encoded bit can not be recovered)
Example:
For a simple repetition code with transmission error probability $\alpha = 0.3$, plot the decoding error probability as a function of the block length n.
A decoding error occurs if at least $(n+1)/2$ of the transmitted symbols are received in error. Therefore the error probability can be expressed as
$$p_e = \sum_{k=(n+1)/2}^{n} \binom{n}{k} \alpha^k (1-\alpha)^{n-k}$$
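A sketch evaluating this expression (not from the slides; it prints the values for a few odd n rather than plotting them):

```python
# Decoding error probability of an (n,1) repetition code with majority
# voting, evaluated for alpha = 0.3 as the example above asks.
from math import comb

def p_e(n, alpha):
    """P(decoding error) = P(at least (n+1)/2 symbol errors)."""
    return sum(comb(n, k) * alpha**k * (1 - alpha)**(n - k)
               for k in range((n + 1) // 2, n + 1))

for n in [1, 3, 5, 7, 9, 11]:
    print(n, p_e(n, 0.3))       # decreases monotonically with n
```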
Error rate for a simple repetitive code
[Plot: decoding error rate $p_e$ versus code length n]
Note that by increasing the word length, more and more resistance to channel-introduced errors is obtained.
Parity-check coding
Repetition coding can greatly improve transmission reliability, because
$$P_{we} = 3\alpha^2 - 2\alpha^3 \ll P_e = \alpha, \quad \alpha \ll 1$$
However, due to the repetition the transmission rate is reduced. Here the code rate was 1/3 (that is, the ratio of the number of bits to be coded to the number of encoded bits)
In parity-check coding a check bit is formed that indicates whether the number of "1"s in the word to be coded is even or odd
An even number of "1"s means that the encoded word has even parity
Example: coding 2-bit words by even parity is realized by
$$00 \to 000, \quad 01 \to 011, \quad 10 \to 101, \quad 11 \to 110$$
Question: How many errors can be detected/corrected by parity-check
coding?
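A minimal sketch of this even-parity mapping (my illustration; function names are my own):

```python
# Even-parity encoding of 2-bit messages, reproducing the mapping above:
# 00 -> 000, 01 -> 011, 10 -> 101, 11 -> 110.
def even_parity_encode(bits):
    parity = sum(bits) % 2               # check bit makes the count of 1s even
    return bits + [parity]

def parity_check(word):
    """True if parity is consistent, i.e. no odd number of errors detected."""
    return sum(word) % 2 == 0

for m in ([0, 0], [0, 1], [1, 0], [1, 1]):
    print(m, "->", even_parity_encode(m))
```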
Parity-check error probability
Note that an error is not detected if an even number of errors has occurred
Assume (n−1)-bit word parity coding, i.e. an (n, n−1) code. Probability of error in a code word:
– a single error can be detected (parity changed)
– the probability of a two-bit error is $P_{we} = P(2,n)$, where
$$P(i,n) = \binom{n}{i} \alpha^i (1-\alpha)^{n-i}, \quad \alpha \ll 1$$
Note that for $\alpha \ll 1$ having more than two errors is highly unlikely, and thus we approximate the total error probability by
$$P_{we} \approx P(2,n) = \binom{n}{2}\alpha^2(1-\alpha)^{n-2} \approx \frac{n(n-1)}{2}\alpha^2$$
n-1 bit-word error probability
Without error correction we transmit (n−1)-bit words, which have a decoding error with the probability
$$p_{uwe} = 1 - P(0, n-1) = 1 - (1-\alpha)^{n-1} \approx (n-1)\alpha$$
where $P(0, n-1)$ is the probability of having no errors,
$$P(i,n) = \binom{n}{i}\alpha^i(1-\alpha)^{n-i}, \quad \binom{n}{i} = \frac{n!}{i!\,(n-i)!}, \quad \alpha \ll 1$$
and the simplification follows from neglecting the higher-order terms of the binomial expansion.
Comparing parity-check coding and repetitive coding
Hence we note that parity checking is a very efficient method of error detection. Example:
$$n = 10, \quad \alpha = 10^{-3}$$
$$p_{uwe} \approx (n-1)\alpha \approx 10^{-2}$$
$$p_{we} \approx n(n-1)\alpha^2/2 \approx 5 \cdot 10^{-5}$$
At the same time the information rate is reduced only to 9/10 of the uncoded rate
If (3,1) repetitive coding were used (repeating every bit three times), the code rate would drop to 1/3 and the error rate would be
$$p_e = \sum_{k=(n+1)/2}^{n} \binom{n}{k}\alpha^k(1-\alpha)^{n-k} \approx 3\alpha^2 \approx 3 \cdot 10^{-6}$$
Therefore parity-check coding is a very popular method of channel coding
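A numeric check of this comparison (a sketch, not from the slides; the values simply reproduce the example above):

```python
# (10,9) even-parity code vs. (3,1) repetition code at alpha = 1e-3.
from math import comb

alpha, n = 1e-3, 10
p_uwe = 1 - (1 - alpha)**(n - 1)          # uncoded 9-bit word error
p_we_parity = comb(n, 2) * alpha**2       # undetected double error ~ n(n-1)/2 a^2
p_we_rep = 3 * alpha**2 - 2 * alpha**3    # (3,1) repetition word error

print(f"uncoded:    {p_uwe:.2e}")         # ~ 9e-3 ~ 1e-2
print(f"parity:     {p_we_parity:.2e}")   # ~ 4.5e-5 ~ 5e-5, rate 9/10
print(f"repetition: {p_we_rep:.2e}")      # ~ 3e-6, but rate drops to 1/3
```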
Examples of block codes: a summary
(n,1) Repetition codes. High coding gain, but low rate
(n,k) Hamming codes. Minimum distance always 3; thus they can detect 2 errors and correct one error. $n = 2^m - 1$, $k = n - m$
Maximum-length codes. For every integer $m \ge 3$ there exists a maximum-length code with $n = 2^m - 1$, $k = m$, $d_{\min} = 2^{m-1}$
Golay codes. The Golay code is a binary code with $n = 23$, $k = 12$, $d_{\min} = 7$. This code can be extended by adding an extra parity bit to yield a (24,12) code with $d_{\min} = 8$. Other combinations of n and k have not been found.
BCH codes. For every integer $m \ge 3$ there exists a code with $n = 2^m - 1$, $k \ge n - mt$ and $d_{\min} \ge 2t + 1$, where t is the error correction capability
(n,k) Reed-Solomon (RS) codes. These work with k symbols of m bits each, encoded to yield code words of n symbols. For these codes $n = 2^m - 1$, the number of check symbols is $n - k = 2t$, and $d_{\min} = 2t + 1$
Nowadays BCH and RS codes are very popular due to their large $d_{\min}$, the large number of available codes, and easy generation
Generating block codes: Systematic block codes
In (n,k) block codes each sequence of k information bits is mapped into a sequence of n (> k) channel inputs in a fixed way, regardless of the previous information bits.
The formed code family should be selected such that the code minimum distance is as large as possible -> high error correction or detection capability
A systematic block code:
– the first k elements are the same as the message bits
– the following q = n − k bits are the check bits
Therefore the encoded word is
$$\mathbf{X} = (\underbrace{m_1\, m_2\, \ldots\, m_k}_{\text{message}}\ \underbrace{c_1\, c_2\, \ldots\, c_q}_{\text{check}}), \quad q = n - k$$
or, in partitioned representation,
$$\mathbf{X} = (\mathbf{M} \mid \mathbf{C})$$
Block codes by matrix representation
Given the message vector M, the respective linear, systematic block code X can be obtained by the matrix multiplication
$$\mathbf{X} = \mathbf{M}\mathbf{G}$$
The matrix G is the generator matrix, with the general structure
$$\mathbf{G} = (\mathbf{I}_k \mid \mathbf{P})$$
where $\mathbf{I}_k$ is the k×k identity matrix and P is a k×q binary submatrix that ultimately determines the generated codes:
$$\mathbf{P} = \begin{pmatrix} p_{11} & p_{12} & \cdots & p_{1q} \\ p_{21} & p_{22} & \cdots & p_{2q} \\ \vdots & \vdots & & \vdots \\ p_{k1} & p_{k2} & \cdots & p_{kq} \end{pmatrix}$$
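A minimal sketch of this encoding with numpy (my illustration, not from the slides; the P used is the Hamming-code submatrix introduced on a later slide):

```python
# Systematic encoding X = M G over GF(2), with G = (I_k | P).
import numpy as np

def make_generator(P):
    k = P.shape[0]
    return np.hstack([np.eye(k, dtype=int), P])   # G = (I_k | P)

def encode(M, G):
    return (M @ G) % 2                    # matrix product modulo 2

P = np.array([[1, 0, 1],
              [1, 1, 1],
              [1, 1, 0],
              [0, 1, 1]])                 # the Hamming P of a later slide
G = make_generator(P)
print(encode(np.array([[1, 0, 1, 1]]), G))   # message followed by check bits
```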
Generating block codes
For u message vectors M (each consisting of k bits) the respective n-bit block codes X are therefore determined by
$$\mathbf{X} = \mathbf{M}\mathbf{G}, \qquad \mathbf{G} = (\mathbf{I}_k \mid \mathbf{P}) = \begin{pmatrix} 1 & 0 & \cdots & 0 & p_{1,1} & p_{1,2} & \cdots & p_{1,q} \\ 0 & 1 & \cdots & 0 & p_{2,1} & p_{2,2} & \cdots & p_{2,q} \\ \vdots & & \ddots & & \vdots & & & \vdots \\ 0 & 0 & \cdots & 1 & p_{k,1} & p_{k,2} & \cdots & p_{k,q} \end{pmatrix}$$
$$\mathbf{M} = \begin{pmatrix} m_{1,1} & m_{1,2} & \cdots & m_{1,k} \\ m_{2,1} & m_{2,2} & \cdots & m_{2,k} \\ \vdots & & & \vdots \\ m_{u,1} & m_{u,2} & \cdots & m_{u,k} \end{pmatrix}$$
$$\mathbf{X} = \mathbf{M}\mathbf{G} = \begin{pmatrix} m_{1,1} & \cdots & m_{1,k} & c_{1,1} & c_{1,2} & \cdots & c_{1,q} \\ m_{2,1} & \cdots & m_{2,k} & c_{2,1} & c_{2,2} & \cdots & c_{2,q} \\ \vdots & & & \vdots & & & \vdots \\ m_{u,1} & \cdots & m_{u,k} & c_{u,1} & c_{u,2} & \cdots & c_{u,q} \end{pmatrix} = (\mathbf{M} \mid \mathbf{C})$$
The check bits are generated by the multiplication above; for instance for k = 4,
$$c_{u,2} = m_{u,1} p_{1,2} \oplus m_{u,2} p_{2,2} \oplus m_{u,3} p_{3,2} \oplus m_{u,4} p_{4,2}$$
Forming the P matrix
The check vector C that is appended to the message in the encoded word is thus determined by the multiplication
$$\mathbf{C} = \mathbf{M}\mathbf{P}$$
The j-th element of C on the u-th row is therefore encoded by
$$c_{u,j} = m_{u,1} p_{1,j} \oplus m_{u,2} p_{2,j} \oplus \cdots \oplus m_{u,k} p_{k,j}, \quad j = 1 \ldots q$$
For the Hamming code the P matrix of k rows consists of all the q-bit words with two or more "1"s, arranged in any order! Hence P can be for instance
$$\mathbf{P} = \begin{pmatrix} 1 & 0 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}$$
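This row set can be enumerated directly (a sketch of mine, not from the slides):

```python
# For a Hamming code with q = 3 check bits, the rows of P are exactly the
# q-bit words with two or more ones; enumerate them as stated above.
from itertools import product

q = 3
rows = [w for w in product([0, 1], repeat=q) if sum(w) >= 2]
print(rows)   # [(0,1,1), (1,0,1), (1,1,0), (1,1,1)] -> any row order is valid
```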
Generating a Hamming code: An example
For the Hamming codes $n = 2^q - 1$, $k = n - q$, $d_{\min} = 3$
Take the systematic (n,k) Hamming code with q = 3 (the number of check bits), $n = 2^3 - 1 = 7$ and $k = n - q = 7 - 3 = 4$. Therefore the generator matrix is
$$\mathbf{G} = (\mathbf{I}_4 \mid \mathbf{P}) = \begin{pmatrix} 1 & 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 1 & 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 1 & 1 \end{pmatrix}$$
Note that in the Hamming code the three last columns make up the P submatrix, including all the 3-bit words that have 2 or more "1"s.
For a physical realization of the encoder we now assume that the message contains the bits $\mathbf{M} = (m_1\ m_2\ m_3\ m_4)$
Realizing a (7,4) Hamming code encoder
For these four message bits we have a four-element message register implementation:
[Diagram: four-element message register encoder with XOR gates producing the check bits]
Note that here the check bits $[c_1, c_2, c_3]$ are obtained by substituting the elements of P into the equation $\mathbf{C} = \mathbf{M}\mathbf{P}$, or
$$c_j = m_1 p_{1j} \oplus m_2 p_{2j} \oplus \cdots \oplus m_k p_{kj}$$
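A bit-level sketch of this check-bit computation (my illustration of the register realization, not the slide's circuit itself):

```python
# Check bits c_j = m1 p1j ^ m2 p2j ^ m3 p3j ^ m4 p4j for the (7,4)
# Hamming code, mirroring the XOR-gate register realization above.
P = [[1, 0, 1],
     [1, 1, 1],
     [1, 1, 0],
     [0, 1, 1]]

def check_bits(m):                      # m = [m1, m2, m3, m4]
    c = [0, 0, 0]
    for i in range(4):                  # feed each message bit through
        for j in range(3):
            c[j] ^= m[i] & P[i][j]      # XOR accumulates the modulo-2 sums
    return c

m = [1, 0, 1, 1]
print(m + check_bits(m))                # systematic code word (M | C)
```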
Listing generated Hamming codes
Going through all the combinations of the input message vector M yields all the possible output vectors
[Table: the 16 code words of the (7,4) Hamming code]
Note that for the Hamming codes the minimum distance equals the minimum weight w = 3 (the smallest number of "1"s in any nonzero code word in the list)
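The listing can be generated and checked programmatically (a self-contained sketch, not from the slides):

```python
# Enumerate all 16 code words of the (7,4) Hamming code and verify that
# the minimum weight (= minimum distance of a linear code) is 3.
from itertools import product

P = [[1, 0, 1], [1, 1, 1], [1, 1, 0], [0, 1, 1]]

def encode(m):                       # systematic (7,4) encoding, word = (M | C)
    c = [sum(m[i] * P[i][j] for i in range(4)) % 2 for j in range(3)]
    return tuple(m) + tuple(c)

codewords = [encode(list(m)) for m in product([0, 1], repeat=4)]
w_min = min(sum(cw) for cw in codewords if any(cw))
print(len(codewords), w_min)         # 16 code words, minimum weight 3
```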
Decoding block codes
A brute-force method for error correction of a block code is to compare the received word to all possible code words of the same length and to choose the one with the minimum Hamming distance to the received word.
In practice the applied codes can be very long, and the extensive comparison would require much time and memory. For instance, to get a code rate of 9/10 with a Hamming code it is required that
$$\frac{k}{n} = \frac{n - q}{n} = \frac{2^q - 1 - q}{2^q - 1} \ge \frac{9}{10}$$
This is fulfilled when q = 6, i.e. for the code with k = 57 and n = 63. There are then $2^k \approx 1.4 \cdot 10^{17}$ different code words in this case! Decoding by direct comparison would be quite impractical!
This approach of comparing the Hamming distance of the received word to all the possible code words, and selecting the closest one, is maximum likelihood detection; it will be discussed more with convolutional codes
Syndrome decoding for error detection
In syndrome decoding a parity-check matrix H is designed such that multiplication with any code word produces the all-zero vector:
$$\mathbf{X}\mathbf{H}^T = (0\ 0\ \cdots\ 0)$$
Therefore error detection of the received word Y can be based on the syndrome
$$\mathbf{S} = \mathbf{Y}\mathbf{H}^T$$
which is always zero when a (correct) code word is received. (Note that the syndrome does not reveal errors if the channel noise has produced another code word!)
The parity-check matrix is determined by
$$\mathbf{H} = (\mathbf{P}^T \mid \mathbf{I}_q) \quad \text{or} \quad \mathbf{H}^T = \begin{pmatrix} \mathbf{P} \\ \mathbf{I}_q \end{pmatrix}$$
If the parity-check matrix is designed such that the rows of $\mathbf{H}^T$ are all different and each contains at least one "1", a distinct syndrome is obtained for each single-error pattern -> enables error correction!
Syndrome decoding for error correction
Syndrome decoding can be used for error correction by tabulating the one-bit error pattern for each syndrome:
Example: Consider a (7,4) Hamming code with the parity-check matrix
$$\mathbf{H} = (\mathbf{P}^T \mid \mathbf{I}_3) = \begin{pmatrix} 1 & 1 & 1 & 0 & 1 & 0 & 0 \\ 0 & 1 & 1 & 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1 & 0 & 0 & 1 \end{pmatrix}$$
The respective syndromes and error vectors (showing the position of the error by a "1") follow from
$$\mathbf{S} = \hat{\mathbf{e}}\mathbf{H}^T = \mathbf{Y}\mathbf{H}^T$$
where Y is any valid code word with an error in the position indicated by the respective syndrome
[Table: the seven single-bit error patterns and their syndromes]
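A table-lookup decoder for exactly this H can be sketched as follows (my illustration, not the slide's circuit; it assumes at most one bit error per received word):

```python
# Table-lookup syndrome decoding for the (7,4) Hamming code.
import numpy as np

H = np.array([[1, 1, 1, 0, 1, 0, 0],
              [0, 1, 1, 1, 0, 1, 0],
              [1, 1, 0, 1, 0, 0, 1]])

# Build the syndrome table: one single-bit error pattern per syndrome.
table = {}
for pos in range(7):
    e = np.zeros(7, dtype=int)
    e[pos] = 1
    table[tuple((e @ H.T) % 2)] = e

def correct(y):
    y = np.asarray(y)
    s = tuple((y @ H.T) % 2)
    if s == (0, 0, 0):
        return y                      # zero syndrome: accept as a code word
    return (y + table[s]) % 2         # flip the bit the syndrome points to

x = np.array([1, 0, 1, 1, 0, 0, 0])  # valid code word (from the encoder above)
y = x.copy()
y[2] ^= 1                            # inject a single bit error
print(correct(y))                    # recovers x
```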
Syndrome is independent of code words
This design ensures that the syndrome depends entirely on the error pattern, not on the particular code word. Consider for instance
$$\mathbf{X} = (1\ 0\ 1\ 1\ 0) \to \mathbf{Y} = (1\ 0\ 0\ 1\ 1), \quad \mathbf{E} = (0\ 0\ 1\ 0\ 1) \quad (\mathbf{Y} = \mathbf{X} \oplus \mathbf{E})$$
$$\mathbf{S} = \mathbf{Y}\mathbf{H}^T = (\mathbf{X} \oplus \mathbf{E})\mathbf{H}^T = \mathbf{X}\mathbf{H}^T \oplus \mathbf{E}\mathbf{H}^T = \mathbf{E}\mathbf{H}^T$$
since $\mathbf{X}\mathbf{H}^T = \mathbf{0}$, which follows from the definition of H
The syndrome does not determine the error pattern uniquely, because there exist only $2^q$ different syndromes (the syndrome length is q) but $2^k$ different code words (one for each message that must be encoded).
After error-correction decoding, double errors can even turn into triple errors
Therefore syndrome decoding is efficient when channel errors are not too likely, i.e. the probability of double errors must be small.
For difficult channels there are more elaborate schemes using, for instance, extended Hamming codes or maximum likelihood methods (such as Viterbi decoding)
Table lookup syndrome decoder circuit
The error vector is used for error correction by the circuit shown below:
[Diagram: table-lookup syndrome decoder with error subtraction, based on $\mathbf{S} = \mathbf{Y}\mathbf{H}^T$ and $\mathbf{H}^T = \begin{pmatrix} \mathbf{P} \\ \mathbf{I}_q \end{pmatrix}$]
Error rate in a modulated and channel coded system
Assume:
– $t = \lfloor (d_{\min} - 1)/2 \rfloor$ errors are corrected (an upper bound, not always achieved, e.g. in syndrome decoding)
– an additive white Gaussian noise channel (AWGN; the error statistics in the received encoded words are the same for each bit)
– the channel error probability is small (used to simplify the relationship between word and bit errors)
Bit and symbol error rate
The transmission error rate $\alpha$ is a function of the channel signal and noise powers. We will note later that for coherent BPSK (binary phase shift keying) the bit error probability is
$$\alpha = Q(\sqrt{2\gamma_b}), \quad \gamma_b = E_b/\eta, \quad Q(k) = \frac{1}{\sqrt{2\pi}} \int_k^\infty \exp(-\lambda^2/2)\, d\lambda$$
where $E_b$ is the transmitted energy per bit and $\eta$ is the channel noise power spectral density [W/Hz].
Due to the coding, the energy per transmitted symbol is decreased, and hence for a system using an (n,k) code with the rate $R_C$ the channel error rate is
$$\alpha = Q(\sqrt{2\gamma_C}), \quad \gamma_C = \frac{k}{n}\gamma_b = R_C \gamma_b$$
Note that $\gamma_C \le \gamma_b$: there is no code-gain effect here.
However, coding can improve the symbol error rate after decoding (= code gain)
Bit errors and word errors
It is not self-evident which one plays the more important role for symbol errors: the energy decrease per symbol in the channel, or the coding gain. Thus for certain channel noise levels coding might even be harmful.
Coding can correct up to $t = \lfloor (d_{\min} - 1)/2 \rfloor$ errors. Therefore the decoding error rate is upper bounded by
$$P_{we} \le 1 - \sum_{i=0}^{t} \binom{n}{i}\alpha^i(1-\alpha)^{n-i} = \sum_{i=t+1}^{n} \binom{n}{i}\alpha^i(1-\alpha)^{n-i} \approx \binom{n}{t+1}\alpha^{t+1}(1-\alpha)^{n-t-1}$$
where the simplification follows because the higher terms of the summation are less significant in high-SNR channels, i.e. when $\alpha \to 0$
Note that this means that, on average, each unsuccessful (in-error) coded word contains t+1 erroneous bits
Bit errors and word errors, cont.
If there were no ability to correct errors, the word error probability would be n times the bit error probability, or
$$p'_{we} \approx n\, p_{be} \quad \text{(the mean of the binomial distribution)}$$
However, error correction decreases the word error rate to $p_{we} \approx \binom{n}{t+1}\alpha^{t+1}(1-\alpha)^{n-t-1}$ and, since each failed word contains on average t+1 bit errors, the encoded system bit error probability is
$$p_{be} \approx \frac{(t+1)\, p_{we}}{n} = \frac{t+1}{n}\binom{n}{t+1}\alpha^{t+1}(1-\alpha)^{n-t-1}$$
where
$$\alpha = Q(\sqrt{2\gamma_C}), \quad \gamma_C = \frac{k E_b}{n \eta}, \quad Q(k) = \frac{1}{\sqrt{2\pi}}\int_k^\infty \exp(-\lambda^2/2)\, d\lambda$$
k Eb
C
nh
1
2
Q(k )
exp
/ 2
k
2
Error rate comparison
Example: $R_C = 11/15$, $d_{\min} = 3$, $t = 1$. The error rate expression was
$$p_{be} \approx \frac{(t+1)\, p_{we}}{n} = \frac{t+1}{n}\binom{n}{t+1}\alpha^{t+1}(1-\alpha)^{n-t-1}$$
where for BPSK
$$\alpha = Q(\sqrt{2\gamma_C}), \quad \gamma_C = R_C \gamma_b, \quad \gamma_b = E_b/\eta, \quad R_C = k/n$$
For the respective uncoded system (polar signaling, matched-filter detection) the error rate was
$$p_{ube} = Q(\sqrt{2\gamma_b})$$
[Plot: error rates versus SNR. Curves: $P_{be}$ (error rate with coding), $P_{ube}$ (error rate without any coding), and the channel error rate $\alpha$ (with the coding energy penalty but excluding code gain)]
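A sketch reproducing this comparison numerically (not from the slides; the SNR grid is an illustrative assumption, and $R_C = 11/15$ with t = 1 corresponds, e.g., to a (15,11) Hamming code):

```python
# Coded vs. uncoded bit error rate for a d_min = 3, t = 1 code of rate
# R_C = 11/15 under coherent BPSK in AWGN.
from math import comb, erfc, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

n, k, t = 15, 11, 1
for gamma_b_dB in [4, 6, 8, 10]:
    gamma_b = 10 ** (gamma_b_dB / 10)
    alpha = Q(sqrt(2 * (k / n) * gamma_b))             # channel error, coded
    p_we = comb(n, t + 1) * alpha**(t + 1) * (1 - alpha)**(n - t - 1)
    p_be = (t + 1) * p_we / n                          # decoded bit error rate
    p_ube = Q(sqrt(2 * gamma_b))                       # uncoded bit error rate
    print(f"{gamma_b_dB:>2} dB: coded {p_be:.2e}, uncoded {p_ube:.2e}")
```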