Chapter 5 Signal-Space Analysis
Signal-space analysis provides a mathematically elegant and highly insightful tool for the study of data transmission.
5.1 Introduction
- Statistical model for a generic digital communication system
- Message source: a priori probabilities for the information source,
  $p_i = P(m_i)$ for $i = 1, 2, \ldots, M$
- Transmitter: the transmitter takes the message source output mi and (en-)codes it into a distinct signal si(t) suitable for transmission over the channel. So:
  $p_i = P(m_i) = P(s_i(t))$ for $i = 1, 2, \ldots, M$
- si(t) must be a real-valued energy signal (i.e., a signal with finite energy) with duration T:
  $E_i = \int_0^T s_i^2(t)\,dt < \infty.$
- Channel: the channel is assumed (in this text) linear and with a bandwidth wide enough to pass si(t) with no distortion.
  - Zero-mean additive white Gaussian noise (AWGN) is also assumed (to facilitate the analysis).
5.1 A Mathematical Model
- We can then simplify the previous system block diagram to: [Figure: simplified block diagram of message source, transmitter, channel, and receiver]
- Upon receipt of x(t) over a duration T, the receiver makes the best estimate of mi. (We have not yet defined what "best" means.)
5.1 Criterion for the “Best” Decision
- Best = minimization of the average probability of symbol error,
  $P_e = \sum_{i=1}^{M} p_i \, P(\hat{m} \neq m_i \mid m_i)$
- This is optimum in the minimum-probability-of-error sense.
- Based on this criterion, we can begin to design the receiver that gives the best decision.
5.2 Geometric Representation of Signals
- Signal space concept
  - Vectorization of the (discrete or continuous) signals removes the redundancy in the signals and provides a compact representation for them.
- Determination of the vectorization basis
  - Gram-Schmidt orthogonalization procedure
5.2 Gram-Schmidt Orthogonalization Procedure
Given $\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_k$, how to find an orthonormal basis for them?

(step i) Let $\vec{u}_1 = \dfrac{\vec{v}_1}{\|\vec{v}_1\|}$.

(step ii) $\vec{u}_2' = \vec{v}_2 - (\vec{v}_2 \cdot \vec{u}_1)\,\vec{u}_1$. Set $\vec{u}_2 = \dfrac{\vec{u}_2'}{\|\vec{u}_2'\|}$.

(step iii) For $i = 3, 4, \ldots$, let
$\vec{u}_i' = \vec{v}_i - (\vec{v}_i \cdot \vec{u}_{i-1})\,\vec{u}_{i-1} - (\vec{v}_i \cdot \vec{u}_{i-2})\,\vec{u}_{i-2} - \cdots - (\vec{v}_i \cdot \vec{u}_1)\,\vec{u}_1.$
Set $\vec{u}_i = \dfrac{\vec{u}_i'}{\|\vec{u}_i'\|}$.

(step iv) Then $\{\vec{u}_1, \vec{u}_2, \ldots, \vec{u}_k\}$ form an orthonormal basis.
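As a concrete illustration (added here, not from the text), the following is a minimal NumPy sketch of the procedure above; the three example vectors are arbitrary.

```python
import numpy as np

def gram_schmidt(vectors, tol=1e-10):
    """Return an orthonormal basis for the span of the rows of `vectors`."""
    basis = []
    for v in vectors:
        u = v.astype(float)
        for b in basis:
            u = u - np.dot(v, b) * b   # subtract the projection onto each u_j found so far
        norm = np.linalg.norm(u)
        if norm > tol:                 # a (near-)zero residual means v was dependent
            basis.append(u / norm)
    return np.array(basis)

# Example: three linearly independent vectors in R^3.
v = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
U = gram_schmidt(v)
print(np.round(U @ U.T, 12))           # identity matrix => orthonormal basis
```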
Properties:
(i) vector: $\vec{v} = (v_1, \ldots, v_n)$
(ii) inner product: $\vec{v}_1 \cdot \vec{v}_2 = \sum_{i=1}^{n} v_{1i} v_{2i}$
(iii) orthogonal, if the inner product = 0.
(iv) norm: $\|\vec{v}\| = \sqrt{v_1^2 + \cdots + v_n^2}$
(v) orthonormal, if the inner product = 0 and the individual norms = 1.
(vi) linearly independent, if none can be represented as a linear combination of the others.
(vii) triangle inequality: $\|\vec{v}_1 + \vec{v}_2\| \le \|\vec{v}_1\| + \|\vec{v}_2\|$.
(viii) Cauchy-Schwarz inequality:
$|\vec{v}_1 \cdot \vec{v}_2| \le \|\vec{v}_1\| \cdot \|\vec{v}_2\|,$
with equality if $\vec{v}_1 = a\,\vec{v}_2$.
(ix) norm square:
$\|\vec{v}_1 + \vec{v}_2\|^2 = \|\vec{v}_1\|^2 + \|\vec{v}_2\|^2 + 2\,\vec{v}_1 \cdot \vec{v}_2.$
(x) Pythagorean property: if orthogonal,
$\|\vec{v}_1 + \vec{v}_2\|^2 = \|\vec{v}_1\|^2 + \|\vec{v}_2\|^2.$
(xi) matrix transformation w.r.t. a matrix $A$: $\vec{v}_1 = A\vec{v}_2$.
(xii) eigenvalues w.r.t. a matrix $A$: solutions $\lambda$ of $\det(A - \lambda I) = 0$.
(xiii) eigenvectors w.r.t. an eigenvalue $\lambda$: solutions $\vec{v}$ of $A\vec{v} = \lambda\vec{v}$.
5.2 Signal Space Concept for Continuous Functions
Properties for continuous functions:
(i) (complex-valued) signal: $z(t)$
(ii) inner product: $\langle z(t), \hat{z}(t) \rangle = \int_a^b z(t)\,\hat{z}^*(t)\,dt$
(iii) orthogonal, if the inner product = 0.
(iv) norm: $\|z(t)\| = \sqrt{\int_a^b |z(t)|^2\,dt}$
(v) orthonormal, if the inner product = 0 and the individual norms = 1.
(vi) linearly independent, if none can be represented as a linear combination of the others.
(vii) triangle inequality: $\|z(t) + \hat{z}(t)\| \le \|z(t)\| + \|\hat{z}(t)\|$.
(viii) Cauchy-Schwarz inequality:
$|\langle z(t), \hat{z}(t) \rangle| \le \|z(t)\| \cdot \|\hat{z}(t)\|,$
with equality if $z(t) = a \cdot \hat{z}(t)$ ($a$ is a complex number).
(ix) norm square:
$\|z(t) + \hat{z}(t)\|^2 = \|z(t)\|^2 + \|\hat{z}(t)\|^2 + \langle z(t), \hat{z}(t)\rangle + \langle \hat{z}(t), z(t)\rangle.$
(x) Pythagorean property: if orthogonal, $\|z(t) + \hat{z}(t)\|^2 = \|z(t)\|^2 + \|\hat{z}(t)\|^2$.
(xi) transformation w.r.t. a function $C(t, \tau)$:
$\hat{z}(t) = \int_a^b C(t, \tau)\, z(\tau)\,d\tau$
(Recall the discrete counterpart $v_{1j} = \sum_{i=1}^{n} a_{ji} v_{2i}$.)
(xii.a) eigenvalues and eigenfunctions w.r.t. a continuous function $C(t, \tau)$:
solutions $\{\lambda_k\}$ and $\{\phi_k(t)\}_{k=1}^{\infty}$ of $\lambda_k \phi_k(t) = \int_a^b C(t, \tau)\,\phi_k(\tau)\,d\tau,$
and $C(t, \tau)$ can be represented as $C(t, \tau) = \sum_{k=1}^{\infty} \phi_k(t)\,\lambda_k\,\phi_k(\tau)$.
(xii.b) Given a deterministic function $\{s(t), t \in [0, T)\}$ and a set of orthonormal basis functions $\{\phi_k(t)\}_{k=1}^{\infty}$ that can span $s(t)$. Then
$s(t) = \sum_{k=1}^{\infty} a_k \phi_k(t), \quad 0 \le t \le T,$
where $a_k = \int_0^T s(t)\,\phi_k(t)\,dt$.
(xii.c) If the orthonormal set $\{\phi_k(t)\}_{k=1}^{K}$ does not span the space, then it is possible that
$\hat{s}(t) = \sum_{k=1}^{K} a_k \phi_k(t) \neq s(t)$ for all choices of $\{a_k\}_{k=1}^{K}$.
- Problem: how to minimize the "energy" of $e(t) = s(t) - \hat{s}(t)$?

To select the coefficients $\{a_k\}$ that minimize $\int e^2(t)\,dt$, set the partial derivatives to zero:

$\dfrac{\partial}{\partial a_j} \int e^2(t)\,dt = \dfrac{\partial}{\partial a_j} \int \Big[ s(t) - \sum_{k=1}^{K} a_k \phi_k(t) \Big]^2 dt$
$= -2 \int \Big( s(t) - \sum_{k=1}^{K} a_k \phi_k(t) \Big)\, \phi_j(t)\,dt$
$= -2 \int s(t)\,\phi_j(t)\,dt + 2 \sum_{k=1}^{K} a_k \int \phi_k(t)\,\phi_j(t)\,dt$
$= -2 \int s(t)\,\phi_j(t)\,dt + 2 a_j = 0 \;\Rightarrow\; a_j = \int s(t)\,\phi_j(t)\,dt.$
[Figure: s(t), its projection $\hat{s}(t)$, and the error e(t) in the plane spanned by $\phi_1(t)$ and $\phi_2(t)$, with coordinates $a_1 = \langle s(t), \phi_1(t)\rangle$ and $a_2 = \langle s(t), \phi_2(t)\rangle$]

Hence, $\langle e(t), \hat{s}(t) \rangle = \int e(t)\,\hat{s}(t)\,dt = 0.$

- Interpretation
  - $a_j$ is the projection of s(t) onto the $\phi_j(t)$-axis.
  - $a_j^2$ is the energy of the projection of s(t) onto the $\phi_j(t)$-axis.
$\int e^2(t)\,dt = \int e(t)\,[s(t) - \hat{s}(t)]\,dt = \int e(t)\,s(t)\,dt - \int e(t)\,\hat{s}(t)\,dt$
$= \int e(t)\,s(t)\,dt - 0 = \int [s(t) - \hat{s}(t)]\,s(t)\,dt$
$= \int s^2(t)\,dt - \int \Big( \sum_{k=1}^{K} a_k \phi_k(t) \Big)\, s(t)\,dt$
$= \int s^2(t)\,dt - \sum_{k=1}^{K} a_k \int s(t)\,\phi_k(t)\,dt$
$= \int s^2(t)\,dt - \sum_{k=1}^{K} a_k^2$

Notably, $\int \hat{s}^2(t)\,dt = \sum_{k=1}^{K} a_k^2$.
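A numerical sanity check (added here, not from the text) of the two identities above, using Riemann-sum integration on a sampled waveform; the triangle-wave s(t) and the two-term cosine basis are illustrative assumptions.

```python
import numpy as np

T, n = 1.0, 10000
t = np.linspace(0.0, T, n, endpoint=False)
dt = T / n

# Orthonormal basis on [0, T): phi_k(t) = sqrt(2/T) cos(2 pi k t / T), k = 1, 2.
phi = [np.sqrt(2.0 / T) * np.cos(2.0 * np.pi * k * t / T) for k in (1, 2)]

s = np.minimum(t, T - t)                       # test signal: a triangle wave

a = [np.sum(s * p) * dt for p in phi]          # a_k = integral of s(t) phi_k(t) dt
s_hat = sum(ak * p for ak, p in zip(a, phi))   # K-term approximation s_hat(t)
e = s - s_hat                                  # approximation error e(t)

print(np.sum(e * s_hat) * dt)                  # ~ 0: e(t) is orthogonal to s_hat(t)
print(np.sum(e ** 2) * dt)                     # energy of the error ...
print(np.sum(s ** 2) * dt - sum(ak ** 2 for ak in a))  # ... equals int s^2 - sum a_k^2
```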
- Completeness
  - If every finite-energy signal satisfies
    $\int s^2(t)\,dt = \sum_{k=1}^{K} a_k^2,$
    then $\{\phi_k(t)\}_{k=1}^{K}$ is a complete orthonormal set.
- Example. The Fourier series basis
  $\left\{ \sqrt{\tfrac{2}{T}} \cos\!\left(\tfrac{2\pi k t}{T}\right),\; \sqrt{\tfrac{2}{T}} \sin\!\left(\tfrac{2\pi k t}{T}\right) \right\}_{0 \le k < \infty}$
  is a complete orthonormal set for signals defined over $[0, T]$.
5.2 Gram-Schmidt Orthogonalization Procedure
Given $v_1(t), v_2(t), \ldots, v_k(t)$, how to find an orthonormal basis for them?
(step i) Let $u_1(t) = \dfrac{v_1(t)}{\|v_1(t)\|}$.
(step ii) $u_2'(t) = v_2(t) - \langle v_2(t), u_1(t) \rangle\, u_1(t)$. Set $u_2(t) = \dfrac{u_2'(t)}{\|u_2'(t)\|}$.
(step iii) For $i = 3, 4, \ldots$, let
$u_i'(t) = v_i(t) - \langle v_i(t), u_{i-1}(t) \rangle\, u_{i-1}(t) - \cdots - \langle v_i(t), u_1(t) \rangle\, u_1(t).$
Set $u_i(t) = \dfrac{u_i'(t)}{\|u_i'(t)\|}$.
(step iv) Then $\{u_1(t), u_2(t), \ldots, u_k(t)\}$ form an orthonormal basis.
5.2 Geometric Representation of Signals
$s_i(t) = \sum_{j=1}^{N} s_{ij} f_j(t), \quad 0 \le t \le T, \quad i = 1, 2, \ldots, M,$
where $\{f_j\}_{j=1}^{N}$ is orthonormal and
$s_{ij} = \int_0^T s_i(t)\, f_j(t)\,dt, \quad i = 1, 2, \ldots, M, \quad j = 1, 2, \ldots, N.$
- Through the signal space concept, si(t) (where 1 ≤ i ≤ M) can be unambiguously represented by an N-dimensional signal vector (si1, si2, …, siN) over an N-dimensional signal space.
- The design of transmitters becomes the selection of M points over the signal space, and the receivers make a guess about which of the M points was transmitted.
- In the N-dimensional signal space,
  - length of the vector = energy of the signal:
    $\|s_i(t)\|^2 = \int_0^T s_i^2(t)\,dt = \sum_{j=1}^{N} s_{ij}^2$
  - angle between vectors = correlation between signals:
    $\cos(\theta_{ik}) = \dfrac{\langle s_i(t), s_k(t) \rangle}{\|s_i(t)\| \cdot \|s_k(t)\|}$
- The angle between vectors is independent of the basis used.
- From this view (see the sketch below),
  - the transmitter may be viewed as a synthesizer, which synthesizes the transmitted signal with a bank of N multipliers;
  - the receiver may be viewed as an analyzer, which correlates (product-integrates) the common input against each basis function to extract the individual signal components.
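The synthesizer/analyzer view can be sketched numerically as follows (an added illustration; the rectangular-pulse basis and the message point are arbitrary assumptions).

```python
import numpy as np

T, n, N = 1.0, 1000, 2
t = np.linspace(0.0, T, n, endpoint=False)
dt = T / n

# Orthonormal basis: two half-interval rectangular pulses of height sqrt(2/T).
f = np.zeros((N, n))
f[0, : n // 2] = np.sqrt(2.0 / T)
f[1, n // 2 :] = np.sqrt(2.0 / T)

s_vec = np.array([3.0, -1.0])     # message point (s_i1, s_i2)

s_t = s_vec @ f                   # synthesizer: bank of N multipliers plus a summer
recovered = f @ s_t * dt          # analyzer: N correlators (product-integrators)
print(recovered)                  # -> [ 3. -1.]
```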
5.2 Geometric Representation of Energy Signals
- Illustration of the geometric representation of signals for the case when N = 2 and M = 3. [Figure: three message points in a two-dimensional signal space]
5.2 Euclidean Distance
- After vectorization, we can then calculate the Euclidean distance between two signals:
  $\int_0^T (s_i(t) - s_k(t))^2\,dt = \|s_i(t) - s_k(t)\|^2 = \sum_{j=1}^{N} (s_{ij} - s_{kj})^2$
- Kronecker delta function: $\delta_{ij} = \begin{cases} 1, & i = j \\ 0, & i \neq j \end{cases}$
- Application: we may say that orthonormality means $\langle \phi_i(t), \phi_j(t) \rangle = \delta_{ij}$.
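An added check that the waveform-domain and vector-domain distances coincide, reusing the illustrative rectangular basis from the earlier sketch; the two message points are arbitrary.

```python
import numpy as np

T, n = 1.0, 1000
t = np.linspace(0.0, T, n, endpoint=False)
dt = T / n

f = np.zeros((2, n))                      # same rectangular basis as above
f[0, : n // 2] = np.sqrt(2.0 / T)
f[1, n // 2 :] = np.sqrt(2.0 / T)

si, sk = np.array([3.0, -1.0]), np.array([1.0, 2.0])   # two message points

waveform_dist2 = np.sum((si @ f - sk @ f) ** 2) * dt   # int (s_i(t) - s_k(t))^2 dt
vector_dist2 = np.sum((si - sk) ** 2)                  # sum_j (s_ij - s_kj)^2
print(waveform_dist2, vector_dist2)                    # both 13.0
```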
Example 5.1 Schwarz Inequality
- Cauchy-Schwarz inequality and the angle between signals
- The Cauchy-Schwarz inequality states that
  $\langle s_1(t), s_2(t) \rangle^2 \le \|s_1(t)\|^2 \cdot \|s_2(t)\|^2,$ with equality if $s_1(t) = c\, s_2(t)$.
- Also, the angle between signals is given by
  $\cos(\theta_{12}) = \dfrac{\langle s_1(t), s_2(t) \rangle}{\|s_1(t)\| \cdot \|s_2(t)\|}$
- Hence, the Cauchy-Schwarz inequality is equivalently stated as:
  $|\cos(\theta_{12})|^2 \le 1,$ with equality if $\theta_{12} = 0$ or $\pi$.
5.2 Basis
- The (complete) orthonormal basis for a signal space is not unique!
  - So the synthesizer and analyzer for the transmission of the same informational messages are not unique!
- One way to determine an orthonormal basis is the Gram-Schmidt orthogonalization procedure.
  - Try and practice Example 5.2 yourself!
5.3 Conversion of the Continuous AWGN Channel
into a Vector Channel
- Influence of the AWGN noise on the signal space concept:
  $x(t) = s_i(t) + w(t)$
  where w(t) is zero-mean AWGN with PSD N0/2.
- After the correlators at the receiver, we obtain:
  $\langle x(t), f_j(t) \rangle = \langle s_i(t), f_j(t) \rangle + \langle w(t), f_j(t) \rangle$
- Or equivalently, $x_j = s_{ij} + w_j$.
- Notably, there is no information loss from the signal space representation.

[Figure: a bank of correlators computes x1, …, xN from x(t) against the basis functions f1(t), …, fN(t)]
- Statistics of {wj}:
  $\begin{bmatrix} x_1 \\ \vdots \\ x_N \end{bmatrix} = \begin{bmatrix} s_{i1} \\ \vdots \\ s_{iN} \end{bmatrix} + \begin{bmatrix} w_1 \\ \vdots \\ w_N \end{bmatrix}$
- Since {sij} is deterministic, the distribution of x is a mean-shift of that of w.
- Observe that w is Gaussian distributed because w(t) is AWGN. The distribution of w can therefore be determined by its mean vector and covariance matrix.
- Mean:
  $E[w_j] = E\!\left[ \int_0^T w(t)\, f_j(t)\,dt \right] = \int_0^T E[w(t)]\, f_j(t)\,dt = 0$
- Covariance:
  $E[w_i w_j] = E\!\left[ \int_0^T w(s)\, f_i(s)\,ds \int_0^T w(t)\, f_j(t)\,dt \right]$
  $= \int_0^T \int_0^T E[w(s)\,w(t)]\, f_i(s)\, f_j(t)\,ds\,dt$
  $= \int_0^T \int_0^T \frac{N_0}{2}\,\delta(s - t)\, f_i(s)\, f_j(t)\,ds\,dt$
  $= \frac{N_0}{2} \int_0^T f_i(t)\, f_j(t)\,dt = \frac{N_0}{2}\,\delta_{ij}$
- As a result, [w1, w2, …, wN] are zero-mean i.i.d. Gaussian distributed with variance N0/2 (see the numerical check below).
- This shows that x is independent Gaussian distributed with common variance N0/2 and mean vector si = [si1, si2, …, siN]. Equivalently,
  $f(\mathbf{x} \mid s_i) = \prod_{j=1}^{N} \frac{1}{\sqrt{\pi N_0}} \exp\!\left( -\frac{1}{N_0} (x_j - s_{ij})^2 \right)$
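A Monte Carlo sanity check (an added illustration) of these statistics: white noise is approximated by discrete samples of variance (N0/2)/dt, a standard discretization assumption, and the cosine/sine basis pair is an arbitrary choice.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n, N0, trials = 1.0, 200, 2.0, 20000
dt = T / n
t = np.linspace(0.0, T, n, endpoint=False)

# An orthonormal basis pair over [0, T).
f1 = np.sqrt(2.0 / T) * np.cos(2.0 * np.pi * t / T)
f2 = np.sqrt(2.0 / T) * np.sin(2.0 * np.pi * t / T)

# Discrete-time stand-in for AWGN: sample variance (N0/2)/dt approximates PSD N0/2.
w = rng.normal(0.0, np.sqrt((N0 / 2.0) / dt), size=(trials, n))

w1 = w @ f1 * dt                  # w_j = integral of w(t) f_j(t) dt
w2 = w @ f2 * dt

print(w1.mean(), w1.var())        # ~ 0 and ~ N0/2 = 1.0
print(np.mean(w1 * w2))           # ~ 0: uncorrelated, hence independent (Gaussian)
```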
- Remainder term in noise
  - It is possible that
    $w'(t) = w(t) - \sum_{i=1}^{N} w_i f_i(t) \neq 0$
  - However, it can be shown that w'(t) is orthogonal to si(t) for 1 ≤ i ≤ M. Hence, w'(t) will not affect the decision error rate on message i:
    $\langle w'(t), s_i(t) \rangle = 0$ with probability 1.
5.4 Likelihood Functions
- An equivalent signal-space channel model:
  $m = m_i,\ 1 \le i \le M \;\rightarrow\; \mathbf{s} = c(m) \;\rightarrow\; \mathbf{x} = \mathbf{s} + \mathbf{w} \;\rightarrow\; \hat{m} = d(\mathbf{x}) \in \{m_1, \ldots, m_M\}$
- The best decision function d(·) that minimizes the decision error is:
  $d(\mathbf{x}) = m_i, \text{ if } P\{m_i \mid \mathbf{x}\} \ge P\{m_k \mid \mathbf{x}\} \text{ for all } 1 \le k \le M,$
  i.e., $d(\mathbf{x}) = \arg\max_{m \in \{m_1, \ldots, m_M\}} P\{m \mid \mathbf{x}\}$
- This is the maximum a posteriori probability (MAP) decision rule.
- With equal prior probabilities,
  $d(\mathbf{x}) = \arg\max_{m \in \{m_1, \ldots, m_M\}} P\{m \mid \mathbf{x}\}$
  $= \arg\max \left\{ P\{m_1 \mid \mathbf{x}\}, P\{m_2 \mid \mathbf{x}\}, \ldots, P\{m_M \mid \mathbf{x}\} \right\}$
  $= \arg\max \left\{ \frac{P\{m_1 \mid \mathbf{x}\}\, f(\mathbf{x})}{1/M}, \frac{P\{m_2 \mid \mathbf{x}\}\, f(\mathbf{x})}{1/M}, \ldots, \frac{P\{m_M \mid \mathbf{x}\}\, f(\mathbf{x})}{1/M} \right\}$
  $= \arg\max \left\{ \frac{P\{m_1 \mid \mathbf{x}\}\, f(\mathbf{x})}{P(m_1)}, \frac{P\{m_2 \mid \mathbf{x}\}\, f(\mathbf{x})}{P(m_2)}, \ldots, \frac{P\{m_M \mid \mathbf{x}\}\, f(\mathbf{x})}{P(m_M)} \right\}$
  $= \arg\max \left\{ f(\mathbf{x} \mid m_1), f(\mathbf{x} \mid m_2), \ldots, f(\mathbf{x} \mid m_M) \right\}$
- f(x|mi) is named the likelihood function given that mi is transmitted. Hence, the above rule is named the maximum-likelihood decision rule.
- MAP rule = ML rule, if equal prior probabilities are assumed.
- In practice, it is more convenient to work with the log-likelihood functions:
  $d(\mathbf{x}) = \arg\max \left\{ f(\mathbf{x} \mid m_1), \ldots, f(\mathbf{x} \mid m_M) \right\} = \arg\max \left\{ \log f(\mathbf{x} \mid m_1), \ldots, \log f(\mathbf{x} \mid m_M) \right\}$
- Why are log-likelihood functions more convenient? Because the decision function becomes a "sum of Euclidean distances" in the AWGN channel.
$d(\mathbf{x}) = \arg\max_{1 \le i \le M} \log f(\mathbf{x} \mid m_i) = \arg\max_{1 \le i \le M} \log f(\mathbf{x} \mid s_i)$
$= \arg\max_{1 \le i \le M} \log \prod_{j=1}^{N} \frac{1}{\sqrt{\pi N_0}} \exp\!\left( -\frac{1}{N_0} (x_j - s_{ij})^2 \right)$
$= \arg\max_{1 \le i \le M} \sum_{j=1}^{N} \left( -\frac{1}{2} \log(\pi N_0) - \frac{1}{N_0} (x_j - s_{ij})^2 \right)$
$= \arg\min_{1 \le i \le M} \sum_{j=1}^{N} (x_j - s_{ij})^2$
$= \arg\min_{1 \le i \le M} \| \mathbf{x} - s_i \|^2$

Upon receipt of the received signal point x, find the signal point si that is closest in Euclidean distance to x (a minimal code sketch follows).
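An added sketch of the resulting minimum-distance rule; the 4-point constellation and the helper name ml_decide are illustrative assumptions, not from the text.

```python
import numpy as np

def ml_decide(x, constellation):
    """Index of the message point closest to x in Euclidean distance."""
    d2 = np.sum((constellation - x) ** 2, axis=1)   # ||x - s_i||^2 for every i
    return int(np.argmin(d2))

# Example: M = 4 message points in an N = 2 signal space.
s = np.array([[1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])
x = np.array([0.8, 1.3])          # received signal point
print(ml_decide(x, s))            # -> 0 (x is closest to s_1)
```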
5.5 Coherent Detection of Signals in Noise:
Maximum Likelihood Decoding
- Signal constellation
  - The set of M signal points in the signal space.
- Example. Signal constellation for the 2B1Q code. [Figure: four message points s1, s2, s3, s4 on a line, with a decision region for each]
- Decision regions for N = 2 and M = 4. [Figure: four message points in the plane, partitioned into four decision regions]
- Usually, s1, s2, …, sM are named the message points.
- The received signal point x then wanders about the transmitted message point in a Gaussian-distributed random fashion.
- Constant-energy signal constellation
  - The ML decision rule can be reduced to an inner product:
    $d(\mathbf{x}) = \arg\min_{1 \le i \le M} \| \mathbf{x} - s_i \|^2$
    $= \arg\min_{1 \le i \le M} \left( \| \mathbf{x} \|^2 - 2 \langle \mathbf{x}, s_i \rangle + \| s_i \|^2 \right)$
    $= \arg\min_{1 \le i \le M} \left( -2 \langle \mathbf{x}, s_i \rangle + E_i \right)$
    $= \arg\max_{1 \le i \le M} \langle \mathbf{x}, s_i \rangle$, if $E_i$ is constant.
5.6 Correlation Receiver
- If the signals do not have equal energy, we can use
  $d(\mathbf{x}) = \arg\max_{1 \le i \le M} \left( \langle \mathbf{x}, s_i \rangle - \frac{1}{2} E_i \right)$
  to implement the ML rule.
- The receiver is coherent because it must be in perfect synchronization with the transmitter (more specifically, the integration must begin at the right time instant).
[Figure: correlation receiver. A demodulator or detector consisting of N product-integrators (correlators) produces x1, x2, …, xN, followed by a decision maker]

[Figure: equivalent receiver in which the bank of correlators is replaced by a bank of matched filters producing x1, x2, …, xN, followed by a decision maker]
5.6 Equivalence of Correlation and Matched Filter
Receivers
- The correlator and matched filter can be made equivalent.
- Specifically,
  $x_i = \int_0^T x(t)\, \phi_i(t)\,dt = \int_{-\infty}^{\infty} x(\tau)\, h_i(T - \tau)\,d\tau$
  if $h_i(t) = \phi_i(T - t)$, and implicitly $\phi_i(t)$ is zero outside $0 \le t \le T$.
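The equivalence can be checked numerically: sampling the matched-filter output at t = T reproduces the correlator output. This added sketch uses an arbitrary pulse and received waveform.

```python
import numpy as np

T, n = 1.0, 1000
t = np.linspace(0.0, T, n, endpoint=False)
dt = T / n

phi = np.sqrt(2.0 / T) * np.sin(np.pi * t / T)   # example basis pulse on [0, T)
x = 0.7 * phi + 0.1 * np.cos(3.0 * t)            # arbitrary received waveform

# Correlator: integrate x(t) phi(t) over [0, T].
corr_out = np.sum(x * phi) * dt

# Matched filter h(t) = phi(T - t); the full convolution (x * h) sampled at t = T
# equals the sum over tau of x(tau) phi(tau), i.e. the correlator output.
h = phi[::-1]
mf_out = np.convolve(x, h)[n - 1] * dt

print(corr_out, mf_out)                          # equal up to float rounding
```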
5.7 Probability of Symbol Error
- Average probability of symbol error (with equal priors, $P(m_i) = 1/M$):
  $P_e = 1 - P_c = 1 - \sum_{i=1}^{M} P(m_i)\, P(d(\mathbf{x}) = m_i \mid m_i \text{ transmitted})$
  $= 1 - \frac{1}{M} \sum_{i=1}^{M} P(d(\mathbf{x}) = m_i \mid m_i \text{ transmitted})$
  $= 1 - \frac{1}{M} \sum_{i=1}^{M} \Pr\!\left[ \| \mathbf{x} - s_i \|^2 \le \min_{1 \le j \le M,\, j \ne i} \| \mathbf{x} - s_j \|^2 \,\Big|\, m_i \text{ transmitted} \right]$
  $= 1 - \frac{1}{M} \sum_{i=1}^{M} \int_{Z_i} f(\mathbf{x} \mid s_i)\,d\mathbf{x}$
  where $Z_i = \left\{ \mathbf{x} \in \mathbb{R}^N : \| \mathbf{x} - s_i \|^2 \le \min_{1 \le j \le M,\, j \ne i} \| \mathbf{x} - s_j \|^2 \right\}$.
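An added Monte Carlo sketch estimating Pe for the ML (minimum-distance) rule with equal priors; the QPSK-like constellation and N0 = 1 are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
s = np.array([[1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])
M, N = s.shape
N0, trials = 1.0, 200000

i = rng.integers(0, M, size=trials)                         # equally likely messages
x = s[i] + rng.normal(0.0, np.sqrt(N0 / 2.0), (trials, N))  # vector AWGN channel

# ML decision: nearest message point for every received vector.
d2 = np.sum((x[:, None, :] - s[None, :, :]) ** 2, axis=2)
decided = np.argmin(d2, axis=1)

print(np.mean(decided != i))      # empirical symbol error rate, an estimate of P_e
```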
5.7 Invariance of Probability of Symbol Error
- The probability of symbol error is invariant with respect to basis change (i.e., rotation and translation of the signal space).
- Specifically, the SER (symbol error rate) depends only on the relative Euclidean distances between the message points:
  $P_e = 1 - \frac{1}{M} \sum_{i=1}^{M} \Pr\!\left[ \| \mathbf{x} - s_i \|^2 \le \min_{1 \le j \le M,\, j \ne i} \| \mathbf{x} - s_j \|^2 \,\Big|\, m_i \text{ transmitted} \right]$
- Specifically, if Q is a norm-preserving (e.g., rotation) matrix, then
  $\left\{ \mathbf{x} \in \mathbb{R}^N : \| \mathbf{x} - s_i \|^2 \le \min_{1 \le j \le M,\, j \ne i} \| \mathbf{x} - s_j \|^2 \right\} = \left\{ \mathbf{x} \in \mathbb{R}^N : \| Q\mathbf{x} - Qs_i \|^2 \le \min_{1 \le j \le M,\, j \ne i} \| Q\mathbf{x} - Qs_j \|^2 \right\}$
- If a signal constellation is rotated by an orthonormal transformation, where Q is an orthonormal matrix, then the probability of symbol error Pe incurred in maximum-likelihood signal detection over an AWGN channel is completely unchanged.
- A pair of signal constellations illustrating the principle of rotational invariance. [Figure: a constellation and its rotated version]
- The invariance of the SER under translation can be proved likewise.
  - Is the transmission power the same for both constellations?
5.7 Minimum Energy Signals
- Since the SER is invariant to rotation and translation, we may rotate and translate the signal constellation to minimize the transmission power without affecting the SER.
  $E_g = \sum_{i=1}^{M} p_i \| s_i \|^2$
  Find $\mathbf{a}$ and $Q$ such that $E_g(\mathbf{a}, Q) = \sum_{i=1}^{M} p_i \| Q(s_i - \mathbf{a}) \|^2$ is minimized.
- But Q does not change the norm (i.e., the transmission power). Thus, we only need to determine the right a.
- Determine the optimal a (a numerical check follows below):
  $E_g(\mathbf{a}) = \sum_{i=1}^{M} p_i \| s_i - \mathbf{a} \|^2$
  $= \sum_{i=1}^{M} p_i \left( \| s_i \|^2 - 2 \mathbf{a}^T s_i + \| \mathbf{a} \|^2 \right)$
  $= \sum_{i=1}^{M} p_i \| s_i \|^2 - 2 \mathbf{a}^T \left( \sum_{i=1}^{M} p_i s_i \right) + \| \mathbf{a} \|^2$
  $\Rightarrow \mathbf{a}_{\text{optimal}} = \sum_{i=1}^{M} p_i s_i \quad \text{and} \quad E_g(\mathbf{a}_{\text{optimal}}) = \sum_{i=1}^{M} p_i \| s_i \|^2 - \Big\| \sum_{i=1}^{M} p_i s_i \Big\|^2$
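An added numerical check that the centroid choice of a minimizes the average energy; the constellation and priors are arbitrary examples.

```python
import numpy as np

s = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 2.0], [2.0, 2.0]])  # message points
p = np.array([0.25, 0.25, 0.25, 0.25])                          # priors p_i

def avg_energy(a):
    """Average energy of the constellation translated by a."""
    return float(np.sum(p * np.sum((s - a) ** 2, axis=1)))

a_opt = p @ s                               # centroid: a_optimal = sum_i p_i s_i
print(avg_energy(np.zeros(2)))              # energy before translation: 4.0
print(avg_energy(a_opt))                    # minimized energy: 2.0
print(np.sum(p * np.sum(s ** 2, axis=1)) - np.sum(a_opt ** 2))  # closed form: 2.0
```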
- So subfigure (a) below has minimum average energy. [Figure: two versions of a constellation, with (a) centered at the centroid]
5.7 Union Bound on the Probability of Error
- Union bound: $P(A \cup B) \le P(A) + P(B)$.

$P_e = 1 - \frac{1}{M} \sum_{i=1}^{M} \Pr\!\left[ \| \mathbf{x} - s_i \|^2 \le \min_{1 \le j \le M,\, j \ne i} \| \mathbf{x} - s_j \|^2 \,\Big|\, m_i \text{ transmitted} \right]$
$= 1 - \frac{1}{M} \sum_{i=1}^{M} \left( 1 - \Pr\!\left[ \bigcup_{j \ne i} \left\{ \| \mathbf{x} - s_i \|^2 \ge \| \mathbf{x} - s_j \|^2 \right\} \,\Big|\, m_i \text{ transmitted} \right] \right)$
$= \frac{1}{M} \sum_{i=1}^{M} \Pr\!\left[ \bigcup_{j \ne i} \left\{ \| \mathbf{x} - s_i \|^2 \ge \| \mathbf{x} - s_j \|^2 \right\} \,\Big|\, m_i \text{ transmitted} \right]$
$\le \frac{1}{M} \sum_{i=1}^{M} \sum_{j=1,\, j \ne i}^{M} \Pr\!\left[ \| \mathbf{x} - s_i \|^2 \ge \| \mathbf{x} - s_j \|^2 \,\Big|\, m_i \text{ transmitted} \right]$
For example, with M = 4 and m1 transmitted, the error event is the union
$\left\{ \| \mathbf{x} - s_1 \|^2 \ge \| \mathbf{x} - s_2 \|^2 \right\} \cup \left\{ \| \mathbf{x} - s_1 \|^2 \ge \| \mathbf{x} - s_3 \|^2 \right\} \cup \left\{ \| \mathbf{x} - s_1 \|^2 \ge \| \mathbf{x} - s_4 \|^2 \right\},$
whose probability is upper-bounded by the sum of the three pairwise probabilities.
$P_e \le \frac{1}{M} \sum_{i=1}^{M} \sum_{j=1,\, j \ne i}^{M} P_2(s_i, s_j)$
where $P_2(s_i, s_j) = \Pr\!\left[ \| \mathbf{x} - s_i \|^2 \ge \| \mathbf{x} - s_j \|^2 \,\Big|\, m_i \text{ transmitted} \right]$.
Notably, given mi transmitted, x is Gaussian distributed with mean si.

[Figure: message points si and sj; the pairwise error occurs when the noise component w along the line from si to sj exceeds half the distance, $\frac{1}{2}\| s_i - s_j \|$]
- Hence, for N = 1,
  $P_2(s_i, s_j) = \Pr\!\left[ \| x - s_i \|^2 \ge \| x - s_j \|^2 \,\Big|\, m_i \text{ transmitted} \right]$
  $= \Pr\!\left[ w \ge \frac{1}{2} | s_i - s_j | \right]$
  $= \int_{d_{ij}/2}^{\infty} \frac{1}{\sqrt{\pi N_0}} \exp\!\left( -\frac{v^2}{N_0} \right) dv, \quad \text{where } d_{ij} = | s_i - s_j |$
  $= \frac{1}{2} \operatorname{erfc}\!\left( \frac{d_{ij}}{2 \sqrt{N_0}} \right), \quad \text{where } \operatorname{erfc}(u) = \frac{2}{\sqrt{\pi}} \int_u^{\infty} \exp(-z^2)\,dz.$
- For N = 2,
  $P_2(s_i, s_j) = \Pr\!\left[ \| \mathbf{x} - s_i \|^2 \ge \| \mathbf{x} - s_j \|^2 \,\Big|\, m_i \text{ transmitted} \right]$
  $= \Pr\!\left[ w_1 \ge \frac{1}{2} d_{ij} \text{ and } w_2 = \text{don't care} \right], \quad \text{where } d_{ij} = \| s_i - s_j \|$
  (here $w_1$ is the noise component along the line joining $s_i$ and $s_j$)
  $= \int_{d_{ij}/2}^{\infty} \frac{1}{\sqrt{\pi N_0}} \exp\!\left( -\frac{v^2}{N_0} \right) dv = \frac{1}{2} \operatorname{erfc}\!\left( \frac{d_{ij}}{2 \sqrt{N_0}} \right)$
- The same formula is valid for any N.
- Consequently, the union bound for the symbol error rate is:
  $P_e \le \frac{1}{M} \sum_{i=1}^{M} \sum_{j=1,\, j \ne i}^{M} P_2(s_i, s_j) = \frac{1}{M} \sum_{i=1}^{M} \sum_{j=1,\, j \ne i}^{M} \frac{1}{2} \operatorname{erfc}\!\left( \frac{d_{ij}}{2 \sqrt{N_0}} \right)$
- The above bound can be further simplified when additional conditions are given.
  - For example, if the signal constellation is circularly symmetric in the sense that "{di1, di2, …, diM} is a permutation of {dk1, dk2, …, dkM} for i ≠ k," then
    $P_e \le \sum_{j=1,\, j \ne i}^{M} \frac{1}{2} \operatorname{erfc}\!\left( \frac{d_{ij}}{2 \sqrt{N_0}} \right)$ (for any fixed i).
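An added sketch that evaluates this union bound for a given constellation, using the standard-library math.erfc; the constellation and N0 are the same illustrative choices used in the Monte Carlo sketch above, so the two results can be compared directly.

```python
import numpy as np
from math import erfc, sqrt

def union_bound(s, N0):
    """P_e <= (1/M) sum_i sum_{j != i} (1/2) erfc(d_ij / (2 sqrt(N0)))."""
    M = len(s)
    total = 0.0
    for i in range(M):
        for j in range(M):
            if j != i:
                d_ij = float(np.linalg.norm(s[i] - s[j]))
                total += 0.5 * erfc(d_ij / (2.0 * sqrt(N0)))
    return total / M

s = np.array([[1.0, 1.0], [-1.0, 1.0], [-1.0, -1.0], [1.0, -1.0]])
print(union_bound(s, N0=1.0))   # an upper bound on the Monte Carlo P_e estimate
```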
- Another simplification of the union bound
  - Define the minimum distance of a signal constellation as:
    $d_{\min} = \min_{1 \le i \le M,\, 1 \le j \le M,\, i \ne j} d_{ij}$
  - Then, by the strictly decreasing property of the erfc function,
    $\operatorname{erfc}\!\left( \frac{d_{ij}}{2 \sqrt{N_0}} \right) \le \operatorname{erfc}\!\left( \frac{d_{\min}}{2 \sqrt{N_0}} \right)$
    $\Rightarrow P_e \le \frac{1}{M} \sum_{i=1}^{M} \sum_{j=1,\, j \ne i}^{M} \frac{1}{2} \operatorname{erfc}\!\left( \frac{d_{ij}}{2 \sqrt{N_0}} \right) \le \frac{1}{M} \sum_{i=1}^{M} \sum_{j=1,\, j \ne i}^{M} \frac{1}{2} \operatorname{erfc}\!\left( \frac{d_{\min}}{2 \sqrt{N_0}} \right) = \frac{M - 1}{2} \operatorname{erfc}\!\left( \frac{d_{\min}}{2 \sqrt{N_0}} \right)$
- We may use a bound on the erfc function to see the relation between the SER and dmin:
  $\operatorname{erfc}(u) \le \frac{\exp(-u^2)}{\sqrt{\pi}} \quad \text{for } u \ge 0.608131$
  $\Rightarrow P_e \le \frac{M - 1}{2} \operatorname{erfc}\!\left( \frac{d_{\min}}{2 \sqrt{N_0}} \right) \le \left( \frac{M - 1}{2 \sqrt{\pi}} \right) \exp\!\left( -\frac{d_{\min}^2}{4 N_0} \right), \quad \text{if } d_{\min}^2 \ge 1.47929\, N_0.$
- Conclusion: the SER decreases exponentially with the squared minimum distance.
5.7 Relation between BER and SER
- The information bits are transmitted in groups of log2(M) bits to form an M-ary symbol.
- This gives the result that a large symbol error rate (SER) may not cause a large bit error rate (BER).
  - For example, a symbol error (for large M) may be due to only 1 bit error.
- Optimistically, if every symbol error is due to a single bit error, then (assuming n symbols are transmitted)
  $\text{BER} = \frac{n \times \text{SER}}{n \times \log_2(M)} = \frac{\text{SER}}{\log_2(M)}.$
  In general, $\text{BER} \ge \frac{\text{SER}}{\log_2(M)}$.
- Pessimistically, if every symbol error causes log2(M) bit errors, then (assuming n symbols are transmitted)
  $\text{BER} = \frac{n \times \log_2(M) \times \text{SER}}{n \times \log_2(M)} = \text{SER}.$
  In general, $\text{BER} \le \text{SER}$.
- Summary:
  $\frac{\text{SER}}{\log_2(M)} \le \text{BER} \le \text{SER}$
- If the statistics of the "number of bit error patterns that cause one symbol error" are known, we can then determine the exact relation between the BER and the SER:
  $\text{BER} = \frac{n \times \text{SER} \times \sum_{j=1}^{M-1} \#(b_j)\, P(b_j)}{n \times \log_2(M)}$
  where $\#(b_j)$ = the number of 1's in $b_j$, and $b_j$ represents one binary pattern of $\log_2(M)$ bits.
  Here, a 1 in bj means a bit error occurred in the corresponding position; hence, the all-zero pattern is excluded because it represents no symbol error.
- Example. If all bit error patterns are equally likely, i.e., $P(b_j) = 1/(M-1)$, then
  $\text{BER} = \frac{n \times \text{SER} \times \sum_{j=1}^{M-1} \#(b_j)\, P(b_j)}{n \times \log_2(M)} = \frac{\text{SER}}{\log_2(M)} \sum_{j=1}^{M-1} \#(b_j) \times \frac{1}{M-1}$
  $= \frac{\text{SER}}{(M-1) \log_2(M)} \sum_{u=1}^{\log_2(M)} u \binom{\log_2(M)}{u} \quad \left( \text{Note: } \sum_{u=1}^{k} u \binom{k}{u} = k\, 2^{k-1}. \right)$
  $= \frac{\text{SER}}{(M-1) \log_2(M)} \times \log_2(M) \times 2^{\log_2(M) - 1} = \left( \frac{M/2}{M-1} \right) \text{SER}$
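An added script verifying the combinatorial step used above, that $\sum_{u=1}^{k} u \binom{k}{u} = k\, 2^{k-1}$, and hence that BER = (M/2)/(M-1) × SER for equally likely bit error patterns; this is purely a check of the algebra.

```python
from math import comb, log2

for M in (4, 8, 16, 64):
    k = int(log2(M))                                   # bits per symbol
    weight_sum = sum(u * comb(k, u) for u in range(1, k + 1))
    assert weight_sum == k * 2 ** (k - 1)              # the identity used above
    print(M, weight_sum / ((M - 1) * k), (M / 2) / (M - 1))  # BER/SER both ways
```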