
Chapter 1 Random Process
1.1 Introduction (Physical phenomenon)
Deterministic model: no uncertainty about its time-dependent behavior at any instant of time.
Random model: the future value is subject to "chance" (probability).
Examples: thermal noise, a random data stream.
1.2 Mathematical Definition of a Random Process (RP)
The properties of RP
a. Function of time.
b. Random in the sense that before conducting an
experiment, it is not possible to predict the exact waveform.
Each sample point of the sample space $S$ is mapped to a function of time $X(t,s)$:

$$ s \in S \;\mapsto\; X(t,s), \qquad -T \le t \le T \tag{1.1} $$

$2T$: the total observation interval.

$$ s_j \;\mapsto\; X(t, s_j) = x_j(t) \tag{1.2} $$

$x_j(t)$: a sample function.
At t = tk, xj (tk) is a random variable (RV).
To simplify the notation, let $X(t,s) = X(t)$.
$X(t)$: a random process, an ensemble of time functions together with a probability rule.
Difference between RV and RP
RV: The outcome is mapped into a number
RP: The outcome is mapped into a function of time
Figure 1.1 An ensemble of sample functions $\{x_j(t) \mid j = 1, 2, \ldots, n\}$.
1.3 Stationary Process
Stationary Process :
The statistical characterization of a process is independent of
the time at which observation of the process is initiated.
Nonstationary Process:
Not a stationary process (unstable phenomenon )
Consider $X(t)$ initiated at $t = -\infty$; let $X(t_1), X(t_2), \ldots, X(t_k)$ denote the RVs obtained at times $t_1, t_2, \ldots, t_k$.
For the RP to be stationary in the strict sense (strictly stationary), the joint distribution function must satisfy

$$ F_{X(t_1+\tau),\ldots,X(t_k+\tau)}(x_1,\ldots,x_k) = F_{X(t_1),\ldots,X(t_k)}(x_1,\ldots,x_k) \tag{1.3} $$

for all time shifts $\tau$, all $k$, and all possible choices of $t_1, t_2, \ldots, t_k$.
$X(t)$ and $Y(t)$ are jointly strictly stationary if the joint finite-dimensional distributions of $X(t_1), \ldots, X(t_k)$ and $Y(t_1'), \ldots, Y(t_j')$ are invariant with respect to the origin $t = 0$.
Special cases of Eq. (1.3):
1. $k = 1$:
$$ F_{X(t)}(x) = F_{X(t+\tau)}(x) = F_X(x) \qquad \text{for all } t \text{ and } \tau \tag{1.4} $$
2. $k = 2$, $\tau = -t_1$:
$$ F_{X(t_1),X(t_2)}(x_1, x_2) = F_{X(0),X(t_2-t_1)}(x_1, x_2) \tag{1.5} $$
which depends only on the time difference $t_2 - t_1$.
Figure 1.2 Illustrating the probability of a joint event.
Figure 1.3 Illustrating the concept of stationarity in Example 1.1.
1.4 Mean, Correlation, and Covariance Functions
Let X(t) be a strictly stationary RP
The mean of $X(t)$ is
$$ \mu_X(t) = E[X(t)] = \int_{-\infty}^{\infty} x\, f_{X(t)}(x)\, dx \tag{1.6} $$
For a strictly stationary process the mean is a constant:
$$ \mu_X(t) = \mu_X \qquad \text{for all } t \tag{1.7} $$
$f_{X(t)}(x)$: the first-order pdf.
The autocorrelation function of $X(t)$ is
$$ R_X(t_1, t_2) = E[X(t_1)X(t_2)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1 x_2\, f_{X(t_1),X(t_2)}(x_1, x_2)\, dx_1\, dx_2 $$
$$ = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} x_1 x_2\, f_{X(0),X(t_2-t_1)}(x_1, x_2)\, dx_1\, dx_2 = R_X(t_2 - t_1) \qquad \text{for all } t_1 \text{ and } t_2 \tag{1.8} $$
The autocovariance function is
$$ C_X(t_1, t_2) = E[(X(t_1) - \mu_X)(X(t_2) - \mu_X)] = R_X(t_2 - t_1) - \mu_X^2 \tag{1.10} $$
which is a function of the time difference $t_2 - t_1$. We can determine $C_X(t_1, t_2)$ if $\mu_X$ and $R_X(t_2 - t_1)$ are known.
Note that:
1. $\mu_X$ and $R_X(t_2 - t_1)$ provide only a partial description of the process.
2. If $\mu_X(t) = \mu_X$ and $R_X(t_1, t_2) = R_X(t_2 - t_1)$, then $X(t)$ is wide-sense stationary (weakly stationary).
3. The class of strictly stationary processes with finite second-order moments is a subclass of the class of all wide-sense stationary processes.
4. The first- and second-order moments may not exist.
Properties of the autocorrelation function
For convenience of notation, we redefine
$$ R_X(\tau) = E[X(t+\tau)X(t)] \qquad \text{for all } t \tag{1.11} $$
1. The mean-square value (set $\tau = 0$):
$$ R_X(0) = E[X^2(t)] \tag{1.12} $$
2. Even symmetry:
$$ R_X(\tau) = R_X(-\tau) \tag{1.13} $$
3. Maximum at the origin:
$$ |R_X(\tau)| \le R_X(0) \tag{1.14} $$
Proof of property 3: consider
$$ E[(X(t+\tau) \pm X(t))^2] \ge 0 $$
$$ \Rightarrow E[X^2(t+\tau)] \pm 2E[X(t+\tau)X(t)] + E[X^2(t)] \ge 0 $$
$$ \Rightarrow 2R_X(0) \pm 2R_X(\tau) \ge 0 $$
$$ \Rightarrow -R_X(0) \le R_X(\tau) \le R_X(0) $$
$$ \Rightarrow |R_X(\tau)| \le R_X(0) $$
$R_X(\tau)$ provides a measure of the interdependence of the two random variables obtained by observing $X(t)$ at times $\tau$ seconds apart.
Example 1.2 $X(t) = A\cos(2\pi f_c t + \Theta)$, where
$$ f_\Theta(\theta) = \begin{cases} \dfrac{1}{2\pi}, & -\pi \le \theta \le \pi \\ 0, & \text{elsewhere} \end{cases} \tag{1.15} $$
$$ R_X(\tau) = E[X(t+\tau)X(t)] \tag{1.16} $$
$$ = \frac{A^2}{2}\cos(2\pi f_c \tau) \tag{1.17} $$
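A minimal Monte Carlo sketch of the result of Example 1.2: averaging over many independent draws of the random phase $\Theta$ should reproduce $R_X(\tau) = (A^2/2)\cos(2\pi f_c \tau)$ regardless of the observation time $t$. The values of $A$, $f_c$, $t$, $\tau$ and the sample count are illustrative choices, not from the text.

```python
# Monte Carlo check of Eq. (1.17): for X(t) = A cos(2*pi*fc*t + Theta) with
# Theta ~ U(-pi, pi), the ensemble average E[X(t+tau)X(t)] should equal
# (A**2 / 2) * cos(2*pi*fc*tau), independent of t.
import numpy as np

rng = np.random.default_rng(0)
A, fc = 2.0, 5.0
t, tau = 0.3, 0.07                       # arbitrary observation time and lag

theta = rng.uniform(-np.pi, np.pi, size=200_000)   # one Theta per sample path
x_t = A * np.cos(2 * np.pi * fc * t + theta)
x_t_tau = A * np.cos(2 * np.pi * fc * (t + tau) + theta)

R_est = np.mean(x_t_tau * x_t)                     # ensemble average across paths
R_theory = (A**2 / 2) * np.cos(2 * np.pi * fc * tau)
```

Repeating with a different $t$ leaves `R_est` essentially unchanged, which is the stationarity claim in the text.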
Appendix 2.1 Fourier Transform
We refer to $|G(f)|$ as the magnitude spectrum of the signal $g(t)$, and to $\arg\{G(f)\}$ as its phase spectrum.
DIRAC DELTA FUNCTION
Strictly speaking, the theory of the Fourier transform is
applicable only to time functions that satisfy the Dirichlet
conditions. Such functions include energy signals. However,
it would be highly desirable to extend this theory in two
ways:
1. To combine the Fourier series and Fourier transform into a
unified theory, so that the Fourier series may be treated as a
special case of the Fourier transform.
2. To include power signals (i.e., signals for which the average
power is finite) in the list of signals to which we may apply
the Fourier transform.
The Dirac delta function, or just delta function, denoted by $\delta(t)$, is defined as having zero amplitude everywhere except at $t = 0$, where it is infinitely large in such a way that it contains unit area under its curve; that is,
$$ \delta(t) = 0, \qquad t \ne 0 \tag{A2.3} $$
and
$$ \int_{-\infty}^{\infty} \delta(t)\, dt = 1 \tag{A2.4} $$
The sifting property:
$$ \int_{-\infty}^{\infty} g(t)\,\delta(t - t_0)\, dt = g(t_0) \tag{A2.5} $$
$$ \int_{-\infty}^{\infty} g(\tau)\,\delta(t - \tau)\, d\tau = g(t) \tag{A2.6} $$
Example 1.3 Random Binary Wave / Pulse
1. The pulses are represented by $\pm A$ volts (mean $= 0$).
2. The first complete pulse starts at $t_d$, where
$$ f_{T_d}(t_d) = \begin{cases} \dfrac{1}{T}, & 0 \le t_d \le T \\ 0, & \text{elsewhere} \end{cases} $$
3. During $(n-1)T \le t - t_d \le nT$, the presence of $+A$ or $-A$ is random.
4. When $|t_k - t_i| > T$, $t_k$ and $t_i$ are not in the same pulse interval; hence $X(t_k)$ and $X(t_i)$ are independent:
$$ E[X(t_k)X(t_i)] = E[X(t_k)]\,E[X(t_i)] = 0 $$
Figure 1.6
Sample function of random binary wave.
5. For $|t_k - t_i| < T$, with $t_k \ge 0$ and $t_i < t_k$: $X(t_k)$ and $X(t_i)$ occur in the same pulse interval iff $t_d < T - (t_k - t_i)$.
$$ E[X(t_k)X(t_i) \mid t_d] = \begin{cases} A^2, & t_d < T - (t_k - t_i) \\ 0, & \text{elsewhere} \end{cases} $$
$$ E[X(t_k)X(t_i)] = \int_0^{T-(t_k - t_i)} A^2\, f_{T_d}(t_d)\, dt_d = \int_0^{T-(t_k - t_i)} \frac{A^2}{T}\, dt_d = A^2\left(1 - \frac{t_k - t_i}{T}\right), \qquad |t_k - t_i| < T $$
6. By similar reasoning for any other values of $t_k$:
$$ R_X(\tau) = \begin{cases} A^2\left(1 - \dfrac{|\tau|}{T}\right), & |\tau| < T \\ 0, & |\tau| \ge T \end{cases} \qquad \text{where } \tau = t_k - t_i $$
What is the Fourier transform of $R_X(\tau)$?
Reference: A. Papoulis, Probability, Random Variables, and Stochastic Processes, McGraw-Hill.
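A simulation sketch of Example 1.3: build one long sample function of the random binary wave (random start delay $t_d \sim U(0,T)$, i.i.d. $\pm A$ pulse amplitudes) and time-average, which the ergodicity discussed later justifies. The estimated autocorrelation should follow the triangular law $A^2(1 - |\tau|/T)$ for $|\tau| < T$ and vanish beyond. Grid spacing and record length are arbitrary choices.

```python
# Time-averaged autocorrelation of the random binary wave of Example 1.3.
import numpy as np

rng = np.random.default_rng(1)
A, T = 1.0, 1.0
dt = 0.01                                  # time grid spacing
n_pulses = 20_000
samples_per_pulse = int(T / dt)

# One long sample function: random delay, then i.i.d. +/-A pulse amplitudes.
delay = int(rng.uniform(0, T) / dt)
levels = rng.choice([-A, A], size=n_pulses)
x = np.repeat(levels, samples_per_pulse)
x = x[delay:]                              # discard the partial pulse before t_d

def time_avg_autocorr(x, lag):
    """<x(t+tau) x(t)> along the record, for a lag measured in samples."""
    if lag == 0:
        return np.mean(x * x)
    return np.mean(x[lag:] * x[:-lag])

R_half = time_avg_autocorr(x, int(0.5 * T / dt))   # tau = T/2 -> A**2 / 2
R_far = time_avg_autocorr(x, int(1.5 * T / dt))    # tau = 1.5 T -> 0
```

At zero lag the estimate is exactly $A^2$, since every sample of the wave is $\pm A$.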
Cross-correlation Function
$$ R_{XY}(t, u) = E[X(t)Y(u)] \tag{1.19} $$
$$ R_{YX}(t, u) = E[Y(t)X(u)] \tag{1.20} $$
Note that $R_{XY}(t, u)$ and $R_{YX}(t, u)$ are not, in general, even functions.
The correlation matrix is
$$ \mathbf{R}(t, u) = \begin{bmatrix} R_X(t,u) & R_{XY}(t,u) \\ R_{YX}(t,u) & R_Y(t,u) \end{bmatrix} $$
If $X(t)$ and $Y(t)$ are jointly stationary,
$$ \mathbf{R}(\tau) = \begin{bmatrix} R_X(\tau) & R_{XY}(\tau) \\ R_{YX}(\tau) & R_Y(\tau) \end{bmatrix} \qquad \text{where } \tau = t - u \tag{1.21} $$
Proof of $R_{XY}(\tau) = R_{YX}(-\tau)$:
$$ R_{XY}(\tau) = E[X(t)Y(t - \tau)] $$
Let $t - \tau = \mu$; then
$$ R_{XY}(\tau) = E[X(\mu + \tau)Y(\mu)] = E[Y(\mu)X(\mu + \tau)] = E[Y(\mu)X(\mu - (-\tau))] = R_{YX}(-\tau) \tag{1.22} $$
Example 1.4 Quadrature-Modulated Processes
$$ X_1(t) = X(t)\cos(2\pi f_c t + \Theta) $$
$$ X_2(t) = X(t)\sin(2\pi f_c t + \Theta) $$
where $X(t)$ is a stationary process and $\Theta$ is uniformly distributed over $[0, 2\pi]$, independent of $X(t)$.
$$ R_{12}(\tau) = E[X_1(t)X_2(t - \tau)] = E[X(t)X(t-\tau)]\, E[\cos(2\pi f_c t + \Theta)\sin(2\pi f_c t - 2\pi f_c \tau + \Theta)] $$
$$ = \frac{1}{2} R_X(\tau)\, E\left[\sin(4\pi f_c t - 2\pi f_c \tau + 2\Theta) - \sin(2\pi f_c \tau)\right] = -\frac{1}{2} R_X(\tau)\sin(2\pi f_c \tau) \tag{1.23} $$
since $E[\sin(4\pi f_c t - 2\pi f_c \tau + 2\Theta)] = 0$.
At $\tau = 0$, $\sin(2\pi f_c \tau) = 0$, so $R_{12}(0) = 0$: $X_1(t)$ and $X_2(t)$ are orthogonal.
1.5 Ergodic Processes
Ensemble averages of $X(t)$ are averages "across the process"; long-term averages (time averages) are averages "along the process."
The DC value of $X(t)$ (a random variable) is
$$ \mu_x(T) = \frac{1}{2T}\int_{-T}^{T} x(t)\, dt \tag{1.24} $$
If $X(t)$ is stationary,
$$ E[\mu_x(T)] = \frac{1}{2T}\int_{-T}^{T} E[x(t)]\, dt = \frac{1}{2T}\int_{-T}^{T} \mu_X\, dt = \mu_X \tag{1.25} $$
$\mu_x(T)$ represents an unbiased estimate of $\mu_X$.
The process $X(t)$ is ergodic in the mean if
a. $\lim_{T\to\infty} \mu_x(T) = \mu_X$
b. $\lim_{T\to\infty} \operatorname{var}[\mu_x(T)] = 0$
The time-averaged autocorrelation function is
$$ R_x(\tau, T) = \frac{1}{2T}\int_{-T}^{T} x(t+\tau)x(t)\, dt \tag{1.26} $$
$R_x(\tau, T)$ is a random variable. If the following conditions hold, $X(t)$ is ergodic in the autocorrelation function:
$$ \lim_{T\to\infty} R_x(\tau, T) = R_X(\tau) $$
$$ \lim_{T\to\infty} \operatorname{var}[R_x(\tau, T)] = 0 $$
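A sketch of "ergodic in the mean," Eqs. (1.24)-(1.25): for a DC offset plus a random-phase sinusoid, the time average $\mu_x(T)$ computed from a single sample function should converge to the ensemble mean $\mu_X$ as $T$ grows. The offset, amplitude and frequency below are illustrative (the frequency is chosen not to divide the short window evenly).

```python
# Time average of one sample function vs. the ensemble mean.
import numpy as np

rng = np.random.default_rng(2)
mu_X, A, fc = 1.5, 2.0, 3.3
theta = rng.uniform(0, 2 * np.pi)        # a single realization of Theta

def mu_x(T, n=200_000):
    """Approximate (1/2T) * integral_{-T}^{T} x(t) dt on a dense uniform grid."""
    t = np.linspace(-T, T, n, endpoint=False)
    x = mu_X + A * np.cos(2 * np.pi * fc * t + theta)
    return x.mean()

short_avg = mu_x(T=1.0)       # noticeable residual from the sinusoid
long_avg = mu_x(T=200.0)      # residual shrinks like 1/T
```

The residual of the cosine term is bounded by $A / (2\pi f_c \cdot 2T)$, which is why the long window lands much closer to $\mu_X$.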
1.6 Transmission of a Random Process Through a Linear Time-Invariant Filter (System)
$$ Y(t) = \int_{-\infty}^{\infty} h(\tau_1)\, X(t - \tau_1)\, d\tau_1 $$
where $h(t)$ is the impulse response of the system.
$$ \mu_Y(t) = E[Y(t)] = E\left[\int_{-\infty}^{\infty} h(\tau_1) X(t - \tau_1)\, d\tau_1\right] \tag{1.27} $$
If $E[X(t)]$ is finite and the system is stable,
$$ \mu_Y(t) = \int_{-\infty}^{\infty} h(\tau_1)\, E[X(t - \tau_1)]\, d\tau_1 = \int_{-\infty}^{\infty} h(\tau_1)\, \mu_X(t - \tau_1)\, d\tau_1 \tag{1.28} $$
If $X(t)$ is stationary,
$$ \mu_Y = \mu_X \int_{-\infty}^{\infty} h(\tau_1)\, d\tau_1 = \mu_X H(0) \tag{1.29} $$
$H(0)$: the DC response of the system.
Consider the autocorrelation function of $Y(t)$:
$$ R_Y(t, \mu) = E[Y(t)Y(\mu)] = E\left[\int_{-\infty}^{\infty} h(\tau_1)X(t-\tau_1)\, d\tau_1 \int_{-\infty}^{\infty} h(\tau_2)X(\mu-\tau_2)\, d\tau_2\right] \tag{1.30} $$
If $E[X^2(t)]$ is finite and the system is stable,
$$ R_Y(t, \mu) = \int_{-\infty}^{\infty} d\tau_1\, h(\tau_1) \int_{-\infty}^{\infty} d\tau_2\, h(\tau_2)\, R_X(t - \tau_1,\, \mu - \tau_2) \tag{1.31} $$
If $R_X(t - \tau_1, \mu - \tau_2) = R_X(t - \mu - \tau_1 + \tau_2)$ (stationary), then with $\tau = t - \mu$,
$$ R_Y(\tau) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(\tau_1)h(\tau_2)\, R_X(\tau - \tau_1 + \tau_2)\, d\tau_1\, d\tau_2 \tag{1.32} $$
A stationary input yields a stationary output. In particular,
$$ R_Y(0) = E[Y^2(t)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(\tau_1)h(\tau_2)\, R_X(\tau_2 - \tau_1)\, d\tau_1\, d\tau_2 \tag{1.33} $$
1.7 Power Spectral Density (PSD)
Consider the Fourier transform pair of $g(t)$:
$$ G(f) = \int_{-\infty}^{\infty} g(t)\exp(-j2\pi f t)\, dt $$
$$ g(t) = \int_{-\infty}^{\infty} G(f)\exp(j2\pi f t)\, df $$
Let $H(f)$ denote the frequency response; then
$$ h(\tau_1) = \int_{-\infty}^{\infty} H(f)\exp(j2\pi f \tau_1)\, df \tag{1.34} $$
$$ E[Y^2(t)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\left[\int_{-\infty}^{\infty} H(f)\exp(j2\pi f \tau_1)\, df\right] h(\tau_2)\, R_X(\tau_2 - \tau_1)\, d\tau_1\, d\tau_2 $$
$$ = \int_{-\infty}^{\infty} df\, H(f) \int_{-\infty}^{\infty} d\tau_2\, h(\tau_2) \int_{-\infty}^{\infty} R_X(\tau_2 - \tau_1)\exp(j2\pi f \tau_1)\, d\tau_1 $$
Let $\tau = \tau_2 - \tau_1$:
$$ = \int_{-\infty}^{\infty} df\, H(f) \int_{-\infty}^{\infty} d\tau_2\, h(\tau_2)\exp(j2\pi f \tau_2) \int_{-\infty}^{\infty} R_X(\tau)\exp(-j2\pi f \tau)\, d\tau \tag{1.35} $$
The middle integral is $H^*(f)$, the complex conjugate response of the filter. (1.36)



$$ E[Y^2(t)] = \int_{-\infty}^{\infty} df\, |H(f)|^2 \int_{-\infty}^{\infty} R_X(\tau)\exp(-j2\pi f \tau)\, d\tau \tag{1.37} $$
$|H(f)|$: the magnitude response.
Define the power spectral density (the Fourier transform of $R_X(\tau)$):
$$ S_X(f) = \int_{-\infty}^{\infty} R_X(\tau)\exp(-j2\pi f \tau)\, d\tau \tag{1.38} $$
$$ E[Y^2(t)] = \int_{-\infty}^{\infty} |H(f)|^2\, S_X(f)\, df \tag{1.39} $$
Recall (1.33): $E[Y^2(t)] = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(\tau_1)h(\tau_2)\, R_X(\tau_2 - \tau_1)\, d\tau_1\, d\tau_2$.
Let $|H(f)|$ be the magnitude response of an ideal narrowband filter:
$$ |H(f)| = \begin{cases} 1, & |f \pm f_c| < \frac{1}{2}\Delta f \\ 0, & |f \pm f_c| > \frac{1}{2}\Delta f \end{cases} \tag{1.40} $$
$\Delta f$: the filter bandwidth.
If $\Delta f \ll f_c$ and $S_X(f)$ is continuous,
$$ E[Y^2(t)] \simeq 2\Delta f\, S_X(f_c) $$
so $S_X(f)$ is measured in W/Hz.
Properties of the PSD
$$ S_X(f) = \int_{-\infty}^{\infty} R_X(\tau)\exp(-j2\pi f \tau)\, d\tau \tag{1.42} $$
$$ R_X(\tau) = \int_{-\infty}^{\infty} S_X(f)\exp(j2\pi f \tau)\, df \tag{1.43} $$
These are the Einstein-Wiener-Khinchine relations: $S_X(f) \rightleftharpoons R_X(\tau)$.
$S_X(f)$ is often more useful than $R_X(\tau)$!

a. $$ S_X(0) = \int_{-\infty}^{\infty} R_X(\tau)\, d\tau \tag{1.44} $$
b. $$ E[X^2(t)] = \int_{-\infty}^{\infty} S_X(f)\, df \tag{1.45} $$
c. If $X(t)$ is stationary, $E[Y^2(t)] \simeq (2\Delta f)\, S_X(f_c) \ge 0$ for any narrowband filter, hence
$$ S_X(f) \ge 0 \qquad \text{for all } f \tag{1.46} $$
d. $$ S_X(-f) = \int_{-\infty}^{\infty} R_X(\tau)\exp(j2\pi f \tau)\, d\tau = \int_{-\infty}^{\infty} R_X(u)\exp(-j2\pi f u)\, du = S_X(f), \qquad u = -\tau \tag{1.47} $$
e. The PSD can be associated with a pdf:
$$ p_X(f) = \frac{S_X(f)}{\int_{-\infty}^{\infty} S_X(f)\, df} \tag{1.48} $$
Example 1.5 Sinusoidal Wave with Random Phase
$$ X(t) = A\cos(2\pi f_c t + \Theta), \qquad \Theta \sim U(-\pi, \pi) $$
$$ R_X(\tau) = \frac{A^2}{2}\cos(2\pi f_c \tau) $$
$$ S_X(f) = \int_{-\infty}^{\infty} R_X(\tau)\exp(-j2\pi f \tau)\, d\tau = \frac{A^2}{4}\int_{-\infty}^{\infty}\left[\exp(j2\pi f_c \tau) + \exp(-j2\pi f_c \tau)\right]\exp(-j2\pi f \tau)\, d\tau $$
$$ = \frac{A^2}{4}\left[\delta(f - f_c) + \delta(f + f_c)\right] $$
using, from Appendix 2, $\int_{-\infty}^{\infty} \exp[j2\pi(f_c - f)t]\, dt = \delta(f - f_c)$.
Example 1.6 Random Binary Wave (Example 1.3)
$$ X(t) = \begin{cases} +A, & \text{if } m(t) = 1 \\ -A, & \text{if } m(t) = 0 \end{cases} $$
$$ R_X(\tau) = \begin{cases} A^2\left(1 - \dfrac{|\tau|}{T}\right), & |\tau| < T \\ 0, & |\tau| \ge T \end{cases} $$
$$ S_X(f) = \int_{-T}^{T} A^2\left(1 - \frac{|\tau|}{T}\right)\exp(-j2\pi f \tau)\, d\tau = A^2 T\,\mathrm{sinc}^2(fT) \tag{1.50} $$
Define the energy spectral density of a pulse as
$$ \varepsilon_g(f) = A^2 T^2\,\mathrm{sinc}^2(fT) \tag{1.51} $$
$$ S_X(f) = \frac{\varepsilon_g(f)}{T} \tag{1.52} $$
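The transform pair in (1.50) can be checked by direct numerical integration of the triangular autocorrelation against $\cos(2\pi f\tau)$ (the imaginary part vanishes by symmetry). NumPy's `np.sinc` uses the normalized convention $\mathrm{sinc}(x) = \sin(\pi x)/(\pi x)$, matching the text. $A$ and $T$ are arbitrary test values.

```python
# Numerical check of (1.50): FT of A**2 * (1 - |tau|/T) is A**2 * T * sinc(fT)**2.
import numpy as np

A, T = 2.0, 0.5
tau = np.linspace(-T, T, 200_001)
dtau = tau[1] - tau[0]
R = A**2 * (1 - np.abs(tau) / T)           # triangular autocorrelation

def S_numeric(f):
    """S_X(f) = integral of R_X(tau) * exp(-j 2 pi f tau) dtau (real by symmetry)."""
    return np.sum(R * np.cos(2 * np.pi * f * tau)) * dtau

freqs = np.array([0.0, 0.7, 1.3, 2.0])           # test frequencies in Hz
S_theory = A**2 * T * np.sinc(freqs * T)**2
S_num = np.array([S_numeric(f) for f in freqs])
```

At $f = 2.0$ Hz, $fT = 1$ and the spectrum has its first null, which the numerical integral reproduces.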
Example 1.7 Mixing of a Random Process with a Sinusoidal Process
$$ Y(t) = X(t)\cos(2\pi f_c t + \Theta), \qquad \Theta \sim U(0, 2\pi) $$
$$ R_Y(\tau) = E[Y(t+\tau)Y(t)] \tag{1.53} $$
$$ = E[X(t+\tau)X(t)]\, E[\cos(2\pi f_c t + 2\pi f_c \tau + \Theta)\cos(2\pi f_c t + \Theta)] $$
$$ = \frac{1}{2}R_X(\tau)\, E\left[\cos(2\pi f_c \tau) + \cos(4\pi f_c t + 2\pi f_c \tau + 2\Theta)\right] = \frac{1}{2}R_X(\tau)\cos(2\pi f_c \tau) \tag{1.54} $$
$$ S_Y(f) = \int_{-\infty}^{\infty} R_Y(\tau)\exp(-j2\pi f \tau)\, d\tau = \frac{1}{4}\int_{-\infty}^{\infty} R_X(\tau)\left[\exp(-j2\pi(f - f_c)\tau) + \exp(-j2\pi(f + f_c)\tau)\right] d\tau $$
$$ = \frac{1}{4}\left[S_X(f - f_c) + S_X(f + f_c)\right] \tag{1.55} $$
We shift $S_X(f)$ to the right by $f_c$, shift it to the left by $f_c$, add the two, and divide by 4.
Relation Between the PSDs of the Input and Output Random Processes
$$ X(t),\, S_X(f) \;\xrightarrow{\;h(t)\;}\; Y(t),\, S_Y(f) $$
Recall (1.32):
$$ R_Y(\tau) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(\tau_1)h(\tau_2)\, R_X(\tau - \tau_1 + \tau_2)\, d\tau_1\, d\tau_2 \tag{1.32} $$
$$ S_Y(f) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(\tau_1)h(\tau_2)\, R_X(\tau - \tau_1 + \tau_2)\exp(-j2\pi f \tau)\, d\tau_1\, d\tau_2\, d\tau $$
Let $\tau_0 = \tau - \tau_1 + \tau_2$, i.e., $\tau = \tau_0 + \tau_1 - \tau_2$:
$$ S_Y(f) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h(\tau_1)h(\tau_2)\, R_X(\tau_0)\exp(-j2\pi f \tau_1)\exp(j2\pi f \tau_2)\exp(-j2\pi f \tau_0)\, d\tau_1\, d\tau_2\, d\tau_0 $$
$$ = S_X(f)\, H(f)\, H^*(f) = |H(f)|^2\, S_X(f) \tag{1.58} $$
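A discrete-time sketch of $S_Y(f) = |H(f)|^2 S_X(f)$: white noise has a flat PSD equal to its variance, so integrating $|H(f)|^2 S_X(f)$ over frequency reduces, by Parseval, to $\sigma^2 \sum_n h[n]^2$ for an FIR filter. The filter taps below are an arbitrary example, not from the text.

```python
# Output power of filtered white noise: E[Y^2] = sigma**2 * sum(h**2).
import numpy as np

rng = np.random.default_rng(3)
sigma = 1.5
h = np.array([0.5, 1.0, 0.3, -0.2])          # example FIR impulse response

x = rng.normal(0, sigma, size=1_000_000)     # white Gaussian input, flat PSD
y = np.convolve(x, h, mode="valid")          # filtered output

power_est = np.mean(y**2)                    # measured E[Y^2]
power_theory = sigma**2 * np.sum(h**2)       # integral of |H(f)|^2 * S_X(f)
```

This is the discrete analog of Eq. (1.39) with $S_X(f) = \sigma^2$ flat over the band.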
Relation Between the PSD and the Magnitude Spectrum of a Sample Function
Let $x(t)$ be a sample function of a stationary and ergodic process $X(t)$. In general, the condition for Fourier transformability is
$$ \int_{-\infty}^{\infty} |x(t)|\, dt < \infty \tag{1.59} $$
This condition can never be satisfied by any stationary $x(t)$ of infinite duration. We may instead use the truncated transform
$$ X(f, T) = \int_{-T}^{T} x(t)\exp(-j2\pi f t)\, dt \tag{1.60} $$
Ergodicity lets us take time averages:
$$ R_X(\tau) = \lim_{T\to\infty} \frac{1}{2T}\int_{-T}^{T} x(t+\tau)x(t)\, dt \tag{1.61} $$
If $x(t)$ is a power signal (finite average power),
$$ \frac{1}{2T}\int_{-T}^{T} x(t+\tau)x(t)\, dt \;\rightleftharpoons\; \frac{1}{2T}|X(f, T)|^2 \tag{1.62} $$
i.e., the time-averaged autocorrelation and the periodogram form a Fourier-transform pair.
Taking the inverse Fourier transform of the right side of (1.62),
$$ \frac{1}{2T}\int_{-T}^{T} x(t+\tau)x(t)\, dt = \int_{-\infty}^{\infty} \frac{1}{2T}|X(f, T)|^2 \exp(j2\pi f \tau)\, df \tag{1.63} $$
From (1.61) and (1.63),
$$ R_X(\tau) = \lim_{T\to\infty} \int_{-\infty}^{\infty} \frac{1}{2T}|X(f, T)|^2 \exp(j2\pi f \tau)\, df \tag{1.64} $$
Note that for any given $x(t)$ the periodogram does not converge as $T \to \infty$. Since $x(t)$ is ergodic,
$$ R_X(\tau) = E[R_x(\tau)] = \lim_{T\to\infty} \int_{-\infty}^{\infty} \frac{1}{2T} E\left[|X(f, T)|^2\right] \exp(j2\pi f \tau)\, df $$
$$ R_X(\tau) = \int_{-\infty}^{\infty} \left\{\lim_{T\to\infty} \frac{1}{2T} E\left[|X(f, T)|^2\right]\right\} \exp(j2\pi f \tau)\, df \tag{1.66} $$
Recall (1.43): $R_X(\tau) = \int_{-\infty}^{\infty} S_X(f)\exp(j2\pi f \tau)\, df$. Hence
$$ S_X(f) = \lim_{T\to\infty} \frac{1}{2T} E\left[|X(f, T)|^2\right] = \lim_{T\to\infty} \frac{1}{2T} E\left[\left|\int_{-T}^{T} x(t)\exp(-j2\pi f t)\, dt\right|^2\right] \tag{1.67} $$
Equation (1.67) is used to estimate the PSD of $x(t)$.
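A sketch of the PSD estimator suggested by (1.67): a single periodogram does not converge, but averaging $|X(f,T)|^2/(2T)$ over many independent records does. In discrete time with unit sampling rate, the periodogram of a record of length $N$ is $|X[k]|^2/N$, and for white noise of variance $\sigma^2$ the two-sided PSD is flat at $\sigma^2$. Record length and record count are illustrative.

```python
# Averaged-periodogram PSD estimate for discrete-time white noise.
import numpy as np

rng = np.random.default_rng(4)
sigma, N, n_records = 1.0, 256, 2000

periodograms = np.empty((n_records, N))
for i in range(n_records):
    x = rng.normal(0, sigma, size=N)           # one finite record of x(t)
    X = np.fft.fft(x)
    periodograms[i] = np.abs(X)**2 / N         # periodogram of this record

S_est = periodograms.mean(axis=0)              # average over the ensemble
```

Each individual periodogram fluctuates with a standard deviation comparable to its mean; the averaging is what tames the estimate, which is the point of taking the expectation in (1.67).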
Cross-Spectral Densities
$$ S_{XY}(f) = \int_{-\infty}^{\infty} R_{XY}(\tau)\exp(-j2\pi f \tau)\, d\tau \tag{1.68} $$
$$ S_{YX}(f) = \int_{-\infty}^{\infty} R_{YX}(\tau)\exp(-j2\pi f \tau)\, d\tau \tag{1.69} $$
$S_{XY}(f)$ and $S_{YX}(f)$ may not be real.
$$ R_{XY}(\tau) = \int_{-\infty}^{\infty} S_{XY}(f)\exp(j2\pi f \tau)\, df $$
$$ R_{YX}(\tau) = \int_{-\infty}^{\infty} S_{YX}(f)\exp(j2\pi f \tau)\, df $$
Since $R_{XY}(\tau) = R_{YX}(-\tau)$ by (1.22),
$$ S_{XY}(f) = S_{YX}(-f) = S_{YX}^{*}(f) \tag{1.72} $$
Example 1.8 $X(t)$ and $Y(t)$ are zero-mean stationary processes. Consider $Z(t) = X(t) + Y(t)$. If $X(t)$ and $Y(t)$ are also uncorrelated, so that the cross-correlation terms vanish,
$$ S_Z(f) = S_X(f) + S_Y(f) \tag{1.75} $$
Example 1.9 $X(t)$ and $Y(t)$ are jointly stationary; $V(t)$ and $Z(t)$ are the outputs of filters $h_1(t)$ and $h_2(t)$ driven by $X(t)$ and $Y(t)$, respectively.
$$ R_{VZ}(t, u) = E[V(t)Z(u)] = E\left[\int_{-\infty}^{\infty} h_1(\tau_1)X(t-\tau_1)\, d\tau_1 \int_{-\infty}^{\infty} h_2(\tau_2)Y(u-\tau_2)\, d\tau_2\right] $$
$$ = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h_1(\tau_1)h_2(\tau_2)\, R_{XY}(t-\tau_1,\, u-\tau_2)\, d\tau_1\, d\tau_2 $$
Let $\tau = t - u$:
$$ R_{VZ}(\tau) = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} h_1(\tau_1)h_2(\tau_2)\, R_{XY}(\tau - \tau_1 + \tau_2)\, d\tau_1\, d\tau_2 \tag{1.77} $$
Taking the Fourier transform, as in the derivation of (1.58),
$$ S_{VZ}(f) = H_1(f)\, H_2^{*}(f)\, S_{XY}(f) $$
1.8 Gaussian Process
Define $Y$ as a linear functional of $X(t)$:
$$ Y = \int_0^T g(t)\,X(t)\, dt \qquad (g(t)\text{: some function}) \tag{1.79} $$
The process $X(t)$ is a Gaussian process if every linear functional of $X(t)$ is a Gaussian random variable:
$$ f_Y(y) = \frac{1}{\sqrt{2\pi}\,\sigma_Y}\exp\left(-\frac{(y - \mu_Y)^2}{2\sigma_Y^2}\right) \tag{1.80} $$
Normalized ($\mu_Y = 0$, $\sigma_Y = 1$):
$$ f_Y(y) = \frac{1}{\sqrt{2\pi}}\exp\left(-\frac{y^2}{2}\right), \qquad \text{i.e., } N(0, 1) \tag{1.81} $$
Fig. 1.13 Normalized Gaussian distribution
Central Limit Theorem
Let $X_i$, $i = 1, 2, \ldots, N$ be (a) statistically independent RVs that (b) have mean $\mu_X$ and variance $\sigma_X^2$, so that they are independently and identically distributed (i.i.d.). Normalize:
$$ Y_i = \frac{1}{\sigma_X}(X_i - \mu_X), \qquad i = 1, 2, \ldots, N $$
Hence $E[Y_i] = 0$ and $\operatorname{var}[Y_i] = 1$. Define
$$ V_N = \frac{1}{\sqrt{N}}\sum_{i=1}^{N} Y_i $$
The central limit theorem: the probability distribution of $V_N$ approaches $N(0, 1)$ as $N$ approaches infinity.
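A minimal demonstration of the statement above: take i.i.d. uniform variates (a deliberately non-Gaussian choice), normalize them exactly as in the definition of $Y_i$, and check that $V_N$ has mean near 0, variance near 1, and roughly the Gaussian probability mass inside one standard deviation. $N$ and the trial count are illustrative.

```python
# Central limit theorem sketch with uniform summands.
import numpy as np

rng = np.random.default_rng(5)
N, n_trials = 50, 200_000

X = rng.uniform(0, 1, size=(n_trials, N))       # X_i ~ U(0, 1)
mu_X, sigma_X = 0.5, np.sqrt(1 / 12)            # exact mean and std of U(0, 1)
Y = (X - mu_X) / sigma_X                        # normalized: E = 0, Var = 1
V = Y.sum(axis=1) / np.sqrt(N)                  # V_N, one value per trial

mean_V, var_V = V.mean(), V.var()
p_within_1sigma = np.mean(np.abs(V) < 1.0)      # approx 0.683 for N(0, 1)
```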
Properties of a Gaussian Process
1. If a Gaussian process $X(t)$ is applied to a stable linear filter $h(t)$, the output $Y(t)$ is also Gaussian:
$$ Y(t) = \int_0^T h(t - \tau)X(\tau)\, d\tau, \qquad 0 \le t < \infty $$
Define
$$ Z = \int_0^{\infty} g_Y(t)\,Y(t)\, dt = \int_0^{\infty} g_Y(t)\int_0^T h(t-\tau)X(\tau)\, d\tau\, dt = \int_0^T \left[\int_0^{\infty} g_Y(t)h(t-\tau)\, dt\right] X(\tau)\, d\tau = \int_0^T g(\tau)X(\tau)\, d\tau $$
where $g(\tau) = \int_0^{\infty} g_Y(t)h(t-\tau)\, dt$.
By the definition (1.79), $Z$ is a Gaussian random variable; hence $Y(t)$ is a Gaussian process.
2. If $X(t)$ is Gaussian, then $X(t_1), X(t_2), \ldots, X(t_n)$ are jointly Gaussian.
Let $\mu_{X(t_i)} = E[X(t_i)]$, $i = 1, 2, \ldots, n$, and let the set of covariance functions be
$$ C_X(t_k, t_i) = E\left[(X(t_k) - \mu_{X(t_k)})(X(t_i) - \mu_{X(t_i)})\right], \qquad k, i = 1, 2, \ldots, n $$
where $\mathbf{X} = [X(t_1), X(t_2), \ldots, X(t_n)]^{\mathsf T}$. Then
$$ f_{X(t_1),\ldots,X(t_n)}(x_1, \ldots, x_n) = \frac{1}{(2\pi)^{n/2} D^{1/2}}\exp\left(-\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu})^{\mathsf T}\,\boldsymbol{\Sigma}^{-1}(\mathbf{x} - \boldsymbol{\mu})\right) \tag{1.85} $$
where
$\boldsymbol{\mu}$ = mean vector $[\mu_1, \mu_2, \ldots, \mu_n]^{\mathsf T}$
$\boldsymbol{\Sigma}$ = covariance matrix $\{C_X(t_k, t_i)\}_{k,i=1}^{n}$
$D$ = determinant of the covariance matrix $\boldsymbol{\Sigma}$
3. If a Gaussian process is stationary then it is strictly stationary.
(This follows from Property 2)
4. If $X(t_1), X(t_2), \ldots, X(t_n)$ are uncorrelated, that is,
$$ E\left[(X(t_k) - \mu_{X(t_k)})(X(t_i) - \mu_{X(t_i)})\right] = 0, \qquad k \ne i $$
then they are independent.
Proof: uncorrelated implies
$$ \boldsymbol{\Sigma} = \begin{bmatrix} \sigma_1^2 & & 0 \\ & \ddots & \\ 0 & & \sigma_n^2 \end{bmatrix}, \qquad \sigma_i^2 = E\left[(X(t_i) - E[X(t_i)])^2\right], \quad i = 1, 2, \ldots, n $$
so $\boldsymbol{\Sigma}^{-1}$ is also a diagonal matrix. Substituting into (1.85),
$$ f_{X(t_1),\ldots,X(t_n)}(x_1, \ldots, x_n) = \frac{1}{(2\pi)^{n/2} D^{1/2}}\exp\left(-\frac{1}{2}(\mathbf{x} - \boldsymbol{\mu})^{\mathsf T}\boldsymbol{\Sigma}^{-1}(\mathbf{x} - \boldsymbol{\mu})\right) = \prod_{i=1}^{n} f_{X_i}(x_i) $$
where $X_i = X(t_i)$ and
$$ f_{X_i}(x_i) = \frac{1}{\sqrt{2\pi}\,\sigma_i}\exp\left(-\frac{(x_i - \mu_{X_i})^2}{2\sigma_i^2}\right) $$
1.9 Noise
· Shot noise
· Thermal noise
$$ E[V_{TN}^2] = 4kTR\,\Delta f \quad \text{volts}^2 $$
$$ E[I_{TN}^2] = \frac{1}{R^2}E[V_{TN}^2] = 4kT\frac{1}{R}\Delta f = 4kTG\,\Delta f \quad \text{amps}^2 $$
$k$: Boltzmann's constant $= 1.38 \times 10^{-23}$ joules/kelvin; $T$: the absolute temperature in kelvin.
· White noise
$$ S_W(f) = \frac{N_0}{2} \tag{1.93} $$
$$ N_0 = kT_e \tag{1.94} $$
$T_e$: the equivalent noise temperature of the receiver.
$$ R_W(\tau) = \frac{N_0}{2}\,\delta(\tau) \tag{1.95} $$
Example 1.10 Ideal Low-Pass Filtered White Noise
$$ S_N(f) = \begin{cases} \dfrac{N_0}{2}, & -B \le f \le B \\ 0, & |f| > B \end{cases} \tag{1.96} $$
$$ R_N(\tau) = \int_{-B}^{B} \frac{N_0}{2}\exp(j2\pi f \tau)\, df = N_0 B\,\mathrm{sinc}(2B\tau) \tag{1.97} $$
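The integral in (1.97) can be confirmed numerically: integrating the flat low-pass PSD against $\cos(2\pi f\tau)$ over $[-B, B]$ (the imaginary part cancels by symmetry) should reproduce $N_0 B\,\mathrm{sinc}(2B\tau)$ at any lag. $N_0$ and $B$ are arbitrary test values.

```python
# Numerical check of (1.96)-(1.97) for ideal low-pass filtered white noise.
import numpy as np

N0, B = 4.0, 2.0
f = np.linspace(-B, B, 400_001)
df = f[1] - f[0]

def R_numeric(tau):
    """R_N(tau) = integral of (N0/2) * cos(2 pi f tau) df over [-B, B]."""
    return np.sum((N0 / 2) * np.cos(2 * np.pi * f * tau)) * df

taus = np.array([0.0, 0.1, 0.25, 0.6])
R_theory = N0 * B * np.sinc(2 * B * taus)   # np.sinc(x) = sin(pi x)/(pi x)
R_num = np.array([R_numeric(t) for t in taus])
```

At $\tau = 0.25$, $2B\tau = 1$ and the autocorrelation has its first zero: samples taken $1/(2B)$ apart are uncorrelated.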
Example 1.11 Correlation of White Noise with a Sinusoidal Wave
White noise $w(t)$ is multiplied by $\sqrt{2/T}\cos(2\pi f_c t)$ and integrated over $(0, T)$:
$$ w' = \sqrt{\frac{2}{T}}\int_0^T w(t)\cos(2\pi f_c t)\, dt, \qquad f_c = \frac{k}{T}, \; k \text{ an integer} \tag{1.98} $$
The variance of $w'$ is
$$ \sigma^2 = E\left[\frac{2}{T}\int_0^T\int_0^T w(t_1)w(t_2)\cos(2\pi f_c t_1)\cos(2\pi f_c t_2)\, dt_1\, dt_2\right] $$
$$ = \frac{2}{T}\int_0^T\int_0^T R_W(t_1, t_2)\cos(2\pi f_c t_1)\cos(2\pi f_c t_2)\, dt_1\, dt_2 $$
From (1.95),
$$ = \frac{2}{T}\int_0^T\int_0^T \frac{N_0}{2}\,\delta(t_1 - t_2)\cos(2\pi f_c t_1)\cos(2\pi f_c t_2)\, dt_1\, dt_2 = \frac{N_0}{T}\int_0^T \cos^2(2\pi f_c t)\, dt = \frac{N_0}{2} \tag{1.99} $$
1.10 Narrowband Noise (NBN)
Two representations:
a. in-phase and quadrature components ($\cos(2\pi f_c t)$, $\sin(2\pi f_c t)$)
b. envelope and phase

1.11 In-Phase and Quadrature Representation
$$ n(t) = n_I(t)\cos(2\pi f_c t) - n_Q(t)\sin(2\pi f_c t) \tag{1.100} $$
$n_I(t)$ and $n_Q(t)$ are low-pass signals.
Important Properties
1. $n_I(t)$ and $n_Q(t)$ have zero mean.
2. If $n(t)$ is Gaussian, then $n_I(t)$ and $n_Q(t)$ are jointly Gaussian.
3. If $n(t)$ is stationary, then $n_I(t)$ and $n_Q(t)$ are jointly stationary.
4. $$ S_{N_I}(f) = S_{N_Q}(f) = \begin{cases} S_N(f - f_c) + S_N(f + f_c), & -B \le f \le B \\ 0, & \text{otherwise} \end{cases} \tag{1.101} $$
5. $n_I(t)$ and $n_Q(t)$ have the same variance as $n(t)$.
6. The cross-spectral density is purely imaginary:
$$ S_{N_I N_Q}(f) = -S_{N_Q N_I}(f) = \begin{cases} j\left[S_N(f + f_c) - S_N(f - f_c)\right], & -B \le f \le B \\ 0, & \text{otherwise} \end{cases} \tag{1.102} $$
7. If $n(t)$ is Gaussian and its PSD is symmetric about $f_c$, then $n_I(t)$ and $n_Q(t)$ are statistically independent.
Example 1.12 Ideal Band-Pass Filtered White Noise
$$ R_N(\tau) = \int_{-f_c-B}^{-f_c+B} \frac{N_0}{2}\exp(j2\pi f \tau)\, df + \int_{f_c-B}^{f_c+B} \frac{N_0}{2}\exp(j2\pi f \tau)\, df $$
$$ = N_0 B\,\mathrm{sinc}(2B\tau)\left[\exp(-j2\pi f_c \tau) + \exp(j2\pi f_c \tau)\right] = 2N_0 B\,\mathrm{sinc}(2B\tau)\cos(2\pi f_c \tau) \tag{1.103} $$
Compare with (1.97) (a factor of two):
$$ R_{N_I}(\tau) = R_{N_Q}(\tau) = 2N_0 B\,\mathrm{sinc}(2B\tau) $$
1.12 Representation in Terms of Envelope and Phase Components
$$ n(t) = r(t)\cos\left[2\pi f_c t + \psi(t)\right] \tag{1.105} $$
Envelope:
$$ r(t) = \left[n_I^2(t) + n_Q^2(t)\right]^{1/2} \tag{1.106} $$
Phase:
$$ \psi(t) = \tan^{-1}\left(\frac{n_Q(t)}{n_I(t)}\right) \tag{1.107} $$
Let $N_I$ and $N_Q$ be RVs obtained (at some fixed time) from $n_I(t)$ and $n_Q(t)$. $N_I$ and $N_Q$ are independent Gaussian with zero mean and variance $\sigma^2$.
$$ f_{N_I, N_Q}(n_I, n_Q) = \frac{1}{2\pi\sigma^2}\exp\left(-\frac{n_I^2 + n_Q^2}{2\sigma^2}\right) \tag{1.108} $$
$$ f_{N_I, N_Q}(n_I, n_Q)\, dn_I\, dn_Q = \frac{1}{2\pi\sigma^2}\exp\left(-\frac{n_I^2 + n_Q^2}{2\sigma^2}\right) dn_I\, dn_Q \tag{1.109} $$
Let
$$ n_I = r\cos\psi \tag{1.110} $$
$$ n_Q = r\sin\psi \tag{1.111} $$
$$ \Rightarrow dn_I\, dn_Q = r\, dr\, d\psi \tag{1.112} $$
Substituting (1.110)-(1.112) into (1.109),
$$ f_{N_I, N_Q}(n_I, n_Q)\, dn_I\, dn_Q = f_{R,\Psi}(r, \psi)\, dr\, d\psi = \frac{r}{2\pi\sigma^2}\exp\left(-\frac{r^2}{2\sigma^2}\right) dr\, d\psi $$
$$ f_{R,\Psi}(r, \psi) = \frac{r}{2\pi\sigma^2}\exp\left(-\frac{r^2}{2\sigma^2}\right), \qquad 0 \le \psi \le 2\pi \tag{1.113} $$
$$ f_\Psi(\psi) = \begin{cases} \dfrac{1}{2\pi}, & 0 \le \psi \le 2\pi \\ 0, & \text{elsewhere} \end{cases} \tag{1.114} $$
$$ f_R(r) = \begin{cases} \dfrac{r}{\sigma^2}\exp\left(-\dfrac{r^2}{2\sigma^2}\right), & r \ge 0 \\ 0, & \text{elsewhere} \end{cases} \tag{1.115} $$
$f_R(r)$ is the Rayleigh distribution.
For convenience, let $v = r/\sigma$; then $f_V(v) = \sigma f_R(r)$, giving the normalized form
$$ f_V(v) = \begin{cases} v\exp\left(-\dfrac{v^2}{2}\right), & v \ge 0 \\ 0, & \text{elsewhere} \end{cases} \tag{1.118} $$
Figure 1.22 Normalized Rayleigh distribution.
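A simulation sketch of the derivation above: draw $N_I$ and $N_Q$ as independent zero-mean Gaussians of variance $\sigma^2$ and check that the envelope $R = \sqrt{N_I^2 + N_Q^2}$ obeys the Rayleigh law, e.g. via its CDF $P(R \le r) = 1 - \exp(-r^2/2\sigma^2)$, while the phase comes out uniform. The value of $\sigma$ and the sample count are illustrative.

```python
# Envelope and phase of narrowband Gaussian noise: Rayleigh + uniform.
import numpy as np

rng = np.random.default_rng(6)
sigma, n = 2.0, 500_000

nI = rng.normal(0, sigma, size=n)
nQ = rng.normal(0, sigma, size=n)
r = np.hypot(nI, nQ)                             # envelope samples
psi = np.arctan2(nQ, nI)                         # phase in (-pi, pi]

r0 = 2.5
cdf_est = np.mean(r <= r0)
cdf_theory = 1 - np.exp(-r0**2 / (2 * sigma**2)) # Rayleigh CDF at r0
phase_mean = psi.mean()                          # ~0 for a symmetric phase law
mean_sq = np.mean(r**2)                          # E[R^2] = 2 * sigma**2
```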
1.13 Sine Wave Plus Narrowband Noise
$$ x(t) = A\cos(2\pi f_c t) + n(t) \tag{1.119} $$
$$ x(t) = n_I'(t)\cos(2\pi f_c t) - n_Q(t)\sin(2\pi f_c t) $$
$$ n_I'(t) = A + n_I(t) $$
If $n(t)$ is Gaussian with zero mean and variance $\sigma^2$:
1. $n_I'(t)$ and $n_Q(t)$ are Gaussian and statistically independent.
2. The mean of $n_I'(t)$ is $A$ and that of $n_Q(t)$ is zero.
3. The variance of both $n_I'(t)$ and $n_Q(t)$ is $\sigma^2$.
$$ f_{N_I', N_Q}(n_I', n_Q) = \frac{1}{2\pi\sigma^2}\exp\left(-\frac{(n_I' - A)^2 + n_Q^2}{2\sigma^2}\right) $$
Let
$$ r(t) = \left[n_I'^2(t) + n_Q^2(t)\right]^{1/2} \tag{1.123} $$
$$ \psi(t) = \tan^{-1}\left(\frac{n_Q(t)}{n_I'(t)}\right) \tag{1.124} $$
Following a similar procedure, we have
$$ f_{R,\Psi}(r, \psi) = \frac{r}{2\pi\sigma^2}\exp\left(-\frac{r^2 + A^2 - 2Ar\cos\psi}{2\sigma^2}\right) $$
$\Rightarrow$ $R$ and $\Psi$ are dependent.
$$ f_R(r) = \int_0^{2\pi} f_{R,\Psi}(r, \psi)\, d\psi = \frac{r}{2\pi\sigma^2}\exp\left(-\frac{r^2 + A^2}{2\sigma^2}\right)\int_0^{2\pi}\exp\left(\frac{Ar}{\sigma^2}\cos\psi\right) d\psi \tag{1.126} $$
The modified Bessel function of the first kind of zero order is defined as (Appendix 3)
$$ I_0(x) = \frac{1}{2\pi}\int_0^{2\pi}\exp(x\cos\psi)\, d\psi \tag{1.127} $$
Let $x = Ar/\sigma^2$; then
$$ f_R(r) = \frac{r}{\sigma^2}\exp\left(-\frac{r^2 + A^2}{2\sigma^2}\right) I_0\left(\frac{Ar}{\sigma^2}\right) \tag{1.128} $$
This is called the Rician distribution.
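A simulation sketch of the Rician law (1.128): add a sine-wave component $A$ on the in-phase axis, form the envelope $R = \sqrt{(A + N_I)^2 + N_Q^2}$, and compare a histogram against the formula. NumPy's `np.i0` is exactly the $I_0$ of (1.127). The values of $A$, $\sigma$ and the binning are illustrative.

```python
# Envelope of a sine wave plus narrowband Gaussian noise: Rician distribution.
import numpy as np

rng = np.random.default_rng(7)
A, sigma, n = 3.0, 1.0, 1_000_000

nI = rng.normal(0, sigma, size=n)
nQ = rng.normal(0, sigma, size=n)
r = np.hypot(A + nI, nQ)                         # envelope samples

edges = np.linspace(0, 8, 81)
hist, _ = np.histogram(r, bins=edges, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# Rician pdf (1.128) evaluated at the bin centers.
f_rician = (centers / sigma**2) \
    * np.exp(-(centers**2 + A**2) / (2 * sigma**2)) \
    * np.i0(A * centers / sigma**2)

max_err = np.max(np.abs(hist - f_rician))
```

With $A = 0$ the same code falls back to the Rayleigh case, since $I_0(0) = 1$.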
Normalized: let
$$ v = \frac{r}{\sigma}, \qquad a = \frac{A}{\sigma} \tag{1.131} $$
$$ f_V(v) = \sigma f_R(r) = v\exp\left(-\frac{v^2 + a^2}{2}\right) I_0(av) \tag{1.132} $$
Figure 1.23 Normalized Rician distribution.