Chapter 7
Generating and Processing Random Signals
Group 1
蔡馭理 (Electrical Engineering, B93902016)
林宜鴻 (Computer Science, B93902076)
Outline
Stationary and Ergodic Processes
Uniform Random Number Generators
Mapping Uniform RVs to an Arbitrary pdf
Generating Uncorrelated Gaussian RVs
Generating Correlated Gaussian RVs
PN Sequence Generators
Signal Processing
Random Number Generators
Random signals are used to represent noise and interference.
A random number generator is a computational or physical device designed to generate a sequence of numbers or symbols that lacks any pattern, i.e. appears random: a pseudo-random sequence.
MATLAB: rand(m,n), randn(m,n)
Stationary and Ergodic Process
strict-sense stationary (SSS)
wide-sense stationary (WSS)
Gaussian
SSS =>WSS ; WSS=>SSS
Time average v.s ensemble average
The ergodicity requirement is that the ensemble
average coincide with the time average
Sample function generated to represent signals,
noise, interference should be ergodic
4
Time Average vs. Ensemble Average
Time average: the average of a single sample function over all time.
Ensemble average: the average across all sample functions of the process at a fixed time.
For an ergodic process, the two coincide.
Example 7.1 (N = 100)
[Figure: sample functions x(t), y(t), z(t) on 0 ≤ t ≤ 2, each overlaid with its ensemble average]
x(t, ξi) = A cos(2πft + φi)
x(t, ξi) = A(1 + μi) cos(2πft + φi)
Uniform Random Number Generator
Goal: generate a random variable that is uniformly distributed on the interval (0,1).
Generate a sequence of integers between 0 and M, then divide each element of the sequence by M.
The most common technique is the linear congruential generator (LCG).
The Linear Congruential Generator
The LCG is defined by the operation:
x_{i+1} = [a·x_i + c] mod m
x_0 is the seed of the generator
a, c, m, x_0 are integers
Desirable property: full period
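As a quick illustration, the LCG recursion can be sketched in a few lines of Python; the function name and parameter choices here are illustrative, not from the slides:

```python
def lcg(seed, a, c, m):
    """Linear congruential generator: x_{i+1} = (a*x_i + c) mod m.
    Dividing each state by m yields variates on [0, 1)."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

# Parameters taken from Example 7.4: a = 241, c = 1323, m = 5000, seed x0 = 1
gen = lcg(1, a=241, c=1323, m=5000)
samples = [next(gen) for _ in range(5)]
```

Dividing by m maps the integer states into [0, 1), as described above.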
Technique A: The Mixed Congruential Algorithm
The mixed linear congruential algorithm takes the form:
x_{i+1} = [a·x_i + c] mod m
Full period is achieved if:
- c ≠ 0 and c is relatively prime to m
- a − 1 is a multiple of every prime factor p of m
- a − 1 is a multiple of 4 if m is a multiple of 4
Example 7.4
m = 5000 = (2³)(5⁴)
c = (3³)(7²) = 1323, which is relatively prime to m
a − 1 must be a multiple of 2, a multiple of 5, and a multiple of 4,
so a − 1 = 4·2·5·k = 40k
With k = 6, we have a = 241:
x_{i+1} = [241·x_i + 1323] mod 5000
We can verify that the period is 5000, so the generator is full period.
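The claimed full period can be checked by brute force; a small sketch (the helper name is ours):

```python
def lcg_period(a, c, m, seed):
    """Return the period of the LCG x_{i+1} = (a*x_i + c) mod m,
    i.e. the number of steps before a state repeats."""
    start = (a * seed + c) % m
    x, steps = start, 0
    while True:
        x = (a * x + c) % m
        steps += 1
        if x == start:
            return steps

# The mixed generator of Example 7.4 should be full period:
period = lcg_period(241, 1323, 5000, seed=1)
```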
Technique B: The Multiplicative Algorithm With Prime Modulus
The multiplicative generator is defined as:
x_{i+1} = [a·x_i] mod m
- m is prime (usually large)
- a is a primitive element mod m:
  (a^{m−1} − 1)/m = k = integer
  (a^{i} − 1)/m ≠ integer, i = 1, 2, 3, …, m − 2
Technique C: The Multiplicative Algorithm With Nonprime Modulus
The most important case of this generator has m equal to a power of two:
x_{i+1} = [a·x_i] mod 2ⁿ
The maximum period is 2ⁿ/4 = 2^{n−2}, and the period is achieved if:
- the multiplier a is 3 or 5 (mod 8)
- the seed x_0 is odd
Example of the Multiplicative Algorithm With Nonprime Modulus
a = 3, c = 0, m = 16, x_0 = 1
[Figure: output states plotted over about 35 iterations; the values cycle among 1, 3, 9, 11]
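The small example above (a = 3, m = 16, x_0 = 1) can be reproduced directly; the helper name is ours:

```python
def mult_lcg_states(a, m, x0, n_steps):
    """Iterate the multiplicative generator x_{i+1} = (a * x_i) mod m."""
    states = [x0]
    for _ in range(n_steps):
        states.append((a * states[-1]) % m)
    return states

# With a = 3, m = 2^4 = 16 and odd seed x0 = 1, the states cycle
# 1 -> 3 -> 9 -> 11 -> 1 -> ..., i.e. the maximum period 2^(4-2) = 4.
states = mult_lcg_states(3, 16, 1, 8)
```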
Testing Random Number Generators
Chi-square test, spectral test, …
Testing the randomness of a given sequence:
Scatterplots – a plot of x_{i+1} as a function of x_i
Durbin–Watson Test:
D = [(1/N) Σ_{n=2}^{N} (X[n] − X[n−1])²] / [(1/N) Σ_{n=2}^{N} X²[n]]
Scatterplots
Example 7.5
[Figure: scatterplots of x_{i+1} versus x_i on (0,1) for three generators:
(i) rand(1,2048)
(ii) x_{i+1} = [65·x_i + 1] mod 2048
(iii) x_{i+1} = [1229·x_i + 1] mod 2048]
Durbin–Watson Test (1)
D = [(1/N) Σ_{n=2}^{N} (X[n] − X[n−1])²] / [(1/N) Σ_{n=2}^{N} X²[n]]
Let X = X[n] and Y = X[n−1].
Assume X[n] and X[n−1] are correlated and X[n] is an ergodic process. Then
D = E{(X − Y)²} / E{X²} = E{(X − Y)²} / σ_x²
Let Y = ρX + √(1 − ρ²)·Z,  −1 ≤ ρ ≤ 1
Durbin–Watson Test (2)
With X and Z uncorrelated and zero mean:
D = (1/σ²) E{(1 − ρ)² X² − 2(1 − ρ)√(1 − ρ²) XZ + (1 − ρ²) Z²}
  = [(1 − ρ)² σ² + (1 − ρ²) σ²] / σ²
  = 2(1 − ρ)
D > 2 – negative correlation
D = 2 – uncorrelated (most desired)
D < 2 – positive correlation
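The statistic is straightforward to compute from a sample sequence; a sketch in Python (function name ours):

```python
def durbin_watson(x):
    """D = sum_{n=2}^{N} (x[n] - x[n-1])^2 / sum_{n=2}^{N} x[n]^2.
    D near 2 suggests an uncorrelated (white) sequence."""
    num = sum((x[n] - x[n - 1]) ** 2 for n in range(1, len(x)))
    den = sum(x[n] ** 2 for n in range(1, len(x)))
    return num / den

# A strongly negatively correlated sequence pushes D toward 4:
d_alt = durbin_watson([1.0, -1.0] * 50)
# A strongly positively correlated sequence pushes D toward 0:
d_const = durbin_watson([1.0] * 100)
```

Since D = 2(1 − ρ), an estimate of the correlation follows as ρ = 1 − D/2.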
Example 7.6
rand(1,2048): the value of D is 2.0081 and ρ is 0.0041.
x_{i+1} = [65·x_i + 1] mod 2048: the value of D is 1.9925 and ρ is 0.0037273.
x_{i+1} = [1229·x_i + 1] mod 2048: the value of D is 1.6037 and ρ is 0.19814.
Minimum Standards
A minimum-standard generator should:
- have full period
- pass all applicable statistical tests for randomness
- be easily transportable from one computer to another
The Lewis, Goodman, and Miller minimum standard (used prior to MATLAB 5):
x_{i+1} = [16807·x_i] mod (2³¹ − 1)
Mapping Uniform RVs to an Arbitrary pdf
The CDF of the target random variable is known in closed form – Inverse Transform Method
The pdf of the target random variable is known in closed form but the CDF is not – Rejection Method
Neither the pdf nor the CDF is known in closed form – Histogram Method
Inverse Transform Method
The CDF F_X(x) is known in closed form.
[Figure: F_X(x) rising from 0 to 1; a uniform value U on the vertical axis maps through F_X⁻¹(U) to a value x on the horizontal axis]
U = F_X(X) = Pr{X ≤ x}
X = F_X⁻¹(U)
Pr{X ≤ x} = Pr{F_X⁻¹(U) ≤ x} = Pr{U ≤ F_X(x)} = F_X(x)
Example 7.8 (1)
Rayleigh random variable with pdf:
f_R(r) = (r/σ²) exp(−r²/(2σ²)) u(r)
∴ F_R(r) = ∫₀ʳ (y/σ²) exp(−y²/(2σ²)) dy = 1 − exp(−r²/(2σ²))
Setting F_R(R) = U:
1 − exp(−R²/(2σ²)) = U
Example 7.8 (2)
∵ the RV 1 − U is equivalent to U (has the same pdf)
∴ exp(−R²/(2σ²)) = U
Solving for R gives
R = √(−2σ² ln U)
MATLAB: [n,xout] = hist(Y,nbins) computes the histogram; bar(xout,n) plots it.
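In Python (rather than MATLAB), the inverse transform for the Rayleigh RV might look like this; the function name is illustrative:

```python
import math
import random

def rayleigh_samples(sigma, n, seed=0):
    """Inverse transform method: R = sqrt(-2 sigma^2 ln U), U uniform on (0, 1]."""
    rng = random.Random(seed)
    # 1 - rng.random() lies in (0, 1], which avoids log(0)
    return [math.sqrt(-2 * sigma**2 * math.log(1 - rng.random()))
            for _ in range(n)]

samples = rayleigh_samples(1.0, 20000)
```

With σ = 1 the sample mean should approach the Rayleigh mean σ√(π/2) ≈ 1.2533.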
Example 7.8 (3)
[Figure: histogram of the generated samples (number of samples versus x) and the corresponding probability density, comparing the true Rayleigh pdf with the samples from the histogram]
The Histogram Method
When the CDF and pdf are unknown, build a piecewise-linear CDF from histogram bins:
P_i = Pr{x_{i−1} < X ≤ x_i} = c_i (x_i − x_{i−1})
F_X(x) = F_{i−1} + c_i (x − x_{i−1})
F_{i−1} = Pr{X ≤ x_{i−1}} = Σ_{j=1}^{i−1} P_j
Setting F_X(X) = U = F_{i−1} + c_i (X − x_{i−1}) and solving:
X = x_{i−1} + (1/c_i)(U − F_{i−1})
More samples, more accuracy!
Rejection Method (1)
Given a target pdf f_X(x), choose M and g_X(x) such that
M g_X(x) ≥ f_X(x), for all x
With a uniform g_X(x) = 1/a on (0, a):
M g_X(x) = M/a = b for 0 ≤ x ≤ a, and 0 otherwise, where b ≥ max{f_X(x)}
[Figure: f_X(x) lying under the bounding rectangle M g_X(x) = b on (0, a)]
Rejection Method (2)
1. Generate U1 and U2, uniform on (0,1).
2. Generate V1 = aU1, uniform on (0,a), where a is the maximum value of X.
3. Generate V2 = bU2, uniform on (0,b), where b is at least the maximum value of f_X(x).
4. If V2 ≤ f_X(V1), set X = V1. If the inequality is not satisfied, discard V1 and V2 and repeat from step 1.
Example 7.9 (1)
f_X(x) = (4/(πR²)) √(R² − x²) for 0 ≤ x ≤ R, and 0 otherwise
With g_X(x) = 1/R on (0, R) and M = 4/π:
M g_X(x) = 4/(πR) = max{f_X(x)}
[Figure: the quarter-ellipse pdf f_X(x) under the bounding rectangle M g_X(x) = 4/(πR)]
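The rejection steps applied to the Example 7.9 pdf can be sketched as follows (the function name is ours):

```python
import math
import random

def sample_example_79(R, n, seed=0):
    """Rejection method for f(x) = (4/(pi R^2)) sqrt(R^2 - x^2), 0 <= x <= R,
    using the bounding rectangle of height b = 4/(pi R)."""
    rng = random.Random(seed)
    b = 4 / (math.pi * R)
    out = []
    while len(out) < n:
        v1 = R * rng.random()          # candidate, uniform on (0, R)
        v2 = b * rng.random()          # uniform on (0, b)
        if v2 <= (4 / (math.pi * R**2)) * math.sqrt(R**2 - v1**2):
            out.append(v1)             # accept the candidate
    return out

samples = sample_example_79(1.0, 5000)
```

For R = 1 the target mean is E{X} = 4R/(3π) ≈ 0.4244; the acceptance rate is 1/M = π/4.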
Example 7.9 (2)
[Figure: histogram of the rejection-method samples (number of samples versus x) and the corresponding probability density, comparing the true pdf with the samples from the histogram]
Generating Uncorrelated Gaussian RVs
The Gaussian CDF cannot be written in closed form, so the inverse transform method cannot be used, and the rejection method is not efficient.
Other techniques:
1. The sum-of-uniforms method
2. Mapping a Rayleigh RV to Gaussian RVs
3. The polar method
The Sum-of-Uniforms Method (1)
1. Based on the central limit theorem.
2. Form
Y = B Σ_{i=1}^{N} (U_i − 1/2)
where U_i, i = 1, 2, …, N are independent uniform RVs and B is a constant that sets the variance of Y.
3. As N → ∞, Y converges to a Gaussian RV.
The Sum-of-Uniforms Method (2)
Expectation and variance:
E{U_i} = 1/2
E{Y} = B Σ_{i=1}^{N} (E{U_i} − 1/2) = 0
var{U_i} = ∫_{−1/2}^{1/2} x² dx = 1/12
σ_Y² = B² Σ_{i=1}^{N} var{U_i} = NB²/12
We can set σ_Y² to any desired value σ_y² by choosing
B = σ_y √(12/N)
The pdf of Y is nonzero only for |y| ≤ (N/2)B = σ_y √(3N).
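A sketch of the method with B chosen for a target variance (function names ours):

```python
import random

def sum_of_uniforms(sigma, N, rng):
    """Y = B * sum_{i=1}^{N} (U_i - 1/2) with B = sigma*sqrt(12/N),
    so that E{Y} = 0 and var{Y} = sigma^2."""
    B = sigma * (12 / N) ** 0.5
    return B * sum(rng.random() - 0.5 for _ in range(N))

rng = random.Random(1)
ys = [sum_of_uniforms(1.0, 12, rng) for _ in range(20000)]
```

With N = 12 and σ = 1, B = 1, which recovers the classic "sum of 12 uniforms minus 6" generator; note the samples can never exceed σ√(3N) = 6 in magnitude.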
The Sum-of-Uniforms Method (3)
The result is only approximately Gaussian: the pdf is truncated at ±σ_y√(3N), which may not be a realistic situation.
Mapping a Rayleigh RV to Gaussian RVs (1)
A Rayleigh RV can be generated by
R = √(−2σ² ln U), where U is a uniform RV on (0,1)
Assume X and Y are independent Gaussian RVs with joint pdf
f_XY(x, y) = (1/√(2πσ²)) exp(−x²/(2σ²)) · (1/√(2πσ²)) exp(−y²/(2σ²))
           = (1/(2πσ²)) exp(−(x² + y²)/(2σ²))
Mapping a Rayleigh RV to Gaussian RVs (2)
Transform to polar coordinates: let x = r cos θ and y = r sin θ, so that
r² = x² + y² and θ = tan⁻¹(y/x)
f_RΘ(r, θ) dA_RΘ = f_XY(x, y) dA_XY
The Jacobian of the transformation is
dA_XY/dA_RΘ = | ∂x/∂r  ∂x/∂θ ; ∂y/∂r  ∂y/∂θ | = r
f_RΘ(r, θ) = (r/(2πσ²)) exp(−r²/(2σ²))
Mapping a Rayleigh RV to Gaussian RVs (3)
Examine the marginal pdfs:
f_R(r) = ∫₀^{2π} (r/(2πσ²)) exp(−r²/(2σ²)) dθ = (r/σ²) exp(−r²/(2σ²)), 0 ≤ r < ∞
f_Θ(θ) = ∫₀^{∞} (r/(2πσ²)) exp(−r²/(2σ²)) dr = 1/(2π), 0 ≤ θ ≤ 2π
R is a Rayleigh RV and Θ is a uniform RV, so
X = R cos Θ = √(−2σ² ln U1) cos(2πU2)
Y = R sin Θ = √(−2σ² ln U1) sin(2πU2)
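These two equations are the Box–Muller transform; a direct Python sketch (function name ours):

```python
import math
import random

def rayleigh_to_gaussian(sigma, rng):
    """Map two uniforms to two independent Gaussians:
    R = sqrt(-2 sigma^2 ln U1) is Rayleigh, Theta = 2 pi U2 is uniform."""
    u1 = 1 - rng.random()              # in (0, 1], avoids log(0)
    u2 = rng.random()
    r = math.sqrt(-2 * sigma**2 * math.log(u1))
    return r * math.cos(2 * math.pi * u2), r * math.sin(2 * math.pi * u2)

rng = random.Random(0)
xs = [rayleigh_to_gaussian(1.0, rng)[0] for _ in range(20000)]
```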
The Polar Method
From the previous result:
X = √(−2σ² ln U1) cos 2πU2
Y = √(−2σ² ln U1) sin 2πU2
Let u and v be uniform on (−1, 1) and define
S = R² = u² + v²  (R = √S)
Then, given S ≤ 1,
cos 2πU2 = u/R = u/√S and sin 2πU2 = v/R = v/√S
and S itself is uniform on (0, 1), so it can play the role of U1:
X = √(−2σ² ln S) (u/√S) = u √(−2σ² ln S / S)
Y = √(−2σ² ln S) (v/√S) = v √(−2σ² ln S / S)
The Polar Method Algorithm
1. Generate two uniform RVs, U1 and U2, both on the interval (0,1).
2. Let V1 = 2U1 − 1 and V2 = 2U2 − 1, so they are independent and uniform on (−1,1).
3. Let S = V1² + V2². If S < 1, continue; else go back to step 1.
4. Form A(S) = √((−2 ln S)/S).
5. Set X = A(S)·V1 and Y = A(S)·V2.
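The steps above can be sketched directly in Python (function name ours):

```python
import math
import random

def polar_method(rng):
    """Marsaglia's polar method: two independent standard Gaussians
    without evaluating sin or cos."""
    while True:
        v1 = 2 * rng.random() - 1      # uniform on (-1, 1)
        v2 = 2 * rng.random() - 1
        s = v1 * v1 + v2 * v2
        if 0 < s < 1:                  # keep only points inside the unit circle
            a = math.sqrt(-2 * math.log(s) / s)
            return a * v1, a * v2

rng = random.Random(0)
pairs = [polar_method(rng) for _ in range(10000)]
```

Avoiding the trigonometric calls is the point of the method; the price is that a fraction 1 − π/4 of candidate pairs is rejected.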
Establishing a Given Correlation Coefficient (1)
Assume two Gaussian RVs X and Y that are zero mean and uncorrelated.
Define a new RV
Z = ρX + √(1 − ρ²)·Y,  |ρ| ≤ 1
Z is also a Gaussian RV. We show that ρ is the correlation coefficient relating X and Z.
Establishing a Given Correlation Coefficient (2)
Mean, variance, and correlation coefficient:
E{Z} = ρE{X} + √(1 − ρ²) E{Y} = 0
E{XY} = E{X}E{Y} = 0
σ_X² = σ_Y² = E{X²} − (E{X})² = E{X²} = E{Y²} = σ²
σ_Z² = E{[ρX + √(1 − ρ²) Y]²}
     = ρ²E{X²} + 2ρ√(1 − ρ²) E{XY} + (1 − ρ²)E{Y²}
     = ρ²σ² + (1 − ρ²)σ² = σ²
Establishing a Given Correlation Coefficient (3)
Covariance between X and Z:
E{XZ} = E{X[ρX + √(1 − ρ²) Y]}
      = ρE{X²} + √(1 − ρ²) E{XY}
      = ρσ²
ρ_XZ = E{XZ}/(σ_X σ_Z) = ρσ²/σ² = ρ
as desired.
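The construction can be verified empirically; a sketch (function name ours):

```python
import math
import random

def correlated_pair(rho, rng):
    """Z = rho*X + sqrt(1 - rho^2)*Y gives corr(X, Z) = rho
    for independent zero-mean, unit-variance Gaussians X and Y."""
    x = rng.gauss(0, 1)
    y = rng.gauss(0, 1)
    return x, rho * x + math.sqrt(1 - rho * rho) * y

rng = random.Random(0)
pairs = [correlated_pair(0.8, rng) for _ in range(20000)]
```

The sample correlation of the pairs should be close to the chosen ρ = 0.8.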
Pseudonoise (PN) Sequence Generators
A PN generator produces a periodic sequence that appears to be random.
It is generated by an algorithm using an initial seed.
Although not truly random, the sequence can pass many tests of randomness.
Unless the algorithm and seed are known, the sequence is impractical to predict.
PN Generator Implementation
[Figure: shift-register implementation of a PN generator]
Properties of the Linear Feedback Shift Register (LFSR)
Nearly random with a long period; may have the maximum period.
If the output of an m-stage LFSR has period N = 2^m − 1, it is called a maximal-length sequence or m-sequence.
We define the generator polynomial as g(D) = g_m D^m + g_{m−1} D^{m−1} + … + g_1 D + 1.
Coefficients that generate an m-sequence can always be found.
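A minimal Fibonacci-style LFSR sketch (the tap positions and names are illustrative; here taps [3, 4] on a 4-stage register yield an m-sequence of period 2⁴ − 1 = 15):

```python
def lfsr_sequence(taps, seed, n):
    """Fibonacci LFSR: output the last stage each tick; the feedback bit
    (XOR of the tapped stages, 1-indexed) shifts in at the front."""
    state = list(seed)
    out = []
    for _ in range(n):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return out

seq = lfsr_sequence([3, 4], [1, 0, 0, 0], 30)
```

One 15-bit period of this register contains 2³ = 8 ones and 7 zeros, as expected for an m-sequence.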
Example of a PN Generator
Different Seeds for the PN Generator
Family of m-Sequences
Properties of m-Sequences
An m-sequence of period N = 2^m − 1 has 2^{m−1} ones and 2^{m−1} − 1 zeros.
The periodic autocorrelation of a ±1 m-sequence is
R[k] = 1 for k = 0, ±N, ±2N, …, and R[k] = −1/N otherwise
If the PN sequence has a large period, the autocorrelation function approaches an impulse, and the PSD is approximately white, as desired.
PN Autocorrelation Function
Signal Processing
Relationships between the input and output of a linear system:
1. Mean of input and output
2. Variance of input and output
3. Input-output cross-correlation
4. Autocorrelation and PSD
Input/Output Means
Assume the system is linear, so the output is a convolution:
y[n] = Σ_k h[k] x[n − k]
E{y[n]} = E{Σ_k h[k] x[n − k]} = Σ_k h[k] E{x[n − k]}
Under the stationarity assumption, E{x[n − k]} = E{x[n]}, so
E{y} = E{x} Σ_k h[k]
Since Σ_k h[k] = H(0), we have E{y} = H(0) E{x}.
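The relation E{y} = H(0)·E{x} is easy to confirm numerically; a sketch with an illustrative FIR filter (names and parameters ours):

```python
import random

def check_mean_relation(h, n=20000, seed=0):
    """Empirically compare E{y} with H(0)*E{x} for an FIR filter h,
    where H(0) = sum of the impulse response."""
    rng = random.Random(seed)
    x = [1.0 + rng.uniform(-0.5, 0.5) for _ in range(n)]     # E{x} = 1
    y = [sum(h[k] * x[m - k] for k in range(len(h)))
         for m in range(len(h) - 1, n)]
    mean_y = sum(y) / len(y)
    predicted = sum(h) * (sum(x) / len(x))
    return mean_y, predicted

mean_y, predicted = check_mean_relation([0.5, 0.3, 0.2])
```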
Input/Output Cross-Correlation
The cross-correlation is defined by
R_xy[m] = E{x[n] y[n + m]} = E{x[n] Σ_j h[j] x[n + m − j]}
        = Σ_j h[j] E{x[n] x[n + m − j]}
        = Σ_j h[j] R_xx[m − j]
This result is used in the development of a number of performance estimators, which will be developed in Chapter 8.
Output Autocorrelation Function (1)
Autocorrelation of the output:
R_yy[m] = E{y[n] y[n + m]} = E{Σ_j h[j] x[n − j] · Σ_k h[k] x[n + m − k]}
        = Σ_j Σ_k h[j] h[k] E{x[n − j] x[n + m − k]}
        = Σ_j Σ_k h[j] h[k] R_xx[m − k + j]
This cannot be simplified without knowledge of the statistics of x[n].
Output Autocorrelation Function (2)
If the input is delta-correlated (i.e., white noise):
R_xx[m] = E{x[n] x[n + m]} = σ_x² for m = 0, and 0 for m ≠ 0
that is, R_xx[m] = σ_x² δ[m]. Substituting into the previous equation:
R_yy[m] = σ_x² Σ_j Σ_k h[j] h[k] δ[m − k + j] = σ_x² Σ_j h[j] h[j + m]
Input/Output Variances
By definition, R_yy[0] = E{y²[n]}, so for a zero-mean output σ_y² = R_yy[0].
Let m = 0 in
R_yy[m] = Σ_j Σ_k h[j] h[k] R_xx[m − k + j]
If x[n] is a white noise sequence:
σ_y² = R_yy[0] = σ_x² Σ_j h²[j]
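The white-noise variance relation can likewise be checked empirically; a sketch with an illustrative FIR filter (names and parameters ours):

```python
import random

def check_variance_relation(h, n=50000, seed=0):
    """For zero-mean white input, theory gives var{y} = var{x} * sum_j h[j]^2.
    Returns (measured, predicted) output variance."""
    rng = random.Random(seed)
    x = [rng.gauss(0, 1) for _ in range(n)]                  # white, variance 1
    y = [sum(h[k] * x[m - k] for k in range(len(h)))
         for m in range(len(h) - 1, n)]
    measured = sum(v * v for v in y) / len(y)
    predicted = 1.0 * sum(c * c for c in h)
    return measured, predicted

measured, predicted = check_variance_relation([1.0, 0.5, 0.25])
```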
The End
Thanks for listening