BLIND SOURCE SEPARATION BY KURTOSIS
MAXIMIZATION WITH APPLICATIONS IN WIRELESS
COMMUNICATIONS
Chong-Yung Chi (祁忠勇)
Institute of Communications Engineering &
Department of Electrical Engineering
National Tsing Hua University
Hsinchu, Taiwan 30013, R.O.C.
Tel: +886-3-5731156, Fax: +886-3-5751787
E-mail: [email protected]
http://www.ee.nthu.edu.tw/cychi/
Invited talk at I2R, Singapore, July 18, 2006.
Acknowledgments: The viewgraphs were prepared with the help of Chun-Hsien Peng.
OUTLINE
1. Introduction to Blind Source Separation (BSS)
2. FKMA and MSC Procedure
3. Turbo Source Extraction Algorithm (TSEA)
4. Non-cancellation Multistage Source (NCMS) Separation Algorithms
   - NCMS-FKMA
   - NCMS-TSEA
5. Simulation Results --- Part 1
6. Turbo Space-time Receiver for CCI/ISI Reduction
7. Simulation Results --- Part 2
8. Conclusions
FKMA: Fast Kurtosis Maximization Algorithm
MSC: Multistage Successive Cancellation
1
1. Blind Source Separation (BSS)

• Instantaneous Mixture of Sources

[Block diagram: K unknown source signals s_1[n], ..., s_K[n] (mutually independent but colored) pass through the unknown P x K mixing matrix A (a memoryless channel); noise w_1[n], ..., w_P[n] is added to give the P measurements x_1[n], ..., x_P[n] collected in x[n].]

x[n] = A s[n] + w[n] = \sum_{i=1}^{K} a_i s_i[n] + w[n]

where a_i (the ith column of A) is the basis vector that spans the subspace of s_i[n].

GOAL: Extract all the source signals s_i[n] with only the measurements x[n].

Applications: array signal processing, wireless communications, biomedical signal processing, etc. [1-3].
2
• Existing BSS Algorithms

SOS-based (AMUSE, SOBI; both use prewhitening). Assumptions on the sources s_k[n], k = 1, ..., K:
  1. statistically mutually uncorrelated
  2. zero mean
  3. temporally colored with distinct power spectra
  A is P x K with P >= K.

HOS-based (FOBI, EFOBI; both use prewhitening). Assumptions on the sources:
  1. statistically mutually independent
  2. zero mean
  3. distinct fourth-order moments
  A is P x K with P >= K.

HOS-based (FastICA with prewhitening; MSC-FKMA, MSC-TSEA, NCMS-FKMA, NCMS-TSEA, all multistage). Assumptions on the sources:
  1. statistically mutually independent
  2. zero mean
  3. non-Gaussian (non-zero fourth-order cumulants, e.g., kurtosis)
  A is P x K with P >= K.

In all cases the noise w[n] is assumed to be (1) zero mean and (2) Gaussian.
SOS: Second-order Statistics
HOS: Higher-order Statistics
3
AMUSE: Algorithm for Multiple Unknown Signals Extraction (Tong et al., 1990 [1])
SOBI: Second-order Blind Identification (Belouchrani et al., 1997 [2])
FOBI: Fourth-order Blind Identification (Cardoso, 1989 [12])
EFOBI: Extended Fourth-order Blind Identification (Tong et al., 1991 [1])
FastICA: Fast Independent Component Analysis (Hyvarinen et al., 1997 [13, 14])
MSC: Multistage Successive Cancellation
NCMS: Non-cancellation Multistage
FKMA: Fast Kurtosis Maximization Algorithm
TSEA: Turbo Source Extraction Algorithm
4

• AMUSE and SOBI Algorithms Using SOS

Step 1: Prewhitening by Eigenvalue Decomposition (EVD)

R_x = E\{x[n] x^H[n]\}   (P x P matrix)

The EVD of R_x gives
  \lambda_1, \lambda_2, ..., \lambda_K : the largest K eigenvalues of R_x
  f_1, f_2, ..., f_K : the associated K eigenvectors of R_x
  \hat{\sigma}_w^2 : the average of the other (smallest) P - K eigenvalues of R_x
  (assuming that R_w = E\{w[n] w^H[n]\} = \sigma_w^2 I)

\hat{D} = \left[ \frac{f_1}{\sqrt{\lambda_1 - \hat{\sigma}_w^2}}, ..., \frac{f_K}{\sqrt{\lambda_K - \hat{\sigma}_w^2}} \right]^H

z[n] = \hat{D} x[n] = U s[n] + \hat{D} w[n]   (dimension-reduced whitening; spatial processing)

where U is a K x K unitary matrix and \hat{D} is the whitening matrix.
AMUSE: Algorithm for Multiple Unknown Signals Extraction (Tong et al., 1990 [1])
SOBI: Second-order Blind Identification (Belouchrani et al., 1997 [2])
5
Step 2: Estimation of the Unitary Matrix U from R_z[k] = E\{z[n] z^H[n-k]\}   (K x K matrix)

x[n] --> prewhitening by EVD --> z[n], and then
  AMUSE: EVD of ( R_z[k] + R_z^H[k] ) / 2  -->  \hat{U}
  SOBI:  joint diagonalization of \{ R_z[k_i] \,|\, i = 1, ..., J \}  -->  \hat{U}

Step 3: Source Separation and Channel Estimation

\hat{s}[n] = \hat{U}^H z[n]   (spatial processing for simultaneous extraction of all the K sources)

W = \hat{U}^H \hat{D}   (demixing matrix)

\hat{A} = W^{\#} = \hat{D}^{\#} \hat{U}   (mixing matrix estimate; \#: pseudo-inverse)
6
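To make the two-step SOS procedure above concrete, the following is a minimal NumPy sketch of an AMUSE-style separator (prewhitening by EVD, then EVD of a symmetrized lagged covariance). It is an illustrative sketch only: the function name amuse_separate, the single lag, and the implicit unit-variance source assumption are mine, not from the talk.

import numpy as np

def amuse_separate(x, K, lag=1):
    """x: P x N measurement matrix, K: number of sources, lag: time lag k."""
    P, N = x.shape
    Rx = x @ x.conj().T / N                          # R_x = E{x[n] x^H[n]}
    eigval, eigvec = np.linalg.eigh(Rx)              # ascending order
    eigval, eigvec = eigval[::-1], eigvec[:, ::-1]   # largest first
    sigma_w2 = eigval[K:].mean() if P > K else 0.0   # noise-power estimate
    # D_hat: K x P dimension-reducing whitening matrix
    D = (eigvec[:, :K] / np.sqrt(eigval[:K] - sigma_w2)).conj().T
    z = D @ x                                        # z[n] = D_hat x[n]
    # AMUSE: EVD of the symmetrized lagged covariance (R_z[k] + R_z[k]^H)/2
    Rz = z[:, lag:] @ z[:, :-lag].conj().T / (N - lag)
    Rz = (Rz + Rz.conj().T) / 2
    _, U = np.linalg.eigh(Rz)                        # U_hat (K x K unitary)
    s_hat = U.conj().T @ z                           # s_hat[n] = U_hat^H z[n]
    A_hat = np.linalg.pinv(U.conj().T @ D)           # A_hat = W^# (mixing matrix estimate)
    return s_hat, A_hat

SOBI would replace the single-lag EVD with a joint diagonalization of several lagged covariances R_z[k_i].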
2. FKMA and MSC Procedure

• Assumptions:

(A1) The unknown P x K mixing matrix A is of full column rank with P \geq K.

(A2) s_i[n], i \in \{1, 2, ..., K\}, are modeled as the outputs of stable LTI systems b_i[n] driven by i.i.d. inputs:

     s_i[n] = u_i[n] \ast b_i[n]

     where u_i[n] is a zero-mean, non-Gaussian, independent identically distributed (i.i.d.) process
     with C_4\{u_i[n]\} \neq 0, and u_i[n] is statistically independent of u_j[n] for all i \neq j.

(A3) w[n] is zero-mean Gaussian and statistically independent of s[n].

cum\{z_1, z_2, z_3, z_4\}: fourth-order joint cumulant of the random variables z_1, z_2, z_3, z_4
C_4\{z\} = cum\{z, z, z^*, z^*\}   (referred to as the kurtosis of z)
FKMA: Fast Kurtosis Maximization Algorithm (Chi and Chen, 2001 [4,5])
MSC: Multistage Successive Cancellation
7

• Definition of HOS (i.e., Cumulants)   (Bartlett, 1955; Brillinger, 1975; etc.)

cum\{x_1, ..., x_M\} = (-j)^M \left. \frac{\partial^M \ln \Phi(\omega_1, ..., \omega_M)}{\partial \omega_1 \cdots \partial \omega_M} \right|_{\omega_1 = \cdots = \omega_M = 0}

where

\Phi(\omega_1, ..., \omega_M) = E\{ e^{\,j(\omega_1 x_1 + \cdots + \omega_M x_M)} \}

is the characteristic function of the random variables x_1, ..., x_M.

Assume that x_1, x_2, x_3, x_4 are zero-mean random variables. Then

cum\{x_1, x_2\} = E\{x_1 x_2\}
cum\{x_1, x_2, x_3\} = E\{x_1 x_2 x_3\}
cum\{x_1, x_2, x_3, x_4\} = E\{x_1 x_2 x_3 x_4\} - E\{x_1 x_2\} E\{x_3 x_4\} - E\{x_1 x_3\} E\{x_2 x_4\} - E\{x_1 x_4\} E\{x_2 x_3\}

C_4\{x\} = cum\{x, x, x^*, x^*\} = E\{|x|^4\} - 2 \left( E\{|x|^2\} \right)^2 - \left| E\{x^2\} \right|^2   (referred to as the kurtosis of x)
8
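As a quick illustration of the kurtosis C_4\{x\} defined above, here is a minimal NumPy sketch of its sample estimate for a zero-mean (possibly complex) sequence; the function names are mine.

import numpy as np

def kurtosis_c4(x):
    """Sample estimate of C4{x} = E{|x|^4} - 2(E{|x|^2})^2 - |E{x^2}|^2."""
    m4 = np.mean(np.abs(x) ** 4)       # E{|x|^4}
    m2 = np.mean(np.abs(x) ** 2)       # E{|x|^2}
    c2 = np.mean(x ** 2)               # E{x^2} (nonzero for real or improper signals)
    return m4 - 2 * m2 ** 2 - np.abs(c2) ** 2

def normalized_kurtosis_magnitude(x):
    """|C4{x}| / (E{|x|^2})^2, the objective J(.) used by the FKMA below."""
    return np.abs(kurtosis_c4(x)) / np.mean(np.abs(x) ** 2) ** 2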
• Fast Kurtosis Maximization Algorithm (FKMA)   (Chi et al., 2001 [4, 5])

Criterion [7]: maximize

J(v) = J(e[n]) = \frac{ | C_4\{e[n]\} | }{ \left( E\{ |e[n]|^2 \} \right)^2 }   (magnitude of the normalized kurtosis of e[n])

so that the optimum v yields

e[n] = v^T x[n] = \alpha_k s_k[n]   (noise-free case)

where \alpha_k is an unknown complex scale factor and k \in \{1, ..., K\}.

• Closed-form solution for v: none exists.
• Gradient-type iterative algorithms for finding a local optimum v are not very computationally efficient:

v^{(i)} = v^{(i-1)} + \mu Q^{(i-1)} \nabla J(v^{(i-1)})

where Q is a positive-definite matrix depending on the algorithm used, and \mu is the step size, chosen such that J(v^{(i)}) > J(v^{(i-1)}).
9

• Fast Kurtosis Maximization Algorithm (FKMA)   (Chi et al., 2001 [4, 5])

Criterion [7] (as above): maximize J(v) = J(e[n]) = |C_4\{e[n]\}| / (E\{|e[n]|^2\})^2, the magnitude of the normalized kurtosis of e[n] = v^T x[n], which in the noise-free case equals \alpha_k s_k[n] for an unknown complex scale factor \alpha_k and some k \in \{1, ..., K\}.

Algorithm (at the ith iteration):

1. Compute

   v^{(i)} = \frac{ R^{-1} d^{(i-1)} }{ \| R^{-1} d^{(i-1)} \| }

   where

   R = R_x^* = E\{ x^*[n] x^T[n] \}   (P x P matrix)
   d^{(i-1)} = cum\{ e^{(i-1)}[n], e^{(i-1)}[n], (e^{(i-1)}[n])^*, x^*[n] \}

   and e^{(i-1)}[n] = (v^{(i-1)})^T x[n].

2. If J(v^{(i)}) > J(v^{(i-1)}), accept v^{(i)} and go to the (i+1)th iteration; otherwise, update v^{(i)} through a gradient-type optimization algorithm such that J(v^{(i)}) > J(v^{(i-1)}).
9
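Below is a minimal NumPy sketch of the FKMA iteration just described, assuming zero-mean data and using the zero-mean cumulant formula from the earlier slide for d^{(i-1)}; the gradient-type fallback step is omitted (the sketch simply stops when the objective no longer increases), and the helper names are mine.

import numpy as np

def _J(e):
    # magnitude of the normalized kurtosis |C4{e}| / (E{|e|^2})^2
    m2 = np.mean(np.abs(e) ** 2)
    c4 = np.mean(np.abs(e) ** 4) - 2 * m2 ** 2 - np.abs(np.mean(e ** 2)) ** 2
    return np.abs(c4) / m2 ** 2

def fkma(x, v0, n_iter=50):
    """One FKMA run: x is P x N (zero-mean), v0 is the initial P-vector."""
    P, N = x.shape
    Rinv = np.linalg.inv(x.conj() @ x.T / N)     # R^{-1}, with R = E{x*[n] x^T[n]}
    v = v0 / np.linalg.norm(v0)
    J_prev = _J(v.T @ x)
    for _ in range(n_iter):
        e = v.T @ x                              # e[n] = v^T x[n]
        # d = cum{e, e, e*, x*} evaluated component-wise (zero-mean case)
        d = (x.conj() * (np.abs(e) ** 2 * e)).mean(axis=1) \
            - np.mean(e ** 2) * (x.conj() * e.conj()).mean(axis=1) \
            - 2 * np.mean(np.abs(e) ** 2) * (x.conj() * e).mean(axis=1)
        v_new = Rinv @ d
        v_new = v_new / np.linalg.norm(v_new)
        J_new = _J(v_new.T @ x)
        if J_new <= J_prev:                      # no improvement: stop here
            break                                # (the talk falls back to a gradient step)
        v, J_prev = v_new, J_new
    return v, v.T @ x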
• Observations:

- The FKMA itself is an exclusively spatial processing algorithm.

- The smaller the value of \gamma(s_i[n]), the worse the performance of the FKMA for finite SNR and finite data length N. By (A2),

  J(s_i[n]) = \gamma(s_i[n]) \cdot J(u_i[n])   (J(s_i[n]): absolute normalized kurtosis of s_i[n])

  where

  0 < \gamma(s_i[n]) = \frac{ \sum_k |b_i[k]|^4 }{ \left( \sum_k |b_i[k]|^2 \right)^2 } \leq 1   (an entropy measure of the stable sequence b_i[n])

  and equality holds only when b_i[n] = \alpha \delta[n - \tau], i.e., for minimum entropy of b_i[n].

- \gamma(s_i[n]) can be thought of as a measure of the distance of s_i[n] from a Gaussian process, implying that the performance of the FKMA (which requires s_i[n] to be non-Gaussian [6]) depends on \gamma(s_i[n]).
10

• MSC Procedure

Each stage of the Multistage Successive Cancellation (MSC) procedure operates on x[n] as follows (a_k: kth column of A):

1. Estimate one source signal \hat{s}_k[n] using the FKMA.
2. Obtain the corresponding column estimate

   \hat{a}_k = \frac{ E\{ x[n] \hat{s}_k^*[n] \} }{ E\{ |\hat{s}_k[n]|^2 \} }

3. Update x[n] \leftarrow x[n] - \hat{a}_k \hat{s}_k[n] and pass the result to the next stage.

NOTE: The estimated sources \hat{s}_k[n] and columns \hat{a}_k of A obtained at later stages of the MSC procedure may become less accurate due to error propagation from stage to stage [6].
11
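A minimal sketch of the MSC procedure, reusing the fkma function from the sketch above; the all-ones initialization and the function name msc_fkma are illustrative assumptions.

import numpy as np

def msc_fkma(x, K, n_iter=50):
    """Extract K sources by successive FKMA extraction and cancellation."""
    x = x.copy()
    P, N = x.shape
    sources, columns = [], []
    for _ in range(K):
        v0 = np.ones(P) / np.sqrt(P)
        _, s_hat = fkma(x, v0, n_iter)                            # estimate one source
        a_hat = (x @ s_hat.conj()) / np.sum(np.abs(s_hat) ** 2)   # a_hat_k = E{x s_hat*}/E{|s_hat|^2}
        x = x - np.outer(a_hat, s_hat)                            # cancel it from x[n]
        sources.append(s_hat)
        columns.append(a_hat)
    return np.array(sources), np.array(columns).T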
3. Turbo Source Extraction Algorithm

• Source Separation Filter:

v_{TSEA}[n] = v \, \bar{v}[n]   (a bank of identical temporal filters)

\varepsilon[n] = v_{TSEA}^T[n] \ast x[n] = \sum_{k=-\infty}^{\infty} v_{TSEA}^T[k] \, x[n-k]

where

v: P x 1 vector for extracting a colored source signal s_k[n], i.e., removing the spatial interference due to the mixing matrix A (spatial filter);
\bar{v}[n]: single-input single-output (SISO) deconvolution (or higher-order whitening) filter of order L to restore u_k[n] from s_k[n] (temporal filter).

• Design Criterion: maximize

J(\varepsilon[n]) = \gamma(\varepsilon[n]) \, J(u_k[n]) \geq J(e[n]) = \gamma(e[n]) \, J(u_k[n])

where \varepsilon[n] is the extracted source,

\varepsilon[n] = v_{TSEA}^T[n] \ast x[n] = v^T y[n] = e[n] \ast \bar{v}[n]

e[n] = \hat{s}_k[n] = v^T x[n]   (spatial processing)
y[n] = \bar{v}[n] \ast x[n]   (temporal processing)
12
Turbo Source Extraction Algorithm (TSEA)   (Chi et al., 2003 [3])

Signal processing procedure at the ith cycle:

Step 1 (temporal processing, then spatial FKMA):
  (a) y^{(i-1)}[n] = x[n] \ast \bar{v}[n], with \bar{v}[n] = \hat{\bar{v}}^{(i-1)}[n];
  (b) the FKMA applied to y^{(i-1)}[n] (FKMA(s)) yields the spatial filter \hat{v}^{(i)} and
      \varepsilon[n] = (\hat{v}^{(i)})^T y^{(i-1)}[n].

Step 2 (spatial processing, then temporal FKMA):
  (a) e[n] = v^T x[n] = \hat{s}_k[n], with v = \hat{v}^{(i)};
  (b) the FKMA applied to the delay vector \mathbf{e}[n] = [e[n], e[n-1], ..., e[n-L]]^T (FKMA(t)) yields
      the temporal filter \hat{\bar{v}}^{(i)}[n], i.e., \nu = [\bar{v}[0], \bar{v}[1], ..., \bar{v}[L]]^T, and
      \varepsilon[n] = \nu^T \mathbf{e}[n]   (extracted source).
13
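The following is a minimal sketch of one possible realization of the two-step TSEA cycle, again reusing the fkma function sketched earlier. The truncated convolution, the delta/all-ones initial conditions, and the fixed number of cycles are simplifying assumptions of mine.

import numpy as np

def tsea(x, L=5, n_cycles=3):
    """x: P x N measurements; returns spatial filter, temporal filter, extracted source."""
    P, N = x.shape
    v = np.ones(P) / np.sqrt(P)                 # spatial filter (initial condition)
    v_t = np.zeros(L + 1)                       # temporal filter (initially a delta)
    v_t[0] = 1.0
    for _ in range(n_cycles):
        # Step 1: temporally filter x[n] by v_t[n], then spatial FKMA, i.e., FKMA(s)
        y = np.array([np.convolve(row, v_t)[:N] for row in x])
        v, _ = fkma(y, v)
        # Step 2: e[n] = v^T x[n], then temporal FKMA on the delay vector, i.e., FKMA(t)
        e = v.T @ x
        E = np.array([np.concatenate([np.zeros(k), e[:N - k]]) for k in range(L + 1)])
        v_t, eps = fkma(E, v_t)
    return v, v_t, eps                          # eps[n] is the extracted source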
Why is the performance of the TSEA superior to that of the FKMA?

Interpretations:

1) Temporal processing:

\varepsilon[n] = e[n] \ast \bar{v}^{(i)}[n] = \hat{s}_k[n] \ast \bar{v}^{(i)}[n]
             = ( \alpha_k u_k[n] \ast b_k[n] ) \ast \bar{v}^{(i)}[n] = \alpha_k u_k[n] \ast g_k[n],   g_k[n] = b_k[n] \ast \bar{v}^{(i)}[n]

Increasing J(\varepsilon[n]) = \gamma(\varepsilon[n]) \, J(u_k[n]) is equivalent to increasing

\gamma(\varepsilon[n]) = \frac{ \sum_m |g_k[m]|^4 }{ \left( \sum_m |g_k[m]|^2 \right)^2 } \geq \gamma(s_k[n]).

2) Spatial processing:

y[n] = x[n] \ast \bar{v}^{(i)}[n] = A \tilde{s}[n] + \tilde{w}[n]

\tilde{s}[n] = ( \tilde{s}_1[n], ..., \tilde{s}_k[n] = s_k[n] \ast \bar{v}^{(i)}[n] \propto \varepsilon[n], ..., \tilde{s}_K[n] )^T

\gamma(\varepsilon[n]) = \gamma(\tilde{s}_k[n]) \geq \gamma(s_k[n])   and   \gamma(\tilde{s}_l[n]) = \gamma(s_l[n]), \; l \neq k
14
Remarks:

• The TSEA is computationally efficient, with a super-exponential convergence rate; it uses P parameters for the spatial processing and L + 1 parameters for the temporal processing.

• The performance gain of the TSEA reaches its maximum as long as the order L (a parameter under our choice) of the temporal filter is sufficiently large. On the other hand, the asymptotic performance of the FKMA approaches that of the TSEA as N -> \infty and SNR -> \infty.

• All the sources can be extracted through the MSC procedure. The resultant BSS algorithm that uses the TSEA, referred to as the MSC-TSEA, also outperforms the MSC-FKMA, at the extra expense of the temporal processing at each stage.
15
4. Non-Cancellation Multistage Source Separation Algorithms

• NCMS-FKMA

Constrained criterion:

v_\ell = \arg\max_{v} \{ J(v) = J(e_\ell[n]) : e_\ell[n] = v^T x[n], \; v^T C_\ell = 0_{\ell-1}^T \}   (constraint)

where C_\ell = [ \hat{a}_1, \hat{a}_2, ..., \hat{a}_{\ell-1} ], \; \ell = 2, 3, ..., K.

Unconstrained criterion: with v = (C_\ell^{\perp})^T \bar{v} (an unconstrained optimization problem),

\bar{v}_\ell = \arg\max_{\bar{v}} \{ J(\bar{v}) = J(e_\ell[n]) : e_\ell[n] = \bar{v}^T \bar{x}_\ell[n] \}

where C_\ell^{\perp} is the P x P projection matrix (onto the orthogonal complement of the column space of C_\ell) and \bar{x}_\ell[n] = C_\ell^{\perp} x[n].

Theorem 1: Let S_\ell be the set of all the source signals extracted up to stage \ell - 1. Under (A1), (A2), and the noise-free assumption, the optimum e_\ell[n] = \bar{v}^T \bar{x}_\ell[n] = v^T x[n] = \alpha_k s_k[n], where \alpha_k is an unknown nonzero constant and s_k[n] \notin S_\ell.
16
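A minimal NumPy sketch of how the projection matrix C_\ell^{\perp} used in the unconstrained criterion can be formed from the previously estimated columns via SVD; the rank tolerance and function name are my own choices.

import numpy as np

def orthogonal_projector(A_prev):
    """A_prev: P x (l-1) matrix [a_hat_1, ..., a_hat_{l-1}]; returns the P x P projector C_l^perp."""
    U, s, _ = np.linalg.svd(A_prev, full_matrices=False)
    r = int(np.sum(s > s.max() * 1e-12))        # numerical rank of A_prev
    Ur = U[:, :r]
    # projector onto the orthogonal complement of the column space of A_prev
    return np.eye(A_prev.shape[0]) - Ur @ Ur.conj().T

# usage sketch: x_bar = orthogonal_projector(np.column_stack(columns)) @ x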
• Signal Processing Procedure of the NCMS-FKMA (stage \ell)

Initial condition: v^{(0)} = (1, 1, ..., 1)^T / P.

1. Obtain C_\ell^{\perp} by SVD of C_\ell = [ \hat{a}_1, ..., \hat{a}_{\ell-1} ] and form \bar{x}_\ell[n] = C_\ell^{\perp} x[n].

2. (F-a) Estimate one source signal from \bar{x}_\ell[n] using the FKMA; the resulting constrained extraction filter \bar{v} provides a good initial condition (v^{(0)} = \bar{v}) for the next step.

3. (F-b) Estimate one source signal from x[n] using the FKMA (unconstrained), yielding the source estimate e_\ell[n] = \hat{s}_k[n].

4. Obtain

   \hat{a}_\ell = \frac{ E\{ x[n] \hat{s}_k^*[n] \} }{ E\{ |\hat{s}_k[n]|^2 \} }

   for use in the constraint of the later stages.
17
Remarks:

• The constrained source extraction filter \bar{v} obtained in (F-a) provides a suitable initial condition for the unconstrained source extraction filter v in (F-b). Accordingly, one distinct source estimate e_\ell[n] is obtained at each stage without cancellation and without imposing any constraints on the source extraction filter, and with faster convergence than (F-a) alone. Therefore, unlike the MSC-FKMA, the NCMS-FKMA is free from error propagation effects from stage to stage.

• Just as the MSC-TSEA performs better than the MSC-FKMA, the NCMS-TSEA also performs better than the NCMS-FKMA, at the moderate expense of the extra computational load for the temporal processing of the TSEA.
18

• Signal Processing Procedure of the NCMS-TSEA (stage \ell)

Initial condition: v^{(0)} = (1, 1, ..., 1)^T / P.

1. Obtain C_\ell^{\perp} by SVD of C_\ell = [ \hat{a}_1, ..., \hat{a}_{\ell-1} ] and form \bar{x}_\ell[n] = C_\ell^{\perp} x[n].

2. (T-a) Estimate one source signal \hat{s}_k[n] = \bar{e}_\ell[n] from \bar{x}_\ell[n] using the TSEA; the resulting spatial and temporal filters provide good initial conditions (v^{(0)} = \bar{v} and v^{(0)}[n] = \bar{v}[n]) for the next step.

3. (T-b) Estimate one source signal from x[n] using the TSEA, yielding the source estimate e_\ell[n].

4. Obtain

   \hat{a}_\ell = \frac{ E\{ x[n] \hat{s}_k^*[n] \} }{ E\{ |\hat{s}_k[n]|^2 \} }
19
5. Simulation Results --- Part 1

• Parameters Used:

- u_i[n]: zero-mean, independent binary sequences of {±1} with equal probability
- s_i[n]: generated by filtering u_i[n] through the chosen FIR filters b_i[n]
- w[n]: real white Gaussian noise vector
- SNR:

  SNR = \frac{ E\{ \| x[n] - w[n] \|^2 \} }{ E\{ \| w[n] \|^2 \} }

- 50 independent runs
- Output (extracted) signal-to-interference-plus-noise ratio (Output SINR):

  Output SINR = \frac{1}{K} \sum_{i=1}^{K} SINR_i
20
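For concreteness, here is a minimal sketch of how the Part-1 measurements could be generated under the parameters listed above (binary ±1 driving sequences, the exponential FIR filters b_i[n], and real white Gaussian noise scaled to a target SNR); the function name and seed handling are mine.

import numpy as np

def make_mixture(A, N, lambdas, snr_db, seed=0):
    """A: P x K mixing matrix, N: data length, lambdas: list of lambda_i values."""
    rng = np.random.default_rng(seed)
    P, K = A.shape
    u = rng.choice([-1.0, 1.0], size=(K, N))                 # u_i[n] in {+1, -1}
    n_taps = np.arange(6)                                    # n = 0, 1, ..., 5
    s = np.array([np.convolve(u[i], np.exp(-(n_taps + 1) / (10 * lambdas[i])))[:N]
                  for i in range(K)])                        # s_i[n] = u_i[n] * b_i[n]
    xs = A @ s                                               # noise-free part, x[n] - w[n]
    sig_pow = P * np.mean(np.abs(xs) ** 2)                   # E{||x[n] - w[n]||^2}
    noise_pow = sig_pow / 10 ** (snr_db / 10)                # target E{||w[n]||^2}
    w = rng.normal(size=(P, N)) * np.sqrt(noise_pow / P)     # real white Gaussian noise
    return xs + w, s, u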
• Part A: Performance of NCMS-FKMA and NCMS-TSEA

• The 5 x 4 mixing matrix A (taken from Chang et al., 1998 [9]) (P = 5, K = 4):

      [ -0.2380   0.2887  -0.7120   0.4914 ]
      [  0.3397  -0.7494  -0.1157   0.2097 ]
  A = [  0.6107   0.4959   0.2661   0.2504 ]
      [  0.2644  -0.3558  -0.4216  -0.6640 ]
      [ -0.5731  -0.1983  -0.4807   0.4593 ]

• b_i[n] = \exp\left( -\frac{n+1}{10 \lambda_i} \right),   n = 0, 1, ..., 5

• Four cases are considered as follows:

  Case 1: Output SINR versus SNR for different data lengths N.
  Case 2: Output SINR versus data length N.
  Case 3: Output SINR versus \gamma(s_i[n]) = \gamma (or \lambda_i = \lambda) for all i.
  Case 4: (a) Output SINR versus L.
          (b) (1/K) \sum_{k=1}^{K} J(\varepsilon_k[n]) versus L.
21

• \lambda_i = 0.5 (or \gamma_i(s_i[n]) = 0.2368) for all i, and L = 5

[Figure 1 plot: Output SINR (dB) versus SNR (dB), 0 to 30 dB, for the NCMS-TSEA and the NCMS-FKMA with N = 500, 1000, and 1500.]

Figure 1. Simulation results (Output SINR versus SNR) of Case 1.
22

• \lambda_i = 1 (or \gamma_i(s_i[n]) = 0.1856) for all i, L = 5, and SNR = 30 dB

[Figure 2 plot: Output SINR (dB) versus data length N (10^3 to 10^6) for the NCMS-TSEA and the NCMS-FKMA.]

Figure 2. Simulation results (Output SINR versus data length N) of Case 2.
23

• SNR = 30 dB, N = 2000, and L = 5

[Figure 3 plot: Output SINR (dB) versus \lambda (0.1 to 1) for the NCMS-TSEA and the NCMS-FKMA; the corresponding values of \gamma (0.745 down to 0.068) are marked along the top axis.]

Figure 3. Simulation results (Output SINR versus \lambda) of Case 3.
24

• SNR = 30 dB, N = 2000
• \lambda_i = \lambda = 1 and 0.5 (i.e., \gamma(s_i[n]) = \gamma = 0.1856 and 0.2368) for all i

[Figure 4a plot: Output SINR (dB) versus the order L of the temporal filter (0 to 9) for the NCMS-TSEA with \lambda = 0.5 (\gamma = 0.2368) and \lambda = 1 (\gamma = 0.1856).]

Figure 4a. Simulation results (Output SINR versus the order of the temporal filter L) of Case 4(a).
25

• SNR = 30 dB, N = 2000
• \lambda_i = \lambda = 1 and 0.5 (i.e., \gamma(s_i[n]) = \gamma = 0.1856 and 0.2368) for all i

[Figure 4b plot: (1/K) \sum_{k=1}^{K} J(\varepsilon_k[n]) versus the order L of the temporal filter (0 to 9) for the NCMS-TSEA with \lambda = 0.5 (\gamma = 0.2368) and \lambda = 1 (\gamma = 0.1856).]

Figure 4b. Simulation results ((1/K) \sum_{k=1}^{K} J(\varepsilon_k[n]) versus the order of the temporal filter L) of Case 4(b).
26

• Part B: Performance Comparison

• The same 5 x 4 mixing matrix A as in Part A, and

  b_i[n] = \exp\left( -\frac{n+1}{10 \lambda_i} \right),   n = 0, 1, ..., 5   (\lambda_1 = 1, \lambda_2 = 0.4, \lambda_3 = 0.3, \lambda_4 = 0.2)

  \gamma(s_1[n]) = 0.1856,   \gamma(s_2[n]) = 0.2706,   \gamma(s_3[n]) = 0.3335,   \gamma(s_4[n]) = 0.4644

• Data length N = 2000 and L = 5

• Comparison with the MSC-FKMA, AMUSE (lag k = 1) (Tong et al., 1990 [1]) and the SOBI algorithm (lags k_i = i, i = 1, 2, 3) (Belouchrani et al., 1997 [2])

• Three cases are considered as follows:

  Case A: Output SINR_1 versus SNR for N = 2000 and L = 5.
  Case B: Output SINR versus SNR for N = 2000 and L = 5.
  Case C: Output SINR versus N for SNR = 20 dB and L = 5.
27

[Figure 5 plot: Output SINR_1 (dB) versus SNR (dB), 0 to 30 dB, for the NCMS-TSEA, MSC-TSEA, NCMS-FKMA, MSC-FKMA, FastICA, the SOBI algorithm, and AMUSE.]

Figure 5. Simulation results (Output SINR_1 versus SNR) of Case A.
28

[Figure 6 plot: Output SINR (dB) versus SNR (dB), 0 to 30 dB, for the NCMS-TSEA, MSC-TSEA, NCMS-FKMA, MSC-FKMA, FastICA, the SOBI algorithm, and AMUSE.]

Figure 6. Simulation results (Output SINR versus SNR) of Case B.
29

[Figure 7 plot: Output SINR (dB) versus data length N (500 to 5000) for the NCMS-TSEA, MSC-TSEA, NCMS-FKMA, MSC-FKMA, FastICA, the SOBI algorithm, and AMUSE.]

Figure 7. Simulation results (Output SINR versus data length N) of Case C.
30
Case D: Output SINR versus \varepsilon

• A: a 3 x 2 mixing matrix obtained by removing the last two rows and columns of the mixing matrix in Part A (P = 3, K = 2):

  A = [ -0.2380   0.2887 ]
      [  0.3397  -0.7494 ]
      [  0.6107   0.4959 ]

• B_1(z) = (1 - 0.5 z^{-1})(1 - 0.8 z^{-1})(1 - 4 z^{-1})
  B_2(z) = [1 - (0.5 + \varepsilon) z^{-1}][1 - (0.8 + \varepsilon) z^{-1}][1 - (4 + \varepsilon) z^{-1}],   0.05 \leq \varepsilon \leq 0.40

• Data length N = 1000, SNR = 30 dB, and L = 3.

• Comparison with the MSC-FKMA, AMUSE (lag k = 1) and the SOBI algorithm (lags k_i = i, i = 1, 2, 3).
31

[Figure 8 plot: Output SINR (dB) versus \varepsilon (0.05 to 0.4) for the NCMS-TSEA, MSC-TSEA, NCMS-FKMA, MSC-FKMA, FastICA, the SOBI algorithm, and AMUSE.]

Figure 8. Simulation results (Output SINR versus \varepsilon) of Case D.
32
6. Turbo Space-time Receiver for CCI/ISI Reduction

• Problem Statement: CCI and ISI Suppression in TDMA Cellular Wireless Communications

[Diagram: a seven-cell frequency-reuse layout with frequencies f_1, ..., f_7; the desired user's data u[n] passes through a multipath channel h[n], is corrupted by noise w[n] and by CCI from a co-channel cell that also uses f_1, and is received as x[n].]

CCI: Co-channel Interference
ISI: Intersymbol Interference (due to multipath)

GOAL: Enhance data rate, link quality, capacity, and coverage.

Space-time processing using an antenna array has been used for combating CCI and ISI in receiver design [15-16].
33

• Signal Model:

Consider the scenario where the base station is equipped with multiple antennas, and the signal of interest and the CCI are received from multiple distinct directions of arrival (DOAs), with a frequency-selective fading channel for each DOA (a general scenario).

[Diagram: two users u_1[n] and u_2[n]; user 1 arrives via (h_{11}[n], \theta_{11}) and (h_{12}[n], \theta_{12}), user 2 via (h_{21}[n], \theta_{21}) and (h_{22}[n], \theta_{22}); the array outputs are x_1[n], x_2[n], ..., x_P[n].]
34
The received signal from the desired user and the K - 1 CCIs (users) can be expressed as an instantaneous mixture of multiple sources:

x[n] = A s[n] + w[n] = \sum_{k=1}^{K} A_k s_k[n] + w[n]

where

\{A\}_{P \times \rho} = ( A_1, A_2, ..., A_K ),   \{A_k\}_{P \times p_k} = ( a(\theta_{k1}), a(\theta_{k2}), ..., a(\theta_{k p_k}) )

s[n] = ( s_1^T[n], s_2^T[n], ..., s_K^T[n] )^T,   s_k[n] = ( s_{k1}[n], s_{k2}[n], ..., s_{k p_k}[n] )^T

s_{kj}[n] = h_{kj}[n] \ast u_k[n],   j = 1, 2, ..., p_k   (p_k is the number of DOAs of user k)

i.e., s_{kj}[n] is the "ISI-distorted" (colored) signal from the jth DOA of user k;

a(\theta_{kj}): steering vector of the jth DOA of user k
h_{kj}[n]: L_{kj}th-order channel impulse response of the jth DOA of user k

\rho = \sum_{k=1}^{K} p_k   (total number of DOAs, or "sources")
35

• Assumptions:

(A1) The unknown P x \rho DOA matrix A is of full column rank and P \geq \rho, where \rho = \sum_{k=1}^{K} p_k is the total number of DOAs (or "sources").

(A2) The data sequence u_1[n] of user 1 (the desired signal) is i.i.d., zero-mean, and non-Gaussian with C_4\{u_1[n]\} \neq 0, and is statistically independent of the other (K - 1) zero-mean i.i.d. data sequences u_k[n] (of the CCI).

(A3) w[n] is zero-mean Gaussian and statistically independent of u_k[n] for all k.

Note that s_k[n] = ( s_{k1}[n], s_{k2}[n], ..., s_{k p_k}[n] )^T consists of p_k correlated colored non-Gaussian sources, while s[n] = ( s_1^T[n], s_2^T[n], ..., s_K^T[n] )^T consists of K block-mutually-independent colored sources.
36

• Case I: Each user has a single DOA with multiple paths (Venkataraman et al., 2003), i.e.,

  p_k = 1, \; k = 1, 2, ..., K,   A = ( a(\theta_{11}), ..., a(\theta_{K1}) ),   s_{k1}[n] = u_k[n] \ast h_{k1}[n]

  so that s[n] = ( s_{11}[n], ..., s_{K1}[n] )^T consists of mutually independent colored sources.

• Case II: Each user has multiple DOAs with disjoint domains of support of the multipath channel impulse responses, i.e.,

  h_{ki}[n] \, h_{kj}[n] = 0   for i \neq j and 1 \leq k \leq K

  \Rightarrow   E\{ s_{ki}[n] s_{kj}^*[n] \} = \left( \sum_l h_{ki}[l] h_{kj}^*[l] \right) E\{ |u_k[n]|^2 \} = 0,   i \neq j

  so that s[n] = ( s_1^T[n], s_2^T[n], ..., s_K^T[n] )^T consists of K block-mutually-independent colored sources, whose \rho components are mutually independent random variables for each n.
37

• Conventional Cascade Space-Time Receiver (CSTR)   (Jelitto and Fettweis, 2002)

For Cases I and II, the conventional CSTR has been reported for CCI and ISI suppression:

x[n] --> [ spatial filter v ] --> e[n] \cong s_{1j}[n] --> [ temporal filter \bar{v}[n] ] --> \varepsilon[n] \cong u_1[n]

In CAMSAP-06, we proposed two space-time receivers based on kurtosis maximization for these two cases, along with a discussion of the proposed space-time receivers for the general scenario.

Other existing structures: full-dimension (joint) ST processing, and reduced-dimension ST processing (prewhitening followed by joint ST processing).
38

• Kurtosis Maximization (Ding and Nguyen, 2000): maximize

J(v) = J(e[n]) = \frac{ | C_4\{e[n]\} | }{ \left( E\{ |e[n]|^2 \} \right)^2 }   (magnitude of the normalized kurtosis of e[n])

so that, in the noise-free case,

e[n] = v^T x[n] = \alpha_{kj} s_{kj}[n] = \alpha_{kj} \, u_k[n] \ast h_{kj}[n]

where \alpha_{kj} is an unknown complex scale factor, k \in \{1, 2, ..., K\} and j \in \{1, 2, ..., p_k\}.

• Closed-form solution for v: none exists.
• Gradient-type iterative algorithms for finding a local optimum v are not very computationally efficient.
• Applicable not only to Case I but also to Case II (Peng et al., ICICS 2005).
39

• Fast Kurtosis Maximization Algorithm (FKMA)   (Chi and Chen, 2001):

At the ith iteration, compute

v^{(i)} = \frac{ R^{-1} d^{(i-1)} }{ \| R^{-1} d^{(i-1)} \| }

where

R = E\{ x^*[n] x^T[n] \}   (P x P matrix)
d^{(i-1)} = cum\{ e^{(i-1)}[n], e^{(i-1)}[n], (e^{(i-1)}[n])^*, x^*[n] \}

If J(v^{(i)}) > J(v^{(i-1)}), proceed to the (i+1)th iteration; otherwise, update v^{(i)} through a gradient-type optimization algorithm such that J(v^{(i)}) > J(v^{(i-1)}).
40

• Blind CSTR Using FKMA

Space-time processor:   x[n] --> [ spatial filter v ] --> e[n] \cong s_{1j}[n] --> [ temporal filter \bar{v}[n] ] --> \varepsilon[n] \cong u_1[n]

• Spatial processing using the FKMA for CCI suppression:

  e[n] = v^T x[n] \cong s_{1j}[n] = u_1[n] \ast h_{1j}[n]

  With a suitable initial condition for v, the FKMA converges at a super-exponential rate with e[n] \cong s_{1j}[n], j \in \{1, 2, ..., p_1\}, for high SNR.

• Temporal processing using the FKMA for ISI removal:

  \varepsilon[n] = \bar{v}[n] \ast e[n] = \nu^T \mathbf{e}[n] \cong \alpha u_1[n]

  where \nu = [ \bar{v}[0], \bar{v}[1], ..., \bar{v}[L] ]^T (L: order of the temporal filter) and \mathbf{e}[n] = [ e[n], e[n-1], ..., e[n-L] ]^T.
41

• Usually, the ISI-distorted (desired) signal s_{1j}[n] has higher power than all of the CCI, i.e., E\{ |s_{1j}[n]|^2 \} \gg E\{ |s_{ki}[n]|^2 \}, k \neq 1. Therefore

  \theta_{1j} = \arg\max_{\theta} E\{ | a^H(\theta) x[n] |^2 \}   (DOA estimate by delay-and-sum)

  implying that a(\theta_{1j}) can be used as the initial condition for the spatial filter v needed by the FKMA.

• It can easily be shown that (Chi et al., 2003)

  J(s_{1j}[n]) = \gamma(s_{1j}[n]) \cdot J(u_1[n]),

  where

  0 < \gamma(s_{1j}[n]) = \frac{ \sum_{m=0}^{L_{1j}} |h_{1j}[m]|^4 }{ \left( \sum_{m=0}^{L_{1j}} |h_{1j}[m]|^2 \right)^2 } \leq 1.

  The performance of the spatial filter v (to suppress CCI) using the FKMA is worse for smaller \gamma(s_{1j}[n]) and for larger L_{1j}, leading to limited performance of the temporal filter \bar{v}[n] of the blind CSTR.
42
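A minimal sketch of the delay-and-sum DOA initialization described above. The uniform linear array with half-wavelength spacing and the 0.5-degree search grid are assumptions of mine, not stated on the slide.

import numpy as np

def steering_vector(theta_deg, P):
    """Steering vector a(theta) for a uniform linear array with half-wavelength spacing."""
    k = np.arange(P)
    return np.exp(1j * np.pi * k * np.sin(np.deg2rad(theta_deg)))

def delay_and_sum_init(x, theta_grid=np.arange(-90.0, 90.5, 0.5)):
    """Return a(theta_hat), theta_hat, where theta_hat maximizes E{|a^H(theta) x[n]|^2}."""
    P, N = x.shape
    Rx = x @ x.conj().T / N
    powers = [np.real(steering_vector(t, P).conj() @ Rx @ steering_vector(t, P))
              for t in theta_grid]
    theta_hat = theta_grid[int(np.argmax(powers))]
    return steering_vector(theta_hat, P), theta_hat

The returned steering vector would then serve as the initial spatial filter v for the FKMA.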
Blind Turbo Space-Time Receiver (TSTR)   (Chi et al., 2003, 2006)

• Space-Time Filter for Source Extraction:

  v_{TSTR}[n] = v \, \bar{v}[n],   \varepsilon[n] = v_{TSTR}^T[n] \ast x[n]

• Design Criterion: maximize J(\varepsilon[n]); in the noise-free case the optimum

  \varepsilon[n] = v_{TSTR}^T[n] \ast x[n] = v^T y[n] \cong s_{1j}[n] \ast \bar{v}[n] \cong \alpha_1 u_1[n - \tau_{1j}]

  where

  e[n] = v^T x[n] \cong \alpha_{1j} s_{1j}[n]   (spatial processing)
  y[n] = x[n] \ast \bar{v}[n]   (temporal processing)
43

• Proposed Blind TSTR Using FKMA

Signal processing procedure at the ith cycle:

(S1) Temporal processing, then spatial FKMA:
     y^{(i)}[n] = x[n] \ast \bar{v}[n], with \bar{v}[n] = \bar{v}^{(i-1)}[n];
     the FKMA applied to y^{(i)}[n] yields the spatial filter v^{(i)} and \varepsilon_1^{(i)}[n] with objective J(\varepsilon_1^{(i)}[n]).

(S2) Spatial processing, then temporal FKMA:
     e^{(i)}[n] = v^T x[n] \cong s_{1j}[n], with v = v^{(i)};
     the FKMA applied to the delay vector of e^{(i)}[n] yields the temporal filter \bar{v}^{(i)}[n] and
     \varepsilon_2^{(i)}[n] \cong u_1[n] with objective J(\varepsilon_2^{(i)}[n]).

Set i = i + 1 and repeat. (S1) and (S2) correspond to the temporal and spatial filters of the CSTR structure, now updated cyclically.
44
Why is the performance of the blind TSTR superior to that of the blind CSTR?

Interpretations:

1) Temporal processing:

   \varepsilon_2^{(i)}[n] = e^{(i)}[n] \ast \bar{v}^{(i)}[n] \cong s_{1j}[n] \ast \bar{v}^{(i)}[n]
                        = ( \alpha_1 u_1[n] \ast h_{1j}[n] ) \ast \bar{v}^{(i)}[n] = \alpha_1 u_1[n] \ast g_{1j}[n],   g_{1j}[n] = h_{1j}[n] \ast \bar{v}^{(i)}[n]

   Increasing J(\varepsilon_2^{(i)}[n]) is equivalent to increasing

   \gamma(\varepsilon_2^{(i)}[n]) = \frac{ \sum_m |g_{1j}[m]|^4 }{ \left( \sum_m |g_{1j}[m]|^2 \right)^2 } \geq \gamma(s_{1j}[n]).

2) Spatial processing:

   y[n] = x[n] \ast \bar{v}^{(i)}[n] = A \tilde{s}[n] + \tilde{w}[n]

   \tilde{s}[n] = ( \tilde{s}_{11}[n], ..., \tilde{s}_{1j}[n] = s_{1j}[n] \ast \bar{v}^{(i)}[n] \cong \varepsilon_2^{(i)}[n], ..., \tilde{s}_{k1}[n], ..., \tilde{s}_{K p_K}[n] )^T

   \gamma(\varepsilon_2^{(i)}[n]) = \gamma(\tilde{s}_{1j}[n]) \geq \gamma(s_{1j}[n])   and   \gamma(\tilde{s}_{kl}[n]) = \gamma(s_{kl}[n]) for (k, l) \neq (1, j)
45
Remarks:

• It can be proven that

  J(\varepsilon_2^{(i-1)}[n]) \leq J(\varepsilon_1^{(i)}[n]) \leq J(\varepsilon_2^{(i)}[n]) \leq J(u_1[n])

  for all i, implying the guaranteed convergence of the proposed blind TSTR. Typically, the number of cycles spent by the TSTR before convergence is 2 or 3, so the computational load of the blind TSTR is approximately 2 or 3 times that of the blind CSTR.

• Because the designs of v and \bar{v}[n] are coupled in a constructive and boosting manner, the proposed blind TSTR outperforms the blind CSTR for all L, and their performance difference is larger for larger L.

• Compared with the blind CSTR, the proposed blind TSTR is insensitive to the value of \gamma(s_{1j}[n]) (i.e., it is robust against channels h_{1j}[n] with many paths or severe ISI).
46
Performance of the blind TSTR:

\varepsilon[n] = v_{TSTR}^T[n] \ast x[n],   where v_{TSTR}[n] = v \, \bar{v}[n]

CASE I (CCI suppression by v):

\varepsilon[n] = \left( v^T a(\theta_{11}) \, h_{11}[n] \ast \bar{v}[n] \right) \ast u_1[n] + residual CCI and noise,
with the first term \cong \alpha_1 u_1[n - \tau_1].

CASE II (CCI suppression by v; the other DOAs of user 1 are also suppressed by v):

\varepsilon[n] = \left( v^T a(\theta_{1i}) \, h_{1i}[n] \ast \bar{v}[n] \right) \ast u_1[n]
             + \left( \sum_{j \neq i} v^T a(\theta_{1j}) \, h_{1j}[n] \ast \bar{v}[n] \right) \ast u_1[n] + residual CCI and noise,
with the first term \cong \alpha_{1i} u_1[n - \tau_{1i}].

GENERAL CASE (CCI suppression by v):

\varepsilon[n] = \left( \sum_{j=1}^{p_1} v^T a(\theta_{1j}) \, h_{1j}[n] \ast \bar{v}[n] \right) \ast u_1[n] + residual CCI and noise \cong g_1 u_1[n - \tau_1];

here the spatial filter v and the temporal filter \bar{v}[n] combine the signals from all the DOAs in a constructive and boosting fashion.
47
7. Simulation Results --- Part 2

• Scenario of Case I

- u_i[n]: zero-mean, independent binary sequences of {±1} with equal probability
- Channels:
  H_1(z) = 0.6178 + 0.4325 z^{-1} + 0.3707 z^{-2} + 0.3089 z^{-3} + 0.2471 z^{-4} + 0.3707 z^{-5}
  H_2(z) = 0.4056 + 0.2839 z^{-1} + 0.3650 z^{-2} + 0.2839 z^{-3} + 0.1217 z^{-4} + 0.1622 z^{-5}
  H_3(z) = 0.3984 + 0.3187 z^{-1} + 0.3586 z^{-2} + 0.2390 z^{-3} + 0.1992 z^{-4} + 0.1195 z^{-5}
- DOAs: \theta_1 = 0°, \theta_2 = 40°, \theta_3 = 60°
- P = 10 (array size)
- R_S[0] = E\{ s[n] (s[n])^H \}: diagonal matrix
- w[n]: white Gaussian noise vector
- SNR:

  SNR = \frac{ E\{ \| a(\theta_1) s_1[n] \|^2 \} }{ E\{ \| w[n] \|^2 \} } = \frac{ E\{ |s_1[n]|^2 \} }{ \sigma_w^2 } = \frac{1}{\sigma_w^2}

- 50 independent runs
48

[Figure: results for Case I with \theta_1 = 0°, \theta_2 = 40°, \theta_3 = 60°, temporal filter order L = 20, SNR = 20 dB, and data length N = 2000.]
49

[Figure: blind CSTR versus proposed blind TSTR (Case I).]
50

[Figure: Case I results with data length N = 2000 and temporal filter order L = 20.]
51

[Figure: Case I results with SNR = 30 dB and temporal filter order L = 20.]
52

[Figure: Case I results with SNR = 30 dB and data length N = 2000.]
53
• Scenario of Case II

- u_i[n]: zero-mean, independent binary sequences of {±1} with equal probability
- Channels:
  H_{11}(z) = 0.5199 + 0.3639 z^{-1} + 0.3119 z^{-2}          H_{21}(z) = 0.3562 + 0.3206 z^{-1} + 0.1425 z^{-2}
  H_{12}(z) = 0.5754 z^{-3} + 0.2466 z^{-4} + 0.3288 z^{-5}   H_{22}(z) = 0.3776 z^{-3} + 0.2098 z^{-4} + 0.2518 z^{-5}
- DOAs: \theta_{11} = 0°, \theta_{12} = 20°, \theta_{21} = 40°, \theta_{22} = 60°
- P = 10 (array size)
- R_S[0] = E\{ s[n] (s[n])^H \}: diagonal matrix
- w[n]: real white Gaussian noise vector
- SNR:

  SNR = \frac{ E\{ \| a(\theta_{11}) s_{11}[n] + a(\theta_{12}) s_{12}[n] \|^2 \} }{ E\{ \| w[n] \|^2 \} }

- 50 independent runs
54

[Figure: results for Case II with \theta_{11} = 0°, \theta_{12} = 20°, \theta_{21} = 40°, \theta_{22} = 60°, temporal filter order L = 20, SNR = 20 dB, and data length N = 2000.]
55

[Figure: blind CSTR versus proposed blind TSTR (Case II).]
56

[Figure: Case II results with data length N = 2000 and temporal filter order L = 20.]
57

[Figure: Case II results with SNR = 30 dB and temporal filter order L = 20.]
58

[Figure: Case II results with SNR = 30 dB and data length N = 2000.]
59
• Scenario of the general case

- u_i[n]: zero-mean, independent binary sequences of {±1} with equal probability
- Channels:
  H_{11}(z) = 1 + 0.7 z^{-1} + 0.6 z^{-2} + 0.5 z^{-3}            H_{21}(z) = 1 + 0.9 z^{-1} + 0.4 z^{-2} + 0.3 z^{-3}
  H_{12}(z) = 0.8 z^{-2} + 0.7 z^{-3} + 0.3 z^{-4} + 0.4 z^{-5}   H_{22}(z) = z^{-2} + 0.9 z^{-3} + 0.5 z^{-4} + 0.6 z^{-5}
- DOAs: \theta_{11} = 0°, \theta_{12} = 20°, \theta_{21} = 40°, \theta_{22} = 60°
- P = 10 (array size)
- R_S[0] = E\{ s[n] (s[n])^H \}: block-diagonal matrix
- w[n]: white Gaussian noise vector
- SNR:

  SNR = \frac{ E\{ \| a(\theta_{11}) s_{11}[n] + a(\theta_{12}) s_{12}[n] \|^2 \} }{ E\{ \| w[n] \|^2 \} }

- 50 independent runs
60

[Figure: results for the general case with \theta_{11} = 0°, \theta_{12} = 20°, \theta_{21} = 40°, \theta_{22} = 60°, temporal filter order L = 20, SNR = 30 dB, and data length N = 2000.]
61

[Figure: blind CSTR versus proposed blind TSTR (general case).]
62

[Figure: general-case results with data length N = 2000 and temporal filter order L = 20.]
63

[Figure: general-case results with SNR = 30 dB and temporal filter order L = 20.]
64

[Figure: general-case results with SNR = 30 dB and data length N = 2000.]
65
8. Conclusions

• The FKMA involves only spatial processing for the extraction of one non-Gaussian (i.i.d. or colored) source from source mixtures. It performs well, with a super-exponential convergence rate, but its performance depends on the parameter 0 < \gamma(s_i[n]) \leq 1.

• We have introduced a novel blind source extraction algorithm, the TSEA, which operates cyclically, using the FKMA for both the temporal processing and the spatial processing. The proposed TSEA outperforms the FKMA for \gamma(s_i[n]) < 1, in addition to sharing the convergence speed and computational efficiency of the latter at each cycle.

• Because of the performance degradation resulting from error propagation in the MSC procedure, we further introduced two non-cancellation BSS algorithms, namely the NCMS-FKMA and the NCMS-TSEA, that extract a distinct source at each stage without error propagation.
66
• The two BSS algorithms, the NCMS-FKMA and the NCMS-TSEA, perform better than the existing MSC-FKMA and MSC-TSEA, respectively, with moderately higher computational complexity.

• The FKMA and the TSEA are under investigation for CCI and ISI suppression in MIMO wireless communications (e.g., OFDM and multi-rate CDMA) and for other applications such as 2-D MIMO systems in biomedical signal processing (with certain constraints or partial correlation between source signals).

• Some of the work in Parts 1 and 2 will be published in:

  C.-Y. Chi and C.-H. Peng, "Turbo source extraction algorithm and noncancellation source separation algorithms by kurtosis maximization," IEEE Trans. Signal Processing, vol. 54, no. 8, pp. 2929-2942, Aug. 2006.

  C.-H. Peng, C.-Y. Chi and C.-W. Chang, "Blind multiuser detection by kurtosis maximization for asynchronous multi-rate DS/CDMA systems," EURASIP Journal on Applied Signal Processing, vol. 2006, Article ID 84930, 17 pages, 2006. doi:10.1155/ASP/2006/84930. (Special issue: Multisensor Processing for Signal Extraction and Applications.)
67
• Background materials for this talk can be found in the following book:

  C.-Y. Chi, C.-C. Feng, C.-H. Chen and C.-Y. Chen, Blind Equalization and System Identification, London: Springer-Verlag, 2006.
Thank you very much
68
References
[1] L. Tong, R.-W. Liu, V. C. Soon, and Y.-F. Huang, ``Indeterminacy and identifiability of blind
identification,'' IEEE Trans. Circuits and Systems, vol. 38, pp. 499-509, May 1991.
[2] A. Belouchrani, K. Abed-Meraim, J. -F. Cardoso, and E. Moulines, ``A blind source separation
technique using second-order statistics,'' IEEE Trans. Signal Processing, vol. 45, pp. 434-444,
Feb. 1997.
[3] C.-Y. Chi, C.-J. Chen, F.-Y. Wang, C.-Y. Chen and C.-H. Peng, ``Turbo source separation algorithm
using HOS based inverse filter criteria,'' Proc. IEEE International Symposium on Signal
Processing and Information Technology, Darmstadt, Germany, Dec. 14-17, 2003.
[4] C.-Y. Chi and C.-Y. Chen, ``Blind beamforming and maximum ratio combining by kurtosis
maximization for source separation in multipath,'' Proc. IEEE Workshop on Signal Processing
Advances in Wireless Communications, Taoyuan, Taiwan, Mar. 20-23, 2001, pp. 243-246.
[5] C.-Y. Chi, C.-Y. Chen, C.-H. Chen and C.-C. Feng, ``Batch processing algorithms for blind
equalization using higher-order statistics,'' IEEE Signal Processing Magazine, vol. 20, pp. 25-49,
Jan. 2003.
69
[6] J. M. Mendel, ``Tutorial on higher-order statistics (spectra) in signal processing and system
theory: theoretical results and some applications,'' Proc. IEEE, vol. 79, pp. 278-305, Mar. 1991.
[7] C.-Y. Chi, C.-H. Chen and C.-Y. Chen, ``Blind MAI and ISI suppression for DS/CDMA systems
using HOS-based inverse filter criteria,'' IEEE Trans. Signal Processing, vol. 50, pp. 1368-1381,
June 2002.
[8] Z. Ding and T. Nguyen, ``Stationary points of a kurtosis maximization algorithm for blind signal
separation and antenna beamforming,'' IEEE Trans. Signal Processing, vol. 48, pp. 1587-1596,
June 2000.
[9] C. Chang, Z. Ding, S. F. Yau, and F. H. Y. Chan, ``A matrix-pencil approach to blind separation
of non-white sources in white noise,'' Proc. IEEE International Conference on Acoustics,
Speech, and Signal Processing, Seattle, WA, May 12-15, 1998, pp. 2485-2488.
[10] D. J. Moelker, A. Shah and Y. Bar-Ness, ``The generalized maximum SINR array processor
for personal communication systems in a multipath environment,'' Proc. IEEE International
Symposium on Personal, Indoor, and Mobile Radio Communications, vol. 2, Taipei, Oct. 15-18,
1996, pp. 531-534.
[11] V. Venkataraman, R. E. Cagley and J. J. Shynk, ``Adaptive beamforming for interference
rejection in an OFDM system,'' IEEE Conference Record of the Thirty-Seventh Asilomar
Conference on SSC, Nov. 9-12, 2003, vol. 1, pp. 507-511.
70
[12] J.-F. Cardoso, ``Source separation using higher order moments,'' Proc. IEEE International
Conference on Acoustics, Speech, and Signal Processing, Glasgow, UK, May 23-26, 1989, pp.
2109-2112.
[13] A. Hyvärinen, J. Karhunen, and E. Oja, Independent Component Analysis. New York: Wiley-Interscience, 2001.
[14] A. Hyvärinen and E. Oja, ``A fixed-point algorithm for independent component analysis,''
Neural Computation, vol. 9, pp. 1482-1492, 1997.
[15] J. Jelitto and G. Fettweis, ``Reduced dimension space-time processing for multi-antenna
wireless systems,'' IEEE Wireless Communications Mag., vol. 9, pp. 18-25, Dec. 2002.
[16] Jen-Wei Liang and A. J. Paulraj, ``Two stage CCI/ISI reduction with space-time processing in
TDMA cellular networks,'' Proc. 30th Asilomar Conference on Signals, Systems, and Computers,
vol. 1, Pacific Grove, CA, Nov. 3-6, 1996, pp. 607-611.
71