19. Series Representation of Stochastic Processes
Given information about a stochastic process $X(t)$ in $0 \le t \le T$, can this continuous information be represented in terms of a countable set of random variables whose relative importance decreases under some arrangement?
To appreciate this question it is best to start with the notion of
a mean-square periodic process. A stochastic process $X(t)$ is said to be mean square (M.S.) periodic if, for some $T > 0$,
$$E\big[\,|X(t+T)-X(t)|^2\,\big] = 0 \quad \text{for all } t, \qquad (19\text{-}1)$$
i.e., $X(t) \equiv X(t+T)$ with probability 1 for all $t$.
Suppose $X(t)$ is a W.S.S. process. Then

$X(t)$ is mean-square periodic $\iff$ $R(\tau)$ is periodic in the ordinary sense, where $R(\tau) = E[X(t)X^*(t+\tau)]$.

Proof: ($\Rightarrow$) Suppose $X(t)$ is M.S. periodic. Then
$$E\big[\,|X(t+T)-X(t)|^2\,\big] = 0. \qquad (19\text{-}2)$$
But from Schwarz' inequality
$$\big|E[X(t_1)\{X(t_2+T)-X(t_2)\}^*]\big|^2 \le E[\,|X(t_1)|^2\,]\,E[\,|X(t_2+T)-X(t_2)|^2\,] = 0.$$
Thus the left side equals zero, i.e.,
$$E[X(t_1)\{X(t_2+T)-X(t_2)\}^*] = 0,$$
or
$$E[X(t_1)X^*(t_2+T)] = E[X(t_1)X^*(t_2)] \;\Longrightarrow\; R(t_2 - t_1 + T) = R(t_2 - t_1)$$
for any $t_1,\, t_2$, i.e., $R(\tau)$ is periodic with period $T$. (19-3)

($\Leftarrow$) Suppose $R(\tau)$ is periodic with period $T$. Then, since periodicity gives $R(T) = R(0)$,
$$E\big[\,|X(t+T)-X(t)|^2\,\big] = 2R(0) - R(T) - R^*(T) = 0,$$
i.e., $X(t)$ is mean square periodic.
Thus if $X(t)$ is mean square periodic, then $R(\tau)$ is periodic, and we can let
$$R(\tau) = \sum_{n=-\infty}^{\infty} \gamma_n\, e^{\,j n \omega_0 \tau}, \qquad \omega_0 = \frac{2\pi}{T}, \qquad (19\text{-}4)$$
represent its Fourier series expansion. Here
$$\gamma_n = \frac{1}{T}\int_0^T R(\tau)\, e^{-j n \omega_0 \tau}\, d\tau. \qquad (19\text{-}5)$$
In a similar manner define
$$c_k = \frac{1}{T}\int_0^T X(t)\, e^{\,j k \omega_0 t}\, dt, \qquad k = 0, \pm 1, \pm 2, \ldots \qquad (19\text{-}6)$$
Notice that the $c_k$ are random variables, and
$$E[c_k c_m^*] = \frac{1}{T^2}\, E\!\left[\int_0^T X(t_1)\, e^{\,j k \omega_0 t_1}\, dt_1 \int_0^T X^*(t_2)\, e^{-j m \omega_0 t_2}\, dt_2\right] = \frac{1}{T^2}\int_0^T\!\!\int_0^T R(t_2 - t_1)\, e^{-j m \omega_0 (t_2 - t_1)}\, e^{-j(m-k)\omega_0 t_1}\, dt_1\, dt_2$$
$$= \frac{1}{T}\int_0^T\!\left[\frac{1}{T}\int_0^T R(t_2 - t_1)\, e^{-j m \omega_0 (t_2 - t_1)}\, d(t_2 - t_1)\right] e^{-j(m-k)\omega_0 t_1}\, dt_1 = \gamma_m\left\{\frac{1}{T}\int_0^T e^{-j(m-k)\omega_0 t_1}\, dt_1\right\} = \gamma_m\,\delta_{m,k} = \begin{cases}\gamma_m, & k = m\\ 0, & k \ne m,\end{cases} \qquad (19\text{-}7)$$
i.e., $\{c_n\}_{n=-\infty}^{\infty}$ form a sequence of uncorrelated random variables,

and, further, consider the partial sum
$$\tilde X_N(t) = \sum_{k=-N}^{N} c_k\, e^{-j k \omega_0 t}. \qquad (19\text{-}8)$$
We shall show that $\tilde X_N(t) \to X(t)$ in the mean square sense as $N \to \infty$, i.e.,
$$E\big[\,|X(t) - \tilde X_N(t)|^2\,\big] \to 0 \quad \text{as } N \to \infty. \qquad (19\text{-}9)$$
Proof:
$$E\big[\,|X(t) - \tilde X_N(t)|^2\,\big] = E[\,|X(t)|^2\,] - 2\,\mathrm{Re}\big(E[X(t)\tilde X_N^*(t)]\big) + E[\,|\tilde X_N(t)|^2\,]. \qquad (19\text{-}10)$$
But


$$E[\,|X(t)|^2\,] = R(0) = \sum_{k=-\infty}^{\infty} \gamma_k,$$
and
$$E[X(t)\tilde X_N^*(t)] = E\!\left[X(t)\sum_{k=-N}^{N} c_k^*\, e^{\,j k \omega_0 t}\right] = \sum_{k=-N}^{N} e^{\,j k \omega_0 t}\,\frac{1}{T}\int_0^T E[X(t)X^*(\xi)]\, e^{-j k \omega_0 \xi}\, d\xi$$
$$= \sum_{k=-N}^{N}\left[\frac{1}{T}\int_0^T R(\xi - t)\, e^{-j k \omega_0 (\xi - t)}\, d\xi\right] = \sum_{k=-N}^{N} \gamma_k. \qquad (19\text{-}12)$$
Similarly
$$E[\,|\tilde X_N(t)|^2\,] = \sum_{k}\sum_{m} E[c_k c_m^*]\, e^{-j(k-m)\omega_0 t} = \sum_{k=-N}^{N} \gamma_k. \qquad (19\text{-}13a)$$
Hence
$$E\big[\,|X(t) - \tilde X_N(t)|^2\,\big] \le 2\left(\sum_{k=-\infty}^{\infty} \gamma_k - \sum_{k=-N}^{N} \gamma_k\right) \to 0 \quad \text{as } N \to \infty, \qquad (19\text{-}13)$$
i.e.,
$$X(t) = \sum_{k=-\infty}^{\infty} c_k\, e^{-j k \omega_0 t}, \qquad -\infty < t < \infty. \qquad (19\text{-}14)$$
Thus mean square periodic processes can be represented in the form of a series as in (19-14). The stochastic information is contained in the random variables $c_k$, $k = 0, \pm 1, \pm 2, \ldots$. Further, these random variables are uncorrelated $\big(E[c_k c_m^*] = \gamma_k\,\delta_{k,m}\big)$ and their variances satisfy $\gamma_k \to 0$ as $|k| \to \infty$. This follows by noticing that from (19-14)
$$\sum_{k=-\infty}^{\infty} \gamma_k = R(0) = E[\,|X(t)|^2\,] = P < \infty.$$
Thus if the power $P$ of the stochastic process is finite, then the positive series $\sum_{k=-\infty}^{\infty} \gamma_k$ converges, and hence $\gamma_k \to 0$ as $|k| \to \infty$. This implies that the random variables in (19-14) are of relatively less importance as $|k| \to \infty$, and a finite approximation of the series in (19-14) is indeed meaningful.
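As a numerical aside, the construction in (19-6)-(19-14) is easy to exercise on a computer. The following minimal Python sketch synthesizes one sample path from uncorrelated coefficients and recovers them through the projection (19-6); the spectral weights $\gamma_k = 1/(1+k^2)$, the period $T = 2\pi$, and the grid size are arbitrary choices made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, K, M = 2 * np.pi, 20, 2000            # period, highest harmonic, grid points
w0 = 2 * np.pi / T
t = np.linspace(0.0, T, M, endpoint=False)
k = np.arange(-K, K + 1)
gamma = 1.0 / (1.0 + k**2)               # assumed variances of the uncorrelated c_k

# draw uncorrelated complex coefficients with E[|c_k|^2] = gamma_k
c = np.sqrt(gamma / 2) * (rng.standard_normal(k.size) + 1j * rng.standard_normal(k.size))

# one sample path of the mean-square periodic process, as in (19-14)
X = (c[None, :] * np.exp(-1j * np.outer(t, k) * w0)).sum(axis=1)

# recover the coefficients through (19-6); the factor 1/M is (1/T)*dt with dt = T/M
c_hat = (X[None, :] * np.exp(1j * np.outer(k, t) * w0)).sum(axis=1) / M

print("max recovery error:", np.abs(c_hat - c).max())
print("tail power beyond |k| = 5:", gamma[np.abs(k) > 5].sum())
```

The recovery error is at machine-precision level because the Riemann sum of complex exponentials over one full period is exact on a uniform grid, and the printed tail power shows how quickly the higher harmonics lose importance.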
The following natural question then arises: What about a general stochastic process that is not mean square periodic? Can it be represented in a similar series fashion as in (19-14), if not on the whole interval $-\infty < t < \infty$, then at least on a finite support $0 \le t \le T$?

Suppose that it is indeed possible to do so for an arbitrary process $X(t)$ in terms of a certain sequence of orthonormal functions, i.e.,
$$\tilde X(t) = \sum_{k=1}^{\infty} c_k\, \varphi_k(t), \qquad (19\text{-}15)$$
where
$$c_k = \int_0^T X(t)\, \varphi_k^*(t)\, dt, \qquad (19\text{-}16)$$
$$\int_0^T \varphi_k(t)\, \varphi_n^*(t)\, dt = \delta_{k,n}, \qquad (19\text{-}17)$$
and, in the mean square sense, $\tilde X(t) = X(t)$ in $0 \le t \le T$.
Further, as before, we would like the $c_k$'s to be uncorrelated random variables. If that should be the case, then we must have
$$E[c_k c_m^*] = \lambda_k\, \delta_{k,m}. \qquad (19\text{-}18)$$
Now
$$E[c_k c_m^*] = E\!\left[\int_0^T X(t_1)\,\varphi_k^*(t_1)\, dt_1 \int_0^T X^*(t_2)\,\varphi_m(t_2)\, dt_2\right] = \int_0^T \varphi_k^*(t_1)\left\{\int_0^T E[X(t_1)X^*(t_2)]\,\varphi_m(t_2)\, dt_2\right\} dt_1$$
$$= \int_0^T \varphi_k^*(t_1)\left\{\int_0^T R_{XX}(t_1, t_2)\,\varphi_m(t_2)\, dt_2\right\} dt_1, \qquad (19\text{-}19)$$
and
$$\lambda_m\,\delta_{k,m} = \lambda_m \int_0^T \varphi_k^*(t_1)\,\varphi_m(t_1)\, dt_1. \qquad (19\text{-}20)$$
Substituting (19-19) and (19-20) into (19-18), we get
$$\int_0^T \varphi_k^*(t_1)\left\{\int_0^T R_{XX}(t_1, t_2)\,\varphi_m(t_2)\, dt_2 - \lambda_m\,\varphi_m(t_1)\right\} dt_1 = 0. \qquad (19\text{-}21)$$
Since (19-21) should be true for every $\varphi_k(t)$, $k = 1, 2, \ldots$, we must have
$$\int_0^T R_{XX}(t_1, t_2)\,\varphi_m(t_2)\, dt_2 - \lambda_m\,\varphi_m(t_1) = 0,$$
or
$$\int_0^T R_{XX}(t_1, t_2)\,\varphi_m(t_2)\, dt_2 = \lambda_m\,\varphi_m(t_1), \qquad 0 \le t_1 \le T, \quad m = 1, 2, \ldots \qquad (19\text{-}22)$$
i.e., the desired uncorrelatedness condition in (19-18) gets translated into the integral equation in (19-22), which is known as the Karhunen-Loeve (K-L) integral equation. The functions $\{\varphi_k(t)\}_{k=1}^{\infty}$ are not arbitrary; they must be obtained by solving the integral equation in (19-22). They are known as the eigenfunctions of the autocorrelation function $R_{XX}(t_1, t_2)$. Similarly, the set $\{\lambda_k\}_{k=1}^{\infty}$ represents the eigenvalues of the autocorrelation function. From (19-18), the eigenvalues $\lambda_k$ represent the variances of the uncorrelated random variables $c_k$, $k = 1, 2, \ldots$. This also follows from Mercer's theorem, which allows the representation
$$R_{XX}(t_1, t_2) = \sum_{k=1}^{\infty} \lambda_k\, \varphi_k(t_1)\, \varphi_k^*(t_2), \qquad 0 \le t_1, t_2 \le T, \qquad (19\text{-}23)$$
where
$$\int_0^T \varphi_k(t)\, \varphi_m^*(t)\, dt = \delta_{k,m}. \qquad (19\text{-}24)$$
Here $\varphi_k(t)$ and $\lambda_k$, $k = 1, 2, \ldots$, are known as the eigenfunctions and eigenvalues of $R_{XX}(t_1, t_2)$ respectively. A direct substitution of (19-23) into (19-22) and simplification shows that the eigenfunctions and eigenvalues appearing in Mercer's expansion are precisely those of the K-L integral equation (19-22).
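A short numerical check of Mercer's theorem (19-23) is often reassuring: discretize an assumed kernel, take the eigenvalues and eigenvectors of the resulting matrix, and watch the partial sums of (19-23) converge back to the kernel. In the sketch below the kernel $R(t_1, t_2) = \min(t_1, t_2)$ on $[0, 1]$ and the grid size are illustrative assumptions (the same kernel reappears in Example 19.2).

```python
import numpy as np

T, n = 1.0, 400
t = (np.arange(n) + 0.5) * (T / n)          # midpoint grid
dt = T / n
R = np.minimum.outer(t, t)                  # kernel matrix R(t_i, t_j)

# discretized integral operator of (19-22): eigenpairs of R*dt
lam, V = np.linalg.eigh(R * dt)
order = np.argsort(lam)[::-1]
lam, V = lam[order], V[:, order]
phi = V / np.sqrt(dt)                       # columns approximate orthonormal eigenfunctions

# partial Mercer sums  sum_{k<=N} lambda_k * phi_k(t1) * phi_k(t2)
for N in (5, 20, 100):
    R_N = (phi[:, :N] * lam[:N]) @ phi[:, :N].T
    print(N, np.abs(R - R_N).max())         # reconstruction error shrinks with N
```

The full sum reproduces the kernel matrix exactly, since it is nothing but the matrix eigendecomposition written out term by term.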
Returning back to (19-15), once again the partial sum
$$X_N(t) = \sum_{k=1}^{N} c_k\, \varphi_k(t) \;\xrightarrow[\;N \to \infty\;]{}\; X(t), \qquad 0 \le t \le T, \qquad (19\text{-}25)$$
in the mean square sense. To see this, consider
$$E\big[\,|X(t) - X_N(t)|^2\,\big] = E[\,|X(t)|^2\,] - E[X(t)X_N^*(t)] - E[X^*(t)X_N(t)] + E[\,|X_N(t)|^2\,]. \qquad (19\text{-}26)$$
We have
$$E[\,|X(t)|^2\,] = R_{XX}(t, t). \qquad (19\text{-}27)$$
Also
$$E[X(t)X_N^*(t)] = \sum_{k=1}^{N} E[X(t)\, c_k^*]\,\varphi_k^*(t) = \sum_{k=1}^{N}\left(\int_0^T E[X(t)X^*(\xi)]\,\varphi_k(\xi)\, d\xi\right)\varphi_k^*(t)$$
$$= \sum_{k=1}^{N}\left(\int_0^T R_{XX}(t, \xi)\,\varphi_k(\xi)\, d\xi\right)\varphi_k^*(t) = \sum_{k=1}^{N} \lambda_k\, \varphi_k(t)\,\varphi_k^*(t) = \sum_{k=1}^{N} \lambda_k\, |\varphi_k(t)|^2. \qquad (19\text{-}28)$$
Similarly
$$E[X^*(t)X_N(t)] = \sum_{k=1}^{N} \lambda_k\, |\varphi_k(t)|^2, \qquad (19\text{-}29)$$
and
$$E[\,|X_N(t)|^2\,] = \sum_{k}\sum_{m} E[c_k c_m^*]\,\varphi_k(t)\,\varphi_m^*(t) = \sum_{k=1}^{N} \lambda_k\, |\varphi_k(t)|^2. \qquad (19\text{-}30)$$
Hence (19-26) simplifies into
$$E\big[\,|X(t) - X_N(t)|^2\,\big] = R_{XX}(t, t) - \sum_{k=1}^{N} \lambda_k\, |\varphi_k(t)|^2 \to 0 \quad \text{as } N \to \infty, \qquad 0 \le t \le T, \qquad (19\text{-}31)$$
where the convergence to zero follows from Mercer's theorem (19-23); i.e.,
$$X(t) = \sum_{k=1}^{\infty} c_k\, \varphi_k(t), \qquad 0 \le t \le T, \qquad (19\text{-}32)$$
where the random variables $\{c_k\}_{k=1}^{\infty}$ are uncorrelated and faithfully represent the random process $X(t)$ in $0 \le t \le T$, provided $\varphi_k(t)$, $k = 1, 2, \ldots$, satisfy the K-L integral equation (19-22).
Example 19.1: If $X(t)$ is a w.s.s. white noise process, determine the sets $\{\varphi_k(t), \lambda_k\}_{k=1}^{\infty}$ in (19-22).
Solution: Here
$$R_{XX}(t_1, t_2) = q\,\delta(t_1 - t_2), \qquad (19\text{-}33)$$
and
$$\int_0^T R_{XX}(t_1, t_2)\,\varphi_k(t_2)\, dt_2 = q\int_0^T \delta(t_1 - t_2)\,\varphi_k(t_2)\, dt_2 = q\,\varphi_k(t_1) \equiv \lambda_k\,\varphi_k(t_1). \qquad (19\text{-}34)$$
$\Rightarrow$ The $\varphi_k(t)$ can be arbitrary so long as they are orthonormal as in (19-17), and $\lambda_k = q$, $k = 1, 2, \ldots$. Then the power of the process is
$$P = E[\,|X(t)|^2\,] = R(0) = \sum_{k=1}^{\infty} \lambda_k = \sum_{k=1}^{\infty} q = \infty,$$
and in that sense white noise processes are unrealizable. However, if the received waveform is given by
$$r(t) = s(t) + n(t), \qquad 0 \le t \le T, \qquad (19\text{-}35)$$
and $n(t)$ is a w.s.s. white noise process, then since any set of orthonormal functions is sufficient for the white noise process representation, they can be chosen solely by considering the other signal $s(t)$. Thus, in (19-35)
$$R_{rr}(t_1 - t_2) = R_{ss}(t_1 - t_2) + q\,\delta(t_1 - t_2), \qquad (19\text{-}36)$$
and if
$$R_{ss}(t_1 - t_2) = \sum_{k=1}^{\infty} \lambda_k\, \varphi_k(t_1)\, \varphi_k^*(t_2), \qquad (19\text{-}37)$$
then it follows that
$$R_{rr}(t_1 - t_2) = \sum_{k=1}^{\infty} (\lambda_k + q)\, \varphi_k(t_1)\, \varphi_k^*(t_2). \qquad (19\text{-}38)$$
Notice that the eigenvalues of $R_{ss}(t_1 - t_2)$ get incremented by $q$.
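The eigenvalue shift in (19-36)-(19-38) can be seen directly in a discretized setting: the white-noise term contributes $\int q\,\delta(t - \tau)\varphi(\tau)\,d\tau = q\,\varphi(t)$, i.e., exactly $q$ times the identity operator, so every eigenvalue moves up by $q$ while the eigenfunctions stay put. A tiny check, with an assumed signal kernel $R_{ss}(t_1 - t_2) = e^{-|t_1 - t_2|}$ and $q = 0.5$ chosen only for illustration:

```python
import numpy as np

T, n, q = 1.0, 300, 0.5
t = (np.arange(n) + 0.5) * (T / n)
dt = T / n
Rss_op = np.exp(-np.abs(t[:, None] - t[None, :])) * dt   # discretized operator for R_ss
Rrr_op = Rss_op + q * np.eye(n)                          # white noise adds q * identity

lam_s = np.sort(np.linalg.eigvalsh(Rss_op))[::-1]
lam_r = np.sort(np.linalg.eigvalsh(Rrr_op))[::-1]
print(np.allclose(lam_r, lam_s + q))                     # True: every eigenvalue shifts by q
```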
Example 19.2: $X(t)$ is a Wiener process with
$$R_{XX}(t_1, t_2) = \sigma^2 \min(t_1, t_2) = \begin{cases} \sigma^2 t_2, & t_2 \le t_1 \\ \sigma^2 t_1, & t_1 \le t_2, \end{cases} \qquad \sigma > 0.$$
In that case Eq. (19-22) simplifies to
$$\int_0^T R_{XX}(t_1, t_2)\,\varphi_k(t_2)\, dt_2 = \int_0^{t_1} R_{XX}(t_1, t_2)\,\varphi_k(t_2)\, dt_2 + \int_{t_1}^{T} R_{XX}(t_1, t_2)\,\varphi_k(t_2)\, dt_2 = \lambda_k\,\varphi_k(t_1), \qquad (19\text{-}39)$$
and using (19-39) this simplifies to
$$\int_0^{t_1} \sigma^2 t_2\,\varphi_k(t_2)\, dt_2 + \int_{t_1}^{T} \sigma^2 t_1\,\varphi_k(t_2)\, dt_2 = \lambda_k\,\varphi_k(t_1). \qquad (19\text{-}40)$$
Taking the derivative with respect to $t_1$ [see Eqs. (8-5)-(8-6), Lecture 8] gives
$$\sigma^2 t_1\,\varphi_k(t_1) - \sigma^2 t_1\,\varphi_k(t_1) + \sigma^2\int_{t_1}^{T} \varphi_k(t_2)\, dt_2 = \lambda_k\,\varphi_k'(t_1),$$
or
$$\sigma^2\int_{t_1}^{T} \varphi_k(t_2)\, dt_2 = \lambda_k\,\varphi_k'(t_1). \qquad (19\text{-}41)$$
Once again taking the derivative with respect to $t_1$, we obtain
$$-\sigma^2\,\varphi_k(t_1) = \lambda_k\,\varphi_k''(t_1),$$
or
$$\frac{d^2\varphi_k(t_1)}{dt_1^2} + \frac{\sigma^2}{\lambda_k}\,\varphi_k(t_1) = 0, \qquad (19\text{-}42)$$
and its solution is given by
$$\varphi_k(t) = A_k \cos\frac{\sigma}{\sqrt{\lambda_k}}\,t + B_k \sin\frac{\sigma}{\sqrt{\lambda_k}}\,t. \qquad (19\text{-}43)$$
But from (19-40),
$$\varphi_k(0) = 0,$$
and from (19-41),
$$\varphi_k'(T) = 0. \qquad (19\text{-}44)$$
This gives $\varphi_k(0) = A_k = 0$, so that
$$\varphi_k(t) = B_k \sin\frac{\sigma}{\sqrt{\lambda_k}}\,t, \qquad \varphi_k'(t) = B_k\,\frac{\sigma}{\sqrt{\lambda_k}}\cos\frac{\sigma}{\sqrt{\lambda_k}}\,t, \qquad k = 1, 2, \ldots, \qquad (19\text{-}45)$$
and using (19-44) we obtain
$$\varphi_k'(T) = B_k\,\frac{\sigma}{\sqrt{\lambda_k}}\cos\frac{\sigma}{\sqrt{\lambda_k}}\,T = 0 \;\Longrightarrow\; \frac{\sigma}{\sqrt{\lambda_k}}\,T = (2k - 1)\frac{\pi}{2}, \qquad (19\text{-}46)$$
so that
$$\lambda_k = \frac{\sigma^2 T^2}{\left(k - \tfrac{1}{2}\right)^2 \pi^2} > 0, \qquad k = 1, 2, \ldots, \qquad (19\text{-}47)$$
and
$$\varphi_k(t) = B_k \sin\frac{\sigma}{\sqrt{\lambda_k}}\,t, \qquad 0 \le t \le T. \qquad (19\text{-}48)$$
Further, orthonormalization gives
$$\int_0^T \varphi_k^2(t)\, dt = B_k^2\int_0^T \sin^2\frac{\sigma}{\sqrt{\lambda_k}}\,t\; dt = B_k^2\int_0^T \frac{1 - \cos\frac{2\sigma}{\sqrt{\lambda_k}} t}{2}\, dt = B_k^2\,\frac{T}{2} - B_k^2\,\frac{\sqrt{\lambda_k}}{4\sigma}\,\sin(2k - 1)\pi = B_k^2\,\frac{T}{2} = 1,$$
since $\sin(2k-1)\pi = 0$, so that $B_k = \sqrt{2/T}$. Hence
$$\varphi_k(t) = \sqrt{\frac{2}{T}}\,\sin\frac{\sigma}{\sqrt{\lambda_k}}\,t = \sqrt{\frac{2}{T}}\,\sin\Big(k - \tfrac{1}{2}\Big)\frac{\pi}{T}\,t, \qquad (19\text{-}49)$$
and with $\lambda_k$ as in (19-47) and $c_k$ as in (19-16),
$$X(t) = \sum_{k=1}^{\infty} c_k\, \varphi_k(t)$$
is the desired series representation.
Example 19.3: Given
$$R_{XX}(\tau) = e^{-\beta|\tau|}, \qquad \beta > 0, \qquad (19\text{-}50)$$
find the orthonormal functions for the series representation of the underlying stochastic process $X(t)$ in $0 < t < T$.
Solution: We need to solve the equation
$$\int_0^T e^{-\beta|t_1 - t_2|}\,\varphi_n(t_2)\, dt_2 = \lambda_n\,\varphi_n(t_1). \qquad (19\text{-}51)$$
Notice that (19-51) can be rewritten as
$$\int_0^{t_1} e^{-\beta(t_1 - t_2)}\,\varphi_n(t_2)\, dt_2 + \int_{t_1}^{T} e^{-\beta(t_2 - t_1)}\,\varphi_n(t_2)\, dt_2 = \lambda_n\,\varphi_n(t_1). \qquad (19\text{-}52)$$
Differentiating (19-52) once with respect to t1, we obtain
$$\varphi_n(t_1) - \beta\int_0^{t_1} e^{-\beta(t_1 - t_2)}\,\varphi_n(t_2)\, dt_2 - \varphi_n(t_1) + \beta\int_{t_1}^{T} e^{-\beta(t_2 - t_1)}\,\varphi_n(t_2)\, dt_2 = \lambda_n\,\frac{d\varphi_n(t_1)}{dt_1},$$
i.e.,
$$-\beta\int_0^{t_1} e^{-\beta(t_1 - t_2)}\,\varphi_n(t_2)\, dt_2 + \beta\int_{t_1}^{T} e^{-\beta(t_2 - t_1)}\,\varphi_n(t_2)\, dt_2 = \lambda_n\,\frac{d\varphi_n(t_1)}{dt_1}. \qquad (19\text{-}53)$$
Differentiating (19-53) again with respect to t1, we get
$$-\beta\,\varphi_n(t_1) + \beta^2\int_0^{t_1} e^{-\beta(t_1 - t_2)}\,\varphi_n(t_2)\, dt_2 - \beta\,\varphi_n(t_1) + \beta^2\int_{t_1}^{T} e^{-\beta(t_2 - t_1)}\,\varphi_n(t_2)\, dt_2 = \lambda_n\,\frac{d^2\varphi_n(t_1)}{dt_1^2},$$
or, using (19-52) for the bracketed pair of integrals,
$$-2\beta\,\varphi_n(t_1) + \beta^2\,\lambda_n\,\varphi_n(t_1) = \lambda_n\,\frac{d^2\varphi_n(t_1)}{dt_1^2},$$
i.e.,
$$\frac{d^2\varphi_n(t_1)}{dt_1^2} = -\left[\frac{\beta(2 - \beta\lambda_n)}{\lambda_n}\right]\varphi_n(t_1). \qquad (19\text{-}54)$$
Eq. (19-54) represents a second order differential equation. The solution for $\varphi_n(t)$ depends on the value of the constant $\beta(\beta\lambda_n - 2)/\lambda_n$ on the right side. We shall show that solutions exist in this case only if
$$\beta\lambda_n < 2, \qquad\text{or}\qquad 0 < \lambda_n < \frac{2}{\beta}. \qquad (19\text{-}55)$$
In that case $\beta(\beta\lambda_n - 2)/\lambda_n < 0$. Let
$$\omega_n^2 = \frac{\beta(2 - \beta\lambda_n)}{\lambda_n} > 0, \qquad (19\text{-}56)$$
and (19-54) simplifies to
$$\frac{d^2\varphi_n(t_1)}{dt_1^2} = -\omega_n^2\,\varphi_n(t_1). \qquad (19\text{-}57)$$
General solution of (19-57) is given by
$$\varphi_n(t_1) = A_n \cos\omega_n t_1 + B_n \sin\omega_n t_1. \qquad (19\text{-}58)$$
From (19-52),
$$\varphi_n(0) = \frac{1}{\lambda_n}\int_0^T e^{-\beta t_2}\,\varphi_n(t_2)\, dt_2 \qquad (19\text{-}59)$$
and
$$\varphi_n(T) = \frac{1}{\lambda_n}\int_0^T e^{-\beta t_2}\,\varphi_n(T - t_2)\, dt_2. \qquad (19\text{-}60)$$
Similarly, from (19-53),
$$\varphi_n'(0) = \left.\frac{d\varphi_n(t_1)}{dt_1}\right|_{t_1 = 0} = \frac{\beta}{\lambda_n}\int_0^T e^{-\beta t_2}\,\varphi_n(t_2)\, dt_2 = \beta\,\varphi_n(0) \qquad (19\text{-}61)$$
and
$$\varphi_n'(T) = -\frac{\beta}{\lambda_n}\int_0^T e^{-\beta t_2}\,\varphi_n(T - t_2)\, dt_2 = -\beta\,\varphi_n(T). \qquad (19\text{-}62)$$
Using (19-58) in (19-61) gives
$$B_n\,\omega_n = \beta\,A_n, \qquad\text{or}\qquad \frac{A_n}{B_n} = \frac{\omega_n}{\beta}, \qquad (19\text{-}63)$$
and using (19-58) in (19-62), we have
$$-A_n\,\omega_n\sin\omega_n T + B_n\,\omega_n\cos\omega_n T = -\beta\,(A_n\cos\omega_n T + B_n\sin\omega_n T),$$
or
$$(A_n\,\beta + B_n\,\omega_n)\cos\omega_n T = (A_n\,\omega_n - B_n\,\beta)\sin\omega_n T.$$
Thus the $\omega_n$'s are obtained as the solutions of the transcendental equation
$$\tan\omega_n T = \frac{A_n\,\beta + B_n\,\omega_n}{A_n\,\omega_n - B_n\,\beta} = \frac{2(\omega_n/\beta)}{(\omega_n/\beta)^2 - 1}, \qquad (19\text{-}64)$$
which simplifies to
$$\tan\left(\frac{\omega_n T}{2}\right) = \frac{\beta}{\omega_n}. \qquad (19\text{-}65)$$
In terms of $\omega_n$, from (19-56) we get
$$\lambda_n = \frac{2\beta}{\beta^2 + \omega_n^2} > 0. \qquad (19\text{-}66)$$
Thus the eigenvalues are obtained through the solutions of the transcendental equation (19-65) (see Fig. 19.1). For each such $\omega_n$ (or $\lambda_n$), the corresponding eigenfunction is given by (19-58). Thus
$$\varphi_n(t) = A_n\cos\omega_n t + B_n\sin\omega_n t = c_n\sin(\omega_n t + \psi_n) = c_n\cos\omega_n\big(t - \tfrac{T}{2}\big), \qquad 0 \le t \le T, \qquad (19\text{-}67)$$
since from (19-65)
$$\psi_n = \tan^{-1}\!\left(\frac{A_n}{B_n}\right) = \tan^{-1}\!\left(\frac{\omega_n}{\beta}\right) = \frac{\pi}{2} - \frac{\omega_n T}{2}, \qquad (19\text{-}68)$$
and $c_n$ is a suitable normalization constant.
[Fig. 19.1: Solution for Eq. (19-65) — the branches of $\tan(\omega T/2)$ intersected with the curve $\beta/\omega$ locate the $\omega_n$.]
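The transcendental equation (19-65) is easy to solve numerically, since each branch of $\tan(\omega T/2)$ meets the curve $\beta/\omega$ exactly once (this is what Fig. 19.1 depicts). The sketch below uses plain bisection with illustrative values $\beta = 2$, $T = 1$, and then converts the roots into eigenvalues through (19-66).

```python
import numpy as np

beta, T = 2.0, 1.0

def g(w):                        # g(w) = tan(w T / 2) - beta / w
    return np.tan(w * T / 2) - beta / w

omegas = []
for m in range(6):               # one root per branch of tan(w T / 2)
    lo = 2 * m * np.pi / T + 1e-9
    hi = (2 * m + 1) * np.pi / T - 1e-9
    for _ in range(200):         # bisection: g(lo) < 0 < g(hi) on each branch
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            hi = mid
        else:
            lo = mid
    omegas.append(0.5 * (lo + hi))

omegas = np.array(omegas)
lam = 2 * beta / (beta**2 + omegas**2)    # eigenvalues from (19-66)
print("omega_n :", np.round(omegas, 4))
print("lambda_n:", np.round(lam, 6))
```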
Karhunen-Loeve Expansion for Rational Spectra

[The following exposition is based on Youla's classic paper "The Solution of a Homogeneous Wiener-Hopf Integral Equation Occurring in the Expansion of Second-Order Stationary Random Functions," IRE Trans. on Information Theory, vol. IT-3, 1957, pp. 187-193. Youla is tough. Here is a friendlier version. Even this may be skipped on a first reading. (Youla can be made only so friendly.)]
Let $X(t)$ represent a w.s.s. zero mean real stochastic process with autocorrelation function $R_{XX}(\tau) = R_{XX}(-\tau)$, so that its power spectrum
$$S_{XX}(\omega) = \int_{-\infty}^{\infty} R_{XX}(\tau)\, e^{-j\omega\tau}\, d\tau = 2\int_0^{\infty} R_{XX}(\tau)\cos\omega\tau\, d\tau \qquad (19\text{-}69)$$
is nonnegative and an even function of $\omega$. If $S_{XX}(\omega)$ is rational, then the process $X(t)$ is said to be rational as well. $S_{XX}(\omega)$ rational and even implies
$$S_{XX}(\omega) = \frac{N(\omega^2)}{D(\omega^2)} \ge 0. \qquad (19\text{-}70)$$
The total power of the process is given by
$$P = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_{XX}(\omega)\, d\omega = \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{N(\omega^2)}{D(\omega^2)}\, d\omega, \qquad (19\text{-}71)$$
and for $P$ to be finite we must have:
(i) the degree $\partial(D) = 2n$ of the denominator polynomial $D(\omega^2)$ must exceed the degree $\partial(N) = 2m$ of the numerator polynomial $N(\omega^2)$ by at least two, and
(ii) $D(\omega^2)$ must not have any zeros on the real-frequency ($s = j\omega$) axis.
The $s$-plane ($s = \sigma + j\omega$) extension of $S_{XX}(\omega)$ is given by
$$S_{XX}(\omega)\big|_{\omega^2 = -s^2} = S(-s^2) = \frac{N(-s^2)}{D(-s^2)}. \qquad (19\text{-}72)$$
Thus
$$D(-s^2) = \prod_{i} (s^2 - \gamma_i^2)^{k_i}, \qquad (19\text{-}73)$$
and the (two-sided) inverse Laplace transform of $1/(s^2 - \gamma^2)^k$ is given by
$$\frac{1}{(s^2 - \gamma^2)^k} \;\longleftrightarrow\; \frac{(-1)^k\, e^{-\gamma|\tau|}}{(k-1)!}\sum_{j=1}^{k}\frac{(k+j-2)!}{(j-1)!\,(k-j)!}\;\frac{|\tau|^{\,k-j}}{(2\gamma)^{k+j-1}}. \qquad (19\text{-}74)$$
Let $\pm\gamma_1, \pm\gamma_2, \ldots, \pm\gamma_n$ represent the roots of $D(-s^2)$, arranged so that
$$0 < \mathrm{Re}\,\gamma_1 \le \mathrm{Re}\,\gamma_2 \le \cdots \le \mathrm{Re}\,\gamma_n. \qquad (19\text{-}75)$$
Let $D^+(s)$ and $D^-(s)$ represent the left half plane (LHP) and the right half plane (RHP) products of these roots respectively. Thus
$$D(-s^2) = D^+(s)\,D^-(s), \qquad (19\text{-}76)$$
where
$$D^+(s) = \prod_{k}(s + \gamma_k) = \sum_{k=0}^{n} d_k\, s^k \quad\text{(complex roots occurring in conjugate pairs)}, \qquad D^-(s) = D^+(-s). \qquad (19\text{-}77)$$
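Numerically, the spectral factorization (19-76)-(19-77) amounts to splitting the roots of $D(-s^2)$ by the sign of their real parts. A minimal sketch, using the assumed example $D(-s^2) = (\alpha^2 - s^2)(\beta^2 - s^2)$ with $\alpha = 1$, $\beta = 2$ (the kind of denominator that appears in Example 19.5):

```python
import numpy as np

alpha, beta = 1.0, 2.0
# (alpha^2 - s^2)(beta^2 - s^2) = s^4 - (alpha^2 + beta^2) s^2 + alpha^2 beta^2
D = np.array([1.0, 0.0, -(alpha**2 + beta**2), 0.0, alpha**2 * beta**2])

roots = np.roots(D)
lhp = roots[roots.real < 0]                  # roots s = -gamma_k
Dplus = np.poly(lhp).real                    # D+(s): monic, built from the LHP roots
Dminus = np.poly(-lhp).real                  # D-(s) = D+(-s): the RHP roots

print("D+(s) coefficients:", Dplus)          # here (s + 1)(s + 2) = s^2 + 3 s + 2
print("D+(s) D-(s) equals D(-s^2):", np.allclose(np.polymul(Dplus, Dminus), D))
```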
Using this factorization in (19-72) gives
$$S(-s^2) = \frac{N(-s^2)}{D(-s^2)} = \frac{C_1(s)}{D^+(s)} + \frac{C_2(s)}{D^-(s)}. \qquad (19\text{-}78)$$
Notice that $C_1(s)/D^+(s)$ has poles only in the LHP, and its inverse (for all $t > 0$) converges only if the strip of convergence is to the right of all its poles. Similarly, $C_2(s)/D^-(s)$ has poles only in the RHP, and its inverse will converge only if the strip is to the left of all those poles; in that case the inverse exists for $t < 0$. In the case of $R_{XX}(\tau)$, from (19-78) its transform $N(-s^2)/D(-s^2)$ is defined only in the strip $-\mathrm{Re}\,\gamma_1 < \mathrm{Re}\,s < \mathrm{Re}\,\gamma_1$ (see Fig. 19.2). In particular, for $\tau > 0$, from the above discussion it follows that $R_{XX}(\tau)$ is given by the inverse transform of $C_1(s)/D^+(s)$.

[Fig. 19.2: The $s$-plane strip of convergence $-\mathrm{Re}\,\gamma_1 < \mathrm{Re}\,s < \mathrm{Re}\,\gamma_1$ for $R_{XX}(\tau)$.]

We need the solution to the integral equation
$$\varphi(t) = \lambda\int_0^T R_{XX}(t - \tau)\,\varphi(\tau)\, d\tau, \qquad 0 < t < T, \qquad (19\text{-}79)$$
that is valid only for $0 < t < T$. (Notice that $\lambda$ in (19-79) is the reciprocal of the eigenvalues in (19-22).) On the other hand, the right side of (19-79) can be defined for every $t$. Thus, let
$$g(t) = \int_0^T R_{XX}(t - \tau)\,\varphi(\tau)\, d\tau, \qquad -\infty < t < \infty, \qquad (19\text{-}80)$$
and, to conform with the same limits, define
$$\varphi(t) = \begin{cases}\varphi(t), & 0 < t < T\\ 0, & \text{otherwise.}\end{cases} \qquad (19\text{-}81)$$
This gives
$$g(t) = \int_{-\infty}^{+\infty} R_{XX}(t - \tau)\,\varphi(\tau)\, d\tau, \qquad (19\text{-}82)$$
and let
$$f(t) = \varphi(t) - \lambda\, g(t) = \varphi(t) - \lambda\int_{-\infty}^{\infty} R_{XX}(t - \tau)\,\varphi(\tau)\, d\tau. \qquad (19\text{-}83)$$
Clearly
$$f(t) = 0, \qquad 0 < t < T, \qquad (19\text{-}84)$$
and for $t > T$
$$D^+\!\left(\frac{d}{dt}\right) f(t) = -\lambda\int_{-\infty}^{\infty}\left\{D^+\!\left(\frac{d}{dt}\right) R_{XX}(t - \tau)\right\}\varphi(\tau)\, d\tau = 0, \qquad (19\text{-}85)$$
since $D^+(-\gamma_k) = 0$ and, for $t - \tau > 0$, $R_{XX}(t - \tau)$ is a sum of exponentials $\sum_k a_k\, e^{-\gamma_k (t - \tau)}$. Hence it follows that for $t > T$ the function $f(t)$ must itself be a sum of exponentials $\sum_k a_k\, e^{-\gamma_k t}$. Similarly, for $t < 0$,
$$D^-\!\left(\frac{d}{dt}\right) f(t) = -\lambda\int_{-\infty}^{\infty}\left\{D^-\!\left(\frac{d}{dt}\right) R_{XX}(t - \tau)\right\}\varphi(\tau)\, d\tau = 0,$$
since $D^-(\gamma_k) = 0$ and, for $t - \tau < 0$, $R_{XX}(t - \tau)$ is a sum of exponentials $\sum_k a_k\, e^{\gamma_k (t - \tau)}$; hence $f(t)$ must be a sum of exponentials $\sum_k b_k\, e^{\gamma_k t}$ for $t < 0$. Thus the overall (two-sided) Laplace transform of $f(t)$ has the form
$$F(s) = \frac{P(s)}{D^-(s)} + e^{-sT}\,\frac{Q(s)}{D^+(s)}, \qquad (19\text{-}86)$$
with the first term accounting for the contributions in $t < 0$ and the second term for the contributions in $t \ge T$,
where P(s) and Q(s) are polynomials of degree n – 1 at most. Also from
(19-83), the bilateral Laplace transform of f (t) is given by
N ( s
F ( s )   ( s ) 1  
D ( s

2
2
)
,
)
 R e 1  R e s  R e 1
(19-87)
Equating (19-86) and (19-87) and simplifying, Youla obtains the key
identity

$$\Phi(s) = \frac{P(s)\,D^+(s) + e^{-sT}\,Q(s)\,D^-(s)}{D(-s^2) - \lambda\,N(-s^2)}. \qquad (19\text{-}88)$$
Youla argues as follows: the function $\Phi(s) = \int_0^T \varphi(t)\, e^{-st}\, dt$ is an entire function of $s$, and hence it is free of poles on the entire finite $s$-plane ($-\infty < \mathrm{Re}\,s < \infty$). However, the denominator on the right side of (19-88) is a polynomial and its roots contribute poles to $\Phi(s)$. Hence all such poles must be cancelled by the numerator. As a result, the numerator of $\Phi(s)$ in (19-88) must possess exactly the same set of zeros as its denominator, to the respective order at least.
Let $\pm\alpha_1(\lambda), \pm\alpha_2(\lambda), \ldots, \pm\alpha_n(\lambda)$ be the (distinct) zeros of the denominator polynomial $D(-s^2) - \lambda\,N(-s^2)$. Here we assume that $\lambda$ is an eigenvalue for which all the $\alpha_k$'s are distinct, with
$$0 < \mathrm{Re}\,\alpha_1(\lambda) < \mathrm{Re}\,\alpha_2(\lambda) < \cdots < \mathrm{Re}\,\alpha_n(\lambda) < \infty. \qquad (19\text{-}89)$$
These $\alpha_k$'s also represent the zeros of the numerator polynomial $P(s)\,D^+(s) + e^{-sT}\,Q(s)\,D^-(s)$. Hence
$$D^+(\alpha_k)\,P(\alpha_k) = -e^{-\alpha_k T}\,D^-(\alpha_k)\,Q(\alpha_k) \qquad (19\text{-}90)$$
and
$$D^-(\alpha_k)\,P(-\alpha_k) = -e^{\alpha_k T}\,D^+(\alpha_k)\,Q(-\alpha_k), \qquad (19\text{-}91)$$
which simplifies into
$$D^+(\alpha_k)\,Q(-\alpha_k) = -e^{-\alpha_k T}\,D^-(\alpha_k)\,P(-\alpha_k). \qquad (19\text{-}92)$$
From (19-90) and (19-92) we get
$$P(\alpha_k)\,P(-\alpha_k) = Q(\alpha_k)\,Q(-\alpha_k), \qquad k = 1, 2, \ldots, n, \qquad (19\text{-}93)$$
i.e., the polynomial
$$L(s^2) = P(s)\,P(-s) - Q(s)\,Q(-s), \qquad (19\text{-}94)$$
which is at most of degree $n - 1$ in $s^2$, vanishes at $\alpha_1^2, \alpha_2^2, \ldots, \alpha_n^2$ (i.e., for $n$ distinct values of $s^2$). Hence
$$L(s^2) \equiv 0, \qquad (19\text{-}95)$$
or
$$P(s)\,P(-s) \equiv Q(s)\,Q(-s). \qquad (19\text{-}96)$$
Using the linear relationship among the coefficients of $P(s)$ and $Q(s)$ in (19-90)-(19-91), it follows that
$$P(s) = \pm Q(s) \qquad\text{or}\qquad P(s) = \pm Q(-s) \qquad (19\text{-}97)$$
are the only solutions that are consistent with each of those equations, and together we obtain
$$P(s) = \pm Q(-s) \qquad (19\text{-}98)$$
as the only solution satisfying both (19-90) and (19-91). Let
$$P(s) = \sum_{i=0}^{n-1} p_i\, s^i. \qquad (19\text{-}99)$$
In that case (19-90)-(19-91) simplify to (use (19-98))
$$D^+(\alpha_k)\,P(\alpha_k) = \mp\, e^{-\alpha_k T}\,D^-(\alpha_k)\,P(-\alpha_k) \;\Longrightarrow\; \sum_{i=0}^{n-1}\big\{1 \pm (-1)^i a_k\big\}\,\alpha_k^i\, p_i = 0, \qquad k = 1, 2, \ldots, n, \qquad (19\text{-}100)$$
where
$$a_k = \frac{D^-(\alpha_k)}{D^+(\alpha_k)}\, e^{-\alpha_k T} = \frac{D^+(-\alpha_k)}{D^+(\alpha_k)}\, e^{-\alpha_k T}. \qquad (19\text{-}101)$$
For a nontrivial solution to exist for $p_0, p_1, \ldots, p_{n-1}$ in (19-100), we must have
 1, 2 
n 1
(1
a1 )
(1  a 1 ) 1
(1
a2 )
(1  a 2 ) 2
(1
(  1)
n 1
(1
an )
(1  a n ) n
(1
(  1)
n 1
(  1)
(1
n 1
a 1 ) 1
n 1
a 2 ) 2
 0.
(19-102)
n 1
a n ) n
The two determinant conditions in (19-102) must be solved together to
obtain the eigenvalues  i ' s that are implicitly contained in the a i ' s
and  i ' s (Easily said than done!).
To further simplify (19-102), one may express ak in (19-101) as
$$a_k = e^{-2\sigma_k}, \qquad k = 1, 2, \ldots, n, \qquad (19\text{-}103)$$
so that
$$\tanh\sigma_k = \frac{e^{\sigma_k} - e^{-\sigma_k}}{e^{\sigma_k} + e^{-\sigma_k}} = \frac{1 - a_k}{1 + a_k} = \frac{e^{\alpha_k T/2}\,D^+(\alpha_k) - e^{-\alpha_k T/2}\,D^-(\alpha_k)}{e^{\alpha_k T/2}\,D^+(\alpha_k) + e^{-\alpha_k T/2}\,D^-(\alpha_k)} = \frac{D^+(\alpha_k) - e^{-\alpha_k T}\,D^-(\alpha_k)}{D^+(\alpha_k) + e^{-\alpha_k T}\,D^-(\alpha_k)}. \qquad (19\text{-}104)$$
Let
$$D^+(s) = d_0 + d_1 s + \cdots + d_n s^n, \qquad (19\text{-}105)$$
and substituting these known coefficients into (19-104) and simplifying, we get
$$\tanh\sigma_k = \frac{(d_0 + d_2\alpha_k^2 + \cdots)\tanh(\alpha_k T/2) + (d_1\alpha_k + d_3\alpha_k^3 + \cdots)}{(d_0 + d_2\alpha_k^2 + \cdots) + (d_1\alpha_k + d_3\alpha_k^3 + \cdots)\tanh(\alpha_k T/2)}, \qquad (19\text{-}106)$$
and in terms of $\tanh\sigma_k$, $\Delta_2$ in (19-102) simplifies to
n 1
tan h  1
n 1
tan h  2
1
 1 tan h  1
1
2
 1 tan h  1
1
1
 2 tan h  2
2
 2 tan h  2
2
1
 n tan h  n
n
 n tan h  n
n
2
2
3
3
3
n 1
0
(19-107)
tanh  n
if n is even (if n is odd the last column in (19-107) is simply
T
[ ,  , ,  ] ). Similarly  1 in (19-102) can be obtained by
replacing tan h  k with cot h  k in (19-107).
34
n 1
1
n 1
n 1
2
n
PILLAI
To summarize: determine the roots $\alpha_k$'s with $\mathrm{Re}\,\alpha_k > 0$ that satisfy
$$D(-\alpha_k^2) - \lambda\,N(-\alpha_k^2) = 0, \qquad k = 1, 2, \ldots, n, \qquad (19\text{-}108)$$
in terms of $\lambda$, and for every such $\alpha_k$ determine $\sigma_k$ using (19-106). Finally, using these $\alpha_k$'s and $\tanh\sigma_k$'s in (19-107) and its companion equation $\Delta_1$, the eigenvalues $\lambda_k$'s are determined. Once the $\lambda_k$'s are obtained, the $p_k$'s can be solved using (19-100), and using that, $\Phi_i(s)$ can be obtained from (19-88).
Thus

$$\Phi_i(s) = \frac{D^+(s)\,P(s, \lambda_i) + e^{-sT}\,D^-(s)\,Q(s, \lambda_i)}{D(-s^2) - \lambda_i\,N(-s^2)}, \qquad (19\text{-}109)$$
and
$$\varphi_i(t) = \mathcal{L}^{-1}\{\Phi_i(s)\}. \qquad (19\text{-}110)$$
Since $\Phi_i(s)$ in (19-110) is an entire function, the inverse Laplace transform in (19-109) can be performed through any strip of convergence in the $s$-plane; in particular, if we use the strip
$\mathrm{Re}\,s > \mathrm{Re}\,\alpha_n$ (to the right of all the $\mathrm{Re}\,\alpha_i$), then the two inverses
$$\mathcal{L}^{-1}\!\left\{\frac{D^+(s)\,P(s)}{D(-s^2) - \lambda\,N(-s^2)}\right\}, \qquad \mathcal{L}^{-1}\!\left\{\frac{D^-(s)\,Q(s)}{D(-s^2) - \lambda\,N(-s^2)}\right\} \qquad (19\text{-}111)$$
obtained from (19-109) will be causal. As a result,
$$\mathcal{L}^{-1}\!\left\{e^{-sT}\,\frac{D^-(s)\,Q(s)}{D(-s^2) - \lambda\,N(-s^2)}\right\}$$
will be nonzero only for $t > T$, and using this in (19-109)-(19-110) we conclude that $\varphi_i(t)$ for $0 < t < T$ has contributions only from the first term in (19-111). Together with (19-81), finally we obtain the desired eigenfunctions to be
$$\varphi_k(t) = \mathcal{L}^{-1}\!\left\{\frac{D^+(s)\,P(s, \lambda_k)}{D(-s^2) - \lambda_k\,N(-s^2)}\right\}, \qquad 0 \le t \le T, \qquad \mathrm{Re}\,s > \mathrm{Re}\,\alpha_n > 0, \qquad k = 1, 2, \ldots, \qquad (19\text{-}112)$$
and they are orthogonal by design. Notice that in general (19-112) corresponds to a sum of modulated exponentials.
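Before turning to the worked examples, here is a hedged numerical sketch of the recipe above for the simplest case $n = 1$, i.e., the spectrum $S_{XX}(\omega) = 2\beta/(\beta^2 + \omega^2)$ of Example 19.3; $\beta$, $T$, the scan range and the dip threshold are all illustrative choices, and the scan-for-minima step is only a crude stand-in for proper root finding.

```python
import numpy as np

beta, T = 2.0, 1.0                      # assumed spectrum parameters
lams = np.linspace(1.01, 60.0, 100000)  # trial values of lambda as in (19-79)

# root of D(-s^2) - lambda*N(-s^2) = beta^2 - s^2 - 2*beta*lambda  (here n = 1)
alpha = np.sqrt((beta**2 - 2 * beta * lams).astype(complex))   # (19-108)
a1 = np.exp(-alpha * T) * (beta - alpha) / (beta + alpha)      # (19-101)

# (19-102) with n = 1 is just 1 + a1 = 0 (the companion condition is 1 - a1 = 0);
# pick out the trial lambdas where |1 + a1| dips to (numerically) zero
v = np.abs(1 + a1)
dips = np.where((v[1:-1] < v[:-2]) & (v[1:-1] < v[2:]) & (v[1:-1] < 1e-2))[0] + 1
eig = lams[dips]

print("lambda from the rational-spectrum recipe:", np.round(eig, 4))
print("their reciprocals (K-L eigenvalues)     :", np.round(1.0 / eig, 6))
```

The reciprocals of the $\lambda$'s found this way reproduce the K-L eigenvalues $\lambda_n = 2\beta/(\beta^2 + \omega_n^2)$ with $\tan(\omega_n T/2) = \beta/\omega_n$, i.e., the same family found in Example 19.3; this is exactly what Example 19.4 below establishes analytically.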
Next, we shall illustrate this procedure through some examples. First,
we shall re-do Example 19.3 using the method described above.
Example 19.4: Given $R_{XX}(\tau) = e^{-\beta|\tau|}$, we have
$$S_{XX}(\omega) = \frac{2\beta}{\beta^2 + \omega^2} = \frac{N(\omega^2)}{D(\omega^2)}.$$
This gives $D^+(s) = \beta + s$, $D^-(s) = \beta - s$, and $P(s)$, $Q(s)$ are constants here. Moreover, since $n = 1$, (19-102) reduces to $1 \pm a_1 = 0$, or $a_1 = \mp 1$, and from (19-101), $\alpha_1$ satisfies
$$e^{-\alpha_1 T}\,\frac{D^-(\alpha_1)}{D^+(\alpha_1)} = e^{-\alpha_1 T}\,\frac{\beta - \alpha_1}{\beta + \alpha_1} = \mp 1, \qquad (19\text{-}113)$$
or $\alpha_1$ is a solution of the $s$-plane equation
$$e^{sT} = \mp\,\frac{\beta - s}{\beta + s}. \qquad (19\text{-}114)$$
But $|e^{sT}| > 1$ on the RHP, whereas $\left|\frac{\beta - s}{\beta + s}\right| < 1$ on the RHP. Similarly, $|e^{sT}| < 1$ on the LHP, whereas $\left|\frac{\beta - s}{\beta + s}\right| > 1$ on the LHP. Thus in (19-114) the solution $s$ must be purely imaginary, and hence $\alpha_1$ in (19-113) is purely imaginary. Thus, with $s = j\omega_1$ in (19-114) we get
$$e^{\,j\omega_1 T} = \mp\,\frac{\beta - j\omega_1}{\beta + j\omega_1},$$
which (for the upper sign) yields
$$\tan(\omega_1 T/2) = \frac{\beta}{\omega_1}, \qquad (19\text{-}115)$$
which agrees with the transcendental equation (19-65). Further from
(19-108), the $\omega_n$'s satisfy
$$D(-s^2) - \lambda_n\,N(-s^2)\Big|_{s = j\omega_n} = \beta^2 + \omega_n^2 - 2\beta\lambda_n = 0,$$
or
$$\lambda_n = \frac{\beta^2 + \omega_n^2}{2\beta} > 0. \qquad (19\text{-}116)$$
Notice that the $\lambda_n$ in (19-66) is the inverse of that in (19-116) because, as noted earlier, $\lambda$ in (19-79) is the inverse of that in (19-22).
Finally, from (19-112),
$$\varphi_n(t) = \mathcal{L}^{-1}\!\left\{\frac{s + \beta}{s^2 + \omega_n^2}\right\} = A_n\cos\omega_n t + B_n\sin\omega_n t, \qquad 0 \le t \le T, \qquad (19\text{-}117)$$
which agrees with the solution obtained in (19-67). We conclude this
section with a less trivial example.
Example 19.5
$$R_{XX}(\tau) = \frac{e^{-\alpha|\tau|}}{2} + \frac{e^{-\beta|\tau|}}{2}. \qquad (19\text{-}118)$$
In this case
$$S_{XX}(\omega) = \frac{\alpha}{\alpha^2 + \omega^2} + \frac{\beta}{\beta^2 + \omega^2} = \frac{(\alpha + \beta)(\omega^2 + \alpha\beta)}{(\alpha^2 + \omega^2)(\beta^2 + \omega^2)}. \qquad (19\text{-}119)$$
This gives $D^+(s) = (s + \alpha)(s + \beta) = s^2 + (\alpha + \beta)s + \alpha\beta$. With $n = 2$, (19-107) and its companion determinant reduce to
$$\alpha_2\tanh\sigma_2 = \alpha_1\tanh\sigma_1 \qquad\text{or}\qquad \alpha_2\coth\sigma_2 = \alpha_1\coth\sigma_1. \qquad (19\text{-}120)$$
From (19-106)
$$\tanh\sigma_i = \frac{(\alpha\beta + \alpha_i^2)\tanh(\alpha_i T/2) + (\alpha + \beta)\,\alpha_i}{(\alpha\beta + \alpha_i^2) + (\alpha + \beta)\,\alpha_i\tanh(\alpha_i T/2)}, \qquad i = 1, 2. \qquad (19\text{-}121)$$
Finally, $\alpha_1^2$ and $\alpha_2^2$ can be parametrically expressed in terms of $\lambda$ using (19-108), which here simplifies to
$$D(-s^2) - \lambda\,N(-s^2) = s^4 - \big[\alpha^2 + \beta^2 - \lambda(\alpha + \beta)\big]s^2 + \alpha^2\beta^2 - \lambda\,\alpha\beta(\alpha + \beta) \;\triangleq\; s^4 - b(\lambda)\,s^2 + c(\lambda) = 0.$$
This gives
$$\alpha_1^2 = \frac{b(\lambda) - \sqrt{b^2(\lambda) - 4c(\lambda)}}{2}$$
and
$$\alpha_2^2 = \frac{b(\lambda) + \sqrt{b^2(\lambda) - 4c(\lambda)}}{2} = \alpha_1^2 + \sqrt{b^2(\lambda) - 4c(\lambda)},$$
and substituting these into (19-120)-(19-121), the corresponding transcendental equations for the $\lambda_i$'s can be obtained. Similarly, the eigenfunctions can be obtained from (19-112).
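As a final numerical aside, the parametric formulas above are easy to cross-check for an assumed parameter set ($\alpha = 1$, $\beta = 2$, $T = 1$, and an arbitrary trial value of $\lambda$): the quantities $\alpha_1^2$, $\alpha_2^2$ built from $b(\lambda)$ and $c(\lambda)$ should agree with the roots of $D(-s^2) - \lambda N(-s^2)$, and the specialized expression (19-121) should agree with the general formula (19-104).

```python
import numpy as np

alpha, beta, T, lam = 1.0, 2.0, 1.0, 7.5     # assumed, illustrative values

b = alpha**2 + beta**2 - lam * (alpha + beta)
c = alpha**2 * beta**2 - lam * alpha * beta * (alpha + beta)
disc = np.sqrt(complex(b**2 - 4 * c))
a1sq, a2sq = (b - disc) / 2, (b + disc) / 2          # alpha_1^2 and alpha_2^2

# cross-check: +/- alpha_1, +/- alpha_2 are the roots of s^4 - b s^2 + c
poly = [1.0, 0.0, -b, 0.0, c]
print("roots^2 from np.roots:", sorted(np.round(np.roots(poly)**2, 6).tolist(), key=abs))
print("alpha_1^2, alpha_2^2 :", np.round(a1sq, 6), np.round(a2sq, 6))

# cross-check (19-121) against the general formula (19-104), with D+(s) = (s+alpha)(s+beta)
def Dp(s):
    return (s + alpha) * (s + beta)

for ai in (np.sqrt(a1sq), np.sqrt(a2sq)):
    th = np.tanh(ai * T / 2)
    lhs = ((alpha * beta + ai**2) * th + (alpha + beta) * ai) / \
          ((alpha * beta + ai**2) + (alpha + beta) * ai * th)                       # (19-121)
    rhs = (Dp(ai) - np.exp(-ai * T) * Dp(-ai)) / (Dp(ai) + np.exp(-ai * T) * Dp(-ai))  # (19-104)
    print("tanh(sigma):", np.round(lhs, 6), "vs", np.round(rhs, 6))
```

Locating the eigenvalues themselves then amounts to scanning $\lambda$ for the zeros of the conditions in (19-120), in the same spirit as the sketch given before Example 19.4.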