Transcript Math 260

Ch 7.8: Repeated Eigenvalues

We consider again a homogeneous system of $n$ first order linear equations with constant real coefficients, $\mathbf{x}' = A\mathbf{x}$.

If the eigenvalues $r_1, \dots, r_n$ of $A$ are real and different, then there are $n$ linearly independent eigenvectors $\xi^{(1)}, \dots, \xi^{(n)}$, and $n$ linearly independent solutions of the form

$$\mathbf{x}^{(1)}(t) = \xi^{(1)} e^{r_1 t}, \;\dots,\; \mathbf{x}^{(n)}(t) = \xi^{(n)} e^{r_n t}$$

If some of the eigenvalues $r_1, \dots, r_n$ are repeated, then there may not be $n$ corresponding linearly independent solutions of the above form.

In this case, we will seek additional solutions that are products of polynomials and exponential functions.

Example 1: Direction Field (1 of 12)

Consider the homogeneous system $\mathbf{x}' = A\mathbf{x}$ below:

$$\mathbf{x}' = \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix} \mathbf{x}$$

A direction field for this system is given below.

Substituting $\mathbf{x} = \xi e^{rt}$ into $\mathbf{x}' = A\mathbf{x}$ and rewriting the system as $(A - rI)\xi = \mathbf{0}$, we obtain

$$\begin{pmatrix} 1-r & -1 \\ 1 & 3-r \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

Example 1: Eigenvalues (2 of 12)

Solutions have the form $\mathbf{x} = \xi e^{rt}$, where $r$ and $\xi$ satisfy

$$\begin{pmatrix} 1-r & -1 \\ 1 & 3-r \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

To determine $r$, solve $\det(A - rI) = 0$:

$$\begin{vmatrix} 1-r & -1 \\ 1 & 3-r \end{vmatrix} = (r-1)(r-3) + 1 = r^2 - 4r + 4 = (r-2)^2$$

Thus $r_1 = 2$ and $r_2 = 2$.
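The repeated eigenvalue and the size of its eigenspace can be checked numerically; a minimal sketch, assuming NumPy is available:

```python
import numpy as np

# Coefficient matrix from Example 1.
A = np.array([[1.0, -1.0],
              [1.0,  3.0]])

# det(A - rI) = (r - 2)^2, so both eigenvalues should be (close to) 2.
eigvals = np.linalg.eigvals(A)
print(eigvals)

# A - 2I has rank 1, so the eigenspace for r = 2 is one-dimensional;
# this is why a single eigenvector cannot supply two independent solutions.
print(np.linalg.matrix_rank(A - 2 * np.eye(2)))
```

Because the matrix is defective, numerical eigenvalue routines return two values very near 2 rather than exactly 2.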

Example 1: Eigenvectors (3 of 12)

To find the eigenvectors, we solve $(A - 2I)\xi = \mathbf{0}$:

$$\begin{pmatrix} 1-2 & -1 \\ 1 & 3-2 \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix} \;\Leftrightarrow\; \begin{pmatrix} -1 & -1 \\ 1 & 1 \end{pmatrix} \begin{pmatrix} \xi_1 \\ \xi_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}$$

by row reducing the augmented matrix:

$$\left(\begin{array}{cc|c} -1 & -1 & 0 \\ 1 & 1 & 0 \end{array}\right) \to \left(\begin{array}{cc|c} 1 & 1 & 0 \\ 0 & 0 & 0 \end{array}\right) \;\Rightarrow\; \xi_1 + \xi_2 = 0 \;\Rightarrow\; \xi^{(1)} = \begin{pmatrix} \xi_1 \\ -\xi_1 \end{pmatrix}; \text{ choose } \xi^{(1)} = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$$

Thus there is only one linearly independent eigenvector for the repeated eigenvalue $r = 2$.

Example 1: First Solution; and Second Solution, First Attempt (4 of 12)

The corresponding solution $\mathbf{x} = \xi e^{rt}$ of $\mathbf{x}' = A\mathbf{x}$ is

$$\mathbf{x}^{(1)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{2t}$$

Since there is no second solution of the form $\mathbf{x} = \xi e^{rt}$, we need to try a different form. Based on methods for second order linear equations in Ch 3.5, we first try $\mathbf{x} = \xi t e^{2t}$.

Substituting $\mathbf{x} = \xi t e^{2t}$ into $\mathbf{x}' = A\mathbf{x}$, we obtain

$$\xi e^{2t} + 2\xi t e^{2t} = A\xi t e^{2t}$$

or

$$2\xi t e^{2t} + \xi e^{2t} - A\xi t e^{2t} = \mathbf{0}$$

Example 1: Second Solution, Second Attempt (5 of 12)

From the previous slide, we have

$$2\xi t e^{2t} + \xi e^{2t} - A\xi t e^{2t} = \mathbf{0}$$

In order for this equation to be satisfied for all $t$, it is necessary for the coefficients of $te^{2t}$ and $e^{2t}$ to both be zero.

From the $e^{2t}$ term, we see that $\xi = \mathbf{0}$, and hence there is no nonzero solution of the form $\mathbf{x} = \xi t e^{2t}$.

Since both $te^{2t}$ and $e^{2t}$ appear in the above equation, we next consider a solution of the form

$$\mathbf{x} = \xi t e^{2t} + \eta e^{2t}$$

Example 1: Second Solution and its Defining Matrix Equations (6 of 12)

Substituting $\mathbf{x} = \xi t e^{2t} + \eta e^{2t}$ into $\mathbf{x}' = A\mathbf{x}$, we obtain

$$\xi e^{2t} + 2\xi t e^{2t} + 2\eta e^{2t} = A\xi t e^{2t} + A\eta e^{2t}$$

or

$$2\xi t e^{2t} + (\xi + 2\eta) e^{2t} = A\xi t e^{2t} + A\eta e^{2t}$$

Equating coefficients yields $A\xi = 2\xi$ and $A\eta = \xi + 2\eta$, or

$$(A - 2I)\xi = \mathbf{0} \quad\text{and}\quad (A - 2I)\eta = \xi$$

The first equation is satisfied if $\xi$ is an eigenvector of $A$ corresponding to the eigenvalue $r = 2$. Thus

$$\xi = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$$

Example 1: Solving for Second Solution (7 of 12)

Recall that

$$A = \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix}, \quad \xi = \begin{pmatrix} 1 \\ -1 \end{pmatrix}$$

Thus to solve $(A - 2I)\eta = \xi$ for $\eta$, we row reduce the corresponding augmented matrix:

$$\left(\begin{array}{cc|c} -1 & -1 & 1 \\ 1 & 1 & -1 \end{array}\right) \to \left(\begin{array}{cc|c} 1 & 1 & -1 \\ 0 & 0 & 0 \end{array}\right) \;\Rightarrow\; \eta_1 + \eta_2 = -1$$

$$\eta = \begin{pmatrix} \eta_1 \\ -1 - \eta_1 \end{pmatrix} \;\Rightarrow\; \eta = \begin{pmatrix} 0 \\ -1 \end{pmatrix} + k \begin{pmatrix} 1 \\ -1 \end{pmatrix}$$

Example 1: Second Solution (8 of 12)

Our second solution $\mathbf{x} = \xi t e^{2t} + \eta e^{2t}$ is now

$$\mathbf{x} = \begin{pmatrix} 1 \\ -1 \end{pmatrix} t e^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix} e^{2t} + k \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{2t}$$

Recalling that the first solution was

$$\mathbf{x}^{(1)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{2t},$$

we see that our second solution is simply

$$\mathbf{x}^{(2)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix} t e^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix} e^{2t},$$

since the third term of $\mathbf{x}$ is a multiple of $\mathbf{x}^{(1)}$.

Example 1: General Solution (9 of 12)

The two solutions of $\mathbf{x}' = A\mathbf{x}$ are

$$\mathbf{x}^{(1)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{2t}, \quad \mathbf{x}^{(2)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix} t e^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix} e^{2t}$$

The Wronskian of these two solutions is

$$W[\mathbf{x}^{(1)}, \mathbf{x}^{(2)}](t) = \begin{vmatrix} e^{2t} & t e^{2t} \\ -e^{2t} & -t e^{2t} - e^{2t} \end{vmatrix} = -e^{4t} \neq 0$$

Thus $\mathbf{x}^{(1)}$ and $\mathbf{x}^{(2)}$ are fundamental solutions, and the general solution of $\mathbf{x}' = A\mathbf{x}$ is

$$\mathbf{x}(t) = c_1 \mathbf{x}^{(1)}(t) + c_2 \mathbf{x}^{(2)}(t) = c_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{2t} + c_2 \left[ \begin{pmatrix} 1 \\ -1 \end{pmatrix} t e^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix} e^{2t} \right]$$
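Both solutions and the Wronskian can be verified at a sample point; a minimal NumPy sketch (the sample time $t = 0.7$ is arbitrary):

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [1.0,  3.0]])
xi = np.array([1.0, -1.0])
eta = np.array([0.0, -1.0])

def x1(t):
    return xi * np.exp(2 * t)

def x2(t):
    return xi * t * np.exp(2 * t) + eta * np.exp(2 * t)

# Exact derivatives of the formulas above.
def dx1(t):
    return 2 * xi * np.exp(2 * t)

def dx2(t):
    return xi * np.exp(2 * t) + 2 * xi * t * np.exp(2 * t) + 2 * eta * np.exp(2 * t)

t = 0.7  # arbitrary sample point
print(np.allclose(dx1(t), A @ x1(t)))   # x1 solves x' = Ax
print(np.allclose(dx2(t), A @ x2(t)))   # x2 solves x' = Ax

# Wronskian: det[x1 | x2] = -e^{4t} != 0, so the solutions are independent.
W = np.linalg.det(np.column_stack([x1(t), x2(t)]))
print(np.isclose(W, -np.exp(4 * t)))
```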

Example 1: Phase Plane (10 of 12)

The general solution is

$$\mathbf{x}(t) = c_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{2t} + c_2 \left[ \begin{pmatrix} 1 \\ -1 \end{pmatrix} t e^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix} e^{2t} \right]$$

Thus $\mathbf{x}$ is unbounded as $t \to \infty$, and $\mathbf{x} \to \mathbf{0}$ as $t \to -\infty$.

Further, it can be shown that as $t \to \infty$, $\mathbf{x}$ is asymptotic to the line $x_2 = -x_1$ determined by the first eigenvector. Similarly, as $t \to -\infty$, $\mathbf{x}$ is asymptotic to a line parallel to $x_2 = -x_1$.

Example 1: Phase Plane (11 of 12)

The origin is an improper node, and is unstable. See graph. The pattern of trajectories is typical for a repeated eigenvalue with only one independent eigenvector. If the eigenvalues are negative, then the trajectories are similar but are traversed in the inward direction; in this case the origin is an asymptotically stable improper node.

Example 1: Time Plots for General Solution (12 of 12)

Time plots for $x_1(t)$ are given below, where we note that the general solution $\mathbf{x}$ can be written as follows:

$$\mathbf{x}(t) = c_1 \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{2t} + c_2 \left[ \begin{pmatrix} 1 \\ -1 \end{pmatrix} t e^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix} e^{2t} \right]$$

Thus

$$\begin{pmatrix} x_1(t) \\ x_2(t) \end{pmatrix} = \begin{pmatrix} c_1 e^{2t} + c_2 t e^{2t} \\ -(c_1 + c_2) e^{2t} - c_2 t e^{2t} \end{pmatrix}$$

General Case for Double Eigenvalues

Suppose the system $\mathbf{x}' = A\mathbf{x}$ has a double eigenvalue $r = \rho$ and a single corresponding eigenvector $\xi$.

The first solution is $\mathbf{x}^{(1)} = \xi e^{\rho t}$, where $\xi$ satisfies $(A - \rho I)\xi = \mathbf{0}$.

As in Example 1, the second solution has the form

$$\mathbf{x}^{(2)} = \xi t e^{\rho t} + \eta e^{\rho t},$$

where $\xi$ is as above and $\eta$ satisfies $(A - \rho I)\eta = \xi$.

Since $\rho$ is an eigenvalue, $\det(A - \rho I) = 0$, and $(A - \rho I)\eta = \mathbf{b}$ does not have a solution for all $\mathbf{b}$. However, it can be shown that $(A - \rho I)\eta = \xi$ always has a solution. The vector $\eta$ is called a generalized eigenvector.
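The general recipe can be packaged as a small routine. A sketch, assuming NumPy is available; the helper name `defective_pair` is illustrative, not from the text. It finds $\xi$ as a null vector of $A - \rho I$ (via the SVD) and $\eta$ as a least-squares solution of $(A - \rho I)\eta = \xi$:

```python
import numpy as np

def defective_pair(A, rho):
    """For a double eigenvalue rho with a one-dimensional eigenspace,
    return an eigenvector xi and a generalized eigenvector eta with
    (A - rho*I) xi = 0 and (A - rho*I) eta = xi."""
    N = A - rho * np.eye(A.shape[0])
    # xi spans the null space of N: take the right singular vector
    # belonging to the (near-)zero singular value.
    _, _, Vt = np.linalg.svd(N)
    xi = Vt[-1]
    # eta: a particular least-squares solution of N eta = xi; this is
    # solvable because range(N) = null(N) for a defective 2x2 block.
    eta, *_ = np.linalg.lstsq(N, xi, rcond=None)
    return xi, eta

# Example 1 data as a usage check.
A = np.array([[1.0, -1.0],
              [1.0,  3.0]])
xi, eta = defective_pair(A, 2.0)
print(np.allclose((A - 2 * np.eye(2)) @ xi, 0))    # eigenvector equation
print(np.allclose((A - 2 * np.eye(2)) @ eta, xi))  # generalized-eigenvector equation
```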

Example 2: Fundamental Matrix (1 of 2)

Recall that a fundamental matrix $\Psi(t)$ for $\mathbf{x}' = A\mathbf{x}$ has linearly independent solutions for its columns. In Example 1, our system $\mathbf{x}' = A\mathbf{x}$ was

$$\mathbf{x}' = \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix} \mathbf{x}$$

and the two solutions we found were

$$\mathbf{x}^{(1)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix} e^{2t}, \quad \mathbf{x}^{(2)}(t) = \begin{pmatrix} 1 \\ -1 \end{pmatrix} t e^{2t} + \begin{pmatrix} 0 \\ -1 \end{pmatrix} e^{2t}$$

Thus the corresponding fundamental matrix is

$$\Psi(t) = \begin{pmatrix} e^{2t} & t e^{2t} \\ -e^{2t} & -t e^{2t} - e^{2t} \end{pmatrix} = e^{2t} \begin{pmatrix} 1 & t \\ -1 & -t-1 \end{pmatrix}$$

Example 2: Fundamental Matrix (2 of 2)

The fundamental matrix $\Phi(t)$ that satisfies $\Phi(0) = I$ can be found using $\Phi(t) = \Psi(t)\Psi^{-1}(0)$, where

$$\Psi(0) = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}, \quad \Psi^{-1}(0) = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix},$$

and $\Psi^{-1}(0)$ is found as follows:

$$\left(\begin{array}{cc|cc} 1 & 0 & 1 & 0 \\ -1 & -1 & 0 & 1 \end{array}\right) \to \left(\begin{array}{cc|cc} 1 & 0 & 1 & 0 \\ 0 & -1 & 1 & 1 \end{array}\right) \to \left(\begin{array}{cc|cc} 1 & 0 & 1 & 0 \\ 0 & 1 & -1 & -1 \end{array}\right)$$

Thus

$$\Phi(t) = e^{2t} \begin{pmatrix} 1 & t \\ -1 & -t-1 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix} = e^{2t} \begin{pmatrix} 1-t & -t \\ t & 1+t \end{pmatrix}$$
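$\Phi(t)$ here is exactly the matrix exponential $e^{At}$. Because $(A - 2I)^2 = 0$ (Cayley-Hamilton with a double eigenvalue), the series for $e^{(A-2I)t}$ truncates and $e^{At} = e^{2t}\,(I + (A - 2I)t)$, which reproduces the matrix above. A NumPy sketch:

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [1.0,  3.0]])
N = A - 2 * np.eye(2)
print(np.allclose(N @ N, 0))       # N is nilpotent: N^2 = 0

def Phi(t):
    # e^{At} = e^{2t} (I + N t), since higher powers of N vanish.
    return np.exp(2 * t) * (np.eye(2) + N * t)

t = 0.5
expected = np.exp(2 * t) * np.array([[1 - t, -t],
                                     [t, 1 + t]])
print(np.allclose(Phi(t), expected))   # matches the closed form above
print(np.allclose(Phi(0.0), np.eye(2)))  # Phi(0) = I
```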

Jordan Forms

If $A$ is $n \times n$ with $n$ linearly independent eigenvectors, then $A$ can be diagonalized using a similarity transformation $T^{-1}AT = D$. The transform matrix $T$ consists of the eigenvectors of $A$, and the diagonal entries of $D$ are the eigenvalues of $A$.

In the case of repeated eigenvalues and fewer than $n$ linearly independent eigenvectors, $A$ can be transformed into a nearly diagonal matrix $J$, called the Jordan form of $A$, with $T^{-1}AT = J$.

Example 3: Transform Matrix (1 of 2)

In Example 1, our system $\mathbf{x}' = A\mathbf{x}$ was

$$\mathbf{x}' = \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix} \mathbf{x}$$

with eigenvalues $r_1 = 2$ and $r_2 = 2$ and vectors

$$\xi = \begin{pmatrix} 1 \\ -1 \end{pmatrix}, \quad \eta = \begin{pmatrix} 0 \\ -1 \end{pmatrix} + k \begin{pmatrix} 1 \\ -1 \end{pmatrix}$$

Choosing $k = 0$, the transform matrix $T$ formed from the two vectors $\xi$ and $\eta$ is

$$T = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}$$

Example 3: Jordan Form (2 of 2)

The Jordan form $J$ of $A$ is defined by $T^{-1}AT = J$. Now

$$T = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}, \quad T^{-1} = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix}$$

and hence

$$J = T^{-1}AT = \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix} \begin{pmatrix} 1 & -1 \\ 1 & 3 \end{pmatrix} \begin{pmatrix} 1 & 0 \\ -1 & -1 \end{pmatrix} = \begin{pmatrix} 2 & 1 \\ 0 & 2 \end{pmatrix}$$

Note that the eigenvalues of $A$, $r_1 = 2$ and $r_2 = 2$, are on the main diagonal of $J$, and that there is a 1 directly above the second eigenvalue. This pattern is typical of Jordan forms.
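The similarity computation can be verified numerically; a minimal NumPy sketch using the $T$ chosen above:

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [1.0,  3.0]])
# Columns of T: the eigenvector xi and the generalized eigenvector eta (k = 0).
T = np.array([[1.0,  0.0],
              [-1.0, -1.0]])

J = np.linalg.inv(T) @ A @ T
print(J)

# J should be the 2x2 Jordan block with eigenvalue 2.
print(np.allclose(J, np.array([[2.0, 1.0],
                               [0.0, 2.0]])))
```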