CHAPTER ONE
Matrices and Systems of Equations

Objective: To provide solvability conditions for the linear system Ax = b
and to introduce Gaussian elimination, a systematic method for solving Ax = b.
Outline
 • Motivating example.
 • Elementary row operations and elementary matrices.
 • Some basic properties of matrices.
 • Gaussian elimination for solving Ax = b.
 • Solvability conditions for Ax = b.
Motivating Example (curve fitting)

Given three points (x1, y1), (x2, y2), (x3, y3), find a polynomial of
degree 2 passing through the three given points.

Solution: Let the polynomial be

    y(x) = ax^2 + bx + c

where a, b and c are to be determined. The three interpolation conditions give

    [ y1 ]   [ x1^2  x1  1 ] [ a ]
    [ y2 ] = [ x2^2  x2  1 ] [ b ]
    [ y3 ]   [ x3^2  x3  1 ] [ c ]

which is a system of the form Ax = b with unknown (a, b, c)^T.
Question: Why transform to matrix form?
To provide a systematic approach and to make use of computer resources.

Question: How to solve Ax = b systematically?
One way is to put Ax = b in triangular form, which can easily be solved
by back-substitution.
Definition: A system is said to be in triangular form if in the k-th
equation the coefficients of the first (k-1) variables are all zero and
the coefficient of xk is nonzero (k = 1, ..., n).

Eg1:

    [ 1  2  3 ] [ x1 ]   [ 2 ]
    [ 0  2  4 ] [ x2 ] = [ 1 ]
    [ 0  0  3 ] [ x3 ]   [ 4 ]

that is,

    x1 + 2x2 + 3x3 = 2
         2x2 + 4x3 = 1
               3x3 = 4

Back-substitution gives x3 = 4/3, then x2 = -13/6, then x1 = 7/3.
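
To make the back-substitution step concrete, here is a minimal Python/NumPy
sketch, run on the triangular system of Eg1 (the function name is ours, not
from the text):

```python
import numpy as np

def back_substitute(U, b):
    """Solve Ux = b for an n x n upper triangular U with nonzero diagonal."""
    n = len(b)
    x = np.zeros(n)
    for k in range(n - 1, -1, -1):
        # x_k = (b_k - sum_{j > k} u_kj x_j) / u_kk
        x[k] = (b[k] - U[k, k+1:] @ x[k+1:]) / U[k, k]
    return x

U = np.array([[1.0, 2.0, 3.0],
              [0.0, 2.0, 4.0],
              [0.0, 0.0, 3.0]])
b = np.array([2.0, 1.0, 4.0])
print(back_substitute(U, b))   # approximately [ 7/3, -13/6, 4/3 ]
```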
Question: How to put Ax=b in triangular form
while leaving the solution set invariant?
Solution: By elementary row operations as described
below.
Definition: Two systems of equations involving the same variables are
said to be equivalent if they have the same solution set.
Before introducing elementary row operations, we recall some definitions
and notations. (§ 1.3)
 • Equality of two matrices.
 • Multiplication of a matrix by a scalar.
 • Matrix addition.
 • Matrix multiplication.
 • Identity matrix.
 • Multiplicative inverse.
 • Nonsingular and singular matrices.
 • Transpose of a matrix.
Definitions

Def. If A = (aij) ∈ F^(m×n) and B = (bij) ∈ F^(n×r), then the matrix
product AB = C = (cij) ∈ F^(m×r), where

    cij = a(i,:) bj = Σ_{k=1}^{n} aik bkj.

Def. An (n × n) matrix A is said to be nonsingular or invertible if
there exists a matrix B such that AB = BA = I. The matrix B is said to
be a multiplicative inverse of A, and B is denoted by A^(-1).

Warning: In general, AB ≠ BA. Matrix multiplication is not commutative.
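
As a sanity check on the entrywise definition cij = Σ aik bkj, here is a small
Python/NumPy sketch; the triple-loop function and the test matrices are our own
illustration, not from the text:

```python
import numpy as np

def matmul_by_definition(A, B):
    """Compute C = AB entrywise: c_ij = sum_k a_ik * b_kj."""
    m, n = A.shape
    n2, r = B.shape
    assert n == n2, "inner dimensions must agree"
    C = np.zeros((m, r))
    for i in range(m):
        for j in range(r):
            C[i, j] = sum(A[i, k] * B[k, j] for k in range(n))
    return C

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
print(matmul_by_definition(A, B))                       # same as A @ B
print(np.allclose(matmul_by_definition(A, B), A @ B))   # True
```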
Definitions (cont.)

Def. The transpose of an (m × n) matrix A is the (n × m) matrix B
defined by bji = aij for j = 1, ..., n and i = 1, ..., m. The transpose
of A is denoted by A^T.

Def. An (n × n) matrix A is said to be symmetric if A^T = A.
Some Matrix Properties

Let α and β be scalars, and let A, B and C be matrices with proper
dimensions.
 • A + B = B + A                 (Commutative Law)
 • (A + B) + C = A + (B + C)     (Associative Law)
 • (AB)C = A(BC)                 (Associative Law)
 • A(B + C) = AB + AC            (Distributive Law)
 • (A + B)C = AC + BC            (Distributive Law)

Some Matrix Properties (cont.)
 • (αβ)A = α(βA)
 • α(AB) = (αA)B = A(αB)
 • (α + β)A = αA + βA
 • α(A + B) = αA + αB
 • (A^T)^T = A
 • (αA)^T = α A^T
 • (A + B)^T = A^T + B^T
 • (AB)^T = B^T A^T
 • (AB)^(-1) = B^(-1) A^(-1)     (for nonsingular A and B)
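
The transpose and inverse identities above are easy to spot-check numerically.
A minimal NumPy sketch with randomly chosen matrices (our own illustration;
random Gaussian matrices are almost surely nonsingular):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# (AB)^T = B^T A^T
print(np.allclose((A @ B).T, B.T @ A.T))                  # True

# (AB)^{-1} = B^{-1} A^{-1}, provided A and B are nonsingular
print(np.allclose(np.linalg.inv(A @ B),
                  np.linalg.inv(B) @ np.linalg.inv(A)))   # True
```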
Notations

    A = [ a11 ... a1n ]                 [ b1 ]              [ x1 ]
        [  .  ...  .  ] ∈ F^(m×n),  b = [ .. ] ∈ F^m,   x = [ .. ] ∈ F^n
        [ am1 ... amn ]                 [ bm ]              [ xn ]

The matrix

    (A | b) = [ a11 ... a1n | b1 ]
              [  .  ...  .  | .. ]
              [ am1 ... amn | bm ]

is called an augmented matrix. In general, F = R or F = C.

Moreover, we define the i-th row and the j-th column of A by

    a(i,:) = ( ai1 ... ain ),      aj = a(:,j) = [ a1j ]
                                                 [ ..  ]
                                                 [ amj ]

so that

    A = ( a1, a2, ..., an ) = [ a(1,:) ]
                              [   ..   ]
                              [ a(m,:) ]

and

    Ax = Σ_{i=1}^{n} xi ai = [ a(1,:) x ]
                             [    ..    ]
                             [ a(m,:) x ]

Def: Let a1, a2, ..., an ∈ F^m and c1, c2, ..., cn ∈ F. Then
Σ_{i=1}^{n} ci ai is said to be a linear combination of a1, a2, ..., an.

Note that Ax = Σ_{i=1}^{n} xi ai. We have the next result.
Theorem 1.3.1: Ax = b is consistent if and only if b can be written as a
linear combination of the column vectors of A.
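
A quick numerical illustration of the column view Ax = Σ xi ai, using the data
of Eg1 (our own example):

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 2.0, 4.0],
              [0.0, 0.0, 3.0]])
x = np.array([7/3, -13/6, 4/3])

# Ax equals the linear combination x1*a1 + x2*a2 + x3*a3 of the columns of A
lin_comb = sum(x[j] * A[:, j] for j in range(A.shape[1]))
print(np.allclose(A @ x, lin_comb))   # True
print(A @ x)                          # [2. 1. 4.] -> b = (2, 1, 4)^T is consistent
```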
Application 1: Weight Reduction

Table 1  Calories Burned Per Hour

                            Weight in lb
Exercise Activity        152    161    170    178
Walking 2 mph            213    225    237    249
Running 5.5 mph          651    688    726    764
Bicycling 5.5 mph        304    321    338    356
Tennis                   420    441    468    492
Application 1: Weight Reduction (cont.)

Table 2  Hours Per Day For Each Activity

Exercise schedule   Walking   Running   Bicycling   Tennis
Monday                1.0       0.0        1.0        0.0
Tuesday               0.0       0.0        0.0        2.0
Wednesday             0.4       0.5        0.0        0.0
Thursday              0.0       0.0        0.5        2.0
Friday                0.4       0.5        0.0        0.0
Application 1: Weight Reduction (end)

Solution: Using the 178-lb column of Table 1, the calories burned on each day
of the schedule are given by the matrix-vector product

    [ 1.0  0.0  1.0  0.0 ] [ 249 ]   [  605  ]
    [ 0.0  0.0  0.0  2.0 ] [ 764 ]   [  984  ]
    [ 0.4  0.5  0.0  0.0 ] [ 356 ] = [ 481.6 ]
    [ 0.0  0.0  0.5  2.0 ] [ 492 ]   [ 1162  ]
    [ 0.4  0.5  0.0  0.0 ]           [ 481.6 ]
Application 2: Production Costs

Table 3  Production Costs Per Item (dollars)

                               Product
Expenses                      A      B      C
Raw materials               0.10   0.30   0.15
Labor                       0.30   0.40   0.25
Overhead and miscellaneous  0.10   0.20   0.15
Application 2: Production Costs (cont.)

Table 4  Amount Produced Per Quarter

                       Season
Product   Summer    Fall    Winter   Spring
A          4000     4500     4500     4000
B          2000     2600     2400     2200
C          5800     6200     6000     6000
Application 2: Production Costs (cont.)

Solution:

    M = [ 0.10  0.30  0.15 ]        P = [ 4000  4500  4500  4000 ]
        [ 0.30  0.40  0.25 ]            [ 2000  2600  2400  2200 ]
        [ 0.10  0.20  0.15 ]            [ 5800  6200  6000  6000 ]

    MP = [ 1870  2160  2070  1960 ]
         [ 3450  3940  3810  3580 ]
         [ 1670  1900  1830  1740 ]
Application 2: Production Costs (end)

Solution:

Table 5

                                Season
                      Summer    Fall    Winter   Spring     Year
Raw materials          1,870   2,160    2,070    1,960     8,060
Labor                  3,450   3,940    3,810    3,580    14,780
Overhead and
miscellaneous          1,670   1,900    1,830    1,740     7,140
Total
production cost        6,990   8,000    7,710    7,280    29,980
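
A short NumPy sketch reproducing MP and the totals in Table 5 (our own
variable names):

```python
import numpy as np

M = np.array([[0.10, 0.30, 0.15],
              [0.30, 0.40, 0.25],
              [0.10, 0.20, 0.15]])        # cost per item (Table 3)
P = np.array([[4000, 4500, 4500, 4000],
              [2000, 2600, 2400, 2200],
              [5800, 6200, 6000, 6000]])  # items per quarter (Table 4)

MP = M @ P
print(MP)               # cost category x quarter, as in Table 5
print(MP.sum(axis=1))   # yearly cost per category: [ 8060. 14780.  7140.]
print(MP.sum(axis=0))   # total cost per quarter: [6990. 8000. 7710. 7280.]
```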
Application 5: Networks and Graphs (P.57)

Application 5: Networks and Graphs (cont.)

DEF. The adjacency matrix A ∈ F^(n×n) of a graph with vertices V1, ..., Vn
is defined by

    aij = 1, if {Vi, Vj} is an edge of the graph,
    aij = 0, if there is no edge joining Vi and Vj.

For Figure 1.3.2, the adjacency matrix is

    A = [ 0  1  0  0  0 ]
        [ 1  0  0  0  1 ]
        [ 0  0  0  1  1 ]
        [ 0  0  1  0  1 ]
        [ 0  1  1  1  0 ]
Application 5: Networks and Graphs (end)

Theorem 1.3.3. If A is an n × n adjacency matrix of a graph and a_ij^(k)
represents the ij-th entry of A^k, then a_ij^(k) is equal to the number of
walks of length k from Vi to Vj.

For the graph of Figure 1.3.2,

    A^3 = [ 0  2  1  1  0 ]
          [ 2  0  1  1  4 ]
          [ 1  1  2  3  4 ]
          [ 1  1  3  2  4 ]
          [ 0  4  4  4  2 ]
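
Assuming the adjacency matrix above is the one for Figure 1.3.2, the powers of
A are easy to check with NumPy:

```python
import numpy as np

A = np.array([[0, 1, 0, 0, 0],
              [1, 0, 0, 0, 1],
              [0, 0, 0, 1, 1],
              [0, 0, 1, 0, 1],
              [0, 1, 1, 1, 0]])

A3 = np.linalg.matrix_power(A, 3)
print(A3)
# By Theorem 1.3.3, A3[i, j] counts the walks of length 3 from V_{i+1} to V_{j+1};
# e.g. A3[0, 1] = 2 walks of length 3 from V1 to V2.
print(A3[0, 1])
```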
Application 6: Information Retrieval (P.59)

Suppose that our database consists of these book titles:
 B1. Applied Linear Algebra
 B2. Elementary Linear Algebra
 B3. Elementary Linear Algebra with Applications
 B4. Linear Algebra and Its Applications
 B5. Linear Algebra with Applications
 B6. Matrix Algebra with Applications
 B7. Matrix Theory

The collection of key words is given by the following alphabetical list:
    algebra, application, elementary, linear, matrix, theory
Application 6: Information Retrieval (cont.)

Table 8  Array Representation for Database of Linear Algebra Books

                              Books
Key Words      B1   B2   B3   B4   B5   B6   B7
algebra         1    1    1    1    1    1    0
application     1    0    1    1    1    1    0
elementary      0    1    1    0    0    0    0
linear          1    1    1    1    1    0    0
matrix          0    0    0    0    0    1    1
theory          0    0    0    0    0    0    1
Application 6: Information Retrieval (end)

If the words we are searching for are applied, linear, and algebra, then the
database matrix and search vector are given by

    A = [ 1  1  1  1  1  1  0 ]         [ 1 ]
        [ 1  0  1  1  1  1  0 ]         [ 1 ]
        [ 0  1  1  0  0  0  0 ]     x = [ 0 ]
        [ 1  1  1  1  1  0  0 ]         [ 1 ]
        [ 0  0  0  0  0  1  1 ]         [ 0 ]
        [ 0  0  0  0  0  0  1 ]         [ 0 ]

If we set y = A^T x, then

    y = ( 3  2  3  3  3  2  0 )^T

The i-th entry of y is the number of search words matched by book Bi.
Let's go back to solving Ax = b.

Eg2:

     x1 + 2x2 +  x3 =  3
    3x1 -  x2 - 3x3 = -1
    2x1 + 3x2 +  x3 =  4

Subtracting multiples of the first equation from the second and third
(R2 - 3R1, R3 - 2R1) gives

     x1 + 2x2 +  x3 =  3
        - 7x2 - 6x3 = -10
        -  x2 -  x3 = -2

and eliminating x2 from the third equation (R3 - (1/7)R2) gives the
triangular system

     x1 + 2x2 +  x3 =  3
        - 7x2 - 6x3 = -10
           - (1/7)x3 = -4/7

In augmented-matrix form the same steps read

    [ 1   2   1 |  3 ]     [ 1   2   1 |   3 ]     [ 1   2    1   |   3  ]
    [ 3  -1  -3 | -1 ]  →  [ 0  -7  -6 | -10 ]  →  [ 0  -7   -6   |  -10 ]
    [ 2   3   1 |  4 ]     [ 0  -1  -1 |  -2 ]     [ 0   0  -1/7  | -4/7 ]
(§ 1.2)
Three types of elementary row operations:
 I.   Interchange two rows.
 II.  Multiply a row by a nonzero scalar α.
 III. Replace a row by its sum with a multiple of another row.
Lead variables and free variables (p.15)

Eg: In the system with augmented matrix

    [ 1  2  0  2  0 | 1 ]
    [ 0  0  1  3  2 | 2 ]
    [ 0  0  0  0  1 | 5 ]

x1, x3 and x5 are lead variables, while x2 and x4 are free variables.
Def. A matrix is said to be in row echelon form if
 (i)   The first nonzero entry in each nonzero row is 1.
 (ii)  If row k does not consist entirely of zeros, the number of leading
       zero entries in row k+1 is greater than the number of leading zero
       entries in row k.
 (iii) If there are rows whose entries are all zero, they are below the
       rows having nonzero entries.
Def. The process of using row operations I, II, and III to
transform a linear system into one whose augmented
matrix is in row echelon form is called Gaussian
elimination.
Overdetermined and Underdetermined

Def. A linear system is said to be overdetermined if there are more
equations (m) than unknowns (n), i.e. m > n.
Warning: Overdetermined systems are usually (but not always) inconsistent.

Def. A system of m linear equations in n unknowns is said to be
underdetermined if there are fewer equations than unknowns (m < n).
Reduced Row Echelon Form

Def. A matrix is said to be in reduced row echelon form if:
 (i)  The matrix is in row echelon form.
 (ii) The first nonzero entry in each row is the only nonzero entry in
      its column.

Def. The process of using elementary row operations to transform a matrix
into reduced row echelon form is called Gauss-Jordan reduction.
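
Reduced row echelon form can also be computed symbolically; for example,
SymPy's Matrix.rref() returns the RREF together with the pivot columns. The
matrix below is the augmented matrix from the lead/free-variable example
above, used here only as an illustration:

```python
from sympy import Matrix

# Augmented matrix from the lead/free-variable example above
M = Matrix([[1, 2, 0, 2, 0, 1],
            [0, 0, 1, 3, 2, 2],
            [0, 0, 0, 0, 1, 5]])

rref_M, pivot_cols = M.rref()
print(rref_M)        # Gauss-Jordan reduction of M
print(pivot_cols)    # (0, 2, 4) -> lead variables x1, x3, x5
```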
Application 2: Electrical Networks (P.22)

Kirchhoff's Laws:
1. At every node the sum of the incoming currents equals the sum of the
   outgoing currents.
2. Around every closed loop the algebraic sum of the voltage gains must
   equal the algebraic sum of the voltage drops.

For the network on P.22, the augmented matrix and its row echelon form are

    [  1  -1   1 | 0 ]        [ 1  -1    1   |  0  ]
    [ -1   1  -1 | 0 ]   →    [ 0   1  -2/3  | 4/3 ]
    [  4   2   0 | 8 ]        [ 0   0    1   |  1  ]
    [  0   2   5 | 9 ]        [ 0   0    0   |  0  ]
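
The currents can be recovered numerically; a least-squares solve handles the
redundant node equation (a sketch, assuming the reconstructed augmented matrix
above):

```python
import numpy as np

A = np.array([[ 1, -1,  1],
              [-1,  1, -1],
              [ 4,  2,  0],
              [ 0,  2,  5]], dtype=float)
b = np.array([0, 0, 8, 9], dtype=float)

# The 4x3 system is consistent, so lstsq returns its (unique) solution
i, *_ = np.linalg.lstsq(A, b, rcond=None)
print(i)   # [1. 2. 1.]  ->  i1 = 1, i2 = 2, i3 = 1
```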
Application 4: Economic Models For Exchange of Goods (P.25)

          F     M     C
    F    1/2   1/3   1/2
    M    1/4   1/3   1/4
    C    1/4   1/3   1/4
(§ 1.4)
Elementary Matrices

Type I   (Eij):    Obtained from the identity matrix by interchanging
                   rows i and j.
Type II  (Ei(α)):  Obtained from the identity matrix by multiplying
                   row i by α ≠ 0.
Type III (Eij(α)): Obtained from the identity matrix by adding α times
                   row i to row j.

Elementary Row / Column Operations
 • Eij A     means performing a type I row operation on A.
 • Ei(α) A   means performing a type II row operation on A.
 • Eij(α) A  means performing a type III row operation on A.
 • A Eij     means performing a type I column operation on A.
 • A Ei(α)   means performing a type II column operation on A.
 • A Eij(α)  means performing a type III column operation on A.
Theorem 1.4.2: If E is an elementary matrix, then E is nonsingular and
E^(-1) is an elementary matrix of the same type, with

    Eij^(-1) = Eij,    Ei(α)^(-1) = Ei(1/α),    Eij(α)^(-1) = Eij(-α).

The solution set of a linear system is invariant under the three types of
row operations: Ax = b and EAx = Eb have the same solution set.
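
A small NumPy sketch that builds the three types of elementary matrices and
checks the inverse relations stated in Theorem 1.4.2 (the helper functions are
our own notation):

```python
import numpy as np

def E_swap(n, i, j):           # Type I: interchange rows i and j of I
    E = np.eye(n); E[[i, j]] = E[[j, i]]; return E

def E_scale(n, i, alpha):      # Type II: multiply row i of I by alpha != 0
    E = np.eye(n); E[i, i] = alpha; return E

def E_add(n, i, j, alpha):     # Type III: add alpha * (row i) to row j of I
    E = np.eye(n); E[j, i] = alpha; return E

n = 3
print(np.allclose(np.linalg.inv(E_swap(n, 0, 2)), E_swap(n, 0, 2)))           # Eij^-1 = Eij
print(np.allclose(np.linalg.inv(E_scale(n, 1, 5.0)), E_scale(n, 1, 1/5)))     # Ei(a)^-1 = Ei(1/a)
print(np.allclose(np.linalg.inv(E_add(n, 0, 2, 4.0)), E_add(n, 0, 2, -4.0)))  # Eij(a)^-1 = Eij(-a)
```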
Row Equivalent (P.71)

Def. A matrix B is row equivalent to A if there exists a finite sequence
E1, E2, ..., Ek of elementary matrices such that B = Ek Ek-1 ... E1 A.

Theorem 1.4.3: Let A be an n × n matrix. The following are equivalent:
(a) A is nonsingular.
(b) Ax = 0 has only the trivial solution 0.
(c) A is row equivalent to I.
Proof of Theorem 1.4.3

(a) ⇒ (b): Let x0 be a solution of Ax = 0. Then
    x0 = (A^(-1) A) x0 = A^(-1) (A x0) = A^(-1) 0 = 0.

(b) ⇒ (c): Let A ~ U (row equivalent), where U is in reduced row echelon
form. Suppose U contains a zero row. Then Ux = 0 is effectively a
homogeneous system with fewer equations than unknowns, so by Theorem 1.2.1
it has a nontrivial solution, contradicting (b). Hence U has no zero row,
and since U is square and in reduced row echelon form, U = I; thus A ~ I.

(c) ⇒ (a): A ~ I implies A = E1 ... Ek for some elementary matrices
E1, ..., Ek. Since each Ei is nonsingular, A is nonsingular.
Corollary 1.4.4: Ax = b has a unique solution if and only if A is
nonsingular.

Pf: "⇐" The unique solution is x = A^(-1) b.
"⇒" Suppose x̂ is the unique solution and A is singular. By Theorem 1.4.3
there exists z ≠ 0 with Az = 0. Then A(x̂ + z) = b, so x̂ + z ≠ x̂ is also
a solution of Ax = b, a contradiction. Hence A is nonsingular.
But AB ≠ BA in general, and AB = AC does not imply B = C.

Eg.

    A = [ 1  0 ]     B = [ 0  1 ]     C = [ 0  1 ]
        [ 1  0 ]         [ 0  1 ]         [ 1  1 ]

    AB = [ 0  1 ]   ≠   BA = [ 1  0 ]
         [ 0  1 ]            [ 1  0 ]

Moreover, AC = AB while B ≠ C.
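
These claims are quick to verify numerically (a sketch using the matrices
above):

```python
import numpy as np

A = np.array([[1, 0], [1, 0]])
B = np.array([[0, 1], [0, 1]])
C = np.array([[0, 1], [1, 1]])

print(A @ B)                         # [[0 1] [0 1]]
print(B @ A)                         # [[1 0] [1 0]]  -> AB != BA
print(np.array_equal(A @ B, A @ C))  # True: AB = AC even though B != C
```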
Method For Computing A^(-1)

If A is nonsingular, then A is row equivalent to I, so there exist
elementary matrices E1, ..., Ek such that

    Ek Ek-1 ... E1 A = I                               ---------- (1)

Multiplying (1) on the right by A^(-1) gives

    Ek Ek-1 ... E1 I = A^(-1)                          ---------- (2)

Then

    Ek ... E1 (A | I) = (Ek ... E1 A | Ek ... E1 I)
                      = (I | Ek ... E1 I)      ( by (1) )
                      = (I | A^(-1))           ( by (2) )
Example 4. (P.73)

Q: Compute A^(-1) if

    A = [  1   4   3 ]
        [ -1  -2   0 ]
        [  2   2   3 ]

Sol: Row reduce the augmented matrix (A | I):

    [  1   4   3 | 1  0  0 ]        [ 1  0  0 | -1/2  -1/2   1/2 ]
    [ -1  -2   0 | 0  1  0 ]   →    [ 0  1  0 |  1/4  -1/4  -1/4 ]
    [  2   2   3 | 0  0  1 ]        [ 0  0  1 |  1/6   1/2   1/6 ]

so

    A^(-1) = [ -1/2  -1/2   1/2 ]
             [  1/4  -1/4  -1/4 ]
             [  1/6   1/2   1/6 ]
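
A short NumPy check of this computation (the slides row reduce (A | I) by
hand; here we simply call a library inverse on the matrix above):

```python
import numpy as np

A = np.array([[ 1,  4,  3],
              [-1, -2,  0],
              [ 2,  2,  3]], dtype=float)

A_inv = np.linalg.inv(A)
print(A_inv)   # approximately [[-1/2, -1/2, 1/2], [1/4, -1/4, -1/4], [1/6, 1/2, 1/6]]
print(np.allclose(A @ A_inv, np.eye(3)))   # True
```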
Diagonal and Triangular Matrices

Def. An n × n matrix A is said to be upper triangular if aij = 0 for
i > j, and lower triangular if aij = 0 for i < j.

Def. An n × n matrix A is diagonal if aij = 0 whenever i ≠ j.
Triangular Factorization

If an n × n matrix C can be reduced to upper triangular form using only
row operation III, then C has an LU factorization. The matrix L is unit
lower triangular, and if i > j, then lij is the multiple of the j-th row
subtracted from the i-th row during the reduction process.
Example 6. (P.74)

    A = [ 2   4  2 ]
        [ 1   5  2 ]
        [ 4  -1  9 ]

Using only row operation III, A reduces to an upper triangular U, with the
multipliers recorded in L:

    L = [  1   0  0 ]        U = [ 2  4  2 ]
        [ 1/2  1  0 ]            [ 0  3  1 ]
        [  2  -3  1 ]            [ 0  0  8 ]

Remark:

    LU = [ 2   4  2 ] = A
         [ 1   5  2 ]
         [ 4  -1  9 ]
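
A quick NumPy check that these factors reproduce A (using the L and U from
Example 6):

```python
import numpy as np

L = np.array([[1.0,  0.0, 0.0],
              [0.5,  1.0, 0.0],
              [2.0, -3.0, 1.0]])
U = np.array([[2.0, 4.0, 2.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 8.0]])
A = np.array([[2.0,  4.0, 2.0],
              [1.0,  5.0, 2.0],
              [4.0, -1.0, 9.0]])

print(np.allclose(L @ U, A))   # True
```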
Block Multiplication

Let A be an m × n matrix and B an n × r matrix. It is often useful to
partition A and B and express the product in terms of the submatrices of
A and B.

If we partition B into columns, B = (b1, ..., br), then

    AB = (Ab1, Ab2, ..., Abr)

If we partition A into rows,

    A = [ a(1,:) ]                   [ a(1,:) B ]
        [ a(2,:) ]     then   AB  =  [ a(2,:) B ]
        [   ..   ]                   [    ..    ]
        [ a(m,:) ]                   [ a(m,:) B ]
Block Multiplication (cont.)

Case 1.   A ( B1  B2 ) = ( A B1   A B2 )

Case 2.   [ A1 ] B = [ A1 B ]
          [ A2 ]     [ A2 B ]

Case 3.   ( A1  A2 ) [ B1 ] = A1 B1 + A2 B2
                     [ B2 ]

Block Multiplication (cont.)

Case 4.   Partition A into an s × t array of blocks and B into a t × r
          array of blocks (with compatible block sizes):

              A = [ A11 ... A1t ]          B = [ B11 ... B1r ]
                  [  .  ...  .  ]              [  .  ...  .  ]
                  [ As1 ... Ast ]              [ Bt1 ... Btr ]

          Then AB = C, an s × r array of blocks, where

              Cij = Σ_{k=1}^{t} Aik Bkj
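
A small NumPy illustration of Case 4 with a 2 × 2 block partition (the block
sizes are chosen arbitrarily for the example):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 6))
B = rng.standard_normal((6, 4))

# Partition A into a 2 x 2 array of blocks (rows 3+2, columns 2+4)
A11, A12 = A[:3, :2], A[:3, 2:]
A21, A22 = A[3:, :2], A[3:, 2:]
# Partition B conformably (rows 2+4, columns 3+1)
B11, B12 = B[:2, :3], B[:2, 3:]
B21, B22 = B[2:, :3], B[2:, 3:]

C = np.block([[A11 @ B11 + A12 @ B21, A11 @ B12 + A12 @ B22],
              [A21 @ B11 + A22 @ B21, A21 @ B12 + A22 @ B22]])
print(np.allclose(C, A @ B))   # True
```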
Example 2. (P.85)

Let A be an n × n matrix of the form

    A = [ A11   O  ]
        [  O   A22 ]

where A11 is a k × k matrix (k < n). Show that A is nonsingular if and
only if A11 and A22 are nonsingular.

Solution: If A11 and A22 are nonsingular, then by block multiplication

    [ A11   O  ] [ A11^(-1)     O      ]   [ I  O ]
    [  O   A22 ] [    O      A22^(-1)  ] = [ O  I ]

(and similarly in the other order), so A is nonsingular. Conversely, if A
is nonsingular, partition A^(-1) into blocks of the same sizes,

    A^(-1) = [ B11  B12 ]
             [ B21  B22 ]

Then A A^(-1) = I and A^(-1) A = I give A11 B11 = I = B11 A11 and
A22 B22 = I = B22 A22, so A11 and A22 are nonsingular.
Scalar / Inner Product

Given two vectors x and y in R^n,

    x^T y = ( x1, x2, ..., xn ) [ y1 ]
                                [ y2 ]  = x1 y1 + x2 y2 + ... + xn yn ∈ R^(1×1)
                                [ .. ]
                                [ yn ]

This product is referred to as a scalar product or an inner product.
Outer Product

Given two vectors x and y in R^n,

    x y^T = [ x1 ]                       [ x1 y1   x1 y2  ...  x1 yn ]
            [ x2 ] ( y1, y2, ..., yn ) = [ x2 y1   x2 y2  ...  x2 yn ]  ∈ R^(n×n)
            [ .. ]                       [   .       .    ...    .   ]
            [ xn ]                       [ xn y1   xn y2  ...  xn yn ]

The product x y^T is referred to as the outer product of x and y.
Outer Product Expansion

Suppose that X ∈ F^(m×n) and Y ∈ F^(k×n). Then

    X Y^T = ( x1, x2, ..., xn ) [ y1^T ]
                                [ y2^T ]  = x1 y1^T + x2 y2^T + ... + xn yn^T
                                [  ..  ]
                                [ yn^T ]

This representation is referred to as an outer product expansion.
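
A final NumPy check of the outer product expansion X Y^T = Σ xi yi^T (random
test matrices; our own example):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((4, 3))   # m x n
Y = rng.standard_normal((5, 3))   # k x n

# Sum of outer products of the columns of X with the columns of Y
expansion = sum(np.outer(X[:, i], Y[:, i]) for i in range(X.shape[1]))
print(np.allclose(X @ Y.T, expansion))   # True
```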