Rank, determinant, dimension, and their links


ENGG2013 Unit 15
Rank, determinant, dimension, and the links between them
March 2011
Review of “rank”
• The “row-rank” of a matrix counts the maximum number of linearly independent rows.
• The “column-rank” of a matrix counts the maximum number of linearly independent columns.
• One application: given a large system of linear equations, count the number of essentially different equations.
– The number of essentially different equations is just the row-rank of the augmented matrix.
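This application can be sketched with numpy. The system below is a made-up example in which the third equation is the sum of the first two, so only two equations are essentially different:

```python
import numpy as np

# Hypothetical system:
#   x + y = 3
#   x - y = 1
#   2x    = 4   (sum of the first two equations)
augmented = np.array([
    [1.0,  1.0, 3.0],
    [1.0, -1.0, 1.0],
    [2.0,  0.0, 4.0],
])

# The number of essentially different equations is the row-rank of
# the augmented matrix.
print(np.linalg.matrix_rank(augmented))  # 2
```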
kshum
ENGG2013
2
Evaluating the row-rank by definition
[Worked example: subsets of the rows of a sample matrix are tested one by one; several two-row subsets are linearly independent, while the full set of rows is linearly dependent, so the row-rank is 2.]
Calculation of row-rank via RREF
[Example: row reductions bring a sample matrix to reduced row echelon form with two non-zero rows.]
Row-rank = 2, because row reductions do not affect the number of linearly independent rows.
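The RREF computation can be sketched with sympy (assumed available); the 3×3 matrix below is a made-up example whose third row is the sum of the first two:

```python
from sympy import Matrix

# Hypothetical example: row 3 = row 1 + row 2, so only two rows are
# linearly independent.
A = Matrix([
    [1, 2, 3],
    [2, 4, 7],
    [3, 6, 10],
])

# rref() returns the reduced row echelon form and the pivot columns;
# the number of pivots is the row-rank.
R, pivots = A.rref()
print(R)
print(len(pivots))  # row-rank = 2
```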
Calculation of column-rank by definition
[Worked example: all subsets of the columns of a sample matrix are listed, and each is marked linearly independent (Y) or not (N); the largest linearly independent subsets have two columns.]
Column-rank = 2.
Theorem
Given any matrix, its row-rank and column-rank are equal.
In view of this property, we can simply say the “rank” of a matrix. It means either the row-rank or the column-rank.
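The theorem can be checked numerically with numpy; the 3×4 matrix below is a made-up example, and the rank of its transpose (the column-rank viewed as a row-rank) agrees with its own rank:

```python
import numpy as np

# Hypothetical 3x4 matrix; row 3 = row 1 + row 2, so the rank is 2.
A = np.array([
    [1.0, 2.0, 0.0, 1.0],
    [0.0, 1.0, 1.0, 2.0],
    [1.0, 3.0, 1.0, 3.0],
])

print(np.linalg.matrix_rank(A))    # rank of A
print(np.linalg.matrix_rank(A.T))  # rank of the transpose: same value
```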
Why row-rank = column-rank?
• If some column vectors are linearly dependent, they remain linearly dependent after any elementary row operation.
• [Example: a set of linearly dependent columns is shown to stay dependent after a row operation.]
Why row-rank = column-rank?
• Row operations do not change the column-rank.
• Applying the same argument to the transpose of the matrix, we conclude that column operations do not change the row-rank either.
Why row-rank = column-rank?
Apply row reductions: the row-rank and column-rank do not change.
Apply column reductions: the row-rank and column-rank still do not change.
The result is a “normal form” whose top-left corner is an identity matrix. The row-rank and the column-rank of this normal form both equal the size of that identity submatrix, and are therefore equal.
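The two-stage reduction can be sketched with sympy; the 3×3 matrix below is a made-up rank-2 example, and column reduction is performed by row-reducing the transpose:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3], [2, 4, 7], [3, 6, 10]])   # rank 2

# Row-reduce, then column-reduce (row-reduce the transpose); the
# result is the normal form with a 2x2 identity block at top-left.
R = A.rref()[0]
N = R.T.rref()[0].T
print(N)
```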
DISCRIMINANT, DETERMINANT AND RANK
Discriminant of a quadratic equation
• y = ax² + bx + c
• Discriminant of ax² + bx + c = b² − 4ac.
• It determines whether the roots are distinct or not.
[Figure: a parabola y = ax² + bx + c crossing the x-axis.]
Discriminant measures the separation of roots
• y = x² + bx + c. Let the roots be α and β.
• y = (x − α)(x − β). Discriminant = (α − β)².
• Discriminant zero means that the two roots coincide.
[Figure: a parabola whose two roots are separated by a distance whose square is (α − β)².]
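These formulas can be checked with a small Python sketch; the quadratics below are made-up examples:

```python
def discriminant(a, b, c):
    """Discriminant of the quadratic a*x**2 + b*x + c."""
    return b * b - 4 * a * c

# Monic example: x**2 - 5x + 6 = (x - 2)(x - 3), roots 2 and 3,
# so the discriminant equals (2 - 3)**2 = 1.
print(discriminant(1, -5, 6))  # 1

# Coincident roots: x**2 - 4x + 4 = (x - 2)**2, discriminant 0.
print(discriminant(1, -4, 4))  # 0
```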
Discriminant is invariant under translation
• If we substitute u = x − t into y = ax² + bx + c (t is any real constant), then the discriminant of a(u + t)² + b(u + t) + c, as a polynomial in u, is the same as before.
[Figure: the translated parabola keeps the same root separation (α − β)².]
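This invariance can be verified symbolically with sympy (assumed available):

```python
from sympy import symbols, expand, discriminant

x, u, t, a, b, c = symbols('x u t a b c')

# Discriminant before and after the substitution x = u + t.
before = discriminant(a*x**2 + b*x + c, x)
after = discriminant(expand(a*(u + t)**2 + b*(u + t) + c), u)

print(before)                  # equals b**2 - 4*a*c
print(expand(after - before))  # 0: translation does not change it
```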
Determinant of a square matrix
• The determinant of a square matrix determines whether the matrix is invertible or not.
– Zero determinant: not invertible.
– Non-zero determinant: invertible.
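A minimal numpy sketch of this test (the two matrices are made-up examples):

```python
import numpy as np

singular = np.array([[1.0, 2.0],
                     [2.0, 4.0]])    # second row = 2 x first row
invertible = np.array([[1.0, 2.0],
                       [3.0, 4.0]])

print(np.linalg.det(singular))    # ~0: not invertible
print(np.linalg.det(invertible))  # ~-2: invertible
```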
Determinant measures the area
• A 2×2 determinant measures the area of a parallelogram.
• A 3×3 determinant measures the volume of a parallelepiped.
• An n×n determinant measures the “volume” of some “parallelogram” in n dimensions.
• Determinant zero means that the column vectors lie in some lower-dimensional space.
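The 2×2 case can be checked directly; the two spanning vectors below are a made-up example:

```python
import numpy as np

# Parallelogram spanned by u = (3, 0) and v = (1, 2): base 3,
# height 2, so the area is 6 -- the absolute value of the determinant.
u = np.array([3.0, 0.0])
v = np.array([1.0, 2.0])
area = abs(np.linalg.det(np.column_stack([u, v])))
print(area)  # 6.0
```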
Determinant is invariant under shearing action
• Shearing action = the third kind of elementary row or column operation: adding a multiple of one row (or column) to another.
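A numpy sketch of this invariance (the matrix and the shear factor are made up):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [0.0, 2.0]])

# Shearing: add 5 times row 0 to row 1 (third-kind row operation).
sheared = A.copy()
sheared[1] += 5 * sheared[0]

print(np.linalg.det(A))        # 6.0
print(np.linalg.det(sheared))  # still ~6.0
```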
Rank of a rectangular matrix
• The rank of a matrix counts the maximal number of linearly independent rows.
• It also counts the maximal number of linearly independent columns.
• It is an integer.
• If the matrix is m×n, then the rank is an integer between 0 and min(m, n).
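The bound can be illustrated with a made-up 2×3 example:

```python
import numpy as np

# Hypothetical 2x3 matrix whose second row is twice the first.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])

r = np.linalg.matrix_rank(A)
print(r)                       # 1
print(0 <= r <= min(A.shape))  # True: rank lies between 0 and min(m, n)
```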
Rank is invariant under row and column operations
[Example: a rank-2 matrix is transformed by a sequence of row and column operations; the rank remains 2 at every step.]
Comparison between det and rank
Determinant
• A real number.
• Defined for square matrices only.
• Non-zero det implies existence of the inverse.
• When det is zero, we only know that all the columns (or rows) together are linearly dependent, but we get no information about which subsets of the columns (or rows) are linearly independent.
Rank
• An integer.
• Defined for any rectangular matrix.
• When applied to an n×n square matrix, rank = n implies existence of the inverse.
Basis: Definition
• Given k vectors u1, u2, …, uk in ℝⁿ: if for every vector v in ℝⁿ there is one and only one choice of the coefficients c1, c2, …, ck such that
v = c1u1 + c2u2 + … + ckuk,
we say that these k vectors form a basis of ℝⁿ.
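A minimal numpy sketch of this uniqueness; the basis of ℝ² below is a made-up example, with the basis vectors as the columns of B:

```python
import numpy as np

# Hypothetical basis of R^2: the columns (1, 0) and (1, 2) of B.
B = np.array([[1.0, 1.0],
              [0.0, 2.0]])
v = np.array([3.0, 4.0])

# Because the columns form a basis, the coefficients are the unique
# solution of B c = v.
c = np.linalg.solve(B, v)
print(c)  # v = 1*(1, 0) + 2*(1, 2)
```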
Yet another interpretation of rank
• Recall that a subspace W in ℝⁿ is a subset which is
– Closed under addition: the sum of any two vectors in W stays in W.
– Closed under scalar multiplication: any scalar multiple of a vector in W stays in W as well.
[Figure: a subspace W inside ℝⁿ.]
Closedness property of subspace
[Figure: sums and scalar multiples of vectors in W remain inside W.]
Geometric picture
x − 3y + z = 0
[Figure: in xyz-space, W is the plane generated, or spanned, by vectors lying in it.]
Basis and dimension
• A basis of a subspace W is a set of linearly independent vectors which span W.
• A rigorous definition of the dimension is:
Dim(W) = the number of vectors in a basis of W.
[Figure: a plane W in xyz-space spanned by two basis vectors.]
Rank as dimension
• In this context, the rank of a matrix is the dimension of the subspace spanned by the rows of this matrix.
– The least number of row vectors required to span the subspace spanned by the rows.
• The rank is also the dimension of the subspace spanned by the columns of this matrix.
– The least number of column vectors required to span the subspace spanned by the columns.
Example
x − 2y + z = 0
[Figure: three row vectors lying on the plane x − 2y + z = 0 in xyz-space.]
Rank = 2. The three row vectors lie on the same plane; two of them are enough to describe the plane.
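This picture can be reproduced numerically; the three rows below are made-up vectors satisfying x − 2y + z = 0:

```python
import numpy as np

# Three hypothetical row vectors, each on the plane x - 2y + z = 0.
rows = np.array([[1.0, 1.0, 1.0],
                 [2.0, 1.0, 0.0],
                 [0.0, 1.0, 2.0]])

# Every row satisfies the plane equation...
print(rows @ np.array([1.0, -2.0, 1.0]))  # [0. 0. 0.]
# ...and the rank is 2: two of the rows already span the plane.
print(np.linalg.matrix_rank(rows))        # 2
```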
INTERPOLATION
Polynomial interpolation
• Given n points, find a polynomial of degree n − 1 which goes through these n points.
• Technical requirements:
– All x-coordinates must be distinct.
– y-coordinates need not be distinct.
[Figure: four data points (x1, y1), …, (x4, y4) with distinct x-coordinates.]
Lagrange interpolation
• Lagrange interpolating polynomial for four data points (x1, y1), (x2, y2), (x3, y3), (x4, y4):
p(x) = y1 (x − x2)(x − x3)(x − x4) / [(x1 − x2)(x1 − x3)(x1 − x4)]
     + y2 (x − x1)(x − x3)(x − x4) / [(x2 − x1)(x2 − x3)(x2 − x4)]
     + y3 (x − x1)(x − x2)(x − x4) / [(x3 − x1)(x3 − x2)(x3 − x4)]
     + y4 (x − x1)(x − x2)(x − x3) / [(x4 − x1)(x4 − x2)(x4 − x3)]
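A direct implementation of the Lagrange formula, generalized to any number of points; the data below are a made-up example on y = x², so interpolating at x = 5 returns 25:

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial at x.

    The xs must be distinct; the ys need not be.
    """
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Four points on y = x**2; a cubic interpolant reproduces x**2 exactly.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 1.0, 4.0, 9.0]
print(lagrange_interpolate(xs, ys, 5.0))  # 25.0
```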
Computing the coefficients by linear equations
• We want to solve for coefficients c3, c2, c1 and c0 such that
c3·xi³ + c2·xi² + c1·xi + c0 = yi  for i = 1, 2, 3, 4,
or equivalently
[ x1³ x1² x1 1 ] [c3]   [y1]
[ x2³ x2² x2 1 ] [c2] = [y2]
[ x3³ x3² x3 1 ] [c1]   [y3]
[ x4³ x4² x4 1 ] [c0]   [y4]
The theoretical basis for polynomial interpolation
• The determinant of a Vandermonde matrix is non-zero if all the xi’s are distinct. Hence we can always find the matrix inverse and solve the system of linear equations.
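This can be illustrated with numpy; the data points are made up (sampled from y = x³ − 2x + 1), and np.vander builds exactly the Vandermonde matrix of the system above:

```python
import numpy as np

# Four sample points with distinct x-coordinates, as the theorem
# requires, taken from y = x**3 - 2x + 1.
xs = np.array([0.0, 1.0, 2.0, 3.0])
ys = xs**3 - 2*xs + 1

V = np.vander(xs, 4)            # rows [x**3, x**2, x, 1]
print(np.linalg.det(V) != 0)    # True: distinct xs, non-zero determinant
coeffs = np.linalg.solve(V, ys)
print(coeffs)                   # ~[1, 0, -2, 1]: recovers the cubic
```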