
6.3 Linear Independence, Dimension
“Better” Spanning Sets
• For a vector space V, there may be many different
spanning sets. For instance, the entire set V is itself a
spanning set.
• In the interest of efficiency, we would like to find a
spanning set with as few vectors as possible.
• We would like to find spanning sets with the property
that every vector in V can be represented in only one way
as a linear combination of vectors in the spanning set
(the vectors in such a spanning set are called linearly independent).
• Interestingly, if we can find a spanning set which has only
one way of representing the zero vector, we have found
such a linearly independent spanning set.
Formal Definition
A set of vectors {v1,v2,…,vn} is called linearly independent if
it satisfies the following condition:
If s1v1 + s2v2 + … + snvn = 0 --> s1=s2=…=sn=0
Otherwise, the set is called linearly dependent
(I.e., if the only way to represent the zero vector is the trivial
linear combination
0v1 + 0v2 + … + 0vn,
then we have a linearly independent set.)
Example
Show that {(1,-1,0), (0,-1,2), (2,1,1)} in R^3 is linearly
independent.
s1(1,-1,0) + s2(0,-1,2) + s3(2,1,1) = (0,0,0)
Show that s1, s2, s3 must all be 0 (this gives a system of 3 equations
in 3 unknowns).
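As a quick check (not part of the proof itself), here is a short Python sketch, assuming numpy is available, confirming that the system has only the trivial solution: the matrix whose columns are the three vectors has non-zero determinant (full rank).

import numpy as np

# columns of V are the three given vectors
V = np.array([[1, 0, 2],
              [-1, -1, 1],
              [0, 2, 1]], dtype=float)

# V s = 0 has only the trivial solution exactly when V is invertible
print(np.linalg.det(V))          # non-zero (about -7)
print(np.linalg.matrix_rank(V))  # 3 (full rank)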
Example 2
Show that {1+x, 3x+x^2, 2+x-x^2} is linearly independent in P2.
s1(1+x) + s2(3x + x^2) + s3(2 + x - x^2) = 0
s1 + s1x + 3s2x + s2x^2 + 2s3 + s3x - s3x^2 = 0
Collecting the coefficients of 1, x, and x^2:
s1 + 2s3 = 0
s1 + 3s2 + s3 = 0
s2 - s3 = 0
Solve and show that s1 = s2 = s3 = 0.
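A similar check can be done here by identifying each polynomial in P2 with its coefficient vector (constant, x, x^2); this is a sketch assuming numpy is available, and the full-rank test plays the role of solving the system.

import numpy as np

# rows are the coefficient vectors of 1+x, 3x+x^2, and 2+x-x^2
P = np.array([[1, 1, 0],
              [0, 3, 1],
              [2, 1, -1]], dtype=float)

# full rank (3) means the only solution is s1 = s2 = s3 = 0
print(np.linalg.matrix_rank(P))  # 3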
Example 3
Show that {sin x, cos x} is linearly independent in the vector
space F[0, 2π].
s1 sin x + s2 cos x = 0 must be true for every x in [0, 2π].
If x = 0, then s2 = 0.
If x = π/2, then s1 = 0, so s1 = s2 = 0.
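The same two evaluation points can be checked symbolically; this is a small sketch assuming sympy is available.

import sympy as sp

x, s1, s2 = sp.symbols('x s1 s2')
f = s1 * sp.sin(x) + s2 * sp.cos(x)

# the zero function must vanish at x = 0 and x = pi/2 in particular
eqs = [f.subs(x, 0), f.subs(x, sp.pi / 2)]
print(sp.solve(eqs, [s1, s2]))   # {s1: 0, s2: 0}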
Example 4
If {u,v} is linearly independent, show that {u+v,u-v} is too.
s1(u+v) + s2(u-v) = 0
(s1 + s2) u + (s1 - s2) v = 0
Since {u,v} is linearly independent,
s1 + s2 = 0 and s1-s2=0 --> s1=s2=0. •
Example 5
If v≠0, the set {v} consisting of only one vector is linearly ind.
sv = 0 --> s = 0 since v≠0
Example 6
If {v1,v2,…,vn} is linearly independent in a vector space V,
show that {a1v1, a2v2,…,anvn} is also linearly independent if
a1,a2,…an are all non-zero.
s1a1v1 + s2a2v2 + … + snanvn = 0
Since {v1,v2,…,vn} is linearly independent,
s1a1=s2a2=…=snan=0
And since a1,a2,…an are all non-zero, s1=s2=…=sn=0
Example 7
Show that no linearly independent set of vectors can
contain the zero vector.
1·0 + 0v1 + 0v2 + … + 0vn = 0,
which is a non-trivial linear combination
(the zero vector has coefficient 1).
Theorem 1
A set {v1,v2,…,vn} of vectors in vector space V is linearly
dependent iff some vi is a linear combination of the others.
Proof: --> Given the set is linearly dependent,
then a1v1 + a2v2 + … + anvn = 0 where some coefficient, say a1, is non-zero.
Then a1v1 = -a2v2 - … - anvn
So v1 = (-a2/a1)v2 + … + (-an/a1)vn, a linear combination of the other v's.
Theorem 1 Proof (cont)
<-- Given v1 is a linear combination of the other v’s:
v1 = a2v2 + … + anvn
So 1v1 - a2v2 - … - anvn = 0, a non-trivial linear combination (v1 has coefficient 1).
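To see Theorem 1 in action, here is a small numerical illustration (the vectors are a made-up dependent set, not from the text), assuming numpy is available: the rank test detects dependence, and least squares recovers one vector as a combination of the others.

import numpy as np

# hypothetical dependent set in R^3: v3 is built as v1 + 2*v2
v1 = np.array([1.0, 0.0, 1.0])
v2 = np.array([0.0, 1.0, 1.0])
v3 = v1 + 2 * v2

# rank 2 for 3 vectors confirms the set is linearly dependent
M = np.column_stack([v1, v2, v3])
print(np.linalg.matrix_rank(M))   # 2

# recover v3 as a linear combination of the other two, as Theorem 1 predicts
coeffs, *_ = np.linalg.lstsq(np.column_stack([v1, v2]), v3, rcond=None)
print(coeffs)                     # approximately [1., 2.]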
Geometric Interpretation
Let u and v be nonzero vectors w/ initial points at origin.
• Theorem 1 tells us that {u,v} is linearly dependent iff one is a
scalar multiple of the other, (i.e. iff they are parallel).
• So {u,v} is linearly independent iff u,v nonparallel in which
case span {u,v} is the plane through the origin containing u and v
• Likewise, {u,v,w} is linearly dependent iff one of them is a linear
combination of the others, which happens geometrically iff one is a
scalar multiple of another (parallel) or a linear combination of the
other two (in which case it lies in the plane they span).
• If {u,v,w} is linearly independent, the vectors must span a 3-dimensional space. (draw)
Theorem 2
Let {v1,v2,…,vn} be a linearly independent set of vectors in a
vector space V. If a vector v has representations,
v = s1v1 + s2v2 + … + snvn
v = t1v1 + t2v2 + … + tnvn
as linear combinations of these vectors, then s1=t1, s2=t2, …, sn=tn.
Proof: s1v1 + s2v2 + … + snvn = t1v1 + t2v2 + … + tnvn
(s1-t1)v1 + … + (sn-tn)vn = 0
Since {v1,v2,…,vn} are linearly independent,
(s1-t1)=…= (sn-tn)=0
So s1=t1,…,sn=tn.
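Theorem 2 can be seen concretely with the independent vectors from the first example; in the sketch below (assuming numpy, and with an arbitrarily chosen vector v), solving the square system produces the one and only list of coefficients.

import numpy as np

# columns are the linearly independent vectors (1,-1,0), (0,-1,2), (2,1,1)
V = np.array([[1, 0, 2],
              [-1, -1, 1],
              [0, 2, 1]], dtype=float)

v = np.array([3.0, -1.0, 5.0])   # a hypothetical vector to represent

# V s = v has exactly one solution s, so the coefficients are unique
s = np.linalg.solve(V, v)
print(s)
print(np.allclose(V @ s, v))     # True: the representation reproduces v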
Theorem 3 - Fundamental Theorem
Suppose a vector space V can be spanned by n vectors. If any
set of m vectors in V is linearly independent, then m ≤ n.
Proof: Let V = span {v1,v2,…,vn}. We need to show that every
set {u1,u2,…,um} of vectors in V with m > n is not linearly
independent. We can do this by showing that we can find
x1,x2,…,xm, not all zero, such that
x1u1 + x2u2 + … + xmum = 0
Theorem 3 (cont)
Since V = span {v1,v2,…,vn}, each vector uj can be expressed as
a linear combination of the v’s:
uj = a1jv1 + a2jv2 + … + anjvn, for j = 1, 2, …, m
We can substitute the uj's into the previous equation:
0 = x1u1 + x2u2 + … + xmum
  = x1(a11v1 + … + an1vn) + … + xm(a1mv1 + … + anmvn)
  = a11x1v1 + a12x2v1 + … + a1mxmv1 + … + an1x1vn + … + anmxmvn
Theorem 3 (cont)
0 = a11x1v1 + a12x2v1 + … + a1mxmv1 + … + an1x1vn + … + anmxmvn
  = (a11x1 + a12x2 + … + a1mxm)v1 + … + (an1x1 + an2x2 + … + anmxm)vn
This is true if all the coefficients of the v’s are zero:
ai1x1 + ai2x2 + … + aimxm = 0, for each i = 1, 2, …, n.
This is a system of n equations with m variables (xj’s).
Since m > n (more variables than equations), it has a
non-trivial solution (in fact, infinitely many).
Since there is a non-trivial solution, the u’s are linearly
dependent.
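The key fact used above (a homogeneous system with more unknowns than equations has a non-trivial solution) can be illustrated with a short sketch, assuming sympy is available; the coefficient matrix below is a made-up example with n = 2 equations and m = 3 unknowns.

import sympy as sp

# hypothetical coefficients a_ij: 2 equations, 3 unknowns (m > n)
A = sp.Matrix([[1, 2, 3],
               [4, 5, 6]])

# more unknowns than equations => the null space is non-trivial
x = A.nullspace()[0]
print(x.T)        # a non-zero solution vector
print((A * x).T)  # Matrix([[0, 0]]) -- confirms A x = 0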
Definition
A set {e1,e2,... ,en} of vectors in a vector space V is called a
basis of V if it satisfies the following 2 conditions:
1. {e1,e2,…,en} is linearly independent
2. V = span {e1,e2,…,en}
Basically saying that this is the “most efficient” or smallest
set which spans V.
Also, every vector in V can be written as a linear
combination of the e vectors in a unique way (by Thm 2).
And any two bases of V contain the same number of vectors.
(proven in Theorem 4)
Theorem 4
Let {e1,e2,…,en} and {f1,f2,…,fm} be two bases of a vector
space V. Then n=m.
Proof: Since V = span {e1,e2,…,en}, and the m f’s are
linearly independent, m ≤ n (by Thm 3).
Also, since V= span {f1,f2,…,fm}, and the n e’s are linearly
independent, n ≤ m (by Thm 3).
So n = m.
Definition
If {e1,e2,…,en} is a basis of the nonzero vector space V, the
number, n, of vectors in the basis is called the dimension of
V (dim V = n).
The zero vector space, V = {0}, is defined to have dimension 0:
dim {0} = 0
A vector space V is called finite dimensional if V = 0 or V
has a finite basis.
Example
Show that dim R^n = n and that {e1,e2,…,en} is a basis where
e1=(1,0,…,0), e2=(0,1,0,…,0), …, en=(0,…,0,1)
(the standard basis for R^n).
Proof: We showed in section 2 that R^n = span {e1,e2,…,en}.
Also, a1e1+ a2e2+…+anen = 0 implies a1=a2=…=an = 0 :
(a1, 0, …, 0) + (0, a2, 0, …, 0) + … + (0, …, 0, an) = (a1, a2, …, an) = (0, 0, …, 0)
So the e's are linearly
independent and thus a
basis, so dim R^n = n. •
Example
The space Mmn has a basis consisting of the mn different (m x n)
matrices, each having a 1 in a different single entry and 0s everywhere else.
The dimension is therefore mn.
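For the case m = n = 2, this basis can be checked directly; the sketch below (assuming numpy) flattens each of the four standard basis matrices of M22 into a vector in R^4 and verifies independence with a rank computation.

import numpy as np

# the four standard basis matrices of M22, each with a single 1
E = [np.array([[1, 0], [0, 0]]), np.array([[0, 1], [0, 0]]),
     np.array([[0, 0], [1, 0]]), np.array([[0, 0], [0, 1]])]

# flatten each matrix to a vector in R^4; rank 4 => they are independent
M = np.column_stack([Eij.reshape(4) for Eij in E])
print(np.linalg.matrix_rank(M))  # 4, so dim M22 = 4 = 2*2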
Example
Show that dim Pn = n + 1 and that {1, x, x^2, …, x^n} is a basis.
Proof: We know that Pn = span {1, x, x^2, …, x^n}.
Also, if a0·1 + a1x + … + anx^n = 0, then every coefficient must be
zero, so a0 = a1 = … = an = 0. So the set is both a spanning set and
linearly independent. Therefore it is a basis, and dim Pn = n + 1. •
Example
If v ≠ 0 is any nonzero vector in a vector space V, show that
span {v} = Rv (the set of all scalar multiples of v) has dimension 1.
Proof: {v} clearly spans Rv.
{v} is also linearly independent (Example 5). So {v} is a basis of Rv and
dim Rv = 1.
Example
1 1 
A  

0 0 
U = {X in M22 | AX = XA}, a subspace of M22
Show that dim U = 2 and find a basis of U.
We showed earlier that
U = { [ y+w  y ]  |  y, w in R }
      [  0   w ]
So each matrix, X, in U is
X = [ y+w  y ] = y [ 1  1 ] + w [ 1  0 ]
    [  0   w ]     [ 0  0 ]     [ 0  1 ]
So U = span B where
B = { [ 1  1 ] , [ 1  0 ] }
      [ 0  0 ]   [ 0  1 ]
We could also show that B is linearly independent
and thus a basis. So dim U = 2.
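The description of U can also be recovered symbolically; this is a sketch assuming sympy is available, solving AX = XA for a general 2 x 2 matrix X with entries a, b, c, d.

import sympy as sp

a, b, c, d = sp.symbols('a b c d')
A = sp.Matrix([[1, 1], [0, 0]])
X = sp.Matrix([[a, b], [c, d]])

# the entries of AX - XA give four linear equations in a, b, c, d
eqs = list(A * X - X * A)
sol = sp.solve(eqs, [b, c], dict=True)[0]
print(sol)          # {b: a - d, c: 0}
print(X.subs(sol))  # Matrix([[a, a - d], [0, d]])
# two free parameters (a and d) remain, so dim U = 2; with y = a - d and
# w = d this is y*[1 1; 0 0] + w*[1 0; 0 1], i.e. the span of B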
Example
Show that the set V of all symmetric (2 x 2) matrices is a
vector space, and find the dimension of V.
Proof: This is clearly a subset of M22, so we just show that it
is a subspace.
1. It includes the zero element, O22.
2. (A + B)^T = A^T + B^T = A + B (since A, B symmetric), so V is closed under addition.
3. (kA)^T = kA^T = kA (since A symmetric), so V is closed under scalar multiplication.
So V is a subspace.
Example (cont)
[ a  c ] = a [ 1  0 ] + b [ 0  0 ] + c [ 0  1 ]
[ c  b ]     [ 0  0 ]     [ 0  1 ]     [ 1  0 ]
B = { [ 1  0 ] , [ 0  0 ] , [ 0  1 ] }
      [ 0  0 ]   [ 0  1 ]   [ 1  0 ]
So B spans V.
We could also show that B is linearly independent, and is
thus a basis.
So dim V = 3
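As with the M22 example, independence of B can be confirmed by flattening; a short sketch assuming numpy is available:

import numpy as np

# the three matrices in B, flattened to vectors in R^4
B = [np.array([[1, 0], [0, 0]]), np.array([[0, 0], [0, 1]]),
     np.array([[0, 1], [1, 0]])]
M = np.column_stack([S.reshape(4) for S in B])

# rank 3 => B is linearly independent, hence a basis, so dim V = 3
print(np.linalg.matrix_rank(M))  # 3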
Theorem 5
Let V be a vector space and assume that dim V = n > 0
1. No set of more than n vectors in V can be linearly independent.
2. No set of fewer than n vectors can span V.
Proof: since dim V = n, V can be spanned by n vectors, so
(1) follows directly from the fundamental theorem
(2): Since dim V = n, V has a basis of n vectors, and these n vectors
are linearly independent.
Suppose some set of m vectors with m < n could span V. Then, by the
fundamental theorem, no linearly independent set in V could contain
more than m vectors, contradicting the fact that the n basis vectors
are linearly independent.
Therefore, no set of fewer than n vectors can span V. •
Example
A is (n x n). Show: there exist n^2 + 1 real numbers a0, a1, …, a_{n^2},
not all zero, such that
a0 I + a1 A + a2 A^2 + … + a_{n^2} A^{n^2} = 0
We showed earlier that dim Mnn = n^2.
So the n^2 + 1 matrices I, A, A^2, …, A^{n^2} cannot be linearly independent
(by Theorem 5), so some non-trivial linear combination of them equals 0. •
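Here is a concrete illustration for a hypothetical 2 x 2 matrix (so n = 2 and n^2 + 1 = 5), assuming sympy is available: the null-space computation produces a non-trivial set of a's, and the final check confirms the combination is the zero matrix.

import sympy as sp

# a made-up 2 x 2 example; the 5 matrices are I, A, A^2, A^3, A^4
A = sp.Matrix([[1, 2], [3, 4]])
powers = [A**k for k in range(5)]

# flatten each power into a column of a 4 x 5 matrix: more columns than
# rows, so the null space is non-trivial and gives a dependence relation
M = sp.Matrix.hstack(*[P.reshape(4, 1) for P in powers])
coeffs = M.nullspace()[0]
print(coeffs.T)   # the a's, not all zero

# check: a0*I + a1*A + ... + a4*A^4 is the zero matrix
Z = sum((c * P for c, P in zip(coeffs, powers)), sp.zeros(2, 2))
print(Z)          # Matrix([[0, 0], [0, 0]])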