Chapter 3: Systems of Linear Equations


Chapter: 3c
System of Linear Equations
Dr. Asaf Varol
[email protected]
Pivoting
Some disadvantages of Gaussian elimination are as follows. Since each result
follows from and depends on the previous step, for large systems the errors introduced by
round-off (or chop-off) lead to a loss of significant figures and hence to inaccurate
results. The error committed in any one step propagates to the final step and is
amplified along the way. This is especially true for ill-conditioned systems. Moreover, if any of the
diagonal elements is zero, the method will not work unless the system is rearranged so
as to avoid zero elements on the diagonal. The practice of interchanging rows with
each other so that the diagonal elements are the dominant elements is called partial
pivoting. The goal here is to put the largest possible coefficient along the diagonal by
manipulating the order of the rows. It is also possible to change the order of the variables,
i.e. instead of letting the unknown vector be
{X}T = (x, y, z) we may let it be {X}T = (y, z, x)
When this is done in addition to partial pivoting, the practice is called full
pivoting. In this case only the meaning of each variable changes; the system of
equations remains the same. A sketch of elimination with partial pivoting is given below.
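The following minimal Python sketch (not from the lecture notes; the function name is illustrative) selects the largest available pivot in each column before eliminating:

def solve_with_partial_pivoting(A, b):
    """Gaussian elimination with partial pivoting for A x = b."""
    n = len(b)
    A = [row[:] for row in A]          # work on copies
    b = b[:]
    for k in range(n):
        # partial pivoting: bring the largest |A[i][k]|, i >= k, up to row k
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        # eliminate column k below the pivot
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

# the system of Example E3.4.2 below: recovers x = 1/3, y = 2/3
print(solve_with_partial_pivoting([[0.0003, 3.0], [1.0, 1.0]], [2.0001, 1.0]))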
Example E3.4.2
Consider the following set of equations:
0.0003 x + 3.0000 y = 2.0001
1.0000 x + 1.0000 y = 1.0000
whose exact solution is x = 1/3 and y = 2/3.
Solving this system by Gaussian elimination with a three-significant-figure
mantissa yields
x = -3.33, y = 0.667
while a four-significant-figure mantissa yields
x = 0.0000 and y = 0.6667
Example E3.4.2 (continued)
Partial pivoting (switch the rows so that the diagonal elements are largest)
1.0000 x + 1.0000 y = 1.0000
0.0003 x + 3.0000 y = 2.0001
The solution of this set of equations using Gaussian elimination gives
y = 0.667 and x = 0.333 with three-significant-figure arithmetic
y = 0.6667 and x = 0.3333 with four-significant-figure arithmetic
Example E3.4.3
Problem: Apply full pivoting to the following system to achieve a well-conditioned
matrix.
3x + 5y - 5z = 3
2x - 4y - z = -3
6x - 5y + z = 2
3

2
6

5
4
5
 5  x   3 

  
 1   y     3
  

1   z   2 
Example E3.4.3 (continued)
Solution: First switch the first column with the second column, then switch the
first column with the third column to obtain
-5z + 3x + 5y = 3
- z + 2x - 4y = -3
z + 6x - 5y = 2
Then switch the second row with the third row
-5z + 3x + 5y = 3
z + 6x - 5y = 2
-z + 2x - 4y = -3
Yielding finally a well-conditioned system given by

[ -5   3   5 ] {z}   { 3}
[  1   6  -5 ] {x} = { 2}
[ -1   2  -4 ] {y}   {-3}
Gauss – Jordan Elimination
This method is very similar to the Gaussian elimination method. The only difference is
that the elimination procedure is extended to the elements above the diagonal as well, so that
back substitution is no longer necessary. The elimination process begins
with the augmented matrix and continues until the original matrix is turned into an
identity matrix, with the corresponding modifications applied to the right-hand side.
In short, our goal is to start with the general augmented system on the left and arrive at the
form on the right after appropriate algebraic manipulations.
Start                                 Arrive
[ a11  a12  a13 | c1 ]                [ 1  0  0 | c1* ]
[ a21  a22  a23 | c2 ]      ===>      [ 0  1  0 | c2* ]
[ a31  a32  a33 | c3 ]                [ 0  0  1 | c3* ]

The solution can be written at once as
x1 = c1* ;  x2 = c2* ;  x3 = c3*
Pseudo Code for Gauss – Jordan Elimination
do for k = 1 to n
    do for j = k+1 to n+1
        akj = akj / akk
    end do
    ! Important note: a(i, n+1) represents the right hand side
    do for i = 1 to n, i not equal to k
        do for j = k+1 to n+1
            aij = aij - (aik)(akj)
        end do
    end do
end do
cc----- The solution vector is saved in a(i, n+1), i = 1 to n
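As a concrete illustration, here is a minimal Python rendering of the pseudocode above (a sketch with assumed names, not the lecture's own program). It normalizes the pivot row, eliminates every other row, and reads the solution off the last column:

def gauss_jordan(A, c):
    """Gauss-Jordan elimination on the augmented matrix [A | c].
    Assumes nonzero pivots (no pivoting is performed)."""
    n = len(c)
    a = [row[:] + [ci] for row, ci in zip(A, c)]   # build the augmented matrix
    for k in range(n):
        pivot = a[k][k]
        for j in range(k, n + 1):                  # normalize the pivot row
            a[k][j] /= pivot
        for i in range(n):                         # eliminate all other rows
            if i != k:
                factor = a[i][k]
                for j in range(k, n + 1):
                    a[i][j] -= factor * a[k][j]
    return [a[i][n] for i in range(n)]             # solution is the last column

# Example E3.4.4 below: expect [1.0, 1.0, 1.0]
print(gauss_jordan([[4.0, 1.0, 1.0], [-1.0, -5.0, 6.0], [2.0, -4.0, 1.0]],
                   [6.0, 0.0, -1.0]))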
Example E3.4.4
Problem: Solve the problem of Equations (3.4.3) using Gauss-Jordan elimination.
4 x1 + x2 + x3 = 6
-x1 - 5 x2 + 6 x3 = 0
2 x1 - 4 x2 + x3 = -1
Solution: To do that we start with the augmented coefficient matrix

[A|a](0) = [ 4   1   1 |  6 ]
           [-1  -5   6 |  0 ]
           [ 2  -4   1 | -1 ]

(i) First multiply the first row by 1/4; then multiply it by -1 and subtract the result from
the second row, replacing the second row with the result of the subtraction. Similarly,
multiply the first row by 2 and subtract the result from the third row, replacing the third
row with the result of the subtraction. These operations lead to

[A|a](1) = [ 1.00   0.25   0.25 |  1.5 ]
           [ 0     -4.75   6.25 |  1.5 ]
           [ 0     -4.50   0.50 | -4.0 ]
Example E3.4.4 (continued)
(ii) Multiply the second row by -1/4.75; then multiply the second row by 0.25 and subtract the
result from the first row, replacing the first row with the result. Similarly, multiply the second
row by -4.5 and subtract the result from the third row, replacing the third row with the result,
to obtain

[A|a](2) = [ 1   0   0.5789 |  1.5789 ]
           [ 0   1  -1.3158 | -0.3158 ]
           [ 0   0  -5.4211 | -5.4211 ]

(iii) Multiply the third row by -1/5.4211; then multiply the third row by 0.5789 and subtract the
result from the first row, replacing the first row with the result. Similarly, multiply the third
row by -1.3158 and subtract the result from the second row, replacing the second row with the result,
to finally arrive at

[A|a](3) = [ 1   0   0 | 1.0000 ]
           [ 0   1   0 | 1.0000 ]
           [ 0   0   1 | 1.0000 ]

Hence we found the expected solution x1 = 1.0, x2 = 1.0, and x3 = 1.0. Note, however, that the
solution coming out as all ones is particular to this example; each system has its own solution,
not always equal to one!
Finding the Inverse using Gauss-Jordan Elimination
The inverse [B] of a matrix [A] is defined such that
[A][B] = [B][A] = [I]
where [I] is the identity matrix. In other words, to find the inverse of a matrix we need to
find the elements of the matrix [B] such that when [A] is multiplied by [B] the result
equals the identity matrix. If we recall what matrix multiplication is, this amounts to
saying that [A] multiplied by the first column of [B] must equal the first
column of [I]; [A] multiplied by the second column of [B] must equal the second
column of [I]; and so on.
In summary, applying Gauss-Jordan elimination to

[ 4   1   1 | 1  0  0 ]
[-1  -5   6 | 0  1  0 ]
[ 2  -4   1 | 0  0  1 ]

we obtain

[ 1  0  0 | 0.1845   -0.04854   0.1068 ]
[ 0  1  0 | 0.1262    0.01942  -0.2427 ]
[ 0  0  1 | 0.1359    0.17476  -0.1845 ]
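A small Python sketch of this procedure (assumed names, not the lecture's code): augment [A] with the identity and run the same Gauss-Jordan sweep over all the extra columns.

def gauss_jordan_inverse(A):
    """Invert A by Gauss-Jordan elimination on the augmented matrix [A | I]."""
    n = len(A)
    a = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for k in range(n):
        pivot = a[k][k]
        for j in range(2 * n):
            a[k][j] /= pivot
        for i in range(n):
            if i != k:
                factor = a[i][k]
                for j in range(2 * n):
                    a[i][j] -= factor * a[k][j]
    return [row[n:] for row in a]   # right half now holds the inverse

# the matrix of Example E3.4.4; entries match the result above (e.g. 19/103 = 0.1845)
for row in gauss_jordan_inverse([[4.0, 1.0, 1.0], [-1.0, -5.0, 6.0], [2.0, -4.0, 1.0]]):
    print(row)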
LU Decomposition
Lower and Upper (LU) triangular decomposition is one of the most widely used
techniques for the solution of linear systems of equations, owing to its generality and
efficiency. This method calls for first decomposing a given matrix into the product of a
lower and an upper triangular matrix such that
[A] = [L][U]
Given that
[A]{X} = {R}
we define an auxiliary unknown vector {Q} by
[U]{X} = {Q}
The additional unknown vector {Q} can be found from
[L]{Q} = {R}
The solution procedure (i.e. the algorithm) can now be summarized as
(i) Determine [L] and [U] for a given system of equations [A]{X} = {R}
(ii) Calculate {Q} from [L]{Q} = {R} by forward substitution
(iii) Calculate {X} from [U]{X} = {Q} by backward substitution
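In practice a library routine is often used for steps (i)-(iii); the sketch below leans on SciPy's lu_factor/lu_solve (an assumption of this note, not part of the lecture):

import numpy as np
from scipy.linalg import lu_factor, lu_solve

# the system of Example E3.4.5 below
A = np.array([[2.0, -5.0, 1.0], [-1.0, 3.0, -1.0], [3.0, -4.0, 2.0]])
R = np.array([12.0, -8.0, 16.0])
lu, piv = lu_factor(A)        # step (i): [A] = [L][U] (with row pivoting)
X = lu_solve((lu, piv), R)    # steps (ii)+(iii): forward then backward substitution
print(X)                      # expect [2., -1., 3.]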
Example E3.4.5
5
2

[A ] =  1

 3
3
4
1

1 ;

2 
12 
 
{R } =   8 
 16 
 
T o find the so lutio n w e start w ith
2 q1
-q 1
3q1
+ 0
+ 0
+ q 2 /2 + 0
+ 7q 2 /2 + 4q 3
2

[L ] =  1

 3
0
1/2
7/2
0

0 ;

4 
1

[U ] = 0

 0
5 / 2
1
0
1 / 2

1

1 
[L ]{Q } = {R }; that is
= 12
= -8
= 16
W e co m pute {Q } by fo rw ard substitutio n
q 1 = 12/2 = 6
q 2 = (-8 + q 1 )/(1/2) = -4
q 3 = (16 - 3q 1 - 7q 2 /2)/4 = 3
T
H ence {Q } = {6, -4, 3}
T hen w e so lve [U ]{X } = {Q } by backw ard substitutio n
x 1 - (5/2) x 2
0
+ x2
0
+ 0
+ (1/2) x 3
x3
+
x3
x3 = 3
x 2 = -4 + x 3 = -1
H ence the final so lutio n is
= 6
= -4
= 3
x 1 = 6 + (5/2) x 2 -(1/2) x 3 = 2
T
{X } = { 2, -1, 3 }
Crout Decomposition
We illustrate Crout decomposition in detail for a general 3x3 matrix. Writing [L][U] = [A]:

[ l11   0    0  ] [ 1  u12  u13 ]   [ a11  a12  a13 ]
[ l21  l22   0  ] [ 0   1   u23 ] = [ a21  a22  a23 ]
[ l31  l32  l33 ] [ 0   0    1  ]   [ a31  a32  a33 ]

Multiplying [L] by [U] and equating the corresponding elements of the product with those of the matrix
[A] we get the following equations:

l11 = a11 ;  l21 = a21 ;  l31 = a31
in index notation: li1 = ai1, i = 1, 2, ..., n   (j = 1, first column)
(The first column of [L] is equal to the first column of [A].)

equation                  solution
l11 u12 = a12      ====>  u12 = a12/l11
l11 u13 = a13      ====>  u13 = a13/l11
in index notation: u1j = a1j/l11, j = 2, 3, ..., n   (i = 1, first row)

l21 u12 + l22 = a22   ====>  l22 = a22 - l21 u12
l31 u12 + l32 = a32   ====>  l32 = a32 - l31 u12
in index notation: li2 = ai2 - li1 u12 ;  i = 2, 3, ..., n   (j = 2, second column)
Pseudo Code for LU-Decomposition
cc--- Simple coding
c--- First column of [L] and first row of [U]
do for i = 1 to n
    li1 = ai1
end do
do for j = 2 to n
    u1j = a1j/l11
end do
c--- Remaining columns of [L] and rows of [U]
do for j = 2 to n-1
    ljj = ajj - sum_{k=1}^{j-1} ljk ukj
    do for i = j+1 to n
        lij = aij - sum_{k=1}^{j-1} lik ukj
        uji = ( aji - sum_{k=1}^{j-1} ljk uki ) / ljj
    end do
end do
lnn = ann - sum_{k=1}^{n-1} lnk ukn
c--- Forward substitution
q1 = r1/l11
do for i = 2 to n
    qi = ( ri - sum_{j=1}^{i-1} lij qj ) / lii
end do
c--- Back substitution
xn = qn
do for i = n-1 to 1
    xi = qi - sum_{j=i+1}^{n} uij xj
end do
Cholesky Decomposition for Symmetric Matrices
For symmetric matrices the LU-decomposition can be further simplified to take advantage
of the symmetry property
[A] = [A]T ; or aij = aji    (3.4.9)
Then it follows that
[A] = [L][U] = [A]T = ([L][U])T = [U]T [L]T    (3.4.10)
That is
[L] = [U]T ; [U] = [L]T, or in index notation uij = lji    (3.4.11)
It should be noted that the diagonal elements of [U] are no longer, necessarily, equal to
one. Comment: this algorithm should be programmed using complex arithmetic, because the
diagonal elements are obtained from square roots, which can produce complex numbers when
[A] is not positive definite.
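A minimal Python sketch of the Cholesky factorization [A] = [L][L]T (illustrative, not from the notes); real arithmetic suffices here because the example matrix is positive definite:

import math

def cholesky(A):
    """Cholesky decomposition of a symmetric positive definite A: A = L L^T."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # complex if A is not positive definite
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L

# a small symmetric positive definite example: expect L = [[2,0,0],[1,2,0],[1,1,2]]
for row in cholesky([[4.0, 2.0, 2.0], [2.0, 5.0, 3.0], [2.0, 3.0, 6.0]]):
    print(row)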
Tridiagonal Matrix Algorithm (TDMA),
also known as Thomas Algorithm

[A] = [ b1  c1   0   0 ]
      [ a2  b2  c2   0 ]
      [  0  a3  b3  c3 ]
      [  0   0  a4  b4 ]

If we apply LU-decomposition to this matrix, taking into account its special structure, and let

[L] = [ f1   0   0   0 ]        [U] = [ 1  g1   0   0 ]
      [ e2  f2   0   0 ]              [ 0   1  g2   0 ]
      [  0  e3  f3   0 ] ;            [ 0   0   1  g3 ]
      [  0   0  e4  f4 ]              [ 0   0   0   1 ]

then by multiplying [L] and [U] and equating the corresponding elements of the product to those
of the original matrix [A] it can be shown that
f1 = b1
g1 = c1/f1
e2 = a2
and for k = 2, 3, ..., n-1
ek = ak
fk = bk - ek gk-1
gk = ck/fk
further
en = an
fn = bn - en gn-1
TDMA Algorithm
c----- Decomposition
c1 = c1/b1
do for k = 2 to n-1
    bk = bk - ak ck-1
    ck = ck/bk
end do
bn = bn - an cn-1
c----- (note that rk, k = 1, 2, 3, ..., n is the right hand side of the equations)
c----- Forward substitution
r1 = r1/b1
do for k = 2 to n
    rk = (rk - ak rk-1)/bk
end do
c----- Back substitution
xn = rn
do for k = n-1 to 1
    xk = rk - ck xk+1
end do
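The same algorithm as a runnable Python sketch (function name assumed), operating on the three diagonals a (sub), b (main), c (super) and the right-hand side r:

def thomas(a, b, c, r):
    """Thomas algorithm (TDMA) for a tridiagonal system; a[0] and c[n-1] are unused.
    The inputs are copied so the caller's data is not overwritten."""
    n = len(b)
    b, c, r = b[:], c[:], r[:]
    # decomposition and forward substitution combined
    c[0] /= b[0]
    r[0] /= b[0]
    for k in range(1, n):
        m = b[k] - a[k] * c[k - 1]
        if k < n - 1:
            c[k] /= m
        r[k] = (r[k] - a[k] * r[k - 1]) / m
    # back substitution
    x = [0.0] * n
    x[n - 1] = r[n - 1]
    for k in range(n - 2, -1, -1):
        x[k] = r[k] - c[k] * x[k + 1]
    return x

# the matrix of Example E3.4.6 below, with a right-hand side chosen so x = (1, 1, 1)
print(thomas([0.0, 1.0, 1.0], [2.0, 4.0, 2.0], [2.0, 4.0, 0.0], [4.0, 9.0, 3.0]))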
Example E3.4.6
Problem: Determine the LU-decomposition for the following matrix using TDMA

[A] = [ 2  2  0 ]
      [ 1  4  4 ]
      [ 0  1  2 ]

Solution: n = 3
c1 = c1/b1 = 2/2 = 1
k = 2:
b2 = b2 - a2 c1 = 4 - (1)(1) = 3
c2 = c2/b2 = 4/3
k = n = 3:
b3 = b3 - a3 c2 = 2 - (1)(4/3) = 2/3
The other elements remain the same; hence

[L] = [ 2  0   0  ]        [U] = [ 1  1   0  ]
      [ 1  3   0  ] ;            [ 0  1  4/3 ]
      [ 0  1  2/3 ]              [ 0  0   1  ]

It can be verified that the product [L][U] is indeed equal to [A].
Iterative Methods for Solving Linear Systems
Iterative methods are those where an initial guess is made for the
solution vector and then corrected sequentially until a certain
error tolerance is reached. The concept of iterative
computation was introduced in Chapter 2 for nonlinear equations.
The idea is the same here, except that we deal with linear systems of
equations. In this regard, the fixed point iteration method presented
in Chapter 2 for two (nonlinear) equations and two unknowns will
be repeated here, because that is precisely what Jacobi iteration is
about.
Jacobi Iteration
Two linear equations can, in general, be written as
a11 x1 + a12 x2 = c1
a21 x1 + a22 x2 = c2
We now rearrange these equations such that each unknown is written as a function of the others,
i.e.
x1 = (c1 - a12 x2)/a11 = f(x1, x2)
x2 = (c2 - a21 x1)/a22 = g(x1, x2)
which are in the same form as those used for fixed point iteration (Chapter 2). The functions
f(x1, x2) and g(x1, x2) are introduced for generality. It is implicitly assumed that the diagonal
elements of the coefficient matrix are not zero. A zero diagonal element must
be avoided by changing the order of the equations, i.e. by pivoting. Jacobi iteration calls for
starting with a guess for the unknowns, x1 and x2, and then finding new values by applying these
equations iteratively. If certain conditions are satisfied, this iteration procedure converges to the
right answer, as it did in the case of the fixed point iteration method.
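A compact Python sketch of Jacobi iteration for a general n x n system (names, tolerance, and iteration cap are assumptions of this note):

def jacobi(A, c, tol=0.5e-5, max_iter=100):
    """Jacobi iteration: every new value is computed from the previous sweep."""
    n = len(c)
    x = [0.0] * n                       # initial guess
    for _ in range(max_iter):
        x_new = [(c[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        # error norm E = sum of |xi_new - xi_old|
        if sum(abs(xn - xo) for xn, xo in zip(x_new, x)) < tol:
            return x_new
        x = x_new
    return x

# the system of Example E3.5.1 below: expect approximately [1.0, 2.0, 3.0]
print(jacobi([[10.0, 2.0, 3.0], [2.0, -10.0, 3.0], [-1.0, -1.0, 5.0]],
             [23.0, -9.0, 12.0]))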
Example E3.5.1
Problem: Solve the following system of equations using Jacobi iteration
10 x1 + 2 x2 + 3 x3 = 23
2 x1 - 10 x2 + 3 x3 = -9
- x1 - x2 + 5 x3 = 12
Solution: Rearranging,
x1 = (23 - 2 x2 - 3 x3)/10
x2 = (-9 - 2 x1 - 3 x3)/(-10)
x3 = (12 + x1 + x2)/5
Starting with a somewhat arbitrary guess, x1 = 0, x2 = 0, and x3 = 0, and iterating yields the
values shown in the table below, where the error norm is E = sum_{i=1}^{n} |xi_new - xi_old|.

ITER    X1          X2          X3          Error Norm, E
0       0           0           0           ---
1       2.300000    0.900000    2.400000    5.600000
2       1.400000    2.080000    3.040000    2.720000
3       0.972000    2.092000    3.096000    4.960001E-01
4       0.952800    2.023200    3.012800    1.712000E-01
5       0.991520    1.994400    2.995200    8.512014E-02
6       1.002560    1.996864    2.997184    1.548803E-02
7       1.001472    1.999667    2.999885    6.592035E-03
8       1.000101    2.000260    3.000228    2.306700E-03
9       0.9998797   2.000089    3.000072    5.483031E-04
10      0.9999606   1.999998    2.999994    2.506971E-04
Convergence Criteria for Jacobi Iteration
For the two-equation system above, the fixed point convergence conditions of Chapter 2 become

|∂f/∂x1| + |∂f/∂x2| = |a12/a11| < 1
|∂g/∂x1| + |∂g/∂x2| = |a21/a22| < 1

In general, a sufficient (but not necessary) condition for convergence of Jacobi iteration is

( sum_{j=1, j≠i}^{n} |aij| ) / |aii| < 1

In other words, the sum of the absolute values of the off-diagonal elements in each row
must be less than the absolute value of the diagonal element of the coefficient matrix.
Such matrices are called strictly diagonally dominant matrices. The reader can show
easily that the matrix of Example E3.5.1 satisfies this criterion; a small helper like the
one below makes the check mechanical.
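A tiny Python helper (assumed name) that tests strict diagonal dominance row by row:

def is_strictly_diagonally_dominant(A):
    """True if |aii| exceeds the sum of |aij|, j != i, in every row."""
    return all(abs(row[i]) > sum(abs(v) for j, v in enumerate(row) if j != i)
               for i, row in enumerate(A))

# the coefficient matrix of Example E3.5.1 passes the test
print(is_strictly_diagonally_dominant([[10, 2, 3], [2, -10, 3], [-1, -1, 5]]))  # True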
Gauss - Seidel Iteration
This method is essentially the same as the Jacobi iteration method, except that the new
values of the variables (i.e. the most recently updated values) are used in subsequent
calculations without waiting for the completion of one full iteration over all variables.
When the convergence criteria are satisfied, Gauss-Seidel iteration takes fewer iterations
to achieve the same error bound compared to Jacobi iteration. To illustrate how this
procedure works, we present the first two iterations of the Gauss-Seidel method applied
to the example problem:
x1 = (23 - 2 x2 - 3 x3)/10
x2 = (-9 - 2 x1 - 3 x3)/(-10)
x3 = (12 + x1 + x2)/5
ITER = 0: x1 = 0, x2 = 0, x3 = 0 (initial guess)
ITER = 1:
x1 = (23 - 0 - 0)/10 = 2.3
Gauss - Seidel Iteration (II)
Now this new value of x1 is used in the second equation, as opposed to using x1 = 0 (the
previous value of x1) as in Jacobi iteration; that is,
x2 = [-9 - (2)(2.3) - 0]/(-10) = 1.36
Similarly, in the third equation the most recent values of x1 and x2 are used. Hence,
x3 = (12 + 2.3 + 1.36)/5 = 3.132
And so on for the rest of the iterations:
ITER = 2:
x1 = [23 - (2)(1.36) - (3)(3.132)]/10 = 1.0884
x2 = [-9 - (2)(1.0884) - (3)(3.132)]/(-10) = 2.05728
x3 = (12 + 1.0884 + 2.05728)/5 = 3.029136
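The corresponding Python sketch (assumed names); the only change from the Jacobi version is that updates are applied in place:

def gauss_seidel(A, c, tol=0.5e-5, max_iter=100):
    """Gauss-Seidel iteration: each new value is used immediately."""
    n = len(c)
    x = [0.0] * n
    for _ in range(max_iter):
        err = 0.0
        for i in range(n):
            new = (c[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            err += abs(new - x[i])
            x[i] = new                  # in-place update: seen by the following rows
        if err < tol:
            break
    return x

# Example E3.5.2 below: expect approximately [1.0, 2.0, 3.0]
print(gauss_seidel([[10.0, 2.0, 3.0], [2.0, -10.0, 3.0], [-1.0, -1.0, 5.0]],
                   [23.0, -9.0, 12.0]))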
Example E3.5.2
Problem: Solve the following system of equations (from Example E3.5.1) using Gauss-Seidel iteration.
10 x1 + 2 x2 + 3 x3 = 23
2 x1 - 10 x2 + 3 x3 = -9
- x1 - x2 + 5 x3 = 12

Table E3.5.1 Convergence trend of Gauss-Seidel method for Example E3.5.1

ITER    X1              X2              X3              Error Norm, E
0       0               0               0               ---
1       2.300000E+00    1.360000E+00    3.132000E+00    6.792000E+00
2       1.088400E+00    2.057280E+00    3.029136E+00    2.011744E+00
3       9.798032E-01    2.004701E+00    2.996901E+00    1.934104E-01
4       9.999894E-01    1.999068E+00    2.999811E+00    2.872980E-02
5       1.000243E+00    1.999992E+00    3.000047E+00    1.412988E-03
6       9.999875E-01    2.000012E+00    3.000000E+00    3.223419E-04
7       9.999977E-01    1.999999E+00    3.000000E+00    2.276897E-05
8       1.000000E+00    2.000000E+00    3.000000E+00    3.457069E-06
9       1.000000E+00    2.000000E+00    3.000000E+00    3.576279E-07
10      1.000000E+00    2.000000E+00    3.000000E+00    0.000000E+00
Convergence Criteria for Gauss - Seidel Iteration
Convergence criteria for the Gauss-Seidel method are more relaxed than the condition for
Jacobi iteration convergence. Here a sufficient condition is

( sum_{j=1, j≠i}^{n} |aij| ) / |aii|   ≤ 1 for all equations, and
                                       < 1 for at least one equation.

This condition is also known as the Scarborough criterion. We also introduce the
following theorem concerning the convergence of the Gauss-Seidel iteration method.
Theorem:
Let [A] be a real symmetric matrix with positive diagonal elements. Then the Gauss-Seidel
method for solving [A]{x} = {c} will converge for any choice of initial guess if and only if [A]
is a positive definite matrix.
Definition:
If {x}T [A]{x} > 0 for all nonzero real vectors {x}, then the real symmetric matrix [A]
is said to be positive definite. (Note also that [A] is positive definite if and
only if all of its eigenvalues are real and positive.)
Relaxation Concept
The iterative methods such as Jacobi and Gauss-Seidel iteration are
successive prediction-correction methods, in that an initial guess is
corrected to compute a new approximate solution, then the new solution is
corrected, and so on. The Gauss-Seidel method, for example, can be
formulated as

xi(k+1) = xi(k) + ω Δxi(k)

where k denotes the number of iterations, ω is the relaxation factor, and Δxi(k) is the
correction to the k-th solution, which is obtained from

Δxi(k) = x̃i(k) - xi(k)

with x̃i(k) denoting the newly computed (uncorrected) value. To interpret and understand
this equation better we let x̃i(k) = x̃i_new and xi(k) = xi_old and write it as

xi_new = ω x̃i_new + (1 - ω) xi_old
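As a sketch, relaxation drops into the Gauss-Seidel loop as a single extra line (ω, the tolerance, and the names are assumptions of this note):

def gauss_seidel_relaxed(A, c, omega=1.0, tol=0.5e-5, max_iter=1000):
    """Gauss-Seidel with relaxation: x_new = omega*x_tilde + (1 - omega)*x_old."""
    n = len(c)
    x = [0.0] * n
    for it in range(1, max_iter + 1):
        err = 0.0
        for i in range(n):
            x_tilde = (c[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            new = omega * x_tilde + (1.0 - omega) * x[i]   # the relaxation step
            err += abs(new - x[i])
            x[i] = new
        if err < tol:
            return x, it            # solution and iterations used
    return x, max_iter

# the system of Example E3.5.4 below: compare iteration counts for omega = 1.0 and 0.8
for w in (1.0, 0.8):
    print(w, gauss_seidel_relaxed([[1.0, 2.0], [1.0, -3.0]], [-5.0, 20.0], omega=w))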
Example E3.5.4
Problem: Show that the Gauss-Seidel iteration method converges in 50 iterations when
applied to the following system without relaxation (i.e. ω = 1):
x1 + 2 x2 = -5
x1 - 3 x2 = 20
Find an optimum relaxation factor that will minimize the number of iterations needed to
achieve a solution within the error bound
||Ei|| < 0.5 x 10^-5, where ||Ei|| = sum_{i=1}^{2} |xi_new - xi_old|
Solution: A few iterations are shown here with ω = 0.8.
Let x1_old = 0 and x2_old = 0 (initial guess)
x1_new = -5 - 2 x2_old = -5
After a few iterations we reach (under-relaxing)
x2_new = (0.8)(-4.853) + (0.2)(-6.4) = -5.162
x2_old = -5.162 (switch old and new)
Example E3.5.4 (continued)
Table E3.5.4b Number of Iterations versus Relaxation Factor for Example E3.5.4

ω       Number of iterations
0.20    62
0.40    30
0.60    17
0.70    14
0.80    11
0.90    15
1.00    50
1.05    84
1.10    > 1000

It is seen that the minimum number of iterations is obtained with a relaxation factor of
ω = 0.80.
Case Study - The Problem of Falling Objects
[Figure: three masses m1, m2, m3 falling under gravity g, connected in series by cables
carrying tensions T1 (between m1 and m2) and T2 (between m2 and m3).]

Newton's second law for each mass gives

-m1 a = c1 v^2 - m1 g - T1
-m2 a = c2 v^2 - m2 g + T1 - T2
-m3 a = c3 v^2 - m3 g + T2
Case Study - The Problem of Falling Objects
These three equations can be used to solve for any three of the unknown parameters: m1,
m2, m3, c1, c2, c3, v, a, T1, and T2. As an example, here we set up a problem by
specifying the masses, the drag coefficients, and the acceleration as
m1 = 10 kg, m2 = 6 kg, m3 = 14 kg, c1 = 1.0 kg/m, c2 = 1.5 kg/m, c3 = 2.0 kg/m, and a = 0.5 g
and find the remaining three unknowns: v, T1, and T2. The acceleration of gravity is
g = 9.81 m/sec^2.
Substituting these values into the given set of equations and rearranging yields

v^2 - T1 = 49.05
1.5 v^2 + T1 - T2 = 29.43
2.0 v^2 + T2 = 68.67

The solution is v^2 = 32.7 (v = 5.72 m/s), T1 = -16.35 N, T2 = 3.27 N (N = newton = kg.m/sec^2).
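Since the system is linear in the unknowns (v^2, T1, T2), it can be solved directly; a short NumPy sketch (names assumed):

import numpy as np

# unknowns ordered as (v^2, T1, T2)
A = np.array([[1.0, -1.0,  0.0],
              [1.5,  1.0, -1.0],
              [2.0,  0.0,  1.0]])
r = np.array([49.05, 29.43, 68.67])

v2, T1, T2 = np.linalg.solve(A, r)
print(v2, v2 ** 0.5, T1, T2)   # 32.7, 5.72 m/s, -16.35 N, 3.27 N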