Transcript Chapter 3

Chapter 12
Iterative Methods for Systems of Equations

Iterative Methods for Solving Matrix Equations
Ax = b   →   x = Cx + d ,  with Cii = 0

Jacobi method
Gauss-Seidel method
Successive Over-Relaxation (SOR)
MATLAB's methods
Iterative Methods
a11 x1 + a12 x2 + a13 x3 + a14 x4 = b1
a21 x1 + a22 x2 + a23 x3 + a24 x4 = b2
a31 x1 + a32 x2 + a33 x3 + a34 x4 = b3
a41 x1 + a42 x2 + a43 x3 + a44 x4 = b4

Can be converted to

x1 = (b1 - a12 x2 - a13 x3 - a14 x4) / a11
x2 = (b2 - a21 x1 - a23 x3 - a24 x4) / a22
x3 = (b3 - a31 x1 - a32 x2 - a34 x4) / a33
x4 = (b4 - a41 x1 - a42 x2 - a43 x3) / a44
Iterative Methods
Idea behind iterative methods:
Convert Ax = b into x = Cx + d (an equivalent system).
Generate a sequence of approximations (iterations) x^1, x^2, ..., starting from an initial guess x^0:

x^(j) = C x^(j-1) + d

Similar to the fixed-point iteration method.
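In MATLAB the idea reduces to repeatedly evaluating the equivalent system. A minimal sketch, assuming C and d have already been formed and using illustrative names x0, tol and maxit for the initial guess and stopping parameters:

% Minimal sketch of the iteration x = C*x + d (illustrative only)
x = x0;                          % initial guess, column vector
for j = 1:maxit
    xnew = C*x + d;              % one iteration of the equivalent system
    if norm(xnew - x) < tol      % stop when successive iterates agree
        x = xnew;
        break
    end
    x = xnew;
end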
Rearrange Matrix Equations
a11 x1 + a12 x2 + a13 x3 + a14 x4 = b1
a21 x1 + a22 x2 + a23 x3 + a24 x4 = b2
a31 x1 + a32 x2 + a33 x3 + a34 x4 = b3
a41 x1 + a42 x2 + a43 x3 + a44 x4 = b4
Rewrite the matrix equation in the same way:

x1 = -(a12/a11) x2 - (a13/a11) x3 - (a14/a11) x4 + b1/a11
x2 = -(a21/a22) x1 - (a23/a22) x3 - (a24/a22) x4 + b2/a22
x3 = -(a31/a33) x1 - (a32/a33) x2 - (a34/a33) x4 + b3/a33
x4 = -(a41/a44) x1 - (a42/a44) x2 - (a43/a44) x3 + b4/a44

n equations, n variables
Iterative Methods
Ax = b   →   x^(j) = C x^(j-1) + d ,   Cii = 0

x and d are column vectors, and C is a square matrix:

C = [    0         -a12/a11    -a13/a11    -a14/a11
      -a21/a22        0        -a23/a22    -a24/a22
      -a31/a33    -a32/a33        0        -a34/a33
      -a41/a44    -a42/a44    -a43/a44        0     ]

d = [ b1/a11
      b2/a22
      b3/a33
      b4/a44 ]
Convergence Criterion

ε_a,i = | (x_i^(j) - x_i^(j-1)) / x_i^(j) | × 100%  <  ε_s    for all x_i

2-norm of the residual vector:   || Ax - b ||_2  <  ε_s

For systems of equations:   |λ_i| < 1  for all  λ = eig(C)
Jacobi Method

x1_new = (b1 - a12 x2_old - a13 x3_old - a14 x4_old) / a11
x2_new = (b2 - a21 x1_old - a23 x3_old - a24 x4_old) / a22
x3_new = (b3 - a31 x1_old - a32 x2_old - a34 x4_old) / a33
x4_new = (b4 - a41 x1_old - a42 x2_old - a43 x3_old) / a44
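A minimal MATLAB sketch of this update (an illustration only; the course's Jacobi.m listing is not reproduced in this transcript, and x0, tol and maxit are assumed stopping parameters):

% Sketch of Jacobi iteration for A*x = b (illustrative, not the course's Jacobi.m)
x = x0;                                     % initial guess, column vector
for iter = 1:maxit
    xold = x;
    for i = 1:length(b)
        s = A(i,:)*xold - A(i,i)*xold(i);   % off-diagonal terms, all from the old iterate
        x(i) = (b(i) - s) / A(i,i);
    end
    if norm(x - xold) < tol
        break
    end
end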
Gauss-Seidel Method
Differs from the Jacobi method by sequential updating:
the new values x_i are used immediately as they become available.

x1_new = (b1 - a12 x2_old - a13 x3_old - a14 x4_old) / a11
x2_new = (b2 - a21 x1_new - a23 x3_old - a24 x4_old) / a22
x3_new = (b3 - a31 x1_new - a32 x2_new - a34 x4_old) / a33
x4_new = (b4 - a41 x1_new - a42 x2_new - a43 x3_new) / a44
Gauss-Seidel Method

Use the new x_i at the j-th iteration as soon as they become available:

x1^(j) = (b1 - a12 x2^(j-1) - a13 x3^(j-1) - a14 x4^(j-1)) / a11
x2^(j) = (b2 - a21 x1^(j)   - a23 x3^(j-1) - a24 x4^(j-1)) / a22
x3^(j) = (b3 - a31 x1^(j)   - a32 x2^(j)   - a34 x4^(j-1)) / a33
x4^(j) = (b4 - a41 x1^(j)   - a42 x2^(j)   - a43 x3^(j))   / a44

ε_a,i = | (x_i^(j) - x_i^(j-1)) / x_i^(j) | × 100%  <  ε_s    for all x_i
[Figure: (a) Gauss-Seidel method; (b) Jacobi method]
Convergence and Diagonal Dominance

Sufficient condition: A is diagonally dominant.
Diagonally dominant: the magnitude of the diagonal element is larger than the sum of the absolute values of the other elements in the row:

| a_ii |  >  Σ | a_ij |    (sum over j = 1, ..., n with j ≠ i)

Necessary and sufficient condition: the magnitude of the largest eigenvalue of C (not A) is less than 1.
Fortunately, many engineering problems of practical importance satisfy this requirement.
Use partial pivoting to rearrange equations!
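Both conditions can be checked numerically before iterating. A small sketch for a given A and b (variable names are illustrative):

% Sketch: test diagonal dominance and the spectral radius of C
n   = size(A,1);
dd  = all( abs(diag(A)) > sum(abs(A),2) - abs(diag(A)) );  % true if strictly diagonally dominant
D   = diag(diag(A));
C   = eye(n) - D \ A;                                      % iteration matrix (zero diagonal)
d   = D \ b;                                               % so that x = C*x + d is equivalent to A*x = b
rho = max(abs(eig(C)));                                    % convergence requires rho < 1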
Convergence or Divergence of Jacobi and Gauss-Seidel Methods
- Same equations, but in different order
- Rearrange the order of the equations to ensure convergence
Diagonally Dominant Matrix
A = [  8    1   -2    3
      -1  -10    2    5
      -1   -6   12   -3
      -3    2    3   -9 ]

is strictly diagonally dominant, because each diagonal element is greater in magnitude than the sum of the absolute values of the other elements in its row.

A = [  8    1   -2    3
      -1    6    2    5
      -1   -6   12   -3
      -3    2    3   -9 ]

is not diagonally dominant.
Jacobi and Gauss-Seidel
Example:
 5x1 - 2x2 + 2x3 = 10
-2x1 + 4x2 +  x3 =  7
-3x1 +  x2 + 6x3 = 12
Jacobi
x1_new = (10 + 2 x2_old - 2 x3_old) / 5
x2_new = ( 7 + 2 x1_old -   x3_old) / 4
x3_new = (12 + 3 x1_old -   x2_old) / 6
Gauss-Seidel
x1_new = (10 + 2 x2_old - 2 x3_old) / 5
x2_new = ( 7 + 2 x1_new -   x3_old) / 4
x3_new = (12 + 3 x1_new -   x2_new) / 6
Example

-5x1        + 12x3 = 80
 4x1 -  x2 -   x3  = -2
 6x1 + 8x2         = 45

A = [ -5   0  12
       4  -1  -1
       6   8   0 ]

Not diagonally dominant!!
The order of the equations can be important.
Rearrange the equations to ensure convergence:
 4x1 -  x2 -   x3  = -2
 6x1 + 8x2         = 45
-5x1        + 12x3 = 80

A = [  4  -1  -1
       6   8   0
      -5   0  12 ]
Gauss-Seidel Iteration

Rearrange:
x1 = (x2 + x3 - 2) / 4
x2 = (45 - 6 x1) / 8
x3 = (80 + 5 x1) / 12
Assume x1 = x2 = x3 = 0

First iteration:
x1 = (0 + 0 - 2) / 4 = -0.5
x2 = (45 - 6(-0.5)) / 8 = 6.0
x3 = (80 + 5(-0.5)) / 12 = 6.4583
Gauss-Seidel Method

Second iteration:
x1 = (-2 + 6.0 + 6.4583) / 4 = 2.6146
x2 = (45 - 6(2.6146)) / 8 = 3.6641
x3 = (80 + 5(2.6146)) / 12 = 7.7561

Third iteration:
x1 = (-2 + 3.6641 + 7.7561) / 4 = 2.3550
x2 = (45 - 6(2.3550)) / 8 = 3.8587
x3 = (80 + 5(2.3550)) / 12 = 7.6479

Fourth iteration:
x1 = (-2 + 3.8587 + 7.6479) / 4 = 2.3767
x2 = (45 - 6(2.3767)) / 8 = 3.8425
x3 = (80 + 5(2.3767)) / 12 = 7.6569

5th: x1 = 2.3749 , x2 = 3.8439 , x3 = 7.6562
6th: x1 = 2.3750 , x2 = 3.8437 , x3 = 7.6563
7th: x1 = 2.3750 , x2 = 3.8438 , x3 = 7.6562
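Substituting the converged values back into Ax reproduces b to within the four decimals carried, a quick check of the hand calculation:

% Residual check of the converged Gauss-Seidel solution
A = [4 -1 -1; 6 8 0; -5 0 12];
b = [-2; 45; 80];
x = [2.3750; 3.8438; 7.6562];
r = A*x - b          % approximately [0; 0.0004; -0.0006], i.e. zero to the digits shown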
MATLAB M-File for Gauss-Seidel method
[M-file listing (two slides) not captured in this transcript]
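A minimal sketch consistent with the call x = Seidel(A,b,x0,tol,100) used below (an illustrative reconstruction, not the original listing):

function x = Seidel(A, b, x0, tol, maxit)
% Gauss-Seidel iteration for A*x = b (illustrative sketch)
x = x0(:);  b = b(:);  n = length(b);
for iter = 1:maxit
    xold = x;
    for i = 1:n
        s = A(i,:)*x - A(i,i)*x(i);      % uses the newest available values
        x(i) = (b(i) - s) / A(i,i);
    end
    disp([iter x']);                     % display the current iterate
    if norm(x - xold) < tol
        disp('Gauss-Seidel method converged')
        return
    end
end
disp('Gauss-Seidel method did not converge')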
Gauss-Seidel Iteration
» A = [4 -1 -1; 6 8 0; -5 0 12];
» b = [-2 45 80];
» x=Seidel(A,b,x0,tol,100);

      i        x1        x2        x3
 1.0000   -0.5000    6.0000    6.4583
 2.0000    2.6146    3.6641    7.7561
 3.0000    2.3550    3.8587    7.6479
 4.0000    2.3767    3.8425    7.6569
 5.0000    2.3749    3.8439    7.6562
 6.0000    2.3750    3.8437    7.6563
 7.0000    2.3750    3.8438    7.6562
 8.0000    2.3750    3.8437    7.6563
Gauss-Seidel method converged

Converges faster than the Jacobi method shown on the next page.
Jacobi Iteration
» A = [4 -1 -1; 6 8 0; -5 0 12];
» b = [-2 45 80];
» x=Jacobi(A,b,0.0001,100);

      i        x1        x2        x3
 1.0000   -0.5000    5.6250    6.6667
 2.0000    2.5729    6.0000    6.4583
 3.0000    2.6146    3.6953    7.7387
 4.0000    2.3585    3.6641    7.7561
 5.0000    2.3550    3.8561    7.6494
 6.0000    2.3764    3.8587    7.6479
 7.0000    2.3767    3.8427    7.6568
 8.0000    2.3749    3.8425    7.6569
 9.0000    2.3749    3.8438    7.6562
10.0000    2.3750    3.8439    7.6562
11.0000    2.3750    3.8437    7.6563
12.0000    2.3750    3.8437    7.6563
13.0000    2.3750    3.8438    7.6562
14.0000    2.3750    3.8438    7.6562
Jacobi method converged
Relaxation Method
Relaxation (weighting) factor λ:
Gauss-Seidel method:   λ = 1
Overrelaxation:        1 < λ < 2
Underrelaxation:       0 < λ < 1

x_i_new = λ x_i_new + (1 - λ) x_i_old

(the first x_i_new on the right-hand side is the Gauss-Seidel update)
Successive Over-Relaxation (SOR)

Relaxation method:
G-S method:  x2_new = (b2 - a21 x1_new - a23 x3_old - a24 x4_old) / a22
SOR method:  x2_new = (1 - λ) x2_old + λ x2_new(G-S)
                    = (1 - λ) x2_old + λ (b2 - a21 x1_new - a23 x3_old - a24 x4_old) / a22

x1_new = (1 - λ) x1_old + λ (b1 - a12 x2_old - a13 x3_old - a14 x4_old) / a11
x2_new = (1 - λ) x2_old + λ (b2 - a21 x1_new - a23 x3_old - a24 x4_old) / a22
x3_new = (1 - λ) x3_old + λ (b3 - a31 x1_new - a32 x2_new - a34 x4_old) / a33
x4_new = (1 - λ) x4_old + λ (b4 - a41 x1_new - a42 x2_new - a43 x3_new) / a44
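A minimal sketch of an SOR routine consistent with the call x = SOR(A, b, x0, w, tol, 100) used in the runs that follow (an illustrative reconstruction, not the original M-file):

function x = SOR(A, b, x0, w, tol, maxit)
% Successive over-relaxation for A*x = b (illustrative sketch)
% w is the relaxation factor lambda; w = 1 reduces to Gauss-Seidel
x = x0(:);  b = b(:);  n = length(b);
for iter = 1:maxit
    xold = x;
    for i = 1:n
        s   = A(i,:)*x - A(i,i)*x(i);          % newest available values
        xgs = (b(i) - s) / A(i,i);             % Gauss-Seidel value
        x(i) = (1 - w)*xold(i) + w*xgs;        % relaxed update
    end
    disp([iter x']);
    if norm(x - xold) < tol
        disp('SOR method converged')
        return
    end
end
disp('SOR method did not converge')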
SOR Iterations

Rearrange:
x1 = (x2 + x3 - 2) / 4
x2 = (45 - 6 x1) / 8
x3 = (80 + 5 x1) / 12

Assume x1 = x2 = x3 = 0, and λ = 1.2

x_i = λ x_i^(G-S) + (1 - λ) x_i_old ;   G-S: Gauss-Seidel

First iteration:
x1 = (-0.2)(0) + 1.2 (0 + 0 - 2)/4 = -0.6
x2 = (-0.2)(0) + 1.2 (45 - 6(-0.6))/8 = 7.29
x3 = (-0.2)(0) + 1.2 (80 + 5(-0.6))/12 = 7.7
SOR Iterations
Second iteration:
x1 = (-0.2)(-0.6) + 1.2 (7.29 + 7.7 - 2)/4 = 4.017
x2 = (-0.2)(7.29) + 1.2 (45 - 6(4.017))/8 = 1.6767
x3 = (-0.2)(7.7) + 1.2 (80 + 5(4.017))/12 = 8.4685

Third iteration:
x1 = (-0.2)(4.017) + 1.2 (1.6767 + 8.4685 - 2)/4 = 1.6402
x2 = (-0.2)(1.6767) + 1.2 (45 - 6(1.6402))/8 = 4.9385
x3 = (-0.2)(8.4685) + 1.2 (80 + 5(1.6402))/12 = 7.1264
Converges more slowly!! (see the MATLAB solutions)
There is an optimal relaxation parameter.
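The optimum can be located numerically by counting iterations over a range of relaxation factors. The sketch below repeats the relaxed sweep inline so it can report an iteration count (illustrative only, for the example system used above):

% Sketch: iterations needed by SOR for several relaxation factors
A = [4 -1 -1; 6 8 0; -5 0 12];   b = [-2; 45; 80];
for w = 0.8:0.1:1.5
    x = zeros(3,1);  iter = 0;  dif = inf;
    while dif > 1.e-6 && iter < 200
        xold = x;
        for i = 1:3
            s = A(i,:)*x - A(i,i)*x(i);
            x(i) = (1 - w)*xold(i) + w*(b(i) - s)/A(i,i);
        end
        dif = norm(x - xold);   iter = iter + 1;
    end
    fprintf('w = %.1f   iterations = %d\n', w, iter);
end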
Successive Over-Relaxation (SOR)
[Figure: relaxation factor w (= λ)]
SOR with λ = 1.2

» [A,b]=Example
A =
     4    -1    -1
     6     8     0
    -5     0    12
b =
    -2
    45
    80
» x0=[0 0 0]'
x0 =
     0
     0
     0
» tol=1.e-6
tol =
   1.0000e-006
» w=1.2; x = SOR(A, b, x0, w, tol, 100);

      i        x1        x2        x3
 1.0000   -0.6000    7.2900    7.7000
 2.0000    4.0170    1.6767    8.4685
 3.0000    1.6402    4.9385    7.1264
 4.0000    2.6914    3.3400    7.9204
 5.0000    2.2398    4.0661    7.5358
 6.0000    2.4326    3.7474    7.7091
 7.0000    2.3504    3.8851    7.6334
 8.0000    2.3855    3.8261    7.6661
 9.0000    2.3705    3.8513    7.6521
10.0000    2.3769    3.8405    7.6580
11.0000    2.3742    3.8451    7.6555
12.0000    2.3753    3.8432    7.6566
13.0000    2.3749    3.8440    7.6561
14.0000    2.3751    3.8436    7.6563
15.0000    2.3750    3.8438    7.6562
16.0000    2.3750    3.8437    7.6563
17.0000    2.3750    3.8438    7.6562
18.0000    2.3750    3.8437    7.6563
19.0000    2.3750    3.8438    7.6562
20.0000    2.3750    3.8437    7.6563
21.0000    2.3750    3.8438    7.6562
SOR method converged

Converges more slowly than Gauss-Seidel!!
SOR with λ = 1.5

» [A,b]=Example2
A =
     4    -1    -1
     6     8     0
    -5     0    12
b =
    -2
    45
    80
» x0=[0 0 0]'
x0 =
     0
     0
     0
» tol=1.e-6
tol =
   1.0000e-006
» w = 1.5; x = SOR(A, b, x0, w, tol, 100);

       i        x1        x2        x3
  1.0000   -0.7500    9.2812    9.5312
  2.0000    6.6797   -3.7178    9.4092
  3.0000   -1.9556   12.4964    4.0732
  4.0000    6.4414   -5.0572   11.9893
  5.0000   -1.3712   12.5087    3.1484
  6.0000    5.8070   -4.3497   12.0552
  7.0000   -0.7639   11.4718    3.4949
  8.0000    5.2445   -3.1985   11.5303
  9.0000   -0.2478   10.3155    4.0800
 10.0000    4.7722   -2.0890   10.9426
 11.0000    0.1840    9.2750    4.6437
 12.0000    4.3775   -1.1246   10.4141
 13.0000    0.5448    8.3869    5.1335
 14.0000    4.0477   -0.3097    9.9631
 15.0000    0.8462    7.6404    5.5473
 .........
 20.0000    3.3500    1.4220    9.0016
 .........
 30.0000    2.7716    2.8587    8.2035
 .........
 50.0000    2.4406    3.6808    7.7468
 .........
100.0000    2.3757    3.8419    7.6573
SOR method did not converge

Did not converge within 100 iterations!! (the iterates oscillate and approach the solution only very slowly)
CVEN 302-501
Homework No. 8
Chapter 12
Prob. 12.4 & 12.5 (hand calculation and check the results using the programs)
Do it, but do not hand it in. The solution will be posted on the net.
Nonlinear Systems
Simultaneous nonlinear equations
f1(x1, x2, x3, ..., xn) = 0
f2(x1, x2, x3, ..., xn) = 0
f3(x1, x2, x3, ..., xn) = 0
...
fn(x1, x2, x3, ..., xn) = 0
Example
x1^2 + 2 x2^2 + 4 x3 = 7
2 x1 x2 + 5 x2 + 7 x2 x3^2 = 8
5 x1 + 2 x2 + x3^2 = 4
Two Nonlinear Functions
Circle:   x^2 + y^2 = r^2
Ellipse:  (x/a)^2 + (y/b)^2 = 1

f(x,y) = 0
g(x,y) = 0

Solution depends on initial guesses
function [x1,y1,x2,y2] = curves(r,a,b,theta)
% Circle with radius r
x1 = r*cos(theta);  y1 = r*sin(theta);
% Ellipse with major and minor axes (a,b)
x2 = a*cos(theta);  y2 = b*sin(theta);

» r=2; a=3; b=1; theta=0:0.02*pi:2*pi;
» [x1,y1,x2,y2]=curves(2,3,1,theta);
» H=plot(x1,y1,'r',x2,y2,'b');
» set(H,'LineWidth',3.0); axis equal;
Newton-Raphson Method
One nonlinear equation (Ch.6)
f(x_{i+1}) ≈ f(x_i) + (x_{i+1} - x_i) f'(x_i) = 0

x_{i+1} = x_i - f(x_i) / f'(x_i)
Two nonlinear equations (Taylor-series)
f_{1,i+1} = f_{1,i} + (x_{1,i+1} - x_{1,i}) ∂f_{1,i}/∂x1 + (x_{2,i+1} - x_{2,i}) ∂f_{1,i}/∂x2
f_{2,i+1} = f_{2,i} + (x_{1,i+1} - x_{1,i}) ∂f_{2,i}/∂x1 + (x_{2,i+1} - x_{2,i}) ∂f_{2,i}/∂x2
Intersection of Two Curves
Two roots: f1(x1, x2) = 0 , f2(x1, x2) = 0

Setting f_{1,i+1} = 0 and f_{2,i+1} = 0 and rearranging:

(∂f_{1,i}/∂x1) x_{1,i+1} + (∂f_{1,i}/∂x2) x_{2,i+1} = -f_{1,i} + x_{1,i} ∂f_{1,i}/∂x1 + x_{2,i} ∂f_{1,i}/∂x2
(∂f_{2,i}/∂x1) x_{1,i+1} + (∂f_{2,i}/∂x2) x_{2,i+1} = -f_{2,i} + x_{1,i} ∂f_{2,i}/∂x1 + x_{2,i} ∂f_{2,i}/∂x2
Alternatively, in matrix form:

[ ∂f_{1,i}/∂x1   ∂f_{1,i}/∂x2 ] [ x_{1,i+1} - x_{1,i} ]       [ f_{1,i} ]
[ ∂f_{2,i}/∂x1   ∂f_{2,i}/∂x2 ] [ x_{2,i+1} - x_{2,i} ]  = -  [ f_{2,i} ]

[J]{Δx} = -{f} ,   where [J] is the Jacobian matrix
Intersection of Two Curves
Intersection of a circle and a parabola:

f1(x1, x2) = x1^2 + x2^2 - 1 = 0        or    f(x,y) = x^2 + y^2 - 1 = 0
f2(x1, x2) = x1^2 - x2 = 0              or    g(x,y) = x^2 - y = 0
Intersection of Two Curves

f1(x1, x2) = x1^2 + x2^2 - 1
f2(x1, x2) = x1^2 - x2

[J] = [ ∂f1/∂x1   ∂f1/∂x2 ]  =  [ 2x1    2x2 ]
      [ ∂f2/∂x1   ∂f2/∂x2 ]     [ 2x1    -1  ]

[J]{Δx} = -{f}:

[ 2x_{1,i}    2x_{2,i} ] [ Δx_{1,i} ]       [ f_{1,i} ]
[ 2x_{1,i}      -1     ] [ Δx_{2,i} ]  = -  [ f_{2,i} ]

Solve for x_new:

x_new = x_old + Δx :    [ x_{1,i+1} ]   [ x_{1,i} ]   [ Δx_{1,i} ]
                        [ x_{2,i+1} ] = [ x_{2,i} ] + [ Δx_{2,i} ]
function [x,y,z] = ellipse(a,b,c,imax,jmax)
for i = 1:imax+1
    theta = 2*pi*(i-1)/imax;
    for j = 1:jmax+1
        phi = 2*pi*(j-1)/jmax;
        x(i,j) = a*cos(theta);
        y(i,j) = b*sin(theta)*cos(phi);
        z(i,j) = c*sin(theta)*sin(phi);
    end
end

» a=3; b=2; c=1; imax=50; jmax=50;
» [x,y,z]=ellipse(a,b,c,imax,jmax);
» surf(x,y,z); axis equal;
» print -djpeg100 ellipse.jpg
Parabolic or Hyperbolic Surfaces

function [x,y,z] = parabola(a,b,c,imax,jmax)
% z = a*x^2 + b*y^2 + c
for i = 1:imax+1
    xi = -2 + 4*(i-1)/imax;
    for j = 1:jmax
        eta = -2 + 4*(j-1)/jmax;
        x(i,j) = xi;
        y(i,j) = eta;
        z(i,j) = a*x(i,j)^2 + b*y(i,j)^2 + c;
    end
end

» a=0.5; b=-1; c=1; imax=50; jmax=50;
» [x2,y2,z2]=parabola(a,b,c,imax,jmax);
» surf(x2,y2,z2); axis equal;
» a=3; b=2; c=1; imax=50; jmax=50;
» [x1,y1,z1]=ellipse(a,b,c,imax,jmax);
» surf(x1,y1,z1); hold on; surf(x2,y2,z2); axis equal;
» print -djpeg100 parabola.jpg
Intersection of two nonlinear surfaces
Infinitely many solutions
Intersection of three nonlinear surfaces
» [x1,y1,z1]=ellipse(2,2,2,50,50);
» [x2,y2,z2]=ellipse(3,2,1,50,50);
» [x3,y3,z3]=parabola(-0.5,0.5,-0.5,30,30);
» surf(x1,y1,z1); hold on; surf(x2,y2,z2); surf(x3,y3,z3);
» axis equal; view(-30,60);
» print -djpeg100 surf3.jpg
Newton-Raphson Method
n nonlinear equations in n unknowns:

f1(x1, x2, x3, ..., xn) = 0
f2(x1, x2, x3, ..., xn) = 0
f3(x1, x2, x3, ..., xn) = 0
...
fn(x1, x2, x3, ..., xn) = 0

F(x) = [ f1(x) ]
       [ f2(x) ]
       [ f3(x) ]
       [  ...  ]
       [ fn(x) ]

F(x) = 0 ,    x = [ x1  x2  x3  ...  xn ]
Newton-Raphson Method
Jacobian (matrix of partial derivatives):

J(x) = [ ∂f1/∂x1   ∂f1/∂x2   ∂f1/∂x3   ...   ∂f1/∂xn ]
       [ ∂f2/∂x1   ∂f2/∂x2   ∂f2/∂x3   ...   ∂f2/∂xn ]
       [ ∂f3/∂x1   ∂f3/∂x2   ∂f3/∂x3   ...   ∂f3/∂xn ]
       [    ...       ...       ...    ...      ...  ]
       [ ∂fn/∂x1   ∂fn/∂x2   ∂fn/∂x3   ...   ∂fn/∂xn ]
Newton's iteration (without inversion):

J(x_old) y = -F(x_old) ;    y = Δx = x_new - x_old
Newton-Raphson Method
For a single equation with one variable
J(x) = df/dx = f'(x)

Newton's iteration:

x_new = x_old - [J(x_old)]^(-1) f(x_old) = x_old - f(x_old) / f'(x_old)

x_{i+1} = x_i - f(x_i) / f'(x_i)
function x = Newton_sys(F, JF, x0, tol, maxit)
% Solve the nonlinear system F(x) = 0 using Newton's method
% Vectors x and x0 are row vectors (for display purposes)
% function F returns a column vector, [f1(x), ..fn(x)]'
% stop if norm of change in solution vector is less than tol
% solve JF(x) y = - F(x) using Matlab's "backslash operator":
%   y = -feval(JF, xold) \ feval(F, xold);
% the next approximate solution is xnew = xold + y';
xold = x0;
disp([0 xold]);
iter = 1;
while (iter <= maxit)
    y = -feval(JF, xold) \ feval(F, xold);
    xnew = xold + y';
    dif = norm(xnew - xold);
    disp([iter xnew dif]);
    if dif <= tol
        x = xnew;
        disp('Newton method has converged')
        return;
    else
        xold = xnew;
    end
    iter = iter + 1;
end
disp('Newton method did not converge')
x = xnew;
Intersection of Circle and Parabola
function f = ex11_1(x)
f = [ (x(1)^2 + x(2)^2 - 1)
      (x(1)^2 - x(2)) ];

function df = ex11_1_j(x)
df = [ 2*x(1)   2*x(2)
       2*x(1)   -1     ];

Use x0 = [0.5 0.5] as initial estimates:

» x0=[0.5 0.5]; tol=1.e-6; maxit=100;
» x=Newton_sys('ex11_1','ex11_1_j',x0,tol,maxit);

        0    0.5000    0.5000
   1.0000    0.8750    0.6250    0.3953
   2.0000    0.7907    0.6181    0.0846
   3.0000    0.7862    0.6180    0.0045
   4.0000    0.7862    0.6180    0.0000
   5.0000    0.7862    0.6180    0.0000
Newton method has converged
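For reference, eliminating x1 (since x1^2 = x2) gives x2^2 + x2 - 1 = 0, so x2 = (sqrt(5) - 1)/2 ≈ 0.6180 and x1 = sqrt(x2) ≈ 0.7862, in agreement with the converged values above.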
Intersection of Three Surfaces
Solve the nonlinear system

f1(x1, x2, x3) = x1^2 + x2^2 + x3^2 - 14 = 0        or    f(x,y,z) = x^2 + y^2 + z^2 - 14 = 0
f2(x1, x2, x3) = x1^2 + 2 x2^2 - 9 = 0              or    g(x,y,z) = x^2 + 2 y^2 - 9 = 0
f3(x1, x2, x3) = x1 - 3 x2^2 + x3^2 = 0             or    h(x,y,z) = x - 3 y^2 + z^2 = 0
Jacobian:

[J] = [ ∂f1/∂x1   ∂f1/∂x2   ∂f1/∂x3 ]     [ 2x1    2x2    2x3 ]
      [ ∂f2/∂x1   ∂f2/∂x2   ∂f2/∂x3 ]  =  [ 2x1    4x2     0  ]
      [ ∂f3/∂x1   ∂f3/∂x2   ∂f3/∂x3 ]     [  1    -6x2    2x3 ]
Newton-Raphson Method
Solve the nonlinear system
[ 2x1    2x2    2x3 ] [ Δx1 ]       [ f1 ]
[ 2x1    4x2     0  ] [ Δx2 ]  = -  [ f2 ]
[  1    -6x2    2x3 ] [ Δx3 ]       [ f3 ]

[ x_new ]   [ x_old ]   [ Δx1 ]
[ y_new ] = [ y_old ] + [ Δx2 ]
[ z_new ]   [ z_old ]   [ Δx3 ]

MATLAB statement (y = -J^(-1) F):
y = -feval(JF, xold) \ feval(F, xold)
Intersection of Three Surfaces
function f = ex11_2(x)
% Intersection of three surfaces
f = [ (x(1)^2 + x(2)^2 + x(3)^2 - 14)
      (x(1)^2 + 2*x(2)^2 - 9)
      (x(1) - 3*x(2)^2 + x(3)^2) ];

function df = ex11_2_j(x)
% Jacobian matrix
df = [ 2*x(1)   2*x(2)   2*x(3)
       2*x(1)   4*x(2)   0
       1       -6*x(2)   2*x(3) ];

» x0=[1 1 1]; es=1.e-6; maxit=100;
» x=Newton_sys('ex11_2','ex11_2_j',x0,es,maxit);

        0         1         1         1
   1.0000    1.6667    2.1667    4.6667    3.9051
   2.0000    1.5641    1.8407    3.2207    1.4858
   3.0000    1.5616    1.8115    2.8959    0.3261
   4.0000    1.5616    1.8113    2.8777    0.0182
   5.0000    1.5616    1.8113    2.8776    0.0001
   6.0000    1.5616    1.8113    2.8776    0.0000
Newton method has converged
Fixed-Point Iteration
No need to compute partial derivatives.

f1(x1, x2, x3, ..., xn) = 0        x1 = g1(x1, x2, x3, ..., xn)
f2(x1, x2, x3, ..., xn) = 0        x2 = g2(x1, x2, x3, ..., xn)
f3(x1, x2, x3, ..., xn) = 0   →    x3 = g3(x1, x2, x3, ..., xn)
...                                ...
fn(x1, x2, x3, ..., xn) = 0        xn = gn(x1, x2, x3, ..., xn)

Convergence theorem:

| ∂gi/∂xj | ≤ K/n ;   K < 1
Example1: Fixed-Point
Solve the nonlinear system

f1(x1, x2, x3) = x1^2 + 50 x1 + x2^2 + x3^2 - 200 = 0
f2(x1, x2, x3) = x1^2 + 20 x2 + x3^2 - 50 = 0
f3(x1, x2, x3) = -x1^2 - x2^2 + 40 x3 + 75 = 0
Rearrange (initial guess: x = y = z = 2):

x1 = g1(x1, x2, x3) = (200 - x1^2 - x2^2 - x3^2) / 50
x2 = g2(x1, x2, x3) = (50 - x1^2 - x3^2) / 20
x3 = g3(x1, x2, x3) = (x1^2 + x2^2 - 75) / 40
function x = fixed_pt_sys(G, x0, es, maxit)
% Solve the nonlinear system x = G(x) using fixed-point method
% Vectors x and x0 are row vectors (for display purposes)
% function G returns a column vector, [g1(x), ..gn(x)]'
% stop if norm of change in solution vector is less than es
% y = feval(G,xold); the next approximate solution is xnew = y';
disp([0 x0]);          % display initial estimate
xold = x0;
iter = 1;
while (iter <= maxit)
    y = feval(G, xold);
    xnew = y';
    dif = norm(xnew - xold);
    disp([iter xnew dif]);
    if dif <= es
        x = xnew;
        disp('Fixed-point iteration has converged')
        return;
    else
        xold = xnew;
    end
    iter = iter + 1;
end
disp('Fixed-point iteration did not converge')
x = xnew;
function G = example1(x)
G = [ (-0.02*x(1)^2 - 0.02*x(2)^2 - 0.02*x(3)^2 + 4)
(-0.05*x(1)^2 - 0.05*x(3)^2 + 2.5)
(0.025*x(1)^2 + 0.025*x(2)^2 -1.875) ];
» x0=[0 0 0]; es=1.e-6; maxit=100;
» x=fixed_pt_sys('example1', x0, es, maxit);
        0         0         0         0
   1.0000    4.0000    2.5000   -1.8750    5.0760
   2.0000    3.4847    1.5242   -1.3188    1.2358
   3.0000    3.6759    1.8059   -1.5133    0.3921
   4.0000    3.6187    1.7099   -1.4557    0.1257
   5.0000    3.6372    1.7393   -1.4745    0.0395
   6.0000    3.6314    1.7298   -1.4686    0.0126
   7.0000    3.6333    1.7328   -1.4705    0.0040
   8.0000    3.6327    1.7318   -1.4699    0.0013
   9.0000    3.6329    1.7321   -1.4701    0.0004
  10.0000    3.6328    1.7321   -1.4700    0.0001
  11.0000    3.6328    1.7321   -1.4701    0.0000
  12.0000    3.6328    1.7321   -1.4701    0.0000
  13.0000    3.6328    1.7321   -1.4701    0.0000
  14.0000    3.6328    1.7321   -1.4701    0.0000
  15.0000    3.6328    1.7321   -1.4701    0.0000
Fixed-point iteration has converged
Example 2: Fixed-Point
Solve the nonlinear system

f1(x1, x2, x3) = x1^2 + 4 x2^2 + 9 x3^2 - 36 = 0
f2(x1, x2, x3) = x1^2 + 9 x2^2 - 47 = 0
f3(x1, x2, x3) = x1^2 x3 - 11 = 0
Rearrange (initial guess: x = 0, y = 0, z > 0):

x1 = g1(x1, x2, x3) = sqrt(11 / x3)
x2 = g2(x1, x2, x3) = sqrt(47 - x1^2) / 3
x3 = g3(x1, x2, x3) = sqrt(36 - x1^2 - 4 x2^2) / 3
function G = example2(x)
G = [ (sqrt(11/x(3)))
(sqrt(47 - x(1)^2) / 3 )
(sqrt(36 - x(1)^2 - 4*x(2)^2) / 3 ) ];
» x0=[0 0 10]; es=0.0001; maxit=500;
» x=fixed_pt_sys('example2',x0,es,maxit);
        0         0         0   10.0000
   1.0000    1.0488    2.2852    2.0000    8.3858
   2.0000    2.3452    2.2583    1.2477    1.4991
   3.0000    2.9692    2.1473    1.0593    0.6612
   4.0000    3.2224    2.0598    0.9854    0.2779
   5.0000    3.3411    2.0170    0.9801    0.1263
   6.0000    3.3501    1.9955    0.9754    0.0238
   7.0000    3.3581    1.9938    0.9916    0.0181
   8.0000    3.3307    1.9923    0.9901    0.0275
   9.0000    3.3332    1.9974    1.0017    0.0129
  10.0000    3.3139    1.9969    0.9962    0.0201
  11.0000    3.3230    2.0005    1.0037    0.0124
  12.0000    3.3105    1.9988    0.9972    0.0142
  13.0000    3.3213    2.0011    1.0033    0.0126
  14.0000    3.3112    1.9991    0.9973    0.0120
  15.0000    3.3212    2.0010    1.0028    0.0116
  16.0000    3.3120    1.9992    0.9974    0.0107
  17.0000    3.3209    2.0008    1.0024    0.0103
  18.0000    3.3126    1.9992    0.9977    0.0097
  19.0000    3.3205    2.0007    1.0022    0.0092
  20.0000    3.3130    1.9993    0.9979    0.0087
  ...
  50.0000    3.3159    1.9999    0.9996    0.0017
  ...
 102.0000    3.3166    2.0000    1.0000    0.0001
 103.0000    3.3167    2.0000    1.0000    0.0001
Fixed-point iteration has converged