Newton-Raphson Method: Nonlinear Equations
Newton-Raphson Method
Industrial Engineering Majors
Authors: Autar Kaw, Jai Paul
http://numericalmethods.eng.usf.edu
Transforming Numerical Methods Education for STEM Undergraduates
7/17/2015
Newton-Raphson Method
Given a current estimate of the root, x_i, the next estimate x_{i+1} is the point where the tangent line to f(x) at x_i crosses the x-axis:

x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}

Figure 1 Geometrical illustration of the Newton-Raphson method.
Derivation
From the geometry in Figure 2, the slope of the tangent line at the point (x_i, f(x_i)) is

\tan(\alpha) = \frac{AB}{AC} = \frac{f(x_i)}{x_i - x_{i+1}}

Since the slope of the tangent line is also the derivative f'(x_i),

f'(x_i) = \frac{f(x_i)}{x_i - x_{i+1}}

which, solved for x_{i+1}, gives

x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}

Figure 2 Derivation of the Newton-Raphson method.
Algorithm for Newton-Raphson Method
Step 1
Evaluate f'(x) symbolically.
Step 2
Use an initial guess of the root, x_i, to estimate the new value of the root, x_{i+1}, as

x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}
Step 3
Find the absolute relative approximate error |\epsilon_a| as

|\epsilon_a| = \left| \frac{x_{i+1} - x_i}{x_{i+1}} \right| \times 100
Step 4
Compare the absolute relative approximate error |\epsilon_a| with the pre-specified relative error tolerance \epsilon_s.

Is |\epsilon_a| > \epsilon_s?
Yes: go to Step 2 using the new estimate of the root.
No: stop the algorithm.

Also, check whether the number of iterations has exceeded the maximum number of iterations allowed. If so, terminate the algorithm and notify the user.
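The four steps above translate directly into a loop. Below is a minimal Python sketch of the algorithm, assuming the user supplies f and its derivative fprime as Python functions; the function name, default tolerance, and iteration cap are illustrative and not part of the original slides.

def newton_raphson(f, fprime, x0, es=0.5e-4, max_iter=100):
    """Return (root estimate, |error| in %, iterations used)."""
    xi = x0
    for k in range(1, max_iter + 1):
        xi_new = xi - f(xi) / fprime(xi)            # Step 2: new estimate of the root
        ea = abs((xi_new - xi) / xi_new) * 100.0    # Step 3: absolute relative approx. error
        xi = xi_new
        if ea <= es:                                # Step 4: compare with tolerance
            return xi, ea, k
    # Maximum number of iterations exceeded: terminate and notify the user
    raise RuntimeError("Newton-Raphson did not converge in %d iterations" % max_iter)

For instance, newton_raphson(lambda x: x**2 - 2, lambda x: 2 * x, 1.0) returns an estimate of about 1.4142136 after a handful of iterations.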
Example 1
You are working for a start-up computer assembly company
and have been asked to determine the minimum number of
computers that the shop will have to sell to make a profit.
The equation that gives the minimum number of
computers ‘x’ to be sold after considering the total costs
and the total sales is:
f(x) = 40x^{1.5} - 875x + 35000 = 0
Example 1 Cont.
Use the Newton-Raphson method of finding roots of equations to find
a) the minimum number of computers that need to be sold to make a profit. Conduct three iterations to estimate the root of the above equation,
b) the absolute relative approximate error at the end of each iteration, and
c) the number of significant digits at least correct at the end of each iteration.
Example 1 Cont.
f(x) = 40x^{1.5} - 875x + 35000 = 0

Figure 3 Graph of the function f(x).
Example 1 Cont.
Initial guess: x_0 = 50

Iteration 1
The estimate of the root is

x_1 = x_0 - \frac{f(x_0)}{f'(x_0)} = 50 - \frac{5392.1}{-450.74} = 61.963

Figure 4 Graph of the estimated root after Iteration 1, showing the previous guess, the new guess, and the tangent line to the curve at the current guess.

The absolute relative approximate error is

|\epsilon_a| = \left| \frac{x_1 - x_0}{x_1} \right| \times 100 = 19.307\%

The number of significant digits at least correct is 0.
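The count of significant digits at least correct follows from the standard criterion |\epsilon_a| \le 0.5 \times 10^{2-m}, where m is the largest integer satisfying the inequality (the criterion itself is not restated on these slides). For Iteration 1, 19.307 > 0.5 \times 10^{2-1} = 5, so m = 0.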
Example 1 Cont.
Iteration 2
The estimate of the root is

x_2 = x_1 - \frac{f(x_1)}{f'(x_1)} = 61.963 - \frac{292.45}{-402.70} = 62.689

Figure 5 Graph of the estimated root after Iteration 2, showing the previous guess, the new guess, and the tangent line to the curve at the current guess.

The absolute relative approximate error is

|\epsilon_a| = \left| \frac{x_2 - x_1}{x_2} \right| \times 100 = 1.1585\%

The number of significant digits at least correct is 1.
Example 1 Cont.
Iteration 3
The estimate of the root is

x_3 = x_2 - \frac{f(x_2)}{f'(x_2)} = 62.689 - \frac{1.0031}{-399.94} = 62.692

Figure 6 Graph of the estimated root after Iteration 3, showing the previous guess, the new guess, and the tangent line to the curve at the current guess.

The absolute relative approximate error is

|\epsilon_a| = \left| \frac{x_3 - x_2}{x_3} \right| \times 100 = 4.0006 \times 10^{-3}\%

The number of significant digits at least correct is 4.
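For readers who want to check the numbers above, here is a small Python sketch (not part of the original slides) that reproduces the three iterations; the derivative f'(x) = 60x^{0.5} - 875 is obtained by differentiating the given f(x).

def f(x):
    return 40.0 * x**1.5 - 875.0 * x + 35000.0

def fprime(x):                                   # derivative of f(x)
    return 60.0 * x**0.5 - 875.0

x = 50.0                                         # initial guess x0
for i in range(1, 4):
    x_new = x - f(x) / fprime(x)                 # Newton-Raphson step
    ea = abs((x_new - x) / x_new) * 100.0        # absolute relative approx. error (%)
    print("iteration", i, ":", round(x_new, 3), "error", round(ea, 4), "%")
    x = x_new
# Expected values, matching the slides: 61.963 (19.307 %), 62.689 (1.1585 %), 62.692 (about 4.0e-3 %)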
Advantages and Drawbacks of the Newton-Raphson Method
Advantages
Converges fast (quadratic convergence), if it converges.
Requires only one guess.
Drawbacks
1. Divergence at inflection points

Selection of an initial guess, or an iterated value of the root, that is close to an inflection point of the function f(x) may cause the Newton-Raphson method to start diverging away from the root.

For example, to find the root of the equation

f(x) = (x - 1)^3 + 0.512 = 0

the Newton-Raphson method reduces to

x_{i+1} = x_i - \frac{(x_i - 1)^3 + 0.512}{3(x_i - 1)^2}

Table 1 shows the iterated values of the root of the equation. The root starts to diverge at Iteration 6 because the previous estimate of 0.92589 is close to the inflection point at x = 1. Eventually, after 12 more iterations, the root converges to the exact value of x = 0.2.
Drawbacks – Inflection Points
Table 1 Divergence near inflection point.

Iteration Number    x_i
0                     5.0000
1                     3.6560
2                     2.7465
3                     2.1084
4                     1.6000
5                     0.92589
6                   -30.119
7                   -19.746
...                   ...
18                    0.2000

Figure 8 Divergence at inflection point for f(x) = (x - 1)^3 + 0.512 = 0.
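A brief Python sketch (illustrative, not from the slides) that reproduces the divergence pattern of Table 1:

# Divergence near the inflection point x = 1 for f(x) = (x - 1)^3 + 0.512,
# starting from x0 = 5 (compare the printed values with Table 1).
x = 5.0
for i in range(1, 19):
    x = x - ((x - 1.0)**3 + 0.512) / (3.0 * (x - 1.0)**2)
    print(i, round(x, 5))
# Iterations 1-5 creep toward the inflection point (3.6560, ..., 0.92589),
# Iteration 6 jumps out to about -30.119, and per Table 1 the estimate
# finally settles on the exact root x = 0.2 by Iteration 18.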
Drawbacks – Division by Zero
2. Division by zero

For the equation

f(x) = x^3 - 0.03x^2 + 2.4 \times 10^{-6} = 0

the Newton-Raphson method reduces to

x_{i+1} = x_i - \frac{x_i^3 - 0.03 x_i^2 + 2.4 \times 10^{-6}}{3 x_i^2 - 0.06 x_i}

For x_0 = 0 or x_0 = 0.02, the denominator will equal zero.

Figure 9 Pitfall of division by zero or by a number close to zero.
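As a quick check of the second problematic guess (a worked step added here, not on the original slide):

3(0.02)^2 - 0.06(0.02) = 0.0012 - 0.0012 = 0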
Drawbacks – Oscillations near local
maximum and minimum
3. Oscillations near local maximum and minimum

Results obtained from the Newton-Raphson method may oscillate about a local maximum or minimum without converging on a root, instead converging on the local maximum or minimum itself. Eventually, this may lead to division by a number close to zero, and the iterations may diverge.

For example, the equation f(x) = x^2 + 2 = 0 has no real roots.
Drawbacks – Oscillations near local
maximum and minimum
Table 3 Oscillations near local maxima and minima in the Newton-Raphson method.

Iteration Number    x_i         f(x_i)     |\epsilon_a| (%)
0                   -1.0000      3.00         --
1                    0.5         2.25        300.00
2                   -1.75        5.063       128.571
3                   -0.30357     2.092       476.47
4                    3.1423     11.874       109.66
5                    1.2529      3.570       150.80
6                   -0.17166     2.029       829.88
7                    5.7395     34.942       102.99
8                    2.6955      9.266       112.93
9                    0.97678     2.954       175.96

Figure 10 Oscillations around the local minimum for f(x) = x^2 + 2.
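A short Python sketch (added for illustration) that reproduces the oscillating iterates in Table 3:

# Oscillation about the local minimum of f(x) = x^2 + 2 (no real roots),
# starting from x0 = -1.0; compare the printed values with Table 3.
x = -1.0
for i in range(1, 10):
    x_new = x - (x**2 + 2.0) / (2.0 * x)         # Newton-Raphson step
    ea = abs((x_new - x) / x_new) * 100.0        # absolute relative approx. error (%)
    print(i, round(x_new, 5), round(x_new**2 + 2.0, 3), round(ea, 2))
    x = x_new
# The iterates bounce around x = 0 (the local minimum) and never converge,
# since f(x) = x^2 + 2 never crosses zero.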
Drawbacks – Root Jumping
4. Root Jumping

In some cases where the function f(x) is oscillating and has a number of roots, one may choose an initial guess close to a root. However, the guesses may jump and converge to some other root.

For example, for

f(x) = \sin x = 0

choose

x_0 = 2.4\pi = 7.539822

It will converge to x = 0 instead of x = 2\pi = 6.2831853.

Figure 11 Root jumping from the intended location of the root for f(x) = \sin x = 0.
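A small Python sketch (illustrative) showing the root jump for this example:

import math

# Root jumping for f(x) = sin(x): start near the root 2*pi but converge to 0.
x = 2.4 * math.pi                          # x0 = 7.539822
for i in range(1, 6):
    x = x - math.sin(x) / math.cos(x)      # Newton-Raphson step for sin(x)
    print(i, round(x, 6))
# The iterates pass through roughly 4.462, 0.550, -0.063, ... and settle on
# the root x = 0 rather than the intended nearby root x = 2*pi = 6.2831853.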
Additional Resources
For all resources on this topic, such as digital audiovisual lectures, primers, textbook chapters, multiple-choice tests, worksheets in MATLAB, MATHEMATICA, MathCad and MAPLE, blogs, and related physical problems, please visit
http://numericalmethods.eng.usf.edu/topics/newton_raphson.html
THE END