Transcript Document
Calculus of Variations

Part A: Functionals and Euler's Equation
  V-1 Maximum and Minimum of Functions
  V-2 Maximum and Minimum of Functionals
  V-3 The Variational Notation
  V-4 Constraints and Lagrange Multipliers
V-5 Approximate Methods
  1. Method of Weighted Residuals
     - Galerkin Method
  2. Variational Methods
     - Kantorovich Method
     - Rayleigh-Ritz Method
Part A: Functionals and Euler's Equation

V-1 Maximum and Minimum of Functions
(a) If f(x) is twice continuously differentiable on [x0, x1]:

A necessary condition for a maximum (minimum) of f(x) at x ∈ [x0, x1] is that f'(x) = 0.

Sufficient conditions for a maximum (minimum) of f(x) at x ∈ [x0, x1] are that f'(x) = 0 and also f''(x) < 0 (f''(x) > 0).
(b) If f(x) = f(x1, …, xn) is defined over a closed domain D, then necessary conditions for a maximum (minimum) of f at an interior point x0 are that

∂f/∂xi |_{x=x0} = 0,  i = 1, 2, …, n,

and sufficient conditions are, in addition, that the Hessian matrix

[ ∂²f/∂xi∂xj ] |_{x=x0}

is negative (positive) definite.
(c) If f(x) is defined on a closed domain D and we want to extremize f(x) subject to the constraints

gi(x1, …, xn) = 0,  i = 1, 2, …, k  (k < n).
Ex: Find the extrema of f(x, y) subject to g(x, y) = 0.

(i) 1st method: by direct differentiation of g,

dg = g_x dx + g_y dy = 0  ⟹  dy = −(g_x/g_y) dx.

To extremize f,

df = f_x dx + f_y dy = 0  ⟹  ( f_x − f_y g_x/g_y ) dx = 0.
We therefore solve

f_x g_y − f_y g_x = 0  and  g = 0

to find the point (x0, y0) which extremizes f subject to g = 0.
(ii) 2nd method (Lagrange multiplier): Let

v(x, y, λ) = f(x, y) + λ g(x, y).

Finding the extrema of v without any constraint is equivalent to finding the extrema of f subject to g = 0. To extremize v:

∂v/∂x = f_x + λ g_x = 0
∂v/∂y = f_y + λ g_y = 0   ⟹  eliminating λ:  f_x g_y − f_y g_x = 0
∂v/∂λ = g = 0

We obtain the same equations as by direct differentiation; λ is called the Lagrange multiplier.
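Both methods are easy to check symbolically. In this sketch (assuming SymPy is available) the function f = x² + y² and constraint g = x + y − 1 are illustrative choices, not from the notes:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam')
f = x**2 + y**2      # function to extremize (illustrative choice, not from the notes)
g = x + y - 1        # constraint g(x, y) = 0

# 2nd method: v(x, y, lam) = f + lam*g, extremized without constraint
v = f + lam * g
sol = sp.solve([sp.diff(v, x), sp.diff(v, y), sp.diff(v, lam)], [x, y, lam], dict=True)[0]
print(sol)  # {x: 1/2, y: 1/2, lam: -1}

# 1st method's condition f_x*g_y - f_y*g_x = 0 holds at the same point
cond = sp.diff(f, x) * sp.diff(g, y) - sp.diff(f, y) * sp.diff(g, x)
print(cond.subs(sol))  # 0
```

Both routes lead to the same stationary point, as the notes claim.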
V-2 Maximum and Minimum of Functionals
2.1 What are functionals?

A functional is a "function of functions": it assigns a number to each admissible function y(x).

[Figure: a curve y(x) joining fixed endpoints (x1, y1) and (x2, y2), together with a neighboring varied curve y(x) + εη(x).]
2.2 The simplest problem in the calculus of variations.

Determine y(x) ∈ C²[x0, x1] such that the functional

I[y(x)] = ∫_{x0}^{x1} F(x, y(x), y'(x)) dx

has an extremum, where F ∈ C² over its entire domain, subject to y(x0) = y0, y(x1) = y1 at the end points.
Setting the first variation to zero,

δI = ∫_{x0}^{x1} [ F_y(x, y, y') η + F_{y'}(x, y, y') η' ] dx = 0.

On integrating the 2nd term by parts,

[ F_{y'}(x, y, y') η ]_{x0}^{x1} − ∫_{x0}^{x1} [ (d/dx) F_{y'}(x, y, y') − F_y(x, y, y') ] η dx = 0.   ----- (1)

Since η(x0) = η(x1) = 0 and η(x) is arbitrary,

(d/dx) F_{y'}(x, y, y') − F_y(x, y, y') = 0.   ----- (2)

This is Euler's Equation.
Natural B.C.'s: if y is not prescribed at an end point, the boundary term in (1) requires

∂F/∂y' |_{x=x0} = 0  and/or  ∂F/∂y' |_{x=x1} = 0.

The above requirements are called natural b.c.'s.
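Euler's equation can be formed mechanically. As a sketch (assuming SymPy), take the arc-length functional F = √(1 + y'²), a standard illustration not worked in the notes; its Euler equation reduces to y'' = 0, i.e. straight lines:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.symbols('x')
y = sp.Function('y')

# Arc-length functional: F(x, y, y') = sqrt(1 + y'^2)
F = sp.sqrt(1 + y(x).diff(x)**2)
eq, = euler_equations(F, y(x), x)   # F_y - d/dx(F_y') = 0

# Solving the Euler equation for y'' shows it reduces to y'' = 0
print(sp.solve(eq, y(x).diff(x, 2)))  # [0]
```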
V-3 The Variational Notation

Variations: imbed u(x) in a one-parameter family of functions

ū(x, ε) = u(x) + ε η(x);

the variation of u(x) is defined as δu = ε η(x).
The corresponding variation of F, δF, to first order in ε, is

δF = F(x, y + δy, y' + δy') − F(x, y, y') ≈ (∂F/∂y) δy + (∂F/∂y') δy',

and

I(ū) = ∫_{x0}^{x1} F(x, ū, ū') dx = G(ε).
Then

δI = δ ∫_{x0}^{x1} F(x, y, y') dx
   = ∫_{x0}^{x1} δF(x, y, y') dx
   = ∫_{x0}^{x1} [ (∂F/∂y) δy + (∂F/∂y') δy' ] dx
   = ∫_{x0}^{x1} [ ∂F/∂y − (d/dx)(∂F/∂y') ] δy dx + [ (∂F/∂y') δy ]_{x0}^{x1}.

Thus a stationary function for a functional is one for which the first variation δI = 0.
For the more general cases:

(a) Several dependent variables. Ex:

I = ∫_{x0}^{x1} F(x, y, z; y', z') dx

Euler's Eqs.:

∂F/∂y − (d/dx)(∂F/∂y') = 0,   ∂F/∂z − (d/dx)(∂F/∂z') = 0.

(b) Several independent variables. Ex:

I = ∬_R F(x, y, u, u_x, u_y) dxdy

Euler's Eq.:

∂F/∂u − (∂/∂x)(∂F/∂u_x) − (∂/∂y)(∂F/∂u_y) = 0.
(c) Higher order. Ex:

I = ∫_{x0}^{x1} F(x, y, y', y'') dx

Euler's Eq.:

∂F/∂y − (d/dx)(∂F/∂y') + (d²/dx²)(∂F/∂y'') = 0.

More dependent variables cause more equations; higher order causes a longer equation.
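The higher-order case can also be checked mechanically. As an illustrative sketch (assuming SymPy; the beam-like integrand F = ½(y'')² is my choice, not from the notes), the Euler equation picks up the +d²/dx²(∂F/∂y'') term and gives the fourth-order equation y'''' = 0:

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.symbols('x')
y = sp.Function('y')

# F depends on y'' -> Euler equation includes +d^2/dx^2 (dF/dy'')
F = sp.Rational(1, 2) * y(x).diff(x, 2)**2
eq, = euler_equations(F, y(x), x)   # reduces to y'''' = 0

# General solution is a cubic polynomial in x
sol = sp.dsolve(eq, y(x))
print(sp.degree(sol.rhs, x))  # 3
```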
V-4 Constraints and Lagrange Multipliers

The Lagrange multiplier can also be used to find the extreme values of a functional subject to constraints.
Ex:

(a) Find the extreme values of

I = ∫_{x1}^{x2} F(x, u, v, u_x, v_x) dx,

where u(x1) = u1, u(x2) = u2, v(x1) = v1, v(x2) = v2, subject to the constraint

G(x, u, v) = 0.   ------------ (1)
From δI = 0,

δI = ∫_{x1}^{x2} [ ( ∂F/∂u − (d/dx)(∂F/∂u_x) ) δu + ( ∂F/∂v − (d/dx)(∂F/∂v_x) ) δv ] dx = 0.   ----------- (2)
Because of the constraint, δu and δv are not independent, so we do not get two Euler equations. From

δG = (∂G/∂u) δu + (∂G/∂v) δv = 0  ⟹  δu = −(G_v/G_u) δv,

(2) becomes

δI = ∫_{x1}^{x2} [ −(G_v/G_u)( ∂F/∂u − (d/dx)(∂F/∂u_x) ) + ( ∂F/∂v − (d/dx)(∂F/∂v_x) ) ] δv dx = 0,

so that

G_v ( ∂F/∂u − (d/dx)(∂F/∂u_x) ) = G_u ( ∂F/∂v − (d/dx)(∂F/∂v_x) ).

This equation together with (1) is to be solved for u, v.
(b) Simple Isoperimetric Problem

To extremize

I = ∫_{x1}^{x2} F(x, y, y') dx,

subject to the constraints:

(1) J = ∫_{x1}^{x2} G(x, y, y') dx = const.,
(2) y(x1) = y1, y(x2) = y2.

Take the variation over the two-parameter family ȳ = y + ε1 η1(x) + ε2 η2(x) (where η1(x) and η2(x) are functions which satisfy ηi(x1) = ηi(x2) = 0). Then

I(ε1, ε2) = ∫_{x1}^{x2} F(x, y + ε1η1 + ε2η2, y' + ε1η1' + ε2η2') dx,
J(ε1, ε2) = ∫_{x1}^{x2} G(x, y + ε1η1 + ε2η2, y' + ε1η1' + ε2η2') dx.
Based on the Lagrange multiplier method we get

∂(I + λJ)/∂ε1 |_{ε1=ε2=0} = 0,   ∂(I + λJ)/∂ε2 |_{ε1=ε2=0} = 0,

which lead to

∫_{x1}^{x2} [ ( ∂F/∂y − (d/dx)(∂F/∂y') ) + λ ( ∂G/∂y − (d/dx)(∂G/∂y') ) ] ηi dx = 0,   i = 1, 2.

So the Euler equation is

∂(F + λG)/∂y − (d/dx) ∂(F + λG)/∂y' = 0.

When ∂G/∂y − (d/dx)(∂G/∂y') = 0, λ is an arbitrary number: the constraint is trivial and can be ignored.
Examples: constructing the functional for a given Euler's equation.

Helmholtz Equation

Ex: Forced vibration of a membrane,

∂²u/∂t² = c² ( ∂²u/∂x² + ∂²u/∂y² ) + f(x, y, t).   ----- (1)

If the forcing function f is of the form

f(x, y, t) = P(x, y) sin(ωt),

we may write the steady-state displacement u in the form u = v(x, y) sin(ωt). Substituting into (1),

c² ( ∂²v/∂x² + ∂²v/∂y² ) + ω²v + P = 0.
Multiplying by the variation δv and integrating over R,

∬_R [ c²(v_xx + v_yy) + ω²v + P ] δv dxdy = 0.   ----- (2)

Consider

∬_R c² v_xx δv dxdy = ∬_R c² [ (v_x δv)_x − v_x δv_x ] dxdy,
∬_R c² v_yy δv dxdy = ∬_R c² [ (v_y δv)_y − v_y δv_y ] dxdy.

Note that (v_x δv)_x = v_xx δv + v_x δv_x and (v_y δv)_y = v_yy δv + v_y δv_y. With

V = V_x i + V_y j,   V_x = v_x δv,   V_y = v_y δv,   n = cos α i + sin α j,

∇·V = ∂(v_x δv)/∂x + ∂(v_y δv)/∂y.
By the divergence theorem, ∮ V·n ds = ∬ (∇·V) da = ∮ (v_x δv cos α + v_y δv sin α) ds. Hence

∬_R c² v_xx δv dxdy + ∬_R c² v_yy δv dxdy
  = ∮ c² (v_x cos α + v_y sin α) δv ds − δ ∬_R ½ c² [ (v_x)² + (v_y)² ] dxdy.

Also ∬_R ω²v δv dxdy = δ ∬_R ½ ω²v² dxdy and ∬_R P δv dxdy = δ ∬_R P v dxdy, so (2) becomes

∮ c² (∂v/∂n) δv ds − δ ∬_R [ ½ c² (∇v)² − ½ ω²v² − P v ] dxdy = 0.
Hence:

(i) if v = f(x, y) is given on the boundary Γ, i.e. δv = 0 on Γ, then the variational problem is

δ ∬_R [ ½ c² (∇v)² − ½ ω²v² − P v ] dxdy = 0;   ----- (3)

(ii) if ∂v/∂n = 0 is given on Γ, the variational problem is the same as (3);

(iii) if ∂v/∂n = φ(s) is given on Γ,

δ [ ∬_R ( ½ c² (∇v)² − ½ ω²v² − P v ) dxdy − ∮_Γ c² φ v ds ] = 0.
Diffusion Equation

Ex: Steady-state heat conduction,

∇·(k∇T) + f(x, T) = 0  in D,

with B.C.'s

T = T1  on B1,
k ∂T/∂n = q2  on B2,
k ∂T/∂n = −h(T − T3)  on B3.

Multiply the equation by δT and integrate over the domain D. After integrating by parts, we find the variational problem as follows:

δ [ ∬_D ( ½ k(∇T)² − ∫_{T0}^{T} f(x, T') dT' ) dτ − ∫_{B2} q2 T dσ + ∫_{B3} ½ h (T − T3)² dσ ] = 0,

with T = T1 on B1.
Poisson Equation

Ex: Torsion of a prismatic bar,

∇²φ + 2 = 0  in R,   φ = 0  on Γ,

where φ is the Prandtl stress function and

τ_zx = Gθ ∂φ/∂y,   τ_zy = −Gθ ∂φ/∂x.

The variational problem becomes

δ ∬_R [ (∇φ)² − 4φ ] dxdy = 0,   with φ = 0 on Γ.
V-5-1 Approximate Methods

(I) Method of Weighted Residuals (MWR)

L[u] = 0  in D,  plus homogeneous b.c.'s on B.

Assume an approximate solution

u ≈ u_n = Σ_{i=1}^{n} C_i φ_i,

where each trial function φ_i satisfies the b.c.'s. The residual is

R_n = L[u_n].

In this method (MWR), the C_i are chosen such that R_n is forced to be zero in an average sense, i.e.

⟨w_j, R_n⟩ = 0,   j = 1, 2, …, n,

where the w_j are the weighting functions.
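As a minimal sketch of MWR (assuming SymPy), take the two-point problem y'' + xy + x = 0, y(0) = y(1) = 0, which the notes solve later by the Rayleigh-Ritz method, with the one-term trial u1 = c1 x(1−x) and a collocation weighting w = δ(x − ½):

```python
import sympy as sp

x, c1 = sp.symbols('x c1')
phi = x * (1 - x)          # trial function satisfying the b.c.'s y(0) = y(1) = 0
u1 = c1 * phi

# Residual of L[u] = y'' + x*y + x
R1 = sp.diff(u1, x, 2) + x * u1 + x

# Collocation: weighting w = delta(x - 1/2) forces R1(1/2) = 0
c1_val = sp.solve(R1.subs(x, sp.Rational(1, 2)), c1)[0]
print(c1_val, float(c1_val))  # 4/15 ≈ 0.267
```

The collocation value 4/15 ≈ 0.267 is close to the Galerkin/Ritz value 5/19 ≈ 0.263 obtained later; different weighting functions give different (here nearby) coefficients.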
(II) Galerkin Method

The w_j are chosen to be the trial functions φ_j; hence the trial functions are chosen as members of a complete set of functions. The Galerkin method forces the residual to be zero with respect to an orthogonal complete set.

Ex: Torsion of a square shaft,

∇²φ + 2 = 0,   φ = 0 on x = ±a, y = ±a.

(i) One-term approximation:

φ1 = c1 (x² − a²)(y² − a²),
R1 = ∇²φ1 + 2 = 2c1 [ (x² − a²) + (y² − a²) ] + 2.
From

∫_{−a}^{a} ∫_{−a}^{a} R1 (x² − a²)(y² − a²) dxdy = 0

we get c1 = 5/(8a²), therefore

φ1 = (5/(8a²)) (x² − a²)(y² − a²).

The torsional rigidity is

D1 = 2G ∬_R φ1 dxdy = 0.1388 G (2a)⁴;

the exact value of D is

D_exact = 0.1406 G (2a)⁴,

so the error is only -1.2%.
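The one-term result can be reproduced symbolically. A sketch assuming SymPy, with G set to 1:

```python
import sympy as sp

x, y, a, c1 = sp.symbols('x y a c1', positive=True)
phi = (x**2 - a**2) * (y**2 - a**2)        # trial function, zero on x = ±a, y = ±a
R1 = sp.diff(c1 * phi, x, 2) + sp.diff(c1 * phi, y, 2) + 2   # residual of lap(phi1) + 2

# Galerkin condition: integral of R1*phi over the square is zero
eq = sp.integrate(R1 * phi, (x, -a, a), (y, -a, a))
c1_val = sp.solve(eq, c1)[0]
print(c1_val)  # 5/(8*a**2)

# Torsional rigidity D1 = 2*G*∬phi1 dxdy (G = 1)
D1 = 2 * sp.integrate(c1_val * phi, (x, -a, a), (y, -a, a))
print(sp.simplify(D1 / (2 * a)**4))  # 5/36 ≈ 0.1389
```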
(ii) Two-term approximation (by symmetry, even functions):

φ2 = (x² − a²)(y² − a²) [ c1 + c2 (x² + y²) ],

with trial functions

φ̄1 = (x² − a²)(y² − a²),   φ̄2 = (x² − a²)(y² − a²)(x² + y²),

and R2 = ∇²φ2 + 2. From

∬_R R2 φ̄1 dxdy = 0  and  ∬_R R2 φ̄2 dxdy = 0

we obtain

c1 = (1295/2216)(1/a²),   c2 = (525/4432)(1/a⁴).

Therefore

D2 = 2G ∬_R φ2 dxdy = 0.1404 G (2a)⁴;

the error is only -0.14%.
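The same symbolic check for the two-term approximation (a sketch assuming SymPy; G = 1):

```python
import sympy as sp

x, y, a, c1, c2 = sp.symbols('x y a c1 c2', positive=True)
base = (x**2 - a**2) * (y**2 - a**2)
phi1, phi2 = base, base * (x**2 + y**2)    # even trial functions
trial = c1 * phi1 + c2 * phi2
R2 = sp.diff(trial, x, 2) + sp.diff(trial, y, 2) + 2

# Galerkin conditions against both trial functions
eqs = [sp.integrate(R2 * p, (x, -a, a), (y, -a, a)) for p in (phi1, phi2)]
sol = sp.solve(eqs, [c1, c2])
print(sol[c1], sol[c2])  # 1295/(2216*a**2), 525/(4432*a**4)

D2 = 2 * sp.integrate(trial.subs(sol), (x, -a, a), (y, -a, a))
print(float(sp.simplify(D2 / (2 * a)**4)))  # ≈ 0.1404
```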
V-5-2 Variational Methods

(I) Kantorovich Method

Assume the approximate solution as

u ≈ u_n = Σ_{i=1}^{n} C_i(x) U_i,

where the U_i are known functions chosen to satisfy the b.c.'s, and the C_i(x) are unknown functions determined by making "I" stationary; this yields Euler equations for the C_i.

Ex: the torsional problem with the functional

I(u) = ∫_{−a}^{a} ∫_{−b}^{b} [ (∂u/∂x)² + (∂u/∂y)² − 4u ] dxdy.
Assume the one-term approximate solution

u(x, y) = (b² − y²) C(x).

Then

I(C) = ∫_{−a}^{a} ∫_{−b}^{b} { (b² − y²)² [C'(x)]² + 4y² C²(x) − 4(b² − y²) C(x) } dxdy.

Integrating over y,

I(C) = ∫_{−a}^{a} [ (16/15) b⁵ C'² + (8/3) b³ C² − (16/3) b³ C ] dx.

Euler's equation is

C''(x) − (5/(2b²)) C(x) = −5/(2b²),

where the b.c.'s are C(±a) = 0. The general solution is

C(x) = A1 cosh( √(5/2) x/b ) + A2 sinh( √(5/2) x/b ) + 1.
With the b.c.'s,

A1 = −1 / cosh( √(5/2) a/b ),   A2 = 0,

and

C(x) = 1 − cosh( √(5/2) x/b ) / cosh( √(5/2) a/b ).

So the one-term approximate solution is

u = [ 1 − cosh( √(5/2) x/b ) / cosh( √(5/2) a/b ) ] (b² − y²).
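A quick symbolic check (assuming SymPy) that C(x) satisfies the Euler equation and the b.c.'s:

```python
import sympy as sp

x, a, b = sp.symbols('x a b', positive=True)
k = sp.sqrt(sp.Rational(5, 2)) / b                 # sqrt(5/2)/b
C = 1 - sp.cosh(k * x) / sp.cosh(k * a)            # one-term Kantorovich profile

# Euler equation: C'' - (5/(2b^2)) C = -5/(2b^2)
residual = sp.diff(C, x, 2) - sp.Rational(5, 2) / b**2 * C + sp.Rational(5, 2) / b**2
print(sp.simplify(residual))          # 0
print(C.subs(x, a), C.subs(x, -a))    # 0 0  (b.c.'s C(±a) = 0)
```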
(II) Rayleigh-Ritz Method

This is used when the exact solution is impossible or difficult to obtain. First, we assume the approximate solution

u ≈ Σ_{i=1}^{n} C_i U_i,

where the U_i are approximating functions which satisfy the b.c.'s. Then I becomes an ordinary function I = I(c1, …, cn), and we choose c1 ~ cn to extremize I, i.e.

∂I/∂c1 = 0, …, ∂I/∂cn = 0.

Ex:  y'' + xy + x = 0,   y(0) = y(1) = 0.

Sol: From

∫_0^1 (y'' + xy + x) δy dx = 0,

we obtain the functional

I = ∫_0^1 [ ½ (y')² − ½ x y² − x y ] dx.
Assume

y ≈ x(1 − x)(c1 + c2 x + c3 x² + …).

(1) One-term approximation:

y⁽¹⁾ = c1 x(1 − x) = c1 (x − x²),   y' = c1 (1 − 2x).

Then

I(c1) = ∫_0^1 [ ½ c1² (1 − 2x)² − ½ x c1² (x − x²)² − x c1 (x − x²) ] dx
      = (19/120) c1² − (1/12) c1.

From

∂I/∂c1 = (19/60) c1 − 1/12 = 0  ⟹  c1 = 5/19 ≈ 0.263,

so y⁽¹⁾ = 0.263 x(1 − x).
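The one-term computation can be reproduced with SymPy (a minimal sketch):

```python
import sympy as sp

x, c1 = sp.symbols('x c1')
y1 = c1 * x * (1 - x)                         # one-term trial, y(0) = y(1) = 0

# I = ∫ [ (1/2) y'^2 - (1/2) x y^2 - x y ] dx over (0, 1)
I1 = sp.integrate(sp.diff(y1, x)**2 / 2 - x * y1**2 / 2 - x * y1, (x, 0, 1))
print(sp.expand(I1))                          # 19*c1**2/120 - c1/12

c1_val = sp.solve(sp.diff(I1, c1), c1)[0]
print(c1_val, float(c1_val))                  # 5/19 ≈ 0.263
```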
(2) Two-term approximation:

y⁽²⁾ = x(1 − x)(c1 + c2 x) = c1 (x − x²) + c2 (x² − x³).

Then

y' = c1 (1 − 2x) + c2 (2x − 3x²),

and

I(c1, c2) = ∫_0^1 { ½ [ c1² (1 − 4x + 4x²) + 2c1c2 (2x − 7x² + 6x³) + c2² (4x² − 12x³ + 9x⁴) ]
                  − ½ [ c1² (x³ − 2x⁴ + x⁵) + 2c1c2 (x⁴ − 2x⁵ + x⁶) + c2² (x⁵ − 2x⁶ + x⁷) ]
                  − c1 (x² − x³) − c2 (x³ − x⁴) } dx
          = (19/120) c1² + (11/70) c1c2 + (107/1680) c2² − (1/12) c1 − (1/20) c2.
From

∂I/∂c1 = (19/60) c1 + (11/70) c2 − 1/12 = 0,
∂I/∂c2 = (11/70) c1 + (107/840) c2 − 1/20 = 0,

i.e.

0.317 c1 + 0.157 c2 = 0.0833,
0.157 c1 + 0.127 c2 = 0.05,

we get c1 ≈ 0.177, c2 ≈ 0.173, so

y⁽²⁾ = (0.177 x + 0.173 x²)(1 − x).

(It is noted that the deviation between the successive approximations y⁽¹⁾ and y⁽²⁾ is found to be smaller in magnitude than 0.009 over (0, 1).)
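Solving the two-term system symbolically (a sketch assuming SymPy; the solved values agree with the rounded c1 ≈ 0.177, c2 ≈ 0.173 above to within a few thousandths):

```python
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
y2 = x * (1 - x) * (c1 + c2 * x)              # two-term trial, y(0) = y(1) = 0

# Same functional I as in the one-term case
I2 = sp.integrate(sp.diff(y2, x)**2 / 2 - x * y2**2 / 2 - x * y2, (x, 0, 1))

# Stationarity: dI/dc1 = dI/dc2 = 0
sol = sp.solve([sp.diff(I2, c1), sp.diff(I2, c2)], [c1, c2])
print({k: round(float(v), 3) for k, v in sol.items()})
```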