An Introduction to the Finite Element Method


The Finite Element Method Defined

The Finite Element Method (FEM) is a weighted residual method that uses compactly-supported basis functions.
Brief Comparison with Other Methods

Finite Difference (FD) Method: FD approximates an operator (e.g., the derivative) and solves a problem on a set of points (the grid).

Finite Element (FE) Method: FE uses exact operators but approximates the solution using basis functions. Also, FE solves a problem on the interiors of grid cells (and optionally on the gridpoints as well).
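To make the contrast concrete, here is a minimal Python sketch (illustrative, not from the slides; the grid size and test function are arbitrary choices) of the finite-difference side of the comparison: the derivative operator is replaced by a difference stencil evaluated on grid points, whereas the rest of this presentation approximates the solution itself with basis functions.

# Minimal sketch of the finite-difference idea: approximate the operator d^2u/dx^2
# with a centered stencil evaluated on a set of grid points.
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 21)       # the grid
h = x[1] - x[0]
u = np.sin(np.pi * x)             # any smooth test function
d2u_exact = -np.pi**2 * np.sin(np.pi * x[1:-1])

# centered second-difference stencil at the interior points
d2u_fd = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2

print(np.max(np.abs(d2u_fd - d2u_exact)))   # O(h^2) truncation error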
Brief Comparison with Other Methods

Spectral Methods: Spectral methods use global basis functions to approximate a solution across the entire domain.

Finite Element (FE) Method: FE methods use compact basis functions to approximate a solution on individual elements.
Overview of the Finite Element Method

Strong form → Weak form → Galerkin approximation → Matrix form
Sample Problem

Axial deformation of a bar subjected to a uniform load $p(x) = p_0$ on $x \in [0, L]$ (1-D Poisson equation):

$$EA \frac{d^2u}{dx^2} + p_0 = 0, \qquad u(0) = 0, \qquad EA \left.\frac{du}{dx}\right|_{x=L} = 0$$

where $u$ = axial displacement, $E$ = Young's modulus = 1, $A$ = cross-sectional area = 1.
Strong Form

The set of governing PDEs, with boundary conditions, is called the "strong form" of the problem. Hence, our strong form is (Poisson equation in 1-D):

$$\frac{d^2u}{dx^2} = -p_0, \qquad u(0) = 0, \qquad \left.\frac{du}{dx}\right|_{x=L} = 0$$
Weak Form
We now reformulate the problem into the weak form.
The weak form is a variational statement of the problem in
which we integrate against a test function. The choice of test
function is up to us.
This has the effect of relaxing the problem; instead of finding
an exact solution everywhere, we are finding a solution that
satisfies the strong form on average over the domain.
Weak Form

Strong form: $\dfrac{d^2u}{dx^2} = -p_0$

Residual: $R = \dfrac{d^2u}{dx^2} + p_0 = 0$

Weak form: $\displaystyle\int_0^L \left( \frac{d^2u}{dx^2} + p_0 \right) v \, dx = 0$

$v$ is our test function. We will choose the test function later.
Weak Form
Why is it “weak”?
It is a weaker statement of the problem.
A solution of the strong form will also satisfy the weak form,
but not vice versa.
Analogous to “weak” and “strong” convergence:
strong: $\lim_{n \to \infty} x_n = x$

weak: $\lim_{n \to \infty} f(x_n) = f(x) \quad \forall f$
Weak Form
Choosing the test function:
We can choose any v we want, so let's choose v such that it
satisfies homogeneous boundary conditions wherever the actual
solution satisfies Dirichlet boundary conditions. We’ll see why
this helps us, and later will do it with more mathematical rigor.
So in our example, u(0)=0 so let v(0)=0.
Weak Form

Returning to the weak form:

$$\int_0^L \left( \frac{d^2u}{dx^2} + p_0 \right) v \, dx = 0$$

$$\int_0^L \frac{d^2u}{dx^2} \, v \, dx = -\int_0^L p_0 v \, dx$$

Integrate the LHS by parts:

$$-\int_0^L \frac{du}{dx}\frac{dv}{dx} \, dx + \left[ v(x) \frac{du}{dx} \right]_{x=0}^{x=L} = -\int_0^L p_0 v \, dx$$

$$-\int_0^L \frac{du}{dx}\frac{dv}{dx} \, dx + \left.\frac{du}{dx}\right|_{x=L} v(L) - \left.\frac{du}{dx}\right|_{x=0} v(0) = -\int_0^L p_0 v \, dx$$
Weak Form

Recall the boundary conditions on $u$ and $v$:

$$u(0) = 0, \qquad \left.\frac{du}{dx}\right|_{x=L} = 0, \qquad v(0) = 0$$

Hence, the boundary terms vanish:

$$-\int_0^L \frac{du}{dx}\frac{dv}{dx} \, dx + \underbrace{\left.\frac{du}{dx}\right|_{x=L}}_{=\,0} v(L) - \left.\frac{du}{dx}\right|_{x=0} \underbrace{v(0)}_{=\,0} = -\int_0^L p_0 v \, dx$$

and we are left with

$$\int_0^L \frac{du}{dx}\frac{dv}{dx} \, dx = \int_0^L p_0 v \, dx$$

The weak form satisfies Neumann conditions automatically!
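As a sanity check (not part of the original slides), a short SymPy sketch can confirm that the exact solution derived earlier satisfies this weak statement for any test function with $v(0) = 0$; the particular test functions below are arbitrary choices for illustration.

# Minimal sketch: verify the weak form  ∫ u' v' dx = ∫ p0 v dx  for the exact
# solution u(x) = p0*L*x - p0*x**2/2 and a few test functions with v(0) = 0.
import sympy as sp

x, p0, L = sp.symbols("x p0 L", positive=True)
u = p0 * L * x - p0 * x**2 / 2            # exact solution (EA = 1)

for v in (x, x**2, sp.sin(sp.pi * x / (2 * L))):   # arbitrary choices, all with v(0) = 0
    lhs = sp.integrate(sp.diff(u, x) * sp.diff(v, x), (x, 0, L))
    rhs = sp.integrate(p0 * v, (x, 0, L))
    print(v, sp.simplify(lhs - rhs))      # prints 0 for each test function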
Weak Form

Why is it "variational"?

$$\int_0^L \frac{du}{dx}\frac{dv}{dx} \, dx = \int_0^L p_0 v \, dx$$

Variational statement: find $u \in H^1$ such that $B(u, v) = F(v) \;\; \forall v \in H_0^1$, with $B$ a bilinear functional and $F$ a linear functional. Here $u$ and $v$ are functions from an infinite-dimensional function space $H$.
Galerkin's Method

We still haven't done the "finite element method" yet; we have just restated the problem in the weak formulation. So what makes it "finite elements"?

Solving the problem locally on elements.

A finite-dimensional approximation to an infinite-dimensional space → Galerkin's Method.
Galerkin's Method

Choose a finite basis $\{\varphi_i\}_{i=1}^{N}$. Then

$$u(x) \approx \sum_{j=1}^{N} c_j \varphi_j(x), \qquad c_j \text{ unknowns to solve for}$$

$$v(x) = \sum_{i=1}^{N} b_i \varphi_i(x), \qquad b_i \text{ arbitrarily chosen}$$

Insert these into our weak form:

$$\int_0^L \frac{du}{dx}\frac{dv}{dx} \, dx = \int_0^L p_0 v \, dx$$

$$\int_0^L \left( \sum_{j=1}^{N} c_j \frac{d\varphi_j}{dx}(x) \right) \left( \sum_{i=1}^{N} b_i \frac{d\varphi_i}{dx}(x) \right) dx = \int_0^L p_0 \sum_{i=1}^{N} b_i \varphi_i(x) \, dx$$
Galerkin's Method

$$\int_0^L \left( \sum_{j=1}^{N} c_j \frac{d\varphi_j}{dx} \right) \left( \sum_{i=1}^{N} b_i \frac{d\varphi_i}{dx} \right) dx = \int_0^L p_0 \sum_{i=1}^{N} b_i \varphi_i \, dx$$

Rearranging:

$$\sum_{i=1}^{N} b_i \sum_{j=1}^{N} c_j \int_0^L \frac{d\varphi_j}{dx}\frac{d\varphi_i}{dx} \, dx = \sum_{i=1}^{N} b_i \int_0^L p_0 \varphi_i \, dx$$

Cancelling (since the $b_i$ are arbitrary, the equality must hold term by term):

$$\sum_{j=1}^{N} c_j \int_0^L \frac{d\varphi_j}{dx}\frac{d\varphi_i}{dx} \, dx = \int_0^L p_0 \varphi_i \, dx, \qquad i = 1, \dots, N$$
Galerkin's Method

$$\sum_{j=1}^{N} c_j \int_0^L \frac{d\varphi_j}{dx}\frac{d\varphi_i}{dx} \, dx = \int_0^L p_0 \varphi_i \, dx$$

We now have a matrix problem $Kc = F$, where $c_j$ is a vector of unknowns,

$$K_{ij} = \int_0^L \frac{d\varphi_j}{dx}\frac{d\varphi_i}{dx} \, dx, \qquad F_i = \int_0^L p_0 \varphi_i \, dx$$

We can already see $K_{ij}$ will be symmetric, since we can interchange $i$ and $j$ without effect.
Galerkin's Method

So what have we done so far?
1) Reformulated the problem in the weak form.
2) Chosen a finite-dimensional approximation to the solution.

Recall the weak form written in terms of the residual:

$$\int_0^L \left( \frac{d^2u}{dx^2} + p_0 \right) v \, dx = \int_0^L R \, v \, dx = \sum_i b_i \int_0^L R \, \varphi_i \, dx = 0$$

This is an $L^2$ inner product. Therefore, the residual is orthogonal to our space of basis functions: the "orthogonality condition".
Orthogonality Condition

$$\int_0^L \left( \frac{d^2u}{dx^2} + p_0 \right) v \, dx = \int_0^L R \, v \, dx = \sum_i b_i \int_0^L R \, \varphi_i \, dx = 0$$

The residual is orthogonal to our space of basis functions:

[Figure: the approximate solution $u_h$ is the projection of the exact solution $u \in H$ onto the finite-dimensional subspace $H_h$ spanned by the $\varphi_i$.]

Therefore, given some space of approximate functions $H_h$, we are finding the $u_h$ that is closest to the actual solution $u$ in the sense of this projection (i.e., in the norm induced by the bilinear form $B$).
Discretization and Basis Functions

Let's continue with our sample problem. Now we discretize our domain. For this example, we will discretize x = [0, L] into 2 "elements":

Ω1 = [0, h],  Ω2 = [h, 2h],  where 2h = L.

In 1-D, elements are segments. In 2-D, they are triangles, quadrilaterals, etc. In 3-D, they are solids, such as tetrahedra. We will solve the Galerkin problem on each element.
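In a code, this discretization step typically amounts to storing node coordinates and element connectivity. A minimal sketch (with illustrative variable names, assuming L = 1) for the two-element 1-D mesh:

# Minimal sketch of a 1-D mesh: node coordinates plus element connectivity.
import numpy as np

L = 1.0
n_elements = 2                                       # the two elements used in the example
nodes = np.linspace(0.0, L, n_elements + 1)          # x1 = 0, x2 = L/2, x3 = L
elements = [(i, i + 1) for i in range(n_elements)]   # each element joins two node indices

print(nodes)      # [0.  0.5 1. ]
print(elements)   # [(0, 1), (1, 2)]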
Discretization and Basis Functions

For a set of basis functions, we can choose anything. For simplicity here, we choose piecewise linear "hat functions". Our solution will be a linear combination of these functions.

[Figure: hat functions φ1, φ2, φ3 centered at the nodes x1 = 0, x2 = L/2, x3 = L.]

Basis functions satisfy $\varphi_i(x_j) = \delta_{ij}$, so our solution will be interpolatory. Also, they satisfy the partition of unity.
Discretization and Basis Functions

To save time, we can throw out φ1 a priori because, since in this example u(0) = 0, we know that the coefficient c1 must be 0.

[Figure: only φ2 and φ3, centered at x2 = L/2 and x3 = L, remain; the node x1 = 0 carries no unknown.]
Basis Functions

$$\varphi_2(x) = \begin{cases} \dfrac{2x}{L} & \text{if } x \in \left[0, \tfrac{L}{2}\right] \\[4pt] 2 - \dfrac{2x}{L} & \text{if } x \in \left[\tfrac{L}{2}, L\right] \\[4pt] 0 & \text{otherwise} \end{cases}
\qquad
\varphi_3(x) = \begin{cases} \dfrac{2x}{L} - 1 & \text{if } x \in \left[\tfrac{L}{2}, L\right] \\[4pt] 0 & \text{otherwise} \end{cases}$$

[Figure: φ2 and φ3 plotted over the nodes x1 = 0, x2 = L/2, x3 = L.]
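A small Python sketch (illustrative, not from the slides, assuming L = 1) of these hat functions, checking the nodal property $\varphi_i(x_j) = \delta_{ij}$ and, together with the discarded $\varphi_1$, the partition of unity:

# Minimal sketch of the piecewise-linear hat functions used in the example.
import numpy as np

L = 1.0

def phi1(x):  # the hat at x1 = 0, discarded above because u(0) = 0
    return np.where(x <= L / 2, 1.0 - 2.0 * x / L, 0.0)

def phi2(x):  # hat centered at x2 = L/2
    return np.where(x <= L / 2, 2.0 * x / L, 2.0 - 2.0 * x / L)

def phi3(x):  # half-hat at x3 = L
    return np.where(x >= L / 2, 2.0 * x / L - 1.0, 0.0)

nodes = np.array([0.0, L / 2, L])
print(phi2(nodes), phi3(nodes))                      # [0. 1. 0.] and [0. 0. 1.]  (Kronecker delta)
x = np.linspace(0.0, L, 101)
print(np.allclose(phi1(x) + phi2(x) + phi3(x), 1))   # True: partition of unity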
Matrix Formulation

Given our matrix problem $Kc = F$:

$$\sum_{j=1}^{N} \underbrace{c_j}_{c} \, \underbrace{\int_0^L \frac{d\varphi_j}{dx}\frac{d\varphi_i}{dx} \, dx}_{K_{ij}} \;=\; \underbrace{\int_0^L p_0 \varphi_i \, dx}_{F_i} \quad\Longleftrightarrow\quad Kc = F$$

We can insert the φi chosen on the previous slide and arrive at a linear algebra problem. Differentiating the basis functions, then evaluating the integrals, we have:

$$K = \frac{1}{L}\begin{bmatrix} 4 & -2 \\ -2 & 2 \end{bmatrix}, \qquad F = p_0 L \begin{bmatrix} 1/2 \\ 1/4 \end{bmatrix}$$

In a computer code, differentiating the basis functions can be done in advance, since the basis functions are known, and integration would be performed numerically by quadrature. It is standard in FEM to use Gaussian quadrature, since it is exact for polynomials.

Notice K is symmetric, as expected.
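As a sketch of what such a code might do (illustrative only; it assumes the hat functions above, EA = 1, L = 1, p0 = 1, and a 2-point Gauss rule, which is exact for the polynomial integrands here), the following assembles K and F element by element and reproduces the matrices above:

# Minimal sketch: assemble K and F for the 2-element example by Gaussian quadrature.
import numpy as np

L, p0 = 1.0, 1.0
nodes = np.array([0.0, L / 2, L])
elements = [(0, 1), (1, 2)]
gauss_pts = np.array([-1.0, 1.0]) / np.sqrt(3.0)   # 2-point Gauss rule on [-1, 1]
gauss_wts = np.array([1.0, 1.0])

K = np.zeros((3, 3))
F = np.zeros(3)
for (a, b) in elements:
    xa, xb = nodes[a], nodes[b]
    h = xb - xa
    for xi, w in zip(gauss_pts, gauss_wts):
        N = np.array([(1 - xi) / 2, (1 + xi) / 2])   # local linear shape functions on [-1, 1]
        dN = np.array([-1.0 / h, 1.0 / h])           # physical derivatives (constant on the element)
        K[np.ix_([a, b], [a, b])] += w * np.outer(dN, dN) * (h / 2)
        F[[a, b]] += w * p0 * N * (h / 2)

# Apply the Dirichlet condition u(0) = 0 by dropping row/column 0 (i.e., discarding phi1).
K_red, F_red = K[1:, 1:], F[1:]
print(K_red * L)        # [[ 4. -2.] [-2.  2.]]  ->  K = (1/L) [[4, -2], [-2, 2]]
print(F_red / (p0 * L)) # [0.5  0.25]            ->  F = p0 L [1/2, 1/4]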
Solution

Solving the linear system on the previous slide (e.g., by Gaussian elimination), we obtain our coefficients $c_i$:

$$c = \begin{bmatrix} 3 p_0 L^2 / 8 \\ p_0 L^2 / 2 \end{bmatrix},$$

which, when multiplied by the basis functions φi, gives our final numerical solution:

$$u_h(x) = \begin{cases} \tfrac{3}{4} p_0 L x & \text{when } x \in \left[0, \tfrac{L}{2}\right] \\[4pt] \tfrac{1}{4} p_0 \left( L^2 + L x \right) & \text{when } x \in \left[\tfrac{L}{2}, L\right] \end{cases}$$

The exact analytical solution for this problem is:

$$u(x) = p_0 L x - \frac{p_0 x^2}{2}$$
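A short sketch (again illustrative, reusing the K and F given above with L = p0 = 1) that solves the system and compares the nodal values with the exact solution:

# Minimal sketch: solve K c = F and compare nodal values with the exact solution.
import numpy as np

L, p0 = 1.0, 1.0
K = (1.0 / L) * np.array([[4.0, -2.0], [-2.0, 2.0]])
F = p0 * L * np.array([0.5, 0.25])

c = np.linalg.solve(K, F)
print(c)                                  # [0.375 0.5] = [3 p0 L^2 / 8, p0 L^2 / 2]

u_exact = lambda x: p0 * L * x - p0 * x**2 / 2
print(u_exact(np.array([L / 2, L])))      # [0.375 0.5] -> the FE solution is nodally exact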
Solution
[Figure: plot of u(x) versus x (with L = 1, p0 = 1) comparing the exact solution ("Exact") and the FE approximation ("Approx"); the two curves coincide at the nodes.]
Notice the numerical solution is “interpolatory”, or nodally exact.
Concluding Remarks

•Because the basis functions are compact, the matrix K is typically tridiagonal or otherwise sparse, which allows for fast solvers that take advantage of the structure (regular Gaussian elimination is O(N³), where N is the number of unknowns!). Memory requirements are also reduced. A small sparse-assembly sketch follows this list.

•Continuity between elements is not required; relaxing it leads to the "Discontinuous Galerkin" method.
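To illustrate the point about sparsity (an illustrative sketch, not from the slides; the element count n is an arbitrary choice), the same 1-D problem can be assembled for many uniform elements as a tridiagonal SciPy sparse matrix and solved by a sparse direct solver whose cost grows roughly linearly with n:

# Minimal sketch: assemble the 1-D stiffness matrix for n uniform linear elements
# as a sparse tridiagonal matrix and solve with a sparse direct solver.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

L, p0, n = 1.0, 1.0, 1000
h = L / n

# Unknowns are the nodal values at x = h, 2h, ..., L (u(0) = 0 is eliminated).
main = np.full(n, 2.0 / h)
main[-1] = 1.0 / h                      # the last node touches only one element
off = np.full(n - 1, -1.0 / h)
K = diags([off, main, off], [-1, 0, 1], format="csc")

F = np.full(n, p0 * h)                  # load vector: full hat area for interior nodes
F[-1] = p0 * h / 2                      # end node gets half an element's load

u = spsolve(K, F)                       # tridiagonal solve
x = np.linspace(h, L, n)
print(np.max(np.abs(u - (p0 * L * x - p0 * x**2 / 2))))   # near machine precision: nodally exact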