Extrapolation Models for Convergence Acceleration and Function's Extension

David Levin
Tel-Aviv University
MAIA Erice 2013
Extrapolation – Given values on some domain, estimate values
outside the domain.
Prediction, Forecasting, Extension, Continuation
Extrapolation to the limit: Infinite series, Infinite Integrals
Convergence Acceleration
Models
Extension of Univariate and Bivariate Functions
Given N terms of an infinite series $\{a_1, a_2, a_3, \dots, a_N\}$,
can we estimate the infinite sum $\sum_{n=1}^{\infty} a_n$?
For example, $\sum_{n=1}^{\infty} n^{-1.5} J_0^2(nx) \sin(2nx)$.
We must assume that the unknown terms can be determined by the given
terms, i.e., we must assume the existence of a model, a prediction model!
A general model: $M[a_{n+1}, a_{n+2}, \dots, a_{n+m+1}] = 0$

For evaluating $\int_0^{\infty} f(x)\,dx$ using values $f(x)$, $x \in [0, N]$,
a natural model would be a differential equation
$$M[f(x), f'(x), \dots, f^{(m)}(x)] = 0$$
or a difference equation.
Q: What is a good model?
A: A model which covers a large class of series, and can
be used for extrapolation.
Q: What about linear models? With constant coefficients?
A: Exact for rational functions!
Leading to Padé approximations.
Q: What about linear models with varying coefficients?
A: They cover a very large class of series in applied math.
Q: How to use linear models for convergence acceleration?
Q: How to use linear models for function’s extension?
Linear model with constant coefficients
Given N terms of an infinite series $\{a_1, a_2, a_3, \dots, a_N\}$,
let us assume that the unknown terms can be predicted by a linear model with constant
coefficients:
$$a_{n+m+1} = \sum_{i=1}^{m} p_i\, a_{n+i} \qquad (1)$$
The coefficients $\{p_i\}$ of the model can be found by fitting this model to the given terms:
$$a_{n+m+1} = \sum_{i=1}^{m} p_i\, a_{n+i}, \qquad n = N-2m, \dots, N-m-1$$
Assuming such a model is equivalent to assuming that the terms of the series, as a
function of their index, are sums of exponentials (including polynomials).
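The fitting step can be sketched numerically. A minimal illustration, assuming NumPy; the function name `fit_recurrence` and the two-exponential test sequence are hypothetical, not from the talk (indices are 0-based, which shifts the recurrence but not its coefficients):

```python
import numpy as np

def fit_recurrence(a, m):
    """Fit constant coefficients p_1..p_m so that, for every window,
    a[n+m] ~= sum_k p_k * a[n+k-1] (0-based), via least squares."""
    a = np.asarray(a, dtype=float)
    N = len(a)
    A = np.column_stack([a[i:N - m + i] for i in range(m)])  # lagged terms
    p, *_ = np.linalg.lstsq(A, a[m:], rcond=None)
    return p

# a sequence that is exactly a sum of two exponentials
n = np.arange(1, 30)
a = 2 * 0.5**n + 0.8**n
p = fit_recurrence(a, m=2)
# roots of sigma^2 - p_2*sigma - p_1 recover the exponential bases
sigma = np.roots([1.0, -p[1], -p[0]])
```

With exact data the characteristic roots come back as 0.5 and 0.8, the two bases of the sequence.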
Using the model $a_{n+m+1} = \sum_{i=1}^{m} p_i\, a_{n+i}$ for approximating $S = \sum_{n=1}^{\infty} a_n$:
applying $\sum_{n \ge k}$ to the model, we obtain
$$S - S_{k+m} = \sum_{i=1}^{m} p_i\,(S - S_{k+i-1}), \qquad \text{where } S_k = \sum_{j=1}^{k} a_j.$$
If we know $\{p_i\}$, we can solve for S.
The resulting approximation to S is the same as Wynn's $\varepsilon$-algorithm, and it
also gives the Padé approximant in the case of power series.
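Once the $p_i$ are fitted, solving this relation for S is a single linear step. A small sketch, assuming NumPy; `model_sum` and the geometric test series are illustrative, not from the talk:

```python
import numpy as np

def model_sum(a, m, k=1):
    """Estimate S = sum_{n>=1} a_n from terms a_1..a_N: fit the constant-
    coefficient model by least squares, then solve
    S - S_{k+m} = sum_i p_i (S - S_{k+i-1}) for S."""
    a = np.asarray(a, dtype=float)
    N = len(a)
    A = np.column_stack([a[i:N - m + i] for i in range(m)])
    p, *_ = np.linalg.lstsq(A, a[m:], rcond=None)
    Sk = np.cumsum(a)                                  # Sk[j-1] = S_j
    num = Sk[k + m - 1] - sum(p[i - 1] * Sk[k + i - 2] for i in range(1, m + 1))
    return num / (1.0 - p.sum())

n = np.arange(1, 25)
a = 2 * 0.5**n + 0.8**n        # exact sum: 2*1 + 4 = 6
S_est = model_sum(a, m=2)
```

For this order-2 model the estimate reproduces the exact sum 6 from 24 terms.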
Padé approximation is very good for series whose terms approximately satisfy
a linear model with constant coefficients – i.e., sums of exponentials.
What about other series, e.g.,
$$\sum_{n=1}^{\infty} n^{-1.5} J_0^2(nx) \sin(2nx)\,?$$
Which model is appropriate here?
A model with varying coefficients:
Definition: $\{a_n\} \in B^{(m)}$ if
$$a_{n+m+1} = \sum_{i=1}^{m} p_i(n)\, a_{n+i} \qquad (2)$$
where $\{p_i\} \in A^{(k_i)}$, $k_i \in \mathbb{Z}$, i.e., the coefficients have asymptotic expansions
$$p_i(n) \sim n^{k_i} \sum_{j=0}^{\infty} \alpha_{i,j}\, n^{-j}, \quad \text{as } n \to \infty. \qquad (3)$$
Examples: for $a_n = n^{-1.5} J_0^2(nx) \sin(2nx)$:
$\{n^{-1.5}\} \in B^{(1)}$, $\{J_0(nx)\} \in B^{(2)}$, $\{\sin(2nx)\} \in B^{(2)}$,
$\Rightarrow \{J_0^2(nx)\} \in B^{(3)} \Rightarrow \{a_n\} \in B^{(6)}$
In general:
$\{a_n\} \in B^{(m)}$, $\{b_n\} \in B^{(k)}$ $\Rightarrow$ $\{a_n + b_n\} \in B^{(m+k)}$, $\{a_n b_n\} \in B^{(mk)}$, $\{a_n^2\} \in B^{(m(m+1)/2)}$
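The claim that $\{\sin(2nx)\}$ satisfies an order-2 linear model admits a quick numerical sanity check: as a function of n, it obeys the identity $\sin(2(n+2)x) = 2\cos(2x)\sin(2(n+1)x) - \sin(2nx)$ (here even with constant coefficients). A minimal check, assuming NumPy; the sample point is arbitrary:

```python
import numpy as np

x = 0.7                                  # arbitrary fixed sample point
n = np.arange(1, 60)
a = np.sin(2 * n * x)
# order-2 model: a_{n+2} = 2*cos(2x) * a_{n+1} - a_n
resid = a[2:] - (2 * np.cos(2 * x) * a[1:-1] - a[:-2])
```

The residual vanishes to machine precision for every n.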
A model with varying coefficients (cont.)
Assuming $\{a_n\} \in B^{(m)}$, i.e.,
$$a_{n+m+1} = \sum_{i=1}^{m} p_i(n)\, a_{n+i} \quad \text{with} \quad \{p_i\} \in A^{(k_i)},$$
leads to the $d^{(m)}$ transformation of series (Levin-Sidi 1980).
Application of the $d^{(m)}$ transformation does not require knowledge of $\{p_i(n)\}$.
It involves solving a linear system for the approximation to the infinite sum S.
It follows that
$$S - S_n \approx \sum_{i=0}^{m-1} \Delta^i a_n\, q_i(n), \qquad \text{where } S_k = \sum_{j=1}^{k} a_j, \quad q_i \in A^{(m_i)}.$$
Truncating the asymptotic expansions of $\{q_i(n)\}$ we form a system of linear
equations for S.
For details and analysis see: Practical Extrapolation Methods, by A. Sidi.
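The "truncate and solve a linear system" step can be illustrated in the simplest case m = 1. The sketch below assumes NumPy; `d1_sum` is a hypothetical name, and this is a least-squares variant of the idea rather than the exact $d^{(1)}$ transformation. It posits $S_n \approx S - n\,a_n(\beta_0 + \beta_1 n^{-1} + \dots + \beta_J n^{-J})$ and solves for S and the $\beta_j$ together:

```python
import numpy as np

def d1_sum(a, J):
    """Estimate S = sum a_n by truncating the m = 1 model
    S_n ~= S - n*a_n*(b_0 + b_1/n + ... + b_J/n^J) and solving the
    resulting overdetermined linear system by least squares."""
    a = np.asarray(a, dtype=float)
    N = len(a)
    n = np.arange(1, N + 1)
    Sn = np.cumsum(a)
    cols = [np.ones(N)] + [-(n * a) * n**(-float(j)) for j in range(J + 1)]
    A = np.column_stack(cols)              # unknowns: S, b_0, ..., b_J
    sol, *_ = np.linalg.lstsq(A, Sn, rcond=None)
    return sol[0]

# telescoping series: a_n = 1/(n(n+1)), S = 1, tail is exactly n*a_n
n = np.arange(1, 21)
S_est = d1_sum(1.0 / (n * (n + 1)), J=2)
```

For the telescoping test series the truncated model is exact, so the linear system recovers S = 1 from 20 terms.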
Linear Models for Double Series:
$$S = \sum_{m,n=1}^{\infty} a_{m,n}, \qquad \sum_{i=1}^{k} \sum_{j=1}^{\ell} p_{i,j}\, a_{n+i,\,m+j} = 0, \quad p_{k,\ell} = 1. \qquad (4)$$
Here, even if we have the model, we cannot compute the terms of the series
based upon a finite number of terms.
This is due to the ODE – PDE difference:
an ODE + initial conditions defines the solution,
but a PDE requires boundary conditions to determine the solution in a domain!
Yet, such models are related to multivariate Padé approximations, and to
other rational approximations.
Extension of functions
Given function values in a domain D, we would like to extend the function so that the
extension continues its behavioral trends. As in
convergence acceleration, we assume a model, and use it for the extension:
Ideally, we would like to find ‘THE’ differential equation which the function fulfills
on D, and then use it to extend the function outside D.
Since we assume discrete data on D, maybe noisy, we shall look instead for a
difference equation which ‘best’ describes the behavior of the function on D.
We shall discuss different models, and how to use them for extension.
Linear Model – Constant Coef. – Univariate case
Given function values in [a,b]: $\{x_i, f_i\}_{i=0}^{N}$, $x_i = a + ih$, $i = 0, \dots, N$, $h = (b-a)/N$.
Assume f is bandlimited, $\mathrm{spectrum}(f) \subset [-B, B]$, and let $d = nh \le \frac{1}{2B}$.
By the Nyquist–Shannon sampling theorem, sampling distance d is sufficient for
reconstructing f. We use sequences of 'mesh size' $d = nh$:
$$\{f_{i+(j-1)n}\}_{j=1}^{[N/n]+1}$$
To these sequences we fit a linear constant-coefficients model of order m,
$$f_i = \sum_{k=1}^{m} p_k\, f_{i-(m-k+1)n},$$
using least-squares minimization to find the model coefficients:
$$\sum_{i=mn}^{N} \Big( f_i - \sum_{k=1}^{m} p_k\, f_{i-(m-k+1)n} \Big)^2 \to \min.$$
All sequences satisfying $f_i = \sum_{k=1}^{m} p_k\, f_{i-(m-k+1)n}$ are of the form (*)
$$f_{in} = \sum_{j=1}^{m} c_j\, \sigma_j^{\,i}$$
where $\{\sigma_j\}$ are the roots of $p(\sigma) = \sigma^m - \sum_{k=1}^{m} p_k\, \sigma^{k-1}$ (if the roots are simple).
The extension algorithm:
1. Find the model coefficients by least-squares minimization
2. Find the roots $\{\sigma_j\}$
3. Define the approximation on [a,b] and its extension by fitting
$$g(x) = \sum_{j=1}^{m} c_j\, \sigma_j^{x} \qquad (\text{w.l.o.g. } d = 1)$$
This is nothing but Prony's method (1795)
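Steps 1–3 can be sketched end to end. NumPy assumed; `prony_extend` and the two-exponential test signal are illustrative, not from the talk:

```python
import numpy as np

def prony_extend(f, m, x_new):
    """Prony's method: 1) fit model coefficients by least squares,
    2) take roots of the characteristic polynomial, 3) fit amplitudes
    and evaluate g(x) = sum_j c_j * sigma_j**x (spacing d = 1,
    samples at x = 0, 1, ..., N-1)."""
    f = np.asarray(f, dtype=float)
    N = len(f)
    # 1. f_i ~= sum_k p_k f_{i-(m-k+1)}  ->  least squares for p
    A = np.column_stack([f[k:N - m + k] for k in range(m)])
    p, *_ = np.linalg.lstsq(A, f[m:], rcond=None)
    # 2. roots of p(sigma) = sigma^m - sum_k p_k sigma^(k-1)
    sigma = np.roots(np.concatenate(([1.0], -p[::-1]))).astype(complex)
    # 3. amplitudes c_j fitted on the samples, then evaluation anywhere
    V = np.power.outer(sigma, np.arange(N)).T          # V[i, j] = sigma_j**i
    c, *_ = np.linalg.lstsq(V, f.astype(complex), rcond=None)
    g = np.power.outer(sigma, np.asarray(x_new, dtype=float)).T @ c
    return g.real

i = np.arange(31)
f = 2 * 0.9**i + 0.5**i
g35 = prony_extend(f, m=2, x_new=np.array([35.0]))
```

For exact two-exponential data the extension at x = 35 matches the true value of the signal there.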
Q: Is it applicable to varying-coefficient models? Or to the 2D case?
Example 1 – Fitting Exponentials
$$f(x) = 0.8^x \cos(x) + 2\sin(2x) + \frac{1}{x+1}$$
$m = 6$; $d = 1$; $\{\sigma_j\} = \{0.273,\ -0.864,\ -0.414 \pm 0.908i,\ 0.542 \pm 0.652i\}$
To enable varying-coefficient models, and for 2D applications, we suggest an
algorithm which does not require solving the difference equation:
Denote by $g = \{g_i\} \in M$ a sequence satisfying a model M.
We look for $g \in M$ which approximates the given data, and is smooth.
The algorithm: find $g = \{g_i\} \in M$ which minimizes the functional
$$F_p(g) = \sum_i |\Delta^p g_i|^2 + \lambda \sum_{i=0}^{N} |f_i - g_i|^2$$
Q: How to choose the parameter $\lambda$?
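One concrete way to carry out this constrained minimization, for a constant-coefficient model with known coefficients, is to parametrize the set M by the null space of the recurrence constraints and solve a small least-squares problem in the reduced coordinates. A sketch, assuming NumPy; the names, the order-2 test model, and the choice of λ are illustrative, and the model coefficients are taken as already fitted:

```python
import numpy as np

def smooth_model_fit(f, p, L, lam=1e-4, order=2):
    """Minimize sum |Delta^order g_i|^2 + lam * sum |f_i - g_i|^2 over
    sequences g_0..g_L satisfying g_i = sum_k p_k g_{i-(m-k+1)} exactly
    (data f on indices 0..N, N < L gives the extension part)."""
    p = np.asarray(p, dtype=float)
    f = np.asarray(f, dtype=float)
    m, N = len(p), len(f) - 1
    # constraint matrix: one row per recurrence g_i - sum_k p_k g_{i-(m-k)} = 0
    C = np.zeros((L + 1 - m, L + 1))
    for r, i in enumerate(range(m, L + 1)):
        C[r, i] = 1.0
        C[r, i - m:i] -= p
    _, _, Vt = np.linalg.svd(C)
    Z = Vt[C.shape[0]:].T                              # basis of ker C
    D = np.diff(np.eye(L + 1), n=order, axis=0)        # smoothness operator
    E = np.eye(L + 1)[:N + 1]                          # restriction to data
    A = np.vstack([D @ Z, np.sqrt(lam) * (E @ Z)])
    b = np.concatenate([np.zeros(D.shape[0]), np.sqrt(lam) * f])
    y, *_ = np.linalg.lstsq(A, b, rcond=None)
    return Z @ y                                       # g_0, ..., g_L

i = np.arange(21)
f = 2 * 0.9**i + 0.5**i                  # data on 0..20; model roots 0.9, 0.5
g = smooth_model_fit(f, p=[-0.45, 1.4], L=30, lam=100.0)
```

The returned g satisfies the recurrence on all of 0..30 by construction, while tracking the data on 0..20.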
Example 2 – Fitting Smooth Approximation-Extension
$$f(x) = 0.8^x \cos(x) + 2\sin(2x) + \frac{1}{x+1}$$
$m = 6$; $d = 1$; $p = 2$; $\lambda = 0.0001$
Example 3 – Approximation-Extension using varying coefficients
$$f(x) = \frac{5\cos(2x)}{x^2+1} + x^{1.5} \sin(x)$$
$m = 6$; $d = 1$; $p = 2$; $\lambda = 0.0001$
$$(1 + q_7\, u(x_i))\, f_i = \sum_{k=1}^{6} (p_k + q_k\, u(x_i))\, f_{i-(m-k+1)n}, \qquad u(x) = \frac{1}{x+1}$$
Bivariate case: 2D Linear Models – Constant Coef.
Given function values in [a,b]x[a,b]: $\{x_i, y_j, f_{ij}\}$, $(x_i, y_j) = (a + ih,\ a + jh)$, $i, j = 0, \dots, N$, $h = (b-a)/N$.
Assume f is bandlimited, $\mathrm{spectrum}(f) \subset [-B, B]^2$, and let $d = nh \le \frac{1}{2B}$.
By the Nyquist–Shannon sampling theorem, sampling distance d is sufficient for
reconstructing f. We use sequences of 'mesh size' $d = nh$: $\{f_{i+(k-1)n,\ j+(\ell-1)n}\}$.
To these sequences we fit a linear constant-coefficients model M of order $m \times m$:
$$\sum_{k,\ell=1}^{m} p_{k,\ell}\, f_{i+(k-1)n,\ j+(\ell-1)n} = 0, \qquad p_{m,m} = 1.$$
Note: This model includes bivariate exponentials, and much more.
Unlike the 1D case, the dimension of $\{g \in M\}$ is not finite.
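Fitting the m×m coefficient array $p_{k,\ell}$ under the normalization $p_{m,m} = 1$ is again a linear least-squares problem. A sketch, assuming NumPy; `fit_2d_model` and the separable test surface are illustrative, not from the talk:

```python
import numpy as np

def fit_2d_model(F, m, n=1):
    """Fit p_{k,l} (k,l = 1..m, with p_{m,m} = 1) so that
    sum_{k,l} p_{k,l} * F[i+(k-1)n, j+(l-1)n] ~= 0 over all (i, j)."""
    Ni, Nj = F.shape
    ri, rj = Ni - (m - 1) * n, Nj - (m - 1) * n        # admissible windows
    cols = [F[k * n:k * n + ri, l * n:l * n + rj].ravel()
            for k in range(m) for l in range(m)]
    A = np.column_stack(cols)                          # last col: p_{m,m} term
    p, *_ = np.linalg.lstsq(A[:, :-1], -A[:, -1], rcond=None)
    return np.append(p, 1.0).reshape(m, m)

# a sum of two separable exponentials is annihilated by a 2x2 model
i = np.arange(20)[:, None]
j = np.arange(20)[None, :]
F = 0.9**i * 0.8**j + 0.5**i * 0.7**j
P = fit_2d_model(F, m=2)
```

The fitted model annihilates the surface on every shifted window.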
Example 4 – 2D Approximation-Extension
$$P = \begin{pmatrix} -0.0539 & 0.1662 & -0.0964 & -0.7606 \\ -0.3700 & -0.0796 & -0.3966 & -0.4032 \\ -0.2253 & 0.0797 & -0.4422 & 0.2967 \\ -0.1989 & 0.1743 & 0.5024 & 1.0000 \end{pmatrix}$$
We look for $g \in M$ which approximates
the given data, and is smooth:
The optimization problem is heavy!
Observation: Let
$$g(x, y) = \sum_{(i,j) \in \mathbb{Z}^2} c_{ij}\, \varphi(x - i,\ y - j), \qquad (x, y) \in \mathbb{R}^2,$$
and let $\{c_{i,j}\} \in M$, i.e.,
$$\sum_{k,\ell} p_{k,\ell}\, c_{i+k,\ j+\ell} = 0, \qquad \forall (i, j) \in \mathbb{Z}^2.$$
Then $g \in M$, i.e.,
$$\sum_{k,\ell} p_{k,\ell}\, g(x + k,\ y + \ell) = 0, \qquad \forall (x, y) \in \mathbb{R}^2.$$
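The observation can be checked numerically; here is a 1D analog, assuming NumPy. The cubic B-spline formula is the standard cardinal one, and the order-2 coefficient model with roots 0.9 and 0.5 is illustrative: if the spline coefficients satisfy the recurrence, so does g at every real x.

```python
import numpy as np

def bspline3(x):
    """Cardinal cubic B-spline centered at 0, support [-2, 2]."""
    ax = np.abs(x)
    return np.where(ax < 1, 2/3 - ax**2 + ax**3 / 2,
                    np.where(ax < 2, (2 - ax)**3 / 6, 0.0))

idx = np.arange(0, 45)
c = 0.9**idx + 0.5**idx          # satisfies c_{i+2} = 1.4 c_{i+1} - 0.45 c_i

def g(x):
    x = np.asarray(x, dtype=float)
    return sum(ci * bspline3(x - i) for i, ci in zip(idx, c))

# the same model transfers from the coefficients to g(x) for all real x
x = np.linspace(5.0, 30.0, 97)
resid = g(x + 2) - (1.4 * g(x + 1) - 0.45 * g(x))
```

Away from the ends of the coefficient range, the model residual of g vanishes to machine precision at arbitrary (non-integer) x.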
E.g., choosing $\varphi$ as a cubic tensor-product B-spline, we can approximate the
unknown function well by
$$g \in \mathrm{span}\{\varphi(\cdot - i,\ \cdot - j)\} \cap M.$$
Using this observation we can now work with smooth functions satisfying linear models
with constant coefficients, avoiding the high cost of the above optimization approach
of finding $g = \{g_i\} \in M$ minimizing
$$F_p(g) = \sum_i |\Delta^p g_i|^2 + \lambda \sum_{i=0}^{N} |f_i - g_i|^2.$$
We can even define basis functions spanning $\mathrm{span}\{\varphi(\cdot - i,\ \cdot - j)\} \cap M$
within a given domain.
Example of a basis function:
Example 5 – 2D Approximation-Extension using spline basis
$$P = \begin{pmatrix} 0.0772 & 0.1560 & -0.3483 & -0.1505 \\ 0.0480 & 0.2556 & 0.1838 & -0.0669 \\ 0.1204 & -0.0840 & -0.0200 & -0.4786 \\ -0.1863 & -0.2081 & -0.3254 & 1.0000 \end{pmatrix}$$
$$g \in \mathrm{span}\{\varphi(\cdot - i,\ \cdot - j)\} \cap M$$
Example 6 – Blending between models
From noisy $\cos(2x)$ into $e^{-2x}$
Thank you!