Chapter VI: Dosage adjustment

Estimation + optimization in PK modeling
- Introduction to modeling
- Estimation criteria
- Numerical optimization, examples

http://pharmapk.pharmacie.univ-mrs.fr/
Est+Opt CIRM 18/8/2009
A. ILIADIS
Real process and mathematical model

[Figure: concentrations y (µg/mL) vs. time t (h), semilog scale, 0-8 h: observations from the real process alongside the fitted model.]

Math. model:

y(t) = (D/V) exp(-(CL/V) t),  with elimination rate constant k = CL/V.

Fitted values: V = 16 L, k = 0.1 h^-1.
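The fitted 1-cpt equation above can be evaluated directly. A minimal Python sketch, using the slide's fitted values V = 16 L and k = 0.1 h^-1 (so CL = k·V = 1.6 L/h); the dose D = 1600 mg is an assumption chosen so that y(0) = 100 mg/L, since the slide does not state D:

```python
import math

def one_cpt_bolus(t, D, V, CL):
    """1-cpt IV bolus model: y(t) = (D/V) * exp(-(CL/V) * t)."""
    return (D / V) * math.exp(-(CL / V) * t)

# Fitted values from the slide: V = 16 L, k = CL/V = 0.1 h^-1, hence CL = 1.6 L/h.
# D = 1600 mg is an assumed dose giving y(0) = 100 mg/L (= 100 µg/mL).
V, CL, D = 16.0, 1.6, 1600.0
y0 = one_cpt_bolus(0.0, D, V, CL)   # initial concentration D/V
y8 = one_cpt_bolus(8.0, D, V, CL)   # concentration after 8 h
```

Note that mg/L and µg/mL are the same unit, so dosing in mg with V in L matches the figure's axis directly.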
Functional scheme

[Diagram: the administration protocol drives both the PK process and the PK model (the latter initialized with a priori information). The PK process output plus measurement noise gives the observation; the PK model gives the prediction. Observation and prediction are compared through an equivalence criterion, which feeds a nonlinear programming block acting on the model parameters.]
Mathematical modeling

Models are defined by:
- their structure (number and connectivity of compartments, etc.), expressed by mathematical operations involving adjustable parameters.
  Ex: 1-cpt, y(t) = (D/V) exp(-(CL/V) t), an exponential structure with parameters x = (V, CL).
  Choosing the structure is CHARACTERIZATION.
- the numerical values of the parameters used.
  Ex: V = 16 L, CL = 1.6 L h^-1.
  Assigning the values is ESTIMATION.

MODELING = CHARACTERIZATION (structure) + ESTIMATION (parameters), i.e., System Identification.
Checking identifiability

Structural identifiability: given a hypothetical structure with unknown parameters, and a set of proposed experiments (not measurements!), would the unknown parameters be uniquely determinable from these experiments?

Parametric identifiability: estimating the parameters from measurements with errors, and optimizing the sampling designs.

[Figure: 2-cpt model with input u(t) into compartment 1 (volume V1), exchange rates k12 and k21, and elimination rates k1 and k2. With observation restricted to compartment 1, k2 is structurally non-identifiable; repeated fits give drifting estimates of k (h^-1, 9.10E-2 to 9.60E-2) and 1/V (L^-1, 5.70E-2 to 5.90E-2): a non-consistent V estimate.]
Structural identifiability

It depends on the observation site!

[Diagram: the same 2-cpt structure shown twice, with input u(t) into compartment 1 (volume V1) or into compartment 2 (volume V2), exchange rates k12 and k21, and eliminations k1 and k2.]

But ONLY certain parameters, or combinations of them (involving k1, k12, k21, k2), are identifiable.

Solutions:
- Grouping: estimate identifiable combinations (e.g., involving V1 and k1) instead of the individual parameters.
- Setting: fix the non-identifiable parameters to a priori values.
Functional scheme (dynamic)

Model linear in p: no loop, one-stage estimation.
Model nonlinear in p: many loops until convergence.

[Diagram: the functional scheme of the earlier slide, now with arbitrary initial values supplied to the PK model; the equivalence criterion and the nonlinear programming block loop back to update the model parameters until convergence.]
Iterations, parameter convergence

Ex: Fotemustine neutrophil toxicity. Nonlinear modeling with n = 29 observations and p = 8 parameters.

[Figure: parameter values and RMSE vs. iteration number (0 to 25), log scale from 10^-2 to 10^1: starting from arbitrary initial values, the 1st, 2nd and 3rd parameters converge to their optimized final values while the RMSE decreases.]
Errors in the functional scheme

The existing errors:
- experimental,
- structural,
- parametric.

The residual error comprises:
- the experimental error,
- the structural error (model misspecification).

[Figure: RMSE vs. iteration number (log scale, 10^-2 to 10^1, 0 to 20 iterations): the initial parametric error is canceled at convergence, leaving the residual error.]
Parametric and output spaces

[Diagram: in the output space, the real process (PK process) yields the observation and the artificial mechanism (PK model) yields the prediction; their comparison involves the measurement error as a random component. In the parametric space, the corresponding comparison concerns the precision of the estimates.]
Optimal estimation

Estimation is the operation of assigning numerical values to unknown parameters, based on noise-corrupted observations.

Organization of the variables:
- y (m-dimensional vector): the observed drug concentrations over time;
- x (p-dimensional vector): the random parameters to be estimated.

Consider the joint pdf f(x, y), and then:
- the marginal f(x) is called the prior pdf [the marginal f(y) is not of interest];
- the conditional f(x|y) is called the posterior pdf, leading to
  x_B = arg max f(x|y)   (MAP);
- the conditional f(y|x) leads to the likelihood function, and to
  x_L = arg max f(y|x)   (MLE).
Maximum a posteriori (MAP)

Design: a reasonable estimate of x would be the mode x_B of the posterior density for the given observation y = y0:

x_B = arg max f(x|y0)

Ex: [Figure: posterior densities f(x|y) vs. x for observations y1, y2, y3; if y0 = y1 then x_B = 2, and if y0 = y3 then x_B = 6.] Note the role of the dispersion of the posterior.
Maximum likelihood (MLE)

Design: after the observation y = y0 has been obtained, a reasonable estimate of x would be x_L, the value which gives the particular observation y0 the highest probability of occurrence:

x_L = arg max f(y0|x)

Ex: [Figure: conditional densities f(y|x) vs. y for x1 = 2, x2 = 3, x3 = 8; if y0 = 2.5 then x_L = x1, and if y0 = 6.0 then x_L = x3.] Note the role of the precision of each conditional density.
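The MLE pick in the example above can be reproduced numerically. A minimal sketch: each candidate x is the mean of a normal density f(y|x); the means 2, 3, 8 follow the slide's x1, x2, x3, while the standard deviations are assumptions for illustration (the slide's exact dispersions are not given):

```python
import math

def normal_pdf(y, mu, sigma):
    """Density of N(mu, sigma^2) evaluated at y."""
    return math.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical candidate parameter values: mean x -> assumed sigma.
candidates = {2.0: 0.8, 3.0: 1.5, 8.0: 1.0}

def mle(y0):
    """Pick the x that gives the observation y0 the highest likelihood f(y0|x)."""
    return max(candidates, key=lambda x: normal_pdf(y0, x, candidates[x]))

x_at_25 = mle(2.5)   # near x1 = 2, whose density dominates there
x_at_60 = mle(6.0)   # near x3 = 8, as in the slide's example
```

This also shows "the role of the precision": with these dispersions, the narrower density at x1 beats the wider one at x2 even though y0 = 2.5 is equidistant from both means.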
The influence of x on the conditional

[Figure: conditional densities f(y|x) vs. y (g/L, log scale from 10^-4 to 10^-3) for x = 8 L/h and x = 40 L/h: both the location and the spread of f(y|x) change with x.]
The influence of y on the likelihood

[Figure: likelihood L(y|x) vs. x (L/h, log scale from 10^0 to 10^2) for y = 0.2E-3 g/L (maximum at x = 47.86 L/h) and y = 0.4E-3 g/L (maximum at x = 12.59 L/h).]
MLE criterion for single-output

Initial form:

x̂_L = arg max ln f(y|x)

Hypotheses:
- H0: the model is an exact description of the real process.
- H1: additive error: y_i = y_M(t_i, x) + e_i, i.e., the error is e_i = y_i - y_M(t_i, x).
- H2: normal error: e_i ~ N(0, sigma_i^2), hence y_i ~ N(y_M(t_i, x), sigma_i^2).
- H3: independence: f(y_i, y_j|x) = f(y_i|x) f(y_j|x), and f(e_i, e_j|x) = f(e_i|x) f(e_j|x).

[Figure: observations y_i and y_j scattered around the model output y_M(t, x), with the corresponding errors e_i and e_j.]
Variance heterogeneity

The regression model y_i = y_M(t_i, x) + e_i assumes that e_i ~ N(0, sigma^2), i.e., a constant error variance.

This assumption needs to be relaxed (particularly when the model is highly nonlinear): find a transformation function h(., lambda) under which the error assumptions hold, i.e., transformed models

h(y_i, lambda) = h(y_M(t_i, x), lambda) + e_i,  where e_i ~ N(0, sigma^2).

Box-Cox transformations (lambda must be estimated!):

h(y, lambda) = (y^lambda - 1) / lambda  if lambda is not 0,  and  h(y, 0) = log y.

Other transformations: John-Draper, Carroll, Huber, etc.
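The Box-Cox family above is simple to implement; a minimal sketch (the value y = 5 is purely illustrative), which also checks the defining property that the transform tends to log y as lambda tends to 0:

```python
import math

def box_cox(y, lam):
    """Box-Cox transform: (y**lam - 1)/lam for lam != 0, log(y) for lam == 0.
    Defined for y > 0."""
    if lam == 0:
        return math.log(y)
    return (y ** lam - 1.0) / lam

# Continuity in lambda: for small lambda the transform approaches log(y).
v_small_lam = box_cox(5.0, 1e-8)
v_log = box_cox(5.0, 0.0)
```

In practice lambda is estimated jointly with the model parameters (e.g., by maximum likelihood), which is what the slide's "Estimate lambda!" refers to.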
General form of the MLE criterion

For m available observed data y_i, and under the H3 hypothesis, the estimator becomes:

x̂_L = arg max ln f(y|x)
    = arg max ln f(y_i, i = 1..m | x)
    = arg max ln prod_{i=1..m} f(y_i|x)
    = arg max sum_{i=1..m} ln f(y_i|x)
    = arg min J(x)

where, if y ~ N,

J(x) = (1/2) sum_{i=1..m} ln sigma_i^2 + (1/2) sum_{i=1..m} [(y_i - y_M(t_i, x)) / sigma_i]^2

is the criterion function to be minimized.
- The 1st term is known as the extended SE term.
- The 2nd term is called the weighted SE term. It is the only one involving the observed data y_i, and it is weighted by the uncertainty of the experiment.
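The criterion J(x) above translates directly into code. A minimal sketch applied to the 1-cpt model; the dose, sampling times, and constant error s.d. are illustrative assumptions, and noise-free data are used so the check is deterministic:

```python
import math

def criterion_J(x, times, y_obs, y_model, sigma):
    """J(x) = 0.5*sum(ln sigma_i^2) + 0.5*sum(((y_i - y_M(t_i, x))/sigma_i)^2)."""
    J = 0.0
    for t, y in zip(times, y_obs):
        s = sigma(t, x)
        J += 0.5 * math.log(s * s) + 0.5 * ((y - y_model(t, x)) / s) ** 2
    return J

def y_model(t, x):
    V, CL = x
    return (100.0 / V) * math.exp(-(CL / V) * t)   # 1-cpt bolus, D = 100 (assumed)

times = [0.5, 1.0, 2.0, 4.0, 6.0, 8.0]
x_true = (16.0, 1.6)
y_obs = [y_model(t, x_true) for t in times]        # noise-free data for the check
sigma = lambda t, x: 0.05                          # constant s.d. (assumed)

J_true = criterion_J(x_true, times, y_obs, y_model, sigma)
J_off = criterion_J((20.0, 2.0), times, y_obs, y_model, sigma)
# With noise-free data, J is smallest at the generating parameters: J_true < J_off.
```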
Criterion and error variance model

Introduce the proportional error variance model sigma_i = K y_M(t_i, x). Then J is minimum along the K direction when dJ/dK = 0, i.e., when

K^2 = SE(x) / m,  with  SE(x) = sum_{i=1..m} [(y_i - y_M(t_i, x)) / y_M(t_i, x)]^2.

Substituting this K^2 back into J gives the concentrated criterion:

J(x) = (m/2) (ln 2π + 1) + sum_{i=1..m} ln y_M(t_i, x) + (m/2) ln(SE(x) / m).

Nonlinearly unconstrained optimization. Find:

x* = arg min J(x)

Assumptions: J(x) is computable for all x, and an analytic solution does not exist.
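The closed-form K above can be checked on a toy case. A minimal sketch, with illustrative predictions and observations constructed to sit exactly 10% above the model, so the proportional-error scale should come out as K = 10%:

```python
def k_hat_squared(y_obs, y_pred):
    """K^2 = SE(x)/m with SE(x) = sum(((y_i - y_M_i)/y_M_i)^2): the value of the
    proportional-error scale that minimizes J along the K direction."""
    se = sum(((y - ym) / ym) ** 2 for y, ym in zip(y_obs, y_pred))
    return se / len(y_obs)

# Illustrative data: every observation is 10% above its prediction.
y_pred = [10.0, 5.0, 2.5]
y_obs = [1.1 * ym for ym in y_pred]
K2 = k_hat_squared(y_obs, y_pred)   # relative residual of 0.1 everywhere -> K^2 = 0.01
```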
Iterative solutions

Solution of the nonlinear optimization problem: sequentially approximate x*, starting from an initial value x^0 and converging towards a stationary point x*.

Design a routine algorithm generating the converging sequence:

x^0 -> x^1 -> ... -> x^k -> x^(k+1) -> ... -> x*
J(x^0) >= J(x^1) >= ... >= J(x^k) >= J(x^(k+1)) >= ... >= J(x*)

Terminology: assigning x^0 is the initialization, and obtaining x^(k+1) from x^k is an iteration.
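The converging sequence above can be sketched with fixed-step gradient descent on an illustrative quadratic (the objective and step size are assumptions, not from the slides); the point is only that J(x^(k+1)) <= J(x^k) along the iterations:

```python
def J(x):
    """Illustrative smooth criterion with minimum at (2, -1)."""
    return (x[0] - 2) ** 2 + (x[1] + 1) ** 2

def grad(x):
    return [2 * (x[0] - 2), 2 * (x[1] + 1)]

x = [0.0, 0.0]           # initialization x^0
J_path = [J(x)]
for k in range(20):      # iterations x^k -> x^(k+1)
    g = grad(x)
    x = [x[0] - 0.2 * g[0], x[1] - 0.2 * g[1]]
    J_path.append(J(x))

monotone = all(a >= b for a, b in zip(J_path, J_path[1:]))   # non-increasing J
```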
Approximation of functions

Taylor's expansion for a smooth multivariate function: construct simple approximations of a general function J(x) in a neighborhood of the reference point x. With a vector u of unit length supplying the direction of search, and a scalar h specifying the step length:

J(x + h u) = J(x) + (h/1!) g^T(x) u + (h^2/2!) u^T H(x) u + O(h^3),

where g(x) is the gradient and H(x) the Hessian of J at x.

Associate successive approximations to iterations: x <-> x^k and x + h u <-> x^(k+1), and correspondingly J(x) <-> J(x^k) and J(x + h u) <-> J(x^(k+1)).
Direction of search

Linear approx. of J(x + h u):    J(x) + h g^T(x) u.
Quadratic approx. of J(x + h u): J(x) + h g^T(x) u + (h^2/2) u^T H(x) u.

The scalar g^T(x) u gives the rate of change of J at the point x along the direction u. To reduce J, move along the direction opposite to g(x): the descent direction.

The scalar u^T H(x) u involves the second derivative of J(x). H(x) characterizes an ellipse: to reduce J, move along the direction u targeting the center of the ellipse, x_0 = x - H^-1(x) g(x): the Newton direction.
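The contrast between the two directions can be seen on a quadratic. A minimal 2-D sketch with J(x) = 0.5*(4*x1^2 + x2^2) (an illustrative assumption): the Newton step x - H^-1 g lands exactly on the center of the ellipse, while a plain descent step only makes partial progress:

```python
def grad(x):
    """g = H x for J(x) = 0.5 * x^T H x with H = diag(4, 1)."""
    return [4.0 * x[0], 1.0 * x[1]]

def newton_step(x):
    # H^-1 = diag(1/4, 1), so x - H^-1 g lands on the minimum (the origin) exactly.
    g = grad(x)
    return [x[0] - g[0] / 4.0, x[1] - g[1] / 1.0]

def descent_step(x, h=0.1):
    # Move opposite to the gradient with a fixed step h.
    g = grad(x)
    return [x[0] - h * g[0], x[1] - h * g[1]]

x0 = [2.0, 3.0]
x_newton = newton_step(x0)    # the center of the ellipse
x_desc = descent_step(x0)     # progress along -g, but not the minimum
```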
Line search

Minimization directions: move along the Newton direction for quadratic surfaces, i.e., near x*. Elsewhere, move along the descent direction.

Line search: to complete the k-th iteration, search for the step h in the chosen direction of search:

h^k = arg min_h J(x^k - h g^k)   or   h^k = arg min_h J(x^k - h (H^k)^-1 g^k).

[Figure: contour plot of J over (x1, x2), with level curves 3.4, 3.8 and 4.5; the minimization direction crosses the contours towards the minimum.]
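An exact arg min over h is rarely computed in practice; a simple backtracking rule (halve h until J decreases) is a common stand-in. A minimal sketch on an illustrative quadratic (objective, starting point, and constants are assumptions):

```python
def line_search(J, x, d, h0=1.0, beta=0.5, max_halvings=30):
    """Backtracking stand-in for h^k = arg min_h J(x^k + h d^k):
    shrink h geometrically until J decreases along the direction d."""
    h = h0
    Jx = J(x)
    for _ in range(max_halvings):
        x_new = [xi + h * di for xi, di in zip(x, d)]
        if J(x_new) < Jx:
            return h, x_new
        h *= beta
    return 0.0, x   # no decrease found: keep the current point

J = lambda x: (x[0] - 1) ** 2 + 4 * (x[1] + 2) ** 2
x = [3.0, 0.0]
d = [-(2 * (x[0] - 1)), -(8 * (x[1] + 2))]   # descent direction -g(x)
h, x_new = line_search(J, x, d)              # first accepted (shrunken) step
```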
Families of algorithms

Practical note: approximate the derivatives by finite differences instead of analytical calculation.

Classification, for twice-continuously differentiable J:
- Second-derivative methods: quadratic model of J; compute and invert H(x) (not numerically stable, time-consuming processing).
- First-derivative methods: quadratic model of J; approximate H^-1(x) without inverting H(x), but directly from finite differences of g(x): quasi-Newton methods (appropriate in many circumstances): BFGS, DFP, ...
- Non-derivative methods: linear model of J; approximate g(x) by finite differences of J (for smooth functions): Powell, Brent, ...
- No assumptions on the differentiability of J: heuristic algorithms: NMS (Nelder-Mead simplex), Hooke-Jeeves, ...
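The finite-difference idea mentioned above is a one-liner per coordinate. A minimal sketch of a central-difference gradient, checked against an analytically known gradient (the test function is an illustrative assumption):

```python
def fd_gradient(J, x, eps=1e-6):
    """Approximate g(x) by central finite differences of J, as non-derivative
    and derivative-free implementations do in practice."""
    g = []
    for j in range(len(x)):
        xp, xm = list(x), list(x)
        xp[j] += eps
        xm[j] -= eps
        g.append((J(xp) - J(xm)) / (2 * eps))
    return g

# Test function with known gradient [2x + 3y, 3x + 4y].
J = lambda x: x[0] ** 2 + 3 * x[0] * x[1] + 2 * x[1] ** 2
g = fd_gradient(J, [1.0, 2.0])   # analytic value: [8, 11]
```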
The information matrix

The Fisher information matrix, for MLE estimation:

I(x) = E[ (d log L(y|x)/dx) (d log L(y|x)/dx)^T ] = -E[ d^2 log L(y|x) / dx dx^T ]

Cramér-Rao inequality: P(x) >= I^-1(x), i.e., I^-1(x) is a lower bound of the covariance matrix P(x), evaluating the precision of x.

In practice, with t the vector of the m sampling times:
- obtain the m x p sensitivity matrix S(t, x), with elements s_ij = dy_M(t_i, x)/dx_j;
- set R1(t, x) and R2(t, x), the m-order diagonal matrices having as elements y_M^2(t_i, x).
Covariance (precision) matrix

The p-order precision matrix is given by:

P^-1(x) = (1/K^2) S^T(t, x) R1^-1(t, x) S(t, x) + 2 S^T(t, x) R2^-1(t, x) S(t, x),

i.e., sums of weighted products of sensitivity functions over the available sampling times.

Dependence on the sampling protocol: P(x) = P(t, x).

Graphic interpretation of the precision matrix:
- P(t, x) is symmetric and, if m >= p, it is also positive definite (by construction).
- For epsilon > 0, x^T P^-1(t, x) x = epsilon^2 defines a p-dimensional ellipsoid.
- The volume of this ellipsoid is proportional to |P(t, x)|^(1/2).
- The smaller P(t, x), the more precise the estimate x.
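The sensitivity-based construction above can be sketched numerically. A minimal example for the 1-cpt model with x = (V, CL) and proportional error: build the rows of S by finite differences and accumulate the first (weighted least-squares) term of the information matrix, (1/K^2) S^T R1^-1 S with R1 = diag(y_M^2(t_i, x)). All numeric values are illustrative assumptions:

```python
import math

def y_M(t, V, CL, D=100.0):
    """1-cpt bolus output; D is an assumed dose."""
    return (D / V) * math.exp(-(CL / V) * t)

def sensitivity_row(t, V, CL, eps=1e-6):
    """Row of S: finite-difference partials of y_M w.r.t. (V, CL)."""
    dV = (y_M(t, V + eps, CL) - y_M(t, V - eps, CL)) / (2 * eps)
    dCL = (y_M(t, V, CL + eps) - y_M(t, V, CL - eps)) / (2 * eps)
    return [dV, dCL]

times = [0.5, 1.0, 2.0, 4.0, 8.0]   # assumed sampling design
V, CL, K = 16.0, 1.6, 0.15
I = [[0.0, 0.0], [0.0, 0.0]]
for t in times:
    s = sensitivity_row(t, V, CL)
    w = 1.0 / (K ** 2 * y_M(t, V, CL) ** 2)   # (1/K^2) * (R1^-1) diagonal element
    for a in range(2):
        for b in range(2):
            I[a][b] += w * s[a] * s[b]

# det(I) > 0 means S has full rank, so I is invertible and P = I^-1
# gives the Cramér-Rao lower bound on the estimate covariance.
det_I = I[0][0] * I[1][1] - I[0][1] * I[1][0]
```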
Check the structural identifiability

The sensitivity matrix depends:
- on the experiment t (not on the measurements!), and
- on the model parametrization (structural and parametric).

Ensure the positive-definiteness of the information matrix: S(t, x) must be of full rank, rank S(t, x) = dim x, for any numeric value of the parameters, e.g., for several sets of arbitrary initial values.

Ex: [Table, 2-cpt model: for observation in the central cpt with k2 free or with k2 fixed, and for observation in the peripheral cpt, the slide compares dim x (4 or 5 parameters) with rank S(t, x) (3 or 4); whenever rank S(t, x) < dim x, the structure is non-identifiable.]
Simulation in optimization

2-cpt model:
- V1 = 20 L,
- k12 = 1.0 h^-1, k21 = 0.4 h^-1, ke = 0.3 h^-1.

Administration: 80 mg IV bolus.

Observation:
- horizon 24 h,
- m = 11 sampling times,
- heteroscedastic error, K = 15%.

[Figure: simulated concentrations y1 (g/L, order of 10^-3) vs. t (h) over 24 h, on semilog and linear scales.]
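The simulation setup above can be sketched with the standard bi-exponential closed form of the 2-cpt IV bolus model. The parameter values and dose are the slide's; the sampling grid and random seed are assumptions (the slide does not list the 11 times):

```python
import math
import random

def two_cpt_bolus(t, D, V1, k12, k21, ke):
    """Central-cpt concentration of a 2-cpt IV bolus model:
    C(t) = A exp(-alpha t) + B exp(-beta t), with alpha, beta the roots of
    s^2 - (k12 + k21 + ke) s + k21*ke = 0 and A + B = D/V1."""
    s, p = k12 + k21 + ke, k21 * ke
    alpha = (s + math.sqrt(s * s - 4 * p)) / 2
    beta = (s - math.sqrt(s * s - 4 * p)) / 2
    A = (D / V1) * (alpha - k21) / (alpha - beta)
    B = (D / V1) * (k21 - beta) / (alpha - beta)
    return A * math.exp(-alpha * t) + B * math.exp(-beta * t)

random.seed(0)
D, V1, k12, k21, ke, K = 80.0, 20.0, 1.0, 0.4, 0.3, 0.15
times = [0.25, 0.5, 1, 2, 3, 4, 6, 9, 12, 18, 24]   # m = 11 (assumed grid)
y_true = [two_cpt_bolus(t, D, V1, k12, k21, ke) for t in times]
# Heteroscedastic (proportional) error: sigma_i = K * y_M(t_i, x).
y_obs = [y * (1 + K * random.gauss(0, 1)) for y in y_true]
```

With D in mg and V1 in L, concentrations come out in mg/L, i.e., g/L times 10^-3, matching the figure's axis.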
Performances of algorithms

V1 in L; ke, k12 and k21 in h^-1. The Reference row gives the true simulation values.

| Family           | Algorithm    | Solver          | RMSE  | V1     | ke    | k12   | k21   | cpu time |
|------------------|--------------|-----------------|-------|--------|-------|-------|-------|----------|
| Reference        | true values  | -               | 0.150 | 20.000 | 0.300 | 1.000 | 0.400 | -        |
| Heuristic        | Nelder-Mead  | fminsearch      | 0.105 | 22.940 | 0.260 | 0.977 | 0.533 | 12.198   |
| Heuristic        | Genetic      | ga              | 0.938 | 1.311  | 2.924 | 4.455 | 0.242 | 129.597  |
| Heuristic        | Threshold    | threshacceptbnd | 0.493 | 25.512 | 0.244 | 1.692 | 1.006 | 409.010  |
| Heuristic        | Annealing    | simulannealbnd  | 0.413 | 40.835 | 0.150 | 0.201 | 0.281 | 289.945  |
| Non-derivative   | BFGS         | fminunc         | 0.137 | 17.062 | 0.342 | 1.503 | 0.581 | 11.808   |
| Non-derivative   | DFP          | fminunc         | 0.268 | 17.042 | 0.328 | 1.300 | 0.641 | 10.653   |
| Non-derivative   | Steepdesc    | fminunc         | 0.272 | 17.018 | 0.342 | 0.982 | 0.365 | 14.085   |
| First derivative | BFGS         | fminunc         | 0.105 | 22.810 | 0.261 | 0.990 | 0.535 | 2.039    |
| First derivative | DFP          | fminunc         | 0.136 | 17.621 | 0.335 | 1.401 | 0.532 | 12.029   |
| First derivative | Steepdesc    | fminunc         | 0.149 | 17.065 | 0.343 | 1.327 | 0.502 | 16.227   |
| First derivative | Trust-region | fminunc         | 0.105 | 22.805 | 0.261 | 0.990 | 0.535 | 8.926    |
| First derivative | BFGS         | VA13AD          | 0.105 | 22.810 | 0.261 | 0.990 | 0.535 | 1.000    |

A. ILIADIS