Models, Genes, Steroids and Clouds (adventures with acslX)


Case Study: Verification, Validation and
Preparation of a PBPK Model for use in
Risk Assessment Studies
AEgis Technologies Group
September 9, 2011
Conrad Housand, Robin McDougall
[email protected],
[email protected]
www.acslx.com/webinars2011
Preliminary Notes…
• Introductions
• Website for this year’s series of Webinars
http://www.acslx.com/webinars2011
• Slides, Models, Materials posted there
Motivation for this Webinar
• Present a brief description of the processes we use here when helping customers prepare PBPK models for use in chemical risk assessment studies: i.e., our informal "best practices" (consistent with WHO/IPCS guidance on PBPK modeling)
Context
(WHO/IPCS, Characterization and application of PBPK models in risk assessment, Fig. 4)
Another Way of Looking at it…
Identify Requirements
Review Literature
Select/Develop/Refine Model
Verify Model Against Publications
Calibrate Model
Validate Against Data
Risk Assessment
Case Study: Dichloromethane (DCM) PBPK Model
Workflow for this Case Study
1. Identify requirements
2. Port model source code
3. Verify model code against source publication
4. Address programming “best practice” issues
5. Validate baseline model against data
6. Implement model modifications as needed
7. Prep model for GSA/MCMC/MC (global sensitivity analysis, Markov chain Monte Carlo, Monte Carlo)
8. Optimize performance
9. Calibrate model
10. Re-validate model
1. Identify Requirements
• Purpose: what are the risk assessment needs?
– E.g., dose extrapolation, extrapolation across
species, evaluating human PK variability
• What chemical, species, target tissue,
exposure routes, etc.
• What’s the acceptable level of predictive
accuracy?
1. Identify Requirements
• For DCM, requirements + literature review:
• Baseline model based on Andersen, et al. (1985)
• Add extra-hepatic metabolism (i.e., Marino/David, et al. (2006))
• Retrofit the Berkeley Madonna model for GSA/MCMC/MC on acslX
• Calibrate using additional data sets referenced in Marino and David, in addition to Andersen
2. Port Model Source Code
• Typical source languages: Berkeley Madonna,
MATLAB, legacy ACSL, MCSim
• DCM model coded in Berkeley Madonna
• Apply “rules” for BM -> acslX model
conversion
BM -> acslX Conversion “Rules”
• Create necessary CSL sections: PROGRAM,
INITIAL, DYNAMIC, DERIVATIVE and copy
relevant BM code to each
• Port basic syntactical elements: e.g.,
comments, exponentiation
• Identify model parameters and define as CSL
CONSTANTs
• Identify states/ICs and convert them to corresponding INTEG and CONSTANT statements (see the sketch below)
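A minimal sketch of what these rules produce, assuming a hypothetical one-compartment Berkeley Madonna fragment (KEL, A, A0 and the values shown are illustrative, not taken from the DCM model):

   ! BM original (illustrative):  KEL = 0.5 ; elimination rate
   !                              init A = 10
   !                              d/dt(A) = -KEL*A
   PROGRAM  ! one-compartment conversion sketch

   INITIAL
      CONSTANT KEL   = 0.5        ! BM assignment becomes a CSL CONSTANT (1/hr)
      CONSTANT A0    = 10.0       ! BM "init A = 10" becomes a CONSTANT initial condition (mg)
      CONSTANT TSTOP = 24.0       ! simulation end time (hr)
   END ! of INITIAL

   DYNAMIC
      DERIVATIVE
         RA = -KEL*A              ! rate equation; BM ";" comments become "!", "^" becomes "**"
         A  = INTEG(RA, A0)       ! BM "d/dt(A)" becomes an INTEG statement
      END ! of DERIVATIVE
      TERMT (T .GE. TSTOP)        ! stop the run at TSTOP
   END ! of DYNAMIC

   END ! of PROGRAM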
3. Verify Model Code
• Manual inspection:
– Model equations consistent with publication?
– Parameter values match published?
– Are units consistent and in agreement with
published?
• Use translator syntax/semantic diagnostics:
– Any undefined variables in the model (or values left at the 5.55e33 default)?
– Multiply defined variables?
– Algebraic loops?
– Other semantic warnings?
3. Verify Model Code
• For DCM model, ported CSL code was
manually verified against Andersen, et al
(1985)
• A few refs cited by Andersen were consulted
to clarify some inconsistencies
• Translator revealed no warnings related to
code syntax/semantics
• Use comments in the code to track these checks and clarifications
4. Enforce Programming “Best Practice”
• Not all valid CSL models are created equal
– Convert simple assignment statements to
CONSTANT declarations where appropriate
– Implement discrete logic (e.g., exposure) as
DISCRETE sections and SCHEDULE statements
instead of IF/THEN, PULSE, SWITCH
– Use M scripts to parameterize model instead of
hard-coded values
– Fix any compiler-specific constructs (e.g., old-style
I/O)
– Other items: integration algorithm selection, computed CINT, etc. (exposure-scheduling sketch below)
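For example, a fixed-length inhalation exposure can be switched off by a scheduled DISCRETE event instead of an IF/THEN test inside the DERIVATIVE. A sketch with illustrative names and values (CONC, TCHNG, EXPFLG, QP are placeholders, not the DCM model's variables; placement of the SCHEDULE statement may differ in practice):

   PROGRAM  ! exposure-scheduling sketch

   INITIAL
      CONSTANT CONC  = 100.0      ! inhaled concentration while exposure is on (ppm)
      CONSTANT TCHNG = 6.0        ! length of the exposure period (hr)
      CONSTANT QP    = 29.0       ! alveolar ventilation (L/hr), placeholder value
      CONSTANT TSTOP = 24.0       ! end of the simulation (hr)
      EXPFLG = 1.0                ! exposure switch: 1 = on, 0 = off
      SCHEDULE EXPOFF .AT. TCHNG  ! queue the "exposure off" event once, up front
   END ! of INITIAL

   DYNAMIC
      DERIVATIVE
         CI   = CONC*EXPFLG          ! effective inhaled concentration
         AINH = INTEG(QP*CI, 0.0)    ! cumulative inhaled amount (stand-in for real uptake code)
      END ! of DERIVATIVE

      DISCRETE EXPOFF
         EXPFLG = 0.0                ! exposure ends cleanly; no IF/THEN in the derivative
      END ! of DISCRETE

      TERMT (T .GE. TSTOP)
   END ! of DYNAMIC

   END ! of PROGRAM

With the event scheduled explicitly, the integrator is handed an exact event time instead of hunting for a discontinuity, which also pays off in step 8.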
5. Validate Baseline Model
• To desired accuracy, ensure predictions are
consistent with:
– Published data
– Published predictions
– Predictions of legacy model (if available)
• Initial assessment of model sensitivity,
uncertainty, if practical
• Implement validation code as M scripts, which
thereafter accompany the model code itself
– Automated validation, regression test
5. Validate Baseline Model
• Characterize level of confidence
– See WHO guidance (e.g., Figure 12) for example
approach
– Are results consistent with requirements?
5. Validate Baseline Model
• For DCM model
– Digitize Figs. 3 and 4 from Andersen, capture the
data and published predictions in M scripts
– Create M scripts to run the model using mouse
parameters at the necessary dose levels
– Create scripts to re-create the Fig 3 and 4 plots
and overlay outputs from the ported model
– Simultaneously compare with output taken from
BM model
– Built-in diagnostics: implement mass-balance checks (sketch below)
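A mass-balance check can be as simple as two extra lines computed in the DERIVATIVE section and added to the prepare list; the compartment names below (AL, AF, AR, AS, AM, AEXH, TOTDOSE) are placeholders, not the DCM model's actual variables:

   ! mass-balance diagnostic (placed in the DERIVATIVE section)
   TMASS = AL + AF + AR + AS + AM + AEXH   ! amount in tissues + metabolized + exhaled (mg)
   MBAL  = TOTDOSE - TMASS                 ! should remain ~0 for the entire run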
6. Refine Model
• For DCM, need to add extra-hepatic
metabolism as described in David, et al (2006)
• Capture new data sets for calibration and
validation
• Update validation scripts
• Re-validate using the automated script
7. Prep Model for GSA/MCMC/MC
• For DCM, fix a typical "inconvenience"
– Mass balance depends on fractional quantities summing to 1, which requires the individual values to be consistent
– Stochastic sampling can't guarantee this
– Solution: implement code to normalize these parameters within the model (see the sketch after this list)
• Other common problems are addressed by
programming “best practices”
– E.g., use of constants instead of assignments,
DISCRETE/SCHEDULE, etc.
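A sketch of that normalization, applied to fractional blood flows in the INITIAL section (names and values here are illustrative placeholders; the same idea applies to fractional tissue volumes):

   INITIAL
      CONSTANT QCC = 16.5               ! cardiac output constant (L/hr/kg**0.74), placeholder
      CONSTANT BW  = 0.025              ! body weight (kg), placeholder
      ! fractional flows, each a candidate for MC/MCMC sampling
      CONSTANT QLC = 0.24, QFC = 0.05, QRC = 0.52, QSC = 0.19

      QC   = QCC*BW**0.74               ! cardiac output (L/hr)
      QSUM = QLC + QFC + QRC + QSC      ! may drift away from 1.0 after sampling
      QL   = QC*QLC/QSUM                ! renormalized liver blood flow
      QF   = QC*QFC/QSUM                ! renormalized fat blood flow
      QR   = QC*QRC/QSUM                ! renormalized richly perfused flow
      QS   = QC*QSC/QSUM                ! renormalized slowly perfused flow
   END ! of INITIAL

The sampled fractions can then take whatever values the stochastic method draws; the flows actually used by the model always sum to QC.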
8. Optimize Performance
• GSA/MCMC/MC studies are compute-intensive, so it's advantageous to get the simulation to run as fast as possible
– Adjust CINT, IALG, MERROR/XERROR as necessary
– Use DISCRETE/SCHEDULE instead of IF/THEN
– For MCMC/PE (parameter estimation), use a large CINT in combination with a DATA statement
– Other tricks…
• Minimize prepare lists, use a minimal TSTOP, cache output of simulation runs, parallelize, etc. (settings sketch below)
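The integration-related settings mentioned above are ordinary translator statements in the DERIVATIVE section; the values and the state name AL below are illustrative, not the DCM model's:

   ! integration/communication settings (DERIVATIVE section)
   ALGORITHM IALG = 2           ! Gear's stiff integrator, usually a good fit for stiff PBPK systems
   CINTERVAL CINT = 0.1         ! communication interval (hr); larger values mean fewer logged points
   MAXTERVAL MAXT = 1.0E9       ! let the variable-step integrator choose its own step size
   MERROR AL = 1.0E-6           ! relative error tolerance for a state (AL is a placeholder name)
   XERROR AL = 1.0E-8           ! absolute error tolerance for the same state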
9. Calibrate Model
• Preliminary SA/GSA to determine sensitive
parameters
– See McNally (2011) guidance on GSA
• Preliminary conventional PE and manual
parameter adjustment
• Determine priors for Bayesian/MCMC
– Literature review, uninformative priors, or priors from previous work (for DCM, the MCSim model was used as a starting point)
• Perform MCMC, check for convergence and
identifiability problems
10. Re-validate Model
• Update scripts to use the new test data and re-validate the model as was done with the baseline model
• The updated script will accompany the new model code
• For DCM, used figures from David, et al. (2006)
In Conclusion…
• Please email questions to [email protected]
• .WMV recording of presentation along with
slides and model files will be posted to
website (link below)
• Join us for the next webinar in a few weeks… check the website for the schedule.
Thank you!