DCM - Translational Neuromodeling Unit
DCM: Advanced Topics
Klaas Enno Stephan
Translational Neuromodeling Unit (TNU), Institute for Biomedical Engineering, University of Zurich & ETH Zurich
Laboratory for Social & Neural Systems Research (SNS), University of Zurich
Wellcome Trust Centre for Neuroimaging, University College London

[Figure: simulated neural population activity (x1, x2, x3) driven by inputs u1 and u2, and the resulting fMRI signal change (%), generated from the nonlinear DCM state equation dx/dt = (A + Σ_i u_i B^(i) + Σ_j x_j D^(j)) x + Cu]
Methods & models for fMRI data analysis
November 2012
Overview
• Bayesian model selection (BMS)
• Extended DCM for fMRI: nonlinear, two-state, stochastic
• Embedding computational models in DCMs
• Integrating tractography and DCM
• Applications of DCM to clinical questions
Dynamic Causal Modeling (DCM)
Hemodynamic forward model: neural activity → BOLD
Electromagnetic forward model: neural activity → EEG / MEG / LFP

Neural state equation: dx/dt = F(x, u, θ)

fMRI: simple neuronal model, complicated forward model
EEG/MEG: complicated neuronal model, simple forward model
inputs
Generative models & model selection
• any DCM = a particular generative model of how the data (may)
have been caused
• modelling = comparing competing hypotheses about the
mechanisms underlying observed data
a priori definition of hypothesis set (model space) is crucial
determine the most plausible hypothesis (model), given the
data
• model selection ≠ model validation!
model validation requires external criteria (external to the
measured data)
Model comparison and selection
Given competing hypotheses
on structure & functional
mechanisms of a system, which
model is the best?
Which model represents the
best balance between model
fit and model complexity?
For which model m does p(y|m)
become maximal?
Pitt & Myung (2002) TICS
Bayesian model selection (BMS)
Model evidence:
p(y|m) = ∫ p(y|θ, m) p(θ|m) dθ
- accounts for both accuracy and complexity of the model
- allows for inference about structure (generalisability) of the model
[Figure: p(y|m) plotted over the space of all possible datasets y]
Ghahramani, 2004
Various approximations, e.g.:
- negative free energy, AIC, BIC
MacKay 1992, Neural Comput.
Penny et al. 2004a, NeuroImage
Approximations to the model evidence in DCM
Maximizing log model evidence
= Maximizing model evidence
Logarithm is a
monotonic function
Log model evidence = balance between fit and complexity
log p(y|m) = accuracy(m) − complexity(m)
           = log p(y|θ, m) − complexity(m)

SPM2 & SPM5 offered 2 approximations:
Akaike Information Criterion:   AIC = log p(y|θ, m) − p             (p = no. of parameters)
Bayesian Information Criterion: BIC = log p(y|θ, m) − (p/2) log N   (N = no. of data points)
Penny et al. 2004a, NeuroImage
Penny 2012, NeuroImage
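A minimal numerical sketch of these two log-evidence approximations, following the formulas above; the log likelihoods, parameter counts and scan number below are hypothetical, not from any real dataset.

```python
import numpy as np

def aic(log_lik, n_params):
    """AIC as defined on this slide: accuracy minus number of parameters."""
    return log_lik - n_params

def bic(log_lik, n_params, n_data):
    """BIC approximation to the log evidence: accuracy minus (p/2) log N."""
    return log_lik - 0.5 * n_params * np.log(n_data)

# hypothetical values for two competing DCMs fitted to the same data
logL = {"m1": -420.3, "m2": -412.9}   # maximised log likelihoods (made up)
p    = {"m1": 6, "m2": 9}             # number of coupling parameters
N    = 360                            # number of fMRI scans

for m in logL:
    print(m, "AIC:", aic(logL[m], p[m]), "BIC:", bic(logL[m], p[m], N))
```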
The (negative) free energy approximation
• Under Gaussian assumptions about the posterior (Laplace
approximation):
log p(y|m) = log p(y|θ, m) − KL[q(θ), p(θ|m)] + KL[q(θ), p(θ|y, m)]

F = log p(y|m) − KL[q(θ), p(θ|y, m)]
  = log p(y|θ, m) − KL[q(θ), p(θ|m)]
      accuracy        complexity
The complexity term in F
• In contrast to AIC & BIC, the complexity term of the negative
free energy F accounts for parameter interdependencies.
KL[q(θ), p(θ|m)] = ½ ln|C_θ| − ½ ln|C_θ|y| + ½ (μ_θ|y − μ_θ)ᵀ C_θ⁻¹ (μ_θ|y − μ_θ)
• The complexity term of F is higher
– the more independent the prior parameters (→ effective degrees of freedom)
– the more dependent the posterior parameters
– the more the posterior mean deviates from the prior mean
• NB: Since SPM8, only F is used for model selection !
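A small sketch of the complexity expression above for a Gaussian prior and (Laplace) posterior; the example means and covariances are hypothetical.

```python
import numpy as np

def complexity(mu_prior, C_prior, mu_post, C_post):
    """Complexity term as written on this slide:
    0.5*ln|C_prior| - 0.5*ln|C_post| + 0.5*(mu_post - mu_prior)' C_prior^-1 (mu_post - mu_prior)."""
    d = mu_post - mu_prior
    _, logdet_prior = np.linalg.slogdet(C_prior)
    _, logdet_post = np.linalg.slogdet(C_post)
    return 0.5 * (logdet_prior - logdet_post + d @ np.linalg.solve(C_prior, d))

# hypothetical prior and posterior over two coupling parameters
mu0, C0 = np.zeros(2), np.eye(2)
muP, CP = np.array([0.3, -0.2]), 0.5 * np.eye(2)
print(complexity(mu0, C0, muP, CP))
```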
Bayes factors
To compare two models, we could just compare their log
evidences.
But: the log evidence is just some number – not very intuitive!
A more intuitive interpretation of model comparisons is made
possible by Bayes factors:
B12 = p(y|m1) / p(y|m2),   a positive value in [0; ∞[
Kass & Raftery classification:
Kass & Raftery 1995, J. Am. Stat. Assoc.
B12          p(m1|y)    Evidence
1 to 3       50–75%     weak
3 to 20      75–95%     positive
20 to 150    95–99%     strong
≥ 150        ≥ 99%      very strong
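As a numerical illustration of the table above, the Bayes factor and the posterior model probability (under equal model priors) follow directly from the two log evidences; the free-energy values below are hypothetical.

```python
import numpy as np

def bayes_factor(logev1, logev2):
    """B12 = p(y|m1)/p(y|m2) from two log evidences (e.g. negative free energies)."""
    return np.exp(logev1 - logev2)

def posterior_prob(logev1, logev2):
    """p(m1|y) under equal model priors (softmax of the two log evidences)."""
    return 1.0 / (1.0 + np.exp(logev2 - logev1))

F1, F2 = -512.4, -515.4            # hypothetical log evidences for two models
print(bayes_factor(F1, F2))        # ~20  -> "strong" evidence in the table above
print(posterior_prob(F1, F2))      # ~0.95
```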
BMS in SPM8: an example
[Figure: four DCMs (M1–M4) of the attention-to-motion dataset, each containing V1, V5 and PPC, with driving input "stim" to V1 and modulatory input "attention" acting on different connections]

M2 better than M1:  BF ≈ 2966,  ΔF = 7.995
M3 better than M2:  BF ≈ 12,    ΔF = 2.450
M4 better than M3:  BF ≈ 23,    ΔF = 3.144
Fixed effects BMS at group level
Group Bayes factor (GBF) for 1...K subjects:
GBF_ij = ∏_k BF_ij^(k)

Average Bayes factor (ABF):
ABF_ij = ( ∏_k BF_ij^(k) )^(1/K)
Problems:
- blind with regard to group heterogeneity
- sensitive to outliers
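A quick sketch of both quantities, computed from per-subject log Bayes factors (the values below are hypothetical), which also makes the outlier sensitivity explicit.

```python
import numpy as np

# hypothetical per-subject log Bayes factors ln BF_12^(k) for K = 6 subjects
log_bf = np.array([2.1, 3.4, 0.7, -0.5, 2.9, 1.8])

log_gbf = log_bf.sum()     # Group Bayes factor: product of subject-wise BFs
log_abf = log_bf.mean()    # Average Bayes factor: K-th root of the GBF

print("GBF_12 =", np.exp(log_gbf))
print("ABF_12 =", np.exp(log_abf))
# A single outlier subject with ln BF = -20 would flip the sign of ln GBF,
# illustrating the sensitivity to outliers noted above.
```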
Random effects BMS for heterogeneous groups
Dirichlet parameters α = "occurrences" of models in the population

r ~ Dir(r; α)          Dirichlet distribution of model probabilities r
m_k ~ Mult(m; 1, r)    Multinomial distribution of model labels m
y_k ~ p(y_k | m_k)     Measured data y

Model inversion by Variational Bayes (VB) or MCMC
Stephan et al. 2009a, NeuroImage
Penny et al. 2010, PLoS Comp. Biol.
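A compact sketch of the variational updates for this hierarchical model, in the spirit of Stephan et al. (2009a); this is an illustration rather than a reimplementation of SPM's routine, and the log evidences in the example are simulated.

```python
import numpy as np
from scipy.special import digamma

def rfx_bms(log_evidence, alpha0=1.0, n_iter=100):
    """Random effects BMS sketch: log_evidence is a (subjects x models) array.
    Returns Dirichlet parameters alpha and the expected model probabilities."""
    n_subj, n_mod = log_evidence.shape
    alpha = np.full(n_mod, alpha0)
    for _ in range(n_iter):
        # posterior over each subject's model label (multinomial)
        log_u = log_evidence + digamma(alpha) - digamma(alpha.sum())
        log_u -= log_u.max(axis=1, keepdims=True)      # numerical stability
        g = np.exp(log_u)
        g /= g.sum(axis=1, keepdims=True)
        # update the Dirichlet "occurrence" counts
        alpha = alpha0 + g.sum(axis=0)
    return alpha, alpha / alpha.sum()

# simulated log evidences: 10 subjects, 2 models, model 1 slightly favoured
rng = np.random.default_rng(0)
logev = rng.normal(size=(10, 2)) + np.array([1.5, 0.0])
alpha, r_mean = rfx_bms(logev)
print(alpha, r_mean)
```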
[Figure: two competing DCMs (m1, m2) of interhemispheric integration in the ventral visual stream, with lingual gyrus (LG), middle occipital gyrus (MOG) and fusiform gyrus (FG) in each hemisphere; driving inputs: LVF and RVF stimulation; modulatory inputs: letter decisions (LD), LD|LVF, LD|RVF. The models differ in which interhemispheric connections are modulated.]

Data: Stephan et al. 2003, Science
Models: Stephan et al. 2007, J. Neurosci.
[Figure: random effects BMS for m1 vs. m2. Left: log model evidence differences across subjects (range about −35 to +5). Right: posterior density p(r1|y) with Dirichlet parameters α1 = 11.8, α2 = 2.2; expected model probabilities r1 = 84.3%, r2 = 15.7%; p(r1 > r2) = 99.7%; p(r1 > 0.5 | y) = 0.997]
Stephan et al. 2009a, NeuroImage
Model space partitioning: comparing model families

[Figure: FFX results (summed log evidence relative to RBML, up to ~80) and RFX results (Dirichlet α) for eight models: CBMN, CBMN(ε), RBMN, RBMN(ε), CBML, CBML(ε), RBML, RBML(ε)]

Pooling the Dirichlet parameters over families:
α1* = Σ_{k=1..4} α_k   (nonlinear models)
α2* = Σ_{k=5..8} α_k   (linear models)

Result: r1 = 73.5%, r2 = 26.5%; p(r1 > r2) = 98.6%; p(r1 > 0.5 | y) = 0.986
Stephan et al. 2009, NeuroImage
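Because sums of Dirichlet-distributed probabilities are again Dirichlet, the family-level result can be obtained by pooling the α's over the partition, as above. A small sketch with hypothetical α values (not those of the study):

```python
import numpy as np

alpha = np.array([2.0, 3.5, 2.5, 3.8,   # models 1-4: nonlinear family (hypothetical)
                  1.2, 1.5, 1.0, 1.3])  # models 5-8: linear family (hypothetical)

alpha_nonlin = alpha[:4].sum()   # alpha_1* = sum over k = 1..4
alpha_lin    = alpha[4:].sum()   # alpha_2* = sum over k = 5..8

# exceedance probability p(r_nonlinear > r_linear | y) by Monte Carlo sampling
rng = np.random.default_rng(1)
r = rng.dirichlet([alpha_nonlin, alpha_lin], size=100_000)
print("p(r_nonlinear > r_linear | y) =", (r[:, 0] > r[:, 1]).mean())
```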
Comparing model families – a second example
• data from Leff et al.
2008, J. Neurosci
• one driving input, one
modulatory input
• 2^6 = 64 possible modulations
• 2^3 − 1 = 7 input patterns
• 7 × 64 = 448 models
• integrate out uncertainty
about modulatory
patterns and ask where
auditory input enters
Penny et al. 2010, PLoS Comput. Biol.
Bayesian Model Averaging (BMA)
• abandons dependence of parameter
inference on a single model
• uses the entire model space
considered (or an optimal family of
models)
• computes average of each parameter,
weighted by posterior model
probabilities
p(θ_n | y_{1..N}) = Σ_m p(θ_n | y_n, m) p(m | y_{1..N})
NB: p(m|y1..N) can be obtained
by either FFX or RFX BMS
• represents a particularly useful
alternative
– when none of the models (or model
subspaces) considered clearly
outperforms all others
– when comparing groups for which the
optimal model differs
Penny et al. 2010, PLoS Comput. Biol.
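A minimal sketch of the averaging step above for a single scalar parameter; a full BMA would average the posterior densities (e.g. by sampling), and all numbers below are hypothetical.

```python
import numpy as np

def bma(post_means, post_model_prob):
    """Average of model-specific posterior means, weighted by p(m|y)."""
    return np.sum(np.asarray(post_means) * np.asarray(post_model_prob))

# hypothetical posterior means of one coupling parameter under 3 models
theta_m = [0.42, 0.35, 0.0]     # parameter absent in model 3 -> contributes 0
p_m_y   = [0.55, 0.30, 0.15]    # posterior model probabilities (from FFX or RFX BMS)
print("BMA estimate:", bma(theta_m, p_m_y))
```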
definition of model space
↓
inference on model structure or inference on model parameters?

• inference on model structure → inference on individual models or on a model space partition?
  – individual models → optimal model structure assumed to be identical across subjects?
      yes → FFX BMS
      no → RFX BMS
  – model space partition → comparison of model families using FFX or RFX BMS

• inference on model parameters → parameters of an optimal model or parameters of all models?
  – optimal model → optimal model structure assumed to be identical across subjects?
      yes → FFX BMS → FFX analysis of parameter estimates (e.g. BPA)
      no → RFX BMS → RFX analysis of parameter estimates (e.g. t-test, ANOVA)
  – all models → BMA

Stephan et al. 2010, NeuroImage
Overview
• Bayesian model selection (BMS)
• Extended DCM for fMRI: nonlinear, two-state, stochastic
• Embedding computational models in DCMs
• Integrating tractography and DCM
• Applications of DCM to clinical questions
DCM10 in SPM8
• DCM10 was released as part of SPM8 in July 2010 (version 4010).
• Introduced many new features, incl. two-state DCMs and stochastic DCMs
• This led to various changes in model defaults, e.g.
– inputs mean-centred
– changes in coupling priors
– self-connections estimated separately for each area
• For details, see:
www.fil.ion.ucl.ac.uk/spm/software/spm8/SPM8_Release_Notes_r4010.pdf
• Further changes in version 4290 (released April 2011) to accommodate new
developments and give users more choice (e.g., whether or not to mean-centre inputs).
The evolution of DCM in SPM
• DCM is not one specific model, but a framework for Bayesian inversion of
dynamic system models
• The default implementation in SPM is evolving over time
– improvements of numerical routines (e.g., for inversion)
– change in priors to cover new variants (e.g., stochastic DCMs,
endogenous DCMs etc.)
To enable replication of your results, you should ideally state
which SPM version (release number) you are using when
publishing papers.
In the next SPM version, the release number will be stored in
the DCM.mat.
The classical DCM: a deterministic, one-state, bilinear model

[Figure: driving input u1(t) and modulatory input u2(t) act on neuronal states x1(t), x2(t), x3(t); the neuronal states x are passed through the hemodynamic model λ and integrated to yield the predicted BOLD time series y in each region]

Neural state equation:
dx/dt = (A + Σ_j u_j B^(j)) x + Cu

A = ∂ẋ/∂x               endogenous connectivity
B^(j) = ∂²ẋ/(∂x ∂u_j)   modulation of connectivity
C = ∂ẋ/∂u               direct inputs
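A toy sketch of the bilinear neural state equation above, integrated with a simple Euler scheme; the connectivity values and inputs are invented, and a real DCM would additionally pass x through the hemodynamic forward model to predict BOLD.

```python
import numpy as np

def simulate_bilinear_dcm(A, B, C, u, dt=0.1):
    """Euler integration of dx/dt = (A + sum_j u_j B^(j)) x + C u.
    A: (n, n), B: (m, n, n), C: (n, m), u: (T, m)."""
    T, m = u.shape
    n = A.shape[0]
    x = np.zeros((T, n))
    for t in range(1, T):
        J = A + np.tensordot(u[t], B, axes=1)   # effective connectivity at time t
        dx = J @ x[t - 1] + C @ u[t]
        x[t] = x[t - 1] + dt * dx
    return x

# hypothetical 2-region example: u1 drives region 1, u2 modulates the 1 -> 2 connection
A = np.array([[-0.5, 0.0], [0.3, -0.5]])
B = np.zeros((2, 2, 2)); B[1, 1, 0] = 0.4       # B^(2) strengthens connection 1 -> 2
C = np.array([[1.0, 0.0], [0.0, 0.0]])
u = np.zeros((600, 2)); u[50:100, 0] = 1; u[60:90, 1] = 1
x = simulate_bilinear_dcm(A, B, C, u)
```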
Factorial structure of model specification in DCM10
• Three dimensions of model specification:
– bilinear vs. nonlinear
– single-state vs. two-state (per region)
– deterministic vs. stochastic
• Specification via GUI.
[Figure: bilinear DCM vs. non-linear DCM: in the bilinear case, driving and modulatory inputs act on regions and connections; in the non-linear case, neuronal states themselves can modulate connections]

Two-dimensional Taylor series (around x0 = 0, u0 = 0):
dx/dt = f(x, u) ≈ f(x0, 0) + (∂f/∂x) x + (∂f/∂u) u + (∂²f/∂x∂u) xu + (∂²f/∂x²) x²/2 + ...

Bilinear state equation:
dx/dt = (A + Σ_{i=1..m} u_i B^(i)) x + Cu

Nonlinear state equation:
dx/dt = (A + Σ_{i=1..m} u_i B^(i) + Σ_{j=1..n} x_j D^(j)) x + Cu
Nonlinear dynamic causal model (DCM)

dx/dt = (A + Σ_{i=1..m} u_i B^(i) + Σ_{j=1..n} x_j D^(j)) x + Cu

[Figure: simulated neural population activity (x1, x2, x3) under inputs u1 and u2, and the resulting fMRI signal change (%), illustrating how activity in one population can gate the coupling between the other two]
Stephan et al. 2008, NeuroImage
[Figure: nonlinear DCM of the attention-to-motion paradigm with V1, V5 and PPC. Driving input "stim" enters V1; "motion" and "attention" act as modulatory inputs; PPC gates the V1 → V5 connection. The MAP estimate of the nonlinear gating parameter is 1.25, and its posterior density gives p(D_{V5,V1} > 0 | y) = 99.1%]
Stephan et al. 2008, NeuroImage
Two-state DCM

[Figure: single-state DCM vs. two-state DCM: in the two-state model each region contains an excitatory population x^E and an inhibitory population x^I; extrinsic (between-region) coupling is mediated by the excitatory populations, while intrinsic (within-region) coupling comprises excitatory-excitatory, excitatory-inhibitory, inhibitory-excitatory and inhibitory-inhibitory connections]

Single-state DCM:  ẋ = Jx + Cu,  with J_ij = A_ij + u B_ij
Two-state DCM:     ẋ = Jx + Cu,  with J_ij = σ_ij exp(A_ij + u B_ij)

Marreiros et al. 2008, NeuroImage
Stochastic DCM

dx/dt = (A + Σ_j v_j B^(j)) x + Cv + ω^(x)
v = u + ω^(v)

• all states are represented in generalised coordinates of motion
• random state fluctuations ω^(x) account for endogenous fluctuations; they have unknown precision and smoothness → two hyperparameters
• fluctuations ω^(v) induce uncertainty about how inputs influence neuronal activity
• can be fitted to resting state data

[Figure: estimates of hidden causes and states obtained by generalised filtering: inputs or causes (V2), hidden neuronal states (excitatory, signal), hidden hemodynamic states (flow, volume, dHb) and predicted vs. observed BOLD signal over ~1200 seconds]
Li et al. 2011, NeuroImage
Overview
• Bayesian model selection (BMS)
• Extended DCM for fMRI: nonlinear, two-state, stochastic
• Embedding computational models in DCMs
• Integrating tractography and DCM
• Applications of DCM to clinical questions
Prediction errors drive synaptic plasticity
[Figure: a prediction error signal PE(t), computed by region R, scales (with gain a_0) the plasticity of the connection between x1 and x2; schematic after McLaren 1989]

w_t = w_0 + Σ_{k=1..t} PE_k
w_t = a_0 ∫_0^t PE(τ) dτ
synaptic plasticity during learning = f (prediction error)
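A minimal sketch of this idea as a delta-rule (Rescorla-Wagner style) update, in which the weight accumulates prediction errors scaled by a gain a0; the outcome probability and learning rate below are hypothetical.

```python
import numpy as np

# w_t = w_0 + a0 * sum_k PE_k, with PE_k the trial-wise prediction error
rng = np.random.default_rng(2)
p_true = 0.8                        # hypothetical true outcome probability
outcomes = rng.random(200) < p_true

a0, w = 0.1, 0.5                    # plasticity gain and initial expectation
trajectory = []
for o in outcomes:
    pe = float(o) - w               # prediction error on this trial
    w = w + a0 * pe                 # accumulate PE, scaled by a0
    trajectory.append(w)
# w converges towards p_true, i.e. plasticity is a function of prediction error
```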
Learning of dynamic audio-visual associations

[Figure: trial structure: a conditioning stimulus (CS1 or CS2) at 0 ms is followed by a visual target stimulus (face or house) and a response within a trial of ~800 ms; inter-trial interval 2000 ± 650 ms. The probability p(face | CS) changes over ~1000 trials, varying between about 0.2 and 0.8]
den Ouden et al. 2010, J. Neurosci.
Hierarchical Bayesian learning model
prior on volatility: p(k)
volatility:                p(v_{t+1} | v_t, k) ~ N(v_t, exp(k))
probabilistic association: p(r_{t+1} | r_t, v_t) ~ Dir(r_t, exp(v_t))
observed events:           u_t, u_{t+1}, ...
Behrens et al. 2007, Nat. Neurosci.
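To make the generative structure concrete, here is a simulation sketch of the three levels above (volatility → association → observed events). It reads exp(v_t) as the concentration of the association step and uses a Beta step for the binary case; this is an illustrative reading of the equations, not the exact implementation of Behrens et al. (2007), and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(3)
T, k = 400, -4.0                       # number of trials, log-volatility parameter

v = np.zeros(T)                        # volatility v_t
r = np.zeros(T); r[0] = 0.5            # probabilistic association r_t
u = np.zeros(T, dtype=int)             # observed binary events u_t

for t in range(1, T):
    v[t] = rng.normal(v[t - 1], np.sqrt(np.exp(k)))     # v_{t+1} ~ N(v_t, exp(k))
    conc = np.exp(v[t])                                  # step precision from volatility
    r[t] = rng.beta(r[t - 1] * conc + 1e-3,
                    (1 - r[t - 1]) * conc + 1e-3)        # association drifts over trials
    u[t] = rng.binomial(1, r[t])                         # event drawn from current r_t
```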
Explaining RTs by different learning models

5 alternative learning models:
• categorical probabilities
• hierarchical Bayesian learner
• Rescorla-Wagner
• Hidden Markov models (2 variants)

[Figure: reaction times (~390–450 ms) as a function of p(outcome), and trial-wise predictions p(F) for the true schedule and the five models (True, Bayes Vol, HMM fixed, HMM learn, RW)]

Bayesian model selection: the hierarchical Bayesian model performs best (highest exceedance probability among the categorical model, Bayesian learner, HMM (fixed), HMM (learn) and Rescorla-Wagner)
den Ouden et al. 2010, J. Neurosci.
Stimulus-independent prediction error

[Figure: PE-related activity in the putamen (p < 0.05, SVC) and premotor cortex (p < 0.05, cluster-level whole-brain corrected); BOLD responses (a.u.) plotted as a function of p(F) and p(H), decreasing with outcome probability]
den Ouden et al. 2010, J. Neurosci.
Prediction error (PE) activity in the putamen
PE during active sensory learning; PE during incidental sensory learning; PE during reinforcement learning
[Figure: putamen activations in each case, p < 0.05 (SVC)]
O'Doherty et al. 2004, Science; den Ouden et al. 2009, Cerebral Cortex

PE = "teaching signal" for synaptic plasticity during learning
Could the putamen be regulating trial-by-trial changes of
task-relevant connections?
Prediction errors control plasticity during adaptive cognition

[Figure: DCM in which the putamen (PUT), driven by the prediction errors of the hierarchical Bayesian learning model, modulates the connections from PPA and FFA onto dorsal premotor cortex (PMd); p = 0.017 and p = 0.010]

• Influence of visual areas on premotor cortex:
  – stronger for surprising stimuli
  – weaker for expected stimuli

→ ongoing pharmacological and genetic studies
den Ouden et al. 2010, J. Neurosci.
Hierarchical variational Bayesian learning

volatility:           p(x3^(k+1) | x3^(k), θ) ~ N(x3^(k), exp(θ))
association:          p(x2^(k+1) | x2^(k), x3^(k+1)) ~ N(x2^(k), exp(x3^(k+1)))
events in the world:  p(x1^(k) | x2^(k)) ~ Bernoulli(s(x2^(k)))
sensory stimuli:      p(u^(k) | x1^(k)) ~ MoG(x1^(k) = 0, x1^(k) = 1)

Mean-field decomposition:
p(u^(k), x^(k), θ | u^(1..k−1)) ≈ q(x1^(k)) q(x2^(k)) q(x3^(k)) q(θ)

Mathys et al. (2011), Front. Hum. Neurosci.
Overview
• Bayesian model selection (BMS)
• Extended DCM for fMRI: nonlinear, two-state, stochastic
• Embedding computational models in DCMs
• Integrating tractography and DCM
• Applications of DCM to clinical questions
Diffusion-weighted imaging
Parker & Alexander, 2005,
Phil. Trans. B
Integration of tractography and DCM

[Figure: Gaussian shrinkage priors on the effective connectivity parameter linking regions R1 and R2, with variance scaled by the anatomical connection probability]
low probability of anatomical connection → small prior variance of effective connectivity parameter
high probability of anatomical connection → large prior variance of effective connectivity parameter
Stephan, Tittgemeyer et al. 2009, NeuroImage
Proof of concept study

probabilistic tractography: anatomical connection probabilities between left and right lingual gyrus (LG) and fusiform gyrus (FG) of 6.5%, 15.7%, 34.2% and 43.6% for the four interregional connections
anatomical connectivity → DCM: connection-specific priors for the coupling parameters

[Figure: zero-mean Gaussian priors whose variance grows with anatomical connection probability]
6.5%  → v = 0.0384
15.7% → v = 0.1070
34.2% → v = 0.5268
43.6% → v = 0.7746

Stephan, Tittgemeyer et al. 2009, NeuroImage
Connection-specific prior variance as a function of
anatomical connection probability
[Figure: 64 candidate mappings (m1–m64) from anatomical connection probability (x-axis, 0–1) to prior variance of the coupling parameter (y-axis, 0–1), each defined by a hyperparameter pair (a, b), e.g. m1: a=−32, b=−32, ..., m62: a=16, b=32, plus m63 & m64]

• 64 different mappings obtained by systematic search across the hyperparameters a and b
• yields anatomically informed (intuitive and counterintuitive) and uninformed priors
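To illustrate the idea (without reproducing the exact parameterisation used in Stephan et al. 2009b), one such mapping could be a logistic function of the connection probability, controlled by hyperparameters a and b; the values of a and b below are arbitrary.

```python
import numpy as np

def prior_variance(phi, a, b):
    """Hypothetical sigmoidal mapping from anatomical connection probability
    phi (0..1) to the prior variance of the corresponding coupling parameter.
    Illustration only; the 64 candidate mappings m1-m64 in the study differ."""
    return 1.0 / (1.0 + np.exp(-(a + b * phi)))

phi = np.array([0.065, 0.157, 0.342, 0.436])   # connection probabilities from the study
print(prior_variance(phi, a=-4, b=8))          # a, b chosen arbitrarily for illustration
```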
[Figure: log group Bayes factors (with a zoomed view around 680–700) and posterior model probabilities (up to ~0.6) across the 64 mappings; the best models are those with anatomically informed priors of an intuitive form]
Models with anatomically informed priors (of an intuitive form) were
clearly superior to anatomically uninformed ones: Bayes factor > 10^9
Overview
• Bayesian model selection (BMS)
• Extended DCM for fMRI: nonlinear, two-state, stochastic
• Embedding computational models in DCMs
• Integrating tractography and DCM
• Applications of DCM to clinical questions
Model-based predictions for single patients
model structure
BMS
set of
parameter estimates
model-based decoding
BMS: Parkinson's disease and treatment
Age-matched
controls
Rowe et al. 2010,
NeuroImage
PD patients
on medication
Selection of action modulates
connections between PFC and SMA
PD patients
off medication
DA-dependent functional disconnection
of the SMA
Model-based decoding by generative embedding
step 1 — model inversion: measurements from an individual subject → subject-specific inverted generative model
step 2 — kernel construction: subject representation in the generative score space
step 3 — support vector classification: separating hyperplane fitted to discriminate between groups
step 4 — interpretation: jointly discriminative model parameters (e.g. A→B, A→C, B→B, B→C)
Brodersen et al. 2011, PLoS Comput. Biol.
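A minimal sketch of steps 3 and 4, assuming the per-subject DCM parameter estimates are already available; the feature matrix below is simulated, with group sizes matching the study (11 patients, 26 controls), and scikit-learn stands in for the classifier actually used.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hypothetical generative score space: one column per coupling parameter
# (e.g. A->B, A->C, B->B, B->C), one row per subject; y holds group labels.
rng = np.random.default_rng(4)
X_patients = rng.normal(loc=[0.1, 0.5, -0.3, 0.2], scale=0.2, size=(11, 4))
X_controls = rng.normal(loc=[0.4, 0.5, -0.1, 0.2], scale=0.2, size=(26, 4))
X = np.vstack([X_patients, X_controls])
y = np.array([1] * 11 + [0] * 26)

clf = SVC(kernel="linear")                      # separating hyperplane (step 3)
acc = cross_val_score(clf, X, y, cv=5).mean()   # cross-validated accuracy
print("cross-validated accuracy:", acc)

# Step 4 (interpretation): the weights of the fitted linear SVM point to the
# jointly discriminative model parameters.
clf.fit(X, y)
print("parameter weights:", clf.coef_.ravel())
```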
Discovering remote or "hidden" brain lesions

detect "down-stream" network changes
→ altered synaptic coupling among non-lesioned regions

Model-based decoding of disease status:
mildly aphasic patients (N=11) vs. controls (N=26)
Connectional fingerprints from a 6-region DCM of auditory areas during speech perception
[Figure: bilateral DCM with medial geniculate body (MGB), Heschl's gyrus (HG/A1) and planum temporale (PT), driven by speech input S]
Brodersen et al. 2011, PLoS Comput. Biol.
Model-based decoding of disease status:
aphasic patients (N=11) vs. controls (N=26)

[Figure: classification accuracies for generative embedding using DCM vs. a multivariate searchlight classification analysis on the same auditory stimuli]

[Figure: voxel-based feature space (activity at voxels such as (−42,−26,10), (−56,−20,10) and (64,−24,4) mm) vs. generative score space (coupling parameters such as L.HG→L.HG, L.MGB→L.MGB, R.HG→L.HG); patients and controls separate far more clearly in the generative score space]
Brodersen et al. 2011, PLoS Comput. Biol.
Definition of ROIs

Are regions of interest defined anatomically or functionally?

• anatomically:
  A: 1 ROI definition and n model inversions → unbiased estimate

• functionally: are the functional contrasts defined across all subjects or between groups?
  – across subjects:
    B: 1 ROI definition and n model inversions → slightly optimistic estimate:
       voxel selection for training set and test set based on test data
    C: repeat n times: 1 ROI definition and n model inversions → unbiased estimate
  – between groups:
    D: 1 ROI definition and n model inversions → highly optimistic estimate:
       voxel selection for training set and test set based on test data and test labels
    E: repeat n times: 1 ROI definition and 1 model inversion → slightly optimistic estimate:
       voxel selection for training set based on test data and test labels
    F: repeat n times: 1 ROI definition and n model inversions → unbiased estimate

Brodersen et al. 2011, PLoS Comput. Biol.
Key methods papers: DCM for fMRI and BMS – part 1
• Brodersen KH, Schofield TM, Leff AP, Ong CS, Lomakina EI, Buhmann JM, Stephan KE (2011) Generative embedding for model-based classification of fMRI data. PLoS Computational Biology 7: e1002079.
• Daunizeau J, David O, Stephan KE (2011) Dynamic causal modelling: a critical review of the biophysical and statistical foundations. NeuroImage 58: 312-322.
• Friston KJ, Harrison L, Penny W (2003) Dynamic causal modelling. NeuroImage 19: 1273-1302.
• Friston K, Stephan KE, Li B, Daunizeau J (2010) Generalised filtering. Mathematical Problems in Engineering 2010: 621670.
• Friston KJ, Li B, Daunizeau J, Stephan KE (2011) Network discovery with DCM. NeuroImage 56: 1202-1221.
• Friston K, Penny W (2011) Post hoc Bayesian model selection. NeuroImage 56: 2089-2099.
• Kasess CH, Stephan KE, Weissenbacher A, Pezawas L, Moser E, Windischberger C (2010) Multi-subject analyses with dynamic causal modeling. NeuroImage 49: 3065-3074.
• Kiebel SJ, Kloppel S, Weiskopf N, Friston KJ (2007) Dynamic causal modeling: a generative model of slice timing in fMRI. NeuroImage 34: 1487-1496.
• Li B, Daunizeau J, Stephan KE, Penny WD, Friston KJ (2011) Stochastic DCM and generalised filtering. NeuroImage 58: 442-457.
• Marreiros AC, Kiebel SJ, Friston KJ (2008) Dynamic causal modelling for fMRI: a two-state model. NeuroImage 39: 269-278.
• Penny WD, Stephan KE, Mechelli A, Friston KJ (2004a) Comparing dynamic causal models. NeuroImage 22: 1157-1172.
• Penny WD, Stephan KE, Mechelli A, Friston KJ (2004b) Modelling functional integration: a comparison of structural equation and dynamic causal models. NeuroImage 23 Suppl 1: S264-274.
Key methods papers: DCM for fMRI and BMS – part 2
• Penny WD, Stephan KE, Daunizeau J, Joao M, Friston K, Schofield T, Leff AP (2010) Comparing families of dynamic causal models. PLoS Computational Biology 6: e1000709.
• Penny WD (2012) Comparing dynamic causal models using AIC, BIC and free energy. NeuroImage 59: 319-330.
• Stephan KE, Harrison LM, Penny WD, Friston KJ (2004) Biophysical models of fMRI responses. Curr Opin Neurobiol 14: 629-635.
• Stephan KE, Weiskopf N, Drysdale PM, Robinson PA, Friston KJ (2007) Comparing hemodynamic models with DCM. NeuroImage 38: 387-401.
• Stephan KE, Harrison LM, Kiebel SJ, David O, Penny WD, Friston KJ (2007) Dynamic causal models of neural system dynamics: current state and future extensions. J Biosci 32: 129-144.
• Stephan KE, Kasper L, Harrison LM, Daunizeau J, den Ouden HE, Breakspear M, Friston KJ (2008) Nonlinear dynamic causal models for fMRI. NeuroImage 42: 649-662.
• Stephan KE, Penny WD, Daunizeau J, Moran RJ, Friston KJ (2009a) Bayesian model selection for group studies. NeuroImage 46: 1004-1017.
• Stephan KE, Tittgemeyer M, Knösche TR, Moran RJ, Friston KJ (2009b) Tractography-based priors for dynamic causal models. NeuroImage 47: 1628-1638.
• Stephan KE, Penny WD, Moran RJ, den Ouden HEM, Daunizeau J, Friston KJ (2010) Ten simple rules for dynamic causal modelling. NeuroImage 49: 3099-3109.
Thank you