Jody Culham
Brain and Mind Institute
Department of Psychology
Western University
http://www.fmri4newbies.com/
Basics of Experimental Design
for fMRI:
Block Designs
Last Update: October 6, 2014
Last Course: Psychology 9223, F2014, Western University
Asking the Right Question
“Attending a poster session at a recent meeting, I was reminded of the old adage
‘To the man who has only a hammer, the whole world looks like a nail.’ In this
case, however, instead of a hammer we had a magnetic resonance imaging
(MRI) machine and instead of nails we had a study. Many of the studies
summarized in the posters did not seem to be designed to answer questions
about the functioning of the brain; neither did they seem to bear on specific
questions about the roles of particular brain regions. Rather, they could best be
described as ‘exploratory’. People were asked to engage in some task while the
activity in their brains was monitored, and this activity was then interpreted post
hoc.”
-- Stephen M. Kosslyn (1999). If neuroimaging is the answer, what is the question? Phil
Trans R Soc Lond B, 354, 1283-1294.
Brains Needed
"...the single most critical piece of equipment is still the
researcher's own brain. All the equipment in the world
will not help us if we do not know how to use it properly,
which requires more than just knowing how to operate
it. Aristotle would not necessarily have been more
profound had he owned a laptop and known how to
program. What is badly needed now, with all these
scanners whirring away, is an understanding of exactly
what we are observing, and seeing, and measuring, and
wondering about."
-- Endel Tulving, interview in Cognitive Neuroscience (2002,
Gazzaniga, Ivry & Mangun, Eds., NY: Norton, p. 323)
Toys Are Not Enough
“Expensive equipment doesn’t merit a lousy study.”
-- Louis Sokoloff
Localization vs. Holism:
A Brief History
Localization for Localization’s Sake
Danckert et al., 2004, Neuropsychologia
• Clinical Research
– presurgical planning
– understanding functional reorganization
• Basic Research
– implicating well-established areas in cognitive tasks
Phrenology
1883
Franz Joseph Gall
Not a completely idiotic idea
(cf. skull endocasts)
Localization
Broca’s area
~1861
Phineas Gage
1848
But not everyone agreed…
Karl Lashley
1890-1958
Hebb’s Cell Assembly
Donald Hebb
1904-1985
Behaviorism (circa 1950s)
Stimulus → [Black Box] → Response
Cognitive Science (circa 1980s)
[Box-and-arrow model: Stimulus → Perception, Attention, Recognition, Memory, Decision Making → Response]
Too many empty boxes?
Early fMRI (1990s)
“Blobology”
The Future of fMRI?
Networks
Map of semantic space
derived from fMRI
Gallant Lab
Forward Inference
Faces activate the Fusiform Face Area (FFA)
Partial Reverse Inference
If the FFA lights up, the stimulus is considered a face
Danger Zone: Reverse Inference
“[Mitt Romney’s] still photos prompted a
significant amount of activity in the
amygdala, indicating voter anxiety…”
Iacoboni et al., 2007, New York Times
Op-Ed, This is Your Brain on Politics
Danger Zone: Reverse Inference
“The Romney amygdala activation might
indicate anxiety, or any number of other
feelings that are associated with the
amygdala -- anger, happiness or even
sexual excitement”
Martha Farah, Neuroethics and Law Blog
Localization for Reverse Inference
• Reverse inference is iffy
• If you want to know how subjects feel about
something, start by just asking them
– “How scary do you find Mitt Romney?”
– “How sexy do you find Mitt Romney?”
• or seek a cheaper proxy (e.g., Skin Conductance
Response as a proxy of arousal)
Why You Shouldn’t Use fMRI
• It’s the most expensive approach
• If you’re interested in behavior, study behavior
• EEG/ERP/MEG have better temporal resolution
• TMS and neuropsychology speak more directly to causality
  – fMRI activation may be epiphenomenal
• neurophysiology/ECoG give more direct access to neural processing
Epiphenomena
Huettel et al. fMRI
What Can fMRI Add?
• explicit testing of models derived from other approaches
• inform and constrain theories of cognition
• whole brain coverage that can constrain or direct data from
other approaches (neurophysiology, ERPs, ECoG)
• investigation of neural mechanisms of elaborated human
functions (language, math, tool use)
• correlations between brain and behavior
• help to understand clinical disorders or development
• look at coding and connections between brain regions
Trends in Cognitive Sciences 2013
What have we learned about the face area?
The face area is activated:
• when faces are perceived or imagined
  → correlation between brain and behavior
• for stimuli at the fovea
  → cues to brain organization
• by circular patterns
  → cues/constraints for modelling
• in certain areas of the monkey brain
  → cues to brain evolution
• for other categories of objects that subjects have extensive experience with
  → debate regarding nature/nurture
• to some degree by other categories of objects
  → debate regarding distributed vs. modular coding in the brain
The fusiform face area may be impaired:
• in some but not all patients who have problems recognizing faces
• in people with autism
  → understanding of brain disorders
Finding the Right Experiment
So you want to do an fMRI study?
Typical cost of performing an fMRI experiment:
Average cost of performing a thought experiment:
Your Salary
CONCLUSION: Unless you are Bill Gates, a thought
experiment is much more efficient!
Thought Experiments
• What do you hope to find?
• What would that tell you about the cognitive process involved?
• Would it add anything to what is already known from other techniques?
• Could the same question be asked more easily, more cheaply or better with other
techniques?
• What would be the alternative outcomes (and/or null hypothesis)?
• Or is there not really any plausible alternative (in which case the experiment may
not be worth doing)?
• If the alternative outcome occurred, would the study still be interesting?
• If the alternative outcome is not interesting, is the hoped-for outcome likely enough
to justify the attempt?
• What would the “headline” be if it worked? Is it sexy enough to warrant the time,
funding and effort?
• “Ideas are cheap.” -- Jody’s former supervisor, Jane Raymond
• Good experimenters generate many ideas and ensure that only the fittest
survive
• What are the possible confounds?
• Can you control for those confounds?
• Has the experiment already been done? “A year of research can save you an
hour on PubMed!”
How Science is Supposed to Work
How Science Often Works
• What has the crowd missed?
• Where do the theories fall short?
• What if…?
• This result doesn’t make any sense!
• “Fishing expeditions” aren’t always bad (just don’t propose them in a grant)
• “One generation’s noise is the next generation’s signal.”
Two Strategies
Explaining your rationale
• Typical grant/proposal motivation:
– We know lots about X but not much about Y. Therefore we need to
study Y.
• Problem: Sometimes we don’t know much about Y because it’s
uninteresting or unimportant
– Absurd example: No one (to my knowledge) has used fMRI to compare nose
picking vs. rest. Does that mean we should rush out to study it?
• Better grant/proposal motivation:
– We know lots about X but not Y. Y is interesting and important
because... Therefore we need to study Y.
Sledgehammers and Gravy
Is your approach building on an established paradigm?
• “If it works, don’t mess with it.”
• If it mostly works, you may want to mess with it but do some validation
If your approach needs a novel paradigm or major tweaks to an established
paradigm:
• Before you run the whole study, test some pilot subjects to test and optimize the approach
• Consider a “sledgehammer” pilot
• Sometimes it’s best to test extreme conditions with high statistical power before including all control conditions or testing subtly different conditions
• For some paradigms with low power (e.g., fMRI adaptation, MVPA), you may not be able to see effects in a few pilot subjects, so you just have to run the whole sample and hope it works
“Never sacrifice the meat & potatoes to get the gravy”
Testing Patients
• fMRI is the art of the barely possible
• neuropsychology is the art of the barely possible
• combining fMRI and neuropsychology can be very
valuable
• BUT it’s (art of the barely possible)²
• If you want to test a paradigm in patients or special
groups (either single cases or group studies),
develop a robust paradigm in control subjects first
• Don’t use patients for pilot testing
Understanding Subtraction Logic
Mental Chronometry
F. C. Donders (Dutch physiologist, 1818-1889)
• use reaction times to infer cognitive processes
• fundamental tool for behavioral experiments in cognitive science
Classic Example
T1: Simple Reaction Time
• Hit button when you see a light
  Detect Stimulus → Press Button
T2: Discrimination Reaction Time
• Hit button when light is green but not red
  Detect Stimulus → Discriminate Color → Press Button
T3: Choice Reaction Time
• Hit left button when light is green and right button when light is red
  Detect Stimulus → Discriminate Color → Choose Button → Press Button
Subtraction Logic
T2 (Detect Stimulus + Discriminate Color + Press Button)
− T1 (Detect Stimulus + Press Button)
= Discriminate Color
Subtraction Logic
T3 (Detect Stimulus + Discriminate Color + Choose Button + Press Button)
− T2 (Detect Stimulus + Discriminate Color + Press Button)
= Choose Button
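To make the arithmetic of subtraction logic concrete, here is a minimal sketch in Python. The reaction times are invented for illustration; only the logic (T2 − T1 and T3 − T2) comes from Donders.

# Hypothetical mean reaction times (ms); the numbers are invented for illustration.
rt_simple = 220          # T1: detect + press
rt_discrimination = 290  # T2: detect + discriminate + press
rt_choice = 350          # T3: detect + discriminate + choose + press

discriminate_time = rt_discrimination - rt_simple  # T2 - T1 = discrimination stage
choose_time = rt_choice - rt_discrimination        # T3 - T2 = response-choice stage

print(f"Discrimination stage: {discriminate_time} ms")  # 70 ms
print(f"Response-choice stage: {choose_time} ms")        # 60 ms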
Limitations of Subtraction Logic
Assumption of pure insertion
• You can insert a component process into a task without
disrupting the other components
• Widely criticized
Top Ten Things Sex and Brain Imaging
Have in Common
10. It's not how big the region is, it's what you do with it.
9. Both involve heavy PETting.
8. It's important to select regions of interest.
7. Experts agree that timing is critical.
6. Both require correction for motion.
5. Experimentation is everything.
4. You often can't get access when you need it.
3. You always hope for multiple activations.
2. Both make a lot of noise.
1. Both are better when the assumption of pure insertion is met. (Now you should get this joke!)
Source: students in the Dartmouth McDonnell-Pew Summer Institute
Subtraction Logic: Brain Imaging Example
Hypothesis (circa early 1990s): Some areas of the brain are
specialized for perceiving objects
Simplest design: Compare pictures of objects vs. a control
stimulus that is not an object
seeing pictures like [intact object images] minus seeing pictures like [texture images] = object perception
(Malach et al., 1995, PNAS)
Objects > Textures: Lateral Occipital Complex (LOC)
(Malach et al., 1995, PNAS)
fMRI Subtraction
[Figure: objects condition minus textures condition = object-selective activation map]
Other Differences
• Is subtraction logic valid here?
• What else could differ between objects and textures?
Objects > Textures
• object shapes
• irregular shapes
• familiarity
– namability
• visual features (e.g., brightness, contrast, etc.)
• actability
• attention-grabbing
Other Subtractions
Several object-vs.-control contrasts (intact > scrambled or textured images) have been used to identify the Lateral Occipital Complex relative to early Visual Cortex (V1)
(Malach et al., 1995, PNAS; Grill-Spector et al., 1998, Neuron; Kourtzi & Kanwisher, 2000, J Neurosci)
Dealing with Attentional Confounds
fMRI data seem highly susceptible to the amount of attention drawn to the stimulus
or devoted to the task.
How can you ensure that activation is not simply due to an attentional confound?
Add an attentional requirement to all stimuli or tasks.
Example: Add a “one-back” task
• subject must hit a button whenever a stimulus repeats
• the repetition detection is much harder for the scrambled shapes
• any activation for the intact shapes cannot be due only to attention
Other common confounds that
reviewers love to hate:
• eye movements
• motor movements
Change only one thing between conditions!
As in Donders’ method, in functional imaging studies, two
paired conditions should differ by the inclusion/exclusion
of a single mental process
How do we control the mental operations that subjects carry
out in the scanner?
i) Manipulate the stimulus
• works best for automatic mental processes
ii) Manipulate the task
• works best for controlled mental processes
DON’T DO BOTH AT ONCE!!!
Source: Nancy Kanwisher
Beware the “Brain Localizer”
• Can have multiple comparisons/baselines
• Most common baseline = rest
• In some fields the baseline may be straightforward
– For example, in vision studies, the baseline is often fixation on
a point on an otherwise blank screen
• Be careful that you don’t try to subtract too much
Reaching – rest
= visual stimulus
+ localization of stimulus
+ arm movement
+ somatosensory feedback
+ response planning
+…
“Our task activated the
occipito-temporo-parieto-fronto-subcortical network”
Another name for this is “the brain”!
What are people doing during “rest”?
What are people really doing during rest?
• Daydreaming, thinking
– “Gawd this is boring. I wonder how long I’ve been in
here. I went at 2:00. It must be about 3:30 now…”
• Remembering, imagining
– “I gotta remember to pick up a carton of milk on the way
home”
• Attending to bodily sensations
– “I really have to pee!”, “My back hurts”, “Get me outta here!”
• Getting drowsy
– “Zzzzzz… I only closed my eyes for a second… really!”
Problems with a Rest Baseline?
• For some tasks (e.g., memory studies), rest is a poor, uncontrolled baseline
  – memory structures (e.g., medial temporal lobes) may be DEactivated in a task compared to rest
• To get a non-memory baseline, some memory researchers put a low-memory task in the baseline condition
  – e.g., hearing numbers and categorizing them as even or odd
Parahippocampal Cortex (Stark et al., 2001, PNAS)
Default Mode Network
Fox and Raichle, 2007, Nat. Rev. Neurosci.
• red/yellow = areas that tend to be activated during tasks
• task > resting baseline
• blue/green = areas that tend to be deactivated during tasks
• task < resting baseline
Situation #1
• Conditions A and B activate area X relative to rest
• This is the kind of time course we would expect in a task-positive region (e.g., intraparietal sulcus)
• A – B difference = 0.5%
0A0B0A0B0A0B0A0B0
Situation #2
• Conditions A and B DEactivate area X relative to rest
• This is the kind of time course we would expect in a task-negative region (e.g., precuneus focus in the default mode network)
• In fact, we could view this as task B yielding more DEactivation than task A
• A – B difference = 0.5%
Situation #3
• Condition A activates area X, Condition B has no effect
• A – B difference = 0.5%
0A0B0A0B0A0B0A0B0
Sometimes our hypotheses may be consistent with some situations (e.g., activation) but not others (e.g., deactivation).
If we have included a rest baseline, we can distinguish the possibilities.
The benefit of testing only two conditions, A and B, without including a rest baseline is that we spend all our imaging time on the conditions we care most about:
0A0B0A0B0A0B0A0B0  (with rest baseline)
BA BA BA BA BA BA BA BA  (without rest baseline)
One potential drawback is that we cannot tell whether conditions differ in activation levels vs. deactivation levels.
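A minimal sketch of the three situations above, in Python. The percent-signal-change values are hypothetical apart from the 0.5% A − B difference quoted on the slides; the point is that the direct A − B contrast is identical in all three cases and only the comparison against rest tells them apart.

# Three situations that are indistinguishable without a rest (0%) baseline.
situations = {
    "1: both activate (task-positive region)":  {"A": 1.0, "B": 0.5},
    "2: both DEactivate (default-mode region)": {"A": -0.5, "B": -1.0},
    "3: A activates, B does nothing":           {"A": 0.5, "B": 0.0},
}

for name, pct in situations.items():
    diff = pct["A"] - pct["B"]
    print(f"Situation {name}: A = {pct['A']:+.1f}%, B = {pct['B']:+.1f}% vs. rest; A - B = {diff:.1f}%")
# A - B = 0.5% in every case; only the sign relative to rest distinguishes activation from deactivation.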
Is concurrent behavioral data necessary?
“Ideally, a concurrent, observable and measurable behavioral response,
such as a yes or no bar-press response, measuring accuracy or reaction
time, should verify task performance.”
-- Mark Cohen & Susan Bookheimer, TINS, 1994
“I wonder whether PET research so far has taken the methods of
experimental psychology too seriously. In standard psychology we need
to have the subject do some task with an externalizable yes-or-no
answer so that we have some reaction times and error rates to analyze –
those are our only data. But with neuroimaging you’re looking at the
brain directly so you literally don’t need the button press… I wonder
whether we can be more clever in figuring out how to get subjects to
think certain kinds of thoughts silently, without forcing them to do some
arbitrary classification task as well. I suspect that when you have people
do some artificial task and look at their brains, the strongest activity you’ll
see is in the parts of the brain that are responsible for doing artificial
tasks.”
-- Steve Pinker, interview in the Journal of Cognitive Neuroscience, 1994
Source: Nancy Kanwisher
Choosing a Block Design
Parameters for Neuroimaging
You decide:
• number of slices
• slice orientation
• slice thickness
• in-plane resolution (field of view and matrix size)
• volume acquisition time (usually = TR)
• length of a run
• number of runs
• duration and sequence of epochs within each run
• counterbalancing within or between subjects
Your physicist can help you decide:
• pulse sequence (e.g., gradient echo vs. spin echo)
• k-space sampling (e.g., echo-planar vs. spiral imaging)
• TR, TE, flip angle, etc.
Tradeoffs
“fMRI is like trying to assemble a ship in a
bottle – every which way you try to move,
you encounter a constraint” -- Mel Goodale
Number of slices vs. volume acquisition time
• the more slices you take, the longer you need to acquire them
• e.g., 30 slices in 2 sec vs. 45 slices in 3 sec
Number of slices vs. in-plane resolution
• the higher your in-plane resolution, the fewer slices you can acquire in a constant volume
acquisition time
• e.g., in 2 sec, 7 slices at 1.5 x 1.5 mm resolution (128 x 128 matrix) vs. 28 slices at 3 mm x
3 mm resolution (64 x 64 matrix)
More Power to Ya!
Statistical Power
• the probability of rejecting the null hypothesis when it is actually false
• “if there’s an effect, how likely are you to find it”?
Effect size
• bigger effects, more power
• e.g., LO localizer (intact vs. scrambled objects) -- 1 run is usually enough
• looking for activation during imagery of objects might require many more runs
Sample size
• larger n, more power
• more subjects
• longer runs
• more runs per subject
Signal:Noise Ratio
• better SNR, more power
• higher magnetic field
• multi-channel coils
• fewer artifacts (physical noise, physiological noise)
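As a rough illustration of how effect size and sample size trade off, here is a hedged power-analysis sketch using statsmodels for a within-subject (one-sample/paired) contrast. The Cohen's d values and n = 12 are assumptions chosen for illustration, not figures from the lecture.

from statsmodels.stats.power import TTestPower

analysis = TTestPower()

# Effect sizes below are illustrative guesses, not measured values.
for d, label in [(1.2, "big effect (e.g., intact vs. scrambled in LO)"),
                 (0.5, "modest effect (e.g., object imagery)")]:
    power = analysis.power(effect_size=d, nobs=12, alpha=0.05, alternative="two-sided")
    n_needed = analysis.solve_power(effect_size=d, power=0.8, alpha=0.05,
                                    alternative="two-sided")
    print(f"{label}: power with n=12 = {power:.2f}; n for 80% power ≈ {n_needed:.0f}")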
How many subjects?
• It depends…
• Garden variety
– n=12+ for Random Effects (RFX – stay tuned)
• may be able to get away with fewer using single subject analyses if a
small number of subjects (e.g., 6) shows consistent effects
• more is better
• Limited samples
  – non-human primates: n=2
  – patients
    • depends what you want to claim
    • single patient has spared function: n=1 (given known control data)
    • single patient vs. group: n=1 patient, multiple controls
    • a group of patients differs from controls: n=many per group (e.g., 12+)
Are fMRI studies underpowered?
• Typical sample sizes (e.g., 12-20) may only find large
effects
• Underpowered studies may have hidden costs
– likelihood of false positives
– researcher’s time trying to make sense of marginal effects
– delays due to harassment by reviewers
• one approach is to spend more money collecting a
larger data set and do multiple analyses (e.g.,
subtractions, MVPA, connectivity)
Put your conditions in the same run!
As far as possible, put the two conditions you want to compare within
the same run.
Why?
• subjects get drowsy and bored
• magnet may have different amounts of noise from one run to another (e.g., spike)
• some preprocessing (e.g., z-normalization) may affect stats differently between runs
Difference in significance ≠ Significant difference
Common flawed logic:
Run1: A – baseline
Run2: B – baseline
…or
Group A: task – baseline
Group B: task - baseline
“A – 0 was significant, B – 0 was not, → Area X is activated by A more than B”
[Figure: bar graph of BOLD activation (%) for conditions A (pink) and B (blue); error bars = 95% confidence limits]
By this logic, there is higher activation for B (blue) than A (pink) in the figure.
Do you agree?
Bottom line: If you want to compare A vs. B, compare A vs. B! Simple, eh?
This issue is particularly important when comparing
two groups (or tasks) where data quality may differ
(e.g., different age groups; patients vs. controls)
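A small simulation sketch of the flawed logic, using scipy; the percent-signal-change means and noise levels are invented. The separate “vs. zero” tests can easily disagree with each other even when the direct A vs. B test shows nothing, which is why only the direct contrast supports the claim “A > B”.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 12
A = rng.normal(loc=0.40, scale=0.50, size=n)   # condition A vs. baseline, per subject
B = rng.normal(loc=0.30, scale=0.80, size=n)   # condition B: similar mean, noisier data

t_A, p_A = stats.ttest_1samp(A, 0.0)    # "A - 0": may well reach significance
t_B, p_B = stats.ttest_1samp(B, 0.0)    # "B - 0": may well not, simply because B is noisier
t_AB, p_AB = stats.ttest_rel(A, B)      # the comparison that actually matters

print(f"A vs 0: p = {p_A:.3f}   B vs 0: p = {p_B:.3f}   A vs B: p = {p_AB:.3f}")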
Run Duration
How long should a run be?
• Short enough that the subject can remain comfortable
without moving or swallowing
• Long enough that you’re not wasting a lot of time restarting
the scanner
• My ideal is ~6 ± 2 minutes
Simple Example Experiment: LO Localizer
Lateral Occipital Complex
• responds when subject
views objects
[Figure: block sequence of Intact Objects, Scrambled Objects, and Blank Screen epochs; time axis in volumes]
One volume (12 slices) every 2 seconds for 272 seconds (4 minutes, 32 seconds)
Condition changes every 16 seconds (8 volumes)
Options for Block Design Sequences
That design was only one of many possibilities. Let’s consider some of the
other options and the pros and cons of each.
Let’s assume we want to have an LO localizer
We need at least two conditions (intact and scrambled objects), but we could consider including a third condition (e.g., fixation)
Let’s assume that in all cases we need 2 sec/volume to cover the range of
slices we require
Let’s also assume a total run duration of 136 volumes (× 2 sec = 272 sec = 4 min, 32 sec)
We’ll start with 2 condition designs…
Convolution of Single Trials
[Figure: neuronal activity convolved with the haemodynamic response function yields the BOLD signal over time]
Slide from Matt Brown
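For readers who want to see the convolution step in code, here is a minimal sketch in Python that builds a boxcar for a two-condition block design with the localizer's timing (TR = 2 s, 16-s epochs, 136 volumes) and convolves it with a commonly used double-gamma HRF approximation. The specific HRF parameters are an assumption, not values given in the lecture.

import numpy as np
from scipy.stats import gamma

TR = 2.0
n_vols = 136
t_vol = np.arange(n_vols) * TR

# Boxcar: alternate 16 s OFF (baseline) and 16 s ON (stimulation).
boxcar = (np.floor(t_vol / 16) % 2).astype(float)

# Double-gamma HRF sampled at the TR (a common approximation, assumed here).
t_hrf = np.arange(0, 32, TR)
hrf = gamma.pdf(t_hrf, 6) - gamma.pdf(t_hrf, 16) / 6.0
hrf /= hrf.sum()

predicted_bold = np.convolve(boxcar, hrf)[:n_vols]  # expected (noise-free) BOLD time course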
Block Design: Short Equal Epochs
[Figure: raw time course and HRF-convolved time course; time in 2-s volumes]
Alternation every 4 sec (2 volumes)
• signal amplitude is weakened by the HRF because the signal doesn’t have enough time to return to baseline
• not too far from the range of breathing frequency (every 4-10 sec) → could lead to respiratory artifacts
• if the design is a task manipulation, the subject is constantly changing tasks and gets confused
Block Design: Short Unequal Epochs
[Figure: raw time course and HRF-convolved time course; time in 2-s volumes]
4 sec stimuli (2 volumes) with 8 sec (4 volumes) baseline
• we’ve gained back most of the HRF-based amplitude loss but the other problems still remain
• now we’re spending most of our time sampling the baseline
Block Design: Long Epochs
The other extreme…
[Figure: raw time course and HRF-convolved time course; time in 2-s volumes]
Alternation every 68 sec (34 volumes)
• more noise at low frequencies
• linear trend confound
• subject will get bored
• very few repetitions – hard to do an eyeball test of significance
Find the “Sweet Spots”
Respiration
• every 4-10 sec (0.3 Hz)
• moving chest distorts susceptibility
Cardiac Cycle
• every ~1 sec (0.9 Hz)
• pulsing motion, blood changes
Solutions
• gating
• avoiding paradigms at those frequencies
You want your paradigm frequency
to be in a “sweet spot” away from
the noise
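A quick way to check where a candidate design sits relative to these noise bands is to compute its fundamental frequency. A small sketch, using the epoch durations from the surrounding slides and the physiological frequencies quoted above:

def paradigm_frequency(epoch_sec, n_conditions=2):
    # One full cycle of a block design = one epoch of each condition.
    return 1.0 / (epoch_sec * n_conditions)

for epoch in (4, 16, 68):
    f = paradigm_frequency(epoch)
    print(f"{epoch:>2}-s epochs -> {f:.4f} Hz ({1 / f:.0f}-s cycle)")
# 4-s epochs give 0.125 Hz, uncomfortably close to respiration;
# 16-s epochs give ~0.031 Hz, away from the respiratory and cardiac bands;
# 68-s epochs give ~0.007 Hz, down where scanner drift and low-frequency noise live.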
Block Design: Medium Epochs
[Figure: raw time course and HRF-convolved time course; time in 2-s volumes]
Alternation every 16 sec (8 volumes)
• allows enough time for the signal to oscillate fully
• not near artifact frequencies
• enough repetitions to see cycles by eye
• a reasonable time for subjects to keep doing the same thing
Block Design: Other Niceties
[Figure: a time course truncated too soon; time in 2-s volumes]
• If you start and end with a baseline condition, you’re less likely to lose information with linear trend removal and you can use the last epoch in an event-related average
Block Design Sequences: Three Conditions
• Suppose you want to add a third condition to act as a
more neutral baseline
• For example, if you wanted to identify visual areas as
well as object-selective areas, you could include resting
fixation as the baseline.
• That would allow two subtractions
– scrambled - fixation → visual areas
– intact - scrambled → object-selective areas
• That would also help you discriminate differences in
activations from differences in deactivations
• Now the options increase.
• For simplicity, let’s keep the epoch duration at 16 sec.
Block Design: Repeating Sequence
• We could just order the epochs in a repeating sequence…
• Problem: There might be order effects
• Solution: Counterbalance with another order
• Problem: If you lose a run (e.g., to head motion), you lose counterbalancing
Block Design: Random Sequence
• We could make multiple runs with the order of conditions
randomized…
• Problem: Randomization can be flukey
• Problem: To avoid flukiness, you’d want to have different
randomization for different runs and different subjects, but
then you’re going to spend ages defining protocols for
analysis
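One way to tame the flukiness while still randomizing is to seed the randomization per subject and run, so every "protocol" can be regenerated exactly at analysis time. A hedged sketch in Python (the condition names and the no-immediate-repeat constraint are illustrative):

import random

CONDITIONS = ["intact", "scrambled", "fixation"]

def block_order(subject, run, n_cycles=4):
    # Reproducible order: the same subject/run always yields the same protocol.
    rng = random.Random(f"sub{subject}-run{run}")
    order = []
    for _ in range(n_cycles):
        cycle = CONDITIONS[:]
        rng.shuffle(cycle)
        # avoid repeating the same condition across a cycle boundary
        while order and cycle[0] == order[-1]:
            rng.shuffle(cycle)
        order.extend(cycle)
    return order

print(block_order(subject=1, run=1))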
Block Design: Regular Baseline
• We could have a fixation baseline between all stimulus
conditions (either with regular or random order)
Benefit: With event-related averaging,
this regular baseline design provides
nice clear time courses, even for a
block design
Problem: You’re spending half of your
scan time collecting the condition you
care the least about
But I have 4 conditions to compare!
Here are a couple of options.
A. Orderly progression
Pro: Simple
Con: May be some confounds (e.g., linear
trend if you predict green&blue > pink&yellow)
B. Random order in each run
Pro: order effects should average out
Con: pain to make various protocols, no possibility to average all data into one
time course, many frequencies involved
C. Kanwisher lab clustered design
• sets of four main condition epochs separated by baseline epochs
• each main condition appears at each location in sequence of four
• two counterbalanced orders (1st half of first order same as 2nd half of
second order and vice versa) – can even rearrange data from 2nd order to
allow averaging with 1st order
Pro: spends most of your n on key
conditions, provides more repetitions
Con: not great for event-related averaging because orders are not balanced (e.g., in the top order, blue is preceded by the baseline 1X, by green 2X, by yellow 1X and by pink 0X)
As you can imagine, the more conditions you try to shove into a run, the thornier the ordering issues become and the fewer repetitions you have for each condition.
But I have 8 conditions to compare!
• Just don’t.
• In my experience, any block design experiment with
more than four conditions becomes unmanageable
and incomprehensible
• Event-related designs might still be an option… stay
tuned…
Clarification re. spatial smoothing
[Figure: the same data shown in 3D, 2D and 1D with no interpolation]
Example 1D row of intensities (3-mm functional voxels shown at 1-mm resolution, so each value spans 3 mm):
0 53 128 155 164 128 155 164 128 127 139 123
Spatial Smoothing
Gaussian kernel
• smooth each voxel by a Gaussian or normal function, such that the nearest neighboring voxels have the strongest weighting
[Figure: Gaussian kernel with Maximum, Half-Maximum and Full Width at Half-Maximum (FWHM) marked; x-axis from -8 to 8 mm; FWHM = 6]
Gaussian Smoothing (4-mm FWHM) on One Voxel
Applied to the 1D voxel row above, the smoothed value at voxel 14 is a weighted average of its neighbors:
Smoothed V14 ≈ (0.1·V11 + 0.3·V12 + 0.75·V13 + 1·V14 + 0.75·V15 + 0.3·V16 + 0.1·V17) / (0.1 + 0.3 + 0.75 + 1 + 0.75 + 0.3 + 0.1)
Repeat for every voxel…
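The same weighted-average idea, applied to the whole row at once, is what standard smoothing routines do. A minimal sketch using scipy; the voxel intensities are the values from the figure, and the FWHM-to-sigma conversion is the standard one:

import numpy as np
from scipy.ndimage import gaussian_filter1d

# 3-mm functional voxels displayed at 1-mm resolution, as in the figure
voxels_mm = np.repeat([0, 53, 128, 155, 164, 128, 155, 164, 128, 127, 139, 123], 3).astype(float)

fwhm_mm = 4.0
sigma_mm = fwhm_mm / (2.0 * np.sqrt(2.0 * np.log(2.0)))  # ≈ FWHM / 2.355

smoothed = gaussian_filter1d(voxels_mm, sigma=sigma_mm)
print(np.round(smoothed[:12], 1))  # first 12 mm of the smoothed row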
Effect of Smoothing
[Figure: the voxel row before smoothing vs. after smoothing (4-mm FWHM)]
Gaussian Smoothing (8-mm FWHM) on One Voxel
With an 8-mm FWHM kernel, voxels within +/- 8 mm now have an effect on the smoothed value.
Why Smooth?
[Figure: profiles of Signal, Noise, and Signal + Noise across outside-brain, gray-matter and white-matter regions, before and after smoothing]
• Signal sums
• Random noise cancels
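A small simulation sketch of the "signal sums, noise cancels" point: a spatially extended patch of signal survives the kernel, while independent voxel-wise noise is averaged down. All values are invented for illustration.

import numpy as np
from scipy.ndimage import gaussian_filter1d

rng = np.random.default_rng(1)
n_vox = 200
signal = np.zeros(n_vox)
signal[80:120] = 1.0                        # an extended patch of "activation"
noise = rng.normal(scale=1.0, size=n_vox)   # independent voxel-wise noise

raw = signal + noise
smoothed = gaussian_filter1d(raw, sigma=3)

def contrast_to_noise(x):
    # mean inside the patch divided by the standard deviation outside it
    return x[80:120].mean() / x[np.r_[0:80, 120:200]].std()

print(f"Raw: {contrast_to_noise(raw):.2f}   Smoothed: {contrast_to_noise(smoothed):.2f}")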
1D - 2D – 3D Gaussians
Effects of Spatial Smoothing on Activity
[Figure: the same activation map with no smoothing, 4-mm, 7-mm, and 10-mm FWHM]
EXTRA SLIDES
Prepare Well: Subjects
• recruit and screen your subjects well in advance
– safety screening
• best to let them read through and self-screen beforehand so you don’t
get any embarrassing situations (e.g., discussions about IUDs,
pregnancy)
– eye glasses
– handedness
• make sure your subjects know how to be good subjects
– http://www.ssc.uwo.ca/psychology/culhamlab/Jody_web/Subject_Info/firsttime_subjects.htm
• make sure you and the subjects can contact each other in
case of problems or delays
• if possible, be a subject yourself to see what the pitfalls and
strategies might be
• remember to bring:
– subject fees (and receipt book)
– consent and screening forms
Prepare Well: Experiments
• test all equipment in advance
• test software under realistic circumstances (same computer, timing and
duration as fMRI experiments)
• make sure you know all of the parameters the technician will want (e.g.,
pulse sequence, timing, slices and orientation)
• at RRI, prepare a spreadsheet with mouseclicks and stopwatch times
• check the timing as you go, especially at the beginning of an experiment
• keep accurate log notes as you go
• check with the technician regularly to ensure that your log notes record
the same run number as the scanner
• attach your timing spreadsheet to the log notes for that subject
• write down any problems that arose (e.g., “subject missed second last
trial”; “subject drowsy through first ~third of run”)
Prepare Well: Postprocessing
• move data to secure location as soon as possible
• save one backup in the rawest form possible
– if advances in reconstruction occur, you will need unprocessed data
to use them
• save other backups at natural points (e.g., backup and
delete 2D data once you’ve made 3D data)
– have redundancy
– don’t put all backups on the same CD/DVD or you’re toast if one is
damaged (CDs aren’t forever like we once thought)
• save full projects to one DVD (or HD partition) once you’re
done so you can reload an entire project if you need to
reanalyze
• keep a subject archive
…
Dealing with Frustration
Murphy's law acts with particular vigour in fMR
imaging:
Number of pieces of equipment required
in an fMRI experiment: ~50
Probability of any one piece of equipment
working in a session: 95%
Probability of everything working in a session: 0.95^50 ≈ 7.7%
Solution for a good
imaging session =
$4 million magnet
+ $3 roll of duct tape
Sign that used to be at
the 1.5 T at MGH
How NOT to do an imaging experiment
• ask a stupid question
– e.g., “I wonder what lights up for nose picking vs. rest”
• compare poorly-defined conditions that differ in many respects
• use a paradigm from another technique (e.g., cognitive psychology)
without optimizing any of the timing for fMRI, e.g., 1 minute epochs
• be naively optimistic
– go straight for the “whipped cream” experiment without starting with a “sledgehammer”
experiment
• never look at raw data, time courses or individual data, just plunk it all
into one big stat model and look at what comes out
• publish a long list of activated foci in every possible comparison
• don’t use any statistical corrections
• write a long discussion on why your task activates the subcortico-occipito-parieto-temporo-frontal network