Transcript: 05_MEEG_StatsPractica

Practical Aspects of Statistical Analysis of M/EEG Sensor- and Source-Level Data

Jason Taylor

MRC Cognition and Brain Sciences Unit (CBU) | Cambridge Centre for Ageing and Neuroscience (CamCAN), Cambridge, UK
[email protected] | http://imaging.mrc-cbu.cam.ac.uk/meg (wiki)
14 May 2012 | London
Thanks to Rik Henson and Dan Wakeman @ CBU

Overview

The SPM approach to M/EEG Statistics:

A mass-univariate statistical approach to inference regarding effects in space/time/frequency (using replications across trials or subjects).


In Sensor Space:

• 2D time-frequency
• 3D topography-by-time

In Source Space:

• 3D time-frequency contrast images
• 4-ish-D: space by (factorised) time

Alternative approaches: SnPM, PPM, etc.

Example Datasets

CTF Multimodal Face-Evoked Dataset | SPM8 Manual (Rik Henson)
CTF MEG / ActiveTwo EEG: 275 MEG channels, 128 EEG channels, 3T fMRI (with nulls), 1mm³ sMRI, 2 sessions
Stimuli: ~160 face trials/session, ~160 scrambled trials/session
Group: N=12 subjects (Henson et al., 2009a,b,c); Chapter 37, SPM8 Manual

Neuromag Faces Dataset | Dan Wakeman & Rik Henson @ CBU
Neuromag MEG & EEG data: 102 magnetometers, 204 planar gradiometers, 70 EEG channels, HEOG, VEOG, ECG, 1mm³ sMRI, 6 sessions
Stimuli: faces & scrambled-face images (480 trials total)
Group: N=18 (Henson et al., 2011)


Biomag 2010 Award-Winning Dataset!

Neuromag Lexical Decision Dataset | Taylor & Henson @ CBU
Neuromag MEG data: 102 magnetometers, 204 planar gradiometers, HEOG, VEOG, ECG, 1mm³ sMRI, 4 sessions
Stimuli: words & pseudowords presented visually (480 trials total)
Group: N=18; Taylor & Henson (submitted)

The Multiple Comparison Problem

Where is the effect? The curse of too much data

The Multiple Comparison Problem

• The more comparisons we conduct, the more Type I errors (false positives) we will make when the null hypothesis is true.
• We must therefore consider the familywise (rather than per-comparison) error rate.
• Comparisons are often made implicitly, e.g., by viewing ("eyeballing") the data before selecting a time window or set of channels for statistical analysis. RFT correction depends on the search volume.
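The first bullet can be made concrete: under independent tests, the familywise error rate climbs rapidly toward 1 as comparisons accumulate. A minimal sketch (assuming independent tests, which neighbouring M/EEG samples rarely are; real data call for RFT or permutation correction):

```python
def familywise_error_rate(alpha: float, n_comparisons: int) -> float:
    """P(at least one false positive) across n independent tests at level alpha."""
    return 1.0 - (1.0 - alpha) ** n_comparisons

# 1 test at alpha = .05 -> 5% familywise error; 100 independent tests -> ~99.4%
print(familywise_error_rate(0.05, 1))
print(familywise_error_rate(0.05, 100))
```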

→ When is there an effect in time, e.g., in the GFP (1D)?


→ When / at what frequency is there an effect in time-frequency space (2D)?


→ When/where is there an effect in sensor-topography space/time (3D)?


→ When/where is there an effect in source space/time (4-ish-D)?


Random Field Theory (RFT) is a method for correcting for multiple statistical comparisons in N-dimensional spaces (for parametric statistics, e.g., Z-, T-, and F-statistics).

• Takes the smoothness of the images into account. Worsley et al. (1996). Human Brain Mapping, 4:58-73.
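As a rough illustration of why smoothness matters: for a 1D Gaussian field, the expected Euler characteristic (Worsley et al., 1996) yields a corrected threshold between the uncorrected and Bonferroni values. The resel and test counts below are made-up numbers for illustration, not from any real analysis:

```python
import math

def p_z_above(u):
    """Gaussian upper-tail probability P(Z > u)."""
    return 0.5 * math.erfc(u / math.sqrt(2.0))

def expected_ec_1d(u, resels):
    """Expected Euler characteristic of a 1D Gaussian field thresholded at u:
    E[EC] = rho0(u) + R * rho1(u), with the 1D EC density from Worsley et al. (1996)."""
    rho1 = math.sqrt(4.0 * math.log(2.0)) / (2.0 * math.pi) * math.exp(-u * u / 2.0)
    return p_z_above(u) + resels * rho1

def solve_threshold(f, target, lo=0.0, hi=10.0):
    """Bisection: find u with f(u) = target (f decreasing in u on [lo, hi])."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if f(mid) > target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

resels, n_tests, alpha = 10.0, 100, 0.05      # illustrative search volume
u_rft = solve_threshold(lambda u: expected_ec_1d(u, resels), alpha)
u_bonf = solve_threshold(p_z_above, alpha / n_tests)
# The RFT threshold sits between uncorrected (~1.64) and Bonferroni (~3.29)
```

Because a smooth field has fewer independent observations than sample points, the RFT threshold is less severe than Bonferroni over the raw samples.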

GLM: Condition Effects after removing variance due to confounds

Within-subject (1st-level) model:
• Each trial type (6) → one image per trial
• Confounds (4) → one covariate value per trial

Also: group analysis (2nd-level):
• one image per subject per condition
• one covariate value per subject per condition

beta_00* image volumes reflect (adjusted) condition effects. Henson et al., 2008, NeuroImage
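The per-trial design above can be sketched with ordinary least squares; the trial and voxel counts are arbitrary, and SPM's actual estimation adds filtering, masking, and non-sphericity handling:

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_voxels = 60, 500
cond = np.repeat([0, 1], n_trials // 2)          # trial type per trial

X = np.column_stack([
    (cond == 0).astype(float),                   # condition A regressor
    (cond == 1).astype(float),                   # condition B regressor
    rng.standard_normal(n_trials),               # confound covariate (e.g., movement)
])

# Simulate trial images: condition B adds signal at the first 50 voxels
Y = rng.standard_normal((n_trials, n_voxels))
Y[cond == 1, :50] += 1.0

beta = np.linalg.pinv(X) @ Y                     # one beta image per regressor
effect = beta[1] - beta[0]                       # B-minus-A contrast, confound-adjusted
```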

Sensor-space analyses

Where is an effect in time-frequency space?

1 subject, 1 MEG channel (1st-level analysis)

Faces > Scrambled

Morlet wavelet projection → one time × frequency image per trial. Kilner et al., 2005, Neuroscience Letters. CTF Multimodal Faces
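A minimal sketch of the Morlet projection for one channel and one trial; the sampling rate, burst parameters, and 7-cycle wavelet are illustrative choices, not the settings used in Kilner et al.:

```python
import numpy as np

fs = 250.0
t = np.arange(-1.0, 1.5, 1 / fs)
# Simulated trial: a 10 Hz burst from 0.1-0.4 s embedded in noise
sig = np.random.default_rng(1).standard_normal(t.size) * 0.2
burst = (t > 0.1) & (t < 0.4)
sig[burst] += np.sin(2 * np.pi * 10 * t[burst])

def morlet_power(sig, fs, freq, n_cycles=7):
    """Power timecourse at one frequency via complex Morlet convolution."""
    sd = n_cycles / (2 * np.pi * freq)           # Gaussian envelope SD in seconds
    tw = np.arange(-4 * sd, 4 * sd, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * tw) * np.exp(-tw**2 / (2 * sd**2))
    wavelet /= np.abs(wavelet).sum()
    return np.abs(np.convolve(sig, wavelet, mode="same")) ** 2

freqs = np.arange(6, 31, 2)
tf = np.vstack([morlet_power(sig, fs, f) for f in freqs])  # one t-f image per trial
```

Stacking the per-frequency power timecourses gives the 2D time-frequency image that enters the mass-univariate GLM.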

Where is an effect in time-frequency space?

18 subjects, 1 channel (2nd-level analysis)

EEG and MEG (magnetometer) time-frequency SPMs: effect at 100-220 ms, 8-18 Hz (epoch −500 to +1000 ms). Neuromag Faces

Where is an effect in sensor-time space?

3D topography × time image volume (EEG)

1 subject, all EEG channels. For each trial, a topographic projection (2D) at each time point (1D) = a topo-time image volume (3D). Faces vs Scrambled. CTF Multimodal Faces
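One way to build the topo-time volume is a simple inverse-distance interpolation onto a square grid at every time point. SPM uses its own 2D channel layout and interpolation; the sensor positions and data here are random placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)
n_chan, n_time, grid = 30, 100, 32
chan_xy = rng.uniform(-1, 1, size=(n_chan, 2))   # flattened 2D sensor positions
data = rng.standard_normal((n_chan, n_time))     # channels x time, one trial

# Inverse-distance weights from every grid point to every channel
gx, gy = np.meshgrid(np.linspace(-1, 1, grid), np.linspace(-1, 1, grid))
d = np.hypot(gx[..., None] - chan_xy[:, 0], gy[..., None] - chan_xy[:, 1])
w = 1.0 / (d + 1e-6) ** 2
w /= w.sum(axis=-1, keepdims=True)               # grid x grid x channels

volume = w @ data                                # grid x grid x time: 3D topo-time image
```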

Where is an effect in sensor-time space?

Analysis over subjects (2 nd Level)

NOTE for MEG: complicated by variability in head position. SOLUTION(?): virtual transformation to the same position (across sessions, subjects) using Signal Space Separation (SSS; Taulu et al., 2005; Neuromag systems).

Without transformation to device space | With transformation to device space

Stats over 18 subjects, RMS of planar gradiometers. Improved (i.e., more blobs, larger T values) with the head-position transform. Other evidence: ICA 'splitting' with movement. Taylor & Henson (2008), Biomag. Neuromag Lexical Decision
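The planar-gradiometer combination used above can be sketched as a root-mean-square over each orthogonal sensor pair (some pipelines instead use the vector magnitude, sqrt(g1² + g2²); the arrays here are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(3)
n_pairs, n_time = 102, 200
grad_lat = rng.standard_normal((n_pairs, n_time))   # latitudinal gradiometers
grad_lon = rng.standard_normal((n_pairs, n_time))   # longitudinal gradiometers

# Orientation-free amplitude per sensor location: RMS over the two gradiometers
rms = np.sqrt((grad_lat**2 + grad_lon**2) / 2.0)
```

Because the RMS discards signed polarity, it is robust to small differences in source orientation across subjects, at the cost of losing the direction of the effect.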

Where is an effect in sensor-time space?

Analysis over subjects (2 nd Level):

Magnetometers: Pseudowords vs Words, PPM (P > .95 that effect > 0)

Effect size colour scale: −18 to +18 fT

Taylor & Henson (submitted) Neuromag Lexical Decision

Source-space analyses

Where is an effect in source space (3D)?

STEPS:

1. Estimate evoked/induced energy (RMS) at each dipole for a certain time-frequency window of interest, e.g., 100-220 ms, 8-18 Hz; for each condition (Faces, Scrambled); for each sensor type OR fused modalities.
2. Write the data to a 3D image in MNI space, smoothing along the 2D cortical surface.
3. Smooth by a 3D Gaussian.
4. Submit to GLM.

Henson et al., 2007, NeuroImage
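Step 1 might be sketched as band-limited RMS energy per dipole, here using an FFT mask as a stand-in for whatever filtering the SPM pipeline applies; all sizes and windows below are illustrative:

```python
import numpy as np

fs = 250.0
n_sources, n_time = 1000, 256
rng = np.random.default_rng(4)
src = rng.standard_normal((n_sources, n_time))      # source timecourses, one condition

def band_rms(x, fs, fmin, fmax, tmin, tmax):
    """RMS energy per source within a frequency band and time window."""
    freqs = np.fft.rfftfreq(x.shape[-1], 1 / fs)
    spec = np.fft.rfft(x, axis=-1)
    spec[:, (freqs < fmin) | (freqs > fmax)] = 0    # band-pass by spectral masking
    filt = np.fft.irfft(spec, n=x.shape[-1], axis=-1)
    t = np.arange(x.shape[-1]) / fs
    win = (t >= tmin) & (t <= tmax)
    return np.sqrt((filt[:, win] ** 2).mean(axis=-1))

energy = band_rms(src, fs, 8, 18, 0.10, 0.22)       # e.g., 100-220 ms, 8-18 Hz
# -> one value per dipole; write to a 3D image in MNI space, then smooth and run the GLM
```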

Where is an effect in source space (3D)?

RESULTS: Faces > Scrambled

fMRI | EEG | MEG | sensor fusion (E+MEG)

Henson et al., 2011. Neuromag Faces

RESULTS: Faces > Scrambled

fMRI | EEG+MEG+fMRI (with fMRI priors) | EEG+MEG+fMRI (with group-optimised fMRI priors; mean stats)

Henson et al., 2011. Neuromag Faces

Where and When do effects emerge/disappear in source space (4-ish-D: time factorised)?

Condition x Time-window Interactions

Factorising time allows you to infer (rather than simply describe) when effects emerge or disappear.

We've used a data-driven method (hierarchical cluster analysis) in sensor space to define time windows of interest for source-space analyses.

• high-dimensional distance between topography vectors
• cut the cluster tree where the solution is stable over the longest range of distances

Taylor & Henson, submitted. Neuromag Lexical Decision
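A sketch of the clustering idea with SciPy. Here the tree is simply cut at three clusters; the slide's "stable over the longest range of distances" criterion would replace the fixed maxclust choice, and the topographies are simulated:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(5)
n_time, n_chan = 120, 64
# Simulated topography vectors: three successive stable scalp patterns
patterns = rng.standard_normal((3, n_chan))
topo = np.vstack([np.tile(p, (40, 1)) for p in patterns])
topo += 0.1 * rng.standard_normal((n_time, n_chan))

Z = linkage(topo, method="ward")       # high-dimensional distances between timepoints
labels = fcluster(Z, t=3, criterion="maxclust")

# Contiguous runs of one label define candidate time windows
boundaries = np.flatnonzero(np.diff(labels)) + 1
```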


• estimate sources in each sub-time-window
• submit to GLM with conditions & time windows as factors (here: condition effects per time window) → source timecourses

Taylor & Henson, submitted. Neuromag Lexical Decision
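The interaction logic can be sketched for a single source as a difference of differences over subjects; the numbers are simulated, and the real analysis enters these values into a full factorial GLM:

```python
import numpy as np

rng = np.random.default_rng(6)
n_sub = 18
# Source amplitude per subject: conditions x time windows (within-subject factors)
data = rng.standard_normal((n_sub, 2, 2))
data[:, 1, 1] += 2.0        # condition effect present only in time window 2

# Interaction contrast: (C2 - C1) in TW2 minus (C2 - C1) in TW1
inter = (data[:, 1, 1] - data[:, 0, 1]) - (data[:, 1, 0] - data[:, 0, 0])

t = inter.mean() / (inter.std(ddof=1) / np.sqrt(n_sub))   # one-sample t over subjects
```

A significant interaction licenses the inference that the condition effect *emerged* (or disappeared) between the two windows, rather than merely describing when it looked largest.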

→ Condition × time-window interactions

Alternative Approaches


Classical SPM approach: caveats

• The inverse operator induces long-range error correlations (e.g., similar gain vectors from non-adjacent dipoles with similar orientations), making RFT correction conservative.
• Distributions over subjects/voxels may not be Gaussian (e.g., if sparse, as in MSP). (Taylor & Henson, submitted; Biomag 2010)

Non-Parametric Approach (SnPM)

• Robust to non-Gaussian distributions
• Less conservative than RFT when df < 20

Caveats:

• No estimate of effect size (e.g., for power analyses, future experiments)
• Exchangeability is difficult for more complex designs (Taylor & Henson, Biomag 2010)

SnPM Toolbox by Holmes & Nichols: http://go.warwick.ac.uk/tenichols/software/snpm/

p < .05 FWE | CTF Multimodal Faces
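The permutation idea can be sketched with subject-level sign-flipping and a max-statistic null distribution. Voxel counts and effect size are arbitrary; the SnPM toolbox handles real designs and exchangeability:

```python
import numpy as np

rng = np.random.default_rng(7)
n_sub, n_voxels = 18, 200
diff = rng.standard_normal((n_sub, n_voxels))   # condition differences per subject
diff[:, :10] += 1.2                             # true effect at the first 10 voxels

def t_map(d):
    """One-sample t-statistic at every voxel."""
    return d.mean(0) / (d.std(0, ddof=1) / np.sqrt(d.shape[0]))

t_obs = t_map(diff)

# Under H0 the sign of each subject's difference is exchangeable:
# flip signs, recompute, and record the max t over voxels each time
max_t = np.empty(1000)
for i in range(1000):
    signs = rng.choice([-1.0, 1.0], size=(n_sub, 1))
    max_t[i] = t_map(diff * signs).max()

thresh = np.quantile(max_t, 0.95)                          # FWE-corrected threshold
p_fwe = (max_t[None, :] >= t_obs[:, None]).mean(axis=1)    # corrected p per voxel
```

Using the maximum statistic controls the familywise error rate without any Gaussian or smoothness assumptions, which is why it remains valid where RFT is conservative.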


Posterior Probability Maps (PPMs)

• Bayesian inference: no need for RFT (no multiple comparison problem)
• Threshold on the posterior probability of an effect greater than some size
• Can show effect size after thresholding

Caveats:

• Assumes a Gaussian distribution (e.g., of the mean over voxels)

p > .95 (γ > 1 SD) | CTF Multimodal Faces
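The PPM thresholding rule can be sketched for one voxel with a Gaussian posterior; the posterior mean and SD below are made-up values for illustration:

```python
import math

def posterior_prob_exceeds(post_mean, post_sd, gamma):
    """P(effect > gamma) under a Gaussian posterior N(post_mean, post_sd^2)."""
    z = (gamma - post_mean) / post_sd
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Keep a voxel if P(effect > gamma) > .95, with gamma set to, e.g., one prior SD
gamma = 1.0
keep = posterior_prob_exceeds(post_mean=3.0, post_sd=1.0, gamma=gamma) > 0.95
```

Because the decision is a single posterior statement per voxel, no multiplicity correction is applied, and the surviving map can display the effect size itself rather than a test statistic.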

Future Directions

• Extend RFT to 2D cortical surfaces (+1D time)? (e.g., Pantazis et al., 2005, NeuroImage)
• Novel approaches to controlling for multiple comparisons (e.g., 'unique extrema' in sensors: Barnes et al., 2011, NeuroImage)
• Go multivariate?
  - To identify linear combinations of spatial (sensor or source) effects in time and space (e.g., Carbonnell, 2004, NeuroImage; but see Barnes et al., 2011)
  - To detect spatiotemporal patterns in 3D images (e.g., Duzel, 2004, NeuroImage; Kherif, 2003, NeuroImage)

-- The end --

• Thanks for listening
• Acknowledgements:
  - Rik Henson (MRC CBU)
  - Dan Wakeman (MRC CBU / now at MGH)
  - Vladimir, Karl, and the FIL Methods Group
• More info:

http://imaging.mrc-cbu.cam.ac.uk/meg (wiki) [email protected]