|
|
|
|
|
Signal = Measurable response to stimulus |
|
Noise = Components of measurement that interfere with
detection of signal |
|
Statistical detection theory: |
|
Understand relationship between stimulus & signal |
|
Characterize noise statistically |
|
Can then devise methods to distinguish noise-only
measurements from signal+noise measurements, and assess their reliability |
|
Methods and usefulness depend strongly on the assumptions |
|
Some methods are robust against erroneous assumptions,
and some are not |
|
|
|
|
|
|
FMRI Stimulus→Signal connection and noise statistics are both poorly
characterized
|
Result: there is no best way to analyze FMRI time series
data: there are only reasonable analysis methods |
|
To deal with data, must make some assumptions about the
signal and noise |
|
Assumptions will be wrong, but must do something |
|
Different kinds of experiments require different kinds of
analyses |
|
Since signal models and questions you ask about the signal
will vary |
|
It is important to understand what is going on, so you can
select and evaluate reasonable analyses |
|
|
|
|
Write down a mathematical model connecting stimulus (or
activation) to signal |
|
Write down a statistical model for the noise |
|
Combine them to produce an equation for measurements given
signal+noise |
|
Equation will have unknown parameters, which are to be
estimated from the data |
|
N.B.: signal may have zero strength |
|
Use statistical detection theory to produce an algorithm
for processing the measurements to assess signal presence and characteristics |
|
e.g., least squares fit of model parameters to data |
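
Schematically (a sketch in generic notation introduced here, not tied to any particular model): with S(t; θ) the signal model, θ its unknown parameters, and ε(t) the noise,

    Z(t) \;=\; S(t;\theta) \;+\; \varepsilon(t)

    \hat{\theta} \;=\; \arg\min_{\theta} \sum_{n} \bigl[\, Z(t_n) - S(t_n;\theta) \,\bigr]^2

The second line is the least squares criterion mentioned above; "signal may have zero strength" corresponds to the amplitude component of θ being 0.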
|
|
|
|
|
Most common forms of FMRI analysis involve fitting an
activation+BOLD model to each voxel's time series separately (AKA
univariate analysis)
|
Some pre-processing steps may do inter-voxel computations |
|
e.g., spatial smoothing to reduce noise |
|
Result of model fits is a set of parameters at each voxel,
estimated from that voxel's data
|
e.g., activation amplitude, delay, shape |
|
SPM = statistical parametric map |
|
Further analysis steps operate on individual SPMs |
|
e.g., combining/contrasting data among subjects |
|
|
|
|
First: Block-trial FMRI data |
|
Activation occurs over a sustained period of time (say,
10 s or longer), usually from more than one stimulation event, in rapid
succession |
|
BOLD (hemodynamic) response accumulates from multiple close
activations and is large |
|
BOLD response is often visible in time series |
|
Next 4 slides: same brain voxel in 9 imaging runs |
|
black curve (noisy) = data |
|
red curve (above data) = ideal model response |
|
blue curve (within data) = model fitted to data |
|
somatosensory task (finger being rubbed) |
|
|
|
|
|
|
Assume a fixed shape h(t) for the HRF

e.g., h(t) = t^8.6 × exp(−t/0.547)   [MS Cohen, 1997]

Convolved with stimulus timing (e.g., AFNI program waver),
get ideal response function r(t)

Assume a form for the baseline

e.g., a + b×t for a constant plus a linear trend

In each voxel, fit data Z(t) to a curve of the form

    Z(t) ≈ a + b×t + β×r(t)

a, b, β are unknown parameters to be calculated in each voxel

a, b are nuisance parameters

β is amplitude of r(t) in data = how much BOLD
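
Putting the pieces above together as equations (a sketch; t_i here denotes the onset time of the i-th stimulus event, a symbol introduced for illustration):

    h(t) = t^{8.6}\, e^{-t/0.547}

    r(t) = \sum_i h(t - t_i)

    Z(t) \approx a + b\,t + \beta\, r(t)

waver carries out the middle step: it builds r(t) from h(t) and the stimulus timing.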
|
|
|
|
|
Usually have more than one class of stimulus or activation
in an experiment

e.g., want to see size of "face" activation vis-à-vis
"house" activation; or, "what" vs. "where" activity

Need to model each separate class of stimulus with a
separate response function r_1(t), r_2(t), r_3(t), …

Each r_j(t) is based on the stimulus timing for
activity in class number j

Calculate a β_j amplitude =
amount of r_j(t) in voxel data time series Z(t)

Contrast β's to see which voxels
have differential activation levels under different stimulus conditions

e.g., statistical test on the question β_1 − β_2 = 0 ?
|
|
|
|
|
In each voxel, fit data Z_n to a curve of the form

    Z_n ≈ a + b×t_n + β×r_n    for n=0,1,…,N-1 (N = # time points)

a, b, β are unknown parameters to be calculated in each voxel

a, b are nuisance baseline parameters

β is amplitude of r(t) in data = how much BOLD

Baseline model might be more complicated for long (> 200
s) continuous imaging runs:

T < 300 s: a + b×t + c×t²

Longer: a + b×t + c×t² + T/200 low frequency components

Might also include as extra baseline components the
estimated subject head movement time series, in order to remove residual
contamination from such artifacts
|
|
|
|
|
In each voxel, fit data Z_n to a curve of the form

    Z_n ≈ a + b×t_n + β_1×r_n^(1) + β_2×r_n^(2) + β_3×r_n^(3) + …

β_j is amplitude in data of r_n^(j) = r_j(t_n);
i.e., how much of the j-th response function is in the data time series

In simple regression, each r_j(t) is derived
directly from stimulus timing and user-chosen HRF model

In terms of stimulus times:

    r_j(t) = Σ_i h(t − t_i^(j))    where t_i^(j) = times at which class #j stimuli occur

If stimulus occurs on the imaging TR time-grid, stimulus
can be represented as a 0-1 time series, where s_k^(j) = 1 if stimulus
#j is on at time t = k×TR, and s_k^(j) = 0 if #j is off at that time:

    r_n^(j) = Σ_k h(t_n − k×TR) × s_k^(j)
|
|
|
Number of equations = number of time points

100s per run, but perhaps 1000s per subject

Number of unknowns usually in range 5–50

Matrix-vector form: collect the baseline and response model time series
into the columns of a matrix R; then the model is z ≈ Rβ, where z is the
voxel data time series and β is the vector of unknown parameters

Least squares solution:

    β̂ = (R^T R)^(-1) R^T z

β̂ denotes an estimate of the true (unknown) β

From β̂, calculate

    ẑ = R β̂

as the fitted model

z − ẑ is the residual time series = noise (we hope)

Collinearity: when matrix R^T R can't be
inverted (some column of R is a combination of the others)

Near collinearity: when inverse exists but is huge
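
To make "the inverse is huge" concrete, one standard measure (an addition here, not from the original slide) is the condition number of R^T R:

    \kappa(R^{\mathsf T}R) \;=\; \frac{\lambda_{\max}(R^{\mathsf T}R)}{\lambda_{\min}(R^{\mathsf T}R)}

Exact collinearity means λ_min = 0 (no inverse exists); near collinearity means κ is very large, so small changes in the data z produce large changes in the estimates β̂.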
|
|
|
|
Choose HRF model h(t) [AKA fixed-model regression] |
|
Build model responses rn(t) to each stimulus
class |
|
Using h(t) and the stimulus timing |
|
Choose baseline model time series |
|
Constant + linear + quadratic + movement? |
|
Assemble model and baseline time series into the columns of
the R matrix |
|
For each voxel time series z, solve z ≈ Rβ for β̂

Individual subject maps: Test the coefficients in β̂ that you
care about for statistical significance

Group maps: Transform the coefficients in β̂ that you care about to
Talairach space, and perform statistics on these values
|
|
|
|
Six preparatory steps are possible |
|
Image registration (realignment): program 3dvolreg |
|
Image smoothing: program 3dmerge |
|
Image masking: program 3dClipLevel or 3dAutomask |
|
Conversion to percentile: programs 3dTstat and 3dcalc |
|
Censoring out time points that are bad: program 3dToutcount
or 3dTqual |
|
Catenating multiple imaging runs into 1 big dataset:
program 3dTcat |
|
|
|
Not all steps are necessary or desirable in any given case |
|
In this first example, will only do registration, since the
data obviously needs this correction |
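
A minimal sketch of what most of these steps might look like on the command line (dataset and file names are illustrative only; options beyond the program names are the common ones, so check each program's -help before relying on them; the percentile conversion with 3dTstat/3dcalc is sketched later, in the event-related example):

    # register all volumes in run 1 to its sub-brick #2
    3dvolreg -base 2 -prefix epi_r1_reg epi_r1+orig

    # spatially smooth every sub-brick with a 4 mm FWHM Gaussian blur
    3dmerge -1blur_fwhm 4.0 -doall -prefix epi_r1_blur epi_r1_reg+orig

    # make a brain-only mask
    3dAutomask -prefix epi_r1_mask epi_r1_reg+orig

    # count outlier voxels at each time point (candidates for censoring)
    3dToutcount epi_r1_reg+orig > outcount_r1.1D

    # catenate runs into one big dataset
    3dTcat -prefix epi_all epi_r1_reg+orig epi_r2_reg+orig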
|
|
|
|
type source epi_r1_decon ; then wait for programs to run

type afni to view what we've got

Switch Underlay to epi_r1_reg (output from 3dvolreg)

Switch Overlay to epi_r1_func (output from 3dDeconvolve)

Sagittal Image and Graph viewers

FIM→Ignore→2 to have graph viewer not plot 1st 2 time pts

FIM→Pick Ideal ; pick epi_r1_ideal.1D
(output from waver)

Define Overlay to set up functional coloring

Olay→Allstim[0] Coef (sets coloring to be from model fit β)

Thr→Allstim[0] t-s (sets threshold to be model fit t-statistic)

See Overlay (otherwise won't see the function!)

Play with threshold slider to get a meaningful activation
map (e.g., t=4 is a decent threshold)
|
|
|
|
Graph viewer: Opt→Tran 1D→Dataset #N to plot the model fit
dataset output by 3dDeconvolve

Will open the control panel for the Dataset #N plugin

Click first Input on ; then choose Dataset
epi_r1_fitts+orig

Also choose Color dk-blue to get a pleasing plot

Then click on Set+Close (to close the plugin panel)

Should now see fitted time series in the graph viewer
instead of data time series

Graph viewer: click Opt→Double Plot→Overlay on to make the
fitted time series appear as an overlay curve

This tool lets you visualize the quality of the data fit

Can also now overlay function on MP-RAGE anatomical by
using Switch Underlay to anat+orig dataset

Probably won't want to graph the anat+orig
dataset!
|
|
|
Last part of script epi_r1_decon:

    3dDeconvolve                            \
        -input epi_r1_reg+orig              \
        -nfirst 2                           \
        -num_stimts 7                       \
        -stim_file 1 epi_r1_ideal.1D        \
        -stim_label 1 AllStim               \
        -stim_file 2 epi_r1_mot.1D'[0]'     \
        -stim_base 2                        \
        -stim_file 3 epi_r1_mot.1D'[1]'     \
        -stim_base 3                        \
        -stim_file 4 epi_r1_mot.1D'[2]'     \
        -stim_base 4                        \
        -stim_file 5 epi_r1_mot.1D'[3]'     \
        -stim_base 5                        \
        -stim_file 6 epi_r1_mot.1D'[4]'     \
        -stim_base 6                        \
        -stim_file 7 epi_r1_mot.1D'[5]'     \
        -stim_base 7                        \
        -tout                               \
        -bucket epi_r1_func_mot             \
        -fitts epi_r1_fitts_mot
|
|
|
|
|
|
|
The experiment analyzed here is in fact more complicated
|
There are 4 related visual stimulus types |
|
One goal is to find areas that are differentially activated
between these different types of stimuli |
|
We have 4 imaging runs, 108 useful time points each
(skipping first 2 in each run) that we will analyze together |
|
Already registered and put together into dataset rall_vr+orig |
|
Stimulus timing files are in subdirectory stim_files/ |
|
Script file waver_ht2 will create HRF models for
regression: |
|
    cd stim_files
    waver -dt 2.5 -GAM -input scan1to4a.1D > scan1to4a_hrf.1D
    waver -dt 2.5 -GAM -input scan1to4t.1D > scan1to4t_hrf.1D
    waver -dt 2.5 -GAM -input scan1to4h.1D > scan1to4h_hrf.1D
    waver -dt 2.5 -GAM -input scan1to4l.1D > scan1to4l_hrf.1D
    cd ..
|
type source waver_ht2 to run this script |
|
Might also use 1dplot to check if things are reasonable |
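
For example, to eyeball one of the newly created regressors before using it (a quick check, not part of the script):

    1dplot stim_files/scan1to4a_hrf.1D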
|
|
|
|
|
Script file decon_ht2 does the job:

    3dDeconvolve -xout -input rall_vr+orig                              \
        -num_stimts 4                                                   \
        -stim_file 1 stim_files/scan1to4a_hrf.1D -stim_label 1 Actions  \
        -stim_file 2 stim_files/scan1to4t_hrf.1D -stim_label 2 Tool     \
        -stim_file 3 stim_files/scan1to4h_hrf.1D -stim_label 3 HighC    \
        -stim_file 4 stim_files/scan1to4l_hrf.1D -stim_label 4 LowC     \
        -concat contrasts/runs.1D                                       \
        -glt 1 contrasts/contr_AvsT.txt   -glt_label 1 AvsT             \
        -glt 1 contrasts/contr_HvsL.txt   -glt_label 2 HvsL             \
        -glt 1 contrasts/contr_ATvsHL.txt -glt_label 3 ATvsHL           \
        -full_first -fout -tout                                         \
        -bucket func_ht2

Run this script by typing source decon_ht2 (takes a few
minutes)
|
|
|
|
|
-glt 1 contrasts/contr_AvsT.txt -glt_label 1 AvsT |
|
-glt 1 contrasts/contr_HvsL.txt -glt_label 2 HvsL |
|
-glt 1 contrasts/contr_ATvsHL.txt -glt_label 3 ATvsHL |
|
GLTs are General Linear Tests

3dDeconvolve provides tests for each regressor separately,
but if you want to test combinations or contrasts of the β weights in each
voxel, you need the -glt option

File contrasts/contr_AvsT.txt = 0 0 0 0 0 0 0 0 1 -1 0 0 (one line with 12 numbers)

Goal is to test a linear combination of the β weights

In this data, we have 12 β weights: 8 baseline parameters (2 per imaging run),
which are first in the β vector, and 4 regressor
magnitudes, which are from -stim_file options

This particular test contrasts the Actions and Tool β's

tests if β_Actions − β_Tool ≠ 0
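
Written out, the GLT computes a linear combination c·β of the 12 fitted coefficients, where c holds the numbers in the contrast file:

    c = (0,\,0,\,0,\,0,\,0,\,0,\,0,\,0,\,1,\,-1,\,0,\,0)

    c \cdot \beta = \sum_{i=1}^{12} c_i\,\beta_i = \beta_{\text{Actions}} - \beta_{\text{Tool}}

3dDeconvolve then attaches a statistic to this combination, testing whether it differs significantly from 0.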
|
|
|
|
|
|
Simple or Fixed-shape regression |
|
We fixed the shape of the HRF |
|
Used waver to generate the signal model from the stimulus
timing |
|
Found the amplitude of the signal model in each voxel |
|
Deconvolution or Variable-shape regression |
|
We allow the shape of the HRF to vary in each voxel, for
each stimulus class |
|
Appropriate when you don't want to over-constrain the
solution by assuming an HRF shape
|
However, need to have enough time points during the HRF in
order to resolve its shape |
|
|
|
|
Can represent arbitrary shapes of the HRF, up and down,
with ease |
|
Meaning of each β_q is
completely obvious
|
Value of HRF at time lag q×TR after activation |
|
3dDeconvolve is set up to deal with stick functions for
representing HRF, so using them is very easy |
|
What is called p here is given by command line option -stim_maxlag
in the program |
|
When choosing p, rule is to estimate longest duration of
neural activation after stimulus onset, then add 10-12 seconds to allow for
slowness of hemodynamic response |
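
The formula "what is called p here" refers to is not reproduced on this page; a sketch of the stick-function (deconvolution) model consistent with the bullets above is

    h(t) = \sum_{q=0}^{p} \beta_q\, \psi(t - q\cdot TR)

where ψ(t) is a "stick" (delta-like) basis function at t = 0, so β_q is simply the value of the HRF at lag q×TR. Combined with the 0-1 stimulus time series s_k defined earlier, each data point is then modeled as

    Z_n \approx \text{baseline}_n + \sum_{q=0}^{p} \beta_q\, s_{n-q}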
|
|
|
|
|
h(t) = 0 for any t not on the TR grid

This limitation means that, using stick functions as our
basis set, we can only model stimuli that are locked to the TR grid

That is, stimuli/activations don't occur at fully general
times, but only occur at integer multiples of TR

For example, suppose an activation is at t = 1.7×TR

We need to model the response at later times, such as 2×TR, 3×TR, etc., so need to model h(t) at
times such as t = 0.3×TR, 1.3×TR, etc., after the stimulus

But the stick function model doesn't allow for such
intermediate times

or, can allow Δt for sticks to be a
fraction of TR for data

e.g., Δt = TR/2, which implies
twice as many β_q parameters to cover the same time interval (time interval
needed is set by hemodynamics)

then would allow stimuli that occur on the
TR-grid or halfway in-between
|
|
|
|
Instead of inputting a signal model time series (e.g.,
created with waver and stimulus timing), you input the stimulus timing
directly |
|
Format: a text file with 0s and 1s, 0 at TR-grid times with
no stimulus, 1 at times with a stimulus

Must specify the maximum lag (in units of TR) that we
expect the HRF to last after each stimulus

This requires you to make a judgment about the activation:
brief or long?

3dDeconvolve returns estimated values for each β_q for each stimulus class

Usually then use a GLT to test the HRF (or pieces of it)
for significance
|
|
|
|
-stim_maxlag k p = option to set the maximum lag to p for
stimulus timing file #k (k=1,2,…)
|
Stimulus timing file input using command line option
-stim_file k filename as before |
|
Can also use -stim_minlag k m option to set the minimum lag
if you want a value m different from 0 |
|
In which case there are p-m+1 parameters in this HRF |
|
-stim_nptr k r = option to specify that there are r
stimulus subintervals per TR, rather than just 1 |
|
This feature can be used to get a finer grained HRF, at the
cost of adding more parameters that need to be estimated |
|
Need to make sure that the input stimulus timing file (from
-stim_file) has r entries per TR |
|
TR for -stim_file and for output HRF is
(data TR) ÷ r
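
Putting those options together, a hypothetical deconvolution call might look like the sketch below (the timing file name, labels, lags, and output prefixes are made up for illustration; only options already described above are used):

    # estimate the HRF at lags 0..14 on a TR/2 grid, since -stim_nptr ... 2
    # is used; the timing file must then contain 2 entries per TR
    3dDeconvolve -input rall_vr+orig                 \
        -num_stimts 1                                \
        -stim_file 1 stim_files/actions_timing.1D    \
        -stim_label 1 Actions                        \
        -stim_minlag 1 0 -stim_maxlag 1 14           \
        -stim_nptr 1 2                               \
        -fout -tout                                  \
        -bucket func_decon -fitts fitts_decon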
|
|
|
|
|
cd AFNI_data2 |
|
data is in ED/ subdirectory (10 runs of 136 images, TR=2 s) |
|
script in file @analyze_ht05 |
|
stimulus timing and GLT contrast files in misc_files/
|
start script now by typing source @analyze_ht05 |
|
will discuss details of script while it runs |
|
This is an event-related study from
Mike Beauchamp (LBC/NIMH) |
|
Four classes of stimuli (short videos) |
|
Tools moving (e.g., a hammer pounding) - TM |
|
People moving (e.g., jumping jacks) - HM |
|
Points outlining tools moving (no objects, just points) - TP |
|
Points outlining people moving - HP |
|
Goal is to find if there is an area that distinguishes
natural motions (HM and HP) from simpler rigid motions (TM and TP) |
|
|
|
|
Registration of each imaging run (there are 10): 3dvolreg |
|
Smooth each volume in space (136 sub-bricks per run): 3dmerge |
|
Create a brain mask: 3dAutomask and 3dcalc |
|
Rescale each voxel time series in each imaging run so that
its average through time is 100: 3dTstat and 3dcalc |
|
If baseline is 100, then a β_q of 5 (say) indicates a 5% signal
change in that voxel at time lag #q after the stimulus
|
Catenate all imaging runs together into one big dataset
(1360 time points): 3dTcat |
|
Compute HRFs and statistics: 3dDeconvolve |
|
Each HRF will have 15 output points (lags from 0 to 14)
with a TR of 1.0 s, since the input data has a TR of 2.0 s and we will be
using the -stim_nptr k r option with r=2 |
|
Average together central points 4..9 of each separate HRF
to get peak % change in each voxel: 3dTstat |
|
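
A hedged sketch of the "rescale to mean 100" and "average the HRF peak" steps (the dataset prefixes and sub-brick range here are illustrative, not the exact names used inside @analyze_ht05):

    # voxel-wise mean of one run, then scale that run so its mean is 100
    3dTstat -prefix mean_r1 ED_r1_vr+orig
    3dcalc  -a ED_r1_vr+orig -b mean_r1+orig \
            -expr '100 * a/b' -prefix ED_r1_scaled

    # average HRF lags 4..9 from the deconvolution output to get one
    # "peak percent change" value per voxel
    3dTstat -mean -prefix TM_peak 'hrf_TM+orig[4..9]'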
|
|
|
-mask = used to turn off processing for some voxels |
|
speed up the program by not processing non-brain voxels |
|
-input1D = used to process a single time series, rather
than a dataset full of time series |
|
test out a stimulus timing sequence |
|
-nodata option can be used to check for collinearity |
|
-censor = used to turn off processing for some time points |
|
for time points that are bad (e.g., too much movement) |
|
-sresp = output standard deviation of HRF estimates |
|
can plot error bars around HRF in AFNI graph viewer |
|
-errts = output residuals (i.e., difference between fitted
model and data) |
|
for statistical analysis of time series noise |
|
-jobs = run with multiple CPUS |
|
extra speed, if you have a dual- or quad-processor system! |
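
For instance, several of these options might be combined in one run; a hypothetical sketch (the mask, censor, timing, and output names are invented for illustration):

    3dDeconvolve -input rall_vr+orig -mask mask+orig     \
        -censor censor_points.1D                         \
        -num_stimts 1                                    \
        -stim_file 1 ideal.1D -stim_label 1 Vis          \
        -sresp 1 sresp_Vis                               \
        -errts resid                                     \
        -fout -tout -bucket func_opts                    \
        -jobs 2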
|
|
|
|
3dDeconvolve is not set up to use tent functions directly |
|
In the past, we have explained in grotesque detail how
to set up a combination of waver, 3dcalc, and 3dDeconvolve to trick the
system into doing deconvolution with tent functions (or other basis sets)
|
However, you are saved from this excruciation |
|
At this moment, we have an interactive Matlab script that
will set up the details for you |
|
In the near future, we will put tent functions directly
into 3dDeconvolve, allowing the direct use of non-TR locked stimulus timing |
|
Date of this promise: 13 July 2004 |
|
|
|
|
|
Can have activations with multiple phases that are not
always in the same time relationship to each other |
|
e.g., solving a visually presented puzzle: |
|
subject sees puzzle |
|
subject cogitates a while |
|
subject responds with solution |
|
With fixed-shape regression, this isn't too hard, since we
can treat each phase as a separate stimulus class and lay down a separate
fixed-shape HRF for each type of stimulus time
|
But fixed-shape regression is probably not reasonable for
this problem, since (at least) the cogitation phase isn't of fixed duration
across the trials
|
Deconvolution assumes that the HRF (although unknown) is
the same among all trials in the same class |
|
|
|
|
|
Noise in FMRI is caused by several factors, not
completely characterized |
|
MR thermal noise (well understood) |
|
Cardiac and respiratory cycles (partly understood) |
|
In principle, could measure these sources of noise
separately and then try to regress them out |
|
Scanner fluctuations (e.g., thermal drift) |
|
Small subject head movements |
|
Very low frequency fluctuations (periods longer than 100 s) |
|
Data analysis should try to remove what can be removed and
allow for the statistical effects of what can't be removed
|
Serial correlation in the noise time series affects the t-
and F-statistics calculated by 3dDeconvolve |
|
At present, nothing is done to correct for this effect |
|
|
|
|
|
Linear models aren't everything

For example, could try to fit an HRF of the form

    h(t) = a × t^b × exp(−t/c)

Unknowns b and c appear nonlinearly in this formula
|
Program 3dNLfim can do nonlinear regression (including
nonlinear deconvolution) |
|
User must provide a C function that computes the model time
series, given a set of parameters (e.g., a, b, c) |
|
Program then drives this C function repeatedly, searching
for the set of parameters that best fit each voxel |
|
Has been used to fit pharmacological wash-in/wash-out
models (difference of two exponentials) to FMRI data acquired during
pharmacological challenges |
|
e.g., injection of nicotine, cocaine, etc. |