|
|
|
|
|
Signal = Measurable response to stimulus |
|
Noise = Components of measurement that interfere with
detection of signal |
|
Statistical detection theory: |
|
Understand relationship between stimulus & signal |
|
Characterize noise statistically |
|
Can then devise methods to distinguish noise-only
measurements from signal+noise measurements, and assess the methods'
reliability
|
Methods and usefulness depend strongly on the assumptions |
|
Some methods are robust against erroneous assumptions,
and some are not |
|
|
|
|
|
|
FMRI Stimulus→Signal connection and noise
statistics are both poorly characterized
|
Result: there is no best way to analyze FMRI time series
data: there are only reasonable analysis methods |
|
To deal with data, must make some assumptions about the
signal and noise |
|
Assumptions will be wrong, but must do something |
|
Different kinds of experiments require different kinds of
analyses |
|
Since signal models and questions you ask about the signal
will vary |
|
It is important to understand what is going on, so you can
select and evaluate reasonable analyses |
|
|
|
|
Write down a mathematical model connecting stimulus (or
activation) to signal |
|
Write down a statistical model for the noise |
|
Combine them to produce an equation for measurements given
signal+noise |
|
Equation will have unknown parameters, which are to be
estimated from the data |
|
N.B.: signal may have zero strength (no activation) |
|
Use statistical detection theory to produce an algorithm
for processing the measurements to assess signal presence and characteristics |
|
e.g., least squares fit of model parameters to data |
|
|
|
|
|
Most common forms of FMRI analysis involve fitting an
activation+BOLD model to each voxel's time series separately (AKA
univariate analysis)
|
Some pre-processing steps may do inter-voxel computations |
|
e.g., spatial smoothing to reduce noise |
|
Result of model fits is a set of parameters at each voxel,
estimated from that voxel's data
|
e.g., activation amplitude, delay, shape |
|
SPM = statistical parametric map |
|
Further analysis steps operate on individual SPMs |
|
e.g., combining/contrasting data among subjects |
|
|
|
|
FMRI only measures changes due to neural activity |
|
Baseline level of signal in a voxel means little or nothing
about neural activity |
|
Also, baseline level tends to drift around slowly (100 s
time scale or so) |
|
Therefore, an FMRI experiment must have at least 2
different neural conditions (tasks and/or stimuli) |
|
Then statistically test for differences in the MRI signal
level between conditions |
|
Many experiments: one condition is rest |
|
Baseline is modeled separately from activation signals, and
baseline model includes rest periods |
|
|
|
|
First: Block-trial FMRI data |
|
Activation occurs over a sustained period of time (say,
10 s or longer), usually from more than one stimulation event, in rapid
succession |
|
BOLD (hemodynamic) response accumulates from multiple close
activations and is large |
|
BOLD response is often visible in time series |
|
Next 5 slides: same brain voxel in 9 imaging runs |
|
black curve (noisy) = data |
|
red curve (above data) = ideal model response |
|
blue curve (within data) = model fitted to data |
|
somatosensory task (finger being rubbed) |
|
|
|
|
|
|
Assume a fixed shape h(t) for the HRF

e.g., h(t) = t^8.6 × exp(−t/0.547) [MS Cohen, 1997]
|
Convolved with stimulus timing (e.g., AFNI program waver),
get ideal response function r(t)
|
Assume a form for the baseline |
|
e.g., a + b×t for a constant plus a linear trend |
|
In each voxel, fit data Z(t) to a curve of the form

Z(t) ≈ a + b×t + β×r(t)

a, b, β are unknown parameters to be calculated in each voxel

a, b are nuisance parameters

β is amplitude of r(t) in data = how much BOLD
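This fit can be sketched numerically. Everything below (TR, stimulus timing, HRF sampling, noise level) is a hypothetical illustration in NumPy, not AFNI code:

```python
import numpy as np

TR, N = 2.5, 100                     # hypothetical TR (s) and # time points
t = np.arange(N) * TR

# fixed-shape HRF h(t) = t^8.6 * exp(-t/0.547), sampled on the TR grid
th = np.arange(0.0, 15.0, TR)
h = th**8.6 * np.exp(-th / 0.547)
h /= h.max()                         # normalize peak to 1

# hypothetical block design: 4 stimuli in a row every 16 TRs
s = np.zeros(N)
for k in range(4):
    s[k::16] = 1

# ideal response r(t): stimulus timing convolved with the HRF (waver's job)
r = np.convolve(s, h)[:N]

# simulate a voxel: baseline a + b*t plus beta*r(t) plus noise
rng = np.random.default_rng(0)
a_true, b_true, beta_true = 100.0, -0.01, 2.0
Z = a_true + b_true * t + beta_true * r + rng.normal(0, 0.5, N)

# least-squares fit of Z(t) ~ a + b*t + beta*r(t)
R = np.column_stack([np.ones(N), t, r])
(a_hat, b_hat, beta_hat), *_ = np.linalg.lstsq(R, Z, rcond=None)
print(round(beta_hat, 2))            # estimate of beta_true = 2.0
```

The fitted β is the "how much BOLD" amplitude; a and b soak up the baseline and drift.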
|
|
|
|
Slow baseline drift (time scale 100 s and longer) makes
doing FMRI with long duration stimuli very difficult |
|
Learning experiment, where the task is done continuously
for ~15 minutes and the subject is scanned to find parts of the brain that
adapt |
|
Pharmaceutical challenge, where the subject is given some
psychoactive drug whose action plays out over 10+ minutes (e.g., cocaine,
ethanol) |
|
Very short duration stimuli that are also very close in
time to each other are very hard to tell apart, since their HRFs will have
90-95% overlap |
|
Binocular rivalry, where percept switches ~ 0.5 s |
|
|
|
|
|
Usually have more than one class of stimulus or activation
in an experiment |
|
e.g., want to see size of "face" activation vis-à-vis
"house" activation; or, "what" vs. "where" activity
|
Need to model each separate class of stimulus with a
separate response function r1(t), r2(t), r3(t), …

Each rj(t) is based on the stimulus timing for
activity in class number j

Calculate a βj amplitude =
amount of rj(t) in voxel data time series Z(t)

Contrast β's to see which voxels
have differential activation levels under different stimulus conditions

e.g., statistical test on the question β1−β2 = 0 ?
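A sketch of this multi-regressor fit and contrast test on simulated data (hypothetical regressors, amplitudes, and noise; NumPy, not AFNI):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 120
t = np.arange(N, dtype=float)

# two hypothetical response functions (already convolved with an HRF)
r1 = ((t % 40) < 10).astype(float)
r2 = ((t % 30) < 10).astype(float)

# simulated voxel: beta1 = 3, beta2 = 1 -> differential activation
Z = 100 - 0.02 * t + 3 * r1 + 1 * r2 + rng.normal(0, 0.8, N)

# fit baseline (a + b*t) plus both regressors
R = np.column_stack([np.ones(N), t, r1, r2])
bhat, *_ = np.linalg.lstsq(R, Z, rcond=None)

# contrast c^T beta for c = [0, 0, 1, -1] and its t-statistic
c = np.array([0.0, 0.0, 1.0, -1.0])
resid = Z - R @ bhat
sigma2 = resid @ resid / (N - R.shape[1])        # noise variance estimate
cov = sigma2 * np.linalg.inv(R.T @ R)            # covariance of beta-hat
t_stat = (c @ bhat) / np.sqrt(c @ cov @ c)
print(round(c @ bhat, 2), round(t_stat, 1))      # estimate near beta1-beta2 = 2
```

A large |t| on the contrast flags voxels where the two stimulus classes evoke different response amplitudes.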
|
|
|
|
You do not model the baseline condition |
|
e.g., rest, visual fixation, high-low tone
discrimination, or some other simple task |
|
FMRI can only measure changes in MR signal levels between
tasks |
|
So you need some simple-ish task to serve as a reference
point |
|
The baseline model (e.g., a + b×t) takes care of the signal level to which the MR signal returns when
the active tasks are turned off
|
Modeling the reference task explicitly would be redundant
(or collinear, to anticipate a forthcoming jargon word) |
|
|
|
|
|
In each voxel, fit data Zn to a curve of the
form

Zn ≈ a + b×tn + β×rn
for n = 0,1,…,N-1 (N = # time pts)

a, b, β are unknown parameters to be calculated in each voxel

a, b are nuisance baseline parameters

β is amplitude of r(t) in data = how much BOLD
|
Baseline model might be more complicated for long (> 150
s) continuous imaging runs: |
|
150 < T < 300 s: a + b×t + c×t²

Longer: a + b×t + c×t² + T/200 low frequency components
|
Might also include as extra baseline components the
estimated subject head movement time series, in order to remove residual
contamination from such artifacts |
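Assembling such an extended baseline model can be sketched as follows (hypothetical run length and motion estimates; NumPy, not AFNI):

```python
import numpy as np

N, TR = 130, 2.0                     # hypothetical run: 260 s long
t = np.arange(N) * TR
tc = 2 * t / t[-1] - 1               # time rescaled to [-1, 1]

# baseline columns: constant + linear + quadratic (for 150 < T < 300 s)
baseline = np.column_stack([np.ones(N), tc, tc**2])

# hypothetical 6-parameter head-motion estimates (e.g., from 3dvolreg),
# included as extra baseline columns to absorb movement artifacts
rng = np.random.default_rng(2)
motion = rng.normal(0, 0.1, (N, 6)).cumsum(axis=0)

# these nuisance columns join the stimulus regressors in the R matrix
nuisance = np.hstack([baseline, motion])
print(nuisance.shape)                # (130, 9)
```

The stimulus response columns are appended to these 9 nuisance columns before the least-squares solve.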
|
|
|
|
|
In each voxel, fit data Zn to a curve of the
form

Zn ≈ a + b×tn + Σj βj × rn(j)
βj is
amplitude in data of rn(j) = rj(tn);
i.e., how much of the j-th response function is in the data time
series
|
In simple regression, each rj(t ) is derived
directly from stimulus timing and user-chosen HRF model |
|
In terms of stimulus times: |
|
If stimulus occurs on the imaging TR time-grid, stimulus
can be represented as a 0-1 time series: |
|
where sk(j) = 1 if stimulus
#j is on at time t = k×TR, and sk(j) = 0
if #j is off at that time:
|
|
|
|
|
Number of equations = number of time points |
|
100's per run, but perhaps 1000's per subject

Number of unknowns usually in range 5–50
|
Least squares solution: β̂ = [Rᵀ R]⁻¹ Rᵀ z
|
β̂ denotes an estimate of the true
(unknown) β
|
From β̂, calculate ẑ = R β̂
as the fitted model
|
|
|
|
|
|
|
z − R β̂ is the residual time series = noise (we hope)
|
Collinearity: when the matrix Rᵀ R can't be
inverted
|
Near collinearity: when inverse exists but is huge |
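A quick numerical illustration of near collinearity (hypothetical regressors): the condition number of the design matrix blows up when one column nearly duplicates another:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100
t = np.arange(N, dtype=float)
r1 = rng.normal(size=N)

# well-conditioned design: constant, trend, one regressor
R_ok = np.column_stack([np.ones(N), t / N, r1])

# nearly collinear: a 4th column that almost duplicates r1
R_bad = np.column_stack([np.ones(N), t / N, r1,
                         r1 + 1e-8 * rng.normal(size=N)])

# condition number = largest / smallest singular value
cond = [np.linalg.cond(R) for R in (R_ok, R_bad)]
print(cond)       # first is small (good), second is astronomically large
```

A huge condition number means the least-squares solution exists but is dominated by noise, which is why 3dDeconvolve prints this diagnostic at startup.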
|
|
|
|
Choose HRF model h(t) [AKA fixed-model regression] |
|
Build model responses rn(t) to each stimulus
class |
|
Using h(t) and the stimulus timing |
|
Choose baseline model time series |
|
Constant + linear + quadratic + movement? |
|
Assemble model and baseline time series into the columns of
the R matrix |
|
For each voxel time series z, solve z ≈ Rβ for the estimate β̂
|
Individual subject maps: Test the coefficients in β̂ that you
care about for statistical significance

Group maps: Transform the coefficients in β̂ that you care about to
Talairach space, and perform statistics on these values
|
|
|
|
Six preparatory steps are common: |
|
Image registration (realignment): program 3dvolreg |
|
Image smoothing: program 3dmerge |
|
Image masking: program 3dClipLevel or 3dAutomask |
|
Conversion to percentile: programs 3dTstat and 3dcalc |
|
Censoring out time points that are bad: program 3dToutcount
or 3dTqual |
|
Catenating multiple imaging runs into 1 big dataset:
program 3dTcat |
|
|
|
Not all steps are necessary or desirable in any given case |
|
In this first example, will only do registration, since the
data obviously needs this correction |
|
|
|
|
type source epi_r1_decon ; then wait for programs to run |
|
type afni to view what we've got
|
Switch Underlay to epi_r1_reg (output from 3dvolreg) |
|
Switch Overlay to epi_r1_func (output from 3dDeconvolve) |
|
Sagittal Image and Graph viewers |
|
FIM→Ignore→2 to have graph viewer not plot 1st 2 time pts
|
FIM→Pick Ideal ; pick epi_r1_ideal.1D
(output from waver)
|
Define Overlay to set up functional coloring |
|
Olay→Allstim[0] Coef (sets coloring to be from model fit β)
|
Thr→Allstim[0] t-s (sets threshold to be model fit t-statistic)
|
See Overlay (otherwise won't see the function!)
|
Play with threshold slider to get a meaningful activation
map (e.g., t =4 is a decent threshold) |
|
|
|
|
Graph viewer: Opt→Tran 1D→Dataset #N to plot the model fit
dataset output by 3dDeconvolve
|
Will open the control panel for the Dataset #N plugin |
|
Click first Input on; then choose Dataset
epi_r1_fitts+orig
|
Also choose Color dk-blue to get a pleasing plot |
|
Then click on Set+Close (to close the plugin panel) |
|
Should now see fitted time series in the graph viewer
instead of data time series |
|
Graph viewer: click Opt→Double Plot→Overlay on to make the
fitted time series appear as an overlay curve
|
This tool lets you visualize the quality of the data fit |
|
Can also now overlay function on MP-RAGE anatomical by
using Switch Underlay to anat+orig dataset |
|
Probably won't want to graph the anat+orig
dataset!
|
|
|
Last part of script epi_r1_decon: |
|
|
|
3dDeconvolve                            \
    -input epi_r1_reg+orig              \
    -nfirst 2                           \
    -num_stimts 7                       \
    -stim_file 1 epi_r1_ideal.1D        \
    -stim_label 1 AllStim               \
    -stim_file 2 epi_r1_mot.1D'[0]'     \
    -stim_base 2                        \
    -stim_file 3 epi_r1_mot.1D'[1]'     \
    -stim_base 3                        \
    -stim_file 4 epi_r1_mot.1D'[2]'     \
    -stim_base 4                        \
    -stim_file 5 epi_r1_mot.1D'[3]'     \
    -stim_base 5                        \
    -stim_file 6 epi_r1_mot.1D'[4]'     \
    -stim_base 6                        \
    -stim_file 7 epi_r1_mot.1D'[5]'     \
    -stim_base 7                        \
    -tout                               \
    -bucket epi_r1_func_mot             \
    -fitts epi_r1_fitts_mot
|
|
|
|
|
|
|
The experiment analyzed here is in fact more complicated
|
There are 4 related visual stimulus types |
|
One goal is to find areas that are differentially activated
between these different types of stimuli |
|
We have 4 imaging runs, 108 useful time points each
(skipping first 2 in each run) that we will analyze together |
|
Already registered and put together into dataset rall_vr+orig |
|
Stimulus timing files are in subdirectory stim_files/ |
|
Script file waver_ht2 will create HRF models for
regression: |
|
cd stim_files |
|
waver -dt 2.5 -GAM -input scan1to4a.1D
> scan1to4a_hrf.1D |
|
waver -dt 2.5 -GAM -input scan1to4t.1D
> scan1to4t_hrf.1D |
|
waver -dt 2.5 -GAM -input scan1to4h.1D
> scan1to4h_hrf.1D |
|
waver -dt 2.5 -GAM -input scan1to4l.1D
> scan1to4l_hrf.1D |
|
cd .. |
|
Type source waver_ht2 to run this script |
|
Might also use 1dplot to check if input .1D files are
reasonable |
|
|
|
|
|
Script file decon_ht2 does the job: |
|
3dDeconvolve -xout -input rall_vr+orig                             \
    -num_stimts 4                                                  \
    -stim_file 1 stim_files/scan1to4a_hrf.1D -stim_label 1 Actions \
    -stim_file 2 stim_files/scan1to4t_hrf.1D -stim_label 2 Tool    \
    -stim_file 3 stim_files/scan1to4h_hrf.1D -stim_label 3 HighC   \
    -stim_file 4 stim_files/scan1to4l_hrf.1D -stim_label 4 LowC    \
    -concat contrasts/runs.1D                                      \
    -glt 1 contrasts/contr_AvsT.txt -glt_label 1 AvsT              \
    -glt 1 contrasts/contr_HvsL.txt -glt_label 2 HvsL              \
    -glt 1 contrasts/contr_ATvsHL.txt -glt_label 3 ATvsHL          \
    -full_first -fout -tout                                        \
    -bucket func_ht2
|
Run this script by typing source decon_ht2 (takes a few
minutes) |
|
|
|
|
|
-glt 1 contrasts/contr_AvsT.txt -glt_label 1 AvsT |
|
-glt 1 contrasts/contr_HvsL.txt -glt_label 2 HvsL |
|
-glt 1 contrasts/contr_ATvsHL.txt -glt_label 3 ATvsHL |
|
GLTs are General Linear Tests |
|
3dDeconvolve provides tests for each regressor separately,
but if you want to test combinations or contrasts of the β weights in each
voxel, you need the -glt option
|
File contrasts/contr_AvsT.txt = 0 0 0 0 0 0 0 0 1 -1 0 0 (one line with 12 numbers) |
|
Goal is to test a linear combination of the β weights

In this data, we have 12 β weights: 8 baseline parameters (2 per imaging run),
which are first in the β vector, and 4 regressor
magnitudes, which are from -stim_file options

This particular test contrasts the Actions and Tool β's

tests if βActions − βTool ≠ 0
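As a sanity check on such contrast files, the weight vector can be assembled programmatically (a sketch; the 8-baseline + 4-stimulus layout is the one described above):

```python
import numpy as np

n_runs, base_per_run = 4, 2              # 2 baseline parameters per run
labels = ["Actions", "Tool", "HighC", "LowC"]
n_base = n_runs * base_per_run           # baseline betas come first

# contrast testing beta_Actions - beta_Tool, zero-padded over the baseline
c = np.zeros(n_base + len(labels))
c[n_base + labels.index("Actions")] = 1
c[n_base + labels.index("Tool")] = -1

print(" ".join(str(int(w)) for w in c))  # 0 0 0 0 0 0 0 0 1 -1 0 0
```

Building the vector from the labels, instead of counting zeros by hand, avoids the classic off-by-one when the baseline model changes.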
|
|
|
|
|
|
Simple or Fixed-shape regression: |
|
We fixed the shape of the HRF |
|
Used waver to generate the signal model from the stimulus
timing |
|
Found the amplitude of the signal model in each voxel |
|
Deconvolution or Variable-shape regression: |
|
We allow the shape of the HRF to vary in each voxel, for
each stimulus class |
|
Appropriate when you don't want to over-constrain the
solution by assuming an HRF shape
|
Caveat: need to have enough time points during the HRF in
order to resolve its shape |
|
|
|
|
Can represent arbitrary shapes of the HRF, up and down,
with ease |
|
Meaning of each βq is
completely obvious

Value of HRF at time lag q×TR after activation
|
3dDeconvolve is set up to deal with stick functions for
representing HRF, so using them is very easy |
|
What is called p here is given by command line option -stim_maxlag
in the program |
|
When choosing p, rule is to estimate longest duration of
neural activation after stimulus onset, then add 10-12 seconds to allow for
slowness of hemodynamic response |
|
|
|
|
|
h(t) = 0 for any t not on the TR grid
|
This limitation means that, using stick functions as our
basis set, we can only model stimuli that are locked to the TR grid |
|
That is, stimuli/activations don't occur at fully general
times, but only occur at integer multiples of TR
|
For example, suppose an activation is at t =1.7×TR |
|
We need to model the response at later times, such as 2×TR, 3×TR, etc., so need to model h(t ) at
times such as t=(2-1.7)×TR=0.3×TR, t=1.3×TR, etc., after the stimulus |
|
But the stick function model doesn't allow for such
intermediate times

or, can allow Δt for sticks to be a
fraction of TR for data

e.g., Δt = TR/2 , which implies
twice as many βq parameters to cover the same time interval (time interval
needed is set by hemodynamics)

then would allow stimuli that occur on
TR-grid or halfway in-between
|
|
|
Instead of inputting a signal model time series (e.g.,
created with waver and stimulus timing), you input the stimulus timing
directly |
|
Format: a text file with 0s and 1s; 0 at TR-grid times with
no stimulus, 1 at times with stimulus
|
Must specify the maximum lag (in units of TR) that we
expect HRF to last after each stimulus |
|
This requires you to make a judgment about the activation:
brief or long?
|
3dDeconvolve returns estimated values for each βq for each stimulus class
|
Usually then use a GLT to test the HRF (or pieces of it)
for significance |
|
|
|
|
-stim_maxlag k p = option to set the maximum lag to p for
stimulus timing file #k, for k=1,2,…
|
Stimulus timing file input using command line option
-stim_file k filename as before |
|
Can also use -stim_minlag k m option to set the minimum lag
if you want a value m different from 0 |
|
In which case there are p-m+1 parameters in this HRF |
|
-stim_nptr k r = option to specify that there are r
stimulus subintervals per TR, rather than just 1 |
|
This feature can be used to get a finer grained HRF, at the
cost of adding more parameters that need to be estimated |
|
Need to make sure that the input stimulus timing file (from
-stim_file) has r entries per TR |
|
TR for -stim_file and for output HRF is
data TR ÷ r
|
|
|
|
|
cd AFNI_data2 |
|
data is in ED/ subdirectory (10 runs of 136 images, TR=2 s) |
|
script in file @analyze_ht05 |
|
stimulus timing and GLT contrast files in misc_files/
|
start script now by typing source @analyze_ht05 |
|
will discuss details of script while it runs |
|
Event-related study from Mike Beauchamp |
|
Four classes of stimuli (short videos) |
|
Tools moving (e.g., a hammer pounding) - TM |
|
People moving (e.g., jumping jacks) - HM |
|
Points outlining tools moving (no objects, just points) - TP |
|
Points outlining people moving - HP |
|
Goal is to find if there is an area that distinguishes
natural motions (HM and HP) from simpler rigid motions (TM and TP) |
|
|
|
|
Registration of each imaging run (there are 10): 3dvolreg |
|
Smooth each volume in space (136 sub-bricks per run): 3dmerge |
|
Create a brain mask: 3dAutomask and 3dcalc |
|
Rescale each voxel time series in each imaging run so that
its average through time is 100: 3dTstat and 3dcalc |
|
If baseline is 100, then a βq of 5 (say) indicates a 5% signal
change in that voxel at time lag #q after stimulus
|
Catenate all imaging runs together into one big dataset
(1360 time points): 3dTcat |
|
Compute HRFs and statistics: 3dDeconvolve |
|
Each HRF will have 15 output points (lags from 0 to 14)
with a TR of 1.0 s, since the input data has a TR of 2.0 s and we will be
using the -stim_nptr k r option with r=2 |
|
Average together central points 4..9 of each separate HRF
to get peak % change in each voxel: 3dTstat |
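The rescale-to-100 step above can be sketched as follows (hypothetical data; in AFNI, 3dTstat and 3dcalc do this per run):

```python
import numpy as np

rng = np.random.default_rng(5)
# hypothetical 4-voxel x 136-timepoint run, arbitrary scanner units
data = rng.normal(0, 10, (4, 136)) + np.array([[900], [1200], [50], [400]])

mean = data.mean(axis=1, keepdims=True)   # 3dTstat-style per-voxel mean
scaled = 100 * data / mean                # 3dcalc-style rescaling

print(np.round(scaled.mean(axis=1)))      # each voxel now averages 100
```

After this, fitted amplitudes are directly interpretable as percent signal change, regardless of each voxel's raw intensity.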
|
|
|
|
-mask = used to turn off processing for some voxels |
|
speed up the program by not processing non-brain voxels |
|
-input1D = used to process a single time series, rather
than a dataset full of time series |
|
test out a stimulus timing sequence |
|
-nodata option can be used to check for collinearity |
|
-censor = used to turn off processing for some time points |
|
for time points that are bad (e.g., too much movement) |
|
-sresp = output standard deviation of HRF estimates |
|
can plot error bands around HRF in AFNI graph viewer |
|
-errts = output residuals (i.e., difference between fitted
model and data) |
|
for statistical analysis of time series noise |
|
-jobs N = run with multiple CPUs (N of them)
|
extra speed, if you have a dual- or quad-processor system! |
|
|
|
|
3dDeconvolve is not set up to use tent functions directly |
|
In the past, we explained in grotesque detail how
to set up a combination of waver, 3dcalc, and 3dDeconvolve to trick the
system into doing deconvolution with tent functions (or other basis sets)
|
However, you are saved from this excruciation |
|
In the near future, we will put tent functions directly
into 3dDeconvolve, allowing the direct use of non-TR locked stimulus timing |
|
Date of this promise: 13 July 2004 (AFNI Summer Bootcamp) |
|
|
|
|
See http://afni.nimh.nih.gov/doc/misc/3dDeconvolveSummer2004/ |
|
Equation solver: Gaussian elimination to compute R matrix
pseudo-inverse was replaced by SVD (like principal components) |
|
Advantage: smaller sensitivity to computational errors |
|
Condition number and inverse error values are printed
at program startup, as measures of accuracy of pseudo-inverse |
|
Condition number < 1000 is good |
|
Inverse error < 1.0e-10 is good |
|
3dDeconvolve_f program can be used to compute in single
precision (7 decimal places) rather than double precision (16) |
|
For better speed, but with lower numerical accuracy |
|
Best to do at least one run both ways to check if results
differ significantly (SVD solver
should be safe) |
|
|
|
|
Matrix inputs for -glt option can now indicate lots of zero
entries using a notation like 30@0 1 -1 0 0 to indicate that 30 zeros precede
the rest of the input line |
|
Example: 10 imaging runs and -polort 2 for baseline |
|
Can put comments into matrix and .1D files, using lines
that start with # or // |
|
Can use \ at end of line to specify continuation |
|
Matrix input for GLTs can also be expressed symbolically,
using the names given with the -stim_label options: |
|
-stim_label 1 Ear -stim_maxlag 1 4 |
|
-stim_label 2 Wax -stim_maxlag 2 4 |
|
Old style GLT might be |
|
{zeros for baseline} 0 0 1 1 1 0 0 -1
-1 -1 |
|
New style (via -gltsym option) is |
|
Ear[2..4] -Wax[2..4] |
|
|
|
|
|
New -xsave option saves the R matrix (and other info) into
a file that can be used later with the -xrestore option to calculate some
extra GLTs, without re-doing the entire analysis (goal: save some time) |
|
-input option now allows multiple 3D+time datasets to be
specified to automatically catenate individual runs into one file on the
fly |
|
Avoids having to use program 3dTcat |
|
User must still supply full-length .1D files for the
various input time series (e.g., -stim_file) options |
|
-concat option will be ignored if this option is used |
|
Break points between runs will be taken as the break points
between the various -input datasets |
|
-polort option now uses Legendre polynomials instead of
simple 1, t, t², t³, … basis functions (more
numerical accuracy)
|
|
|
|
|
3dDeconvolve now checks for duplicate -stim_file names and
for duplicate matrix columns, and prints warnings |
|
These are not fatal errors |
|
If the same regressor is given twice, each copy will only
get half the amplitude (the beta weight) in the solution |
|
All-zero regressors are now allowed |
|
Will get zero weight in the solution |
|
A warning message will be printed to the terminal |
|
Example: task where subject makes a choice for each
stimulus |
|
You want to analyze correct and incorrect trials as separate
cases
|
What if a subject makes no mistakes? Hmmm |
|
|
|
|
|
Direct input of stimulus timing plus a response model,
using new -stim_times option |
|
-stim_times k tname rtype |
|
k = stimulus index (from 1 to -num_stimts value) |
|
tname = name of .1D file containing stimulus times in units
of seconds (important: TR value in dataset header must be correct!) |
|
rtype = name of response model to use for each stimulus
read from tname file |
|
GAM = gamma variate function from waver |
|
TENT(b,c,n) = tent function deconvolution, ranging from
time s+b to s+c after each stimulus time s, with n basis functions |
|
several other rtype options available |
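The TENT(b,c,n) idea can be sketched numerically (a hypothetical helper, not 3dDeconvolve's internal code): each tent is a triangular function peaking at one of n knots evenly spaced from b to c:

```python
import numpy as np

def tent_basis(t, b, c, n):
    """Evaluate n tent (triangular) functions spanning [b, c] at times t."""
    L = (c - b) / (n - 1)                 # knot spacing
    knots = b + L * np.arange(n)          # tent k peaks at knots[k]
    # column k: max(0, 1 - |t - knot_k| / L)
    return np.maximum(0.0, 1.0 - np.abs(t[:, None] - knots[None, :]) / L)

t = np.linspace(0.0, 14.0, 141)
B = tent_basis(t, 0.0, 14.0, 8)           # a TENT(0,14,8)-style basis
print(B.shape)                            # (141, 8)
# inside [b, c] the tents sum to 1, so the fitted HRF is the piecewise
# linear interpolation of the n estimated amplitudes
```

With this basis, each regression coefficient is the HRF value at its knot, which is what makes tent deconvolution easy to interpret.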
|
|
|
|
|
Recall: -iresp option outputs the HRF model for one
stimulus |
|
When used with -stim_times, values are usually output using
the dataset TR time spacing |
|
Can change to a different grid via new -TR_times dt
option, which sets the output grid spacing for -iresp to dt for HRF models
computed via -stim_times
|
Is useful for producing nice smooth pictures of HRF |
|
Also works with -sresp option (= std.dev. of HRF) |
|
Difficulty: using GLTs with results from -stim_times |
|
GLTs operate on regression coefficients |
|
For most rtype models, regression coefficients don't
correspond directly to HRF amplitudes
|
Exceptions: GAM, TENT, BLOCK |
|
Planned solution: see next slide |
|
|
|
|
Automatic baseline normalization of input time series |
|
Automatic mask generation (à la 3dAutomask program)
|
Spatial blur (à la 3dmerge -1blur)
|
Time shift input before analysis (à la 3dTshift program)
|
Negative lags for -stim_file method of deconvolution |
|
for pre-stimulus cognition/anticipation |
|
-stim_times already allows pre-stimulus response |
|
Area under curve addition to -gltsym to allow testing of
pieces of HRF models from -stim_times |
|
Slice- and/or voxel-dependent regressors |
|
For physiological noise cancellation, etc. |
|
Floating point output format |
|
Currently is shorts + scale factor |
|
|
|
|
|
|
Can have activations with multiple phases that are not
always in the same time relationship to each other; e.g.: |
|
subject gets cue #1 |
|
variable waiting time (hold) |
|
subject gets cue #2, emits response |
|
which depends on both cue #1 and #2 |
|
Cannot treat this as one event with one HRF, since the
different waiting times will result in different overlaps in separate
responses from cue #1 and cue #2 |
|
Solution is multiple HRFs: separate HRF (fixed shape or
deconvolution) for cue #1 times and for cue #2 times |
|
Must have significant variability in inter-cue waiting
times, or will get a nearly-collinear model |
|
impossible to tell tail end of HRF #1
from the start of HRF #2, if always locked together in same temporal
relationship |
|
How much variability is significant? Good question. |
|
|
|
|
|
Solving a visually presented puzzle: |
|
subject sees puzzle |
|
subject cogitates a while |
|
subject responds with solution |
|
The problem is that we expect some voxels to be significant
in phase (b) as well as phases (a) and/or (c) |
|
Variable length of phase (b) means that shape for its
response varies between trials |
|
Which is contrary to the whole idea of
averaging trials together to get decent statistics (which is basically what
linear regression amounts to, in a fancy sort of way) |
|
Could assume response amplitude in phase (b) is constant
across trials, and response duration varies directly with time between phases
(a) and (c) |
|
Need three HRFs; phase (b)'s is a
little tricky to generate using waver, but it could be done
|
|
|
|
|
|
Noise in FMRI is caused by several factors, not
completely characterized |
|
MR thermal noise (well understood, unremovable) |
|
Cardiac and respiratory cycles (partly understood) |
|
In principle, could measure these sources of noise
separately and then try to regress them out |
|
RETROICOR program underway (R Birn & M Smith of
FIM/NIMH) |
|
Scanner fluctuations (e.g., thermal drift of hardware) |
|
Small subject head movements (10-100 μm)
|
Very low frequency fluctuations (periods longer than 100 s) |
|
Data analysis should try to remove what can be removed and
allow for the statistical effects of what can't be removed
|
Serial correlation in the noise time series affects the t-
and F-statistics calculated by 3dDeconvolve |
|
At present, nothing is done to correct for this effect (by
us) |
|
|
|
|
|
Linear models aren't everything
|
e.g., could try to fit HRF of the form |
|
Unknowns b and c appear nonlinearly in this formula |
|
Program 3dNLfim can do nonlinear regression (including
nonlinear deconvolution) |
|
User must provide a C function that computes the model time
series, given a set of parameters (e.g., a, b, c) |
|
Program then drives this C function repeatedly, searching
for the set of parameters that best fit each voxel |
|
Has been used to fit pharmacological wash-in/wash-out
models (difference of two exponentials) to FMRI data acquired during
pharmacological challenges |
|
e.g., injection of nicotine, cocaine, etc. |
|
these are tricky experiments, at best |
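The HRF formula on this slide did not survive extraction; purely for illustration, assume a gamma-variate-like model a·t^b·e^(−t/c) (hypothetical), where b and c enter nonlinearly. A sketch of the search idea (NumPy grid search, not 3dNLfim): the linear amplitude a is solved in closed form at each candidate (b, c):

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.arange(1, 101, dtype=float) * 0.5     # hypothetical time grid (s)

def model(t, b, c):
    # nonlinear part of the assumed gamma-variate-like response
    return t**b * np.exp(-t / c)

# simulated voxel: true (a, b, c) = (3, 2, 4) plus noise
y = 3.0 * model(t, 2.0, 4.0) + rng.normal(0, 0.5, t.size)

# brute-force search over nonlinear (b, c); amplitude a is linear in
# the model, so fit it by least squares at each grid point
best = (np.inf, None)
for b in np.linspace(0.5, 4.0, 36):
    for c in np.linspace(1.0, 8.0, 36):
        m = model(t, b, c)
        a = (m @ y) / (m @ m)                # least-squares amplitude
        sse = np.sum((y - a * m) ** 2)
        if sse < best[0]:
            best = (sse, (a, b, c))

a_hat, b_hat, c_hat = best[1]
print(round(b_hat, 1), round(c_hat, 1))      # near the true (2, 4)
```

3dNLfim does this kind of parameter-space search (with smarter optimization) using a user-supplied C model function instead of a Python one.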