 Signal = Measurable response to stimulus
 Noise = Components of measurement that interfere with detection of signal
 Statistical detection theory:
 Understand relationship between stimulus & signal
 Characterize noise statistically
 Can then devise methods to distinguish noise-only measurements from signal+noise measurements, and assess their reliability
 Methods and usefulness depend strongly on the assumptions
 Some methods are robust against erroneous assumptions, and some are not

FMRI Philosophy: Signals and Noise
 FMRI Stimulus→Signal connection and noise statistics are both poorly characterized
 Result: there is no best way to analyze FMRI time series data: there are only reasonable analysis methods
 To deal with data, must make some assumptions about the signal and noise
 Assumptions will be wrong, but must do something
 Different kinds of experiments require different kinds of analyses
 Since signal models and questions you ask about the signal will vary
 It is important to understand what is going on, so you can select and evaluate reasonable analyses

Meta-method for creating analysis methods
 Write down a mathematical model connecting stimulus (or activation) to signal
 Write down a statistical model for the noise
 Combine them to produce an equation for measurements given signal+noise
 Equation will have unknown parameters, which are to be estimated from the data
 N.B.: signal may have zero strength
 Use statistical detection theory to produce an algorithm for processing the measurements to assess signal presence and characteristics
 e.g., least squares fit of model parameters to data

Time Series Analysis on Voxel Data
 Most common forms of FMRI analysis involve fitting an activation+BOLD model to each voxel's time series separately (AKA univariate analysis)
 Some pre-processing steps may do inter-voxel computations
 e.g., spatial smoothing to reduce noise
 Result of model fits is a set of parameters at each voxel, estimated from that voxel's data
 e.g., activation amplitude, delay, shape
 SPM = statistical parametric map
 Further analysis steps operate on individual SPMs
 e.g., combining/contrasting data among subjects

"First:"
 First: Block-trial FMRI data
 Activation occurs over a sustained period of time (say, 10 s or longer), usually from more than one stimulation event, in rapid succession
 BOLD (hemodynamic) response accumulates from multiple close activations and is large
 BOLD response is often visible in time series
 Next 4 slides: same brain voxel in 9 imaging runs
 black curve (noisy) = data
 red curve (above data) = ideal model response
 blue curve (within data) = model fitted to data
 somatosensory task (finger being rubbed)

[Slides 7-11: image-only plots of the sample block-trial time series (data, ideal, and fitted curves)]
More Sample FMRI Data Time Series
 Second: Event-related FMRI
 Activation occurs in single relatively brief intervals
 Events can be randomly or regularly spaced in time
 If events are randomly spaced in time, signal model itself looks noise-like
 BOLD response to stimulus tends to be weaker since fewer nearby-in-time activations have overlapping hemodynamic responses
Next slide: Visual stimulation experiment

Two Voxel Time Series from Same Run
"HRF is the idealization of..."
 HRF is the idealization of measurable FMRI signal change responding to a single activation cycle (up and down) from a stimulus in a voxel

Linearity of HRF
 Multiple activation cycles in a voxel, closer in time than duration of HRF:
 Assume that overlapping responses add

Linearity and Extended Activation
 Extended activation, as in a block-trial experiment:
 HRF accumulates over its duration (~10 s)

Convolution Signal Model
 FMRI signal we look for in each voxel is taken to be sum of the individual trial HRFs
 Stimulus timing is assumed known (or measured)
 Resulting time series (blue curves) are called the convolution of the HRF with the stimulus timing
 Must also allow for baseline and baseline drifting
 Convolution models only the FMRI signal changes
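As an illustration (not the AFNI waver program itself), a minimal numpy sketch of this convolution signal model; the TR, stimulus timing, and gamma-variate HRF below are assumed values for demonstration only:

import numpy as np

TR = 2.5                                   # time step in seconds (assumed)
th = np.arange(0, 15, TR)                  # HRF support, roughly 15 s
hrf = th**8.6 * np.exp(-th / 0.547)        # gamma-variate HRF (Cohen 1997 form, next slide)
hrf /= hrf.max()                           # arbitrary normalization: peak = 1

stim = np.zeros(40)                        # 0-1 stimulus timing on the TR grid
stim[9:17] = 1                             # one block of stimulation

r = np.convolve(stim, hrf)[:len(stim)]     # ideal response = HRF convolved with stimulus timing

The baseline and drift terms are added separately when this ideal response is fit to data, as on the following slides.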

"Assume a fixed shape h..."
 Assume a fixed shape h(t ) for the HRF
 e.g., h(t) = t^8.6 exp(−t/0.547) [MS Cohen, 1997]
 Convolved with stimulus timing (e.g., AFNI program waver), get ideal response function r(t )
 Assume a form for the baseline
 e.g., a + b×t  for a constant plus a linear trend
 In each voxel, fit data Z(t ) to a curve of the form
         Z(t) ≈ a + b×t + β×r(t)
 a, b, β are unknown parameters to be calculated in each voxel
 a, b are nuisance parameters
 β is amplitude of r(t) in data = how much BOLD
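A minimal least-squares sketch of this fit (the synthetic data and noise level are made up; this is not 3dDeconvolve):

import numpy as np

TR, N = 2.5, 40
t = np.arange(N) * TR
th = np.arange(0, 15, TR)
hrf = th**8.6 * np.exp(-th / 0.547); hrf /= hrf.max()
stim = np.zeros(N); stim[9:17] = 1
r = np.convolve(stim, hrf)[:N]                     # ideal response r(t), as in the previous sketch

rng = np.random.default_rng(0)
Z = 100 + 0.05*t + 3.0*r + rng.normal(0, 1.5, N)   # synthetic voxel data with true beta = 3

X = np.column_stack([np.ones(N), t, r])            # columns: constant, linear trend, r(t)
a_hat, b_hat, beta_hat = np.linalg.lstsq(X, Z, rcond=None)[0]
print(beta_hat)                                    # estimated BOLD amplitude, close to 3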

Simple Regression: Example
Multiple Stimuli = Multiple Regressors
 Usually have more than one class of stimulus or activation in an experiment
 e.g., want to see size of face activation vis-à-vis house activation; or, "what" vs. "where" activity
 Need to model each separate class of stimulus with a separate response function r1(t), r2(t), r3(t), …
 Each rj(t) is based on the stimulus timing for activity in class number j
 Calculate a βj amplitude = amount of rj(t) in voxel data time series Z(t)
 Contrast βs to see which voxels have differential activation levels under different stimulus conditions
 e.g., statistical test on the question: is β1−β2 = 0 ?
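A hedged sketch of the multiple-regressor idea: two stimulus classes, each convolved with the same assumed HRF, fit jointly, and their βs contrasted (the timings and amplitudes are invented for illustration):

import numpy as np

TR, N = 2.5, 120
th = np.arange(0, 15, TR)
hrf = th**8.6 * np.exp(-th / 0.547); hrf /= hrf.max()

s1 = np.zeros(N); s1[[10, 40, 70]] = 1             # timing for stimulus class 1
s2 = np.zeros(N); s2[[25, 55, 85]] = 1             # timing for stimulus class 2
r1 = np.convolve(s1, hrf)[:N]                      # response regressor for class 1
r2 = np.convolve(s2, hrf)[:N]                      # response regressor for class 2

t = np.arange(N) * TR
rng = np.random.default_rng(1)
Z = 100 + 0.02*t + 2.5*r1 + 1.0*r2 + rng.normal(0, 1.0, N)

X = np.column_stack([np.ones(N), t, r1, r2])
a_hat, b_hat, beta1, beta2 = np.linalg.lstsq(X, Z, rcond=None)[0]
print(beta1 - beta2)                               # differential activation between the two classes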

Multiple Regressors: Cartoon
Multiple Regressors: Collinearity!
Multiple Regressors: Near Collinearity
Equations: Notation
 Will generally follow notation of Doug Ward's manual for the AFNI program 3dDeconvolve
 Time: continuous in reality, but in steps in the data
 Functions of continuous time are written like f(t )
 Functions of discrete time are written like f(n×TR), where n=0,1,2,… and TR=time step
 Usually use subscript notation fn as shorthand
 Collection of numbers assembled in a column is a vector and is printed in boldface: e.g., z = [z0 z1 z2 … zN−1] arranged as a column

Equations: Single Response Function
 In each voxel, fit data Zn to a curve of the form
       Zn ≈ a + b×tn + β×rn   for n=0,1,…,N−1 (N=# time pts)
 a, b, β are unknown parameters to be calculated in each voxel
 a, b are nuisance baseline parameters
 β is amplitude of r(t) in data = how much BOLD
 Baseline model might be more complicated for long (> 200 s) continuous imaging runs:
 T < 300 s: a+b×t+c×t 2
 Longer:     a+b×t+c×t 2 + T/200 low frequency components
 Might also include as extra baseline components the estimated subject head movement time series, in order to remove residual contamination from such artifacts

Equations: Multiple Response Functions
 In each voxel, fit data Zn to a curve of the form Zn ≈ a + b×tn + β1×rn(1) + β2×rn(2) + … + βK×rn(K)
 βj is amplitude in data of rn(j) = rj(tn); i.e., how much of the j-th response function is in the data time series
 In simple regression, each rj(t) is derived directly from stimulus timing and user-chosen HRF model
 In terms of stimulus times tk(j): rj(t) = Σk h(t − tk(j))
 If stimulus occurs on the imaging TR time-grid, stimulus can be represented as a 0-1 time series, and then rn(j) = Σk hn−k×sk(j), where sk(j)=1 if stimulus #j is on at time t=k×TR, and sk(j)=0 if #j is off at that time

Equations: Matrix-Vector Form
 Express known data vector as a sum of known columns with unknown coefficients: z ≈ Rβ, where each column of R is a baseline or response time series and β is the column vector of unknown coefficients

Visualizing the R Matrix
Solving z ≈ Rβ for β
 Number of equations = number of time points
 100s per run, but perhaps 1000s per subject
 Number of unknowns usually in range 5–50
 Least squares solution: β̂ = [RᵀR]⁻¹Rᵀz ; the hat denotes an estimate of the true (unknown) β
 From β̂, calculate ẑ = Rβ̂ as the fitted model
 z − ẑ is the residual time series = noise (we hope)
 Collinearity: when matrix RᵀR can't be inverted
 Near collinearity: when inverse exists but is huge
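The same solution written out directly, with a condition-number check for (near) collinearity; R and z stand for any design matrix and voxel time series (an illustrative sketch, not 3dDeconvolve's internals):

import numpy as np

def fit_voxel(R, z):
    """Ordinary least squares: solve z ~ R @ beta using the formula on this slide."""
    RtR = R.T @ R
    cond = np.linalg.cond(RtR)                 # very large value signals near collinearity
    beta_hat = np.linalg.solve(RtR, R.T @ z)   # raises LinAlgError if RtR is singular (collinearity)
    z_fit = R @ beta_hat                       # fitted model
    resid = z - z_fit                          # residual time series = noise (we hope)
    return beta_hat, z_fit, resid, cond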

Simple Regression: Recapitulation
 Choose HRF model h(t) [AKA fixed-model regression]
 Build model responses rn(t) to each stimulus class
 Using h(t) and the stimulus timing
 Choose baseline model time series
 Constant + linear + quadratic + movement?
 Assemble model and baseline time series into the columns of the R matrix
 For each voxel time series z, solve z ≈ Rβ for β̂
 Individual subject maps: Test the coefficients in β̂ that you care about for statistical significance
 Group maps: Transform the coefficients in β̂ that you care about to Talairach space, and perform statistics on these β̂ values

"Enough theory (for now"
 Enough theory (for now: more to come later!)
 To look at the data: type cd AFNI_data1/afni ; then afni
 Switch Underlay to dataset epi_r1
 Then Sagittal Image and Graph
 FIM→Pick Ideal ; then click afni/ideal_r1.1D ; then Set
 Right-click in image, Jump to (ijk), then 29 11 13, then Set

Preparing Data for Analysis
 Six preparatory steps are possible
 Image registration (realignment): program 3dvolreg
 Image smoothing: program 3dmerge
 Image masking: program 3dClipLevel or 3dAutomask
 Conversion to percentile: programs 3dTstat and 3dcalc
 Censoring out time points that are bad: program 3dToutcount or 3dTqual
 Catenating multiple imaging runs into 1 big dataset: program 3dTcat
 Not all steps are necessary or desirable in any given case
 In this first example, will only do registration, since the data obviously needs this correction

Data Analysis Script
 In file epi_r1_decon:
waver -GAM                        \
      -input epi_r1_stim.1D       \
      -TR 2.5                     \
      > epi_r1_ideal.1D
3dvolreg -base 2                  \
         -prefix epi_r1_reg       \
         -1Dfile epi_r1_mot.1D    \
         -verb                    \
         epi_r1+orig
3dDeconvolve                      \
    -input epi_r1_reg+orig        \
    -nfirst     2                 \
    -num_stimts 1                 \
    -stim_file  1 epi_r1_ideal.1D \
    -stim_label 1 AllStim         \
    -tout                         \
    -bucket epi_r1_func           \
    -fitts  epi_r1_fitts

Contents of .1D files
epi_r1_stim.1D   epi_r1_ideal.1D
   0                   0
   0                   0
   0                   0
   0                   0
   0                   0
   0                   0
   0                   0
   0                   0
   0                   0
   1                   0
   1                   24.4876
   1                   122.869
   1                   156.166
   1                   160.258
   1                   160.547
   1                   160.547
   1                   160.547
   0                   160.547
   0                   136.059
   0                   37.6781
   0                   4.38121
   0                   0.288748
   0                   0
   0                   0
                     

To Run Script and View Results
 type source epi_r1_decon ; then wait for programs to run
 type afni to view what we've got
 Switch Underlay to epi_r1_reg (output from 3dvolreg)
 Switch Overlay to epi_r1_func (output from 3dDeconvolve)
 Sagittal Image and Graph viewers
 FIM→Ignore→2 to have graph viewer not plot 1st 2 time pts
 FIM→Pick Ideal ; pick epi_r1_ideal.1D (output from waver)
 Define Overlay to set up functional coloring
 Olay→AllStim[0] Coef (sets coloring to be from model fit β)
 Thr→AllStim[0] t-s (sets threshold to be model fit t-statistic)
 See Overlay (otherwise won't see the function!)
 Play with threshold slider to get a meaningful activation map (e.g., t=4 is a decent threshold)

More Viewing the Results
 Graph viewer: Opt→Tran 1D→Dataset #N to plot the model fit dataset output by 3dDeconvolve
 Will open the control panel for the Dataset #N plugin
 Click first Input on ; then choose Dataset epi_r1_fitts+orig
 Also choose Color dk-blue to get a pleasing plot
 Then click on Set+Close (to close the plugin panel)
 Should now see fitted time series in the graph viewer instead of data time series
 Graph viewer: click Opt→Double Plot→Overlay on to make the fitted time series appear as an overlay curve
 This tool lets you visualize the quality of the data fit
 Can also now overlay function on MP-RAGE anatomical by using Switch Underlay to anat+orig dataset
Probably won't want to graph the anat+orig dataset!

Stimulus Correlated Movement?
Removing Residual Motion Artifacts
 Last part of script epi_r1_decon:
3dDeconvolve                         \
    -input epi_r1_reg+orig           \
    -nfirst     2                    \
    -num_stimts 7                    \
    -stim_file  1 epi_r1_ideal.1D    \
    -stim_label 1 AllStim            \
    -stim_file  2 epi_r1_mot.1D'[0]' \
    -stim_base  2                    \
    -stim_file  3 epi_r1_mot.1D'[1]' \
    -stim_base  3                    \
    -stim_file  4 epi_r1_mot.1D'[2]' \
    -stim_base  4                    \
    -stim_file  5 epi_r1_mot.1D'[3]' \
    -stim_base  5                    \
    -stim_file  6 epi_r1_mot.1D'[4]' \
    -stim_base  6                    \
    -stim_file  7 epi_r1_mot.1D'[5]' \
    -stim_base  7                    \
    -tout                            \
    -bucket epi_r1_func_mot          \
    -fitts  epi_r1_fitts_mot

Some Results: Before and After
Multiple Stimulus Classes
 The experiment analyzed here in fact is more complicated
 There are 4 related visual stimulus types
 One goal is to find areas that are differentially activated between these different types of stimuli
 We have 4 imaging runs, 108 useful time points each (skipping first 2 in each run) that we will analyze together
 Already registered and put together into dataset rall_vr+orig
 Stimulus timing files are in subdirectory stim_files/
 Script file waver_ht2 will create HRF models for regression:
     cd stim_files
     waver -dt 2.5 -GAM -input scan1to4a.1D > scan1to4a_hrf.1D
     waver -dt 2.5 -GAM -input scan1to4t.1D > scan1to4t_hrf.1D
     waver -dt 2.5 -GAM -input scan1to4h.1D > scan1to4h_hrf.1D
     waver -dt 2.5 -GAM -input scan1to4l.1D > scan1to4l_hrf.1D
     cd ..
 type source waver_ht2 to run this script
 Might also use 1dplot to check if things are reasonable

Regression with Multiple Model Files
 Script file decon_ht2 does the job:
3dDeconvolve -xout -input rall_vr+orig                                \
    -num_stimts 4                                                     \
    -stim_file 1 stim_files/scan1to4a_hrf.1D  -stim_label 1 Actions   \
    -stim_file 2 stim_files/scan1to4t_hrf.1D  -stim_label 2 Tool      \
    -stim_file 3 stim_files/scan1to4h_hrf.1D  -stim_label 3 HighC     \
    -stim_file 4 stim_files/scan1to4l_hrf.1D  -stim_label 4 LowC      \
    -concat contrasts/runs.1D                                         \
    -glt 1 contrasts/contr_AvsT.txt   -glt_label 1 AvsT               \
    -glt 1 contrasts/contr_HvsL.txt   -glt_label 2 HvsL               \
    -glt 1 contrasts/contr_ATvsHL.txt -glt_label 3 ATvsHL             \
    -full_first -fout -tout                                           \
    -bucket func_ht2
 Run this script by typing source decon_ht2 (takes a few minutes)

Regressors for This Script
New Features of 3dDeconvolve - 1
-concat contrasts/runs.1D = file that indicates where new imaging runs start
-full_first = put full model statistic first in output file, rather than last
-fout -tout = output both F- and t-statistics
 The full model statistic is an F-statistic that shows how well the sum of all 4 input model time series fits the voxel time series data
 The individual models also will get individual F- and t-statistics indicating the significance of their individual contributions to the time series fit

New Features of 3dDeconvolve - 2
 -glt 1 contrasts/contr_AvsT.txt   -glt_label 1 AvsT
 -glt 1 contrasts/contr_HvsL.txt   -glt_label 2 HvsL
 -glt 1 contrasts/contr_ATvsHL.txt -glt_label 3 ATvsHL
 GLTs are General Linear Tests
 3dDeconvolve provides tests for each regressor separately, but if you want to test combinations or contrasts of the β weights in each voxel, you need the -glt option
 File contrasts/contr_AvsT.txt = 0 0 0 0 0 0 0 0 1 -1 0 0  (one line with 12 numbers)
 Goal is to test a linear combination of the β weights
 In this data, we have 12 β weights: 8 baseline parameters (2 per imaging run), which are first in the β vector, and 4 regressor magnitudes, which are from -stim_file options
 This particular test contrasts the Actions and Tool βs
 tests whether βActions − βTool ≠ 0
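For intuition, a small sketch of how a one-row GLT reduces to a t-statistic under ordinary least squares with white noise; this mirrors the idea, not 3dDeconvolve's actual code:

import numpy as np

def glt_t(R, z, c):
    """t-statistic for the contrast c . beta (a one-row GLT)."""
    beta = np.linalg.lstsq(R, z, rcond=None)[0]
    resid = z - R @ beta
    dof = len(z) - R.shape[1]                         # degrees of freedom
    sigma2 = resid @ resid / dof                      # residual variance estimate
    var_c = sigma2 * c @ np.linalg.inv(R.T @ R) @ c   # variance of the contrast estimate
    return (c @ beta) / np.sqrt(var_c)

# For the Actions-vs-Tool contrast above (12 beta weights: 8 baseline + 4 regressors):
# c = np.array([0,0,0,0,0,0,0,0, 1,-1, 0,0], dtype=float)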

New Features of 3dDeconvolve - 3
 File contrasts/contr_HvsL.txt = 0 0 0 0 0 0 0 0 0 0 1 -1
 Goal is to test whether βHighC − βLowC ≠ 0
 File contrasts/contr_ATvsHL.txt = 0 0 0 0 0 0 0 0 1 1 -1 -1
 Goal is to test whether (βActions + βTool) − (βHighC + βLowC) ≠ 0
 Regions where this statistic is significant will have had different amounts of BOLD signal change in the activity viewing tasks versus the grating viewing tasks
This is a way to factor out primary visual cortex
 -glt_label 3 ATvsHL option is used to attach a meaningful label to the resulting statistics sub-bricks

Results of decon_ht2 Script
Statistics from 3dDeconvolve
 An F-statistic measures significance of how much a model component reduced the variance of the time series data
 Full F measures how much the signal regressors reduced the variance over just the baseline regressors (sub-brick #0 below)
 Individual partial-model Fs measure how much each individual signal regressor reduced data variance over the full model with that regressor excluded (sub-bricks #19, #22, #25, and #28 below)

Alternative Way to Run waver
 Instead of giving stimulus timing on the TR-grid as a set of 0s and 1s
 Can give the actual stimulus times (in seconds) using the -tstim option
 waver -dt 1.0 -GAM -tstim 3 12 17 | 1dplot -stdin
 If times are in a file, can use -tstim `cat filename` to place them on the command line after -tstim option
 This is most useful for event-related experiments

Deconvolution Signal Models
 Simple or Fixed-shape regression
 We fixed the shape of the HRF
 Used waver to generate the signal model from the stimulus timing
 Found the amplitude of the signal model in each voxel
 Deconvolution or Variable-shape regression
 We allow the shape of the HRF to vary in each voxel, for each stimulus class
 Appropriate when you don't want to over-constrain the solution by assuming an HRF shape
 However, need to have enough time points during the HRF in order to resolve its shape

Deconvolution: Pros and Cons
 Letting the HRF shape vary allows for subject and regional variability in hemodynamics
 Can test HRF estimate for different shapes; e.g., are later time points more active than earlier?
 Need to estimate more parameters for each stimulus class than a fixed-shape model (e.g., 4-15 vs. 1)
 Which means you need more data to get the same statistical power (assuming that the fixed-shape model you would assume was in fact correct)
 Freedom to get any HRF shape can give weird results that are difficult to interpret

Expressing HRF via Regression Unknowns
 The tool for expressing an unknown function as a finite set of numbers that can be fit via linear regression is an expansion in basis functions: h(t) = β0·ψ0(t) + β1·ψ1(t) + … + βp·ψp(t)
 The basis functions ψq(t) are known, as is the expansion order p
 The unknowns to be found (in each voxel) comprise the set of weights βq for each ψq(t)
 Since the β weights appear only multiplying known values, and the HRF appears in the final signal model only through linear convolution, the resulting signal model is still solvable by linear regression

Basis Function: Sticks
 The set of basis functions you use determines the range of possible HRFs that you can compute
 Stick (or Dirac delta) functions are very flexible
 But they come with a strict limitation
 δ(t) is 1 at t=0 and is 0 at all other values of t
 ψq(t) = δ(t − q×TR) for q=0,1,2,…,p
 h(0) = β0
 h(TR) = β1
 h(2×TR) = β2
 h(3×TR) = β3
  et cetera
 h(t ) = 0 for any t not on the TR grid
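A sketch of the stick-function design: each column is the 0-1 stimulus time series delayed by q TRs, so the fitted βq are the HRF values at lags 0..p (illustrative only; 3dDeconvolve builds this internally):

import numpy as np

def stick_design(stim, p):
    """Columns q = 0..p are the 0-1 stimulus vector delayed by q time points,
    so the fitted beta_q is the HRF value at lag q*TR (deconvolution)."""
    N = len(stim)
    cols = []
    for q in range(p + 1):
        col = np.zeros(N)
        col[q:] = stim[:N - q]        # delay the stimulus train by q TRs
        cols.append(col)
    return np.column_stack(cols)

# The full model would also include baseline columns (constant, drift, ...)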

Sticks: Good Points
 Can represent arbitrary shapes of the HRF, up and down, with ease
 Meaning of each βq is completely obvious
 Value of HRF at time lag q×TR after activation
 3dDeconvolve is set up to deal with stick functions for representing HRF, so using them is very easy
 What is called p here is given by command line option -stim_maxlag in the program
 When choosing p, rule is to estimate longest duration of neural activation after stimulus onset, then add 10-12 seconds to allow for slowness of hemodynamic response

Sticks and TR-locked Stimuli
 h(t ) = 0 for any t not on the TR grid
 This limitation means that, using stick functions as our basis set, we can only model stimuli that are locked to the TR grid
 That is, stimuli/activations don't occur at fully general times, but only occur at integer multiples of TR
 For example, suppose an activation is at t =1.7×TR
 We need to model the response at later times, such as 2×TR, 3×TR, etc., so need to model h(t ) at times such as t=0.3×TR, 1.3×TR, etc., after the stimulus
 But the stick function model doesn't allow for such intermediate times
 or, can allow Δt for sticks to be a fraction of TR for data
 e.g., Δt = TR/2, which implies twice as many βq parameters to cover the same time interval (time interval needed is set by hemodynamics)
then would allow stimuli that occur at TR-grid or halfway in-between

Deconvolution and Collinearity
 Regular stimulus timing can lead to collinearity!

3dDeconvolve with Stick Functions
 Instead of inputting a signal model time series (e.g., created with waver and stimulus timing), you input the stimulus timing directly
 Format: a text file with 0s and 1s, 0 at TR-grid times with no stimulus, 1 at time with stimulus
 Must specify the maximum lag (in units of TR) that we expect HRF to last after each stimulus
 This requires you to make a judgment about the activation: brief or long?
 3dDeconvolve returns estimated values for each βq for each stimulus class
 Usually then use a GLT to test the HRF (or pieces of it) for significance

New Features of 3dDeconvolve - 4
 -stim_maxlag k p = option to set the maximum lag to p for stimulus timing file #k, for k=1,2,…
 Stimulus timing file input using command line option -stim_file k filename as before
 Can also use -stim_minlag k m option to set the minimum lag if you want a value m different from 0
 In which case there are p-m+1 parameters in this HRF
 -stim_nptr k r = option to specify that there are r stimulus subintervals per TR, rather than just 1
 This feature can be used to get a finer grained HRF, at the cost of adding more parameters that need to be estimated
 Need to make sure that the input stimulus timing file (from -stim_file) has r entries per TR
TR for -stim_file and for output HRF is the data TR ÷ r

Script for Deconvolution - The Data
 cd AFNI_data2
 data is in ED/ subdirectory (10 runs of 136 images, TR=2 s)
 script in file @analyze_ht05
 stimuli timing and GLT contrast files in misc_files/
 start script now by typing  source @analyze_ht05
 will discuss details of script while it runs
This is an event-related study from Mike Beauchamp (LBC/NIMH)
 Four classes of stimuli (short videos)
 Tools moving (e.g., a hammer pounding) - TM
 People moving (e.g., jumping jacks) - HM
 Points outlining tools moving (no objects, just points) - TP
 Points outlining people moving - HP
 Goal is to find if there is an area that distinguishes natural motions (HM and HP) from simpler rigid motions (TM and TP)

Script for Deconvolution - Outline
 Registration of each imaging run (there are 10): 3dvolreg
 Smooth each volume in space (136 sub-bricks per run): 3dmerge
 Create a brain mask: 3dAutomask and 3dcalc
 Rescale each voxel time series in each imaging run so that its average through time is 100: 3dTstat and 3dcalc
 If baseline is 100, then a βq of 5 (say) indicates a 5% signal change in that voxel at time lag #q after the stimulus (a numpy sketch of this rescaling follows this outline)
 Catenate all imaging runs together into one big dataset (1360 time points): 3dTcat
 Compute HRFs and statistics: 3dDeconvolve
 Each HRF will have 15 output points (lags from 0 to 14) with a TR of 1.0 s, since the input data has a TR of 2.0 s and we will be using the -stim_nptr k r option with r=2
 Average together central points 4..9 of each separate HRF to get peak % change in each voxel: 3dTstat
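A numpy version of the rescaling step (the script does this with 3dTstat and 3dcalc); the zero-mean guard for out-of-brain voxels is an assumption added here:

import numpy as np

def scale_to_percent(ts, eps=1e-6):
    """Rescale a voxel time series so its temporal mean is 100 (percent-of-mean units)."""
    m = ts.mean()
    return ts * (100.0 / m) if abs(m) > eps else np.zeros_like(ts)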

Script for Deconvolution - 1
Script for Deconvolution - 2
Script for Deconvolution - 3
Script for Deconvolution - 4
Script for Deconvolution - 5
Script for Deconvolution - 6
Script for Deconvolution - 6a
Script for Deconvolution - 6b
Script for Deconvolution - 6c
Script for Deconvolution - 7
Results: Humans vs. Tools
Other Fun 3dDeconvolve Options
 -mask = used to turn off processing for some voxels
 speed up the program by not processing non-brain voxels
 -input1D = used to process a single time series, rather than a dataset full of time series
 test out a stimulus timing sequence
 -nodata option can be used to check for collinearity
 -censor = used to turn off processing for some time points
 for time points that are bad (e.g., too much movement)
 -sresp = output standard deviation of HRF estimates
 can plot error bars around HRF in AFNI graph viewer
 -errts = output residuals (i.e., difference between fitted model and data)
 for statistical analysis of time series noise
 -jobs = run with multiple CPUs
 extra speed, if you have a dual- or quad-processor system!

3dDeconvolve with Free Timing
 The fixed-TR stick function approach doesn't fit with arbitrary timing of stimuli
 When subject actions/reactions are self-initiated, timing of activations cannot be controlled
 If you want to do deconvolution, then must adopt a different basis function expansion approach
 One that has a finite number of parameters but also allows for calculation of h(t ) at any arbitrary point in time
 Simplest set of such functions are closely related to stick functions: tent functions

Tent Functions = Linear Interpolation
 Expansion in a set of spaced-apart tent functions is the same as linear interpolation
 Tent function parameters are also easily interpreted as function values
 Must decide on relationship of tent function spacing L and time grid TR
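A sketch of the tent-function expansion: h(t) is a weighted sum of triangular basis functions spaced L seconds apart, which is exactly linear interpolation through the knot values βq (names and values below are illustrative):

import numpy as np

def tent(x):
    """Unit tent: 1 at x = 0, falling linearly to 0 at |x| = 1, zero beyond."""
    return np.maximum(0.0, 1.0 - np.abs(x))

def hrf_from_tents(t, betas, L):
    """h(t) = sum_q beta_q * tent((t - q*L)/L): linear interpolation between the knots."""
    t = np.atleast_1d(t).astype(float)
    h = np.zeros_like(t)
    for q, b in enumerate(betas):
        h += b * tent((t - q * L) / L)
    return h

# e.g., hrf_from_tents(3.7, betas=[0.0, 2.0, 5.0, 4.0, 1.0, 0.0], L=2.0)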

At This Point
 3dDeconvolve is not set up to use tent functions directly
 In the past, we have explained in grotesque detail how to set up a combination of waver, 3dcalc, and 3dDeconvolve to trick the system into doing deconvolution with tent functions (or other basis sets)
 However, you are saved from this excruciation
 At this moment, we have an interactive Matlab script that will set up the details for you
 In the near future, we will put tent functions directly into 3dDeconvolve, allowing the direct use of non-TR locked stimulus timing
 Date of this promise: 13 July 2004

Advanced Topics in Regression
 Can have activations with multiple phases that are not always in the same time relationship to each other
 e.g., solving a visually presented puzzle:
 subject sees puzzle
 subject cogitates a while
 subject responds with solution
 With fixed-shape regression, this isn't too hard, since we can treat each phase as a separate stimulus class and lay down a separate fixed-shape HRF for each type of stimulus time
 But fixed-shape regression is probably not reasonable for this problem, since (at least) the cogitation phase isn't of fixed duration across trials
 Deconvolution assumes that the HRF (although unknown) is the same among all trials in the same class

One Solution: Multiple HRFs
 By treating each phase of the activation that is temporally uncoupled from the other phases as a separate stimulus class, we can compute a separate HRF for each
 Using basis function deconvolution, since timing will be arbitrary
 This will only work if the fluctuations in timing are great enough so that the different HRFs don't always overlap in the same way
 Otherwise, it becomes impossible to tell the tail end of HRF #1 from the start of HRF #2, if they are always locked together in the same temporal relationship

Noise Issues
 Noise in FMRI is caused by several factors, not completely characterized
 MR thermal noise (well understood)
 Cardiac and respiratory cycles (partly understood)
 In principle, could measure these sources of noise separately and then try to regress them out
 Scanner fluctuations (e.g., thermal drift)
 Small subject head movements
 Very low frequency fluctuations (periods longer than 100 s)
 Data analysis should try to remove what can be removed and allow for the statistical effects of what can't be removed
 Serial correlation in the noise time series affects the t- and F-statistics calculated by 3dDeconvolve
 At present, nothing is done to correct for this effect

Nonlinear Regression
 Linear models aren't everything
 For example, could try to fit an HRF of the form h(t) = a×t^b×exp(−t/c)
 The unknowns b and c appear nonlinearly in this formula
 Program 3dNLfim can do nonlinear regression (including nonlinear deconvolution)
 User must provide a C function that computes the model time series, given a set of parameters (e.g., a, b, c)
 Program then drives this C function repeatedly, searching for the set of parameters that best fit each voxel
 Has been used to fit pharmacological wash-in/wash-out models (difference of two exponentials) to FMRI data acquired during pharmacological challenges
 e.g., injection of nicotine, cocaine, etc.
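As a rough illustration of nonlinear regression (using scipy rather than 3dNLfim, and with synthetic data), fitting the gamma-variate HRF form above, in which b and c enter nonlinearly:

import numpy as np
from scipy.optimize import curve_fit

def hrf_model(t, a, b, c):
    """h(t) = a * t^b * exp(-t/c): parameters b and c appear nonlinearly."""
    return a * t**b * np.exp(-t / c)

t = np.linspace(0.0, 15.0, 60)
rng = np.random.default_rng(2)
data = hrf_model(t, 1.0, 8.6, 0.547) + rng.normal(0, 1.0, t.size)   # synthetic noisy "HRF" data

params, _ = curve_fit(hrf_model, t, data, p0=[1.0, 8.0, 0.6])       # iterative nonlinear fit
print(params)                                                        # recovered (a, b, c)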