Slide 1
"Signal = Measurable response to..."
 Signal = Measurable response to stimulus
 Noise = Components of measurement that interfere with detection of signal
 Statistical detection theory:
 Understand relationship between stimulus & signal
 Characterize noise statistically
 Can then devise methods to distinguish noise-only measurements from signal+noise measurements, and assess the methods' reliability
 Methods and usefulness depend strongly on the assumptions
 Some methods are robust against erroneous assumptions, and some are not

FMRI Philosophy: Signals and Noise
 FMRI Stimulus→Signal connection and noise statistics are both poorly characterized
 Result: there is no best way to analyze FMRI time series data: there are only reasonable analysis methods
 To deal with data, must make some assumptions about the signal and noise
 Assumptions will be wrong, but must do something
 Different kinds of experiments require different kinds of analyses
 Since signal models and questions you ask about the signal will vary
 It is important to understand what is going on, so you can select and evaluate reasonable analyses

Meta-method for creating analysis methods
 Write down a mathematical model connecting stimulus (or activation) to signal
 Write down a statistical model for the noise
 Combine them to produce an equation for measurements given signal+noise
 Equation will have unknown parameters, which are to be estimated from the data
 N.B.: signal may have zero strength (no activation)
 Use statistical detection theory to produce an algorithm for processing the measurements to assess signal presence and characteristics
 e.g., least squares fit of model parameters to data
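 Below is a minimal numerical sketch of this recipe in Python (not from the original slides; the response shape, noise level, and amplitudes are all hypothetical): simulate a measurement as signal+noise, estimate the signal amplitude by least squares, and form a t-like statistic to decide whether signal is present.
import numpy as np

rng = np.random.default_rng(0)
n = 200
t = np.arange(n)
r = np.exp(-((t - 100) / 20.0) ** 2)           # assumed signal model (response shape)

def simulate(beta, sigma=1.0):
    """Measurement = beta*signal + Gaussian noise (the assumed noise model)."""
    return beta * r + sigma * rng.standard_normal(n)

def detect(z):
    """Least squares estimate of the amplitude, plus a t-like detection statistic."""
    X = np.column_stack([np.ones(n), r])       # baseline + response regressors
    coef = np.linalg.lstsq(X, z, rcond=None)[0]
    resid = z - X @ coef
    sigma_hat = np.sqrt(resid @ resid / (n - X.shape[1]))
    se = sigma_hat * np.sqrt(np.linalg.inv(X.T @ X)[1, 1])
    return coef[1], coef[1] / se               # (estimated amplitude, t)

print(detect(simulate(beta=0.0)))              # noise only: |t| should be small
print(detect(simulate(beta=2.0)))              # signal + noise: t should be large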

Time Series Analysis on Voxel Data
 Most common forms of FMRI analysis involve fitting an activation+BOLD model to each voxel's time series separately (AKA univariate analysis)
 Some pre-processing steps may do inter-voxel computations
 e.g., spatial smoothing to reduce noise
 Result of the model fits is a set of parameters at each voxel, estimated from that voxel's data
 e.g., activation amplitude, delay, shape
 SPM = statistical parametric map
 Further analysis steps operate on individual SPMs
 e.g., combining/contrasting data among subjects

Some Features of FMRI Voxel Time Series
 FMRI only measures changes due to neural activity
 Baseline level of signal in a voxel means little or nothing about neural activity
 Also, baseline level tends to drift around slowly (100 s time scale or so)
 Therefore, an FMRI experiment must have at least 2 different neural conditions (tasks and/or stimuli)
 Then statistically test for differences in the MRI signal level between conditions
 Many experiments: one condition is rest
 Baseline is modeled separately from activation signals, and baseline model includes rest periods

"First:"
 First: Block-trial FMRI data
 Activation occurs over a sustained period of time (say, 10 s or longer), usually from more than one stimulation event, in rapid succession
 BOLD (hemodynamic) response accumulates from multiple close activations and is large
 BOLD response is often visible in time series
 Next 5 slides: same brain voxel in 9 imaging runs
 black curve (noisy) = data
 red curve (above data) = ideal model response
 blue curve (within data) = model fitted to data
 somatosensory task (finger being rubbed)

Slide 8
Slide 9
Slide 10
Slide 11
Slide 12
More Sample FMRI Data Time Series
 Second: Event-related FMRI
 Activation occurs in single relatively brief intervals
 Events can be randomly or regularly spaced in time
 If events are randomly spaced in time, signal model itself looks noise-like (to the human eye)
 BOLD response to stimulus tends to be weaker since fewer nearby-in-time activations have overlapping hemodynamic responses
 Next slide: Visual stimulation experiment

Two Voxel Time Series from Same Run
"HRF is the idealization of..."
 HRF is the idealization of measurable FMRI signal change responding to a single activation cycle (up and down) from a stimulus in a voxel

Linearity of HRF
 Multiple activation cycles in a voxel, closer in time than duration of HRF:
 Assume that overlapping responses add

Linearity and Extended Activation
 Extended activation, as in a block-trial experiment:
 HRF accumulates over its duration (≈ 10 s)

Convolution Signal Model
 FMRI signal we look for in each voxel is taken to be the sum of the individual trial HRFs
 Stimulus timing is assumed known (or measured)
 Resulting time series (blue curves) are called the convolution of the HRF with the stimulus timing
 Must also allow for baseline and baseline drifting
 Convolution models only the FMRI signal changes

"Assume a fixed shape h..."
 Assume a fixed shape h(t) for the HRF
 e.g., h(t) = t^8.6 × exp(−t/0.547) [MS Cohen, 1997]
 Convolved with stimulus timing (e.g., via AFNI program waver), get ideal response function r(t)
 Assume a form for the baseline
 e.g., a + b×t  for a constant plus a linear trend
 In each voxel, fit data Z(t) to a curve of the form
         Z(t) ≈ a + b×t + β×r(t)
 a, b, β are unknown parameters to be calculated in each voxel
 a, b are nuisance parameters
 β is the amplitude of r(t) in the data = how much BOLD
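 As an illustration (a sketch in Python, not the AFNI implementation; the stimulus timing and noise level are made up), the convolution and the fit can be written in a few lines:
import numpy as np

TR = 2.5                                        # time step in seconds (assumed)
t_hrf = np.arange(0.0, 30.0, TR)
h = t_hrf ** 8.6 * np.exp(-t_hrf / 0.547)       # fixed-shape HRF [MS Cohen, 1997]
h /= h.max()                                    # scale so the peak response is 1

s = np.tile(np.r_[np.zeros(9), np.ones(8)], 4)  # 0-1 stimulus timing (toy block design)
r = np.convolve(s, h)[: len(s)]                 # ideal response r(t) = HRF * stimulus

rng = np.random.default_rng(1)
n = len(s)
t = np.arange(n) * TR
Z = 1000.0 + 0.05 * t + 30.0 * r + 10.0 * rng.standard_normal(n)   # fake voxel data

X = np.column_stack([np.ones(n), t, r])         # columns: constant, drift, response
a, b, beta = np.linalg.lstsq(X, Z, rcond=None)[0]
print(a, b, beta)                               # beta = fitted amplitude of r(t)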

Simple Regression: Example
Duration of Stimuli - Important Caveats
 Slow baseline drift (time scale 100 s and longer) makes doing FMRI with long duration stimuli very difficult
 Learning experiment, where the task is done continuously for ~15 minutes and the subject is scanned to find parts of the brain that adapt
 Pharmaceutical challenge, where the subject is given some psychoactive drug whose action plays out over 10+ minutes (e.g., cocaine, ethanol)
 Very short duration stimuli that are also very close in time to each other are very hard to tell apart, since their HRFs will have 90-95% overlap
 Binocular rivalry, where percept switches ~ 0.5 s

Slide 22
Multiple Stimuli = Multiple Regressors
 Usually have more than one class of stimulus or activation in an experiment
 e.g., want to see the size of face activation vis-à-vis house activation; or, "what" vs. "where" activity
 Need to model each separate class of stimulus with a separate response function r1(t), r2(t), r3(t), …
 Each rj(t) is based on the stimulus timing for activity in class number j
 Calculate a βj amplitude = amount of rj(t) in the voxel data time series Z(t)
 Contrast βs to see which voxels have differential activation levels under different stimulus conditions
 e.g., statistical test on the question: is β1 − β2 = 0 ?

Multiple Stimuli - Important Caveat
 You do not model the baseline condition
 e.g., rest, visual fixation, high-low tone discrimination, or some other simple task
 FMRI can only measure changes in MR signal levels between tasks
 So you need some simple-ish task to serve as a reference point
 The baseline model (e.g., a + b×t) takes care of the signal level to which the MR signal returns when the active tasks are turned off
 Modeling the reference task explicitly would be redundant (or collinear, to anticipate a forthcoming jargon word)

Multiple Regressors: Cartoon
Multiple Regressors: Collinearity!!
Multiple Regressors: Near Collinearity
The Geometry of Collinearity - 1
The Geometry of Collinearity - 2
Equations: Notation
 Will generally follow the notation of Doug Ward's manual for the AFNI program 3dDeconvolve
 Time: continuous in reality, but in steps in the data
 Functions of continuous time are written like f(t)
 Functions of discrete time are expressed like f(n×TR), where n = 0, 1, 2, … and TR = time step
 Usually use subscript notation fn as shorthand
 Collection of numbers assembled in a column is a vector and is printed in boldface (e.g., the data vector z)

Equations: Single Response Function
 In each voxel, fit data Zn to a curve of the form
       Zn ≈ a + b×tn + β×rn   for n = 0, 1, …, N−1 (N = # time points)
 a, b, β are unknown parameters to be calculated in each voxel
 a, b are nuisance baseline parameters
 β is the amplitude of r(t) in the data = how much BOLD
 Baseline model might be more complicated for long (> 150 s) continuous imaging runs:
 150 < T < 300 s: a + b×t + c×t²
 Longer:  a + b×t + c×t² + about T/200 low-frequency components
 Might also include as extra baseline components the estimated subject head movement time series, in order to remove residual contamination from such artifacts

Equations: Multiple Response Functions
 In each voxel, fit data Zn to a curve of the form
        Zn ≈ a + b×tn + β1×rn(1) + β2×rn(2) + …
 βj is the amplitude in the data of rn(j) = rj(tn); i.e., how much of the j th response function is in the data time series
 In simple regression, each rj(t) is derived directly from the stimulus timing and a user-chosen HRF model
 In terms of stimulus times: rj(t) = Σk h(t − tk(j)), summing the HRF over the onset times tk(j) of the class-j stimuli
 If the stimuli occur on the imaging TR time-grid, stimulus #j can be represented as a 0-1 time series:
        rn(j) = Σk hn−k × sk(j),  where sk(j) = 1 if stimulus #j is on at time t = k×TR, and sk(j) = 0 if #j is off at that time

Equations: Matrix-Vector Form
 Express the known data vector as a sum of known columns with unknown coefficients:  z ≈ R β, where each column of the matrix R is a known baseline or response model time series and β is the vector of unknown amplitudes

Visualizing the R Matrix
Solving z ≈ Rβ for β
 Number of equations = number of time points
 100s per run, but perhaps 1000s per subject
 Number of unknowns usually in range 5-50
 Least squares solution:  β̂ = (RᵀR)⁻¹ Rᵀ z
 β̂ denotes an estimate of the true (unknown) β
 From β̂, calculate ẑ = R β̂ as the fitted model
 z − ẑ is the residual time series = noise (we hope)
 Collinearity: when the matrix RᵀR can't be inverted
 Near collinearity: when the inverse exists but is huge
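 A compact sketch of this step (illustration only, not 3dDeconvolve's actual solver): assemble the baseline and response columns into R, solve for β̂ by least squares, and check the condition number of R as a collinearity warning (compare the accuracy checks described later in these slides).
import numpy as np

def fit_voxel(z, columns):
    """Least squares fit of one voxel time series z to the columns of R."""
    R = np.column_stack(columns)                        # N x (number of unknowns)
    print("condition number of R:", np.linalg.cond(R))  # huge => near collinearity
    beta_hat = np.linalg.lstsq(R, z, rcond=None)[0]     # estimate of beta
    z_fit = R @ beta_hat                                # fitted model
    resid = z - z_fit                                   # residual = noise (we hope)
    return beta_hat, z_fit, resid

# usage sketch: columns = [constant, linear trend, response model(s)]
# beta_hat, z_fit, resid = fit_voxel(z, [np.ones(len(z)), t, r])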

Simple Regression: Recapitulation
 Choose HRF model h(t) [AKA fixed-model regression]
 Build a model response rj(t) for each stimulus class
 Using h(t) and the stimulus timing
 Choose baseline model time series
 Constant + linear + quadratic + movement?
 Assemble model and baseline time series into the columns of the R matrix
 For each voxel time series z, solve z ≈ Rβ for β̂
 Individual subject maps: Test the coefficients in β̂ that you care about for statistical significance
 Group maps: Transform the coefficients in β̂ that you care about to Talairach space, and perform statistics on these β̂ values

"Enough theory (for now"
 Enough theory (for now: more to come later!)
 To look at the data: type cd AFNI_data1/afni ; then afni
 Switch Underlay to dataset epi_r1
 Then Sagittal Image and Graph
 FIM→Pick Ideal ; then click afni/ideal_r1.1D ; then Set
 Right-click in image, Jump to (ijk), then 29 11 13, then Set

Preparing Data for Analysis
 Six preparatory steps are common:
 Image registration (realignment): program 3dvolreg
 Image smoothing: program 3dmerge
 Image masking: program 3dClipLevel or 3dAutomask
 Conversion to percentile: programs 3dTstat and 3dcalc
 Censoring out time points that are bad: program 3dToutcount or 3dTqual
 Catenating multiple imaging runs into 1 big dataset: program 3dTcat
 Not all steps are necessary or desirable in any given case
 In this first example, will only do registration, since the data obviously needs this correction

Data Analysis Script
 In file epi_r1_decon:
waver -GAM                        \
      -input epi_r1_stim.1D       \
      -TR 2.5                     \
      > epi_r1_ideal.1D
3dvolreg -base 2                  \
         -prefix epi_r1_reg       \
         -1Dfile epi_r1_mot.1D    \
         -verb                    \
         epi_r1+orig
3dDeconvolve                      \
    -input epi_r1_reg+orig        \
    -nfirst     2                 \
    -num_stimts 1                 \
    -stim_file  1 epi_r1_ideal.1D \
    -stim_label 1 AllStim         \
    -tout                         \
    -bucket epi_r1_func           \
    -fitts  epi_r1_fitts

Contents of .1D files
epi_r1_stim.1D   epi_r1_ideal.1D
   0                   0
   0                   0
   0                   0
   0                   0
   0                   0
   0                   0
   0                   0
   0                   0
   0                   0
   1                   0
   1                   24.4876
   1                   122.869
   1                   156.166
   1                   160.258
   1                   160.547
   1                   160.547
   1                   160.547
   0                   160.547
   0                   136.059
   0                   37.6781
   0                   4.38121
   0                   0.288748
   0                   0
   0                   0
                     

To Run Script and View Results
 type source epi_r1_decon ; then wait for programs to run
 type afni to view what we've got
 Switch Underlay to epi_r1_reg (output from 3dvolreg)
 Switch Overlay to epi_r1_func (output from 3dDeconvolve)
 Sagittal Image and Graph viewers
 FIM→Ignore→2 to have the graph viewer not plot the 1st 2 time points
 FIM→Pick Ideal ; pick epi_r1_ideal.1D (output from waver)
 Define Overlay to set up functional coloring
 Olay→Allstim[0] Coef (sets coloring to be from the model fit β)
 Thr→Allstim[0] t-s (sets the threshold to be the model fit t-statistic)
 See Overlay (otherwise you won't see the function!)
 Play with threshold slider to get a meaningful activation map (e.g., t =4 is a decent threshold)

More Viewing the Results
 Graph viewer: Opt→Tran 1D→Dataset #N to plot the model fit dataset output by 3dDeconvolve
 Will open the control panel for the Dataset #N plugin
 Click first Input on ; then choose Dataset epi_r1_fitts+orig
 Also choose Color dk-blue to get a pleasing plot
 Then click on Set+Close (to close the plugin panel)
 Should now see fitted time series in the graph viewer instead of data time series
 Graph viewer: click Opt→Double Plot→Overlay on to make the fitted time series appear as an overlay curve
 This tool lets you visualize the quality of the data fit
 Can also now overlay function on MP-RAGE anatomical by using Switch Underlay to anat+orig dataset
Probably won't want to graph the anat+orig dataset!

Stimulus Correlated Movement?
Removing Residual Motion Artifacts
 Last part of script epi_r1_decon:
3dDeconvolve                         \
    -input epi_r1_reg+orig           \
    -nfirst     2                    \
    -num_stimts 7                    \
    -stim_file  1 epi_r1_ideal.1D    \
    -stim_label 1 AllStim            \
    -stim_file  2 epi_r1_mot.1D'[0]' \
    -stim_base  2                    \
    -stim_file  3 epi_r1_mot.1D'[1]' \
    -stim_base  3                    \
    -stim_file  4 epi_r1_mot.1D'[2]' \
    -stim_base  4                    \
    -stim_file  5 epi_r1_mot.1D'[3]' \
    -stim_base  5                    \
    -stim_file  6 epi_r1_mot.1D'[4]' \
    -stim_base  6                    \
    -stim_file  7 epi_r1_mot.1D'[5]' \
    -stim_base  7                    \
    -tout                            \
    -bucket epi_r1_func_mot          \
    -fitts  epi_r1_fitts_mot

Some Results: Before and After
Multiple Stimulus Classes
 The experiment analyzed here is in fact more complicated
 There are 4 related visual stimulus types
 One goal is to find areas that are differentially activated between these different types of stimuli
 We have 4 imaging runs, 108 useful time points each (skipping first 2 in each run) that we will analyze together
 Already registered and put together into dataset rall_vr+orig
 Stimulus timing files are in subdirectory stim_files/
 Script file waver_ht2 will create HRF models for regression:
     cd stim_files
     waver -dt 2.5 -GAM -input scan1to4a.1D > scan1to4a_hrf.1D
     waver -dt 2.5 -GAM -input scan1to4t.1D > scan1to4t_hrf.1D
     waver -dt 2.5 -GAM -input scan1to4h.1D > scan1to4h_hrf.1D
     waver -dt 2.5 -GAM -input scan1to4l.1D > scan1to4l_hrf.1D
     cd ..
 Type source waver_ht2 to run this script
 Might also use 1dplot to check if input .1D files are reasonable

Regression with Multiple Model Files
 Script file decon_ht2 does the job:
3dDeconvolve -xout -input rall_vr+orig                                \
    -num_stimts 4                                                     \
    -stim_file 1 stim_files/scan1to4a_hrf.1D  -stim_label 1 Actions   \
    -stim_file 2 stim_files/scan1to4t_hrf.1D  -stim_label 2 Tool      \
    -stim_file 3 stim_files/scan1to4h_hrf.1D  -stim_label 3 HighC     \
    -stim_file 4 stim_files/scan1to4l_hrf.1D  -stim_label 4 LowC      \
    -concat contrasts/runs.1D                                         \
    -glt 1 contrasts/contr_AvsT.txt   -glt_label 1 AvsT               \
    -glt 1 contrasts/contr_HvsL.txt   -glt_label 2 HvsL               \
    -glt 1 contrasts/contr_ATvsHL.txt -glt_label 3 ATvsHL             \
    -full_first -fout -tout                                           \
    -bucket func_ht2
 Run this script by typing source decon_ht2 (takes a few minutes)

Regressors for This Script
Extra Features of 3dDeconvolve - 1
-concat contrasts/runs.1D = file that indicates where new imaging runs start
-full_first = put the full model statistic first in the output file, not last
-fout -tout = output both F- and t-statistics
 The full model statistic is an F-statistic that shows how well the sum of all 4 input model time series fits voxel time series data
 The individual models also will get individual F- and t-statistics indicating the significance of their individual contributions to the time series fit
 i.e., FActions tells if model (Actions+HighC+LowC+Tools+baseline) explains more of the data variability than model (HighC+LowC+Tools+baseline) with Actions omitted

Extra Features of 3dDeconvolve - 2
 -glt 1 contrasts/contr_AvsT.txt   -glt_label 1 AvsT
 -glt 1 contrasts/contr_HvsL.txt   -glt_label 2 HvsL
 -glt 1 contrasts/contr_ATvsHL.txt -glt_label 3 ATvsHL
 GLTs are General Linear Tests
 3dDeconvolve provides tests for each regressor separately, but if you want to test combinations or contrasts of the β weights in each voxel, you need the -glt option
 File contrasts/contr_AvsT.txt = 0 0 0 0 0 0 0 0 1 -1 0 0  (one line with 12 numbers)
 Goal is to test a linear combination of the β weights
 In this data, we have 12 β weights: 8 baseline parameters (2 per imaging run), which are first in the β vector, and 4 regressor magnitudes, which are from the -stim_file options
 This particular test contrasts the Actions and Tool βs
 tests whether βActions − βTool ≠ 0
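 A small sketch (hypothetical Python, assuming ordinary least squares and the 12-element β ordering described above) of what a GLT boils down to: compute c·β̂ for the contrast vector c and divide by its standard error to get a t-statistic.
import numpy as np

def glt_t(R, z, c):
    """t-statistic for the linear combination c . beta in the fit z ~ R beta."""
    beta_hat = np.linalg.lstsq(R, z, rcond=None)[0]
    resid = z - R @ beta_hat
    dof = R.shape[0] - R.shape[1]
    sigma2 = resid @ resid / dof                     # residual variance estimate
    var_c = sigma2 * c @ np.linalg.inv(R.T @ R) @ c  # variance of c . beta_hat
    return (c @ beta_hat) / np.sqrt(var_c)

c_AvsT = np.array([0.0] * 8 + [1.0, -1.0, 0.0, 0.0])  # 8 baseline zeros, then Actions - Tool
# t = glt_t(R, z, c_AvsT)   # with R and z from the regression for this voxel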

Extra Features of 3dDeconvolve - 3
 File contrasts/contr_HvsL.txt = 0 0 0 0 0 0 0 0 0 0 1 -1
 Goal is to test whether βHighC − βLowC ≠ 0
 File contrasts/contr_ATvsHL.txt = 0 0 0 0 0 0 0 0 1 1 -1 -1
 Goal is to test whether (βActions + βTool) − (βHighC + βLowC) ≠ 0
 Regions where this statistic is significant will have had different amounts of BOLD signal change in the activity viewing tasks versus the grating viewing tasks
This is a way to factor out primary visual cortex
 -glt_label 3 ATvsHL option is used to attach a meaningful label to the resulting statistics sub-bricks

Results of decon_ht2 Script
Statistics from 3dDeconvolve
 An F-statistic measures significance of how much a model component reduced the variance of the time series data
 Full F measures how much the signal regressors reduced the variance over just the baseline regressors (sub-brick #0 below)
 Individual partial-model F's measure how much each individual signal regressor reduced the data variance, relative to the full model with that regressor excluded (sub-bricks #19, #22, #25, and #28 below)

Alternative Way to Run waver
 Instead of giving stimulus timing on the TR-grid as a set of 0s and 1s
 Can give the actual stimulus times (in seconds) using the -tstim option
 waver -dt 1.0 -GAM -tstim 3 12 17 | 1dplot -stdin
 If times are in a file, can use -tstim `cat filename` to place them on the command line after -tstim option
 This is most useful for event-related experiments

Deconvolution Signal Models
 Simple or Fixed-shape regression:
 We fixed the shape of the HRF
 Used waver to generate the signal model from the stimulus timing
 Found the amplitude of the signal model in each voxel
 Deconvolution or Variable-shape regression:
 We allow the shape of the HRF to vary in each voxel, for each stimulus class
 Appropriate when you don't want to over-constrain the solution by assuming an HRF shape
 Caveat: need to have enough time points during the HRF in order to resolve its shape

Deconvolution: Pros and Cons
  Letting the HRF shape vary allows for subject and regional variability in hemodynamics
  Can test HRF estimate for different shapes; e.g., are later time points more active than earlier?
  Need to estimate more parameters for each stimulus class than for a fixed-shape model (e.g., 4-15 parameters vs. 1 parameter = amplitude of HRF)
  Which means you need more data to get the same statistical power (assuming that the fixed-shape model you would otherwise use was in fact correct)
  Freedom to get any shape in HRF results can give weird shapes that are difficult to interpret

Expressing HRF via Regression Unknowns
 The tool for expressing an unknown function as a finite set of numbers that can be fit via linear regression is an expansion in basis functions
 The basis functions ψq(t) are known, as is the expansion order p
 The unknowns to be found (in each voxel) comprise the set of weights βq for each ψq(t):  h(t) = β0 ψ0(t) + β1 ψ1(t) + … + βp ψp(t)
 Since the β weights appear only multiplying known values, and the HRF appears in the final signal model only through linear convolution, the resulting signal model is still solvable by linear regression

Basis Function: Sticks
 The set of basis functions you use determines the range of possible HRFs that you can compute
 Stick (or Dirac delta) functions are very flexible
 But they come with a strict limitation
 δ(t) is 1 at t = 0 and is 0 at all other values of t
 ψq(t) = δ(t − q×TR) for q = 0, 1, 2, …, p
 h(0) = β0
 h(TR) = β1
 h(2×TR) = β2
 h(3×TR) = β3
  et cetera
 h(t) = 0 for any t not on the TR grid
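 A brief sketch (an illustration, not 3dDeconvolve's internals) of what the stick basis means for the regression: each regressor column is just the 0-1 stimulus time series delayed by q TRs, and the fitted βq trace out h(q×TR).
import numpy as np

def stick_design(s, maxlag):
    """Columns = stimulus time series s delayed by 0..maxlag TRs."""
    n = len(s)
    cols = [np.r_[np.zeros(q), s[: n - q]] for q in range(maxlag + 1)]
    return np.column_stack(cols)     # fitted weight of column q is beta_q = h(q*TR)

# s = 0-1 stimulus timing on the TR grid; choose maxlag to cover the expected
# activation duration plus ~10-12 s of hemodynamic response
# D = stick_design(s, maxlag); then fit z ~ [baseline columns, D]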

Sticks: Good Points
 Can represent arbitrary shapes of the HRF, up and down, with ease
 Meaning of each βq is completely obvious
 Value of HRF at time lag q×TR after activation
 3dDeconvolve is set up to deal with stick functions for representing HRF, so using them is very easy
 What is called p here is given by command line option -stim_maxlag in the program
 When choosing p, rule is to estimate longest duration of neural activation after stimulus onset, then add 10-12 seconds to allow for slowness of hemodynamic response

Sticks and TR-locked Stimuli
 h(t ) = 0 for any t not on the TR grid
 This limitation means that, using stick functions as our basis set, we can only model stimuli that are locked to the TR grid
 That is, stimuli/activations don't occur at fully general times, but only occur at integer multiples of TR
 For example, suppose an activation is at t = 1.7×TR
 We need to model the response at later times, such as 2×TR, 3×TR, etc., so need to model h(t) at times such as t = (2−1.7)×TR = 0.3×TR,  t = 1.3×TR,  etc., after the stimulus
 But the stick function model doesn't allow for such intermediate times
 or, can allow Δt for sticks to be a fraction of TR for data
 e.g., Δt = TR/2, which implies twice as many βq parameters to cover the same time interval (time interval needed is set by hemodynamics)
then would allow stimuli that occur on TR-grid or halfway in-between

Deconvolution and Collinearity
 Regular stimulus timing can lead to collinearity!
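 A quick numerical illustration of the warning above (made-up numbers): with perfectly periodic stimuli, the delayed stick regressors are nearly identical copies of each other, so the design matrix is nearly collinear.
import numpy as np

n, period, maxlag = 120, 4, 7            # stimulus every 4 TRs; lags 0..7
s = np.zeros(n); s[::period] = 1.0       # regular 0-1 stimulus timing

# delayed copies of s = the stick-function regressors
D = np.column_stack([np.r_[np.zeros(q), s[: n - q]] for q in range(maxlag + 1)])

U = D / np.linalg.norm(D, axis=0)
print(round(U[:, 0] @ U[:, 4], 3))       # correlation of lag-0 and lag-4 columns: ~0.98
print(np.linalg.cond(np.column_stack([np.ones(n), D])))   # inflated condition number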

3dDeconvolve with Stick Functions
 Instead of inputting a signal model time series (e.g., created with waver and stimulus timing), you input the stimulus timing directly
 Format: a text file with 0s and 1s, 0 at TR-grid times with no stimulus, 1 at time with stimulus
 Must specify the maximum lag (in units of TR) that we expect HRF to last after each stimulus
 This requires you to make a judgment about the activation: brief or long?
 3dDeconvolve returns estimated values for each βq for each stimulus class
 Usually then use a GLT to test the HRF (or pieces of it) for significance

Extra Features of 3dDeconvolve - 4
 -stim_maxlag k p = option to set the maximum lag to p for stimulus timing file #k (k = 1, 2, …)
 Stimulus timing file is input using command line option -stim_file k filename as before
 Can also use -stim_minlag k m option to set the minimum lag if you want a value m different from 0
 In which case there are p-m+1 parameters in this HRF
 -stim_nptr k r = option to specify that there are r stimulus subintervals per TR, rather than just 1
 This feature can be used to get a finer grained HRF, at the cost of adding more parameters that need to be estimated
 Need to make sure that the input stimulus timing file (from -stim_file) has r entries per TR
TR for -stim_file and for the output HRF is the data TR ÷ r

Script for Deconvolution - The Data
 cd AFNI_data2
 data is in ED/ subdirectory (10 runs of 136 images, TR=2 s)
 script in file @analyze_ht05
 stimuli timing and GLT contrast files in misc_files/
 start script now by typing  source @analyze_ht05
 will discuss details of script while it runs
Event-related study from Mike Beauchamp
 Four classes of stimuli (short videos)
 Tools moving (e.g., a hammer pounding) - TM
 People moving (e.g., jumping jacks) - HM
 Points outlining tools moving (no objects, just points) - TP
 Points outlining people moving - HP
 Goal is to find if there is an area that distinguishes natural motions (HM and HP) from simpler rigid motions (TM and TP)

Script for Deconvolution - Outline
 Registration of each imaging run (there are 10): 3dvolreg
 Smooth each volume in space (136 sub-bricks per run): 3dmerge
 Create a brain mask: 3dAutomask and 3dcalc
 Rescale each voxel time series in each imaging run so that its average through time is 100: 3dTstat and 3dcalc
 If the baseline is 100, then a βq of 5 (say) indicates a 5% signal change in that voxel at time lag #q after the stimulus
 Catenate all imaging runs together into one big dataset (1360 time points): 3dTcat
 Compute HRFs and statistics: 3dDeconvolve
 Each HRF will have 15 output points (lags from 0 to 14) with a TR of 1.0 s, since the input data has a TR of 2.0 s and we will be using the -stim_nptr k r option with r=2
 Average together central points 4..9 of each separate HRF to get peak % change in each voxel: 3dTstat
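 A tiny sketch of the rescaling step (illustration only; the script itself uses 3dTstat and 3dcalc): divide each voxel's time series by its own temporal mean and multiply by 100, so fitted β's read directly as percent signal change.
import numpy as np

def scale_to_percent(z):
    """Rescale one voxel time series so its mean over time is 100."""
    return 100.0 * np.asarray(z) / np.mean(z)

# after this scaling, a fitted beta_q of 5 means a 5% signal change at lag q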

Script for Deconvolution - 1
Script for Deconvolution - 2
Script for Deconvolution - 3
Script for Deconvolution - 4
Script for Deconvolution - 5
Script for Deconvolution - 6
Script for Deconvolution - 6a
Script for Deconvolution - 6b
Script for Deconvolution - 6c
Script for Deconvolution - 7
Results: Humans vs. Tools
Yet More Fun 3dDeconvolve Options
 -mask = used to turn off processing for some voxels
 speed up the program by not processing non-brain voxels
 -input1D = used to process a single time series, rather than a dataset full of time series
 test out a stimulus timing sequence
 -nodata option can be used to check for collinearity
 -censor = used to turn off processing for some time points
 for time points that are bad (e.g., too much movement)
 -sresp = output standard deviation of HRF estimates
 can plot error bands around HRF in AFNI graph viewer
 -errts = output residuals (i.e., difference between fitted model and data)
 for statistical analysis of time series noise
 -jobs N = run with multiple CPUs (N of them)
 extra speed, if you have a dual- or quad-processor system!

3dDeconvolve with Free Timing
 The fixed-TR stick function approach doesn't fit with arbitrary timing of stimuli
 When subject actions/reactions are self-initiated, timing of activations cannot be controlled
 If you want to do deconvolution, then must adopt a different basis function expansion approach
 One that has a finite number of parameters but also allows for calculation of h(t ) at any arbitrary point in time
 Simplest set of such functions are closely related to stick functions: tent functions

Tent Functions = Linear Interpolation
 Expansion in a set of spaced-apart tent functions is the same as linear interpolation
 Tent function parameters are also easily interpreted as function values (e.g., β2 = response at time t = 2×L)
 User must decide on relationship of tent function spacing L and time grid TR
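 A short sketch of the idea (illustrative only, not 3dDeconvolve's TENT implementation): each basis function is a triangle of half-width L centered at q×L, and the βq-weighted sum is exactly linear interpolation through the points (q×L, βq), so h(t) can be evaluated at any time.
import numpy as np

def tent(x):
    """Unit tent: 1 at x=0, falling linearly to 0 at |x|=1, zero beyond."""
    return np.clip(1.0 - np.abs(x), 0.0, None)

def hrf_from_tents(t, beta, L):
    """h(t) = sum_q beta[q] * tent((t - q*L)/L)  ==  linear interpolation of the beta's."""
    return sum(b * tent((t - q * L) / L) for q, b in enumerate(beta))

L = 2.0                                       # tent spacing in seconds (user's choice)
beta = [0.0, 1.2, 3.0, 2.4, 1.0, 0.3, 0.0]    # hypothetical fitted tent weights
t = np.linspace(0.0, 12.0, 121)
h = hrf_from_tents(t, beta, L)                # HRF evaluated off the TR grid
print(h[::20])                                # at t = 0, 2, 4, ... these are just the beta's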

At This Point (13 July 2004)
 3dDeconvolve is not set up to use tent functions directly
 In the past, we explained in grotesque detail how to set up a combination of waver, 3dcalc, and 3dDeconvolve to trick the system into doing deconvolution with tent functions (or other basis sets)
 However, you are saved from this excruciation
 In the near future, we will put tent functions directly into 3dDeconvolve, allowing the direct use of non-TR locked stimulus timing
 Date of this promise: 13 July 2004 (AFNI Summer Bootcamp)

Upgrades Since July 2004
 See  http://afni.nimh.nih.gov/doc/misc/3dDeconvolveSummer2004/
 Equation solver: Gaussian elimination to compute R matrix pseudo-inverse was replaced by SVD (like principal components)
 Advantage: smaller sensitivity to computational errors
 Condition number and inverse error values are printed at program startup, as measures of accuracy of pseudo-inverse
 Condition number < 1000 is good
 Inverse error < 1.0e-10 is good
 3dDeconvolve_f program can be used to compute in single precision (7 decimal places) rather than double precision (16)
 For better speed, but with lower numerical accuracy
 Best to do at least one run both ways to check if results differ significantly  (SVD solver should be safe)

Recent Upgrades - 2
 New -xjpeg xxx.jpg option will save a JPEG image file of the columns of the R matrix into file xxx.jpg (and an image of the pseudo-inverse of R into file xxx_psinv.jpg)

Recent Upgrades - 3
 Matrix inputs for -glt option can now indicate lots of zero entries using a notation like 30@0 1 -1 0 0 to indicate that 30 zeros precede the rest of the input line
 Example: 10 imaging runs and -polort 2 for baseline
 Can put comments into matrix and .1D files, using lines that start with # or //
 Can use \ at end of line to specify continuation
 Matrix input for GLTs can also be expressed symbolically, using the names given with the -stim_label options:
 -stim_label 1 Ear -stim_maxlag 1 4
 -stim_label 2 Wax -stim_maxlag 2 4
 Old style GLT might be
{zeros for baseline} 0 0 1 1 1 0 0 -1 -1 -1
 New style (via -gltsym option) is
 Ear[2..4] -Wax[2..4]

Recent Upgrades - 4
 New -xsave option saves the R matrix (and other info) into a file that can be used later with the -xrestore option to calculate some extra GLTs, without re-doing the entire analysis (goal: save some time)
 -input option now allows multiple 3D+time datasets to be specified to automatically catenate individual runs into one file on the fly
 Avoids having to use program 3dTcat
 User must still supply full-length .1D files for the various input time series (e.g., -stim_file) options
 -concat option will be ignored if this option is used
 Break points between runs will be taken as the break points between the various -input datasets
 -polort option now uses Legendre polynomials instead of the simple 1, t, t², t³, … basis functions (more numerical accuracy)

Recent Upgrades - 5
 3dDeconvolve now checks for duplicate -stim_file names and for duplicate matrix columns, and prints warnings
 These are not fatal errors
 If the same regressor is given twice, each copy will only get half the amplitude (the beta weight) in the solution
 All-zero regressors are now allowed
 Will get zero weight in the solution
 A warning message will be printed to the terminal
 Example: task where subject makes a choice for each stimulus
 You want to analyze correct and incorrect trials as separate cases
 What if a subject makes no mistakes?  Hmmm

Recent Upgrades - 6
 Direct input of stimulus timing plus a response model, using new -stim_times option
 -stim_times k tname rtype
 k = stimulus index (from 1 to -num_stimts value)
 tname = name of .1D file containing stimulus times in units of seconds (important: TR value in dataset header must be correct!)
 rtype = name of response model to use for each stimulus read from tname file
 GAM = gamma variate function from waver
 TENT(b,c,n) = tent function deconvolution, ranging from time s+b to s+c after each stimulus time s, with n basis functions
 several other rtype options available

Recent Upgrades - 7
 Recall: -iresp option outputs the HRF model for one stimulus
 When used with -stim_times, values are usually output using the dataset TR time spacing
 Can change to a different grid via the new -TR_times dt option, which sets the output grid spacing for -iresp to dt for HRF models computed via -stim_times
 Is useful for producing nice smooth pictures of HRF
 Also works with -sresp option (= std.dev. of HRF)
 Difficulty: using GLTs with results from -stim_times
 GLTs operate on regression coefficients
 For most rtype models, regression coefficients don't correspond directly to HRF amplitudes
 Exceptions: GAM, TENT, BLOCK
 Planned solution: see next slide

Upgrades Planned or Dreamed of
 Automatic baseline normalization of input time series
 Automatic mask generation (à la 3dAutomask program)
 Spatial blur (à la 3dmerge -1blur)
 Time shift input before analysis (à la 3dTshift program)
 Negative lags for -stim_file method of deconvolution
 for pre-stimulus cognition/anticipation
 -stim_times already allows pre-stimulus response
 Area under curve addition to -gltsym to allow testing of pieces of HRF models from -stim_times
 Slice- and/or voxel-dependent regressors
 For physiological noise cancellation, etc.
 Floating point output format
 Currently is shorts + scale factor

Advanced Topics in Regression
 Can have activations with multiple phases that are not always in the same time relationship to each other; e.g.:
 subject gets cue #1
 variable waiting time (hold)
 subject gets cue #2, emits response
 which depends on both cue #1 and #2
 Cannot treat this as one event with one HRF, since the different waiting times will result in different overlaps in separate responses from cue #1 and cue #2
 Solution is multiple HRFs: separate HRF (fixed shape or deconvolution) for cue #1 times and for cue #2 times
 Must have significant variability in inter-cue waiting times, or will get a nearly-collinear model
impossible to tell tail end of HRF #1 from the start of HRF #2, if always locked together in same temporal relationship
 How much variability is significant?  Good question.

Even More Complicated Case
 Solving a visually presented puzzle:
 subject sees puzzle
 subject cogitates a while
 subject responds with solution
 The problem is that we expect some voxels to be significant in phase (b) as well as phases (a) and/or (c)
 Variable length of phase (b) means that shape for its response varies between trials
Which is contrary to the whole idea of averaging trials together to get decent statistics (which is basically what linear regression amounts to, in a fancy sort of way)
 Could assume response amplitude in phase (b) is constant across trials, and response duration varies directly with time between phases (a) and (c)
Need three HRFs; phase (b)'s is a little tricky to generate using waver, but it could be done

Noise Issues
 Noise in FMRI is caused by several factors, not completely characterized
 MR thermal noise (well understood, unremovable)
 Cardiac and respiratory cycles (partly understood)
 In principle, could measure these sources of noise separately and then try to regress them out
 RETROICOR program underway (R Birn & M Smith of FIM/NIMH)
 Scanner fluctuations (e.g., thermal drift of hardware)
 Small subject head movements (10-100 μm)
 Very low frequency fluctuations (periods longer than 100 s)
 Data analysis should try to remove what can be removed and allow for the statistical effects of what can't be removed
 Serial correlation in the noise time series affects the t- and F-statistics calculated by 3dDeconvolve
 At present, nothing is done to correct for this effect (by us)

Nonlinear Regression
 Linear models aren't everything
 e.g., could try to fit an HRF of the form h(t) = a×t^b×exp(−t/c)
 Unknowns b and c appear nonlinearly in this formula
 Program 3dNLfim can do nonlinear regression (including nonlinear deconvolution)
 User must provide a C function that computes the model time series, given a set of parameters (e.g., a, b, c)
 Program then drives this C function repeatedly, searching for the set of parameters that best fit each voxel
 Has been used to fit pharmacological wash-in/wash-out models (difference of two exponentials) to FMRI data acquired during pharmacological challenges
 e.g., injection of nicotine, cocaine, etc.
 these are tricky experiments, at best
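 As a sketch of the idea (using a generic Python curve fitter, not 3dNLfim, and made-up data), fitting h(t) = a×t^b×exp(−t/c) means iteratively searching for the (a, b, c) that best match the measured response:
import numpy as np
from scipy.optimize import curve_fit

def hrf(t, a, b, c):
    """Gamma-variate HRF: a enters linearly, b and c enter nonlinearly."""
    return a * t**b * np.exp(-t / c)

t = np.arange(0.5, 20.0, 0.5)
rng = np.random.default_rng(3)
y = hrf(t, 1.0, 8.6, 0.547) + 2.0 * rng.standard_normal(t.size)   # fake measured response

popt, pcov = curve_fit(hrf, t, y, p0=(1.0, 8.0, 0.5))   # iterative nonlinear fit
print(popt)                                             # estimated (a, b, c)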