AFNI HOW-TO #2:
BLOCK DESIGNS AND AFNI 3dDeconvolve
Table of Contents:
* Create anatomical datasets using AFNI 'to3d'
* Create time series datasets using AFNI 'to3d'
* Concatenate all four runs into a single dataset using '3dTcat'
* Volume register your datasets using '3dvolreg'
* Concatenate stimulus timings from all 4 runs.
* Create ideal reference function for each of the four stimulus
conditions using 'waver'
* Analyze data using '3dDeconvolve'
------------------------------------------------------------------------
------------------------------------------------------------------------
* EXAMPLE EXPERIMENT *
------------------------------------------------------------------------
------------------------------------------------------------------------
The following experiment will serve as our example:
Experiment:
Visual processing of moving objects. Participants are presented with
animate, biological items (e.g., humans) or manipulable, non-biological
items (e.g., tools). Controls are high- and low-contrast gradients.
Design: BLOCK
Conditions:
1. Animate Human (a)
2. Moving Tool (t)
3. Low Contrast Moving Gradient (l)
4. High Contrast Moving Gradient (h)
Data Collected:
One spoiled GRASS (SPGR) anatomical scan
(stored in a directory called 'SPGR_data')
Four EPI time-series scans
(stored in a directory called 'EPI_data')
For additional information on this experiment, see 'background:the experiment'
of this how-to or refer to:
Beauchamp, M.S., Lee, K.E., Haxby, J.V., & Martin, A. (2002). Parallel visual
motion processing streams for manipulable objects and human movements.
Neuron, 34:149-159.
------------------------------------------------------------------------
------------------------------------------------------------------------
* PRE-PROCESSING DATA IN AFNI *
------------------------------------------------------------------------
------------------------------------------------------------------------
* COMMAND: to3d
Use AFNI 'to3d' to convert your anatomical I-files into 3D datasets.
Usage: to3d [-options] image_files
(see also 'to3d -help')
-----------------------
* EXAMPLE of 'to3d' using anatomical data:
cd to the directory containing your anatomical I-files and run 'to3d'
from there...
cd $topdir/DDmb_data/SPGR_data
to3d -prefix DDSPGR -session $topdir/afni I.*
This command will create an AFNI 3D dataset, which consists of a BRIK
file (the collection of all the 2D slice data that forms the 3D volume)
and a HEAD file
(auxiliary information pertaining to the data). This dataset will be
saved in the $topdir/afni directory (for clarity, all datasets created in
this how-to will be saved in the $topdir/afni directory).
DDSPGR+orig.BRIK DDSPGR+orig.HEAD
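If you would like to confirm that the dataset was written correctly, the
AFNI program '3dinfo' will print its dimensions, orientation, and voxel
sizes (a quick, optional check):
cd $topdir/afni
3dinfo DDSPGR+orig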
-----------------------
* EXPLANATION of 'to3d' options:
There are numerous 'to3d' options to choose from. Adding these options to
the command line will allow the user to bypass the 'to3d' Graphical User
Interface (GUI). If the user chooses to work in GUI mode, many of these
options (but not all) can also be selected by filling out the on-screen
form. Since our raw data are in I-file format, it is not necessary to
include any options describing the geometry of the image files, such as
the orientation of the slices in space, the voxel size, the slice offset,
etc.: 'to3d' can retrieve this information directly from the I-file
headers.
-prefix DDSPGR
Creates a prefix name for your AFNI 3D dataset, in this example, 'DDSPGR'.
To bypass the 'to3d' GUI, the user must provide a prefix name on the
command line.
-session <name>
Writes the 3D dataset into the appropriate session directory, in this
example, $topdir/afni.
------------------------------------------------------------------------
------------------------------------------------------------------------
* COMMAND: to3d (for time series data)
Use AFNI 'to3d' to convert your functional data into 3D+time datasets.
Usage: to3d [-options] -time:zt [#slices] [#reps] [TR(ms)] [tpattern] image_files
  or   to3d [-options] -time:tz [#reps] [#slices] [TR(ms)] [tpattern] image_files
(see also 'to3d -help')
The example below involves raw data that is not in I-file format. It
contains no header information.
To create a 3D+time dataset, additional time-axis information, which
cannot be modified from the GUI, must be given on the 'to3d' command line.
In addition, if you have image files that are not in I-file format (as in
our example), additional options must be included on the command line.
-----------------------
* EXAMPLE of 'to3d' using time series data:
cd to the directory containing your time series datasets and run
'to3d' from there:
cd $topdir/DDmb_data/EPI_data
to3d -prefix DDr1 -session $topdir/afni \
-save_outliers $outlier_dir/DDr1.outliers \
-orient SPR -zorigin 69 -epan \
-xSLAB 118.125S-I -ySLAB 118.125P-A -zSLAB 69R-61L \
-time:tz 110 27 2500 alt+z 3Ds:0:0:64:64:1:"DDr1*"
This command will create a head and brick file for run #1 of the echo
planar data (remember, we have a total of 4 EPI runs in this example).
The data will be saved in the $topdir/afni directory.
DDr1+orig.BRIK DDr1+orig.HEAD
-----------------------
* EXPLANATION of 'to3d' options:
Consider the input files in the above 'to3d' command, 'DDr1*'.
The raw image files used in this example contain no header information at
all. Each file, such as 'DDr1_06.058' (i.e., the image from the first
run, sixth slice, and fifty-eighth time point), is 8192 bytes long,
consisting only of a 64x64 voxel array of 2-byte short integers.
Since there is no header information for 'to3d' to read, the user must
supply it (unlike in our previous case, using I-files). Below is some of
the information that one writes down during the scanning session and
includes on the 'to3d' command line:
-save_outliers $outlier_dir/DDr1.outliers
This option tells the program to save the outlier count for each time
point in a specified directory, in this example '$outlier_dir', under the
file name 'DDr1.outliers'.
-orient SPR
This option specifies the 3-dimensional orientation of the dataset.
Each single image, such as 'DDr1_06.058', is oriented as Superior-
to-Inferior, and Posterior-to-Anterior (taken in the Sagittal plane).
Different slices, ordered Right-to-Left, are contained in different
files. For instance, the single brain volume (3-dimensional) taken
in run 1, at time point 58, is contained in the file list:
DDr1_01.058 DDr1_02.058 DDr1_03.058 . . . DDr1_27.058
Those 27 sagittal images represent a single "snapshot" of the brain,
the whole brain at (approximately) one point in time.
-zorigin 69
This option tells 'to3d' that the first slice is located at 69 mm Right
of the origin (approximately in the center of the brain). The 'Right'
matches the 'z direction' in the '-orient SPR' option, above.
-epan
This option is used to inform 'to3d' that these are Echo Planar images.
-xSLAB 118.125S-I
This option specifies the locations of the centers of the outermost
voxels along the x-axis. In this example, the first voxel along the
x-axis is centered at 118.125 mm Superior, and the last is centered at
118.125 mm Inferior.
Note that if the user supplies only the first value, the image is assumed
to be centered, so that '118.125S-I' is equivalent to '118.125S-118.125I'.
The 64 voxel coordinates along this axis are assumed to be evenly spaced.
As a side note, you may notice that AFNI 'to3d' provides another option
called '-xFOV'. Be aware that '-xSLAB' and '-xFOV' are not
interchangeable.
The '-xSLAB' option, which has been used in this case, specifies the
locations of the voxel coordinates (i.e., the centers of the voxels). In
this case, the distance between the centers of the outermost voxels is
236.25 mm. That distance spans 63 voxel widths (the centers of two
adjacent voxels are separated by exactly one voxel width), so 'to3d'
concludes that the voxel size is 236.25/63 = 3.75 mm in that direction.
Alternatively, the '-xFOV' option refers to the Field Of View, which is
measured from the outer edge of the first voxel to the outer edge of the
last voxel along the relevant axis. In this case, the FOV would be 120 mm
Superior to 120 mm Inferior (this information would have been collected
during the scanning session). 'to3d' accepts either type of option; it is
up to the user to record accurate information at scan time.
-ySLAB 118.125P-A
Similarly, this option specifies that the voxel coordinates along the
y-axis are located from 118.125 mm Posterior, to 118.125 mm Anterior.
Again, the 64 voxel coordinates along this axis are assumed to be evenly
spaced, implying a voxel width of 3.75 mm.
-zSLAB 69R-61L
These are the outer coordinates for the voxel centers along the z-axis,
which runs from image slice to image slice. The outermost slice centers
are 130 mm apart and there are 27 slices, so the implied slice width is
5 mm (130/26). Again, note that if we had used '-zFOV', the coordinates
would be '71.5R-63.5L', half of a voxel width farther out on each side.
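As a quick check of the arithmetic above, the voxel sizes implied by the
three SLAB specifications can be reproduced with the standard UNIX
calculator 'bc' (purely an illustration; 'to3d' does these computations
itself):
echo "(118.125 + 118.125) / 63" | bc -l
echo "(69 + 61) / 26" | bc -l
The first command prints 3.75 (the x and y voxel size, in mm) and the
second prints 5 (the slice width, in mm).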
-----------------------
-time:tz 110 27 2500 alt+z 3Ds:0:0:64:64:1:"DDr1*"
The individual components of this command are broken down and explained
below. NOTE: for historical reasons, this part of the command line
CANNOT be modified from the GUI; it must be given on the command line.
-time:tz 110 27 2500 alt+z
This sequence of option parameters tells 'to3d' the scan timing, with
respect to the list of input files.
The '-time:tz' portion of this command tells 'to3d' that the image
slices are first presented in order of time (t-axis), followed by space
(z-axis). Recall the alphabetical order of the example files by
considering file 'DDr1_06.058'. These files are labeled such that r1
represents the run number, 06 is the slice number, and 058 is the time
point. In this example, each run is initially put into a separate
dataset, getting its own 'to3d' command. So, taken alphabetically, the
files are read in with the time index (the t-axis) varying fastest, and
then the slice number (the z-axis).
The 3 numbers following '-time:tz' are nt, nz and TR, respectively:
nt:
In this example, there are 110 "time points" (i.e., a 3-D brain volume
was acquired at each of 110 points in time per run). For example, the
110 images for run 1, slice 6, are contained in the files,
'DDr1_06.001', 'DDr1_06.002', ... 'DDr1_06.110'.
nz:
Next is 27, which represents the number of sagittal slices (our
z-direction) in each 3-D brain volume. Consider run 1, time point 58,
for example. The 27 slice images making up the corresponding brain
volume are, 'DDr1_01.058', 'DDr1_02.058', ... 'DDr1_27.058'.
TR:
Finally, 2500 refers to the TR, in milliseconds. This is the temporal
resolution of the data acquisition, i.e. the length of time from the
beginning of one 27-slice volume scan to the next. The TR tells 'to3d'
that one slice was acquired every 92.6 ms. One can see that the scan
time for each run was 275 seconds, or 2.5 seconds (the TR) multiplied
by 110 time points.
alt+z:
Finally, 'alt+z' informs 'to3d' that the slices were acquired in an
interleaved manner, by alternating in the positive z-direction. Within
a single 3-D volume scan of 27 slices, the order in which they were
acquired is 1, 3, 5, ... 25, 27, 2, 4, 6, ... 24, 26.
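Two optional illustrations of the numbers above: the per-slice timing and
run length follow directly from the TR, and the interleaved slice order
can be listed with the standard UNIX commands 'bc' and 'seq' (the latter
is available on Linux systems):
echo "2500 / 27" | bc -l
echo "2.5 * 110" | bc -l
seq 1 2 27
seq 2 2 26
The first two commands print approximately 92.6 (ms per slice) and 275
(seconds per run); the 'seq' commands list the odd-numbered slices
(acquired first) and then the even-numbered slices (acquired second).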
-----------------------
3Ds:0:0:64:64:1:"DDr1*"
This last argument specifies the format of the input images, and also what
the actual input files are. The format is specified by '3Ds:0:0:64:64:1:'
and the list of input files is given by 'DDr1*'. The individual
components of this command are broken down and explained below.
3Ds:
Within the format specification, the "s" in '3Ds' means that each input
file is an image of 2-byte integers, with the bytes swapped. That is,
each voxel in the image has a value between -32768 and 32767 (16 bit,
signed integers). A "swapped" byte indicates that each 2-byte integer
should have those bytes reversed before being evaluated.
Consider the 2-digit number "seventy-four". By everyday convention, one
would write the "7" first, followed by the "4". Computers are not so
consistent. SUN and SGI systems store the most significant byte first
(analogous to writing "74"), whereas Linux/Intel systems store the least
significant byte first (analogous to writing "47"). Both conventions are
valid, but when the byte order of the images is opposite to that of the
machine running AFNI (for example, big-endian scanner data read on a
Linux/Intel workstation), a warning may appear informing the user that
many negative voxel values have been detected and that a byte swap is
probably necessary. The swap can easily be done by clicking the "Byte
Swap" button on the 'to3d' GUI, or by using the '3Ds' format code rather
than '3D', as was done here.
3Ds:0:
Following '3Ds' is a sequence of 5 numbers, separated by colons. The
first number represents 'hglobal', or the size of the global
header. This is the number of bytes to skip at the start of each file,
to get past the image header, to the actual image data. In this case,
the image files contain no header information, so hglobal is set to
zero.
3Ds:0:0:
The second number (again, 0) is 'himage', which specifies the number of
header bytes to skip for each image, within a single file. This
applies only when files contain multiple images.
3Ds:0:0:64:64:1:
The remaining three numbers are 'nx', 'ny' and 'nz'. Each image in
this scan is 2-dimensional, 64 by 64 voxels in size. Therefore, nx, ny
and nz are 64, 64, and 1, respectively.
3Ds:0:0:64:64:1:"DDr1*"
Following the last ':' is the filename specifier. The wild card '*'
tells 'to3d' to take the list of all files starting with 'DDr1' as
input, which is to say, all files from run 1.
The quotes around 'DDr1*' are required. Recall that without quotes,
the shell will attempt to expand the expression into a list of files.
Specifically, the shell would try to expand not just 'DDr1*', but
'3Ds:0:0:64:64:1:DDr1*', and there are no files of that sort here.
Putting the quotes around the expression passes the exact string
"DDr1*" to 'to3d', allowing the program to find the list of files to
read for input.
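As a sanity check on this format specification, each raw image file
should be exactly nx * ny * 2 bytes long, since there is no header:
echo "64 * 64 * 2" | bc
ls -l DDr1_06.058
The first command prints 8192, and the size column of the 'ls -l'
listing should show the same number.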
-----------------------
With all the parameters set for dataset DDr1, the same parameters can now
be applied to the remaining EPI runs (i.e., 2, 3, and 4). Instead of
re-typing all the 'to3d' options on the command line, the user can simply
refer to the parameters previously set for DDr1 and tell 'to3d' to apply
them to the remaining runs. This can easily be done by using the
'-geomparent' option:
* EXAMPLE of to3d with the '-geomparent' option:
foreach run ( 2 3 4 )
to3d -geomparent $topdir/afni/DDr1+orig -session $topdir/afni \
-prefix DDr{$run} -save_outliers $outlier_dir/DDr{$run}.outliers \
-time:tz 110 27 2500 alt+z 3Ds:0:0:64:64:1:"DDr${run}*"
end
The above commands will result in AFNI 3D+time datasets for EPI runs 2, 3,
and 4. These datasets will be saved in our $topdir/afni directory.
DDr2+orig.BRIK DDr2+orig.HEAD
DDr3+orig.BRIK DDr3+orig.HEAD
DDr4+orig.BRIK DDr4+orig.HEAD
-----------------------
* EXPLANATION of above to3d options:
foreach
This is a C-shell (csh/tcsh) command that implements a loop in which the
loop variable takes on values from one or more lists. In this example,
the loop variable is '$run', which takes on the values of EPI runs 2, 3,
and 4.
-geomparent $topdir/afni/DDr1+orig
Since we have already provided the geometry information for the run 1
dataset, that dataset information can be used as the geometry parent for
the remaining runs. This is a handy shortcut that replaces the '-orient',
'-zorigin', '-epan', '-xSLAB', '-ySLAB', and '-zSLAB' options in this
example.
------------------------------------------------------------------------
------------------------------------------------------------------------
* COMMAND: 3dTcat
Use AFNI 3dTcat to concatenate (i.e., combine) sub-bricks from each EPI
dataset into one big 3D+time dataset. One can combine datasets by either
concatenating them (as in this example) or averaging them, as was done
in the first how-to (ht01_ARzs) using AFNI '3dcalc'.
Usage: 3dTcat [-options] <datasets>
(see also '3dTcat -help')
-----------------------
EXAMPLE of '3dTcat' from the command line:
cd to the directory containing your 3D+time datasets and run 3dTcat from
there ...
cd $topdir/afni
3dTcat -prefix DDrall \
DDr1+orig'[2..109]' DDr2+orig'[2..109]' \
DDr3+orig'[2..109]' DDr4+orig'[2..109]'
The numbers within the square brackets refer to the time points
(sub-bricks) within each run. Remember that each run consists of 110 time
points (i.e., 0-109). Time points 0 and 1 have been intentionally omitted
from each run (the MR signal has not yet reached a steady state in the
first couple of volumes), leaving us with 108 time points per run. The
end result is a concatenated 3D+time dataset with 432 time points
(108 time points * 4 EPI runs = 432):
DDrall+orig.BRIK DDrall+orig.HEAD
Note: Be sure to enclose the square-bracket selectors in single quotes
('). Without the quotes, the shell would try to interpret the brackets
itself (as a filename-matching pattern) instead of passing them along to
'3dTcat'.
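To confirm that the concatenation produced the expected number of time
points, run the AFNI program '3dinfo' on the new dataset and look for the
line reporting the number of time steps (it should be 432):
3dinfo DDrall+orig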
------------------------------------------------------------------------
------------------------------------------------------------------------
* COMMAND: 3dvolreg
Use AFNI 3dvolreg to register each of the 432 sub-bricks (i.e., time
points 0-431) to the last one (time point 431).
Usage: 3dvolreg [-options] -base <n> -prefix <pname> <input files>
(see also '3dvolreg -help')
-----------------------
EXAMPLE of '3dvolreg' from the command line:
cd to the directory containing your concatenated 3D+time dataset and run
3dvolreg from there ...
cd $topdir/afni
3dvolreg -dfile DDrallvrout -base 431 -prefix DDrallvr DDrall+orig
-----------------------
Explanation of above 3dvolreg options:
-dfile
This option allows the user to save the motion parameters that were needed
to bring each sub-brick back into alignment with the base. Parameters
such as the roll (rotation about the I-S axis), pitch (rotation about the
R-L axis), and yaw (rotation about the A-P axis) are saved in this file.
-base
This option allows the user to select the sub-brick that will serve as
the base to which the remaining sub-bricks will be aligned. If the
'-base' option is not provided on the command line, the first sub-brick
(i.e., time point 0) will serve as the base by default. In this example,
we have assigned our last time point (431) as the base.
-prefix
The prefix for our newly created, volume registered dataset will be
'DDrallvr'.
-----------------------
The above command line will result in the creation of a volume-registered
3D+time dataset. This dataset will be saved in our $topdir/afni
directory:
DDrallvr+orig.BRIK DDrallvr+orig.HEAD
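It is also worth inspecting the movement estimates saved by '-dfile'.
That file contains one row per sub-brick: the sub-brick index, the six
movement parameters (roll, pitch, yaw, and three displacements), and two
RMS error values (see '3dvolreg -help' for the exact column layout).
Assuming that layout, the six movement columns can be plotted with the
AFNI program '1dplot':
1dplot 'DDrallvrout[1..6]'
Large or sudden jumps in these traces indicate time points with
substantial subject motion.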
------------------------------------------------------------------------
------------------------------------------------------------------------
DATA PRE-PROCESSING IS COMPLETE.
STATISTICAL ANALYSIS TO PRODUCE FUNCTIONAL DATA BRICKS IS NEXT...
------------------------------------------------------------------------
------------------------------------------------------------------------
In this example, our experiment consisted of four different conditions
that the subject viewed and responded to:
1. Animate objects (a),
2. manipulable tools (t),
3. a high contrast gradient wheel (h), and
4. a low contrast gradient wheel (l).
Stimulus time series were created for each condition and for each of the
four runs, resulting in 16 stimulus timing files:
RUN 1: RUN 2: RUN 3: RUN 4:
scan1a.txt scan2a.txt scan3a.txt scan4a.txt
scan1t.txt scan2t.txt scan3t.txt scan4t.txt
scan1h.txt scan2h.txt scan3h.txt scan4h.txt
scan1l.txt scan2l.txt scan3l.txt scan4l.txt
If you open any one of these stimulus timing files with a text editor
(emacs, nedit, vi, etc.), you will notice a column of 0's and 1's. The
1's in this example represent a stimulus presentation (i.e., an animate
object), and the 0's indicate a period of time where an animate object did
NOT appear on the screen.
Example: nedit scan1a.txt
0 no animate object
0 "
0 "
0 "
0 "
0 "
0 "
1 animate object appears
1 "
1 "
1 "
1 "
1 "
1 "
1 "
0 no animate object
0 " ...
For each of the four stimulus conditions, we must first concatenate the
stimulus timings from all four runs into a single file. For example, the
four stimulus time series for the 'Animate' condition (scan1a.txt,
scan2a.txt, scan3a.txt, and scan4a.txt) must be concatenated into a single
file, which we will name 'scan1to4a.1D'.
The UNIX command that will concatenate files is 'cat'
(see also 'cat --help')
NOTE: since 'cat' is a UNIX program (not a unique AFNI program),
one must type two dashes before 'help' (--help) to view the
help information.
------------------------
cd to the directory containing your stimulus timing files and run 'cat'
from there...
cd $topdir/stim_files
foreach stim_type ( a t h l )
cat scan1$stim_type.txt scan2$stim_type.txt \
scan3$stim_type.txt scan4$stim_type.txt \
> $topdir/afni/regressors/scan1to4$stim_type.1D
end
-----------------------
* Explanation of above command line:
foreach
This is the C-shell (csh/tcsh) command that implements a loop in which
the loop variable takes on values from one or more lists. In this
example, the loop variable is '$stim_type', which takes on the values of
the four stimulus conditions: 'a' (animate), 't' (tool), 'h' (high
contrast), and 'l' (low contrast).
'>'
In each iteration of the 'foreach' loop, the '>' symbol redirects the
output of 'cat' (the four pre-existing stimulus time series files for one
condition joined end-to-end, e.g. the four scan*h.txt files for stimulus
type 'h'), which would otherwise be printed in the terminal window, into
a new file in another directory. In this example, each new stimulus file
(e.g. scan1to4h.1D, for stimulus type 'h') is saved in the
$topdir/afni/regressors directory.
-----------------------
The above command line will result in the creation of a concatenated
stimulus timing file for each of our four stimulus conditions. These
files will be saved in our $topdir/afni/regressors directory:
scan1to4a.1D
scan1to4t.1D
scan1to4h.1D
scan1to4l.1D
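As a quick check, the UNIX command 'wc -l' reports how many lines each
concatenated file contains. Each file should hold one value per time
point; keeping track of these counts matters because 3dDeconvolve will
later expect its regressors to cover the 432 time points of the
concatenated dataset:
cd $topdir/afni/regressors
wc -l scan1to4*.1D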
-----------------------
* An extra tidbit:
To view these 4 stimulus timing files, put them together into a file with
four columns, using the AFNI program '1dcat'. This new file will be
called 'scan1to4_all.1D':
1dcat scan1to4a.1D scan1to4t.1D scan1to4h.1D scan1to4l.1D \
> scan1to4_all.1D
To view 'scan1to4_all.1D', open the file in a text editor such as nedit,
emacs, or vi (for UNIX masters). Each stimulus condition can also be
plotted using the AFNI program '1dplot':
1dplot scan1to4_all.1D
------------------------------------------------------------------------
------------------------------------------------------------------------
* COMMAND: waver
Use AFNI waver to create an ideal hemodynamic response function, one for
each stimulus time series.
Usage: waver [-options] > output_filename
(see also 'waver -help')
-----------------------
EXAMPLE of 'waver' from the command line:
cd to the directory containing your concatenated stimulus timing files and
run waver from there ...
cd $topdir/afni/regressors
waver -dt 2.5 -GAM -input scan1to4a.1D > scan1to4a_hrf.1D
waver -dt 2.5 -GAM -input scan1to4t.1D > scan1to4t_hrf.1D
waver -dt 2.5 -GAM -input scan1to4h.1D > scan1to4h_hrf.1D
waver -dt 2.5 -GAM -input scan1to4l.1D > scan1to4l_hrf.1D
-----------------------
Explanation of above waver options:
-dt 2.5
This option tells waver to use 2.5 seconds for the delta time (dt), the
length of time between output data points; here it matches the TR of the
EPI data (2.5 seconds).
-GAM
This option specifies that waver will use the Gamma variate waveform.
-input <infile>
This option allows waver to read time series from the specified *.1D
formatted 'infiles'. In this example, our infiles are
'scan1to4[a,t,h,l].1D', which will be convolved with the waveform to
produce the output files, 'scan1to4[a,t,h,l]_hrf.1D'.
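To see what the convolution does, it can be instructive to plot one raw
stimulus time series next to its waver output (using the 'Animate'
condition as an example). The first plot shows the 0/1 boxcar timing; the
second shows the same timing convolved with the gamma variate waveform:
1dplot scan1to4a.1D
1dplot scan1to4a_hrf.1D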
-----------------------
The above command line will result in the creation of four ideal reference
functions, one for each stimulus condition. These files will be saved in
our $topdir/afni/regressors directory:
scan1to4a_hrf.1D
scan1to4t_hrf.1D
scan1to4h_hrf.1D
scan1to4l_hrf.1D
(Note: hrf = 'hemodynamic response function')
-----------------------
* Another extra tidbit:
To view these 4 ideal reference functions, put them together into a file
with four columns, using the AFNI program '1dcat'. This new file will be
called 'scan1to4_all_hrf.1D':
1dcat scan1to4a_hrf.1D scan1to4t_hrf.1D \
scan1to4h_hrf.1D scan1to4l_hrf.1D \
> scan1to4_all_hrf.1D
To view 'scan1to4_all_hrf.1D', open the file in a text editor. Each
reference function can also be plotted using the AFNI program '1dplot':
1dplot scan1to4_all_hrf.1D
-----------------------------------------------------------------------------
-----------------------------------------------------------------------------
* COMMAND: 3dDeconvolve
The AFNI program '3dDeconvolve' can be used to provide deconvolution
analysis of FMRI time series data. This program has two primary
applications:
(1) calculate the deconvolution of a measured 3D+time dataset with a
specified input stimulus time series.
(2) perform multiple linear regression using multiple input stimulus
time series.
Output consists of an AFNI 'bucket' type dataset containing the least
squares estimates of the linear regression coefficients, t-statistics
for significance of the coefficients, partial F-statistics for
significance of the individual input stimuli, and the F-statistic for
significance of the overall regression. Additional output consists of a
3D+time dataset containing the estimated system impulse response function.
To learn more about deconvolution analysis and '3dDeconvolve', download
the AFNI manuals (afni_doc.tgz) available on the AFNI website:
AFNI_Dist
Usage:
(Note: Mandatory arguments are shown below without brackets;
optional arguments are encased within the square brackets.)
3dDeconvolve
-input <fname> OR -input1D <dname> OR -nodata
[-mask <mname>]
[-censor <cname>]
[-concat <rname>]
[-nfirst <fnumber>]
[-nlast <lnumber>]
[-polort <pnumber>]
[-rmsmin <r>]
[-xout]
[-fdisp <fvalue>]
-num_stimts <number>
-stim_file <k sname>
-stim_label <k slabel>
[-stim_minlag <k m>]
[-stim_maxlag <k n>]
[-stim_nptr <k p>]
[-glt <s glt_name>]
[-glt_label <k glt_label>]
[-iresp <k iprefix>]
[-tshift]
[-sresp <k sprefix>]
[-fitts <fprefix>]
[-errts <eprefix>]
[-fout] [-rout] [-tout] [-vout] [-nocout]
[-full_first]
[-bucket <bprefix>]
(see also '3dDeconvolve -help')
Note: Due to the large number of options in 3dDeconvolve, it is HIGHLY
recommended that the 3dDeconvolve command be written into a script
using a text editor, which can then be executed from the UNIX command
line.
-----------------------
EXAMPLE of a '3dDeconvolve' command:
cd to the directory containing your volume-registered 3D+time dataset
(the ideal reference functions are read from its 'regressors'
subdirectory) and run 3dDeconvolve from there ...
cd $topdir/afni
3dDeconvolve -input DDrallvr+orig -xout -num_stimts 4 \
-stim_file 1 regressors/scan1to4a_hrf.1D -stim_label 1 Actions \
-stim_file 2 regressors/scan1to4t_hrf.1D -stim_label 2 Tool \
-stim_file 3 regressors/scan1to4h_hrf.1D -stim_label 3 HCMS \
-stim_file 4 regressors/scan1to4l_hrf.1D -stim_label 4 LCMS \
-full_first -fout -tout -concat $topdir/contrasts/runs.1D \
-glt 1 $topdir/contrasts/DDcontrv1.txt -glt_label 1 AvsT \
-glt 1 $topdir/contrasts/DDcontrv2.txt -glt_label 2 HvsL \
-glt 1 $topdir/contrasts/DDcontrv3.txt -glt_label 3 ATvsHL \
-bucket DDrallvrMRv1
-----------------------
Explanation of above 3dDeconvolve options. The mandatory arguments are
preceded by an asterisk:
(*)-input <filename>
This option specifies the filename of the AFNI 3D+time dataset to be used
as input for the deconvolution program. In this example, our input file
is the volume registered and concatenated dataset we created earlier,
called 'DDrallvr+orig'. The '-input' option is mandatory except in cases
where '-nodata' or '-input1D' is used in its place. The '-nodata'
argument allows the user to evaluate the experimental design without
entering measurement data, while the '-input1D' argument specifies that an
AFNI .1D time series data file be used rather than a 3D+time dataset.
-xout
This option is used to write the experimental design matrix X and the
(X'X) inverse matrix to the screen. In essence, this option lets the user
see what the program is doing by displaying this output while
3dDeconvolve runs.
(*)-num_stimts <number>
This is a mandatory argument that indicates the number of input stimulus
time series being used for the deconvolution analysis. In this example,
there are FOUR stimulus time series:
1) scan1to4a.1D (Actions (a)),
2) scan1to4t.1D (Tools (t)),
3) scan1to4h.1D (High Contrast Gradient (h)), and
4) scan1to4l.1D (Low Contrast Gradient (l)),
which are used to create FOUR ideal reference functions:
1) scan1to4a_hrf.1D (Actions (a)),
2) scan1to4t_hrf.1D (Tools (t)),
3) scan1to4h_hrf.1D (High Contrast Gradient (h)), and
4) scan1to4l_hrf.1D (Low Contrast Gradient (l)).
These ideal reference function filenames are passed to '3dDeconvolve' as
'-stim_file' arguments.
3dDeconvolve requires the number of stimulus time series be at least one,
but no more than 50.
(*)-stim_file <k stimulus_filename>
This is a mandatory argument that specifies, for each k, the filename of
the kth input stimulus function; here these are the ideal reference
functions stored in the $topdir/afni/regressors directory:
scan1to4a_hrf.1D (1st input stimulus function, therefore k=1)
scan1to4t_hrf.1D (2nd input stimulus function, therefore k=2)
scan1to4h_hrf.1D (3rd input stimulus function, therefore k=3)
scan1to4l_hrf.1D (4th input stimulus function, therefore k=4)
-stim_label <k slabel>
This option is an organizational tool that provides a label for the output
corresponding to the kth input stimulus function. For instance, the
stimulus label "Actions" is much easier to remember and understand than
the file name 'scan1to4a_hrf.1D'.
-full_first
Flag used to request that the full model statistics appear in the first
sub-bricks of the bucket dataset, before the sub-bricks for the partial
statistics (default is for the full model statistics to appear last).
-fout
Flag to output the F-statistics. Sub-bricks containing these statistics
will be part of the resulting bucket dataset.
-tout
Flag to output the t-statistics. Sub-bricks containing these statistics
will also be part of the resulting bucket dataset.
-concat <rname>
This option specifies a file consisting of a list of run starting points.
Each number in this file is the time-point (TR) index at which the
corresponding run begins in the concatenated input dataset.
In this example, the columnar file 'runs.1D' looks like:
0
108
216
324
which tells '3dDeconvolve' that the input dataset consists of 4 runs, with
respective TR offsets of 0, 108, 216, and 324.
This is used when the input data was collected over multiple runs. This
information is part of the statistical model, and affects the contents
of the '-glt' files. Each run will get separate B0 and B1 coefficients
(i.e. each run will have a separate baseline and linear trend coefficient
computed). Also, stimuli from one run should not affect results in the
next run.
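If you need to create such a file yourself, it is simply a plain text
column of numbers. For this example (108 retained time points per run),
it could be written with a few 'echo' commands (the path below just
matches the one used in the 3dDeconvolve command above):
echo 0   >  $topdir/contrasts/runs.1D
echo 108 >> $topdir/contrasts/runs.1D
echo 216 >> $topdir/contrasts/runs.1D
echo 324 >> $topdir/contrasts/runs.1D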
-glt <s gltname>
Performs simultaneous linear tests, as specified by the matrix contained
in the file, gltname. For every such '-glt' option (and hence, for every
linear test performed), 3dDeconvolve will create a sub-brick for the
regression coefficients, plus sub-bricks for any other requested statistics.
The matrix in the file corresponding to the 'gltname' argument will have
's' rows, each representing a linear test. Consider the script option:
-glt 1 $topdir/contrasts/DDcontrv1.txt
This tells 3dDeconvolve to perform a single linear test, as specified by
the one line in the file, 'DDcontrv1.txt':
0 0 0 0 0 0 0 0 1 -1 0 0
The eight leading zeros correspond to the baseline and linear drift
coefficients for the four runs (two per run). The remaining four values
correspond to the four stimulus regressors, in the order they were given
to '-stim_file' (Actions, Tools, HCMS, LCMS). In this example, the test
compares 'Actions' (1) vs. 'Tools' (-1).
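The other two contrast files follow the same 12-column layout: eight
baseline/trend columns followed by the four stimulus regressors in
'-stim_file' order (Actions, Tools, HCMS, LCMS). Their exact contents are
not reproduced in this how-to, but plausible single-row versions would
look like:
DDcontrv2.txt (HvsL):   0 0 0 0 0 0 0 0 0 0 1 -1
DDcontrv3.txt (ATvsHL): 0 0 0 0 0 0 0 0 1 1 -1 -1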
-glt_label <k label>
This option is used to put text labels on the sub-bricks corresponding
to a particular linear test. The 'k' specifies which '-glt' option
this label corresponds to. In the example above, 'Actions vs. Tools'
is the first '-glt' option given, so the corresponding 'k' is 1:
-glt_label 1 AvsT
-bucket <prefix name>
This option is used to create a single AFNI "bucket" type dataset, having
multiple sub-bricks. The output is written to the file with the user
specified prefix filename. In our example, this prefix is 'DDrallvrMRv1'.
Each of the individual sub-bricks can then be accessed for display within
the 'afni' program.
The purpose of this option is to simplify file management, since most of
the output for a particular analysis is now contained within a single
AFNI bucket dataset. The bucket holds the various parameters of interest,
such as the F-statistic for the significance of the estimated impulse
response.
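Once 3dDeconvolve finishes, the sub-bricks of the bucket dataset,
together with the labels attached by '-stim_label' and '-glt_label', can
be listed with '3dinfo -verb'. This is a convenient way to look up the
sub-brick numbers referred to in the viewing instructions below:
cd $topdir/afni
3dinfo -verb DDrallvrMRv1+orig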
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
TIME TO VIEW THE DATA IN AFNI!!!
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
Now that the data have been processed and analyzed, they can be viewed in
AFNI. Below are some recommendations for displaying and viewing the data:
cd to the directory containing the datasets and run AFNI from there ...
cd $topdir/afni
afni
The main AFNI window will appear.
SWITCH ANATOMY: Click on this button to choose an anatomical 3D dataset
to view. In this example, select the dataset 'DDSPGR.'
Press the "SET" button to continue.
To view this dataset, click on the AXIAL, SAGITTAL, and
CORONAL buttons that appear on the main AFNI window.
At this point, there should be no functional data
overlaid onto these images.
SWITCH FUNCTION: Click on this button to choose which functional
3D dataset to view. In this example, select the
dataset, 'DDrallvrMRv1'. Recall that this is the
"bucket" dataset that was created by running
'3dDeconvolve.' It contains 38 sub-bricks. Press the
"SET" button to continue.
DEFINE FUNCTION: Click on this button to begin the process of
overlaying the functional data onto the anatomy. Once
this button is selected, another control panel, with
additional options, will appear to the right of the
main AFNI window.
Note that function will not be seen overlaid onto the
anatomical images until the SEE FUNCTION box, which
appears below "DEFINE FUNCTION", is highlighted.
(A) SELECT FUNCTION AND THRESHOLD:
The first step (within the "DEFINE FUNCTION" control panel) is
to select the sub-brick you would like to view (click on the
'Func' button), and the sub-brick with which to threshold
(click on the 'Thr' button).
For example, to view the functional activity associated with the
"Actions" stimulus, go to the 'Func' button and select the "Actions"
regression coefficient (#17). To set the threshold, click on the 'Thr'
button and select the "Actions" F-statistic (#19).
(B) SET THE THRESHOLD SLIDER BAR
To the left of the 'Func' and 'Thr' buttons is a slider bar
that is used to adjust the threshold for function display.
To make the slider cover threshold values from 0 up to 1000, select "3"
on the power-of-ten button below the slider bar (the one with the two
asterisks next to it); this changes the range of the slider to 0-999.99.
One way to select an appropriate threshold for function display
is to drag the slider bar until there are no (or very few)
extra-cranial colored voxels. For this example, set the threshold bar to
about 147.1, which corresponds to a p-value of approximately 3 x 10^-29
(i.e., 3 times 10 to the power of -29). This is the significance value
PER VOXEL.
(C) CHANGE THE NUMBER OF COLOR PANES
Next to the threshold bar is a color pane. The default is set
at 9 colors, but this can be adjusted by clicking on the button
right below the color panes. For this example, change the
color spectrum to 15.
(D) MAKE YOUR FUNCTIONAL DATA LOOK PRETTY
As it is now, the functional data looks a bit blocky. To
improve its appearance (and make it worthy of publication),
go to DEFINE DATAMODE. This will open another control panel. Directly
to the right of "DEFINE DATAMODE" you will see two buttons:
"Func resam mode" and "Thr resam mode". These buttons allow
you to resample your functional data. For this example, select
CUBIC interpolation for both buttons. Doing this will give
your functional overlay a smoother, less-blocky, appearance.
THAT'S ALL THERE IS TO IT!!! Try viewing the other stimulus variables
as well as the GLT contrasts (e.g., AvsT) in AFNI.
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
CONGRATULATIONS,
YOU HAVE COMPLETED THE SECOND HOW-TO !!!
-------------------------------------------------------------------------------
-------------------------------------------------------------------------------
There is a tremendous amount of information in this how-to, which may seem
overwhelming at first. However, all it takes is some time and practice to
master the art and science of FMRI analysis.
For additional information on AFNI, FMRI, and statistical principles and
concepts, see the EDUCATIONAL MATERIAL ON THE AFNI WEBSITE:
class notes
Also feel free to view the message boards and to post a question yourself:
AFNI message board