- Gaussian elimination on the normal equations of X has been
  replaced by using the SVD to compute the pseudoinverse of the X
  matrix. This can be disabled using the "-nosvd" option. If the SVD
  solution is on, then all-zero -stim_file functions will **NOT** be
  removed from the analysis (unlike the "-nosvd" analysis).
  - The reason for not removing the all-zero regressors is so that
    GLTs and other scripts that require knowing where various results
    are in the output can still function.
- One plausible case that can give all-zero regressors: a task with "correct" and "incorrect" trial outcomes, where you analyze these cases separately, but some subjects perform so well that they have no incorrect responses at all.

- The SVD solution will set the coefficient and statistics for an all-zero regressor to zero.
- If two identical (nonzero) regressors are input, the program will complain but continue the analysis. In this case, each one would get half the coefficient, which only seems fair. However, your interpretation of such cases should be made with caution.
- For those of you who aren't mathematicians: the SVD solution basically creates the orthogonal principal components of the columns of the X matrix (the baseline and stimulus regressors), and then uses those to solve the linear system of equations in each voxel.
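The behavior described above can be sketched in a few lines of NumPy. This is an illustration of the pseudoinverse approach, not AFNI's actual C code, and `svd_solve` is a hypothetical helper: singular values below a cutoff are truncated, so an all-zero regressor simply receives a zero coefficient instead of breaking the solver.

```python
import numpy as np

def svd_solve(X, y, rcond=1e-10):
    """Least-squares solve via the SVD pseudoinverse of X.
    Singular values below rcond * max(s) are treated as zero, so an
    all-zero (or redundant) column gets a zero coefficient."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    keep = s > rcond * s.max()
    s_inv = np.where(keep, 1.0 / np.where(keep, s, 1.0), 0.0)
    return Vt.T @ (s_inv * (U.T @ y))

# X with an all-zero third column (e.g., a stimulus that never occurred):
n = 20
X = np.column_stack([np.ones(n), np.arange(n, dtype=float), np.zeros(n)])
y = 2.0 + 0.5 * np.arange(n, dtype=float)

beta = svd_solve(X, y)
# the all-zero regressor gets coefficient 0; the fit itself is unaffected
```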

- The X matrix condition number is now computed and printed. This
  can be disabled using the "-nocond" option. As a rough guide, if
  the matrix condition number is about 10^p, then roundoff errors
  will cause about p decimal places of accuracy to be lost in the
  calculation of the regression parameter estimates. In double
  precision, a condition number more than 10^7 would be worrying. In
  single precision, more than 1000 would be cause for concern. Note
  that if Gaussian elimination is used, then the effective condition
  number is squared (twice as bad in terms of lost decimal places);
  this is why the SVD solution was implemented.
  - The condition number is the ratio of the largest to the smallest
    singular value of X. If Gaussian elimination is used ("-nosvd"),
    then this ratio is squared.
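The squaring effect is easy to verify numerically. This NumPy sketch (not AFNI code) builds a nearly collinear matrix and shows that forming the normal equations X'X squares the condition number of X:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 5))
X[:, 4] = X[:, 3] + 1e-3 * rng.standard_normal(100)  # nearly collinear columns

# Condition number of X = ratio of largest to smallest singular value.
s = np.linalg.svd(X, compute_uv=False)
cond_X = s.max() / s.min()

# The singular values of X'X are the squares of those of X, so a
# Gaussian-elimination solve of the normal equations effectively
# works with the squared condition number.
cond_XtX = np.linalg.cond(X.T @ X)
# cond_XtX ≈ cond_X ** 2
```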

- Use of 3dDeconvolve_f (single precision program) now requires
"informed consent" from the user, indicated by putting the option
"-OK" first on the command line. This is because roundoff error can
cause big errors in single precision if the matrix condition number
is over 1000.
- The new "-xjpeg filename" option will save a JPEG image of the
  columns of the regression matrix X into the given file. Notes:
  - Each column is scaled separately, from white=minimum to
    black=maximum.
  - Environment variable `AFNI_XJPEG_COLOR` determines the color of
    the lines drawn between the columns. The color format is
    "rgbi:rf/gf/bf", where each value rf,gf,bf is a number between
    0.0 and 1.0 (inclusive); for example, yellow would be
    "rgbi:1.0/1.0/0.0". As a special case, if this value is the
    string "none" or "NONE", then these lines will not be drawn.
  - Environment variable `AFNI_XJPEG_IMXY` determines the size of the
    image saved via the -xjpeg option to 3dDeconvolve. It should be
    in the format AxB, where 'A' is the number of pixels the image is
    to be wide (across the matrix rows) and 'B' is the number of
    pixels high (down the columns); for example:

        setenv AFNI_XJPEG_IMXY 768x1024

    which sets the x-size (horizontal) to 768 pixels and the y-size
    (vertical) to 1024 pixels. These values are the defaults, by the
    way. If the first value 'A' is negative and less than -1, its
    absolute value is the number of pixels across PER COLUMN. If the
    second value 'B' is negative, its absolute value is the number of
    pixels down PER ROW. (Usually there are many fewer columns than
    rows.)

- 3dDeconvolve now checks for duplicate -stim_file names, and
duplicate matrix columns. Only warning messages are printed --
these are not fatal errors (at least, if the SVD solution is on).
- Matrix inputs for the "-glt" option can now use a notation like
"30@0" to indicate that 30 0s in a row are to be placed on the
line. For example, if you have 10 runs catenated together, and you
used "-polort 2", then there are 30 baseline parameters to skip
(usually) when specifying each GLT row; a sample matrix file with
34 entries per row is below:
30@0 1 -1 0 0
30@0 0 0 1 -1
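The "n@v" shorthand is easy to emulate. Here is a hypothetical Python helper (`expand_glt_row` is not an AFNI function) that expands such a row into its full list of matrix entries:

```python
def expand_glt_row(line):
    """Expand one GLT matrix row, turning each 'n@v' token into
    n copies of the value v (a sketch of the 30@0 shorthand)."""
    out = []
    for tok in line.split():
        if '@' in tok:
            n, v = tok.split('@')
            out.extend([float(v)] * int(n))
        else:
            out.append(float(tok))
    return out

row = expand_glt_row("30@0 1 -1 0 0")
# row has 34 entries: 30 zeros, then 1, -1, 0, 0
```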

- The new "-gltsym gltname" option lets you describe the rows of
a GLT matrix using a symbolic notation. Each stimulus is symbolized
by its -stim_label option. Each line in the 'gltname' file
corresponds to a row in the GLT matrix. On each line should be a
set of stimulus symbols, which can take the following forms (using
the label 'Stim' as the exemplar):
    Stim         = means put +1 in the matrix row for each lag of Stim
    +Stim        = same as above
    -Stim        = means put -1 in the matrix row for each lag of Stim
    Stim[2..7]   = means put +1 in the matrix for lags 2..7 of Stim
    3*Stim[2..7] = means put +3 in the matrix for lags 2..7 of Stim
    Stim[[2..4]] = means put +1 in the matrix for lags 2..4 of Stim
                   in 3 successive rows of the matrix, as in
                     0 0 1 0 0 0 0 0
                     0 0 0 1 0 0 0 0
                     0 0 0 0 1 0 0 0
                   whereas Stim[2..4] would yield one matrix row
                     0 0 1 1 1 0 0 0

There can be no spaces or '*' characters in the stimulus symbols; each set of stimulus symbols on a row should be separated by one or more spaces. For example, the two multi-lag regressors entered with the options below

    -stim_label 1 Ear -stim_minlag 1 0 -stim_maxlag 1 5 \
    -stim_label 2 Wax -stim_minlag 2 2 -stim_maxlag 2 7

could have a GLT matrix row specified by

    +Ear[2..5] -Wax[4..7]

which would translate into a matrix row like

    {zeros for the baseline} 0 0 1 1 1 1 0 0 -1 -1 -1 -1

- With -gltsym, you do not have to specify the number of rows on the command line -- the program will determine that from the file.
- You can embed comment lines in the file -- these are lines that start with the characters "#" or "//".
- If you want to access the polynomial baseline parameters for some bizarre reason, you can use the symbolic name "Ort"; otherwise, the GLT matrix elements corresponding to these parameters will all be set to 0, as in the example above.
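To make the symbolic notation concrete, here is a simplified, hypothetical Python re-implementation of the single-row symbol forms (it ignores the double-bracket multi-row form and is not 3dDeconvolve's actual parser; `gltsym_row` and its arguments are invented for illustration):

```python
import re

def gltsym_row(spec, stim_lags, n_baseline):
    """Expand one -gltsym row spec (e.g. "+Ear[2..5] -Wax[4..7]")
    into a numeric GLT matrix row.

    stim_lags  : ordered dict {label: (minlag, maxlag)} matching the
                 -stim_label / -stim_minlag / -stim_maxlag options
    n_baseline : number of baseline ("Ort") parameters to zero-fill
    """
    # column offset of each stimulus's first lag
    offsets, pos = {}, n_baseline
    for label, (lo, hi) in stim_lags.items():
        offsets[label] = (pos, lo, hi)
        pos += hi - lo + 1
    row = [0.0] * pos
    for tok in spec.split():
        m = re.fullmatch(r'([+-]?)(?:(\d+)\*)?(\w+)(?:\[(\d+)\.\.(\d+)\])?', tok)
        sign = -1.0 if m.group(1) == '-' else 1.0
        scale = float(m.group(2)) if m.group(2) else 1.0
        start, lo, hi = offsets[m.group(3)]
        a = int(m.group(4)) if m.group(4) else lo
        b = int(m.group(5)) if m.group(5) else hi
        for lag in range(a, b + 1):
            row[start + (lag - lo)] += sign * scale
    return row

lags = {"Ear": (0, 5), "Wax": (2, 7)}
row = gltsym_row("+Ear[2..5] -Wax[4..7]", lags, n_baseline=2)
# → 2 baseline zeros, then 0 0 1 1 1 1 (Ear lags 0..5),
#   then 0 0 -1 -1 -1 -1 (Wax lags 2..7)
```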
- A GLT can be expressed directly on the command line with an
option of the form
-gltsym 'SYM: +Ear[2..5] -Wax[4..7]'

where the 'SYM:' that starts the string indicates that the rest of the string should be used to define the 1-row matrix. It is important that this string be enclosed in forward single quotes, as shown. If you want to specify multiple rows, use the '\' character to mark the end of each row, as in

    -gltsym 'SYM: +Ear[2..5] \ -Wax[4..7]'

- You probably want to use the "-glt_label" option with -gltsym, as with -glt.
- If you want to have the matrices generated by -gltsym printed to the screen, you can set environment variable AFNI_GLTSYM_PRINT to YES.

- Polynomial baseline functions now default to Legendre
polynomials, which are more pleasantly behaved than the older power
baseline functions. If you need the old power functions, you must
use the -nolegendre option; this should only be the case if you use
the baseline parameter estimates for some purpose.
- For each block of contiguous data, the time range from first to
  last is scaled to the interval [-1,1]. The standard Legendre
  polynomials P_n(x) are then entered as baseline regressors, for
  n=0,1,...
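A sketch of how such a baseline block could be built with NumPy's Legendre tools (illustrative only; `legendre_baseline` is not an AFNI function, and the uniform time grid is an assumption):

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_baseline(n_times, polort):
    """Baseline regressors for one contiguous block: Legendre
    polynomials P_0..P_polort evaluated on the block's time points
    mapped onto [-1, 1] (a sketch of the -polort default)."""
    x = np.linspace(-1.0, 1.0, n_times)
    cols = []
    for n in range(polort + 1):
        c = np.zeros(n + 1)
        c[n] = 1.0                      # coefficient vector selecting P_n
        cols.append(legval(x, c))
    return np.column_stack(cols)

B = legendre_baseline(100, 2)
# B[:, 0] is the constant P_0 = 1; B[:, 1] is P_1 = x, linear from -1 to 1
```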

- You can save ONLY the estimated parameters (AKA regression
coefficients) for each voxel into a dataset with the new "-cbucket
cprefix" option. This may be useful if you want to do some
calculations with these estimates; you won't have to extract them
from the various statistics that are stored in the output of the
"-bucket bprefix" option.
- In combination with the old "-bucket bprefix" option, the new
"-xsave" option saves the X matrix (and some other information)
into file "bprefix.xsave". Use this option when you first run
3dDeconvolve, if you think you might want to run some extra GLTs
later, using the "-xrestore" option (below) -- this is usually much
faster than running the whole analysis over from scratch.
- The new "-xrestore filename.xsave" option will read the -xsave
file and allow you to carry out extra GLTs after the first
3dDeconvolve run. When you use -xrestore, the only other options
that have effect are "-glt", "-glt_label", "-gltsym", "-num_glt",
"-fout", "-tout", "-rout", "-quiet", and "-bucket". All other
options on the command line will be ignored (silently). The
original time series dataset (from "-input") is named in the -xsave
file, and must be present for -xrestore to work. If the parameter
estimates were saved in the original -bucket or -cbucket dataset,
they will also be read; otherwise, the estimates will be
re-computed from the voxel time series as needed. The new output
sub-bricks from the new -glt options will be stored as follows:
- no "-bucket" option given in the -xrestore run
  ==> will be stored at the end of the original -bucket dataset
- "-bucket bbb" option given in the -xrestore run
  ==> will be stored in a dataset with prefix "bbb", which will be
  created if necessary; if "bbb" already exists, the new sub-bricks
  will be appended to this dataset
- The "-input" option now allows input of multiple 3D+time
datasets, as in
-input fred+orig ethel+orig lucy+orig ricky+orig

Each command line argument after "-input" that does NOT start with a '-' character is taken to be a new dataset. These datasets will be catenated together in time (internally) to form one big dataset. Other notes:
- You must still provide regressors that are the full length of the catenated imaging runs; the program will NOT catenate files for the "-input1D", "-stim_file", or "-censor" options.
- If this capability is used, the "-concat" option will be ignored, and the program will use time breakpoints corresponding to the start of each dataset from the command line.

- Unless you use the "-quiet" option, 3dDeconvolve now prints a
"progress meter" while it runs. When it is done, this will look
like
++ voxel loop:0123456789.0123456789.0123456789.0123456789.0123456789.

where each digit is printed when 2% of the voxels are done.

- Direct input of stimulus timing, plus generation of a response
model, with the new "-stim_times" option:
  **-stim_times k tname rtype**

  - `k` is the stimulus index (from 1 to the `-num_stimts` value).
  - `tname` is the name of the file that contains the stimulus times (in units of seconds, as in the TR of the `-input` file). There are two formats for this file:
    - A single column of numbers, in which case each time is relative to the start of the first imaging run ("global times").
    - If there are 'R' runs catenated together (either directly on the command line, or as represented in the `-concat` option), the second format is to give the times within each run separately. In this format, the input file `tname` would have R rows, one per run; the times for each run take up one row. For example, with R=2:

          12.3 19.8 23.7 29.2 39.8 52.7 66.6
          21.8 32.7 41.9 55.5

These times will be converted to global times by the program, by adding the time offset for each imaging run.
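That conversion is just an offset addition per run. A minimal sketch (`to_global_times` is a hypothetical helper; it assumes you already know each run's start time in seconds within the catenated data):

```python
def to_global_times(run_rows, run_offsets):
    """Convert per-run stimulus times to global times by adding each
    run's start offset (in seconds). run_rows[i] lists the times for
    run i; run_offsets[i] is when run i begins in the catenated data."""
    return [t + off for row, off in zip(run_rows, run_offsets) for t in row]

# Two runs of 40 TRs each with TR=2.5 s => run 2 starts at 100 s:
rows = [[12.3, 19.8, 23.7], [21.8, 32.7]]
times = to_global_times(rows, [0.0, 100.0])
# → approximately [12.3, 19.8, 23.7, 121.8, 132.7]
```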

    **N.B.**: The times are relative to the *start* of the data time series as input to 3dDeconvolve. If the first few points of each imaging run have been cut off, then the actual stimulus times must be adjusted correspondingly (e.g., if 2 time points were excised with TR=2.5, then the actual stimulus times should be reduced by 5.0 before being input to 3dDeconvolve).
  - When using the multi-row input style, you may have the
situation where the particular class of stimulus does not occur at
all in a given imaging run. To encode this, the corresponding row
of the timing file should consist of a single '`*`' character; for example, if there are 4 imaging runs but the k-th stimulus only occurs in runs 2 and 4, then the `tname` file would look something like this:

    *
    3.2 7.9 18.2 21.3
    *
    8.3 17.5 22.2

- In the situation where you are using multi-row input, AND there is at most one actual stimulus per run, then you might think that the correct input would be something like:
    *
    *
    30
    *

However, this will be confused with the 1-column format, which means global times, and so this is wrong. Instead, you can put an extra '*' on one line with an actual stimulus, and then things will work OK:

    *
    *
    30 *
    *
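A sketch of how the '*' placeholder rows could be parsed (`parse_timing_row` is a hypothetical helper, not AFNI's parser):

```python
def parse_timing_row(line):
    """Parse one row of a multi-row -stim_times file: '*' entries are
    placeholders meaning 'no stimulus at this position in this run'."""
    return [float(tok) for tok in line.split() if tok != '*']

runs = [parse_timing_row(r) for r in ["*", "*", "30 *", "*"]]
# → [[], [], [30.0], []] : the stimulus occurs only in run 3
```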

  - `rtype` specifies the type of response model that is to follow each stimulus. The following formats for `rtype` are recognized:
    - `'GAM'` => The response function

          h_G(t;b,c) = (t/(b*c))^b * exp(b - t/c)

      for the Cohen parameters b=8.6, c=0.547. This function peaks at the value 1 at t=b*c, and is the same as the output of `waver -GAM`.
    - `'GAM(b,c)'` => Same response function as above, but where you give the 'b' and 'c' values explicitly. The `GAM` response models have 1 regression parameter per voxel (the amplitude of the response).
    - `'SPMG2'` => The SPM gamma variate regression model, which has 2 regression parameters per voxel. The basis functions are

          h_SPM,1(t) = exp(-t) * [ t^5/12 - t^15/(6*15!) ]
          h_SPM,2(t) = d/dt [ h_SPM,1(t) ]

    - `'TENT(b,c,n)'` => A tent function deconvolution model, ranging between times s+b and s+c after each stimulus time s, with n basis functions (and n regression parameters per voxel).

      A 'tent' function is just the colloquial term for a 'linear B-spline'. That is,

          tent(x) = max( 0 , 1-|x| )

      A tent function model for the hemodynamic response function is the same as modeling the HRF as a continuous piecewise linear function. Here, the input 'n' is the number of straight-line pieces.
    - `'CSPLIN(b,c,n)'` => A cubic spline deconvolution model; similar to the `TENT` model, but where smooth cubic splines replace the non-smooth tent functions.
    - `'SIN(b,c,n)'` => A sin() function deconvolution model, ranging between times s+b and s+c after each stimulus time s, with n basis functions (and n regression parameters per voxel). The q-th basis function, for q=1..n, is

          h_SIN,q(t) = sin( q*π*(t-b)/(c-b) )

    - `'POLY(b,c,n)'` => A polynomial function deconvolution model, ranging between times s+b and s+c after each stimulus time s, with n basis functions (and n regression parameters per voxel). The q-th basis function, for q=1..n, is

          h_POLY,q(t) = P_q( 2*(t-b)/(c-b) - 1 )

      where P_q(x) is the q-th Legendre polynomial.
    - `'BLOCK(d,p)'` => A block stimulus of duration d starting at each stimulus time.
      - The basis block response function is the convolution of a gamma variate response function with a 'tophat' function:

            H(t) = ∫_0^min(t,d) h(t-s) ds,  where h(t) = (t/4)^4 * exp(4-t)

        h(t) peaks at t=4 with h(4)=1, whereas H(t) peaks at t=d/(1-exp(-d/4)). Note that the peak value of H(t) depends on 'd'; call this peak value H_peak(d).
      - `'BLOCK(d)'` means that the response function to a stimulus at time s is H(t-s) for t=s..s+d+15.
      - `'BLOCK(d,p)'` means that the response function to a stimulus at time s is p*H(t-s)/H_peak(d) for t=s..s+d+15. That is, the response is rescaled so that the peak value of the entire block is 'p' rather than H_peak(d). For most purposes, the best value would be p=1.
      - `'BLOCK'` is a 1-parameter model (the amplitude).

    - `'EXPR(b,c) exp1 exp2 ...'` => A set of user-defined basis functions, ranging between times s+b and s+c after each stimulus time s. The expressions are given using the syntax of 3dcalc, and can use the symbolic variables:

          '`t`' = time from stimulus;
          '`x`' = `t` scaled to range from 0 to 1 over the `b..c` interval;
          '`z`' = `t` scaled to range from -1 to 1 over the `b..c` interval.

      An example, which is equivalent to `'SIN(0,35,3)'`, is `'EXPR(0,35) sin(PI*x) sin(2*PI*x) sin(3*PI*x)'`. Expressions are separated by blanks, and must not contain whitespace themselves. An expression must use at least one of the symbols 't', 'x', or 'z', unless the entire expression is the single character "`1`".
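The BLOCK response H(t) defined above can be approximated by numerical convolution. This NumPy sketch (not AFNI's implementation; `block_response` is an invented name) also demonstrates that the peak value H_peak(d) grows with the block duration d:

```python
import numpy as np

def block_response(d, t_max=60.0, dt=0.01):
    """Numerically approximate the BLOCK(d) basis function
    H(t) = integral_0^min(t,d) h(t-s) ds, with h(t) = (t/4)^4 * exp(4-t)."""
    t = np.arange(0.0, t_max, dt)
    h = (t / 4.0) ** 4 * np.exp(4.0 - t)      # gamma variate, peak 1 at t=4
    tophat = (t < d).astype(float)            # duration-d block
    H = np.convolve(h, tophat)[: t.size] * dt  # Riemann-sum convolution
    return t, H

t, H10 = block_response(d=10.0)
_, H2 = block_response(d=2.0)
# H starts at 0, and the peak value H_peak(d) is larger for d=10 than d=2
```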

- The basis functions defined above are not normalized in any particular way. The `-basis_normall` option can be used to specify that each basis function be scaled so that its peak absolute value is a constant. For example,

      -basis_normall 1

  will scale each function to have amplitude 1. Note that this scaling is actually done on a very fine grid over the entire domain of t values for the function, and so the exact peak value may not be reached at any given point in the actual FMRI time series.
  - Note that it is the basis function that is normalized, *not* the convolution of the basis function with the stimulus timing!
  - The `-basis_normall` option must be given *before* any `-stim_times` options to which you want it applied!
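A sketch of the fine-grid normalization idea (`basis_normall` here is a hypothetical helper; AFNI does this internally):

```python
import numpy as np

def basis_normall(f, t_lo, t_hi, target=1.0, n_grid=10000):
    """Rescale basis function f so that its peak absolute value over
    [t_lo, t_hi], measured on a fine grid, equals `target`. The coarser
    FMRI time grid may never hit the exact peak, as noted above."""
    tt = np.linspace(t_lo, t_hi, n_grid)
    scale = target / np.abs(f(tt)).max()
    return lambda t: scale * f(t)

# Normalize a half-sine basis of arbitrary amplitude to peak value 1:
g = basis_normall(lambda t: 2.5 * np.sin(np.pi * t / 35.0), 0.0, 35.0)
```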

- If you use a `-iresp` option to output the hemodynamic (impulse) response function corresponding to a `-stim_times` option, this function will be sampled at the rate given by the new `-TR_times dt` option. The default value is the TR of the input dataset, but you may wish to plot it at a higher time resolution. (The same remarks apply to the `-sresp` option.)
- Since the parameters in most models do not correspond directly to amplitudes of the response, care must be taken when using GLTs with these.
  - The parameters for `GAM`, `TENT`, `CSPLIN`, and `BLOCK` do correspond directly to FMRI signal change amplitudes.
  - I NEED TO THINK THIS THROUGH SOME MORE

- Next to be implemented (someday): an option to compute areas under the curve from the basis-function derived HRFs.

More changes are on the way - RWCox - 22 Sep 2004 - Bilbo and Frodo Baggins' birthday!

- The `-nodata` option now works with the `-stim_times` option.
  - However, since `-stim_times` needs to know the number of time points (NT) and the time spacing (TR), you have to supply these values after the `-nodata` option if you are using `-stim_times`. For example:

        **-nodata 114 2.5**

    indicates 114 points in time with a spacing of 2.5 s.
